Month: July 2019

AI, Reinforcement Learning

Introduction to Reinforcement Learning 2

Reading Time: < 1 minute

Continuing my journey of learning RL, I went through the Deep Q-learning blog post.

https://www.freecodecamp.org/news/an-introduction-to-deep-q-learning-lets-play-doom-54d02d8017d8/

This time it required more setup in the Colab environment. First, make sure you can install the ViZDoom package on Colab; there are several system dependencies that need to be satisfied. Also make sure you install the scikit-image package. This takes about 10 minutes, as ViZDoom needs to be built with CMake.
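For reference, the setup ends up looking roughly like this in a Colab cell. The exact dependency list here is my assumption based on ViZDoom's build requirements, so check the ViZDoom README if the build fails:

```shell
# System libraries ViZDoom's CMake build typically needs (assumed list).
apt-get install -y cmake build-essential zlib1g-dev libsdl2-dev libboost-all-dev libopenal-dev

# ViZDoom compiles from source here, which is why it takes ~10 minutes.
pip install vizdoom scikit-image
```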

I initially pulled the stock “basic.cfg” and “basic.wad” files from the ViZDoom GitHub repository, but found out later that this caused problems. The big one: the frame was sampled in RGB, which gives you the wrong dimensions. Now I just download the already-configured files from the tutorial’s GitHub repository. Still, the detour helped me understand what gets passed around.
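To see why the screen format matters: the tutorial’s preprocessing assumes a grayscale (GRAY8) buffer of shape (height, width), while an RGB buffer arrives as (channels, height, width). A numpy-only sketch of the idea — the crop offsets and 84×84 target are illustrative, and the nearest-neighbor resize is my stand-in for skimage’s `transform.resize`:

```python
import numpy as np

def preprocess_frame(frame):
    # Expects a grayscale (GRAY8) buffer of shape (height, width).
    # An RGB screen_format yields (3, height, width) and breaks this slicing.
    cropped = frame[30:-10, 30:-30]          # crop the ceiling and the HUD
    normalized = cropped / 255.0             # scale pixel values to [0, 1]
    # Nearest-neighbor resize to 84x84 (stand-in for skimage.transform.resize).
    rows = np.linspace(0, cropped.shape[0] - 1, 84).astype(int)
    cols = np.linspace(0, cropped.shape[1] - 1, 84).astype(int)
    return normalized[np.ix_(rows, cols)]

# A fake 120x160 grayscale frame, just to show the shapes work out.
fake_frame = np.random.randint(0, 256, size=(120, 160), dtype=np.uint8)
state = preprocess_frame(fake_frame)
print(state.shape)  # (84, 84)
```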

A big thing to learn was, of course, the Deep Q-network, which lives in the class “DQNetwork”. I’m more used to programming in Keras, so it took a little more time to understand. You can do this in Keras too, although you need to write a custom loss function, which is actually easier to understand in TensorFlow.
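Whether you write it in TensorFlow or Keras, the loss the network minimizes is just the squared TD error against the Bellman target. A numpy sketch with hypothetical Q-values (the 0.95 discount matches the tutorial’s default; everything else is made up for illustration):

```python
import numpy as np

gamma = 0.95                                 # discount factor

# Hypothetical Q-value outputs for one transition (s, a, r, s').
q_values_s = np.array([0.2, 0.5, 0.1])       # Q(s, .) from the network
q_values_s_next = np.array([0.3, 0.7, 0.0])  # Q(s', .) from the network
action, reward, done = 1, 1.0, False

# Bellman target: r + gamma * max_a' Q(s', a'), or just r on a terminal step.
target = reward if done else reward + gamma * np.max(q_values_s_next)

# The loss is the squared TD error for the action actually taken.
td_error = target - q_values_s[action]
loss = td_error ** 2
print(round(target, 3), round(loss, 4))  # 1.665 1.3572
```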

The rest seems straightforward: just let it run and learn. Here is the full Colab link.

https://colab.research.google.com/drive/1gZM4pAfH4kroa_44gNYZEE8RDVMiO9cP

AI, Reinforcement Learning

Introduction to Reinforcement Learning 1

Reading Time: < 1 minute

As I discussed before, there is not much good reinforcement learning material out there. But I found this great set of tutorials, and I will share my journey of learning from it.

Start by reading these two blog posts:

  1. Blog Number 1
  2. Blog Number 2

At the end of the second blog post, you will find a Jupyter notebook for the Frozen Lake tutorial.

https://github.com/simoninithomas/Deep_reinforcement_learning_Course/blob/master/Q%20learning/FrozenLake/Q%20Learning%20with%20FrozenLake_unslippery%20(Deterministic%20version).ipynb

Before you start the tutorial, you will likely need to learn how the Gym environment works. Go to this link and read the super basic tutorial there. Note especially what the components of each episode are. Actually figure out what the possible actions are and what each value of the state means.

http://gym.openai.com/docs/
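To make those episode components concrete, here is a toy environment that mimics the Gym interface of that era: `reset` returns the initial observation, and `step` returns the next observation, the reward, a done flag, and an info dict. This is my own illustrative stand-in, not Gym code:

```python
class ToyCorridor:
    """A 5-cell corridor: start at cell 0, reach cell 4 for reward 1."""
    def __init__(self):
        self.n_states, self.n_actions = 5, 2    # actions: 0 = left, 1 = right

    def reset(self):
        self.pos = 0
        return self.pos                          # initial observation

    def step(self, action):
        self.pos = max(0, min(4, self.pos + (1 if action == 1 else -1)))
        done = self.pos == 4                     # episode ends at the goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}        # obs, reward, done, info

env = ToyCorridor()
state, total_reward, done = env.reset(), 0.0, False
while not done:                                  # one episode, always move right
    state, reward, done, info = env.step(1)
    total_reward += reward
print(state, total_reward)  # 4 1.0
```

Printing `state`, `reward`, and `done` at each step, as suggested above, is the quickest way to see what an episode actually is.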

Here is the wiki for the basic parameters. https://github.com/openai/gym/wiki/CartPole-v0

Refer to the source for what “Discrete” and “Box” are. https://github.com/openai/gym/tree/master/gym/spaces

Run the code on Google Colab and see how it runs. Print out the variables for each episode and step. I made an example in case you want to follow along. https://colab.research.google.com/drive/1oqon14Iq8jzx6PhMJvja-mktFTru5GPl

Run the deterministic version first and then the stochastic one. Now you know how to create the super basic Q table.
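The Q-table update at the heart of the notebook is a one-line Bellman update. A numpy sketch with a FrozenLake-sized table and a single hand-picked transition (the specific state/action numbers are hypothetical):

```python
import numpy as np

# FrozenLake-sized table: 16 states x 4 actions, all zeros to start.
Q = np.zeros((16, 4))
lr, gamma = 0.8, 0.95                  # learning rate and discount factor

def q_update(Q, s, a, r, s_next):
    # Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += lr * (r + gamma * np.max(Q[s_next]) - Q[s, a])

# One transition: from state 14, action 2 reaches the goal (state 15), reward 1.
q_update(Q, s=14, a=2, r=1.0, s_next=15)
print(Q[14, 2])  # 0.8
```

Run over many episodes with an epsilon-greedy action choice, these updates propagate the goal reward backwards through the table.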

AI, Reinforcement Learning

Reinforcement learning

Reading Time: 2 minutes

Ever since the DeepMind AlphaGo paper came out, I have been fascinated by what reinforcement learning can do. In my opinion, it’s the branch of AI closest to Artificial General Intelligence: no matter what your goal is, as long as you define the reward correctly, it will eventually help you find the “optimal” solution. What it lacks, perhaps, is speed; it takes a long time to learn. Human babies don’t have to stumble millions of times to learn how to walk, and it’s even more evident for a deer, which pretty much knows how to walk right after it is born. Now, that is not to say evolution didn’t give the deer some prebuilt model to use, but maybe we can make pretrained models that let an AI know how to walk, like the modules they insert into Neo to give him fighting abilities.

When I set out to learn reinforcement learning, there wasn’t much formalized material. There is Sutton and Barto’s book, but I found it was, and still is, written for experts. The exercises are necessary for me to understand what’s going on, but no explanations are given. There is too much “for obvious reasons” standing between me and some basic concepts, like how to calculate the state value for each grid cell: there is a formula for it, but no detailed walkthrough of how to assign those values. Then there are David Silver’s lectures, which shed more light for me than the book, but there too much detail is left unexplained. So I searched the internet for tutorials and finally landed on this Medium article: https://medium.com/free-code-camp/an-introduction-to-reinforcement-learning-4339519de419. It has enough of the little details for me to work out how to arrive at the individual values, starting from what a Q table is and how to implement it in Python. I highly recommend going through the tutorial and exercises on Google Colab and making sure you understand how the environment is used in the Gym package.

Anyway, I’m still working through all of the code and understanding it. I also recommend a Lego Mindstorms set to give you a physical manifestation to work with; it helps you get familiar with applying RL in the physical world. Basic robotics with a stable, non-bipedal robot also makes learning easier. Make sure you use a customized Python package like ev3dev so you can control the robot with Python scripts. https://www.ev3dev.org/ Now that’s endless, time-sucking fun!

AI

AI learning material

Reading Time: 2 minutes

Recently, I have received many requests about how I learned neural networks. Since the field is so new that not many universities offer a class in it, I thought I should put together a list of the learning material I found useful. Hopefully other people will find it useful too.

  1. I’m an audio learner and love listening to people, so Andrew Ng’s Coursera specialization was good for me. https://www.coursera.org/specializations/deep-learning. Take all of the courses if you have time, and try to understand the exercises rather than just manipulating the program until you get the correct answer. Actually understand the math behind it.
  2. Ian Goodfellow’s book covers the math background you need and shows you how to apply it. https://www.deeplearningbook.org/. It is a good reference if you find yourself lost in a concept a paper mentions with the dreaded “for obvious reasons, …”.
  3. If you want some statistics background, I read this book: http://www-bcf.usc.edu/~gareth/ISL/. There is another, more Bible-like statistics book, but if I cannot understand it, I’m not going to suggest it to anybody.
  4. Cornell University maintains an open archive for all the sciences, where you can find journal articles on AI, computer vision, speech, and NLP. https://arxiv.org/. Reading papers is intimidating at the beginning, just from the amount of jargon people use, but after you read several it’s not that bad. Otherwise, refer to Ian’s book or just Google it.
  5. MIT’s online course videos are good for general background in math and CS. Again, I prefer listening to lectures over reading books; I just fall asleep too much.
  6. Stanford’s class notes are where I started: http://cs231n.stanford.edu/. Image recognition is easier to understand but hard to master.
  7. Grant Sanderson’s YouTube videos have the best visualizations, and his linear algebra series is the best I have ever seen on transformations. You are going to need linear algebra; it speeds up your computation by a ton, if nothing else. https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
  8. Most important of all, find a paper with code and just play with it. You can put the code on Google Colab and get a free GPU. https://paperswithcode.com/
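On item 7’s point that linear algebra speeds up computation: the win comes from replacing Python loops with matrix operations. A small sketch comparing a looped and a vectorized dense-layer forward pass (the shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))      # weight matrix of a small dense layer
x = rng.standard_normal(32)            # one input vector

# Looped version: one explicit dot product per output unit.
y_loop = np.array([sum(W[i, j] * x[j] for j in range(32)) for i in range(64)])

# Vectorized version: a single matrix-vector product, same math.
y_vec = W @ x

print(np.allclose(y_loop, y_vec))  # True
```

The vectorized form hands the whole computation to optimized BLAS routines instead of the Python interpreter, which is where the large speedups come from.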

Secret: use Twitter to follow the experts in AI. Build up your feed to specialize in the field you like, e.g. reinforcement learning. Here are some famous names to get you started: @geoffreyhinton, @ylecun, @AndrewYNg, and @goodfellow_ian.