Posted in Deep Learning, Theory

Deep Learning 16: Understanding Capsule Nets

This post contains my learning notes from Prof. Hung-Yi Lee's lecture; the PDF can be found here (pages 40-52). I have read a few articles on the topic, and I found this one a must-read. It is simple, and you can easily understand what is going on. I would say it is a good starting point for further reading.

Paper link: Sara Sabour, Nicholas Frosst, Geoffrey E. Hinton, “Dynamic Routing Between Capsules”, NIPS, 2017

Continue reading “Deep Learning 16: Understanding Capsule Nets”

Posted in Algorithm, Deep Learning, Machine Learning, Theory

Reinforcement Learning (1): Q-Learning basics

Hi! In the following posts, I will introduce Q-Learning, the first thing to learn if you want to pick up reinforcement learning. But before that, let us shed light on some fundamental concepts in reinforcement learning (RL).

Kindergarten Example

Q-Learning works in this way: the agent takes an action and gets a reward and an observation from the environment, as shown below. The image is taken from here:


Berkeley’s CS 294: Deep Reinforcement Learning by John Schulman & Pieter Abbeel

Imagine a baby boy in a kindergarten and how he behaves on his first day. He does not know the kindergarten and knows nothing about how to behave. So he begins with random actions: say he hits the other kids, and when he does this, he has no idea whether it is right or not. Then the teacher becomes mad and gives him a punishment (a negative reward), so he learns that hitting others is not a good action; the next time, the boy washes his lunch box, and the teacher rewards him with candy, so he learns that this action is a good one. In our kindergarten example, the Agent is simply the boy, who has no knowledge in the very beginning; an Action is how he behaves; the Environment contains all the objects he can act on; a Reward is something he gets from the environment (the punishment or the candy); and an Observation is what he can observe, i.e. the feedback from the environment.


Candies lol

Exploitation vs. Exploration

To understand how Q-learning works, it is important to know the difference between exploration and exploitation.
Let's say our baby boy from the kindergarten goes home one day, and his mom prepares five boxes (we call them A-E) with different numbers of candies inside, and he doesn't know which one has more. If his goal is to get as much candy as possible, what would he do?

Method 1: Obviously, he could choose an arbitrary box each time. However, this does not guarantee that he gets as many candies as possible.

Method 2: Another method is to choose a "promising" box: each time, he picks the box with the maximum expected number of candies. To estimate these expectations, he could, say, open the boxes 1000 times uniformly at random and keep track of how many candies each one gives (see the sketch after this list).

Method 3: He may also have some prior knowledge about the boxes; for example, his mom told him that box A holds 10 candies and box B holds 20 (in expectation), while the others are unknown. Based on his goal, box B seems a good choice, but box C might hold even more candies! So he could either choose box B, or randomly pick a box from C-E.
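
To make Method 2 concrete, here is a minimal Python sketch of the "open the boxes many times and track the counts" idea. The candy distributions (true_means, the Gaussian noise) are made up purely for illustration; the boy, of course, never sees them directly.

import random

# Hypothetical candy counts: each box yields a noisy number of candies per opening.
# The true means are unknown to the boy; they are invented here for illustration.
true_means = {"A": 10, "B": 20, "C": 25, "D": 5, "E": 8}

def open_box(box):
    # One opening returns a noisy candy count around the box's true mean.
    return max(0, int(random.gauss(true_means[box], 3)))

# Method 2: open boxes uniformly at random 1000 times and track the counts.
totals = {box: 0 for box in true_means}
counts = {box: 0 for box in true_means}
for _ in range(1000):
    box = random.choice(list(true_means))
    totals[box] += open_box(box)
    counts[box] += 1

estimates = {box: totals[box] / counts[box] for box in true_means}
best_box = max(estimates, key=estimates.get)  # exploitation: pick the best estimate
print(estimates, best_box)
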

We call these methods policies in Q-learning. In brief, a policy tells us how to choose an action (which box to open) based on the current state. So we define \pi as a policy, which maps states to actions.

Exploitation is to choose an action based on the information we already have. Method 2 is an exploitation-only policy: once we know the expectations of all actions, we simply choose the best one.
Exploration is to try new actions that we have no information about. Method 1 is an exploration-only policy. Method 3 is a balanced version of the two, and it leads us to the idea of the \epsilon-greedy policy.

Epsilon-greedy policy

Ranging from 0 to 1, \epsilon is the probability of exploration, i.e. the probability of trying something new; typically, we just pick a random action and return it. In practice, we initialize \epsilon with a value between 0 and 1 and usually let it shrink over episodes t. An episode is a whole game process from the start to the terminal state; say in Flappy Bird, you start the game and play until the death state. Intuitively, when an agent starts to play a new game, it has no "experience" of the game, so it is natural to act randomly; after some episodes, it starts to learn the skills and tricks, so it tends to rely on its own experience instead of choosing actions randomly, because the more episodes it plays, the more confident it is about that experience (the more accurate the reward approximation is). There are various schedules for \epsilon, say \epsilon=\frac{1}{\sqrt{t}}, where t refers to the episode.


Slide from Percy Liang
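
Below is a minimal Python sketch of the \epsilon-greedy selection just described, with \epsilon shrinking as 1/\sqrt{t}. The Q-values and the action set are hypothetical, only there to make the example runnable.

import math
import random

def epsilon_greedy(q_values, episode):
    # Epsilon shrinks over episodes, e.g. epsilon = 1 / sqrt(t) as in the text.
    epsilon = 1.0 / math.sqrt(episode)
    if random.random() < epsilon:
        # Exploration: pick a random action.
        return random.choice(list(q_values))
    # Exploitation: pick the action with the highest estimated reward.
    return max(q_values, key=q_values.get)

# Hypothetical Q-values for one state (candy counts as rewards).
q_state = {"A": 10.0, "B": 20.0, "C": 0.0, "D": 0.0, "E": 0.0}
for t in range(1, 6):
    print(t, epsilon_greedy(q_state, t))
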

Q-table

Q-learning maintains a table called the Q-table, which records, for each state and action, an approximation of the reward. Let's get back to the kindergarten example.


Kindergarten states and actions

We simplify the problem: the states for the boy are washing his lunch box (wash) and hitting others (hit), and there are four actions marked A to D. Our Q-table is shown below; each row is a state, and the columns hold the corresponding reward values for the different actions. Some state-action pairs are illegal and hence have no values. The values indicate the number of candies given as reward.

        A      B      C      D
Wash   10     -5
Hit                  -10     5

Q-table example
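
For concreteness, here is one way (a minimal sketch, not taken from any particular implementation) to store such a Q-table in Python: a nested dictionary in which illegal state-action pairs simply have no entry. The assignment of values to actions follows the example table above.

# One way to store the Q-table above: a nested dict mapping each state
# to the rewards of its legal actions. Illegal state-action pairs are
# simply absent. The cell values follow the example table.
q_table = {
    "wash": {"A": 10, "B": -5},
    "hit":  {"C": -10, "D": 5},
}

def best_action(state):
    # Greedy lookup: the legal action with the highest value for this state.
    actions = q_table[state]
    return max(actions, key=actions.get)

print(best_action("wash"))  # -> "A"
print(best_action("hit"))   # -> "D"
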

Posted in Algorithm, Deep Learning, Theory

Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!

There are unsupervised learning models among multi-level learning methods, for example, RBMs and Autoencoders. In brief, an Autoencoder tries to find a way to reconstruct the original inputs, that is, another way to represent them. It is also useful for dimensionality reduction: for example, a 32 * 32 image can be represented with a smaller number of parameters, which is what we mean by "encoding" an image. The goal is to learn this new representation, so it can also serve as pre-training; a traditional machine learning model is then applied depending on the task, a typical "two-stage" way of solving problems. [3] by Hinton is a great work on this problem, showing the "compressing" ability of neural networks and addressing the bottleneck of massive information.
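
As a rough illustration of the idea (a minimal sketch assuming TensorFlow/Keras; the layer sizes and code dimension are arbitrary choices, not from the post), an autoencoder compresses a flattened 32 * 32 image into a small code and then tries to reconstruct the original input from that code:

import tensorflow as tf

# Encoder: compress a flattened 32*32 image (1024 values) to a 64-d code.
inputs = tf.keras.Input(shape=(1024,))
code = tf.keras.layers.Dense(64, activation="relu")(inputs)
# Decoder: reconstruct the original 1024 values from the code.
outputs = tf.keras.layers.Dense(1024, activation="sigmoid")(code)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
# Training minimizes the reconstruction error between inputs and outputs, e.g.:
# autoencoder.fit(x_train, x_train, epochs=10)
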

Continue reading “Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!”

Posted in Algorithm, Deep Learning, Theory

Deep Learning 13: Understanding Generative Adversarial Network

Proposed in 2014, the interesting Generative Adversarial Network (GAN) now has many variants. You might not be surprised that the relevant papers read more like statistics research: when a model is proposed, it is first evaluated on some fundamental probability distributions, and generalized applications start from there. Continue reading “Deep Learning 13: Understanding Generative Adversarial Network”

Posted in Deep Learning, Energy-Based Learning, Theory

Deep Learning 12: Energy-Based Learning (2)–Regularization & Loss Functions

First, let’s see what regularization is, using a simple example. Then we will have a look at some different types of loss functions.

Regularization

I reviewed the definition of regularization today from Andrew’s lecture videos. Continue reading “Deep Learning 12: Energy-Based Learning (2)–Regularization & Loss Functions”

Posted in Deep Learning, Energy-Based Learning, Theory

Deep Learning 11: Energy-Based Learning (1)–What is EBL?

As part of our goals, it is absolutely important to look back and think about the loss functions we apply, for example, cross entropy. There are other types, however, targeting different practical problems, and you will need to think about which one is suitable. Beyond that, Energy-Based Models (EBMs) provide even more. These are learning notes from A Tutorial on Energy-Based Learning. Continue reading “Deep Learning 11: Energy-Based Learning (1)–What is EBL?”

Posted in Deep Learning, Python, Theory

TensorFlow 05: Understanding Basic Usage

Only recently did I realize that I had missed some basics about TF: I went directly to the MNIST example when I first learned it. I also asked a few people whether they had nice tutorials for TF or for DL. Well, it is not like other topics, where you can easily find good ones like Andrew’s ML course. But I did find some material (in the reference section), though I did not go through every item. For those who are interested, have a look for yourself. Or you might be happy to share your recommendations.
Continue reading “TensorFlow 05: Understanding Basic Usage”