Posted in Algorithm, Deep Learning, Theory

Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!

Among multi-level (deep) learning methods there are also unsupervised models, for example RBMs and autoencoders. In brief, an autoencoder tries to reconstruct its original inputs, that is, to find another way to represent them. This makes it useful for dimensionality reduction: a 32 * 32 image, for example, can be represented with far fewer parameters. This is what it means to “encode” an image. Since the goal is to learn a new representation, autoencoders are also used for pre-training; a traditional machine learning model can then be applied on top, depending on the task. This is the typical “two-stage” way of solving problems. [3] by Hinton is a great work on this problem, showing how neural networks can “compress” data and thereby relieve the bottleneck of massive information.
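To make the idea concrete, here is a minimal sketch of such an autoencoder (the Keras API choice, the 64-dimensional code size, and the random stand-in data are my own illustrative assumptions, not from the post): it compresses a flattened 32 * 32 image into a small code and reconstructs it.

```python
import numpy as np
from tensorflow import keras

inputs = keras.Input(shape=(1024,))                              # flattened 32 x 32 image
code = keras.layers.Dense(64, activation="relu")(inputs)         # the "encoding" (64 numbers)
outputs = keras.layers.Dense(1024, activation="sigmoid")(code)   # reconstruction of the input

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 1024).astype("float32")  # toy data standing in for real images
autoencoder.fit(x, x, epochs=5, batch_size=32)   # targets are the inputs themselves
```

After training, the 64-dimensional activations of the middle layer are the learned representation that a downstream model could use.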

Continue reading “Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!”

Posted in Algorithm, Machine Learning, Theory

ML Recap Slides sharing

A few friends and I have been working on something together since last October. All of us were looking for jobs in machine learning or deep learning, and we agreed that we needed to review some interesting algorithms together. Over the new year we finished a draft covering machine learning algorithms (part 1):

[Preview slides from the part 1 draft]

Click here for the full version: mlrecap.

We are also working on part 2, which covers some more advanced algorithms; you can see them in our outline below. We expect to finish it around this June.

[Outline preview slides for part 2]

These slides are meant for reviewing material you already know. Many details are omitted, so we do not recommend that readers learn these concepts from our slides alone. If you find mistakes, please leave a comment. If you are interested in particular algorithms, leave a comment and we will consider adding them to the part 2 outline.

Posted in Algorithm, Theory

Deep Learning 14: Optimization, an Overview

As a cardinal part of deep learning and machine learning, optimization has long been a mathematical problem for researchers. Why do we need optimization? Recall that in linear regression you have a loss function, and you need to find the optimum of that function, for example to minimize the squared error. You are probably also familiar with gradient descent, especially from when you started learning neural networks. In this blog, we will cover some methods in optimization and the conditions under which we should apply them.

We can split optimization problems into two groups: constrained and unconstrained. Unconstrained means that, given a function, we try to minimize or maximize it without any other conditions. Constrained means we must satisfy some conditions while optimizing the function.

Unconstrained Optimization

Definition: \min_{x \in \mathbb{R}^n} f(x), where x^* is the optimum.
There are stochastic and iterative methods for this kind of unconstrained problem, for example gradient descent, Newton's method, and quasi-Newton methods (optimized variants of Newton's method). Compared with GD, the latter two usually converge faster. In the graph below, the red arrow denotes GD and the green one Newton's method.

[Figure: convergence paths of gradient descent (red) and Newton's method (green)]
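To make gradient descent concrete, here is a minimal sketch on a toy function of my own choosing (the function, step size, and iteration count are illustrative assumptions, not from the post):

```python
import numpy as np

def grad_f(v):
    # gradient of f(x, y) = x^2 + 10*y^2
    x, y = v
    return np.array([2.0 * x, 20.0 * y])

v = np.array([4.0, 3.0])       # starting point
lr = 0.05                      # step size (learning rate)
for _ in range(100):
    v = v - lr * grad_f(v)     # move against the gradient
print(v)                       # approaches the minimum at (0, 0)
```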

The original “Newton's method” (also known as the Newton–Raphson method) finds the roots of a function, that is, it solves f(x) = 0, and it still needs some iterations. If you look at the fantastic animation below, which I borrowed from Wikipedia, you will quickly get the idea. The job is to find the x where f(x) = 0. First, we pick a starting point x_1 and evaluate f(x_1); then we take the tangent line at the point (x_1, f(x_1)). The next point is exactly where that tangent intersects the x-axis, so x_{2}=x_{1}-{\frac {f(x_{1})}{f'(x_{1})}}. We repeat this many times until a sufficiently accurate value is reached.

[Animation: Newton's method iterations, from Wikipedia]
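A minimal sketch of the iteration described above (the tolerance, iteration cap, and the square-root-of-2 example are my own illustrative choices):

```python
def newton_root(f, f_prime, x, tol=1e-10, max_iter=50):
    """Find a root of f(x) = 0 starting from x."""
    for _ in range(max_iter):
        step = f(x) / f_prime(x)   # x_{n+1} = x_n - f(x_n) / f'(x_n)
        x = x - step
        if abs(step) < tol:        # stop once the update is tiny
            break
    return x

# toy example: sqrt(2) as the positive root of x^2 - 2 = 0
print(newton_root(lambda x: x**2 - 2, lambda x: 2 * x, x=1.0))  # ~1.41421356
```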

Now you can see that Newton's method is perfect for solving problems of the form f(x) = 0. How does it help with optimization? You need a little math knowledge here. If f is a twice-differentiable function and you want to find its maximum or minimum, the problem becomes finding the roots of the derivative f' (the solutions of f'(x) = 0), also known as the stationary points of f.
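Applying the same update to f' gives a sketch of Newton's method for optimization (the toy quadratic below is my own example, assuming f is twice differentiable):

```python
def newton_minimize(f_prime, f_double_prime, x, n_iter=20):
    # find a stationary point by solving f'(x) = 0 with Newton's update
    for _ in range(n_iter):
        x = x - f_prime(x) / f_double_prime(x)
    return x

# toy example: f(x) = (x - 3)^2 + 1, so f'(x) = 2(x - 3) and f''(x) = 2
print(newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x=10.0))  # -> 3.0
```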

Constraint Optimization: Equality Optimization

Definition: \min{f(x, y)}, subject to g(x,y)=c.
In an equality-constrained problem there can be more than one equality constraint; here we discuss a single constraint for convenience. The method is called the Lagrange multiplier method. When you have constraints, a natural idea is to try to eliminate them, and that is exactly what this method does: we introduce a new variable \lambda and build a Lagrange function:

\mathcal{L}(x, y, \lambda) = f(x, y) + \lambda\,(g(x, y) - c)

Then what we need to do is solve the following equations:

\nabla_{x, y, \lambda}\,\mathcal{L}(x, y, \lambda) = 0, \quad \text{i.e.} \quad \frac{\partial \mathcal{L}}{\partial x} = 0, \;\; \frac{\partial \mathcal{L}}{\partial y} = 0, \;\; \frac{\partial \mathcal{L}}{\partial \lambda} = 0

Once we get the right \lambda (remember it cannot be zero here), we can substitute it back into the Lagrange function. When the Lagrange function attains its optimum, so does f(x, y), because g(x, y) - c is always 0.
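As a concrete check of these equations, here is a sketch that solves a toy problem of my own choosing with sympy: optimize f(x, y) = x + y subject to x^2 + y^2 = 1.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x + y              # objective
g = x**2 + y**2        # constraint g(x, y) = c with c = 1

L = f + lam * (g - 1)  # Lagrange function
# set all partial derivatives of L to zero and solve the system
sols = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)], [x, y, lam])
print(sols)  # candidates at (±sqrt(2)/2, ±sqrt(2)/2); the positive one is the maximum
```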

Constraint Optimization: Inequality Optimization

Definition:

\min_{x} f(x), \quad \text{subject to } c_i(x) \le 0,\; i = 1, \dots, k, \quad h_j(x) = 0,\; j = 1, \dots, l
Following the same idea, the Lagrange multiplier method can be extended, and the generalized Lagrange function is written in this way:

\mathcal{L}(x, \alpha, \beta) = f(x) + \sum_{i=1}^{k} \alpha_i\, c_i(x) + \sum_{j=1}^{l} \beta_j\, h_j(x)

Here all \alpha_i and \beta_j are Lagrange multipliers, with the requirement \alpha_i \ge 0.
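In practice such problems are usually handed to a numerical solver. Here is a minimal sketch with scipy on a toy problem of my own choosing: minimize (x - 1)^2 + (y - 2)^2 subject to x + y <= 2.

```python
import numpy as np
from scipy.optimize import minimize

fun = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2     # objective
# SciPy's 'ineq' convention is fun(v) >= 0, so x + y <= 2 becomes 2 - x - y >= 0
cons = [{'type': 'ineq', 'fun': lambda v: 2 - v[0] - v[1]}]

res = minimize(fun, x0=np.array([0.0, 0.0]), constraints=cons)
print(res.x)  # roughly (0.5, 1.5), on the boundary of the constraint
```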


Posted in Algorithm, Deep Learning, Theory

Deep Learning 13: Understanding Generative Adversarial Network

Proposed in 2014, the Generative Adversarial Network (GAN) now has many variants. You might not be surprised that the relevant papers read more like statistics research: when a model is proposed, it is first evaluated on some fundamental probability distributions, and from there the generalized applications start. Continue reading “Deep Learning 13: Understanding Generative Adversarial Network”

Posted in Algorithm, Theory

Two sample problem(2): kernel function, feature space and reproducing kernel map

Find Two sample problem (1) here.

We will take a look at the RKHS (Reproducing Kernel Hilbert Space) in this post. You might think of it as a purely statistical term, but it is remarkable for its wide range of applications. You will need to refresh your memory of some linear algebra computations. We start with some basic terms and definitions. Continue reading “Two sample problem(2): kernel function, feature space and reproducing kernel map”