Posted in Algorithm, Deep Learning, Theory

Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!

There are unsupervised learning models among multi-level (deep) learning methods, for example RBMs and autoencoders. In brief, an autoencoder tries to find a way to reconstruct its original inputs, that is, another way for the data to represent itself. It is also useful for dimensionality reduction: a 32 * 32 image, for example, can be represented with far fewer parameters, which is what "encoding" an image means. Since the goal is to learn a new representation, autoencoders are also applied for pre-training; a traditional machine learning model can then be applied depending on the task, a typical "two-stage" way of solving problems. [3] by Hinton is a great work on this problem, showing the "compressing" ability of neural networks and addressing the bottleneck of handling massive amounts of information.

Continue reading “Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!”
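
To make the "encoding" idea concrete, here is a minimal sketch; it assumes the TensorFlow 1.x graph-style API used elsewhere on this blog, and the layer sizes are illustrative, not from the post. A 32 * 32 image, flattened to 1024 values, is squeezed through a 64-unit code layer and then reconstructed; after training, those 64 activations are the compressed representation.

```python
# Minimal autoencoder sketch (TensorFlow 1.x-style API, assumed here;
# sizes are illustrative). A flattened 32*32 image (1024 values) is
# compressed to a 64-dimensional code, then reconstructed from it.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 1024])  # flattened 32*32 input

# Encoder: 1024 -> 64 (the learned compact representation)
w_enc = tf.Variable(tf.random_normal([1024, 64], stddev=0.01))
b_enc = tf.Variable(tf.zeros([64]))
code = tf.nn.sigmoid(tf.matmul(x, w_enc) + b_enc)

# Decoder: 64 -> 1024 (the reconstruction)
w_dec = tf.Variable(tf.random_normal([64, 1024], stddev=0.01))
b_dec = tf.Variable(tf.zeros([1024]))
x_hat = tf.nn.sigmoid(tf.matmul(code, w_dec) + b_dec)

# Reconstruction loss: how far the output is from the original input.
loss = tf.reduce_mean(tf.square(x_hat - x))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
```

Training would repeatedly run train_op in a Session over batches of images; the code tensor alone then serves for dimensionality reduction, or as a pre-trained first stage in the two-stage setup described above.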

Posted in Algorithm, Deep Learning, Theory

Deep Learning 13: Understanding Generative Adversarial Network

Proposed in 2014, the Generative Adversarial Network (GAN) now has many variants. You might not be surprised that the relevant papers read more like statistics research: when a new model is proposed, it is typically evaluated on some fundamental probability distributions, which is where generalized applications start.

Continue reading "Deep Learning 13: Understanding Generative Adversarial Network"
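
For reference, the statistical flavor traces back to the original 2014 formulation: the generator G and the discriminator D play a minimax game over a value function built from expectations under the data distribution and a noise prior.

```latex
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

At the optimum the generator's distribution matches the data distribution, which is why toy evaluations on simple, known distributions are so common in these papers.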

Posted in Deep Learning, Energy-Based Learning, Theory

Deep Learning 12: Energy-Based Learning (2)–Regularization & Loss Functions

First, let's see what regularization is through a simple example. Then we will have a look at some different types of loss functions.

Regularization

Today I reviewed the definition of regularization from Andrew's lecture videos.

Continue reading "Deep Learning 12: Energy-Based Learning (2)–Regularization & Loss Functions"
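
Since the excerpt only names the topic, here is a small numpy sketch of the L2 (ridge) penalty in the spirit of Andrew's course; the data and lambda values are made up for illustration. Adding lambda * ||w||^2 to the squared error pulls the weights toward zero, so larger lambda yields a smaller weight norm.

```python
# Toy illustration of L2 (ridge) regularization: the penalty term
# shrinks the weights, trading a little training error for a simpler,
# less overfit model. (numpy-only sketch; data invented for this demo.)
import numpy as np

def loss(w, X, y, lam):
    """Mean squared error plus an L2 penalty lam * ||w||^2."""
    residual = X @ w - y
    return np.mean(residual ** 2) + lam * np.sum(w ** 2)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = X @ np.array([1.0, 0.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=20)

for lam in (0.0, 1.0, 10.0):
    w = ridge_fit(X, y, lam)
    print(lam, np.round(np.linalg.norm(w), 3))  # weight norm shrinks as lam grows
```

The same idea carries over to neural networks, where the L2 penalty on the parameters is usually called weight decay.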

Posted in Deep Learning, Energy-Based Learning, Theory

Deep Learning 11: Energy-Based Learning (1)–What is EBL?

As part of our goals, it is absolutely important to look back and think about the loss functions we have applied, for example cross entropy. There are other types, however, targeting different practical problems, and you will need to think about which one is suitable. Beyond that, Energy-Based Models (EBMs) provide more. These are learning notes from A Tutorial on Energy-Based Learning.

Continue reading "Deep Learning 11: Energy-Based Learning (1)–What is EBL?"
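
As a side-by-side taste of those "other types", here is a numpy-only sketch comparing cross entropy with a margin (hinge) loss of the kind discussed in energy-based learning; the example numbers are invented, and the scores here play the role of negated energies (higher is better).

```python
# Two common per-example loss functions, written in plain numpy for
# comparison. Which one is suitable depends on the task; this is an
# illustrative sketch only.
import numpy as np

def cross_entropy(p, y):
    """Negative log-likelihood of the true class y under predicted
    probabilities p (the familiar classification loss)."""
    return -np.log(p[y])

def hinge(scores, y, margin=1.0):
    """Hinge loss: push the score of the correct class y above the
    best wrong-class score by at least `margin`."""
    wrong = np.delete(scores, y)
    return max(0.0, margin - (scores[y] - wrong.max()))

p = np.array([0.7, 0.2, 0.1])        # predicted class probabilities
scores = np.array([2.0, 1.5, -0.3])  # raw (energy-like) scores
print(cross_entropy(p, y=0))  # ~0.357
print(hinge(scores, y=0))     # 0.5
```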

Posted in Deep Learning, Python, Theory

TensorFlow 05: Understanding Basic Usage

Only recently did I realize I had missed some basics about TF; when I was learning, I went directly to the MNIST example. I also asked a few people whether they had any nice tutorials for TF or for DL. Well, it is not like other subjects, where you can easily find good material such as Andrew's ML course. I did find some resources (listed in the reference section), though I have not gone through every one of them. For those who are interested, check them out yourself, or feel free to share your recommendations.
Continue reading “TensorFlow 05: Understanding Basic Usage”
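
For completeness, here is a minimal sketch of the basics the post refers to, assuming the graph-and-session API of the TensorFlow 1.x era this series dates from: you first build a graph of operations, and nothing is computed until a Session runs it.

```python
# Basic TF usage in the graph-and-session style of the TF 1.x era:
# first build a graph of ops, then run it in a Session.
import tensorflow as tf

a = tf.constant(3.0)       # nodes are added to a default graph...
b = tf.constant(4.0)
c = a * b                  # ...and no arithmetic happens yet

with tf.Session() as sess:  # a Session actually executes the graph
    print(sess.run(c))      # 12.0

# Placeholders are fed concrete values at run time:
x = tf.placeholder(tf.float32)
y = x + 1.0
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 41.0}))  # 42.0
```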