Posted in Algorithm, Deep Learning, Theory

Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!

Among multi-layer learning methods there are unsupervised models, for example RBMs and autoencoders. In brief, an autoencoder tries to find a way to reconstruct its original inputs, that is, to learn another representation of the data itself. This also makes it useful for dimensionality reduction. For example, a 32 * 32 image can be represented with far fewer parameters; this is what we mean by "encoding" an image. Since the goal is to learn a new representation, an autoencoder is also used for pre-training: a traditional machine learning model can then be applied on top of it, depending on the task, which is a typical "two-stage" way of solving problems. [3] by Hinton is a great piece of work on this problem; it shows the ability of neural networks to "compress" data, addressing the bottleneck of massive information.
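To make the encode-and-reconstruct idea concrete, here is a minimal sketch of a fully connected autoencoder. The choice of PyTorch, the 256-unit hidden layer, and the 64-dimensional code are my own illustrative assumptions for the 32 * 32 example above, not something fixed by the post.

    # Minimal autoencoder sketch (illustrative assumptions, not from the post):
    # a flattened 32*32 image (1024 values) is compressed to a 64-dimensional code
    # and then reconstructed from that code.
    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, input_dim=32 * 32, code_dim=64):
            super().__init__()
            # Encoder: map the input down to a lower-dimensional code
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, 256),
                nn.ReLU(),
                nn.Linear(256, code_dim),
            )
            # Decoder: reconstruct the original input from the code
            self.decoder = nn.Sequential(
                nn.Linear(code_dim, 256),
                nn.ReLU(),
                nn.Linear(256, input_dim),
                nn.Sigmoid(),  # assumes pixel values lie in [0, 1]
            )

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code)

    model = Autoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # reconstruction error between output and input

    x = torch.rand(16, 32 * 32)       # dummy batch of 16 flattened images
    optimizer.zero_grad()
    reconstruction = model(x)
    loss = loss_fn(reconstruction, x)  # the target is the input itself
    loss.backward()
    optimizer.step()

After training, the 64-dimensional code from the encoder can be used as the compressed representation for a downstream model, which is exactly the "two-stage" use described above.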

Continue reading “Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!”

Posted in Algorithm, Machine Learning, Theory

ML Recap Slides sharing

A few friends and I have been working on a project together since last October. All of us were looking for jobs in machine learning or deep learning, and we agreed that we should review some interesting algorithms together. We finished a draft of the machine learning algorithms slides (Part 1) over the New Year:

[Preview images of the Part 1 slides]

Click here for a full version: mlrecap.

We are also working on Part 2, which covers some more advanced algorithms, as you can see from the outline below. We expect to finish it around this June.

[Preview images of the Part 2 outline]

These slides are meant to help people review material they have already studied. Many details are omitted, so we do not recommend learning these concepts from the slides alone. If you find mistakes, please leave a comment. If you are interested in particular algorithms, leave a comment and we will consider updating the Part 2 outline.