First, let’s see what regularization is through a simple example. Then we will look at some different types of loss functions.
I reviewed the definition of regularization today from Andrew’s lecture videos. Continue reading “Deep Learning 12: Energy-Based Learning (2)–Regularization & Loss Functions”
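As a quick sketch of the idea (my own minimal NumPy example, not from the lecture; the data and the λ values are made up): L2 regularization adds a penalty on the weight norm to the training loss, which shrinks the weights and discourages overfitting.

```python
import numpy as np

# Minimal sketch of L2 (weight decay) regularization on a linear model.
# Total loss = mean squared error + lam * ||w||^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)

def regularized_loss(w, lam):
    mse = np.mean((X @ w - y) ** 2)
    penalty = lam * np.sum(w ** 2)  # the regularization term
    return mse + penalty

def ridge_fit(lam):
    # Closed-form ridge solution: (X^T X + lam*n*I)^-1 X^T y
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

w_plain = ridge_fit(0.0)    # no regularization (ordinary least squares)
w_shrunk = ridge_fit(10.0)  # strong regularization shrinks the weights
```

With a large λ the fitted weights have a smaller norm: we trade a little training error for a simpler, less overfit model.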
As part of our goals, it is important to look back and think about the loss functions we have applied, for example, cross entropy. There are other types, however, targeting different practical problems, and you will need to think about which one is suitable. Besides, Energy-Based Models (EBMs) provide even more. These are learning notes from A Tutorial on Energy-Based Learning. Continue reading “Deep Learning 11: Energy-Based Learning (1)–What is EBL?”
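As a quick reminder of the cross entropy mentioned above (a minimal NumPy sketch of my own, not taken from the tutorial; the logits are made up): the softmax turns raw scores into a probability distribution, and cross entropy penalizes low probability on the true class.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def cross_entropy(probs, target):
    # Negative log-probability assigned to the true class index.
    return -np.log(probs[target])

logits = np.array([2.0, 0.5, -1.0])  # raw model scores for 3 classes
p = softmax(logits)                  # p sums to 1

loss_correct = cross_entropy(p, 0)  # target is the highest-scoring class
loss_wrong = cross_entropy(p, 2)    # target is the lowest-scoring class
```

When the model assigns high probability to the true class the loss is small; when the true class gets low probability the loss blows up, which is exactly the gradient signal we want.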
Only recently did I realize I had missed some basics of TF: I went directly to MNIST when I started learning. I also asked a few people whether they had some nice tutorials for TF or for DL. Well, it is not like other topics, where you can easily find good ones like Andrew’s ML course. I did find a few (in the reference section), though I have not gone through every one. For those who are interested, have a look yourself, or feel free to share your recommendations.
Continue reading “TensorFlow 05: Understanding Basic Usage”
Learning notes for Lecture 7, “Modeling sequences: A brief overview,” by Geoffrey Hinton.
Continue reading “Deep Learning 10: Sequence Modeling”
I have been optimizing my code for the ConvNet these days. Not all of the methods work well, because I do not have very strong knowledge of the hyper-parameters. Anyway, here is something I have learnt.
Continue reading “Deep Learning 08: Small Tricks(1)”
This post is a solution to Udacity DL Assignment 4, using a CNN to classify notMNIST images. Visit here for a full version of my code.
Continue reading “TensorFlow 04 : Implement a LeNet-5-like NN to classify notMNIST Images”