In the past few years after my Master’s, I did many jobs: long-term, short-term, internships, and full-time positions. I also had many interviews, some of which I failed. Together with my friends, I collected many materials, including basic algorithms, popular questions, basic machine learning knowledge, and deep learning knowledge. Then I organized… Continue reading “Prepare for the Interviews!”
Category Archives: Theory
Understanding Variational Graph Auto-Encoders
Variational Auto-Encoders See my post about Auto-encoders. Variational Auto-Encoders (VAEs) (from the paper Auto-Encoding Variational Bayes) add latent variables to the existing autoencoder. The main idea is that we want to restrict the latent variables to come from a known distribution. Why do we want this? We wish the generative model to produce more “creative” things. If the model… Continue reading “Understanding Variational Graph Auto-Encoders”
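The “known distribution” constraint is usually trained with the reparameterization trick. A minimal sketch of that idea (the encoder outputs below are made-up values, not from the post or paper):

```python
import numpy as np

def reparameterize(mu, log_var, eps):
    """VAE reparameterization trick: z = mu + sigma * eps, with
    sigma = exp(log_var / 2), so gradients can flow through mu and log_var."""
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical encoder outputs for one input (illustrative values only).
mu = np.array([0.5, -1.0])
log_var = np.array([0.0, 0.2])

rng = np.random.default_rng(0)
eps = rng.standard_normal(size=mu.shape)  # noise from the known N(0, 1)
z = reparameterize(mu, log_var, eps)      # latent sample from N(mu, sigma^2)
```

With `eps = 0` the sample collapses to the mean, which makes the trick easy to sanity-check.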
Understanding Graph Convolutional Networks
Why Graphs? Graph Convolutional Networks (GCNs) [0] deal with data that have a graph structure. A typical graph is represented as G(V, E), where V is the collection of all the nodes and E is the collection of all the edges.
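The G(V, E) representation can be made concrete as an adjacency matrix, which is what GCN layers actually multiply against. A tiny sketch with a made-up four-node graph (the normalization step follows the usual GCN convention of adding self-loops):

```python
import numpy as np

# G(V, E): a toy undirected graph with 4 nodes and 3 edges.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]

# Build the symmetric adjacency matrix A from the edge list.
A = np.zeros((len(V), len(V)))
for i, j in E:
    A[i, j] = A[j, i] = 1.0

# GCNs typically use A_hat = A + I (self-loops), degree-normalized.
A_hat = A + np.eye(len(V))
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # the matrix a GCN layer applies
```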
Slideshare (4): A brief Introduction on Transfer Learning
Please check my notes for an introduction to Transfer Learning! Transfer Learning
Transfer Learning Materials
Keep Updating
Slideshare (3): Unsupervised Transfer Learning Methods
A brief introduction to unsupervised transfer learning methods. The presentation focuses on unsupervised transfer learning, introducing feature-based and model-based strategies and a few recent papers from ICML and ACL. Unsupervised Transfer Learning Comments are welcome!
TensorFlow 08: save and restore a subset of variables
TensorFlow provides save and restore functions for us to save and re-use model parameters. If you have a trained VGG model, for example, it is helpful to restore the first few layers and then apply them in your own network. This raises a question: how do we restore a subset of… Continue reading “TensorFlow 08: save and restore a subset of variables”
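In TF1-style code this is typically done by passing a filtered `var_list` to `tf.train.Saver`. The underlying idea — keep only the named parameters you need — can be sketched framework-agnostically; the checkpoint names below are made up for illustration:

```python
def restore_subset(checkpoint, prefixes):
    """Keep only parameters whose names start with one of the given prefixes.

    `checkpoint` stands in for a saved name -> value mapping; in TF1 the same
    effect comes from passing the filtered variables as var_list to
    tf.train.Saver before calling restore().
    """
    return {name: value for name, value in checkpoint.items()
            if any(name.startswith(p) for p in prefixes)}

# Hypothetical VGG-like checkpoint (made-up names and values).
ckpt = {"conv1/w": 1, "conv1/b": 2, "conv2/w": 3, "fc8/w": 4}
first_layers = restore_subset(ckpt, ["conv1", "conv2"])  # drops fc8/w
```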
To copy or not, that is the question: copying mechanism
In our daily life, we often repeat things mentioned earlier in a dialogue, like the names of people or organizations: “Hi, my name is Pikachu”, “Hi, Pikachu, …” There is a high probability that the word “Pikachu” will not be in the vocabulary extracted from the training data. So in the paper (Incorporating Copying Mechanism in… Continue reading “To copy or not, that is the question: copying mechanism”
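The core idea — fall back to copying a source token when it is out of vocabulary — can be sketched as a toy decision rule (this is an illustration of the intuition, not the paper’s actual model, which mixes generate and copy probabilities softly):

```python
def copy_or_generate(token, vocab, source_tokens):
    """Toy sketch of the copying mechanism: emit a vocabulary word
    normally, but copy an out-of-vocabulary token straight from the
    source sequence instead of producing <UNK>."""
    if token in vocab:
        return ("generate", token)
    if token in source_tokens:
        return ("copy", token)
    return ("generate", "<UNK>")

vocab = {"hi", "my", "name", "is"}          # training-set vocabulary
source = ["hi", "my", "name", "is", "pikachu"]  # input utterance
```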
What matters: attention mechanism
People are attracted to only a part of an image, say a person in a photo. Similarly, for a given sequence of words, we should pay attention to a few keywords instead of treating each word equally. For example, in “this is an apple”, when you read it aloud, I am sure you will stress “apple”… Continue reading “What matters: attention mechanism”
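The “stress some words more than others” intuition is usually realized as a softmax over relevance scores, so each word gets a weight and the weights sum to 1. A minimal sketch (the scores below are made up; a real model would compute them from hidden states):

```python
import math

def attention_weights(scores):
    """Softmax over relevance scores: each word gets a weight in (0, 1),
    the weights sum to 1, and higher-scoring words dominate."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for "this is an apple" — "apple" scores highest.
words = ["this", "is", "an", "apple"]
weights = attention_weights([0.1, 0.1, 0.1, 2.0])
```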
What’s next: seq2seq models
This short blog post contains my notes from the Seq2seq Tutorial. Please leave comments if you are interested in this topic.
