Category Archives: Deep Learning

ELMo: Deep contextualized word representations
In this post, I show a demo of how to use pre-trained ELMo embeddings and how to train your own.
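For anyone who wants to try it right away, one common route to pre-trained ELMo embeddings is the module published on TensorFlow Hub. A minimal sketch, assuming TF 1.x and the tensorflow_hub package (illustrative, not necessarily the exact demo in the post):

```python
# A sketch using the ELMo module published on TensorFlow Hub (TF 1.x style;
# the module URL is the publicly released one, not necessarily the one used
# in the post's demo).
import tensorflow as tf
import tensorflow_hub as hub

elmo = hub.Module("https://tfhub.dev/google/elmo/2", trainable=False)
embeddings = elmo(
    ["Hi, my name is Pikachu"],   # a batch of raw sentences
    signature="default",
    as_dict=True,
)["elmo"]                         # contextual vectors: [batch, max_len, 1024]

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)
```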
Slideshare (3): Unsupervised Transfer Learning Methods
A brief introduction to unsupervised transfer learning methods. The presentation introduces feature-based and model-based strategies, along with a few recent papers from ICML and ACL. Slides: Unsupervised Transfer Learning. Comments are welcome!
To copy or not, that is the question: copying mechanism
In our daily life, we often repeat things mentioned earlier in a dialogue, like the names of people or organizations. “Hi, my name is Pikachu”, “Hi, Pikachu,…” There is a high probability that the word “Pikachu” will not be in the vocabulary extracted from the training data. So in the paper (Incorporating Copying Mechanism in… Continue reading “To copy or not, that is the question: copying mechanism”
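To make the idea concrete, here is a toy numpy sketch of the copy/pointer computation (my own illustration, not code from the paper): the final word distribution mixes a generation distribution over the fixed vocabulary with attention mass copied from the source positions.

```python
# Toy numpy sketch of the copy/pointer idea (illustrative names, not the
# paper's code): mix a generation distribution over the fixed vocabulary
# with a copy distribution built from attention over the source tokens.
import numpy as np

vocab_size = 10
p_vocab = np.full(vocab_size, 0.1)   # generation distribution (uniform toy)
src_ids = np.array([3, 7, 3])        # ids of the source tokens
attn = np.array([0.2, 0.3, 0.5])     # attention weights over source positions
p_gen = 0.6                          # learned gate: generate vs. copy

p_copy = np.zeros(vocab_size)
np.add.at(p_copy, src_ids, attn)     # scatter attention mass onto token ids

p_final = p_gen * p_vocab + (1 - p_gen) * p_copy
print(p_final, p_final.sum())        # a valid distribution; extending the
                                     # vocabulary with source-only words is
                                     # what lets "Pikachu" be produced
```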
Deep Learning 16: Understanding Capsule Nets
This post is my learning notes from Prof Hung-Yi Lee‘s lecture; the PDF can be found here (pages 40-52). I have read a few articles, and I found this one is a must-read. It is simple, and you can easily understand what is going on. I would say it is a good starting point for further reading. Paper… Continue reading “Deep Learning 16: Understanding Capsule Nets”
Reinforcement Learning (1): Q-Learning basics
Hi! In the following posts, I will introduce Q-Learning, the first thing to learn if you want to pick up reinforcement learning. But before that, let us shed light on some fundamental concepts in reinforcement learning (RL). Kindergarten Example. Q-Learning works in this way: take an action, and get a reward and an observation from the environment, … Continue reading “Reinforcement Learning (1): Q-Learning basics”
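A minimal tabular sketch of that act-observe-update loop, on a hypothetical corridor environment of my own (all names and parameters are illustrative, not from the post):

```python
# Tabular Q-learning toy: act, observe reward and next state, then update
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    """Hypothetical environment: action 1 moves right, reward at the end."""
    next_state = min(state + action, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # learned state-action values
```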
Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!
There are unsupervised learning models among multi-level learning methods, for example RBMs and autoencoders. In brief, an autoencoder tries to find a way to reconstruct the original inputs, that is, another way to represent them. In addition, it is useful for dimensionality reduction. For example, say there is a 32 * 32 image; it is… Continue reading “Deep Learning 15: Unsupervised learning in DL? Try Autoencoder!”
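A minimal sketch of the idea in tf.keras (my own toy code, assuming flattened 32 * 32 inputs): the bottleneck layer is the compressed representation, and the training targets are the inputs themselves.

```python
# Minimal dense autoencoder: compress 1024-dim inputs (a flattened 32*32
# image) to a small code, then reconstruct; the code layer is the reduced
# representation used for dimensionality reduction.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(1024,))
code = tf.keras.layers.Dense(32, activation="relu")(inputs)         # bottleneck
outputs = tf.keras.layers.Dense(1024, activation="sigmoid")(code)   # reconstruction

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 1024).astype("float32")  # stand-in data
autoencoder.fit(x, x, epochs=1, batch_size=32)   # targets are the inputs
```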
Deep Learning 13: Understanding Generative Adversarial Network
Proposed in 2014, the Generative Adversarial Network (GAN) now has many variants. You might not be surprised that the relevant papers read more like statistics research: when a model is proposed, its evaluation is based on some fundamental probability distributions, from which generalized applications start.
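For reference, the minimax objective from the original 2014 paper, where D is the discriminator and G the generator:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```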
Deep Learning 12: Energy-Based Learning (2)–Regularization & Loss Functions
First, let’s see what regularization is, through a simple example. Then we will have a look at some different types of loss functions. Regularization: I reviewed the definition of regularization today from Andrew’s lecture videos.
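As a reminder, the regularized linear-regression cost in the form Andrew’s course uses, where the second term penalizes large parameters (the bias term is conventionally left out of the penalty):

```latex
J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \big( h_\theta(x^{(i)}) - y^{(i)} \big)^2
          + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2
```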
Deep Learning 11: Energy-Based Learning (1)–What is EBL?
As part of our goals, it is absolutely important to look back and think about the loss functions we apply, for example cross entropy. There are other types, however, targeting different practical problems, and you will need to think about which one is suitable. Besides, Energy-Based Models (EBMs) provide more. These… Continue reading “Deep Learning 11: Energy-Based Learning (1)–What is EBL?”
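For concreteness, the cross-entropy loss mentioned above, for a one-hot target y and a predicted distribution ŷ over C classes:

```latex
L(y, \hat{y}) = -\sum_{c=1}^{C} y_c \log \hat{y}_c
```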
TensorFlow 05: Understanding Basic Usage
Only recently did I realize I had missed some basics of TF: I went directly to MNIST when I was learning. I also asked a few people whether they had some nice tutorials for TF or for DL. Well, it is not like other subjects, where you can easily find good ones like Andrew’s ML course. But I did… Continue reading “TensorFlow 05: Understanding Basic Usage”
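For anyone in the same spot, a minimal sketch of the basics the post covers, assuming the TF 1.x API of that era (build a graph of ops first, then run it in a Session; TF 2.x executes eagerly instead):

```python
# Classic (pre-2.0) TensorFlow basics: ops are added to a graph when
# defined, and only produce values when run inside a Session.
import tensorflow as tf

a = tf.constant(3.0)        # a node in the graph, not yet executed
b = tf.constant(4.0)
c = a * b                   # still symbolic, no value so far

with tf.Session() as sess:  # a Session executes the graph
    print(sess.run(c))      # 12.0
```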