Hi! In the following posts, I will introduce Q-Learning, the first topic to learn if you want to pick up reinforcement learning. But before that, let us shed light on some fundamental concepts in reinforcement learning (RL). Kindergarten Example: Q-Learning works this way: take an action, then get a reward and an observation from the environment, …
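The action–reward–observation loop mentioned above can be sketched in a few lines. This is a minimal tabular Q-Learning sketch on a toy one-dimensional corridor environment that I made up purely for illustration (the states, rewards, and hyperparameters are all assumptions, not from the post):

```python
import random

random.seed(0)

# Hypothetical toy environment for illustration: a corridor of 5 cells;
# the agent starts at cell 0 and gets reward 1 only when it reaches cell 4.
N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: one row per state

def step(state, action):
    """Environment: take an action, return (next_state, reward)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = Q[s].index(max(Q[s]))
        s_next, r = step(s, a)
        # Q-Learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q[0])  # after training, "right" should score higher than "left"
```

After enough episodes, the Q-table at the start state prefers moving right, which is exactly the "act, observe, update" cycle the post describes.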
A few friends and I have been working on some projects together since last October. All of us were looking for jobs in machine learning or deep learning, and we agreed that we should review some interesting algorithms together. We finished a draft of machine learning algorithms (part 1) over the new year: Click here for a full…
I was working on my research with sklearn, but realized that choosing the right evaluation metric has always been a problem for me. If someone asks me, "Does your model perform well?", the first thing in my mind is "accuracy". But besides accuracy, there are many other metrics, depending on your own problem.
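A classic case where accuracy alone misleads is class imbalance. Here is a small sketch in plain Python (a made-up toy dataset; `sklearn.metrics.accuracy_score` and `recall_score` would give the same numbers) showing a lazy model that looks great by accuracy but finds zero positives:

```python
# Toy imbalanced problem, invented for illustration:
# 95 negatives, 5 positives, and a model that predicts "negative" for everything.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

# Count the pieces of the confusion matrix by hand
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
recall = tp / (tp + fn)  # fraction of real positives the model found

print(accuracy)  # 0.95 -- looks great
print(recall)    # 0.0  -- yet the model never finds a positive
```

This is why the right metric depends on the problem: for rare-event detection, recall (or F1) tells you far more than accuracy.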
A brief introduction here. (I wrote a blog about it last year, but I do not think it was detailed enough.) This blog contains learning notes from this video (English slides but Chinese speaker). First a quick introduction to SVM, then the magic of how to solve the max/min problems. You can also find Kernel SVM here.
After HMMs, let’s work on a Trigram HMM directly on texts. I will first introduce the model, then give pieces of code for practice. I am not going to give a full solution, as the course still runs every year; find out more in the references.
Learning notes for Lecture 7, “Modeling sequences: A brief overview”, by Geoffrey Hinton.
AlphaGo! When you play any game, you probably have strategies or experience. But you cannot deny that sometimes you need luck, which data scientists would call a “random choice”. The Monte Carlo Method provides only an approximate optimizer, thus giving you the luck to win a game.
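The "approximate optimizer" idea can be sketched with random playouts. Below is a minimal Monte Carlo move-selection sketch on a made-up one-shot game (the moves and their hidden win probabilities are pure illustration, not AlphaGo's actual method): we estimate each move's value by averaging many random playouts and pick the best estimate.

```python
import random

random.seed(42)

# Hypothetical game for illustration: each move has a hidden win probability
# we can only discover by playing it out many times.
HIDDEN_WIN_PROB = {"corner": 0.6, "center": 0.4, "edge": 0.2}

def playout(move):
    """Simulate one random game from this move: 1 for a win, 0 for a loss."""
    return 1 if random.random() < HIDDEN_WIN_PROB[move] else 0

def monte_carlo_best_move(n_playouts=2000):
    # Average many random playouts per move; the averages are only
    # estimates, so a little "luck" always remains in the answer.
    estimates = {
        move: sum(playout(move) for _ in range(n_playouts)) / n_playouts
        for move in HIDDEN_WIN_PROB
    }
    return max(estimates, key=estimates.get), estimates

best, est = monte_carlo_best_move()
print(best, est)  # with enough playouts, "corner" wins almost every run
```

More playouts sharpen the estimates toward the true win rates, which is the trade-off Monte Carlo game players make: computation buys accuracy, but the result stays approximate.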