A brief introduction here. (I wrote a blog about it last year, but I don't think it was detailed enough.)
This blog contains my learning notes from this video (English slides, Chinese speaker). It starts with a quick introduction to SVM, then the magic of how to solve the max/min optimization, and also covers kernel SVM. Continue reading “Understanding SVM(2)”
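As a rough illustration of the kernel SVM mentioned above, here is a minimal scikit-learn sketch (not from the post itself; the toy dataset and hyperparameters are my own choices) that fits an RBF-kernel SVM on a non-linearly-separable dataset:

```python
# Minimal kernel-SVM sketch with scikit-learn (illustrative only).
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A toy, linearly non-separable dataset.
X, y = datasets.make_moons(n_samples=200, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps the data into a higher-dimensional space.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```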
There is a nice tutorial from Alex. I expanded the math part to show more details; I wrote it in LaTeX and posted screenshots. Continue reading “Two sample problem(1): Parzen Windows, Maximum Mean Discrepancy”
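For a quick feel of Maximum Mean Discrepancy, here is a minimal NumPy sketch of the biased MMD² estimate with a Gaussian kernel (the bandwidth and toy samples are my own assumptions, not from the tutorial):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2 * a @ b.T)
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimate of MMD^2 between samples X and Y."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))   # sample from P
Y = rng.normal(0.5, 1.0, size=(100, 2))   # sample from Q (shifted mean)
print("MMD^2 estimate:", mmd2_biased(X, Y))
```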
First introduced by Mikolov [1] in 2013, word2vec learns distributed representations (word embeddings) with a neural network. It is based on the distributional hypothesis that words occurring in similar contexts (neighboring words) tend to have similar meanings. There are two models: CBOW (continuous bag of words), where a bag of context words predicts a target word, and skip-gram, where one word predicts its neighbors. For more, although not highly recommended, have a look at the TensorFlow tutorial here. Continue reading “NLP 05: From Word2vec to Doc2vec: a simple example with Gensim”
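A minimal Gensim sketch of the skip-gram model on a toy corpus, assuming the Gensim 4.x API (older versions use size/iter instead of vector_size/epochs); the corpus and parameter values are illustrative only:

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens.
sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["dogs", "and", "cats", "are", "animals"],
]

# sg=1 selects skip-gram; sg=0 would select CBOW.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=100)

print(model.wv["cat"][:5])                  # first few dimensions of the embedding
print(model.wv.most_similar("cat", topn=3)) # nearest neighbors in embedding space
```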
We will focus on POS tagging in this blog.
While an HMM gives us a joint probability over tags and words, $p(t_1,\dots,t_n, w_1,\dots,w_n) = \prod_{i=1}^{n} q(t_i \mid t_{i-1})\, e(w_i \mid t_i)$ (shown here in its bigram form), a log-linear model directly models the conditional probability of the tags given the words. Tags t and words w map one-to-one, so the two sequences share the same length.
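As a toy illustration of that joint probability (the transition and emission tables below are made up for the example, not taken from the blog):

```python
import math

# Toy transition q(t_i | t_{i-1}) and emission e(w_i | t_i) probabilities.
q = {("<s>", "D"): 0.8, ("D", "N"): 0.9, ("N", "V"): 0.7}
e = {("the", "D"): 0.6, ("dog", "N"): 0.1, ("barks", "V"): 0.05}

def hmm_joint_log_prob(tags, words):
    """log p(tags, words) = sum_i log q(t_i | t_{i-1}) + log e(w_i | t_i)."""
    log_p = 0.0
    prev = "<s>"                   # start symbol
    for t, w in zip(tags, words):  # tags and words share the same length
        log_p += math.log(q[(prev, t)]) + math.log(e[(w, t)])
        prev = t
    return log_p

print(hmm_joint_log_prob(["D", "N", "V"], ["the", "dog", "barks"]))
```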
Continue reading “NLP 04: Log-Linear Models for Tagging Task (Python)”
This blog is a solution to Udacity DL Assignment 4, using a CNN to classify notMNIST images. Visit here for the full version of my code.
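For reference, here is a minimal LeNet-5-like architecture written with tf.keras; this is a sketch of the network shape only (28x28 grayscale inputs, 10 classes for the letters A–J), not the actual assignment code in the repo:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# LeNet-5-like CNN for 28x28 grayscale notMNIST images, 10 output classes.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, kernel_size=5, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(16, kernel_size=5, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```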
Continue reading “TensorFlow 04 : Implement a LeNet-5-like NN to classify notMNIST Images”
When you play a game, you probably rely on strategies or experience. But you cannot deny that sometimes you also need luck, which a data scientist would call a “random choice”. The Monte Carlo method provides only an approximate optimizer, built on exactly that kind of randomness, and so it gives you the luck to win a game.
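The classic example of getting an approximate answer from random sampling is estimating π; a minimal sketch (not from the blog) is below:

```python
import random

def estimate_pi(n_samples=1_000_000):
    """Monte Carlo estimate of pi: fraction of random points inside the unit quarter-circle."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi())   # approaches 3.14159... as n_samples grows
```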
Continue reading “Lucky or not: Monte Carlo Method”