NLP 05: From Word2vec to Doc2vec: a simple example with Gensim

  Introduction. First introduced by Mikolov [1] in 2013, word2vec learns distributed representations of words (word embeddings) using a neural network. It is based on the distributional hypothesis: words that occur in similar contexts (with similar neighboring words) tend to have similar meanings. There are two models: CBOW (continuous bag of words), where we use the surrounding context words to predict the target word, and skip-gram, where we use the target word to predict its context.
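The post title promises a simple example with Gensim; here is a minimal sketch of what training word2vec with Gensim typically looks like, assuming Gensim 4.x (where the embedding dimension is set via `vector_size`). The toy corpus is invented for illustration.

```python
# Minimal word2vec training sketch (assumes Gensim 4.x).
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
sentences = [
    ["machine", "learning", "is", "fun"],
    ["deep", "learning", "uses", "neural", "networks"],
    ["word", "embeddings", "capture", "word", "meaning"],
]

# sg=0 selects CBOW (predict the target word from its context);
# sg=1 would select skip-gram instead.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=0)

# Look up a learned vector and its nearest neighbors.
vector = model.wv["learning"]
print(model.wv.most_similar("learning", topn=3))
```

On a corpus this small the neighbors are essentially noise; with a realistically sized corpus, `most_similar` starts returning semantically related words, which is the behavior the distributional hypothesis predicts.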

Lucky or not: Monte Carlo Method

AlphaGo! When you play a game, you probably rely on strategies and experience. But you cannot deny that sometimes you need luck, which a data scientist would call a “random choice”. The Monte Carlo method provides only an approximate optimizer, thus giving you the luck to win a game.
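To make the “approximate optimizer” point concrete, here is a minimal sketch of the Monte Carlo idea: estimating pi by random sampling. This example is an illustration of the method, not taken from the post itself; each run gives a slightly different answer, and the estimate only tightens as the sample count grows.

```python
# Monte Carlo estimation of pi by random sampling.
import random

def estimate_pi(n: int) -> float:
    """Sample n points in the unit square; the fraction landing inside
    the quarter circle of radius 1 approximates pi / 4."""
    inside = sum(
        1 for _ in range(n)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n

print(estimate_pi(100_000))  # roughly 3.14, varying from run to run
```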