Lucky or not: Monte Carlo Method

AlphaGo!


When you play any game, you probably rely on strategies and experience. But you cannot deny that sometimes you need luck, which a data scientist would call a “random choice”. The Monte Carlo method provides only an approximate optimum, and in doing so gives you the luck to win a game.

Randomized Algorithms

Monte Carlo algorithm: the more samples we have, the closer to the optimum our answer gets (it always returns an answer, but possibly only an approximate one).
Las Vegas algorithm: the more samples we have, the better our chances of hitting the optimum (when it returns an answer, the answer is exact).

Which algorithm to choose depends on the problem. If samples are finite and an acceptable (not necessarily best) solution is expected, a Monte Carlo algorithm is okay. If the best solution is required, a Las Vegas algorithm applies, with no restriction on the number of samples (you know that with bad luck you might end up sampling EVERY record). For Go, computation time and memory are limited and not THE BEST move is wanted, so MCTS, a Monte Carlo algorithm, is the right choice. The toy sketch below contrasts the two styles.
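To make the contrast concrete, here is a minimal Python sketch (my own toy illustration, not from the original post) of searching for a target value in a list both ways:

import random

def las_vegas_find(items, target):
    # Las Vegas: keep sampling until the answer is found.
    # Always correct, but the running time is random.
    while True:
        i = random.randrange(len(items))
        if items[i] == target:
            return i

def monte_carlo_find(items, target, budget=100):
    # Monte Carlo: sample a fixed number of times.
    # Bounded running time, but it may give up without an answer.
    for _ in range(budget):
        i = random.randrange(len(items))
        if items[i] == target:
            return i
    return None

items = list(range(1000))
print(las_vegas_find(items, 42))    # always the index of 42, eventually
print(monte_carlo_find(items, 42))  # the index of 42, or None if unlucky

Increasing the budget raises the Monte Carlo version's success probability, which is exactly the trade-off described above.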

Monte Carlo Method

The Monte Carlo method is designed to evaluate integrals via random sampling.

You are given a function h(x), and want to know I:

I = \int_{a}^{b} h(x)\,dx

We sample x from the range (a, b) based on a probability density function p(x). Then h(x) can be treated as a function f(x) times that density p(x):

I = \int_{a}^{b} h(x)\,dx = \int_{a}^{b} f(x)\,p(x)\,dx = E_{p(x)}[f(x)]

So I becomes the expectation of f(x) under p(x). Let's sample x_1, x_2, \dots, x_n from p(x). The empirical frequency of each sampled value approximates the density,

\frac{n_{x_i}}{n} \approx p(x_i)

(where n_{x_i} counts how often x_i is drawn out of n samples), such that

I \approx \frac{1}{n} \sum_{i=1}^{n} f(x_i)
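To make this concrete, here is a minimal Python sketch (my own illustration, not from the original post) that estimates I = \int_0^2 x^2 dx = 8/3 with a uniform density p(x) = 1/(b-a), so f(x) = h(x)/p(x) = (b-a)h(x):

import random

def mc_integrate(h, a, b, n=100_000):
    # Estimate I = integral of h over (a, b) by sampling x ~ Uniform(a, b).
    # With p(x) = 1/(b-a), we have f(x) = h(x)/p(x) = (b-a)*h(x).
    total = 0.0
    for _ in range(n):
        x = random.uniform(a, b)
        total += (b - a) * h(x)
    return total / n

print(mc_integrate(lambda x: x * x, 0.0, 2.0))  # close to 8/3 ≈ 2.667

The estimate fluctuates around the true value, and the error shrinks roughly at a rate of 1/\sqrt{n}: exactly the "more samples, more approximate" behavior described above.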

Four types of MC Methods

[Figure mc1: four types of Monte Carlo methods]

In the figure, the edges are the probabilities of making each choice at each stage.
Mixture modeling: uses non-linear probabilities.
Markov chain Monte Carlo (MCMC): for some given distributions we have algorithms that generate unbiased samples easily, but for an arbitrary, non-standard distribution we turn to MCMC, sketched below.
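For instance, a few lines of Metropolis sampling (my own hedged sketch; the post contains no code) can draw from a target density known only up to a normalizing constant:

import math, random

def metropolis(log_p, x0, n_steps=10_000, step=1.0):
    # Random-walk Metropolis: propose x' = x + noise, accept with
    # probability min(1, p(x')/p(x)); the chain converges to p(x).
    samples, x = [], x0
    for _ in range(n_steps):
        x_new = x + random.gauss(0.0, step)
        log_accept = log_p(x_new) - log_p(x)
        if log_accept >= 0 or random.random() < math.exp(log_accept):
            x = x_new
        samples.append(x)
    return samples

# Target: a standard normal, specified only up to a constant.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
print(sum(draws) / len(draws))  # sample mean, close to the target mean 0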

MCTS

[Figure mc2: the four steps of MCTS — selection, expansion, simulation, backpropagation]

(1) Selection
Starting from the root node R, recursively select the optimal child node until we reach a node T that can still be expanded, i.e. a node whose children have not all been visited yet.
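What “optimal” means is not spelled out here; a standard choice (the UCT variant of MCTS, added for clarity) is to select the child i maximizing the UCB1 score

\frac{w_i}{n_i} + c\sqrt{\frac{\ln N}{n_i}}

where w_i/n_i is the child's current winning rate, n_i its visit count, N the parent's visit count, and c a constant trading off exploitation against exploration.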

(2) Expansion
From the current node T, expand a child node if T does not end the game. (From the current position, make a move and reach a child.)

(3) Simulation
Starting from node T, black and white play random moves in turn until the game eventually ends in a win or a loss. Update the winning rate of node T accordingly.

(4) Backpropagation
When the simulation at node T finishes, the parent node and all ancestor nodes update their winning rates, from the bottom up, until the update reaches the root R. The winning rate at a node equals the average of the winning rates of its children.

Then the process starts again from step (1). New nodes are added and the winning-rate estimates become more accurate; eventually we pick the move with the maximum winning rate at the current stage. The loop below sketches all four steps together.
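Here is a compact, generic sketch of this four-step loop in Python (my own illustration, assuming a hypothetical game interface with legal_moves(), play(move), and result(); AlphaGo's actual implementation additionally mixes in neural-network evaluations):

import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.wins, self.visits = [], 0.0, 0
        self.untried = state.legal_moves()  # moves not yet expanded

    def ucb1(self, c=1.4):
        # Selection score: winning rate plus an exploration bonus.
        return (self.wins / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, n_iter=1000):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # (1) Selection: descend to a node that still has untried moves.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # (2) Expansion: add one child if the game is not over at node.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), parent=node)
            node.children.append(child)
            node = child
        # (3) Simulation: play random moves until the game ends.
        state = node.state
        while state.legal_moves():
            state = state.play(random.choice(state.legal_moves()))
        reward = state.result()  # e.g. 1 for a win, 0 for a loss
        # (4) Backpropagation: update statistics up to the root R.
        # (In a two-player game the reward is flipped at alternating levels.)
        while node is not None:
            node.visits += 1
            node.wins += reward
            node = node.parent
    # Play the most-visited (most reliably estimated) move.
    return max(root.children, key=lambda ch: ch.visits)

Returning the most-visited child rather than the one with the highest raw winning rate is a common design choice: after a finite number of iterations, visit counts are the more robust estimate.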

Tree search has always been a good way of producing decisions. The Monte Carlo tree is asymmetric, and with the Monte Carlo method the search is aheuristic: it requires no domain experts to hand-craft reasonable decision rules.
MCTS can, however, take a great number of iterations to converge to a good solution, which makes it costly in both time and memory.

In AlphaGo:
[Figure mc3: how MCTS is used in AlphaGo]

 


References:
http://jeffbradberry.com/posts/2015/09/intro-to-monte-carlo-tree-search/

http://mcts.ai/about/index.html

D. Silver et al., “Mastering the game of Go with deep neural networks and tree search,” Nature 529 (2016) (nature16961.pdf)

https://www.zhihu.com/question/39916945/answer/83799720 (in Chinese)

