DeepMind's Nature paper on AlphaGo introduced the first AI ever to beat a human professional at Go without handicap stones.
There are 2 important rules:
Despite its very simple rules, Go is very hard to master:
Explore the game tree efficiently to find the best move.
Maximize your minimum score:
→ roughly 200^{300} move sequences to explore (~200 legal moves per position, ~300 moves per game)
Converges to Min-Max in the limit.
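The "maximize your minimum score" idea can be sketched on a toy game tree; the tree and scores below are made up for illustration, not taken from AlphaGo.

```python
# Minimal Min-Max sketch on a hand-built game tree (illustrative only).
# A node is either a number (the score for the maximizing player at a
# terminal position) or a list of child nodes.

def minimax(node, maximizing):
    """Maximize your minimum score: assume the opponent plays best replies."""
    if isinstance(node, (int, float)):  # terminal position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Tiny two-ply tree: we move (max), the opponent replies (min).
tree = [
    [3, 5],   # if we pick branch 0, the opponent answers min(3, 5) = 3
    [2, 9],   # if we pick branch 1, the opponent answers min(2, 9) = 2
]
print(minimax(tree, maximizing=True))  # → 3: branch 0 guarantees the most
```

Exhaustive Min-Max like this is exactly what the 200^{300} game tree makes infeasible for Go; Monte Carlo Tree Search samples the tree instead and only converges to the same answer in the limit.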
Models capable of image and text understanding:
Approximators of very complex functions — tasks that are usually hard for computers but "intuitive" for humans.
The following section is heavily based on Stanford's cs231n course.
Inspired by biological neurons:
Solve an optimization problem: minimize a loss on the training data with respect to the weights of the neurons:
Follow the derivative downhill to find a local minimum of the loss:
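"Follow the derivative" can be sketched with gradient descent on a one-weight toy model; the data, learning rate, and loss below are illustrative assumptions, not AlphaGo's training setup.

```python
# Gradient descent sketch on a one-weight "network" (illustrative only).
# Loss: mean squared error between predictions w*x and targets y.

def loss_grad(w, xs, ys):
    """d/dw of mean((w*x - y)^2) = mean(2*(w*x - y)*x)."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # targets generated by the "true" weight w = 2

w = 0.0                 # arbitrary starting point
for _ in range(100):
    w -= 0.05 * loss_grad(w, xs, ys)  # step against the derivative

print(round(w, 3))  # → 2.0: converged to the weight that minimizes the loss
```

With many weights the same update is applied to each coordinate of the gradient, which is what backpropagation computes layer by layer.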
Neurons that depend only on small areas of the input (also called filters). This allows a hierarchical representation.
Multiple layers of filters combined together.
Instead of working on pixels, work on intersections.
Augment Monte Carlo Tree Search with two Convolutional Neural Networks.
Predict the next move given the position.
Predict the winner given the position.
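The shapes of the two networks' outputs can be sketched as follows; the scores and the sigmoid value head are illustrative assumptions, not the paper's actual activations or architecture.

```python
import math

# Sketch of the two prediction heads (shapes only; the move scores
# below are made up, not real network activations).

def softmax(scores):
    """Policy head: turn raw per-move scores into a probability distribution."""
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    """Value head: squash a raw score into a winning probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

move_scores = [1.0, 3.0, 0.5, 2.0]        # one score per candidate move (toy)
policy = softmax(move_scores)             # P(next move | position)
value = sigmoid(0.8)                      # P(current player wins | position)

print(max(range(len(policy)), key=policy.__getitem__))  # → 1, the top move
```

During search, the policy narrows which moves are worth exploring and the value scores positions without playing them out to the end.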
Google scale:
AlphaGo already changed Go forever: new theory, new mentality, in only 5 games. Much more to come!
AlphaGo uses a generic learning framework → applicable (with some, or a lot of, effort) to most other domains.
Despite what the press says, AlphaGo is not a general AI. No reason to worry (yet)!