
Google’s AI Breakthrough: Machine Beats Human Go Player For First Time

Google has invested a great deal in machine learning and artificial intelligence. In early 2014, it acquired London-based DeepMind for roughly $500 million, according to some estimates.

Today, the company is showing off an AI milestone. Google says a program created by DeepMind, called AlphaGo, has done what no computer program has done before: beaten a professional human Go player.

The game of Go is vastly more complicated than chess and takes a lifetime to master. That’s why, according to DeepMind, this event is a much bigger deal than IBM Deep Blue’s defeat of Russian chess grandmaster Garry Kasparov in 1997 or Watson’s Jeopardy victory in 2011.

Researchers told journalists on a conference call that it’s much more difficult to program a computer to play Go than chess. That’s because there are more possible positions in Go than there are atoms in the universe, “more than a googol times larger than chess” (pun probably intended). They also boasted that “Watson couldn’t play Go.”
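
To put rough numbers on that comparison: a commonly cited back-of-the-envelope estimate assumes about 35 legal moves per turn over roughly 80 turns in chess, versus about 250 moves per turn over roughly 150 turns in Go. The short Python sketch below works through the arithmetic; the figures are textbook approximations used for illustration, not numbers taken from DeepMind’s paper.

from math import log10

# Commonly cited rough estimates of game-tree size: branching factor raised
# to the typical game length. These are illustrative approximations only.
chess_tree = 80 * log10(35)     # log10 of ~35^80, about 10^123
go_tree = 150 * log10(250)      # log10 of ~250^150, about 10^360

print(f"Chess game tree: ~10^{chess_tree:.0f}")
print(f"Go game tree:    ~10^{go_tree:.0f}")
print(f"Go/chess ratio:  ~10^{go_tree - chess_tree:.0f} (a googol is 10^100)")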

DeepMind was founded by Demis Hassabis, who is also its CEO. A former chess prodigy, Hassabis has an impressive résumé of computing accomplishments. He and colleagues answered questions from journalists on the conference call about AlphaGo and future applications of the technology.

Google and DeepMind have published a paper on the accomplishment in Nature, which contains all sorts of technical details that I won’t attempt to reproduce. The following, from a Google blog post, provides some of that background:

Traditional AI methods — which construct a search tree over all possible positions — don’t have a chance in Go. So when we set out to crack Go, we took a different approach. We built a system, AlphaGo, that combines an advanced tree search with deep neural networks. These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections. One neural network, the “policy network,” selects the next move to play. The other neural network, the “value network,” predicts the winner of the game. We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time (the previous record before AlphaGo was 44 percent). But our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning. Of course, all of this requires a huge amount of computing power, so we made extensive use of Google Cloud Platform.
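
To make that description more concrete, here is a minimal Python sketch of how a policy network and a value network can steer a tree search. It is not DeepMind’s code: policy_net and value_net below are stand-in functions, and the search is a bare-bones, single-ply selection loop in the PUCT style, included purely for illustration.

import math
import random

def policy_net(state, moves):
    """Stand-in for the policy network: a prior probability for each move."""
    # A real policy network would compute these from board features;
    # here we simply use a uniform prior.
    return {m: 1.0 / len(moves) for m in moves}

def value_net(state):
    """Stand-in for the value network: estimated chance the side to move wins."""
    # A real value network would score the position; here we guess at random.
    return random.uniform(0.0, 1.0)

class Node:
    def __init__(self, state, prior):
        self.state = state
        self.prior = prior       # prior from the policy network
        self.visits = 0          # visit count
        self.value_sum = 0.0     # accumulated value-network estimates

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_move(root_state, legal_moves, simulations=200, c_puct=1.0):
    """Expand one layer of children with policy priors, then repeatedly pick
    the child maximizing Q + U and accumulate value-network estimates."""
    priors = policy_net(root_state, legal_moves)
    children = {m: Node(state=(root_state, m), prior=p) for m, p in priors.items()}

    for _ in range(simulations):
        total_visits = sum(c.visits for c in children.values()) + 1

        def score(child):
            # Exploitation (Q) plus an exploration bonus (U) weighted by the prior.
            u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
            return child.q() + u

        child = max(children.values(), key=score)
        child.visits += 1
        child.value_sum += value_net(child.state)

    # Play the most-visited move, as is standard in Monte Carlo tree search.
    return max(children, key=lambda m: children[m].visits)

if __name__ == "__main__":
    print(select_move("empty board", ["D4", "Q16", "Q4"]))

In AlphaGo itself, the two networks are deep models trained as described above, and the search explores full sequences of moves rather than a single layer of candidates; this sketch only shows how the two kinds of network estimates plug into move selection.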

Asked about how the technology developed for AlphaGo might be applied in broader contexts, Hassabis and colleagues said that there were many potential applications, including healthcare and “AI-assisted science.” Search and other consumer experiences from Google will probably see benefits, but those areas didn’t get much attention on the call.

The DeepMind researchers added that the system was potentially applicable to a wide variety of real-world problems involving complex data. They said some of the technology behind AlphaGo would likely appear in some form over the next year or two. However, they explained that it would take five to 10 years before the full system could be more broadly or commercially deployed.

Journalists on the call, including me, asked questions about the ethical implications of AlphaGo’s accomplishment and whether there were concerns about the power of this technology being put to questionable uses. Hassabis responded that he and his colleagues were very conscious of ethical issues and were taking them quite seriously. “We have to ensure benefits accrue to the many and not to the few,” he added.

The next move for AlphaGo will be a five-game match in South Korea against the world’s top Go champion, Lee Sedol.
