
Science
07 Dec 2017

Google's AI engine self-learns chess in 4 hours, beats leading program

Google has claimed that its AlphaZero artificial intelligence (AI) program has defeated Stockfish 8, the world's leading chess engine.

Google's DeepMind division says that AlphaZero won or drew all 100 matches it played against Stockfish.

The victory came just hours after the program had taught itself the game from scratch.

The DeepMind research has not yet been peer-reviewed.

In context

Google's "superhuman" AlphaZero AI claims chess crown

Chess matches

How AlphaZero won against Stockfish

AlphaZero was given the rules of chess and learned the game by playing simulations against itself, according to a research paper posted on the arXiv preprint server.

Four hours later, it played 100 matches against Stockfish. Each program was allowed a thinking time of one minute per move.

AlphaZero won 28 matches and drew the remaining 72.

DeepMind described AlphaZero's performance as "superhuman."
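The "learn from scratch via self-play" idea described above can be illustrated with a deliberately tiny, hypothetical sketch. This is not DeepMind's method (AlphaZero uses deep neural networks and Monte Carlo tree search, not a lookup table); here a single value table plays both sides of the simple game Nim and improves purely from its own games:

```python
import random

# Toy self-play learning sketch (hypothetical illustration only).
# Game: Nim -- players alternately take 1-3 sticks from a pile;
# whoever takes the last stick wins.

ACTIONS = (1, 2, 3)

def legal_moves(sticks):
    return [a for a in ACTIONS if a <= sticks]

def self_play_train(episodes=20000, alpha=0.5, epsilon=0.3, max_pile=21, seed=1):
    """Tabular self-play learning: one shared table plays both sides."""
    rng = random.Random(seed)
    Q = {}  # Q[(sticks, action)] -> estimated value for the player to move

    for _ in range(episodes):
        sticks = rng.randint(1, max_pile)
        while sticks > 0:
            moves = legal_moves(sticks)
            if rng.random() < epsilon:            # explore a random move
                a = rng.choice(moves)
            else:                                 # exploit current knowledge
                a = max(moves, key=lambda m: Q.get((sticks, m), 0.0))
            nxt = sticks - a
            if nxt == 0:
                target = 1.0                      # took the last stick: win
            else:
                # The opponent moves next; their best outcome is our worst.
                target = -max(Q.get((nxt, m), 0.0) for m in legal_moves(nxt))
            old = Q.get((sticks, a), 0.0)
            Q[(sticks, a)] = old + alpha * (target - old)
            sticks = nxt
    return Q

def best_move(Q, sticks):
    """Greedy move from the learned table."""
    return max(legal_moves(sticks), key=lambda m: Q.get((sticks, m), 0.0))
```

With these settings the table typically rediscovers Nim's known optimal strategy (always leave the opponent a multiple of four sticks) without ever being told it, which is the essence of self-play training, scaled down enormously.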

Praise

Scientists praise AlphaZero and DeepMind's "dazzling results"

Google has declined to comment on the DeepMind research until it's published in a peer-reviewed journal.

Meanwhile, scientists have praised the achievement, saying it will strengthen Google's position in the competitive AI sector.

"From a scientific point of view, it's the latest in a series of dazzling results that DeepMind has produced," the University of Oxford's Prof Michael Wooldridge said.


Other victories

AlphaZero had previously won at Shogi and Go

The AlphaZero algorithm had previously defeated Elmo, a leading computer engine for the Japanese board game Shogi, after just two hours of self-training. AlphaZero won 90 matches, drew two and lost eight.

Earlier, DeepMind's original AlphaGo program had defeated many of the world's best players of the Chinese board game Go.

DeepMind's earlier systems even taught themselves how to play Pong, Space Invaders and other Atari video games.

Challenge

Expert: AlphaZero must contend with real-world scenarios

Oxford's Prof Wooldridge noted that the games played by AlphaZero are largely "closed" environments governed by a limited, fixed set of rules.

"In the real world we don't know what is round the corner," he said.

"Coping when you don't know what is coming is much more complicated, and things will get even more exciting when DeepMind moves on to more open problems."
