Go Grandmaster Says He’s ‘in Shock’ But Can Still Beat Google’s AI

 

SEOUL, SOUTH KOREA — Lee Sedol is rather surprised that Google has fashioned an artificially intelligent system that so skillfully plays the ancient game of Go. But after losing to this Google creation Wednesday in the opening contest of a five-game match here in South Korea—a match that will test the progress of modern AI—the young Go grandmaster believes he can recover lost ground. “I am in shock. I can admit that,” Lee Sedol said through an interpreter during the post-game press conference at Seoul’s Four Seasons hotel. “But what’s done is done.”

Prior to Wednesday’s game, he was quite confident he would beat Google’s system, known as AlphaGo, and afterwards, he indicated that such confidence may have contributed to his loss. “The failure I made at the very beginning of the game lasted until the very end,” he said. “I didn’t think that AlphaGo would play the game in such a perfect manner.”

The loss has clearly dented his confidence, but he remains at least somewhat optimistic for the second game, set for one o’clock local time on Thursday. “I look forward to the following games,” he said. “If I do a better job on the opening [of the game], I think I will increase my probability of winning.” He now gives himself a 50-50 chance.

Until recently, most experts believed another decade would pass before a machine beat a top player at the game of Go, which is exponentially more complicated than chess and requires an added degree of intuition, at least among humans. But a team of Google researchers has accelerated that timeline, thanks to a pair of machine learning technologies that let machines learn tasks largely on their own. In October, AlphaGo defeated the three-time European Go champion in a five-game match, winning all five games. But now, it’s matching wits with a far better player. Lee Sedol is ranked fifth in the world and is widely considered the best player of the past decade.

Judging by its play Wednesday, AlphaGo has significantly improved since the match in October, thanks to five additional months of training at DeepMind, the Google AI lab in London. Those advances seem to have caught Lee Sedol off guard. But it’s unclear why he was unhappy with his opening. According to match commentators Michael Redmond and Chris Garlock, Lee Sedol started rather well, and the game was balanced until quite late, when he made a fairly clear mistake. At move 121, they say, Lee Sedol played an aggressive move in the top right-hand corner of the board when he should have protected himself in the lower right.

“Unless he had some special reason to make that move that I am missing, it’s a pretty elementary mistake,” Redmond, a highly successful professional Go player himself, told us after the press conference. He believes this may have been the result of fatigue. Lee Sedol played right after arriving in Seoul following a closely contested international tournament in China. One of the strengths of AlphaGo, of course, is that it doesn’t get tired.

Wednesday’s game was close enough that Lee Sedol could certainly turn things around in game two. He has learned at least some of AlphaGo’s tendencies and can adjust his play tomorrow. AlphaGo, meanwhile, can’t really adjust. Its machine learning technologies allow it to improve enormously over time, but Demis Hassabis, the founder and CEO of DeepMind, says the system needs more than a day to train. So it has done all the learning it’s going to do until after the match. Like humans, machines have their weaknesses. At least for now. (www.wired.com)
