Why AI Chess Champs Are Not Taking Over the World

At one time, the AI that beat humans at chess calculated strategies by studying the outcomes of human moves. Then, it turned out, there was a faster way:

In October 2017, the DeepMind team published details of a new Go-playing system, AlphaGo Zero, that studied no human games at all. Instead, it started with the game's rules and played against itself. The first moves it made were completely random. After each game, it folded in new knowledge of what led to a win and what didn't. At the end of these scrimmages, AlphaGo Zero went head to head with the already superhuman version of AlphaGo that had beaten Lee Sedol. It won 100 games to zero.
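To make the self-play idea concrete, here is a deliberately tiny Python sketch. It is not DeepMind's system, which pairs a deep neural network with Monte Carlo tree search; it is a toy tabular learner for tic-tac-toe, and the names (values, play_one_game, ALPHA, EPSILON) are illustrative only. It starts from random moves and, after each game, folds the outcome back into its estimate of every position it visited:

    # Toy illustration of the self-play idea (not DeepMind's code): a tabular
    # learner for tic-tac-toe that starts from random play and, after each game,
    # nudges the value of every position it visited toward the final outcome.
    import random

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != "." and board[a] == board[b] == board[c]:
                return board[a]
        return None

    values = {}            # state -> estimated value from player X's point of view
    ALPHA, EPSILON = 0.1, 0.2

    def play_one_game():
        board, player, history = ["."] * 9, "X", []
        while winner(board) is None and "." in board:
            moves = [i for i, s in enumerate(board) if s == "."]
            if random.random() < EPSILON:
                move = random.choice(moves)          # explore: try something random
            else:                                    # exploit current estimates
                def score(m):
                    nxt = board[:]; nxt[m] = player
                    return values.get("".join(nxt), 0.5)
                move = max(moves, key=score) if player == "X" else min(moves, key=score)
            board[move] = player
            history.append("".join(board))
            player = "O" if player == "X" else "X"
        return history, winner(board)

    for _ in range(20000):
        history, w = play_one_game()
        outcome = 1.0 if w == "X" else (0.0 if w == "O" else 0.5)
        for state in history:                        # fold in what led to a win and what didn't
            v = values.get(state, 0.5)
            values[state] = v + ALPHA * (outcome - v)

After enough games the value table, built from nothing but its own play, tends to steer the learner away from losing lines. The same broad pattern, scaled up enormously and combined with tree search and deep networks, is what AlphaGo Zero followed.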

Champion Lee Sedol announced his retirement soon after. And AlphaGo Zero went on to be defeated by an even bigger AI, AlphaZero.

But does that mean AI can take over and run everything? The University of Washington's Pedro Domingos offers a different take: "Games are a very, very unusual thing":

One characteristic shared by many games, chess and Go included, is that players can see all the pieces on both sides at all times. Each player always has what's termed "perfect information" about the state of the game. However devilishly complex the game gets, all you need to do is think forward from the current situation.

Plenty of real situations aren't like that. Imagine asking a computer to diagnose an illness or conduct a business negotiation. "Most real-world strategic interactions involve hidden information," said Noam Brown, a doctoral student in computer science at Carnegie Mellon University. "I feel like that's been neglected by the majority of the AI community."
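To see the contrast Brown is drawing, here is a minimal sketch of what "thinking forward from the current situation" looks like when information is perfect, reusing winner() from the toy above: an exhaustive minimax search over tic-tac-toe. Because nothing is hidden, the search alone picks the best move; real chess and Go engines prune and evaluate far more cleverly, but the principle is the same. With hidden information, no such lookahead from the visible state can settle the question by itself.

    # Exhaustive minimax for tic-tac-toe: X tries to maximize the score, O to
    # minimize it. Returns (score, best_move). Illustrative only.
    def minimax(board, player):
        w = winner(board)                    # winner() as defined in the sketch above
        if w == "X": return 1, None
        if w == "O": return -1, None
        moves = [i for i, s in enumerate(board) if s == "."]
        if not moves:
            return 0, None                   # draw
        best = None
        for m in moves:
            nxt = board[:]; nxt[m] = player
            score, _ = minimax(nxt, "O" if player == "X" else "X")
            if best is None or (player == "X" and score > best[0]) or (player == "O" and score < best[0]):
                best = (score, m)
        return best

    # Example: with X on 0 and 4 and O on 2 and 6, X to move, the search finds
    # the winning diagonal move at square 8.
    score, move = minimax(list("X.O.X.O.."), "X")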

Or, as George Gilder says in Gaming AI, in games like chess and Go, the map is the territory. But in the real world, the map is not the territory:

Self-driving cars, for example, have a hard time dealing with bad weather, or cyclists. Or they might not capture bizarre possibilities that turn up in real data, like a bird that happens to fly directly toward the car's camera. For robot arms, Finn said, initial simulations provide basic physics, allowing the arm to at least learn how to learn. But they fail to capture the details involved in touching surfaces, which means that tasks like screwing on a bottle cap or conducting an intricate surgical procedure require real-world experience, too.

IBM Watson triumphed at Jeopardy, where correct answers were available online, but then flopped in the medical environment, where correct answers don't even exist.

For example, there is no correct way to tell a patient that further efforts against a disease may result only in a less pleasant life, not greater life expectancy. We don't use machines for that. We just walk with the person for a while.

And, as Jeffrey Funk and Gary Smith remind us, failed prophecies of an AI takeover come at a cost: we don't improve what we could improve in human-based services like health care if we are waiting for the phantom AI takeover.

You may also wish to read: How to flummox an AI neural network. Kids can figure out the same-different distinction. So can ducklings and bees. But top AI can't. Can every form of thought be a computation? If not, same-different may be a fundamental limit on computer-based AI.
