Archive for the ‘Alphazero’ Category

The art of chess: a brief history of the World Championship – TheArticle

Last week Barry Martin, along with Patrick Hughes, one of the world's top chess-playing artists, asked me to identify the most significant happenings in the chess world over the past ten years. Barry and Patrick used to meet in the final of the Chelsea Arts Club Championship, and Barry writes an excellent monthly column in Kensington, Westminster and Chelsea Today (KWC). The point of the question was to celebrate ten years of KWC and ten years of Barry's column, many of which have been gathered together in the anthology, Chess, Problems, Play and Personalities (Filament Publishing).

Of those significant developments, which define the contemporary chess scene, I have already covered the phenomenon of the new Netflix chess-based TV series, The Queen's Gambit, in last week's column. The combination of brilliance and beauty, exemplified in the persona of the chess champion heroine, Beth Harmon, has proved irresistible to record-breaking audiences around the world. Sales of chess sets alone, a key indicator of a new-found enthusiasm, have soared by 300 per cent since Beth first appeared on our screens.

A second vital element has been the creation of the AlphaZero chess-playing engine, with its amazing abilities, including an almost vertical learning curve, resulting in the strongest chess-playing entity the world has ever seen. The science has primarily been the work of Demis Hassabis, rewarded with a CBE for his efforts and a $400 million sale to Google of his company, DeepMind. The achievements of Demis, and the brilliantly paradoxical strategies and tactics of AlphaZero, were likewise already covered in my column "Arise Sir Demis". The games were contested against the most powerful available commercial chess programme, Stockfish, itself many times stronger than the IBM Deep Blue programme which defeated Garry Kasparov in 1997.

The 1993 World Title Challenger, the British Grandmaster Nigel Short, described the AlphaZero games as being of such beauty that he felt he was in the presence of God. Demis himself explained that his self-taught programme, which had already mastered the quasi-infinite complexities of the oriental games of Shogi (Japanese chess) and Go, was the key to understanding intelligence.

This week I turn to the third most decisive development of the past ten years: the meteoric rise and lasting domination of the Norwegian World Chess Champion, Magnus Carlsen. Carlsen is the culmination of a line of champions which stretches back into the 18th century, yet he is also a uniquely talented representative of the modern era. Magnus has attained the highest chess rating ever recorded, outclassing even the mighty Garry Kasparov. Magnus wins virtually every competition which he enters, and has adapted seamlessly to the current coronavirus crisis, which has obliged chess to migrate online to a huge extent. Magnus has prudently avoided the damage to his reputation occasioned by suffering defeats against chess computers, a fate which overtook both Garry Kasparov and Vladimir Kramnik. Finally, Magnus has leveraged all the opportunities afforded by his title of World Chess Champion, adapting perfectly to the modern environment, even to the extent of floating his online chess company, Play Magnus, for $85 million, while simultaneously earning a fortune as a trendy ambassador for the fashion line G-Star Raw, often appearing alongside Hollywood superstar Liv Tyler.

The title of World Chess Champion dates back at least to 1886, when Wilhelm Steinitz defeated Johannes Zukertort in a gladiatorial contest, specifically designed to resolve the question of who was the strongest player in the world after Paul Morphy's death in 1884, though Steinitz had claimed that status since 1866. Less clear is whether the great predecessors of Steinitz also merited that proud title. Part of the difficulty of authentication is lack of evidence of important contests and gaps in the record.

The story begins in the 18th century, when the French chess expert François-André Danican Philidor won an important match in 1747 against the erudite Philip Stamma, translator of oriental languages to the court of King George II. Sadly, none of those games has survived. Following Philidor, who died in 1795, there comes a hiatus, until the brief flourishing of La Bourdonnais during the 1830s. After this, there is a further gap in the record until the 1840s, when the French heir to the Philidor tradition, Saint-Amant, was overthrown in Paris, the epicentre of European chess life at that time, by the English champion Howard Staunton.

Fortunately, from Staunton onwards, there is a relatively unbroken line of succession, with each champion being dethroned by the next in line. The exceptions are the trinity of Morphy, Fischer (who simply downed tools), and Alekhine who died in office, thus permanently preserving their hallowed nimbus of invincibility.

Also worthy of mention are various champions who have won the FIDE title (FIDE is the International Chess Federation, the governing body of chess competitions) without gaining universal recognition from the global chess community. These include Max Euwe, Efim Bogolyubov, Veselin Topalov and Viswanathan Anand. A common outcome is that such FIDE champions have gone on to contest matches against the universally recognised laureate, and in two such cases (Euwe and Anand) have emerged victorious to become undisputed champions themselves.

The most recent world championship match, staged in London in 2018, was run entirely under the auspices of FIDE, whose authority is now universally accepted under the reliable new presidency of the Russian Arkady Dvorkovich and his English vice president, Nigel Short.

The first great player who could be considered a World Champion was Philidor, who dominated the chess scene of his day. The term World Champion was not used when describing him, with commentators preferring to employ such metaphors as "wielding the sceptre". There is also the problem that very few of Philidor's games on level terms have survived, his reputation largely being constructed on his blindfold simultaneous displays, which so electrified London chess enthusiasts. Philidor was able to conduct three games blindfold at once, a feat that led to a letter of admonishment from the French encyclopaedist Denis Diderot, warning Philidor that such exploits might lead to brain damage.

It is interesting to note that Philidor was the first great apostle of pawn power in chess. According to Philidor, pawns determined the structure of the game; they were in fact "the soul of chess", not mere cannon fodder whose sole task was to make way for the power of the pieces. In this respect his chess teachings paralleled the rise of the masses embodied in the French Revolution of 1789.

France was the dominant chess nation at the turn of the 18th and 19th centuries, and the next player after Philidor who could be considered an early world champion was the 19th-century French master Louis-Charles Mahé de La Bourdonnais. La Bourdonnais' claim to fame rests primarily on his mammoth series of matches against Alexander McDonnell, contested in London in 1834. This represented the finest corpus of games ever created up to that time, and numerous generations of chess devotees learned their basic chess strategies and tactics from these ingenious and well contested battles. Both protagonists appear to have become mentally exhausted by their efforts and died shortly after their epic series.

In the panoply of proto-champions, Howard Staunton, the Victorian polymath, Shakespearean scholar, and assiduous chronicler of the English schools system, is the only English player who could legitimately be considered as world champion. In a series of matches between 1843 and 1846, Staunton defeated the French master Pierre Charles Fournier de Saint-Amant, followed closely by victories against the German master Bernhard Horwitz and against Daniel Harrwitz, originally from Poland. Staunton's match against Saint-Amant was the first contest at the highest level that closely resembled the template for modern World Championship competitions. The chess pieces in regular use for important competitions, including the 2018 London contest between Carlsen and his challenger, Fabiano Caruana, are named the Staunton pattern, after Howard Staunton.

The German master Adolf Anderssen seized the sceptre from Howard Staunton when he decisively defeated the English champion in the very first international tournament, London 1851. Anderssen was one of that select group, which includes Mikhail Botvinnik and Viswanathan Anand, who initially assumed the accolade of supreme chess master from a tournament rather than a match. The London event was in fact put together by Staunton, who thereby created a perfect pretext for losing out to Anderssen in their knockout match: it is notoriously difficult to compete in an event whilst simultaneously organising it.

Anderssen can claim to be one of the supreme tacticians of all time. Three of his wins are of imperishable beauty. On their own they would justify anyone's devotion to chess. They are his Immortal Game against Kieseritzky (played at Simpson's-in-the-Strand, not in the tournament) of London 1851; his Evergreen Game against Jean Dufresne (who himself wrote under the pseudonym E. S. Freund) of Berlin 1856; and his majestic sacrificial masterpiece against Zukertort of Breslau 1869.

Paul Morphy was the American meteor who took the world by storm over the two momentous, whirlwind years of 1857 and 1858. His grand tour of Europe culminated in a match victory against Adolf Anderssen, after which Morphy was universally acknowledged as the world's greatest player. Thereafter Morphy issued a challenge to anyone in the world to take him on at odds (Morphy starting the game with a pawn handicap) but no one accepted. At this point the meteor had burnt itself out and Morphy, tragically, retired from chess, a curious forerunner of Bobby Fischer's behaviour following his famous 1972 World Championship victory against Boris Spassky.

Morphy understood the principles of chess better than anyone who came before him. Anderssen's tactical brilliance sprang like Athene from the head of Zeus, without necessarily having grown from regular organic pre-conditions. Morphy, on the other hand, constructed his positions along sound strategic and positional lines, before unleashing his devastating arsenal of tactical weaponry.

On Morphy's retirement, Anderssen resumed the position of world leadership which had belonged so fleetingly to the first great genius of American chess. Anderssen lost a match in 1866 to Wilhelm Steinitz, the first player who could definitively be described as an official World Champion. The previous wielders of the sceptre, Philidor, La Bourdonnais, Staunton, Anderssen and Morphy, were all, at the time, acknowledged as the leading chess practitioners of their day, but it is less clear that the title of world champion had been universally accepted. Steinitz, on the other hand, insisted on this description, and he himself dated his tenure from his 1866 match victory, also in London, against Anderssen. Steinitz's pre-eminence was confirmed 20 years later when he demolished Johannes Zukertort in their 1886 match in the US, which was specifically described as a World Championship contest.

Thus far I have described the early years of the World Championship, and now I return to Magnus Carlsen's defence of his title, which he has held since 2013. The 2018 Championship match in London was fought out between the Norwegian Magnus Carlsen, the highest-rated chess grandmaster in history, and the unexpected challenger, Fabiano Caruana, who had previously been considered somewhat vulnerable and fragile.

Caruana originated from Italy but became an American citizen. With energy and vigour, he decimated his rivals among the top ten Grandmasters. In order to qualify, the winner had to exhibit strength, agility, power, alertness, incredible persistence, stamina, and the will to win. From this shark pool, Fabiano emerged as the number one contender and the number two ranked player in the world. Throughout all the complications of selecting the challenger to the World Chess Champion, the pairing was ideal: a battle between the two best in the world fighting for the world title.

The implication is that chess at this exalted level is a sport, both mental and physical: an appropriately termed Mind Sport. As the Championship was in progress, a wonderful flash of confirmatory news emerged from the media: Magnus Carlsen had been nominated, in Norway, for Sports Personality of the Year. This Championship had emerged as a real Battle of the Titans. Magnus had now won four world title bouts: twice versus Anand and once each against Karjakin and Caruana. The latter two ended in tie-breaks, at which Magnus excels. On this occasion, Magnus praised Fabiano as his most difficult opponent of the three.

Magnus has secured his tenure as World Champion until at least 2021. He will then have held the title for eight years, thus moving into an equal category of championship longevity with such greats as Capablanca, Petrosian, Kramnik and Anand, ahead of Euwe, Smyslov, Tal, Spassky and Fischer. Only Steinitz, Lasker, Alekhine, Botvinnik, Karpov and Kasparov held the title for significantly longer periods. In the modern world, where everything has speeded up, can Carlsen go on to outperform all these titans?

If his ambition had seemed to wane during the classical phase of the London contest, it certainly flared up as Carlsen's predatory instincts flashed on for the tiebreak. Like the Terminator, Magnus would be back. In every boxing match and in every tennis set, each minute encapsulates a real battle. Every move in chess is the same. The draws were magnificent mini-battles across every one of the often 65+ moves, over durations of as much as six hours of non-stop sport. And then it came down to speed. Only in the speed play-off did Carlsen finally overcome the onslaught of Caruana, the World Champion taking the accelerated shoot-out by three wins to zero.

I have tried to distil the quintessential elements of Magnus' success. Remember that, in Latin, Magnus was a title meaning "Great", as in Alexander Magnus (Alexander the Great), or Pompeius Magnus (Pompey the Great), Julius Caesar's senatorial rival, as noted in Shakespeare's Julius Caesar, Act I, Scene One:

You blocks, you stones, you worse than senseless things!

O you hard hearts, you cruel men of Rome.

Knew you not Pompey? Many a time and oft

Have you climb'd up to walls and battlements,

To tow'rs and windows, yea, to chimney tops,

Your infants in your arms, and there have sat

The livelong day, with patient expectation,

To see Great Pompey pass the streets of Rome.

I have reduced the formula to seven memorable M principles for Magnus:

Motivation

Mobilisation

Momentum

Material

Masquerade

Massacre

Mate

And this week's game exemplifies these key ingredients of a Magnus triumph. The game was the decisive win which clinched Magnus' World Title defence against the former World Champion, the "Tiger of Madras": Viswanathan Anand.

Read the rest here:
The art of chess: a brief history of the World Championship - TheArticle

Podcast: Can you teach a machine to think? – MIT Technology Review

Artificial intelligence has become such a big part of our lives, you'd be forgiven for losing count of the algorithms you interact with. But the AI powering your weather forecast, Instagram filter, or favorite Spotify playlist is a far cry from the hyper-intelligent thinking machines industry pioneers have been musing about for decades.

Deep learning, the technology driving the current AI boom, can train machines to become masters at all sorts of tasks. But it can only learn one at a time. And because most AI models train their skillset on thousands or millions of existing examples, they end up replicating patterns within historical data, including the many bad decisions people have made, like marginalizing people of color and women.

Still, systems like the board-game champion AlphaZero and the increasingly convincing fake-text generator GPT-3 have stoked the flames of debate regarding when humans will create an artificial general intelligence: machines that can multitask, think, and reason for themselves.

The idea is divisive. Beyond the answer to how we might develop technologies capable of common sense or self-improvement lies yet another question: who really benefits from the replication of human intelligence in an artificial mind?

"Most of the value that's being generated by AI today is returning back to the billion-dollar companies that already have a fantastical amount of resources at their disposal," says Karen Hao, MIT Technology Review's senior AI reporter and the writer of The Algorithm. "And we haven't really figured out how to convert that value or distribute that value to other people."

In this episode of Deep Tech, Hao and Will Douglas Heaven, our senior editor for AI, join our editor-in-chief, Gideon Lichfield, to discuss the different schools of thought around whether an artificial general intelligence is even possible, and what it would take to get there.

Check out more episodes of Deep Tech here.

Gideon Lichfield: Artificial intelligence is now so ubiquitous, you probably don't even think about the fact that you're using it. Your web searches. Google Translate. Voice assistants like Alexa and Siri. Those cutesy little filters on Snapchat and Instagram. What you see, and don't see, on social media. Fraud alerts from your credit-card company. Amazon recommendations. Spotify playlists. Traffic directions. The weather forecast. It's all AI, all the time.

And it's all what we might call dumb AI. Not real intelligence. Really just copying machines: algorithms that have learned to do really specific things by being trained on thousands or millions of correct examples. On some of those things, like face and speech recognition, they're already even more accurate than humans.

All this progress has reinvigorated an old debate in the field: can we create actual intelligence, machines that can independently think for themselves? Well, with me today are MIT Technology Review's AI team: Will Heaven, our senior editor for AI, and Karen Hao, our senior AI reporter and the writer of The Algorithm, our AI newsletter. They've both been following the progress in AI and the different schools of thought around whether an artificial general intelligence is even possible and what it would take to get there.

I'm Gideon Lichfield, editor in chief of MIT Technology Review, and this is Deep Tech.

Will, you just wrote a 4,000-word story on the question of whether we can create an artificial general intelligence. So you must've had some reason for doing that to yourself. Why is this question interesting right now?

Will Douglas Heaven: So in one sense, it's always been interesting. Building a machine that can think and do things that people can do has been the goal of AI since the very beginning, but it's been a long, long struggle. And past hype has led to failure. So this idea of artificial general intelligence has become, you know, very controversial and very divisive, but it's having a comeback. That's largely thanks to the success of deep learning over the last decade. And in particular systems like AlphaZero, which was made by DeepMind and can play Go and shogi, a kind of Japanese chess, and chess. The same algorithm can play all three games. And GPT-3, the large language model from OpenAI, which can uncannily mimic the way that humans write. That has prompted people, especially over the last year, to jump in and ask these questions again. Are we on the cusp of building artificial general intelligence? Machines that can think and do things like humans can.

Gideon Lichfield: Karen, let's talk a bit more about GPT-3, which Will just mentioned. It's this algorithm that, you know, you give it a few words and it will spit out paragraphs and paragraphs of what looks convincingly like Shakespeare or whatever else you tell it to do. But what is so remarkable about it from an AI perspective? What does it do that couldn't be done before?

Karen Hao: What's interesting is I think the breakthroughs that led to GPT-3 actually happened quite a number of years earlier. In 2017, the main breakthrough that triggered a wave of advancement in natural language processing occurred with the publishing of the paper that introduced the idea of transformers. And the way a transformer algorithm deals with language is it looks at millions or even billions of examples: of sentences, of paragraph structure, maybe even of code structure. And it can extract the patterns and begin to predict, to a very impressive degree, which words make the most sense together, which sentences make the most sense together, and then therefore construct these really long paragraphs and essays. What I think GPT-3 has done differently is that there's just orders of magnitude more data now being used to train this transformer technique. So what OpenAI did with GPT-3 is they're not just training it on more examples of words from corpora like Wikipedia or from articles like the New York Times or Reddit forums, they're also training it on sentence patterns and paragraph patterns, looking at what makes sense as an intro paragraph versus a conclusion paragraph. So it's just getting way more information and really starting to mimic very closely how humans write, or how music scores are composed, or how coding is coded.
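To make that mechanism a little more concrete, here is a minimal sketch of the self-attention step at the heart of a transformer. The toy dimensions and random weights are assumptions for illustration only; a real model like GPT-3 stacks many learned layers like this, trained on billions of examples.

```python
# Minimal self-attention sketch: each token's vector is updated by a
# weighted mix of every token's vector, with weights derived from the data.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of token j to token i
    return softmax(scores) @ V               # blend value vectors accordingly

rng = np.random.default_rng(0)
seq_len, d = 5, 16                  # 5 tokens, 16-dimensional embeddings
X = rng.normal(size=(seq_len, d))   # stand-in for learned token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 16): one updated vector per token
```

Stacked and trained at scale, layers like this are what let the model judge which words "make the most sense together."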

Gideon Lichfield: And before transformers, which can extract patterns from all of these different kinds of structures, what was AI doing?

Karen Hao: Before, natural language processing was actually... it was much more basic. So transformers are kind of a self-supervised technique, where the algorithm is not being told exactly what to look for in the language. It's just looking for patterns by itself, in what it thinks are the repeating features of language composition. But before that, there were actually a lot more supervised approaches to language, much more hard-coded approaches, where people were teaching machines: "these are nouns, these are adjectives, this is how you construct these things together." And unfortunately that is a very laborious process, trying to curate language in that way where every word kind of has to have a label and the machine has to be manually taught how to construct these things. And so it limited the amount of data that these techniques could feed off of. And that's why language systems really weren't very good.

Gideon Lichfield: So let's come back to that distinction between supervised and self-supervised learning, because I think we're going to see it's a fairly important part of the advances towards something that might become a general intelligence. Will, as you wrote in your piece, there's a lot of ambiguity about what we even mean when we say artificial general intelligence. Can you talk a bit about what the options are there?

Will Douglas Heaven: There's a sort of spectrum. I mean on one end, you've got systems which, you know, can do many of the things that narrow AI, or dumb AI if you like, can do today, but sort of all at once. And AlphaZero is perhaps the first glimpse of that. This one algorithm can train itself to do three different things, but important caveat there: it can't make itself do those three things at once. So it's not like a single brain that can switch between tasks. As Shane Legg, one of the co-founders of DeepMind, put it, it's as if you or I, when we started playing chess, had to swap out our brain and put in our chess brain.

That's clearly not very general, but we're on the cusp of that kind of thing: your kind of multi-tool AI, where one AI can do several different things that narrow AI can already do. And then moving up the spectrum, what probably more people mean when they talk about AGI is, you know, thinking machines, machines that are "human-like" in scare quotes, that can multitask in the way that a person can. You know, we're extremely adaptable. We can switch between, you know, frying an egg to writing a blog post to singing, whatever. Still, there are also folk, going right to the other end of the spectrum, who would rope machine consciousness into talk about AGI too. You know, that we're not going to have true general intelligence or human-like intelligence until we have a machine that can not only do things that we can do, but knows that it can do things that we can do, that has some kind of self-reflection in there. I think all those definitions have been around since the beginning, but it's one of the things that makes AGI difficult to talk about and quite controversial, because there's no clear definition.

Gideon Lichfield: When we talk about artificial general intelligence, there's this sort of implicit assumption that human intelligence itself is also absolutely general. It's universal. We can fry an egg or we can write a blog post or we can dance or sing. And that all of these are skills that any general intelligence should have. But is that really the case, or are there going to be different kinds of general intelligence?

Will Douglas Heaven: I think, and I think many in the AI community would also agree, that there are many different intelligences. We're sort of stuck on this idea of human-like intelligence, largely, I think, because humans have for a long time been the best example of general intelligence that we've had, so it's obvious why they're a role model. You know, we want to build machines in our own image. But you just look around the animal kingdom and there are many, many different ways of being intelligent. From the sort of social intelligence that ants have, where they can collectively do really remarkable things, to octopuses, which we're only just beginning to understand the ways that they're intelligent, but they're intelligent in a very alien way compared to ourselves. And even our closest cousins like chimps have intelligences which are different to yours and mine; they have different skill sets than humans do.

So I think the idea that machines, if they become generally intelligent, need to be like us is nonsense; it's going out the window. The very mission of building an AGI that is human-like is perhaps pointless, because we have human intelligences, right? We have ourselves. So why do we need to make machines that do those things? It'd be much, much better to build intelligences that can do things that we can't do, that are intelligent in different ways, to complement our abilities.

Gideon Lichfield: Karen, people obviously love to talk about the threat of a super-intelligent AI taking over the world, but what are the things that we should really be worried about?

Karen Hao: One of the really big ones in recent years has been algorithmic discrimination. This is a phenomenon we started noticing where, when we train algorithms, small or large, to make decisions based on historical data, they end up replicating patterns within the historical data that we might not necessarily want replicated, such as the marginalization of people of color or the marginalization of women.

Things in our history that we would rather do without as we move forward and progress as a society. But because algorithms are not very smart, and they extract these patterns and replicate these patterns mindlessly, they end up making decisions that discriminate against people of color, against women, and against particular cultures that are not Western-centric cultures.

And if you observe the conversations that are happening among people who talk about some of the ways that we need to think about mitigating threats around superintelligence, or around AGI, whatever you want to call it, they will talk about this challenge of value alignment. Value alignment being defined as: how do we get this super-intelligent AI to understand our values and align with our values? If it doesn't align with our values, it might go do something crazy. And that's how it sort of starts to harm people.

Gideon Lichfield: How do we create an AI, a super intelligent AI, that isn't evil?

Karen Hao: Exactly. Exactly. So instead of talking in the future about trying to figure out value alignment a hundred years from now, we should be talking right now about how we failed to align the values with very basic AIs today and actually solve the algorithmic discrimination problem.

Another huge challenge is the concentration of power that AI naturally creates. You need an incredible amount of computational power today to create advanced AI systems and break the state of the art. And the only players that really have that amount of computational power now are the large tech companies and maybe the top-tier research universities. And even the top-tier research universities can barely compete with the large tech companies anymore.

So the Googles, Facebooks, and Apples of the world. Another concern that people have about a hundred years from now is: once super-intelligent AI is unleashed, is it actually going to benefit people evenly? Well, we haven't figured that out today either. Most of the value that's being generated by AI today is returning back to the billion-dollar companies that already have a fantastical amount of resources at their disposal. And we haven't really figured out how to convert that value or distribute that value to other people.

Gideon Lichfield: Ok well let's get back then to that idea of a general intelligence and how we would build it if we could. Will mentioned deep learning earlier. Which is the foundational technique of most of the AI that we use today. And it's only about eight years old. Karen, you talked to essentially the father of deep learning Geoffrey Hinton at our EmTech conference recently. And he thinks that deep learning, the technique that we're using for things like translation services or face recognition, is also going to be the basis of a general intelligence when we eventually get there.

Geoffrey Hinton [From EmTech 2020]: I do believe deep learning is going to be able to do everything. But I do think there's going to have to be quite a few conceptual breakthroughs that we haven't had yet. // Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reasoning. But we also need a massive increase in scale. // The human brain has about a hundred trillion parameters, that is, synapses. A hundred trillion. What are now called really big models, like GPT-3, have 175 billion. It's thousands of times smaller than the brain.

Gideon Lichfield: Can you maybe start by explaining what deep learning is?

Karen Hao: Deep learning is a category of techniques that is founded on this idea that the way to create artificial intelligence is to create artificial neural networks that are based off of the neural networks in our brain. Human brains are the smartest form of intelligence that we have today.

Obviously Will has already talked about some challenges to this theory, but assuming that human intelligence is sort of the epitome of intelligence that we have today, we want to try to recreate artificial brains in the image of a human brain. And deep learning is that: a technique that tries to use artificial neural networks as a way to achieve artificial intelligence.

What you were referring to sort of is there are largely two different camps within the field around how we might go about approaching building artificial general intelligence. The first camp being that we already have all the techniques that we need, we just need to scale them massively with more data and larger neural networks.

The other camp says deep learning is not enough. We need something else that we haven't yet figured out to supplement deep learning, in order to achieve some of the things, like common sense or reasoning, that have so far been elusive to the AI field.

Gideon Lichfield: So Will, as Karen alluded to just now, the people who think we can build a general intelligence off of deep learning think that we need to add some things to it. What are some of those things?

Will Douglas Heaven: Among those who think deep learning is the way to go, I mean, as well as loads more data, like Karen said, there are a bunch of techniques that people are using to push deep learning forward.

You've got unsupervised learning. Traditionally, many deep learning successes, like image recognition, to use the clichéd example of recognizing cats, came about because the AI had been trained on millions of images that had been labeled by humans with "cat". You know, this is what a cat looks like; learn it. Unsupervised learning is when the machine goes in and looks at data that hasn't been labeled in that way and itself tries to spot patterns.

Gideon Lichfield: So in other words, you would give it like a bunch of cats, a bunch of dogs, a bunch of pecan pies, and it would sort them into groups?

Will Douglas Heaven: Yeah. It essentially has to first learn what the distinguishing features between those categories are, rather than being prompted. And that ability to identify, itself, what those distinguishing features are is a step towards a better way of learning. And it's practically useful, because of course the task of labeling all this data is enormous.

And we can't continue along this path, especially if we want the system to train on more and more data. We can't continue on the path of having it manually labeled. And even more interestingly, I think, an unsupervised learning system has the potential of spotting categories that humans haven't. So we might actually learn something from the machine.
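As a toy illustration of that idea, here is a minimal clustering sketch in the spirit of unsupervised learning; the two-dimensional "embeddings" and the three well-separated blobs are made-up stand-ins for real image features:

```python
# Minimal k-means: group unlabeled points by their features alone.
# Real image clustering would first embed each image as a feature vector.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random initial centers
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

# three blobs standing in for cats, dogs, and pecan pies
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.5, size=(30, 2)) for c in ([0, 0], [5, 5], [0, 5])])
print(kmeans(X, k=3))  # cluster ids discovered without any labels
```

No label ever enters the loop: the grouping emerges purely from the structure of the data, which is the point Will is making.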

And then you've got things like transfer learning, and this is crucial for general intelligence. This is where you've got a model that has been trained on a set of data in one way or another. And what it's learned in that training, you want to be able to then transfer that to a new task so that you don't have to start from scratch each time.

So there are various ways you'd approach transfer learning, but for example you could take some of the values from one trained network and sort of preload another one, in a way that when you ask it to recognize an image of a different animal, it already has some sense of, you know, what animals have: legs and heads and tails, what have you.
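In a deep learning framework, this preloading is routine. Here is a minimal PyTorch sketch of the idea; the specific model (ResNet-18) and the 10-class target task are illustrative assumptions, not anything discussed in the episode:

```python
# Transfer learning sketch: reuse weights learned on ImageNet as the
# starting point for a new task, training only a fresh output layer.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # torchvision >= 0.13
for p in backbone.parameters():
    p.requires_grad = False                 # freeze what was already learned
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # new head: 10 classes

# Only the new head trains; low-level features ("legs and heads and tails")
# come for free from the earlier training.
optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-3)
```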

So you just want to be able to transfer some of the things learned from one task to another. And then there are things like few-shot learning, which is where the system learns, as the name implies, from very few training examples. And that's also going to be crucial, because we don't always have lots and lots of data to throw at these systems to teach them.

I mean, they're extremely inefficient when you think about it, compared to humans. You know, we can learn a lesson from one example, two examples. You show a kid a picture of a giraffe and they know what a giraffe is. We can even learn what something is without seeing any example.

Karen Hao: Yeah, yeah. If you think about it, kids, if you show them a picture of a horse and then you show them a picture of a rhino, and you say, you know, a unicorn is something in between a horse and a rhino, maybe they will actually, when they first see a unicorn in a picture book, be able to know that that's a unicorn. And so that's how you kind of start learning more categories than the examples that you're seeing. And this is inspiration for yet another frontier of deep learning called low-shot learning, or less-than-one-shot learning. And again, it's the same principle as few-shot learning: if we are able to get these systems to learn from very, very, very tiny samples of data, the same way that humans do, then that can really supercharge the learning process.
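A common baseline for few-shot classification is a nearest-centroid rule; here is a toy sketch of it, with made-up two-dimensional "embeddings" standing in for the features a real network would produce:

```python
# Few-shot sketch: one or two example vectors per class define the class;
# a query is assigned to the class with the nearest centroid.
import numpy as np

def few_shot_predict(support, query):
    """support: {label: array of a few example vectors}; query: one vector."""
    centroids = {lab: ex.mean(axis=0) for lab, ex in support.items()}
    return min(centroids, key=lambda lab: np.linalg.norm(query - centroids[lab]))

# "horse" and "rhino" defined by a single example each (invented features);
# a "unicorn-ish" query lands between them and gets the closer label
support = {"horse": np.array([[0.0, 1.0]]), "rhino": np.array([[1.0, 0.0]])}
print(few_shot_predict(support, np.array([0.4, 0.7])))  # -> 'horse'
```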

Gideon Lichfield: For me, this raises an even more general question, which is: what makes people in the field of AGI so sure that you can produce intelligence in a machine that represents information digitally, in the form of ones and zeros, when we still know so little about how the human brain represents information? Isn't it a very big assumption that we can just recreate human intelligence in a digital machine?

Will Douglas Heaven: Yeah, I agree. In spite of the massive complexity of some of the neural networks we're seeing today, in terms of their size and their connections, we are orders of magnitude away from anything that matches the scale of a brain, even a rather basic animal brain. So yeah, there's an enormous gulf between that and the idea that we are going to be able to do it, especially with the present technology, the present deep learning technology.

And of course, even though, as Karen described earlier, neural networks are inspired by the brain, by the neurons in our brain, that's only one way of looking at the brain. I mean, brains aren't just lumps of neurons. They have discrete sections that are dedicated to different tasks.

So again, this idea that just one very large neural network is going to achieve general intelligence is, again, a bit of a leap of faith, because maybe general intelligence will require some breakthrough in how dedicated structures communicate. So there's another divide in, you know, those chasing this goal.

You know, some think that you can just scale up neural networks. Other people think we need to step back from the sort of specifics of any individual deep learning algorithm and look at the bigger picture. Actually, you know, maybe neural networks aren't the best model of the brain, and we can build better ones that look at how different parts of the brain communicate, so that, you know, the whole is greater than the sum of the parts.

Gideon Lichfield: I want to end with a philosophical question. We said earlier that even the proponents of AGI don't think it will be conscious. Could we even say whether it will have thoughts? Will it understand its own existence in the sense that we do?

Will Douglas Heaven: In Alan Turing's 1950 paper, "Computing Machinery and Intelligence", written when AI was still just this theoretical idea, before we'd even addressed it as an engineering possibility, he raised this question: how do we tell if a machine can think? And in that paper he addresses, you know, this idea of consciousness. Maybe some people will come along and say machines can never think, because we won't ever be able to tell that machines can think, because we won't be able to tell they're conscious. And he sort of dismisses that by saying, well, if you push that argument too far, then you have to say the same thing about, well, the fellow humans that you meet every day. There's no ultimate way that I can say that any of you guys aren't conscious; you know, the only way that I would know that is if I experienced being you. And you get to the point where communication breaks down and it's sort of a place where we can't go. So that's one way of dismissing that question. I mean, I think the consciousness question will be around forever. One day I think we will have machines which act as if they could think and, you know, could mimic humans so well that we might as well treat them as if they're conscious. But as to whether they actually are, I don't think we'll ever know.

Gideon Lichfield: Karen, what do you think about conscious machines?

Karen Hao: I mean, building off of what Will said: do we even know what consciousness is? And I guess I would draw on the work of a professor at Tufts, actually. He approaches artificial intelligence from the perspective of artificial life. Like, how do you replicate all of the different things?

Not just the brain, but also like the electrical pulses or the electrical signals that we use within the body to communicate and that has intelligence too. If we are fundamentally able to recreate every little thing, every little process in our bodies or in an animal's body eventually, then why wouldn't those beings have the same consciousness that we do?

Will Douglas Heaven: You know, there's a wonderful debate going on right now about brain organoids, which are little clumps of stem cells that are made to grow into neurons, and they can even develop connections, and you see in some of them this electrical activity. And there are various labs around the world studying these little blobs of brain to understand human brain diseases better. But there's a really interesting ethical debate going on about, you know, at what point does this electrical activity raise the possibility that these little blobs in Petri dishes are conscious? And that shows that we have no good definition of consciousness, even for our own brains, let alone machine ones.

Karen Hao: And I want to add: we also don't really have a good definition of artificial. So that just adds to it, I mean, if we talk about artificial general intelligence.

We don't have a good definition of any of the three words that compose that term. So, going to the point that Will made about these organoids that are growing in Petri dishes: is that considered artificial? If not, why? Do we define artificial as things that are just not made out of organic material? There's just a lot of ambiguity in the definitions around all of the things that we're talking about, which makes the consciousness question very complicated.

Will Douglas Heaven: It also makes them fun things to talk about.

Gideon Lichfield: That's it for this episode of Deep Tech. And it's also the last episode we're doing for now. We're working on some other audio projects that we're hoping to launch in the coming months. So please keep an eye out for them. And if you haven't already, you should check out our AI podcast called In Machines We Trust, which comes out every two weeks. You can find it wherever you normally listen to podcasts.

Deep Tech is written and produced by Anthony Green and edited by Jennifer Strong and Michael Reilly. I'm Gideon Lichfield. Thanks for listening.

Visit link:
Podcast: Can you teach a machine to think? - MIT Technology Review

Retired Chess Grandmaster, AlphaZero AI Reinvent Chess – Science Times

Russian chess grandmaster Vladimir Kramnik is working with DeepMind's chess program, AlphaZero, to analyze new variants in an attempt to reinvent the popular strategy board game.

Vladimir Kramnik, Classical World Chess Champion from 2000 to 2006, proposed nine new chess variants last Wednesday, September 9. He worked with the artificial intelligence (AI) laboratory DeepMind, a subsidiary of Google's parent company Alphabet Inc, to evaluate his proposals with the help of the AlphaZero AI.

(Photo: Sebastian Reuter/Getty Images for World Chess) Vladimir Kramnik plays the first round at the First Move Ceremony during the World Chess Tournament on March 10, 2018, in Berlin, Germany.

In a report submitted to the arXiv repository hosted by Cornell University, Nenad Tomašev, Ulrich Paquet, and Demis Hassabis from DeepMind worked with grandmaster Vladimir Kramnik to assess game balance in the new variants with help from AlphaZero.

It defines AlphaZero as "a reinforcement learning system that can learn near-optimal strategies for any rules set from scratch without any human supervision, and provides an in silico alternative for game balance assessment."

The reinvention of chess involved introducing nine different alterations to the existing rules of modern chess. These alterations are intended to keep the new games close to the original while allowing the generation of novel strategies and gameplay patterns. The study's authors aimed to preserve the appeal of the original while trying to "uncover dynamic variants" in the opening, mid-game, and endgame stages. The alterations, as a rule, do not change the board itself, the number of pieces, or their starting arrangement.

Some of the alterations introduced in the study include a ban on the castling move throughout the game, a victory condition for the side that forces stalemate, and allowing the capture of one's own pieces.

AlphaZero, which draws on a deep neural network to analyze the move possibilities in any given board position, uses Monte Carlo tree search (MCTS) to assess positions. DeepMind scientists trained the AI program on each of the rule alterations, with each computational model given 1 million training steps.
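The report describes that search at a high level. As a rough, hypothetical illustration (not DeepMind's actual code), here is a sketch of the PUCT selection rule that AlphaZero-style MCTS uses to decide which move to explore next, with the neural network replaced by a stub:

```python
# PUCT selection sketch: balance a move's average value (Q) against an
# exploration bonus (U) driven by the network's prior and visit counts.
import math

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a): move probability from the policy net
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # W(s, a)
        self.children = {}      # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """Pick the child maximizing Q + U, trading exploitation for exploration."""
    total = sum(ch.visits for ch in node.children.values())
    def puct(ch):
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.q() + u
    return max(node.children.items(), key=lambda kv: puct(kv[1]))

def policy_value_stub(position, legal_moves):
    # Placeholder for the deep network: uniform priors, neutral value.
    return {m: 1.0 / len(legal_moves) for m in legal_moves}, 0.0

# Tiny usage example with made-up moves
root = Node(prior=1.0)
priors, value = policy_value_stub("start", ["e4", "d4", "Nf3"])
root.children = {m: Node(p) for m, p in priors.items()}
move, child = select_child(root)
print(move)  # with equal priors and no visits, picks the first maximum: e4
```

In the real system, repeated rounds of selection, network evaluation, and backup of values along the visited path produce the statistics used to choose a move.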

After simulating human play adapted to the revised rules, the team found variants that appear dynamic and interesting, and demonstrated the importance of rule design to the overall dynamics of the game.

In the report's introduction, proponents explain that "Rule design is a critical part of game development, and small alterations of game rules can have a large effect on the overall playability and game dynamics."

The Russian chess GM has also provided his assessment of each of the proposed variants, with data provided by the AlphaZero AI. For example, a chess variant that disallows castling entirely was "a potentially exciting variant" for the grandmaster, mainly because it increases the risk to both players' kings. Removing the rook-supported strategic move adds complexity to the opening stages of the game.

Another alteration, which allows a player to capture his own pieces, opens new avenues of gameplay involving sacrifices, with Kramnik noting the "aesthetic appeal" of giving up pieces in exchange for a strategic advantage. As the rule can come into play at any stage of the game, it can feature in a large number of games, even occurring multiple times in a single play.

Aside from Vladimir Kramnik, the team also tapped other chess grandmasters, such as the Danish GM Peter Heine Nielsen and English GM Matthew Sadler, who provided feedback on the study.

Here is the original post:
Retired Chess Grandmaster, AlphaZero AI Reinvent Chess - Science Times

DeepMind’s AI is helping to re-write the rules of chess – ZDNet

In the game between chess and artificial intelligence, Google DeepMind's researchers have made yet another move, this time teaming up with ex-chess world champion Vladimir Kramnik to design and trial new AI-infused variants of the game.

With the objective of improving the design of balanced sets of game rules, the research team set out to discover the best tweaks they could possibly give to the centuries-old board game, in an ambitious effort to refresh chess dynamics thanks to AI.

The scientists used AlphaZero, an adaptive learning system that can teach itself new rules from scratch and achieve superhuman levels of play, to test the outcomes of nine different chess variants that they pre-defined with Kramnik's help.

For each variant, AlphaZero played tens of thousands of games against itself, analyzing every possible move for any given chessboard condition, and generating new strategies and gameplay patterns. Kramnik and the researchers then assessed what games between human players might look like if these variants were adopted, to find out whether different sets of rules might improve the game.

Chess has evolved significantly over the centuries, with new variants coming in to improve perceived issues with the classical game, or to introduce new complications in the competition. Changing the rules can have a huge impact on game strategy, playability and dynamics but historically, understanding the consequences of implementing a particular chess variant has only been possible over time, by observing enough human players.

"Training an AlphaZero model under these rules changes helped us effectively simulate decades of human play in a matter of hours," said DeepMind's researchers, "and answer the 'what if' question: what the play would potentially look like under developed theory in each chess variant."

Some of the alterations tested by AlphaZero included the ability for a player to capture their own pieces, for example, or for pawns to move backwards by one square. "No-castling" disallowed castling throughout the game, while another variant equated forcing a stalemate with a win, rather than a draw.

The AI system played each variant in 10,000 games at one second per move, and another 1,000 with one minute per move. To determine as objectively as possible how the changes impacted the games' quality, the scientists looked at a number of factors; one of them, which has frustrated chess players since time immemorial, was the proportion of draws observed.

Overall, most variants increased the proportion of decisive results, with some rules, such as "stalemate = win", understandably driving the improvement. The researchers also found that time controls impacted game decisiveness: games at one second per move were much less likely to end in a draw than those with one minute per move.
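As a back-of-the-envelope illustration of that kind of comparison, here is a small sketch computing draw rates per condition; the counts are invented placeholders, not DeepMind's data:

```python
# Compare draw rates across variants and time controls, with a rough
# 95% confidence interval from the normal approximation.
from math import sqrt

def draw_rate(draws, games):
    p = draws / games
    se = sqrt(p * (1 - p) / games)  # standard error of the proportion
    return p, (p - 1.96 * se, p + 1.96 * se)

# placeholder counts for illustration only
for variant, draws, games in [("classical, 1s/move", 5100, 10000),
                              ("no-castling, 1s/move", 4200, 10000),
                              ("stalemate=win, 1min/move", 780, 1000)]:
    p, (lo, hi) = draw_rate(draws, games)
    print(f"{variant}: {p:.1%} draws (95% CI {lo:.1%} to {hi:.1%})")
```

The wider interval on the 1,000-game condition shows why the researchers played an order of magnitude more games at the faster time control.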

The results also showed that, in a large percentage of games, AlphaZero actively used the additional moves at its disposal thanks to the new rules, rather than sticking strictly to classical moves. "This suggests that the new options are indeed useful, and contribute to the game," said the researchers.

On top of the statistical analysis of AlphaZero's new gameplay, DeepMind's team asked Kramnik to answer more subjective questions about the positions, moves and patterns that emerged as a result of the variants. The player's input, in principle, should reflect which rules might get traction within the traditional chess community.

The Russian grandmaster has for a long time been an advocate of the no-castling variant, and confirmed that the rule is potentially exciting, because it encourages aggressive play by increasing the vulnerability of both players' kings. On the other hand, Kramnik found that the "stalemate = win" variant seems to have a minor overall effect on the game.

International master Danny Rensch, chief chess officer at chess website Chess.com, also reviewed DeepMind's findings in a video. He described the "stalemate = win" rule, on the contrary, as the one most likely to significantly change gameplay in the chess community.

"You can't fix the draw in chess unless you really eliminate stalemate," said Rensch. "I strongly believe stalemate equaling a win might not only help grow the game to beginner players but actually could make an impact in terms of decisive results we saw."

Ultimately, as useful as AlphaZero's insights can be, it is impossible to predict which rules of chess will stick, if any. The only way to find out will be to observe how human players adopt, change or abandon different variants: who knows, it might be the right time to re-open that long-forgotten chess app.

Read this article:
DeepMind's AI is helping to re-write the rules of chess - ZDNet

AI ruined chess. Now it is making the game beautiful again – Publicist Recorder

Chess has a reputation for cold logic, yet Vladimir Kramnik loves the game for its beauty.

"It is a kind of creation," he says. His enthusiasm for the artistry of minds clashing over the board, trading complex yet elegant provocations and counters, helped him dethrone Garry Kasparov in 2000 and spend a number of years as world champion.

Yet Kramnik, who retired from competitive chess in 2019, also feels his treasured game has grown less creative. He largely blames computers, whose brute calculations have produced a vast library of openings and defenses that top players know by rote. "For quite a number of games on the highest level, half of the game, sometimes a full game, is played out of memory," Kramnik says. "You don't even play your own preparation; you play your computer's preparation."

On Wednesday, Kramnik presented some ideas for how to restore some of the human art to chess, with help from a counterintuitive source: the world's most powerful chess computer. He teamed up with Alphabet's artificial intelligence lab DeepMind, whose researchers challenged their extraordinary game-playing program AlphaZero to learn nine variants of chess chosen to jolt players into creative new patterns.

In 2017, AlphaZero showed it could teach itself to roundly beat the best computer players at chess, Go, or the Japanese game shogi. Kramnik says its latest results reveal seductive new visions of chess to be explored, if people are willing to take on some modest changes to the established rules.

The project also showcased a more collaborative mode for the relationship between chess players and machines. "Chess engines were initially built to play against humans, with the goal of defeating them," says Nenad Tomašev, a DeepMind researcher who worked on the project. "Now we see a system like AlphaZero used for creative exploration in tandem with humans rather than set against them."

People have played chess for around 1,500 years, and tweaks to the rules are not new. Nor are complaints that computers have made the game boring.

Chess spread rapidly around 500 years ago after European players promoted a slow piece into the powerful modern queen, giving the game more zip. In 1996, one year before IBM's Deep Blue finished off Kasparov, chess wunderkind-turned-fugitive Bobby Fischer called a press conference in Buenos Aires and complained that chess needed a redesign to devalue computer-enhanced memorization and encourage ingenuity. He announced Fischer Random Chess, which keeps the standard rules of play but randomizes the starting positions of the powerful pieces on the back rank of the board each game. Fischer Random, also known as Chess960, gradually gained a niche in the chess world and now has its own tournaments.

DeepMind and Kramnik tapped AlphaZero's ability to learn a game from scratch to explore new variants faster than the years or even centuries of human play it would take to uncover their beauty and problems. "You don't want to invest many months or years of your life trying to play something, only to realize that, 'Oh, this just isn't a beautiful game,'" says Tomašev.

AlphaZero is a more powerful and flexible successor to AlphaGo, which set a marker in AI history when it defeated a champion at Go in 2016. It starts learning a game equipped with only the rules, a way to keep score, and a preprogrammed desire to experiment and win. "When it begins playing it is so bad I want to hide under my table," says Ulrich Paquet, another DeepMind researcher on the project. "But seeing it grow from a blank slate is almost pure and beautiful."

In chess, AlphaZero initially does not know it can take an opponent's pieces. Over hours of rapid play against ever more powerful incarnations of itself, it becomes more skilled and, to some eyes, more natural than previous chess engines. In the process, it finds ideas observed over centuries of human chess and adds style of its own. English grandmaster Matthew Sadler described studying AlphaZero's games as like discovering the secret notebooks of some great player from the past.

Former chess world champion Vladimir Kramnik, left, worked with Alphabet's DeepMind, cofounded by Demis Hassabis, right, to explore new forms of chess using artificial intelligence.

DeepMind

The nine variants of chess that AlphaZero tested included no-castling chess, which Kramnik and others had already been thinking about, and which had its first dedicated tournament in January. It does away with a maneuver called castling that permits a player to put their king behind a protective screen of other pieces, a powerful fortress that can also be stultifying. Five of the variants changed the movement of pawns, including torpedo chess, in which pawns may move up to two squares at a time throughout the game, rather than only on their first move.

One way of looking at AlphaZero's results is in cold numbers. Draws were less common under no-castling chess than under standard rules. And learning different rules shifted the value AlphaZero placed on different pieces: under standard rules, it valued a queen at 9.5 pawns; under torpedo rules, the queen was worth only 7.1 pawns.

DeepMind's researchers were ultimately more interested in the assessment of the other great chess intelligence on the project, Kramnik. "This is not about numbers, but whether it is qualitatively, aesthetically pleasing for humans to sit down and play," says Tomašev. A technical paper released Wednesday includes more than 70 pages of commentary by Kramnik on AlphaZero's explorations.

Kramnik saw flashes of beauty in how AlphaZero adapted to the new rules. No-castling chess produced rich new patterns for keeping the king safe, he says. A more extreme modification, self-capture chess, in which a player may take their own pieces, proved even more attractive. The rule effectively gives a player more options to sacrifice a piece to make progress, Kramnik says, an approach considered a hallmark of elegant play for centuries. "All in all it just makes the game more beautiful," he says.

Kramnik hopes AlphaZero's adventures in unusual forms of chess will entice players of all levels to try them. "It is our present to the world of chess," he says. Now may be an opportune moment.

Chess has been gaining popularity for years but got a pandemic boost as many people sought new mental stimulation, says Jennifer Shahade, a two-time US women's chess champion. Interest in Chess960 has grown as well, suggesting an appetite for new kinds of play, including from some stars. Later this week, Shahade will provide commentary for a Chess960 tournament featuring world No. 1 Magnus Carlsen and Kasparov, the former champion.

Like Kramnik, Shahade saw things to like in a number of the variants AlphaZero tested, even if changes like allowing pawns to move sideways felt trippy. If any gain traction, some players will still want to lean on computers and deep research to get ahead, but resetting the pattern could be exciting to watch. "The discoveries would feel fresh, maybe really inspiring, and benefit a different sort of player," says Shahade, who is also women's program director at the US Chess Federation.

DeepMind and Kramnik's project could also inspire computer chess to get more creative, now that machines are unbeatable. "Instead of making computer chess stronger and crushing humans, we can focus on chess as an art form within a game," says Eli David, a researcher at Bar-Ilan University in Israel who has built machine-learning-powered chess engines of his own. One graduate student in his lab is working on chess software that learns to imitate the style of a specific player, which could make it possible to ask a machine what a favorite grandmaster, past or present, would do in a particular position.

Kramnik's experience suggests that having humans work with, not against, machines can expand the mental and technical horizons of the game. AlphaZero took him to places beyond even his vast understanding. "After three moves you just don't know what to do," he says. "It is a beautiful feeling, like you're a child."

This story originally appeared on wired.com.

Read the original post:
AI messed up mentally stimulating games. Right now it is actually creating the video game wonderful once again - Publicist Recorder