Archive for the ‘AlphaZero’ Category

MCI Day 6: Ding Liren beats MVL in 16 moves – chess24

Ding Liren is up to 3rd place in the Magnus Carlsen Invitational standings after a tense and well-played match came to an abrupt end in the sudden death game. "It's farce, not Armageddon," commented the watching Alexander Grischuk as Maxime Vachier-Lagrave blundered and lost in just 16 moves. Meanwhile Anish Giri's quest to win a game in the event goes on, as he missed a great chance before Ian Nepomniachtchi picked up the full 3 match points.

You can replay all the Magnus Carlsen Invitational games using the selector below (click on a game to open it with computer analysis):

That meant Ian Nepomniachtchi was the day's highest scorer for winning without the need for Armageddon, while Ding Liren took two points and MVL one after the Armageddon game:

You can replay the live commentary with Tania Sachdev, Lawrence Trent, Jan Gustafsson, Peter Svidler, Alireza Firouzja and later Alexander Grischuk, Fabiano Caruana and Ian Nepomniachtchi below:

And here's the aftershow with Pascal Charbonneau:

This was billed before it began as a heavyweight struggle, and it lived up to that reputation. The world numbers 3 and 5, or 3 and 2 if you took the rapid ratings, gave each other only glimmers of hope in the four rapid games. Ding Liren perhaps came closest to breaking through with the white pieces, but arguably the greatest moment of peril in the first four games of the match was when Ding disconnected in one of the games. Fortunately this time the players resumed the game from the same position and clock times with a minimum of fuss.

That meant an Armageddon game between two players who had lost their previous Armageddons - Ding Liren to Caruana, and MVL to Nepomniachtchi. If one thing seemed certain, it was that Maxime couldn't do worse than he'd done in that game, when he was lost in 17 moves and resigned on move 20. But it turned out that was nothing!

Our commentators were already calling MVL's 4...Bf5!? a blunder, while 12...Nc2+? and 13...Qxb2? were the final straws:

14.Ne5! is the only move that wins for White, but it's absolutely crushing, threatening mate on d7, among other things. 14...b5 15.Qa5+ Ke8 16.Qc7 and Maxime threw in the towel rather than give any spite checks:

It was a strange end to the match, but at least the break before the Armageddon had given us time for an enjoyable intervention from the chess world's most diplomatic player, Alexander Grischuk:

This match also got off to a quiet start, but in Game 2 Ian Nepomniachtchi showed that he meant business with 7.Qc2:

This quiet little move is not so innocent, e.g. 7...0-0? 8.Nxd5! Qxd5 9.Ng5! and Black could resign, but the most noteworthy point is that it seems to be a novelty. There was a time when no-one would "burn" a novelty in a rapid or blitz event, but this one is for serious stakes, and as Fabiano Caruana said in a previous interview:

"This is the only tournament that I do have for a long time and it is also a tournament with many if not most of the best players in the world, so I do take it seriously and I really would love to do very well."

He added during Day 6, "I don't know about other players, but I already showed all my Candidates ideas!" referring to the virus-interrupted tournament in Yekaterinburg that will determine Magnus Carlsen's next World Championship challenger.

Anish wasn't going to fall into any simple traps, but he was eventually undone by a bold exchange sacrifice from his opponent:

25.Qxc7!? Bxf1 26.Kxf1 f6 27.Qxa7 and Nepo's passed a-pawn eventually decided the game in White's favour, though Giri could have done more to stop it.

That meant Anish needed to win one of the next two games and, in defiance of his critics, it looked as though he might make it in Game 3. Critic-in-chief was that man Alexander Grischuk again, who didn't believe in the Dutch no. 1's instincts:

"Of course for Anish it's very attractive to exchange queens and win a pawn - how can anything else be more attractive?"

Giri, however, resisted the siren call of 27.Qxe4 Bxe4 28.Rxe6 and channelled his inner AlphaZero to go for 27.h6+! Kf7 28.Nf4!, and only swapped off queens when it gave him a clearly winning position. It was all going right until 34...Rc4!, a fine double-purpose move:

The obvious threat is ...Rc1+ and ...Rh1 mate, which is not to be taken lightly, but it could have been parried by e.g. 35.f3! Instead 35.Rb1? allowed Nepo to change targets and go for the h7-knight instead with 35...Rc8! and 36...Rh8. In the end it was Nepo who was closer to a win.

In the final must-win game with the black pieces it was nominally Giri who was pressing for a win, but he never came close and remains on 0 match points, tied with Alireza Firouzja:

There are still four rounds to go, a potential 12 match points, for the players to improve their situation, but it's approaching the must-win stage for Alireza as he takes on Fabiano Caruana tomorrow. Remember, the top four go forward to a knockout for the big prizes. The other match is another crowd-pleaser, as rapid world no. 1 Magnus Carlsen takes on rapid world no. 2 Maxime Vachier-Lagrave.

We hope you're enjoying the action...

...and if you think you can predict what will happen in Round 4 make sure to enter our Round 4 Fantasy Chess Contest.

Tune in again for all the Magnus Carlsen Invitational action from 15:30 CEST here on chess24.

Read the original:
MCI Day 6: Ding Liren beats MVL in 16 moves - chess24

Creator David Silver On AlphaZero’s (Infinite?) Strength – Chess.com

Making an appearance in Lex Fridman's Artificial Intelligence Podcast, DeepMind's David Silver gave lots of insights into the history of AlphaGo and AlphaZero, and deep reinforcement learning in general.

Today, the finals of the Chess.com Computer Chess Championship (CCC) start between Stockfish and Lc0 (Leela Chess Zero). It's a clash between a conventional chess engine that implements an advanced alpha-beta search (Stockfish) and a neural-network-based engine (Lc0).
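For readers unfamiliar with the first half of that clash: alpha-beta pruning is the decades-old backbone of conventional engines. Here is a minimal toy sketch of the idea in Python; Stockfish's real search is vastly more sophisticated, and the little game tree below is a made-up example:

```python
# Minimal alpha-beta search over a toy game tree. Leaves carry static
# evaluations; internal nodes alternate between maximizing and minimizing
# players. This is the textbook idea, not Stockfish's implementation.
def alphabeta(node, alpha, beta, maximizing):
    if not node.get("children"):          # leaf: return its static score
        return node["score"]
    if maximizing:
        value = float("-inf")
        for child in node["children"]:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # opponent would avoid this line,
                break                     # so prune the remaining moves
        return value
    value = float("inf")
    for child in node["children"]:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

# Two-ply example: best guaranteed score for the side to move is 3.
tree = {"children": [
    {"score": 3},
    {"children": [{"score": -2}, {"score": 9}]},
]}
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 3
```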

One could say that Leela Chess Zero is the open-source version of DeepMind's AlphaZero, which controversially crushed Stockfish in a 100-game match (and then repeated the feat).

Even a few years on, the basic concept behind engines like AlphaZero and Leela Zero is breathtaking: learning to play chess just by reinforcement learning from repeated self-play. This idea, and its meaning for the wider world, was discussed in episode 86 of Lex Fridman's Artificial Intelligence Podcast, where Fridman had DeepMind's David Silver as a guest.

Silver leads the reinforcement learning research group at DeepMind and was lead researcher on AlphaGo and AlphaZero, as well as co-lead on AlphaStar and MuZero. He has done a lot of important work in reinforcement learning, the study of how agents ought to take actions in an environment in order to maximize cumulative reward.

Silver explains: "The goal is clear: The agent has to take actions, those actions have some effect on the environment, and the environment gives back an observation to the agent saying: 'This is what you see or sense.' One special thing it gives back is called the reward signal: how well it's doing in the environment. The reinforcement learning problem is to simply take actions over time so as to maximize that reward signal."
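That loop is easy to express in code. Below is a minimal sketch of the agent-environment cycle Silver describes; the toy environment and the random policy are hypothetical stand-ins, not anything DeepMind uses:

```python
import random

class ToyEnvironment:
    """Hypothetical 10-step environment: action 1 pays a reward of 1."""
    def __init__(self):
        self.t = 0
    def step(self, action):
        self.t += 1
        observation = self.t                  # what the agent sees/senses
        reward = 1.0 if action == 1 else 0.0  # the special reward signal
        done = self.t >= 10
        return observation, reward, done

env = ToyEnvironment()
total, done = 0.0, False
while not done:
    action = random.choice([0, 1])            # a placeholder policy
    observation, reward, done = env.step(action)
    total += reward                            # RL: maximize this over time
print("cumulative reward:", total)
```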

The first part of the podcast is mostly about the board game go and DeepMind's successful quest to build a system that can beat the best players in the world, something that had been achieved in many other board games much earlier, including chess. The story was also depicted in a motion picture.

While AlphaGo was still using human knowledge to some extent (in the form of patterns from games played by humans), the next step for DeepMind was to create a system that wasn't fed such knowledge. Moving from go to chess, and so from AlphaGo to AlphaZero, was an example of taking out initial knowledge to see how far you could get with self-play alone. The ultimate goal is to use such algorithms in other systems and solve problems in the real world.

The first new version that was developed was a fully self-learning version of AlphaGo, without prior knowledge and with the same algorithm. It beat the original AlphaGo 100-0.

It was then applied in chess (AlphaZero) and Japanese chess (shogi), and in both cases, it beat the best engines in the world.
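In outline, the self-play recipe that made this possible looks something like the following. This is a structural sketch based on DeepMind's public descriptions; play_one_game and train are stubs standing in for MCTS-guided self-play and gradient-based training:

```python
import random

def play_one_game(network):
    # Stub: a real system plays a full game where both sides use the
    # current network (guided by tree search) and records each position,
    # the search's move probabilities, and the final result.
    states = [f"position_{i}" for i in range(3)]
    policies = [[0.5, 0.5] for _ in states]
    outcome = random.choice([1, 0, -1])        # win / draw / loss
    return states, policies, outcome

def train(network, examples):
    # Stub: a real system fits the network to predict the recorded
    # policies and outcomes, producing a slightly stronger player.
    return network

def self_play_training(network, iterations=3, games=10):
    for _ in range(iterations):
        examples = []
        for _ in range(games):
            states, policies, outcome = play_one_game(network)
            examples += [(s, p, outcome) for s, p in zip(states, policies)]
        network = train(network, examples)     # the next generation
    return network

self_play_training(network=None)
```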

"It worked out of the box. There's something beautiful about that principle. You can take an algorithm, and not twiddle anything, it just works," said Silver.

In one of the most interesting parts of the podcast, Silver suggests that the (already incredibly strong) AlphaZero that crushed Stockfish can become even stronger, with a future version potentially crushing the current one. To be fair, he starts by calling this a falsifiable hypothesis:

"If someone in the future was to take AlphaZero as an algorithm and run it with greater computational resources than we have available today, then I will predict that they would be able to beat the previous system 100-0. If they were then to do the same thing a couple of years later, that system would beat the previous system 100-0. That process would continue indefinitely throughout at least my human lifetime."

David Silver and Julian Schrittwieser in a photo from DeepMind's Twitter page prior to a Reddit AMA.

Earlier in the podcast, Silver explained this mind-boggling idea of AlphaZero losing to a future generation that can benefit from greater computing power and learn from itself even more:

"Whenever you have errors in a system, how can you remove all of these errors? The only way to address them in any complex system is to give the system the ability to correct its own errors. It must be able to correct them; it must be able to learn for itself when its doing something wrong and correct for it.And so it seems to me that the way to correct delusions was indeed to have more iterations of reinforcement learning. (...)

"Now if you take that same idea and trace it back all the way to the beginning, it should be able to take you from no knowledge, from a completely random starting point, all the way to the highest levels of knowledge that you can achieve in a domain."

There is already a new step beyond AlphaZero, called MuZero. In this version, the algorithm, combined with tree search, works without even being given the rules of a particular game. Perhaps unsurprisingly, it performs superhumanly as well.
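MuZero's publicly described trick is to replace the rulebook with three learned functions: one that encodes an observation into a latent state, one that predicts how that latent state evolves under an action, and one that outputs a policy and value. Here is a bare-bones structural sketch; all three functions are stubs, not DeepMind's code:

```python
def representation(observation):
    # h: real observation -> learned latent state (stub)
    return ("latent", observation)

def dynamics(state, action):
    # g: (latent state, action) -> next latent state, predicted reward (stub)
    return ("latent", (state, action)), 0.0

def prediction(state):
    # f: latent state -> (move probabilities, position value) (stub)
    return [0.6, 0.4], 0.0

# Imagine a short rollout without ever consulting the game's actual rules;
# the real system plans with Monte Carlo tree search over these functions.
state = representation("current board")
for _ in range(3):
    policy, value = prediction(state)
    action = policy.index(max(policy))     # greedy choice, for illustration
    state, reward = dynamics(state, action)
```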

Why skip the step of feeding the rules? Because eventually DeepMind is working towards systems that can have meaning in the real world. And, as Silver notes, for that we need to acknowledge that "The world is a really messy place, and no one gives us the rules."

Listen to the full podcast here.

View post:
Creator David Silver On AlphaZero's (Infinite?) Strength - Chess.com

Fat Fritz 1.1 update and a small gift – Chessbase News

3/5/2020 As promised in the announcement of the release of Fat Fritz, the first update to the neural network has been released, stronger and more mature, and with it comes the brand new smaller and faster Fat Fritz for CPU neural network, which will produce quality play even on a pure CPU setup. If you leave it analyzing the start position, it will say it likes the Sicilian Najdorf, which says a lot about its natural style. Read on to find out more!

If you haven't yet updated your copy of Fat Fritz, now is the time to do it, as it brings more than minor enhancements or a few bug fixes. This update brings the first major update to the Fat Fritz neural network, stronger than ever, as well as a new smaller one that is quite strong on a GPU but also shines on even a plain CPU setup.

When you open Fritz 17, presuming you have Fat Fritz installed, you will be greeted with a message in the bottom right corner of your screen advising you there is an update available for Fat Fritz.

When you see this, click on 'Update Fat Fritz'.

You will then be greeted with the update pane, and just need to click Next to proceed.

When Fat Fritz was released with Fritz 17, updates were promised with the assurance it was still improving. Internally the version number of the release was v226, while this newest one is v471.

While thorough testing is always a challenge since resources are limited, a match against Leela 42850 at 1600 nodes per move over 1000 games yielded a positive result:

Score of Fat Fritz 471k vs Leela 42850: +260 -153 =587 [0.553]
Elo difference: 37.32 +/- 13.79

1000 of 1000 games finished.

Also, in a match of 254 games at 3m+1s against Stockfish 11 under AlphaZero ratio conditions, the new version came out ahead by roughly 10 Elo.
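For the curious, the Elo figure in the Leela match follows directly from the score fraction via the standard logistic rating model. A quick check in Python (the formula is standard; only the printing is mine):

```python
import math

wins, losses, draws = 260, 153, 587
games = wins + losses + draws
score = (wins + 0.5 * draws) / games           # 0.5535, shown as [0.553]
elo_diff = 400 * math.log10(score / (1 - score))
print(f"score {score:.4f} -> Elo difference {elo_diff:.2f}")  # ~37.32
```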

Still, it isn't about Elo and never was, and the result is merely to say that you should enjoy strong competitive analysis. For one thing, it is eminently clear that while both Leela and Fat Fritz share much of the same AlphaZero heritage, there are also distinct differences in style.

Perhaps one of the most obvious ways to highlight this is just the start position. If you let the engine run for a couple of minutes on decent hardware, it will tell you what it thinks is the best line of play for both White and Black based on its understanding of chess.

As such, I ran Leela 42850 with its core settings to see what it thought. After 2 million nodes it was adamant that perfect chess should take both players down the highly respected Berlin Defence of the Ruy Lopez.

Leela 42850 analysis:

info depth 19 seldepth 56 time 32675 nodes 2181544 score cp 23 hashfull 210 nps 75740 tbhits 0 pv e2e4 e7e5 g1f3 b8c6 f1b5 g8f6 e1g1 f6e4 d2d4 e4d6 b5c6 d7c6 d4e5 d6f5 d1d8 e8d8 h2h3

This is fine, but it is also very much a matter of taste.
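Raw UCI info strings like the one above are dense but easy to pick apart: each standard keyword (depth, nodes, score cp, pv) is followed by its value. A small illustrative parser, assuming the (here truncated) line is stored in a string:

```python
line = ("info depth 19 seldepth 56 time 32675 nodes 2181544 score cp 23 "
        "hashfull 210 nps 75740 tbhits 0 pv e2e4 e7e5 g1f3 b8c6 f1b5 g8f6")

tokens = line.split()
depth = int(tokens[tokens.index("depth") + 1])
nodes = int(tokens[tokens.index("nodes") + 1])
cp = int(tokens[tokens.index("cp") + 1])   # evaluation in centipawns
pv = tokens[tokens.index("pv") + 1:]       # principal variation, UCI moves
print(f"depth {depth}, {nodes:,} nodes, eval {cp / 100:+.2f}, "
      f"pv begins {' '.join(pv[:4])}")
```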

Fat Fritz has a different outlook on chess, as has already been pointed out in the past. At first it too will show a preference for the Ruy Lopez, though not the Berlin, but given a bit more time, by 2.6 million nodes, it will declare that the best opening, per its understanding of chess and its calculations, is the Sicilian Najdorf.

Within a couple of minutes this is its mainline:

info depth 16 seldepth 59 time 143945 nodes 7673855 score cp 28 wdl 380 336 284 hashfull 508 nps 54227 tbhits 0 pv e2e4 c7c5 g1f3 d7d6 b1c3 g8f6 d2d4 c5d4 f3d4 a7a6 f1e2 e7e5 d4b3 f8e7 e1g1 c8e6 c1e3 e8g8 f1e1 b8c6 h2h3 h7h6 e2f3 a8c8 d1d2 c6b8 a2a4 f6h7 a1d1 b8d7 f3e2 h7f6

From a purely analytical point of view it is quite interesting that it found 10.Re1! in the mainline. In a position where White scores 52.5% on average, it picks a move that scores 58.3% / 58.9%.

Remember there is no right or wrong here, but it does help show the natural inclinations of each of these neural networks.

Even if chess is ultimately a draw, that doesn't mean there is only one path, so while all roads may lead to Rome, they don't all need to pass through New Jersey.

Trying to find the ideal recipe of parameters for an engine can be daunting, and previously multiple attempts had been made with the well-known tuner CLOP by Remi Coulom. Very recently a completely new tuner, 'Bayes-Skopt', was designed by Karlson Pfannschmidt, a PhD student in machine learning at Paderborn University in Germany, who goes by the online nickname "Kiudee" (pronounced like the letters Q-D). It was used to find new improved values for Leela, which are now the new defaults.

His tuner is described as "a fully Bayesian implementation of sequential model-based optimization" (a mouthful, I know) and was set up with his kind help; it ran for over a week. It produces quite fascinating graphical imagery with its updated values. Here is what the final version looked like:

These values, slightly rounded, have been added as the new de facto defaults for Fat Fritz.
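To give a flavour of what "sequential model-based optimization" means in practice, here is a sketch using the related scikit-optimize library rather than Bayes-Skopt itself: a probabilistic model of the objective decides which expensive parameter setting to try next. The parameter names and the dummy objective are hypothetical stand-ins for "play a match with these settings and report a loss":

```python
from skopt import gp_minimize
from skopt.space import Real

def engine_match_loss(params):
    cpuct, fpu = params
    # Hypothetical objective: in reality this would run a match and
    # return e.g. the negative score; a dummy quadratic keeps it runnable.
    return (cpuct - 1.7) ** 2 + (fpu - 0.5) ** 2

result = gp_minimize(
    engine_match_loss,
    dimensions=[Real(0.5, 4.0, name="cpuct"),
                Real(0.0, 1.5, name="fpu")],
    n_calls=20,            # each evaluation is an expensive "match"
    random_state=0,
)
print("suggested parameter values:", result.x)
```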

The new Fat Fritz for CPU is a completely new neural network trained from Fat Fritz games, but in a much smaller frame. Objectively it is not as strong as Fat Fritz, but it runs much faster, and above all it has the virtue of being quite decent on even a pure CPU machine. It won't challenge the likes of Stockfish, so let's get that out of the way, but in testing on quad-core machines (i.e. my i7 laptop) it defeats Fritz 16 by a healthy margin.

Note that this is not in the product description, so needless to say, it is nothing more nor less than a gift to Fritz 17 owners.

Enjoy it!

More stories on Fat Fritz and Fritz 17...

Read more:
Fat Fritz 1.1 update and a small gift - Chessbase News

Chess champion Garry Kasparov who was replaced by AI says most US jobs are next – The Verge

Garry Kasparov dominated chess until he was beaten by an IBM supercomputer called Deep Blue in 1997. The event made "man loses to computer" headlines the world over. Kasparov recently returned to the ballroom of the New York hotel where he was defeated for a debate with AI experts. Wired's Will Knight was there for a revealing interview with perhaps the greatest human chess player the world has ever known.

"I was the first knowledge worker whose job was threatened by a machine," says Kasparov, something he foresees coming for us all.

"Every technology destroys jobs before creating jobs. When you look at the statistics, only 4 percent of jobs in the US require human creativity. That means 96 percent of jobs, I call them zombie jobs. They're dead, they just don't know it. For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger. Of course they are."

Experts say only about 14 percent of US jobs are at risk of replacement by AI and robots. Nevertheless, Kasparov has some advice for us zombies looking to re-skill.

"There are different machines, and it is the role of a human to understand exactly what this machine will need to do its best. ... I describe the human role as being shepherds."

Kasparov, for example, helps Alphabet's DeepMind division understand potential weaknesses in AlphaZero's chess play.

The interview also yielded this gem of a quote from Kasparov:

"People say, oh, we need to make ethical AI. What nonsense. Humans still have the monopoly on evil. The problem is not AI. The problem is humans using new technologies to harm other humans."

It's a fascinating read and one that should be read in its entirety, if only to find out why Kasparov thinks AI is making chess more interesting, even though humanity doesn't stand a chance of beating it.

See the article here:
Chess champion Garry Kasparov who was replaced by AI says most US jobs are next - The Verge

Why asking an AI to explain itself can make things worse – MIT Technology Review

Upol Ehsan once took a test ride in an Uber self-driving car. Instead of fretting about the empty driver's seat, anxious passengers were encouraged to watch a "pacifier" screen that showed a car's-eye view of the road: hazards picked out in orange and red, safe zones in cool blue.

For Ehsan, who studies the way humans interact with AI at the Georgia Institute of Technology in Atlanta, the intended message was clear: "Don't get freaked out, this is why the car is doing what it's doing." But something about the alien-looking street scene highlighted the strangeness of the experience rather than reassuring passengers. It got Ehsan thinking: what if the self-driving car could really explain itself?

The success of deep learning is due to tinkering: the best neural networks are tweaked and adapted to make better ones, and practical results have outpaced theoretical understanding. As a result, the details of how a trained model works are typically unknown. We have come to think of them as black boxes.

A lot of the time we're okay with that when it comes to things like playing Go or translating text or picking the next Netflix show to binge on. But if AI is to be used to help make decisions in law enforcement, medical diagnosis, and driverless cars, then we need to understand how it reaches those decisions, and know when they are wrong.

"People need the power to disagree with or reject an automated decision," says Iris Howley, a computer scientist at Williams College in Williamstown, Massachusetts. "Without this, people will push back against the technology. You can see this playing out right now with the public response to facial recognition systems," she says.

Ehsan is part of a small but growing group of researchers trying to make AIs better at explaining themselves, to help us look inside the black box. The aim of so-called interpretable or explainable AI (XAI) is to help people understand what features in the data a neural network is actually learning, and thus whether the resulting model is accurate and unbiased.

One solution is to build machine-learning systems that show their workings: so-called glassbox, as opposed to black-box, AI. Glassbox models are typically much-simplified versions of a neural network in which it is easier to track how different pieces of data affect the model.

"There are people in the community who advocate for the use of glassbox models in any high-stakes setting," says Jennifer Wortman Vaughan, a computer scientist at Microsoft Research. "I largely agree." Simple glassbox models can perform as well as more complicated neural networks on certain types of structured data, such as tables of statistics. For some applications that's all you need.

But it depends on the domain. If we want to learn from messy data like images or text, we're stuck with deep, and thus opaque, neural networks. The ability of these networks to draw meaningful connections between very large numbers of disparate features is bound up with their complexity.

Even here, glassbox machine learning could help. One solution is to take two passes at the data, training an imperfect glassbox model as a debugging step to uncover potential errors that you might want to correct. Once the data has been cleaned up, a more accurate black-box model can be trained.
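As a rough sketch of that two-pass workflow, using scikit-learn and a stock dataset purely for illustration (the "cleaning" step here is a crude stand-in for the human review the passage describes):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)

# Pass 1: a shallow "glassbox" tree that is easy to inspect.
glassbox = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
suspect = glassbox.predict(X) != y        # rows the simple model disputes
print(f"{suspect.sum()} rows flagged for review")

# ...a human would now inspect and fix the flagged rows; dropping them
# below is only a placeholder for that cleaning step...
X_clean, y_clean = X[~suspect], y[~suspect]

# Pass 2: the more accurate black-box model, trained on the cleaned data.
blackbox = GradientBoostingClassifier(random_state=0).fit(X_clean, y_clean)
```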

It's a tricky balance, however. Too much transparency can lead to information overload. In a 2018 study looking at how non-expert users interact with machine-learning tools, Vaughan found that transparent models can actually make it harder to detect and correct the model's mistakes.

Another approach is to include visualizations that show a few key properties of the model and its underlying data. The idea is that you can see serious problems at a glance. For example, the model could be relying too much on certain features, which could signal bias.
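A sketch of that kind of at-a-glance check, again with scikit-learn on a stock dataset: if one feature towers over the rest, the model may be leaning on it too heavily, which is worth investigating:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)
ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked[:5]:       # the five most-relied-on features
    print(f"{name:25s} {importance:.3f}")
```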

These visualization tools have proved incredibly popular in the short time they've been around. But do they really help? In the first study of its kind, Vaughan and her team have tried to find out, and exposed some serious issues.

The team took two popular interpretability tools that give an overview of a model via charts and data plots, highlighting things that the machine-learning model picked up on most in training. Eleven AI professionals were recruited from within Microsoft, all different in education, job roles, and experience. They took part in a mock interaction with a machine-learning model trained on a national income data set taken from the 1994 US census. The experiment was designed specifically to mimic the way data scientists use interpretability tools in the kinds of tasks they face routinely.

What the team found was striking. Sure, the tools sometimes helped people spot missing values in the data. But this usefulness was overshadowed by a tendency to over-trust and misread the visualizations. In some cases, users couldn't even describe what the visualizations were showing. This led to incorrect assumptions about the data set, the models, and the interpretability tools themselves. And it instilled a false confidence about the tools that made participants more gung-ho about deploying the models, even when they felt something wasn't quite right. Worryingly, this was true even when the output had been manipulated to show explanations that made no sense.

To back up the findings from their small user study, the researchers then conducted an online survey of around 200 machine-learning professionals recruited via mailing lists and social media. They found similar confusion and misplaced confidence.

Worse, many participants were happy to use the visualizations to make decisions about deploying the model despite admitting that they did not understand the math behind them. "It was particularly surprising to see people justify oddities in the data by creating narratives that explained them," says Harmanpreet Kaur at the University of Michigan, a coauthor on the study. "The automation bias was a very important factor that we had not considered."

Ah, the automation bias. In other words, people are primed to trust computers. It's not a new phenomenon. When it comes to automated systems from aircraft autopilots to spell checkers, studies have shown that humans often accept the choices they make even when they are obviously wrong. But when this happens with tools designed to help us avoid this very phenomenon, we have an even bigger problem.

What can we do about it? For some, part of the trouble with the first wave of XAI is that it is dominated by machine-learning researchers, most of whom are expert users of AI systems. Says Tim Miller of the University of Melbourne, who studies how humans use AI systems: "The inmates are running the asylum."

This is what Ehsan realized sitting in the back of the driverless Uber. It is easier to understand what an automated system is doing, and see when it is making a mistake, if it gives reasons for its actions the way a human would. Ehsan and his colleague Mark Riedl are developing a machine-learning system that automatically generates such rationales in natural language. In an early prototype, the pair took a neural network that had learned how to play the classic 1980s video game Frogger and trained it to provide a reason every time it made a move.

To do this, they showed the system many examples of humans playing the game while talking out loud about what they were doing. They then took a neural network for translating between two natural languages and adapted it to translate instead between actions in the game and natural-language rationales for those actions. Now, when the neural network sees an action in the game, it translates it into an explanation. The result is a Frogger-playing AI that says things like "I'm moving left to stay behind the blue truck" every time it moves.

Ehsan and Riedl's work is just a start. For one thing, it is not clear whether a machine-learning system will always be able to provide a natural-language rationale for its actions. Take DeepMind's board-game-playing AI AlphaZero. One of the most striking features of the software is its ability to make winning moves that most human players would not think to try at that point in a game. If AlphaZero were able to explain its moves, would they always make sense?

Reasons help whether we understand them or not, says Ehsan: "The goal of human-centered XAI is not just to make the user agree to what the AI is saying; it is also to provoke reflection." Riedl recalls watching the livestream of the tournament match between DeepMind's AI and Korean Go champion Lee Sedol. The commentators were talking about what AlphaGo was seeing and thinking. "That wasn't how AlphaGo worked," says Riedl. "But I felt that the commentary was essential to understanding what was happening."

What this new wave of XAI researchers agree on is that if AI systems are to be used by more people, those people must be part of the design from the start, and different people need different kinds of explanations. (This is backed up by a new study from Howley and her colleagues, in which they show that people's ability to understand an interactive or static visualization depends on their education levels.) Think of a cancer-diagnosing AI, says Ehsan: "You'd want the explanation it gives to an oncologist to be very different from the explanation it gives to the patient."

Ultimately, we want AIs to explain themselves not only to data scientists and doctors but to police officers using face recognition technology, teachers using analytics software in their classrooms, students trying to make sense of their social-media feeds, and anyone sitting in the backseat of a self-driving car. "We've always known that people over-trust technology, and that's especially true with AI systems," says Riedl. "The more you say it's smart, the more people are convinced that it's smarter than they are."

Explanations that anyone can understand should help pop that bubble.

See more here:
Why asking an AI to explain itself can make things worse - MIT Technology Review