Archive for the ‘Alphago’ Category

Seizing Artificial Intelligence’s Opportunities in the 2020s – AiThority

Artificial Intelligence (AI) has made major progress in recent years. But even milestones like AlphaGo or the narrow AI used by big tech only scratch the surface of the seismic changes yet to come.

Modern AI holds the potential to upend entire professions while unleashing brand-new industries in the process. Old assumptions will no longer hold, and new realities will separate those who are swallowed by the tides of change from those able to anticipate and ride the AI wave headlong into a prosperous future.

Here's how businesses and employees can both leverage AI in the 2020s.

Like many emerging technologies, AI comes with a substantial learning curve. As a recent McKinsey report highlights, AI is a slow-burn technology that requires a heavy upfront investment, with returns only ramping up well down the road.

Because of this slow burn, an AI front-runner and an AI laggard may initially appear to be on equal footing. The front-runner may even be a bit behind during early growing pains. But as the effects of AI adoption kick in, the gap between the two widens dramatically and exponentially. McKinsey's models estimate that within around 10 years, the difference in cumulative net change in cash flow between front-runners and laggards could be as high as 145 percent.

The first lesson for any business hoping to seize new AI opportunities is to start making moves to do so right now.

Read More: How is Artificial Intelligence (AI) Changing the Future of Architecture?

Despite popular opinion, the coming AI wave will be mostly a net positive for employees. The World Economic Forum found that by 2022, AI and Machine Learning will have created over 130 million new jobs. Though impressive, these gains will not be distributed evenly.

Jobs characterized by unskilled and repetitive tasks face an uncertain future, while jobs in need of greater social and creative problem-solving will spike. According to McKinsey, the coming decade could see a 10 percent fall in the share of low digital skill jobs, with a corresponding rise in the share of jobs requiring high digital skill.

So how can employees successfully navigate the coming future of work? One place to start is to investigate the past. More than half a century ago, in 1967, the first ATM was installed outside Barclays Bank in London. At the time, the thought of bank tellers surviving the introduction of automated teller machines seemed impossible. ATMs caught on like wildfire, cut into tellers' hours, offered unbeatable flexibility and convenience, and should have all but wiped tellers out.

But, in fact, exactly the opposite happened. No longer having to handle simple deposits freed tellers up to engage with the more complex and social facets of the business. They started advising customers on mortgages and loans, forging relationships and winning loyalty. Most remarkable of all, in the years following the ATM's introduction, the total number of tellers employed worldwide didn't fall off a cliff. In fact, it rose higher than ever.

Though AI could potentially threaten some types of jobs, many jobs will see rising demand. Increased reliance on automated systems for core business functions frees up valuable employee time, enabling employees to focus on areas where they add even more value to the company.

As employees grow increasingly aware of the changing nature of work, they are also clamoring for avenues for development, aware that they need a variety of skills to remain relevant in a dynamic job market. Companies will therefore need to provide employees with a wide range of experiences and the opportunity to continuously enhance their skillsets, or suffer high turnover. This is already a vital issue for businesses: the cost of losing an employee equates to 90-200% of their annual salary, which costs each large enterprise an estimated $400 million a year. If employees feel their role is too restrictive or that their organization is lagging, their likelihood of leaving will climb.

The only way to capture the full value of AI for business is to retain the highly skilled employees capable of wielding it. Departmental silos and rigid job descriptions will have no place in the AI future.

Read More: How Artificial Intelligence and Blockchain is Revolutionizing Mobile Industry in 2020?

For employees to maximize their chances of success in the face of rapid AI advancement, they must remain flexible and continuously acquire new skills. Both businesses and employees will need to realign their priorities in accordance with new realities. Workers will have to be open to novel ideas and perspectives, while employers will need to embrace the latest technological advancements.

Fortunately, the resources and avenues for ambitious employers to pursue continued growth for their employees are blossoming. Indeed, the very AI advancements prompting the need for accelerated career development paths are also powering technologies to maximize and optimize professional enrichment.

AI is truly unlocking an exciting new future of work. Smart algorithms now enable hyper-flexible workplaces to seamlessly shuffle and schedule employee travel, remote work, and mentorship opportunities. At the cutting edge, these technologies can even let employees divide their time between multiple departments across their organization. AI can also tailor training and reskilling programs to each employee's unique goals and pace.

The rise of AI holds the promise of great change, but if properly managed, it can be a change for the better.

Read More: Predictions of AI AdTech in 2020

See more here:
Seizing Artificial Intelligence's Opportunities in the 2020s - AiThority

Can synthetic biology help deliver an AI brain as smart as the real thing? – Genetic Literacy Project

In building the world's first airplane at the dawn of the 20th century, the Wright Brothers took inspiration from the flight of birds. They observed and reverse-engineered aspects of the wing in nature, which in turn helped them make important discoveries about aerodynamics and propulsion.

Similarly, to build machines that think, why not seek inspiration from the three pounds of matter that operates between our ears? Geoffrey Hinton, a pioneer of artificial intelligence and winner of the Turing Award, seemed to agree: "I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain."

So what's next for artificial intelligence (AI)? Could the next wave of AI be inspired by rapid advances in biology? Can the tools for understanding brain circuits at the molecular level lead us to a higher, systems-level understanding of how the human mind works?

The answer is likely yes, and the flow of ideas between learning about biological systems and developing artificial ones has actually been going on for decades.

First of all, what does biology have to do with machine learning? It may surprise you to learn that much of the progress in machine learning stems from insights from psychology and neuroscience. Reinforcement learning (RL), one of the three paradigms of machine learning alongside supervised and unsupervised learning, originates from animal and cognitive neuroscience studies going all the way back to the 1940s. RL is central to some of today's most advanced AI systems, such as AlphaGo, the widely publicized AI agent developed by leading AI company Google DeepMind. AlphaGo defeated the world's top-ranked players at Go, a Chinese board game with more possible board configurations than there are atoms in the universe.

Despite AlphaGo's superhuman performance in the game of Go, its human opponent still possesses far more general intelligence. He can drive a car, speak languages, play soccer, and perform a myriad of other tasks in any kind of environment. Current AI systems are largely incapable of taking the knowledge learned playing poker and transferring it to another task, like playing a game of Cluedo. These systems are focused on a single, narrow environment and require vast amounts of data and training time. And still, they make simple errors, like mistaking a chihuahua for a muffin!

Like a child, a reinforcement learning system learns through interaction with its environment, performing actions that seek to maximize rewards and avoid punishments. Driven by curiosity, children are active learners who simultaneously explore their surrounding environment and predict the outcomes of their actions, allowing them to build mental models and think causally. If, for example, they decide to push the red car, spill the flower vase, or crawl in the other direction, they will adjust their behavior based on the outcomes of their actions.
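To make that loop concrete, here is a minimal sketch of reward-driven learning: tabular Q-learning on a toy five-cell corridor, written in plain Python. The environment, rewards, and hyperparameters are illustrative assumptions for the sketch, not taken from any system discussed here.

```python
import random

# Toy environment: a 5-cell corridor; the goal is the rightmost cell.
# All values below are arbitrary choices for illustration.
N_STATES = 5
ACTIONS = [-1, +1]                  # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward +1 at the goal, a small penalty elsewhere."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else -0.01
    return nxt, reward, nxt == N_STATES - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the current estimate.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Adjust behavior based on the outcome of the action.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy: from every cell, head right toward the reward.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

Like the child in the analogy, the agent tries actions, observes outcomes, and gradually prefers the actions whose predicted long-term reward is highest.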

Children experience many different environments, navigating and interacting with varied contexts and arrangements of objects, often in unusual ways. Just as child brain development can inspire the development of AI systems, an RL agent's learning mechanism parallels the brain's own learning mechanism, driven by the release of dopamine, a neurotransmitter key to the central nervous system, which trains the prefrontal cortex in response to experiences and thus shapes stimulus-response associations as well as outcome predictions.

Biology is one of the most promising beneficiaries of artificial intelligence. From investigating the mind-boggling combinations of genetic mutations that contribute to obesity to examining the byzantine pathways that lead some cells to go haywire and produce cancer, biology produces an inordinate amount of complex, convoluted data. But the information contained within these datasets often offers valuable insights that could be used to improve our health.

In the field of synthetic biology, where engineers seek to rewire living organisms and program them with new functions, many scientists are harnessing AI to design more effective experiments, analyze their data, and use it to create groundbreaking therapeutics. I recently highlighted five companies that are integrating machine learning with synthetic biology to pave the way for better science and better engineering.

Artificial general intelligence (AGI) describes a system that is capable of mimicking human-like abilities such as planning, reasoning, or emotions. Billions of dollars have been invested in this exciting and potentially lucrative area, leading some to make claims like "data is the new oil."

Among the many companies working on general artificial intelligence are Google's DeepMind, the Swiss AI lab IDSIA, Nnaisense, Vicarious, Maluuba, the OpenCog Foundation, Adaptive AI, LIDA, and Numenta. Organizations such as the Machine Intelligence Research Institute and OpenAI also state AGI as their main goal. One of the goals of the international Human Brain Project is to simulate the human brain.

Despite a growing body of talent, tools, and high-quality data, we still have a long way to go to achieve AGI.

Today, AI techniques such as Machine Learning (ML) are ubiquitous in our society, ranging from healthcare and manufacturing to transportation and warfare, but they qualify as narrow AI. They powerfully process and learn from large amounts of data to identify insightful and informative patterns for a single task, such as predicting airline ticket prices, distinguishing dogs from cats in images, or generating your movie recommendations on Netflix.

In biology, AI is also changing your health care. It is generating more and better drug candidates (Insitro), sequencing your genome (Veritas Genetics), and detecting your cancer earlier and earlier (Freenome).

As humans, we are able to quickly acquire knowledge in one context and generalize it to novel situations and tasks. In AI research, this concept is known as transfer learning, and it is the kind of ability a self-driving car system would need, since it must perform many tasks on the road concurrently. Transfer learning helps an AI system learn from just a few examples instead of the millions that traditional systems usually need: the system learns from first principles, abstracts the acquired knowledge, and generalizes it to new tasks and contexts.
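As a rough illustration of the idea, the sketch below reuses visual features learned on one task (ImageNet classification) for a hypothetical new five-class task. The class count, the random batch, and the use of PyTorch/torchvision are assumptions made for the sketch, not a reference to any specific system named in the article.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet: knowledge acquired in one context.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the general-purpose features so only the new head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer for a hypothetical 5-class downstream task.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative update on a fake batch; random tensors stand in for
# the handful of labelled examples a transfer learner needs.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))

optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```

Because most of the network's knowledge is reused, only a small head is learned from scratch, which is why a handful of examples can go a long way.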

To produce more advanced AI, we need to better understand the inner workings of the brain that allow us to portray the world around us. Understanding biological intelligence and creating an artificial one are synergistic missions; seeking inspiration from our brain might help us bridge that gap.

John Cumbers is the founder and CEO of SynBioBeta, the leading community of innovators, investors, engineers, and thinkers who share a passion for using synthetic biology to build a better, more sustainable universe. He publishes the weekly SynBioBeta Digest, hosts the SynBioBeta Podcast, and wrote What's Your Biostrategy?, the first book to anticipate how synthetic biology is going to disrupt virtually every industry in the world. He earned his PhD in Molecular Biology, Cell Biology, and Biochemistry from Brown University. Follow him on Twitter @SynBioBeta or @johncumbers.

A version of this article was originally published on Forbes' website as "Can Synthetic Biology Inspire The Next Wave Of AI?" and has been republished here with permission.

See the original post here:
Can synthetic biology help deliver an AI brain as smart as the real thing? - Genetic Literacy Project

These 6 Incredible Discoveries From The Past Decade Have Changed Science Forever – ScienceAlert

From finding the building blocks for life on Mars to breakthroughs in gene editing and the rise of artificial intelligence, here are six major scientific discoveries that shaped the 2010s - and what leading experts say could come next.

We don't yet know whether there was ever life on Mars - but thanks to a small, six-wheeled robot, we do know the Red Planet was habitable.

Shortly after landing on 6 August 2012, NASA's Curiosity rover discovered rounded pebbles - new evidence that rivers flowed there billions of years ago.

The evidence has since multiplied, showing there was in fact a lot of water on Mars - the surface was covered in hot springs, lakes, and maybe even oceans.

A crater on the Red Planet filled with water ice. (ESA/DLR/FU Berlin, CC BY-SA 3.0 IGO)

Curiosity also discovered complex organic molecules - what NASA calls the building blocks of life - in 2014.

And so the hunt continues for signs that Earth-based life is not (or wasn't always) alone.

Two new rovers will be launched next year - America's Mars 2020 and Europe's Rosalind Franklin - to look for signs of ancient microbes.

"Going into the coming decade, Mars research will shift from the question 'Was Mars habitable?' to 'Did (or does) Mars support life?'" said Emily Lakdawalla, a geologist at The Planetary Society.

We had long thought of the little corner of the Universe that we call home as unique, but observations made thanks to the Kepler space telescope blew apart those pretensions.

Launched in 2009, the Kepler mission helped identify more than 2,600 planets outside of our Solar System, also known as exoplanets - and astronomers now believe each star hosts at least one planet, meaning there are billions out there.

Kepler's successor TESS was launched by NASA in 2018, as we scope out the potential for extraterrestrial life.

Expect more detailed analysis of the chemical composition of these planets' atmospheres in the 2020s, said Tim Swindle, an astrophysicist at the University of Arizona.

We also got our first glimpse of a black hole this year thanks to the groundbreaking work of the Event Horizon Telescope collaboration.

(Event Horizon Telescope Collaboration)

"What I predict is that by the end of the next decade, we will be making high quality real-time movies of black holes that reveal not just how they look, but how they act on the cosmic stage," Shep Doeleman, the project's director, told AFP.

But one event from the decade undoubtedly stood above the rest: the first-ever detection, on September 14, 2015, of gravitational waves - ripples in the fabric of the universe.

The collision of two black holes 1.3 billion years earlier was so powerful that it sent waves rippling through the cosmos, bending space as they traveled at the speed of light. That morning, they finally reached Earth.

The phenomenon had been predicted by Albert Einstein in his theory of general relativity, and here was proof he was right all along.

Three Americans won the Nobel prize in physics in 2017 for their work on the project, and there have been many more gravitational waves detected since.

Cosmologists meanwhile continue to debate the origin and composition of the universe. The invisible dark matter that makes up its vast majority remains one of the greatest puzzles to solve.

"We're dying to know what it might be," said cosmologist James Peebles, who won this year's Nobel prize in physics.

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) - a family of DNA sequences - is a phrase that doesn't exactly roll off the tongue.

(Meletios Verras/iStock)

But the field of biomedicine can now be divided into two eras, one defined during the past decade: before and after CRISPR-Cas9 (or CRISPR for short), the basis for a gene editing technology.

"CRISPR-based gene editing stands above all the others," William Kaelin, a 2019 Nobel prize winner for medicine, told AFP.

In 2012, Emmanuelle Charpentier and Jennifer Doudna reported that they had developed the new tool that exploits the immune defense system of bacteria to edit the genes of other organisms.

It is much simpler, cheaper, and easier to use in small labs than preceding technologies.

Charpentier and Doudna were showered in awards, but the technique is also far from perfect and can create unintended mutations.

Experts believe this may have happened to Chinese twins born in 2018 as a result of edits performed by a researcher who was widely criticized for ignoring scientific and ethical norms.

Still, CRISPR remains one of the biggest science stories of recent years, with Kaelin predicting an "explosion" in its use to combat human disease.

For decades, doctors had three main weapons to fight cancer: surgery, chemotherapy drugs, and radiation.

The 2010s saw the rise of a fourth, one that was long doubted: immunotherapy, or leveraging the body's own immune system to target tumor cells.

(Design Cells/iStock)

One of the most advanced techniques is known as CAR T-cell therapy, in which a patient's T-cells - part of their immune system - are collected from their blood, modified and reinfused into the body.

A wave of drugs has hit the market since the mid-2010s for more and more types of cancer, including melanomas, lymphomas, leukemias, and lung cancers - heralding what some oncologists hope could be a golden era.

For William Cance, scientific director of the American Cancer Society, the next decade could bring new immunotherapies that are "better and cheaper" than what we have now.

The decade began with a major new addition to the human family tree: Denisovans, named after the Denisova Cave in the Altai Mountains of Siberia.

Scientists sequenced the DNA from a female juvenile's finger bone in 2010, finding it genetically distinct from both modern humans and Neanderthals, our most famous ancient cousins, who lived alongside us until around 40,000 years ago.

The mysterious hominin species is thought to have ranged from Siberia to Indonesia, but the only remains have been found in the Altai region and Tibet.

We also learned that, contrary to previous assumptions, Homo sapiens bred extensively with Neanderthals - and that our relatives were not brutish simpletons but were responsible for artworks, such as the handprints in a Spanish cave credited to them in 2018.

They also wore jewelry, and buried their dead with flowers - just like we do.

Next came Homo naledi, remains of which were discovered in South Africa in 2015, while this year, paleontologists classified yet another species found in the Philippines: a small-sized hominin called Homo luzonensis.

Advances in DNA testing have led to a revolution in our ability to sequence genetic material tens of thousands of years old, helping unravel ancient migrations, like that of the Bronze Age herders who left the steppes 5,000 years ago, spreading Indo-European languages to Europe and Asia.

"This discovery has led to a revolution in our ability to study human evolution and how we came to be in a way never possible before," said Vagheesh Narasimhan, a geneticist at Harvard Medical School.

One exciting new avenue for the next decade is paleoproteomics, which allows scientists to analyze bones millions of years old.

"Using this technique, it will be possible to sort out many fossils whose evolutionary position is unclear," said Aida Gomez-Robles, an anthropologist at University College London.

"Neo" skull of Homo naledi from the Lesedi Chamber. (John Hawks/University of the Witwatersrand)

Machine learning - what we most commonly mean when talking about "artificial intelligence" - came into its own in the 2010s.

Using statistics to identify patterns in vast datasets, machine learning today powers everything from voice assistants to recommendations on Netflix and Facebook.

So-called "deep learning" takes this process even further and begins to mimic some of the complexity of a human brain.

Deep learning is the technology behind some of the most eye-catching breakthroughs of the decade: from Google's AlphaGo, which beat the world champion of the fiendishly difficult game Go in 2017, to the advent of real-time voice translations and advanced facial recognition on Facebook.

In 2016, for example, Google Translate - launched a decade earlier - transformed from a service that provided results that were stilted at best, nonsensical at worst, to one that offered translations that were far more natural and accurate.

At times, the results even seemed polished.

"Certainly the biggest breakthrough in the 2010s was deep learning - the discovery that artificial neural networks could be scaled up to many real-world tasks," said Henry Kautz, a computer science professor at the University of Rochester.

"In applied research, I think AI has the potential to power new methods for scientific discovery," from enhancing the strength of materials to discovering new drugs and even making breakthroughs in physics, Kautz said.

For Max Jaderberg, a research scientist at DeepMind, owned by Google's parent company Alphabet, the next big leap will come via "algorithms that can learn to discover information, and rapidly adapt and internalize and act on this new knowledge," as opposed to depending on humans to feed them the correct data.

That could eventually pave the way to "artificial general intelligence", or a machine capable of performing any tasks humans can, rather than excelling at a single function.

Agence France-Presse

Read more here:

These 6 Incredible Discoveries From The Past Decade Have Changed Science Forever - ScienceAlert

AI has bested chess and Go, but it struggles to find a diamond in Minecraft – The Verge

Whether we're learning to cook an omelet or drive a car, the path to mastering new skills often begins by watching others. But can artificial intelligence learn the same way? A new challenge teaching AI agents to play Minecraft suggests it's much trickier for computers.

Announced earlier this year, the MineRL competition asked teams of researchers to create AI bots that could successfully mine a diamond in Minecraft. This isn't an impossible task, but it does require a mastery of the game's basics. Players need to know how to cut down trees, craft pickaxes, and explore underground caves while dodging monsters and lava. These are the sorts of skills that most adults could pick up after a few hours of experimentation or learn much faster by watching tutorials on YouTube.

But of the 660 entries in the MineRL competition, none were able to complete the challenge, according to results that will be announced at the AI conference NeurIPS and that were first reported by BBC News. Although bots were able to learn intermediary steps, like constructing a furnace to make durable pickaxes, none successfully found a diamond.

"The task we posed is very hard," Katja Hofmann, a principal researcher at Microsoft Research, which helped organize the challenge, told BBC News. "While no submitted agent has fully solved the task, they have made a lot of progress and learned to make many of the tools needed along the way."

This may be a surprise, especially when you think that AI has managed to best humans at games like chess, Go, and Dota 2. But it reflects important limitations of the technology as well as restrictions put in place by MineRLs judges to really challenge the teams.

The bots in MineRL had to learn using a combination of methods known as imitation learning and reinforcement learning. In imitation learning, agents are shown data of the task ahead of them, and they try to imitate it. In reinforcement learning, they're simply dumped into a virtual world and left to work things out for themselves using trial and error.
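A minimal sketch of the imitation-learning half, behavior cloning, looks like this: the agent is trained, supervised-style, to reproduce the actions in recorded demonstrations. The observation size, action count, and random "gameplay" tensors below are placeholders; this is not the MineRL API or any team's actual entry.

```python
import torch
import torch.nn as nn

N_OBS, N_ACTIONS = 64, 8   # placeholder sizes, not MineRL's real spaces

# A small policy network mapping observations to action scores.
policy = nn.Sequential(
    nn.Linear(N_OBS, 128),
    nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for (observation, action) pairs from recorded human play.
demo_obs = torch.randn(1024, N_OBS)
demo_actions = torch.randint(0, N_ACTIONS, (1024,))

for epoch in range(10):
    optimizer.zero_grad()
    logits = policy(demo_obs)
    loss = loss_fn(logits, demo_actions)  # imitate the demonstrated action
    loss.backward()
    optimizer.step()

# A policy cloned this way is then typically refined with reinforcement
# learning, letting trial and error improve on what imitation provided.
```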

Often, AI is only able to take on big challenges by combining these two methods. The famous AlphaGo system, for example, first learned to play Go by being fed data of old games. It then honed its skills and surpassed all humans by playing itself over and over.

The MineRL bots took a similar approach, but the resources available to them were comparatively limited. While AI agents like AlphaGo are created with huge datasets, powerful computer hardware, and the equivalent of decades of training time, the MineRL bots had to make do with just 1,000 hours of recorded gameplay to learn from, a single Nvidia graphics processor to train with, and just four days to get up to speed.

It's the difference between the resources available to an MLB team (coaches, nutritionists, the finest equipment money can buy) and what a Little League squad has to make do with.

It may seem unfair to hamstring the MineRL bots in this way, but these constraints reflect the challenges of integrating AI into the real world. While bots like AlphaGo certainly push the boundary of what AI can achieve, very few companies and research labs can match the resources of Google-owned DeepMind.

The competition's lead organizer, Carnegie Mellon University PhD student William Guss, told BBC News that the challenge was meant to show that not every AI problem should be solved by throwing computing power at it. This mindset, said Guss, "works directly against democratizing access to these reinforcement learning systems, and leaves the ability to train agents in complex environments to corporations with swathes of compute."

So while AI may be struggling in Minecraft now, when it cracks this challenge, it'll hopefully deliver benefits to a wider audience. Just don't think about those poor Minecraft YouTubers who might be out of a job.

Read more from the original source:

AI has bested chess and Go, but it struggles to find a diamond in Minecraft - The Verge

MuZero figures out chess, rules and all – Chessbase News

12/12/2019 - Just imagine you had a chess computer, the auto-sensor kind. Would someone who had no knowledge of the game be able to work it out, just by moving pieces? Or imagine you are a very powerful computer. By looking at millions of images of chess games, would you be able to figure out the rules and learn to play the game proficiently? The answer is yes, because that has just been done by Google's DeepMind team - for chess, Go, shogi, and 57 Atari games. It is interesting, and slightly disturbing. | Graphic: DeepMind


In 1980, the first chess computer with an auto-response board, the Chafitz ARB Sargon 2.5, was released. It was programmed by Dan and Kathe Spracklen and had a sensory board and magnetic pieces. The magnets embedded in the pieces were all of the same kind, so the board could only detect whether or not a piece stood on a square. It would signal its moves with LEDs located on the corner of each square.

Chafitz ARB Sargon 2.5 | Photo: My Chess Computers

Some years after the release of this computer, I visited the Spracklens in their home in San Diego, and one evening had an interesting discussion, especially with Kathe. What would happen, we wondered, if we set up a Sargon 2.5 in a jungle village where nobody knew chess? If we left the people alone with the permanently switched-on board and pieces, would they be able to figure out the game? If they lifted a piece, the LED on that square would light up; if they put it on another square, that LED would light up briefly. If the move was legal, there would be a reassuring beep; the square of a piece of the opposite colour would light up, and if they picked up that piece, another LED would light up. If the original move wasn't legal, the board would make an unpleasant sound.

Our question was: could they figure out, by trial and error, how chess was played? Kathe and I discussed it at length, over the Sargon board, and in the end came to the conclusion that it was impossible - they could never figure out the game without human instruction. Chess is far too complex.

Now, three decades later, I have to modify our conclusion somewhat: maybe humans indeed cannot learn chess by pure trial and error, but computers can...

You remember how AlphaGo and AlphaZero were created by Google's DeepMind division. The programs Leela and Fat Fritz were generated using the same principle: tell an AI program the rules of the game and how the pieces move, then let it play millions of games against itself. The program draws its own conclusions about the game and starts to play master-level chess. In fact, it can be argued that these programs are the strongest entities ever to have played chess, human or computer.

Now DeepMind has come up with a fairly atrocious (but scientifically fascinating) idea: instead of telling the AI software the rules of the game, just let it play, using trial and error. Let it teach itself the rules of the game, and in the process learn to play it professionally. DeepMind combined a tree-based search (where a tree is a data structure used for locating information from within a set) with a learning model. They called the project MuZero. The program must predict the quantities most relevant to game planning - not just for chess, but also for Go, shogi, and 57 different Atari games. The result: MuZero, we are told, matches the performance of AlphaZero in Go, chess, and shogi.

And this is how MuZero works (description from VentureBeat):

"Fundamentally, MuZero receives observations - images of a Go board or Atari screen - and transforms them into a hidden state. This hidden state is updated iteratively by a process that receives the previous state and a hypothetical next action, and at every step the model predicts the policy (e.g., the move to play), value function (e.g., the predicted winner), and immediate reward (e.g., the points scored by playing a move)."
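Read literally, the quote describes three learned functions: one encoding an observation into a hidden state, one advancing that state given an action, and one predicting policy, value, and reward from a state. The sketch below wires up tiny stand-in networks in that shape; the dimensions, single-layer networks, and action sequence are illustrative assumptions, not DeepMind's architecture.

```python
import torch
import torch.nn as nn

OBS, HIDDEN, ACTIONS = 32, 16, 4   # toy sizes for the sketch

representation = nn.Linear(OBS, HIDDEN)             # observation -> hidden state
dynamics = nn.Linear(HIDDEN + ACTIONS, HIDDEN + 1)  # (state, action) -> next state + reward
prediction = nn.Linear(HIDDEN, ACTIONS + 1)         # state -> policy logits + value

def one_hot(action):
    return torch.eye(ACTIONS)[action]

# Encode a (random stand-in) observation once, then plan entirely in the
# hidden state by imagining a hypothetical sequence of actions.
state = representation(torch.randn(OBS))
for action in [0, 2, 1]:
    out = prediction(state)
    policy_logits, value = out[:ACTIONS], out[ACTIONS]
    nxt = dynamics(torch.cat([state, one_hot(action)]))
    state, reward = nxt[:HIDDEN], nxt[HIDDEN]
    print(f"action {action}: preferred move {policy_logits.argmax().item()}, "
          f"value {value.item():.2f}, predicted reward {reward.item():.2f}")
```

In the real system these three functions are deep networks trained jointly, and the imagined rollouts drive a tree search over candidate moves; the sketch only shows the data flow.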

Evaluation of MuZero throughout training in chess, shogi, Go, and Atari; the y-axis shows Elo rating. | Image: DeepMind

As the DeepMind researchers explain, one form of reinforcement learning - the technique in which rewards drive an AI agent toward goals - involves models. This form models a given environment as an intermediate step, using a state transition model that predicts the next step and a reward model that anticipates the reward. If you are interested in this subject, you can read the article on VentureBeat, or visit the DeepMind site. There you can read the team's paper on the general reinforcement learning algorithm that masters chess, shogi and Go through self-play. Here's an abstract:

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

That refers to the original AlphaZero development, which has now been extended to MuZero. It turns out that it is possible not just to become highly proficient at a game by playing it a million times against yourself, but also to work out the rules of the game by trial and error.

I have just now learned about this development and need to think about the consequences and discuss it with experts. My first, somewhat flippant, reaction to a member of the DeepMind team: "What next? Show it a single chess piece and it figures out the whole game?"

Link:

MuZero figures out chess, rules and all - Chessbase News