Archive for the ‘Alphago’ Category

AI scholars win Turing Prize for technique that made possible AlphaGo’s chess triumph – ZDNet

Read the original:
AI scholars win Turing Prize for technique that made possible AlphaGo's chess triumph - ZDNet

The evolution of AI: From AlphaGo to AI agents, physical AI, and beyond – MIT Technology Review

More:
The evolution of AI: From AlphaGo to AI agents, physical AI, and beyond - MIT Technology Review

AlphaGo led Lee 4-1 in March 2016. One round Lee Se-dol won remains the last round in which a man be.. –

Read the original here:
AlphaGo led Lee 4-1 in March 2016. One round Lee Se-dol won remains the last round in which a man be.. -

Koreans picked Google Artificial Intelligence (AI) AlphaGo as an image that comes to mind when they .. – MK –

Lee Se-dol 9-dan (right) at his March 2016 match against AlphaGo. AlphaGo developer Aja Huang (left) places the stones on AlphaGo's behalf. Supplied | Google

Koreans picked Google's artificial intelligence (AI) AlphaGo as an image that comes to mind when they think of Go, second only to the now-retired Lee Se-dol.

According to the "National Perception and Utilization Survey Report" recently commissioned by the Korea Baduk Association from T&O Korea, Lee Se-dol (25.6%) was overwhelmingly the most-cited image associated with Go, followed by AlphaGo (6.0%), the artificial intelligence developed by Google DeepMind. The "Match of the Century" between the 9-dan and AlphaGo eight years ago evidently still lingers in people's minds.

In March 2016, AlphaGo, the AI developed by Google DeepMind, faced Lee Se-dol, the professional Go player then considered the world's best, in a match that shocked the world. Lee Se-dol, the strongest player in Go, the quintessential brain sport, managed one dramatic victory in the five-game series against AlphaGo. When AlphaGo was retired from competitive Go the following year, Lee Se-dol went down on record as "the only human who beat AlphaGo."

Lee Se-dol cited the lament, "Even if you become the No. 1 player, you can't win anyway," as his reason for retiring at the age of 36 in 2019, three years after the match against AlphaGo.

According to the survey, 8.83 million people, or about 20% of the population, can play Go. Men accounted for 74.7% of the Go-playing population, or 6.6 million, and among them, men in their 60s were the largest group at 2.75 million.

Even so, six out of ten Koreans who do not play Go said they would like to take it up in the future, citing 'brain development' and 'a hobby for old age' as the main reasons.

The main factors hindering people from learning Go were 'no opportunity to learn' and 'the rules seem difficult.'

As for how to popularize Go, many respondents said the game needs to shed its static image, the product of long game times, difficult rules, and an old-fashioned, unfashionable reputation.

The survey was conducted with the support of the Ministry of Culture, Sports and Tourism and the National Sports Promotion Agency.

Read more from the original source:
Koreans picked Google Artificial Intelligence (AI) AlphaGo as an image that comes to mind when they .. - MK -

DeepMind AI rivals the world’s smartest high schoolers at geometry – Ars Technica

Demis Hassabis, CEO of DeepMind Technologies and developer of AlphaGo, attends the AI Safety Summit at Bletchley Park on November 2, 2023 in Bletchley, England.

A system developed by Google's DeepMind has set a new record for AI performance on geometry problems. DeepMind's AlphaGeometry managed to solve 25 of the 30 geometry problems drawn from the International Mathematical Olympiad between 2000 and 2022.

That puts the software ahead of the vast majority of young mathematicians and just shy of IMO gold medalists. DeepMind estimates that the average gold medalist would have solved 26 out of 30 problems. Many view the IMO as the world's most prestigious math competition for high school students.

"Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions," DeepMind writes. To overcome this difficulty, DeepMind paired a language model with a more traditional symbolic deduction engine that performs algebraic and geometric reasoning.
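In rough outline, that pairing can be thought of as a loop: the symbolic engine deduces everything it can, and when it stalls short of the goal, the language model proposes an auxiliary construction to unblock it. The Python sketch below is a minimal illustration of that idea, not DeepMind's implementation; all names (symbolic_deduce, lm_propose_construction, prove) are hypothetical stand-ins.

    # Minimal sketch of a neuro-symbolic proving loop (hypothetical names,
    # not DeepMind's API). The symbolic engine computes a deductive closure;
    # the "language model" suggests auxiliary constructions when it stalls.

    def symbolic_deduce(facts: set[str]) -> set[str]:
        """Stand-in for the symbolic engine: would return the deductive
        closure of `facts` under geometric rules (no-op stub here)."""
        return set(facts)

    def lm_propose_construction(facts: set[str]) -> str:
        """Stand-in for the language model: proposes an auxiliary element
        (a new point, line, or circle) to widen the search."""
        return "D := midpoint(B, C)"  # e.g. point D in the proof below

    def prove(givens: set[str], goal: str, budget: int = 16) -> bool:
        facts = symbolic_deduce(givens)
        for _ in range(budget):
            if goal in facts:
                return True  # the symbolic engine closed the gap
            facts.add(lm_propose_construction(facts))  # LM unblocks search
            facts = symbolic_deduce(facts)
        return goal in facts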

The research was led by Trieu Trinh, a computer scientist who recently earned his PhD from New York University. He was a resident at DeepMind between 2021 and 2023.

Evan Chen, a former Olympiad gold medalist who evaluated some of AlphaGeometry's output, praised it as impressive because "it's both verifiable and clean." Whereas some earlier software generated complex geometry proofs that were hard for human reviewers to understand, the output of AlphaGeometry is similar to what a human mathematician would write.

AlphaGeometry is part of DeepMind's larger project to improve the reasoning capabilities of large language models by combining them with traditional search algorithms. DeepMind has published several papers in this area over the last year.

Let's start with a simple example shown in the AlphaGeometry paper, which was published by Nature on Wednesday:

The goal is to prove that if a triangle has two equal sides (AB and AC), then the angles opposite those sides will also be equal. We can do this by creating a new point D at the midpoint of the third side of the triangle (BC). It's easy to show that all three sides of triangle ABD are the same length as the corresponding sides of triangle ACD. And two triangles whose three sides match are congruent, so their corresponding angles are equal.
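Written out step by step, the argument the article describes is the classic side-side-side (SSS) congruence proof:

    \begin{aligned}
    AB &= AC && \text{(given)}\\
    BD &= DC && \text{($D$ is the midpoint of $BC$)}\\
    AD &= AD && \text{(shared side)}\\
    \triangle ABD &\cong \triangle ACD && \text{(SSS congruence)}\\
    \angle ABC &= \angle ACB && \text{(corresponding angles)}
    \end{aligned}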

Geometry problems from the IMO are much more complex than this toy problem, but fundamentally, they have the same structure. They all start with a geometric figure and some facts about the figure, like "side AB is the same length as side AC." The goal is to generate a sequence of valid inferences that conclude with a given statement, like "angle ABC is equal to angle BCA."

For many years, we've had software that can generate lists of valid conclusions that can be drawn from a set of starting assumptions. Simple geometry problems can be solved by brute force: mechanically listing every possible fact that can be inferred from the given assumptions, then listing every possible inference from those facts, and so on until you reach the desired conclusion.
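As a toy illustration of that brute-force strategy (a sketch under simplified assumptions, with facts reduced to opaque strings rather than real geometric objects), the loop below forward-chains rules until the fact set stops growing:

    # Brute-force forward chaining on the isosceles-triangle toy problem.
    # Facts and rules are simplified string propositions, not real geometry.

    facts = {"AB=AC", "BD=DC", "AD=AD"}  # givens, after adding midpoint D
    rules = [
        # (set of premises, conclusion)
        ({"AB=AC", "BD=DC", "AD=AD"}, "ABD congruent ACD"),  # SSS
        ({"ABD congruent ACD"}, "angle ABC = angle ACB"),    # corr. angles
    ]
    goal = "angle ABC = angle ACB"

    changed = True
    while changed:  # repeat until no rule adds a new fact
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(goal in facts)  # True: the goal was reached by exhaustive search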

But this kind of brute-force search isn't feasible for an IMO-level geometry problem because the search space is too large. Not only do harder problems require longer proofs, but sophisticated proofs often require the introduction of new elements to the initial figure, as with point D in the above proof. Once you allow for these kinds of auxiliary points, the space of possible proofs explodes and brute-force methods become impractical.

Visit link:
DeepMind AI rivals the world's smartest high schoolers at geometry - Ars Technica