Archive for the ‘Alphago’ Category

Koreans picked Google Artificial Intelligence (AI) AlphaGo as an image that comes to mind when they .. – MK –

Lee Se-dol 9-dan (right) at his March 2016 match against AlphaGo. AlphaGo developer Aja Huang (left) is placing the stones on AlphaGo's behalf. Supplied | Google

Koreans picked Google's artificial intelligence (AI) AlphaGo as an image that comes to mind when they think of Go, second only to the now-retired Lee Se-dol.

According to the "National Perception and Utilization Survey Report" recently commissioned by the Korea Baduk Association from T&O Korea, Lee Se-dol (25.6%) was by far the most frequently cited image associated with Go, followed by AlphaGo (6.0%), the artificial intelligence (AI) developed by Google DeepMind. The "match of the century" between the 9-dan and AlphaGo eight years ago still appears to linger in people's minds.

In March 2016, the match between "AlphaGo," the AI developed by Google DeepMind, and Lee Se-dol, a professional Go player considered the world's best at the time, shocked the world. Lee Se-dol, then the strongest player in Go, the quintessential brain sport, scored one dramatic win in the five-game match against AlphaGo. As AlphaGo retired from competitive Go the following year, Lee Se-dol went down in history as "the only human who beat AlphaGo."

Explaining his decision to retire in 2019 at the age of 36, three years after the match against AlphaGo, Lee Se-dol lamented, "Even if you become the No. 1 player, you can't win anyway."

According to the survey, 8.83 million people, about 20% of the country's population, can play Go. Men accounted for 74.7% of the Go-playing population, or 6.6 million, and among them, men in their 60s were the largest group at 2.75 million.

Meanwhile, six out of 10 Koreans who do not play Go said they would like to learn it in the future. The main reasons given were 'brain development' and 'a hobby for old age'.

Factors cited as barriers to learning Go included 'no opportunity to learn' and 'the rules seem difficult'.

As for ways to popularize Go, many respondents said the game needs to shed its static image, which stems from long game times, difficult rules, and a reputation for being old-fashioned and out of favor.

The survey was conducted with the support of the Ministry of Culture, Sports and Tourism and the National Sports Promotion Agency.

Read more from the original source:
Koreans picked Google Artificial Intelligence (AI) AlphaGo as an image that comes to mind when they .. - MK -

DeepMind AI rivals the world’s smartest high schoolers at geometry – Ars Technica

Demis Hassabis, CEO of DeepMind Technologies and developer of AlphaGo, attends the AI Safety Summit at Bletchley Park on November 2, 2023 in Bletchley, England.

A system developed by Google's DeepMind has set a new record for AI performance on geometry problems. DeepMind's AlphaGeometry managed to solve 25 of the 30 geometry problems drawn from the International Mathematical Olympiad between 2000 and 2022.

That puts the software ahead of the vast majority of young mathematicians and just shy of IMO gold medalists. DeepMind estimates that the average gold medalist would have solved 26 out of 30 problems. Many view the IMO as the world's most prestigious math competition for high school students.

"Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions," DeepMind writes. To overcome this difficulty, DeepMind paired a language model with a more traditional symbolic deduction engine that performs algebraic and geometric reasoning.
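
In outline, that pairing can be pictured as a loop: a symbolic engine derives everything it can from the current facts, and when it stalls, a language model proposes an auxiliary construction that may unlock further deductions. The sketch below is a minimal illustration of that division of labour, using hypothetical `propose_construction` and `deduce_closure` helpers rather than DeepMind's actual code.

```python
# Minimal sketch of the neuro-symbolic loop described above. The
# `language_model` and `symbolic_engine` objects and their methods are
# hypothetical stand-ins, not DeepMind's actual AlphaGeometry APIs.

def prove(premises, goal, language_model, symbolic_engine, max_constructions=10):
    """Alternate exhaustive symbolic deduction with LM-proposed constructions."""
    facts = set(premises)
    for _ in range(max_constructions):
        # Let the symbolic engine derive everything it can from the current facts.
        facts |= symbolic_engine.deduce_closure(facts)
        if goal in facts:
            return True  # the goal follows from the premises plus constructions
        # Deduction has stalled: ask the language model for a promising
        # auxiliary construction (e.g. "let D be the midpoint of BC").
        construction = language_model.propose_construction(facts, goal)
        if construction is None:
            return False  # the model has nothing further to suggest
        facts.add(construction)
    return False  # construction budget exhausted without reaching the goal
```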

The research was led by Trieu Trinh, a computer scientist who recently earned his PhD from New York University. He was a resident at DeepMind between 2021 and 2023.

Evan Chen, a former Olympiad gold medalist who evaluated some of AlphaGeometry's output, praised it as impressive because it's both verifiable and clean. Whereas some earlier software generated complex geometry proofs that were hard for human reviewers to understand, the output of AlphaGeometry is similar to what a human mathematician would write.

AlphaGeometry is part of DeepMind's larger project to improve the reasoning capabilities of large language models by combining them with traditional search algorithms. DeepMind has published several papers in this area over the last year.

Let's start with a simple example shown in the AlphaGeometry paper, which was published by Nature on Wednesday:

The goal is to prove that if a triangle has two equal sides (AB and AC), then the angles opposite those sides will also be equal. We can do this by creating a new point D at the midpoint of the third side of the triangle (BC). It's easy to show that all three sides of triangle ABD are the same length as the corresponding sides of triangle ACD. And two triangles whose corresponding sides are all equal are congruent, so their corresponding angles are equal too.
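
Written out formally, the argument in that paragraph is just the side-side-side (SSS) congruence applied to the two half-triangles:

```latex
\begin{align*}
&\text{Given } AB = AC \text{ and } D \text{ the midpoint of } BC:\\
&AB = AC \quad \text{(given)}\\
&BD = DC \quad \text{($D$ is the midpoint of $BC$)}\\
&AD = AD \quad \text{(shared side)}\\
&\Rightarrow\ \triangle ABD \cong \triangle ACD \quad \text{(SSS)}\\
&\Rightarrow\ \angle ABD = \angle ACD,\ \text{i.e. } \angle ABC = \angle ACB.
\end{align*}
```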

Geometry problems from the IMO are much more complex than this toy problem, but fundamentally, they have the same structure. They all start with a geometric figure and some facts about the figure, like "side AB is the same length as side AC." The goal is to generate a sequence of valid inferences that conclude with a given statement, like "angle ABC is equal to angle BCA."

For many years, we've had software that can generate lists of valid conclusions that can be drawn from a set of starting assumptions. Simple geometry problems can be solved by brute force: mechanically listing every possible fact that can be inferred from the given assumptions, then listing every possible inference from those facts, and so on until you reach the desired conclusion.
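
As a toy illustration of that brute-force strategy, the sketch below repeatedly applies a set of inference rules to a pool of facts until either the desired conclusion appears or nothing new can be derived. The fact strings and rule encoding are invented for the example; they are not AlphaGeometry's internal representation.

```python
# Toy forward-chaining prover: mechanically apply rules until the goal appears
# or no new facts can be derived. Facts and rules are invented string encodings.

def forward_chain(facts, rules, goal):
    """facts: set of strings; rules: list of (premise_set, conclusion) pairs."""
    facts = set(facts)
    while goal not in facts:
        new_facts = {conclusion
                     for premises, conclusion in rules
                     if premises <= facts and conclusion not in facts}
        if not new_facts:
            return False          # exhausted every inference without success
        facts |= new_facts        # keep the new round of conclusions and repeat
    return True

# The isosceles-triangle argument from above, flattened into toy facts.
rules = [
    ({"AB=AC", "BD=DC", "AD=AD"}, "triangle ABD congruent to ACD"),  # SSS
    ({"triangle ABD congruent to ACD"}, "angle ABC = angle ACB"),    # matching angles
]
print(forward_chain({"AB=AC", "BD=DC", "AD=AD"}, rules, "angle ABC = angle ACB"))  # True
```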

But this kind of brute-force search isn't feasible for an IMO-level geometry problem because the search space is too large. Not only do harder problems require longer proofs, but sophisticated proofs often require the introduction of new elements to the initial figure, as with point D in the above proof. Once you allow for these kinds of auxiliary points, the space of possible proofs explodes and brute-force methods become impractical.

Visit link:
DeepMind AI rivals the world's smartest high schoolers at geometry - Ars Technica

Why top AI talent is leaving Google’s DeepMind – Sifted

Long before OpenAI was wowing the world with ChatGPT, there was DeepMind.

Founded in 2010 in London, it built a team of researchers plucked from the UK's top universities, who have since pioneered some of the world's most high-profile breakthroughs in AI, including the protein structure prediction system AlphaFold in 2020 and the champion-beating board game player AlphaGo in 2016.

In 2014, it was scooped up by Google for $400m, one of the largest European tech acquisitions ever at the time.

And it has, until recently, operated largely independently, enjoying access to the financial and hardware resources of its parent company and the freedom to conduct blue-sky research across generative models, reinforcement learning, robotics, safety and protein folding. In 2021, the company spun out Isomorphic Labs, an independent lab dedicated to applying protein folding techniques to drug discovery.

But now, as other Big Tech companies like Meta, Microsoft and Amazon are betting the house on AI, Google has realised the race is on. In April, it announced its internal AI lab, Google Brain, would merge with DeepMind. Its goal: to win the race to build the world's first artificial general intelligence (AGI).

"Now, with the competition of OpenAI and the realisation that AGI is going to be perhaps the world's most profitable product ever, it's not a sure slam dunk that it's going to be Google that gets there," one former DeepMind research engineer, who asked to be kept anonymous, tells Sifted.

The first result of this pooling of resources to stay ahead of the pack looks set to be Gemini, a large language model that's powered by some of the problem-solving techniques that went into AlphaGo. It's expected to be released in the coming months.

At the same time, Google is facing a new AI economy where the best AI researchers have more options than ever, to build their own thing or join one of several other well-funded AI labs with huge resources, and are, increasingly, choosing to explore them.

Google's merger with DeepMind is a big transformation for a company that's unlike any other in the field of AI and that spent much of the 2010s hiring the brightest minds in machine learning from Europe's top universities.

"What DeepMind did was it bought academia... It took so many of the best professors and graduates, who would all have gone into academia otherwise, and it built this research hub," says one former employee who worked with the ethics team. "The early premise was that you'd only be researching; it wouldn't be about making money."

In 2022, DeepMind was responsible for 12% of the most-cited AI research papers published globally, putting it ahead of Microsoft, Stanford and UC Berkeley, with only Meta and Google creating more research impact, according to research from AI search startup Zeta Alpha.

DeepMind generates revenue from selling services internally within the Alphabet Group, as well as through external contracts such as a partnership with Britain's National Health Service. It's been profitable since 2020 but saw its margins squeezed in its latest company accounts.

This is where Gemini comes in. With OpenAI on track to make more than $1bn in revenue in 2023 from its LLMs, Google wants to release something bigger and better.

The fact that Gemini will be built using techniques from AlphaGo, the game-playing AI that beat a human Go champion in 2016, suggests it could end up being more powerful, and useful, than OpenAI's GPT-4. That's because the model will combine the brute-force statistical prediction capabilities of LLMs with the problem-solving capabilities of reinforcement learning (the machine learning approach used in AlphaGo).

Google also has a lot of computing power (known as "compute" in the AI industry) at hand. Access to specialist AI chips is a key factor in training powerful models, and semiconductor news site SemiAnalysis recently described Google as the most compute-rich company in the world.

The publication estimates that the company's compute infrastructure will be five times more powerful than OpenAI's by the end of this year, and 20 times heftier by the end of next year.

But while Google DeepMind flexes its language model and reinforcement learning chops to build Gemini, question marks hang over what the merger means for researchers who are focused on more foundational research that's further from commercialisation.

Former employees tell Sifted that it's still unclear how the push for productisation of DeepMind's research will affect teams in the long run, but some would rather leave and start their own thing than wait and see.

"The move towards a more product focus meant morale was low among some people more on the frontier research side," says Sid Jayakumar, founder of GenAI startup Finster AI, who spent seven years at DeepMind.

"We hired a lot of really good, really senior engineers, researchers who we basically asked to replicate an academic setting within industry, which was unique at the time and what was needed to build things like AlphaGo and AlphaFold."

"It's no longer just an academic setting, and rightfully so, in my view. But if you came from that [academic] perspective, you go, 'This isn't great, what we were hired to do is no longer the priority,'" Jayakumar adds.

One former research scientist tells Sifted that one of the reasons he recently left DeepMind was that he wasn't sure if the projects he was working on would survive the push to productise the lab's research.

"We were working on quite fundamental stuff and it's not always clear how that survives a change," they tell Sifted. "My personal thoughts were, 'What's going to happen to these fundamental research programmes when we're asked for more commercial impact?'"

For many AI engineers, DeepMind remains a killer place to have on the CV, but top researchers are leaving to found their own ventures in apparently increasing numbers. Sixteen former DeepMinders launched their own ventures in the last twelve months, compared to seven in the previous year, according to Sifted analysis of LinkedIn.

Recent leavers include Cyprien de Masson d'Autume, cofounder of AI research and product company Reka AI, and Michael Johanson, cofounder of Artificial.Agency, a Canada-based AI startup that's currently still in stealth mode. Both de Masson d'Autume and Johanson served as senior researchers at DeepMind.

The outflow of top researchers is a trend that mirrors Google's own track record on AI talent retention, as many of the researchers behind its biggest breakthroughs have now left the company. In the past eight years, twenty top researchers who worked across milestone papers have moved on to found companies including Character.AI, Cohere and Adept, or to work at big AI labs like Meta, Hugging Face and Anthropic.

The company's most high-profile loss is likely Arthur Mensch, cofounder of Mistral AI, the Paris-based AI startup that recently raised a massive €105m seed round and is seen as one of Europe's brightest contenders to build LLMs like GPT-4.

He recently told Sifted he'd left DeepMind because the company was not innovative enough, with Mistral going on to release its own language model in just three months.

Another former DeepMind researcher-turned-founder also told Sifted that, given the rapid progress in AI, they left the company this year to launch a venture that could be more agile.

"As a large listed organisation, I think there's a lot of worry around releasing something to users that's not perfect," they tell Sifted. "You can iterate much faster and get feedback faster outside [of Google] and I think that was my main motivation."

Those who haven't left are being constantly approached by recruiters.

"There's lots of people who are biding their time working on ideas and intending to leave. You've got to understand, DeepMind researchers are being called up by recruiters who are saying, 'I can easily get you a $700k or $800k salary,'" says one investor that's close to the company.

But there are also plenty who want to stay, says former employee Jayakumar.

"Google DeepMind's got the best AI team and has had consistently. Google has never moved faster and I don't remember urgency being shown like it is now... I would actually be more worried if they were still focusing the most on that very open blue-sky research and hadn't moved towards productionising."

Sifted reached out to DeepMind asking for an interview and responses to the points made in this piece. The company declined an interview, but Dex Hunter-Torricke, head of communications at Google DeepMind, says that the work the company does reaches billions of people through Google's products and delivers industry-leading breakthroughs in science and research.

"We're proud of our world-class team and delighted to continue attracting the best talent," he adds.

See more here:
Why top AI talent is leaving Google's DeepMind - Sifted

Who Is Ilya Sutskever, Meet The Man Who Fired Sam Altman – Dataconomy

The latest tea on social media is Sam Altman getting fired from OpenAI, but who is Ilya Sutskever, the man who is said to be responsible for Altman leaving the company that is the mastermind behind today's hottest tech, ChatGPT? Let's take a closer look at Sutskever and his life in this piece!

Rumors and questions have swirled regarding the nature of Altman's exit, fueling speculation about potential internal strife. During a crucial all-hands meeting on the day of the leadership shake-up, Sutskever took center stage to address growing concerns. Reports from the New York Times suggest that he vehemently refuted claims of a hostile takeover, instead characterizing the move as a protective measure safeguarding the core mission of OpenAI. Now let's answer the real question: who is Ilya Sutskever?

Sutskever's story kicks off in Gorky, Russia, in the mid-1980s. A move to Israel at the age of 5 set the stage for his formative years in Jerusalem. Fast forward to the early 2000s, and Sutskever is honing his math skills at the Open University of Israel. His thirst for knowledge took him to the University of Toronto in Canada, where he clinched his Ph.D. in computer science in 2013 under the guidance of Geoffrey Hinton.

Sutskever's impact in the field is undeniable. Co-inventing AlexNet with Krizhevsky and Hinton, he laid the groundwork for modern deep learning. His fingerprints are also on the AlphaGo paper, showcasing his knack for staying ahead in the ever-evolving AI landscape.


A stint at Google Brain sees Sutskever collaborating with industry heavyweights on cutting-edge projects. His work on the sequence-to-sequence learning algorithm and contributions to TensorFlow underscore his commitment to pushing AI's boundaries. But, in 2015, he takes a leap of faith, leaving Google to co-found OpenAI.

Sutskever's brilliance doesn't go unnoticed. MIT Technology Review lauds him in 2015 as one of the 35 Innovators Under 35. Keynote speeches at Nvidia Ntech 2018 and AI Frontiers Conference 2018 cement his status as a thought leader. In 2022, he achieves the pinnacle of recognition as a Fellow of the Royal Society (FRS).

Yet, no narrative is complete without its twists and turns. The reason people want to know the answer to "Who is Ilya Sutskever?" is a little complicated. In November 2023, OpenAI found itself at the epicenter of controversy. Sutskever, a prominent board member, played a pivotal role in the decision to remove Sam Altman from his position and witnessed the subsequent resignation of Greg Brockman. Reports surfaced indicating a clash over the company's stance on AI safety.


In a company-wide address, Sutskever defended the decision, framing it as the board doing its duty. However, the fallout was palpable, leading to the departure of three senior researchers from OpenAI.

In the grand tapestry of artificial intelligence, Ilya Sutskever's narrative unfolds as a riveting chapter. From his roots in Russia to the tumultuous boardroom discussions at OpenAI, Sutskever's journey is emblematic of the challenges and triumphs that define the ever-evolving field of AI. As the technological landscape continues to shift, Sutskever remains a key player, shaping the trajectory of artificial intelligence with each stride.

Featured image credit: Nvidia

Follow this link:
Who Is Ilya Sutskever, Meet The Man Who Fired Sam Altman - Dataconomy

Microsoft’s LLM ‘Everything Of Thought’ Method Improves AI … – AiThority

Everything of Thought (XOT)

As large language models continue to progressively impact every part of our lives, Microsoft has revealed a strategy to make AI reason better, termed Everything of Thought (XOT). This approach was motivated by Google DeepMind's AlphaZero, which achieves competitive performance with extremely small neural networks.

Researchers from Microsoft, East China Normal University and the Georgia Institute of Technology worked together to create the new XOT technique. They employed a combination of well-known, successful strategies for making difficult decisions, such as reinforcement learning and Monte Carlo Tree Search (MCTS).
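
For readers unfamiliar with MCTS, the skeleton below shows the algorithm in its generic, single-agent form: selection by an upper-confidence rule, expansion, a random rollout, and backpropagation of the result. It is an illustrative sketch only, assuming a `state` object with `legal_moves`, `play`, `is_terminal` and `reward` methods; it is not Microsoft's XOT implementation, which, per the article, couples this kind of search with reinforcement learning and compact neural networks.

```python
# Generic Monte Carlo Tree Search skeleton, shown for illustration only.
# The `state` interface (legal_moves, play, is_terminal, reward) is assumed,
# and reward is read from a single fixed perspective (single-agent setting).
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}              # move -> child Node
        self.visits, self.value = 0, 0.0

    def ucb_child(self, c=1.4):
        # Child maximising the UCB1 score: exploitation plus exploration bonus.
        return max(self.children.values(),
                   key=lambda n: n.value / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes using UCB1.
        while node.children and len(node.children) == len(node.state.legal_moves()):
            node = node.ucb_child()
        # 2. Expansion: try one untested move, unless the state is terminal.
        if not node.state.is_terminal():
            move = random.choice([m for m in node.state.legal_moves()
                                  if m not in node.children])
            node.children[move] = Node(node.state.play(move), parent=node)
            node = node.children[move]
        # 3. Simulation: play random moves to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        reward = state.reward()
        # 4. Backpropagation: update every node on the path back to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```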


The researchers claim that by combining these methods, language models can generalize more effectively to novel scenarios. Their experiments on hard problems, including the Game of 24, the 8-Puzzle, and the Pocket Cube, showed promising results: compared with alternative approaches, XOT proved superior at solving previously intractable problems. There are, of course, limits to this superiority. Despite its achievements, the system has not attained 100% reliability.
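
For context on the benchmarks: the Game of 24 asks whether four given numbers can be combined with +, -, * and / (each number used exactly once) into an expression that equals 24. The brute-force solver below is included only to illustrate the task itself, not the XOT method.

```python
# Brute-force solver for the Game of 24 mentioned above: combine four numbers
# with + - * / (each used exactly once) to reach 24. Illustrates the task only.
from itertools import permutations, product

def solve_24(numbers, target=24):
    for a, b, c, d in permutations(numbers):
        for o1, o2, o3 in product("+-*/", repeat=3):
            # All five ways of bracketing four operands; eval is used for brevity.
            for expr in (f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                         f"({a}{o1}({b}{o2}{c})){o3}{d}",
                         f"({a}{o1}{b}){o2}({c}{o3}{d})",
                         f"{a}{o1}(({b}{o2}{c}){o3}{d})",
                         f"{a}{o1}({b}{o2}({c}{o3}{d}))"):
                try:
                    if abs(eval(expr) - target) < 1e-9:
                        return expr       # first expression found that hits 24
                except ZeroDivisionError:
                    continue
    return None                           # no combination reaches the target

print(solve_24([3, 3, 8, 8]))  # prints a valid expression, e.g. 8/(3-(8/3))
```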


Still, the study team considers the framework a good way to bring outside knowledge into language model inference, arguing that it boosts performance, efficiency, and flexibility all at once, which they say is not possible with other approaches.

Researchers are looking at games as a potential next step for improving language models, since current models can produce sentences with outstanding precision but lack a key component of human-like thinking: the capacity to reason.

The scholarly and technological community has spent years delving into this problem. Despite efforts to supplement AI with more layers, parameters, and attention mechanisms, a solution remains elusive. Multimodality research has also been conducted, although so far it has not yielded any particularly promising or cutting-edge results.

We all know how terrible ChatGPT was at arithmetic when it launched, but earlier this year a team from Virginia Tech and Microsoft created a method called Algorithm of Thoughts (AoT) to enhance AI's algorithmic reasoning. That work also hinted that, by using this strategy, large language models might eventually be able to use their intuition in conjunction with optimized search to provide more accurate results.

A little over a month ago, Microsoft also investigated the moral reasoning exhibited by these models, and the group proposed a revised framework to gauge their capacity for forming moral judgments. The 70-billion-parameter LlamaChat model fared better than its bigger rivals in the end. The findings ran counter to the conventional wisdom that more is better and to the community's dependence on large values for key metrics.

Microsoft looks to be taking a cautious approach to advancement while other large internet companies continue to suffer the repercussions of their flawed language models. It is adding complexity to its models slowly and steadily.

The XOT technique has not been announced for inclusion in any Microsoft products. Meanwhile, Google DeepMind CEO Demis Hassabis indicated in an interview that the company is exploring incorporating ideas inspired by AlphaGo into its Gemini project.

Meta's CICERO, named for the famous Roman orator, also joined the fray a year ago, and its impressive proficiency in the difficult board game Diplomacy raised eyebrows in the AI community. Because it requires not only strategic thinking but also the ability to negotiate, the game has long been seen as a challenge for artificial intelligence. CICERO, however, showed that it could handle these situations, holding sophisticated, human-like discussions. Given the standards established by DeepMind, this result did not go unnoticed: the UK-based research team has long advocated using games to train neural networks.

DeepMind's successes with AlphaGo set a high standard, which Meta matched by taking a page from DeepMind's playbook and fusing strategic-reasoning algorithms (like AlphaGo's) with a natural language processing model. Meta's model stood out because an AI agent playing Diplomacy needs not just knowledge of the game's rules and strategies, but also an accurate assessment of the likelihood of treachery by human opponents. As Meta continues to develop Llama-3, the agent's capacity to carry on natural-sounding conversations with people makes it a valuable asset, and Meta's larger AI programs, including CICERO, may herald the arrival of conversational AI.


Excerpt from:
Microsoft's LLM 'Everything Of Thought' Method Improves AI ... - AiThority