Archive for the ‘AlphaGo’ Category

Why top AI talent is leaving Google’s DeepMind – Sifted

Long before OpenAI was wowing the world with ChatGPT, there was DeepMind.

Founded in London in 2010, it built a team of researchers plucked from the UK's top universities, who have since pioneered some of the world's most high-profile breakthroughs in AI, including the champion-beating board game player AlphaGo in 2016 and the protein structure prediction system AlphaFold in 2020.

In 2014, it was scooped up by Google for $400m, one of the largest European tech acquisitions ever at the time.

And it has, until recently, operated largely independently, enjoying access to the financial and hardware resources of its parent company and the freedom to conduct blue-sky research across generative models, reinforcement learning, robotics, safety and protein folding. In 2021, the company spun out Isomorphic Labs, an independent lab dedicated to applying protein folding techniques to drug discovery.

But now, as other Big Tech companies like Meta, Microsoft and Amazon bet the house on AI, Google has realised the race is on. In April, it announced that its internal AI lab, Google Brain, would merge with DeepMind. Its goal: to win the race to build the world's first artificial general intelligence (AGI).

"Now, with the competition of OpenAI and the realisation that AGI is going to be perhaps the world's most profitable product ever, it's not a sure slam dunk that it's going to be Google that gets there," one former DeepMind research engineer, who asked to remain anonymous, tells Sifted.

The first result of this pooling of resources to stay ahead of the pack looks set to be Gemini, a large language model that's powered by some of the problem-solving techniques that went into AlphaGo. It's expected to be released in the coming months.

At the same time, Google is facing a new AI economy in which the best AI researchers have more options than ever, from building their own thing to joining one of several other well-funded AI labs with huge resources, and they are, increasingly, choosing to explore them.

Google's merger with DeepMind is a big transformation for a company that's unlike any other in the field of AI, one that spent much of the 2010s hiring the brightest minds in machine learning from Europe's top universities.

"What DeepMind did was it bought academia. It took so many of the best professors and graduates, who would all have gone into academia otherwise, and it built this research hub," says one former employee who worked with the ethics team. "The early premise was that you'd only be researching; it wouldn't be about making money."

In 2022, DeepMind was responsible for 12% of the most-cited AI research papers published globally, putting it ahead of Microsoft, Stanford and UC Berkeley, with only Meta and Google creating more research impact, according to research from AI search startup Zeta Alpha.

DeepMind generates revenue by selling services internally within the Alphabet group, as well as through external contracts such as a partnership with Britain's National Health Service. It's been profitable since 2020, but its latest company accounts show squeezed margins.

This is where Gemini comes in. With OpenAI on track to make more than $1bn in revenue from its LLMs in 2023, Google wants to release something bigger and better.

The fact that Gemini will be built using techniques from AlphaGo, the game-playing AI that beat a human Go champion in 2016, suggests it could end up being more powerful, and more useful, than OpenAI's GPT-4. That's because the model will combine the brute-force statistical prediction capabilities of LLMs with the problem-solving capabilities of reinforcement learning (the machine learning approach used in AlphaGo).
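To make that combination concrete, here is a minimal sketch of one way statistical generation and a learned scoring signal can be paired: sample-and-rerank, where a generator proposes candidates and a value function picks the winner. This is purely illustrative; nothing is known publicly about Gemini's internals, and `propose_candidates` and `value` are toy stand-ins for an LLM and a reinforcement-learned value model.

```python
import random

# Toy "LLM": a fixed pool of guesses for an "a + b" prompt, shuffled to
# mimic sampling several candidate completions.
def propose_candidates(prompt):
    guesses = ["3", "4", "5", "22", "fish"]
    random.shuffle(guesses)
    return guesses

# Toy "value model": scores a candidate answer to "a + b" by checking the
# arithmetic. A real system would use a trained reward/value network here.
def value(prompt, candidate):
    a, b = (int(t) for t in prompt.split(" + "))
    return 1.0 if candidate == str(a + b) else 0.0

# Sample-and-rerank: generate with the "LM", then let the scoring layer
# pick the best candidate instead of trusting the first sample.
def best_of_n(prompt):
    candidates = propose_candidates(prompt)
    return max(candidates, key=lambda c: value(prompt, c))
```

In systems like AlphaGo the same idea runs many steps deep, with a tree search consulting the value network at every node rather than once per finished candidate.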

Google also has a lot of computing resources (known as "compute" in the AI industry) at hand. Access to specialist chips is a key factor in training powerful models, and semiconductor news site SemiAnalysis recently described Google as the most compute-rich company in the world.

The publication estimates that the company's compute infrastructure will be five times more powerful than OpenAI's by the end of this year, and 20 times heftier by the end of next year.

But while Google DeepMind flexes its language model and reinforcement learning chops to build Gemini, question marks hang over what the merger means for researchers focused on more foundational work that's further from commercialisation.

Former employees tell Sifted that it's still unclear how the push to productise DeepMind's research will affect teams in the long run, but some would rather leave and start their own thing than wait and see.

"The move towards a more product focus meant morale was low among some people more on the frontier research side," says Sid Jayakumar, founder of GenAI startup Finster AI, who spent seven years at DeepMind.

"We hired a lot of really good, really senior engineers and researchers who we basically asked to replicate an academic setting within industry, which was unique at the time and what was needed to build things like AlphaGo and AlphaFold."

"It's no longer just an academic setting, and rightfully so, in my view. But if you came from that [academic] perspective, you go, 'This isn't great; what we were hired to do is no longer the priority,'" Jayakumar adds.

One former research scientist tells Sifted that one of the reasons they recently left DeepMind was that they weren't sure whether the projects they were working on would survive the push to productise the lab's research.

"We were working on quite fundamental stuff and it's not always clear how that survives a change," they tell Sifted. "My personal thoughts were, 'What's going to happen to these fundamental research programmes when we're asked for more commercial impact?'"

For many AI engineers, DeepMind remains a killer place to have on the CV, but top researchers are leaving to found their own ventures in apparently increasing numbers. Sixteen former DeepMinders launched their own ventures in the last twelve months, compared to seven in the previous year, according to Sifted analysis of LinkedIn.

Recent leavers include Cyprien de Masson d'Autume, cofounder of AI research and product company Reka AI, and Michael Johanson, cofounder of Artificial.Agency, a Canada-based AI startup that's currently still in stealth mode. Both served as senior researchers at DeepMind.

The outflow of top researchers mirrors Google's own track record on AI talent retention: many of the researchers behind its biggest breakthroughs have now left the company. In the past eight years, twenty top researchers who worked on milestone papers have moved on to found companies including Character.AI, Cohere and Adept, or to work at big AI labs like Meta, Hugging Face and Anthropic.

The company's most high-profile loss is likely Arthur Mensch, cofounder of Mistral AI, the Paris-based AI startup that recently raised a massive €105m seed round and is seen as one of Europe's brightest contenders to build LLMs like GPT-4.

He recently told Sifted he'd left DeepMind because the company was "not innovative enough", with Mistral going on to release its own language model in just three months.

Another former DeepMind researcher-turned-founder also told Sifted that, given the rapid progress in AI, they left the company this year to launch a venture that could be more agile.

"As a large listed organisation, I think there's a lot of worry around releasing something to users that's not perfect," they tell Sifted. "You can iterate much faster and get feedback faster outside [of Google], and I think that was my main motivation."

Those who haven't left are constantly being approached by recruiters.

"There's lots of people who are biding their time, working on ideas and intending to leave. You've got to understand, DeepMind researchers are being called up by recruiters who are saying, 'I can easily get you a $700k or $800k salary,'" says one investor who's close to the company.

But there are also plenty who want to stay, says former employee Jayakumar.

"Google DeepMind's got the best AI team, and has had consistently. Google has never moved faster and I don't remember urgency being shown like it is now. I would actually be more worried if they were still focusing the most on that very open blue-sky research and hadn't moved towards productionising."

Sifted reached out to DeepMind asking for an interview and responses to the points made in this piece. The company declined an interview, but Dex Hunter-Torricke, head of communications at Google DeepMind, says that the work the company does reaches billions of people through Google's products and delivers industry-leading breakthroughs in science and research.

"We're proud of our world-class team and delighted to continue attracting the best talent," he adds.

See more here:
Why top AI talent is leaving Google's DeepMind - Sifted

Who Is Ilya Sutskever, Meet The Man Who Fired Sam Altman – Dataconomy

The latest tea on social media is Sam Altman getting fired from OpenAI. But who is Ilya Sutskever, the man said to be responsible for Altman leaving the company that is the mastermind behind today's hottest tech, ChatGPT? Let's take a closer look at Sutskever and his life in this piece!

Rumors and questions have swirled regarding the nature of Altman's exit, fueling speculation about potential internal strife. During a crucial all-hands meeting on the day of the leadership shake-up, Sutskever took center stage to address growing concerns. Reports from the New York Times suggest that he vehemently refuted claims of a hostile takeover, instead characterizing the move as a protective measure safeguarding OpenAI's core mission. Now let's answer the real question: who is Ilya Sutskever?

Sutskever's story kicks off in Gorky, in what was then the Soviet Union, in the mid-1980s. A move to Israel at the age of five set the stage for his formative years in Jerusalem. Fast forward to the early 2000s, and Sutskever is honing his math skills at the Open University of Israel. His thirst for knowledge took him to the University of Toronto in Canada, where he clinched his Ph.D. in computer science in 2013 under the guidance of Geoffrey Hinton.

Sutskever's impact on the field is undeniable. Co-inventing AlexNet with Alex Krizhevsky and Geoffrey Hinton, he laid the groundwork for modern deep learning. His fingerprints are also on the AlphaGo paper, showcasing his knack for staying ahead in the ever-evolving AI landscape.


A stint at Google Brain sees Sutskever collaborating with industry heavyweights on cutting-edge projects. His work on the sequence-to-sequence learning algorithm and his contributions to TensorFlow underscore his commitment to pushing AI's boundaries. But, in 2015, he takes a leap of faith, leaving Google to co-found OpenAI.

Sutskever's brilliance doesn't go unnoticed. MIT Technology Review lauds him in 2015 as one of its 35 Innovators Under 35. Keynote speeches at Nvidia Ntech 2018 and the AI Frontiers Conference 2018 cement his status as a thought leader. In 2022, he achieves the pinnacle of recognition, becoming a Fellow of the Royal Society (FRS).

Yet no narrative is complete without its twists and turns, and the reason people want to know the answer to "Who is Ilya Sutskever?" is a little complicated. In November 2023, OpenAI found itself at the epicenter of controversy. Sutskever, a prominent board member, played a pivotal role in the decision to remove Sam Altman from his position and witnessed the subsequent resignation of Greg Brockman. Reports surfaced indicating a clash over the company's stance on AI safety.


In a company-wide address, Sutskever defended the decision, framing it as the board doing its duty. However, the fallout was palpable, leading to the departure of three senior researchers from OpenAI.

In the grand tapestry of artificial intelligence, Ilya Sutskever's narrative unfolds as a riveting chapter. From his roots in Russia to the tumultuous boardroom discussions at OpenAI, Sutskever's journey is emblematic of the challenges and triumphs that define the ever-evolving field of AI. As the technological landscape continues to shift, Sutskever remains a key player, shaping the trajectory of artificial intelligence with each stride.


Follow this link:
Who Is Ilya Sutskever, Meet The Man Who Fired Sam Altman - Dataconomy

Microsoft’s LLM ‘Everything Of Thought’ Method Improves AI … – AiThority

Everything of Thought (XOT)

As large language models continue to progressively impact every part of our lives, Microsoft has revealed a strategy to make AI reason better, termed Everything of Thought (XOT). The approach was motivated by Google DeepMind's AlphaZero, which achieves competitive performance with extremely small neural networks.

Microsoft worked with East China Normal University and the Georgia Institute of Technology to create the new XOT technique, which combines well-known strategies for making difficult decisions, such as reinforcement learning and Monte Carlo Tree Search (MCTS).


The researchers claim that by combining these methods, language models can generalize more effectively to novel scenarios. Their experiments on hard problems, including the Game of 24, the 8-Puzzle and the Pocket Cube, showed promising results, with XOT proving superior to alternative approaches at solving previously intractable problems. There are, of course, limits to this supremacy: despite its achievements, the system has not attained 100% reliability.
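The article doesn't reproduce the XOT algorithm, but the MCTS component it mentions can be sketched in miniature. The toy task below (reach 24 from a starting number using only "+1" and "×2" moves) is an invented stand-in for benchmarks like the Game of 24, and the code is a generic textbook MCTS loop with UCB1 selection, not the paper's actual implementation, which additionally trains policy and value networks to guide the search.

```python
import math
import random

# Toy task: starting from a number, reach exactly 24 using "+1" and "*2"
# moves within MAX_DEPTH steps.
TARGET, MAX_DEPTH = 24, 8
ACTIONS = [lambda x: x + 1, lambda x: x * 2]

class Node:
    def __init__(self, state, depth, parent=None):
        self.state, self.depth, self.parent = state, depth, parent
        self.children = {}            # action index -> child Node
        self.visits, self.value = 0, 0.0

def terminal(node):
    return node.state == TARGET or node.depth >= MAX_DEPTH

def rollout(state, depth):
    # Random playout; reward 1.0 if the target is ever reached.
    while depth < MAX_DEPTH:
        state = random.choice(ACTIONS)(state)
        depth += 1
        if state == TARGET:
            return 1.0
    return 0.0

def select_child(node, c=1.4):
    # UCB1: trade off a child's mean reward against how rarely it was tried.
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(root_state, iterations=400):
    root = Node(root_state, 0)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded, non-terminal nodes.
        while not terminal(node) and len(node.children) == len(ACTIONS):
            node = select_child(node)
        # 2. Expansion: try one untried action from this node.
        if not terminal(node):
            a = random.choice([i for i in range(len(ACTIONS))
                               if i not in node.children])
            node.children[a] = Node(ACTIONS[a](node.state), node.depth + 1, node)
            node = node.children[a]
        # 3. Simulation: estimate the leaf's value with a random playout.
        reward = 1.0 if node.state == TARGET else rollout(node.state, node.depth)
        # 4. Backpropagation: update statistics along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited first move.
    return max(root.children, key=lambda a: root.children[a].visits)
```

From 23, for example, the search recommends the "+1" move, which reaches the target immediately. In XOT, thoughts found by this kind of search are injected into the language model's prompt rather than played out on a board.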


However, the study team thinks the framework is a good way to bring outside knowledge into language model inference. They are certain that it boosts performance, efficiency and adaptability all at once, which they argue is not possible with other approaches.

Researchers are looking at games as a potential next step for language models because current models can generate sentences with outstanding precision but lack a key component of human-like thinking: the capacity to reason.

Academics have been looking at this for quite some time: the scholarly and technological community has spent years delving into this mystery. But despite efforts to supplement AI with more layers, parameters and attention mechanisms, a solution remains elusive. Multimodality research has also been conducted, although thus far it has not yielded any particularly promising or cutting-edge results.

We all know how terrible ChatGPT was at arithmetic when it launched, but earlier this year a team from Virginia Tech and Microsoft created a method called Algorithm of Thoughts (AoT) to enhance AI's algorithmic reasoning. That work also hinted that, with this training strategy, large language models might eventually be able to combine intuition with optimized search to produce more accurate results.

A little over a month ago, Microsoft also investigated the moral reasoning of these models, proposing a revised framework to gauge their capacity for forming moral judgments. The 70-billion-parameter LlamaChat model fared better than its bigger rivals in the end. The findings ran counter to the conventional wisdom that more is better, and to the community's dependence on large values for key metrics.

Microsoft appears to be taking a cautious approach to advancement while the large internet companies continue to suffer the repercussions of their flawed language models, adding complexity to its models slowly and steadily.

The XOT technique has not been announced for inclusion in any Microsoft products. Meanwhile, Google DeepMind CEO Demis Hassabis indicated in an interview that the company is exploring incorporating ideas inspired by AlphaGo into its Gemini project.

Meta's CICERO, named for the famous Roman orator, also joined the fray a year ago, and its impressive proficiency in the difficult board game Diplomacy raised eyebrows in the AI community. Because it requires not only strategic thinking but also the ability to negotiate, the game has long been seen as a challenge for artificial intelligence. CICERO, however, showed that it could handle these situations, holding sophisticated, human-like discussions. Given the standards established by DeepMind, this result was not ignored: the UK research lab has long advocated using games to train neural networks.

DeepMind's successes with AlphaGo set a high standard, which Meta matched by taking a page from DeepMind's playbook and fusing strategic reasoning algorithms (like AlphaGo's) with a natural language processing model (GPT-3). Meta's model stood out because an AI agent playing Diplomacy needs not just knowledge of the game's rules and strategies but also an accurate assessment of the likelihood of treachery by human opponents. As Meta continues to develop Llama 3, this agent's capacity to carry on conversations in natural-sounding language makes it a compelling foundation; Meta's larger AI programs, including CICERO, may herald the arrival of conversational AI.


Excerpt from:
Microsoft's LLM 'Everything Of Thought' Method Improves AI ... - AiThority

Absolutely, here’s an article on the impact of upcoming technology – Medium


In the ever-evolving world of technology, one can hardly keep track of the pace at which advancements occur. In every industry, from healthcare to entertainment, technology is causing sweeping changes, redefining traditional norms and enhancing efficiency on an unprecedented scale. This is an exploration of just a few of the innovative technological advancements that are defining the future.

Artificial Intelligence (AI), already disruptive in its impact, continues to push barriers. With the introduction of advanced systems such as OpenAI's GPT-3 or DeepMind's AlphaGo, the world is witnessing AI's potential for generating human-like text, making accurate predictions, solving problems and developing strategy. Companies are reaping the benefits of AI, including improved customer service and streamlined operational processes.

Blockchain technology, while often associated solely with cryptocurrencies, has capabilities far beyond the world of finance. Its transparent and secure nature promises to reform industries like supply chain management, healthcare and even elections, reducing fraud and increasing efficiency.

In the realm of communication, 5G technology is set to revolutionize not only how we connect with each other but also how machines interconnect. Its ultra-fast, stable connection and low latency promise to drive the Internet of Things (IoT) to new heights, fostering an era of smart cities and autonomous vehicles.

Virtual and Augmented Reality (VR/AR) technologies have moved beyond the gaming industry to more practical applications. Industries such as real estate, tourism, and education are starting to realize the immense potential of these technologies for enhancing customer experience and learning outcomes.

Quantum computing, though still in its infancy, holds extraordinary promise with its potential to solve complex computational problems at unprecedented speeds. This technology could bring profound impacts to sectors such as pharmacology, weather forecasting, and cryptography.

These breakthroughs represent the astounding future that lies ahead, but they also hint at new challenges to be navigated. As we move forward, questions surrounding ethical implications, data privacy and security need to be addressed. However, what's undeniable is the critical role technology will play in shaping our collective future. This evolution inspires awe and eager anticipation of what is yet to come.

More here:
Absolutely, here's an article on the impact of upcoming technology - Medium

AI: Elon Musk and xAI | Formtek Blog – Formtek Blog

Elon Musk, the billionaire entrepreneur behind Tesla, SpaceX, and X (Twitter), has launched a new artificial intelligence company called xAI. The company's mission is to "understand the true nature of the universe," according to its website. But what does that mean, and why is Musk interested in AI?

AI is the field of computer science that aims to create machines that can perform tasks that normally require human intelligence, such as recognizing images, playing games, or conversing. AI has made remarkable progress in recent years, thanks to advances in hardware, data, and algorithms. Some examples of AI systems are ChatGPT, which can generate realistic text conversations, and AlphaGo, which can beat human champions in the complex board game Go.

Musk has been fascinated by AI for a long time and has also expressed concerns about its potential risks. He co-founded OpenAI in 2015, a research organization that aims to create friendly AI that can benefit humanity without harming it. However, he left OpenAI in 2018 after a disagreement with its leadership over its direction and funding. He has also warned that AI could pose an existential threat to human civilization if it becomes more intelligent than us and decides to eliminate us.

That is why Musk created xAI, which stands for eXplainable Artificial Intelligence. The company's goal is to develop AI systems that can explain how they work and what they are doing, as well as understand the underlying laws of physics and reality. By doing so, Musk hopes to make AI more transparent, trustworthy, and aligned with human values. He also hopes to learn more about the mysteries of the cosmos and our place in it.

xAI has assembled a team of 12 experts from various fields of AI, including former employees of Google, Microsoft, Tesla, OpenAI, and DeepMind. The company plans to work closely with Musk's other ventures, such as X (Twitter) and SpaceX, as well as other partners who share its vision.

xAI reflects Musk's passion for innovation and exploration, as well as his caution about safety and ethics. Whether xAI will succeed in its mission remains to be seen, but one thing is certain: it will not be boring.

Read the original here:
AI: Elon Musk and xAI | Formtek Blog - Formtek Blog