When Might AI Outsmart Us? It Depends Who You Ask – TIME
In 1960, Herbert Simon, who went on to win both the Nobel Prize in economics and the Turing Award for computer science, wrote in his book The New Science of Management Decision that "machines will be capable, within 20 years, of doing any work that a man can do."
History is filled with exuberant technological predictions that have failed to materialize. Within the field of artificial intelligence, the brashest predictions have concerned the arrival of systems that can perform any task a human can, often referred to as artificial general intelligence, or AGI.
So when Shane Legg, Google DeepMind's co-founder and chief AGI scientist, estimates that there's a 50% chance that AGI will be developed by 2028, it might be tempting to write him off as another AI pioneer who hasn't learned the lessons of history.
Still, AI is certainly progressing rapidly. GPT-3.5, the language model that powers OpenAI's ChatGPT, was developed in 2022 and scored 213 out of 400 on the Uniform Bar Exam, the standardized test that prospective lawyers must pass, putting it in the bottom 10% of human test-takers. GPT-4, developed just months later, scored 298, putting it in the top 10%. Many experts expect this progress to continue.
Read More: 4 Charts That Show Why AI Progress Is Unlikely to Slow Down
Legg's views are common among the leadership of the companies currently building the most powerful AI systems. In August, Dario Amodei, co-founder and CEO of Anthropic, said he expects a human-level AI could be developed in two to three years. Sam Altman, CEO of OpenAI, believes AGI could be reached sometime in the next four or five years.
But in a recent survey, the majority of 1,712 AI experts who responded to the question of when they thought AI would be able to accomplish every task better and more cheaply than human workers were less bullish. A separate survey of elite forecasters with exceptional track records shows they are less bullish still.
The stakes for divining who is correct are high. Legg, like many other AI pioneers, has warned that powerful future AI systems could cause human extinction. And even for those less concerned by Terminator scenarios, some warn that an AI system that could replace humans at any task might replace human labor entirely.
Many of those working at the companies building the biggest and most powerful AI models believe that the arrival of AGI is imminent. They subscribe to a theory known as the scaling hypothesis: the idea that even if a few incremental technical advances are required along the way, continuing to train AI models using ever greater amounts of computational power and data will inevitably lead to AGI.
There is some evidence to back this theory up. Researchers have observed very neat and predictable relationships between how much computational power, also known as compute, is used to train an AI model and how well it performs a given task. In the case of large language models (LLMs), the AI systems that power chatbots like ChatGPT, scaling laws predict how well a model can predict a missing word in a sentence. OpenAI CEO Sam Altman recently told TIME that he realized in 2019 that AGI might be coming much sooner than most people think, after OpenAI researchers discovered the scaling laws.
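The "neat and predictable" relationships researchers observe are power laws: on a log-log plot, loss falls along a straight line as training compute grows, which is what makes extrapolation tempting. The sketch below illustrates the idea with entirely made-up constants (real scaling-law exponents come from fitting measured training runs, not from this toy):

```python
import math

# Toy illustration of a scaling law: loss(C) = a * C**(-b).
# The constants a and b here are invented for illustration only.
true_a, true_b = 100.0, 0.05
compute = [10.0 ** e for e in range(18, 25)]      # training FLOPs, 1e18..1e24
loss = [true_a * c ** (-true_b) for c in compute]  # idealized, noise-free losses

# A power law is a straight line in log-log space, so an ordinary
# least-squares fit of log(loss) on log(compute) recovers the exponent.
xs = [math.log(c) for c in compute]
ys = [math.log(v) for v in loss]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
fitted_b = -slope
print(f"fitted exponent b = {fitted_b:.3f}")
```

The practical appeal is the last step researchers actually take: once the line is fitted, you can read off a predicted loss for a training run ten or a hundred times larger than any you have done.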
Read More: 2023 CEO of the Year: Sam Altman
Even before the scaling laws were observed, researchers had long understood that training an AI system using more compute makes it more capable. The amount of compute being used to train AI models has increased relatively predictably for the last 70 years as costs have fallen.
Early predictions based on the expected growth in compute were used by experts to anticipate when AI might match (and then possibly surpass) humans. In 1997, computer scientist Hans Moravec argued that cheaply available hardware would match the human brain in terms of computing power in the 2020s. An Nvidia A100 semiconductor chip, widely used for AI training, costs around $10,000 and can perform roughly 20 trillion floating point operations per second (FLOPS), and chips developed later this decade will have higher performance still. However, estimates of the amount of compute used by the human brain vary widely, from around one trillion FLOPS to more than one quintillion FLOPS, making it hard to evaluate Moravec's prediction. Additionally, training modern AI systems requires a great deal more compute than running them, a fact that Moravec's prediction did not account for.
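The width of that uncertainty band is easy to make concrete. Using only the figures cited above (an A100 at roughly 20 trillion FLOPS, and brain estimates spanning one trillion to one quintillion FLOPS), a back-of-the-envelope calculation shows why Moravec's prediction is hard to score:

```python
# Back-of-the-envelope comparison using the figures cited above:
# an Nvidia A100 performs roughly 20 trillion FLOPS, while estimates
# of the human brain's compute range from ~1e12 to ~1e18 FLOPS.
A100_FLOPS = 20e12
brain_low, brain_high = 1e12, 1e18  # one trillion to one quintillion

# How many A100s would each brain estimate require?
chips_low = brain_low / A100_FLOPS    # a small fraction of one chip
chips_high = brain_high / A100_FLOPS  # tens of thousands of chips

print(f"low estimate:  {chips_low:.2f} A100s")
print(f"high estimate: {chips_high:,.0f} A100s")
```

Depending on which brain estimate you accept, "hardware matching the human brain" means anything from a sliver of a single $10,000 chip to a data center with 50,000 of them, a six-order-of-magnitude spread.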
More recently, researchers at the nonprofit Epoch have made a more sophisticated compute-based model. Instead of estimating when AI models will be trained with amounts of compute similar to the human brain, the Epoch approach makes direct use of scaling laws and makes a simplifying assumption: if an AI model trained with a given amount of compute can faithfully reproduce a given portion of text, based on whether the scaling laws predict such a model can repeatedly predict the next word almost flawlessly, then it can do the work of producing that text. For example, an AI system that can perfectly reproduce a book can substitute for authors, and an AI system that can reproduce scientific papers without fault can substitute for scientists.
Some would argue that just because AI systems can produce human-like outputs, that doesn't necessarily mean they will think like a human. After all, Russell Crowe plays Nobel Prize-winning mathematician John Nash in the 2001 film A Beautiful Mind, but nobody would claim that the better his acting performance, the more impressive his mathematical skills must be. Researchers at Epoch argue that this analogy rests on a flawed understanding of how language models work: as they scale up, LLMs acquire the ability to reason like humans, rather than just superficially emulating human behavior. However, some researchers argue it's unclear whether current AI models are in fact reasoning.
Epoch's approach is one way to quantitatively model the scaling hypothesis, says Tamay Besiroglu, Epoch's associate director, who notes that researchers at Epoch tend to think AI will progress less rapidly than the model suggests. The model estimates a 10% chance of transformative AI, defined as AI that, if deployed widely, would precipitate a change comparable to the Industrial Revolution, being developed by 2025, and a 50% chance of it being developed by 2033. The difference between the model's forecast and those of people like Legg is probably largely down to transformative AI being harder to achieve than AGI, says Besiroglu.
Although many in leadership positions at the most prominent AI companies believe that the current path of AI progress will soon produce AGI, they're outliers. In an effort to more systematically assess what experts believe about the future of artificial intelligence, AI Impacts, an AI safety project at the nonprofit Machine Intelligence Research Institute, surveyed 2,778 experts in fall 2023, all of whom had published peer-reviewed research in prestigious AI journals and conferences in the last year.
Among other things, the experts were asked when they thought "high-level machine intelligence," defined as machines that could accomplish every task better and more cheaply than human workers without help, would be feasible. Although the individual predictions varied greatly, the average of the predictions suggests a 50% chance that this would happen by 2047, and a 10% chance by 2027.
Like many people, the experts seemed to have been surprised by the rapid AI progress of the last year and have updated their forecasts accordingly: when AI Impacts ran the same survey in 2022, respondents estimated a 50% chance of high-level machine intelligence arriving by 2060, and a 10% chance by 2029.
The experts were also asked when they thought various individual tasks could be carried out by machines. They estimated a 50% chance that AI could compose a Top 40 hit by 2028 and write a book that would make the New York Times bestseller list by 2029.
Nonetheless, there is plenty of evidence to suggest that experts don't make good forecasters. Between 1984 and 2003, social scientist Philip Tetlock collected 82,361 forecasts from 284 experts, asking them questions such as: Will Soviet leader Mikhail Gorbachev be ousted in a coup? Will Canada survive as a political union? Tetlock found that the experts' predictions were often no better than chance, and that the more famous an expert was, the less accurate their predictions tended to be.
Next, Tetlock and his collaborators set out to determine whether anyone could make accurate predictions. In a forecasting competition launched by the U.S. Intelligence Advanced Research Projects Activity in 2010, Tetlock's team, the Good Judgment Project (GJP), dominated the others, producing forecasts that were reportedly 30% more accurate than those of intelligence analysts with access to classified information. As part of the competition, the GJP identified "superforecasters": individuals who consistently made forecasts of above-average accuracy. However, although superforecasters have been shown to be reasonably accurate for predictions with a time horizon of two years or less, it's unclear whether they're similarly accurate for longer-term questions such as when AGI might be developed, says Ezra Karger, an economist at the Federal Reserve Bank of Chicago and research director at Tetlock's Forecasting Research Institute.
When do the superforecasters think AGI will arrive? As part of a forecasting tournament run between June and October 2022 by the Forecasting Research Institute, 31 superforecasters were asked when they thought Nick Bostrom, the controversial philosopher and author of the seminal AI existential-risk treatise Superintelligence, would affirm the existence of AGI. The median superforecaster thought there was a 1% chance that this would happen by 2030, a 21% chance by 2050, and a 75% chance by 2100.
All three approaches to predicting when AGI might be developed (Epoch's model of the scaling hypothesis, and the expert and superforecaster surveys) have one thing in common: there's a lot of uncertainty. In particular, the experts are spread widely, with 10% thinking it's as likely as not that AGI will be developed by 2030, and 18% thinking AGI won't be reached until after 2100.
Still, on average, the different approaches give different answers. Epoch's model estimates a 50% chance that transformative AI arrives by 2033, the median expert estimates a 50% probability of AGI before 2048, and the superforecasters are much further out, at 2070.
There are many points of disagreement that feed into debates over when AGI might be developed, says Katja Grace, who organized the expert survey as lead researcher at AI Impacts. First, will the current methods for building AI systems, bolstered by more compute and fed more data, with a few algorithmic tweaks, be sufficient? The answer to this question in part depends on how impressive you think recently developed AI systems are. Is GPT-4, in the words of researchers at Microsoft, showing the "sparks of AGI"? Or is this, in the words of philosopher Hubert Dreyfus, like claiming "that the first monkey that climbed a tree was making progress towards landing on the moon"?
Second, even if current methods are enough to achieve the goal of developing AGI, it's unclear how far away the finish line is, says Grace. It's also possible that something could obstruct progress on the way, for example a shortfall of training data.
Finally, looming in the background of these more technical debates are people's more fundamental beliefs about how much and how quickly the world is likely to change, Grace says. Those working in AI are often steeped in technology and open to the idea that their creations could alter the world dramatically, whereas most people dismiss this as unrealistic.
The stakes of resolving this disagreement are high. In addition to asking experts how quickly they thought AI would reach certain milestones, AI Impacts asked them about the technology's societal implications. Of the 1,345 respondents who answered questions about AI's impact on society, 89% said they were substantially or extremely concerned about AI-generated deepfakes, and 73% were similarly concerned that AI could empower dangerous groups, for example by enabling them to engineer viruses. The median respondent thought it was 5% likely that AGI would lead to "extremely bad" outcomes, such as human extinction.
Given these concerns, and the fact that 10% of the experts surveyed believe that AI might be able to do any task a human can by 2030, Grace argues that policymakers and companies should prepare now.
Preparations could include investment in safety research, mandatory safety testing, and coordination between companies and countries developing powerful AI systems, says Grace. Many of these measures were also recommended in a paper published by AI experts last year.
"If governments act now, with determination, there is a chance that we will learn how to make AI systems safe before we learn how to make them so powerful that they become uncontrollable," Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the paper's authors, told TIME in October.