When Might AI Outsmart Us? It Depends Who You Ask – TIME
In 1960, Herbert Simon, who went on to win both the Nobel Prize for economics and the Turing Award for computer science, wrote in his book The New Science of Management Decision that "machines will be capable, within 20 years, of doing any work that a man can do."
History is filled with exuberant technological predictions that have failed to materialize. Within the field of artificial intelligence, the brashest predictions have concerned the arrival of systems that can perform any task a human can, often referred to as artificial general intelligence, or AGI.
So when Shane Legg, Google DeepMind's co-founder and chief AGI scientist, estimates that there's a 50% chance that AGI will be developed by 2028, it might be tempting to write him off as another AI pioneer who hasn't learnt the lessons of history.
Still, AI is certainly progressing rapidly. GPT-3.5, the language model that powers OpenAI's ChatGPT, was developed in 2022 and scored 213 out of 400 on the Uniform Bar Exam, the standardized test that prospective lawyers must pass, putting it in the bottom 10% of human test-takers. GPT-4, developed just months later, scored 298, putting it in the top 10%. Many experts expect this progress to continue.
Read More: 4 Charts That Show Why AI Progress Is Unlikely to Slow Down
Legg's views are common among the leadership of the companies currently building the most powerful AI systems. In August, Dario Amodei, co-founder and CEO of Anthropic, said he expects a human-level AI could be developed in two to three years. Sam Altman, CEO of OpenAI, believes AGI could be reached sometime in the next four or five years.
But in a recent survey, the majority of the 1,712 AI experts who answered the question of when they thought AI would be able to accomplish every task better and more cheaply than human workers were less bullish. A separate survey of elite forecasters with exceptional track records shows they are less bullish still.
The stakes for divining who is correct are high. Legg, like many other AI pioneers, has warned that powerful future AI systems could cause human extinction. And even for those less concerned by Terminator scenarios, some warn that an AI system that could replace humans at any task might replace human labor entirely.
Many of those working at the companies building the biggest and most powerful AI models believe that the arrival of AGI is imminent. They subscribe to a theory known as the scaling hypothesis: the idea that even if a few incremental technical advances are required along the way, continuing to train AI models using ever greater amounts of computational power and data will inevitably lead to AGI.
There is some evidence to back this theory up. Researchers have observed very neat and predictable relationships between how much computational power, also known as "compute," is used to train an AI model and how well it performs a given task. In the case of large language models (LLMs), the AI systems that power chatbots like ChatGPT, scaling laws predict how well a model can predict a missing word in a sentence. OpenAI CEO Sam Altman recently told TIME that he realized in 2019 that AGI might be coming much sooner than most people think, after OpenAI researchers discovered the scaling laws.
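For readers who want a concrete sense of what such a relationship looks like, the sketch below shows a hypothetical power-law curve relating training compute to prediction loss. The constants and exponent are invented for illustration; they are not the figures from OpenAI's or any other published scaling-law research.

```python
# A minimal, illustrative power-law scaling relation: predicted loss falls
# smoothly as training compute grows. All constants here are made up for the
# sketch and are not taken from any published scaling-law paper.
def predicted_loss(compute_flops, a=20.0, alpha=0.05, irreducible=1.7):
    """Hypothetical next-word prediction loss as a function of training compute (FLOPs)."""
    return irreducible + a * compute_flops ** -alpha

for flops in (1e21, 1e23, 1e25):
    print(f"{flops:.0e} FLOPs of training compute -> predicted loss {predicted_loss(flops):.2f}")
```

The point of such curves is that, once fit to smaller training runs, they let researchers extrapolate how capable a model trained with far more compute is likely to be.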
Read More: 2023 CEO of the Year: Sam Altman
Even before the scaling laws were observed, researchers had long understood that training an AI system using more compute makes it more capable. The amount of compute being used to train AI models has increased relatively predictably for the last 70 years as costs have fallen.
Early predictions based on the expected growth in compute were used by experts to anticipate when AI might match (and then possibly surpass) humans. In 1997, computer scientist Hans Moravec argued that cheaply available hardware would match the human brain in terms of computing power in the 2020s. An Nvidia A100 semiconductor chip, widely used for AI training, costs around $10,000 and can perform roughly 20 trillion floating point operations per second (FLOPS), and chips developed later this decade will have higher performance still. However, estimates for the amount of compute used by the human brain vary widely, from around one trillion FLOPS to more than one quintillion FLOPS, making it hard to evaluate Moravec's prediction. Additionally, training modern AI systems requires a great deal more compute than running them, a fact that Moravec's prediction did not account for.
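A rough back-of-envelope calculation, using only the figures cited in this paragraph, shows why that range matters; the snippet below is illustrative arithmetic, not a claim about how brain compute should actually be estimated.

```python
# Back-of-envelope comparison of the figures cited above. Because the brain
# estimates span roughly six orders of magnitude, Moravec's prediction is hard
# to score against today's hardware.
a100_flops = 20e12    # ~20 trillion FLOPS for an Nvidia A100, as cited above
brain_low = 1e12      # low-end brain estimate: ~1 trillion FLOPS
brain_high = 1e18     # high-end brain estimate: ~1 quintillion FLOPS

print(f"A100 chips needed at the low-end brain estimate:  {brain_low / a100_flops:,.2f}")
print(f"A100 chips needed at the high-end brain estimate: {brain_high / a100_flops:,.0f}")
```

Under the low-end estimate, a single A100 already exceeds the brain's compute many times over; under the high-end estimate, tens of thousands of chips would be needed.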
More recently, researchers at the nonprofit Epoch have made a more sophisticated compute-based model. Instead of estimating when AI models will be trained with amounts of compute similar to the human brain, the Epoch approach makes direct use of scaling laws and makes a simplifying assumption: if an AI model trained with a given amount of compute can faithfully reproduce a given portion of text (based on whether the scaling laws predict such a model can repeatedly predict the next word almost flawlessly), then it can do the work of producing that text. For example, an AI system that can perfectly reproduce a book can substitute for authors, and an AI system that can reproduce scientific papers without fault can substitute for scientists.
Some would argue that just because AI systems can produce human-like outputs, that doesn't necessarily mean they will think like a human. After all, Russell Crowe plays Nobel Prize-winning mathematician John Nash in the 2001 film A Beautiful Mind, but nobody would claim that the better his acting performance, the more impressive his mathematical skills must be. Researchers at Epoch argue that this analogy rests on a flawed understanding of how language models work. As they scale up, LLMs acquire the ability to reason like humans, rather than just superficially emulating human behavior. However, some researchers argue it's unclear whether current AI models are in fact reasoning.
Epoch's approach is one way to quantitatively model the scaling hypothesis, says Tamay Besiroglu, Epoch's associate director, who notes that researchers at Epoch tend to think AI will progress less rapidly than the model suggests. The model estimates a 10% chance of transformative AI (defined as AI that, if deployed widely, would precipitate a change comparable to the Industrial Revolution) being developed by 2025, and a 50% chance of it being developed by 2033. The difference between the model's forecast and those of people like Legg is probably largely down to transformative AI being harder to achieve than AGI, says Besiroglu.
Although many in leadership positions at the most prominent AI companies believe that the current path of AI progress will soon produce AGI, they're outliers. In an effort to more systematically assess what the experts believe about the future of artificial intelligence, AI Impacts, an AI safety project at the nonprofit Machine Intelligence Research Institute, surveyed 2,778 experts in fall 2023, all of whom had published peer-reviewed research in prestigious AI journals and conferences in the last year.
Among other things, the experts were asked when they thought "high-level machine intelligence," defined as machines that could accomplish every task better and more cheaply than human workers without help, would be feasible. Although the individual predictions varied greatly, the average of the predictions suggests a 50% chance that this would happen by 2047, and a 10% chance by 2027.
Like many people, the experts seemed to have been surprised by the rapid AI progress of the last year and have updated their forecasts accordingly: when AI Impacts ran the same survey in 2022, researchers estimated a 50% chance of high-level machine intelligence arriving by 2060, and a 10% chance by 2029.
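To see how thousands of individual timelines can be condensed into headline probabilities like these, here is a minimal sketch of one simple aggregation method, percentile pooling over hypothetical responses. The numbers are invented; this is not AI Impacts' actual data or methodology.

```python
import numpy as np

# Hypothetical individual forecasts of the year high-level machine intelligence
# arrives. These are simulated numbers, not AI Impacts' survey responses, and
# simple percentile pooling is only one of several possible aggregation methods.
rng = np.random.default_rng(0)
predicted_years = rng.normal(loc=2050, scale=20, size=1000).round()

print("Year by which 10% of forecasts have it arriving:", int(np.percentile(predicted_years, 10)))
print("Year by which 50% of forecasts have it arriving:", int(np.percentile(predicted_years, 50)))
```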
The researchers were also asked when they thought various individual tasks could be carried out by machines. They estimated a 50% chance that AI could compose a Top 40 hit by 2028 and write a book that would make the New York Times bestseller list by 2029.
Nonetheless, there is plenty of evidence to suggest that experts don't make good forecasters. Between 1984 and 2003, social scientist Philip Tetlock collected 82,361 forecasts from 284 experts, asking them questions such as: Will Soviet leader Mikhail Gorbachev be ousted in a coup? Will Canada survive as a political union? Tetlock found that the experts' predictions were often no better than chance, and that the more famous an expert was, the less accurate their predictions tended to be.
Next, Tetlock and his collaborators set out to determine whether anyone could make accurate predictions. In a forecasting competition launched by the U.S. Intelligence Advanced Research Projects Activity in 2010, Tetlock's team, the Good Judgment Project (GJP), dominated the others, producing forecasts that were reportedly 30% more accurate than those of intelligence analysts who had access to classified information. As part of the competition, the GJP identified "superforecasters": individuals who consistently made forecasts of above-average accuracy. However, although superforecasters have been shown to be reasonably accurate for predictions with a time horizon of two years or less, it's unclear whether they're also similarly accurate for longer-term questions such as when AGI might be developed, says Ezra Karger, an economist at the Federal Reserve Bank of Chicago and research director at Tetlock's Forecasting Research Institute.
When do the superforecasters think AGI will arrive? As part of a forecasting tournament run between June and October 2022 by the Forecasting Research Institute, 31 superforecasters were asked when they thought Nick Bostrom, the controversial philosopher and author of the seminal AI existential risk treatise Superintelligence, would affirm the existence of AGI. The median superforecaster thought there was a 1% chance that this would happen by 2030, a 21% chance by 2050, and a 75% chance by 2100.
All three approaches to predicting when AGI might be developed (Epoch's model of the scaling hypothesis, and the expert and superforecaster surveys) have one thing in common: there's a lot of uncertainty. In particular, the experts are spread widely, with 10% thinking it's as likely as not that AGI is developed by 2030, and 18% thinking AGI won't be reached until after 2100.
Still, on average, the different approaches give different answers. Epoch's model estimates a 50% chance that transformative AI arrives by 2033, the median expert estimates a 50% probability of AGI before 2048, and the superforecasters are much further out at 2070.
There are many points of disagreement that feed into debates over when AGI might be developed, says Katja Grace, who organized the expert survey as lead researcher at AI Impacts. First, will the current methods for building AI systems, bolstered by more compute and fed more data, with a few algorithmic tweaks, be sufficient? The answer to this question in part depends on how impressive you think recently developed AI systems are. Is GPT-4, in the words of researchers at Microsoft, showing "sparks of AGI"? Or is this, in the words of philosopher Hubert Dreyfus, "like claiming that the first monkey that climbed a tree was making progress towards landing on the moon"?
Second, even if current methods are enough to achieve the goal of developing AGI, it's unclear how far away the finish line is, says Grace. It's also possible that something could obstruct progress on the way, for example, a shortfall of training data.
Finally, looming in the background of these more technical debates are people's more fundamental beliefs about how much and how quickly the world is likely to change, Grace says. Those working in AI are often steeped in technology and open to the idea that their creations could alter the world dramatically, whereas most people dismiss this as unrealistic.
The stakes of resolving this disagreement are high. In addition to asking experts how quickly they thought AI would reach certain milestones, AI Impacts asked them about the technology's societal implications. Of the 1,345 respondents who answered questions about AI's impact on society, 89% said they are substantially or extremely concerned about AI-generated deepfakes and 73% were similarly concerned that AI could empower dangerous groups, for example by enabling them to engineer viruses. The median respondent thought it was 5% likely that AGI leads to "extremely bad" outcomes, such as human extinction.
Given these concerns, and the fact that 10% of the experts surveyed believe that AI might be able to do any task a human can by 2030, Grace argues that policymakers and companies should prepare now.
Preparations could include investment in safety research, mandatory safety testing, and coordination between companies and countries developing powerful AI systems, says Grace. Many of these measures were also recommended in a paper published by AI experts last year.
"If governments act now, with determination, there is a chance that we will learn how to make AI systems safe before we learn how to make them so powerful that they become uncontrollable," Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the paper's authors, told TIME in October.