New study: Countless AI experts don't know what to think on AI risk – Vox.com
In 2016, researchers at AI Impacts, a project that aims to improve understanding of advanced AI development, released a survey of machine learning researchers. Respondents were asked when they expected the development of AI systems comparable to humans along many dimensions, and whether they expected good or bad results from such an achievement.
The headline finding: The median respondent gave a 5 percent chance of human-level AI leading to outcomes that were "extremely bad, e.g. human extinction." That means half of researchers gave an estimate higher than 5 percent, and half gave a lower one.
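To make that concrete, here's a minimal Python sketch, using made-up numbers rather than the survey's actual responses, of what a 5 percent median implies:

```python
from statistics import median

# Ten hypothetical probability-of-extinction estimates, in percent.
# These numbers are invented for illustration, not taken from the survey.
estimates = [0, 0, 1, 2, 5, 5, 8, 10, 20, 50]

# The median splits the sorted answers in half: as many respondents
# answer at or above 5 percent as at or below it.
print(median(estimates))  # 5.0
```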
If true, that would be unprecedented. In what other field do moderate, middle-of-the-road researchers claim that the development of a more powerful technology (one they are directly working on) has a 5 percent chance of ending human life on Earth forever?
In 2016, before ChatGPT and AlphaFold, the result seemed much likelier to be a fluke than anything else. But in the eight years since then, as AI systems have gone from nearly useless to inconveniently good at writing college-level essays, and as companies have poured billions of dollars into efforts to build a true superintelligent AI system, what once seemed like a far-fetched possibility now seems to be on the horizon.
So when AI Impacts released their follow-up survey this week, the headline result (that between 37.8 and 51.4 percent of respondents gave at least a 10 percent chance of advanced AI leading to outcomes as bad as human extinction) didn't strike me as a fluke or a surveying error. It's probably an accurate reflection of where the field is at.
Their results challenge many of the prevailing narratives about AI extinction risk. The researchers surveyed don't subdivide neatly into doomsaying pessimists and insistent optimists. Many people who assign high probabilities to bad outcomes, the survey found, also assign high probabilities to good outcomes. And human extinction does seem to be a possibility that the majority of researchers take seriously: 57.8 percent of respondents said they thought extremely bad outcomes such as human extinction were at least 5 percent likely.
This visually striking figure from the paper shows how respondents think about what to expect if high-level machine intelligence is developed: Most consider both extremely good outcomes and extremely bad outcomes probable.
As for what to do about it, the experts seem to disagree even more than they do about whether there's a problem in the first place.
The 2016 AI Impacts survey was immediately controversial. In 2016, barely anyone was talking about the risk of catastrophe from powerful AI. Could it really be that mainstream researchers rated it plausible? Had the researchers conducting the survey (who were themselves concerned about human extinction resulting from artificial intelligence) biased their results somehow?
The survey authors had systematically reached out to all researchers who published at the 2015 NIPS and ICML conferences (two of the premier venues for peer-reviewed research in machine learning), and managed to get responses from roughly a fifth of them. They asked a wide range of questions about progress in machine learning and got a wide range of answers: Really, aside from the eye-popping human extinction answers, the most notable result was how much ML experts disagreed with one another. (Which is hardly unusual in the sciences.)
But one could reasonably be skeptical. Maybe there were experts who simply hadn't thought very hard about their human extinction answer. And maybe the people who were most optimistic about AI hadn't bothered to answer the survey.
When AI Impacts reran the survey in 2022, again contacting thousands of researchers who published at top machine learning conferences, their results were about the same. The median probability of an "extremely bad (e.g., human extinction)" outcome was 5 percent.
That median obscures some fierce disagreement. In fact, 48 percent of respondents gave at least a 10 percent chance of an extremely bad outcome, while 25 percent gave a 0 percent chance. Responding to criticism of the 2016 survey, the team asked for more detail: How likely did respondents think it was that AI would lead to "human extinction or similarly permanent and severe disempowerment of the human species"? Depending on how they asked the question, this got results between 5 percent and 10 percent.
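To see how a single median can coexist with that kind of spread, here's a short sketch with invented numbers shaped to roughly echo the reported pattern (a 5 percent median, a large bloc at 10 percent or more, and a quarter of answers at zero); these are not the survey's raw responses:

```python
from statistics import median

# Invented responses (percent chance of an extremely bad outcome),
# chosen so the summary statistics mimic the reported pattern.
responses = [0, 0, 0, 1, 2, 5, 5, 10, 10, 15, 25, 50]

print(median(responses))                                 # 5.0
print(sum(r >= 10 for r in responses) / len(responses))  # ~0.42: big bloc at >= 10%
print(sum(r == 0 for r in responses) / len(responses))   # 0.25: a quarter say zero
```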
In 2023, in order to reduce and measure the impact of framing effects (different answers based on how the question is phrased), many of the key questions on the survey were asked of different respondents with different framings. But again, the answers to the question about human extinction were broadly consistent in the 5-10 percent range no matter how the question was asked.
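The survey's exact wordings and assignment mechanism aren't reproduced here, but the underlying design is straightforward to sketch: randomly assign each respondent one of several framings of the same question, then compare the answer distributions across framings. A hypothetical simulation in Python:

```python
import random
from statistics import median

# Hypothetical framings of the same extinction-risk question; the
# survey's actual wordings differ.
framings = ["direct_probability", "extinction_explicit", "disempowerment"]

random.seed(0)
answers = {f: [] for f in framings}
for _ in range(1000):
    framing = random.choice(framings)  # random assignment isolates the framing effect
    # Invented answer distribution, in percent; identical across framings here,
    # so any gap between the medians below is pure sampling noise.
    answers[framing].append(random.choice([0, 0, 1, 2, 5, 5, 10, 10, 20, 50]))

# If the medians differed substantially across framings in real data,
# the question wording itself would be moving the answers.
for framing, values in answers.items():
    print(framing, median(values))
```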
The fact that the 2022 and 2023 surveys found results so similar to the 2016 result makes it hard to believe that the 2016 result was a fluke. And while in 2016 critics could correctly complain that most ML researchers had not seriously considered the issue of existential risk, by 2023 the question of whether powerful AI systems will kill us all had gone mainstream. It's hard to imagine that many peer-reviewed machine learning researchers were answering a question they'd never considered before.
I think the most reasonable reading of this survey is that ML researchers, like the rest of us, are radically unsure about whether to expect the development of powerful AI systems to be an amazing thing for the world or a catastrophic one.
Nor do they agree on what to do about it. Responses varied enormously on questions about whether slowing down AI would make good outcomes for humanity more likely. While a large majority of respondents wanted more resources and attention to go into AI safety research, many of the same respondents didn't think that working on AI alignment was unusually valuable compared to working on other open problems in machine learning.
In a situation with lots of uncertainty (like the consequences of superintelligent AI, a technology that doesn't yet exist), there's a natural tendency to want to look to experts for answers. That's reasonable. But in a case like AI, it's important to keep in mind that even the most well-regarded machine learning researchers disagree with one another and are radically uncertain about where all of us are headed.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!