The apocalypse isn't coming. We must resist cynicism and fear about AI – The Guardian
Opinion
Remember when WeWork would kill commercial real estate? Crypto would abolish banks? The metaverse would end meeting people in real life?
Mon 15 May 2023 04.06 EDT
In the field of artificial intelligence, doomerism is as natural as an echo. Every development in the field, or to be more precise every development that the public notices, immediately generates an apocalyptic reaction. The fear is natural enough; it comes partly from the lizard-brain part of us that resists whatever is new and strange, and partly from the movies, which have instructed us, for a century, that artificial intelligence will take the form of an angry god that wants to destroy all humanity.
It goes without saying that the recent public letter calling for a six-month ban on AI lab work will not have the slightest measurable effect on the development of artificial intelligence. But it has changed the conversation: every discussion about artificial intelligence must now begin with the possibility of total human extinction. It's silly and, worse, it's an alibi, a distraction from the real dangers technology presents.
The most important thing to remember about tech doomerism in general is that it's a form of advertising, a species of hype. Remember when WeWork was going to end commercial real estate? Remember when crypto was going to lead to the abolition of central banks? Remember when the metaverse was going to end meeting people in real life? Silicon Valley uses apocalypse for marketing purposes: they tell you their tech is going to end the world to show you how important they are.
I have been working with and reporting on AI since 2017, which is prehistoric in this field. During that time, I have heard, from intelligent sources who were usually reliable, that the trucking industry was about to end, and that China was in possession of a trillion-parameter natural language processing AI with superhuman intelligence. I have heard geniuses, bona fide geniuses, declare that medical schools should no longer teach radiology because it would all be automated soon.
One of the reasons AI doomerism bores me is that it's become familiar: I've heard it all before. To stay sane, I have had to abide by twin principles: I don't believe it until I see it. Once I see it, I believe it.
Many of the most important engineers in the field indulge in AI doomerism; this is unquestionably true. But one of the defining features of our time is that the engineers, who in my experience do not have even the faintest education in the humanities, or even recognize that society and culture are worthy of study, simply have no idea how their inventions interact with the world. One of the most prominent signatories of the open letter was Elon Musk, an early investor in OpenAI. He is brilliant at technology. But if you want to know how little he understands about people and their relationships to technology, go on Twitter for five minutes.
Not that there aren't real causes for worry when it comes to AI; it's just that they're almost always about something other than AI. The biggest anxiety, that an artificial general intelligence is about to take over the world, doesn't even qualify as science fiction. That fear is religious.
Computers do not have will. Algorithms are a series of instructions. The properties that emerge in the emergent properties of artificial intelligence have to be discovered and established by human beings. The anthropomorphization of statistical pattern-matching machinery is storytelling; it's a movie playing in the collective mind, nothing more. Turning off ChatGPT isn't murder. Engineers who hire lawyers for their chatbots are every bit as ridiculous as they sound.
The much more real anxieties brought up by the more substantial critics of artificial intelligence are that AI will super-charge misinformation and will lead to the hollowing out of the middle class by the process of automation. Do I really need to point out that both of these problems predate artificial intelligence by decades, and are political rather than technological?
AI might well make it slightly easier to generate fake content, but the problem of misinformation has never been generation but dissemination. The political space is already saturated with fraud and it's hard to see how AI could make it much worse. In the first quarter of 2019, Facebook had to remove 2.2bn fake profiles; AI had nothing to do with it. The response to the degradation of our information networks from government and from the social media industry has been a massive shrug, a bunch of antiquated talk about the first amendment.
Regulating AI is enormously problematic; it involves trying to fathom the unfathomable and make the inherently opaque transparent. But we already know, and have known for over a decade, about the social consequences of social media algorithms. We don't have to fantasize or predict the effects of Instagram. The research is consistent and established: that technology is associated with higher levels of depression, anxiety and self-harm among children. Yet we do nothing. Vague talk about slowing down AI doesn't solve anything; a concrete plan to regulate social media might.
As for the hollowing out of the middle class, inequality in the United States reached the highest level since 1774 back in 2012. AI may not be the problem. The problem may be the foundational economic order AI is entering. Again, vague talk about an AI apocalypse is a convenient way to avoid talking about the self-consumption of capitalism and the extremely hard choices that self-consumption presents.
The way you can tell that doomerism is just more hype is that its solutions are always terminally vague. The open letter called for a six-month ban. What, exactly, do they imagine will happen over those six months? The engineers won't think about AI? The developers won't figure out ways to use it? Doomerism likes its crises numinous, preferably unsolvable. AI fits the bill.
Recently, I used AI to write a novella: The Death of an Author. I won't say that the experience wasn't unsettling. It was quite weird, actually. It felt like I managed to get an alien to write, an alien that is the sum total of our language. The novella itself has, to me anyway, a hypnotic but removed power: inhuman language that makes sense. But the experience didn't make me afraid. It awed me. Let's reside in the awe for a moment, just a moment, before we go to the fear.
If we have to think through AI by way of the movies, can we at least do Star Trek instead of Terminator 2? Something strange has appeared in the sky: let's be a little more Jean-Luc Picard and a little less Klingon in our response. The truth about AI is that nobody, not the engineers who have created it, not the developers converting it into products, understands fully what it is, never mind what its consequences will be. Let's get a sense of what this alien is before we blow it out of the sky. Maybe it's beautiful.