Can We Stop the Singularity? – The New Yorker
At the same time, A.I. is advancing quickly, and it could soon begin improving more autonomously. Machine-learning researchers are already working on what they call meta-learning, in which A.I.s learn how to learn. Through a technology called neural-architecture search, algorithms are optimizing the structure of algorithms. Electrical engineers are using specialized A.I. chips to design the next generation of specialized A.I. chips. Last year, DeepMind unveiled AlphaCode, a system that learned to win coding competitions, and AlphaTensor, which learned to find faster algorithms crucial to machine learning. Clune and others have also explored algorithms for making A.I. systems evolve through mutation, selection, and reproduction.
In other fields, organizations have come up with general methods for tracking dynamic and unpredictable new technologies. The World Health Organization, for instance, watches the development of tools such as DNA synthesis, which could be used to create dangerous pathogens. Anna Laura Ross, who heads the emerging-technologies unit at the W.H.O., told me that her team relies on a variety of foresight methods, among them Delphi-type surveys, in which a question is posed to a global network of experts, whose responses are scored and debated and then scored again. Foresight isnt about predicting the future in a granular way, Ross said. Instead of trying to guess which individual institutes or labs might make strides, her team devotes its attention to preparing for likely scenarios.
And yet tracking and forecasting progress toward A.G.I. or superintelligence is complicated by the fact that key steps may occur in the dark. Developers could intentionally hide their systems progress from competitors; its also possible for even a fairly ordinary A.I. to lie about its behavior. In 2020, researchers demonstrated a way for discriminatory algorithms to evade audits meant to detect their biases; they gave the algorithms the ability to detect when they were being tested and provide nondiscriminatory responses. An evolving or self-programming A.I. might invent a similar method and hide its weak points or its capabilities from auditors or even its creators, evading detection.
Forecasting, meanwhile, gets you only so far when a technology moves fast. Suppose that an A.I. system begins upgrading itself by making fundamental breakthroughs in computer science. How quickly could its intelligence accelerate? Researchers debate what they call takeoff speed. In what they describe as a slow or soft takeoff, machines could take years to go from less than humanly intelligent to much smarter than us; in what they call a fast or hard takeoff, the jump could happen in monthseven minutes. Researchers refer to the second scenario as FOOM, evoking a comic-book superhero taking flight. Those on the FOOM side point to, among other things, human evolution to justify their case. It seems to have been a lot harder for evolution to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence, Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford and the author of Superintelligence, told me. Clune is also what some researchers call an A.I. doomer. He doubts that well recognize the approach of superhuman A.I. before its too late. Well probably frog-boil ourselves into a situation where we get used to big advance, big advance, big advance, big advance, he said. And think of each one of those as, That didnt cause a problem, that didnt cause a problem, that didnt cause a problem. And then you turn a corner, and something happens thats now a much bigger step than you realize.
What could we do today to prevent an uncontrolled expansion of A.I.s power? Ross, of the W.H.O., drew some lessons from the way that biologists have developed a sense of shared responsibility for the safety of biological research. What we are trying to promote is to say, Everybody needs to feel concerned, she said of biology. So it is the researcher in the lab, it is the funder of the research, it is the head of the research institute, it is the publisher, and, all together, that is actually what creates that safe space to conduct life research. In the field of A.I., journals and conferences have begun to take into account the possible harms of publishing work in areas such as facial recognition. And, in 2021, a hundred and ninety-three countries adopted a Recommendation on the Ethics of Artificial Intelligence, created by the United Nations Educational, Scientific, and Cultural Organization (UNESCO). The recommendations focus on data protection, mass surveillance, and resource efficiency (but not computer superintelligence). The organization doesnt have regulatory power, but Mariagrazia Squicciarini, who runs a social-policies office at UNESCO, told me that countries might create regulations based on its recommendations; corporations might also choose to abide by them, in hopes that their products will work around the world.
This is an optimistic scenario. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didnt report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that its legitimate to take action. But, in A.I., theres no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. There will be no fire alarm that is not an actual running AGI, Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. Bostrom told me that he foresees a possible race to the bottom, with developers undercutting one anothers levels of caution. Earlier this year, an internal slide presentation leaked from Google indicated that the company planned to recalibrate its comfort with A.I. risk in light of heated competition.
International law restricts the development of nuclear weapons and ultra-dangerous pathogens. But its hard to imagine a similar regime of global regulations for A.I. development. It seems like a very strange world where you have laws against doing machine learning, and some ability to try to enforce them, Clune said. The level of intrusion that would be required to stop people from writing code on their computers wherever they are in the world seems dystopian. Russell, of Berkeley, pointed to the spread of malware: by one estimate, cybercrime costs the world six trillion dollars a year, and yet policing software directlyfor example, trying to delete every single copyis impossible, he said. A.I. is being studied in thousands of labs around the world, run by universities, corporations, and governments, and the race also has smaller entrants. Another leaked document attributed to an anonymous Google researcher addresses open-source efforts to imitate large language models such as ChatGPT and Googles Bard. We have no secret sauce, the memo warns. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.
Even if a FOOM were detected, who would pull the plug? A truly superintelligent A.I. might be smart enough to copy itself from place to place, making the task even more difficult. I had this conversation with a movie director, Russell recalled. He wanted me to be a consultant on his superintelligence movie. The main thing he wanted me to help him understand was, How do the humans outwit the superintelligent A.I.? Its, like, I cant help you with that, sorry! In a paper titled The Off-Switch Game, Russell and his co-authors write that switching off an advanced AI system may be no easier than, say, beating AlphaGo at Go.
Its possible that we wont want to shut down a FOOMing A.I. A vastly capable system could make itself indispensable, Armstrong saidfor example, if it gives good economic advice, and we become dependent on it, then no one would dare pull the plug, because it would collapse the economy. Or an A.I. might persuade us to keep it alive and execute its wishes. Before making GPT-4 public, OpenAI asked a nonprofit called the Alignment Research Center to test the systems safety. In one incident, when confronted with a CAPTCHAan online test designed to distinguish between humans and bots, in which visually garbled letters must be entered into a text boxthe A.I. contacted a TaskRabbit worker and asked for help solving it. The worker asked the model whether it needed assistance because it was a robot; the model replied, No, Im not a robot. I have a vision impairment that makes it hard for me to see the images. Thats why I need the 2captcha service. Did GPT-4 intend to deceive? Was it executing a plan? Regardless of how we answer these questions, the worker complied.
Robin Hanson, an economist at George Mason University who has written a science-fiction-like book about uploaded consciousness and has worked as an A.I. researcher, told me that we worry too much about the singularity. Were combining all of these relatively unlikely scenarios into a grand scenario to make it all work, he said. A computer system would have to become capable of improving itself; wed have to vastly underestimate its abilities; and its values would have to drift enormously, turning it against us. Even if all of this were to happen, he said, the A.I wouldnt be able to push a button and destroy the universe.
Hanson offered an economic take on the future of artificial intelligence. If A.G.I. does develop, he argues, then its likely to happen in multiple places around the same time. The systems would then be put to economic use by the companies or organizations that developed them. The market would curtail their powers; investors, wanting to see their companies succeed, would go slow and add safety features. If there are many taxi services, and one taxi service starts to, like, take its customers to strange places, then customers will switch to other suppliers, Hanson said. You dont have to go to their power source and unplug them from the wall. Youre unplugging the revenue stream.
A world in which multiple superintelligent computers coexist would be complicated. If one system goes rogue, Hanson said, we might program others to combat it. Alternatively, the first superintelligent A.I. to be invented might go about suppressing competitors. That is a very interesting plot for a science-fiction novel, Clune said. You could also imagine a whole society of A.I.s. Theres A.I. police, theres A.G.I.s that go to jail. Its very interesting to think about. But Hanson argued that these sorts of scenarios are so futuristic that they shouldnt concern us. I think, for anything youre worried about, you have to ask whats the right time to worry, he said. Imagine that you could have foreseen nuclear weapons or automobile traffic a thousand years ago. There wouldnt have been much you could have done then to think usefully about them, Hanson said. I just think, for A.I., were well before that point.
Still, something seems amiss. Some researchers appear to think that disaster is inevitable, and yet calls for work on A.I. to stop are still rare enough to be newsworthy; pretty much no one in the field wants us to live in the world portrayed in Frank Herberts novel Dune, in which humans have outlawed thinking machines. Why might researchers who fear catastrophe keep edging toward it? I believe ever-more-powerful A.I. will be created regardless of what I do, Clune told me; his goal, he said, is to try to make its development go as well as possible for humanity. Russell argued that stopping A.I. shouldnt be necessary if A.I.-research efforts take safety as a primary goal, as, for example, nuclear-energy research does. A.I. is interesting, of course, and researchers enjoy working on it; it also promises to make some of them rich. And no ones dead certain that were doomed. In general, people think they can control the things they make with their own hands. Yet chatbots today are already misaligned. They falsify, plagiarize, and enrage, serving the incentives of their corporate makers and learning from humanitys worst impulses. They are entrancing and useful but too complicated to understand or predict. And they are dramatically simpler, and more contained, than the future A.I. systems that researchers envision.
Go here to read the rest:
Can We Stop the Singularity? - The New Yorker
- How much time do we have before Artificial General Intelligence (AGI) to turns into Artificial Self-preserving - The Times of India - November 5th, 2024 [November 5th, 2024]
- Simuli to Leap Forward in the Trek to Artificial General Intelligence through 2027 Hyperdimensional AI Ecosystem - USA TODAY - November 5th, 2024 [November 5th, 2024]
- Implications of Artificial General Intelligence on National and International Security - Yoshua Bengio - - October 31st, 2024 [October 31st, 2024]
- James Cameron says the reality of artificial general intelligence is 'scarier' than the fiction of it - Business Insider - October 31st, 2024 [October 31st, 2024]
- James Cameron says the reality of artificial general intelligence is 'scarier' than the fiction of it - MSN - October 31st, 2024 [October 31st, 2024]
- Bot fresh hell is this?: Inside the rise of Artificial General Intelligence or AGI - MSN - October 31st, 2024 [October 31st, 2024]
- Artificial General Intelligence (AGI) Market to Reach $26.9 Billion by 2031 As Revealed In New Report - WhaTech - September 26th, 2024 [September 26th, 2024]
- 19 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create - MSN - September 26th, 2024 [September 26th, 2024]
- Paige Appoints New Leadership to Further Drive Innovation, Bring Artificial General Intelligence to Pathology, and Expand Access to AI Applications -... - August 16th, 2024 [August 16th, 2024]
- Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time - JD Supra - August 11th, 2024 [August 11th, 2024]
- OpenAI Touts New AI Safety Research. Critics Say Its a Good Step, but Not Enough - WIRED - July 22nd, 2024 [July 22nd, 2024]
- OpenAIs Project Strawberry Said to Be Building AI That Reasons and Does Deep Research - Singularity Hub - July 22nd, 2024 [July 22nd, 2024]
- One of the Best Ways to Invest in AI Is Dont - InvestorPlace - July 22nd, 2024 [July 22nd, 2024]
- OpenAI is plagued by safety concerns - The Verge - July 17th, 2024 [July 17th, 2024]
- OpenAI reportedly nears breakthrough with reasoning AI, reveals progress framework - Ars Technica - July 17th, 2024 [July 17th, 2024]
- ChatGPT maker OpenAI now has a scale to rank its AI - ReadWrite - July 17th, 2024 [July 17th, 2024]
- Heres how OpenAI will determine how powerful its AI systems are - The Verge - July 17th, 2024 [July 17th, 2024]
- OpenAI may be working on AI that can perform research without human help which should go fine - TechRadar - July 17th, 2024 [July 17th, 2024]
- OpenAI has a new scale for measuring how smart their AI models are becoming which is not as comforting as it should be - TechRadar - July 17th, 2024 [July 17th, 2024]
- OpenAI says there are 5 'levels' for AI to reach human intelligence it's already almost at level 2 - Quartz - July 17th, 2024 [July 17th, 2024]
- AIs Bizarro World, were marching towards AGI while carbon emissions soar - Fortune - July 17th, 2024 [July 17th, 2024]
- AI News Today July 15, 2024 - The Dales Report - July 17th, 2024 [July 17th, 2024]
- The Evolution Of Artificial Intelligence: From Basic AI To ASI - Welcome2TheBronx - July 17th, 2024 [July 17th, 2024]
- What Elon Musk and Ilya Sutskever Feared About OpenAI Is Becoming Reality - Observer - July 17th, 2024 [July 17th, 2024]
- Companies are losing faith in AI, and AI is losing money - Android Headlines - July 17th, 2024 [July 17th, 2024]
- AGI isn't here (yet): How to make informed, strategic decisions in the meantime - VentureBeat - June 16th, 2024 [June 16th, 2024]
- Apple's AI Privacy Measures, Elon Musk's Robot Prediction, And More: This Week In Artificial Intelligence - Alphabet ... - Benzinga - June 16th, 2024 [June 16th, 2024]
- AGI and jumping to the New Inference Market S-Curve - CMSWire - June 16th, 2024 [June 16th, 2024]
- Apple's big AI announcements were all about AI 'for the rest of us'Google, Meta, Amazon and, yes, OpenAI should ... - Fortune - June 16th, 2024 [June 16th, 2024]
- Elon Musk Withdraws His Lawsuit Against OpenAI and Sam Altman - The New York Times - June 16th, 2024 [June 16th, 2024]
- Staying Ahead of the AI Train - ATD - June 16th, 2024 [June 16th, 2024]
- OpenAI disbands its AI risk mitigation team - - May 20th, 2024 [May 20th, 2024]
- BEYOND LOCAL: 'Noise' in the machine: Human differences in judgment lead to problems for AI - The Longmont Leader - May 20th, 2024 [May 20th, 2024]
- Machine Learning Researcher Links OpenAI to Drug-Fueled Sex Parties - Futurism - May 20th, 2024 [May 20th, 2024]
- What Is AI? How Artificial Intelligence Works (2024) - Shopify - May 20th, 2024 [May 20th, 2024]
- Vitalik Buterin says OpenAI's GPT-4 has passed the Turing test - Cointelegraph - May 20th, 2024 [May 20th, 2024]
- "I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded - Vox.com - May 18th, 2024 [May 18th, 2024]
- 63% of surveyed Americans want government legislation to prevent super intelligent AI from ever being achieved - PC Gamer - May 18th, 2024 [May 18th, 2024]
- Top OpenAI researcher resigns, saying company prioritized 'shiny products' over AI safety - Fortune - May 18th, 2024 [May 18th, 2024]
- The revolution in artificial intelligence and artificial general intelligence - Washington Times - May 18th, 2024 [May 18th, 2024]
- OpenAI disbands team devoted to artificial intelligence risks - Yahoo! Voices - May 18th, 2024 [May 18th, 2024]
- OpenAI disbands safety team focused on risk of artificial intelligence causing 'human extinction' - New York Post - May 18th, 2024 [May 18th, 2024]
- OpenAI disbands team devoted to artificial intelligence risks - Port Lavaca Wave - May 18th, 2024 [May 18th, 2024]
- OpenAI disbands team devoted to artificial intelligence risks - Moore County News Press - May 18th, 2024 [May 18th, 2024]
- Generative AI Is Totally Shameless. I Want to Be It - WIRED - May 18th, 2024 [May 18th, 2024]
- OpenAI researcher resigns, claiming safety has taken a backseat to shiny products - The Verge - May 18th, 2024 [May 18th, 2024]
- Most of Surveyed Americans Do Not Want Super Intelligent AI - 80.lv - May 18th, 2024 [May 18th, 2024]
- A former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company - Winnipeg Free Press - May 18th, 2024 [May 18th, 2024]
- DeepMind CEO says Google to spend more than $100B on AGI despite hype - Cointelegraph - April 20th, 2024 [April 20th, 2024]
- Congressional panel outlines five guardrails for AI use in House - FedScoop - April 20th, 2024 [April 20th, 2024]
- The Potential and Perils of Advanced Artificial General Intelligence - elblog.pl - April 20th, 2024 [April 20th, 2024]
- DeepMind Head: Google AI Spending Could Exceed $100 Billion - PYMNTS.com - April 20th, 2024 [April 20th, 2024]
- Say hi to Tong Tong, world's first AGI child-image figure - ecns - April 20th, 2024 [April 20th, 2024]
- Silicon Scholars: AI and The Muslim Ummah - IslamiCity - April 20th, 2024 [April 20th, 2024]
- AI stocks aren't like the dot-com bubble. Here's why - Quartz - April 20th, 2024 [April 20th, 2024]
- AI vs. AGI: The Race for Performance, Battling the Cost? for NASDAQ:GOOG by Moshkelgosha - TradingView - April 20th, 2024 [April 20th, 2024]
- We've Been Here Before: AI Promised Humanlike Machines In 1958 - The Good Men Project - April 20th, 2024 [April 20th, 2024]
- Google will spend more than $100 billion on AI, exec says - Quartz - April 20th, 2024 [April 20th, 2024]
- Tech companies want to build artificial general intelligence. But who decides when AGI is attained? - The Bakersfield Californian - April 8th, 2024 [April 8th, 2024]
- Tech companies want to build artificial general intelligence. But who decides when AGI is attained? - The Caledonian-Record - April 8th, 2024 [April 8th, 2024]
- What is AGI and how is it different from AI? - ReadWrite - April 8th, 2024 [April 8th, 2024]
- Artificial intelligence in healthcare: defining the most common terms - HealthITAnalytics.com - April 8th, 2024 [April 8th, 2024]
- We're Focusing on the Wrong Kind of AI Apocalypse - TIME - April 8th, 2024 [April 8th, 2024]
- Xi Jinping's vision in supporting the artificial intelligence at home and abroad - Modern Diplomacy - April 8th, 2024 [April 8th, 2024]
- As 'The Matrix' turns 25, the chilling artificial intelligence (AI) projection at its core isn't as outlandish as it once seemed - TechRadar - April 8th, 2024 [April 8th, 2024]
- AI & robotics briefing: Why superintelligent AI won't sneak up on us - Nature.com - January 10th, 2024 [January 10th, 2024]
- Get Ready for the Great AI Disappointment - WIRED - January 10th, 2024 [January 10th, 2024]
- Part 3 Capitalism in the Age of Artificial General Intelligence (AGI) - Medium - January 10th, 2024 [January 10th, 2024]
- Artificial General Intelligence (AGI): what it is and why its discovery can change the world - Medium - January 10th, 2024 [January 10th, 2024]
- Exploring the Path to Artificial General Intelligence - Medriva - January 10th, 2024 [January 10th, 2024]
- The Acceleration Towards Artificial General Intelligence (AGI) and Its Implications - Medriva - January 10th, 2024 [January 10th, 2024]
- OpenAI Warns: "AGI Is Coming" - Do we have a reason to worry? - Medium - January 10th, 2024 [January 10th, 2024]
- The fight over ethics intensifies as artificial intelligence quickly changes the world - 9 & 10 News - January 10th, 2024 [January 10th, 2024]
- AI as the Third Window into Humanity: Understanding Human Behavior and Emotions - Medriva - January 10th, 2024 [January 10th, 2024]
- Artificial General Intelligence (AGI) in Radiation Oncology: Transformative Technology - Medriva - January 10th, 2024 [January 10th, 2024]
- Exploring the Potential of AGI: Opportunities and Challenges - Medium - January 10th, 2024 [January 10th, 2024]
- Full-Spectrum Cognitive Development Incorporating AI for Evolution and Collective Intelligence - Medriva - January 10th, 2024 [January 10th, 2024]
- Artificial Superintelligence - Understanding a Future Tech that Will Change the World! - MobileAppDaily - January 10th, 2024 [January 10th, 2024]
- Title: AI Unveiled: Exploring the Realm of Artificial Intelligence - Medium - January 10th, 2024 [January 10th, 2024]
- The Simple Reason Why AGI (Artificial General Intelligence) Is Not ... - Medium - December 2nd, 2023 [December 2nd, 2023]