We Cannot Trust AI With Control Of Our Bombs – Fair Observer
A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable "hallucinations," resulting in potentially catastrophic outcomes. But there's an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.
The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film WarGames, a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced "whopper") nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The Terminator franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called Skynet that, like WOPR, was designed to control US nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.
Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of autonomous, or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called "robot generals." In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America's atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity's demise.
Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the US Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the air force requested $231 million to develop the Advanced Battlefield Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As C2 capabilities are increasingly loaded onto AI-controlled systems, they may soon be issuing fire instructions directly to shooters, largely bypassing human control.
"A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp, a military show of force, or early engagement." That's how Will Roper, assistant secretary of the air force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that "we do need to change the name" as the system evolves, Roper added: "I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don't think we can go there."
And while he can't "go there," that's just where the rest of us may, indeed, be going.
Mind you, that's only the start. In fact, the air force's ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all US combat forces: the Joint All-Domain Command-and-Control System (JADC2, pronounced "jad-cee-two"). "JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon to engage the target," the Congressional Research Service reported in 2022.
Initially, JADC2 will be designed to coordinate combat operations among conventional, or non-nuclear, American forces. Eventually, however, it is expected to link up with the Pentagon's nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. "JADC2 and NC3 are intertwined," General John E. Hyten, vice chairman of the Joint Chiefs of Staff, indicated in a 2020 interview. As a result, he added in typical Pentagonese, "NC3 has to inform JADC2 and JADC2 has to inform NC3."
It doesn't require great imagination to picture a time in the not-too-distant future when a crisis of some sort, say a US-China military clash in the South China Sea or near Taiwan, prompts ever more intense fighting between opposing air and naval forces. Imagine then JADC2 ordering an intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on US facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.
The possibility that nightmare scenarios of this sort could result in the accidental or unintended onset of nuclear war has long troubled analysts in the arms control community. But the growing automation of military C2 systems has generated anxiety not just among them but among senior national security officials as well.
As early as 2019, when I questioned Lieutenant General Jack Shanahan, director of the Pentagon's Joint Artificial Intelligence Center, about such a risky possibility, he responded: "You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control. This is the ultimate human decision that needs to be made and so we have to be very careful." Given the technology's immaturity, he added, "we need a lot of time to test and evaluate before applying AI to NC3."
In the years since, despite such warnings, the Pentagon has been racing ahead with the development of automated C2 systems. In its budget submission for 2024, the Department of Defense requested $1.4 billion for JADC2 in order to "transform warfighting capability by delivering information advantage at the speed of relevance across all domains and partners." Uh-oh! And then it requested another $1.8 billion for other kinds of military-related AI research.
Pentagon officials acknowledge that it will be some time before robot generals are commanding vast numbers of US troops (and autonomous weapons) in battle, but they have already launched several projects intended to test and perfect just such linkages. One example is the army's Project Convergence, involving a series of field exercises designed to validate ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma Proving Ground in Arizona, for example, the army used a variety of air- and ground-based sensors to track simulated enemy forces and then process that data using AI-enabled computers at Joint Base Lewis-McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. This entire sequence was supposedly accomplished "within 20 seconds," the Congressional Research Service later reported.
Less is known about the navy's AI equivalent, Project Overmatch, as many aspects of its programming have been kept secret. According to Admiral Michael Gilday, chief of naval operations, Overmatch is intended "to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain." Little else has been revealed about the project.
Despite all the secrecy surrounding these projects, you can think of ABMS, JADC2, Convergence and Overmatch as building blocks for a future Skynet-like mega-network of super-computers designed to command all US forces, including its nuclear ones, in armed combat. The more the Pentagon moves in that direction, the closer we'll come to a time when AI possesses life-or-death power over all American soldiers along with opposing forces and any civilians caught in the crossfire.
Such a prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned us, those algorithms are capable of remarkably inexplicable mistakes and, to use the AI term of the moment, "hallucinations": that is, seemingly reasonable results that are entirely illusory. Under the circumstances, it's not hard to imagine such computers "hallucinating" an imminent enemy attack and launching a war that might otherwise have been avoided.
And that's not the worst of the dangers to consider. After all, there's the obvious likelihood that America's adversaries will similarly equip their forces with robot generals. In other words, future wars are likely to be fought by one set of AI systems against another, both linked to nuclear weaponry, with entirely unpredictable, but potentially catastrophic, results.
Not much is known (from public sources at least) about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are thought to be developing networks comparable to the Pentagon's JADC2. As early as 2014, in fact, Russia inaugurated a National Defense Control Center (NDCC) in Moscow, a centralized command post for assessing global threats and initiating whatever military action is deemed necessary, whether of a non-nuclear or nuclear nature. Like JADC2, the NDCC is designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.
China is said to be pursuing an even more elaborate, if similar, enterprise under the rubric of Multi-Domain Precision Warfare (MDPW). According to the Pentagon's 2022 report on Chinese military developments, its military, the People's Liberation Army, is being trained and equipped to use AI-enabled sensors and computer networks to "rapidly identify key vulnerabilities in the US operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities."
Picture, then, a future war between the US and Russia or China (or both) in which JADC2 commands all US forces, while Russia's NDCC and China's MDPW command those countries' forces. Consider, as well, that all three systems are likely to experience errors and hallucinations. How safe will humans be when robot generals decide that it's time to win the war by nuking their enemies?
If this strikes you as an outlandish scenario, think again, at least according to the leadership of the National Security Commission on Artificial Intelligence, a congressionally mandated enterprise that was chaired by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. While the Commission believes that "properly designed, tested and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability," it affirmed in its Final Report. Such dangers could arise, it stated, "because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems on the battlefield," when, that is, AI fights AI.
Though this may seem an extreme scenario, it's entirely possible that opposing AI systems could trigger a catastrophic "flash war," the military equivalent of a "flash crash" on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous "Flash Crash" of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market's value. According to Paul Scharre of the Center for a New American Security, who first studied the phenomenon, the military equivalent of such crises on Wall Street would arise when the automated command systems of opposing forces become trapped in "a cascade of escalating engagements." In such a situation, he noted, autonomous weapons could lead to "accidental death and destruction at catastrophic scales" in an instant.
At present, there are virtually no measures in place to prevent a future catastrophe of this sort or even talks among the major powers to devise such measures. Yet, as the National Security Commission on Artificial Intelligence noted, such crisis-control measures are urgently needed to integrate "automated escalation tripwires" into such systems "that would prevent the automated escalation of conflict." Otherwise, some catastrophic version of World War III seems all too possible. Given the dangerous immaturity of such technology and the reluctance of Beijing, Moscow and Washington to impose any restraints on the weaponization of AI, the day when machines could choose to annihilate us might arrive far sooner than we imagine, and the extinction of humanity could be the collateral damage of such a future war.
[TomDispatch first published this piece.]
[Anton Schauble edited this piece.]
The views expressed in this article are the author's own and do not necessarily reflect Fair Observer's editorial policy.