How politics and business are driving the AI arms race with China – Bulletin of the Atomic Scientists
In March, thousands of tech leaders, Elon Musk among them, signed an open letter asking artificial intelligence (AI) labs to pause the training of next-generation AI systems for at least six months. There is precedent for such temporary pauses in other fields of research: In 2019, for example, scientists successfully called for a moratorium on any human gene editing that would pass heritable changes in DNA on to genetically modified children.
While a pause in AI development is unlikely to happen, the call for one at least suggests that the United States is finally starting to recognize the importance of regulating AI systems.
The reasons a pause in AI won't happen are manifold, and they are about more than just the research itself. Critics of the proposed pause argue that regulating or restricting AI would help China pull ahead in AI development, causing the United States to lose its military and economic edge. To be sure, the United States must keep its citizens secure. But failing to regulate AI, or to coordinate with China in cases where doing so is in the United States' interest, would also endanger US citizens.
History shows that this worry is more than theoretical. As a presidential candidate, John F. Kennedy promoted the "missile gap" narrative to make President Dwight D. Eisenhower seem weak on defense, claiming that the Soviet Union was overtaking the United States in nuclear missile deployment. Kennedy's rhetoric may have helped him politically, but it also hindered cooperation with the Soviet leadership. Historically, arms races have often been driven more by domestic economics and politics than by rational responses to external threats.
China, which actually regulates AI much more tightly than the United States or even the European Union and is likely to be hamstrung by US semiconductor export controls in the coming years, is far behind the United States in AI development. Much like the Cold War nuclear arms race, today's US-China AI competition is heavily influenced by domestic forces such as private interest groups, bureaucratic infighting, electoral politics, and public opinion. By better understanding these domestic forces, policy makers in the United States can minimize the risks faced by the United States, China, and the world.
Private interests. In the US-China AI competition, companies developing AI systems and promoting their own interests might lobby against domestic or international AI regulation. There is historical precedent for this. In 2001, the United States rejected a Protocol to strengthen the Biological Weapons Convention, in part because of pressure from the US chemical and pharmaceutical industries, which wanted to limit inspections of their facilities.
US AI companies appear to be aware of the risks posed by their products. OpenAI's stated mission is to ensure that artificial general intelligence "benefits all of humanity." DeepMind's operating principles include a commitment to act as responsible pioneers in the field of AI. DeepMind's founders have pledged not to work on lethal AI, and Google's AI Principles state that Google will not deploy or design AI for weapons intended to injure humans, or for surveillance that violates international norms.
However, there are already worrisome signs that commercial competition may undermine these commitments. Google, fearing that OpenAI's ChatGPT could replace its search engine, told employees it would recalibrate the amount of risk it is prepared to accept when deploying new AI systems. While not strictly relevant to international agreements, this move suggests that tech companies are willing to compromise on AI safety in response to commercial incentives.
Another potentially concerning development is the creation of links between AI startups and big tech companies. OpenAI partnered with Microsoft in January, and Google acquired DeepMind in 2014. Acquisition and partnership may limit the ability of AI startups to act in ways that lower risk. DeepMind and Google, for example, have clashed over the governance of DeepMind projects since their merger.
Lobbying may also raise risks. The big tech companies are experienced lobbyists: Amazon spent $21.4 million on lobbying in 2022, making it the 6th largest spender; Meta (the parent company of Facebook, Instagram, and WhatsApp) came in 10th with $19.2 million; and Alphabet (parent company of Google) was 19th with $13.2 million. Last year, big tech companies increased their donations to US foreign policy think tanks in an effort to promote the argument that stricter rules will harm their ability to compete with China.
In the future, suppliers of military AI systems might increase the chances of an AI arms race by lobbying for the development of more advanced weapons systems, or by opposing arms control agreements that would limit their future sales. This is probably a long way off. An analysis from the Brookings Institution, a nonprofit public policy organization, found that 95 percent of federal contracts from the last five years with "artificial intelligence" in the description were for professional, scientific, and technical services (essentially external funding for research and development). The same analysis found that there were 307 different vendors and 474 total contracts.
Taken together, this analysis suggests an immature market, with many smaller vendors focused on developing AI systems rather than on larger contracts for supplying hardware or software, which are more typical for military procurement. In the future, though, larger contracts for military AI and a more concentrated supplier base would probably mean increased lobbying by military AI suppliersand increased chances of a military AI arms race.
Bureaucratic politics. There were many instances of bureaucratic politics exacerbating the Cold War nuclear arms race. As Slate columnist Fred Kaplan, the author of several books on military strategy, has described, the Air Force and the Navy repeatedly came up with new nuclear strategies and doctrines that would give them more of the nuclear weapons budget. For example, the Navy's think tank came up with "finite deterrence," which held that the United States could deter the Soviet Union by deploying a relatively small number of nuclear missiles on submarines, obviating the need for large numbers of nuclear bombers and missiles (which were operated by the Air Force).
Bureaucratic incentives often cause organizations to attempt to accumulate more resources and influence than is optimal from the perspective of the state. Although most cutting-edge AI development is currently carried out in the private sector, that could change. History suggests that as a technologys strategic importance and cost grow, the inclination and capacity for the state to exert control over its development and deployment will also grow.
There is another reason for concern about AI developed in or for the public sector, particularly the defense sector, despite the current private-sector dominance. As former US Navy Secretary Richard Danzig has written, military development and use of technology tends to be particularly risky for several reasons: secrecy makes oversight and regulation more difficult; warfare environments are unpredictable; and military operations are adversarial and unconstrained. The military already accounts for a significant proportion of US government spending on AI.
Regardless of how the military uses AI, it is likely there will be resistance to any AI arms control initiatives. An arms control agreement almost always interferes with the interests of one or more groups within the defense establishment. Military support is particularly important for ratification, which is why President Kennedy had to abandon his push for a comprehensive test ban in the face of resistance from the Joint Chiefs of Staff.
Electoral politics and public opinion. The relationship between foreign policy and electoral politics is not straightforward. An influential paper published in 2005 found that US foreign policy is most heavily and consistently influenced by internationally oriented business leaders, followed by experts, with some small influence for organized labor groups, and very weak or no influence from public opinion. (It should be noted that not all researchers agree with this finding, however: Many case studies and experiments have found that public opinion does influence decision makers in certain circumstances.)
Studies suggest public opinion is more important for high-salience issues, that is, issues that are seen as particularly noticeable or important. Public opinion does not come into play as much for issues that (rightly or wrongly) feel less relevant. For example, voters generally do not care much about trade policy: They do not know their political representatives' trade policy positions, so trade policy does not affect their voting behavior. The "Secret Congress" theory contends that it is much easier to pass legislation on topics that fly under the radar and are consequently not politically salient. By that logic, if AI policy issues were politically salient and the parties were divided on them, it would be much more difficult to pass regulation and treaties that would reduce risks from AI.
At the moment, AI is too esoteric to be politically salient, although this is starting to change. The electoral politics of AI policy are overshadowed by broader concerns about strategic competition with China. In the United States, elite, business, and public opinion have shifted toward the view that engagement with China has failed and that a more confrontational approach is now required. Current US policy toward China, including accelerating US AI development and restricting Chinese AI progress, commands bipartisan support.
However, if one party becomes more hawkish on China-related policy issues, public opinion on AI might split accordingly, with supporters of the more hawkish party viewing cooperation on AI policy less favorably. Something similar may have happened in the past with nuclear weapons. There is some evidence to suggest that Obama's 2009 Prague speech, in which he announced America's commitment to seek "the peace and security of a world without nuclear weapons," led to disarmament becoming associated with Obama personally. This polarized the issue of arms control and disarmament along partisan lines, making future policy making more difficult.
If AI policy issues do become politically salient, the history and political science literature suggest that electoral politics might impede arms control in a number of ways. For example, if arms control policy gets caught up in partisan politics, it becomes much harder to develop and implement, particularly given that treaty ratification requires a two-thirds majority in the Senate.
In the past, political groups have held dovish positions on some nuclear issues while holding hawkish positions on others. For example, the Nunn-Lugar Cooperative Threat Reduction program, which worked with the states of the former Soviet Union to dismantle and secure the legacies of the Cold War, had strong bipartisan support, even as arms control agreements faced resistance from many Republicans. Certain nuclear issues are idiosyncratic. For example, Iran issues are politicized differently than other nuclear issues because of the link to Israel's security: Many otherwise liberal Democrats who are Jewish or represent heavily Jewish districts are hawkish on Iran. AI may turn out to be similar, with political cooperation on some aspects of AI policy and partisan gridlock on others.
Finally, it is worth noting that the large number of potential uses for AI means that AI will touch people's lives frequently and in significant ways. However, it is unlikely that these applications will cohere into a consistent pro- or anti-AI perspective. Public opinion on AI foreign policy will probably resemble opinion on other technology-related foreign policy issues, with the two major parties split according to their levels of hawkishness.
The United States has a tricky balance to strike. On the one hand, promoting AI development could create economic and social benefits, and the government has a duty to keep US citizens safe by maintaining technological superiority. On the other hand, if AI is not sufficiently well regulated, and the United States and China can't cooperate where necessary, the whole world could be at risk.
Striking this balance is like walking a tightrope. Domestic forces threaten to knock the United States off balance.