OpenAI Quietly Deletes Ban on Using ChatGPT for Military and Warfare – The Intercept
OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.
Up until January 10, OpenAI's usage policies page included a ban on "activity that has high risk of physical harm, including," specifically, "weapons development" and "military and warfare." That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to "use our service to harm yourself or others" and gives "develop or use weapons" as an example, but the blanket ban on "military and warfare" use has vanished.
The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document "clearer and more readable," and which includes many other substantial language and formatting changes.
"We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs," OpenAI spokesperson Niko Felix said in an email to The Intercept. "A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples."
Felix declined to say whether the vaguer "harm" ban encompassed all military use, writing, "Any use of our technology, including by the military, to '[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,' is disallowed."
"OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications," said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper she co-authored with OpenAI researchers that specifically flagged the risk of military use. Khlaaf added that the new policy seems to emphasize legality over safety. "There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law," she said. "Developing weapons, and carrying out activities related to military and warfare, is lawful to various extents. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within large language models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties."
The real-world consequences of the policy are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear "military and warfare" ban in the face of increasing interest from the Pentagon and U.S. intelligence community.
"Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words 'military and warfare' from OpenAI's permissible use policy," said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission. "The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement."
While nothing OpenAI offers today could plausibly be used to directly kill someone, militarily or otherwise (ChatGPT can't maneuver a drone or fire a missile), any military is in the business of killing, or at least maintaining the capacity to kill. There are any number of killing-adjacent tasks that an LLM like ChatGPT could augment, like writing code or processing procurement orders. A review of custom ChatGPT-powered bots offered by OpenAI suggests U.S. military personnel are already using the technology to expedite paperwork. The National Geospatial-Intelligence Agency, which directly aids U.S. combat efforts, has openly speculated about using ChatGPT to aid its human analysts. Even if OpenAI tools were deployed by portions of a military force for purposes that aren't directly violent, they would still be aiding an institution whose main purpose is lethality.
Experts who reviewed the policy changes at The Intercept's request said OpenAI appears to be silently weakening its stance against doing business with militaries. "I could imagine that the shift away from 'military and warfare' to 'weapons' leaves open a space for OpenAI to support operational infrastructures as long as the application doesn't directly involve weapons development narrowly defined," said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. "Of course, I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, removing the weapon from the sociotechnical system, including command and control infrastructures, of which it's part." Suchman, a scholar of artificial intelligence since the 1970s and member of the International Committee for Robot Arms Control, added, "It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons."
Suchman and Myers West both pointed to OpenAI's close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company's software tools.
The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large language models, a type of software tool that can rapidly and dexterously generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp on reality and often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis, or at least the simulacrum of analysis, makes them a natural fit for the data-laden Defense Department.
While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November address, Deputy Secretary of Defense Kathleen Hicks stated that AI is "a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1," though she cautioned that most current offerings "aren't yet technically mature enough to comply with our ethical AI principles."
Last year, Kimberly Sablon, the Pentagon's principal director for trusted AI and autonomy, told a conference in Hawaii that "[t]here's a lot of good there in terms of how we can utilize large language models like [ChatGPT] to disrupt critical functions across the department."