Artificial Intelligence and Nuclear Stability
Policymakers around the world are grappling with the new opportunities and dangers that artificial intelligence presents. Of all the effects that AI could have on the world, among the most consequential would be its integration into nuclear command and control. Improperly used, AI in nuclear operations could have world-ending effects. Properly implemented, it could reduce nuclear risk by improving early warning and detection and enhancing the resilience of second-strike capabilities, both of which would strengthen deterrence. To take full advantage of these benefits, systems must account for the strengths and limitations of humans and machines. Successful human-machine joint cognitive systems will combine the precision and speed of automation with the flexibility of human judgment, and do so in a way that avoids automation bias and the surrender of human judgment to machines. Because AI implementation is still at an early stage, the United States has the potential to make the world safer by more clearly outlining its policies, pushing for broad international agreement, and acting as a normative trendsetter.
The United States has been extremely transparent and forward-leaning in establishing and communicating its policies on military AI and autonomous systems, publishing its policy on autonomy in weapons in 2012, adopting ethical principles for military AI in 2020, and updating its policy on autonomy in weapons in 2023. The Department of Defense stated formally and unequivocally in the 2022 Nuclear Posture Review that it will always maintain a human in the loop for nuclear weapons employment. In November 2023, over 40 nations joined the United States in endorsing a political declaration on the responsible military use of AI. Endorsing states included not just U.S. allies but also nations in Africa, Southeast Asia, and Latin America.
Building on this success, the United States should push for international agreements with other nuclear powers to mitigate the risks of integrating AI into nuclear systems or placing nuclear weapons onboard uncrewed vehicles. The United Kingdom and France released a joint statement with the United States in 2022 agreeing on the need to maintain human control of nuclear launches. Ideally, this could become the seed of a commitment by all permanent members of the United Nations Security Council, if Russia and China can be convinced to endorse the principle. Even if they are not willing to agree, the United States should further mature its own policies to address critical gaps and work with other nuclear-armed states to strengthen their commitments, both as an interim measure and as a way to build international consensus on the issue.
The Dangers of Automation
As militaries increasingly adopt AI and automation, there is an urgent need to clarify how these technologies should be used in nuclear operations. Absent formal agreements, states risk an incremental trend of creeping automation that could undermine nuclear stability. While policymakers are understandably reluctant to adopt restrictions on emerging technologies lest they give up a valuable future capability, U.S. officials should not be complacent in assuming other states will approach AI and automation in nuclear operations responsibly. Examples such as Russia's Perimeter dead hand system and its Poseidon autonomous nuclear-armed underwater drone demonstrate that other nations might see these risks differently than the United States and might be willing to take risks that U.S. policymakers would find unacceptable.
Existing systems, such as Russia's Perimeter, highlight the risks of states integrating automation into nuclear systems. Perimeter is reportedly a system created by the Soviet Union in the 1980s to act as a failsafe in case Soviet leadership was destroyed in a decapitation strike. Perimeter reportedly has a network of sensors to determine if a nuclear attack has occurred. If these sensors are triggered while Perimeter is activated, the system would wait a predetermined period of time for a signal from senior military commanders. If there is no signal from headquarters, presumably because Soviet/Russian leadership had been wiped out, then Perimeter would bypass the normal chain of command and pass nuclear launch authority to a relatively junior officer on duty. Senior Russian officials have stated the system is still functioning, noting in 2011 that the system was combat ready and in 2018 that it had been improved.
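To make the logic of such a failsafe concrete, the sketch below restates the open-source description of Perimeter as a simple decision rule. It is a hypothetical reconstruction for illustration only; the names, inputs, and structure are assumptions, not details of the actual system.

```python
from dataclasses import dataclass


@dataclass
class PerimeterState:
    """Notional inputs to the reported Perimeter decision logic (illustrative only)."""
    activated: bool                     # system switched on by leadership during a crisis
    nuclear_detonation_detected: bool   # seismic/light/radiation sensors triggered
    contact_with_command: bool          # link to senior military commanders still alive
    wait_period_elapsed: bool           # predetermined waiting period has run out


def delegate_launch_authority(state: PerimeterState) -> bool:
    """Return True if launch authority would pass to the duty officer.

    A hypothetical reconstruction of the open-source description above,
    not the actual Soviet/Russian implementation.
    """
    return (
        state.activated
        and state.nuclear_detonation_detected
        and state.wait_period_elapsed
        and not state.contact_with_command
    )
```

The point of the sketch is how little stands between sensor readings and delegated launch authority: every input is a machine judgment, and no step requires a human to weigh the broader context.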
The system was designed to reduce the burden on Soviet leaders of hastily making a nuclear decision under time pressure and with incomplete information. In theory, Soviet/Russian leaders could take more time to deliberate knowing that there is a failsafe guaranteeing retaliation if the United States succeeded in a decapitation strike. The cost, however, is a system that risks easing pathways to nuclear annihilation in the event of an accident.
Allowing autonomous systems to participate in nuclear launch decisions risks degrading stability and increasing the dangers of nuclear accidents. The Stanislav Petrov incident is an illustrative example of the dangers of automation in nuclear decision-making. In 1983, a Soviet early warning system indicated that the United States had launched several intercontinental ballistic missiles. Lieutenant Colonel Stanislav Petrov, the duty officer at the time, suspected that the system was malfunctioning because the number of missiles launched was suspiciously low and the missiles were not picked up by early warning radars. Petrov reported it (correctly) as a malfunction instead of an attack. AI and autonomous systems often lack the contextual understanding that humans have and that Petrov used to recognize that the reported missile launch was a false alarm. Without human judgment at critical stages of nuclear operations, automated systems could make mistakes or elevate false alarms, heightening nuclear risk.
Moreover, merely having humans in the loop will not be enough to ensure effective human decision-making. Human operators frequently fall victim to automation bias, a condition in which humans overtrust automation and surrender their judgment to machines. Accidents with self-driving cars demonstrate the dangers of humans overtrusting automation, and military personnel are not immune to this phenomenon. To ensure humans remain cognitively engaged in their decision-making, militaries will need to take into account not only the automation itself but also human psychology and human-machine interfaces.
More broadly, when designing human-machine systems, it is essential to consciously determine the appropriate roles for humans and machines. Machines are often better at precision and speed, while humans are often better at understanding the broader context and applying judgment. Too often, human operators are left to fill in the gaps for what automation can't do, acting as backups or failsafes for the edge cases that autonomous systems can't handle. But this model often fails to account for the realities of human psychology. Even if human operators don't fall victim to automation bias, it is not realistic to assume that a person can sit passively watching a machine perform a task for hours on end, whether a self-driving car or a military weapon system, and then suddenly and correctly identify a problem when the automation fails and leap into action to take control. Human psychology doesn't work that way. Tragic accidents with complex, highly automated systems, such as the Air France 447 crash in 2009 and the 737 MAX crashes in 2018 and 2019, demonstrate the importance of accounting for the dynamic interplay between automation and human operators.
The U.S. military has also suffered tragic accidents with automated systems, even when humans are in the loop. In 2003, U.S. Army Patriot air and missile defense systems shot down two friendly aircraft during the opening phases of the Iraq war. Humans were in the loop for both incidents. Yet a complex mix of human and technical failures meant that human operators did not fully understand the complex, highly automated systems they were in charge of and were not effectively in control.
The military will need to establish guidance to inform system design, operator training, doctrine, and operational procedures to ensure that humans in the loop aren't merely unthinking cogs in a machine but actually exercise human judgment. Issuing this concrete guidance for weapons developers and operators is most critical in the nuclear domain, where the consequences of an accident could be grave.
Clarifying Department of Defense Guidance
Recent policies and statements on the role of autonomy and AI in nuclear operations are an important first step in establishing this much-needed guidance, but additional clarification is needed. The 2022 Nuclear Posture Review states: "In all cases, the United States will maintain a human 'in the loop' for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment." The United Kingdom adopted a similar policy in 2022, stating in its Defence Artificial Intelligence Strategy: "We will ensure that, regardless of any use of AI in our strategic systems, human political control of our nuclear weapons is maintained at all times."
As the first official policies on AI in nuclear command and control, these are landmark statements. Senior U.S. military officers had previously emphasized the importance of human control over nuclear weapons, including 2019 statements by Lt. Gen. Jack Shanahan, then-director of the Joint Artificial Intelligence Center. Official policy statements are more significant, however, in signaling to audiences both internal and external to the military the importance of keeping humans firmly in charge of all nuclear use decisions. These high-level statements nevertheless leave many open questions about implementation.
The next step for the Department of Defense is to translate what the high-level principle of a human in the loop means for nuclear systems, doctrine, and training. Key questions include: Which actions are critical to informing and executing decisions by the president? Do those consist only of actions immediately surrounding the president, or do they also include actions further down the chain of command, before and after a presidential decision? For example, would it be acceptable for a human to deliver an algorithm-based recommendation to the president to carry out a nuclear attack? Or does a human need to be involved in understanding the data and rendering their own human judgment?
The U.S. military already uses AI to process information, such as satellite images and drone video feeds. Presumably, AI would also be used for intelligence analysis that could inform decisions about nuclear use. Under what circumstances is AI appropriate and beneficial to nuclear stability? Are some applications and ways of using AI more valuable than others?
When AI is used, what safeguards should be put in place to guard against mistakes, malfunctions, or spoofing of AI systems? For example, the United States currently employs a "dual phenomenology" mechanism to ensure that a potential missile attack is confirmed by two independent sensing methods, such as satellites and ground-based radars. Should the United States adopt a "dual algorithm" approach to any use of AI in nuclear operations, ensuring that there are two independent AI systems trained on different data sets with different algorithms as a safeguard against spoofing attacks or unreliable AI systems?
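One way to picture a "dual algorithm" safeguard is as a software analogue of dual phenomenology: no single model's output is treated as confirmation. The sketch below is a notional illustration under assumed inputs, thresholds, and detector signatures; it does not describe any fielded system.

```python
from typing import Callable, Sequence

# Hypothetical, independently developed detectors: one trained on ground-based
# radar tracks, one on satellite infrared data. Signatures are assumptions.
RadarDetector = Callable[[Sequence[float]], float]      # returns estimated P(missile launch)
InfraredDetector = Callable[[Sequence[float]], float]   # returns estimated P(missile launch)


def dual_algorithm_alert(radar_detector: RadarDetector,
                         infrared_detector: InfraredDetector,
                         radar_track: Sequence[float],
                         infrared_frame: Sequence[float],
                         threshold: float = 0.9) -> str:
    """Escalate to human review only when two independent models, built on
    different data and different algorithms, agree."""
    p_radar = radar_detector(radar_track)
    p_infrared = infrared_detector(infrared_frame)
    if p_radar >= threshold and p_infrared >= threshold:
        return "escalate to human review"          # both independent systems concur
    if p_radar >= threshold or p_infrared >= threshold:
        return "flag disagreement for analysts"    # single-source alert: possible spoofing or malfunction
    return "no alert"
```

The design choice this illustrates is that disagreement between the two systems is itself useful information, pointing analysts toward a possible sensor failure, data problem, or adversarial manipulation rather than an attack.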
When AI systems are used to process information, how should that information be presented to human operators? For example, if the military used an algorithm trained to detect signs of a missile being fueled, that information could be interpreted differently by humans if the AI system reported "fueling" versus "preparing to launch." "Fueling" is a more precise and accurate description of what the AI system is actually detecting and might lead a human analyst to seek more information, whereas "preparing to launch" is a conclusion that might or might not be appropriate depending on the broader context.
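The distinction between reporting an observable and reporting a conclusion can be made concrete in how a detection is packaged for the analyst. The sketch below is purely illustrative; the report fields and labels are assumptions, not an actual interface.

```python
from dataclasses import dataclass


@dataclass
class DetectionReport:
    """Illustrative report format that separates what the model observed
    from any conclusion a human might draw. Field names are assumptions."""
    observable: str    # what the model was trained to detect, e.g. "fueling activity"
    confidence: float  # model confidence in the detection, 0.0 to 1.0
    sensor: str        # data source the detection came from
    caveat: str = "Detection of an observable, not an assessment of intent."


def report_fueling_detection(confidence: float, sensor: str) -> DetectionReport:
    # Label the output as the observable itself ("fueling activity"),
    # not an inferred intention such as "preparing to launch".
    return DetectionReport(observable="fueling activity",
                           confidence=confidence,
                           sensor=sensor)
```

Keeping the inferential leap out of the machine's report leaves the judgment about intent where the article argues it belongs: with the human analyst weighing the broader context.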
When algorithmic recommendation systems are used, how much of the underlying data should humans have to directly review? Is it sufficient for human operators to only see the algorithm's conclusion, or should they also have access to the raw data that supports the algorithm's recommendation?
Finally, what degree of engagement is expected from a human in the loop? Is the human merely there as a failsafe in case the AI malfunctions? Or must the human be engaged in the process of analyzing information, generating courses of action, and making recommendations? Are some of these steps more important than others for human involvement?
These are critical questions that the United States will need to address as it seeks to harness the benefits of AI in nuclear operations while meeting its human-in-the-loop policy. The sooner the Department of Defense can clarify answers to these questions, the faster it can accelerate AI adoption in ways that are trustworthy and meet the necessary reliability standards for nuclear operations. Nor would clarifying these questions overly constrain how the United States approaches AI. Guidance can always be changed over time as the technology evolves. But a lack of clear guidance risks forgoing valuable opportunities to use AI or, even worse, adopting AI in ways that might undermine nuclear surety and deterrence.
Dead Hand Systems
In clarifying its human-in-the-loop policy, the United States should make a firm commitment to reject dead hand nuclear launch systems or any system with a standing order to launch that incorporates algorithmic components. Dead hand systems akin to Russia's Perimeter would appear to be prohibited by current Department of Defense policy. However, the United States should explicitly state that it will not build such systems, given their risks.
Despite their danger, some U.S. analysts have suggested that the United States should adopt a dead hand system to respond to emerging technologies such as AI, hypersonics, and advanced cruise missiles. There are safer methods for responding to these threats, however. Rather than gambling humanity's future on an algorithm, the United States should strengthen its second-strike deterrent in response to new threats.
Some members of the U.S. Congress have even expressed a desire to write this requirement into law. In April 2023, a bipartisan group of representatives introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act, which would prohibit funding for any system that launches nuclear weapons without meaningful human control. There is precedent for a legal requirement to maintain a human in the loop for strategic systems. In the 1980s, during development of the Strategic Defense Initiative (also known as "Star Wars"), Congress passed a law requiring an "affirmative human decision at an appropriate level of authority" for strategic missile defense systems. This legislation could serve as a blueprint for a similar legislative requirement for nuclear use. One benefit of a legal requirement is that it ensures such an important policy could not be overturned, without congressional authorization, by a future administration or Pentagon leadership that is more risk-accepting.
Nuclear Weapons and Uncrewed Vehicles
The United States should similarly clarify its policy on nuclear weapons on uncrewed vehicles. The United States is producing a new nuclear-capable strategic bomber, the B-21, that will be able to perform uncrewed missions in the future, and is developing large undersea uncrewed vehicles that could carry weapons payloads. U.S. military officers have expressed strong reluctance to place nuclear weapons aboard uncrewed platforms. In 2016, then-Commander of Air Force Global Strike Command Gen. Robin Rand noted that the B-21 would always be crewed when carrying nuclear weapons: "If you had to pin me down, I like the man in the loop; the pilot, the woman in the loop, very much, particularly as we do the dual-capable mission with nuclear weapons." General Rand's sentiment may be shared among senior military officers, but it is not official policy. The United States should adopt an official policy that nuclear weapons will not be placed aboard recoverable uncrewed platforms. Establishing this policy could help provide guidance to weapons developers and the services about the appropriate role for uncrewed platforms in nuclear operations as the Department of Defense fields larger uncrewed and optionally crewed platforms.
Nuclear weapons have long been placed on uncrewed delivery vehicles, such as ballistic and cruise missiles, but placing nuclear weapons on a recoverable uncrewed platform such as a bomber is fundamentally different. A human decision to launch a nuclear missile is a decision to carry out a nuclear strike. By contrast, humans could send a recoverable, two-way uncrewed platform, such as a drone bomber or undersea autonomous vehicle, out on patrol. In that case, the human decision to launch the nuclear-armed drone would not yet be a decision to carry out a nuclear strike. Instead, the drone could be sent on patrol as an escalation signal or to preposition in case of a later decision to launch a nuclear attack. Doing so would put enormous faith in the drone's communications links and on-board automation, both of which may be unreliable.
The U.S. military has lost control of drones before. In 2017, a small tactical Army drone flew over 600 miles from southern Arizona to Denver after Army operators lost communications. In 2011, a highly sensitive U.S. RQ-170 stealth drone ended up in Iranian hands after U.S. operators lost contact with it over Afghanistan. Losing control of a nuclear-armed drone could cause nuclear weapons to fall into the wrong hands or, in the worst case, escalate a nuclear crisis. The only way to maintain nuclear surety is direct, physical human control over nuclear weapons up until the point of a decision to carry out a nuclear strike.
While the U.S. military would likely be extremely reluctant to place nuclear weapons onboard a drone aircraft or undersea vehicle, Russia is already developing such a system. The Poseidon, or Status-6, undersea autonomous uncrewed vehicle is reportedly intended as a second- or third-strike weapon to deliver a nuclear attack against the United States. How Russia intends to use the weapon is unclear and could evolve over time, but an uncrewed platform like the Poseidon could, in principle, be sent on patrol, risking dangerous accidents. Other nuclear powers could see value in nuclear-armed drone aircraft or undersea vehicles as these technologies mature.
The United States should build on its current momentum in shaping global norms on military AI use and work with other nations to clarify the dangers of nuclear-armed drones. As a first step, the U.S. Defense Department should clearly state as a matter of official policy that it will not place nuclear weapons on two-way, recoverable uncrewed platforms, such as bombers or undersea vehicles. The United States has at times forsworn dangerous weapons in other areas, such as debris-causing anti-satellite weapons, and publicly articulated their dangers. Similarly explaining the dangers of nuclear-armed drones could help shape the behavior of other nuclear powers, potentially forestalling their adoption.
Conclusion
It is imperative that nuclear powers approach the integration of AI and autonomy into their nuclear operations thoughtfully and deliberately. Some applications, such as using AI to help reduce the risk of a surprise attack, could improve stability. Other applications, such as dead hand systems, could be dangerous and destabilizing. Russia's Perimeter and Poseidon systems demonstrate that other nations might be willing to take risks with automation and autonomy that U.S. leaders would see as irresponsible. It is essential for the United States to build on its current momentum to clarify its own policies and work with other nuclear-armed states to seek international agreement on responsible guardrails for AI in nuclear operations. Rumors of a U.S.-Chinese agreement on AI in nuclear command and control at the meeting between President Joseph Biden and General Secretary Xi Jinping offer a tantalizing hint of the possibilities for nuclear powers to come together to guard against the risks of AI integrated into humanity's most dangerous weapons. The United States should seize this moment to build a safer, more stable future.
Michael Depp is a research associate with the AI safety and stability project at the Center for a New American Security (CNAS).
Paul Scharre is the executive vice president and director of studies at CNAS and the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.
Image: U.S. Air Force photo by Senior Airman Jason Wiese