Archive for the ‘Artificial Intelligence’ Category

NSF Announces $140 Million Investment In Seven Artificial Intelligence Research Institutes – Forbes

The U.S. National Science Foundation, along with several other federal agencies, will fund seven new artificial intelligence research centers led by university investigators. (Getty)

The U.S. National Science Foundation (NSF), along with several other federal agencies and higher education institutions, has announced a $140 million investment to establish seven new National Artificial Intelligence Research Institutes (AI Institutes).

The initiative represents a major effort by the federal government to develop an AI workforce and to advance fundamental understanding of the technology's uses and risks. Funding for each institute, which includes collaborations among several universities, runs up to $20 million over a five-year period.

According to the announcement, the new AI Institutes will conduct research in several areas, including promoting ethical and trustworthy AI systems and technologies, developing novel approaches to cybersecurity, addressing climate change, expanding our understanding of the brain, and enhancing education and public health.

"The National AI Research Institutes are a critical component of our Nation's AI innovation, infrastructure, technology, education, and partnerships ecosystem," said NSF Director Sethuraman Panchanathan in the announcement. "These institutes are driving discoveries that will ensure our country is at the forefront of the global AI revolution."

In addition to the National Science Foundation, the AI Institutes will be supported by funding from the U.S. Department of Commerce's National Institute of Standards and Technology; the U.S. Department of Homeland Security's Science and Technology Directorate; the U.S. Department of Agriculture's National Institute of Food and Agriculture; the U.S. Department of Education's Institute of Education Sciences; the U.S. Department of Defense's Office of the Under Secretary of Defense for Research and Engineering; and the IBM Corporation.

The new AI Institutes focus on the following six research themes:

Trustworthy AI

Led by the University of Maryland, the NSF Institute for Trustworthy AI in Law & Society (TRAILS) aims to transform the practice of AI from one driven primarily by technological innovation to one driven by attention to ethics, human rights, and support for voices that have been marginalized in mainstream AI. It will focus on investigating what trust in AI looks like, whether current technical solutions for AI can be trusted, and which policy models can effectively sustain AI trustworthiness.

Intelligent Agents for Next-Generation Cybersecurity

Led by the University of California, Santa Barbara, the AI Institute for Agent-based Cyber Threat Intelligence and Operation (ACTION) will develop approaches that use AI to anticipate and take corrective actions against cyberthreats targeting the security and privacy of computer networks and their users. Researchers will work with experts in security operations to develop an approach in which AI-enabled intelligent security agents cooperate with humans to improve the security and resilience of computer systems.

Climate Smart Agriculture and Forestry

The AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy (AI-CLIMATE) will be led by the University of Minnesota Twin Cities. It will focus on incorporating knowledge from agriculture and forestry sciences to develop AI methods to curb climate effects while enhancing rural economies. A main goal will be to lower the cost of and improve accounting for carbon in farms and forests to empower carbon markets and inform decision-making.

Neural and Cognitive Foundations of Artificial Intelligence

Led by Columbia University, the Neural and Cognitive Foundations of Artificial Intelligence Institute (ARNI) will focus on connecting progress made in AI to the revolution in understanding of the brain. It will conduct interdisciplinary research between neuroscience, cognitive science, and AI.

AI for Decision Making

The AI Institute for Societal Decision Making (AI-SDM), led by Carnegie Mellon University, will develop AI for more effective responses in rapidly developing scenarios like disaster management and public health. AI-SDM will enable emergency managers, public health officials, first responders, community workers, and the public to make better data-driven decisions.

AI-Augmented Learning to Expand Education Opportunities and Improve Outcomes

Led by the University of Illinois, Urbana-Champaign, the AI Institute for Inclusive Intelligent Technologies for Education (INVITE) seeks to develop AI tools and approaches to support three noncognitive skills that underlie effective learning: persistence, academic resilience, and collaboration. It will focus on how children communicate STEM content, how they learn to persist through challenging work, and how teachers can promote noncognitive skill development.

The AI Institute for Exceptional Education (AI4ExceptionalEd) will be led by the University at Buffalo. It will attempt to develop a universal speech and language screener for children. The AI screener will analyze video and audio of children in their classrooms and help tailor interventions for children who need speech and language services.

"Increasing AI system trustworthiness while reducing its risks will be key to unleashing AI's potential benefits and ensuring our shared societal values," said Under Secretary of Commerce for Standards and Technology and National Institute of Standards and Technology Director Laurie E. Locascio. "Today, the ability to measure AI system trustworthiness and its impacts on individuals, communities and society is limited. TRAILS can help advance our understanding of the foundations of trustworthy AI, ethical and societal considerations of AI, and how to build systems that are trusted by the people who use and are affected by them."

I am president emeritus of Missouri State University. After earning my B.A. from Wheaton College (Illinois), I was awarded a Ph.D. in clinical psychology from the University of Illinois in 1973. I then joined the faculty at the University of Kentucky, where I progressed through the professorial ranks and served as director of the Clinical Psychology Program, chair of the department of psychology, dean of the graduate school, and provost. In 2005, I was appointed president of Missouri State University. Following retirement from Missouri State in 2011, I became senior policy advisor to Missouri Governor Jay Nixon. Recently, I have authored two books: Degrees and Pedigrees: The Education of America's Top Executives (2017) and Coming to Grips With Higher Education (2018), both published by Rowman & Littlefield.

See the original post:
NSF Announces $140 Million Investment In Seven Artificial Intelligence Research Institutes - Forbes

With Artificial Intelligence and Leadership, There is a ‘Learning Curve’ – GovExec.com

The rest is here:
With Artificial Intelligence and Leadership, There is a 'Learning Curve' - GovExec.com

AI: Reducing lawyers’ workloads and costs with artificial intelligence … – The Tri-City News

Automation, not advice, the current path forward for artificial intelligence in the legal sector

Large language models like ChatGPT are trained on a huge corpus of text, meaning they have access to enough information to pass the Law School Admission Test (LSAT) with high scores.

But ask ChatGPT for legal advice and it will tell you to call a lawyer, adding that it is not authorized to give legal advice. Not yet, at least.

Russell Alexander, a Canadian lawyer who runs a family law firm in Ontario, is writing a book about AI and the law.

He thinks it's only a matter of time before non-professionals start using AI programs to offer legal advice without proper legal training or certification.

"I think this will be just around the corner, the unauthorized practice of law," Alexander told BIV. "There'll be people, probably, using AI to give legal advice when they're not licensed to do that. Or they might be licensed but they're not licensed to practise in British Columbia or Ontario. So that's going to be a tough regulatory issue for our governing bodies to deal with."

That is just one of the issues Canada's new Artificial Intelligence and Data Act may have to address: the use of AI to provide services or advice by non-professionals.

There may be other ethical and legal challenges that arise from the use of AI in the law as well, but generally speaking, Alexander said he believes AI will be a positive new tool that reduces lawyers' workloads and costs.

"Lawyers are not going to be replaced by AI; lawyers who use AI will replace other lawyers," Alexander said. "You need to get on board."

In January, Alexander started a 30-day daily blog series on artificial intelligence and the law, based on his experiences using OpenAI's ChatGPT-3. As a result, Alexander decided he needed to write a book, which he expects to be out in a few weeks.

His firm has also contracted a software company in Seattle to tailor-make some software so that his firm can use AI as part of its routine practice.

Alexander has identified 30 ways that AI can help law firms and lawyers.

"The implications are huge," he said. "Predictive analysis, contract analysis, legal research, legal drafting, document management, case management, legal chatbots, virtual assistants."

One way Alexander's firm is using AI to reduce lawyer workloads is by applying it to the production of final reports to clients.

"One of the things lawyers don't like to do is the final report to the client, because it takes some time to get a court order," Alexander said.

"Usually, they'll bill for it. What we can do now is take the court order, drop it into AI and AI will produce the final report based on that court order. The lawyer's still going to edit it and review it, but it's going to be a lot more time efficient.

"So those are real-life examples of how we can use AI right now. Our firm has started doing this."
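
As a rough illustration of the workflow Alexander describes (court order in, draft report out, lawyer review at the end), here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and file handling are assumptions made for illustration; his firm's custom Seattle-built software is not public and will certainly differ.

```python
# Illustrative sketch only: ask a language model to turn a court order into a
# draft final report for a lawyer to review and edit. Assumes the `openai`
# Python package (v1+) and an OPENAI_API_KEY in the environment; the model
# name and prompt are placeholders, not the firm's actual tooling.
from openai import OpenAI

client = OpenAI()

def draft_final_report(court_order_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You draft plain-language final reports to family-law "
                        "clients based on court orders. Flag anything you are "
                        "unsure about for lawyer review instead of guessing."},
            {"role": "user", "content": court_order_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("court_order.txt") as f:
        print(draft_final_report(f.read()))  # a lawyer still edits this draft
```

The human-in-the-loop step Alexander emphasizes sits outside the code: the script produces a draft, nothing more.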

AI can take on some of the grunt work that heretofore has required a human with the ability to read, analyze and write. While it can reduce workloads, lawyers will still be needed to oversee the work, because AI is not without its flaws and foibles.

"There's some biases that are built into AI," Alexander noted. "There's examples of this amazing AI where the program makes stuff up that's completely false. So, lawyers still need to have their hand on the rudder."

But overall, he sees it as a new tool that will improve efficiencies in law firms everywhere.

"It's a great opportunity to make us much more efficient."

nbennett@biv.com

twitter.com/nbennett_biv

Here is the original post:
AI: Reducing lawyers' workloads and costs with artificial intelligence ... - The Tri-City News

Snoop Dogg addresses risks of artificial intelligence: ‘Sh– what the f—‘ – Fox News

American rapper Snoop Dogg expressed confusion about recent developments in artificial intelligence, comparing the technology to movies he saw as a child.

At the Milken Institute Global Conference in Beverly Hills this week, Snoop, whose given name is Calvin Broadus, turned his focus to artificial intelligence while discussing the Writers Guild of America strike. The writers' strike is, in part, about the potential for artificial intelligence to take writing jobs.

"I got a motherf---ing AI right now that they did made for me," Snoop said. "This n----- could talk to me. Im like, man, this thing can hold a real conversation? Like real for real? Like its blowing my mind because I watched movies on this as a kid years ago."

Snoop Dogg discussed artificial intelligence at the Milken Institute 2023 Global Conference. (Milken Institute)

Snoop also referenced recent warnings about artificial intelligence from Geoffrey Hinton, who recently quit his job at Google so he could discuss the harms of AI.

"And I heard the dude, the old dude that created AI saying, This is not safe, 'cause the AIs got their own minds, and these mother---ers gonna start doing their own s---. I'm like, are we in a f---ing movie right now, or what? The f-- man?"

Hinton, often referred to as the "Godfather of AI," told The New York Times he believes bad actors will use artificial intelligence platforms, the very ones his research helped create, for nefarious purposes.

Snoop Dogg compared artificial intelligence to movies he saw as a child. (Photo by Jerod Harris/Getty Images)

Snoop Dogg questioned the safety of artificial intelligence at the Milken Institute 2023 Global Conference. (Photo by Jerod Harris/Getty Images)

And while Snoop highlighted potential concerns about artificial intelligence, he also questioned whether he should invest in the technology.

"So do I need to invest in AI so I can have one with me? Or like, do y'all know? S---, what the f---? I'm lost, I don't know," Snoop continued, drawing laughter from the audience.

The release of ChatGPT last year has sparked both excitement and concern among experts, who believe the technology will revolutionize business and human interactions.

Thousands of tech leaders and experts, including Elon Musk, signed an open letter in March that called on artificial intelligence labs to pause research on systems that were more powerful than GPT-4, OpenAI's most advanced AI system. The letter argued that "AI systems with human-competitive intelligence can pose profound risks to society and humanity."

Read more from the original source:
Snoop Dogg addresses risks of artificial intelligence: 'Sh-- what the f---' - Fox News

Never Give Artificial Intelligence the Nuclear Codes – The Atlantic

No technology since the atomic bomb has inspired the apocalyptic imagination like artificial intelligence. Ever since ChatGPT began exhibiting glints of logical reasoning in November, the internet has been awash in doomsday scenarios. Many are self-consciously fanciful; they're meant to jar us into envisioning how badly things could go wrong if an emerging intelligence comes to understand the world, and its own goals, even a little differently from how its human creators do. One scenario, however, requires less imagination, because the first steps toward it are arguably already being taken: the gradual integration of AI into the most destructive technologies we possess today.

The world's major military powers have begun a race to wire AI into warfare. For the moment, that mostly means giving algorithms control over individual weapons or drone swarms. No one is inviting AI to formulate grand strategy, or join a meeting of the Joint Chiefs of Staff. But the same seductive logic that accelerated the nuclear arms race could, over a period of years, propel AI up the chain of command. How fast depends, in part, on how fast the technology advances, and it appears to be advancing quickly. How far depends on our foresight as humans, and on our ability to act with collective restraint.

Jacquelyn Schneider, the director of the Wargaming and Crisis Simulation Initiative at Stanford's Hoover Institution, recently told me about a game she devised in 2018. It models a fast-unfolding nuclear conflict and has been played 115 times by the kinds of people whose responses are of supreme interest: former heads of state, foreign ministers, senior NATO officers. Because nuclear brinkmanship has thankfully been historically rare, Schneider's game gives us one of the clearest glimpses into the decisions that people might make in situations with the highest imaginable human stakes.

It goes something like this: The U.S. president and his Cabinet have just been hustled into the basement of the West Wing to receive a dire briefing. A territorial conflict has turned hot, and the enemy is mulling a nuclear first strike against the United States. The atmosphere in the Situation Room is charged. The hawks advise immediate preparations for a retaliatory strike, but the Cabinet soon learns of a disturbing wrinkle. The enemy has developed a new cyberweapon, and fresh intelligence suggests that it can penetrate the communication system that connects the president to his nuclear forces. Any launch commands that he sends may not reach the officers responsible for carrying them out.

There are no good options in this scenario. Some players delegate launch authority to officers at missile sites, who must make their own judgments about whether a nuclear counterstrike is warranted, a scary proposition. But Schneider told me she was most unsettled by a different strategy, pursued with surprising regularity. In many games, she said, players who feared a total breakdown of command and control wanted to automate their nuclear launch capability completely. They advocated the empowerment of algorithms to determine when a nuclear counterstrike was appropriate. AI alone would decide whether to enter into a nuclear exchange.

Schneider's game is, by design, short and stressful. Players' automation directives were not typically spelled out with an engineer's precision (how exactly would this be done? Could any automated system even be put in place before the culmination of the crisis?), but the impulse is telling nonetheless. "There is a wishful thinking about this technology," Schneider said, "and my concern is that there will be this desire to use AI to decrease uncertainty by [leaders] who don't understand the uncertainty of the algorithms themselves."

AI offers an illusion of cool exactitude, especially in comparison to error-prone, potentially unstable humans. But today's most advanced AIs are black boxes; we don't entirely understand how they work. In complex, high-stakes adversarial situations, an AI's notions about what constitutes winning may be impenetrable, if not altogether alien. At the deepest, most important level, an AI may not understand what Ronald Reagan and Mikhail Gorbachev meant when they said, "A nuclear war cannot be won."

There is precedent, of course, for the automation of Armageddon. After the United States and the Soviet Union emerged as victors of the Second World War, they looked set to take up arms in a third, a fate they avoided only by building an infrastructure of mutual assured destruction. This system rests on an elegant and terrifying symmetry, but it goes wobbly each time either side makes a new technological advance. In the latter decades of the Cold War, Soviet leaders worried that their ability to counter an American nuclear strike on Moscow could be compromised, so they developed a dead hand program.

It was so simple, it barely qualified as algorithmic: Once activated during a nuclear crisis, if a command-and-control center outside Moscow stopped receiving communications from the Kremlin, a special machine would inquire into the atmospheric conditions above the capital. If it detected telltale blinding flashes and surges in radioactivity, all the remaining Soviet missiles would be launched at the United States. Russia is cagey about this system, but in 2011, the commander of the country's Strategic Missile Forces said it still exists and is on combat duty. In 2018, a former leader of the missile forces said it has even been improved.
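
How little the system asks of its machinery becomes clear if the described logic is written out. The sketch below is a plain-Python paraphrase of the public account given above; every input name is hypothetical, and it is not a reconstruction of the actual Soviet system.

```python
# Illustrative paraphrase only: the "dead hand" decision rule as described
# above, reduced to a single function. All parameter names are hypothetical.
def dead_hand_would_launch(activated: bool,
                           kremlin_link_alive: bool,
                           blinding_flash_detected: bool,
                           radiation_surge_detected: bool) -> bool:
    """Return True only if every condition in the published account holds."""
    if not activated:
        return False      # the system is armed only during a nuclear crisis
    if kremlin_link_alive:
        return False      # leadership can still issue its own orders
    # Leadership is silent: consult the atmospheric sensors above the capital.
    return blinding_flash_detected and radiation_surge_detected
```

A handful of boolean checks, in other words, standing between ambiguity and a full retaliatory launch.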

In 2019, Curtis McGiffin, an associate dean at the Air Force Institute of Technology, and Adam Lowther, then the director of research and education at the Louisiana Tech Research Institute, published an article arguing that America should develop its own nuclear dead hand. New technologies have shrunk the period of time between the moment an incoming attack is detected and the last moment that a president can order a retaliatory salvo. If this decision window shrinks any further, America's counterstrike ability could be compromised. Their solution: Backstop America's nuclear deterrent with an AI that can make launch decisions at the speed of computation.

McGiffin and Lowther are right about the decision window. During the early Cold War, bomber planes like the one used over Hiroshima were the preferred mode of first strike. These planes took a long time to fly between the Soviet Union and the United States, and because they were piloted by human beings, they could be recalled. Americans built an arc of radar stations across the Canadian High Arctic, Greenland, and Iceland so that the president would have an hour or more of warning before the first mushroom cloud bloomed over an American city. That's enough time to communicate with the Kremlin, enough time to try to shoot the bombers down, and, failing that, enough time to order a full-scale response.

The intercontinental ballistic missile (ICBM), first deployed by the Soviet Union in 1958, shortened that window, and within a decade, hundreds of them were slotted into the bedrock of North America and Eurasia. Any one of them can fly across the Northern Hemisphere in less than 30 minutes. To preserve as many of those minutes as possible, both superpowers sent up fleets of satellites that could spot the unique infrared signature of a missile launch in order to grok its precise parabolic path and target.

After nuclear-armed submarines were refined in the '70s, hundreds more missiles topped with warheads began to roam the world's oceans, nearer to their targets, cutting the decision window in half, to 15 minutes or perhaps fewer. (Imagine one bobbing up along the Delaware coast, just 180 miles from the White House.) Even if the major nuclear powers never successfully develop new nuclear-missile technology, 15 minutes or fewer is frighteningly little time for a considered human response. But they are working to develop new missile technology, including hypersonic missiles, which Russia is already using in Ukraine to strike quickly and evade missile defenses. Both Russia and China want hypersonic missiles to eventually carry nuclear warheads. These technologies could potentially cut the window in half again.

These few remaining minutes would go quickly, especially if the Pentagon couldn't immediately conclude that a missile was headed for the White House. The president may need to be roused from sleep; launch codes could be fumbled. A decapitation strike could be completed with no retaliatory salvo yet ordered. Somewhere outside D.C., command and control would scramble to find the next civilian leader down the chain, as a more comprehensive volley of missiles rained down upon America's missile silos, its military bases, and its major nodes of infrastructure.

A first strike of this sort would still be mad to attempt, because some American nuclear forces would most likely survive the first wave, especially submarines. But as we have learned again in recent years, reckless people sometimes lead nuclear powers. Even if the narrowing of the decision window makes decapitation attacks only marginally more tempting, countries may wish to backstop their deterrent with a dead hand.

The United States is not yet one of those countries. After McGiffin and Lowther's article was published, Lieutenant General John Shanahan, the director of the Pentagon's Joint Artificial Intelligence Center, was asked about automation and nuclear weapons. Shanahan said that although he could think of no stronger proponent for AI in the military than himself, nuclear command and control is "the one area I pause."

The Pentagon has otherwise been working fast to automate America's war machine. As of 2021, according to a report that year, it had at least 685 ongoing AI projects, and since then it has continually sought increased AI funding. Not all of the projects are known, but a partial vision of America's automated forces is coming into view. The tanks that lead U.S. ground forces in the future will scan for threats on their own so that operators can simply touch highlighted spots on a screen to wipe out potential attackers. In the F-16s that streak overhead, pilots will be joined in the cockpit by algorithms that handle complex dogfighting maneuvers. Pilots will be free to focus on firing weapons and coordinating with swarms of autonomous drones.

In January, the Pentagon updated its previously murky policy to clarify that it will allow the development of AI weapons that can make kill shots on their own. This capability alone raises significant moral questions, but even these AIs will be operating, essentially, as troops. The role of AI in battlefield command and the strategic functioning of the U.S. military is largely limited to intelligence algorithms, which simultaneously distill data streams gathered from hundreds of sensors: underwater microphones, ground radar stations, spy satellites. AI won't be asked to control troop movements or launch coordinated attacks in the very near future. The pace and complexity of warfare may increase, however, in part because of AI weapons. If America's generals find themselves overmatched by Chinese AIs that can comprehend dynamic, million-variable strategic situations for weeks on end, without so much as a nap (or if the Pentagon fears that could happen), AIs might be placed in higher decision-making roles.

The precise makeup of America's nuclear command and control is classified, but AI's awesome processing powers are already being put to good use in the country's early-alert systems. Even here, automation presents serious risks. In 1983, a Soviet early-alert system mistook glittering clouds above the Midwest for launched missiles. Catastrophe was averted only because Lieutenant Colonel Stanislav Petrov, a man for whom statues should be raised, felt in his gut that it was a false alarm. Today's computer-vision algorithms are more sophisticated, but their workings are often mysterious. In 2018, AI researchers demonstrated that tiny perturbations in images of animals could fool neural networks into misclassifying a panda as a gibbon. If AIs encounter novel atmospheric phenomena that weren't included in their training data, they may hallucinate incoming attacks.
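
To give a concrete sense of what those "tiny perturbations" look like, below is a minimal sketch of the fast gradient sign method, one standard recipe for producing them. The toy model, random input, and epsilon value are stand-ins for illustration, not the original panda-and-gibbon experiment, which used a large trained image classifier.

```python
# Minimal sketch of the fast gradient sign method (FGSM). The stand-in model
# and random "image" are illustrative; on a trained classifier, even a
# visually imperceptible epsilon can flip the predicted label.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
label = torch.tensor([0])                              # its assumed true class

# Take the gradient of the loss with respect to the input pixels, not the weights.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel one small step in the direction that increases the loss.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

Whether the failure comes from a crafted perturbation or from inputs the model has simply never seen, the lesson is the same: a classifier's confidence says little about its correctness.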

But put hallucinations aside for a moment. As large language models continue to improve, they may eventually be asked to generate lucid text narratives of fast-unfolding crises in real time, up to and including nuclear crises. Once these narratives move beyond simple statements about the number and location of approaching missiles, they will become more like the statements of advisers, engaged in interpretation and persuasion. AIs may prove excellent advisers: dispassionate, hyperinformed, always reliable. We should hope so, because even if they are never asked to recommend responses, their stylistic shadings would undoubtedly influence a president.

Given wide enough leeway over conventional warfare, an AI with no nuclear-weapons authority could nonetheless pursue a gambit that inadvertently escalates a conflict so far and so fast that a panicked nuclear launch follows. Or it could purposely engineer battlefield situations that lead to a launch, if it thinks the use of nuclear weapons would accomplish its assigned goals. An AI commander will be creative and unpredictable: A simple one designed by OpenAI beat human players at a modified version of Dota 2, a battle simulation game, with strategies that they'd never considered. (Notably, it proved willing to sacrifice its own fighters.)

These more far-flung scenarios are not imminent. AI is viewed with suspicion today, and if its expanding use leads to a stock-market crash or some other crisis, these possibilities will recede, at least for a time. But suppose that, after some early hiccups, AI instead performs well for a decade or several decades. With that track record, it could perhaps be allowed to operate nuclear command and control in a moment of crisis, as envisioned by Schneider's war-game participants. At some point, a president might preload command-and-control algorithms on his first day in office, perhaps even giving an AI license to improvise, based on its own impressions of an unfolding attack.

Much would depend on how an AI understands its goals in the context of a nuclear standoff. Researchers who have trained AI to play various games have repeatedly encountered a version of this problem: An AI's sense of what constitutes victory can be elusive. In some games, AIs have performed in a predictable manner until some small change in their environment caused them to suddenly shift their strategy. For instance, an AI was taught to play a game where players look for keys to unlock treasure chests and secure a reward. It did just that until the engineers tweaked the game environment, so that there were more keys than chests, after which it started hoarding all the keys, even though many were useless, and only sometimes trying to unlock the chests. Any innovations in nuclear weapons, or defenses, could lead an AI to a similarly dramatic pivot.

Any country that inserts AI into its command and control will motivate others to follow suit, if only to maintain a credible deterrent. Michael Klare, a peace-and-world-security-studies professor at Hampshire College, has warned that if multiple countries automate launch decisions, there could be a flash war analogous to a Wall Street flash crash. Imagine that an American AI misinterprets acoustic surveillance of submarines in the South China Sea as movements presaging a nuclear attack. Its counterstrike preparations would be noticed by China's own AI, which would actually begin to ready its launch platforms, setting off a series of escalations that would culminate in a major nuclear exchange.

In the early '90s, during a moment of relative peace, George H. W. Bush and Mikhail Gorbachev realized that competitive weapons development would lead to endlessly proliferating nuclear warheads. To their great credit, they refused to submit to this arms-race dynamic. They instead signed the Strategic Arms Reduction Treaty, the first in an extraordinary sequence of agreements that shrank the two countries' arsenals to less than a quarter of their previous size.

History has since resumed. Some of those treaties expired. Others were diluted as relations between the U.S. and Russia cooled. The two countries are now closer to outright war than they have been in generations. On February 21 of this year, less than 24 hours after President Joe Biden strolled the streets of Kyiv, Russian President Vladimir Putin said that his country would suspend its participation in New START, the last arsenal-limiting treaty that remains in effect. Meanwhile, China now likely has enough missiles to destroy every major American city, and its generals have reportedly grown fonder of their arsenal as they have seen the leverage that nuclear weapons have afforded Russia during the Ukraine war. Mutual assured destruction is now a three-body problem, and every party to it is pursuing technologies that could destabilize its logic.

The next moment of relative peace could be a long way away, but if it comes again, we should draw inspiration from Bush and Gorbachev. Their disarmament treaties were ingenious because they represented a recovery of human agency, as would a global agreement to forever keep AI out of nuclear command and control. Some of the scenarios set forth here may sound quite distant, but that's more reason to think about how we can avoid them, before AI reels off an impressive run of battlefield successes and its use becomes too tempting.

A treaty can always be broken, and compliance with this one would be particularly difficult to verify, because AI development doesn't require conspicuous missile silos or uranium-enrichment facilities. But a treaty can help establish a strong taboo, and in this realm a strongly held taboo may be the best we can hope for. We cannot encrust the Earth's surface with automated nuclear arsenals that put us one glitch away from apocalypse. If errors are to deliver us into nuclear war, let them be our errors. To cede the gravest of all decisions to the dynamics of technology would be the ultimate abdication of human choice.

This article appears in the June 2023 print edition.

Here is the original post:
Never Give Artificial Intelligence the Nuclear Codes - The Atlantic