Will Artificial Intel get along with us? Only if we design it that way | The Hill
Artificial Intelligence (AI) systems that interact with us the way we interact with each other have long typified Hollywood's image of AI, whether you think of HAL in 2001: A Space Odyssey, Samantha in Her, or Ava in Ex Machina. It might therefore surprise people that making systems that interact, assist or collaborate with humans has never been high on the technical agenda.
From its beginning, AI has had a rather ambivalent relationship with humans. The biggest AI successes have come either at a distance from humans (think of the Spirit and Opportunity rovers navigating the Martian landscape) or in cold, adversarial face-offs (Deep Blue defeating world chess champion Garry Kasparov, or AlphaGo besting Lee Sedol). In contrast to the magnetic pull of these "replace/defeat humans" ventures, the goal of designing AI systems that are human-aware, capable of interacting and collaborating with humans and engendering trust in them, has received much less attention.
More recently, as AI technologies started capturing our imaginations, there has been a conspicuous change, with "human" becoming the desirable adjective for AI systems. There are so many variations (human-centered, human-compatible, human-aware AI, etc.) that there is almost a need for a dictionary of terms. Some of this interest arose naturally from a desire to understand and regulate the impacts of AI technologies on people. In previous columns, I've looked, for example, at bias in AI systems and the impact of AI-generated synthetic reality, such as deep fakes or "mind twins."
This time, let us focus on the challenges and impacts of AI systems that continually interact with humans as decision support systems, personal assistants, intelligent tutoring systems, robot helpers, social robots, AI conversational companions, etc.
To be aware of humans, and to interact with them fluently, an AI agent needs to exhibit social intelligence. Designing agents with social intelligence received little attention while AI development was focused on autonomy rather than coexistence. Its importance for humans cannot be overstated, however. After all, evolutionary theory suggests that we developed our impressive brains not so much to run away from lions on the savanna as to get along with each other.
A cornerstone of social intelligence is the so-called theory of mind: the ability to model the mental states of the humans we interact with. Developmental psychologists have shown (with compelling experiments like the Sally-Anne test) that children, with the possible exception of those on the autism spectrum, develop this ability quite early.
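The logic of the test can be sketched as a toy program (purely illustrative; the scenario encoding and names are invented here): an observer that tracks what Sally last saw predicts her false belief about the marble's location, while an observer that tracks only the world state cannot.

```python
# Toy simulation of the Sally-Anne false-belief test.
# Sally puts a marble in her basket and leaves; Anne moves it to a box.
# An observer with a theory of mind predicts Sally will look in the
# basket (her stale belief), not the box (the true location).

def run_sally_anne():
    world = {"marble": "basket"}          # ground truth
    sally_belief = {"marble": "basket"}   # Sally saw the marble placed here

    # Sally leaves the room; Anne moves the marble.
    world["marble"] = "box"               # Sally's belief is NOT updated

    # A purely world-tracking observer answers with the true location.
    no_tom_answer = world["marble"]
    # A theory-of-mind observer answers with Sally's belief.
    tom_answer = sally_belief["marble"]
    return no_tom_answer, tom_answer

print(run_sally_anne())  # → ('box', 'basket'); only the second passes the test
```

The point of the exercise is that passing the test requires maintaining a second, possibly outdated, copy of the world state on behalf of the other agent.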
Successful AI agents need to acquire, maintain and use such mental models to modulate their own actions. At a minimum, an AI agent needs approximations of the humans' task and goal models, as well as of the humans' model of its own task and goal models. The former guides the agent to anticipate and manage the needs, desires and attention of the humans in the loop (think of the prescient abilities of the character Radar on the TV series M*A*S*H), while the latter allows it to act in ways that are interpretable to humans, by conforming to their mental models of it, and to be ready to provide customized explanations when needed.
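One way to operationalize that second model can be sketched as follows (the plans, costs and weight are invented for illustration): the agent trades off its own plan cost against how surprising each plan would look under the human's model of it, a much-simplified take on what the planning literature calls explicable planning.

```python
# Sketch: choose among candidate plans by balancing the agent's own cost
# against "surprise" — how much worse a plan looks in the human's mental
# model than the plan the human would expect (that model's optimum).

def pick_explicable_plan(plans, robot_cost, human_model_cost, weight=1.0):
    human_best = min(human_model_cost[p] for p in plans)

    def score(p):
        surprise = human_model_cost[p] - human_best
        return robot_cost[p] + weight * surprise

    return min(plans, key=score)

# A detour avoids a hazard the human cannot see, so it is cheap for the
# robot but looks baffling in the human's model of the robot.
plans = ["direct", "detour"]
robot_cost = {"direct": 15, "detour": 9}
human_model_cost = {"direct": 10, "detour": 18}

print(pick_explicable_plan(plans, robot_cost, human_model_cost, weight=0.5))  # → detour
print(pick_explicable_plan(plans, robot_cost, human_model_cost, weight=2.0))  # → direct
```

As the weight on surprise grows, the agent sacrifices its own efficiency to behave the way the human expects, which is exactly the interpretability trade-off described above.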
With the increasing use of AI-based decision support systems in many high-stakes areas, including health care and criminal justice, the need for AI systems that exhibit interpretable or explainable behavior has become quite critical. The European Union's General Data Protection Regulation posits a right to contestable explanations for all machine decisions that affect humans (e.g., automated approval or denial of loan applications). While the simplest form of such explanation could well be a trace of the reasoning steps that led to the decision, things get complex quickly once we recognize that an explanation is not a soliloquy and that its comprehensibility depends crucially on the mental state of the receiver. After all, your physician gives one kind of explanation for her diagnosis to you and another, perhaps more technical one, to her colleagues.
Provision of explanations thus requires a shared vocabulary between AI systems and humans, and the ability to customize the explanation to the mental models of humans. This task becomes particularly challenging since many modern data-based decision-making systems develop their own internal representations that may not be directly translatable to human vocabulary. Some emerging methods for facilitating comprehensible explanations include explicitly having the machine learn to translate explanations based on its internal representations to an agreed-upon vocabulary.
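A minimal sketch of one such method, with entirely synthetic features and concept labels, is a post-hoc "probe": given internal feature vectors the system produces for examples a human has already labeled in an agreed-upon vocabulary, learn a simple mapping so that new internal states can be described in that vocabulary.

```python
import numpy as np

# Sketch of a nearest-centroid concept probe. The 4-d "internal
# representations" and the concept names are synthetic stand-ins for
# whatever opaque features a real learned model produces.

rng = np.random.default_rng(0)
concepts = ["striped", "spotted"]

# Pretend internal representations for human-labeled training examples.
feats = {
    "striped": rng.normal(1.0, 0.1, (20, 4)),
    "spotted": rng.normal(-1.0, 0.1, (20, 4)),
}

# One centroid per concept in the agreed-upon vocabulary.
centroids = {c: feats[c].mean(axis=0) for c in concepts}

def describe(internal_vec):
    """Map an opaque internal vector to the nearest human concept."""
    return min(concepts, key=lambda c: np.linalg.norm(internal_vec - centroids[c]))

print(describe(np.full(4, 1.0)))  # → striped
```

The design choice here is deliberate: the machine's representation is never exposed directly; only its projection into vocabulary the human already shares is used in the explanation.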
AI systems interacting with humans will need to understand and leverage insights from human factors and psychology; not doing so can lead to egregious miscalculations. Initial versions of Tesla's Autopilot driving assistant, for example, seem to have been designed with the unrealistic expectation that human drivers can snap back to full alertness and take over manually when the self-driving system runs into unforeseen situations, an expectation that has led to catastrophic failures. Similarly, such systems will need to provide an appropriate emotional response when interacting with humans (even though there is no evidence, as yet, that emotions improve an AI agent's solitary performance). Multiple studies show that people do better at a task when computer interfaces show appropriate affect. Some have even hypothesized that part of the reason for the failure of Clippy, the old Microsoft Office assistant, was its permanent smug smile when it appeared to help flustered users.
AI systems with social intelligence capabilities also raise their own set of ethical quandaries. After all, trust can be weaponized in far more insidious ways than a rampaging robot. The potential for manipulation is further amplified by our very human tendency to anthropomorphize anything that shows even remotely human-like behavior. Joseph Weizenbaum had to shut down ELIZA, history's first chatbot, when he found his staff pouring their hearts out to it; and scholars like Sherry Turkle continue to worry about the artificial intimacy such artifacts might engender. The ability to manipulate mental models can also allow AI agents to lie to or deceive humans, leading to a form of "head fakes" that will make today's deep fakes tame by comparison. While a certain level of white lies is seen as the glue of the human social fabric, it is not clear whether we want AI agents to engage in them.
As AI systems increasingly become human-aware, even quotidian tools surrounding us will start gaining mental-modeling capabilities. This adaptivity can be both a boon and a bane. While we talked about the harms of our tendency to anthropomorphize AI artifacts that are not human-aware, equally insidious are the harms that can arise when we fail to recognize that what we see as a simple tool is actually mental-modeling us. Indeed, micro-targeting by social media can be understood as a weaponized version of such manipulation; people would be much more guarded with social media platforms if they realized that those platforms are actively profiling them.
Given the potential for misuse, we should aim to design AI systems that understand human values, mental models and emotions, and yet do not exploit them to cause harm. In other words, they must be designed with an overarching goal of beneficence to us.
All this requires a meaningful collaboration between AI and the humanities and social sciences, including sociology, anthropology and behavioral psychology. Such interdisciplinary collaborations were the norm rather than the exception at the beginning of the AI field, and they are coming back into vogue.
Formidable as this endeavor might be, it is worth pursuing. We should be proactively building a future where AI agents work along with us, rather than passively fretting about a dystopian one where they are indifferent or adversarial. By designing AI agents to be human-aware from the ground up, we can increase the chances of a future where such agents both collaborate and get along with us.
Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and the Chief AI Officer for AI Foundation, which develops realistic AI companions with social skills. He was the president of the Association for the Advancement of Artificial Intelligence, a founding board member of Partnership on AI, and is an Innovators Network Foundation Privacy Fellow. He can be followed on Twitter @rao2z.