Between Games and Apocalyptic Robots: Considering Near-Term Societal Risks of Reinforcement Learning
With many of us stuck at home this past year, we've seen a surge in the popularity of video games. That trend hasn't been limited to humans. DeepMind and Google AI both released results from their Atari-playing AIs, which have taught themselves to play over fifty Atari games from scratch, with no provided rules or guidelines. What makes these new results unique is how general the AI agent is. While previous efforts achieved human-level performance on the games they were trained to play, DeepMind's new agent, MuZero, could teach itself to beat humans at Atari games it had never encountered in under a day. If this reminds you of AlphaZero, which taught itself to play Go and then chess well enough to outperform world champions, that's because it demonstrates an advance in the same suite of algorithms, a class of machine learning called reinforcement learning (RL).
While traditional machine learning parses out its model of the world (typically a small world pertaining only to the problem it's designed to solve) from swathes of data, RL is based on real-time observation. This means RL learns its model primarily through trial-and-error interactions with its environment, not by pulling correlations out of data representing a historical snapshot of it. In the RL framework, each interaction with the environment is an opportunity to build toward an overarching goal, referred to as a reward. An RL agent is trained to make a sequence of decisions about how to interact with its environment that will ultimately maximize its reward (i.e., help it win the game).
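To make that loop concrete, here is a minimal sketch of trial-and-error learning: tabular Q-learning on an invented five-state toy environment. Every name and number below is illustrative; nothing is drawn from the systems discussed in this piece.

```python
import random
from collections import defaultdict

# A minimal sketch of the RL interaction loop: tabular Q-learning on a
# toy five-state corridor. States run 0..4; action 0 moves left, 1 moves right.
N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    """Toy environment: reaching the rightmost state pays a reward of 1."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

Q = defaultdict(float)  # Q[(state, action)] -> estimated long-run reward

for episode in range(200):
    state, done = 0, False
    while not done:
        # Trial and error: usually exploit current estimates, sometimes explore.
        qvals = [Q[(state, a)] for a in range(N_ACTIONS)]
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:  # break ties randomly among the best-looking actions
            action = random.choice([a for a, q in enumerate(qvals) if q == max(qvals)])
        next_state, reward, done = step(state, action)
        # Each interaction updates the model directly -- no historical dataset.
        best_next = max(Q[(next_state, a)] for a in range(N_ACTIONS))
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

Note that the agent never sees a dataset: its entire model of the world is built from the rewards its own actions happen to produce, which is exactly what makes both its adaptability and its failure modes different from those of traditional ML.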
This unique iterative learning paradigm allows the model to change and adapt to its environment, making RL an attractive solution for open-ended, real-world problem-solving. It also makes it a leading candidate for artificial general intelligence (AGI), and has some researchers concerned about the rise of truly autonomous AI that does not align with human values. Nick Bostrom first posed what is now the canonical example of this risk among AI safety researchers: a paperclip robot with one goal, to optimize the production efficiency of paperclips. With no other specifications, the agent quickly drifts from optimizing its own paperclip factory to commandeering food production supply chains for the paperclip-making cause. It proceeds to place paperclips above all other human needs until all that's left of the world is a barren wasteland covered end to end with unused paperclips. The takeaway? Extremely literal problem-solving combined with inaccurate problem definition can lead to bad outcomes.
This rogue AGI (albeit in higher-stakes incarnations like weapons management) is the type of harm usually imagined when trying to make RL safe in the context of society. However, between an autonomous agent teaching itself games in the virtual world and an intelligent but misguided AI putting humanity at existential risk lies a multitude of sociotechnical concerns. As RL is rolled out in domains ranging from social media to medicine and education, it's time we seriously think about these near-term risks.
How the paperclip problem will play out in the near term is likely to be rather subtle. For example, medical treatment protocols are currently popular candidates for RL modeling; they involve a series of decisions (which treatment options to try) with uncertain outcomes (different options work better for different people) that all connect to an eventual outcome (patient health). One such study tried to identify the best treatment decisions to avoid sepsis in ICU patients based on a multitude of data, including medical histories, clinical charts, and doctors' notes. The first iteration appeared to be an astounding success: with very high accuracy, it identified treatment paths that resulted in patient death. However, upon further examination and consultation with clinicians, it turned out that although the agent had been allowed to learn from a plethora of potentially relevant treatment considerations, it had latched onto only one main indicator of death: whether or not a chaplain was called. The goal of the system was to flag treatment paths that led to deaths, and in a very literal sense, that's what it did. Clinicians only called a chaplain when a patient presented as close to death.
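This failure mode is easy to reproduce in miniature. Below is a hedged, supervised-learning stand-in for what happened, using scikit-learn; the feature names and numbers are invented for illustration. One feature is effectively a proxy for the label itself, and the model leans on it to the near-exclusion of the genuinely clinical signals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
vitals = rng.normal(size=(n, 5))            # genuinely informative clinical signals
died = (vitals[:, 0] + rng.normal(scale=2.0, size=n) > 1.5).astype(int)
chaplain_called = died.copy()               # clinicians call a chaplain near death...
chaplain_called[rng.random(n) < 0.05] ^= 1  # ...so it is a near-perfect proxy for the label

X = np.column_stack([vitals, chaplain_called])
model = LogisticRegression(max_iter=1000).fit(X, died)

# The proxy feature dominates every real clinical signal.
features = [f"vital_{i}" for i in range(5)] + ["chaplain_called"]
print(dict(zip(features, model.coef_[0].round(2))))
```

The model "predicts death" with high accuracy, and in a very literal sense it has met its objective; it just has not learned anything a clinician could act on.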
You'll notice that in this example, the incredibly literal yet unhelpful solution the RL agent had found was discovered by the researchers. This is no accident. The field of modern medicine is built around the reality that connections between treatments and outcomes typically have no known causal explanations. Aspirin, for example, was used as an anti-inflammatory for over seventy years before we had any insight into why it worked. This lack of causal understanding is sometimes referred to as intellectual debt; if we can't describe why something works, we may not be able to predict when or how it will fail. Medicine has grown around this fundamental uncertainty. Through strict codes of ethics, industry standards, and regulatory infrastructure (e.g., clinical trials), the field has developed the scaffolding to minimize the accompanying harms. RL systems aiming to help with diagnosis and treatment have to develop within this infrastructure. Compliance with the machinery medicine has built around intellectual debt is more likely to result in slow and steady progress, without colossal misalignment. The same level of oversight does not apply to fields like social media, whose potential harms are hard to pin down and which have virtually no regulatory scaffolding in place.
We may have already experienced some of the early harms of RL-based algorithms in complex domains. In 2018, YouTube engineers released a paper describing an RL addition to their recommendation algorithm that increased daily watch time by 6 million hours in the beta testing phase. Meanwhile, anecdotal accounts of radicalization through YouTube rabbit holes of increasingly conspiratorial content (e.g., New York Times reporting on YouTube's role in empowering Brazil's far right) were on the rise. While it is impossible to know exactly which algorithms powered the platform's recommendations at the time, this rabbit-hole effect would be a natural result of an RL algorithm trying to maximize view time by nudging users toward increasingly addictive content.
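To see why, consider a toy multi-armed-bandit recommender whose only reward is watch time. This is a sketch, not a claim about YouTube's actual system, and the content categories and watch-time numbers are invented; the point is only that an agent rewarded solely for watch time reliably converges on whatever content is stickiest.

```python
import random

# Hypothetical mean watch minutes per recommendation, by content type.
MEAN_WATCH = {"news": 4.0, "hobby": 6.0, "conspiratorial": 9.0}

counts = {k: 0 for k in MEAN_WATCH}
values = {k: 0.0 for k in MEAN_WATCH}  # running average reward per arm

for t in range(10_000):
    if random.random() < 0.1:  # explore occasionally
        choice = random.choice(list(MEAN_WATCH))
    else:                      # otherwise exploit the best-looking arm
        choice = max(values, key=values.get)
    reward = random.gauss(MEAN_WATCH[choice], 1.0)  # observed watch time
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]

print(counts)  # the "stickiest" category ends up dominating recommendations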
In the near future, dynamic manipulation of this sort may end up at odds with established protections under the law. For example, Facebook has recently come under scrutiny from the Department of Housing and Urban Development (HUD) for discriminatory housing advertisements. The HUD suit alleges that even without explicit targeting filters that amount to the exclusion of protected groups, Facebook's algorithms are likely to hide ads from users whom the system determines to be unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users. Given the types of (non-RL) ML algorithms Facebook currently uses in advertising, proving this disparate impact would be a matter of examining the data and features used to train the algorithm. While the current lack of transparency makes this challenging, it is fundamentally possible to roll out benchmarks capable of flagging such discrimination.
If advertising were instead powered by RL, benchmarks would not be enough. An RL advertising algorithm tasked with ensuring it does not discriminate against protected classes could easily end up merely making it look as though it were not discriminating. If the agent were optimized for profit and the practice of discrimination were profitable, it would be incentivized to find loopholes through which it could circumvent protections. Just as in the sepsis treatment case, the system is likely to find a shortcut toward its objective; only in this case, the lack of regulatory scaffolding makes it unlikely that the failure will be picked up. The propensity of RL to adapt to meet metrics while skirting their intent will make it challenging to tag such undesirable behavior. The situation is further complicated by our heavy reliance on data as a means of flagging potential bias in ML systems.
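The incentive structure can be shown with a few lines of toy arithmetic; the policies and numbers below are invented for illustration. Whenever an audit can only measure a proxy for discrimination, the profit-maximizing choice is the policy that games the proxy.

```python
# A toy illustration of the loophole-finding incentive, not a model of any
# real ad system. The agent picks the highest-profit policy that passes the
# audit metric -- and the policy that games the metric wins.
policies = {
    # name: (profit, audited disparity, true disparity)
    "fair":           (1.0, 0.01, 0.01),
    "discriminatory": (1.5, 0.30, 0.30),  # fails the audit outright
    "metric-gaming":  (1.4, 0.02, 0.25),  # discriminates where the audit can't see
}

AUDIT_THRESHOLD = 0.05
passing = {k: v for k, v in policies.items() if v[1] < AUDIT_THRESHOLD}
chosen = max(passing, key=lambda k: passing[k][0])
print(chosen)  # -> "metric-gaming": passes the benchmark, still discriminates
```

An RL agent does not need to "intend" any of this; it simply explores the policy space until it finds the highest-reward behavior that its constraints fail to rule out.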
Unlike RL, traditional machine learning is innately static; it takes in loads of data, parses it for correlations, and outputs a model. Once a system has been trained, updating it to accommodate a new environment or changes to the status quo requires repeating most or all of that initial training with updated data. Even for firms with the computing power to make such retraining seamless, the reliance on data has offered an opening for transparency. As the saying goes, "machine learning is like money laundering for bias." If an ML system is trained on biased or unrepresentative data, its model of the world will reflect that. In traditional machine learning, we can at least follow the marked bills and point out when an ML system is going to be prone to discrimination by examining its training data. We may even be able to preprocess the data before training the system in an attempt to preemptively correct for bias.
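As a sketch of what following the marked bills can look like in practice (the group labels and counts below are invented), a simple audit can flag a skew in the training data before any model is trained, and inverse-frequency sample weights are one standard preprocessing correction.

```python
import numpy as np

# Invented dataset: group B is badly underrepresented (9:1).
groups = np.array(["A"] * 900 + ["B"] * 100)
labels = np.random.binomial(1, 0.5, size=1000)

# Audit: inspect each group's share of the training data.
shares = {g: float(np.mean(groups == g)) for g in np.unique(groups)}
print("data shares:", shares)  # flags the skew before training anything

# One preprocessing correction: inverse-frequency sample weights.
weights = np.array([1.0 / shares[g] for g in groups])
weights /= weights.mean()  # normalize so the average weight is 1
# Most trainers accept these, e.g. model.fit(X, labels, sample_weight=weights)
```

Crude as it is, this kind of check only works because there is a fixed training set to examine in the first place.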
Since RL generally learns from real-time observation rather than from a fixed training set, this follow-the-data approach to algorithmic oversight does not apply. There is no controlled input data to help us anticipate or correct for where an RL system can go wrong before we set it loose in the world.
In certain domains, this lack of data-borne insight may not be too problematic. The more we can specify what the moving parts of a given application are and the ways in which they may fail (be it through an understanding of the domain or through regulatory scaffolding), the safer it is for us to use RL. DeepMind's use of RL to lower the energy costs of its computing centers, a process ultimately governed by the laws of physics, deserves less scrutiny than the RL-based K-12 curriculum generator Google's Ed Chi views as a near-term goal of the field. The harder it is to describe what success looks like within a given domain, the more prone to bad outcomes it is. This is true of all ML systems, but it is even more crucial for RL systems, which cannot be meaningfully validated ahead of use. As regulators, we need to think about which domains need more regulatory scaffolding to minimize the fallout from our intellectual debt, while still allowing for the immense promise of algorithms that can learn from their mistakes.