AI, Moloch, and the race to the bottom – IAI
Moloch is an evil, child-sacrificing god from the Hebrew Bible whose name is now used to describe a pervasive dynamic between competing groups and individuals: Moloch describes situations in which a locally optimal strategy leads to negative effects on a wider scale. The addictive nature of social media, the mass of nuclear weapons on the planet, and the race towards dangerous AI all have Molochian dynamics to blame. Ken Mogi offers us hope of a way out.
With the rapid advancement of artificial intelligence systems, concerns are rising about the future welfare of humans. There is an urgent question as to whether AI will make us well-off, equal, and empowered. As AI is deeply transformative, we need to watch carefully where we are heading, lest we drive headlong into a wall, or off a cliff, at full speed.
One of the best scenarios for human civilization would be a world in which most of the work is done by AI, with humans comfortably enjoying a permanent vacation under the blessing of a basic income generated by machines. One possible nightmare, on the other hand, would be the annihilation of the whole human species by malfunctioning AI, whether through widespread social unrest induced by AI-generated misinformation and gaslighting, or through a massacre by runaway killer robots.
The human brain works best when the dominant emotion is optimism. Creative people are typically hopeful. Wolfgang Amadeus Mozart famously composed an upbeat masterpiece shortly after his mother's death during their stay in Paris. With AI, therefore, the default option might be optimism. However, we cannot afford to preach a simplistic mantra of optimism, especially when the hard facts go against such a naive assumption. Indeed, the effects of AI on human lives are a subject requiring careful analysis, not something to be judged outright as either black or white. The most likely outcome lies somewhere in the fifty shades of grey of what AI could do to humans from here.
___
Moloch has come to signify a condition in which we humans are coerced into making futile efforts and competing with each other in ways that eventually drive us to our demise.
___
The idea that newly emerging technologies will make us more enlightened and better off is sometimes called the Californian Ideology. Companies such as Google, Facebook, Apple, and Microsoft are often perceived to be proponents of this worldview. Now that AI research companies such as DeepMind and OpenAI have joined the bandwagon, it is high time we assessed the possible effects of artificial intelligence on humans in earnest.
One of the critical, and perhaps surprisingly true-to-life, concepts concerning the dark side of AI is Moloch. Historically the name of a deity demanding unreasonable sacrifice for often irritatingly trivial purposes, Moloch has come to signify a condition in which we humans are coerced into making futile efforts and competing with each other in ways that eventually drive us to our demise. In the near future, AI might induce us into a race to the bottom without our even realizing the terrible situation.
In the more technical context of AI research, Moloch is an umbrella term acknowledging the difficulty of aligning artificial intelligence systems in ways that promote human welfare. Max Tegmark, an MIT physicist who has been vocal in warning of the dangers of AI, often cites Moloch when discussing the negative effects AI could bring upon humanity. As AI researcher Eliezer Yudkowsky asserts, safely aligning a powerful AGI (artificial general intelligence) is difficult.
It is not hard to see why we should beware of Moloch as AI systems increasingly influence our everyday lives. Some argue that social media was our first serious encounter with AI, as algorithms came to dominate our experience on platforms such as Twitter, YouTube, Facebook, and TikTok. Based on our past browsing records, the algorithms (which are forms of AI) determine what we view on our computers and smartphones. As a user, it is often difficult to break free from this algorithm-induced echo chamber.
Those competing in the attention economy try to optimize their posts to be favored by the algorithm. The result is often literally a race to the bottom in terms of the quality of content and user experience. We hear horror stories of teenagers resorting to ever more extreme, and possibly self-harming, forms of expression on social media. The tyranny of the algorithm is a tool in Moloch's toolbox in today's world. Even if there are occasional silver linings, such as genuinely great content emerging from competition on social media, the cloud of the dehumanizing attention-grabbing race is too dark to be ignored, especially for the young and immature.
___
The tyranny of the algorithm is a tool in Moloch's toolbox in today's world.
___
The ultimate form of Moloch would be the so-called existential risk. Elon Musk once famously tweeted that AI was "potentially more dangerous than nukes." The comparison with nuclear weapons might actually help us understand why and how AI could entangle us in a race to the bottom, where Moloch awaits to devour and destroy humanity.
Nuclear weapons are terrible. They bring death and destruction literally at the push of a button. Some argue, paradoxically, that nuclear weapons have helped humanity maintain peace since the Second World War. Indeed, this interpretation happens to be the standard credo in international politics today. Mutually Assured Destruction (MAD) is the game-theoretic analysis of how the presence of nukes might help keep the peace. If you attack me, I will attack you back, and both of us will be destroyed; so do not attack. This is the simple logic of peace by nukes. It could, however, be a self-introduced Trojan horse that eventually brings about the end of the human race. Indeed, the acronym MAD is fitting for this particular instance of game theory. We are literally mad to assume that the presence of nukes assures the sustainability of peace. Things could go terribly wrong, especially when artificial intelligence is introduced into the attack and defense processes.
In game theory, players' behaviors are assessed by an evaluation function, a hypothetical scoring scheme describing how good a particular situation is as a result of the choices one makes. A Nash equilibrium, originally proposed by the American mathematician John Nash, describes a state in which each player would be worse off, in terms of the evaluation function, by changing strategy from the status quo, provided that the other players do not alter theirs. A Nash equilibrium does not necessarily mean that the present state is globally optimal. It could actually be a miserable trap. The human species would be better off if nuclear weapons were abolished, but it is difficult to achieve universal nuclear disarmament simultaneously. From a game-theoretic point of view, it does not make sense for a country like the U.K. to abandon its nuclear arsenal while other nations keep their weapons of mass destruction.
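To make the trap concrete, here is a minimal sketch of the arms race as a two-by-two game, written in Python with illustrative payoff numbers of my own choosing (they are not from any empirical source). Brute-force checking every strategy profile confirms that mutual armament is the only Nash equilibrium, even though mutual disarmament would leave both players better off.

```python
# A toy arms-race game in the style of the Prisoner's Dilemma.
# Payoffs are illustrative: PAYOFF[(row, col)] = (row player's score, column player's score).
from itertools import product

STRATEGIES = ["disarm", "arm"]
PAYOFF = {
    ("disarm", "disarm"): (3, 3),  # mutual disarmament: best joint outcome
    ("disarm", "arm"):    (0, 4),  # unilateral disarmament: exploited
    ("arm",    "disarm"): (4, 0),
    ("arm",    "arm"):    (1, 1),  # mutual armament: the MAD status quo
}

def is_nash(row, col):
    """True if neither player gains by deviating unilaterally from (row, col)."""
    row_ok = all(PAYOFF[(r, col)][0] <= PAYOFF[(row, col)][0] for r in STRATEGIES)
    col_ok = all(PAYOFF[(row, c)][1] <= PAYOFF[(row, col)][1] for c in STRATEGIES)
    return row_ok and col_ok

for profile in product(STRATEGIES, repeat=2):
    if is_nash(*profile):
        print(profile, PAYOFF[profile])  # prints only ('arm', 'arm') (1, 1)
```

The miserable trap is visible in the output: the only stable state scores (1, 1), while the unstable (disarm, disarm) state would have scored (3, 3) for both.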
Moloch caused by AI is like MAD in the nuclear arms race, in that a particular evaluation function unreasonably dominates. In the attention-economy craze on social media, everyone would be better off if people just stopped optimizing for the algorithms. However, if you quit, someone else will simply occupy your niche and take away the revenue. So you keep at it, remaining a hopeful monster waiting to someday become a MrBeast. Thus Moloch reigns, through people's spontaneous submission to the Nash equilibrium dictated by an evaluation function.
So how do we escape the dystopia of Moloch? Is a jailbreak even possible?
Goodhart's law is a piece of wisdom we may adapt to escape the pitfall of Moloch. The adage, often stated as "when a measure becomes a target, it ceases to be a good measure", is due to Charles Goodhart, a British economist. Originally a sophisticated argument about the handling of monetary policy, Goodhart's law resonates with a wide range of aspects of our daily lives. Simply put, following an evaluation function can sometimes be bad.
For example, it would be great to have a lot of money as a result of satisfying and rewarding life habits, but it would be a terrible mistake to try to make as much money as possible no matter what. Excellent academic performance as the fruit of curiosity-driven investigation is great; aiming at high grades for their own sake could stifle a child. It is one of life's ultimate blessings to fall in love with someone special; it would be stupid to count how many lovers you have had. That is why the Catalogue Aria sung by Don Giovanni's servant Leporello, although profoundly beautiful musically, coming as it does from the genius of Mozart, is at best a superficial caricature of what human life is all about.
AI in general learns by optimizing some assigned evaluation function towards a goal. As a consequence, AI is most useful when the set goal makes sense. Moloch happens when the goal is ill-posed or too rigid.
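As a toy illustration of such an ill-posed goal, consider the following sketch, in which both functions are invented for the example: an optimizer that sees only a proxy metric (engagement) happily climbs right past the point where the true goal (welfare) peaks.

```python
# A toy illustration of Goodhart's law with invented functions.
# The proxy keeps rewarding ever more extreme content, while the true value
# it was meant to stand in for peaks early and then collapses.

def engagement(extremeness):   # proxy metric: grows without limit
    return extremeness

def welfare(extremeness):      # true goal: peaks at moderate levels, then falls
    return extremeness * (2.0 - extremeness)

x = 0                          # extremeness measured in tenths, to keep arithmetic exact
for _ in range(20):
    # Greedy hill-climbing on the proxy, the only signal the optimizer sees.
    if engagement((x + 1) / 10) > engagement(x / 10):
        x += 1

ext = x / 10
print(f"extremeness={ext:.1f}  engagement={engagement(ext):.1f}  welfare={welfare(ext):.1f}")
# -> extremeness=2.0  engagement=2.0  welfare=0.0
# The proxy is maximized while the true goal, which peaked at extremeness=1.0,
# has been driven back to zero: the measure became the target.
```

Nothing in the loop is malicious; the optimizer does exactly what it was told. The damage comes entirely from the gap between the assigned evaluation function and the goal it was supposed to represent.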
___
The terrible truth about Moloch is that it is mediocre, never original, reflecting its origins in statistically optimized evaluation functions
___
The economist John Maynard Keynes once said the economy is driven by animal spirits. The wonderful insight, then, is that as animals we can always opt out of the optimization game. To escape the pitfall of Moloch, we need to become black belts in applied Goodhart's law. When a measure becomes a target, it ceases to be a good measure. We can always update the evaluation function, or use a portfolio of different value systems simultaneously. A dystopia like the one depicted in George Orwell's Nineteen Eighty-Four is the result of taking a particular evaluation function too seriously. When a measure becomes a target, a dystopia follows, and Moloch reigns. All work and no play makes Jack a dull boy. Trying to satisfy the dictates of the status quo only leads to uninteresting results. We don't have to pull the plug on AI. We can just ignore it. AI does not have this insight, but we humans do. At least some of us.
Being aware of Goodhart's law, we would be well advised to keep an educated distance from the suffocating workings of AI's evaluation functions. The human brain allocates resources through the attentional system in the prefrontal cortex. If your attention is too focused on a particular evaluation function, your life becomes rigid and narrow, inviting Moloch in. You should make more flexible and balanced use of attention, directing it to the things that really matter to you.
When watching YouTube or TikTok, rather than viewing the videos and clips suggested by the algorithm and falling victim to the attention economy, you may opt to do an inner search. What are the things that come to mind when you look back on your childhood, for example? Are there things from recent experiences that tickle your interest? If there are, search for them on social media. You cannot entirely beat the algorithms, as the search results are shaped by them, but you will have initiated a new path of investigation from your own inner insights. Practicing mindfulness and making flexible use of attention to pursue your own interests and wants would be the best medicine against the symptoms of Moloch, because it makes your life's effective evaluation functions broader and more flexible. By making clever use of your attention, you can improve your own life and turn the attention economy for the better, even if only by a small step.
Flexible and balanced attention control would lead to more unique creativity, which will be highly valuable in an era marked by tsunamis of AI-generated content. It is fine to use ChatGPT, as long as you remember it is only a tool. Students might get along well by mastering prompt engineering to write academic essays. However, sentences generated by AI tend to be bland, even if good enough to earn grades. Alternatively, you can write prose entirely on your own, as I have been doing with this discussion of Moloch. What you write becomes interesting only when you sometimes surprise the reader with twists and turns away from the norm, a quality currently lacking in generative AI.
The terrible truth about Moloch is that it is mediocre, never original, reflecting its origins in statistically optimized evaluation functions. Despite the advent of AI, the problem remains human, all too human. Algorithms do not have direct access to the inner workings of our brains. Attention is the only outlet of the brain's computations. To pull this off, we need to focus on the best in us, paying attention to nice things. If we learn to appreciate the truly beautiful, and to distinguish genuine desires from superficial ones induced by social media, the spectre of Moloch will recede to our peripheral vision.
The wisdom is to keep being human by making flexible, broad, and focused use of the brain's attentional network. In choosing our focuses of attention, we exercise our free will, in defiance of Moloch. Indeed, the new era of artificial intelligence could yet prove to be a new renaissance, with a full-blown blossoming of human potential, if only we knew what to attend to. As the 2017 paper by Google researchers that initiated the transformer revolution eventually leading to ChatGPT was famously titled: attention is all you need.