Opinion | Elon Musk, Geoff Hinton, and the War Over A.I. – The New York Times
There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks and the steps we need to take to mitigate them.
The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.
The result is a cacophony of coded language, contradictory views and provocative policy demands that are undermining our ability to grapple with a technology destined to drive the future of politics, our economy and even our daily lives.
These factions are in dialogue not only with the public but also with one another. Sometimes, they trade letters, opinion essays or social media threads outlining their positions and attacking others in public view. More often, they tout their viewpoints without acknowledging alternatives, leaving the impression that their enlightened perspective is the inevitable lens through which to view A.I. But if lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.
To understand the fight and the impact it may have on our shared future, look past the immediate claims and actions of the players to the greater implications of their points of view. When you do, you'll realize this isn't really a debate only about A.I. It's also a contest about control and power, about how resources should be distributed and who should be held accountable.
Beneath this roiling discord is a true fight over the future of society. Should we focus on avoiding the dystopia of mass unemployment, a world where China is the dominant superpower or a society where the worst prejudices of humanity are embodied in opaque algorithms that control our lives? Should we listen to wealthy futurists who discount the importance of climate change because they're already thinking ahead to colonies on Mars? It is critical that we begin to recognize the ideologies driving what we are being told. Resolving the fracas requires us to see through the specter of A.I. to stay true to the humanity of our values.
One way to decode the motives behind the various declarations is through their language. Because language itself is part of their battleground, the different A.I. camps tend not to use the same words to describe their positions. One faction describes the dangers posed by A.I. through the framework of safety, another through ethics or integrity, yet another through security and others through economics. By decoding who is speaking and how A.I. is being described, we can explore where these groups differ and what drives their views.
The loudest perspective is a frightening, dystopian vision in which A.I. poses an existential risk to humankind, capable of wiping out all life on Earth. A.I., in this vision, emerges as a godlike, superintelligent, ungovernable entity capable of controlling everything. A.I. could destroy humanity or pose a risk on par with nukes. If we're not careful, it could kill everyone or enslave humanity. It's likened to monsters like the Lovecraftian shoggoths, artificial servants that rebelled against their creators, or paper clip maximizers that consume all of Earth's resources in a single-minded pursuit of their programmed goal. It sounds like science fiction, but these people are serious, and they mean the words they use.
These are the A.I. safety people, and their ranks include the "Godfathers of A.I.," Geoff Hinton and Yoshua Bengio. For many years, these leading lights battled critics who doubted that a computer could ever mimic the capabilities of the human mind. Having steamrollered the public conversation by creating large language models like ChatGPT and other A.I. tools capable of increasingly impressive feats, they appear deeply invested in the idea that there is no limit to what their creations will be able to accomplish.
This doomsaying is boosted by a class of tech elite that has enormous power to shape the conversation. And some in this group are animated by the radical effective altruism movement and the associated cause of long-termism, which tend to focus on the most extreme catastrophic risks and emphasize the far-future consequences of our actions. These philosophies are hot among the cryptocurrency crowd, like the disgraced former billionaire Sam Bankman-Fried, who at one time possessed sudden wealth in search of a cause.
Reasonable-sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.
Many doomsayers say they are acting rationally, but their hype about hypothetical existential risks amounts to making a misguided bet with our future. In the name of long-termism, Elon Musk reportedly believes that our society needs to encourage reproduction among those with the greatest culture and intelligence (namely, his ultrarich buddies). And he wants to go further, for instance by limiting the right to vote to parents and even populating Mars. It's widely believed that Jaan Tallinn, the wealthy long-termer who co-founded the most prominent centers for the study of A.I. safety, has made dismissive noises about climate change because he thinks that it pales in comparison with far-future unknown unknowns like risks from A.I. The technology historian David C. Brock calls these fears "wishful worries": problems that it would be nice to have, in contrast to the actual agonies of the present.
More practically, many of the researchers in this group are proceeding full steam ahead in developing A.I., demonstrating how unrealistic it is to simply hit pause on technological development. But the roboticist Rodney Brooks has pointed out that we will see the existential risks coming, the dangers will not be sudden and we will have time to change course. While we shouldn't dismiss the Hollywood nightmare scenarios out of hand, we must balance them with the potential benefits of A.I. and, most important, not allow them to strategically distract from more immediate concerns. Let's not let apocalyptic prognostications overwhelm us and smother the momentum we need to develop critical guardrails.
While the doomsayer faction focuses on the far-off future, its most prominent opponents are focused on the here and now. We agree with this group that there's plenty already happening to cause concern: Racist policing and legal systems that disproportionately arrest and punish people of color. Sexist labor systems that rate feminine-coded résumés lower. Superpower nations automating military interventions as tools of imperialism and, someday, killer robots.
The alternative to the end-of-the-world, existential risk narrative is a distressingly familiar vision of dystopia: a society in which humanity's worst instincts are encoded into and enforced by machines. The doomsayers think A.I. enslavement looks like "The Matrix"; the reformers point to modern-day contractors doing traumatic work at low pay for OpenAI in Kenya.
Propagators of these A.I. ethics concerns like Meredith Broussard, Safiya Umoja Noble, Rumman Chowdhury and Cathy O'Neil have been raising the alarm on inequities coded into A.I. for years. Although we don't have a census, it's noticeable that many leaders in this cohort are people of color, women and people who identify as L.G.B.T.Q. They are often motivated by insight into what it feels like to be on the wrong end of algorithmic oppression and by a connection to the communities most vulnerable to the misuse of new technology. Many in this group take an explicitly social perspective: When Joy Buolamwini founded an organization to fight for equitable A.I., she called it the Algorithmic Justice League. Ruha Benjamin called her organization the Ida B. Wells Just Data Lab.
Others frame efforts to reform A.I. in terms of integrity, calling for Big Tech to adhere to an oath to consider the benefit of the broader public alongside or even above their self-interest. They point to social media companies' failure to control hate speech or how online misinformation can undermine democratic elections. Adding urgency for this group is that the very companies driving the A.I. revolution have, at times, been eliminating safeguards. A signal moment came when Timnit Gebru, a co-leader of Google's A.I. ethics team, was dismissed for pointing out the risks of developing ever-larger A.I. language models.
While doomsayers and reformers share the concern that A.I. must align with human interests, reformers tend to push back hard against the doomsayers' focus on the distant future. They want to wrestle the attention of regulators and advocates back toward present-day harms that are exacerbated by A.I.: misinformation, surveillance and inequity. Integrity experts call for the development of responsible A.I., for civic education to ensure A.I. literacy and for keeping humans front and center in A.I. systems.
This group's concerns are well documented, urgent and far older than modern A.I. technologies. Surely, we are a civilization big enough to tackle more than one problem at a time; even those worried that A.I. might kill us in the future should still demand that it not profile and exploit us in the present.
Other groups of prognosticators cast the rise of A.I. through the language of competitiveness and national security. One version has a post-9/11 ring to it: a world where terrorists, criminals and psychopaths have unfettered access to technologies of mass destruction. Another version is a Cold War narrative of the United States losing an A.I. arms race with China and its surveillance-rich society.
Some arguing from this perspective are acting on genuine national security concerns, and others have a simple motivation: money. These perspectives serve the interests of American tech tycoons as well as the government agencies and defense contractors they are intertwined with.
OpenAI's Sam Altman and Meta's Mark Zuckerberg, both of whom lead dominant A.I. companies, are pushing for A.I. regulations that they say will protect us from criminals and terrorists. Such regulations would be expensive to comply with and are likely to preserve the market position of leading A.I. companies while restricting competition from start-ups. In the lobbying battles over Europe's trailblazing A.I. regulatory framework, U.S. megacompanies pleaded to exempt their general-purpose A.I. from the tightest regulations, and whether and how to apply high-risk compliance expectations on noncorporate open-source models emerged as a key point of debate. All the while, some of the moguls investing in upstart companies are fighting the regulatory tide. The Inflection AI co-founder Reid Hoffman argued, "The answer to our challenges is not to slow down technology but to accelerate it."
Any technology critical to national defense usually has an easier time avoiding oversight, regulation and limitations on profit. Any readiness gap in our military demands urgent budget increases, with funds distributed to the military branches and their contractors, because we may soon be called upon to fight. Tech moguls like Google's former chief executive Eric Schmidt, who has the ear of many lawmakers, signal to American policymakers about the Chinese threat even as they invest in U.S. national security concerns.
The warriors' narrative seems to misrepresent that science and engineering are different from what they were during the mid-20th century. A.I. research is fundamentally international; no one country will win a monopoly. And while national security is important to consider, we must also be mindful of the self-interest of those positioned to benefit financially.
As the science-fiction author Ted Chiang has said, fears about the existential risks of A.I. are really fears about the threat of uncontrolled capitalism, and dystopias like the paper clip maximizer are just caricatures of every start-up's business plan. Cosma Shalizi and Henry Farrell further argue that we've lived among shoggoths for centuries, tending to them as though they were our masters, as monopolistic platforms devour and exploit the totality of humanity's labor and ingenuity for their own interests. This dread applies as much to our future with A.I. as it does to our past and present with corporations.
Regulatory solutions do not need to reinvent the wheel. Instead, we need to double down on the rules that we know limit corporate power. We need to get more serious about establishing good and effective governance on all the issues we lost track of while we were becoming obsessed with A.I., China and the fights picked among robber barons.
By analogy to the health care sector, we need an A.I. public option to truly keep A.I. companies in check. A publicly directed A.I. development project would serve to counterbalance for-profit corporate A.I. and help ensure a level playing field for access to the 21st century's key technology while offering a platform for the ethical development and use of A.I.
Also, we should embrace the humanity behind A.I. We can hold founders and corporations accountable by mandating greater A.I. transparency in the development stage, in addition to applying legal standards for actions associated with A.I. Remarkably, this is something that both the left and the right can agree on.
Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment. As those with power and privilege seem poised to harness A.I. to accumulate much more or pursue extreme ideologies, let's think about how we can constrain their influence in the public square rather than cede our attention to their most bombastic nightmare visions for the future.