Nick Bostrom: Will AI lead to tyranny? – UnHerd
Flo Read is UnHerd's producer and a presenter for UnHerd TV.
November 12, 2023
In the last year, artificial intelligence has progressed from science-fiction fantasy to impending reality. We can see its power in everything from online gadgets to whispers of a new, post-singularity tech frontier, as well as in renewed fears of an AI takeover.
One intellectual who anticipated these developments decades ago is Nick Bostrom, a Swedish philosopher at Oxford University and director of its Future of Humanity Institute. He joined UnHerd's Florence Read to discuss the AI era, how governments might exploit its power for surveillance, and the possibility of human extinction.
Florence Read: You're particularly well-known for your work on existential risk. What do you mean by that?
Nick Bostrom: The concept of existential risk refers to ways that the human story could end prematurely. That might mean literal extinction. But it could also mean getting ourselves permanently locked into some radically suboptimal state: that could be a collapse, or you could imagine some kind of global totalitarian surveillance dystopia that you could never overthrow. If it were sufficiently bad, that could also count as an existential catastrophe. Now, as for collapse scenarios, many of those might not be existential catastrophes, because civilisations have risen and fallen, and empires have come and gone. If our own contemporary civilisation totally collapsed, perhaps out of the ashes would eventually rise another civilisation, hundreds or thousands of years from now. So for something to be an existential catastrophe, it would not just have to be bad, but would have to have some sort of indefinite longevity.
FR: It might be too extreme, but to many people it feels as if a state of semi-anarchy has already descended.
NB: I think there has been a general sense in the last few years that the wheels are coming off, and that institutional processes and long-term trends that were previously taken for granted can no longer be relied upon: that there are going to be fewer wars every year, say, or that the education system is gradually improving. The faith people had in those assumptions has been shaken over the last five years or so.
FR: You've written a great deal about how we need to learn from each existential threat as we move forward, so that next time, when the threat is more severe or more intelligent or more sophisticated, we can cope. And that relates specifically, of course, to artificial intelligence.
NB: It's quite striking how radically the public discourse on this has shifted, even just in the last six to 12 months. Having been involved in the field for a long time, I can say there were people working on it, but broadly, in society, it was viewed more as science-fiction speculation than as a mainstream concern, and certainly nothing that top-level policymakers would have been concerned with. But in the UK we've recently had this Global AI Summit, and the White House just came out with executive orders. There's been quite a lot of talk, including about potential existential risks from AI as well as more near-term issues, and that is kind of striking.
I think that technical progress is really what has been primarily responsible for this. People saw for themselves, with GPT-3, then GPT-3.5 and GPT-4, how much this technology has improved.
FR: How close are we to something that you might consider the singularity, or an AGI that actually supersedes any human control over it?
NB: There is no obvious, clear barrier that would necessarily prevent systems next year or the year after from reaching this level. That doesn't mean it's the most likely scenario. But we don't know what happens as you scale GPT-4 to GPT-5. We do know that when you scaled from GPT-3 to GPT-4, it unlocked new abilities. There is also this phenomenon of grokking. Initially, you try to teach the AI some task, and it's too hard. Maybe it gets slightly better over time because it memorises more and more specific instances of the problem, but that's the hard, sluggish way of learning to do something. Then at some point, it kind of gets it. Once it has enough neurons in its brain, or has seen enough examples, it sort of sees the underlying principle, or develops the right higher-level concept, and that enables a sudden, rapid spike in performance.
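For readers who want a concrete picture of grokking, below is a minimal sketch of the kind of toy experiment in which the effect has been reported: a small network trained on modular addition tends to memorise its training pairs first, and only later, if at all, snaps into generalising to held-out pairs. This is an illustrative sketch in PyTorch; the architecture, hyperparameters and step counts are invented for the example, and whether the jump actually appears is sensitive to settings such as weight decay and training time.

```python
# Toy grokking setup: learn (a + b) mod P from half the addition table.
# Illustrative only; hyperparameters are invented, not tuned claims.
import torch
import torch.nn as nn

P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % P

perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

class AddMod(nn.Module):
    def __init__(self, p, d=128):
        super().__init__()
        self.emb = nn.Embedding(p, d)
        self.net = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))

    def forward(self, x):
        # Concatenate the embeddings of both operands, then classify the sum.
        return self.net(self.emb(x).flatten(1))

model = AddMod(P)
# Strong weight decay is reported to encourage the eventual generalising solution.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train_idx]), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            acc = (model(pairs[test_idx]).argmax(1) == labels[test_idx]).float().mean()
        # Watch for training loss falling early while held-out accuracy stays
        # near chance, then jumps much later: that late jump is grokking.
        print(f"step {step:5d}  train loss {loss.item():.3f}  held-out acc {acc.item():.3f}")
```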
FR: You write about the idea that we have to begin to teach AI a set of values by which it will function, if we have any hope of maintaining its benefit for humanity in the long term. And one of the liberal values that has been called into question when it comes to AI is freedom of speech. There have been examples of AI effectively censoring information, or filtering information that is available on a platform. Do you think that there is a genuine threat to freedom, or a totalitarian impulse, built into some of these systems that we're going to see extended and exaggerated further down the line?
NB: I think AI is likely to greatly increase the ability of centralised powers to keep track of what people are thinking and saying. We've already had, for a couple of decades, the ability to collect huge amounts of information. You can eavesdrop on people's phone calls or social-media postings, and it turns out governments do that. But what can you do with that information? So far, not that much. You can map out the network of who is talking to whom. And then, if there is a particular individual of concern, you could assign some analyst to read through their emails.
With AI technology, you could simultaneously analyse everybody's political opinions in a sophisticated way, using sentiment analysis. You could probably form a pretty good idea of what each citizen thinks of the government or the current leader if you had access to their communications. So you could have a kind of mass manipulation, but instead of sending out one campaign message to everybody, you could have customised persuasion messages for each individual. And then, of course, you can combine that with physical surveillance systems like facial recognition, gait recognition and credit card information. If you imagine all of this information feeding into one giant model, I think you will have a pretty good idea of what each person is up to, what and who they know, but also what they are thinking and intending to do.
If you have some sufficiently powerful regime in place, it might then implement these measures and then, perhaps, make itself immune to overthrow.
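To make that concrete: the sentiment-analysis step Bostrom describes is already a commodity capability. The following is a minimal sketch assuming the Hugging Face transformers library; the model is the library's default and the messages are invented examples. A state-scale system would differ mainly in volume and in what gets joined to the scores, not in kind.

```python
# Minimal sketch: scoring a batch of messages for sentiment.
# Assumes the Hugging Face `transformers` library; messages are invented.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pretrained model

messages = [
    "The new policy is a disaster for ordinary people.",
    "Honestly, the government handled this rather well.",
    "I can't wait for the next election.",
]

# Each result is a dict with a predicted label and a confidence score.
for msg, result in zip(messages, classifier(messages)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {msg}")
```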
FR: Do you think the rise of hyper-realistic propaganda, the deep-fake videos which AI is going to make possible in the coming years, will coincide with a rise in generalised scepticism in Western societies?
NB: I think in principle a society could adjust to it. But it will come at the same time as a whole bunch of other things: automated persuasion bots, for instance, and social companions built from these large language models, with visual components, that might be very compelling and addictive. And then also mass surveillance, and potential mass censorship or propaganda.
FR: We're talking about a tyrannical government that uses AI to surveil its citizens, but is there an innate moral component to the AI itself? Is there a chance that an AGI model could in some way become a bad actor on its own, without human intervention?
NB: There are a bunch of different concerns that one might have as we move towards increasingly powerful AI tools, and there are completely unnecessary feuds between the people who hold them. One person says, "Well, I think concern X should be taken seriously," and somebody else says, "I think concern Y should be taken seriously." People love to form tribes and to beat one another, but X, Y, Z and W all need to be taken into account. But yes, you're right that there is also the separate alignment problem, which is: with an arbitrarily powerful AI system, how can you make sure that it does what the people building it intend it to do?
FR: And this is where it's about building certain principles, an ethical code, into the system. Is that the way of mitigating that risk?
NB: Yes, or being able to steer it, basically. It's a separate question where you do then steer it: if you build in some principle or goal, which goal or which principle? But even just having the ability to point it towards any particular outcome you want, or a set of principles you want it to follow, is a difficult technical problem. And what is particularly hard is to figure out whether the way we would do that would continue to work even if the AI system became smarter than us, and perhaps eventually super-intelligent. If, at that point, we are no longer able to understand what it is doing, or why it is doing it, or what's going on inside its brain, we still want the original steering method to keep working at arbitrarily high levels of intelligence. And we might need to get that right on the first try.
FR: How do we do that with such incredible levels of dispute and ideological schism across the world?
NB: Even if it's toothless, we should make an affirmation of the general principle that ultimately AI should be for the benefit of all sentient life. If we're talking about a transition to the super-intelligence era, all humans will be exposed to some of the risk, whether they want it or not. And so it seems fair that all should also stand to have some slice of the upside if it goes well. And those principles should go beyond all currently existing humans and include, for example, animals, which we are treating very badly in many cases today, but also some of the digital minds themselves that might become moral subjects. As of right now, all we might hope for is some general, vague principle, and that can then be firmed up as we go along.
Another hope, and some recent progress has been made on this, is for next-generation systems to be tested prior to deployment, to check that they don't lend themselves to people who want to make biological weapons of mass destruction or commit cybercrime. So far AI companies have done some voluntary work on this: OpenAI had the technology for around half a year before releasing GPT-4, and did red-teaming exercises. More research on technical AI alignment would also be good, so that we solve the problem of scalable alignment before we have super-intelligence.
I think the whole area of the moral status of digital minds will require more attention. It needs to start to migrate from a philosophy-seminar topic to a serious mainstream issue. We don't want a future where the majority of sentient minds are digital minds that are horribly oppressed, with us as the pigs in Animal Farm. That would be one way of creating a dystopia. And it's going to be a big challenge, because it's already hard for us to extend empathy sufficiently to animals, even though animals have eyes and faces and can squeak.
Incidentally, I think there might be grounds for moral status besides sentience. If somebody can suffer, that might be sufficient to give them moral status. But even if you thought a being was not conscious, if it had goals, a conception of self, a sense of itself as an entity persisting through time, and the ability to enter into reciprocal relationships with other beings and humans, that might also ground various forms of moral status.
FR: We've talked a lot about the risks of AI, but what are its potential upsides? What would be the best-case scenario?
NB: I think the upsides are enormous. In fact, it would be tragic if we never developed advanced artificial intelligence. I think all the paths to really great futures ultimately lead through the development of machine super-intelligence. But the transition itself will be associated with major risks, and we need to be super-careful to get it right. I've started slightly worrying, in the last year or so, that we might overshoot with this increase in attention to the risks and downsides. It still seems unlikely, but less unlikely than it did a year ago, that we might get to the point of a permafrost: some situation where it is never developed.
FR: A kind of AI nihilism?
NB: Yes, where it becomes so stigmatised that it just becomes impossible for anybody to say anything positive about it, and there is pretty much a permanent ban on AI. I think that could be very bad. I still think we need a greater level of concern than we currently have. But I would want us to reach the optimal level of concern and stop there.
FR: Like a Goldilocks level of fear for AI.
NB: People like to move in herds, and I worry about it becoming a big stampede to say negative things about AI, and destroying the future in that way. We could go extinct through some other method instead, maybe synthetic biology, without ever even getting to at least roll the die with AI.
I would think that, actually, the optimal level of concern is slightly greater than what we currently have, so I still think there should be more concern. It's more dangerous than most people have realised. But I'm just starting to worry about overshooting, the conclusion being: let's wait for a thousand years before we develop it. Then, of course, it's unlikely that our civilisation will remain on track for a thousand years.
NB: We will hopefully be fine either way, but I think I would like the AI before some radical biotech revolution. Think about it this way: if you first get some sort of super-advanced synthetic biology, that might kill us; but if we're lucky, we survive it. Then maybe we invent some super-advanced molecular nanotechnology, and that might kill us; but if we're lucky, we survive that. And then you do the AI, and maybe that will kill us; or, if we're lucky, we survive that and get utopia. On that path, you have to get through three separate existential risks: first the biotech risk, plus the nanotech risk, plus the AI risk.
Whereas if we get AI first, maybe that will kill us, but if not, we get through it, and then I think the AI will handle the biotech and nanotech risks. So the total amount of existential risk on that second trajectory would be less than on the former. Now, it's more complicated than that, because we need some time to prepare for the AI. But you can start to think about optimal trajectories, rather than the very simplistic binary question of: Is technology X good or bad? We should be thinking, on the margin: which ones should we try to accelerate, and which ones retard?
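Bostrom's trajectory comparison can be restated as a toy calculation. The probabilities below are invented purely for illustration, not estimates from the interview; the point is only the structure: three sequential hurdles multiply, while a single gatekeeper hurdle that neutralises the rest does not.

```python
# Toy version of the trajectory argument. All numbers are hypothetical,
# invented for illustration; they are not Bostrom's estimates.
p_bio, p_nano, p_ai = 0.9, 0.9, 0.9  # chance of surviving each transition alone

# Trajectory A: biotech, then nanotech, then AI. Three independent hurdles,
# so the survival probabilities multiply.
survive_a = p_bio * p_nano * p_ai  # 0.729

# Trajectory B: AI first. If we survive that one transition, assume the
# resulting aligned super-intelligence handles the biotech and nanotech risks.
survive_b = p_ai  # 0.9

print(f"Trajectory A (bio, then nano, then AI): {survive_a:.3f}")
print(f"Trajectory B (AI first):                {survive_b:.3f}")
```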
NB: It is weird, if this worldview is even remotely correct, that we should happen to be alive at this particular point in human history, so close to this fulcrum or nexus on which the giant future of earth-originating intelligent life might hinge. Out of all the different people that have lived throughout history, and the people that might come later if things go well, that one should sit so close to this critical juncture seems a bit too much of a coincidence. And then you're led to these questions about the simulation hypothesis, and so on. I think there is more in heaven and on earth than is dreamed of in our philosophy, and that we understand quite little about how all of these pieces fit together.