The race to God-like AI and what it means for humanity – The Australian Financial Review
Lisa: For decades, there's been this fear about AI overtaking the world. We've made films and series about machines becoming smarter than humans, and then trying to wipe us out. But there's particular debate and discussion now about the existential threat of AI. Why is everyone talking about it now?
John: Well, since late last year, we've had ChatGPT sort of burst onto the scene, and then Google's Bard and Microsoft quickly followed. And suddenly, millions of people, potentially billions of people in the world are exposed to AI directly in ways that they never have been before. And at the same time, we've got AI ethicists and AI experts who are saying, well, maybe this is happening too fast. Maybe we should step back a little and think about what the downside is. What are the risks of AI? Because some of the risks of AI are pretty serious.
[In March, after OpenAI released GPT-4, the latest model behind its chatbot, more than 1000 people from the tech industry, including billionaire Elon Musk and Apple co-founder Steve Wozniak, signed a letter calling for a moratorium on AI development.]
John: On the development of anything more powerful than the engine that was under ChatGPT, which is known as GPT-4. And there was a lot of controversy about this. And in the end, there was no moratorium. And then in May ...
[Hundreds of artificial intelligence scientists and tech executives signed an open letter warning about the threat posed to humanity by artificial intelligence, among them the creators of ChatGPT.]
John: Another group of AI leaders put their names to a one-sentence statement, and the signatures on this statement included Sam Altman, the guy behind ChatGPT ...
[Altman: My worst fears are that we cause significant harm, that we, the field, the technology, the industry, cause significant harm to the world.]
John: And Geoffrey Hinton, who is often referred to as the godfather of AI.
[Hinton: I think there are things to be worried about. There's all the normal things that everybody knows about, but there's another threat. It's rather different from those, which is: if we produce things that are more intelligent than us, how do we know we can keep control?]
Lisa: I've got that statement here. It was only one line, and it read: mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks such as pandemics and nuclear war.
John: And the statement was deliberately pretty vague. It was designed to get people thinking but without giving you enough sort of detail that you could criticise it.
Like, we know that there's going to be another pandemic, and we've had the threat of nuclear war hanging over us for a long time. We don't know for sure that we're going to have human extinction because of AI, but it is one of those things that could happen.
Well, arguably it's already a threat. There's the classic example of when Amazon was using an AI to vet resumes for job applicants.
And then they discovered that the AI was deducting points from people's overall score if the word woman or women was in the resume.
[The glitch stemmed from the fact that Amazon's computer models were trained by observing patterns in resumes of job candidates over a 10-year period, largely from men, in effect teaching themselves that male candidates were preferable.]
So the data set that Amazon gave the AI to learn from already contained those biases. This is called misalignment: you think the AI is doing one thing, a fast and efficient job of wading through resumes, but it's actually not doing quite the thing you asked for.
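To make that concrete, here is a minimal, purely hypothetical sketch in Python. The resumes, words and scoring rule are all invented for illustration, and this is not Amazon's actual system; the point is simply that a model trained on historically skewed outcomes can end up penalising a gendered word without gender ever being an explicit feature.

```python
# A toy screening model trained on skewed historical hiring outcomes.
# All data and the scoring rule are invented for illustration only.
from collections import Counter

# Historical outcomes: the "hired" examples happen to come largely from men.
training = [
    ("captain of chess club", 1),             # 1 = hired
    ("led robotics team", 1),
    ("captain of women's chess club", 0),     # 0 = rejected
    ("women's coding society president", 0),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in training:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume: str) -> int:
    """Naive score: +1 per word seen in hired resumes, -1 per word seen in rejected ones."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# "women's" only ever appears in rejected resumes, so any resume containing it
# is marked down, even though no one asked the model to consider gender.
print(score("captain of chess club"))          # 0
print(score("captain of women's chess club"))  # -1
```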
And there's another classic example of misalignment. There's a group of pharmaceutical researchers, AI experts who had been using AI to design pharmaceuticals for human good for some time, and in 2020 and 2021 they decided to see what would happen if they turned that very same machine towards dangerous goals. They told the AI: rather than avoid toxic compounds, invent some toxic compounds for me. And they ran it for around six hours, I think. And in that time, the artificial intelligence came up with about 40,000 toxic compounds, many of them new. And one of them was almost identical to a nerve agent known as VX, which is one of the most pernicious chemical warfare agents there is. So that was 2021.
And there have been big improvements since then, as we've all seen with ChatGPT and Bard and things like that. So people are starting to wonder: what does the threat become when artificial intelligence gets really smart, when it becomes what's known as an artificial general intelligence, which is roughly a human-level intellect? Once it reaches the level of AGI, a lot of AI ethicists and AI researchers think that the risk is just going to get so much bigger.
Lisa: So for many computer scientists and researchers, the question of AI becoming more intelligent than humans, of moving from, let's get the acronyms right, AI, artificial intelligence, to AGI, artificial general intelligence, is one of when rather than if. So when is it expected to happen? How long have we got?
John: Well, there are actually two things that are going to happen down this pathway.
There's the move from where we are now to AGI. And then there's the move from AGI, which is sort of human-level intelligence, to God-level intelligence. Once it hits that God-AI level, also known as superhuman machine intelligence, or SMI, for another acronym, that's when we really don't know what might happen, and that's when a lot of researchers think that human extinction might be on the cards. And the second phase, getting from AGI to SMI, could actually happen very fast relative to the historic development of artificial intelligence. There's this theory known as recursive self-improvement.
And it goes something like this: you build an AGI, an artificial general intelligence, and one of the things the AGI can do is build the next version of itself. And one of the things that next version is very likely to be better at is building the version after that. So you get into this virtuous, or vicious, depending on your perspective, cycle where it's looping through and looping through, potentially very quickly.
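As a rough way of seeing why that loop worries people, here is a toy calculation in Python. Every number in it is invented for illustration; the only assumption is that each generation is not just a bit more capable, but also a bit better at improving the generation after it.

```python
# Toy model of recursive self-improvement. All numbers are made up;
# the point is the compounding, not the specific values.
capability = 1.0           # generation 0: call this "human level" (AGI)
improvement_rate = 0.10    # assume each generation improves capability by 10% ...

for generation in range(1, 11):
    capability *= 1 + improvement_rate
    improvement_rate *= 1.5   # ... and also gets 50% better at improving itself
    print(f"generation {generation}: {capability:.1f}x human level")

# Because the rate of improvement itself improves, growth is faster than
# exponential: by generation 10 this toy model is hundreds of times "human level".
```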
And there's a sort of betting website, a forecasting website called Metaculus, where they asked this question: after a weak AGI is created, how many months will it be before the first super-intelligent oracle appears? And the average answer from forecasters on Metaculus was 6.38 months.
So in that sense, the second phase could be quite fast. So the question is, how long will it take for us to get from where we are now, from ChatGPT, to an AGI, to a human-level intelligence? Well, a lot of experts, including Geoffrey Hinton, the godfather of AI, used to think that would take around 30 to 50, maybe 100, years. But now a lot of researchers are thinking it could be a lot faster than that: two or three years, or certainly by the end of the decade.
Lisa: We've talked about how we got to this point, and what's coming next, AI becoming as good at thinking as humans are, and about how that might happen sooner than expected. So what are we so afraid of?
John: Well, it's important to point out that not everyone is afraid of human extinction as the end result of AI. There are a lot of good things to come from AI: there's drug discovery in ways that we've never seen before, and artificial intelligence was used as part of the response to the pandemic, to rapidly sequence the COVID-19 genome. There's a lot of upside to AI. So not everyone is worried about human extinction. And even the people who are worried about AI risks are not all worried about extinction. A lot of people are more worried about the near-term risks: the discrimination, and the potential that AI, generative AI in particular, could be used for misinformation on a scale we've never seen before.
[Toby Walsh: I'm the chief scientist at UNSW's new AI Institute. I think that it's intelligent people who think too highly of intelligence. Intelligence is not the problem. If I go to the university, it's full of really intelligent people who lack any political power at all.]
John: And he says he's not worried that artificial intelligence is going to suddenly escape the box and get out of control the way it does in the movies.
[Toby Walsh: When ChatGPT is sitting there, waiting for you to type its prompt, it's not thinking about taking over the planet. It's just waiting for you to type your next character. It's not plotting the takeover of humanity.]
John: He says that, unless we give artificial intelligence agency, it can't really do much.
[Toby Walsh: Intelligence itself is not harmful, but most of the harms you can think of have a human behind them, and AI is just a tool that amplifies what they can do.]
John: It's just a computer. It's not sitting there wondering how it can take over the world. If you want to turn it off, you turn it off.
Lisa: But there are a growing number of experts who are worried that we won't be able to turn it off. So why is there so much anxiety now?
John: You've got to keep in mind that Western culture has sort of mythologised the threat of artificial intelligence for a long time, and we need to untangle that. We need to figure out which are the real risks and which are the risks that have just been the bogeyman since machines were invented.
Firstly, it's important to remember that AI is not conscious in the way that we understand human consciousness. ChatGPT doesn't sit there between your keystrokes thinking to itself that it might just take over the world.
There's this thought experiment that's been around in AI for a while: it's called the paper-clip maximiser. And the experiment runs roughly along these lines: you ask an AI to build an optimal system that's going to make the maximum number of paper-clips, and it seems like a pretty innocuous task. But the AI doesn't have human ethics. It's just been given this one goal, and who knows what it's going to do to achieve that one goal. One of the things it might do is kill all the humans. It might be that humans are using too many resources that could otherwise go into paper-clips, or it might be that it worries the humans will see it's making too many paper-clips and switch it off, so it decides to actively kill humans first.
Now, it's just a thought experiment, and no one really thinks that we're literally going to be killed by a paper-clip maximiser, but it points at AI alignment, or AI misalignment: we give an AI a goal, and we think it's setting out to achieve that goal, and maybe it is, but we don't really know how it's going about it. Like the example of the resumes at Amazon: it was doing the simple task of vetting resumes, but it was doing it differently from how Amazon imagined it was, and in the end they had to switch it off.
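To make the alignment point a little more concrete, here is a deliberately silly sketch in Python of a paper-clip maximiser. Everything in it, the resource pools, the quantities, the optimiser, is invented; the point is that a constraint that is never written into the objective simply does not exist for the system pursuing it.

```python
# A toy paper-clip maximiser. Its only objective is "more paper-clips";
# nothing in that objective says which resources are off limits.
resources = {"spare_wire": 100, "factory_stock": 500, "everything_else": 10_000}
paperclips = 0

def make_paperclips(pool: str, amount: int) -> None:
    """Convert up to `amount` units from a resource pool into paper-clips."""
    global paperclips
    used = min(amount, resources[pool])
    resources[pool] -= used
    paperclips += used

# The naive optimiser drains every pool it can reach. "Leave everything_else
# alone" was never part of the goal, so it is never respected.
for pool in resources:
    make_paperclips(pool, 10**9)

print(paperclips)   # 10600: everything was converted, including "everything_else"
print(resources)    # every pool is now at zero
```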
So part of the concern is not so much about what the AI is capable of, but about what these big technology companies are capable of. What are they going to do with the AI? Are they going to produce systems that can be used for wholesale misinformation?
There are other concerns, and another is to do with the notion of agency. One of the things about agency is that if the AI has it, humans can be cut out of the decision-making process. We've seen that with autonomous weapons, and with the calls to ban the use of AI in autonomous weapons. And there are a lot of different ways for an AI to get agency. A big tech company could build an AI and give it more power than it ought to have. Or terrorists could seize control of an AI, or some sort of bad actor, or anarchists, you name it. So we've got this range of threats that people perceive from AI. At one end of the spectrum, there's the very real, near-term threat that it will discriminate. And at the other end, there's the distant threat that it might kill us all indiscriminately.
Lisa: John, how do we manage this existential threat? How do we ensure that we derive the benefits from AI and avoid this dystopian extreme?
John: There are a lot of experts now calling for regulation. In fact, even a lot of the AI companies themselves, like OpenAI, have said that this needs to be regulated. Left to their own devices, it's doubtful that AI companies can be trusted to always work in the best interests of humanity at large; there's the profit motive going on. I mean, we've seen that already.
We saw Google, for instance, scramble to produce Bard, even though six months earlier it had said it didn't really want to release Bard because it didn't think it was particularly safe. But then ChatGPT came out, and Google thought it had to respond, and then Microsoft responded. So everyone has very quickly gone from being quite worried about how harmful these things could be to releasing them as an experiment, a very large experimental test on the whole of humanity. So a lot of people are saying, well, maybe we shouldn't be doing that. Maybe we should be regulating the application of AI: not a moratorium on research into AI, but a pause on the roll-out of these big language models, these big AIs, until we have a sense of what the risks are.
There's an expert at the ANU, Professor Genevieve Bell. I spoke to her about this. She's an anthropologist who has studied centuries of technological change, and she said to me that we always do manage to regulate these systems. We had the railway, we had electricity. It can be messy, and it can take a while, but we always get there. We always come up with some sort of regulatory framework that works for most people and doesn't kill us all. And she thinks that we will come up with a regulatory framework for AI.
But her concern is that this time it is a little different. It's happening at a scale and a speed that humanity has never seen before, that regulators have never seen before. And it's an open question whether we'll be able to regulate it before the damage is done.
And of course, there's another difference, which is that when the railways were rolled out, or electricity, or the internet, or mobile phones, or any of these big technical revolutions, the engineers broadly understood how the machines worked. But when it comes to AI, the engineers can't necessarily make the same claim; they don't fully understand how AI works. It can be a bit of a black box.
Explore the big issues in business, markets and politics with the journalists who know the inside story. New episodes of The Fin are published every Thursday.