Will AI be the death of us? The artificial intelligence pioneers behind ChatGPT and Google’s Deep Mind say it could be – The Australian Financial…
For Hinton, as for many computer scientists and researchers in the AI community, the question of artificial intelligence becoming more intelligent than humans is one of when, rather than if.
Testifying from the seat next to Altman last month was Professor Gary Marcus, a New York University professor emeritus who specialises in psychology and neural science, and who ought to know as well as anyone the answer to the question of when the AI will become as good at thinking as humans are at which point it will be known as AGI (artificial general intelligence), rather than merely AI.
But Marcus doesnt know.
Is it going to be 10 years? Is it going to be 100 years? I dont think anybody knows the answer to that question.
But when we get to AGI, maybe lets say its 50 years, that really is going to have profound effects on labour, he testified, responding to a question from Congress about the potential job losses stemming from AI.
OpenAI CEO Sam Altman speaks at the US Senate hearing on artificial intelligence on May 16, 2023. Seated beside him is NYU Professor Emeritus Gary Marcus. AP
And indeed, the effect an AGI might have on the workforce goes to the crux of the matter, creating a singular category of unemployment that might ultimately lead to human extinction.
Apart from putting office workers, artists and journalists out of work, one effect that achieving the AGI milestone might have on labour is that it could put out of work the very humans who built the AI software in the first place, too.
If an artificial intelligence is general enough to replicate most or all tasks now done by the human brain, then one task it should be able to replicate is to develop the next generation of itself, the thinking goes.
That first generation of AGI-generated AGI might be only fractionally better than the generation it replaced, but one of the things its very likely to be fractionally better at is generating the second generation version of AGI-generated AGI.
Run that computer loop a few times, or a few million times with each improvement, each loop is likely to get better optimised and run faster, too then what started simply as an AGI can spiral into whats sometimes known as a superhuman machine intelligence, otherwise known as the God AI.
Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.
Sam Altman, Open AI CEO
Though he dodged the question when testifying before Congress, Sam Altman had actually blogged on this topic back in 2015, while he was still running the influential US start-up accelerator Y Combinator and 10 months before he would go on to co-found OpenAI, the worlds most influential AI company, together with Elon Musk, Peter Thiel, Amazon and others.
Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity, he blogged at the time.
There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.
Professor Max Tegmark, a Swedish-American physicist and machine-learning researcher at the Massachusetts Institute of Technology, says its unlikely todays AI technology would be capable of anything that could wipe out humanity.
AIs job is to perform singular tasks without hurdles. When challenges present themselves, AI steps in to ensure they are removed no matter what they are.
It would probably take an AGI for that, and more likely an AGI that has progressed to the level of superhuman intelligence, he tells AFR Weekend.
As to exactly how an AGI or SMI might cause human extinction, Tegmark said there are any number of seemingly innocuous ways the goals of an AI can become misaligned with the goals of humans, leading to unexpected outcomes.
Most likely it will be something we cant imagine and wont see coming, he says.
In 2003, the Swedish philosopher Nick Bostrom devised the paper-clip maximiser thought experiment as a way of explaining AI alignment theory.
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realise quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans, Bostrom wrote.
Last month, the US Air Force was involved in a thought experiment along similar lines, replacing paper clip maximisers with attack drones that use AI to choose targets, but still rely on a human operator for yes/no permission to destroy the target.
A plausible outcome of the experiment, said Colonel Tucker Hamilton, the USAFs chief of AI Test and Operations, was that the drone ends up killing any human operator who stops it achieving its goal of killing targets by saying no to a target.
If the AIs goal was then changed to include not killing drone operators, the drone might end up wiping out the telecommunications equipment the operator was using to communicate the no to it, the experiment found.
Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI, Colonel Hamilton was quoted as saying in a Royal Aeronautical Society statement.
But the challenges posed by AI arent just theoretical. Its already commonplace for machine-learning systems, when given seemingly innocuous tasks, to inadvertently produce outcomes not aligned with human well-being.
In 2018, Amazon pulled the plug on its machine-learning-based recruitment system, when the company found AI had learned to deduct points from applicants who had the word women in their resume. (The AI had been trained to automate the resume-sifting process, and simply made a correlation between resumes from females, and the outcome of those resumes getting rejected by human recruiters.)
The fundamental problem, Tegmark says, is that its difficult, perhaps even impossible, to ensure that AI systems are completely aligned with the goals of the humans who create them, much less the goals of humanity as a whole.
And the more powerful the AI system, the greater the risk that a misaligned outcome could be catastrophic.
And it may not take artificial intelligence very long at all to progress from the AGI phase to the SMI phase, at which time the very existence of humanity might be dangling in the wind.
In an April Time magazine article wondering why most AI ethicists were so loath to discuss the elephant in the room human extinction as unintended a side effect of SMI Professor Tegmark pointed to the Metaculus forecasting website, which asked this question of the expert community: After a weak AGI is created, how many months will it be before the first super-intelligent oracle?
The average answer Metaculus got back was 6.38 months.
It may not be about how long it will take to get from AGI to SMI. That computer loop, known as recursive self-improvement, might take care of that step quite rapidly, in no time at all compared to the 75 years it took AI researchers to come up with ChatGPT.
(Though thats not necessarily so. As one contributor to the Metaculus poll pointed out, If AGI develops on a system with a lot of headroom, I think itll rapidly achieve superintelligence. But if AGI develops on a system without sufficient resources, it could stall out. I think scenario number two would be ideal for studying AGI and crafting safety rails so, heres hoping for slow take-off.)
The big question is, how long will it take to get from ChatGPT, or Googles Bard, to AGI?
Of Professor Marcus three stabs at an answer 10, 50, or 100 years I ask Professor Tegmark which he thinks is most likely.
I would guess sooner than that, he says.
People used to think that AGI would happen in 30 years or 50 years or more, but a lot of researchers are talking about next year or two years from now, or at least this decade almost for sure, he says.
What changed the thinking about how soon AI will become AGI was the appearance of OpenAIs GPT-4, the large language model (LLM) machine-learning system that underpins ChatGPT, and the similar LLMs used by Bard and others, says Professor Tegmark.
In March, Sbastien Bubeck, the head of the Machine Learning Foundations group at Microsoft Research, and a dozen other Microsoft researchers, submitted a technical paper on the work theyd been doing on GPT-4, which Microsoft is funding and which runs on Microsofts cloud service, Azure.
The paper was called Sparks of Artificial General Intelligence: Early Experiments with GPT-4, and argued that recent LLMs show more general intelligence than any previous AI models.
But sparks as anyone who has ever tried to use an empty cigarette lighter knows dont always burst into flames.
Altman himself has doubts the AI industry can keep closing in on AGI just by building more of what its already building, but bigger.
Making LLMs ever larger could be a game of diminishing returns, hes on record saying.
I think theres been way too much focus on parameter count this reminds me a lot of the gigahertz race in chips in the 1990s and 2000s, where everybody was trying to point to a big number, he said at an MIT conference in April.
(The size of an LLM is measured in parameters, roughly equivalent to counting the neural connections in the human brain. The predecessor to GPT-4, GPT-3, had about 175 billion of them. OpenAI has never actually revealed how large GPT-4s parameter count is, but its said to be about 1 trillion, putting it in the same ballpark as Googles 1.2-trillion-parameter GLaM LLM.)
I think were at the end of the era where its going be these giant, giant models, he said.
Testifying under oath before Congress, Altman said OpenAI wasnt even training a successor to GPT-4, and had no immediate plans to do so.
Elsewhere in his testimony, Altman also complained that people were using ChatGPT too much, which may be related to the scaling issue.
Actually, wed love it if theyd use it less because we dont have enough GPUs, he told Congress, referring to the graphics processing units that were once mainly used by computer gamers, but then found a use mining bitcoins and other cryptocurrency, and now are used by the AI industry on a vast scale to train AI models.
Two things are worth noting here: the latest GPUs designed specifically to run in data centres like the ones Microsoft uses for Azure cost about $US40,000 each; and OpenAI is believed to have used about 10,000 GPUs to train GPT-4.
Its possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows, which is why we should worry now.
Geoffrey Hinton, AI pioneer
Though Altman never elaborated on his pessimism about the AI industry continuing along the path of giant language models, its likely that at least some of that negativity has to do with the short supply (and concomitant high cost) of raw materials like GPUs, as well as a shortage of novel content to train the LLMs on.
Having already scraped most of the internets written words to feed the insatiable LLMs, the AI industry is now turning its attention to spoken words, scraped from podcasts and videos, in an effort to squeeze more intelligence out of their LLMs.
Regardless, it seems the path from todays LLMs to future artificial general intelligence machines may not be a straightforward one. The AI industry may need new techniques or, indeed, a partial return to old, hand-crafted AI techniques discarded in favour of todays brute-force machine learning systems to further make progress.
Well make them better in other ways, Altman said at that MIT conference.
Nevertheless, the godfather of AI, Hinton himself, recently revised his own estimate of between 30 and 50 years before the world will see the first AGI.
I now predict five to 20 years but without much confidence. We live in very uncertain times. Its possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows which is why we should worry now, he tweeted in May.
And one of Hintons close colleagues and another godfather of AI, Yoshua Bengio, pointed out in a recent news conference that, by one metric, AGI has already been achieved.
We have basically now reached the point where there are AI systems that can fool humans, meaning they can pass the Turing Test, which was considered for many decades a milestone of intelligence.
That is very exciting, because of the benefits we can bring with socially positive applications of AI but also Im concerned that powerful tools can also have negative uses, and that society is not ready to deal with that, he said.
Mythically, of course, society actually has been long ready to deal with the appearance of a superhuman machine intelligence. At the very least, we humans have been prepared for a fight with one for many decades, long before intelligent machines were turning people into fleshy D-cell batteries in the movie The Matrix, forcing the human resistance underground.
Professor Genevieve Bell, a cultural anthropologist and director of the School of Cybernetics at the ANU, says Western culture has a longstanding love-hate relationship with any major technology transformation, going back as far as the railways and the dark Satanic Mills of the Industrial Revolution.
Its a cultural fear that weve had since the beginning of time. Well, certainly since the beginning of machines, she says.
And we have a history of mobilising these kind of anxieties when technologies get to scale and propose to change our ideas of time and place and social relationships.
Dr Genevieve Bell traces our love-hate relationship with new technology back to the dark Satanic Mills of the Industrial Revolution.
In that context, the shopping list of risks now being attached to AI that list beginning with mass loss of livelihoods and ending with mass loss of life is neither new nor surprising, says Bell.
Ever since we have talked about machines that could think or artificial intelligence there has been an accompanying set of anxieties about what would happen if we got it right, whatever right would look like.
Thats not to say the fears are necessarily unwarranted, she emphasises. Its just to say theyre complicated, and we need to figure out what fears have a solid basis in fact, and which fears are more mythic in their quality.
Why has our anxiety reached a fever pitch right now? she asks.
How do we right-size that anxiety? And how do we create a space where we have agency as individuals and citizens to do something about it?
Those are the big questions we need to be asking, she says.
One anxiety we should right-size immediately, says Professor Toby Walsh, chief scientist at the AI Institute at the University of NSW, is the notion that AI will rise up against humanity and deliberately kill us all.
Im not worried that theyre suddenly going to escape the box and take over the planet, he says.
Firstly, theres still a long way to go before theyre as smart as us. They cant reason, they make some incredibly dumb mistakes, and there are huge areas in which they just completely fail.
Secondly, theyre not conscious; they dont have desires of their own like we do. Its not as if, when youre not typing something into ChatGPT, its sitting there thinking, Oh, Im getting a bit bored. How could I take over the place?
Its not doing anything at all when its not being used, he says.
Nevertheless, artificial intelligence has the potential to do a great deal of damage to human society if left unregulated, and if tech companies such as Microsoft and Google continue to be less transparent in their use of AI than they need to be.
Professor Toby Walsh one of Australias leading expert on AI. Louie Douvis
I do think that tech companies are behaving in a not particularly responsible way. In particular, they are backtracking on behaviours that were more responsible, says Walsh, citing the example of Google, which last year had refused to release an LLM-based chatbot because it found the chatbot wasnt reliable enough, but then rushed to release it anyway, under the name Bard, after OpenAI came out with ChatGPT.
Another of the genuine concerns is that powerful AI systems will fall into the hands of bad actors, he says.
In an experiment conducted for an international security conference in 2021, researchers from Collaborations Pharmaceuticals, a drug research company that uses machine learning to help develop new compounds, decided to see what would happen if they told their machine learning systems to seek out toxic compounds, rather than avoid them.
In particular, they chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the 20th century, the researchers later reported in Nature magazine.
In less than six hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired (toxicity) threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible, they wrote.
Computer systems only have goals that we give them, but Im very concerned that humans will give them bad goals, says Professor Walsh, who believes there should be a moratorium on the deployment of powerful AI systems until the social impact has been properly thought through.
Professor Nick Davis, co-director of the Human Technology Institute at the University of Technology, Sydney, says were now at a pivotal moment in human history, when society needs to move beyond simply developing principles for the ethical use of AI (a practice that Bell at ANU says has been going on for decades) and actually start regulating the business models and operations of companies that use AI.
But care must be taken not to over-regulate artificial intelligence, too, Davis warns.
We dont want to say none of this stuff is good, because a lot of it is. AI systems prevented millions of deaths around the world because of their ability to sequence the genome of the COVID-19 sequence.
But we really dont want to fall in the trap of letting a whole group of people create failures at scale, or create malicious deployments or overuse AI in ways that just completely goes against what we think of as a thoughtful, inclusive, democratic society, he says.
Bell, who was the lead author on the governments recent Rapid Response Information Report on the risks and opportunities attached to the use of LLMs, also believes AI needs to be regulated, but fears it wont be easy to do.
At a societal and at a planetary scale, we have over the last 200 plus years gone through multiple large-scale transformations driven by the mass adoption of new technical systems. And weve created regulatory frameworks to manage those.
So the optimistic part of my brain says we have managed through multiple technical transformations in the past, and there are things we can learn from that that should help us navigate this one, says Bell.
But the other part of my brain says this feels like it is happening at a speed and a scale that has previously not happened, and there are more pieces of the puzzle we need to manage than weve ever had before.
Go here to read the rest: