Artificial intelligence: Powerful AI systems ‘can’t be controlled’ and ‘are causing harm’, says UK expert

Sunday 30 April 2023 16:04, UK

A British scientist known for his contributions to artificial intelligence has told Sky News that powerful AI systems "can't be controlled" and "are already causing harm".

Professor Stuart Russell was one of more than 1,000 experts who last month signed an open letter calling for a six-month pause in the development of systems even more capable than OpenAI's newly launched GPT-4 - the successor to GPT-3.5, the model that powers its online chatbot ChatGPT.

The headline feature of the new model is its ability to recognise and explain images.

Speaking to Sky's Sophy Ridge, Professor Russell said of the letter: "I signed it because I think it needs to be said that we don't understand how these [more powerful] systems work. We don't know what they're capable of. And that means that we can't control them, we can't get them to behave themselves."

He said that "people were concerned about disinformation, about racial and gender bias in the outputs of these systems".

He argued that, with the swift progression of AI, time was needed to "develop the regulations that will make sure that the systems are beneficial to people rather than harmful".

He said one of the biggest concerns was disinformation and deep fakes (videos or photos of a person in which their face or body has been digitally altered so they appear to be someone else - typically used maliciously or to spread false information).

He said even though disinformation has been around for a long time for "propaganda" purposes, the difference now is that, using Sophy Ridge as an example, he could ask GPT-4 to try to "manipulate" her so she's "less supportive of Ukraine".

He said the technology could read Ridge's social media presence and everything she has ever said or written, and then carry out a gradual campaign to "adjust" her news feed.

Professor Russell told Ridge: "The difference here is I can now ask GPT-4 to read all about Sophy Ridge's social media presence, everything Sophy Ridge has ever said or written, all about Sophy Ridge's friends and then just begin a campaign gradually by adjusting your news feed, maybe occasionally sending some fake news along into your news feed so that you're a little bit less supportive of Ukraine, and you start pushing harder on politicians who say we should support Ukraine in the war against Russia and so on.

"That will be very easy to do. And the really scary thing is that we could do that to a million different people before lunch."

The expert, who is a professor of computer science at the University of California, Berkeley, warned of "a huge impact with these systems for the worse by manipulating people in ways that they don't even realise is happening".

Ridge described it as "genuinely really scary" and asked if that kind of thing was happening now, to which the professor replied: "Quite likely, yes."

He said China, Russia and North Korea have large teams who "pump out disinformation" and with AI "we've given them a power tool".

"The concern of the letter is really about the next generation of the system. Right now the systems have some limitations in their ability to construct complicated plans."


Read more:
What is GPT-4 and how does it improve upon ChatGPT?
Elon Musk reveals plan to build 'TruthGPT' despite warning of AI dangers

He suggested that under the next generation of systems, or the one after that, corporations could be run by AI systems. "You could see military campaigns being organised by AI systems," he added.

"If you're building systems that are more powerful than human beings, how do human beings keep power over those systems forever? That's the real concern behind the open letter."


The professor said he was trying to convince governments of the need to start planning ahead for when "we need to change the way our whole digital ecosystem... works."

Since it was released last year, Microsoft-backed OpenAI's ChatGPT has prompted rivals to accelerate the development of similar large language models and encouraged companies to integrate generative AI models into their products.

UK unveils proposals for 'light touch' regulations around AI

It comes as the UK government recently unveiled proposals for a "light touch" regulatory framework around AI.

The government's approach, outlined in a policy paper, would split the responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.
