EU urged to protect grassroots AI research or risk losing out to US


Experts warn Brussels it cannot afford to leave artificial intelligence in the hands of foreign firms such as Google

The EU has been warned that it risks handing control of artificial intelligence to US tech firms if it does not act to protect grassroots research in its forthcoming AI bill.

In an open letter coordinated by the German research group Laion, or Large-scale AI Open Network, the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe, which would entrench large firms, hamper efforts to improve transparency, reduce competition, limit academic freedom and drive investment in AI overseas, the letter says.

It adds: "Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of restrictions, legal and technical, on how it can be used. By contrast, open-source AI efforts involve creating an AI model and then releasing it for anyone to use, improve or adapt as they see fit.

"We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic," said Christoph Schuhmann, the lead of Laion.

Unlike his peers at US AI businesses, who control billion-dollar organisations and frequently have personal wealth in the hundreds of millions, Schuhmann is a volunteer in the AI world. "I'm a tenured high-school teacher in computer science, and I'm doing everything for free as a hobby, because I'm convinced that we will have near-human-level AI within the next five to 10 years," he said.

"This technology is a digital superpower that will change the world completely, and I want to see my kids growing up in a world where this power is democratised."

Laion's work has already been influential. The group, which has received funding from the UK startup Stability AI, focuses on producing open datasets and models for other AI researchers to train their own systems on. One database, of almost 6bn labelled images collected from the internet, underpins the popular Stable Diffusion image-generating AI, while another model, called Openclip, is a recreation of a private system built by OpenAI that can be used to label images.

Such work can prove controversial. Stable Diffusion, for instance, can be used to generate explicit, obscene and disturbing images, while Laion's image database has been criticised for not respecting the rights of the creators whose work is included. Those criticisms have led bodies such as the EU to consider holding companies responsible for what their AI systems do, but such regulation would make it impossible to release systems to the public at large, which Schuhmann says would destroy the continent's ability to compete.

Instead, he argues that the EU should actively back open-source research with its own public facilities, to accelerate the safe development of next-generation models under controlled conditions, with public oversight and in line with European values. Other groups, such as the Tony Blair Institute, have called for the UK to do likewise and fund the creation of a BritGPT to bring future AI under public control.

Schuhmann and his co-signatories are part of a growing chorus of AI experts hitting back at calls to slow down development. At a conference in Florence discussing the future of the EU, many lined up to decry a recent letter signed by Elon Musk and others calling for a pause on the creation of giant AIs for at least six months.

Sandra Wachter, a professor at the Oxford Internet Institute at Oxford University, said: "The hype around large language models, the noise is deafening. Let's focus on who is screaming, who is promising that this technology will be so disruptive: the people who have a vested financial interest that this thing is going to be successful. So don't separate the message from the speaker."

She told the audience at the European University Institute's State of the Union event that the world had seen this cycle of hype and fear before with the web, cryptocurrency and driverless cars. "Every time we see something like this happen, it's like: 'Oh my God, the world will never be the same.'"

She urged against haste in regulation, warning that "angst and panic is not a good political adviser", and said the focus should be on talking to people in health, finance and education about their opinions.

