G7 data protection authorities point to key concerns on generative AI – EURACTIV

The privacy watchdogs of the G7 countries are set to detail a common vision of the data protection challenges of generative AI models like ChatGPT, according to a draft statement seen by EURACTIV.

The data protection and privacy authorities of the United States, France, Germany, Italy, the United Kingdom, Canada and Japan met in Tokyo on Tuesday and Wednesday (20-21 June) for a G7 roundtable to discuss free data flows, enforcement cooperation and emerging technologies.

The risks that generative AI models pose from the privacy watchdogs' perspective, linked to their rapid proliferation across various contexts and domains, have taken centre stage, the draft statement reads.

"We recognize that there are growing concerns that generative AI may present risks and potential harms to privacy, data protection, and other fundamental human rights if not properly developed and regulated," the statement reads.

Generative AI is a sophisticated technology capable of producing human-like text, image or audiovisual content based on a user's input. Since the meteoric rise of ChatGPT, the emerging technology has generated great excitement but also massive anxiety over its possible misuse.

In April, the G7 digital ministers set out the so-called Hiroshima Process to align on some of these topics, such as governance, safeguarding intellectual property rights, promoting transparency, preventing disinformation and promoting responsible use of the technology.

The Hiroshima Process is due to drive a voluntary Code of Conduct on generative AI that the European Commission is developing with the United States and other G7 partners.

Meanwhile, the EU is close to adopting the world's first comprehensive legislation on Artificial Intelligence, which is set to include some provisions specific to generative AI.

Still, the privacy regulators point to a series of risks that generative AI tools entail from a data protection standpoint.

The starting point is the legal authority AI developers have for processing personal information, particularly that of minors, in the datasets used to train the AI models, as well as how users' interactions are fed into the tools and what information is then produced as output.

The statement also calls for security safeguards to prevent generative AI models from being used to extract or reproduce personal information, or from having their privacy safeguards circumvented with carefully crafted prompts.

The authorities also call on AI developers to ensure that personal information used by generative AI tools is kept accurate, complete and up to date, and free from discriminatory, unlawful, or otherwise unjustifiable effects.

In addition, the G7 regulators point to transparency measures to promote openness and explainability in the operation of generative AI tools, especially in cases where such tools are used to make or assist in decision-making about individuals.

The provision of technical documentation across the development lifecycle, measures to ensure an appropriate level of responsibility among actors in the AI supply chain, and the principle of limiting the collection of personal data to what is strictly necessary are also referenced.

Finally, the statement urges generative AI providers to put in place technical and organisational measures to ensure individuals affected by and interacting with these systems can still exercise their rights, such as access, rectification, and erasure of personal information, as well as the possibility to refuse to be subject solely to automated decisions that have significant effects.

The declaration stresses the case of Italy, where the data protection authority temporarily suspended ChatGPT over possible privacy violations; the service was eventually reinstated following improvements from OpenAI.

The authorities mention several ongoing actions, including investigating generative AI models under their respective legislation, providing guidance to AI developers on privacy compliance and supporting innovative projects such as regulatory sandboxes.

Fostering cooperation, particularly through the establishment of a dedicated task force, is also referenced; EU authorities have already set one up to streamline enforcement on ChatGPT following the Italian regulator's decision addressed to the world's most famous chatbot.

However, according to a source informed on the matter, the work of this task force has been progressing very slowly, mostly due to administrative processes and coordination, and European regulators now expect OpenAI to provide clarifications by the end of the summer.

"Developers and providers should embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies, based on the concept of Privacy by Design, and document their choices and analyses in a privacy impact assessment," the statement continues.

Moreover, AI developers are urged to enable downstream economic operators that deploy or adapt the model to comply with data protection obligations.

Further discussions on how to address the privacy challenges of generative AI will take place in an emerging technology working group of the G7 data protection authorities.

[Edited by Nathalie Weatherald]
