ChatGPT Throws Wrench into Europe's Attempts to Regulate AI

After using a large language model such as ChatGPT for a while, it is not hard to imagine an array of nightmarish scenarios that these generative artificial intelligence (AI) programs could bring about. While ChatGPT and its emerging rivals currently have "guardrails" -- ethical limits on what they will do in response to a prompt -- the bounds of those guardrails are not well understood. Through clever prompting, it is not hard to convince the current iteration of ChatGPT to set aside certain guardrails from time to time. Further, the companies behind these models have not publicly defined the extent of the guardrails, and the very structures underlying the models are well known to behave in unpredictable ways. Not to mention what might happen if a "jailbroken" large language model is ever released to the public.

As an example, a user might ask the model to describe terrorist attack vectors that no human has previously conceived of. Or, a model might generate software code and convince a gullible user to download and execute it on their computer, resulting in personal financial information being sent to a third party.

Perhaps one of the most relevant risks of large language models is that once they are implemented and deployed, the marginal cost of creating misinformation becomes close to zero. If a political campaign, interest group, or government wishes to inundate social media with misleading posts about a public figure, a policy, or a law, it will be able to do so at volume without having to employ a roomful of humans.

In 2021, the European Commission of the European Union (EU) proposed harmonized rules for the regulation of AI. The Commission recognized both the perils and the benefits of AI and attempted to come up with a framework for regulation that employs oversight in proportion to the specific dangers inherent in certain uses of AI. The resulting laws enacted by member states could have a Brussels Effect, in that EU regulation of its own markets becomes a de facto standard for the rest of the world. This is largely what happened with the EU's General Data Protection Regulation (GDPR).

But very few people saw generative AI coming, or anticipated the meteoric rise of ChatGPT at the end of 2022. Thus, the Commission is in the process of re-evaluating its rules in view of these paradigm-breaking technologies.

The Commission's proposal places all AI systems into one of three risk levels: (i) unacceptable risk, (ii) high risk, and (iii) low or minimal risk. The amount of regulation would be the greatest for category (i) and the least (e.g., none) for category (iii).

Uses of AI that create an unacceptable risk include those that violate fundamental rights, manipulate individuals subliminally, exploit specific vulnerable groups (e.g., children and persons with disabilities), engage in social scoring (evaluating the trustworthiness of persons based on their social behavior), and facilitate real-time biometric recognition for purposes of law enforcement. These uses would be prohibited.

An AI system may be classified as high risk based on its intended purpose and modalities of use. There are two main types of high-risk systems: (i) those intended to be used as a safety component of a product (e.g., within machinery, toys, radio equipment, recreational vehicles, and medical devices), and (ii) other systems explicitly listed (e.g., involving biometrics, critical infrastructure, education, employment, law enforcement, and immigration). These categories are quite broad and would impact many diverse industries. The proposal sets forth detailed legal requirements for such systems relating to data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy, and security, as well as conformity assessment procedures.

Regarding low or minimal risk AI systems, their use would be permitted with no restrictions. However, the Commission envisions these systems potentially adhering to voluntary codes of conduct relating to transparency concerns.
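To make the three-tier structure concrete, the following is a minimal, hypothetical sketch in Python of how a provider might triage a system against the proposal's categories. The tier names and example uses are taken from the proposal as summarized above, but the function, the sets of uses, and the mapping logic are illustrative assumptions, not anything defined by the Commission.

from enum import Enum

# Hypothetical illustration only: the proposal defines these tiers in legal
# text, not code. The mapping below is an assumption for explanatory purposes.
class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to detailed legal requirements"
    LOW_OR_MINIMAL = "permitted; voluntary codes of conduct encouraged"

# Example intended uses drawn from the proposal's illustrative lists.
PROHIBITED_USES = {"social scoring", "subliminal manipulation",
                   "real-time biometric identification for law enforcement"}
HIGH_RISK_USES = {"safety component of a product", "critical infrastructure",
                  "education", "employment", "law enforcement", "immigration"}

def classify(intended_use: str) -> RiskLevel:
    """Rough triage of an AI system by its stated intended use."""
    if intended_use in PROHIBITED_USES:
        return RiskLevel.UNACCEPTABLE
    if intended_use in HIGH_RISK_USES:
        return RiskLevel.HIGH
    return RiskLevel.LOW_OR_MINIMAL

print(classify("social scoring").value)         # prohibited outright
print(classify("employment").value)             # permitted, subject to detailed legal requirements
print(classify("composing funny poems").value)  # permitted; voluntary codes of conduct encouraged

The difficulty discussed later in this post is precisely that a general-purpose model such as ChatGPT does not map to any single intended use, which is why classifying it by use rather than by model is hard in practice.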

To that point, the proposal also states that "[t]ransparency obligations will apply for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content ('deep fakes')." In these situations, there is an obligation to disclose that a person is interacting with an AI system or that the content has been machine-generated, in order to allow users to make informed choices.

Currently, the Commission is considering whether to place ChatGPT and its ilk in the high risk category, thus subjecting them to significant regulation. There has been pushback, however, from parties who believe that the regulations should distinguish between harmful uses of these models (e.g., spreading misinformation) and minimal-risk uses (e.g., coming up with new recipes, composing funny poems). In other words, the amount of regulation applied to ChatGPT would vary based on its use -- an aesthetically pleasing goal, but one that would be difficult to carry out in practice because of the model's broad scope and general applicability.

Whether this results in the proposed regulations being delayed and/or rewritten remains to be seen. The Commission will be taking up the issue.

Original post:
ChatGPT Throws Wrench into Europe's Attempts to Regulate AI - JD Supra
