Instead of Hitting the Brakes on AI, the EU Should Embrace Smart …

From the discovery of Earth's place in the solar system to the invention of the airplane, photography, and the internet: history is full of examples where scientific and technological progress was initially held back by fear.

Recent calls to pause, for at least six months, the development of advanced artificial intelligence (AI) systems more powerful than the Generative Pre-trained Transformer 4 (GPT-4) model are therefore no surprise. And while there are valid concerns about the potential risks of AI, the overwhelming benefits of continued innovation in this field are widely understood to outweigh those risks.

Instead of hitting the brakes on AI innovation, EU policymakers should hit the accelerator on smart AI regulation. In this respect, the European Union has already taken a step in the right direction with its risk-based AI Act, which is currently making its way through the European Parliament.

Nevertheless, with ChatGPT and other innovative AI-powered tools making headlines, the debate often centres on alarmist stories that tend to overlook the many societal benefits. For example, AI can help people with disabilities access services more easily, or enable healthcare professionals to diagnose and treat patients more accurately and efficiently. Continued research and innovation are likely to unlock new solutions to the world's most pressing problems, including climate change and pandemics.

Yet contrary to some alarmist claims, AI will not replace humans; rather, it holds the potential to empower people in many different ways. In its early days, photography was likewise feared to spell the end of painting, so it is understandable that some are concerned about AI. But just like any other technology, AI first and foremost remains a tool developed and controlled by humans.

Putting a brake on research and innovation in AI will not only deprive society of all its benefits, but also jeopardise the development of many other technologies. This, in turn, will result in a distinct competitive disadvantage for countries or regions, in the EU's case, that choose the regressive path instead of moving forward.

As Bruno Sportisse, CEO of the French National Institute for Research in Digital Science and Technology, recently pointed out, all digital innovations are in fact intertwined today. The future of cybersecurity, for instance, lies in developing AI algorithms that automatically detect and respond to attacks, which in turn is key to the controlled spread of the cloud.

This means that everything from cybersecurity to cloud and quantum computing is heavily dependent on developments in artificial intelligence, which is powering current and future innovations in all these fields. As European policymakers discuss the bloc's new AI Act, the first of its kind in the world, they should therefore focus on promoting the safe and responsible use of AI technology rather than on hitting the brakes.

The regulation's risk-based approach will impose rules on providers and users of AI systems depending on the risks a particular AI application poses. For example, end users will have to be informed when they are interacting with an AI system, and regulatory sandboxes will be introduced, allowing developers to create and test their systems in safe environments in collaboration with regulators.

These EU rules will thus protect consumers and provide useful guidance for developers. Although the AI Act can still be improved further in order to unlock AI's full potential, the current proposal already properly addresses the risks posed by AI and will improve trust. Combined with strong innovation and education policies, Europe's new rulebook will help society safely leverage the huge potential of AI.

While the EU is at the forefront of AI regulation in the world, some still argue that these European rules come too late. But it is wrong to think that states and society are not equipped to deal with new technological innovations. Not only does the EU already have an extensive regulatory framework for the digital sector that also applies to AI, including the General Data Protection Regulation (GDPR), but companies and AI labs themselves are adhering to strict rules when it comes to developing and deploying AI.

OpenAI, the creator of ChatGPT, for example, is strongly committed to developing trustworthy and responsible AI, and to that end works together with industry players and policymakers. Other companies that lead on AI innovation, such as Google and Meta, have introduced AI principles and pillars of responsible AI, respectively, which they have committed to uphold.

If there is one conclusion that we can draw, it is that there is a compelling case for steering towards more, not less, innovation in AI. As scientific and technological breakthroughs will continue to emerge at an increasing pace in the coming years, regulators, policymakers, and society at large should embrace a more progressive approach.

The EU should not fixate on what is making headlines today, but rather focus on creating a regulatory framework that is ready for the future, one that will bring about innovations we are not yet even able to grasp.

Just as our generation, with its pocket-sized camera phones, now laughs at the thought that the earliest room-sized cameras were seen as an existential threat to painters like Rubens or Michelangelo, future generations will wonder what Europe was thinking when it briefly considered a freeze on the development of those very first room-sized AI applications.
