Artificial intelligence must be grounded in human rights, says High Commissioner – OHCHR

HIGH-LEVEL SIDE EVENT OF THE 53rd SESSION OF THE HUMAN RIGHTS COUNCIL on

What should the limits be? A human-rights perspective on what's next for artificial intelligence and new and emerging technologies

Opening Statement by Volker Türk

UN High Commissioner for Human Rights

It is great that we are having a discussion about human rights and AI.

We all know how much our world and the state of human rights are being tested at the moment. The triple planetary crisis is threatening our existence. Old conflicts have been raging for years, with no end in sight. New ones continue to erupt, many with far-reaching global consequences. We are still reeling from the consequences of the COVID-19 pandemic, which exposed and deepened a raft of inequalities the world over.

But the question before us today – what the limits should be on artificial intelligence and emerging technologies – is one of the most pressing faced by society, governments and the private sector.

We have all seen and followed over recent months the remarkable developments in generative AI, with ChatGPT and other programmes now readily accessible to the broader public.

We know that AI has the potential to be enormously beneficial to humanity. It could improve strategic foresight and forecasting, democratize access to knowledge, turbocharge scientific progress, and increase capacity for processing vast amounts of information.

But in order to harness this potential, we need to ensure that the benefits outweigh the risks, and we need limits.

When we speak of limits, what we are really talking about is regulation.

To be effective, to be humane, to put people at the heart of the development of new technologies, any solution – any regulation – must be grounded in respect for human rights.

Two schools of thought are shaping the current development of AI regulation.

The first is purely risk-based, focusing largely on self-regulation and self-assessment by AI developers. Instead of relying on detailed rules, risk-based regulation emphasizes identifying and mitigating risks to achieve outcomes.

This approach transfers a lot of responsibility to the private sector. Some would say too much – and we hear that from the private sector itself.

It also results in clear gaps in regulation.

The other approach embeds human rights in AI's entire lifecycle. From beginning to end, human rights principles are built into the collection and selection of data, as well as into the design, development, deployment and use of the resulting models, tools and services.

This is not a warning about the future: we are already seeing the harmful impacts of AI today, and not only of generative AI.

AI has the potential to strengthen authoritarian governance.

It can operate lethal autonomous weapons.

It can form the basis for more powerful tools of societal control, surveillance, and censorship.

Facial recognition systems, for example, can turn into tools of mass surveillance of our public spaces, destroying any concept of privacy.

AI systems that are used in the criminal justice system to predict future criminal behaviour have already been shown to reinforce discrimination and to undermine rights, including the presumption of innocence.

Victims and experts, including many of you in this room, have been sounding the alarm for quite some time, but policymakers and developers of AI have not acted enough, or fast enough, on those concerns.

We need urgent action by governments and by companies. And at the international level, the United Nations can play a central role in convening key stakeholders and advising on progress.

There is absolutely no time to waste.

The world waited too long on climate change. We cannot afford to repeat that same mistake.

What could regulation look like?

The starting point should be the harms that people experience and will likely experience.

This requires listening to those who are affected, as well as to those who have already spent many years identifying and responding to harms. Women, minority groups and marginalized people in particular are disproportionately affected by bias in AI. We must make serious efforts to bring them to the table for any discussion on governance.

Attention is also needed to the use of AI in public and private services where there is a heightened risk of abuse of power or privacy intrusions: justice, law enforcement, migration, social protection, and financial services.

Second, regulations need to require assessment of the human rights risks and impacts of AI systems before, during, and after their use. Transparency guarantees, independent oversight, and access to effective remedies are needed, particularly when the State itself is using AI technologies.

AI technologies that cannot be operated in compliance with international human rights law must be banned or suspended until adequate safeguards are in place.

Third, existing regulations and safeguards need to be implemented – for example, frameworks on data protection, competition law, and sectoral regulations, including for health, tech or financial markets. A human rights perspective on the development and use of AI will have limited impact if respect for human rights is inadequate in the broader regulatory and institutional landscape.

And fourth, we need to resist the temptation to let the AI industry itself assert that self-regulation is sufficient, or to claim that it should be the one to define the applicable legal framework. I think we have learnt our lesson from social media platforms in that regard. Whilst their input is important, it is essential that the full democratic process – laws shaped by all stakeholders – is brought to bear on an issue by which all people, everywhere, will be affected far into the future.

At the same time, companies must live up to their responsibilities to respect human rights in line with the Guiding Principles on Business and Human Rights. Companies are responsible for the products they are racing to put on the market. My Office is working with a number of companies, civil society organizations and AI experts to develop guidance on how to tackle generative AI. But a lot more needs to be done along these lines.

Finally, while it would not be a quick fix, it may be valuable to explore the establishment of an international advisory body for particularly high-risk technologies, one that could offer perspectives on how regulatory standards could be aligned with universal human rights and rule of law frameworks. The body could publicly share the outcomes of its deliberations and offer recommendations on AI governance. This is something that the Secretary-General of the United Nations has also proposed as part of the Global Digital Compact for the Summit of the Future next year.

The human rights framework provides an essential foundation, offering guardrails for efforts to harness the enormous potential of AI while preventing and mitigating its enormous risks.

I look forward to discussing these issues with you.
