Award-winner warns of the failures of artificial intelligence – The Australian Financial Review

On a positive note, he says AI has been identified as a key enabler on 79 per cent (134 targets) of the United Nations Sustainable Development Goals (SDGs). However, 35 per cent (59 targets) may experience a negative impact from AI.

Unfortunately, he says, unless we start to address the inequities associated with the development of AI right now, we're in grave danger of not achieving the UN's SDGs. More pertinently, if AI is not properly governed and proper ethics are not applied from the beginning, it will have not only a negative physical impact but also a significant social impact globally.

There are significant risks to human dignity and human autonomy, he warns.

If AI is not properly governed and it's not underpinned by ethics, it can create socio-economic inequality and impinge on human dignity.

Part of the problem at present is that most AI is being developed for a commercial outcome, with estimates putting its commercial worth at $15 trillion a year by 2030.

Unfortunately, the path we're on poses some significant challenges.

Samarawickrama says AI ethics is underpinned by human ethics, and the underlying AI decision-making is driven by data and a hypothesis created by humans.

The danger is that much AI is built on the wrong hypothesis, because unintentional bias is baked into the initial algorithm. Every conclusion the AI reaches flows from that hypothesis, which means every decision it makes, and the quality of that decision, rests on a human's ethics and biases.

For Samarawickrama, this huge flaw in AI can only be rectified if diversity, inclusion and socio-economic inequality are taken into account from the very beginning of the AI process.

We can only get to that point if we ensure we have good AI governance and ethics.

The alternative is we're basically set up to fail if we do not have that diversity of data.

Much of his work in Australia is with the Australian Red Cross and its parent body, the International Federation of Red Cross and Red Crescent Societies (IFRC), where he has built a framework connecting AI to the seven Red Cross principles, in a bid to align AI with the IFRC's global goal of mitigating human suffering.

And while this is enhancing data literacy across the Red Cross, it also has potential uses in many organisations, because it's about increasing diversity and social justice around AI.

It's a complex problem to solve because there are a lot of perspectives on what mitigating human suffering involves. It goes beyond socio-economic inequality and bias.

For example, the International Committee of the Red Cross is concerned about autonomous weapons and their impact on human suffering.

Samarawickrama says if we are going to achieve the UN SDGs, as well as reap the benefits of a $15 trillion-a-year global AI economy by 2030, we have to work hard to get AI right now by focusing on AI governance and ethics.

If we don't, we risk failing to achieve those goals, and we need to reduce those risks by ensuring AI brings the benefits and value it promises to all of us.

It's why the Red Cross is a good place to start, because it's all about reducing human suffering wherever it's found, and we need to link that to AI, Samarawickrama says.
