Artificial intelligence won't rule the world so long as humans rule AI

Four days later, the Vatican issued a paper calling for "new forms of regulation" of AI based on the principles of "transparency, inclusion, responsibility, impartiality, reliability, security and privacy".

The striking thing about both these pronouncements is the degree to which they align with the official line from Silicon Valley, which couches ethics as a set of voluntary principles that will guide, rather than direct, the development of AI.

By proposing broad principles, which are notoriously difficult to define legally, they avoid the guard rails or red lines that would provide genuine oversight of the way this technology develops.

The other problem with these voluntary codes is that they will always be in conflict with the key drivers of technological change: to make money (if you are a business) or save money (if you are a government).

But there's an alternative approach to harnessing technological change that warrants serious consideration, proposed by the Australian Human Rights Commission. Rather than woolly guiding principles, Commissioner Ed Santow argues that AI should be developed within three clear parameters.

First, it should comply with human rights law. Second, it should be used in ways that minimise harm. Finally, humans need to be accountable for the way AI is used. The difference with this approach is that it anchors AI development within the existing legal framework.

Under this proposal, to operate legally in Australia, developers of artificial intelligence would need to ensure their systems did not discriminate on the grounds of gender, race or social demographic, either directly or in effect.

AI proponents would also need to show they had thought through the impact of their technology, much as a property developer must prepare an environmental impact statement before building.

And critically, an AI tool should have a human, a flesh-and-blood person, who is responsible for its design and operation.

How would these principles work in practice? It's worth looking at the failed robodebt program, under which recipients of government benefits were sent letters demanding they repay money because they had been overpaid.

If it had been scrutinised before it went live, robodebt would likely have been found discriminatory, as it shifted the onus of proof onto people from society's most marginalised groups to show their payments were valid.

If it had been subject to a public impact review, the glaring anomalies and inconsistencies in matching Australian Tax Office and social security information would have become apparent before it was trialled on vulnerable people. And if a human had been accountable for its operation, those who received a notice would have had an avenue of review, rather than feeling as though they were speaking to a machine.

The whole costly and destructive debacle might have been prevented.

Embracing a future where these "disruptive" technologies remake our society guided by voluntary ethical principles is not good enough. As Robert Elliott Smith observes in his excellent book Rage Inside the Machine, the idea that AI is amoral is bunkum. The values and priorities of the humans who commission and design it will determine the end product.

This challenge will become more pressing as algorithms begin to process banks of photos and video that purport to "recognise" individuals, track their movements and predict their motivations. The Human Rights Commission report calls for a moratorium on the use of this technology in high-stakes areas such as policing. It seeks to protect citizens from "bad" applications, but also to provide an incentive for industry to support the development of an enforceable legal framework.

Champions of technology may well argue that government intervention will slow down development and risk Australia being "left behind". But if we succeed in ensuring AI is "fair by design", we might end up with a distinctly Australian technology, which reflects our values, to share with the world.

Peter Lewis is the executive director of Essential, a progressive research and communications company and the director of the Centre for Responsible Technology.
