Facial Recognition Software Designed by Alt-Right Extremists, Covertly Trialed by NZ Police Another Form of Colonial Violence – Salient

While we were waiting for Level 2 and the daily COVID updates, the New Zealand Police were covertly breaching our rights. On the morning of May 13th, Detective Sergeant Tom Fitzgerald announced that the New Zealand Police had carried out a trial of Clearview AI facial recognition software.

While they ultimately concluded that they would not be implementing this technology, as the "value to investigations has been assessed as very limited", the testing of this software raised many concerning issues.

This trial was not approved through the appropriate channels, and the software tested was beyond problematic. Alongside the potential privacy breaches the software may have enabled, the programme itself was created by white supremacists.

A recent investigation of Clearview AI exposed extensive connections between many high-ranking employees and anti-Semitic hate groups, white nationalist extremists, and other Alt-Right movements.

Marko Jukic, the man who corresponded with the New Zealand Police about the software, has since left the company after being identified as the author of numerous hate speech publications.

Despite Clearview's ties to these radical groups being published in early April, the New Zealand Police were still trialling the software as late as May 11th.

Even if you can't see the problem with the Police, as a representative of the New Zealand government, allying itself with a company composed of white nationalists, there are several other problems with the software. Having people who hold these beliefs involved in the creation of the software likely means that it is not fit for purpose.

While AI itself is a neutral tool, it learns through the data it is fed by its developers. This data, and thus the programme, can be affected by implicit human bias, or, in this case, explicit hate.
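To make that concrete, here is a toy sketch, entirely invented and far simpler than any real facial recognition system, of how a model that merely learns from its training data will reproduce whatever skew that data contains:

```python
# A toy illustration of biased training data producing a biased model:
# a trivial "classifier" that just learns the rate of positive labels
# per group will echo whatever skew exists in its training set.
# The groups, labels, and numbers below are all hypothetical.
from collections import defaultdict

def train(examples):
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in examples:
        counts[group][0] += label
        counts[group][1] += 1
    # The "model" is just the learned rate of positive labels per group
    return {g: pos / total for g, (pos, total) in counts.items()}

# Skewed training data: group_b is flagged five times as often
training = [("group_a", 0)] * 90 + [("group_a", 1)] * 10 \
         + [("group_b", 0)] * 50 + [("group_b", 1)] * 50
model = train(training)
# model["group_a"] is 0.1; model["group_b"] is 0.5
```

A model trained on data where one group is flagged five times as often will go on flagging that group five times as often, regardless of the truth on the ground.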

The most famous example of bias in AI was Amazon's Rekognition. This facial recognition software was independently tested and matched the identities of 28 members of the United States Congress with convicted criminals. These incorrect matches disproportionately involved people of colour.

Although Clearview AI markets itself as "100% accurate", it has never been independently tested, so there is nothing stopping the company from making this claim.

The overrepresentation of Māori and Pasifika people in the New Zealand criminal justice sector due to targeted arrests and harsher sentencing is a national disgrace.

Yet the New Zealand Police have illegally trialled software which could exacerbate this problem through inaccurate matches caused by coded bias. Among many other legal rights, this breaches New Zealand citizens' right to be free from discrimination.

Even outside the discriminatory effects of this software, the trial contravenes an untold number of privacy rights. Facial recognition technologies are most effective when they have a large database of profiles, a catalogue of real people's identities, which the software compares an image against to find a match.

The larger the database, the more content the AI can test against, and the more accurately it matches profiles. Clearview has 2.8 billion profiles in its database, mostly images illegally lifted from social media sites. It is more than likely that many New Zealanders are in the database.
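As a rough illustration of the matching step described above (a minimal sketch, not Clearview's actual system; every name and number here is invented), facial recognition typically reduces each face to a numeric "embedding" and returns the most similar profile in the database:

```python
# A simplified sketch of facial recognition matching: each face in the
# database is stored as a numeric "embedding" vector, and a query image
# is matched to whichever stored profile is most similar to it.
import math

def cosine(u, v):
    # Cosine similarity: closer to 1.0 means the vectors are more alike
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def best_match(query, database):
    # The larger the database, the more candidates to compare against
    return max(database, key=lambda name: cosine(database[name], query))

# Hypothetical 4-number embeddings (real systems use hundreds of numbers)
db = {
    "person_a": [0.9, 0.1, 0.0, 0.3],
    "person_b": [0.1, 0.8, 0.5, 0.0],
}
match = best_match([0.85, 0.15, 0.05, 0.25], db)  # closest to person_a
```

Note that this scheme always returns *some* best match, however weak; without a carefully chosen similarity threshold, innocent people in the database can be "matched" to a query they merely resemble.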

Not only have the Police allied themselves with white supremacists by using Clearview, but they are also endorsing illegal data collection practices and prioritising their own investigative powers over citizens' privacy rights.

Privacy Commissioner John Edwards responded to news of this testing in a very disappointing manner, saying he was "a little surprised by this one". Given that the sole role of the Privacy Commissioner's Office is privacy protection, you'd hope for a bit more than surprise; something like outrage and decisive condemnation of the police's actions, maybe?

The Police have displayed an extreme lack of transparency around exactly what the trial entailed, so it is unclear just how many New Zealanders had their privacy rights breached and how these rights were breached.

Because the software's matching ability would need to be comprehensively tested, I would hazard a guess that many people had their privacy breached over the course of the testing.

New Zealand doesn't have privacy principles written into our constitution like other countries, so we rely on the Office of the Privacy Commissioner to protect our privacy rights, and the lack of outrage they have shown is genuinely concerning.

The public deserves greater accountability about what this trial involved and how it was ever carried out without the appropriate approval. The Police's refusal to respond to Radio New Zealand's breaking of this story was especially shameful.
