Who's going to save us from bad AI? – MIT Technology Review

About damn time. That was the response from AI policy and ethics wonks to news last week that the Office of Science and Technology Policy, the White House's science and technology advisory agency, had unveiled an AI Bill of Rights. The document is Biden's vision of how the US government, technology companies, and citizens should work together to hold the AI sector accountable.

It's a great initiative, and long overdue. The US has so far been one of the only Western nations without clear guidance on how to protect its citizens against AI harms. (As a reminder, these harms include wrongful arrests, suicides, and entire cohorts of schoolchildren being marked unjustly by an algorithm. And that's just for starters.)

Tech companies say they want to mitigate these sorts of harms, but it's really hard to hold them to account.

The AI Bill of Rights outlines five protections Americans should have in the AI age, including data privacy, the right to be protected from unsafe systems, and assurances that algorithms shouldn't be discriminatory and that there will always be a human alternative. Read more about it here.

So here's the good news: The White House has demonstrated mature thinking about different kinds of AI harms, and this should filter down to how the federal government thinks about technology risks more broadly. The EU is pressing on with regulations that ambitiously try to mitigate all AI harms. That's great but incredibly hard to do, and it could take years before its AI law, called the AI Act, is ready. The US, on the other hand, can tackle one problem at a time, and individual agencies can learn to handle AI challenges as they arise, says Alex Engler, who researches AI governance at the Brookings Institution, a DC think tank.

And the bad: The AI Bill of Rights is missing some pretty important areas of harm, such as law enforcement and worker surveillance. And unlike the actual US Bill of Rights, the AI Bill of Rights is more an enthusiastic recommendation than a binding law. "Principles are frankly not enough," says Courtney Radsch, US tech policy expert for the human rights organization Article 19. "In the absence of, for example, a national privacy law that sets some boundaries, it's only going part of the way," she adds.

The US is walking a tightrope. On the one hand, America doesn't want to seem weak on the global stage when it comes to this issue. The US plays perhaps the most important role in AI harm mitigation, since most of the world's biggest and richest AI companies are American. But that's the problem. Globally, the US has to lobby against rules that would set limits on its tech giants, and domestically it's loath to introduce any regulation that could potentially hinder innovation.

The next two years will be critical for global AI policy. If the Democrats don't win a second term in the 2024 presidential election, it is very possible that these efforts will be abandoned. New people with new priorities might drastically change the progress made so far, or take things in a completely different direction. Nothing is impossible.
