Opinion | If You Don't Trust A.I. Yet, You're Not Wrong

Americans have good reason to be skeptical of artificial intelligence. Tesla crashes have dented the dream of self-driving cars. Mysterious algorithms predict job applicants' performance based on little more than video interviews. Similar technologies may soon be headed to the classroom, as administrators use learning analytics platforms to scrutinize students' written work and emotional states. Financial technology companies are using social media and other sensitive data to set interest rates and repayment terms.

Even in areas where A.I. seems to be an unqualified good, like machine learning to better spot melanoma, researchers are worried that current data sets do not adequately represent all patients' racial backgrounds.

U.S. authorities are starting to respond. Massachusetts passed a nuanced law this spring limiting the use of facial recognition in criminal investigations. Other jurisdictions have taken a stronger stance, prohibiting the use of such technology entirely or requiring consent before biometric data is collected. But the rise of A.I. requires a more coordinated nationwide response, guided by first principles that clearly identify the threats that substandard or unproven A.I. poses. The United States can learn from the European Union's proposed A.I. regulation.

In April, the European Union released a new proposal for a systematic regulation of artificial intelligence. If enacted, it will change the terms of the debate by forbidding some forms of A.I., regardless of their ostensible benefits. Some forms of manipulative advertising will be banned, as will real-time indiscriminate facial recognition by public authorities for law enforcement purposes.

The list of prohibited A.I. uses is not comprehensive enough: many forms of nonconsensual A.I.-driven emotion recognition, mental health diagnoses, ethnicity attribution and lie detection, for example, should also be banned. But the broader principle that some uses of technology are simply too harmful to be permitted should drive global debates on A.I. regulation.

The proposed regulation also deems a wide variety of A.I. "high risk," acknowledging that A.I. presents two types of problems. First, there is the danger of malfunctioning A.I. harming people or things: a threat to physical safety. Under the proposed E.U. regulation, standardization bodies with long experience in technical fields are mandated to synthesize best practices for companies, which will then need to comply with those practices or justify why they have chosen an alternative approach.

Second, there is a risk of discrimination or lack of fair process in sensitive areas of evaluation, including education, employment, social assistance and credit scoring. This is a risk to fundamental rights, amply demonstrated in the United States in works like Cathy O'Neil's "Weapons of Math Destruction" and Ruha Benjamin's "Race After Technology." Here, the E.U. is insisting on formal documentation from companies to demonstrate fair and nondiscriminatory practices. National supervisory authorities in each member state can impose hefty fines if businesses fail to comply.

To be sure, Europe's proposal is far from perfect, and the E.U. is not alone in considering the problems of artificial intelligence. The United States is starting to grope toward basic standards of A.I. regulation as well. In April, the Federal Trade Commission clarified a 2020 guidance document on A.I., stating that U.S. law prohibits the sale or use of racially biased algorithms.

However, the problems posed by unsafe or discriminatory A.I. do not appear to be a high-level Biden administration priority. As a remarkable coalition of civil rights and technology policy organizations complained this month: "Since assuming office, this administration has not pursued a public and proactive agenda on the civil rights implications of A.I." In fact, the Trump administration's executive orders and regulatory guidance on A.I. remain in force, which constrains agencies across the federal government in setting policy priorities.

Things are somewhat better on the state level. A more robust proposal is now under discussion in California to regulate public contracts for the provision of A.I.-based products and services. Legislators in Washington State are discussing a similar proposal. The proposed California law has some elements in common with the European approach and with the Canadian model of Algorithmic Impact Assessment, designed to mitigate bias and unfairness in emerging A.I. for public administration. Despite its limited scope, the California proposal would require tech companies that provide A.I. to state agencies to prepare a detailed data management plan, to make algorithms explainable even to a nonexpert audience and to prevent discriminatory biases.

The states can accomplish a lot on their own. However, the real challenge now is national leadership. The Biden administration should harmonize the U.S. approach with that of Europe, committing to require "high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness" for high-risk A.I. systems, as the proposed European A.I. Act puts it. For example, if a machine is going to decide whether you are hired for a job, at the very least you deserve regulatory oversight to ensure that it is using proper data, that it has actually performed well and in a nondiscriminatory way in the past, and that you can appeal to someone if you can demonstrate it has made a mistake. And if the system is based on pseudoscientific claptrap, you should not be judged by it at all.

Federal agencies like the Equal Employment Opportunity Commission can either address these problems under existing law or propose statutory language to grant them the authority to do so. But they need to act more aggressively now, while the technology is still developing. The White House needs to bring agency leaders together to learn from experts about best practices, and solicit comments from those affected by A.I. This approach would both democratize and professionalize U.S. technology policy in crucial areas.

A.I. developers should not simply "move fast and break things," to quote an early Facebook motto. Real technological advance depends on respect for fundamental rights, ensuring safety and banning particularly treacherous uses of artificial intelligence. The E.U. is now laying the intellectual foundations for such protections, in a wide spectrum of areas where advanced computation is now (or will be) deployed to make life-or-death decisions about the allocation of public assistance services, the targets of policing and the cost of credit. While its regulation will never be adopted verbatim by the United States, there is much to learn from its comprehensive approach.

Frank Pasquale is a professor at Brooklyn Law School and the author of "New Laws of Robotics: Defending Human Expertise in the Age of AI." Gianclaudio Malgieri is an associate professor at the EDHEC Augmented Law Institute in France.

