OpenAI, Anthropic and Google DeepMind workers warn of AI's dangers – The Washington Post

A handful of current and former employees at OpenAI and other prominent artificial intelligence companies warned that the technology poses grave risks to humanity in a Tuesday letter, calling on companies to implement sweeping changes to ensure transparency and foster a culture of public debate.

The letter, signed by 13 people including current and former employees at Anthropic and Google's DeepMind, said AI can exacerbate inequality, increase misinformation, and allow AI systems to become autonomous and cause significant loss of life. Though these risks could be mitigated, the corporations in control of the software have strong financial incentives to limit oversight, they said.

Because AI is only loosely regulated, accountability rests on company insiders, the employees wrote, calling on corporations to lift nondisclosure agreements and give workers protections that allow them to anonymously raise concerns.

The move comes as OpenAI faces a staff exodus. Many critics have seen prominent departures, including those of OpenAI co-founder Ilya Sutskever and senior researcher Jan Leike, as a rebuke of company leaders, who some employees argue chase profit at the expense of making OpenAI's technologies safer.

Daniel Kokotajlo, a former employee at OpenAI, said he left the start-up because of the company's disregard for the risks of artificial intelligence.


"I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence," he said in a statement, referencing a hotly contested term for computers that match the power of human brains.

"They and others have bought into the 'move fast and break things' approach, and that is the opposite of what is needed for technology this powerful and this poorly understood," Kokotajlo said.

Liz Bourgeois, a spokesperson at OpenAI, said the company agrees that rigorous debate is crucial given the significance of this technology. Representatives from Anthropic and Google did not immediately reply to a request for comment.

The employees said that, absent government oversight, AI workers are among the few people who can hold corporations accountable. They said they are hamstrung by broad confidentiality agreements and that ordinary whistleblower protections are insufficient because those protections focus on illegal activity, while the risks they are warning about are not yet regulated.

The letter called for AI companies to commit to four principles to allow for greater transparency and whistleblower protections. Those principles are a commitment to not enter into or enforce agreements that prohibit criticism of risks; a call to establish an anonymous process for current and former employees to raise concerns; supporting a culture of criticism; and a promise to not retaliate against current and former employees who share confidential information to raise alarms after other processes have failed.

The Washington Post reported in December that senior leaders at OpenAI raised fears about retaliation from CEO Sam Altman, warnings that preceded the chief's temporary ouster. In a recent podcast interview, former OpenAI board member Helen Toner said part of the nonprofit board's decision to remove Altman as CEO late last year was his lack of candid communication about safety.

"He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically just impossible for the board to know how well those safety processes were working," she told The TED AI Show in May.

The letter was endorsed by AI luminaries including Yoshua Bengio and Geoffrey Hinton, who are considered "godfathers" of AI, and renowned computer scientist Stuart Russell.
