Big AI Tech Wants To Disrupt Humanity

Why is a rich group of companies allowed to work towards Artificial General Intelligence without any adults looking over their shoulders? It should be illegal.

OpenAI, the company behind ChatGPT and Dall-E, is working to build Artificial General Intelligence (AGI), according to the Wired article "What OpenAI Really Wants". All 500+ employees of what was until recently a start-up, but is now partially owned by Microsoft, are working towards AGI knowing that it is disruptive to humanity.

OpenAI insists, according to the article, that its real strategy is to create a soft landing for the singularity. "It doesn't make sense to just build AGI in secret and throw it out to the world," OpenAI CEO Sam Altman said.

AGI is commonly defined as a computer system that can generate new scientific knowledge and perform any task that humans can. In other words, AGI could outmaneuver humans. With ChatGPT, many believe that we have come a significant step closer to AGI.

The crazy thing is that OpenAI and at least seven other large companies are openly working towards AGI without any adults looking over their shoulders to stop them.

Ian Hogarth, AI investor, co-author of The State of AI Report and one of the UK government's leading AI experts, writes in the Financial Times (FT):

"We have gone from one AGI startup, DeepMind, which received $23 million in funding in 2012, to at least eight organizations that could collectively raise $20 billion in investment by 2023."

He emphasises that this AI development is entirely profit-driven. It is not driven by what is good or bad for society and our democracies. Google-owned DeepMind dedicates only about 2% of its employees to making AI responsible, and OpenAI only about 7%; the rest work on making AI more capable, according to Hogarth.

Working to disrupt humanity is a crazy thing. We've already seen the first step: with ChatGPT, OpenAI has made freely available a hallucinating but extremely convincing chatbot, designed to sound as human as possible, and has even allowed it to be built into children's Snapchat.

Thankfully, regulation is on the way in the EU. But we also know that regulation takes time and isn't always very effective. For example, GDPR, which is almost six years old, is only now starting to be enforced in earnest. And even if the EU takes the lead in regulation and sets some precedents, it almost always ends up as voluntary self-regulation in the US, which is afraid of losing the AI race to China.

Sam Altman co-founded OpenAI with Elon Musk as a non-profit, open-source organization. He was afraid that the profit-hungry big tech companies would reach AGI first. Today, Musk is out, OpenAI is closed like a black box, and it is a profit-maximizing company hastily working towards AGI.

It should be illegal to work on building AGI. But it is happening. We are constantly overwhelmed by new smart AI tools, small carrots, and one day we will have landed in the singularity that Sam Altman wants to give the world.

No, instead we should do as former Google employee and AI ethics specialist Timnit Gebru tells the FT: Trying to build AGI is an inherently unsafe practice. Instead, build well-delineated, well-defined systems. Don't try to build a god.

Photo by Wayne Pulford on Unsplash

This column was first published in Danish in Prosabladet, page 10.
