Elon Musk and Other AI Doomers Cause Meltdown – Gizmodo

Welcome to AI This Week, Gizmodo's weekly roundup where we do a deep dive on what's been happening in artificial intelligence.

As governments fumble for a regulatory approach to AI, everybody in the tech world seems to have an opinion about what that approach should be, and most of those opinions do not resemble one another. Suffice it to say, this week presented plenty of opportunities for tech nerds to yell at each other online, as two major developments in the space of AI regulation took place, immediately spurring debate.

The first of those big developments was the United Kingdom's much-hyped artificial intelligence summit, which saw the UK's prime minister, Rishi Sunak, invite some of the world's top tech CEOs and leaders to Bletchley Park, home of the UK's WWII codebreakers, in an effort to suss out the promise and peril of the new technology. The event was marked by a lot of big claims about the dangers of the emergent technology and ended with an agreement surrounding security testing of new software models. The second (arguably bigger) event to happen this week was the unveiling of the Biden administration's AI executive order, which laid out some modest regulatory initiatives surrounding the new technology in the U.S. Among many other things, the EO also involved a corporate commitment to security testing of software models.

However, some prominent critics have argued that the US and UK's efforts to wrangle artificial intelligence have been too heavily influenced by a certain strain of corporately backed doomerism, which critics see as a calculated ploy on the part of the tech industry's most powerful companies. According to this theory, companies like Google, Microsoft, and OpenAI are using AI scaremongering in an effort to squelch open-source research into the tech and to make it too onerous for smaller startups to operate, all while keeping the technology's development firmly within the confines of their own corporate laboratories. The allegation that keeps coming up is "regulatory capture."

This conversation exploded out into the open on Monday with the publication of an interview with Andrew Ng, a professor at Stanford University and the founder of Google Brain. "There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they're creating fear of AI leading to human extinction," Ng told the news outlet. Ng also said that two equally bad ideas had been joined together via doomerist discourse: that AI could make us go extinct and that, consequently, a good way to make AI safer is to impose burdensome licensing requirements on AI producers.

More criticism swiftly came down the pipe from Yann LeCun, Meta's top AI scientist and a big proponent of open-source AI research, who got into a fight with another techie on X about how Meta's competitors were attempting to commandeer the field for themselves. "Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment," LeCun said, in reference to OpenAI, Google, and Anthropic's top AI executives. "They are the ones who are attempting to perform a regulatory capture of the AI industry. You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D," he said.

After Ng and LeCun's comments circulated, Google DeepMind's current CEO, Demis Hassabis, was forced to respond. In an interview with CNBC, he said that Google wasn't trying to achieve regulatory capture and said: "I pretty much disagree with most of those comments from Yann."

Predictably, Sam Altman eventually decided to jump into the fray to let everybody know that no, actually, he's a great guy and this whole scaring-people-into-submitting-to-his-business-interests thing is really not his style. On Thursday, the OpenAI CEO tweeted:

"there are some great parts about the AI EO, but as the govt implements it, it will be important not to slow down innovation by smaller companies/research teams. i am pro-regulation on frontier systems, which is what openai has been calling for, and against regulatory capture."

"So, capture it is then," one person commented beneath Altman's tweet.

Of course, no squabble about AI would be complete without a healthy mouthful from the world's most opinion-filled internet troll and AI funder, Elon Musk. Musk gave himself the opportunity to provide that mouthful this week by somehow getting the UK's Sunak to conduct an interview with him, which was later streamed to Musk's own website, X. During the conversation, which amounted to Sunak looking like he wanted to take a nap while sleepily asking the billionaire a roster of questions, Musk managed to get in some classic Musk-isms. Musk's comments weren't so much thought-provoking or rooted in any sort of serious policy discussion as they were dumb and entertaining, which is more the style of rhetoric he excels at.

Included in Musk's roster of comments was the claim that AI will eventually create what he called "a future of abundance where there is no scarcity of goods and services" and where the average job is basically redundant. However, the billionaire also warned that we should still be worried about some sort of rogue AI-driven superintelligence, and that humanoid robots that can "chase you into a building or up a tree" were also a potential thing to be worried about.

When the conversation rolled around to regulations, Musk claimed that he agreed with most regulations but said, of AI: "I generally think it's good for government to play a role when public safety is at risk. Really, for the vast majority of software, public safety is not at risk. If an app crashes on your phone or laptop it's not a massive catastrophe. But when we talk about digital superintelligence, which does pose a risk to the public, then there is a role for government to play." In other words, whenever software starts resembling that thing from the most recent Mission Impossible movie, then Musk will probably be comfortable with the government getting involved. Until then...ehhh.

Musk may want regulators to hold off on any sort of serious policies since his own AI company is apparently debuting its technology soon. In a tweet on X on Friday, Musk announced that his startup, xAI, planned to release its first AI to "a select group" on Saturday and that this tech was "in some important respects, the best that currently exists." That's about as clear as mud, though it'd probably be safe to assume that Musk's promises are somewhere in the same neighborhood of hyperbole as his original comments about the Tesla bot.

This week we spoke with Samir Jain, vice president of policy at the Center for Democracy and Technology, to get his thoughts on the much anticipated executive order from the White House on artificial intelligence. The Biden administration's EO is being looked at as the first step in a regulatory process that could take years to unfold. Some onlookers praised the Biden administration's efforts; others weren't so thrilled. Jain spoke with us about his thoughts on the order as well as his hopes for future regulation. This interview has been edited for brevity and clarity.

I just wanted to get your initial response to Biden's executive order. Are you pleased with it? Hopeful? Or do you feel like it leaves some stuff out?

Overall we are pleased with the executive order. We think it identifies a lot of key issues, in particular current harms that are happening, and that it really tries to bring together different agencies across the government to address those issues. There's a lot of work to be done to implement the order and its directives. So, ultimately, I think the judgment as to whether it's an effective EO or not will turn to a significant degree on how that implementation goes. The question is whether those agencies and other parts of government will carry out those tasks effectively. In terms of setting a direction, in terms of identifying issues and recognizing that the administration can only act within the scope of the authority that it currently has... we were quite pleased with the comprehensive nature of the EO.

One of the things the EO seems like it's trying to tackle is this idea of long-term harms around AI and some of the more catastrophic potentialities of the ways in which it could be wielded. It seems like the executive order focuses more on the long-term harms rather than the short-term ones. Would you say that's true?

I'm not sure that's true. I think you're characterizing the discussion correctly, in that there's this idea out there that there's a dichotomy between long-term and short-term harms. But I actually think that, in many respects, that's a false dichotomy. It's a false dichotomy both in the sense that we shouldn't have to choose one or the other, and in the sense that a lot of the infrastructure and steps that you would take to deal with current harms are also going to help in dealing with whatever long-term harms there may be. So, if for example, we do a good job with promoting and entrenching transparency (in terms of the use and capability of AI systems), that's going to also help us when we turn to addressing longer-term harms.

With respect to the EO, although there certainly are provisions that deal with long-term harms, there's actually a lot in the EO (I would go so far as to say the bulk of it) that deals with current and existing harms. It's directing the Secretary of Labor to mitigate potential harms from AI-based tracking of workers; it's calling on the Department of Housing and Urban Development and the Consumer Financial Protection Bureau to develop guidance around algorithmic tenant screening; it's directing the Department of Education to figure out some resources and guidance about the safe and non-discriminatory use of AI in education; it's telling the Health and Human Services Department to look at benefits administration and to make sure that AI doesn't undermine equitable administration of benefits. I'll stop there, but that's all to say that I think it does a lot with respect to protecting against current harms.
