Archive for the ‘Artificial General Intelligence’ Category

Generative AI Is Totally Shameless. I Want to Be It – WIRED

AI has a lot of problems. It helps itself to the work of others, regurgitating what it absorbs in a game of multidimensional Mad Libs and omitting all attribution, resulting in widespread outrage and litigation. When it draws pictures, it makes the CEOs white, puts people in awkward ethnic outfits, and has a tendency to imagine women as elfish, with light-colored eyes. Its architects sometimes seem to be part of a death cult that semi-worships a Cthulhu-like future AI god, and they focus great energies on supplicating to this immense imaginary demon (thrilling! terrifying!) instead of integrating with the culture at hand (boring, and you get yelled at). Even the more thoughtful AI geniuses seem OK with the idea that an artificial general intelligence is right around the corner, despite 75 years of failed precedent: the purest form of getting high on your own supply.

So I should reject this whole crop of image-generating, chatting, large-language-model-based code-writing infinite typing monkeys. But, dammit, I can't. I love them too much. I am drawn back over and over, for hours, to learn and interact with them. I have them make me lists, draw me pictures, summarize things, read for me. Where I work, we've built them into our code. I'm in the bag. Not my first hypocrisy rodeo.

There's a truism that helps me whenever the new big tech thing has every brain melting: I repeat to myself, "It's just software." Word processing was going to make it too easy to write novels, Photoshop looked like it would let us erase history, Bitcoin was going to replace money, and now AI is going to ruin society, but it's just software. And not even that much software: Lots of AI models could fit on a thumb drive with enough room left over for the entire run of Game of Thrones (or Microsoft Office). They're interdimensional ZIP files, glitchy JPEGs, but for all of human knowledge. And yet they serve such large portions! (Not always. Sometimes I ask the AI to make a list and it gives up. "You can do it," I type. "You can make the list longer." And it does! What a terrible interface!)

What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it, with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.

As with most people on Earth, shame is a part of my life, installed at a young age and frequently updated with shame service packs. I read a theory once that shame is born when a child expects a reaction from their parents (a laugh, applause) and doesn't get it. That's an oversimplification, but given all the jokes I've told that have landed flat, it sure rings true. Social media could be understood, in this vein, as a vast shame-creating machine. We all go out there with our funny one-liners and cool pictures, and when no one likes or faves them we feel lousy about it. A healthy person goes, "Ah well, didn't land. Felt weird. Time to move on."

But when you meet shameless people they can sometimes seem like miracles. They have a superpower: the ability to be loathed, to be wrong, and yet to keep going. We obsess over them: our divas, our pop stars, our former presidents, our political grifters, and of course our tech industry CEOs. We know them by their first names and nicknames, not because they are our friends but because the weight of their personalities and influence has allowed them to claim their own domain names in the collective cognitive register.

Are these shameless people evil, or wrong, or bad? Sure. Whatever you want. Mostly, though, they're just big, by their own, shameless design. They contain multitudes, and we debate those multitudes. Do they deserve their fame, their billions, their Electoral College victory? We want them to go away but they don't care. Not one bit. They plan to stay forever. They will be dead before they feel remorse.

AI is like having my very own shameless monster as a pet. ChatGPT, my favorite, is the most shameless of the lot. It will do whatever you tell it to, regardless of the skills involved. It'll tell you how to become a nuclear engineer, how to keep a husband, how to invade a country. I love to ask it questions that I'm ashamed to ask anyone else: What is private equity? How can I convince my family to let me get a dog? It helps me understand what's happening with my semaglutide injections. It helps me write code; it has, in fact, renewed my relationship with writing code. It creates meaningless, disposable images. It teaches me music theory and helps me write crappy little melodies. It does everything badly and confidently. And I want to be it. I want to be that confident, that unembarrassed, that ridiculously sure of myself.

View original post here:

Generative AI Is Totally Shameless. I Want to Be It - WIRED

OpenAI disbands team devoted to artificial intelligence risks – Port Lavaca Wave

OpenAI on Friday confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence.

OpenAI began dissolving the so-called "superalignment" group weeks ago, integrating members into other projects and research, according to the San Francisco-based firm.

Company co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the ChatGPT-maker this week.

The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.

"OpenAI must become a safety-first AGI (artificial general intelligence) company," Leike wrote Friday in a post on X, formerly Twitter.

Leike called on all OpenAI employees to "act with the gravitas" warranted by what they are building.

OpenAI chief executive Sam Altman responded to Leike's post with one of his own, thanking him for his work at the company and saying he was sad to see Leike leave.

"He's right we have a lot more to do," Altman said. "We are committed to doing it."

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, whose "trajectory has been nothing short of miraculous."

"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as -- or better than -- human cognition.

Sutskever, OpenAI's chief scientist, sat on the board that voted to remove chief executive Altman in November last year.

The ousting threw the San Francisco-based startup into turmoil, with the OpenAI board hiring Altman back a few days later after staff and investors rebelled.

OpenAI early this week released a higher-performing and even more human-like version of the artificial intelligence technology that underpins ChatGPT, making it free to all users.

"It feels like AI from the movies," Altman said in a blog post.

Altman has previously pointed to the Scarlett Johansson character in the movie "Her," in which she voices an AI-based virtual assistant who dates a man, as an inspiration for where he would like AI interactions to go.

The day will come when "digital brains will become as good and even better than our own," Sutskever said during a talk at a TED AI summit in San Francisco late last year.

"AGI will have a dramatic impact on every area of life."

Go here to see the original:

OpenAI disbands team devoted to artificial intelligence risks - Port Lavaca Wave

OpenAI researcher resigns, claiming safety has taken a backseat to shiny products – The Verge

Jan Leike, a key OpenAI researcher who resigned earlier this week following the departure of co-founder Ilya Sutskever, posted on X Friday morning that "safety culture and processes have taken a backseat to shiny products" at the company.

Leike's statements came after Wired reported that OpenAI had disbanded the team dedicated to addressing long-term AI risks (called the Superalignment team) altogether. Leike had been running the Superalignment team, which formed last July to solve the core technical challenges in implementing safety protocols as OpenAI develops AI that can reason like a human.

The original idea for OpenAI was to openly provide its models to the public, hence the organization's name, but they've become proprietary knowledge due to the company's claims that allowing such powerful models to be accessed by anyone could be potentially destructive.

"We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can," Leike said in follow-up posts about his resignation Friday morning. "Only then can we ensure AGI benefits all of humanity."

The Verge reported earlier this week that John Schulman, another OpenAI co-founder who supported Altman during last year's unsuccessful board coup, will assume Leike's responsibilities. Sutskever, who played a key role in the notorious failed coup against Sam Altman, announced his departure on Tuesday.

"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike posted.

Leike's posts highlight an increasing tension within OpenAI. As researchers race to develop artificial general intelligence while managing consumer AI products like ChatGPT and DALL-E, employees like Leike are raising concerns about the potential dangers of creating super-intelligent AI models. Leike said his team was deprioritized and couldn't get compute and other resources to perform crucial work.

"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Read the original post:

OpenAI researcher resigns, claiming safety has taken a backseat to shiny products - The Verge

Most of Surveyed Americans Do Not Want Super Intelligent AI – 80.lv

In response to the question, "Which goal of AI policy is more important?", a significant 65% of respondents opted for the answer, "Keeping dangerous models out of the hands of bad actors." This choice notably outperformed the alternative, "Providing the benefits of AI to everyone," which was picked by only 22% of the voters. This suggests a prevailing concern about the potential misuse of AI, one that outweighs the desire for widespread access to AI benefits.

Interestingly, the apprehension around AI does not extend to AI education. When asked about an initiative to expand access to AI education, research, and training, 55% of the respondents showed support, while 24% opposed, and the rest were undecided.

The results align with the stance of the Artificial Intelligence Policy Institute, which holds the view that proactive government regulation can significantly mitigate the potentially destabilizing effects of AI. As it stands, tech companies like OpenAI and Google have a daunting task ahead in convincing the public of the benefits of artificial general intelligence (AGI), given the current negative sentiment around increasingly powerful AI.

Follow this link:

Most of Surveyed Americans Do Not Want Super Intelligent AI - 80.lv

A former OpenAI leader says safety has ‘taken a backseat to shiny products’ at the AI company – Winnipeg Free Press

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company.

Jan Leike, who ran OpenAI's Superalignment team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

"However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike, whose last day was Thursday.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building smarter-than-human machines is "an inherently dangerous endeavor" and that the company is "shouldering an enormous responsibility on behalf of all of humanity."

"OpenAI must become a safety-first AGI company," wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans, or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike's posts that he was "super appreciative" of Leike's contributions to the company and was "very sad to see him leave."

"Leike is right we have a lot more to do; we are committed to doing it," Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leikes Superalignment team, which was launched last year to focus on AI risks, and is integrating the teams members across its research efforts.

Leike's resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that's meaningful to him, without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki "easily one of the greatest minds of our generation" and said he is "very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone."

On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people's moods.

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of the AP's text archives.

Original post:

A former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company - Winnipeg Free Press