Archive for the ‘Artificial General Intelligence’ Category

OpenAI disbands team devoted to artificial intelligence risks – Moore County News Press

OpenAI on Friday confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence.

OpenAI began dissolving the so-called "superalignment" group weeks ago, integrating members into other projects and research, according to the San Francisco-based firm.

Company co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the ChatGPT-maker this week.

The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.

"OpenAI must become a safety-first AGI (artificial general intelligence) company," Leike wrote Friday in a post on X, formerly Twitter.

Leike called on all OpenAI employees to "act with the gravitas" warranted by what they are building.

OpenAI chief executive Sam Altman responded to Leike's post with one of his own, thanking him for his work at the company and saying he was sad to see Leike leave.

"He's right we have a lot more to do," Altman said. "We are committed to doing it."

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, whose "trajectory has been nothing short of miraculous."

"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as -- or better than -- human cognition.

Sutskever, OpenAI's chief scientist, sat on the board that voted to remove chief executive Altman in November last year.

The ousting threw the San Francisco-based startup into turmoil, with the OpenAI board hiring Altman back a few days later after staff and investors rebelled.

OpenAI early this week released a higher-performing and even more human-like version of the artificial intelligence technology that underpins ChatGPT, making it free to all users.

"It feels like AI from the movies," Altman said in a blog post.

Altman has previously pointed to the movie "Her," in which Scarlett Johansson voices an AI-based virtual assistant dating a man, as an inspiration for where he would like AI interactions to go.

The day will come when "digital brains will become as good and even better than our own," Sutskever said during a talk at a TED AI summit in San Francisco late last year.

"AGI will have a dramatic impact on every area of life."

OpenAI researcher resigns, claiming safety has taken a backseat to shiny products – The Verge

Jan Leike, a key OpenAI researcher who resigned earlier this week following the departure of co-founder Ilya Sutskever, posted on X Friday morning that "safety culture and processes have taken a backseat to shiny products" at the company.

Leike's statements came after Wired reported that OpenAI had disbanded the team dedicated to addressing long-term AI risks (called the Superalignment team) altogether. Leike had been running the Superalignment team, which formed last July to solve the core technical challenges in implementing safety protocols as OpenAI developed AI that can reason like a human.

The original idea for OpenAI was to openly provide its models to the public, hence the organization's name, but they have become proprietary knowledge due to the company's claims that allowing such powerful models to be accessed by anyone could be destructive.

"We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can," Leike said in follow-up posts about his resignation Friday morning. "Only then can we ensure AGI benefits all of humanity."

The Verge reported earlier this week that John Schulman, another OpenAI co-founder who supported Altman during last year's unsuccessful board coup, will assume Leike's responsibilities. Sutskever, who played a key role in the notorious failed coup against Sam Altman, announced his departure on Tuesday.

"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike posted.

Leike's posts highlight an increasing tension within OpenAI. As researchers race to develop artificial general intelligence while managing consumer AI products like ChatGPT and DALL-E, employees like Leike are raising concerns about the potential dangers of creating super-intelligent AI models. Leike said his team was deprioritized and couldn't get compute and other resources to perform crucial work.

"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Most of Surveyed Americans Do Not Want Super Intelligent AI – 80.lv

In response to the question, "Which goal of AI policy is more important?", a significant 65% of respondents opted for the answer, "Keeping dangerous models out of the hands of bad actors." This choice notably outperformed the alternative, "Providing the benefits of AI to everyone," which was picked by 22% of respondents. This suggests a prevailing concern about the potential misuse of AI that outweighs the desire for widespread access to AI benefits.

Interestingly, the apprehension around AI does not extend to AI education. When asked about an initiative to expand access to AI education, research, and training, 55% of the respondents showed support, while 24% opposed, and the rest were undecided.

The results align with the stance of the Artificial Intelligence Policy Institute, which holds the view that proactive government regulation can significantly mitigate the potentially destabilizing effects of AI. As it stands, tech companies like OpenAI and Google have a daunting task ahead in convincing the public of the benefits of artificial general intelligence (AGI), given the current negative sentiment around increasingly powerful AI.

A former OpenAI leader says safety has ‘taken a backseat to shiny products’ at the AI company – Winnipeg Free Press

A former OpenAI leader who resigned from the company earlier this week said Friday that safety has "taken a backseat to shiny products" at the influential artificial intelligence company.

Jan Leike, who ran OpenAI's Superalignment team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

"However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," wrote Leike, whose last day was Thursday.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building smarter-than-human machines is "an inherently dangerous endeavor" and that the company is "shouldering an enormous responsibility on behalf of all of humanity."

"OpenAI must become a safety-first AGI company," wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike's posts that he was "super appreciative" of Leike's contributions to the company and "very sad to see him leave."

"Leike is right we have a lot more to do; we are committed to doing it," Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leike's Superalignment team, which was launched last year to focus on AI risks, and is integrating the team's members across its research efforts.

Leike's resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that's meaningful to him, without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki "also easily one of the greatest minds of our generation" and said he is "very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone."

On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people's moods.

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of the AP's text archives.

DeepMind CEO says Google to spend more than $100B on AGI despite hype – Cointelegraph

Google's not backing down from the challenge posed by Microsoft when it comes to the artificial intelligence sector. At least not according to the CEO of Google DeepMind, Demis Hassabis.

Speaking at a TED conference in Canada, Hassabis recently went on the record saying that he expected Google to spend more than $100 billion on the development of artificial general intelligence (AGI) over time. His comments reportedly came in response to a question concerning Microsoft's recent Stargate announcement.

Microsoft and OpenAI are reportedly in discussions to build a $100 billion supercomputer project for the purpose of training AI systems. According to the Intercept, a person who wished to remain anonymous, and who has had direct conversations with OpenAI CEO Sam Altman and seen the initial cost estimates on the project, says it's currently being discussed under the codename "Stargate."

To put the proposed costs into perspective, the world's most powerful supercomputer, the U.S.-based Frontier system, cost approximately $600 million to build.
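
For a rough sense of that scale, here is a back-of-the-envelope comparison in Python using only the approximate figures cited above; the snippet and its variable names are illustrative, not drawn from either report:

# Back-of-the-envelope scale comparison using the approximate figures above.
stargate_estimate = 100_000_000_000   # reported ~$100 billion project estimate
frontier_cost = 600_000_000           # approximate Frontier build cost

print(f"Stargate estimate is ~{stargate_estimate / frontier_cost:.0f}x Frontier's cost")
# -> Stargate estimate is ~167x Frontier's cost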

According to the report, Stargate wouldn't be a single system like Frontier. It would instead be a series of computers spread across the U.S., built in five phases, with the fifth and final phase being the Stargate system itself.

Hassabis' comments don't hint at exactly how Google might respond, but seemingly confirm the notion that the company is aware of Microsoft's endeavors and plans on investing just as much, if not more.

Ultimately, the stakes are simple. Both companies are vying to become the first organization to develop artificial general intelligence (AGI). Today's AI systems are constrained by their training methods and data and, as such, fall well short of human-level intelligence across myriad benchmarks.

AGI is a nebulous term for an AI system theoretically capable of doing anything an average adult human could do, given the right resources. An AGI system with access to a line of credit or a cryptocurrency wallet and the internet, for example, should be able to start and run its own business.

Related: DeepMind co-founder says AI will be able to invent, market, run businesses by 2029

The main challenge to being the first company to develop AGI is that there's no scientific consensus on exactly what an AGI is or how one could be created.

Even among the world's most famous AI scientists (Meta's Yann LeCun, Google's Demis Hassabis and others), there is no small amount of disagreement as to whether AGI can even be achieved using the current brute-force method of increasing datasets and training parameters, or whether it can be achieved at all.

In a Financial Times article published in March, Hassabis unfavorably compared the current AI/AGI hype cycle, and the scams it has attracted, to the cryptocurrency market. Despite the hype, both AI and crypto valuations have surged in the first four months of 2024.

Where Bitcoin, the world's most popular cryptocurrency, sat at about $30,395 per coin in April of 2023, it is now over $60,000 as of the time of this article's publishing, having only recently retreated from an all-time high of about $73,000.

Meanwhile, the current AI industry leader, Microsoft, has seen its stock go from $286 a share to around $416 in the same time period.
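
Those figures imply gains of roughly 97% for Bitcoin and 45% for Microsoft over the period. A minimal sketch of the arithmetic, using the approximate prices quoted above (illustrative figures from the article, not live market data):

# Percentage gains implied by the approximate prices quoted above.
prices = {"Bitcoin": (30_395, 60_000), "Microsoft": (286, 416)}  # April 2023 vs. now, USD

for asset, (then, now) in prices.items():
    gain = (now - then) / then * 100
    print(f"{asset}: {then} -> {now}, a gain of about {gain:.0f}%")
# Bitcoin: ~97%, Microsoft: ~45%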
