Archive for the ‘Artificial General Intelligence’ Category

19 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create – MSN

Continued here:

19 jobs artificial general intelligence (AGI) may replace and 10 jobs it could create - MSN

Paige Appoints New Leadership to Further Drive Innovation, Bring Artificial General Intelligence to Pathology, and Expand Access to AI Applications -…

Read the original here:

Paige Appoints New Leadership to Further Drive Innovation, Bring Artificial General Intelligence to Pathology, and Expand Access to AI Applications -...

Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time – JD Supra

Go here to read the rest:

Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time - JD Supra

OpenAI Touts New AI Safety Research. Critics Say It's a Good Step, but Not Enough – WIRED

OpenAI has faced opprobrium in recent months from those who suggest it may be rushing too quickly and recklessly to develop more powerful artificial intelligence. The company appears intent on showing it takes AI safety seriously. Today it showcased research that it says could help researchers scrutinize AI models even as they become more capable and useful.

The new technique is one of several ideas related to AI safety that the company has touted in recent weeks. It involves having two AI models engage in a conversation that forces the more powerful one to be more transparent, or legible, with its reasoning so that humans can understand what it's up to.

"This is core to the mission of building an [artificial general intelligence] that is both safe and beneficial," Yining Chen, a researcher at OpenAI involved with the work, tells WIRED.

So far, the work has been tested on an AI model designed to solve simple math problems. The OpenAI researchers asked the model to explain its reasoning as it answered questions or solved problems. A second model was trained to detect whether those answers were correct, and the researchers found that having the two models engage in a back-and-forth encouraged the math-solving one to be more forthright and transparent with its reasoning.
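
As a rough illustration of the kind of two-model setup described above, the sketch below pairs a "prover" model that writes out its reasoning with a "verifier" that scores it. The interfaces, data structures, and training details here are assumptions made for the sketch, not OpenAI's published implementation.

```python
# Hypothetical sketch of the two-model "legibility" loop described above.
# The prover/verifier interfaces and the reward logic are assumptions,
# not OpenAI's published method.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Problem:
    question: str
    answer: str  # ground-truth answer, used to label the verifier's training data


def legibility_round(
    prover: Callable[[str], Tuple[str, str]],   # question -> (reasoning, answer)
    verifier: Callable[[str, str], float],      # (question, reasoning) -> confidence it is correct
    batch: List[Problem],
) -> List[dict]:
    """One back-and-forth round: the prover explains its reasoning,
    the verifier scores it, and the records become training signal."""
    records = []
    for p in batch:
        reasoning, answer = prover(p.question)
        records.append({
            "question": p.question,
            "reasoning": reasoning,
            "answer": answer,
            "correct": answer == p.answer,
            "verifier_score": verifier(p.question, reasoning),
        })
    # Training would reward the prover for correct answers the verifier can
    # confirm, nudging it toward reasoning a weaker model (or a human) can
    # follow, while the verifier is trained against the "correct" labels.
    return records
```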

OpenAI is publicly releasing a paper detailing the approach. "It's part of the long-term safety research plan," says Jan Hendrik Kirchner, another OpenAI researcher involved with the work. "We hope that other researchers can follow up, and maybe try other algorithms as well."

Transparency and explainability are key concerns for AI researchers working to build more powerful systems. Large language models will sometimes offer up reasonable explanations for how they came to a conclusion, but a key concern is that future models may become more opaque or even deceptive in the explanations they provide, perhaps pursuing an undesirable goal while lying about it.

The research revealed today is part of a broader effort to understand how large language models that are at the core of programs like ChatGPT operate. It is one of a number of techniques that could help make more powerful AI models more transparent and therefore safer. OpenAI and other companies are exploring more mechanistic ways of peering inside the workings of large language models, too.

OpenAI has revealed more of its work on AI safety in recent weeks following criticism of its approach. In May, WIRED learned that a team of researchers dedicated to studying long-term AI risk had been disbanded. This came shortly after the departure of cofounder and key technical leader Ilya Sutskever, who was one of the board members who briefly ousted CEO Sam Altman last November.

OpenAI was founded on the promise that it would make AI both more transparent to scrutiny and safer. Following the runaway success of ChatGPT and amid intensifying competition from well-backed rivals, some have accused the company of prioritizing splashy advances and market share over safety.

Daniel Kokotajlo, a researcher who left OpenAI and signed an open letter criticizing the company's approach to AI safety, says the new work is "important, but incremental," and that it does not change the fact that companies building the technology need more oversight. "The situation we are in remains unchanged," he says. "Opaque, unaccountable, unregulated corporations racing each other to build artificial superintelligence, with basically no plan for how to control it."

Another source with knowledge of OpenAI's inner workings, who asked not to be named because they were not authorized to speak publicly, says that outside oversight of AI companies is also needed. "The question is whether they're serious about the kinds of processes and governance mechanisms you need to prioritize societal benefit over profit," the source says. "Not whether they let any of their researchers do some safety stuff."

Originally posted here:

OpenAI Touts New AI Safety Research. Critics Say It's a Good Step, but Not Enough - WIRED

OpenAI's Project Strawberry Said to Be Building AI That Reasons and Does Deep Research – Singularity Hub

Despite their uncanny language skills, today's leading AI chatbots still struggle with reasoning. A secretive new project from OpenAI could reportedly be on the verge of changing that.

While today's large language models can already carry out a host of useful tasks, they're still a long way from replicating the kind of problem-solving capabilities humans have. In particular, they're not good at dealing with challenges that require multiple steps to reach a solution.

Imbuing AI with those kinds of skills would greatly increase its utility and has been a major focus for many of the leading research labs. According to recent reports, OpenAI may be close to a breakthrough in this area.

An article in Reuters claimed its journalists had been shown an internal document from the company discussing a project code-named Strawberry that is building models capable of planning, navigating the internet autonomously, and carrying out what OpenAI refers to as "deep research."

A separate story from Bloomberg said the company had demoed research at a recent all-hands meeting that gave its GPT-4 model skills described as similar to human reasoning abilities. It's unclear whether the demo was part of project Strawberry.

According to the Reuters report, project Strawberry is an extension of the Q* project that was revealed last year, just before OpenAI CEO Sam Altman was ousted by the board. The model in question was supposedly capable of solving grade-school math problems.

That might sound innocuous, but some inside the company believed it signaled a breakthrough in problem-solving capabilities that could accelerate progress towards artificial general intelligence, or AGI. Math has long been an Achilles heel for large language models, and capabilities in this area are seen as a good proxy for reasoning skills.

A source told Reuters that OpenAI has tested a model internally that achieved a 90 percent score on a challenging test of AI math skills, though the outlet again couldn't confirm whether this was related to project Strawberry. Two other sources reported seeing demos from the Q* project that involved models solving math and science questions beyond the reach of today's leading commercial AIs.

Exactly how OpenAI has achieved these enhanced capabilities is unclear at present. The Reuters report notes that Strawberry involves fine-tuning OpenAI's existing large language models, which have already been trained on reams of data. The approach, according to the article, is similar to one detailed in a 2022 paper from Stanford researchers called Self-Taught Reasoner, or STaR.

That method builds on a concept known as chain-of-thought prompting, in which a large language model is asked to explain the reasoning steps behind its answer to a query. In the STaR paper, the authors showed an AI model a handful of these chain-of-thought rationales as examples and then asked it to come up with answers and rationales for a large number of questions.
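
In rough terms, a chain-of-thought prompt in this style simply prepends one or more worked rationales to the question being asked. The snippet below is purely illustrative: the worked example is made up, and generate() stands in for whichever model is being queried rather than any real API.

```python
# Illustrative few-shot chain-of-thought prompt; the worked example is made up
# and generate() is a placeholder for an actual model call, not a real API.
FEW_SHOT_RATIONALE = """\
Q: A notebook costs $3 and a pen costs $2. How much do 2 notebooks and 3 pens cost?
A: Two notebooks cost 2 * 3 = 6 dollars. Three pens cost 3 * 2 = 6 dollars.
Together that is 6 + 6 = 12 dollars. The answer is 12.
"""


def build_cot_prompt(question: str) -> str:
    """Prepend a worked rationale so the model is nudged to show its steps."""
    return f"{FEW_SHOT_RATIONALE}\nQ: {question}\nA:"


# Usage, with generate() standing in for a model call:
# completion = generate(build_cot_prompt("A train travels 60 km in 1.5 hours. What is its speed in km/h?"))
```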

If it got the question wrong, the researchers would show the model the correct answer and then ask it to come up with a new rationale. The model was then fine-tuned on all of the rationales that led to a correct answer, and the process was repeated. This led to significantly improved performance on multiple datasets, and the researchers note that the approach effectively allowed the model to self-improve by training on reasoning data it had produced itself.
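A rough sketch of that loop might look like the following, with model.generate_rationale() and model.finetune() as assumed interfaces for the sketch rather than anything taken from the paper's code.

```python
# STaR-style self-improvement loop as described above (sketch only; the
# model interface is assumed and details differ from the actual paper).
def star_training(model, dataset, num_rounds: int = 3):
    for _ in range(num_rounds):
        kept = []
        for question, correct_answer in dataset:
            rationale, answer = model.generate_rationale(question)
            if answer != correct_answer:
                # "Rationalization": show the correct answer and ask for a
                # fresh rationale that reaches it.
                rationale, answer = model.generate_rationale(
                    question, hint=correct_answer
                )
            if answer == correct_answer:
                kept.append((question, rationale, answer))
        # Fine-tune only on self-generated rationales that ended in a
        # correct answer, then repeat with the improved model.
        model.finetune(kept)
    return model
```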

How closely Strawberry mimics this approach is unclear, but if it relies on self-generated data, that could be significant. The holy grail for many AI researchers is recursive self-improvement, in which weak AI can enhance its own capabilities to bootstrap itself to higher orders of intelligence.

However, it's important to take vague leaks from commercial AI research labs with a pinch of salt. These companies are highly motivated to give the appearance of rapid progress behind the scenes.

The fact that project Strawberry seems to be little more than a rebranding of Q*, which was first reported over six months ago, should give pause. As far as concrete results go, publicly demonstrated progress has been fairly incremental, with the most recent AI releases from OpenAI, Google, and Anthropic providing modest improvements over previous versions.

At the same time, it would be unwise to discount the possibility of a significant breakthrough. Leading AI companies have been pouring billions of dollars into making the next great leap in performance, and reasoning has been an obvious bottleneck on which to focus resources. If OpenAI has genuinely made a significant advance, it probably won't be long until we find out.

Image Credit: gemenu / Pixabay

Read more:

OpenAI's Project Strawberry Said to Be Building AI That Reasons and Does Deep Research - Singularity Hub