Archive for the ‘Artificial General Intelligence’ Category

Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time – JD Supra


Go here to read the rest:

Artificial General Intelligence, If Attained, Will Be the Greatest Invention of All Time - JD Supra


OpenAI Touts New AI Safety Research. Critics Say It's a Good Step, but Not Enough – WIRED

OpenAI has faced opprobrium in recent months from those who suggest it may be rushing too quickly and recklessly to develop more powerful artificial intelligence. The company appears intent on showing it takes AI safety seriously. Today it showcased research that it says could help researchers scrutinize AI models even as they become more capable and useful.

The new technique is one of several ideas related to AI safety that the company has touted in recent weeks. It involves having two AI models engage in a conversation that forces the more powerful one to be more transparent, or legible, with its reasoning so that humans can understand what it's up to.

"This is core to the mission of building an [artificial general intelligence] that is both safe and beneficial," Yining Chen, a researcher at OpenAI involved with the work, tells WIRED.

So far, the work has been tested on an AI model designed to solve simple math problems. The OpenAI researchers asked the AI model to explain its reasoning as it answered questions or solved problems. A second model is trained to detect whether the answers are correct or not, and the researchers found that having the two models engage in a back and forth encouraged the math-solving one to be more forthright and transparent with its reasoning.
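The setup described here is essentially a prover-and-checker loop: a stronger model must lay out reasoning that a weaker model can verify. As a rough, hypothetical sketch of that loop (not OpenAI's actual code), the Python below uses stand-in functions, prover_solve and verifier_score, in place of real models.

```python
# Illustrative sketch of the prover/verifier back-and-forth described above.
# Both model functions are hypothetical stand-ins, not real OpenAI APIs.

from dataclasses import dataclass


@dataclass
class Episode:
    problem: str
    reasoning: str         # the prover's step-by-step explanation
    answer: str
    verifier_score: float  # verifier's estimate that the answer checks out


def prover_solve(problem: str) -> tuple[str, str]:
    """Hypothetical prover: returns (reasoning, answer) for a math problem."""
    return f"Step-by-step work for: {problem}", "42"  # placeholder output


def verifier_score(problem: str, reasoning: str, answer: str) -> float:
    """Hypothetical verifier: scores how checkable the prover's solution is."""
    return 0.5  # placeholder probability


def collect_episodes(problems: list[str]) -> list[Episode]:
    """One round of the back-and-forth: the prover is pushed to produce
    reasoning legible enough for the weaker verifier to check."""
    episodes = []
    for problem in problems:
        reasoning, answer = prover_solve(problem)
        score = verifier_score(problem, reasoning, answer)
        episodes.append(Episode(problem, reasoning, answer, score))
    # In training, these episodes would drive updates: the prover is rewarded
    # when the verifier accepts its answers, and the verifier is trained
    # against known-correct labels.
    return episodes


if __name__ == "__main__":
    for episode in collect_episodes(["What is 6 * 7?"]):
        print(episode)
```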

OpenAI is publicly releasing a paper detailing the approach. "It's part of the long-term safety research plan," says Jan Hendrik Kirchner, another OpenAI researcher involved with the work. "We hope that other researchers can follow up, and maybe try other algorithms as well."

Transparency and explainability are key concerns for AI researchers working to build more powerful systems. Large language models will sometimes offer up reasonable explanations for how they came to a conclusion, but a key concern is that future models may become more opaque or even deceptive in the explanations they provide, perhaps pursuing an undesirable goal while lying about it.

The research revealed today is part of a broader effort to understand how large language models that are at the core of programs like ChatGPT operate. It is one of a number of techniques that could help make more powerful AI models more transparent and therefore safer. OpenAI and other companies are exploring more mechanistic ways of peering inside the workings of large language models, too.

OpenAI has revealed more of its work on AI safety in recent weeks following criticism of its approach. In May, WIRED learned that a team of researchers dedicated to studying long-term AI risk had been disbanded. This came shortly after the departure of cofounder and key technical leader Ilya Sutskever, who was one of the board members who briefly ousted CEO Sam Altman last November.

OpenAI was founded on the promise that it would make AI both more transparent to scrutiny and safer. After the runaway success of ChatGPT and more intense competition from well-backed rivals, some people have accused the company of prioritizing splashy advances and market share over safety.

Daniel Kokotajlo, a researcher who left OpenAI and signed an open letter criticizing the company's approach to AI safety, says the new work is important but incremental, and that it does not change the fact that companies building the technology need more oversight. "The situation we are in remains unchanged," he says. "Opaque, unaccountable, unregulated corporations racing each other to build artificial superintelligence, with basically no plan for how to control it."

Another source with knowledge of OpenAI's inner workings, who asked not to be named because they were not authorized to speak publicly, says that outside oversight of AI companies is also needed. "The question is whether they're serious about the kinds of processes and governance mechanisms you need to prioritize societal benefit over profit," the source says. "Not whether they let any of their researchers do some safety stuff."

Originally posted here:

OpenAI Touts New AI Safety Research. Critics Say It's a Good Step, but Not Enough - WIRED

OpenAI's Project Strawberry Said to Be Building AI That Reasons and Does Deep Research – Singularity Hub

Despite their uncanny language skills, today's leading AI chatbots still struggle with reasoning. A secretive new project from OpenAI could reportedly be on the verge of changing that.

While today's large language models can already carry out a host of useful tasks, they're still a long way from replicating the kind of problem-solving capabilities humans have. In particular, they're not good at dealing with challenges that require them to take multiple steps to reach a solution.

Imbuing AI with those kinds of skills would greatly increase its utility and has been a major focus for many of the leading research labs. According to recent reports, OpenAI may be close to a breakthrough in this area.

An article in Reuters claimed its journalists had been shown an internal document from the company discussing a project code-named Strawberry that is building models capable of planning, navigating the internet autonomously, and carrying out what OpenAI refers to as "deep research."

A separate story from Bloomberg said the company had demoed research at a recent all-hands meeting that gave its GPT-4 model skills described as similar to human reasoning abilities. It's unclear whether the demo was part of project Strawberry.

According to the Reuters report, project Strawberry is an extension of the Q* project that was revealed last year just before OpenAI CEO Sam Altman was ousted by the board. The model in question was supposedly capable of solving grade-school math problems.

That might sound innocuous, but some inside the company believed it signaled a breakthrough in problem-solving capabilities that could accelerate progress towards artificial general intelligence, or AGI. Math has long been an Achilles heel for large language models, and capabilities in this area are seen as a good proxy for reasoning skills.

A source told Reuters that OpenAI has tested a model internally that achieved a 90 percent score on a challenging test of AI math skills, though it again couldn't confirm if this was related to project Strawberry. But another two sources reported seeing demos from the Q* project that involved models solving math and science questions that would be beyond today's leading commercial AIs.

Exactly how OpenAI has achieved these enhanced capabilities is unclear at present. The Reuters report notes that Strawberry involves fine-tuning OpenAI's existing large language models, which have already been trained on reams of data. The approach, according to the article, is similar to one detailed in a 2022 paper from Stanford researchers called "Self-Taught Reasoner," or STaR.

That method builds on a concept known as chain-of-thought prompting, in which a large language model is asked to explain the reasoning steps behind its answer to a query. In the STaR paper, the authors showed an AI model a handful of these chain-of-thought rationales as examples and then asked it to come up with answers and rationales for a large number of questions.

If it got the question wrong, the researchers would show the model the correct answer and then ask it to come up with a new rationale. The model was then fine-tuned on all of the rationales that led to a correct answer, and the process was repeated. This led to significantly improved performance on multiple datasets, and the researchers note that the approach effectively allowed the model to self-improve by training on reasoning data it had produced itself.
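To make the control flow of that loop concrete, here is a minimal sketch of one STaR-style iteration. It assumes a hypothetical generate_rationale model call and a hypothetical finetune training step (placeholders, not the Stanford authors' code); only the overall loop mirrors the description above.

```python
# Minimal sketch of one STaR ("Self-Taught Reasoner") iteration as described
# above. `generate_rationale` and `finetune` are hypothetical stand-ins for a
# real language model and its training API.

def generate_rationale(model, question, hint=None):
    """Hypothetical model call returning (rationale, answer). When a hint
    (the correct answer) is supplied, the model "rationalizes" a path to it."""
    if hint is not None:
        return f"Working backward from the known answer to: {question}", hint
    return f"Step-by-step attempt at: {question}", "unknown"  # placeholder


def finetune(model, examples):
    """Hypothetical fine-tuning step on (question, rationale, answer) triples."""
    print(f"Fine-tuning on {len(examples)} self-generated rationales")
    return model


def star_iteration(model, dataset):
    """Keep only rationales that reach the correct answer (retrying with a
    hint when the first attempt fails), then fine-tune on them."""
    training_examples = []
    for question, gold_answer in dataset:
        rationale, answer = generate_rationale(model, question)
        if answer != gold_answer:
            # Rationalization step: show the correct answer, ask for a new rationale.
            rationale, answer = generate_rationale(model, question, hint=gold_answer)
        if answer == gold_answer:
            training_examples.append((question, rationale, answer))
    return finetune(model, training_examples)


if __name__ == "__main__":
    toy_dataset = [("What is 12 * 12?", "144"), ("What is 7 + 8?", "15")]
    model = object()  # placeholder model handle
    for _ in range(3):  # the whole process is repeated across iterations
        model = star_iteration(model, toy_dataset)
```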

How closely Strawberry mimics this approach is unclear, but if it relies on self-generated data, that could be significant. The holy grail for many AI researchers is recursive self-improvement, in which weak AI can enhance its own capabilities to bootstrap itself to higher orders of intelligence.

However, it's important to take vague leaks from commercial AI research labs with a pinch of salt. These companies are highly motivated to give the appearance of rapid progress behind the scenes.

The fact that project Strawberry seems to be little more than a rebranding of Q*, which was first reported over six months ago, should give pause. As far as concrete results go, publicly demonstrated progress has been fairly incremental, with the most recent AI releases from OpenAI, Google, and Anthropic providing modest improvements over previous versions.

At the same time, it would be unwise to discount the possibility of a significant breakthrough. Leading AI companies have been pouring billions of dollars into making the next great leap in performance, and reasoning has been an obvious bottleneck on which to focus resources. If OpenAI has genuinely made a significant advance, it probably won't be long until we find out.

Image Credit: gemenu / Pixabay

Read more:

OpenAI's Project Strawberry Said to Be Building AI That Reasons and Does Deep Research - Singularity Hub

One of the Best Ways to Invest in AI Is Don't – InvestorPlace

Hello, Reader.

Investment trends come and go.

Their fleeting nature is a reminder that what's hot today may be forgotten tomorrow.

So, rather than succumbing to FOMO, or the fear of missing out, savvy investors often find value in embracing JOMO, the joy of missing out. Or, I should say, not missing out entirely, but rather looking where others aren't.

This perspective can lead to unique opportunities, particularly in the world of artificial intelligence. While AI isn't an investment trend that's likely to let up anytime soon (and it is one I will continue to follow), I do believe that one of the best ways to invest in AI may be to invest in what it isn't.

In other words, invest in the industries or assets that AI could never replace. Not even the Artificial General Intelligence (AGI) that is on its way.

No matter how intelligent AI becomes, it will never morph into timberland. It will never sprout into a lemon tree or transform itself into an ocean freighter, platinum ingot, espresso bean, or stretch of sandy beach.

A select few industries are so future-proof that they deserve our attention and a place in our portfolios.

So, in today's Smart Money, we'll explore my AI future-proof investing strategy and its potential for long-term success. I'll even reveal several specific sectors that could help secure and increase your wealth as we go further along the road to AGI.

Let's dive in…

Admittedly, the biggest gains from the next few years will come from investing directly in technologies that either facilitate AI or benefit from it.

But this high-reward approach also entails relatively high risks, simply because the future capabilities of AI are a known unknown. They are difficult to specify or quantify at this stage.

Perhaps, for example, a technology that facilitates the early stages of AI's development could become a victim of AGI's later development. In other words, it is a certainty that AI will continuously create and destroy tech-centric businesses as it grows and matures.

Therefore, I suspect a two-pronged approach to AI investing could deliver the optimal balance between risk and reward.

The first prong is to invest directly in the technologies or industries that seem likely to prosper in an increasingly AI-centric world. Many pharmaceutical and biotech companies would fall into this category. (That's why I am keeping my eye on AGI, in which AI systems are trained to achieve true human-like intelligence. I will present my findings soon in my premium service, The Speculator, so watch out for an invitation to that.)

Investing directly in AI beneficiaries offers the greatest promise of capturing future 10-baggers and staying ahead of the creative-destruction curve. But we are unlikely to connect on every swing.

That's why the second prong of my AI strategy is so valuable and essential.

This prong focuses on investing in the industries or assets that AI will never replace. These are things that an AI-centric world will require, no matter how intelligent it becomes.

A short list of examples might include industries like…

These industries might not be completely future-proof from the onslaught of AI, but they are at least close to it.

To expand, AI will certainly create fleets of completely autonomous, self-piloting freighters at some point. AI might also overhaul the drivetrains and/or fuel sources that power these ships, but it will not replace the ships themselves or the need to transport bulk goods across the Seven Seas.

Similarly, AI will not eliminate the need for trains or planes. Neither will it end demand for lumber, wheat, or pineapples. And it will not curb the human desire to travel. For as long as the robots of the future allow us humans to travel, we will continue to do so.

Importantly, many future-proof industries not only offer protection from the destructive side of AI, but they could also benefit immensely from its creative side. In many of these old-school industries, new AI-enabled processes could boost their efficiency and fatten profit margins.

Consider, for example, how AI might influence how people travel and enhance the overall travel experience…

These AI-enabled enhancements will not only improve travel experiences, but also boost the profitability of travel and tourism companies, all else being equal.

Investing in indispensable, future-proof industries like shipping or travel might not deliver spectacular gains over the coming years, but it should provide more reliable gains than many AI-focused tech stocks will deliver.

In fact, as we continue along the road to AGI (as we'll be discussing in much greater detail soon at The Speculator), the world's wealthiest investors have been moving their money out of the tech sector in what's being dubbed "The Great Cash-Out."

If you have any money in the markets, especially in tech stocks, you'll want to prepare for this coming exodus. Although JOMO has its place, this movement is one you won't want to miss out on.

So, check out this video from me for all the details.

Regards,

Eric Fry

More here:

One of the Best Ways to Invest in AI Is Don't - InvestorPlace

OpenAI is plagued by safety concerns – The Verge

OpenAI is a leader in the race to develop AI as intelligent as a human. Yet, employees continue to show up in the press and on podcasts to voice their grave concerns about safety at the $80 billion nonprofit research lab. The latest comes from The Washington Post, where an anonymous source claimed OpenAI rushed through safety tests and celebrated its product before ensuring its safety.

"They planned the launch after-party prior to knowing if it was safe to launch," an anonymous employee told The Washington Post. "We basically failed at the process."

Safety issues loom large at OpenAI and seem to just keep coming. Current and former employees at OpenAI recently signed an open letter demanding better safety and transparency practices from the startup, not long after its safety team was dissolved following the departure of cofounder Ilya Sutskever. Jan Leike, a key OpenAI researcher, resigned shortly after, claiming in a post that "safety culture and processes have taken a backseat to shiny products" at the company.

Safety is core to OpenAI's charter, with a clause that claims OpenAI will assist other organizations to advance safety if artificial general intelligence, or AGI, is reached at a competitor instead of continuing to compete. It claims to be dedicated to solving the safety problems inherent to such a large, complex system. OpenAI even keeps its proprietary models private, rather than open (causing jabs and lawsuits), for the sake of safety. The warnings make it sound as though safety has been deprioritized despite being so paramount to the culture and structure of the company.


"We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," OpenAI spokesperson Taya Christianson said in a statement to The Verge. "Rigorous debate is critical given the significance of this technology, and we will continue to engage with governments, civil society and other communities around the world in service of our mission."

The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. "Current frontier AI development poses urgent and growing risks to national security," a report commissioned by the US State Department in March said. "The rise of advanced AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons."

The alarm bells at OpenAI also follow the boardroom coup last year that briefly ousted CEO Sam Altman. The board said he was removed due to a failure to be consistently candid in his communications, leading to an investigation that did little to reassure the staff.

OpenAI spokesperson Lindsey Held told the Post the GPT-4o launch didn't cut corners on safety, but another unnamed company representative acknowledged that the safety review timeline was compressed to a single week. "We are rethinking our whole way of doing it," the anonymous representative told the Post. "This [was] just not the best way to do it."

Do you know more about what's going on inside OpenAI? I'd love to chat. You can reach me securely on Signal, where I'm @kylie.01, or via email at kylie@theverge.com.

In the face of rolling controversies (remember the Her incident?), OpenAI has attempted to quell fears with a few well-timed announcements. This week, it announced it is teaming up with Los Alamos National Laboratory to explore how advanced AI models, such as GPT-4o, can safely aid in bioscientific research, and in the same announcement, it repeatedly pointed to Los Alamos' own safety record. The next day, an anonymous spokesperson told Bloomberg that OpenAI created an internal scale to track the progress its large language models are making toward artificial general intelligence.

This week's safety-focused announcements from OpenAI appear to be defensive window dressing in the face of growing criticism of its safety practices. It's clear that OpenAI is in the hot seat, but public relations efforts alone won't suffice to safeguard society. What truly matters is the potential impact on people beyond the Silicon Valley bubble if OpenAI continues to fail, as insiders claim, to develop AI with strict safety protocols: the average person doesn't have a say in the development of privatized AGI, yet has no choice in how protected they'll be from OpenAI's creations.

"AI tools can be revolutionary," FTC Chair Lina Khan told Bloomberg in November. But as of right now, she said, there are concerns that the critical inputs of these tools are controlled by a relatively small number of companies.

If the numerous claims against the company's safety protocols are accurate, this surely raises serious questions about OpenAI's fitness for this role as steward of AGI, a role that the organization has essentially assigned to itself. Allowing one group in San Francisco to control potentially society-altering technology is cause for concern, and there's an urgent demand, even within its own ranks, for transparency and safety now more than ever.

View original post here:

OpenAI is plagued by safety concerns - The Verge