Archive for the ‘Artificial General Intelligence’ Category

OpenAI says there are 5 'levels' for AI to reach human intelligence, and it's already almost at level 2 – Quartz

OpenAI CEO Sam Altman at the AI Insight Forum in the Russell Senate Office Building on Capitol Hill on September 13, 2023 in Washington, D.C. Photo: Chip Somodevilla ( Getty Images )

OpenAI is undoubtedly one of the leaders in the race to reach human-level artificial intelligence, and it's reportedly four steps away from getting there.


The company shared with employees this week a five-level system it developed to track its progress toward artificial general intelligence, or AGI, an OpenAI spokesperson told Bloomberg. The levels go from the currently available conversational AI to AI that can perform the same amount of work as an organization. OpenAI will reportedly share the levels with investors and people outside the company.

While OpenAI executives believe the company is on the first level, the spokesperson said it is close to level two, defined as Reasoners: AI that can perform basic problem-solving at the level of a human with a doctorate degree but no access to tools. The third level of OpenAI's system is reportedly called Agents, AI that can perform different actions for several days on behalf of its user. The fourth level is reportedly called Innovators, and describes AI that can help develop new inventions.
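For reference, the reported taxonomy can be captured as a simple enum. This is a sketch based on the Bloomberg reporting summarized here; the exact labels used inside OpenAI may differ.

```python
# The five reported levels of OpenAI's AGI progress tracker, as an enum.
from enum import Enum

class AGILevel(Enum):
    CHATBOTS = 1       # conversational AI (where OpenAI says it is today)
    REASONERS = 2      # basic human-level problem-solving, no tools
    AGENTS = 3         # systems that act on a user's behalf for days
    INNOVATORS = 4     # AI that helps develop new inventions
    ORGANIZATIONS = 5  # AI that can do the work of an entire organization

print(AGILevel(2).name)  # REASONERS
```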

OpenAI leaders also showed employees a research project with GPT-4 that demonstrated it has human-like reasoning skills, Bloomberg reported, citing an unnamed person familiar with the matter. The company declined to comment further.

The system was reportedly developed by OpenAI executives and leaders, who can eventually change the levels based on feedback from employees, investors, and the company's board.

In May, OpenAI disbanded its Superalignment team, which was responsible for working on the problem of AI's existential dangers. The company said the team's work would be absorbed by other research efforts across OpenAI.


AI's Bizarro World: we're marching towards AGI while carbon emissions soar – Fortune

Happy Friday! I've been covering AI as a daily beat for two and a half years now, but recently I've been feeling like we are living in a kind of Bizarro World, the fictional planet in DC Comics (also made famous in Seinfeld) where everything is opposite (beauty is hated, ugliness is prized, goodbye is hello), leading to distorted societal norms, moral values, and logical reasoning.

In AI's Bizarro World, a company like OpenAI can blithely tell employees about creating a five-point checklist to track progress toward building artificial general intelligence (AGI), or AI that is capable of outperforming humans, as Bloomberg reported yesterday, in a bid towards developing AGI that benefits all of humanity. At the same time, media headlines can blare about Google's and Microsoft's soaring carbon emissions due to computationally intensive and power-hungry generative AI models, to the detriment of all of humanity.

In AI's Bizarro World, the public is encouraged (and increasingly mandated by their employers) to use tools like OpenAI's ChatGPT and Google's Gemini to increase productivity and boost efficiency (or, let's be honest, just save a little bit of mental energy). In the meantime, according to a report by Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity as a Google search query. So while millions of Americans are advised to turn down their air conditioning to conserve energy, millions are also asking ChatGPT for an energy-sucking synonym, recipe, or haiku.
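That "nearly 10 times" figure is easy to sanity-check. The back-of-the-envelope arithmetic below uses the commonly cited per-query estimates of roughly 0.3 Wh for a Google search and roughly 2.9 Wh for a ChatGPT query; those specific numbers are my assumption for illustration, not figures stated in this article.

```python
# Rough per-query energy comparison. Both figures are estimates, not
# measurements, and real numbers vary by model, query, and data center.
GOOGLE_SEARCH_WH = 0.3   # estimated watt-hours per traditional search
CHATGPT_QUERY_WH = 2.9   # estimated watt-hours per ChatGPT query

ratio = CHATGPT_QUERY_WH / GOOGLE_SEARCH_WH
print(f"A ChatGPT query uses ~{ratio:.1f}x the electricity of a search")

# Scaled up: energy for one million queries of each kind, in kilowatt-hours.
queries = 1_000_000
print(f"1M searches: {queries * GOOGLE_SEARCH_WH / 1000:,.0f} kWh")
print(f"1M ChatGPT queries: {queries * CHATGPT_QUERY_WH / 1000:,.0f} kWh")
```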

In AI's Bizarro World, AI frontier model companies including OpenAI, Anthropic, and Mistral can raise billions of dollars at massive valuations to develop their models, but it is the companies with the picks and shovels they rely on (hello, Nvidia GPUs) that rake in the most money and stock market value for their energy-intensive processes and physical parts.

In AI's Bizarro World, Elon Musk can volunteer his sperm for those looking to procreate in a planned Martian city built by SpaceX, while a proposed supercomputer in Memphis, meant for his AI company xAI, is expected to add about 150 megawatts to the electric grid's peak demand, an amount that could power tens of thousands of homes.

Of course, there is always a certain amount of madness that goes along with developing new technologies. And the potential for advanced AI systems to help tackle climate change issues (to predict weather, identify pollution, or improve agriculture, for example) is real. In addition, the massive costs of developing and running sophisticated AI models will likely continue to put pressure on companies to make them more energy-efficient.

Still, as Silicon Valley and the rest of California suffer through ever-hotter summers and restricted water use, it seems like sheer lunacy to simply march towards the development of AGI without being equally concerned about data centers guzzling scarce water resources, AI computing power burning excess electricity, and Big Tech companies quietly stepping away from previously touted climate goals. I don't want Bizarro Superman to guide us toward an AGI future on Bizarro World. I just want a sustainable future on Earth, and hopefully, AI can be a part of it.

Sharon Goldman sharon.goldman@fortune.com

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

Today's edition of Data Sheet was curated by David Meyer.

X could face EU fine. The European Commission says Elon Musk's X has broken the new Digital Services Act, which governs online content, in multiple ways. That includes deceiving users into thinking its paid-for blue checkmarks denote authenticity, not complying with rules about ad transparency, and stopping researchers from accessing its public data. X now gets to defend itself, but, if the Commission confirms its preliminary findings, it could issue a fine of up to 6% of global revenue and demand big changes to how X operates.

Apple antitrust. An investigation by India's antitrust body found that Apple has been abusing its position as App Store proprietor by forcing developers to use its billing and payments systems, Reuters reports. Again, the regulator can hit Apple with a fine and tell it to change its ways.

SoftBank buys Graphcore. Japan's SoftBank, which has been promising to go all in on AI, has bought the British AI chip company Graphcore. Graphcore, which counts Nvidia and Arm among its rivals, had been hemorrhaging money for a couple of years and was desperately seeking a buyer. According to TechCrunch, Graphcore CEO Nigel Toon dismissed the reported $500 million figure for the acquisition as inaccurate, but the companies aren't providing financial details about the deal.

The number of AT&T customers affected by someone's illegal downloading of call and text records relating to several months in 2022. The FBI is involved and one person has been arrested, Reuters reports. AT&T reckons the data is not publicly available.

Tesla walks back Robotaxi reveal, sending its stock plummeting, by Bloomberg

65,000 mugs have gone missing at Tesla's German factory, by Marco Quiroz-Gutierrez

Amazon's $20 billion NBA deal isn't riskless. But it's close, by Jason Del Rey

Amazon trails behind in latest U.K. compliance test and is threatened with investigation over poor supplier treatment, by Bloomberg

70,000 students are already using AI textbooks, by Sage Lazzaro

How we raised $100 million for my Silicon Valley startup in a down market, by Amir Khan (Commentary)

This 84-year-old quit an elite job and went $160K into debt to launch his career. Now he's suing ChatGPT to protect writers like him from highway robbery, by the Associated Press

COPIED Act. There's a bipartisan push in the Senate to give artists and journalists more protection against voracious AI models. As The Verge reports, the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act would see the creation of security measures that could be added to content to prove its origin and potentially block its use in training AI models. Removing or tampering with these watermarks would be illegal.


AI News Today July 15, 2024 – The Dales Report

Welcome to AI News Today, your daily summary of the AI Industry.

OpenAI's Path to Achieving Artificial General Intelligence

OpenAI has introduced a five-level system to track progress toward Artificial General Intelligence (AGI), starting at Level 1, which represents current AI capabilities in conversational interactions. The ultimate goal is Level 5, where AI systems can perform the work of an entire organization autonomously. Read all about it on the TDR Website!

Virginia Congresswoman Advocates for AI Voice Technology

Congresswoman Jennifer Wexton is pushing for advancements in AI voice technology to enhance accessibility for individuals with disabilities. Her advocacy highlights the potential of AI in creating more inclusive communication tools.

SoftBank Acquires British AI Chipmaker Graphcore

SoftBank has acquired British AI chipmaker Graphcore, aiming to strengthen its position in the AI hardware market. This acquisition is part of SoftBank's broader strategy to invest in cutting-edge AI technologies.

Older Workers Key to AI Understanding

Older workers bring valuable experience and understanding to the AI field, bridging the gap between traditional practices and new technologies. Their insights are crucial for the successful integration of AI in various industries.

Market Correction Sparks Profit-Taking in Tech and AI Sectors

"Yesterday was the wake-up call many expected and wanted in order to start taking at least some profits in Mag 7 tech and semi-AI winners," Mizuho Securities trading-desk analyst Jordan Klein said in a client note Friday.

OpenAI Develops Advanced Tool Strawberry

OpenAI is building a new advanced AI tool called Strawberry, designed to enhance user interaction and AI capabilities. This tool aims to push the boundaries of what AI can achieve in practical applications.

Research Shows AI Chatbots Enhance Creativity

Research indicates that AI chatbots can boost creativity in writing. These findings suggest that AI tools could play a significant role in creative industries by providing new avenues for inspiration and innovation.

Big Techs Talent Poaching Under Scrutiny

The ongoing issue of Big Tech companies poaching talent is raising concerns about market competition and innovation. This practice is drawing attention from regulators and industry observers alike.

Whistleblowers and SEC Investigate OpenAI Over NDAs

Whistleblowers have prompted an SEC investigation into OpenAI over allegations of illegal non-disclosure agreements. This investigation could have significant implications for OpenAI's operational transparency and legal practices.

Read more AI news on the TDR Website!

Want to be updated on Cannabis, AI, Small Cap, and Crypto? Subscribe to our Daily Baked in Newsletter!


The Evolution Of Artificial Intelligence: From Basic AI To ASI – Welcome2TheBronx

In the realm of artificial intelligence (AI), we currently operate at the level of Large Language Models (LLMs), while Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) remain in the future. Understanding these different levels is crucial, as each represents a significant advancement in the capabilities of AI. Let us explore these levels in detail.

Basic AI, often referred to as Narrow AI or Weak AI, represents the most fundamental level of artificial intelligence. This type of AI is designed to perform specific tasks and operates within a predefined set of parameters. It lacks the ability to understand broader concepts or learn beyond its initial programming.

Basic AI systems excel at performing repetitive or narrowly defined tasks but are limited to their specific function and cannot adapt to new tasks or situations. Common examples of Basic AI include spam filters, chess engines, and product recommendation systems.

The primary limitation of Basic AI is its lack of generalization. These systems cannot transfer knowledge from one domain to another or improve their performance through learning beyond their initial programming. They operate purely based on the data and instructions they have been given.
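As an illustration (my example, not from the article), the rule-bound nature of Basic AI can be seen in a toy keyword spam filter: it applies only its predefined parameters and has no mechanism to learn or generalize beyond them.

```python
# A minimal sketch of Narrow AI: a fixed-rule spam filter. It operates
# purely on predefined keywords and cannot improve through learning.
SPAM_KEYWORDS = {"winner", "free", "prize", "claim now"}

def is_spam(message: str) -> bool:
    """Flag a message if it contains any hardcoded spam keyword."""
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spam("You are a WINNER! Claim now!"))    # True: matches fixed rules
print(is_spam("Congratulations on your reward"))  # False: novel phrasing slips through
```

A spam message that avoids the listed keywords passes straight through, which is exactly the lack of generalization described above.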

Large Language Models (LLMs) represent a more advanced form of AI, specializing in understanding and generating natural language. These models are capable of comprehending the context and meaning of text, allowing them to produce coherent and contextually relevant responses.

LLMs, such as GPT (Generative Pre-trained Transformer), are trained on vast amounts of text data. They learn patterns, grammar, and context from this data, enabling them to generate human-like text. Applications of LLMs include chatbots, machine translation, and text summarization.

The key advantage of LLMs is their ability to understand and generate natural language, which allows for more dynamic and flexible interactions with users. LLMs can be fine-tuned for specific tasks, improving their performance and accuracy over time.

Despite their advanced capabilities, LLMs still operate within the confines of their training data. They can generate impressive results but do not possess true understanding or consciousness; their responses are based on patterns learned from data rather than genuine comprehension.
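A toy sketch (my illustration, vastly simpler than a real LLM but similar in spirit) makes the point concrete: a character-level bigram model learns only surface statistics from its training text, yet can still generate text that mimics it.

```python
# A character-level bigram model: count which character follows which in a
# tiny corpus, then sample from those counts. No understanding is involved,
# only statistics, which is the limitation described above.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran"

# "Training": record every observed character-to-character transition.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

# "Generation": repeatedly sample a next character from what followed the
# current one in training.
random.seed(0)  # fixed seed for reproducibility
ch, out = "t", ["t"]
for _ in range(20):
    ch = random.choice(transitions[ch])
    out.append(ch)
print("".join(out))  # corpus-flavored gibberish driven purely by statistics
```

Every character in the output follows its predecessor somewhere in the training text, yet the model comprehends nothing; an LLM's patterns are enormously richer, but the underlying principle is the same.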

Artificial General Intelligence (AGI) represents a significant leap in AI development. Unlike Narrow AI, AGI has the ability to understand, learn, and apply knowledge across a wide range of tasks, much like a human being.

AGI systems would possess cognitive abilities comparable to human intelligence: they could learn from experience, adapt to new situations, and perform a wide variety of tasks without requiring task-specific programming. A hypothetical AGI, for example, could move between domains as different as medical diagnosis and legal analysis without being reprogrammed for each one.

The development of AGI holds immense potential. It could revolutionize industries by performing tasks that currently require human intelligence. However, achieving AGI poses significant challenges, including ensuring safety, ethical considerations, and the sheer complexity of creating an AI that can understand and interact with the world at a human level.

The advent of AGI would raise profound ethical and societal questions. Issues such as job displacement, privacy, and the moral status of intelligent machines would need careful consideration. Ensuring that AGI systems are aligned with human values and do not pose risks to society is a critical concern.

Artificial Superintelligence (ASI) represents the pinnacle of AI development. ASI would surpass human intelligence in every aspect, from creativity to problem-solving abilities, and would be capable of driving unprecedented advancements in science and technology.

ASI would possess cognitive abilities far beyond those of humans. It could solve complex problems, create new technologies, and make discoveries that are currently beyond human reach, with vast potential applications across science, medicine, and engineering.

The development of ASI also presents significant risks. The immense power and intelligence of ASI could potentially be misused or result in unintended consequences, so ensuring that ASI is developed and controlled responsibly is paramount, with safety and alignment with human values as key considerations.

The journey from Basic AI to ASI represents a profound evolution in the field of artificial intelligence. Each level (Basic AI, LLMs, AGI, and ASI) brings unique capabilities and challenges. While we currently operate at the LLM level, the future holds the promise of AGI and ASI, which could transform our world in unimaginable ways.

Understanding these different levels is crucial for navigating the ethical, societal, and technological implications of AI development. As we progress towards more advanced forms of AI, it is essential to ensure that these technologies are developed responsibly, with a focus on enhancing human well-being and addressing global challenges.


What Elon Musk and Ilya Sutskever Feared About OpenAI Is Becoming Reality – Observer

OpenAI CEO Sam Altman has previously discussed his desire to achieve human-level reasoning in A.I. Justin Sullivan/Getty Images

As part of OpenAI's path towards artificial general intelligence (A.G.I.), a term for technology matching the intelligence of humans, the company is reportedly attempting to enable A.I. models to perform advanced reasoning. Such work is taking place under a secretive project code-named Strawberry, as reported by Reuters, which noted that the project was previously known as Q* or Q Star. While its name may have changed, the project isn't exactly new: researchers and co-founders of OpenAI have previously warned against the initiative, which reportedly played a part in the brief ousting of Sam Altman as OpenAI's CEO in November.

Strawberry uses a unique method of post-training A.I. models, a process that improves their performance after they have been trained on datasets, according to Reuters, which cited internal OpenAI documents and a person familiar with the project. With the help of deep-research datasets, the company aims to create models that display human-level reasoning. OpenAI is reportedly looking into how Strawberry can allow models to complete tasks over an extended period of time, search the web on their own and take action on their findings, and perform the work of engineers. OpenAI did not respond to requests for comment from Observer.

Altman, who has previously reiterated OpenAI's desire to create models able to reason, briefly lost control of his company last year when his board fired him for four days. Shortly before the ousting, several OpenAI employees had become concerned over breakthroughs presented by what was then known as Q*, a project spearheaded by Ilya Sutskever, OpenAI's former chief scientist. Sutskever himself had reportedly begun to worry about the project's technology, as did OpenAI employees working on A.I. safety at the time. After his reinstatement, Altman referred to news reports about Q* as an unfortunate leak in an interview with The Verge.

Elon Musk, another OpenAI co-founder, has also raised the alarm about Q* in the past. The billionaire, who severed ties with the company in 2018, referred to the project in a lawsuit filed against OpenAI and Altman that has since been dropped. While discussing OpenAI's close partnership with Microsoft (MSFT), Musk's suit claimed that the terms of the deal dictate that Microsoft only has rights to OpenAI's pre-A.G.I. technology and that it is up to OpenAI's board to determine when the company has achieved A.G.I.

Musk argued that OpenAI's GPT-4 model constitutes A.G.I., which he believes poses a grave threat to humanity, according to the suit. Court filings stated that OpenAI is currently developing a model known as Q* that has an even stronger claim to A.G.I.

Recent internal meetings have suggested that OpenAI is making rapid progress toward the type of human-level reasoning that Strawberry is working on. In an OpenAI all-hands meeting held earlier this month, the company unveiled a five-tiered system to track its progress towards A.G.I., as reported by Bloomberg. While the company said it is currently on the first level, known as chatbots, it revealed that it has nearly reached the second level of reasoners, which involves technology that can display human-level problem-solving. The subsequent steps consist of A.I. systems acting as agents that can take actions, innovators that aid in invention, and organizations that do the work of an organization.
