Archive for the ‘Artificial General Intelligence’ Category

Congressional panel outlines five guardrails for AI use in House – FedScoop

A House panel has outlined five guardrails for deployment of artificial intelligence tools in the chamber, providing more detailed guidance as lawmakers and staff explore the technology.

The Committee on House Administration released the guardrails in a flash report on Wednesday, along with an update on the committee’s work exploring AI in the legislative branch. The guardrails are human oversight and decision-making; clear and comprehensive policies; robust testing and evaluation; transparency and disclosure; and education and upskilling.

“These are intended to be general, so that many House Offices can independently apply them to a wide variety of different internal policies, practices, and procedures,” the report said. “House Committees and Member Offices can use these to inform their internal AI practices. These are intended to be applied to any AI tool or technology in use in the House.”

The report comes as the committee and its Subcommittee on Modernization have focused on AI strategy and implementation in the House, and is the fifth such document it has put out since September 2023.

According to the report, the guardrails are a product of a roundtable the committee held in March that included participants such as the National Institute of Standards and Technology’s Elham Tabassi, the Defense Department’s John Turner, the Federation of American Scientists’ Jennifer Pahlka, the House chief administrative officer, the clerk of the House, and senior staff from lawmakers’ offices.

The roundtable represented the first known instance of elected officials directly discussing AI’s use in parliamentary operations, the report said. The report added that templates for the discussion were also shared with the think tank Bússola Tech, which works on modernization of parliaments and legislatures.

Already, members of Congress are experimenting with AI tools for things like research assistance and drafting, though use doesn’t appear widespread. Meanwhile, both chambers have introduced policies to rein in use. In the House, the CAO has approved only ChatGPT Plus, while the Senate has allowed use of ChatGPT, Microsoft Bing Chat, and Google Bard with specific guardrails.

Interestingly, AI was used in the drafting of the committee’s report, modeling the transparency guardrail the committee outlined. A footnote in the document discloses that “early drafts of this document were written by humans. An AI tool was used in the middle of the drafting process to research editorial clarity and succinctness. Subsequent reviews and approvals were human.”


The Potential and Perils of Advanced Artificial General Intelligence – elblog.pl

Artificial General Intelligence (AGI) presents a new frontier in the evolution of machine capabilities. In essence, AGI stands as a level of artificial intelligence where machines are equipped to tackle any intellectual task that a human being can perform. Unlike narrow AI that excels in specific tasks such as image recognition or weather forecasting, AGI stretches its capacity to learning, self-improvement, and adaptability across various situations, emulating human-like intellect.

The development and application of AGI is a double-edged sword. The technology holds promise for immense societal benefits, such as resolving intricate problems, enhancing the quality of life, and offering support across sectors including healthcare, scientific research, and resource management.

On the flip side, the rise of AGI comes with significant risks and challenges. There’s a tangible fear that uncontrolled AGI could become overpowering and autonomous, making decisions that might lead to dire consequences for humanity. AGI’s efficiency in performing tasks could also result in job displacement across numerous professions. Furthermore, although AGI could lead to the creation of powerful information systems, it may simultaneously raise concerns about data security and privacy.

It’s clear that while AGI harbors the potential for tremendous advantages, it is essential for society to carefully weigh and prepare for the potential risks and challenges that may arise from its advancement and utilization.

The Ethical and Moral Implications of AGI are substantial. As we imbue machines with human-like intelligence, questions arise about the rights of these intelligent systems, and how they fit into our moral and legal frameworks. There is an ongoing debate concerning whether AGIs should be granted personhood or legal protections, similar to those afforded to humans and animals.

Control and Alignment Issues with AGI pose critical challenges. Ensuring that AGI systems act in ways that are aligned with human values and do not diverge from intended goals is a complex problem known as the alignment problem. Researchers are working on developing safety measures to ensure that AGIs remain under human control and are beneficial rather than detrimental.

Advantages of AGI:
- Problem Solving: AGI can potentially solve complex issues that are beyond human capability, including those relating to climate change, medicine, and logistics.
- Acceleration of Innovation: AGI may dramatically speed up the pace of scientific and technological discovery, leading to rapid advancements in various fields.
- Efficiency and Cost Savings: By automating tasks, AGI can increase efficiency and reduce costs, making goods and services more affordable and accessible.

Disadvantages of AGI:
- Job Displacement: AGI could automate jobs across many sectors, leading to mass unemployment and economic disruption.
- Safety and Security: The difficulty of predicting the behavior of AGI systems makes them a potential risk to global security, and AGI could be used for malicious purposes if not properly regulated.
- Loss of Human Skills: Over-reliance on AGI could lead to the degradation of human skills and knowledge.

Most Important Questions regarding AGI:
1. How can we ensure that AGI will align with human values? Developing robust ethical frameworks and control mechanisms is crucial.
2. What are the implications of AGI for employment and the workforce? Proactive strategies are necessary to address job displacement, including retraining and education.
3. How can we protect against the misuse of AGI? International cooperation and regulation are key to preventing the weaponization or malicious use of AGI.

Key Controversies:
- Regulation: There is debate over what forms of regulation are appropriate for AGI to encourage innovation while ensuring safety.
- Accessibility: Concerns exist about who should have access to AGI technology and whether it could exacerbate inequality.
- Economic Impact: The potential transformation of the job market and economy by AGI is contested, with differing views on how to approach the transition.

For more information on AI and related topics, you can visit these organizations: DeepMind, OpenAI, and the Future of Life Institute.

These links direct you to organizations actively involved in the development and research of advanced AI technologies and their implications.


DeepMind Head: Google AI Spending Could Exceed $100 Billion – PYMNTS.com

Google’s top AI executive says the company’s spending on the technology will surpass $100 billion.

While speaking Monday (April 15) at a TED Conference in Vancouver, DeepMind CEO Demis Hassabis was asked about recent reports of Microsoft and OpenAI’s planned artificial intelligence (AI) supercomputer known as “Stargate,” said to cost $100 billion.

“We don’t talk about our specific numbers, but I think we’re investing more than that over time,” said Hassabis, whose comments were reported by Bloomberg News.

Hassabis, who co-founded DeepMind in 2010 before it was bought by Google four years later, did not offer further details on the potential AI investment, the report said. He also told the audience Google’s computer power surpasses that of competitors like Microsoft.

“That’s one of the reasons we teamed up with Google back in 2014, is we knew that in order to get to AGI we would need a lot of compute,” he said, referring to artificial general intelligence, or AI that surpasses the intelligence of humans.

“That’s what’s transpired,” he said. “And Google had and still has the most computers.”

Hassabis added that the massive interest kicked off by OpenAI’s ChatGPT AI model demonstrated the public was ready for the technology, even if AI systems are still prone to errors.

As PYMNTS wrote earlier this month, the Stargate project spotlights the increasing role of AI in fueling innovation and determining the future of commerce. Experts believe that as tech giants invest heavily in AI research and infrastructure, the creation of sophisticated AI systems could revolutionize areas like personalized marketing and supply chain optimization.

“It is important to consider the potential impact on jobs and the workforce,” Jiahao Sun, founder and CEO at FLock.io, a platform for decentralized AI models, said in an interview with PYMNTS.

“As AI becomes more capable in multimodal tasks and more integrated into commerce, it may automate industries that currently cannot easily be transferred into a chatbot interface, such as manufacturing, healthcare, sports coaching, etc.”

Microsoft and OpenAI’s $100 billion project could make AI chips more scarce, leading to more price spikes and leaving more businesses and governments behind due to limited access to hardware, NeuReality CEO and co-founder Moshe Tanach told PYMNTS, while adding that projects like Stargate will drive commerce forward in the short term.

“The installed hardware will fuel more AI projects, features and use cases, leading Microsoft to offer it at consumable prices, driving innovation on the consumer side with secondary use cases built on this accessible AI technology,” Tanach said.


Say hi to Tong Tong, world’s first AGI child-image figure – ecns

Tong Tong, the world's first virtual child-image figure based on AGI technology. (Photo provided to chinadaily.com.cn)

Beijing Institute for General Artificial Intelligence (BIGAI) created the world's first virtual child-image figure named Tong Tong, based on artificial general intelligence (AGI) technology, said the institute in Beijing on Wednesday.

Tong Tong has been trained using the TongOS2.0 AGI operating system and the TongPL2.0 programming language, a self-developed learning and reasoning framework. This training equips Tong Tong with abilities in active vision, comprehension, communication, and many other attributes.

"Tong Tong possesses a complete mindset and value system similar to that of a three or four-year-old child. Currently, it is still undergoing rapid iterations and will enter various aspects of our lives," said Zhu Songchun, director of BIGAI.

Tong Tong has the potential to assist in real-life scenarios in the future, such as smart homes, health management, education and training, and social interaction. According to BIGAI, Tong Tong can provide users with a more intelligent, personalized, and adaptable digital human for industry applications.

"AGI is the most powerful driver of new quality productive forces," Zhu added.

In addition to strengthening research and development in high-tech innovation, BIGAI has also focused on cultivating talent in the field of AGI.


AI stocks aren’t like the dot-com bubble. Here’s why – Quartz

David Godes remembers his first year as a Harvard Business School professor, when young graduate students started dropping out like flies. It was 2000, the dawn of the modern internet, and would-be Harvard MBA grads thought they’d be better off starting or joining nascent dot-com companies.

They didn’t know it was all a bubble of historic proportions.

“It was crazy,” recalled Godes, whose class of more than 100 quickly shrank to about 80 that year. Even faculty left academia to get in on the early internet frenzy. “It was a FOMO thing,” he said. “You know, I’ve got to be part of this. All my friends from undergrad are part of startups.”

Investors ultimately threw too much money at risky startups like Pets.com, pushing their stocks far above levels justified by their underlying businesses. Eventually it all came crashing down, with the bubble’s burst leading to trillions in lost market cap before the early-2000s recession.

Today’s craze over generative artificial intelligence is different, Godes said. He now teaches at Johns Hopkins, and his students aren’t leaving for Silicon Valley any time soon. They’ve got a healthy skepticism of the emerging technology, he said. That’s just one reason why he sees excitement about AI as entirely unlike the early internet era.

Generative AI has enthralled investors to the tune of many billions of dollars over the last year. Companies that make AI hardware and software, especially the chip giant Nvidia, have seen their stocks skyrocket. It’s led skeptics to warn of another tech bubble that will inevitably burst. One economist said earlier this year that the AI craze has companies even more overvalued than in the late ’90s.

But to those on the other side of the debate, the sense of alarm is short-sighted.

“When we had the internet bubble the first time around that was hype. This is not hype,” JPMorgan Chase CEO Jamie Dimon told CNBC in February. “It’s real.”

“Generative AI is the most disruptive technology since the internet,” said Gil Luria, an analyst with D.A. Davidson.

But there’s a skepticism about AI that’s unlike the dot-com era, Godes said. Much of the political and cultural conversation about AI is doom and gloom: state-sponsored groups using it to meddle in elections, chatbots sending disturbing messages, AI making music that imitates real artists, and, of course, the ongoing debate over whether AI will take people’s jobs. (It’s complicated.)

“It’s sort of more of a sense of dread than a sense of wonder,” Godes said.

People were skeptical about the internet, too. But now, with the evolution of the internet as a cultural frame of reference, fears about AI’s downsides are more defined. Governments across the globe, academic institutions, and even companies making AI software are studying its potential risks with a level of scrutiny that wasn’t present in the 1990s.

The dot-com hype was so big that by the spring of 1999, one in 12 Americans surveyed said they were in the process of starting a business. The bubble started to form in the mid-1990s and burst in 2000. There was a massive influx of cash for internet-related tech companies as global interest in personal computers and the World Wide Web exploded. It all happened as the U.S. was experiencing its longest period of economic expansion since the post-World War II era.


Internet companies including Priceline, Pets.com, and eToys went public, captivating investors who sent their market values soaring, all while ignoring their shaky business fundamentals. Banks had a lot of cash as the Fed kept printing money in 1999, and they shoved that money into those same dot-coms. That fall, the 199 internet stocks tracked by Morgan Stanley were valued at a collective $450 billion even as their actual businesses lost a combined $6.2 billion. Pets.com went bankrupt less than a year after it went public.

“There were websites in the late ’90s that just made no sense,” Godes said. “There was nothing complicated about the technology.”

AI startups are different, he said, because the technology is quite complicated.

“It’s harder for an MBA student without technical training to put together a business plan and go out there and start [an AI] business,” he said.

D.A. Davidson’s Gil Luria takes issue with even using the word “bubble” for AI. Assets can become inflated and enter a bubble, he said, while their underlying technology goes through cycles. Like all new technology, AI may be in a hype phase. But that doesn’t mean all AI-related companies’ values are over-inflated, Luria said.

There’s an important difference between stock rallies for AI hardware companies and those of AI software producers, Luria said. While the share prices of a handful of companies, especially Microsoft, have gotten big boosts from AI, that’s because AI software actually boosted their profits, unlike the websites of the dot-com boom. Today’s AI software stocks are still trading “reasonably within range of their historical [price] multiples,” Luria said. (In other words, while Big Tech’s stock prices are a lot higher than they used to be, their price-to-earnings, sales, and free-cash-flow ratios aren’t radically different.) And the software those companies make will continue to boost sales for years.
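The point about multiples can be made concrete with a quick worked example. The figures below are purely hypothetical, not any real company’s financials: they just show that when earnings grow roughly in step with the share price, the price-to-earnings multiple stays flat even as the stock soars.

```python
# Hypothetical, illustrative figures only -- not real company financials.

def pe_ratio(share_price: float, earnings_per_share: float) -> float:
    """Price-to-earnings multiple: what investors pay per dollar of earnings."""
    return share_price / earnings_per_share

# Before a rally: a $250 share price on $10 of earnings per share.
before = pe_ratio(250.0, 10.0)  # 25.0

# After the rally: the price is up 40%, but earnings also rose 40%.
after = pe_ratio(350.0, 14.0)   # still 25.0

# The stock is much more expensive in dollars, yet no more expensive
# relative to the profits backing it -- the multiple is unchanged.
print(before, after)
```

This is the distinction Luria draws: a rising price alone doesn’t signal a bubble if the underlying earnings rose with it.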

But hardware is a one-time sale, he said, so the bigger disappointment could be in the hardware stocks. Luria likened the AI chipmaking giant Nvidia to Cisco Systems, a company whose products helped build the early infrastructure of the internet and whose collapse came to define the dot-com era. Nvidia’s chips are to AI what Cisco’s networking hardware was to the early internet, Luria said.

“We had enough tools by 1999 and 2000. We had enough equipment and fiber and routers to support the growth of the internet for years to come,” Luria said. “And that’s what we believe is the point in time we’re at now. By the end of this year, Microsoft, Amazon, Google, and the like will have enough [AI chips].”


Cisco stock plummeted 80% between 2001 and 2002 when revenues fell short of expectations, as demand for its networking hardware sank from record heights. As with Cisco, Luria said, demand for AI hardware won’t continue at its breakneck pace.

“If investors are counting on the current growth rates for equipment hardware that supports the growth of AI to continue,” he said, “they may be disappointed.”

He pointed to Nvidia’s own biggest customers making their own AI chips. Just this month, Google and Meta, two of Nvidia’s top five buyers, released the latest iterations of their own custom AI chips. While Meta’s isn’t powering its AI applications just yet, Google’s AI chatbot Gemini is being run on its new chip. Because Nvidia’s top five customers make up two-thirds of its revenues, Big Tech shifting its AI hardware in-house could seriously hurt Nvidia’s bottom line, Luria said.

Even among the experts who see an AI bubble forming, many say it won’t end as badly as the dot-com burst. Richard Windsor, founder of the research firm Radio Free Mobile, said people are using “convoluted and untested methods to justify very high valuations for [AI] companies,” like they did during the dot-com era.

But, he said, “the internet bubble bursting [was] worse than the AI bubble bursting will be.” That’s partly because even in its immature form today, AI is capable of generating substantially greater revenues than the internet was in the 1990s and early 2000s. The internet in the 1990s “was super slow,” he said, and it took a long time to realize its full potential. Meanwhile, Windsor said he sees AI’s full potential as ultimately limited. Even if the AI bubble bursts, what the internet became will be bigger than what AI will become in its current form, Windsor said.

Windsor said one of the reasons he sees AI models as ultimately limited is that machines can’t tell the difference between causality and correlation.

“Because of that, they will never really get to the point where they can be super intelligent, because they cannot reason,” Windsor said.


Windsor said he doesn’t know when the AI bubble will burst, but there are signs to look for, including price erosion, when the price of a product falls over time due to customer demand and competition. Windsor said he is already seeing indications of price erosion starting to take hold. Those signs include OpenAI letting people use its products without an account, which Windsor said looks like the company trying to get more users, and the search engine Perplexity AI starting to sell advertisements despite previously saying search should be free from the influence of advertising-driven models, which Windsor sees as a sign its monetization hasn’t gone well. He also pointed to surveys indicating large companies are wary about deploying generative AI, due largely to safety and security fears.

“The general expectation out there in the market at the moment is artificial general intelligence is on the way,” Windsor said. “I respectfully disagree with that statement.”

Luria sees Nvidia stock coming back to Earth in 12 to 18 months.

“We may not see the top of the hype until maybe even next year,” he said. “But when we do, there’s going to be a lot of people that are going to be very disappointed.”
