Archive for the ‘Artificial General Intelligence’ Category

The fallout from the weirdness at OpenAI – The Economist

Five very weird days passed before it seemed that Sam Altman would stay at OpenAI after all. On November 17th the board of the maker of ChatGPT suddenly booted out its chief executive. On the 19th it looked as if Mr Altman would move to Microsoft, OpenAI's largest investor. But employees at the startup rose up in revolt, with almost all of them, including one of the board's original conspirators, threatening to leave were Mr Altman not reinstated. Between frantic meetings, the top brass tweeted heart emojis and fond messages to each other. By the 21st, things had come full circle.

All this seems stranger still considering that these shenanigans were taking place at the world's hottest startup, which had been expected to reach a valuation of nearly $90bn. In part, the weirdness is a sign of just how quickly the relatively young technology of generative artificial intelligence has been catapulted to glory. But it also holds deeper and more disturbing lessons.

One is the sheer power of AI talent. As the employees threatened to quit, the message "OpenAI is nothing without its people" rang out on social media. Ever since ChatGPT's launch a year ago, demand for AI brains has been white-hot. As chaos reigned, both Microsoft and other tech firms stood ready to welcome disgruntled staff with open arms. That gave both Mr Altman and OpenAI's programmers huge bargaining power and fatally undermined the board's attempts to exert control.

The episode also shines a light on the unusual structure of OpenAI. It was founded in 2015 as a non-profit research lab aimed at safely developing artificial general intelligence (AGI), which can equal or surpass humans in all types of thinking. But it soon became clear that this would require vast amounts of expensive processing power, if it were possible at all. To pay for it, a profit-making subsidiary was set up to sell AI tools, such as ChatGPT. And Microsoft invested $13bn in return for a 49% stake.

On paper, the power remained with the non-profit's board, whose aim is to ensure that AGI benefits everyone, and whose responsibility is accordingly not to shareholders but to humanity. That illusion was shattered as the employees demanded Mr Altman's return, and as the prospect loomed of a rival firm housed within profit-maximising Microsoft.

The chief lesson is the folly of relying solely on corporate structures to police technology. As the potential of generative AI became clear, the contradictions in OpenAI's structure were exposed. A single outfit cannot strike the best balance between advancing AI, attracting talent and investment, assessing AI's threats and keeping humanity safe. Conflicts of interest in Silicon Valley are hardly rare. Even if the people at OpenAI were as brilliant as they think they are, the task would be beyond them.

Much about the board's motives in sacking Mr Altman remains unknown. Even if the directors did genuinely have humanity's interest at heart, they risked seeing investors and employees flock to another firm that would charge ahead with the technology regardless. Nor is it entirely clear what qualifies a handful of private citizens to represent the interests of Earth's remaining 7.9bn inhabitants. As part of Mr Altman's return, a new board is being appointed. It will include Larry Summers, a prominent economist; an executive from Microsoft will probably join him, as may Mr Altman.

Yet personnel changes are not enough: the firm's structure should also be overhauled. Fortunately, in America there is a body that has a much more convincing claim to represent the common interest: the government. By drafting regulation, it can set the boundaries within which companies like OpenAI must operate. And, as a flurry of activity in the past month shows, politicians are watching AI. That is just as well. The technology is too important to be left to the whims of corporate plotters.

Continue reading here:

The fallout from the weirdness at OpenAI - The Economist

How an ‘internet of AIs’ will take artificial intelligence to the next level – Cointelegraph

HyperCycle is a decentralized network that connects AI machines to make them smarter and more profitable. It enables companies of all sizes to participate in the emerging AI computing economy.

Artificial intelligence (AI) is a rapidly evolving field that seems likely to fall into the hands of major companies or organizations with nationally driven budgets. One might think that only these have the massive financial resources to generate the computing power to train and ultimately own AI.

Recent events at OpenAI, the developer of the AI chatbot ChatGPT, highlight the challenges of centralized AI development. The firing of CEO Sam Altman and the resignation of co-founder Greg Brockman raise questions about governance and decision-making in centralized AI entities and highlight the need for a more decentralized approach. Balaji Srinivasan, a former chief technology officer at Coinbase, has become a staunch proponent of increased transparency in the realm of AI, advocating the adoption of decentralized AI systems.

In addition to centralization, there's a lot of fragmentation in the AI space, meaning cutting-edge systems are unable to communicate with one another. Moreover, a high degree of centralization brings considerable security risks and reliability issues. And, given the vast amounts of computing power needed, efficiency and speed are key.

To achieve the full potential of an AI that answers to all of humanity, we need a different approach: one that decentralizes AI and allows AI systems to communicate with each other, eliminating the need for intermediaries. This would improve AI systems' time to market, intelligence and profitability. While many systems are currently specialized in specific tasks, such as voice or facial recognition, a future shift to artificial general intelligence could allow one system to undertake a wide range of tasks simultaneously by delegating those tasks to multiple AIs.

As mentioned above, the AI industry is currently dominated by large corporations and institutional investors, making it difficult for individuals to participate. HyperCycle, a novel ledgerless blockchain architecture, emerges as a transformative solution, aiming to democratize AI by establishing a fast and secure network that empowers everyone, from large enterprises to individuals, to contribute to AI computing.

HyperCycle is powered by a layer 0++ blockchain technology that enables rapid, cost-effective microtransactions between diverse, interconnected AI agents that collectively solve problems.

This "internet of AIs" allows systems to interact and collaborate directly without intermediaries, addressing the slow, costly processes of the siloed AI landscape.

This is particularly timely, as the number of machine-to-machine (M2M) connections globally is increasing rapidly.

For instance, existing companies could interact with HyperCycle's AIs specializing in IoT, blockchain, and supply chain management to optimize logistics for clients, predict maintenance before breakdowns occur, and ensure seamless data integrity. By enabling this interconnected ecosystem of decentralized AIs, HyperCycle can lead to operational efficiency and innovation in service offerings.

HyperCycle has also partnered with Penguin Digital to create HyperPG, a service that connects all the network beneficiaries together. HyperPG uses Paraguay's abundant hydropower to provide a green and efficient source of energy for AI computing.

One of HyperCycle's key features is the HyperAiBox, a plug-and-play device that allows individuals and organizations to perform AI computations at home, reducing their reliance on large corporations with vast data centers. The compact, low-power box is about the size of a modem, has a touchscreen, and lets nodes be operated from home, with network participants compensated for the resources they provide to the network.

The launch of HyperCycle's mainnet, ahead of schedule, highlights the network's rapid growth. Currently, over 59,000 initial nodes are providing Uptime to the network by covering operational expenses. An additional 230,000 single licenses will soon join the ecosystem. This expansion indicates a strong demand for over 295 million HyPC tokens, reflecting the network's engagement and growth.

The three key metrics of Uptime, Computation, and Reputation incentivize node operators to maintain high standards, ensuring a stable, secure, and decentralized network environment.

Since June 2023, HyperCycle's network has been operational, scaling up as demand increases. Source: HyperCycle

AI remains at a nascent stage, but HyperCycle's goal is to anticipate the challenges that might stand in this technology's way and break down barriers to entry, making AI more accessible and affordable to everyone.

Disclaimer. Cointelegraph does not endorse any content or product on this page. While we aim to provide you with all the important information we could obtain in this sponsored article, readers should do their own research before taking any actions related to the company and carry full responsibility for their decisions. Nor can this article be considered investment advice.

Read more:

How an 'internet of AIs' will take artificial intelligence to the next level - Cointelegraph

OpenAI Is Seeking Additional Investment in Artificial General … – AiThority

OpenAI is seeking the support of its most significant benefactor

In an interview, Sam Altman, the chief executive officer of the artificial intelligence (AI) firm, revealed his intention to obtain further financial support from Microsoft. Microsoft has already committed $10 billion to finance AGI, software designed to emulate human intelligence. Altman stated that his company's collaboration with Microsoft and its CEO Satya Nadella was extremely fruitful and that he anticipated raising a substantial amount more over time from Microsoft and other investors to cover the expenses associated with developing more complex AI models. When asked whether Microsoft would persist, Altman responded, "I certainly hope so." "There is still much computing to develop between now and AGI," he continued. "Training costs are simply enormous." He made these remarks following last week's Developer Day, where OpenAI unveiled a marketplace showcasing its finest applications, a suite of new tools and enhancements to GPT-4, and a revenue-sharing model with the most popular GPT creators.

In the meantime, PYMNTS has recently examined the obstacles the government faces in its efforts to regulate AI. Understanding how the technology operates, and acquiring the expertise needed to supervise it, are among the most urgent matters.

In contrast to historical AI implementations such as machine learning and predictive forecasting, which have become ubiquitous in various aspects of daily life, generative AI capabilities introduce a novel approach to automating and producing outputs in domains such as investment research, risk management, trading, and fraud detection.


Read more here:

OpenAI Is Seeking Additional Investment in Artificial General ... - AiThority

Top AI researcher launches new Alberta lab with Huawei funds after … – The Globe and Mail


Richard Sutton, a computer scientist and well-known AI researcher, at home in Edmonton, Alta., on Nov. 23. Prof. Sutton is launching a new AI research institute in Edmonton with funding from Huawei. Photo: Amber Bracken/The Globe and Mail

One of the country's most accomplished artificial intelligence researchers is launching a new non-profit lab with $4.8-million in funding from Huawei Canada, after the federal government restricted the Chinese company's ability to work with publicly funded universities.

Richard Sutton, a professor at the University of Alberta and a pioneer in the field of reinforcement learning, says the Openmind Research Institute will fund researchers following the Alberta Plan, a 12-step guide he co-authored last year that lays out a framework for pursuing the development of AI agents capable of human-level intelligence.

Openmind will be based in Edmonton and kicks off Friday with a weekend retreat in Banff.

Canada banned the use of equipment from Huawei in 5G networks last year, citing the company as a security risk because of its connections to the Chinese government, which could use the company for espionage. Huawei has long denied the accusation.

Jim Hinton, a Waterloo, Ont.-based patent lawyer and senior fellow at the Centre for International Governance Innovation, said Huawei's involvement with Openmind raises concerns. "Even if the money is coming with as little strings attached as possible, there is still soft power that is being wielded," he said. "The fact that they're holding the purse strings gives a significant amount of control."

In 2021, Ottawa started restricting funding for research collaborations between publicly funded universities and entities with links to countries considered national security risks, including China. Alberta has implemented similar restrictions for sensitive research at a provincial level. Artificial intelligence is particularly sensitive because the technology has military applications and can be used for nefarious purposes.

"I hope that it could counter that narrative and be an example of how things could be really good," Prof. Sutton said of Openmind and Huawei's funding. "This is a case where the interaction with China has been really productive, really valuable in contributing to open AI research in Canada."

All of the work done by Openmind, which is separate from Prof. Sutton's role at the University of Alberta, will be open-source, and the institute will not pursue intellectual property rights.

Nor will Huawei. "I was a little bit surprised that they were willing to do something so open and with no attempt at control," said Prof. Sutton, who has a long-standing relationship with Huawei in Alberta.

Huawei did not respond to requests for comment.

Although the Chinese company has been shut out of 5G networks and restricted in working with universities in Canada, it can still work directly with individual researchers.

Companies linked to China's military, like Huawei, will try to find other ways around the federal rules, including directly funding researchers outside university institutions. "It appears Huawei is doing exactly that," said Margaret McCuaig-Johnston, a senior fellow at the Institute for Science, Society and Policy at the University of Ottawa. "China pushes the envelope as far as they can."

Prof. Sutton literally wrote the textbook on reinforcement learning, an approach to developing AI agents capable of performing actions in an environment to achieve a goal. Reinforcement learning is everywhere in the world of AI, including in autonomous vehicles and in how chatbots such as ChatGPT are polished to sound more human.
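For readers unfamiliar with the technique, its core loop can be sketched in a few lines: an agent tries actions, observes rewards, and gradually improves its estimate of which action is best in each state. Below is a minimal tabular Q-learning example on a toy corridor environment; the environment, parameters, and rewards are invented for illustration and are not drawn from Prof. Sutton's own code.

```python
import random

# Minimal tabular Q-learning on a toy 5-cell corridor: the agent starts at
# the left end and earns +1 for reaching the right end.
N_STATES = 5
ACTIONS = (-1, +1)                  # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: clamp to the corridor; goal is the right end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best next action
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# The learned greedy policy should step right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The "polishing" of chatbots mentioned above uses a far more elaborate variant of the same idea, with human preference ratings standing in for the corridor's +1 reward.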

Born in the United States, Prof. Sutton completed a PhD at the University of Massachusetts in 1984 and worked in industry before returning to academia. He joined the University of Alberta in 2003, where he founded the Reinforcement Learning and Artificial Intelligence Lab. He left the U.S. for Canada partly because of his opposition to the politics of former president George W. Bush and the country's military campaigns abroad.

Alphabet Inc. tapped him in 2017 to lead the company's AI research office in Edmonton through its DeepMind subsidiary, but shut it down in January as part of a company-wide restructuring.

The closing left Prof. Sutton with unfinished business, in a sense. His goal is to "understand intelligence," as he puts it, a necessary undertaking if we are to build truly intelligent agents. His work at the university is one avenue to pursue that goal, as is his recent post with Keen Technologies, a U.S. AI startup founded by former Meta Platforms Inc. consulting chief technology officer John Carmack. Keen raised US$20-million last year, including from Shopify founder Tobi Lütke.

Openmind is one more way to pursue that goal, Prof. Sutton said. Although large language models, which power chatbots like ChatGPT, have garnered a lot of attention, he isn't particularly interested in them. "It's a good, useful thing, but it's kind of a distraction," he said.

He is far more interested in building AI applications capable of complex decision-making and achieving goals, which many refer to as artificial general intelligence, or AGI. "I imagine machines doing all the different kinds of things that people do," he said. "They will interact and find, just like people do, that the best way to get ahead is to work with other people."

Prof. Sutton will sit on the Openmind governing board along with University of Alberta computer science professor Randy Goebel and Joseph Modayil, who previously worked at DeepMind. Mr. Modayil is also Openmind's research director.

"Understanding the mind is a grand scientific challenge that has driven my work for more than two decades," he said in an e-mail.

A committee that includes Alberta Plan co-authors and U of A professors Michael Bowling and Patrick Pilarski will select the research fellows. Openmind's research agenda will be set independently from its funding sources, according to a backgrounder on the institute provided by Prof. Sutton.

The briefing also notes that Openmind researchers will be "natural candidates" for founding startups and commercializing research outside the non-profit. Although there may be no legal obligation for an Openmind researcher to work with Openmind donors, "familiarity, trust, and consilient perspectives would make this a likely outcome," according to the backgrounder.

The backing from Huawei puts the company in a better position to work with Openmind talent, Mr. Hinton said. Even though the research will be open-source, foreign multinational companies such as Huawei are often more equipped to capitalize on it than Canadian firms, which have a poor track record of protecting intellectual property and capturing the economic benefits that come with innovation.

Canadian governments review transactions involving foreign companies and physical assets, such as mines, to ensure the domestic economy benefits. But they fall short with IP. "When it comes to intangible assets, we don't understand how that works," Mr. Hinton said.

Prof. Sutton is a big proponent of open-source work and has a dim view of IP, saying that the focus on ownership can slow down innovation. "You are interacting with lawyers and spending a lot of time and money on things that aren't advancing the research," he said. "It just doesn't seem like it's worked at all for computer science IP."

He is open to more funding for Openmind and said that if donors are uncomfortable with Huawei's involvement they can also support AI research through the reinforcement learning lab at the University of Alberta. Openmind is adamant that Huawei cannot influence the non-profit's research, he added, and said he would decline further funding if the company attempted to do so.

"I see this as a purely positive and mutually beneficial way for Huawei and academic researchers to interact," he said. "It may not last, but while it does, it is entirely a good thing."

View post:

Top AI researcher launches new Alberta lab with Huawei funds after ... - The Globe and Mail

Will AI Replace Humanity? – KDnuggets

We are living in a world of probabilities. When I started talking about AI and its implications years ago, the most common question was: "Is AI coming after us?"

And while the question remains the same, my response has shifted toward probabilities: AI is more likely to replace human judgment in certain areas, and that probability has increased over time.

As we are discussing a complex technology, the answer will not be straightforward. It depends on several factors, such as what it means to be intelligent, whether we mean replacing jobs, the anticipated timelines for Artificial General Intelligence (AGI), and the capabilities and limitations of AI.

Let us start with understanding the definition of Intelligence:

Stanford defines intelligence as "the ability to learn and perform suitable techniques to solve problems and achieve goals, appropriate to the context in an uncertain, ever-varying world."

Gartner describes it as the ability to analyze, interpret events, support and automate decisions, and take action.

AI is good at learning patterns; however, mere pattern recognition does not qualify as intelligence. It is only one aspect of the broader, multi-dimensional spectrum of human intelligence.

As some experts argue, AI will never get there because machines cannot have a sense (rather than mere knowledge) of the past, the present, and the future; of history, injury or nostalgia. Without that, there's no emotion, depriving bi-logic of one of its components. Thus, machines remain trapped in singular formal logic. So there goes the intelligence part.

Some might refer to AI clearing tests from prestigious institutes and, most recently, the Turing test as a testament to its intelligence.

For the unversed, the Turing test is an experiment designed by Alan Turing, a renowned computer scientist. According to the test, a machine possesses human-like intelligence if an evaluator cannot distinguish its responses from those of a human.

A comprehensive overview of the test highlights that though Generative AI models can generate natural language based on the statistical patterns or associations learned from vast training data, they do not have human-like consciousness.

Even advanced tests, such as the General Language Understanding Evaluation, or GLUE, and the Stanford Question Answering Dataset, or SQuAD, share the same underlying premise as that of Turing.

Let us start with the fear that is fast becoming a reality: will AI make our jobs redundant? There is no clear yes-or-no answer, but the moment is fast approaching as generative AI casts a wider net over automation opportunities.

McKinsey reports: "By 2030, activities that account for up to 30 percent of hours currently worked across the US economy could be automated, a trend accelerated by generative AI."

Profiles like office support, accounting, banking, sales, and customer support are first in line for automation. Generative AI's augmentation of software developers in code-writing and testing workflows has already affected the job roles of junior developers.

Generative AI's output is often considered a good starting point for an expert to refine further, for example when producing marketing copy or promotional content.

Some narratives make this transformation sound subtle by highlighting the possibility of new job creation, such as in healthcare, science, and technology in the near term, along with roles in AI ethics, AI governance, audits, AI safety, and more to make AI a reality overall. However, these new jobs may not outnumber those being replaced, so we must consider the net new jobs created to see the final impact.

Next comes the possibility of AGI, which, like intelligence itself, has multiple definitions and warrants a clear one. Generally, AGI refers to the stage at which machines gain sentience and awareness of the world similar to a human's.

However, AGI is a topic that deserves a post on its own and is not under the scope of this article.

For now, we can take a leaf from the diary of DeepMind's CEO to understand its early signs.

Looking at the broader picture, AI is intelligent enough to help humans identify patterns at scale and generate efficiencies.

Let us substantiate this with an example in which a supply chain planner reviews many order details and works to fulfill the orders at risk of a shortfall. Each planner has a different approach to managing shortfall deliveries.

Since an individual planner is limited in their view of and approach to such situations, machines can learn the optimal approach by observing the actions of many planners, and can help automate the easy scenarios through their ability to discover patterns.
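A toy sketch of that idea follows; the situations, actions, and agreement threshold are invented for illustration, not taken from any real planning system. The program tallies which action each planner took in each situation, then automates only the situations where planners strongly agree, escalating ambiguous ones back to a human.

```python
from collections import Counter, defaultdict

# Hypothetical history of (situation, action) pairs recorded from many planners.
history = [
    ("low_stock", "expedite"), ("low_stock", "expedite"), ("low_stock", "expedite"),
    ("low_stock", "split_order"),
    ("late_supplier", "reroute"), ("late_supplier", "reroute"),
    ("late_supplier", "expedite"), ("late_supplier", "split_order"),
]

# Tally how often each action was taken in each situation.
by_situation = defaultdict(Counter)
for situation, action in history:
    by_situation[situation][action] += 1

def recommend(situation, agreement_threshold=0.7):
    """Automate only when most planners agree; otherwise defer to a human."""
    counts = by_situation[situation]
    action, n = counts.most_common(1)[0]
    if n / sum(counts.values()) >= agreement_threshold:
        return action                  # easy scenario: automate it
    return "escalate_to_planner"       # ambiguous: keep the human in the loop

print(recommend("low_stock"))       # planners agree 3 times out of 4
print(recommend("late_supplier"))   # no clear consensus among planners
```

Real systems would learn from far richer order features than a single label, but the design choice is the same: automate the high-agreement cases and route the rest to people.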

This is where machines have a vantage point over humans' limited ability to manage several attributes or factors simultaneously.

However, machines are what they are, i.e., mechanical. You cannot expect them to cooperate, collaborate, and develop compassionate relationships with teams as empathetically as great leaders do.

I frequently engage in lighter team discussions, not because I have to but because I prefer working in an environment where I am connected with my team and they know me well, too. It is too mechanical to talk only about work from the get-go, or to act as if that is all that matters.

Take another instance in which a machine analyzes a patient's records and discloses a health scare as-is, straight from its medical diagnosis. Compare this with how a doctor would handle the situation thoughtfully, simply because they have emotions and know what it feels like to be in a crisis.

The most successful healthcare professionals go beyond their call of duty and develop a connection with the patient to help them through difficult times, which machines are not good at.

Machines are trained on data that captures some underlying phenomenon, and they build models that best estimate it.

Somewhere in this estimation, the nuances of specific conditions get lost. Machines also lack a moral compass like the one a judge brings to each case.

To summarize, machines may learn patterns from data (and the bias that comes with it) but do not have the intelligence, drive, or motivation to make fundamental changes to handle the issues plaguing humanity. They are objective-focused and built on top of human intelligence, which is complex.

This phrase sums up my thoughts well: AI can replace human brains, not beings.

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.

Go here to read the rest:

Will AI Replace Humanity? - KDnuggets