Archive for the ‘Artificial General Intelligence’ Category

Amazon unleashes Q, an AI assistant for the workplace – Ars Technica

The Amazon Q logo. Credit: Amazon

On Tuesday, Amazon unveiled Amazon Q, an AI chatbot similar to ChatGPT that is tailored for corporate environments. Developed by Amazon Web Services (AWS), Q is designed to assist employees with tasks like summarizing documents, managing internal support tickets, and providing policy guidance, differentiating itself from consumer-focused chatbots. It also serves as a programming assistant.

According to The New York Times, the name "Q" is a play on the word "question" and a reference to the character Q in the James Bond novels, who makes helpful tools. (And there's apparently a little bit of Q from Star Trek: The Next Generation thrown in, although hopefully the new bot won't cause mischief on that scale.)

Amazon Q's launch positions it against existing corporate AI tools like Microsoft's Copilot, Google's Duet AI, and ChatGPT Enterprise. Unlike some of its competitors, Amazon Q isn't built on a singular AI large language model (LLM). Instead, it uses a platform called Bedrock, integrating multiple AI systems, including Amazon's Titan and models from Anthropic and Meta.
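Amazon has not published details of how Q routes requests among these models, but Bedrock itself is a public AWS API in which the same runtime call can be pointed at different underlying models. As a rough, hypothetical sketch of that multi-model pattern (not Q's internals; the model IDs come from Bedrock's public catalog, and the prompts and payload shapes follow Bedrock's 2023-era documentation):

    import json
    import boto3

    # One Bedrock runtime client can invoke models from several providers.
    # Region and model IDs are assumptions; enable them in your own account.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    def ask_titan(prompt: str) -> str:
        # Amazon's Titan text models take an "inputText" payload.
        body = json.dumps({"inputText": prompt})
        resp = client.invoke_model(modelId="amazon.titan-text-express-v1", body=body)
        return json.loads(resp["body"].read())["results"][0]["outputText"]

    def ask_claude(prompt: str) -> str:
        # Anthropic's Claude models on Bedrock use a different prompt schema.
        body = json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": 300,
        })
        resp = client.invoke_model(modelId="anthropic.claude-v2", body=body)
        return json.loads(resp["body"].read())["completion"]

    print(ask_titan("Summarize our expense policy in two sentences."))

Switching providers is essentially a change of modelId and payload schema, which is presumably what lets a single assistant draw on Titan, Anthropic, and Meta models behind one interface.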

"Developers can use Amazon Q to explain specific programming logic by asking questions (e.g., Provide me with a description of what this application does and how it works)," writes Amazon in a press release. "And Amazon Q will give details like which services the code uses and what different functions do (e.g., This application is building a basic support ticketing system using Python Flask and AWS Lambda), along with a description of the applications core capabilities, how they are implemented, and more."

Notably, Amazon did not reveal performance benchmarks for Q that would allow us to evaluate its capabilities versus chatbot solutions from other providers. As of press time, we have not experimented with Q yet.

Amazon Q promotional video on YouTube.

Following significant investments in AI, including a partnership with AI-startup Anthropic and the development of AI-tuned GPU chips, Amazon has intensified its AI focus. The Q announcement came as part of a series of reveals at Amazon's annual cloud-computing conference, re:Invent 2023, including plans to create yet another new AI chip for its data centers.

Amazon Q is priced at $20 per user per month, which is lower than Microsoft and Google's enterprise AI solutions, which are priced at $30 per user per month. Amazon Q is available now "in preview in AWS Regions US East (N. Virginia) and US West (Oregon)," according to the company.
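At those list prices, the arithmetic is simple: a 1,000-seat organization would pay roughly $20 × 1,000 × 12 = $240,000 a year for Q, against $360,000 for the $30-per-user rivals.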

It's worth noting that the Amazon Q name is apparently unrelated to recent rumors about an OpenAI breakthrough called "Q*" (pronounced "Q-star") that caused premature hype over the development of AGI (artificial general intelligence), with experts calling the claims largely overblown.

More here:

Amazon unleashes Q, an AI assistant for the workplace - Ars Technica

You’re not imagining things: The end of the -3- – Morningstar

Even if India does not become a formal ally to Western countries, it will continue to position itself as an independent, rising power whose interests are more aligned with the West than with China and its de facto allies (Russia, Iran, North Korea, and Pakistan). Moreover, India is a formal member of the Quadrilateral Security Dialogue (the Quad) with the U.S., Japan, and Australia, the explicit purpose of which is to deter China. Japan and India have longstanding friendly relations and a shared history of adversarial relations with China.

Japan also invited Indonesia, South Korea (with which it is pursuing a diplomatic thaw, driven by common concerns about China), Brazil (another key Global South power), and Ukrainian President Volodymyr Zelensky to the G7. In each case, the message was clear: The Sino-Russian friendship "without limits" is having serious consequences for how other powers perceive China.

In its final communiqué, the G7 explained at length how it will confront and deter China in the years ahead. It decried Chinese "economic coercion" and expansionism in the East and South China Seas, stressed the importance of an Indo-Pacific partnership, and issued a clear warning to China not to attack or invade Taiwan.

In taking steps to "de-risk" their relationships with China, Western leaders settled on language that is only slightly less aggressive than "de-coupling." But it isn't just the diplomatic argot that has changed. According to the communiqué, Western containment efforts will be accompanied by large investments in clean energy and infrastructure across the Global South, lest key middle powers be drawn into China's sphere of influence through its Belt and Road Initiative.

Meanwhile, the Western-Chinese tech and economic war continues to escalate. Japan recently imposed restrictions on semiconductor exports to China that are no less draconian than those introduced by the U.S., and the Biden administration has since pressured Taiwan and South Korea to follow suit. In response, China has banned semiconductors made by the U.S.-based chipmaker Micron Technology (MU), and has begun to restrict exports of some critical metals over which it has a near-monopoly in production and refining.

Likewise, U.S. chipmaker Nvidia (NVDA) - which is quickly becoming a corporate superpower, owing to surging demand for its advanced chips to power AI applications - is facing new constraints on selling to China. U.S. policymakers have made clear that they intend to keep China at least a generation behind in the race for AI supremacy. To that end, the U.S. CHIPS and Science Act of 2022 introduced massive incentives to re-shore chip production.

The risk now is that China will leverage its dominant role in producing and refining rare-earth metals that are key inputs in the green transition. China has already increased its exports of electric vehicles by about 700% in value terms since 2017, and it is starting to deploy commercial airliners that eventually could compete with Boeing (BA) and Airbus (FR:AIR). So, while the G7 wants to deter China without escalating the cold war, the response from Beijing suggests that it has failed to thread the needle.

The U.S. and China cold war will mean more fragmentation of the global economy.

Of course, the Chinese would like to forget that their own aggressive policies contributed to the situation. In interviews marking his 100th birthday in May, Henry Kissinger - the architect of America's "opening to China" in 1972 - warned that unless the U.S. and China find a new strategic understanding, they will remain on a collision course that could end in outright war. The deeper the freeze, the greater the risk of a violent crack-up and military hostilities this decade.

Even without an actual hot war between the U.S. and China, a colder war will mean more fragmentation of the global economy, more balkanization of global supply chains, more de-risking or decoupling, and more restrictions on cross-border flows of goods, services, capital, people, data, and knowledge. Neoliberal free trade is out; industrial policies, "homeland economics," subsidies, and secure trade are in, as the world increasingly divides into two economic, monetary, financial, currency, trade, investment, and technological domains.

Climate risks are climbing

At the same time, the costs of climate change will continue to increase rapidly. Scientists now expect global average temperatures to reach 1.5° Celsius above pre-industrial levels - the Paris climate agreement target - in the next five years. To hold temperature increases there, greenhouse-gas emissions would have to be cut by half by 2030, which is basically impossible. Even if all the commitments made at COP26 in Glasgow and COP27 in Sharm El-Sheikh were to be met - a very big if - temperatures would still be on track to hit 2.4°C above pre-industrial levels by the end of this century.
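To make "cut by half by 2030" concrete: assuming a 2023 baseline, halving emissions over seven years requires a constant annual reduction r satisfying (1 - r)^7 = 0.5, so r = 1 - 0.5^(1/7) ≈ 9.4% per year, every year. For comparison, the 2020 pandemic shutdowns cut global emissions by only about 5%.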

Humanity's handling of climate change amounts to a slow-motion, but accelerating, train wreck.

In the absence of real action, greenwashing, greenwishing and greenflation have become rampant. The good news is that there are many technological options that can accelerate decarbonization and help us achieve net-zero emissions with limited impact on economic growth: renewable energy, carbon capture and storage, clean and green hydrogen, and nuclear fusion.

The bad news is that fusion is still a long way from commercialization, and many of the other options remain costly compared to fossil fuels. Humanity's handling of climate change amounts to a slow-motion, but accelerating, train wreck.

Making matters worse, poorer emerging markets and developing countries are facing dire economic prospects. After an anemic recovery from the COVID pandemic, they bore the brunt of higher food and energy prices following Russia's invasion of Ukraine. Higher inflation has eroded real incomes, and their currencies have weakened against the U.S. dollar (DX00). This, combined with higher interest rates, has left many nations with unsustainable debts. The International Monetary Fund and the World Bank estimate that about 60% of poor countries and 25% of emerging markets cannot service their debts and will need to restructure them.

Social strife, political instability and AI's rise

Against this backdrop, increased poverty, climate change, inequality, and social strife could easily lead to domestic political instability or even failed states, causing mass migration and fueling the trend toward economic populism. Most of Latin America is now ruled by left-wing populists, while far-right authoritarian populism is on the rise in other parts of the world.

In the U.S., former president Donald Trump is the clear favorite to win the Republican Party's nomination for the 2024 presidential election, and could well retake the White House. In the U.K., the demagogic Boris Johnson remains very popular. A party with fascist roots is running Italy, and the far-right Marine Le Pen remains the de facto opposition leader in France. In Turkey, the recently re-elected President Recep Tayyip Erdogan continues to consolidate autocratic rule. Until the Hamas attack, Israel was governed by the most right-wing coalition in its history. And, of course, Russian President Vladimir Putin and China's Xi have formed a new authoritarian axis.

Finally, in the year since Megathreats appeared, AI has become an even bigger topic, owing to the public release of generative AI platforms like ChatGPT. I had originally predicted that deep-learning architectures ("transformer networks") would revolutionize AI, and that does seem to be what has happened. The potential benefits - and pitfalls - of generative AI are profound, and they are becoming increasingly clear. On the positive side, productivity growth could be sharply increased, vastly enlarging the economic pie; but, as was true of the first digital revolution and the creation of the internet and its applications, it will take time for such gains to emerge and achieve scale.

The risks associated with AI are also becoming clear. Many worry about permanent technological unemployment, not just among low-skilled blue-collar workers, but also across creative professions. In an extreme scenario, the economy two decades from now could be growing at a rate of 10% per year, but with unemployment at 80%. A related risk, then, is that AI will be another winner-takes-all industry that turbocharges income and wealth inequality.

AI also will have a similar effect on disinformation, including through "deep fake" videos, and various forms of cyber-warfare, especially around elections. And, of course, there is the small but terrible risk that advances in AI will lead to AGI (artificial general intelligence) and the obsolescence of the human species.

The debate over whether tech firms should be regulated more strictly, or even broken up, continues to intensify. But the obvious counter-argument is that America needs Big Tech and AI firms to assure its dominance over China, which is doing everything it can to become a military superpower.

Fortunately, if AI does usher in a world of 10% annual growth, a UBI or substantially more income redistribution could well be possible. Moreover, AI also could help us address other megathreats such as climate change and future pandemics. While none of these positive outcomes can be taken for granted, given the power and influence that elites wield, problems of distribution are always easier to tackle in a high-growth setting than in a low-growth one.

While stagflationary forces will weigh on growth and exacerbate megathreats in the medium term, the future could be bright if we can avert a dystopian scenario in which megathreats destructively feed on each other. Our first priority will be to survive the next few decades of instability and chaos.


See more here:

You're not imagining things: The end of the -3- - Morningstar

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say – Reuters

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math where there is only one right answer implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.
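That variability is a direct consequence of how these systems decode text: the model scores every candidate next token, and the decoder samples from the resulting distribution rather than always taking the top choice. A minimal sketch of that step (invented numbers, not any real model's output):

    import math
    import random

    # Toy next-token scores for the question "What is 2 + 2?".
    logits = {"4": 2.0, "five": 0.5, "four": 1.2}

    def sample_next(logits, temperature=1.0):
        # Softmax with temperature; as temperature -> 0 this approaches
        # greedy decoding (always the single most likely token).
        scaled = {tok: score / temperature for tok, score in logits.items()}
        z = sum(math.exp(s) for s in scaled.values())
        probs = {tok: math.exp(s) / z for tok, s in scaled.items()}
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print([sample_next(logits) for _ in range(5)])        # answers vary
    print([sample_next(logits, 0.01) for _ in range(5)])  # near-deterministic

Sampling is what makes a chatbot's answers to the same question differ between runs, and it is exactly what a task with one right answer, like arithmetic, punishes.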

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest-growing software applications in history and drew from Microsoft the investment, and computing resources, necessary to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker

Our Standards: The Thomson Reuters Trust Principles.

Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google where she worked in user insights and helped run a call center. Tong graduated from Harvard University. Contact: 415-237-3211

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022.

Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.

Link:

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say - Reuters

What the OpenAI drama means for AI progress and safety – Nature.com

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. Credit: Justin Sullivan/Getty

OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was "not consistently candid in his communications with the board" and later adding that the decision had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practice".

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to "ensure that artificial general intelligence benefits all of humanity".

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to "superalignment", a four-year project attempting to ensure that future superintelligences work for the good of humanity.

It's unclear whether Altman and Sutskever are at odds about the speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.
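At a caricature level, "statistical correlations between words" can be shown with a bigram model: count which word follows which in a corpus, then generate by sampling likely successors. Real LLMs such as GPT-3.5 learn vastly richer patterns with neural networks, but the predict-the-next-word principle is the same (toy corpus invented for illustration):

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Record every observed successor of each word, with multiplicity,
    # so random.choice below samples in proportion to bigram counts.
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def generate(word="the", length=8):
        out = [word]
        for _ in range(length):
            word = random.choice(follows[word])
            out.append(word)
        return " ".join(out)

    print(generate())  # e.g. "the cat sat on the rug . the dog"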

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies (Google, Microsoft and Amazon), potentially creating a race for dominance between these controlling giants.

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI): a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on the timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5 to 20 years," he says.

The imminent dangers of AI are related to its use as a tool by human bad actors: people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons [1]. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's "superalignment" mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."

Follow this link:

What the OpenAI drama means for AI progress and safety - Nature.com

The fallout from the weirdness at OpenAI – The Economist


Five very weird days passed before it seemed that Sam Altman would stay at OpenAI after all. On November 17th the board of the maker of ChatGPT suddenly booted out its chief executive. On the 19th it looked as if Mr Altman would move to Microsoft, OpenAI's largest investor. But employees at the startup rose up in revolt, with almost all of them, including one of the board's original conspirators, threatening to leave were Mr Altman not reinstated. Between frantic meetings, the top brass tweeted heart emojis and fond messages to each other. By the 21st, things had come full circle.

All this seems stranger still considering that these shenanigans were taking place at the world's hottest startup, which had been expected to reach a valuation of nearly $90bn. In part, the weirdness is a sign of just how quickly the relatively young technology of generative artificial intelligence has been catapulted to glory. But it also holds deeper and more disturbing lessons.

One is the sheer power of AI talent. As the employees threatened to quit, the message "OpenAI is nothing without its people" rang out on social media. Ever since ChatGPT's launch a year ago, demand for AI brains has been white-hot. As chaos reigned, Microsoft and other tech firms stood ready to welcome disgruntled staff with open arms. That gave both Mr Altman and OpenAI's programmers huge bargaining power and fatally undermined the board's attempts to exert control.

The episode also shines a light on the unusual structure of OpenAI. It was founded in 2015 as a non-profit research lab aimed at safely developing artificial general intelligence (AGI), which can equal or surpass humans in all types of thinking. But it soon became clear that this would require vast amounts of expensive processing power, if it were possible at all. To pay for it, a profit-making subsidiary was set up to sell AI tools, such as ChatGPT. And Microsoft invested $13bn in return for a 49% stake.

On paper, the power remained with the non-profit's board, whose aim is to ensure that AGI benefits everyone, and whose responsibility is accordingly not to shareholders but to humanity. That illusion was shattered as the employees demanded Mr Altman's return, and as the prospect loomed of a rival firm housed within profit-maximising Microsoft.

The chief lesson is the folly of solely relying on corporate structures to police technology. As the potential of generative AI became clear, the contradictions in OpenAI's structure were exposed. A single outfit cannot strike the best balance between advancing AI, attracting talent and investment, assessing AI's threats and keeping humanity safe. Conflicts of interest in Silicon Valley are hardly rare. Even if the people at OpenAI were as brilliant as they think they are, the task would be beyond them.

Much about the board's motives in sacking Mr Altman remains unknown. Even if the directors did genuinely have humanity's interest at heart, they risked seeing investors and employees flock to another firm that would charge ahead with the technology regardless. Nor is it entirely clear what qualifies a handful of private citizens to represent the interests of Earth's remaining 7.9bn inhabitants. As part of Mr Altman's return, a new board is being appointed. It will include Larry Summers, a prominent economist; an executive from Microsoft will probably join him, as may Mr Altman.

Yet personnel changes are not enough: the firm's structure should also be overhauled. Fortunately, in America there is a body that has a much more convincing claim to represent the common interest: the government. By drafting regulation, it can set the boundaries within which companies like OpenAI must operate. And, as a flurry of activity in the past month shows, politicians are watching AI. That is just as well. The technology is too important to be left to the whims of corporate plotters.

Read more of our articles on artificial intelligence

Continue reading here:

The fallout from the weirdness at OpenAI - The Economist