Archive for the ‘Artificial General Intelligence’ Category

The revolution in artificial intelligence and artificial general intelligence – Washington Times

OPINION:


We are on the edge of two revolutions, which will overwhelm virtually everything currently covered by the media.

Artificial intelligence is the development of massive computational capabilities for understanding and managing specific activities. For example, the air traffic control system already relies heavily on AI to manage the four-dimensional process of moving aircraft around the world. An aircraft carrier battle group has extensive AI in its defensive systems. Israel's Iron Dome anti-missile system relies heavily on computational analysis and decision-making, in virtually real time, to decide which incoming projectiles and drones are likely to hit populated areas and which can be safely ignored so defenders can focus on the gravest threats.

In health care, AI is increasingly capable of evaluating diagnostic information from CT scans, MRIs and other tests. If it had been properly used, AI could have dramatically improved our understanding of and response to COVID-19. Unfortunately, the public health service in general, and the Centers for Disease Control and Prevention in particular, are obsolete bureaucratic systems incapable of adapting to modern technology. Americans pay with their health and their lives for the refusal of these bureaucracies to modernize.

These are examples of the ways in which AI is already affecting our lives. It is getting faster, more comprehensive, and more capable of learning from its mistakes and improving through repetitive use.

Artificial general intelligence, or AGI, is a dramatically more powerful theoretical system; some people argue it may be unattainable. Essentially, AGI would be a system that could constantly learn, evolve and improve itself without being limited to one particular area or topic. At least in theory, it could outthink humans and even compete with them. There is a consensus that AGI is still years away, while AI is all around us, constantly improving in speed and capability.

As it improves, AI is going to transform our way of doing things on a scale that resembles the combination of electricity, chemistry and internal combustion engines around 1880.

No one in 1880 could have forecast the scale and breadth of change to come, although a few futurist novelists such as Jules Verne and H.G. Wells wrote fascinating fictional forecasts of the coming scientific and technological revolution.

No one in 1880 could have foreseen that electric lights would eliminate night. Farmers used to work from light to dark, and then Thomas Edison made dark obsolete.

No one at the peak of vaudeville could have imagined its replacement by movies, radio, and then television (Steve Allen's "The Funny Men" is a remarkable outline of that process and its impact on comedians and their work).

My favorite example of the unimaginable scale of change is the 1894 London Times story about the Horse Manure Crisis. London and New York had so many horses that their daily production of horse manure threatened to use up all the vacant lots in the two cities.

It did not occur to anyone in 1894 that within a few short years, Henry Ford would begin to eliminate horse manure as an urban problem by replacing it with a new one: cars, trucks and buses.

In the early 1950s, the United States saw roughly 58,000 cases of polio annually. In 1953, Dr. Jonas Salk tested a polio vaccine on himself and his family, and in 1955 the vaccine was tested on 1.6 million children in Canada, Finland and the United States. This would be inconceivable under today's Food and Drug Administration rules, which prefer the certainty of disease over the risks of cures.

We are at the same moment of dramatic change that Thomas Kuhn described in "The Structure of Scientific Revolutions" and called a "paradigm shift."

The challenge will be to understand AI's potential (leaving AGI aside until it is actually developed) and then reimagine the way the world works with these powerful new tools.

The key is to leap into the future with the kind of imagination Verne and Wells showed in envisioning what was coming for their generation.

The first instinct will be to apply AI to marginally improve existing bureaucracies, processes and activities.

It will take a great leap of imagination to fully explore what AI could achieve if we redesigned our systems and habits around the capabilities it will make available to improve our lives, increase our productivity, and enhance our range of choices.

We are at the edge of an enormous opportunity.

For more commentary from Newt Gingrich, visit Gingrich360.com. Also, subscribe to the "Newt's World" podcast.


OpenAI disbands team devoted to artificial intelligence risks – Yahoo! Voices

OpenAI on Friday confirmed that it has disbanded a team devoted to mitigating the long-term dangers of super-smart artificial intelligence.

OpenAI weeks ago began dissolving the so-called "superalignment" group, integrating members into other projects and research, according to the San Francisco-based firm.

Company co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the ChatGPT-maker this week.

The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.

"OpenAI must become a safety-first AGI (artificial general intelligence) company," Leike wrote Friday in a post on X, formerly Twitter.

Leike called on all OpenAI employees to "act with the gravitas" warranted by what they are building.

OpenAI chief executive Sam Altman responded to Leike's post with one of his own, thanking him for his work at the company and saying he was sad to see Leike leave.

"He's right we have a lot more to do," Altman said. "We are committed to doing it."

Altman promised more on the topic in the coming days.

Sutskever said on X that he was leaving after almost a decade at OpenAI, whose "trajectory has been nothing short of miraculous."

"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as -- or better than -- human cognition.

Sutskever, OpenAI's chief scientist, sat on the board that voted to remove chief executive Altman in November last year.

The ousting threw the San Francisco-based startup into a tumult, with the OpenAI board hiring Altman back a few days later after staff and investors rebelled.

OpenAI early this week released a higher-performing and even more human-like version of the artificial intelligence technology that underpins ChatGPT, making it free to all users.

"It feels like AI from the movies," Altman said in a blog post.

Altman has previously pointed to the Scarlett Johansson character in the movie "Her," in which she voices an AI-based virtual assistant who becomes romantically involved with a man, as an inspiration for where he would like AI interactions to go.

The day will come when "digital brains will become as good and even better than our own," Sutskever said during a talk at a TED AI summit in San Francisco late last year.

"AGI will have a dramatic impact on every area of life."



OpenAI disbands safety team focused on risk of artificial intelligence causing ‘human extinction’ – New York Post

OpenAI eliminated a team focused on the risks posed by advanced artificial intelligence less than a year after it was formed, and a departing executive warned Friday that safety has "taken a backseat to shiny products" at the company.

The Microsoft-backed ChatGPT maker disbanded its so-called Superalignment team, which was tasked with creating safety measures for artificial general intelligence (AGI) systems that could lead to "the disempowerment of humanity or even human extinction," according to a blog post last July.

The team's dissolution, which was first reported by Wired, came just days after OpenAI executives Ilya Sutskever and Jan Leike announced their resignations from the Sam Altman-led company.

"OpenAI is shouldering an enormous responsibility on behalf of all of humanity," Leike wrote in a series of X posts on Friday. "But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI."

Sutskever and Leike, who headed OpenAI's safety team, quit shortly after the company unveiled an updated version of ChatGPT that was capable of holding conversations and translating languages for users in real time.

The mind-bending reveal drew immediate comparisons to the 2013 sci-fi film "Her," which features a superintelligent AI voiced by actress Scarlett Johansson.

When reached for comment, OpenAI referred to Altman's tweet in response to Leike's thread.

"I'm super appreciative of @janleike's contributions to OpenAI's alignment research and safety culture, and very sad to see him leave," Altman said. "He's right we have a lot more to do; we are committed to doing it. I'll have a longer post in the next couple of days."

Some members of the safety team are being reassigned to other parts of the company, CNBC reported, citing a person familiar with the situation.

AGI broadly refers to AI systems with cognitive abilities equal or superior to those of humans.

In its announcement of the safety team's formation last July, OpenAI said it was dedicating 20% of its available computing power to long-term safety measures and hoped to solve the problem within four years.

Sutskever gave no indication of the reasons for his departure in his own X post on Tuesday, though he acknowledged he was "confident that OpenAI will build [AGI] that is both safe and beneficial" under Altman and the firm's other leads.

Sutskever was notably one of four OpenAI board members who participated in a shocking move to oust Altman from the company last fall. The coup sparked a governance crisis that nearly toppled OpenAI.

OpenAI eventually welcomed Altman back as CEO and unveiled a revamped board of directors.

A subsequent internal review cited a "breakdown in trust between the prior Board and Mr. Altman" ahead of his firing.

Investigators also concluded that the leadership spat was "not related to the safety or security of OpenAI's advanced AI research or the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners," according to a release in March.


Generative AI Is Totally Shameless. I Want to Be It – WIRED

AI has a lot of problems. It helps itself to the work of others, regurgitating what it absorbs in a game of multidimensional Mad Libs and omitting all attribution, resulting in widespread outrage and litigation. When it draws pictures, it makes the CEOs white, puts people in awkward ethnic outfits, and has a tendency to imagine women as elfish, with light-colored eyes. Its architects sometimes seem to be part of a death cult that semi-worships a Cthulhu-like future AI god, and they focus great energies on supplicating to this immense imaginary demon (thrilling! terrifying!) instead of integrating with the culture at hand (boring, and you get yelled at). Even the more thoughtful AI geniuses seem OK with the idea that an artificial general intelligence is right around the corner, despite 75 years of failed precedent: the purest form of getting high on your own supply.

So I should reject this whole crop of image-generating, chatting, large-language-model-based code-writing infinite typing monkeys. But, dammit, I can't. I love them too much. I am drawn back over and over, for hours, to learn and interact with them. I have them make me lists, draw me pictures, summarize things, read for me. Where I work, we've built them into our code. I'm in the bag. Not my first hypocrisy rodeo.

There's a truism that helps me whenever the new big tech thing has every brain melting: I repeat to myself, "It's just software." Word processing was going to make it too easy to write novels, Photoshop looked like it would let us erase history, Bitcoin was going to replace money, and now AI is going to ruin society, but it's just software. And not even that much software: Lots of AI models could fit on a thumb drive with enough room left over for the entire run of Game of Thrones (or Microsoft Office). They're interdimensional ZIP files, glitchy JPEGs, but for all of human knowledge. And yet they serve such large portions! (Not always. Sometimes I ask the AI to make a list and it gives up. "You can do it," I type. "You can make the list longer." And it does! What a terrible interface!)

What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it, with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.

As with most people on Earth, shame is a part of my life, installed at a young age and frequently updated with shame service packs. I read a theory once that shame is born when a child expects a reaction from their parents (a laugh, applause) and doesn't get it. That's an oversimplification, but given all the jokes I've told that have landed flat, it sure rings true. Social media could be understood, in this vein, as a vast shame-creating machine. We all go out there with our funny one-liners and cool pictures, and when no one likes or faves them we feel lousy about it. A healthy person goes, "Ah well, didn't land. Felt weird. Time to move on."


But when you meet shameless people, they can sometimes seem like miracles. They have a superpower: the ability to be loathed, to be wrong, and yet to keep going. We obsess over them: our divas, our pop stars, our former presidents, our political grifters, and of course our tech industry CEOs. We know them by their first names and nicknames, not because they are our friends but because the weight of their personalities and influence has allowed them to claim their own domain names in the collective cognitive register.

Are these shameless people evil, or wrong, or bad? Sure. Whatever you want. Mostly, though, they're just big, by their own shameless design. They contain multitudes, and we debate those multitudes. Do they deserve their fame, their billions, their Electoral College victory? We want them to go away, but they don't care. Not one bit. They plan to stay forever. They will be dead before they feel remorse.

AI is like having my very own shameless monster as a pet. ChatGPT, my favorite, is the most shameless of the lot. It will do whatever you tell it to, regardless of the skills involved. It'll tell you how to become a nuclear engineer, how to keep a husband, how to invade a country. I love to ask it questions that I'm ashamed to ask anyone else: What is private equity? How can I convince my family to let me get a dog? It helps me understand what's happening with my semaglutide injections. It helps me write code; it has, in fact, renewed my relationship with writing code. It creates meaningless, disposable images. It teaches me music theory and helps me write crappy little melodies. It does everything badly and confidently. And I want to be it. I want to be that confident, that unembarrassed, that ridiculously sure of myself.

