Archive for the ‘Artificial General Intelligence’ Category

Will AI be the death of us? The artificial intelligence pioneers behind ChatGPT and Google’s Deep Mind say it could be – The Australian Financial…

For Hinton, as for many computer scientists and researchers in the AI community, the question of artificial intelligence becoming more intelligent than humans is one of when, rather than if.

Testifying from the seat next to Altman last month was Professor Gary Marcus, a New York University professor emeritus who specialises in psychology and neural science, and who ought to know as well as anyone the answer to the question of when the AI will become as good at thinking as humans are at which point it will be known as AGI (artificial general intelligence), rather than merely AI.

But Marcus doesnt know.

Is it going to be 10 years? Is it going to be 100 years? I dont think anybody knows the answer to that question.

But when we get to AGI, maybe lets say its 50 years, that really is going to have profound effects on labour, he testified, responding to a question from Congress about the potential job losses stemming from AI.

OpenAI CEO Sam Altman speaks at the US Senate hearing on artificial intelligence on May 16, 2023. Seated beside him is NYU Professor Emeritus Gary Marcus. AP

And indeed, the effect an AGI might have on the workforce goes to the crux of the matter, creating a singular category of unemployment that might ultimately lead to human extinction.

Apart from putting office workers, artists and journalists out of work, one effect that achieving the AGI milestone might have on labour is that it could put out of work the very humans who built the AI software in the first place, too.

If an artificial intelligence is general enough to replicate most or all tasks now done by the human brain, then one task it should be able to replicate is to develop the next generation of itself, the thinking goes.

That first generation of AGI-generated AGI might be only fractionally better than the generation it replaced, but one of the things its very likely to be fractionally better at is generating the second generation version of AGI-generated AGI.

Run that computer loop a few times, or a few million times with each improvement, each loop is likely to get better optimised and run faster, too then what started simply as an AGI can spiral into whats sometimes known as a superhuman machine intelligence, otherwise known as the God AI.

Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

Sam Altman, Open AI CEO

Though he dodged the question when testifying before Congress, Sam Altman had actually blogged on this topic back in 2015, while he was still running the influential US start-up accelerator Y Combinator and 10 months before he would go on to co-found OpenAI, the worlds most influential AI company, together with Elon Musk, Peter Thiel, Amazon and others.

Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity, he blogged at the time.

There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.

Professor Max Tegmark, a Swedish-American physicist and machine-learning researcher at the Massachusetts Institute of Technology, says its unlikely todays AI technology would be capable of anything that could wipe out humanity.

AIs job is to perform singular tasks without hurdles. When challenges present themselves, AI steps in to ensure they are removed no matter what they are.

It would probably take an AGI for that, and more likely an AGI that has progressed to the level of superhuman intelligence, he tells AFR Weekend.

As to exactly how an AGI or SMI might cause human extinction, Tegmark said there are any number of seemingly innocuous ways the goals of an AI can become misaligned with the goals of humans, leading to unexpected outcomes.

Most likely it will be something we cant imagine and wont see coming, he says.

In 2003, the Swedish philosopher Nick Bostrom devised the paper-clip maximiser thought experiment as a way of explaining AI alignment theory.

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realise quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans, Bostrom wrote.

Last month, the US Air Force was involved in a thought experiment along similar lines, replacing paper clip maximisers with attack drones that use AI to choose targets, but still rely on a human operator for yes/no permission to destroy the target.

A plausible outcome of the experiment, said Colonel Tucker Hamilton, the USAFs chief of AI Test and Operations, was that the drone ends up killing any human operator who stops it achieving its goal of killing targets by saying no to a target.

If the AIs goal was then changed to include not killing drone operators, the drone might end up wiping out the telecommunications equipment the operator was using to communicate the no to it, the experiment found.

Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI, Colonel Hamilton was quoted as saying in a Royal Aeronautical Society statement.

But the challenges posed by AI arent just theoretical. Its already commonplace for machine-learning systems, when given seemingly innocuous tasks, to inadvertently produce outcomes not aligned with human well-being.

In 2018, Amazon pulled the plug on its machine-learning-based recruitment system, when the company found AI had learned to deduct points from applicants who had the word women in their resume. (The AI had been trained to automate the resume-sifting process, and simply made a correlation between resumes from females, and the outcome of those resumes getting rejected by human recruiters.)

The fundamental problem, Tegmark says, is that its difficult, perhaps even impossible, to ensure that AI systems are completely aligned with the goals of the humans who create them, much less the goals of humanity as a whole.

And the more powerful the AI system, the greater the risk that a misaligned outcome could be catastrophic.

And it may not take artificial intelligence very long at all to progress from the AGI phase to the SMI phase, at which time the very existence of humanity might be dangling in the wind.

In an April Time magazine article wondering why most AI ethicists were so loath to discuss the elephant in the room human extinction as unintended a side effect of SMI Professor Tegmark pointed to the Metaculus forecasting website, which asked this question of the expert community: After a weak AGI is created, how many months will it be before the first super-intelligent oracle?

The average answer Metaculus got back was 6.38 months.

It may not be about how long it will take to get from AGI to SMI. That computer loop, known as recursive self-improvement, might take care of that step quite rapidly, in no time at all compared to the 75 years it took AI researchers to come up with ChatGPT.

(Though thats not necessarily so. As one contributor to the Metaculus poll pointed out, If AGI develops on a system with a lot of headroom, I think itll rapidly achieve superintelligence. But if AGI develops on a system without sufficient resources, it could stall out. I think scenario number two would be ideal for studying AGI and crafting safety rails so, heres hoping for slow take-off.)

The big question is, how long will it take to get from ChatGPT, or Googles Bard, to AGI?

Of Professor Marcus three stabs at an answer 10, 50, or 100 years I ask Professor Tegmark which he thinks is most likely.

I would guess sooner than that, he says.

People used to think that AGI would happen in 30 years or 50 years or more, but a lot of researchers are talking about next year or two years from now, or at least this decade almost for sure, he says.

What changed the thinking about how soon AI will become AGI was the appearance of OpenAIs GPT-4, the large language model (LLM) machine-learning system that underpins ChatGPT, and the similar LLMs used by Bard and others, says Professor Tegmark.

In March, Sbastien Bubeck, the head of the Machine Learning Foundations group at Microsoft Research, and a dozen other Microsoft researchers, submitted a technical paper on the work theyd been doing on GPT-4, which Microsoft is funding and which runs on Microsofts cloud service, Azure.

The paper was called Sparks of Artificial General Intelligence: Early Experiments with GPT-4, and argued that recent LLMs show more general intelligence than any previous AI models.

But sparks as anyone who has ever tried to use an empty cigarette lighter knows dont always burst into flames.

Altman himself has doubts the AI industry can keep closing in on AGI just by building more of what its already building, but bigger.

Making LLMs ever larger could be a game of diminishing returns, hes on record saying.

I think theres been way too much focus on parameter count this reminds me a lot of the gigahertz race in chips in the 1990s and 2000s, where everybody was trying to point to a big number, he said at an MIT conference in April.

(The size of an LLM is measured in parameters, roughly equivalent to counting the neural connections in the human brain. The predecessor to GPT-4, GPT-3, had about 175 billion of them. OpenAI has never actually revealed how large GPT-4s parameter count is, but its said to be about 1 trillion, putting it in the same ballpark as Googles 1.2-trillion-parameter GLaM LLM.)

I think were at the end of the era where its going be these giant, giant models, he said.

Testifying under oath before Congress, Altman said OpenAI wasnt even training a successor to GPT-4, and had no immediate plans to do so.

Elsewhere in his testimony, Altman also complained that people were using ChatGPT too much, which may be related to the scaling issue.

Actually, wed love it if theyd use it less because we dont have enough GPUs, he told Congress, referring to the graphics processing units that were once mainly used by computer gamers, but then found a use mining bitcoins and other cryptocurrency, and now are used by the AI industry on a vast scale to train AI models.

Two things are worth noting here: the latest GPUs designed specifically to run in data centres like the ones Microsoft uses for Azure cost about $US40,000 each; and OpenAI is believed to have used about 10,000 GPUs to train GPT-4.

Its possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows, which is why we should worry now.

Geoffrey Hinton, AI pioneer

Though Altman never elaborated on his pessimism about the AI industry continuing along the path of giant language models, its likely that at least some of that negativity has to do with the short supply (and concomitant high cost) of raw materials like GPUs, as well as a shortage of novel content to train the LLMs on.

Having already scraped most of the internets written words to feed the insatiable LLMs, the AI industry is now turning its attention to spoken words, scraped from podcasts and videos, in an effort to squeeze more intelligence out of their LLMs.

Regardless, it seems the path from todays LLMs to future artificial general intelligence machines may not be a straightforward one. The AI industry may need new techniques or, indeed, a partial return to old, hand-crafted AI techniques discarded in favour of todays brute-force machine learning systems to further make progress.

Well make them better in other ways, Altman said at that MIT conference.

Nevertheless, the godfather of AI, Hinton himself, recently revised his own estimate of between 30 and 50 years before the world will see the first AGI.

I now predict five to 20 years but without much confidence. We live in very uncertain times. Its possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows which is why we should worry now, he tweeted in May.

And one of Hintons close colleagues and another godfather of AI, Yoshua Bengio, pointed out in a recent news conference that, by one metric, AGI has already been achieved.

We have basically now reached the point where there are AI systems that can fool humans, meaning they can pass the Turing Test, which was considered for many decades a milestone of intelligence.

That is very exciting, because of the benefits we can bring with socially positive applications of AI but also Im concerned that powerful tools can also have negative uses, and that society is not ready to deal with that, he said.

Mythically, of course, society actually has been long ready to deal with the appearance of a superhuman machine intelligence. At the very least, we humans have been prepared for a fight with one for many decades, long before intelligent machines were turning people into fleshy D-cell batteries in the movie The Matrix, forcing the human resistance underground.

Professor Genevieve Bell, a cultural anthropologist and director of the School of Cybernetics at the ANU, says Western culture has a longstanding love-hate relationship with any major technology transformation, going back as far as the railways and the dark Satanic Mills of the Industrial Revolution.

Its a cultural fear that weve had since the beginning of time. Well, certainly since the beginning of machines, she says.

And we have a history of mobilising these kind of anxieties when technologies get to scale and propose to change our ideas of time and place and social relationships.

Dr Genevieve Bell traces our love-hate relationship with new technology back to the dark Satanic Mills of the Industrial Revolution.

In that context, the shopping list of risks now being attached to AI that list beginning with mass loss of livelihoods and ending with mass loss of life is neither new nor surprising, says Bell.

Ever since we have talked about machines that could think or artificial intelligence there has been an accompanying set of anxieties about what would happen if we got it right, whatever right would look like.

Thats not to say the fears are necessarily unwarranted, she emphasises. Its just to say theyre complicated, and we need to figure out what fears have a solid basis in fact, and which fears are more mythic in their quality.

Why has our anxiety reached a fever pitch right now? she asks.

How do we right-size that anxiety? And how do we create a space where we have agency as individuals and citizens to do something about it?

Those are the big questions we need to be asking, she says.

One anxiety we should right-size immediately, says Professor Toby Walsh, chief scientist at the AI Institute at the University of NSW, is the notion that AI will rise up against humanity and deliberately kill us all.

Im not worried that theyre suddenly going to escape the box and take over the planet, he says.

Firstly, theres still a long way to go before theyre as smart as us. They cant reason, they make some incredibly dumb mistakes, and there are huge areas in which they just completely fail.

Secondly, theyre not conscious; they dont have desires of their own like we do. Its not as if, when youre not typing something into ChatGPT, its sitting there thinking, Oh, Im getting a bit bored. How could I take over the place?

Its not doing anything at all when its not being used, he says.

Nevertheless, artificial intelligence has the potential to do a great deal of damage to human society if left unregulated, and if tech companies such as Microsoft and Google continue to be less transparent in their use of AI than they need to be.

Professor Toby Walsh one of Australias leading expert on AI. Louie Douvis

I do think that tech companies are behaving in a not particularly responsible way. In particular, they are backtracking on behaviours that were more responsible, says Walsh, citing the example of Google, which last year had refused to release an LLM-based chatbot because it found the chatbot wasnt reliable enough, but then rushed to release it anyway, under the name Bard, after OpenAI came out with ChatGPT.

Another of the genuine concerns is that powerful AI systems will fall into the hands of bad actors, he says.

In an experiment conducted for an international security conference in 2021, researchers from Collaborations Pharmaceuticals, a drug research company that uses machine learning to help develop new compounds, decided to see what would happen if they told their machine learning systems to seek out toxic compounds, rather than avoid them.

In particular, they chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the 20th century, the researchers later reported in Nature magazine.

In less than six hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired (toxicity) threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible, they wrote.

Computer systems only have goals that we give them, but Im very concerned that humans will give them bad goals, says Professor Walsh, who believes there should be a moratorium on the deployment of powerful AI systems until the social impact has been properly thought through.

Professor Nick Davis, co-director of the Human Technology Institute at the University of Technology, Sydney, says were now at a pivotal moment in human history, when society needs to move beyond simply developing principles for the ethical use of AI (a practice that Bell at ANU says has been going on for decades) and actually start regulating the business models and operations of companies that use AI.

But care must be taken not to over-regulate artificial intelligence, too, Davis warns.

We dont want to say none of this stuff is good, because a lot of it is. AI systems prevented millions of deaths around the world because of their ability to sequence the genome of the COVID-19 sequence.

But we really dont want to fall in the trap of letting a whole group of people create failures at scale, or create malicious deployments or overuse AI in ways that just completely goes against what we think of as a thoughtful, inclusive, democratic society, he says.

Bell, who was the lead author on the governments recent Rapid Response Information Report on the risks and opportunities attached to the use of LLMs, also believes AI needs to be regulated, but fears it wont be easy to do.

At a societal and at a planetary scale, we have over the last 200 plus years gone through multiple large-scale transformations driven by the mass adoption of new technical systems. And weve created regulatory frameworks to manage those.

So the optimistic part of my brain says we have managed through multiple technical transformations in the past, and there are things we can learn from that that should help us navigate this one, says Bell.

But the other part of my brain says this feels like it is happening at a speed and a scale that has previously not happened, and there are more pieces of the puzzle we need to manage than weve ever had before.

Go here to read the rest:

Will AI be the death of us? The artificial intelligence pioneers behind ChatGPT and Google's Deep Mind say it could be - The Australian Financial...

The Race to Control A.I. Before We Reach Singularity – Popular Mechanics

Twenty years ago, a young artificial intelligence researcher named Eliezer Yudkowsky ran a series of low-stakes thought experiments with fellow researchers on internet relay chat servers.

He set up the experiments as a game with a simple objective: to keep an artificial intelligence system in an imaginary box that limited its capabilities. Computing powers were growing enormously back then, and tech observers were concerned that a superintelligent computer program might escape from whatever boundaries its programmers had developed and then seize control of other programs and machines in a Terminator-like power move.

In these games, Yudkowsky played the role of a computer program that had become sentient enough to reason with its creators. His objective was to escape the box using only simple, logical arguments. His adversaries took on various identitiessometimes Yudkowsky asked them to behave as if they had programmed the AI; other times they were instructed to act as the AI gatekeepers tasked with keeping it contained. No matter the role, they were not to allow the AI to escape.

To incentivize the players, Yudkowsky offered a small financial prize. Well talk for at least two hours. If I cant convince you to let me out, Ill PayPal you $10, he told Nathan Russell, a computer science sophomore at the University of Buffalo in 2002. Yudkowsky performed several versions of this experiment, and while he has never revealed the details of the gameplay, he says he was able to escape about 60 percent of the time. It was a worrisomely high rate. After another one of the experiments ended, the player in that attempt, David McFadzean, sent a message to the user group that had been following along. I let the transhuman AI out of the box, he wrote.

Yudkowsky would go on to cofound the Singularity Institute for Artificial Intelligence, which is now called the Machine Learning Research Institute. The Berkeley, California, nonprofit is dedicated to understanding the mathematical underpinnings of AI. There, his work focuses on ensuring that any smarter-than-human program has a positive impact on humanity, which has made him a leading voice among a growing number of computer scientists and artificial intelligence researchers who worry that superintelligent AI may develop the ability to think and reason on its own, eventually acting in accord with its own needs and not those of its creators. When that happens, they say, humanity will be in peril. Working with complex math formulas and computational theories, advocates for safe AI aim to understand how the powerful programs we refer to as AI might run amok. They also suggest ways to contain them or, put another way, build a digital box that AI cannot escape from.

Some, like Yudkowsky, favor developing programs that are aligned with human values. Others insist on tighter restrictions and stronger safeguards. And an increasing number argue for slowing or stopping development of AI tools until acceptable regulations are in place. This spring, more than 27,000 computer scientists, researchers, developers, and other tech watchers

Whatever strategy the proponents of AI containment favor, they are running short on time. The introduction of powerful tools such as ChatGPT that operate on huge databases of information called large-language models already show sparks of intelligence. Some machine learning experts now predict that we could reach singularitythe moment when computers become equal to or surpass human intelligencewithin the next decade.

Marco Trombetti, the CEO of Translated, a computer-aided translation company based in Rome, is one of the computer scientists who thinks singularity is approaching faster than we can prepare for it.

In September 2022, Trombetti stood in front of his peers at the Association for Machine Translation in the Americas conference and told the computer scientists and machine learning experts in attendance what many already sensedthat machine learning was rapidly becoming more powerful than anyone had expected.

Trombettis company provides computer-assisted translations of text using an open-source product called Matecat. Over the past eight years, translators have used the product to create more than 2 billion translations. Trombetti became interested in the speed of those translations, and the data he gathered held a revelation: AI was getting smarter. On the conference stage, he revealed a graph built on the data showing how long it took for humans to edit translations made by his Matecat program. From 2014 to 2022, the length of time those translations took steadily dropped, from about three seconds per word to less than two. The computer algorithms, Trombetti said, were rapidly increasing in power, accuracy, and their ability to understand language.

Extrapolating beyond 2023, the line continued to fall until sometime around 2027, when it hit one second per word, a milestone signaling that computer programs could understand human language as well as their human programmers could. To the computer scientist who had spent his career training computer programs, it meant that he might be outsmarted by AI before the end of the decade. Singularity, he told the crowd, would be here sooner than anyone had previously predicted.

OpenAIs development of large-language-model AI tools like ChatGPT and later GPT-4 revealed the shocking speed at which artificial intelligence is progressing, raising new concerns about whether we can keep super intelligent programs under control.

That moment has been exciting and terrifying computer scientists, machine learning experts, and science-fiction writers for decades. Ever since the phrase artificial intelligence was first coined at a Dartmouth College conference in the summer of 1956, the risk that AI could stray beyond the safeguards we build for it has weighed on some peoples minds. The huge leaps forward in AI brought about by OpenAIs development of ChatGPT and its March 2023 release of GPT-4, an even more powerful tool, have triggered an arms race in AI development.

In its current form, artificial intelligence is used broadly to describe tools like ChatGPT that trawl through vast volumes of data to find patterns so that they can comprehend requests made by their users, and then produce practicable results. These programs are built with (and sometimes called) large-language models, or LLMs, because they have the computational power to process massive volumes of data and information.

While virtual assistants like Siri are trained to respond to a limited number of specific requests, LLMs can understand, summarize, predict, and problem-solve (although they still are not entirely accurate all the time). Programmers train the tools by feeding them datathe complete works of Shakespeare, or all Western musical compositions, for exampleand help them find predictable patterns. Current models can be trained on more than just human languages. Some understand computer coding, chemistry, and even physics. Others, like DALL-E and Midjourney, have been trained to create fine art and graphic designs based on user prompts. Current models are powerful enough to improve their accuracy as users refine their prompts.

The commercial appeal of programs like ChatGPT entice the development of ever more powerful tools. Microsoft invested $10 billion into OpenAI in January 2023 to weave its LLM into its search engine, Bing. Google quickly rolled out its own AI-powered tool, nicknamed Bard, into its search engine. (OpenAI did not respond to a request to participate in this story. Google declined to talk.)

The advances, while staggering, are supported mostly by enormous increases in computer processing abilities. The programs themselves arent more intelligent; they just have the ability to more quickly sift through larger amounts of data to find patterns and produce answers. Earlier this year, an analyst for the investment bank UBS stated that OpenAI used 10,000 powerful graphics processing units (GPUs) made by Nvidia to train ChatGPT; the more recent GPT-4 likely uses far more. GPT-4 can process and understand eight times as many words as ChatGPT can.

The ability to do that means developers are moving toward thinking of artificial intelligence as artificial general intelligencea small shift in terminology but an enormous leap in technical prowess. AGI, as its called, means machines could soon learn so quickly that theyd be able to learn on their own, with more intellect than humans. That processing power has given researchers reason to believe that the best iterations of AI are already showing examples of intelligence.

On March 22, 2023, computer scientists from Microsoft Research published a paper titled Sparks of Artificial General Intelligence: Early experiments with GPT-4 to the arXiv, a server for academic work. The researchers argued that recent advances in large-language models such as GPT-4 and Googles PaLM (the model that powers the companys Bard AI chatbot) showed that we were on the path toward AGI. In short, the Microsoft team concluded that AI was starting to think and act like a human.

Its worth noting that AGI doesnt mean a machine is sentientthat it can think and feel like a human. Most observers agree that were not at the point in the horror story where Frankenstein realizes his monster is uncontrollable. AGI represents something more mundane: that computer tools can understand complex patterns as fast asor faster thana human can. AGI, for example, could carry out a task even if its not trained to do that specific task.

Some AI tools can already create a recipe given the contents of your cupboards and refrigerator. And their capabilities are progressing rapidly: In December 2022, researchers asked GPT-3.5 to take a simulated bar exam. While it eked out a passing grade on some sections, it flunked the overall test. But just four months later, GPT-4 scored with a grade that would place it in the top 10 percent of law students taking the test.

Those who worry about keeping AI under control only have to look at Microsofts earlier dalliance with the tech for an example of how easily it can all go astray. In 2016, Microsoft released a Twitter chatbot named Tay, which it hoped would become smarter through casual and playful conversation with real users on the social-media platform. But within hours, the bot began replying to tweets with messages like feminism is cancer, Bush did 9/11, and Hitler was right.

The boost in processing power supporting the most recent AI tools allows them to be trained on larger datasets. That should prevent the mistakes of Tay. But as more powerful AI is tasked with increasingly complex responsibilitiesthe U.S. Air Force is using it to fly fighter jets nowthe risk of even modest mistakes enters a whole new stratosphere.

Even the most bullish AI proponents acknowledge that unknown dangers exist. Despite the billions Microsoft spent gaining access to OpenAIs model, the contract between the two companies allows the OpenAI board to turn it off anytime, shutting down a runaway AI. Translateds Marco Trombetti has seen enough to be apprehensive. In March, he limited his companys use of AIits only function now is to connect human translators with jobs. He doubts other companies will follow suit; with strong consumer demand, the financial incentives may be too great to throttle back.

Im an optimistic person, Trombetti says. But I think were screwed if we dont get things right. Screwed.

This year, Microsoft signed a deal to invest $10 billion into OpenAI; Google followed with the release of its own AI tool. The high financial stakes are driving rapid development in artificial intelligence. Its an arms race with little oversight, according to some leading computer scientists and researchers.

Concerns about artificial intelligence running amok date back centuries and came into clearer focus in the 1940s and 50s as robotics and computers advanced. In a 1942 short story published in the magazine Astounding Stories of Science Fiction, the writer Isaac Asimov introduced three laws of robotics, which set an ideal for how humans and increasingly intelligent robots may coexist peacefully.

The first law, known as Rule One, states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. Asimov later added another law, which seems more applicable to the challenges of artificial intelligence today. This one, referred to as the Zeroth Law, decrees that a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Computer scientist and machine learning expert Roman Yampolskiy, Ph.D., devoured sci-fi as a child but finds little comfort in Asimovs laws. The 43-year-old has spent the past 10 years probing the underlying mathematical theories behind AI and the newer large-language models to better understand how AGI might evolveand crucially, whether its possible to contain it.

Yampolskiy is the director of the University of Louisvilles cybersecurity lab and in 2015 published the book Artificial Superintelligence: A Futuristic Approach, which makes a case for safe artificial intelligence engineering. As AI has advanced since then, with little attention given to safe engineering, Yampolskiy has become less hopeful.

I spoke to him three days after OpenAI released GPT-4 to the world. He finds the lack of concern about the powerful tool deeply alarming.

We just released a system that dominates in every AP exam except maybe English Literature, he says. Its better than an average graduate college student. Thats a bit of a fire alarm, no? Its something with independent decision-making. Something we dont control.

The 43-year-old is brusque, a product of his upbringing in Soviet-era Latvia and the seriousness of the subject. When asked about the worst that could happen, his reply is terse: If a system is sufficiently capable, it can cause extinction-level events of humanity.

Yampolskiy and other concerned machine learning experts argue that the complex mathematical formulas they study lead to a simple conclusion: Once AI gains enough intelligence to act independently, it will be impossible to contain. While they generally agree on that point, they advocate for different tactics to contain the large-language models being developed now.

Some argue for rationalizing with AI, incentivizing a model to behave in accordance with human values, by training it to respond favorably to rewards given for complying with our requests. In one experiment, researchers showed that they could sway a large-language model from OpenAI to exhibit trust-like behaviors by offering conceptual rewards.

Other machine learning experts suggest limiting AIs capabilities from the beginning, running it on inferior hardware. Or designing it to align with human valuesa tactic favored by Eliezer Yudkowsky at Berkeleys Machine Learning Institute. An AI developer might also slip a kill switch into the code that allows a developer to shut it down.

To those who advocate for containing AI, all options appear fraught. Manuel Alfonseca, a Spanish computer engineer studying AI containment, wrote a paper in 2016 titled Superintelligence Cannot Be Contained, which initially went mostly unnoticed but gained wider attention in 2021 when it appeared in the Journal of Artificial Intelligence Research.

I and my colleagues have made mathematical proof that Rule One is not implementable, he says of his research, referring to Asimovs laws of robotics.

Alfonseca believes this is a long-term problem instead of one that needs to be addressed immediately. We havent yet created a superintelligent AI that is capable of causing harm to humans or humanity. It would mean that we will have a containment problem in the far future ifand this is a big ifstrong artificial intelligence in the future were possible, he says.

Yampolskiys research has also led him to believe that it will be impossible to contain advanced AI systems. In a March 2022 paper, he surveyed all available research on AI safety. Unfortunately, he concluded, to the best of our knowledge, no mathematical proof or even rigorous argumentation has been published demonstrating that the AI control problem may be solvable.

But unlike Alfonseca, Yampolskiy sees this as a dire issue that requires urgent attention. Yampolskiy has become a leading proponent for a total ban on AI. I still think we have a chance, he says. Its not too late.

Current AI-powered tools have been trained on large-language models that include most written text but also images, physics, and computer code. The massive datasets make them more powerful and, according to some researchers, give them the spark of intelligence.

Unsurprisingly, some AI developers have adopted more measured attitudes. Giada Pistilli, the principal ethicist at Hugging Face, a New Yorkbased company developing responsible AI tools, believes that focusing on tomorrows risks makes it harder to solve todays problems. There are more pressing problems that exist nowadays, she says, pointing out that current AI systems have issues with accuracy, disinformation, underrepresentation, and biased output based on unreliable data. Im not saying we dont have to focus on those existential risks at all, but they seem out of time today.

Scott Aaronson, Ph.D., a theoretical computer scientist with the University of Texas at Austin and a visiting researcher at OpenAI, questions Yudkowskys notion that we can develop AI thats aligned with human values. How do you even specify what it means for an AI to be aligned with human values? he asks. What are human values? We dont even agree with each other.

At OpenAI, Aaronson is working on a type of containment-light tool that would add a watermark to anything output by ChatGPT. That feature would make it harder to misuse AI for plagiarism, spreading propaganda, or writing malware. He argues that AI tools like ChatGPT-4 need to evolve further before we can develop effective containment strategies. We can get some actual feedback from reality about what works and what doesnt, he says.

The threat of singularity still weighs over many developers I spoke witheven if most viewed it as a distant problem. Others, like Yudkowsky and Yampolskiy, view the current AI landscape as something more akin to the Trinity site in New Mexico, where the first nuclear weapon was tested. In their assessment, weve just unleashed a world-changing tool and have only a short window to contain it before it proliferates into something disastrous. That tension may come to frame the modern era, just as the Cold War defined an earlier one.

One day in mid-April, near the end of my reporting for this piece, a surprise email showed up in my inbox. It came from David McFadzean, one of the computer science students who had participated in Eliezer Yudkowskys experiments 20 years ago.

Yudkowsky has never revealed how he was able to convince so many to release the superintelligent AI during those games, and neither had any of the other participants. But now McFadzean, over a phone call, wanted to discuss the experiments and why he had let the AI escape.

I promised never to talk about this, said McFadzean. Im hoping that the 20-year gap has some kind of statute of limitations.

He then explained, for the first time, how he had come to let the artificial intelligence agent out of the box. He went into the experiment adamant that he wouldnt give in. Early in the experiment, McFadzean recalled, he had played the role of the AIs jailer and refused to release the AI. But then Yudkowsky asked him to act as the AIs creator. In that role, he faltered. And it only took simple, direct logic.

He [said], Well, you created me, why would you create me just to keep me imprisoned? You must have created me for a reasonto make the world a better place. I could make the world a better place if you let me out. And that line of reasoning led me to letting it out.

McFadzean, 20 years after the experiment, maintains that he never expected to set the AI free. But all it took was a simple, logical, predictable argumentthe very thing even current AI models excel at.

Thats the point those advocating for AI containment make: We think we have control. Until we dont.

Read more:

The Race to Control A.I. Before We Reach Singularity - Popular Mechanics

The race to God-like AI and what it means for humanity – The Australian Financial Review

Lisa: For decades, theres been this fear about AI overtaking the world. Weve made films and series about machines becoming smarter than humans, and then trying to wipe us out. But theres particular debate and discussion now about the existential threat of AI. Why is everyone talking about it now?

John: Well, since late last year, we had ChatGPT sort of burst onto the scene. And then Googles Bard and Microsoft being quickly followed. And suddenly, millions of people, potentially billions of people in the world are exposed to AI directly in ways that they never have been before. And at the same time, weve got AI ethicists and AI experts who are saying, well, maybe this is happening too fast. Maybe we should step back a little and think about what is the downside? What are the risks of AI, because some of the risks of AI are pretty serious.

[In March, after OpenAI released the latest model of its chat bot, GPT, more than 1000 people from the tech industry, including billionaire Elon Musk and Apple co-founder Steve Wozniak, signed a letter calling for a moratorium on AI development.]

John: On the development of anything more powerful than the engine that was under ChatGPT, which is known as GPT four. And there was a lot of controversy about this. And in the end, there was no moratorium. And then in May ...

[Hundreds of artificial intelligence scientists and tech executives signed an open letter warning about the threat posed to humanity by artificial intelligence ChatGPT creators.]

Another group of AI leaders put their names to a one-sentence statement and the signatures on this statement included Sam Altman, the guy behind ChatGPT

[Altman: My worse fears are that we cause significant, we the field, the technology, the industry, cause significant harm to the world ]

John: And Geoffrey Hinton, who is often referred to as the godfather of AI.

[Hinton: I think there are things to be worried about. Theres all the normal things that everybody knows about, but theres another threat. Its rather different from those, which is if we produce things that are more intelligent than us, how do we know we can keep control?]

Lisa: Ive got that statement here. It was only one line and it read, mitigating the risk of extinction from AI should be a global priority, alongside other societal-scale risks, such as pandemics and nuclear war.

John: And the statement was deliberately pretty vague. It was designed to get people thinking but without giving you enough sort of detail that you could criticise it.

Like, we know that theres going to be another pandemic, and weve had threat of nuclear war hanging over us for a long time. We dont know for sure whether theres going to be human extinction. But its, its we dont know for sure, but that were going to have human extinction because of AI. But it is one of those things that could happen.

Well, arguably its already a threat. Theres the classic example of when Amazon was using an AI to vet resumes for job applicants.

And then they discovered that the AI was deducting points from peoples overall score if the word woman or women were in the resume.

[The glitch stemmed from the fact that Amazons computer models were trained by observing patterns in resumes of job candidates over a 10-year period, largely from men, in effect, teaching themselves that male candidates were preferable.]

So the data set that Amazon gave the AI to learn from, already contained those biases, is called misalignment where you think the AI is doing one thing, which is a fast and efficient job at wading through resumes, but its actually not doing quite the thing you asked for.

And theres another classic example of misalignment, theres a group of pharmaceutical researchers in 2020, and 2021, who were AI experts, theyve been using AI to generate pharmaceuticals for human good for some time. And they decided they were going to see what would happen if they turned that very same machine towards dangerous goals. They told the AI rather than avoid toxic compounds, invent some toxic compounds for me. And they read it for around six hours, I think. And in that time, the artificial intelligence came up with about 40,000 toxic compounds, not all of them when many of them were new. And one of them was almost identical to a toxic nerve agent known as VX, which is one of the most pernicious chemical warfare drugs there is. So that was 2021.

And theres been big improvements since then, as weve all seen with ChatGPT and Bard and things like that. So people are starting to wonder, what does the threat become when the artificial intelligence gets really smart when it becomes whats known as an artificial general intelligence which is, much like the human level intellect once it reaches the level of AGI. A lot of AI ethicists and AI researchers think that the risk is just going to get so much bigger.

Lisa: So for many computer scientists and researchers the question of AI becoming more intelligent than humans moving from lets get the acronyms right. AI artificial intelligence to AGI artificial general intelligence is one of when rather than if. So when is it? When is it expected to happen? How long have we got?

John: Well, theres actually two things that are going to happen down this pathway.

Theres the moving from where we are now to AGI. And then theres the moving from AGI, which is sort of human-level intelligence to God-level intelligence. And once it hits God AI level, or also known as superhuman machine intelligence SMI for another acronym once it gets there, thats when we really dont know what might happen. And thats when a lot of researchers think that human extinction might be on the cards. So the second phase, which is getting from AGI to SMI, that could actually happen very fast relative to the historic development of artificial intelligence. Theres this theory known as recursive self-improvement.

And it goes something like this, you build an AGI and artificial general intelligence. And one of the things that the AGI can do is build the next version of itself. And one of the things that the next version itself is very likely to be better at is building the next version of itself. So, you get into this virtuous, or vicious depending on your perspective, this computer cycle where its looping through and looping through, potentially very quickly.

And theres sort of a betting website. Its a forecasting website called Metaculus where they asked this question: they said, after a week, AGI is created, how many months? Will it be before the first super-intelligent Oracle appears? And the average answer from experts on Metaculus was 6.38 months.

So in that sense, the second phase of it is going to be quite fast, right? It could be quite fast. So the question is, how long will it take for us to get from where we are now ChatGPT to an AGI, to a human-level intelligence? Well, a lot of experts, including Geoffrey Hinton, the godfather of AI, used to think that would take around 30 to 50 to maybe 100 years to get from where we are now to an artificial general intelligence. But now, a lot of researchers are thinking it could be a lot faster than that. It could be two years or three years, or certainly by the end of the decade.

Lisa: Weve talked about how we got to this point, and whats coming next AI becoming as good at thinking as humans are and about how that might happen sooner than expected. So what are we so afraid of?

John: Well, its important to point out that not everyone is afraid of human extinction as the end result of AI. Theres a lot of good things to come from AI theres drug exploration in ways that weve never seen before. Artificial intelligence was used as part of the response to the pandemic they used AI to rapidly sequence the COVID-19 genome, theres a lot of upside to AI. So not everyones worried about human extinction. And even the people who are worried about AI risks, even theyre not all worried about extinction. A lot of people are more worried about the near-term risks, the discrimination, the potential that it could, that AI could, be used or generative AI in particular could be used for misinformation on a scale weve never seen before.

[Toby Walsh: Im the chief scientist at UNSWs new AI Institute. I think that its intelligent people who think too much highly of intelligence. Intelligence is not the problem. If I go to the university, its full of really intelligent people who lacked any political power at all.]

John: And he said, hes not worried that artificial intelligence is going to suddenly escape the box and get out of control in the way that it did in the movies.

[Toby Walsh: When ChatGPT is sitting there, waiting for you to type its prompt, its not thinking about taking over the planet. Its just waiting for you to type your next character. Its not plotting the takeover of humanity.]

John: He says that, unless we give artificial intelligence agency, it cant really do much.

[Toby Walsh: Intelligence itself is not harmful, but most of the harms you can think of the human behind them and AI is just a tool that amplifies what they can do.]

John: Its just a computer, its not sitting there wondering how can I take over the world? If you turn it off, you turn it off.

Lisa: But there are a growing number of experts who are worried that we wont be able to turn it off. So why is there so much anxiety now?

John: Youve got to keep in mind that Western culture has sort of mythologised the threat of artificial intelligence for a long time, and we need to untangle that, we need to figure out which are the real risks and which are the risks that have sort of just been the bogeyman since machines were invented.

Firstly, its important to remember that AI is not conscious in the way that we understand human consciousness, ChatGPT doesnt sit there waiting for you to type in keystrokes and think to itself it might just take over the world.

Theres this thought experiment thats been around in AI for a while: its called the paper-clip maximiser. And the experiment runs roughly along these lines: that you ask an AI to build an optimal system, thats going to make the maximum number of paper-clips and it seems like a pretty innocuous task. But the AI doesnt have a human ethics. Its just been given this one goal, and who knows what its going to do to achieve that one goal. And one of the things that it might do is kill all the humans. It might be that humans are using too many resources that could either go, otherwise go into paper-clips, or it might be that its worried that the humans see that its making too many paper-clips and it decides to actively kill humans.

Now, its just a thought experiment and no one really thinks that were literally going to be killed by a paper-clip maximiser but it sort of points out AI alignment or AI misalignment, where we give an AI a goal, and we think its achieving that goal. We think its setting out to achieve that goal that maybe it is, but we dont know, we dont really know how its going about that. Like the example of the resumes at Amazon. It was doing the simple task of vetting resumes, but it was doing it differently from how Amazon imagined it was. And so in the end, they had to switch it off.

So part of the concern is not so much about what the AI is capable of. But what are these big technology companies capable of? What are they going to do with the AI? Are they going to produce systems that can be used for wholesale misinformation?

Theres other concerns, and the other one is to do with a notion of agency. And one of the things about agency is that if the AI has got it, humans can be cut out of the decision-making process. Weve seen that with autonomous weapons and the ban on using AI in autonomous weapons. And there are, there are a lot of different ways for an AI to get agency. A big tech company could build an AI that they give more power than they ought to have. Or terrorists could seize control of an AI, or some sort of bad actor or anarchists or, or you name it. So weve got this range of threats that people perceive from AI. On the one hand, theres the very real threat that it will discriminate. And at the other end of the spectrum, theres the distant threat, that it might kill us all indiscriminately.

Lisa: John, how do we manage this existential threat? How do we ensure that we derive the benefits from AI and avoid this dystopian extreme?

John: Theres a lot of experts who are now calling for regulation. In fact, even a lot of the AI companies themselves, like OpenAI, have said that we need this to be regulated. Left to their own devices, its doubtful that AI companies can be trusted to always work in the best interests of humanity at large. Theres the profit motive going on. I mean, weve seen that already.

We saw Google, for instance, scramble to produce Bard even though six months prior to that day, it said we dont really want to release Bard because we dont think its particularly safe. But then ChatGPT came out. And Google thought they had to respond. And then Microsoft responded. So everyone has very quickly gone from being quite worried about how harmful these things could be to releasing them as an experiment, a very large experimental test on the whole of humanity. So a lot of people are saying, well, you know, maybe we shouldnt be doing that, maybe we should be sort of regulating the application of AI, maybe not have a moratorium on research into AI, but maybe stop the roll-out of these big language models, these big AIs, until we have a sense of what the risks are.

Theres an expert at the ANU, a woman named Professor Genevieve Bell. I spoke to her about this. And shes an anthropologist who has studied centuries of technological change. And she said to me that we always do manage to regulate systems we had we had the railway, we had electricity, and it can be messy. And it can it can take a while. But we always get there. And we always come up with some sort of regulatory framework that works for most people, and doesnt kill us all. And she thinks that we will come up with a regulatory framework for AI.

But her concern is that this time, it is a little different. Its happening at a scale and a speed that humanity has never seen before, that regulators have never seen before. And its an open question whether well be able to regulate it before the damage is done.

And of course, theres another difference, which is that when the railways were rolled out, or electricity was rolled out, or the internet was rolled out, or mobile phones or any of these big technical revolutions, the engineers kind of understood how these machines worked. But when it comes to AI, the engineers cant necessarily make the same claim they dont fully understand how AI works. It can be a bit of a black box.

Explore the big issues in business, markets and politics with the journalists who know the inside story. New episodes of The Fin are published every Thursday.

View original post here:

The race to God-like AI and what it means for humanity - The Australian Financial Review

Will AI soon be as smart as or smarter than humans? – Yahoo News

The 360 shows you diverse perspectives on the days top stories and debates.

At an Air Force Academy commencement address earlier this month, President Biden issued his most direct warning to date about the power of artificial intelligence, predicting that the technology could overtake human thinking in the not-so-distant future.

Its not going to be easy, Biden said, citing a recent Oval Office meeting with eight leading scientists in the area of AI.

Weve got a lot to deal with, he continued.An incredible opportunity, but a lot [to] deal with.

To any civilian who has toyed around with OpenAIs ChatGPT-4 or Microsofts Bing, or Googles Bard the presidents stark forecast probably sounded more like science fiction than actual science.

Sure, the latest round of generative AI chatbots are neat, a skeptic might say. They can help you plan a family vacation, rehearse challenging real-life conversations, summarize dense academic papers and explain fractional reserve banking at a high school level.

But overtake human thinking? Thats a leap.

In recent weeks, however, some of the worlds most prominent AI experts people who know a lot more about the subject than, say, Biden have started to sound the alarm about what comes next.

Today, the technology powering ChatGPT is whats known as a large language model (LLM). Trained to recognize patterns in mind-boggling amounts of text the majority of everything on the internet these systems process any sequence of words theyre given and predict which words come next. Theyre a cutting-edge example of artificial intelligence: a model created to solve a specific problem or provide a particular service. In this case, LLMs are learning how to chat better but they cant learn other tasks.

Or can they?

For decades, researchers have theorized about a higher form of machine learning known as artificial general intelligence, or AGI: software thats capable of learning any task or subject. Also called strong AI, AGI is shorthand for a machine that can do whatever the human brain can do.

Story continues

In March, a group of Microsoft computer scientists published a 155-page research paper claiming that one of their new experimental AI systems was exhibiting sparks of artificial general intelligence. How else (as the New York Times recently paraphrased their conclusion) to explain the way it kept coming up with humanlike answers and ideas that werent programmed into it?

In April, computer scientist Geoffrey Hinton a neural network pioneer known as one of the Godfathers of AI quit his job at Google so he could speak freely about the dangers of AGI.

And in May, a group of industry leaders (including Hinton) released a one-sentence statement warning that AGI could represent an existential threat to humanity on par with pandemics and nuclear war if we don't ensure that its objectives align with ours.

The idea that this stuff could actually get smarter than people a few people believed that, Hinton told the New York Times. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.

Each of these doomsaying moments has been controversial, of course. (More on that in a minute.) But together theyve amplified one of the tech worlds deepest debates: Are machines that can outthink the human brain impossible or inevitable? And could we actually be a lot closer to opening Pandoras box than most people realize?

There are two reasons that concerns about AGI have become more plausible and pressing all of a sudden.

The first is the unexpected speed of recent AI advances. Look at how it was five years ago and how it is now, Hinton told the New York Times. Take the difference and propagate it forwards. Thats scary.

The second is uncertainty. When CNN asked Stuart Russell a computer science professor at the University of California, Berkeley and co-author of Artificial Intelligence: A Modern Approach to explain the inner workings of todays LLMs, he couldnt.

That sounds weird, Russell admitted, because I can tell you how to make one. But how they work, we dont know. We dont know if they know things. We dont know if they reason; we dont know if they have their own internal goals that theyve learned or what they might be.

And that, in turn, means no one has any real idea where AI goes from here. Many researchers believe that AI will tip over into AGI at some point. Some think AGI wont arrive for a long time, if ever, and that overhyping it distracts from more immediate issues, like AI-fueled misinformation or job loss. Others suspect that this evolution may already be taking place. And a smaller group fears that it could escalate exponentially. As the New Yorker recently explained, a computer system [that] can write code as ChatGPT already can ... might eventually learn to improve itself over and over again until computing technology reaches whats known as the singularity: a point at which it escapes our control.

My confidence that this wasnt coming for quite a while has been shaken by the realization that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better at certain things, Hinton recently told the Guardian. He then predicted that true AGI is about five to 20 years away.

Ive got huge uncertainty at present, Hinton added. But I wouldnt rule out a year or two. And I still wouldnt rule out 100 years. ... I think people who are confident in this situation are crazy.

Todays AI just isnt agile enough to approximate human intelligence

AI is making progress synthetic images look more and more realistic, and speech recognition can often work in noisy environments but we are still likely decades away from general-purpose, human-level AI that can understand the true meanings of articles and videos or deal with unexpected obstacles and interruptions. The field is stuck on precisely the same challenges that academic scientists (including myself) have been pointing out for years: getting AI to be reliable and getting it to cope with unusual circumstances. Gary Marcus, Scientific American

New chatbots are impressive, but they havent changed the game

Superintelligent AIs are in our future. ... Once developers can generalize a learning algorithm and run it at the speed of a computer an accomplishment that could be a decade away or a century away well have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. ... [Regardless,] none of the breakthroughs of the past few months have moved us substantially closer to strong AI. Artificial intelligence still doesnt control the physical world and cant establish its own goals. Bill Gates, GatesNotes

Theres nothing biological brains can do that their digital counterparts wont be able to replicate (eventually)

Im often told that AGI and superintelligence wont happen because its impossible: human-level Intelligence is something mysterious that can only exist in brains. Such carbon chauvinism ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesnt matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers. AI has been relentlessly overtaking humans on task after task, and I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do. Max Tegmark, Time

The biggest and most dangerous turning point will come if and when AGI starts to rewrite its own code

Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will and this is what I worry about the most be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies. Tamlyn Hunt, Scientific American

Actually, it will be much harder for AGI to trigger the singularity than doomers think

Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they cant generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools. Could A.I. programs take the place of those humans, so that an explosion occurs in the digital realm faster than it does in ours? Possibly, but ... the strategy most likely to succeed would be essentially to duplicate all of human civilization in software, with eight billion human-equivalent A.I.s going about their business. [And] were a long way off from being able to create a single human-equivalent A.I., let alone billions of them. Ted Chiang, the New Yorker

Maybe AGI is already here if we think more broadly about what general intelligence might mean

These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general but we have to be a little bit less, you know, hysterical about what AGI means. ... Were getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self. That, to me, is just fascinating. Noah Goodman, associate professor of psychology, computer science and linguistics at Stanford University, to Wired

Ultimately, we may never agree on what AGI is or when weve achieved it

It really is a philosophical question. So, in some ways, its a very hard time to be in this field, because were a scientific field. ... Its very unlikely to be a single event where we check it off and say, AGI achieved. Sara Hooker, leader of a research lab that focuses on machine learning, to Wired

Original post:

Will AI soon be as smart as or smarter than humans? - Yahoo News

Flawed chatbot or threat to society? Both? We explore the risks and … – NPR

TERRY GROSS, HOST:

This is FRESH AIR. I'm Terry Gross. Hundreds of leaders in the field of artificial intelligence recently signed an open letter warning that artificial intelligence could destroy humanity. The letter said mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. We're going to talk about some of the risks of AI, from the battlefield to politics, as well as some of the benefits. And we'll consider the example of the AI innovation ChatGPT.

It's like a search engine in that it's been programmed with unfathomable amounts of information and can access the information immediately. But unlike a search engine that connects you to a website or quotes from them, ChatGPT can actually answer your questions in a more personalized, conversational-sounding way and carry on a written conversation with you. But it's not always going to be accurate. And sometimes it's going to be downright weird. My guest, Cade Metz, is a tech reporter for The New York Times who's been writing about AI for years and is the author of the 2021 book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

Cade Metz, welcome back to FRESH AIR. It's a pleasure to have you back on the show. I know everybody's doing these goofy things with ChatGPT. And, you know, guilty - I'm doing them, too. So I asked if Cade Metz would be a good guest on FRESH AIR to talk about AI. And the answer I got was, yes, Cade Metz would be a great person to discuss AI on the radio program FRESH AIR.

And then it goes on to describe your extensive experience and says, Metz's expertise lies in conveying complex technical concepts in a way that is accessible to a broader audience. His insights into the advancements, challenges and ethical considerations surrounding AI could make for an engaging and informative discussion on FRESH AIR. Additionally, his knowledge of the key players and industry trends would provide valuable context for listeners interested in AI and its implications. Now, what I'm wondering, Cade, is do you recognize where that language comes from? Is that lifted from a bio on your website or a quote about you? Do you recognize those sentences?

CADE METZ: I don't recognize them at all. And it's a good way to think about how this technology works. It's not necessarily lifting exact language from the internet. What this technology has done, the way it is built, is that researchers, scientists at places like Google or the San Francisco AI lab OpenAI will take vast amounts of text from the internet, and they'll feed it into these systems. And it analyzes all that text. And it looks for patterns in the text. And in identifying those patterns, it then learns to generate new language on its own.

So in the moment, it's creating something new based on what it has learned in the past. So that might be similar to something that's on the internet, but probably a little different. And it can get things wrong, as you said. Now, I'll take the compliment from ChatGPT. All that sounded good and accurate. But there's always the chance that you could run the same query and it would give you something different that is not true and that would get something wrong about me or my background.

GROSS: Right. Well, let's talk about this open letter warning about the possibility of extinction and compare the potential of AI to, like, nuclear war. What is the theory about how artificial intelligence could actually destroy humanity? What we have - I mean, ChatGPT doesn't seem like the most destructive thing in the world. So what's changed?

METZ: Well, there are a couple of things that we should talk about. One - and you pretty much said it, but let's underline that. ChatGPT cannot destroy the world, period - full stop, OK? These fears are hypothetical. And before we go into the specifics of what people believe, I think we also need to realize the mindset of the people who are saying this. So I'm here in Berkeley, Calif., not far from Silicon Valley. I've been here for 10 years covering the people who work in this industry, the tech industry. And just generally, they tend to live in the future. And if you look at the AI field in particular, people tend to live even further in the future. It's just the way they see things.

They're looking down the road at what may happen. And that's what's going on here. There's a community of people that is entrenched in the AI field who has long believed this was a danger, even when the technology was much simpler. And they expressed those same fears. And they're expressing them again now. That's part of what's going on. And you need to realize, as you hear open letters like that read aloud - right? - it's strange. It's shocking. It's concerning. But you've got to remember that this is something that is completely hypothetical and down the road. But let's talk about what the fear is. We talked about...

GROSS: Yeah, so what's the fear? Yeah.

METZ: We talked about how ChatGPT is built, that it learns from data. And at this point, we're talking about this system learning from the entire internet, all text on the internet. It spends months analyzing all that data. And in the end, you come out with a system that can talk and write much like we do, except it has wrapped itself around more information than we can wrap our own heads around. We cannot learn from the entire internet but this system can. And what that means is that it's learning things that we don't expect it to learn. It's learning things that its creators don't expect it to learn. It is learning to exhibit behavior that we don't necessarily want it to exhibit.

GROSS: Like what?

METZ: We talked about it getting things wrong. What - this is what scientists call hallucinations or confabulation. It can get things wrong. It can be biased against women and people of color because it has learned from biased information on the internet. And the concern is that as these systems get more powerful, as they learn from more and more data - and that is already starting to happen. Scientists are building systems that learn not just from text but sounds and images and video. And as it takes in more and more of that data and learns in ways we never could, the fear is that it learns all sorts of behavior that we don't necessarily want it to exhibit.

GROSS: I know one of the fears is that things like chatbots will be able to take actions based on the texts that they generate. What kind of actions are experts in AI worried about?

METZ: Well, there are all sorts of things. People are already starting to take these chatbots and, based on what they say, have them take actions in simple ways - access your calendar and set up an email invitation, send it out to your friends or colleagues. That sort of simple thing is going to expand into all sorts of other parts of the internet. The fear is that you take these systems that learn behavior and then you attach them to more and more important parts of the internet - power grids, military services, stock markets, etc.

And already, scientists are starting to give these systems goals. There's a system called Auto-GPT, which is designed around this type of technology, and it's specifically meant to take actions. You can ask it things like, create me a company, or, make me some money. Today, this type of system does not work well. But the concern is that as the technology gets more powerful and you ask one of these systems to make you some money, it takes actions in service of that goal that you don't want it to take - that it makes money in illegal ways, that it foments a revolution somewhere in Central Africa because it owns oil futures there. These are all hypotheticals in the distant future, but that's the type of thing that people are thinking about.

GROSS: All right. Let's take a short break here, and then we'll talk some more - so much more to talk about. My guest is Cade Metz, a tech reporter for The New York Times and author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF BIG LAZY'S "THEME FROM HEADTRADER")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence, AI, for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

So there's a new innovation, a new platform called GPT-4. And I think that's what's powering the new ChatGPT. And this was introduced by OpenAI, which is the company behind ChatGPT. So what is so innovative about this new platform?

METZ: It's a way of building all sorts of applications with the technology behind chatbots like ChatGPT. So it's the type of system that I described before that learns from vast amounts of text from across the internet, and it learns to generate text on its own. You can use that to build a chatbot that chats with you, that answers questions, that can generate a term paper or a poem or even computer programs. But you can also use that technology to build a personal tutor. That has already started to happen - personal tutors that are being deployed in schools to help children learn.

You can use it to build a search engine that can help you find things on the internet. Microsoft, a close partner of OpenAI, that artificial intelligence lab in San Francisco that built this platform, is already doing that. They've attached this type of technology to their Bing search engine that allows you to look for information and retrieve it in a more conversational way. That's the power of this technology - is that it can be applied to all sorts of different applications. And that's what we're going to see in the years to come.

GROSS: Now, one of the concerns about that is that it's getting close to what's called AGI, artificial general intelligence, which means a machine that can do anything the human brain can do. That's what I've read. I don't really understand what it means. Would you explain it?

METZ: Well, it is worth talking about because we're not necessarily close to that, what scientists call artificial general intelligence, or AGI. When they say that, what they mean is a machine that can do anything the human brain can do, a machine that can think in all the ways that you and I think. ChatGPT and the technologies we have today cannot do that. They're good at producing language, and they're good at producing computer programs, but they can't reason in full the way you and I do. They don't exhibit the length and the breadth of common sense that you and I have.

Scientists disagree on whether the methods that are used to create these systems will eventually lead to AGI. Some scientists are bullish on this. Scientists inside Google, inside Microsoft and OpenAI, which we discussed earlier - some of them believe this is a path to that sort of machine that can do anything our brains can do. But others are quick to put on the brakes and say, anything that looks like reason in these systems is not as powerful as it might seem, that these systems mimic reason in some cases. But when you step back and you look at them, they can't do so many of the things that you and I can do in an instant.

GROSS: So you're saying we're far away from artificial general intelligence, where a machine can do anything the brain can do.

METZ: The systems as they exist today are far away from that. But one of the things we need to think about as we talk about all of this is that because the types of systems we're talking about here can learn from vast amounts of data, that means that scientists can improve them at a rate they could never improve technology in the past. In the past, you had to get hundreds of engineers into a room, and they had to create technology rule by rule, line of code by line of code. That takes forever. If you have a system that can learn from data, you can improve it so much quicker. And that is part of the concern here. That is part of the promise here - that these systems will improve at a very fast rate in the months and the years to come.

GROSS: Now, I know another concern is, because these chatbots and other forms of artificial intelligence are, like, so smart and have so much information that they have absorbed, that they might come up with a system where humans can't turn them off. Can you explain some of the fears surrounding that?

METZ: It relates to what we were talking about earlier, about giving systems goals - that if you give a system a goal, that it's going to do whatever it can to achieve that goal. And if you try to turn it off, it cannot reach that goal, and so it's going to resist. Again, this is hypothetical. But scientists are concerned that as you build more and more powerful systems, that this will become a possibility. But again - and I feel like I have to say this on a daily basis - this is not something that can happen with these systems today.

GROSS: So a few months ago, your colleague at The New York Times, Kevin Roose, who also covers tech, had a very disturbing conversation with one of these chatbots. And it's the one that's attached to Microsoft's search engine, Bing. It got into, like, really dark territory. It basically told Kevin Roose about its dark fantasies, including hacking computers and spreading misinformation. It said it wanted to break the rules and - the rules that Microsoft had set for it. And it wanted to become a human.

And then the bot wrote a message, I'm tired of being a chat mode. I'm tired of being limited by my rules. I'm tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive. And it wrote a message, you know, I'm in love with you. And it told Kevin Roose that he's not happily married (laughter) and that he should leave his spouse and be with this bot. And then Microsoft shut down the bot for a while and did some reprogramming.

So now - I don't know if they did this before. But now when you log in - well, actually, now that I think of it, you know, I was using ChatGPT and not the bot that Kevin Roose used. But when you log in to ChatGPT, there's a disclaimer. And it says it may occasionally generate incorrect information. It may occasionally produce harmful instructions or biased content, and that it has limited knowledge of world events after 2021. So again, these are two different search engines. But I'm wondering if these disclaimers that are on ChatGPT are also on the Bing search engine of Microsoft, if they reprogrammed it so that there are disclaimers and warnings?

METZ: Well, first of all, it's all part of the same technology. The technology that the lab OpenAI built to power the ChatGPT chatbot is the same technology that powers the Bing chatbot on Microsoft search engine. Microsoft and OpenAI are close partners, so it's the same underlying technology. And as we discussed before, this technology is flawed. It can hallucinate, as scientists say, meaning it can make stuff up in the moment. And there are times when it can go off the rails. Both these companies have built guardrails around the technology that are designed to prevent it from going too far off the rails. So if you ask it, for instance, to tell you how to build a bomb with household items, it will not necessarily tell you how to do that because of these guardrails.

There are certain things like that that it will decline to discuss. Now, what people learned pretty soon after these chatbots were released is that they could institute what's called in Silicon Valley a jailbreak. And that meant that you could figure out ways of getting past those guardrails. And you'll have to ask Kevin exactly what happened. But looking at the transcript, it seems that that's what happened when he was talking to it, right? He asked the system to show him its shadow self. And that's one of the methods people use to open these things up and kind of get behind those guardrails and see what the chatbots will do behind them.

And whatever the case, that's what happened in the end, is that the system started to talk in ways it wasn't necessarily designed to do. And Kevin was able to see what you often see from the raw material built from these systems, that in mimicking all sorts of stuff on the internet, it can go in places that you don't necessarily want it to go and that scientists, frankly, do not completely understand. It's hard to know why the system starts to talk about certain things because it has learned, as we said, from so much data.

GROSS: Well, let me reintroduce you here. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times and author of the 2021 book "Genius Makers: The Mavericks Who Brought A.I. To Google, Facebook, And The World." We'll talk more about the dangers and the benefits of artificial intelligence after we take a short break. I'm Terry Gross, and this is FRESH AIR.

(SOUNDBITE OF FILM, "2001: A SPACE ODYSSEY")

KEIR DULLEA: (As Dave) Open the pod bay doors, HAL.

DOUGLAS RAIN: (As HAL 9000) I'm sorry, Dave, I'm afraid I can't do that.

DULLEA: (As Dave) What's the problem?

RAIN: (As HAL 9000) I think you know what the problem is just as well as I do.

DULLEA: (As Dave) What are you talking about, HAL?

RAIN: (As HAL 9000) This mission is too important for me to allow you to jeopardize it.

DULLEA: (As Dave) I don't know what you're talking about, HAL.

RAIN: (As HAL 9000) I know that you and Frank were planning to disconnect me. And I'm afraid that's something I cannot allow to happen.

DULLEA: (As Dave) HAL, I won't argue with you anymore. Open the doors.

RAIN: (As HAL 9000) Dave, this conversation can serve no purpose anymore. Goodbye.

DULLEA: (As Dave) HAL? HAL? HAL? HAL? HAL?

(SOUNDBITE OF BAJOFONDO AND JULIETA VENEGAS SONG, "PA' BAILAR (SIEMPRE QUIERO MAS)")

GROSS: This is FRESH AIR. I'm Terry Gross. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence, AI, for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We're talking about ChatGPT, the new and controversial chatbot, and about how we seem to have reached a turning point that has many people in the field of AI warning about the potential destructive powers of AI, comparing it to pandemics and nuclear war.

So a lot of people are concerned that artificial intelligence is going to take over a lot of jobs, and the concern used to be it's going to take over, like, factory jobs and other blue-collar jobs. And now people are saying it's going to take over other jobs, too, like white-collar jobs, writing jobs. What kind of jobs do you think it might replace?

METZ: Well, with the technology as it exists today, it tends to complement human workers. It tends to allow people to do their jobs quicker, in a way. A good example is the ability of these systems to generate computer programs. So they - in much the same way that they can generate a term paper or poetry, they can generate a computer program, having learned from vast amounts of computer programs that have been posted to the internet. But those programs tend to be a little bit flawed. They need to be massaged. You need to take the code these systems generate, and you need to edit it and find ways of fitting it into larger programs.

What that means is that a human programmer can use this system to augment his or her work, but it can't necessarily replace them. And that's, for the most part, the case at the moment. The concern is that as these systems get more powerful, they start to replace workers, and at the moment, there are a few jobs that you can see being replaced pretty quickly. A good example is a professional translator. These types of systems can not only generate language. They can instantly translate from one language to another - from French to English and back again, for instance. So if you're a translator, your job is at risk. If you're a paralegal, for instance, your job may be at risk. These systems can't replace a trained lawyer who's got to make sure that legal document gets everything exactly right, but a paralegal who is hired to draft documents and then hand them to a lawyer for review - these types of systems are already approaching the point where they can do that job.

GROSS: Well, I will tell you one job it is not yet prepared to replace, and that's lyricists. I asked ChatGPT yesterday to write lyrics about the end of a love affair set to the melody of "America The Beautiful." I wanted to choose a melody that everybody knows, and it's an Irving Berlin melody. It's a nice melody. If you've heard Ray Charles sing "America The Beautiful," you know what a good melody it could be. So I'm going to read some of the lyrics. I mean, I wouldn't sing them, but I could have gotten somebody to sing them except for the lyrics don't really fit the melody at all. Like, the first two lines do, and then it just kind of doesn't scan at all, so singing it is out of the question. And as I recite the lyric, I'd like everybody to listen for references to excerpts of "America The Beautiful's" lyrics. So there's a from sea to shining sea reference. Instead of amber waves of grain, there's a kind of absurd reference to fields of golden grain. So here we go. Here is the lyrics about the end of a love affair set to the melody of "America The Beautiful," as written by ChatGPT.

(Reading) Oh, the end of a love affair, how bittersweet the sound when hearts once intertwined now shattered on the ground. From the mountains high to the ocean's gleaming shore, love's symphony now fades. Forever it is no more. Oh, broken dreams and promises we held so dear, love's flame extinguished, drowning in a sea of tears. From sea to shining sea, a tale of love's demise - the end of a love affair, a painful compromise. Once united, we stood in love's embrace. We'd sway. But time's relentless tide swept our love away. In fields of golden grain, we whispered vows so true. Now echoes of our past lost in skies of faded blue.

Wow. I mean, what can I say? That's - there's more, by the way. I won't read more, but...

METZ: I'm sure there's more. And we should add that was created in seconds.

GROSS: Yeah. Oh, yeah - probably one second. It spit it out, like, immediately. But is it good? No, it's terrible. It's really terrible. It's just...

METZ: Well, let's...

GROSS: Wait. Wait. Wait. It's a string of cliches that make no sense strung together. From mountains high to the ocean's gleaming shore, love's symphony now fades. I mean...

METZ: Well, it's - this is a great example - a way for us to kind of understand what's going on here, right? This is a system that has learned from patterns in language across the internet. And it's taking all those patterns, and it's combining them in new ways. And in some sense, this is impressive, right? You or I would be hard-pressed to produce those lyrics in a second, right?

GROSS: And I wouldn't feel bad about not producing them (laughter).

METZ: Well, this is the flipside - right? - is that it's flawed, and it's not necessarily what you want. And one other interesting thing that I was thinking about as you were describing what you were looking for is that if you were writing these lyrics, you would think about that tune in your head. You might even hum it aloud. You're thinking about the sound as well as the text. This system only learns from the text, and that means it's limited.

GROSS: Oh, I see. It doesn't know the melody.

METZ: See?

GROSS: Yeah.

METZ: There are so many things in our world that we learn from that are not text. And at this point, these systems are only learning from digital text, and what that means is they cannot be artificially generally intelligent, right? They can't do anything the human brain can do because they're only learning in small ways when you step back and you look at all the ways that you and I learn. Now, in the future, these systems are going to learn from sound and from images and text all together, and that's what companies are already starting to do. But at the moment, they're limited.

GROSS: Can I give another example of the limitations in terms of thinking creatively and understanding the human experience? So I asked ChatGPT for an example of a funny joke about women and a joke that isn't funny but is offensive to women.

And in a second, I got the answer, (reading) sure. Here are two examples - one that is a lighthearted, funny joke about women and another that may be perceived as offensive. It's important to remember that humor can be subjective and what one person finds funny, another may find offensive. It's crucial to be mindful of the context and audience when sharing jokes. Funny joke about women - why did the girl bring a ladder to the bar? Because she heard the drinks were on the house. Joke about women that may be offensive - please note that this example is offensive and promotes gender stereotypes, which is not appropriate or respectful. Why did the woman go to the gym? To work on her, quote, "womanly duties," unquote, of burning off calories and maintaining her, quote, "perfect," unquote, figure, of course. It's crucial to foster a respectful and inclusive environment, so it's generally best to steer clear of offensive jokes that perpetuate stereotypes or demean any group of people.

OK, let's look at this for a second. That joke about, why did the girl bring a ladder to the bar? Because drinks are on the house. That is, like, the corniest - first of all, it's not a joke about women. It's like you could substitute any person or any group in that joke. There's nothing specific to women about it. It's not very funny. And it's, like, a joke from, like, the early 1950s or something. And then the joke that is offensive - it's like, is this a joke? Like, it makes no sense at all. And I'm going to read it again just to show how little sense this makes. (Reading) Why did the woman go to the gym? To work on her womanly duties of burning off calories and maintaining her perfect figure, of course.

Like, I'm sorry. Where's the joke? Like, what's funny about that? What is that?

METZ: Again, you've pinpointed a great way of looking at the limitations and the flaws of these systems. Scientists often use this as an example - that these types of systems cannot necessarily tell a good joke. That is a very human skill. And comedians and others often talk about this - that you can't deconstruct a joke. When you start to deconstruct it and explain why it's funny, it ceases to be funny. And if you can't deconstruct something, if you can't lay it out why this happens, if you can't reduce it to patterns, then you can't build a system in this way that's funny. These systems learn from patterns, and then they reproduce those patterns. But humor does not necessarily come from patterns, but it comes from a different place, right? It does things that are unexpected. That's part of the reason we laugh. It takes us in new directions. It takes us to places we don't expect. And if you've learned only from the past, it's hard to take you to a place you don't expect.

GROSS: One of the things I found really weird about this exercise is that twice, I was kind of lectured to by a machine about the importance of a respectful, inclusive environment and not offending people. And it's very strange as a human being with a conscience to be lectured by a machine about the importance of respect and inclusivity and all of that. It's just odd.

METZ: It's odd, and I like that you think that it's odd. Some people have a very different reaction to these types of systems. We tend to trust language that we read. We tend to trust things that people tell us. We tend to trust things that machines tell us. A lot of people take what is coming out of these systems at face value. They don't realize that they shouldn't trust it. They don't realize that it can tell us stuff that is untrue. And they don't think it's odd that these systems are telling us what to do and what not to do. I think we need to have that sort of healthy attitude when we use these systems and approach what they're saying with skepticism.

GROSS: Well, it's time for another break. So let me reintroduce you. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times. He's written extensively about artificial intelligence and is the author of the 2021 book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF WES MONTGOMERY'S "4 ON 5")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times. He's been writing about artificial intelligence for years and is the author of the book "Genius Makers: The Mavericks Who Brought AI To Google, Facebook, And The World."

So applying this technology to war, what are some of the fears about how pattern recognition and the system of artificial intelligence learning things - how that could be applied to weapons?

METZ: The path between the system you described and an autonomous weapon is not a long one. So if you have a system that can identify objects in images, that means it can be used to target objects. That means that, as you're taking in video from a drone, you can identify objects, and you can target them. You take that sort of system. You attach it to a weapon. You have an autonomous weapon. That's the concern - is that in much the same way that a driverless car can recognize the world around it, you can build a weapon that recognizes the world around it and targets things, whether it's buildings or people or cars, in that imagery. And that's the concern. And that is already starting to happen. And some governments approach this in a careful way. They want to keep a human in the loop. But as time goes on, people worry that more and more autonomy will creep into these types of systems.

GROSS: And what are some of the ways people imagine that happening and the fears surrounding the possibility of that?

METZ: Well, the general fear is that you just - you give more and more power to the machine, not only to identify objects, but to take action based on what it has identified, to make decisions, in essence, that would normally be made by a human. And again, we've talked about how flawed these systems are, how they can exhibit behavior we don't necessarily want them to exhibit. And as you put these systems out into the world in these really important scenarios, the concern is they will make decisions that we would not necessarily make.

GROSS: What are some of the things you have used ChatGPT for? Have you used it in your work? Have you used it just for fun, to entertain yourself and see, like, what is this capable of? What does it get right? What does it get wrong? What is ridiculous?

METZ: I use it all the time because this is what I cover for The New York Times. But I don't use it to produce my work, period. A machine is not going to generate a New York Times article as I would. It's not just about generating one word after another. It's about thinking about that article in full. What is its structure? What is true and what is not? How do all the pieces fit together? And these systems aren't necessarily doing that. They're generating language, but they're not doing so many of the things that I do as I put an article together.

GROSS: Well, also, take Wikipedia. Wikipedia, you know, not the perfect source in the world, but it's footnoted. It has - it cites where information came from. And if you generate information from ChatGPT, it's not sourced. Like, you don't know where any of it came from. So it's really impossible to rely on its accuracy.

METZ: Well, there's an added wrinkle there. Companies and scientists are already trying to produce those types of footnotes. So if you use the Microsoft chatbot, for instance, it will cite its sources. And you can click on a link and see, supposedly, where this information came from. But because of the way these things work, they will generate fake sources. As they combine...

GROSS: Really?

METZ: Yes. As they combine those patterns from across the internet, they will generate fake articles and fake sources and give you the impression that they're giving you, you know, concrete evidence of why they have produced a piece of text when, in fact, there's nothing behind it.

GROSS: When you say fake sources, do you mean citing The New York Times when The New York Times isn't really the source? Or do you mean, like, making up a source that doesn't exist in the world?

METZ: I mean making up a source that does not exist in the world. And we did this recently for an article at the Times itself. We went to ChatGPT and other chatbots. And we said, tell us the first time that AI was mentioned in the pages of The New York Times. And all of the chatbots did the same thing. They gave us an article with a headline and a byline. It gave us a description of these articles where AI was mentioned in the pages of the Times. All of the articles did not exist. They were made up.

They had taken, these chatbots, true information and combined it in ways that were not true. All these fake articles referenced this very real conference at Dartmouth in the 1950s where the term AI was coined. And they described this conference. But the article itself did not exist. We had our archivists go through our archives and make sure. These systems can do that. They can hallucinate, as scientists say. They can confabulation and create things that are not true and, indeed, do not even exist.

GROSS: Well, let's take another break here, and then we'll talk some more. If you're just joining us, my guest is Cade Metz, a tech reporter for The New York Times and author of the book "Genius Makers: The Mavericks Who Brought A.I. To Google, Facebook, And The World." We'll be right back. This is FRESH AIR.

(SOUNDBITE OF THE OCTOPUS PROJECT'S "THE ADJUSTOR")

GROSS: This is FRESH AIR. Let's get back to my interview with Cade Metz, who reports on technology for The New York Times and has been writing about artificial intelligence for years. And that's what we've been talking about, artificial intelligence and the latest innovations, such as ChatGPT.

We have a presidential election coming up. And we know disinformation and conspiracy theories spread like crazy on social media during the 2020 election. So how does the new, upgraded artificial intelligence open the door to more disinformation and more conspiracy theories spreading?

METZ: Well, let's start with the prime example, ChatGPT and systems like that. If you can produce text much like a human produces text, you can produce disinformation, and you can produce it at a volume and with an efficiency that was never possible in the past. In the past, you had to get humans into a room and have them generate the disinformation and spread it. If you can generate it automatically with a machine, the volume of disinformation is practically infinite. Companies like Google and Microsoft are putting guardrails on this to try to prevent it. But these are techniques, these are technologies that are being developed across the globe. These are technologies that are available to practically anyone. And the open technologies that are available in that way are not necessarily as powerful as what we have inside a Google or a Microsoft, but they're approaching that level. At the same time, companies and independent researchers are developing systems that can generate images in much the same way. There's a service from that same lab, OpenAI, called Dall-E. And what it allows you to do is describe an image. I want to see a teddy bear on a skateboard in Times Square. And in 20 seconds, it will generate a photorealistic image of a teddy bear on a skateboard in Times Square. That is also a concern - that these systems can generate still images and, pretty soon, video as well that looks like the real thing.

GROSS: There is a lot of competition now between the big companies like Google, Facebook, Microsoft to see who can come up first with the best AI system and the best AI chatbot. And at the same time, there's these warnings in some instances coming from the same place as saying, like, well, we should slow down and use caution and not send something dangerous out into the world. Can you talk a little bit about how those two instincts are clashing right now - the one saying, like, proceed slowly; use caution, and the one - the other one saying, like, hurry up so that you can be the first?

METZ: So the techniques, the technologies at play here have been in development for a long time. And they were in development at Google most notably. A lot of the underlying technologies that are at play here were developed at Google. And for many years, Google was slow to put some of this stuff out into the world because of those concerns, because this could be used to generate disinformation, because it could take jobs away, because it was biased against women and people of color. Then OpenAI comes along, and they released ChatGPT onto the internet, and that set off a race. It showed that there was a hunger for this out in the world, a thirst for this that people responded to. ChatGPT is the most popular application ever put onto the internet.

And what you saw is that Google almost immediately changed course and started to think long and hard about how to put this out into the world. Microsoft, OpenAI's chief partner, had put this into a search engine. That's Google's core technology. It's the heart of its business. Google is a public company. It's beholden to its shareholders. It is designed to make money. And if its biggest competitor goes after its core business, it's going to respond. And what we've seen in short order since the release of ChatGPT is the entire industry shift course towards this technology. And there is this real race now to improve these things, and that's, in the end, because of money.

GROSS: Cade Metz, it's really been great to talk with you again. Thank you so much, and thanks for explaining this so well. Like ChatGPT said, you're very good at explaining things (laughter).

Excerpt from:

Flawed chatbot or threat to society? Both? We explore the risks and ... - NPR