Will AI soon be as smart as or smarter than humans? – Yahoo News

The 360 shows you diverse perspectives on the day's top stories and debates.

At an Air Force Academy commencement address earlier this month, President Biden issued his most direct warning to date about the power of artificial intelligence, predicting that the technology could "overtake human thinking" in the not-so-distant future.

"It's not going to be easy," Biden said, citing a recent Oval Office meeting with eight leading scientists in the area of AI.

"We've got a lot to deal with," he continued. "An incredible opportunity, but a lot [to] deal with."

To any civilian who has toyed around with OpenAI's ChatGPT-4, Microsoft's Bing, or Google's Bard, the president's stark forecast probably sounded more like science fiction than actual science.

Sure, the latest round of generative AI chatbots is neat, a skeptic might say. They can help you plan a family vacation, rehearse challenging real-life conversations, summarize dense academic papers and explain fractional reserve banking at a high school level.

But "overtake human thinking"? That's a leap.

In recent weeks, however, some of the world's most prominent AI experts, people who know a lot more about the subject than, say, Biden, have started to sound the alarm about what comes next.

Today, the technology powering ChatGPT is what's known as a large language model (LLM). Trained to recognize patterns in mind-boggling amounts of text (the majority of everything on the internet), these systems process any sequence of words they're given and predict which words come next. They're a cutting-edge example of artificial intelligence: a model created to solve a specific problem or provide a particular service. In this case, LLMs are learning how to chat better, but they can't learn other tasks.
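
To make that "predict which words come next" idea concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT works under the hood (real LLMs are neural networks with billions of learned parameters, not lookup tables); it simply counts, in a toy corpus, how often each word follows each other word and then guesses the most frequent continuation. The corpus and names here are invented for illustration.

```python
# Toy next-word predictor: count word pairings (bigrams) in a tiny
# corpus, then predict the most frequent continuation for a given word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# following[w] maps each word seen after w to how often that pairing occurred.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("sat"))  # -> "on" (the only word ever seen after "sat")
print(predict_next("the"))  # -> "cat" (all continuations tie; first seen wins)
```

An LLM does the same job at vastly greater scale, scoring every word in its vocabulary as a possible continuation of the entire preceding text rather than looking up a single previous word.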

Or can they?

For decades, researchers have theorized about a higher form of machine learning known as artificial general intelligence, or AGI: software that's capable of learning any task or subject. Also called "strong AI," AGI is shorthand for a machine that can do whatever the human brain can do.


In March, a group of Microsoft computer scientists published a 155-page research paper claiming that one of their new experimental AI systems was exhibiting "sparks of artificial general intelligence." How else (as the New York Times recently paraphrased their conclusion) to explain the way it kept coming up with humanlike answers and ideas that weren't programmed into it?

In April, computer scientist Geoffrey Hinton, a neural network pioneer known as one of the "Godfathers of AI," quit his job at Google so he could speak freely about the dangers of AGI.

And in May, a group of industry leaders (including Hinton) released a one-sentence statement warning that AGI could represent an existential threat to humanity on par with pandemics and nuclear war if we don't ensure that its objectives align with ours.

"The idea that this stuff could actually get smarter than people, a few people believed that," Hinton told the New York Times. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Each of these doomsaying moments has been controversial, of course. (More on that in a minute.) But together they've amplified one of the tech world's deepest debates: Are machines that can outthink the human brain impossible or inevitable? And could we actually be a lot closer to opening Pandora's box than most people realize?

There are two reasons that concerns about AGI have become more plausible and pressing all of a sudden.

The first is the unexpected speed of recent AI advances. "Look at how it was five years ago and how it is now," Hinton told the New York Times. "Take the difference and propagate it forwards. That's scary."

The second is uncertainty. When CNN asked Stuart Russell, a computer science professor at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach, to explain the inner workings of today's LLMs, he couldn't.

"That sounds weird," Russell admitted, "because I can tell you how to make one. But how they work, we don't know. We don't know if they know things. We don't know if they reason; we don't know if they have their own internal goals that they've learned or what they might be."

And that, in turn, means no one has any real idea where AI goes from here. Many researchers believe that AI will tip over into AGI at some point. Some think AGI won't arrive for a long time, if ever, and that overhyping it distracts from more immediate issues, like AI-fueled misinformation or job loss. Others suspect that this evolution may already be taking place. And a smaller group fears that it could escalate exponentially. As the New Yorker recently explained, "a computer system [that] can write code as ChatGPT already can ... might eventually learn to improve itself over and over again until computing technology reaches what's known as 'the singularity': a point at which it escapes our control."

"My confidence that this wasn't coming for quite a while has been shaken by the realization that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better at certain things," Hinton recently told the Guardian. He then predicted that true AGI is about five to 20 years away.

"I've got huge uncertainty at present," Hinton added. "But I wouldn't rule out a year or two. And I still wouldn't rule out 100 years. ... I think people who are confident in this situation are crazy."

Today's AI just isn't agile enough to approximate human intelligence

"AI is making progress (synthetic images look more and more realistic, and speech recognition can often work in noisy environments), but we are still likely decades away from general-purpose, human-level AI that can understand the true meanings of articles and videos or deal with unexpected obstacles and interruptions. The field is stuck on precisely the same challenges that academic scientists (including myself) have been pointing out for years: getting AI to be reliable and getting it to cope with unusual circumstances." Gary Marcus, Scientific American

New chatbots are impressive, but they haven't changed the game

"Superintelligent AIs are in our future. ... Once developers can generalize a learning algorithm and run it at the speed of a computer (an accomplishment that could be a decade away or a century away), we'll have an incredibly powerful AGI. It will be able to do everything that a human brain can, but without any practical limits on the size of its memory or the speed at which it operates. ... [Regardless,] none of the breakthroughs of the past few months have moved us substantially closer to strong AI. Artificial intelligence still doesn't control the physical world and can't establish its own goals." Bill Gates, GatesNotes

There's nothing biological brains can do that their digital counterparts won't be able to replicate (eventually)

"I'm often told that AGI and superintelligence won't happen because it's impossible: human-level intelligence is something mysterious that can only exist in brains. Such 'carbon chauvinism' ignores a core insight from the AI revolution: that intelligence is all about information processing, and it doesn't matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers. AI has been relentlessly overtaking humans on task after task, and I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do." Max Tegmark, Time

The biggest and most dangerous turning point will come if and when AGI starts to rewrite its own code

"Once AI can improve itself, which may be not more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will, and this is what I worry about the most, be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies." Tamlyn Hunt, Scientific American

Actually, it will be much harder for AGI to trigger the singularity than doomers think

"Computer hardware and software are the latest cognitive technologies, and they are powerful aids to innovation, but they can't generate a technological explosion by themselves. You need people to do that, and the more the better. Giving better hardware and software to one smart individual is helpful, but the real benefits come when everyone has them. Our current technological explosion is a result of billions of people using those cognitive tools. Could A.I. programs take the place of those humans, so that an explosion occurs in the digital realm faster than it does in ours? Possibly, but ... the strategy most likely to succeed would be essentially to duplicate all of human civilization in software, with eight billion human-equivalent A.I.s going about their business. [And] we're a long way off from being able to create a single human-equivalent A.I., let alone billions of them." Ted Chiang, the New Yorker

Maybe AGI is already here, if we think more broadly about what general intelligence might mean

"These days my viewpoint is that this is AGI, in that it is a kind of intelligence and it is general, but we have to be a little bit less, you know, hysterical about what AGI means. ... We're getting this tremendous amount of raw intelligence without it necessarily coming with an ego-viewpoint, goals, or a sense of coherent self. That, to me, is just fascinating." Noah Goodman, associate professor of psychology, computer science and linguistics at Stanford University, to Wired

Ultimately, we may never agree on what AGI is or when we've achieved it

"It really is a philosophical question. So, in some ways, it's a very hard time to be in this field, because we're a scientific field. ... It's very unlikely to be a single event where we check it off and say, 'AGI achieved.'" Sara Hooker, leader of a research lab that focuses on machine learning, to Wired
