Terminator? Skynet? No way. Machines will never rule the world, according to book by UB philosopher – Niagara Frontier Publications

Mon, Aug 22nd 2022 11:20 am

New book co-written by UB philosopher claims AI will never rule the world

AI that would match the general intelligence of humans is impossible, says SUNY Distinguished Professor Barry Smith

By the University at Buffalo

Elon Musk in 2020 said that artificial intelligence (AI) would surpass human intelligence within five years on its way to becoming an immortal dictator over humanity. But a new book co-written by a University at Buffalo philosophy professor argues that won't happen: not by 2025, not ever.

Barry Smith, Ph.D., SUNY Distinguished Professor in the Department of Philosophy in UB's College of Arts and Sciences, and Jobst Landgrebe, Ph.D., founder of Cognotekt, a German AI company, have co-authored "Why Machines Will Never Rule the World: Artificial Intelligence without Fear."

Their book presents a powerful argument against the possibility of engineering machines that can surpass human intelligence. Machine learning and all other working software applications, the proud accomplishments of those involved in AI research, are for Smith and Landgrebe far from anything resembling the capacity of humans. Further, they argue that any incremental progress unfolding in the field of AI research will, in practical terms, bring it no closer to the full functioning possibility of the human brain.

Smith and Landgrebe offer a critical examination of AI's unjustifiable projections, such as machines detaching themselves from humanity, self-replicating, and becoming full ethical agents. There cannot be a machine will, they say: every single AI application rests on the intentions of human beings, including intentions to produce random outputs. This means the Singularity, a point when AI becomes uncontrollable and irreversible (like a Skynet moment from the Terminator movie franchise), is not going to occur. Wild claims to the contrary serve only to inflate AI's potential and distort public understanding of the technology's nature, possibilities and limits.

Reaching across the borders of several scientific disciplines, Smith and Landgrebe argue that the idea of an artificial general intelligence (AGI), the ability of computers to emulate and go beyond the general intelligence of humans, rests on fundamental mathematical impossibilities analogous in physics to the impossibility of building a perpetual motion machine. AI that would match the general intelligence of humans is impossible because of the mathematical limits on what can be modeled and what is computable. These limits are accepted by practically everyone working in the field, yet researchers have thus far failed to appreciate their consequences for what an AI can achieve.

"To overcome these barriers would require a revolution in mathematics that would be of greater significance than the invention of the calculus by Newton and Leibniz more than 350 years ago," says Smith, one of the world's most cited contemporary philosophers. "We are not holding our breath."

Landgrebe points out that, "As can be verified by talking to mathematicians and physicists working at the limits of their respective disciplines, there is nothing even on the horizon which would suggest that a revolution of this sort might one day be achievable. Mathematics cannot fully model the behaviors of complex systems like the human organism," he says.

AI has many highly impressive success stories, and considerable funding has been dedicated toward advancing its frontier beyond its achievements in narrow, well-defined fields such as text translation and image recognition. Much of the investment to push the technology forward into areas requiring the machine counterpart of general intelligence may, the authors say, be money down the drain.

"The text generator GPT-3 has shown itself capable of producing different sorts of convincing outputs across many divergent fields," Smith says. "Unfortunately, its users soon recognize that mixed in with these outputs there are also embarrassing errors, so that the convincing outputs themselves begin to appear as nothing more than clever parlor tricks."

AI's role in sequencing the human genome led to suggestions for how it might help find cures for many human diseases; yet after 20 years of additional research (in which both Smith and Landgrebe have participated), little has been produced to support optimism of this sort.

"In certain completely rule-determined, confined settings, machine learning can be used to create algorithms that outperform humans," Smith says. "But this does not mean that they can discover the rules governing just any activity taking place in an open environment, which is what the human brain achieves every day."

Technology skeptics do not, of course, have a perfect record. They've been wrong about breakthroughs ranging from space flight to nanotechnology. But Smith and Landgrebe say their arguments are based on the mathematical implications of the theory of complex systems. For mathematical reasons, AI cannot mimic the way the human brain functions. In fact, the authors say it's impossible to engineer a machine that would rival the cognitive performance of a crow.

"An AGI is impossible," says Smith. "As our book shows, there can be no general artificial intelligence because it is beyond the boundary of what is even in principle achievable by means of a machine."

