Dangers & Risks of Artificial Intelligence

Due to hype and popular fiction, the dangers of artificial intelligence (AI) are typically associated in the public eye with sci-fi horror scenarios: killer robots and hyper-intelligent computer systems that consider humanity a nuisance to be eliminated for the good of the planet. While such nightmares often play out as overblown and silly in comic books and on screen, the real risks of artificial intelligence cannot be dismissed so lightly.

In this article, we'll be looking at some of the real risks of artificial intelligence, and why AI can be dangerous in certain contexts or when wrongly applied.

Artificial intelligence encompasses a range of technologies and systems, ranging from Google's search algorithms, through smart home gadgets, to military-grade autonomous weapons. So issuing a blanket confirmation or denial in answer to the question "Is artificial intelligence dangerous?" isn't that simple; the issue is much more nuanced than that.

Most artificial intelligence systems today qualify as "weak" or "narrow" AI: technologies designed to perform specific tasks such as searching the internet, responding to environmental changes like temperature, or recognizing faces. Generally speaking, narrow AI performs better than humans at those specific tasks.

For some AI developers, however, the Holy Grail is "strong" AI, or artificial general intelligence (AGI): a level of technology at which machines would have a much greater degree of autonomy and versatility, enabling them to outperform humans in almost all cognitive tasks.

While the superintelligence of strong AI has the potential to help us eradicate war, disease, and poverty, there are significant dangers of artificial intelligence at this level. However, there are those who question whether strong AI will ever be achieved, and others who maintain that if and when it does arrive, it can only be beneficial.

Optimism aside, as technologies and algorithms grow more sophisticated, AI can become dangerous when its goals or implementation run contrary to our own expectations or objectives. This risk holds even at the level of narrow or weak AI. If, for example, a home or in-vehicle thermostat system is poorly configured or hacked, its operation could pose a serious hazard to human health through overheating or freezing, as the sketch below illustrates. The same applies to smart city management systems or autonomous vehicle steering mechanisms.
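To make the thermostat example concrete, here is a minimal sketch in Python (all names and thresholds are hypothetical, not taken from any real product) showing the kind of hard-coded safety envelope that a poorly configured or hacked system would lack:

# Hypothetical thermostat control loop. The safety envelope is fixed in
# firmware, so no remote setpoint -- legitimate, buggy, or malicious --
# can drive the temperature into a dangerous range.

SAFE_MIN_C = 5.0   # below this, freezing becomes a hazard
SAFE_MAX_C = 35.0  # above this, overheating becomes a hazard

def clamp_setpoint(requested_c: float) -> float:
    """Constrain any requested target temperature to the safe range."""
    return max(SAFE_MIN_C, min(SAFE_MAX_C, requested_c))

def control_step(current_c: float, requested_c: float) -> str:
    target = clamp_setpoint(requested_c)  # a hacked setpoint is bounded here
    if current_c < target - 0.5:
        return "heat_on"
    if current_c > target + 0.5:
        return "cool_on"
    return "idle"

# A compromised setpoint of 60 degrees C is silently capped at 35 degrees C:
print(control_step(current_c=21.0, requested_c=60.0))  # -> heat_on (toward 35.0)

Without the clamp, the same loop would faithfully chase whatever target it was given, which is precisely the failure mode described above.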

Most researchers agree that a strong AI or AGI system would be unlikely to exhibit human emotions such as love or hate, and would therefore not pose dangers through benevolent or malevolent intentions of its own. However, even the strongest AI must initially be programmed by humans, and it's in this context that the danger lies. Specifically, artificial intelligence analysts highlight two scenarios where the underlying programming or human intent of a system design could cause problems: an AI programmed to do something devastating, and an AI programmed to do something beneficial that develops a destructive method of achieving its goal.

The first scenario covers all existing and future autonomous weapons systems (military drones, robots, missile defenses, etc.), and any technologies capable of intentionally or unintentionally causing massive harm or physical destruction through misuse, hacking, or sabotage.

Besides the prospect of an AI arms race and the possibility of AI-enabled warfare, autonomous weaponry poses AI risks arising from the design and deployment of the technology itself. Because high-stakes activity is an inherent part of military design, such systems would probably have fail-safes that make them extremely difficult to deactivate once started, and in escalating situations their human operators could conceivably lose control of them.

The classic illustration of the second scenario is the self-driving car. If you ask such a vehicle to take you to the airport as quickly as possible, it could quite literally do so: breaking every traffic law in the book, causing accidents, and freaking you out completely in the process.
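The underlying problem here is objective misspecification: the machine optimizes exactly what it was told to, not what we meant. As a minimal sketch in Python (the planner, plans, and weights are all hypothetical, for illustration only), compare a route-scoring function that rewards speed alone with one that also penalizes rule violations:

# Hypothetical illustration of objective misspecification in route planning.
# Candidate plans are scored by a cost function; the planner picks the minimum.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    minutes: float   # estimated travel time
    violations: int  # traffic laws the plan would break

candidates = [
    Plan("reckless", minutes=18.0, violations=7),
    Plan("lawful", minutes=26.0, violations=0),
]

def naive_cost(plan: Plan) -> float:
    # "As quickly as possible", taken literally: only time matters.
    return plan.minutes

def aligned_cost(plan: Plan) -> float:
    # What the passenger actually meant: fast, but within the rules.
    # The penalty weight must dominate any plausible time saving.
    return plan.minutes + 1000.0 * plan.violations

print(min(candidates, key=naive_cost).name)    # -> reckless
print(min(candidates, key=aligned_cost).name)  # -> lawful

The naive objective is not wrong in any technical sense; it is simply an incomplete statement of what we value, which is the essence of the alignment problem discussed below.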

At the superintelligence level of AGI, imagine a geo-engineering or climate control system that's given free rein to implement its programming in the most efficient manner possible. The damage it could cause to infrastructure and ecosystems could be catastrophic.

How dangerous is AI? At its current rate of development, artificial intelligence has already exceeded the expectations of many observers, reaching milestones that, just a few years ago, were considered decades away.

While some experts estimate that human-level AI is still centuries away, most researchers are coming round to the opinion that it could happen before 2060. And the prevailing view among observers is that, as long as we're not 100% sure that artificial general intelligence won't happen this century, it's a good idea to start safety research now to prepare for its arrival.

Many of the safety problems associated with superintelligent AI are so complex that they may require decades to solve. A superintelligent AI will, by definition, be very good at achieving its goals, whatever they may be. As humans, we'll need to ensure that those goals are completely aligned with ours. The same holds for weaker artificial intelligence systems as the technology continues to evolve.

Intelligence enables control, and as technology becomes smarter, the greatest danger of artificial intelligence lies in its capacity to exceed human intelligence. Once that milestone is reached, we run the risk of losing control over the technology. And this danger becomes even more severe if the goals of that technology don't align with our own objectives.

A scenario in which an AGI whose goals run counter to our own uses the internet to enforce its internal directives illustrates why AI is dangerous in this respect. Such a system could potentially manipulate the financial markets, sway social and political discourse, or introduce technological innovations that we can barely imagine, much less keep up with.

The keys to determining whether artificial intelligence is dangerous lie in its underlying programming, the method of its deployment, and whether or not its goals are in alignment with our own.

As technology continues its march toward artificial general intelligence, AI has the potential to become more intelligent than any human, and we currently have no way of predicting how it will behave. What we can do is everything in our power to ensure that the goals of that intelligence remain compatible with ours, and to research and design systems that keep them that way.
