Opinion | The United States and the assassination of the Iranian nuclear scientist

The New York Times has revealed interesting details about the assassination of Iranian nuclear scientist Mohsen Fakhrizadeh, reporting that it was carried out with a new weapon equipped with artificial intelligence and multiple cameras, operated via satellite.

The newspaper pointed out that the assassination was carried out by a killer robot capable of firing 600 rounds per minute, with no agents on the ground. It added that its information was based on interviews with American, Israeli, and Iranian officials, including two intelligence officials familiar with the planning and execution of the operation.

According to an intelligence official familiar with the plan, Israel chose an advanced model of the Belgian-made FN MAG machine gun linked to a sophisticated smart robot. The weapon was smuggled into Iran in pieces over a period of time and then secretly reassembled there. The robot was built to fit in the bed of a pickup truck, and cameras pointing in multiple directions were mounted on the vehicle to give the operators a complete picture not only of the target and his security detail, but of the surrounding environment.

Finally, the vehicle was rigged with explosives so that it could be detonated remotely and blown to pieces once the killing was over, destroying all the evidence. The newspaper noted that the assassination of Fakhrizadeh took less than a minute, during which only 15 bullets were fired, while the satellite-linked cameras mounted on the vehicle relayed images directly to the operation's headquarters.

What happened is not a science-fiction scene from a Hollywood film; it is a fact we must reckon with in the future, along with the unprecedented risks and challenges it poses to the overall security landscape. The end of the world will come at the hands of smart robots. If you believe some AI observers, we are in a race towards what is known as the technological singularity, a hypothetical point at which AI systems outperform human intelligence and continue to evolve themselves far beyond all our expectations. But if that happens, which is of course a remote possibility, what will become of us?

Over the past few months, a number of high-profile figures, such as Elon Musk and Bill Gates, have warned that we should worry more about the potentially dangerous consequences of superintelligent AI systems. Yet they have also put their own money into projects they consider important in this context: Musk, like many billionaires, has backed OpenAI, a non-profit organization dedicated to developing artificial intelligence that serves and benefits humanity as a whole.

A recent study by researchers from Oxford University in Britain and Yale University in the United States found a 50% chance that artificial intelligence will outperform human intelligence in all areas within 45 years, and that it could take over all human jobs within 120 years. The study's results do not rule out this happening even sooner.

According to the study, machines will outperform humans at translating languages by 2024, writing academic articles by 2026, driving trucks by 2027, working in retail by 2031, writing a bestselling book by 2049, and performing surgery by 2053.

The study also stressed that artificial intelligence is rapidly improving its capabilities and is increasingly proving itself in areas historically dominated by humans. For example, the AlphaGo programme, developed by Google's DeepMind, recently defeated the world's best player at the ancient Chinese board game Go. In the same vein, the study expects self-driving technology to replace millions of taxi drivers.

A few days ago, the United Nations High Commissioner for Human Rights, Michelle Bachelet, stressed the urgent need to halt the sale and use of artificial intelligence systems that pose a grave threat to human rights until appropriate safeguards are in place. She also called for banning artificial intelligence applications that cannot be used in line with international human rights law. The reality, then, is more dangerous than we imagine, and the disaster may be closer than we think.
