Unleashing the Unknown: Fears Behind Artificial General … – Techopedia

Artificial General Intelligence (AGI) is still a concept or, at most, at a nascent stage. Yet, there is already a lot of debate around it.

AGI and artificial intelligence (AI) are different. The latter performs specific tasks, as the Alexa assistant does. But you know that Alexa is limited in its abilities.

AGI, by contrast, aims to emulate the full cognitive powers of a human being, potentially enabling machines to replace humans in complex roles. Think of a robot judge presiding over a complex court case.

Example of how AGI can be used in real life

Imagine a scenario where a patient with a tumor undergoes surgery. It is later revealed that a robot performed the operation. While the outcome may be successful, the patient's family and friends are surprised and have reservations about trusting a robot with such a complex task. Surgery requires improvisation and decision-making, qualities we trust in human doctors.

The concept is both radical and frightening. The fears emanate from various ethical, social, and moral issues. One school of thought opposes AGI because robots could be directed to perform undesirable and unethical actions.

AGI is still in its infancy, and disagreements notwithstanding, it will be a long time before we see its manifestations. The base of AGI is the same as that of AI and Machine Learning (ML). Work is still in progress around the world, with the main focus remaining on a few areas discussed below.

Both AI and ML require large volumes of data, and big data technologies together with cloud storage have made storing it affordable, contributing to the development of AGI.

Scientists have made significant progress in both ML and Deep Learning (DL) technologies. Major developments have occurred in neural networks, reinforcement learning, and generative models.

Transfer learning hastens ML by applying existing knowledge to recognize similar objects. For example, one model learns to identify small birds based on features such as small wings, beaks, and eyes. Now, another model must identify various species of small birds in the Amazon rainforest. The latter model doesn't begin from scratch but inherits the learning from the earlier model, so training is expedited.
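The idea can be sketched in a few lines of code. This is a toy illustration only: the feature names, species, and measurements are hypothetical, and the "pretrained" extractor stands in for a real learned network. The new task reuses the inherited features and fits only a lightweight classification head.

```python
# Toy sketch of transfer learning. The extractor below stands in for
# features already "learned" by an earlier small-bird model; the new
# Amazon-species task trains only a simple head on top of it.

def pretrained_feature_extractor(bird):
    """Features inherited from the earlier small-bird model (hypothetical)."""
    return (bird["wing_span_cm"] / 10.0, bird["beak_len_cm"], bird["eye_diam_mm"])

def train_head(labelled_birds):
    """Fit a nearest-centroid head: average feature vector per species."""
    grouped = {}
    for bird, species in labelled_birds:
        grouped.setdefault(species, []).append(pretrained_feature_extractor(bird))
    return {s: tuple(sum(x) / len(x) for x in zip(*v)) for s, v in grouped.items()}

def classify(bird, centroids):
    """Assign the species whose centroid is closest in feature space."""
    feats = pretrained_feature_extractor(bird)
    return min(
        centroids,
        key=lambda s: sum((a - b) ** 2 for a, b in zip(feats, centroids[s])),
    )

# Hypothetical labelled examples for the new task.
train = [
    ({"wing_span_cm": 9, "beak_len_cm": 1.2, "eye_diam_mm": 4}, "manakin"),
    ({"wing_span_cm": 14, "beak_len_cm": 3.5, "eye_diam_mm": 5}, "kingfisher"),
]
centroids = train_head(train)
query = {"wing_span_cm": 10, "beak_len_cm": 1.3, "eye_diam_mm": 4}
print(classify(query, centroids))  # -> manakin
```

Because the feature extractor is frozen and reused, only the tiny head needs data from the new domain, which is why transfer learning expedites training in practice.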

You are unlikely to see AGI arrive in a new avatar that unleashes change on society from a single point in time. The changes will be gradual, manifesting slowly yet steadily in our day-to-day lives.

ChatGPT models have been developing at a breakneck speed with impressive capabilities. However, not everyone is fully convinced of the potential of AGI. Various countries and experts emphasize the importance of guiding ChatGPT's development within specific rules and regulations to ensure responsible progress toward AGI.

Response from Italy

In April 2023, Italy became the first nation to ban ChatGPT over a breach of user data and payment information. The government has also been probing whether ChatGPT complies with the European Union's General Data Protection Regulation (GDPR) rules that protect confidential data inside and outside the EU.

Experts point out that there is no transparency in how ChatGPT is being developed. No information is publicly available about its development models, data, parameters, and version release plans.

OpenAI's brainchild continues to develop at great speed, and we can scarcely imagine the powers it has been accumulating, all without checks and balances. Some believe that ChatGPT 5 will mark the arrival of AGI.

Anthony Aguirre, a professor of physics at UC Santa Cruz and the executive vice president of the Future of Life Institute, said: "The largest-scale computations are increasing in size by about 2.5 times per year. GPT-4's parameters were not disclosed by OpenAI, but there is no reason to think this trend has stopped or even slowed."

Aguirre, who was behind the famous open letter, added: Only the labs themselves know what computations they are running, but the trend is unmistakable.

The open letter, signed by many industry stalwarts, reflected the fears and apprehensions about the uncontrolled development of ChatGPT.

The letter strongly urges a halt to further development of ChatGPT until a robust framework is established to control misinformation, hallucination, and bias in the system. Indeed, the so-called hallucinations, inaccurate responses, and bias exhibited by ChatGPT on many occasions are too glaring to ignore.

The open letter was signed by Steve Wozniak, among many other stalwarts, and already has 3,100 signatories, comprising software developers and engineers, CEOs, CFOs, technologists, psychologists, doctoral students, professors, medical doctors, and public school teachers.


It is scary to think that a few wealthy and powerful nations could develop and concentrate AGI in their hands and use it to serve their own interests.

For example, they can control all the personal and sensitive data of other countries and communities, wreaking havoc.

AGI could become a potent tool for biased actions and judgments and, in the worst case, enable sophisticated information warfare.

AGI is still in the conceptual stage, but given the lack of transparency and the speed at which AI and ML have been progressing, the day it is realized might not be far off.

It is imperative that countries and corporations put their heads together and develop a robust framework with enough checks, balances, and guardrails.

The main goal of the framework would be to protect humankind and prevent unethical intrusions into people's lives.

