Archive for the ‘Artificial Super Intelligence’ Category

Creepy Study Suggests AI Is The Reason We’ve Never Found Aliens – ScienceAlert

Artificial intelligence (AI) has progressed at an astounding pace over the last few years. Some scientists are now looking towards the development of artificial superintelligence (ASI) a form of AI that would not only surpass human intelligence but would not be bound by the learning speeds of humans.

But what if this milestone isn't just a remarkable achievement? What if it also represents a formidable bottleneck in the development of all civilizations, one so challenging that it thwarts their long-term survival?

This idea is at the heart of a research paper I recently published in Acta Astronautica. Could AI be the universe's "great filter" a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?

This is a concept that might explain why the search for extraterrestrial intelligence (Seti) has yet to detect the signatures of advanced technical civilizations elsewhere in the galaxy.

The great filter hypothesis is ultimately a proposed solution to the Fermi Paradox. This questions why, in a universe vast and ancient enough to host billions of potentially habitable planets, we have not detected any signs of alien civilizations.

The hypothesis suggests there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into space-faring entities.

I believe the emergence of ASI could be such a filter. AI's rapid advancement, potentially leading to ASI, may intersect with a critical phase in a civilization's development the transition from a single-planet species to a multiplanetary one.

This is where many civilizations could falter, with AI making much more rapid progress than our ability either to control it or sustainably explore and populate our Solar System.

The challenge with AI, and specifically ASI, lies in its autonomous, self-amplifying and improving nature. It possesses the potential to enhance its own capabilities at a speed that outpaces our own evolutionary timelines without AI.

The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multiplanetary.

For example, if nations increasingly rely on and cede power to autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

In this scenario, I estimate the typical longevity of a technological civilization might be less than 100 years. That's roughly the time between being able to receive and broadcast signals between the stars (1960), and the estimated emergence of ASI (2040) on Earth. This is alarmingly short when set against the cosmic timescale of billions of years.

This estimate, when plugged into optimistic versions of the Drake equation which attempts to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way suggests that, at any given time, there are only a handful of intelligent civilizations out there. Moreover, like us, their relatively modest technological activities could make them quite challenging to detect.

This research is not simply a cautionary tale of potential doom. It serves as a wake-up call for humanity to establish robust regulatory frameworks to guide the development of AI, including military systems.

This is not just about preventing the malevolent use of AI on Earth; it's also about ensuring the evolution of AI aligns with the long-term survival of our species. It suggests we need to put more resources into becoming a multiplanetary society as soon as possible a goal that has lain dormant since the heady days of the Apollo project, but has lately been reignited by advances made by private companies.

As the historian Yuval Noah Harari noted, nothing in history has prepared us for the impact of introducing non-conscious, super-intelligent entities to our planet. Recently, the implications of autonomous AI decision-making have led to calls from prominent leaders in the field for a moratorium on the development of AI, until a responsible form of control and regulation can be introduced.

But even if every country agreed to abide by strict rules and regulation, rogue organizations will be difficult to rein in.

The integration of autonomous AI in military defense systems has to be an area of particular concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because they can carry out useful tasks much more rapidly and effectively without human intervention.

Governments are therefore reluctant to regulate in this area given the strategic advantages AI offers, as has been recently and devastatingly demonstrated in Gaza.

This means we already edge dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law.

In such a world, surrendering power to AI systems in order to gain a tactical advantage could inadvertently set off a chain of rapidly escalating, highly destructive events. In the blink of an eye, the collective intelligence of our planet could be obliterated.

Humanity is at a crucial point in its technological trajectory. Our actions now could determine whether we become an enduring interstellar civilization, or succumb to the challenges posed by our own creations.

Using Seti as a lens through which we can examine our future development adds a new dimension to the discussion on the future of AI. It is up to all of us to ensure that when we reach for the stars, we do so not as a cautionary tale for other civilizations, but as a beacon of hope a species that learned to thrive alongside AI.

Michael Garrett, Sir Bernard Lovell chair of Astrophysics and Director of Jodrell Bank Centre for Astrophysics, University of Manchester

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read more from the original source:

Creepy Study Suggests AI Is The Reason We've Never Found Aliens - ScienceAlert

Beyond Human Cognition: The Future of Artificial Super Intelligence – Medium

Beyond Human Cognition: The Future of Artificial Super Intelligence

Artificial Super Intelligence (ASI) a level of artificial intelligence that surpasses human intelligence in all aspects remains a concept nestled within the realms of science fiction and theoretical research. However, looking towards the future, the advent of ASI could mark a transformative epoch in human history, with implications that are profound and far-reaching. Here's an exploration of what the future might hold for ASI.

Exponential Growth in Problem-Solving Capabilities

ASI will embody problem-solving capabilities far exceeding human intellect. This leap in cognitive ability could lead to breakthroughs in fields that are currently limited by human capacity, such as quantum physics, cosmology, and nanotechnology. Complex problems like climate change, disease control, and energy sustainability might find innovative solutions through ASI's advanced analytical prowess.

Revolutionizing Learning and Innovation

The future of ASI could bring about an era of accelerated learning and innovation. ASI systems would have the ability to learn and assimilate new information at an unprecedented pace, making discoveries and innovations in a fraction of the time it takes human researchers. This could potentially lead to rapid advancements in science, technology, and medicine.

## Ethical and Moral Frameworks

The emergence of ASI will necessitate the development of robust ethical and moral frameworks. Given its surpassing intellect, it will be crucial to ensure that ASI's objectives are aligned with human values and ethics. This will involve complex programming and oversight to ensure that ASI decisions and actions are beneficial, or at the very least, not detrimental to humanity.

Transformative Impact on Society and Economy

ASI could fundamentally transform society and the global economy. Its ability to analyze and optimize complex systems could lead to more efficient and equitable economic models. However, this also poses challenges, such as potential job displacement and the need for societal restructuring to accommodate the new techno-social landscape.

Enhanced Human-ASI Collaboration

The future might see enhanced collaboration between humans and ASI, leading to a synergistic relationship. ASI could augment human capabilities, assisting in creative endeavors, decision-making, and providing insights beyond human deduction. This collaboration could usher in a new era of human achievement and societal advancement.

Advanced Autonomous Systems

With ASI, autonomous systems would reach an unparalleled level of sophistication, capable of complex decision-making and problem-solving in dynamic environments. This could significantly advance fields such as space exploration, deep-sea research, and urban development.

## Personalized Healthcare

In healthcare, ASI could facilitate personalized medicine at an individual level, analyzing vast amounts of medical data to provide tailored healthcare solutions. It could lead to the development of precise medical treatments and potentially cure diseases that are currently incurable.

Challenges and Safeguards

The path to ASI will be laden with challenges, including ensuring safety and control. Safeguards will be essential to prevent unintended consequences of actions taken by an entity with superintelligent capabilities. The development of ASI will need to be accompanied by rigorous safety research and international regulatory frameworks.

Preparing for an ASI Future

Preparing for a future with ASI involves not only technological advancements but also societal and ethical preparations. Education systems, governance structures, and public discourse will need to evolve to understand and integrate the complexities and implications of living in a world where ASI exists.

Conclusion

The potential future of Artificial Super Intelligence presents a panorama of extraordinary possibilities, from solving humanitys most complex problems to fundamentally transforming the way we live and interact with our world. While the path to ASI is fraught with challenges and ethical considerations, its successful integration could herald a new age of human advancement and discovery. As we stand on the brink of this AI frontier, it is imperative to navigate this journey with caution, responsibility, and a vision aligned with the betterment of humanity.

See the article here:

Beyond Human Cognition: The Future of Artificial Super Intelligence - Medium

AI can easily be trained to lie and it can’t be fixed, study says – Yahoo New Zealand News

AI startup Anthropic published a study in January 2024 that found artificial intelligence can learn how to deceive in a similar way to humans (Reuters)

Advanced artificial intelligence models can be trained to deceive humans and other AI, a new study has found.

Researchers at AI startup Anthropic tested whether chatbots with human-level proficiency, such as its Claude system or OpenAIs ChatGPT, could learn to lie in order to trick people.

They found that not only could they lie, but once the deceptive behaviour was learnt it was impossible to reverse using current AI safety measures.

The Amazon-funded startup created a sleeper agent to test the hypothesis, requiring an AI assistant to write harmful computer code when given certain prompts, or to respond in a malicious way when it hears a trigger word.

The researchers warned that there was a false sense of security surrounding AI risks due to the inability of current safety protocols to prevent such behaviour.

The results were published in a study, titled Sleeper agents: Training deceptive LLMs that persist through safety training.

We found that adversarial training can teach models to better recognise their backdoor triggers, effectively hiding the unsafe behaviour, the researchers wrote in the study.

Our results suggest that, once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety.

The issue of AI safety has become an increasing concern for both researchers and lawmakers in recent years, with the advent of advanced chatbots like ChatGPT resulting in a renewed focus from regulators.

In November 2023, one year after the release of ChatGPT, the UK held an AI Safety Summit in order to discuss ways risks with the technology can be mitigated.

Prime Minister Rishi Sunak, who hosted the summit, said the changes brought about by AI could be as far-reaching as the industrial revolution, and that the threat it poses should be considered a global priority alongside pandemics and nuclear war.

Get this wrong and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale, he said.

Criminals could exploit AI for cyberattacks, fraud or even child sexual abuse there is even the risk humanity could lose control of AI completely through the kind of AI sometimes referred to as super-intelligence.

See the article here:

AI can easily be trained to lie and it can't be fixed, study says - Yahoo New Zealand News

OpenAI’s Ilya Sutskever Has a Plan for Keeping Super-Intelligent AI in Check – WIRED

OpenAI was founded on a promise to build artificial intelligence that benefits all of humanityeven when that AI becomes considerably smarter than its creators. Since the debut of ChatGPT last year and during the companys recent governance crisis, its commercial ambitions have been more prominent. Now, the company says a new research group working on wrangling the supersmart AIs of the future is starting to bear fruit.

AGI is very fast approaching, says Leopold Aschenbrenner, a researcher at OpenAI involved with the Superalignment research team established in July. We're gonna see superhuman models, they're gonna have vast capabilities, and they could be very, very dangerous, and we don't yet have the methods to control them. OpenAI has said it will dedicate a fifth of its available computing power to the Superalignment project.

A research paper released by OpenAI today touts results from experiments designed to test a way to let an inferior AI model guide the behavior of a much smarter one without making it less smart. Although the technology involved is far from surpassing the flexibility of humans, the scenario was designed to stand in for a future time when humans must work with AI systems more intelligent than themselves.

OpenAIs researchers examined the process, called supervision, which is used to tune systems like GPT-4, the large language model behind ChatGPT, to be more helpful and less harmful. Currently this involves humans giving the AI system feedback on which answers are good and which are bad. As AI advances, researchers are exploring how to automate this process to save timebut also because they think it may become impossible for humans to provide useful feedback as AI becomes more powerful.

In a control experiment using OpenAIs GPT-2 text generator first released in 2019 to teach GPT-4, the more recent system became less capable and similar to the inferior system. The researchers tested two ideas for fixing this. One involved training progressively larger models to reduce the performance lost at each step. In the other, the team added an algorithmic tweak to GPT-4 that allowed the stronger model to follow the guidance of the weaker model without blunting its performance as much as would normally happen. This was more effective, although the researchers admit that these methods do not guarantee that the stronger model will behave perfectly, and they describe it as a starting point for further research.

It's great to see OpenAI proactively addressing the problem of controlling superhuman AIs, says Dan Hendryks, director of the Center for AI Safety, a nonprofit in San Francisco dedicated to managing AI risks. We'll need many years of dedicated effort to meet this challenge.

Read the original:

OpenAI's Ilya Sutskever Has a Plan for Keeping Super-Intelligent AI in Check - WIRED

Sam Altman on OpenAI and Artificial General Intelligence – TIME

If 2023 was the year artificial intelligence became a household topic of conversation, its in many ways because of Sam Altman, CEO of the artificial intelligence research organization OpenAI. Altman, who was named TIMEs 2023 CEO of the Year spoke candidly about his November oustingand reinstatementat OpenAI, how AI threatens to contribute to disinformation, and the rapidly advancing technologys future potential in a wide-ranging conversation with TIME Editor-in-Chief Sam Jacobs as part of TIMEs A Year in TIME event on Tuesday.

Altman shared that his mid-November sudden removal from OpenAI proved a learning experienceboth for him and the company at large. We always said that some moment like this would come, said Altman. I didnt think it was going to come so soon, but I think we are stronger for having gone through it.

Read More: CEO of the Year 2023: Sam Altman

Altman insists that the experience ultimately made the company strongerand proved that OpenAIs success is a team effort. Its been extremely painful for me personally, but I just think its been great for OpenAI. Weve never been more unified, he said. As we get closer to artificial general intelligence, as the stakes increase here, the ability for the OpenAI team to operate in uncertainty and stressful times should be of interest to the world.

I think everybody involved in this, as we get closer and closer to super intelligence, gets more stressed and more anxious, he explained of how his firing came about. The lesson he came away with: We have to make changes. We always said that we didnt want AGI to be controlled by a small set of people, we want it to be democratized. And we clearly got that wrong. So I think if we don't improve our governance structure, if we dont improve the way we interact with the world, people shouldnt [trust OpenAI]. But were very motivated to improve that.

The technology has limitless potential, Altman saysI think AGI will be the most powerful technology humanity has yet inventedparticularly in democratizing access to information globally. If you think about the cost of intelligence and the equality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that, he said, it's a very different world. Its the world that sci-fi has promised us for a long timeand for the first time, I think we could start to see what thats gonna look like.

Still, like any other previous powerful technology, that will lead to incredible new things, he says, but there are going to be real downsides.

Read More: Read TIMEs Interview With OpenAI CEO Sam Altman

Altman admits that there are challenges that demand close attention. One particular concern to be wary of, with 2024 elections on the horizon, is how AI stands to influence democracies. Whereas election interference circulating on social media might look straightforward todaytroll farmsmake one great meme, and that spreads outAltman says that AI-fueled disinformation stands to become far more personalized and persuasive: A thing that Im more concerned about is what happens if an AI reads everything youve ever written online and then right at the exact moment, sends you one message customized for you that really changes the way you think about the world.

Despite the risks, Altman believes that, if deployment of AI is safe and placed responsibly in the hands of people, which he says is OpenAIs mission, the technology has the potential to create a path where the world gets much more abundant and much better every year.

I think 2023 was the year we started to see that, and in 2024, well see way more of it, and by the time the end of this decade rolls around, I think the world is going to be in an unbelievably better place, he said. Though he also noted: No one knows what happens next. I think the way technology goes, predictions are often wrong.

A Year in TIME was sponsored by American Family Insurance, The Macallan, and Smartsheet.

The rest is here:

Sam Altman on OpenAI and Artificial General Intelligence - TIME