Archive for the ‘Artificial General Intelligence’ Category

AI can be transformative technology, only with appropriate restrictions and safeguards against malicious use – ZAWYA

Check Point Research (CPR), the Threat Intelligence arm of Check Point Software Technologies Ltd. (NASDAQ: CHKP) and a leading provider of cyber security solutions globally, warns that artificial intelligence has the potential to be a transformative technology that can significantly impact our daily lives, but only with appropriate bans and regulations in place to ensure AI is used and developed ethically and responsibly.

"AI has already shown its potential and has the possibility to revolutionize many areas such as healthcare, finance, transportation and more. It can automate tedious tasks, increase efficiency and provide information that was previously not possible. AI could also help us solve complex problems, make better decisions, reduce human error or tackle dangerous tasks such as defusing a bomb, flying into space or exploring the oceans. But at the same time, we see massive use of AI technologies to develop cyber threats as well," says Ram Narayanan, Country Manager at Check Point Software Technologies, Middle East. Such misuse of AI has been widely reported in the media, with select reports around ChatGPT being leveraged by cybercriminals to contribute to the creation of malware.

Overall, the development of AI is not just another passing craze, but it remains to be seen how much of a positive or negative impact it will have on society. And although AI has been around for a long time, 2023 will be remembered by the public as the "Year of AI". However, there continues to be a lot of hype around this technology and some companies may be overreacting. We need to have realistic expectations and not see AI as an automatic panacea for all the world's problems.

We often hear concerns about whether AI will approach or even surpass human capabilities. Predicting how advanced AI will become is difficult, but the field already distinguishes several categories. Current AI is referred to as narrow or "weak" AI (ANI, Artificial Narrow Intelligence). General AI (AGI, Artificial General Intelligence) would function like the human brain, thinking, learning and solving tasks as a human does. The last category, Artificial Super Intelligence (ASI), describes machines that are smarter than we are.

If artificial intelligence reaches the level of AGI, there is a risk that it could act on its own and potentially become a threat to humanity. Therefore, we need to work on aligning the goals and values of AI with those of humans.

Ram Narayanan further states, "To mitigate the risks associated with advanced AI, it is important that governments, companies and regulators work together to develop robust safety mechanisms, establish ethical principles and promote transparency and accountability in AI development. Currently, there are very few rules and regulations. There are proposals such as the AI Act, but none of these have been passed, and essentially everything so far is governed by the ethical compasses of users and developers. Depending on the type of AI, companies that develop and release AI systems should ensure at least minimum standards such as privacy, fairness, explainability or accessibility."

Unfortunately, AI can also be used by cybercriminals to refine their attacks, automatically identify vulnerabilities, create targeted phishing campaigns, carry out social engineering, or create advanced malware that can change its code to better evade detection. AI can also be used to generate convincing audio and video deepfakes that can be used for political manipulation, as false evidence in criminal trials, or to trick users into paying money.

But AI is also an important aid, particularly in defending against cyberattacks. For example, Check Point uses more than 70 different tools to analyse threats and protect against attacks, more than 40 of which are AI-based. These technologies help with behavioral analysis and with analyzing large amounts of threat data from a variety of sources, including the darknet, making it easier to detect zero-day vulnerabilities or to automate the patching of security vulnerabilities.

"Various bans and restrictions on AI have also been discussed recently. In the case of ChatGPT, the concerns are mainly related to privacy, as we have already seen data leaks, nor is the age limit of users addressed. However, blocking similar services has only limited effect, as any slightly more savvy user can get around the ban by using a VPN, for example, and there is also a brisk trade in stolen premium accounts. The problem is that most users do not realise that the sensitive information entered into ChatGPT will be very valuable if leaked, and could be used for targeted marketing purposes. We are talking about potential social manipulation on a scale never seen before," points out Ram Narayanan.

The impact of AI on our society will depend on how we choose to develop and use this technology. It will be important to weigh the potential benefits and risks whilst striving to ensure that AI is developed in a responsible, ethical and beneficial way for society.

Read more:

AI can be transformative technology, only with appropriate restrictions and safeguards against malicious use - ZAWYA

How AI Knows Things No One Told It – Scientific American

No one yet knows how ChatGPT and its artificial intelligence cousins will transform the world, and one reason is that no one really knows what goes on inside them. Some of these systems' abilities go far beyond what they were trained to do, and even their inventors are baffled as to why. A growing number of tests suggest these AI systems develop internal models of the real world, much as our own brain does, though the machines' technique is different.

"Everything we want to do with them in order to make them better or safer or anything like that seems to me like a ridiculous thing to ask ourselves to do if we don't understand how they work," says Ellie Pavlick of Brown University, one of the researchers working to fill that explanatory void.

At one level, she and her colleagues understand GPT (short for generative pretrained transformer) and other large language models, or LLMs, perfectly well. The models rely on a machine-learning system called a neural network. Such networks have a structure modeled loosely after the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system will present its results in the form of dialogue. In this sense, all it does is regurgitate what it learned; it is a "stochastic parrot," in the words of Emily Bender, a linguist at the University of Washington. But LLMs have also managed to ace the bar exam, explain the Higgs boson in iambic pentameter, and make an attempt to break up their users' marriages. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.
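
As a rough illustration of what "choosing the most likely word" means in practice, the sketch below turns a handful of hand-set candidate-word scores into probabilities and picks the top one. The vocabulary and scores are invented for illustration; a real LLM derives such scores from billions of learned parameters and repeats this step token by token over a vocabulary of tens of thousands of entries.

```python
import numpy as np

# Toy vocabulary and hand-set scores for completing the passage "The cat ...".
# In a real model these scores (logits) come from the network, not a table.
vocab = ["sat", "barked", "ran", "slept"]
logits = np.array([3.2, 0.1, 1.4, 0.8])

def softmax(x):
    e = np.exp(x - x.max())   # subtract the max for numerical stability
    return e / e.sum()

probs = softmax(logits)                    # scores -> probability distribution
next_word = vocab[int(np.argmax(probs))]   # greedy pick of the most likely word

for word, p in zip(vocab, probs):
    print(f"{word:>7}: {p:.2f}")
print("chosen:", next_word)                # -> sat
```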

That GPT and other AI systems perform tasks they were not trained to do, giving them "emergent abilities," has surprised even researchers who have been generally skeptical about the hype over LLMs. "I don't know how they're doing it or if they could do it more generally the way humans do, but they've challenged my views," says Melanie Mitchell, an AI researcher at the Santa Fe Institute.

"It is certainly much more than a stochastic parrot, and it certainly builds some representation of the world, although I do not think that it is quite like how humans build an internal world model," says Yoshua Bengio, an AI researcher at the University of Montreal.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. "It's multistep reasoning of a very high degree," he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong: this suggests the system wasn't just parroting the Internet. Rather it was performing its own calculations to reach the correct answer.
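
The article does not reproduce the program Millière typed in, but a minimal version of the task looks something like the sketch below (an illustrative reconstruction, not his actual code). The point of the demonstration was that GPT, given code like this as plain text, stepped through it and produced the correct output without running an interpreter at all.

```python
# Compute the 83rd Fibonacci number iteratively, with fib(1) = fib(2) = 1.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b   # each step advances the pair (fib(k), fib(k+1))
    return a

print(fib(83))  # 99194853094755497
```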

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in (a tool ChatGPT can use when answering a query) that allows it to do so. But that plug-in was not used in Millière's demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context, a situation similar to how nature repurposes existing capacities for new functions.

This impromptu ability demonstrates that LLMs develop an internal complexity that goes well beyond a shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In one study presented last week at the International Conference on Learning Representations (ICLR), doctoral student Kenneth Li of Harvard University and his AI researcher colleagues (Aspen K. Hopkins of the Massachusetts Institute of Technology, David Bau of Northeastern University, and Fernanda Viégas, Hanspeter Pfister and Martin Wattenberg, all at Harvard) spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.

To study how the neural network encoded information, they adopted a technique that Bengio and Guillaume Alain, also at the University of Montreal, devised in 2016. They created a miniature "probe" network to analyze the main network layer by layer. Li compares this approach to neuroscience methods. "This is similar to when we put an electrical probe into the human brain," he says. In the case of the AI, the probe showed that its neural activity matched the representation of an Othello game board, albeit in a convoluted form. To confirm this, the researchers ran the probe in reverse to implant information into the network, for instance, flipping one of the game's black marker pieces to a white one. "Basically, we hack into the brain of these language models," Li says. The network adjusted its moves accordingly. The researchers concluded that it was playing Othello roughly like a human: by keeping a game board in its mind's eye and using this model to evaluate moves. Li says he thinks the system learns this skill because it is the most parsimonious description of its training data. "If you are given a whole lot of game scripts, trying to figure out the rule behind it is the best way to compress," he adds.
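
For readers unfamiliar with the probing technique, the sketch below shows the general idea on synthetic data: a small, simple classifier is trained to read some property (here, the state of one board square) directly out of a layer's activation vectors. If the probe succeeds, the layer must encode that information. The data here are stand-ins; the actual study probed activations recorded from the Othello-playing network, and its probes were more elaborate than the plain linear one used in this illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins: 'activations' plays the role of hidden-layer vectors
# recorded from the game-playing network, and 'square_state' the true contents
# of one board square (0 = empty, 1 = black, 2 = white) at that point in play.
n_samples, hidden_dim = 2000, 64
square_state = rng.integers(0, 3, size=n_samples)
directions = rng.normal(size=(3, hidden_dim))      # hidden structure to recover
activations = directions[square_state] + 0.5 * rng.normal(size=(n_samples, hidden_dim))

# The probe: a deliberately simple classifier trained on most of the data
# and evaluated on the rest. High accuracy means the information is encoded.
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:1500], square_state[:1500])
print("probe accuracy:", probe.score(activations[1500:], square_state[1500:]))
```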

This ability to infer the structure of the outside world is not limited to simple game-playing moves; it also shows up in dialogue. Belinda Li (no relation to Kenneth Li), Maxwell Nye and Jacob Andreas, all at M.I.T., studied networks that played a text-based adventure game. They fed in sentences such as "The key is in the treasure chest," followed by "You take the key." Using a probe, they found that the networks encoded within themselves variables corresponding to "chest" and "you," each with the property of possessing a key or not, and updated these variables sentence by sentence. The system had no independent way of knowing what a box or key is, yet it picked up the concepts it needed for this task. "There is some representation of the state hidden inside of the model," Belinda Li says.

Researchers marvel at how much LLMs are able to learn from text. For example, Pavlick and her then Ph.D. student Roma Patel found that these networks absorb color descriptions from Internet text and construct internal representations of color. When they see the word "red," they process it not just as an abstract symbol but as a concept that has a certain relationship to maroon, crimson, fuchsia, rust, and so on. Demonstrating this was somewhat tricky. Instead of inserting a probe into a network, the researchers studied its response to a series of text prompts. To check whether it was merely echoing color relationships from online references, they tried misdirecting the system by telling it that red is in fact green, like the old philosophical thought experiment in which one person's red is another person's green. Rather than parroting back an incorrect answer, the system's color evaluations changed appropriately in order to maintain the correct relations.

Picking up on the idea that in order to perform its autocorrection function, the system seeks the underlying logic of its training data, machine learning researcher Sébastien Bubeck of Microsoft Research suggests that the wider the range of the data, the more general the rules the system will discover. "Maybe we're seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them," he says. "And so the only way to explain all of this data is [for the model] to become intelligent."

In addition to extracting the underlying meaning of language, LLMs are able to learn on the fly. In the AI field, the term "learning" is usually reserved for the computationally intensive process in which developers expose the neural network to gigabytes of data and tweak its internal connections. By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts, an ability known as in-context learning. "It's a different sort of learning that wasn't really understood to exist before," says Ben Goertzel, founder of the AI company SingularityNET.

One example of how an LLM learns comes from the way humans interact with chatbots such as ChatGPT. You can give the system examples of how you want it to respond, and it will obey. Its outputs are determined by the last several thousand words it has seen. What it does, given those words, is prescribed by its fixed internal connections, but the word sequence nonetheless offers some adaptability. Entire websites are devoted to "jailbreak" prompts that overcome the system's "guardrails" (restrictions that stop the system from telling users how to make a pipe bomb, for example), typically by directing the model to pretend to be a system without guardrails. Some people use jailbreaking for sketchy purposes, yet others deploy it to elicit more creative answers. "It will answer scientific questions, I would say, better than if you just ask it directly," without the special jailbreak prompt, says William Hahn, co-director of the Machine Perception and Cognitive Robotics Laboratory at Florida Atlantic University. "It's better at scholarship."

Another type of in-context learning happens via "chain of thought" prompting, which means asking the network to spell out each step of its reasoning, a tactic that makes it do better at logic or arithmetic problems requiring multiple steps. (But one thing that made Millière's example so surprising is that the network found the Fibonacci number without any such coaching.)
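
As a concrete (and entirely hypothetical) illustration of the difference, the snippet below contrasts a direct question with a chain-of-thought version of the same question; the wording is invented, and the commented-out call stands in for whatever chat API is being used.

```python
# Direct question: the model must jump straight to the answer.
direct_prompt = "What is the 83rd Fibonacci number?"

# Chain-of-thought version: the model is asked to spell out each step first.
chain_of_thought_prompt = (
    "What is the 83rd Fibonacci number? "
    "Work it out step by step: start from fib(1) = fib(2) = 1, "
    "show each intermediate value, and only then state the final answer."
)

# response = chat_model.complete(chain_of_thought_prompt)  # hypothetical API call
print(chain_of_thought_prompt)
```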

In 2022 a team at Google Research and the Swiss Federal Institute of Technology in Zurich (Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov and Max Vladymyrov) showed that in-context learning follows the same basic computational procedure as standard learning, known as gradient descent. This procedure was not programmed; the system discovered it without help. "It would need to be a learned skill," says Blaise Agüera y Arcas, a vice president at Google Research. In fact, he thinks LLMs may have other latent abilities that no one has discovered yet. "Every time we test for a new ability that we can quantify, we find it," he says.
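
For readers who want a picture of what "standard learning, known as gradient descent" refers to, the sketch below runs explicit gradient descent on a tiny linear-regression problem. It is only an illustration of the conventional procedure; the finding described above is that a transformer's in-context updates can be shown to implement steps of this kind implicitly, inside its activations, rather than through code like this.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))                  # toy inputs
true_w = np.array([2.0, -1.0, 0.5])            # weights to recover
y = X @ true_w + 0.01 * rng.normal(size=100)   # noisy targets

w = np.zeros(3)        # start from zero weights
lr = 0.1               # learning rate
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)      # gradient of mean squared error
    w -= lr * grad                             # step against the gradient

print("recovered weights:", np.round(w, 3))    # close to [2.0, -1.0, 0.5]
```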

Although LLMs have enough blind spots not to qualify as artificial general intelligence, or AGI (the term for a machine that attains the resourcefulness of animal brains), these emergent abilities suggest to some researchers that tech companies are closer to AGI than even optimists had guessed. "They're indirect evidence that we are probably not that far off from AGI," Goertzel said in March at a conference on deep learning at Florida Atlantic University. OpenAI's plug-ins have given ChatGPT a modular architecture a little like that of the human brain. "Combining GPT-4 [the latest version of the LLM that powers ChatGPT] with various plug-ins might be a route toward a humanlike specialization of function," says M.I.T. researcher Anna Ivanova.

At the same time, though, researchers worry the window may be closing on their ability to study these systems. OpenAI has not divulged the details of how it designed and trained GPT-4, in part because it is locked in competition with Google and other companies, not to mention other countries. "Probably there's going to be less open research from industry, and things are going to be more siloed and organized around building products," says Dan Roberts, a theoretical physicist at M.I.T., who applies the techniques of his profession to understanding AI.

And this lack of transparency does not just harm researchers; it also hinders efforts to understand the social impacts of the rush to adopt AI technology. "Transparency about these models is the most important thing to ensure safety," Mitchell says.

Visit link:

How AI Knows Things No One Told It - Scientific American

How politics and business are driving the AI arms race with China – Bulletin of the Atomic Scientists

In March, thousands of tech leaders, Elon Musk among them, signed an open letter asking artificial intelligence (AI) labs to stop developing next-generation training systems for at least six months. There is precedent for such temporary pauses in other fields of research: In 2019, for example, scientists successfully called for a moratorium on any human gene editing that would pass along heritable DNA to genetically modified children.

While a pause in the field of AI is unlikely to happen, the call at least suggests that the United States is finally starting to realize the importance of regulating AI systems.

The reasons that a pause in AI won't happen are manifold, and they are about more than just the research itself. Critics of the proposed pause argue that regulating or restricting AI would help China pull ahead in AI development, causing the United States to lose its military and economic edge. To be sure, the United States must keep its citizens secure. But failing to regulate AI, or to coordinate with China in cases where that is in the United States' interest, would endanger US citizens.

History shows us that this worry is more than just theoretical. As a presidential candidate, John F. Kennedy invented the "missile gap" narrative to make President Dwight D. Eisenhower seem weak on defense, claiming that the Soviet Union was overtaking the United States in nuclear missile deployment. Kennedy's rhetoric may have helped him politically but also hindered cooperation with the Soviet leadership. Historically, arms races are often driven more by domestic economics and politics than by rational responses to external threats.

China, which is actually regulating AI much more tightly than the United States or even the European Union and is likely to be hamstrung by US semiconductor export controls in the coming years, is far behind the United States in AI development. Much like the Cold War nuclear arms race, today's US-China AI competition is heavily influenced by domestic forces such as private interest groups, bureaucratic infighting, electoral politics, and public opinion. By better understanding these domestic forces, policy makers in the United States can minimize the risks faced by the United States, China, and the world.

Private interests. In the US-China AI competition, companies developing AI systems and promoting their own interests might lobby against domestic or international AI regulation. There is historical precedent for this. In 2001, the United States rejected a Protocol to strengthen the Biological Weapons Convention, in part because of pressure from the US chemical and pharmaceutical industries, which wanted to limit inspections of their facilities.

US AI companies appear to be aware of the risks posed by their products. OpenAI's stated mission is "to ensure that artificial general intelligence benefits all of humanity." DeepMind's operating principles include a commitment to act as "responsible pioneers" in the field of AI. DeepMind's founders have pledged not to work on lethal AI, and Google's AI Principles state that Google will not deploy or design AI for weapons intended to injure humans, or for surveillance that violates international norms.

However, there are already worrisome signs that commercial competition may undermine these commitments. Google, fearing that OpenAI's ChatGPT could replace its search engine, told employees it would recalibrate the amount of risk it is prepared to accept when deploying new AI systems. While not strictly relevant to international agreements, this move suggests that tech companies are willing to compromise on AI safety in response to commercial incentives.

Another potentially concerning development is the creation of links between AI startups and big tech companies. OpenAI partnered with Microsoft in January, and Google acquired DeepMind in 2014. Acquisition and partnership may limit the ability of AI startups to act in ways that lower risk. DeepMind and Google, for example, have clashed over the governance of DeepMind projects since their merger.

Lobbying may also raise risks. The big tech companies are experienced lobbyists: Amazon spent $21.4 million on lobbying in 2022, making it the 6th largest spender; Meta (the parent company of Facebook, Instagram, and WhatsApp) came in 10th with $19.2 million; and Alphabet (parent company of Google) was 19th with $13.2 million. Last year, big tech companies increased their donations to US foreign policy think tanks in an effort to promote the argument that stricter rules will harm their ability to compete with China.

In the future, suppliers of military AI systems might increase the chances of an AI arms race by lobbying for the development of more advanced weapons systems, or by opposing arms control agreements that would limit their future sales. This is probably a long way off. Analysis from the Brookings Institution, a nonprofit public policy organization, found that 95 percent of federal contracts from the last five years with "artificial intelligence" in the description were for professional, scientific, and technical services (essentially external funding for research and development). The same analysis found that there were 307 different vendors and 474 total contracts.

Taken together, this analysis suggests an immature market, with many smaller vendors focused on developing AI systems rather than on larger contracts for supplying hardware or software, which are more typical for military procurement. In the future, though, larger contracts for military AI and a more concentrated supplier base would probably mean increased lobbying by military AI suppliersand increased chances of a military AI arms race.

Bureaucratic politics. There were many instances of bureaucratic politics exacerbating the Cold War nuclear arms race. As Slate columnist and author of several books on military strategy Fred Kaplan has described, the Air Force and the Navy repeatedly came up with new nuclear strategies and doctrines that would give them more of the nuclear weapons budget. For example, the Navy's think tank came up with "finite deterrence," which suggested that the United States could deter the Soviet Union by deploying a relatively small number of nuclear missiles on submarines, obviating the need for large numbers of nuclear bombers and missiles (which were operated by the Air Force).

Bureaucratic incentives often cause organizations to attempt to accumulate more resources and influence than is optimal from the perspective of the state. Although most cutting-edge AI development is currently carried out in the private sector, that could change. History suggests that as a technology's strategic importance and cost grow, the inclination and capacity for the state to exert control over its development and deployment will also grow.

There is another reason for concern about AI developed in or for the public sector, particularly the defense sector, despite the current private-sector dominance. As former US Navy Secretary Richard Danzig has written, military development and use of technology tends to be particularly risky, for several reasons: secrecy makes oversight and regulation more difficult; warfare environments are unpredictable; and military operations are adversarial and unconstrained in nature. The military already accounts for a significant proportion of US government spending on AI.

Regardless of how the military uses AI, it is likely there will be resistance to any AI arms control initiatives. An arms control agreement almost always interferes with the interests of one or more groups within the defense establishment. Military support is particularly important for ratification, which is why President Kennedy had to abandon his push for a comprehensive test ban in the face of resistance from the Joint Chiefs of Staff.

Electoral politics and public opinion. The relationship between foreign policy and electoral politics is not straightforward. An influential paper published in 2005 found that US foreign policy is most heavily and consistently influenced by internationally oriented business leaders, followed by experts, with some small influence for organized labor groups, and very weak or no influence from public opinion. (It should be noted that not all researchers agree with this finding, however: Many case studies and experiments have found that public opinion does influence decision makers in certain circumstances.)

Studies suggest public opinion is more important for high-salience issues, that is, issues that are seen as particularly noticeable or important. Public opinion does not come into play as much for issues that (rightly or wrongly) feel less relevant. For example, voters generally do not care much about trade policy: They do not know their political representatives' trade policy positions, so trade policy does not affect their voting behavior. According to the "Secret Congress" theory, which contends that it is much easier to pass legislation on topics that are under the radar and consequently not politically salient, if AI policy issues were politically salient and the parties were divided on them, it would be much more difficult to pass regulation and treaties that would reduce risks from AI.

At the moment, AI is too esoteric to be politically salient, although this is starting to change. The electoral politics of AI policy are overshadowed by broader concerns about strategic competition with China. In the United States, elite opinion, business opinion, and public opinion have shifted toward the view that engagement with China has failed and a more confrontational approach is now required. Current US policy toward China, including accelerating US AI development and restricting Chinese AI progress, commands bipartisan support.

However, if one party becomes more hawkish on China-related policy issues, public opinion on AI might split accordingly, with supporters of the more hawkish party viewing cooperation on AI policy less favorably. This may have happened in the past with nuclear weapons. There is some evidence to suggest that Obama's 2009 Prague speech, in which he announced America's commitment to "seek the peace and security of a world without nuclear weapons," led to disarmament being associated with Obama personally. This polarized the issue of arms control and disarmament along partisan lines, making future policy making more difficult.

If AI policy issues do become politically salient, the history and political science literature suggest that electoral politics might impede arms control in a number of ways. For example, if arms control policy gets caught up in partisan politics, it becomes much harder to develop and implement, particularly given that treaty ratification requires a two-thirds majority in the Senate.

In the past, political groups have held dovish positions on some nuclear issues while holding hawkish positions on others. For example, the Nunn-Lugar Cooperative Threat Reduction program, which worked with the states of the former Soviet Union to dismantle and secure the legacies of the Cold War, had strong bipartisan support, even as arms control agreements faced resistance from many Republicans. Certain nuclear issues are idiosyncratic. For example, Iran issues are politicized in a different way than other nuclear issues, because of the link to Israels security: Many otherwise liberal Democrats who are Jewish or represent heavily Jewish districts are hawkish on Iran. AI may turn out to be similar, with political cooperation on some aspects of AI policy and partisan gridlock on others.

Finally, it is worth noting that the large number of potential uses for AI means that AI will touch people's lives frequently and in significant ways. However, it is unlikely that these applications will cohere into a consistent pro- or anti-AI perspective. Public opinion on AI foreign policy will probably resemble other technology-related foreign policy issues, with the two major parties split according to their levels of hawkishness.

The United States has a tricky balance to strike. On the one hand, promoting AI development could create economic and social benefits, and the government has a duty to keep US citizens safe by maintaining technological superiority. On the other hand, if AI is not sufficiently well-regulated, and the United States and China can't cooperate where necessary, the whole world could be at risk.

Striking this balance is like walking a tightrope. Domestic forces threaten to knock the United States off balance.

View post:

How politics and business are driving the AI arms race with China - Bulletin of the Atomic Scientists

Paper Claims AI May Be a Civilization-Destroying "Great Filter" – Futurism

If aliens are out there, why haven't they contacted us yet? It may be, a new paper argues, that they, or, in the future, we, inevitably get wiped out by ultra-strong artificial intelligence, victims of our own drive to create a superior being.

This potential answer to the Fermi paradox, in which physicist Enrico Fermi and subsequent generations pose the question "where is everybody?", comes from National Intelligence University researcher Mark M. Bailey, who in a new yet-to-be-peer-reviewed paper posits that advanced AI may be exactly the kind of catastrophic risk that could wipe out entire civilizations.

Bailey cites superhuman AI as a potential "Great Filter," a potential answer to the Fermi paradox in which some terrible and unknown threat, artificial or natural, wipes out intelligent life before it can make contact with others.

"For anyone concerned with global catastrophic risk, one sobering question remains," Bailey writes. "Is the Great Filter in our past, or is it a challenge that we must still overcome?"

We humans, the researcher notes, are "terrible at intuitively estimating long-term risk," and given how many warnings have already been issued about AI and its potential endpoint, an artificial general intelligence or AGI, it's possible, he argues, that we may be summoning our own demise.

"One way to examine the AI problem is through the lens of the second species argument," the paper continues. "This idea considers the possibility that advanced AI will effectively behave as a second intelligent species with whom we will inevitably share this planet. Considering how things went the last time this happened when modern humans and Neanderthals coexisted the potential outcomes are grim."

Even scarier, Bailey notes, is the prospect of near-god-like artificial superintelligence (ASI), in which an AGI surpasses human intelligence because "any AI that can improve its own code would likely be motivated to do so."

"In this scenario, humans would relinquish their position as the dominant intelligent species on the planet with potential calamitous consequences," the author hypothesizes. "Like the Neanderthals, our control over our future, and even our very existence, may end with the introduction of a more intelligent competitor."

There hasn't yet, of course, been any direct evidence to suggest that extraterrestrial AIs wiped out natural life in any alien civilizations, though in Bailey's view, "the discovery of artificial extraterrestrial intelligence without concurrent evidence of a pre-existing biological intelligence would certainly move the needle."

That possibility, in turn, raises the prospect that there are destructive AIs lingering around the universe after eliminating their creators. To that end, Bailey helpfully suggests that "actively signaling our existence in a way detectable to such an extraterrestrial AI may not be in our best interest" because "any competitive extraterrestrial AI may be inclined to seek resources elsewhere, including Earth."

"While it may seem like science fiction, it is probable that an out-of-control... technology like AI would be a likely candidate for the Great Filter whether organic to our planet, or of extraterrestrial origin," Bailey concludes. "We must ask ourselves; how do we prepare for this possibility?"

Reader, it's freaky stuff, but once again, we're glad someone is considering it.

More on an AI apocalypse: Warren Buffett Compares AI to the Atom Bomb

Go here to read the rest:

Paper Claims AI May Be a Civilization-Destroying "Great Filter" - Futurism

People warned AI is becoming like a God and a ‘catastrophe’ is … – UNILAD

An artificial intelligence investor has warned that humanity may need to hit the brakes on AI development, claiming it's becoming 'God-like' and that it could cause 'catastrophe' for us in the not-so-distant future.

Ian Hogarth - who has invested in over 50 AI companies - made an ominous statement on how the constant pursuit of increasingly-smart machines could spell disaster in an essay for the Financial Times.

The AI investor and author claims that researchers are foggy on what's to come and have no real plan for a technology with that level of knowledge.

"They are running towards a finish line without an understanding of what lies on the other side," he warned.

Hogarth shared what he'd recently been told by a machine-learning researcher: that 'from now onwards' we are on the verge of artificial general intelligence (AGI) coming to the fore.

AGI has been defined as an autonomous system that can learn to accomplish any intellectual task that human beings can perform, and even surpass human capabilities.

Hogarth, co-founder of Plural Platform, said that not everyone agrees that AGI is imminent; rather, 'estimates range from a decade to half a century or more' for it to arrive.

However, he noted the tension between companies that are frantically trying to advance AI's capabilities and machine learning experts who fear the end point.

The AI investor also explained that he feared for his four-year-old son and what these massive advances in AI technology might mean for him.

He said: "I gradually shifted from shock to anger.

"It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight."

When considering whether the people in the AGI race were planning to 'slow down' to 'let the rest of the world have a say', Hogarth admitted that it's morphed into a 'them' versus 'us' situation.

Having been a prolific investor in AI startups, he also confessed to feeling 'part of this community'.

Hogarth's descriptions of the potential power of AGI were terrifying as he declared: "A three-letter acronym doesn't capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI."

Hogarth described it as 'a superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it'.

But even with this knowledge, and despite the fact that it's still on the horizon, he warned that we have no idea of the challenges we'll face, and that the 'nature of the technology means it is exceptionally difficult to predict exactly when we will get there'.

"God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race," the investor said.

Despite a career spent investing in and supporting the advancement of AI, Hogarth explained that what made him pause for thought was the fact that 'the contest between a few companies to create God-like AI has rapidly accelerated'.

He continued: "They do not yet know how to pursue their aim safely and have no oversight."

Hogarth still plans to invest in startups that pursue AI responsibly, but explained that the race shows no signs of slowing down.

"Unfortunately, I think the race will continue," he said.

"It will likely take a major misuse event - a catastrophe - to wake up the public and governments."

Follow this link:

People warned AI is becoming like a God and a 'catastrophe' is ... - UNILAD