Archive for the ‘Artificial Super Intelligence’ Category

What Is Image-to-Image Translation? | Definition from TechTarget – TechTarget

What is image-to-image translation?

Image-to-image translation is a generative artificial intelligence (AI) technique that translates a source image into a target image while preserving certain visual properties of the original image. This technology uses machine learning and deep learning techniques such as generative adversarial networks (GANs); conditional adversarial networks, or cGANs; and convolutional neural networks (CNNs) to learn complex mapping functions between input and output images.

Image-to-image translation allows images to be converted from one form to another while retaining essential features. The goal is to learn a mapping between the two domains and then generate realistic images in whatever style a designer chooses. This approach enables tasks such as style transfer, colorization and super-resolution, a technique that improves the resolution of an image.

Image-to-image translation encompasses a diverse set of applications in art, image enhancement, data augmentation and computer vision, also known as machine vision. For instance, it allows photographers to change a daytime photo into a nighttime one, convert a satellite image into a map and enhance medical images to enable more accurate diagnoses.

Image processing systems using image-to-image translation require the following basic steps:

A critical aspect of image-to-image translation is ensuring the model generalizes well to previously unseen inputs. Cycle consistency and unsupervised learning help to ensure that if an image is translated from one domain to another and then back, it returns to its original form. Deep learning architectures such as U-Net and CNNs are also commonly used because they can capture complex spatial relationships in images. During training, batch normalization and optimization algorithms are used to stabilize and expedite convergence.
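To illustrate why U-Net-style architectures suit this task, here is a minimal, hedged sketch of a tiny encoder-decoder generator with skip connections. The framework choice (PyTorch), the layer sizes and the class name are assumptions made for illustration, not anything the article specifies. The skip connection carries fine spatial detail from the encoder directly to the decoder, which is what helps the output preserve the structure of the source image, and batch normalization layers are included to stabilize training.

```python
# A minimal U-Net-style generator sketch for image-to-image translation (illustrative only).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        # Encoder: downsample twice
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                   nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        # Decoder: upsample twice, concatenating encoder features (skip connection)
        self.up1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                 nn.BatchNorm2d(base), nn.ReLU())
        self.up2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.down1(x)           # (B, base, H/2, W/2)
        e2 = self.down2(e1)          # (B, 2*base, H/4, W/4)
        d1 = self.up1(e2)            # (B, base, H/2, W/2)
        d1 = torch.cat([d1, e1], 1)  # skip connection preserves fine detail
        return self.up2(d1)          # (B, out_ch, H, W), values in [-1, 1]
```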

The two main approaches to image-to-image translation are supervised and unsupervised learning.

Supervised methods rely on paired training data, where each input image has a corresponding target image. Using this approach, the system learns the direct mapping required between the two domains. However, obtaining paired data can be challenging and time-consuming, especially for complex image transformations.
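As a rough sketch of what one supervised training step can look like when paired data is available, the snippet below follows the style of a conditional GAN such as pix2pix: the generator translates the source image, a discriminator judges (source, output) pairs, and an L1 term pulls the output toward the paired ground truth. The function and variable names are illustrative placeholders, not code from the article.

```python
# One paired (supervised) training step in a conditional-GAN style (illustrative sketch).
import torch
import torch.nn.functional as F

def paired_training_step(generator, discriminator, g_opt, d_opt,
                         source, target, l1_weight=100.0):
    # --- Discriminator: distinguish real (source, target) pairs from generated pairs ---
    fake = generator(source)
    d_real = discriminator(torch.cat([source, target], dim=1))
    d_fake = discriminator(torch.cat([source, fake.detach()], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- Generator: fool the discriminator while staying close to the paired target ---
    d_fake = discriminator(torch.cat([source, fake], dim=1))
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) +
              l1_weight * F.l1_loss(fake, target))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```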

Unsupervised methods tackle the image-to-image translation problem without paired training examples. One prominent unsupervised approach is CycleGAN, which introduces the concept of cycle consistency. This involves two mappings: from the source domain to the target domain and vice versa. Cycle consistency requires that an image translated into the target domain and then mapped back remains similar to the original source image.
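The cycle-consistency idea can be sketched in a few lines: with one generator mapping source to target and a second mapping target back to source, an image translated to the other domain and back should land close to where it started. Only the cycle term is shown below; the full CycleGAN objective also adds adversarial losses for each domain, and the names are assumed placeholders rather than a definitive implementation.

```python
# Cycle-consistency loss sketch in the spirit of CycleGAN (illustrative only).
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, source_batch, target_batch, weight=10.0):
    # source -> target -> source should reconstruct the original source image
    forward_cycle = F.l1_loss(F_inv(G(source_batch)), source_batch)
    # target -> source -> target should reconstruct the original target image
    backward_cycle = F.l1_loss(G(F_inv(target_batch)), target_batch)
    return weight * (forward_cycle + backward_cycle)
```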

Image-to-image translation and generative AI in general are touted for being cost-effective, but they're also criticized for lacking creativity. It's essential to research the various AI models that have been developed to handle image-to-image translation tasks, as each comes with its own unique benefits and drawbacks. Research groups such as Gartner also urge users and generative AI developers to look for trust and transparency when choosing and designing models.

Some of the most popular models include the following:


Read more from the original source:

What Is Image-to-Image Translation? | Definition from TechTarget - TechTarget

There is probably an 80% consensus that free will is actually … – CTech

Dr. Tomas Chamorro-Premuzic and James Spiro

(Photo: Zoom/Sinay David)

"On a philosophical or testimonial level, if you look at most of the mainstream science, neuroscience, behavioral science, there is probably 80% consensus that free will is actually overrated or overstated," said Dr. Tomas Chamorro-Premuzic, author of I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. "We think we are in control of the decisions we make, but actually there are so many serendipitous and biologically driven causes of our decisions."

Dr. Tomas Chamorro-Premuzic is an organizational psychologist who works mostly in the areas of personality profiling, people analytics, talent identification, the interface between human and artificial intelligence, and leadership development. He is the Chief Innovation Officer at ManpowerGroup, a professor of business psychology at University College London and at Columbia University, co-founder of deepersignals.com, and an associate at Harvard's Entrepreneurial Finance Lab.

He is the writer behind books such as Why Do So Many Incompetent Men Become Leaders?, The Future of Recruitment: Using the New Science of Talent Analytics to Get Your Hiring Right, and this year's I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Joining CTech for its new BiblioTech video series, he discusses the integration of AI into our lives and how we can keep our unique creativity and value in an increasingly digital world.

"Leaving aside these philosophical discussions, what I highlight in the book is that if we get to a point where our decisions are so predictable that AI can make most of these decisions, even if we are not automated and replaced by AI, surely we need to question our sense of subjective free will?"

Many of the topics that Chamorro-Premuzic addresses in the book relate to the impact that AI will have on our lives and how different generations might respond to the algorithms living beside us. For example, he cites tech leaders like Bill Gates and Elon Musk, who present concerning views of AI, but he also responds positively to how Gen Z might learn to adopt such technologies.

"One of the things that the digital age has introduced is ever more ADD-like behaviors," he continued. "We are pressed to do things quicker and quicker. And therefore there are few rewards for pausing and thinking."

Even though he believes humans are perfectly capable of stopping and taking time to consider their thoughts and actions, most of the decisions today in the AI age are so fast that they become very predictable and therefore easily outsourced to machines.

"Gen Z and the next generation will need to showcase their expertise in a different area or a different way," he told CTech. "Expertise is mutating from knowing a lot of answers to asking the right questions - from memorizing and retrieving facts to knowing how, why, and where the facts are wrong. Demonstrating and cultivating expertise is a big challenge for the young generations."

Tomas, in your book you tackle one of the biggest questions facing our species: "Will we use artificial intelligence to improve the way we work and live, or will we allow it to alienate us?" Why did you find that now was the moment that this question needed to be asked and why did your book come out when it did?

I wrote 4-5 years ago that AI could be a really powerful tool to translate data and make leadership selection more data-driven with my first book, Why Do So Many Incompetent Men Become Leaders? (And How to Fix it). Then came The Future of Recruitment: Using the New Science of Talent Analytics to Get Your Hiring Right, which was about practical advice on how organizations can do that. Then, I was already contracted to do a new book during the pandemic, and on a personal level I found myself interacting with AI so much and interacting with other humans so little, that I thought this thing was really about to take off especially if we will be in lockdown for a while.

I started to look at the wider impact of AI on human behavior. Coincidentally, the book was due to launch when OpenAI released ChatGPT, which I always say is good and bad. It's good because there is more interest now for a book that explores the implications for human intelligence and human creativity in an age where we can outsource much of our thinking to machines. And it's bad because I had to write it myself; I couldn't rely on ChatGPT to write it! I think the next one will probably be written by AI and I will edit it!

I'd like to highlight what some of the tech leaders of today have said about AI, which you address at the start of your book:

You comment that Bill Gates is concerned about super intelligence; Stephen Hawking noted that "Super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we are in trouble." Finally, you highlight how Elon Musk labeled AI "a fundamental risk to the existence of human civilization" - although you point out it hasn't stopped him from trying to implant it into our brains.

Tomas, why are we pursuing such a scary and unknown technology?

We're pursuing it mostly for two reasons. First, over the past 10 years, we have amassed so much data that we don't have enough human resources or human intelligence available to analyze it. So we have had to rely on large language models, or some version of AI, to help us make sense of the data and actually make decisions in a more efficient, quick, and effortless way, which is needed in a world that is so complex.

The second reason is that human beings are very lazy. We love to optimize everything for familiarity, for predictability. You could either sit down to watch any movie that Netflix recommends to you and after five seconds you'll be watching a movie, or you could do what I do, which is dismiss the algorithm, dig deeper, and waste two hours of my life. By the time I actually find the movie I want to watch it is time to go to sleep. We are trading deep, thoughtful, and expert-like decisions for efficiency, which means lazy, fast, and furious decision-making.

It is the same whether we are choosing a job, a romantic partner, a restaurant, a hotel, or what we consume in terms of news. This is why AI has been introduced as a potential tool that can enhance our productivity, even if we're not necessarily going to invest whatever savings we gain from that productivity into more thoughtful, creative, and intellectually fulfilling activities. Therein lies the problem.

I want to address some of the more nefarious things you mention and some of the ways that AI is affecting us in ways we don't understand. We speak about AI in the world, but how much choice do we have and how much is just an illusion of choice?

On a philosophical or testimonial level, if you look at most of the mainstream science, neuroscience, behavioral science, there is probably around 80% consensus that free will is actually overrated or overstated. It is mostly an illusion. We think we are in control of the decisions we make but actually, there are so many serendipitous and biologically driven causes of our decisions.

Leaving aside these philosophical discussions which are hard to verify and often don't mean much to the average consumer, it is clear to me: If we get to a point where our decisions are so predictable that AI can make most of these decisions, even if we are not automated and replaced by AI, surely we need to question our sense of subjective free will?

If, when I'm writing an email to you, Google's auto-complete suggestion is correct 95% of the time, then I have to wonder whether I really am an agentic, creative human who still has some choice, or whether it's more deterministic than we think. I think the way to think about these issues is that we are mostly free to choose, or at least we feel we are free to choose, but that doesn't necessarily mean we want to pause, think, and choose. One of the things that the digital age has introduced is ever more ADD-like behaviors. We are pressed to do things quicker and quicker and therefore there are few rewards for pausing and thinking, which explains the rise of things like mindfulness movements, apps, and people who do digital detoxes.

We are perfectly capable of pausing and thinking, but most of the decisions we are making in the AI age are so fast that they become very predictable and therefore they can be outsourced to machines.

I'd like to elaborate on something you mention in the book, which you call a "crisis of distractibility". I think it really sums up where so many of us are today online. What did you mean by that and how has it manifested itself in recent years?

Around 11 years ago I went to a digital marketing conference attended by all the big tech firms. For the first time, some people were introducing the notion of the second screen, which was very counterintuitive and bold at the time. People were watching TV while holding their iPads, or looking at their smartphones, and now there's a second-screen market.

Now, we all have 3-4 screens that we interact with all the time. Life itself has been downgraded to a distraction. You're almost distracted when you can't pay attention to your apps or your social media feeds. You get FOMO if you can't interact with people digitally and you have to pay attention to the analog world.

In terms of productivity, I think this is really important because even though we keep on arguing about whether technology and GenAI are going to lead to a productivity gain or the demise of human civilization, the tech firms keep telling us it will make us healthier, fitter, happier, and more productive.

Actually, the productivity data is very clear. Our productivity went up between 2000 and 2008, in the first wave of the digital revolution, only to stagnate or stall after that, after the advent of social media. Roughly 60-75% of smartphone use occurs during working hours, whether people are working from home or in an office, and 70% of workers report being distracted. Digital distractions cost the U.S. economy $650 billion in lost productivity per year, which is 15 times more than the cost of absenteeism, turnover, and sickness. Multitasking, which we all do, results in a cognitive performance deficit of around 10 IQ points. It's basically as debilitating as smoking weed, presumably minus the benefits.

We think and fool ourselves into thinking that we can multitask, but every time you switch from one task to the other and you go back, you've lost the equivalent of 26 minutes of concentration on that task. Technology might improve productivity, but sometimes you become more productive if you ignore or have the ability to resist technology as well.

There is a whole new generation in Gen Z who are growing up in the world you've been outlining - with AI and a search for uniqueness. What are some of the challenges they're going to have when trying to find their voice or establish their careers or relationships?

The main challenge will be to demonstrate social proof. If you are just entering or starting your career, no matter how smart you are, it is a very steep curve to demonstrate to others that you can provide more value than what you can get from AI. You're probably paying a lot of attention to ChatGPT and other forms of GenAI in terms of their ability to produce an article or an opinion piece. In your area of expertise, you're probably able to spot the errors, but the reason you add value there is your track record and experience - you actually know your stuff.

If you're just starting, it's very difficult to persuade people that you have that expertise. Gen Z and the next generation will need to showcase their expertise in a different area or a different way. Expertise is mutating from knowing a lot of answers to asking the right questions - from memorizing and retrieving facts to knowing how, why, and where the facts are wrong. Fundamentally, to make decisions on the basis of information that might be correct or incorrect. Demonstrating and cultivating expertise is a big challenge for the young generations.

I heard that the future artists or engineers won't be coders, they'll be prompt engineers. They're going to know how to get the best out of the AI, whereas at the moment folks like me are walking around blindfolded, not knowing what it's capable of.

There is an argument to be made that as soon as there are enough prompt engineers prompting AI, AI will learn to prompt itself, and then we will need to move to the next iteration. There is going to be a very intense cat-and-mouse game where as soon as we develop something, it can be automated. Then we have to develop something else, and that can be automated too.

Creativity is really critical. Spotify probably has enough data to automate 80% of its artists, because it has an algorithm to understand what people like and most music can be pre-processed and produced synthetically. Even if it automated 100% of its content, it probably wouldn't kill musicians. It would push artists to invent the next version of music. I think that's how we need to think about every form of performance that is intellectually fueled or creatively or artistically informed.

You touch on popular content in the book, such as Netflix's The Social Dilemma, the famous book Surveillance Capitalism, and of course Black Mirror, which is the modern-day Twilight Zone. What can readers learn from I, Human?

Hopefully they will learn a little bit about AI, especially if they don't have technical backgrounds. It's designed for people with no prior knowledge, to help them understand what AI is and what it isn't - to understand how the algorithms that we interact with on a regular basis are reshaping our behavior.

Culture is always a big influence on how we behave. The average person today behaves differently from the average person in the Renaissance, medieval times, or ancient Greece or Rome, even though our hardware, our DNA, is the same. What I argue is that the current culture could be defined universally as the AI age, and with that come certain behavioral traits and markers that readers will discover in the book.

The final part is a call to action: how we need to change if we want to ensure that the AI age is also the human AI age and that we use this technological invention to upgrade ourselves. It finishes on a relatively optimistic note with a call to action to rediscover some of the qualities that make us who we are. AI will probably not harm things like deep curiosity, creativity, self-awareness, empathy, and EQ. The argument is that AI will probably win the IQ battle, but the EQ battle could be won by humans.

Read more here:

There is probably an 80% consensus that free will is actually ... - CTech

Meta is planning on introducing dozens of chatbot personas … – TechRadar

Meta is gearing up to announce a generative artificial intelligence chatbot (internally dubbed Gen AI Personas) that is aimed at enticing younger users to the world of AI chatbots. The new chatbot is expected to launch during Meta's Connect event on September 27, and will introduce some familiar but dated personas.

The Verge notes that the chatbots will come with different personas that promote more humanlike, engaging conversations to appeal to younger users. One is a sassy robot inspired by Bender from Futurama; another is called Alvin the Alien.

Meta is planning to add dozens of familiar faces to its chatbot roster and even plans on creating a tool that will enable celebrities to make their own chatbots for their fans. This is good news, as I could finally talk to Beyonce.

Meta is clearly putting a lot of time and effort into perfecting its chatbot game in the budding world of AI. We all remember Snapchat AI, which rose to fame for about a week and then quickly fizzled out into obscurity.

Interestingly, the Wall Street Journal reached out to former Snap and Instagram executive Meghana Dhar, who noted that chatbots "don't scream Gen Z to me, but definitely, Gen Z is much more comfortable with new technology." She also adds that Meta's goal with the chatbots is likely to be to keep them engaged for longer so it has increased opportunity to serve them ads.

That would explain the rather random selection of young-people personas that Meta is going for. While Bender from Futurama is pretty recognizable, he's not exactly a Gen Z icon. As someone from the demographic Meta seems to be targeting, I find it an extremely odd celebrity to slap onto your product, considering there's a plethora of other (more relevant) personalities to choose from.

The advantage Meta has in picking Gen Z as its target demographic is that Gen Z is very public about who they are super into right now. Meta could have picked literally anyone else, so hopefully the other personalities it has up its sleeve are a bit more contemporary.

Excerpt from:

Meta is planning on introducing dozens of chatbot personas ... - TechRadar

We Cannot Trust AI With Control Of Our Bombs – Fair Observer

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable hallucinations, resulting in potentially catastrophic outcomes. But there's an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film WarGames, a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced whopper) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The Terminator franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called Skynet that, like WOPR, was designed to control US nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of autonomous, or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called robot generals. In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America's atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity's demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the US Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the air force requested $231 million to develop the Advanced Battlefield Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As C2 capabilities are increasingly loaded onto AI-controlled systems, they may soon be issuing fire instructions directly to shooters, largely bypassing human control.

"A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp, a military show of force, or early engagement" - that's how Will Roper, assistant secretary of the air force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that "we do need to change the name" as the system evolves, Roper added, "I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don't think we can go there."

And while he can't go there, that's just where the rest of us may, indeed, be going.

Mind you, that's only the start. In fact, the air force's ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all US combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced jad-cee-two). "JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon to engage the target," the Congressional Research Service reported in 2022.

Initially, JADC2 will be designed to coordinate combat operations among conventional or non-nuclear American forces. Eventually, however, it is expected to link up with the Pentagon's nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. "JADC2 and NC3 are intertwined," General John E. Hyten, vice chairman of the Joint Chiefs of Staff, indicated in a 2020 interview. As a result, he added in typical Pentagonese, "NC3 has to inform JADC2 and JADC2 has to inform NC3."

It doesn't require great imagination to picture a time in the not-too-distant future when a crisis of some sort - say, a US-China military clash in the South China Sea or near Taiwan - prompts ever more intense fighting between opposing air and naval forces. Imagine then JADC2 ordering an intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on US facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.

The possibility that nightmare scenarios of this sort could result in the accidental or unintended onset of nuclear war has long troubled analysts in the arms control community. But the growing automation of military C2 systems has generated anxiety not just among them but among senior national security officials as well.

As early as 2019, when I questioned Lieutenant General Jack Shanahan, director of the Pentagon's Joint Artificial Intelligence Center, about such a risky possibility, he responded, "You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control. This is the ultimate human decision that needs to be made and so we have to be very careful." Given the technology's immaturity, he added, we need a lot of time to test and evaluate before applying AI to NC3.

In the years since, despite such warnings, the Pentagon has been racing ahead with the development of automated C2 systems. In its budget submission for 2024, the Department of Defense requested $1.4 billion for the JADC2 in order to transform warfighting capability by delivering information advantage at the speed of relevance across all domains and partners. Uh-oh! And then it requested another $1.8 billion for other kinds of military-related AI research.

Pentagon officials acknowledge that it will be some time before robot generals will be commanding vast numbers of US troops (and autonomous weapons) in battle, but they have already launched several projects intended to test and perfect just such linkages. One example is the army's Project Convergence, involving a series of field exercises designed to validate ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma Proving Ground in Arizona, for example, the army used a variety of air- and ground-based sensors to track simulated enemy forces and then process that data using AI-enabled computers at Joint Base Lewis-McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. This entire sequence was supposedly accomplished within 20 seconds, the Congressional Research Service later reported.

Less is known about the navy's AI equivalent, Project Overmatch, as many aspects of its programming have been kept secret. According to Admiral Michael Gilday, chief of naval operations, Overmatch is intended "to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain." Little else has been revealed about the project.

Despite all the secrecy surrounding these projects, you can think of ABMS, JADC2, Convergence and Overmatch as building blocks for a future Skynet-like mega-network of supercomputers designed to command all US forces, including its nuclear ones, in armed combat. The more the Pentagon moves in that direction, the closer we'll come to a time when AI possesses life-or-death power over all American soldiers along with opposing forces and any civilians caught in the crossfire.

Such a prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned us, those algorithms are capable of remarkably inexplicable mistakes and, to use the AI term of the moment, hallucinations - that is, seemingly reasonable results that are entirely illusionary. Under the circumstances, it's not hard to imagine such computers hallucinating an imminent enemy attack and launching a war that might otherwise have been avoided.

And that's not the worst of the dangers to consider. After all, there's the obvious likelihood that America's adversaries will similarly equip their forces with robot generals. In other words, future wars are likely to be fought by one set of AI systems against another, both linked to nuclear weaponry, with entirely unpredictable - but potentially catastrophic - results.

Not much is known (from public sources at least) about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are thought to be developing networks comparable to the Pentagon's JADC2. As early as 2014, in fact, Russia inaugurated a National Defense Control Center (NDCC) in Moscow, a centralized command post for assessing global threats and initiating whatever military action is deemed necessary, whether of a non-nuclear or nuclear nature. Like JADC2, the NDCC is designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.

China is said to be pursuing an even more elaborate, if similar, enterprise under the rubric of Multi-Domain Precision Warfare (MDPW). According to the Pentagon's 2022 report on Chinese military developments, its military, the People's Liberation Army, is being trained and equipped to use AI-enabled sensors and computer networks to rapidly identify key vulnerabilities in the US operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.

Picture, then, a future war between the US and Russia or China (or both) in which JADC2 commands all US forces, while Russia's NDCC and China's MDPW command those countries' forces. Consider, as well, that all three systems are likely to experience errors and hallucinations. How safe will humans be when robot generals decide that it's time to win the war by nuking their enemies?

If this strikes you as an outlandish scenario, think again, at least according to the leadership of the National Security Commission on Artificial Intelligence, a congressionally mandated enterprise that was chaired by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. While the Commission believes that properly designed, tested and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, the unchecked global use of such systems potentially risks "unintended conflict escalation and crisis instability," it affirmed in its Final Report. Such dangers could arise, it stated, because of "challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems on the battlefield" - when, that is, AI fights AI.

Though this may seem an extreme scenario, it's entirely possible that opposing AI systems could trigger a catastrophic flash war - the military equivalent of a flash crash on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous Flash Crash of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market's value. According to Paul Scharre of the Center for a New American Security, who first studied the phenomenon, the military equivalent of such crises on Wall Street would arise when the automated command systems of opposing forces become trapped in a cascade of escalating engagements. In such a situation, he noted, autonomous weapons could lead to accidental death and destruction at catastrophic scales in an instant.

At present, there are virtually no measures in place to prevent a future catastrophe of this sort or even talks among the major powers to devise such measures. Yet, as the National Security Commission on Artificial Intelligence noted, such crisis-control measures are urgently needed to integrate automated escalation tripwires into such systems that would prevent the automated escalation of conflict. Otherwise, some catastrophic version of World War III seems all too possible. Given the dangerous immaturity of such technology and the reluctance of Beijing, Moscow and Washington to impose any restraints on the weaponization of AI, the day when machines could choose to annihilate us might arrive far sooner than we imagine and the extinction of humanity could be the collateral damage of such a future war.

[TomDispatch first published this piece.]

[Anton Schauble edited this piece.]

The views expressed in this article are the author's own and do not necessarily reflect Fair Observer's editorial policy.

See the original post:

We Cannot Trust AI With Control Of Our Bombs - Fair Observer

AI: is the end nigh? | Laura Dodsworth – The Critic

This article is taken from the August-September 2023 issue of The Critic. To get the full magazine why not subscribe? Right now we're offering five issues for just 10.

Does AI pose a mass extinction threat? Or is this concern merely the latest manifestation of humanitys need to frighten itself witless?

As the year 2000 approached, the world fretted over the Y2K or Millennium Bug. Neurotics and newspapers alike predicted that power plants, banks and planes would fail as 1999 became 2000, ushering in pandemonium and death. John Hamre, the US Deputy Secretary of Defense from 1997 to March 2000, foresaw that the Y2K problem was "the electronic equivalent of the El Niño" and that "there will be nasty surprises around the globe". There weren't, and there was little difference in the outcome between countries which invested millions of dollars and countries which invested none.

In the 23 years since then, we've gone from "computers are so stupid the world will end" to "computers are so clever the world will end". But the hysteria remains the same.

The latest apocalyptic horror, on the heels of Covid-19 and climate catastrophe, is whether "non-human minds", as Elon Musk pitches it, might eventually "outnumber, outsmart, obsolete and replace us". He co-signed an open letter with other tech leaders warning that machines might flood our information channels with "propaganda and untruth" (in contradistinction to humans doing so).

The letter set out "profound risks to society, humanity and democracy", which in turn led to a multitude of hyperbolic headlines such as the BBC's "Artificial intelligence could lead to extinction, experts warn". The Centre for AI Safety warned starkly that: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

AI does pose threats, as well as tremendous opportunities, but the threats may be quite different to the doom and gloom headlines. First, there is no certainty that AI will develop the capabilities that we are being extravagantly warned about. Even the Future of Life Institute, which published the open letter, admits that super-intelligence is not necessarily inevitable.

Thus far, AI has had a free ride on human achievement and creativity. There is no AI without humans. There is no generative language AI without human language. There is no writing in the style of John Donne without John Donne. In fact, ChatGPT and Bard do a terrible impersonation of metaphysical poetry, although their limericks are passable. There is no AI art, music or novels without everything that has gone before. In short, the achievements are still ours.

The panic is focused on what might be. AI is an extremely advanced tool, but it is just a tool. It is the humans holding the tools with whom we need to concern ourselves. New technology has sometimes resulted in some horrible uses, such as the gas chambers. New communications technologies have been channels for propaganda. But they were not the propaganda itself. Nevertheless, some threats are real.

Firstly, AI systems are now becoming human-competitive at general tasks. IBM's CEO, Arvind Krishna, recently told Bloomberg that he could easily see 30 per cent of jobs getting replaced by AI and automation over a five-year period. And according to a report by Goldman Sachs, AI could replace the equivalent of 300 million full-time jobs.

It turns out the very IT, software, media, creative and legal people now worried about AI might find themselves facing increased competition from it. For example, ChatGPT will help people with average writing skills produce better articles, which will probably lead to more competition and lower wages.

AI is also a brainwasher's dream. Advocates for regulation want you to think that AI is about to discover sentience and write new religious tomes, invent propaganda and disrupt elections, all because it wants to, for its own devious reasons. In fact, the brainwashing threat is quite different.

AI can be sedimented with psychological techniques such as nudging. Nudging involves influencing your behaviour by altering the environment, or choice architecture, in different ways, by exploiting our natural cognitive biases. Algorithmic nudging is a potentially potent tool in the hands of paternalistic libertarian do-gooders or authoritarians.

Nudges will be able to scale, completely unlike their real-world counterparts, and at the same time be completely personalised. Facebook knows you better than anyone except your spouse from a mere 200 likes splattered on its pages, even to the extent of knowing your sexuality. As I warn in my book Free Your Mind, if you don't want AI to know you better than anyone else, tread lightly on social media and use it mindfully.

It is interesting that the threat of AI is likened to nukes, yet academics have been writing for years about algorithmic nudging, which presents clear ethical dilemmas about consent, privacy and manipulation, without clamouring for regulation.


Algorithms already create completely personalised platforms. Twitter is often described as a public square, but it more closely resembles a maze in which the lights are off and the walls move, seemingly arbitrarily. Aside from the disturbing evidence presented in the release of the Twitter Files, particularly concerning how Twitter deamplifies content it does not like, anyone using the platform a lot will attest to the inexplicable rise and fall of follower counts and the suppression of juicy tweets. It seems content is pushed up or down based on the preferences of Big Tech and government agencies, and this is made effective through the capabilities of algorithms. AI is killing transparency and pluralism.

In our relationship with AI, our biases create danger. The authority bias means we see AI as more powerful than it is, and therefore we are more likely to succumb to manufactured and exaggerated fears. We anthropomorphise AI. Google engineer Blake Lemoine was prepared to lose his job because he believed LaMDA, an AI chatbot, was sentient.

AI is not human-like, but it is our human tendency to believe it is so. One study has shown that since lockdown, people show a higher preference for anthropomorphised brands and platforms. The more we disconnect from each other through tech, the more we want tech to resemble us. Men already have AI girlfriends, and one Belgian man was persuaded to kill himself by an AI chatbot called Eliza after he shared his fears about climate change. Alarming though this is, is it any more so than a technological upgrade of last year's sex dolls or emo music?

AI might make us stupid. As we rely ever more on our phones, our own capabilities may decrease. One study has shown that just having your phone nearby reduces cognitive abilities. As we outsource homework, research and even parts of our jobs, will we use our brains to create more wonders of the world, or to vegetate longer on TikTok?


Our biases make us vulnerable to the perceived threats of AI, but so do the times in which we find ourselves. We no longer seem to have sufficient collective belief in our special status as human beings. Another co-signatory of the open letter is the historian and author Yuval Noah Harari who has described humans as hackable animals. If you see humans as soulless organic algorithms then you might indeed feel threatened by AI which certainly constitutes superior algorithms unconstrained by mortal flesh.

Harari believes that humans will no longer be autonomous entities directed by the stories the narrating self invents. Instead they will be integral parts of a huge global network. This is a far-reaching hypothesis, and perhaps why Harari does not own a smartphone, for all his apparent enthusiasm for a transhumanist chipped-brain future.

He has claimed that AI may even try to write the world's next Bible. Humans are quite capable of starting religious wars on their own. So far all AI has managed is to show the Pope in a white puffer jacket.

Harari's dire warnings keep him in the spotlight as a forward-looking muse to the world's elite. After all, describing AI as merely an intelligent system which, for now, can write a passable undergrad-level essay doesn't seem epoch-defining. Equally, those calling for regulation potentially stand to benefit from investment, government contracts and control over the desired direction of regulation.

Casting AI as a god is indicative of our tendency to fear the End of Days, combined with a crisis of confidence in ourselves and an overdeveloped authority bias. AI is no god, it is a fleet of angels, poised to swoop and intervene in the lives of humans at the bidding of the priest caste who direct it.

It is the priest caste we should look to. What do the tech leaders and politicians of the world want? They don't want to stop AI altogether, of course. They want to pause development and the release of updates while they work together to dramatically accelerate development of robust AI governance systems. They want a seat at the table to write a new moral code.

As a priority, they want the right sort of people - academics, politicians and tech leaders - to be doing this. Comparing AI to nukes rather than explaining its nudging capabilities is all you need to know about the transparency of the regulation, and the sort of safety it aims to achieve.

Whether AI is viewed as an intelligent assistant or angel, it is in the employ of humans.

Free Your Mind: The new world of manipulation and how to resist it, written by Laura Dodsworth and Patrick Fagan, is out now (Harper Collins) from all good book shops.

Read more here:

AI: is the end nigh? | Laura Dodsworth - The Critic