Archive for the ‘Artificial Super Intelligence’ Category

Meta is planning on introducing dozens of chatbot personas … – TechRadar

Meta is gearing up to announce a generative artificial intelligence chatbot (internally dubbed Gen AI Personas) aimed at enticing younger users into the world of AI chatbots. The new chatbot is expected to launch during Meta's Connect event on September 27, and will introduce some familiar but dated personas.

The Verge notes that the chatbots will come with different personas intended to promote more humanlike, engaging conversations that appeal to younger users. One of them, a sassy robot persona, is inspired by Bender from Futurama and Alvin the Alien.

Meta is planning to add dozens of familiar faces to its chatbot roster and even plans on creating a tool that will enable celebrities to make their own chatbots for their fans. This is good news, as I could finally talk to Beyoncé.

Meta is clearly putting a lot of time and effort into perfecting its chatbot game in the budding world of AI. We all remember Snapchat AI, which rose to fame for about a week and then quickly fizzled out into obscurity.

Interestingly, the Wall Street Journal reached out to former Snap and Instagram executive Meghana Dhar, who noted that chatbots "don't scream Gen Z to me, but definitely, Gen Z is much more comfortable with new technology." She also adds that Meta's goal with the chatbots is likely to be "to keep them engaged for longer so it has increased opportunity to serve them ads."

That would explain the rather random selection of personas Meta has chosen for young people. While Bender from Futurama is pretty recognizable, he's not exactly a Gen Z icon. As someone in the demographic Meta seems to be targeting, I find him an extremely odd celebrity to slap onto a product, considering there's a plethora of other (more relevant) personalities to choose from.

The advantage Meta has in picking Gen Z as its target demographic is that Gen Z is very public about who it's super into right now. Meta could have picked literally anyone else, so hopefully the other personalities it has up its sleeve are a bit more contemporary.

Excerpt from:

Meta is planning on introducing dozens of chatbot personas ... - TechRadar

We Cannot Trust AI With Control Of Our Bombs – Fair Observer

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable hallucinations, resulting in potentially catastrophic outcomes. But there's an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film WarGames, a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced whopper) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The Terminator franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called Skynet that, like WOPR, was designed to control US nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of autonomous, or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called "robot generals." In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America's atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity's demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the US Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the air force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As C2 capabilities are increasingly loaded onto AI-controlled systems, they may soon be issuing fire instructions directly to shooters, largely bypassing human control.

"A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp, a military show of force, or early engagement": that's how Will Roper, assistant secretary of the air force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that "we do need to change the name" as the system evolves, Roper added, "I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don't think we can go there."

And while he can't go there, that's just where the rest of us may, indeed, be going.

Mind you, that's only the start. In fact, the air force's ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all US combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced "jad-cee-two"). JADC2 "intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon to engage the target," the Congressional Research Service reported in 2022.

Initially, JADC2 will be designed to coordinate combat operations among conventional or non-nuclear American forces. Eventually, however, it is expected to link up with the Pentagon's nuclear command-control-and-communications systems (NC3), potentially giving computers significant control over the use of the American nuclear arsenal. "JADC2 and NC3 are intertwined," General John E. Hyten, vice chairman of the Joint Chiefs of Staff, indicated in a 2020 interview. As a result, he added in typical Pentagonese, "NC3 has to inform JADC2 and JADC2 has to inform NC3."

It doesn't require great imagination to picture a time in the not-too-distant future when a crisis of some sort, say a US-China military clash in the South China Sea or near Taiwan, prompts ever more intense fighting between opposing air and naval forces. Imagine then the JADC2 ordering an intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on US facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.

The possibility that nightmare scenarios of this sort could result in the accidental or unintended onset of nuclear war has long troubled analysts in the arms control community. But the growing automation of military C2 systems has generated anxiety not just among them but among senior national security officials as well.

As early as 2019, when I questioned Lieutenant General Jack Shanahan, director of the Pentagon's Joint Artificial Intelligence Center, about such a risky possibility, he responded, "You will find no stronger proponent of integration of AI capabilities writ large into the Department of Defense, but there is one area where I pause, and it has to do with nuclear command and control. This is the ultimate human decision that needs to be made and so we have to be very careful." Given the technology's immaturity, he added, we need a lot of time to test and evaluate before applying AI to NC3.

In the years since, despite such warnings, the Pentagon has been racing ahead with the development of automated C2 systems. In its budget submission for 2024, the Department of Defense requested $1.4 billion for JADC2 in order "to transform warfighting capability by delivering information advantage at the speed of relevance across all domains and partners." Uh-oh! It then requested another $1.8 billion for other kinds of military-related AI research.

Pentagon officials acknowledge that it will be some time before robot generals are commanding vast numbers of US troops (and autonomous weapons) in battle, but they have already launched several projects intended to test and perfect just such linkages. One example is the army's Project Convergence, involving a series of field exercises designed to validate ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma Proving Ground in Arizona, for example, the army used a variety of air- and ground-based sensors to track simulated enemy forces and then process that data using AI-enabled computers at Joint Base Lewis-McChord in Washington state. Those computers, in turn, issued fire instructions to ground-based artillery at Yuma. This entire sequence was supposedly accomplished "within 20 seconds," the Congressional Research Service later reported.

Less is known about the navy's AI equivalent, Project Overmatch, as many aspects of its programming have been kept secret. According to Admiral Michael Gilday, chief of naval operations, Overmatch is intended "to enable a Navy that swarms the sea, delivering synchronized lethal and nonlethal effects from near-and-far, every axis, and every domain." Little else has been revealed about the project.

Despite all the secrecy surrounding these projects, you can think of ABMS, JADC2, Convergence and Overmatch as building blocks for a future Skynet-like mega-network of super-computers designed to command all US forces, including the nuclear ones, in armed combat. The more the Pentagon moves in that direction, the closer we'll come to a time when AI possesses life-or-death power over all American soldiers along with opposing forces and any civilians caught in the crossfire.

Such a prospect should be ample cause for concern. To start with, consider the risk of errors and miscalculations by the algorithms at the heart of such systems. As top computer scientists have warned us, those algorithms are capable of remarkably inexplicable mistakes and, to use the AI term of the moment, "hallucinations": seemingly reasonable results that are entirely illusory. Under the circumstances, it's not hard to imagine such computers hallucinating an imminent enemy attack and launching a war that might otherwise have been avoided.

And that's not the worst of the dangers to consider. After all, there's the obvious likelihood that America's adversaries will similarly equip their forces with robot generals. In other words, future wars are likely to be fought by one set of AI systems against another, both linked to nuclear weaponry, with entirely unpredictable, but potentially catastrophic, results.

Not much is known (from public sources at least) about Russian and Chinese efforts to automate their military command-and-control systems, but both countries are thought to be developing networks comparable to the Pentagons JADC2. As early as 2014, in fact, Russia inaugurated a National Defense Control Center (NDCC) in Moscow, a centralized command post for assessing global threats and initiating whatever military action is deemed necessary, whether of a non-nuclear or nuclear nature. Like JADC2, the NDCC is designed to collect information on enemy moves from multiple sources and provide senior officers with guidance on possible responses.

China is said to be pursuing an even more elaborate, if similar, enterprise under the rubric of Multi-Domain Precision Warfare (MDPW). According to the Pentagon's 2022 report on Chinese military developments, its military, the People's Liberation Army, is being trained and equipped to use AI-enabled sensors and computer networks to rapidly identify key vulnerabilities in the US operational system and then combine joint forces across domains to launch precision strikes against those vulnerabilities.

Picture, then, a future war between the US and Russia or China (or both) in which the JADC2 commands all US forces, while Russia's NDCC and China's MDPW command those countries' forces. Consider, as well, that all three systems are likely to experience errors and hallucinations. How safe will humans be when robot generals decide that it's time to win the war by nuking their enemies?

If this strikes you as an outlandish scenario, think again, at least according to the leadership of the National Security Commission on Artificial Intelligence, a congressionally mandated enterprise that was chaired by Eric Schmidt, former head of Google, and Robert Work, former deputy secretary of defense. While the Commission believes that properly designed, tested and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, "the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability," it affirmed in its Final Report. Such dangers could arise, it stated, "because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems on the battlefield": when, that is, AI fights AI.

Though this may seem an extreme scenario, it's entirely possible that opposing AI systems could trigger a catastrophic "flash war," the military equivalent of a "flash crash" on Wall Street, when huge transactions by super-sophisticated trading algorithms spark panic selling before human operators can restore order. In the infamous Flash Crash of May 6, 2010, computer-driven trading precipitated a 10% fall in the stock market's value. According to Paul Scharre of the Center for a New American Security, who first studied the phenomenon, the military equivalent of such crises on Wall Street would arise when the automated command systems of opposing forces "become trapped in a cascade of escalating engagements." In such a situation, he noted, autonomous weapons could "lead to accidental death and destruction at catastrophic scales in an instant."
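
To make the feedback dynamic concrete, here is a deliberately toy sketch, not drawn from Scharre's work or any real system: two automated response policies each answer the other's last action with a slightly stronger one. The 1.5x response rule, the starting incident size and the threshold are all invented for illustration; the only point is how quickly a small trigger escalates when no human pause is built in.

```python
# Toy illustration of a "flash" escalation loop between two automated
# response policies. Purely hypothetical: the 1.5x response rule, the
# starting incident size and the threshold are invented for illustration.

def respond(opponent_last_action: float, escalation_factor: float = 1.5) -> float:
    """Each side's policy: answer the opponent's last action a bit harder."""
    return opponent_last_action * escalation_factor

def run_flash_escalation(initial_incident: float = 1.0,
                         catastrophic_threshold: float = 1000.0,
                         max_rounds: int = 50) -> int:
    """Count how many automated exchange rounds it takes to cross the threshold."""
    action_a, action_b = initial_incident, 0.0
    for round_number in range(1, max_rounds + 1):
        action_b = respond(action_a)   # side B reacts to A
        action_a = respond(action_b)   # side A reacts to B in the same tick
        if max(action_a, action_b) >= catastrophic_threshold:
            return round_number
    return max_rounds

if __name__ == "__main__":
    rounds = run_flash_escalation()
    print(f"Threshold crossed after {rounds} automated exchange rounds.")
    # With a 1.5x response rule, a size-1 incident passes 1,000 in about
    # 9 rounds, far faster than any human review cycle could intervene.
```

The arithmetic is trivial, which is the point: each full exchange multiplies the stakes by 2.25, so a bounded incident becomes a runaway cascade within a handful of machine-speed iterations.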

At present, there are virtually no measures in place to prevent a future catastrophe of this sort, nor even talks among the major powers to devise such measures. Yet, as the National Security Commission on Artificial Intelligence noted, such crisis-control measures are urgently needed to integrate "automated escalation tripwires" into such systems that would prevent the automated escalation of conflict. Otherwise, some catastrophic version of World War III seems all too possible. Given the dangerous immaturity of such technology and the reluctance of Beijing, Moscow and Washington to impose any restraints on the weaponization of AI, the day when machines could choose to annihilate us might arrive far sooner than we imagine, and the extinction of humanity could be the collateral damage of such a future war.

[TomDispatch first published this piece.]

[Anton Schauble edited this piece.]

The views expressed in this article are the author's own and do not necessarily reflect Fair Observer's editorial policy.

See the original post:

We Cannot Trust AI With Control Of Our Bombs - Fair Observer

AI: is the end nigh? | Laura Dodsworth – The Critic

This article is taken from the August-September 2023 issue of The Critic. To get the full magazine, why not subscribe? Right now we're offering five issues for just £10.

Does AI pose a mass extinction threat? Or is this concern merely the latest manifestation of humanitys need to frighten itself witless?

As the year 2000 approached, the world fretted over the Y2K or Millennium Bug. Neurotics and newspapers alike predicted that power plants, banks and planes would fail as 1999 became 2000, ushering in pandemonium and death. John Hamre, the US Deputy Secretary of Defense from 1997 to March 2000, foresaw that "the Y2K problem is the electronic equivalent of the El Niño, and there will be nasty surprises around the globe." There weren't, and there was little difference in the outcome between countries which invested millions of dollars and countries which invested none.

In the 23 years since then, we've gone from "computers are so stupid the world will end" to "computers are so clever the world will end". But the hysteria remains the same.

The latest apocalyptic horror on the heels of Covid-19 and climate catastrophe is whether "non-human minds", as Elon Musk pitches it, might eventually "outnumber, outsmart, obsolete and replace us". He co-signed an open letter with other tech leaders warning that machines might "flood our information channels with propaganda and untruth" (in contradistinction to humans doing so).

The letter set out profound risks to society, humanity and democracy, which in turn led to a multitude of hyperbolic headlines such as the BBC's "Artificial intelligence could lead to extinction, experts warn". The Centre for AI Safety warned starkly that: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

AI does pose threats, as well as tremendous opportunities, but the threats may be quite different from the doom-and-gloom headlines. First, there is no certainty that AI will develop the capabilities we are being extravagantly warned about. Even the Future of Life Institute, which published the open letter, admits that super-intelligence is not necessarily inevitable.

Thus far, AI has had a free ride on human achievement and creativity. There is no AI without humans. There is no generative language AI without human language. There is no writing in the style of John Donne without John Donne. In fact, ChatGPT and Bard do a terrible impersonation of metaphysical poetry, although their limericks are passable. There is no AI art, music or novels without everything that has gone before. In short, the achievements are still ours.

The panic is focused on what might be. AI is an extremely advanced tool, but it is just a tool. It is the humans holding the tools with whom we need to concern ourselves. New technology has sometimes resulted in some horrible uses, such as the gas chambers. New communications technologies have been channels for propaganda. But they were not the propaganda itself. Nevertheless, some threats are real.

Firstly, AI systems are now becoming human-competitive at general tasks. IBM's CEO, Arvind Krishna, recently told Bloomberg that he could easily see 30 per cent of jobs getting replaced by AI and automation over a five-year period. And according to a report by Goldman Sachs, AI could replace the equivalent of 300 million full-time jobs.

It turns out the very IT, software, media, creative and legal people now worried about AI might find themselves facing increased competition from it. For example, ChatGPT will help people with average writing skills produce better articles, which will probably lead to more competition and lower wages.

AI is also a brainwasher's dream. Advocates for regulation want you to think that AI is about to discover sentience and write new religious tomes, invent propaganda and disrupt elections, all because it wants to, for its own devious reasons. In fact, the brainwashing threat is quite different.

AI can be sedimented with psychological techniques such as nudging. Nudging involves influencing your behaviour by altering the environment, or "choice architecture", in different ways, exploiting our natural cognitive biases. Algorithmic nudging is a potentially potent tool in the hands of paternalistic libertarian do-gooders or authoritarians.

Nudges will be able to scale in a way their real-world counterparts cannot, and at the same time be completely personalised. Facebook knows you better than anyone except your spouse from a mere 200 likes splattered on its pages, even to the extent of knowing your sexuality. As I warn in my book Free Your Mind, if you don't want AI to know you better than anyone else, tread lightly on social media and use it mindfully.
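
As a purely illustrative sketch, not any platform's actual code, personalised algorithmic nudging can be thought of as re-ordering the same set of options for each user so that the most prominent choice is the one the operator wants picked, weighted by what the system has inferred about that user. Every function name, option and weight below is invented for the example.

```python
# Hypothetical sketch of algorithmic nudging as personalised choice
# architecture: the options shown are identical for everyone, but the
# ordering (the "default" position) is tailored to each user's inferred
# traits so the operator's preferred option appears most prominent.

from typing import Dict, List

def nudge_ranking(options: List[str],
                  operator_preference: Dict[str, float],
                  user_susceptibility: Dict[str, float]) -> List[str]:
    """Rank options by how much the operator wants them chosen,
    weighted by how receptive this particular user is predicted to be."""
    def score(option: str) -> float:
        return operator_preference.get(option, 0.0) * user_susceptibility.get(option, 1.0)
    return sorted(options, key=score, reverse=True)

if __name__ == "__main__":
    options = ["opt_in_to_tracking", "ask_me_later", "decline"]
    operator_preference = {"opt_in_to_tracking": 1.0, "ask_me_later": 0.4, "decline": 0.1}
    # Susceptibility scores would be inferred from behavioural data (likes,
    # clicks, dwell time); different users therefore see different orderings.
    user_profile = {"opt_in_to_tracking": 0.9, "ask_me_later": 0.5, "decline": 0.2}
    print(nudge_ranking(options, operator_preference, user_profile))
    # -> ['opt_in_to_tracking', 'ask_me_later', 'decline']
```

Nothing here is coercive in the obvious sense: the choices are all still available, which is exactly why scaled, personalised choice architecture raises the consent and manipulation questions discussed above.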

It is interesting that the threat of AI is likened to nukes, yet academics have been writing for years about algorithmic nudging, which presents clear ethical dilemmas about consent, privacy and manipulation, without clamouring for regulation.

Algorithms already create completely personalised platforms. Twitter is often described as a public square, but it more closely resembles a maze in which the lights are off and the walls move, seemingly arbitrarily. Aside from the disturbing evidence presented in the release of the Twitter Files, particularly concerning how Twitter de-amplifies content it does not like, anyone using the platform a lot will attest to the inexplicable rise and fall of follower counts and the suppression of juicy tweets. It seems content is pushed up or down based on the preferences of Big Tech and government agencies, and this is made effective through the capabilities of algorithms. AI is killing transparency and pluralism.

In our relationship with AI, our biases create danger. The authority bias means we see AI as more powerful than it is, and therefore we are more likely to succumb to manufactured and exaggerated fears. We anthropomorphise AI. Google engineer Blake Lemoine was prepared to lose his job because he believed LaMDA, an AI chatbot, had become sentient.

AI is not human-like, but it is a human tendency to believe that it is. One study has shown that since lockdown, people show a higher preference for anthropomorphised brands and platforms. The more we disconnect from each other through tech, the more we want tech to resemble us. Men already have AI girlfriends, and one Belgian man was persuaded to kill himself by an AI chatbot called Eliza after he shared his fears about climate change. Alarming though this is, is it any more so than a technological upgrade of last year's sex dolls or emo music?

AI might make us stupid. As we rely ever more on our phones, our own capabilities may decrease. One study has shown that just having your phone nearby reduces cognitive abilities. As we outsource homework, research and even parts of our jobs, will we use our brains to create more wonders of the world, or to vegetate longer on TikTok?

Our biases make us vulnerable to the perceived threats of AI, but so do the times in which we find ourselves. We no longer seem to have sufficient collective belief in our special status as human beings. Another co-signatory of the open letter is the historian and author Yuval Noah Harari, who has described humans as "hackable animals". If you see humans as soulless organic algorithms, then you might indeed feel threatened by AI, which certainly constitutes superior algorithms unconstrained by mortal flesh.

Harari believes that humans will no longer be autonomous entities directed by the stories the narrating self invents. Instead they will be integral parts of a huge global network. This is a far-reaching hypothesis, and perhaps why Harari does not own a smartphone, for all his apparent enthusiasm for a transhumanist chipped-brain future.

He has claimed that AI may even try to write the world's next Bible. Humans are quite capable of starting religious wars on their own. So far, all AI has managed is to show the Pope in a white puffer jacket.

Harari's dire warnings keep him in the spotlight as a forward-looking muse to the world's elite. After all, describing AI as merely an intelligent system which, for now, can write a passable undergrad-level essay doesn't seem epoch-defining. Equally, those calling for regulation potentially stand to benefit from investment, government contracts and control over the desired direction of regulation.

Casting AI as a god is indicative of our tendency to fear the End of Days, combined with a crisis of confidence in ourselves and an overdeveloped authority bias. AI is no god; it is a fleet of angels, poised to swoop and intervene in the lives of humans at the bidding of the priest caste who direct it.

It is the priest caste we should look to. What do the tech leaders and politicians of the world want? They don't want to stop AI altogether, of course. They want to pause development and the release of updates while they work together to "dramatically accelerate development of robust AI governance systems". They want a seat at the table to write a new moral code.

As a priority, they want the right sort of people (academics, politicians and tech leaders) to be doing this. Comparing AI to nukes rather than explaining its nudging capabilities tells you all you need to know about the transparency of the regulation, and the sort of safety it aims to achieve.

Whether AI is viewed as an intelligent assistant or angel, it is in the employ of humans.

Free Your Mind: The new world of manipulation and how to resist it, written by Laura Dodsworth and Patrick Fagan, is out now (HarperCollins) from all good book shops.

Read more here:

AI: is the end nigh? | Laura Dodsworth - The Critic

"Most Beautiful Car in the World" Alfa Romeo Asks People To … – autoevolution

Alfa Romeo is inching closer to the debut of its first supercar in more than 16 years. The model will be unveiled at the end of this month. But before that happens, the Italian carmaker is making a suggestion to enthusiasts: to imagine it with the help of artificial intelligence.

Modesty has never been a virtue for Alfa Romeo. So they came up with a proposition: enthusiasts should imagine what they call "the most beautiful car in the world" using artificial intelligence. That is the tag Jeremy Clarkson used for the Alfa Romeo 8C Competizione on BBC's Top Gear, but the carmaker hopes to switch crowns.

And we do know that the 8C, revealed at the Mondial de l'Automobile back in 2006, is going to be a muse for the upcoming supercar in terms of design. And so will the legendary T33 Stradale from the 1960s.

Alfa Romeo confirms that the supercar will be unveiled on August 31. They describe it as a creation which was "born through the courage and passion of a team striving to make a dream become reality."

"Will it look futuristic or nostalgic?" they ask. "Classic or contemporary? 4 or 2 doors? Sleek or steampunk? Red or green?" No, they are not looking for design inspiration with just one week left until the official presentation. The move is just part of the buildup ahead of the event scheduled for next week. The best submissions will be shared on Alfa's Instagram account.

The limited-run model does have a name, but that is classified information as well. It should reportedly be called either the 6C or the 333. The 6 would be a reference to the twin-turbo 2.9-liter V6, which should be integrated into a Formula One-inspired drivetrain. Meanwhile, the 333 would be a nod to the iconic T33 Stradale from more than half a century ago. Alfa Romeo will only build 333 examples of its super-exclusive supercar.

The carmaker has great expectations for the first supercar it has rolled out in more than a decade and a half. Alfa's CEO, Jean-Philippe Imparato, said that the model would be sold out by the time he actually unveiled it. And that would happen, he explained, because it would be "iconic and super sexy."

No word on any reservations just yet, though. We are to find out more next week, during the premiere that will be streamed live from the Alfa Romeo Museum in Arese, Italy.

Excerpt from:

"Most Beautiful Car in the World" Alfa Romeo Asks People To ... - autoevolution

Managing Past, Present and Future Epidemics – Australian Institute … – Australian Institute of International Affairs

On Tuesday 8 August, Raina MacIntyre, Professor of Global Biosecurity in the Kirby Institute at the University of New South Wales, addressed the Institute on the lessons Australia and the international community need to learn about global health and biotechnology from the Covid pandemic. Professor MacIntyre drew on the research on the prevention and control of infectious diseases explored in her book Dark Winter: An Insider's Guide to Pandemics and Biosecurity (NewSouth Press, November 2022).

Professor MacIntyre opened with an alarming anecdote: an illegal lab owned by Prestige Biotech was discovered in Fresno, California, in March 2023, containing genetically-engineered mice. These mice were humanised (modified to replicate human responses to pathogens) and could spread SARS-CoV-2, the virus that causes COVID-19, as well as the herpes virus, HIV and other pathogens dangerous to humans. The lab was located 35 kilometres from a naval base and had links to China, but nobody appeared to be alive to the implications of this discovery: there are huge gaps in awareness of biosecurity issues among law enforcement, intelligence and military agencies.

A US congressional hearing had received testimony that the COVID-19 pandemic had been the result of a leak from a lab in Wuhan that had received funding from the United States. In response to a question from the audience as to what might motivate two notorious rivals like China and the United States to participate in joint research efforts in this way, Professor MacIntyre suggested that one possible reason could be that certain forms of research could only take place in certain settings, and in certain countries. She was open to the idea that COVID-19 could have originated from an accidental, or even deliberate, leak from the Wuhan Institute of Virology. She stated that this is not a right-wing conspiracy, but a plausible hypothesis.

Open-source methods are now available to manufacture synthetic biological weapons cheaply. Dual-use technology (technology that can be applied for good or ill) increases the risk of man-made pandemics. These unnatural diseases carry much greater risks than naturally-occurring pandemics. In Professor MacIntyre's view, biological warfare is the next arms race, as nation-states seek to create new weapons to combat potential threats to their national security. She drew the audience's attention to the retention of the smallpox virus by the former Cold War superpowers, Russia and the US. In theory, the virus is retained for research purposes, but its possession seems likely to be for possible biological warfare. She had scrutinised the availability of formulas for deadly pathogens on the internet: the omnipresence of these formulas means that anyone with the requisite training and equipment could create a pathogen for a biological weapon. For example, she claimed that a Canadian scientific team had easily recreated the horsepox virus, a cousin of smallpox, in 2017 by using publicly available research.

Next, Professor MacIntyre turned to the potential for engineering human embryos. While the World Health Organisation (WHO) has made attempts to regulate genetic engineering experiments involving humans, she believed certain governments and organisations have continued to undertake research in engineering superhumans. She called for governments to agree on principles to strictly regulate the development of such technology, in order to prevent adverse global impacts. The United Kingdom and the United States are among the countries that have been conducting research on the creation of super soldiers, which Professor MacIntyre warns has the potential to become a future arms race. The objective is to create soldiers who are stronger and fitter, with greater stamina and resistance to pain, by conferring changes to the human genome. She warned that hostile states may in future find a way to alter the genome of vulnerable target peoples.

Professor MacIntyre drew parallels between the future of pandemics and climate change: although governments have had a vested interest in not combatting global heating, it is public awareness of the phenomenon and its effects that will truly make a difference. Current regulation of biotechnology is heavily driven by the need to protect the interests of research scientists, and community awareness and engagement have been very low. But the solutions to the existential crisis posed by man-made pandemics will have to come from the community, empowered with the requisite knowledge and given a voice. The public need to seek information and press governments to respond to threats. The final chapter of Professor MacIntyre's book is entitled "A biological winter", alluding to an existential threat to humanity comparable to the threat of a nuclear winter.

Professor MacIntyre commented on the declining compliance with established research ethics principles, such as the need for individual consent and the "do no harm" rule, largely borne out of the Helsinki Declaration and the Nuremberg trials. She argued that research committees have failed to consider the effect of research on people in other countries. For this reason, she strongly advocates the registration of all clinical trials.

In response to a question from the audience on what the WHO is doing to address the threat posed by man-made pandemics, Professor MacIntyre acknowledged that the WHO has assembled an advisory body, the Scientific Advisory Group for the Origins of Novel Pathogens (SAGO), to investigate the origins of new epidemics, natural or otherwise, and also participates in pathogen projects. But arguably it is not doing enough to educate and inform the public, especially given its vested interest in managing the expectations of donor states. Asked whether the WHO is the right organisation to address the risk of future pandemics, she responded that solutions to the problems she outlined earlier would likely stem from interdisciplinary approaches, models and training, which would prevent inter-organisational conflicts and increase the ability to work collaboratively. She also touched on the work of Biosafety Now, a US-based non-governmental organisation aiming, through regulation, to increase the accountability of those who wish to conduct these controversial forms of research.

Responding to a query about the risk of the long-eradicated smallpox virus re-emerging as an epidemic in the future, given that stocks of the virus are held in the US and elsewhere, Professor MacIntyre acknowledged that melting Siberian permafrost has been said to increase the risk of a natural epidemic recurring, but considered it likely that future smallpox epidemics will be driven by man-made variants.

Asked about the status of future pandemic planning and vaccine development efforts, Professor MacIntyre discussed the work done by the Coalition for Epidemic Preparedness Innovations (CEPI) to create vaccine equity around the world and Australia's efforts to expand its Intensive Care Unit (ICU) capacity by 120%. She also discussed EPIWATCH, an AI-based system which taps into open-source data to detect the early warning signals of an epidemic well before any health department in the world does, and which aims to stop the spread before it crosses international borders.

Another audience member commented that they could not understand the scale of resistance to the notion that the COVID-19 virus had been made in a lab, when this was a well-known practice of governments in the past, citing the example of the development of anthrax weapons by the UK government decades earlier. Professor MacIntyre responded that the difference can be explained by the post-Cold War era we live in now. The development of anthrax weapons during the Cold War appeared to face less resistance when it was part of an overt arms race. Although it is arguable that the same practices persist today, they are far more covert.

Asked why the Ebola virus, with seemingly more insidious effects, had been easier to quell than COVID-19 and appeared to have mysteriously disappeared, Professor MacIntyre said that the answer came down to the reproduction number of each virus and the method of transmission. The COVID-19 virus had a reproduction number of 8-10, in contrast to a number of just 2 for Ebola, and COVID-19 was also far more easily spread as a respiratory virus, in comparison to Ebola, which is spread through blood and bodily fluids.
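
A back-of-the-envelope calculation shows why that difference in reproduction number matters so much. The sketch below simply raises each reproduction number to the power of the generation count; the figures are illustrative only and ignore immunity, behaviour change, intervention and generation timing.

```python
# Rough illustration of how the reproduction number (R0) drives spread:
# each generation of transmission multiplies case numbers by roughly R0,
# so a virus with R0 ~ 8 outruns one with R0 ~ 2 almost immediately.
# Illustrative only; real epidemics are limited by immunity, behaviour
# change and public health intervention.

def cases_in_generation(r0: float, generation: int, index_cases: int = 1) -> int:
    """New cases produced in the nth generation of transmission."""
    return int(index_cases * r0 ** generation)

if __name__ == "__main__":
    for label, r0 in [("Ebola-like (R0 ~ 2)", 2.0), ("COVID-19-like (R0 ~ 8)", 8.0)]:
        print(label, [cases_in_generation(r0, g) for g in range(1, 6)])
    # Ebola-like (R0 ~ 2):    [2, 4, 8, 16, 32]
    # COVID-19-like (R0 ~ 8): [8, 64, 512, 4096, 32768]
```

After just five generations the higher reproduction number yields roughly a thousand times more cases, which, combined with respiratory transmission, is why COVID-19 was so much harder to contain than Ebola.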

Asked to explain how gain-of-function research works in practice, Professor MacIntyre used the example of adapting the avian flu virus to infect the human respiratory tract through modification in laboratory animals, creating the potential to transmit human pandemics. But the benefits of gain-of-function research were debatable. Although there have been hopes this research would be useful in developing vaccines and in pandemic planning, there have been no proven beneficial uses. The US National Science Advisory Board for Biosecurity (NSABB) had previously placed a moratorium on gain-of-function research. But Professor MacIntyre argues that intense lobbying by scientists who have invested much of their careers in this form of research has led to the lapse of the moratorium and, subsequently, the publication of open-source methods of engineering viruses which anyone could replicate.

In response to a question regarding the hateful internet and media rhetoric she has experienced, Professor MacIntyre stated she has been exposed to much vitriol since coming to prominence during the pandemic, and especially after her promotion of the COVID-19 lab leak theory.

In response to the final question of the evening, on the degree to which artificial intelligence (AI) is being used to fuel research, Professor MacIntyre stated emphatically that AI was essential in many ways. It has allowed much of the experimental research which 20 years ago needed to be performed repeatedly in animals to now be performed much more quickly through computational means.

Summary by AIIA NSW intern Renuga Inpakumar with input from fellow interns Rachel A and Matthew Vasic

Renuga Inpakumar (left) with Professor Raina MacIntyre (right)

Read more:

Managing Past, Present and Future Epidemics - Australian Institute ... - Australian Institute of International Affairs