Archive for the ‘Artificial Super Intelligence’ Category

AI poses an existential threat, according to Munk Debates crowd … – The Hub

More than two-thirds of the Munk Debates crowd came into Roy Thomson Hall last week believing that artificial intelligence poses an existential threat to humanity, and the debate-goers left mostly unshaken: only three percent of the audience changed their minds after the final arguments had been made.

Over the last year, discourse about AI has greatly intensified with the release of ChatGPT and other publicly available, AI-driven technologies. In the wake of these developments, high-profile AI experts debated the resolution: "Be it resolved, AI research and development poses an existential threat."

Arguing on the pro side of the resolution was Yoshua Bengio, a professor at the Université de Montréal, founder and scientific director of the Mila Quebec AI Institute, and winner of the 2018 A.M. Turing Award in computing. Alongside him was Max Tegmark, a professor performing AI and physics research at MIT.

On the con side was Melanie Mitchell, a professor at the Santa Fe Institute who has authored and edited several books and papers on AI and related science and technologies. Also on the con side was Yann LeCun, VP & chief AI scientist at Meta and Silver Professor at NYU.

During the debate, Tegmark asked the con side if they had any evidence that AI will not pose an existential threat to humanity.

"What do you actually think the probability is that we are going to get superhuman intelligence, say, in 20 years, say, in 100 years?" asked Tegmark. "What is your plan for how to make it safe? What is your plan for how we're going to make sure that the goals of an AI are always aligned with humans?"

LeCun said that such scenarios cannot be fully disproven, but compared them to the claim that a teapot is flying around Saturn, which likewise cannot be disproven. He added that when jet planes were being developed in the 1930s, supersonic trans-Atlantic jets would have been regarded as impossible, and were only built decades later.

"I think a lot of the fears around AI are predicated on the idea that somehow there is a hard takeoff, which is that the minute you turn on an AI system that is capable of human intelligence or superintelligence, it's going to take over the world within minutes," said LeCun. "This is preposterous."

Bengio said companies that develop AI are likely to be more interested in profit-making and beating their competition, rather than aligning their products with the needs of society.

"What Max and I and others are saying is not, necessarily, there's going to be a catastrophe, but that we need to understand what can go wrong so that we can prepare for it," said Bengio.

Mitchell replied that the risk of anything is non-zero and that there is always the possibility that aliens may arrive and destroy Earth at any given moment, but that this is highly unlikely. She pointed out that AI derives all of its intelligence from human data, that it lacks the capacity to understand the world, and that negative predictions about AI are not a new phenomenon.

"The whole history of AI has been a history of failed predictions. Back in the 1950s and '60s, people were predicting the same thing about super-intelligent AI and talking about existential risk, but it was wrong then. I'd say it's wrong now," said Mitchell.

Towards the end of the debate, Tegmark referenced the warnings made by Geoffrey Hinton, sometimes called the "godfather of AI," who has stated that AI has the potential to manipulate and replace humans with its faster, automated thinking.

"I feel a little bit like we're on this big ship sailing south from here down the Niagara River, and Yoshua is like, 'I heard there might be a waterfall down there. Maybe this isn't safe,' and Melanie is saying, 'Well, I'm not convinced that there even is a waterfall, even though Geoff Hinton says there is,'" said Tegmark.

Mitchell responded by reiterating that similar fears had been expressed decades ago without coming to fruition.

"That happened in 1960, not by Geoffrey Hinton, but by people like Claude Shannon and Herbert Simon, and they were just dead wrong," said Mitchell.

At the start of the debate, 67 percent of the audience placed themselves on the pro side, while 33 percent were on the con side. When it was over, the con side won by convincing 3 percent of the audience to change their initial position. So while the con side won according to the debate rules, a large majority of the audience, 64 percent, remained on the pro side.

From the outset, Tegmark argued that superhuman AI would surpass revolutionary technologies like nuclear weapons, possessing greater intelligence without human emotions or empathy. He also highlighted concerns about malicious use and the replacement of human decision-making roles by AI.

LeCun countered that current AI systems, like self-driving cars, have limited capabilities and lack reasoning and understanding of the world. He noted that existing fears about AI, such as the spread of misinformation, already play out on social media, where they can be addressed through counter-measures built with AI tools. LeCun proposed objective-driven AI with constraints and subservient emotions to ensure safety.

Bengio expressed concern about machines gaining self-preservation goals, leading to the desire to control humans for survival.

Mitchell, on the other hand, argued that fears about AI are rooted in human psychology and not supported by science or evidence. She believes that AI does not pose an existential threat in the near future, and that emphasizing such concerns diverts attention from real risks and hinders the potential benefits of technological progress.

Here is the original post:

AI poses an existential threat, according to Munk Debates crowd ... - The Hub

The Cautionary Tale of J. Robert Oppenheimer – Alta Magazine

When Christopher Nolan's blockbuster biopic of the theoretical physicist J. Robert Oppenheimer, the so-called father of the atomic bomb, drops in theaters on July 21, moviegoers might be forgiven for wondering: Why now? What relevance could a three-hour drama chronicling the travails and inner torment of the scientist who led the Manhattan Project (the race to develop the first nuclear weapon before the Germans during World War II) possibly have for today's 5G generation, which greets each new technological advance with wide-eyed excitement and optimism?

But the film, which focuses on the moral dilemma facing Oppenheimer and his young collaborators as they prepare to unleash the deadliest device ever created by mankind, aware that the world will never be the same in the wake of their invention, eerily mirrors the present moment, as many of us anxiously watch the artificial intelligence doomsday clock count down. Surely as terrifying as anything in Nolan's war epic is the New York Times' recent account of OpenAI CEO Sam Altman, sipping sweet wine as he calmly contemplates a radically altered future; boasting that he sees the U.S. effort to build the bomb as a project on the scale of his GPT-4, the awesomely powerful AI system that approaches human-level performance; and adding that it was "the level of ambition we aspire to."

This article appears in Issue 24 of Alta Journal.

If Altman, whose company created the chatbot ChatGPT, is troubled by any ethical qualms about his unprecedented artificial intelligence models and their potential impact on our lives and society, he is not losing any sleep over it. He sees too much promise in machine learning to be overly worried about the pitfalls. Large language models, the type of neural network on which ChatGPT is built, enable everything from digital assistants like Siri and Alexa to self-driving cars and computer-generated tweets and term papers. The 37-year-old AI guru thinks it's all good, transformative change. He is busy creating tools that empower humanity and cannot worry about all their applications and outcomes and whether there might be what he calls a "downside."

Just this March, in an interview for the podcast On with Kara Swisher, Altman seemed to channel his hero Oppenheimer, asserting that OpenAI had to move forward to exploit this revolutionary technology and that it requires, "in our belief, this continual deployment in the world." As with the discovery of nuclear fission, AI has too much momentum and cannot be stopped. The net gain outweighs the dangers. In other words, the market wants what the market wants. Microsoft is gung ho on the AI boom and has invested $13 billion in Altman's technology of the future, which means tools like robot soldiers and facial recognition-based surveillance systems might be rolled out at record speed.

We have seen such arrogance before, when Oppenheimer quoted from the Hindu scripture the Bhagavad Gita in the shadow of the monstrous mushroom cloud created by the Trinity test explosion in the Jornada del Muerto desert, in New Mexico, on July 16, 1945: "Now I am become Death, the destroyer of worlds." No man in history had ever been charged with developing such a powerful scientific weapon, an apparent affront to morality and sanity that posed a grave threat to civilization, yet at the same time proceeded with all due speed on the basis that it was virtually unavoidable. The official line was that it was a military necessity: the United States could not allow the enemy to achieve such a decisive weapon first. The bottom line is that the weapon was devised to be used, it cost upwards of $2 billion, and President Harry Truman and his top advisers had an assortment of strategic reasons (hello, Soviet Union) for deploying it.

Back in the spring of 1945, a prominent group of scientists on the Manhattan Project had voiced their concerns about the postwar implications of atomic energy and the grave social and political problems that might result. Among the most outspoken were the Danish Nobel laureate Niels Bohr, the Hungarian émigré physicist Leo Szilard, and the German émigré chemist and Nobel winner James Franck. Their mounting fears culminated in the Franck Report, a petition by a group from the project's Chicago laboratory arguing that releasing this indiscriminate destruction upon mankind would be a mistake, sacrificing public support around the world and precipitating a catastrophic arms race.

The Manhattan Project scientists also urged policymakers to carefully consider the questions of what the United States should do if Germany was defeated before the bomb was ready, which seemed likely; whether it should be used against Japan; and, if so, under what circumstances. The way in which nuclear weapons are first revealed to the world, they noted, "appears to be of great, perhaps fateful importance." They proposed performing a technical demonstration and then giving Japan an ultimatum. The writers of the Franck Report wanted to explore what kind of international control of atomic energy and weapons would be feasible and desirable, and how a strict inspection policy could be implemented. The shock waves of the Trinity explosion would be felt all over the world, especially in the Soviet Union. The scientists foresaw that the nuclear bomb could not remain a secret weapon at the exclusive disposal of the United States, and that it inexorably followed that rogue nations and dictators would use the bomb to achieve their own territorial ambitions, even at the risk of triggering Armageddon.

Fast-forward to the spring of 2023, when more than 1,000 tech experts and leaders, such as Tesla chief Elon Musk, Apple cofounder Steve Wozniak, and entrepreneur and 2020 presidential candidate Andrew Yang, sounded the alarm on the unbridled development of AI technology in a signed letter warning that AI systems present "profound risks to society and humanity." AI developers, they continued, are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control."

The open letter called for a temporary halt to all AI research at labs around the globe until the risks can be better assessed and policymakers can create the appropriate guardrails. There needs to be an immediate pause "for at least 6 months," it stated, on the training of AI systems more powerful than GPT-4, which has led to the rapid development and release of imperfect tools that make mistakes, fabricate information unexpectedly (a phenomenon AI researchers have aptly dubbed "hallucination"), and can be used to spread disinformation and further the grotesque distortion of the internet. This pause, the signatories wrote, should be used to "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," and they urged policymakers to roll out robust AI governance systems. How the letter's authors hope to enforce compliance and prevent these tools from falling into the hands of authoritarian governments remains unclear.

Geoffrey Hinton, a pioneering computer scientist who has been called the "godfather of AI," did not sign the letter but announced in May that he was leaving Google in order to freely express his concerns about the global AI race. He is worried that the reckless pace of advances in machine superintelligence could pose a serious threat to humanity. Until recently, Hinton thought it would be two to five decades before we had general-purpose AI, with its wide range of possible uses, both intended and unintended, but the trailblazing work of Google and OpenAI means the ability of AI systems to learn and solve any task with something approaching human cognition looms directly ahead, and in some ways they are already eclipsing the capabilities of the human brain. "Look at how it was five years ago and how it is now," Hinton said of AI technology. "Take the difference and propagate it forwards. That's scary."

Until this year, when people asked Hinton how he could work on technology that was potentially dangerous, he would always paraphrase Oppenheimer to the effect that when you see something that is "technically sweet," you go ahead and do it. He is not sanguine enough about the future iterations of AI to say that anymore.

Now, as during the Manhattan Project, there are those who argue against any moratorium on development for fear of the United States losing its competitive edge. Ex-Google CEO Eric Schmidt, who has expressed concerns about the possible misuse of AI, does not support a hiatus for the simple reason that it would benefit China. Schmidt is in favor of voluntary regulation, which he has described somewhat lackadaisically as letting the industry try "to get its act together." Yet he concedes that the dangers inherent in AI itself may pose a larger threat than any global power struggle. "I think the concerns could be understated. Things could be worse than people are saying," he told the Australian Financial Review in April. "You have a scenario here where you have these large language models that, as they get bigger, have emergent behavior we don't understand."

If Nolan is true to form, audiences may find the personal dimension of Oppenheimer even more chilling than the IMAX-enhanced depiction of hair-raising explosions. The director has said that he is not interested in the mechanics of the bomb; rather, what fascinates him is the paradoxical and tragic nature of the man himself. Specifically, the movie will examine the toll that inventing a weapon of mass destruction takes on an otherwise peaceable, dreamy, poetry-quoting blackboard theoretician whose only previous brush with conflict was the occasional demonstration on UC Berkeley's leafy campus.

One of the things that would haunt Oppenheimer was his decision, as head of the scientific panel chosen to advise on the use of the bomb, to argue that there was no practical alternative to military use of the weapon. He wrote to Secretary of War Henry Stimson in June 1945 that he did not feel it was the panel's place to tell the government what to do with the invention: "It is clear that we, as scientific men, have no proprietary rights [and] no claim to special competence in solving the political, social, and military problems which are presented by the advent of atomic power."

Even at the time, Oppenheimer was already in the minority: most of the project scientists argued vehemently that they knew more about the bomb, and had given more thought to its potential dangers, than anyone else. But when Leo Szilard tried to circulate a petition rallying the scientists to present their views to the government, Oppenheimer forbade him to distribute it at Los Alamos.


After the two atomic attacks on Japan (first Hiroshima on August 6, then Nagasaki just three days later, on August 9), the horror of the mass killings, and of the unanticipated and deadly effects of radiation poisoning, forcefully hit Oppenheimer. In the days and weeks that followed, the brilliant scientific leader who had been drawn to the bomb project by ego and ambition, and who had skillfully helmed the secret military laboratory at Los Alamos in service of his country, was undone by the weight of responsibility for what he had wrought on the world. Within a month of the bombings, Oppenheimer regretted his stand on the role of scientists. He reversed his position and began frantically trying to use his influence and celebrity as the father of the A-bomb to convince the Truman administration of the urgent need for international control of nuclear power and weapons.

The film will almost certainly include the famous, or infamous, scene in which Oppenheimer, by then a nervous wreck, burst into the Oval Office and dramatically announced, "Mr. President, I feel I have blood on my hands." Truman was furious. "I told him the blood was on my hands, to let me worry about that," the president said later. Afterward, Truman, who was struggling with his own misgivings about dropping the bombs and what it would mean for his legacy, would denounce Oppenheimer as that "cry-baby scientist."

In the grip of his postwar zealotry, Oppenheimer became an outspoken opponent of nuclear proliferation. He was convinced no good could come of the race for the hydrogen bomb. Just months after the Soviet Union's successful test of an atomic bomb in 1949, he joined other eminent scientists in lobbying against the development of the H-bomb. In an attempt to alert the world, he helped draft a report that went so far as to describe Edward Teller's "Super" bomb as a weapon of genocide, essentially a threat to the future of the human race, and urged the nation not to proceed with a crash effort to develop bigger, ever more destructive thermonuclear warheads. In an effort to silence him, Teller and his faction of bigger-is-better physicists, together with officials in the U.S. Air Force who were eyeing huge defense contracts, cast aspersions on Oppenheimer's character and patriotism and dug up old allegations about his ties to communism. In 1954, the Atomic Energy Commission, after a kangaroo-court hearing, found him to be a loyal citizen but stripped him of his security clearance.

Last December, almost 70 years later, the U.S. Department of Energy restored Oppenheimer's clearance, admitting that the trial had been flawed and that the verdict had less to do with genuine national security concerns than with his failure to support the country's hydrogen bomb program. The reprieve came too late for the physicist, whose reputation had been destroyed and whose public life as a scientist-statesman was over. He died in 1967, relatively young at 62, still an outcast.

Altman and today's other lofty tech leaders would do well to note the terrible swiftness of Oppenheimer's fall from grace, from hero to villain in less than a decade, and how quick the government was to dispense with Oppenheimer's advice once it had taken possession of his invention. The internet remains unregulated in this country, but the European Union is considering labeling ChatGPT "high risk." Italy has already banned OpenAI's service. Perhaps revealing a bit of nervousness that he has gotten ahead of himself, Altman responded to the open letter about temporarily halting the development of AI by taking to Twitter to gush about the demand that his company release "a great alignment dataset," calling it the "one thing coming up in the debate about the pause letter I really agree with."

Nolan's Oppenheimer epic will inevitably be a cautionary tale. The story of the nuclear weapons project illustrates, in the starkest terms, what happens when new science is developed too quickly, without any moral calculus, and how it can lead to devastating consequences that could not have been imagined at the outset.

See more here:

The Cautionary Tale of J. Robert Oppenheimer - Alta Magazine

Virgin Voyages and JLo Bust on A.I. To Sell Vacations – We Got This Covered

Photo via TikTok/JLo

Artificial intelligence is nothing to play with, even though AI apps are being handed out like toys for the world to enjoy. So it looks like Jennifer Lopez wanted to have some fun with the idea in her latest commercial for Virgin Voyages, and it's hilarious.

It's no secret that JLo can sell anything, from albums to movies and anything else she wants. But what do AI and Virgin Voyages have to do with each other? The hope is that Virgin Voyages isn't out there with AI captains steering its ships. The world just had a tragedy with the Titan, and it was manned. We don't need another episode like that.

No, this is another thing entirely. This is a commercial, and it has all the funny that AI can provide. Putting on one of those headsets that cover a person's eyes sends them to an entirely different environment, and it's fun to be transported to the jungles of Africa or the beaches of Morocco. Just remember that another person can come along behind you and put that same headset on, and the virtual person takes on an entirely new personality.

"Birthday. Anniversary. Because you just want to live in the NOW… Let me personally invite your friends to celebrate at sea. Create a customized message using the @Virgin Voyages next Jen(eration) AI tool (link in bio)."

"It's not just a yacht! It's a super yacht!"

How many of you want that commercial to go on forever, with JLo doing all those personalities? I know I could watch it for days.

Everyone's a Virgin now. Just to make it clear, WGTC doesn't sell tickets to the show. We're not affiliated or anything. We just like the commercial.

Contributing Writer at WGTC, Michael Allen is the author of 'The Deeper Dark' and 'A River in the Ocean,' both available on Amazon. At this time, 'The Deeper Dark' is also available on Apple Books. Currently under contract to write a full-length feature spy drama for producer/director Anton Jokikunnas.

Read the original here:

Virgin Voyages and JLo Bust on A.I. To Sell Vacations - We Got This Covered

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Hollywood Reporter

AI startup Respeecher re-created James Earl Jones' Darth Vader voice for the Disney+ series Obi-Wan Kenobi.

On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence to create scripts, characters, videos, voices and graphics.

The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of its own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.

"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."

The panel also included Hovhannes Avoyan, the CEO of Picsart, an AI-powered image-editing developer, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup that makes technology allowing one person to speak using the voice of another. The audience of about 150 people was full of AI early adopters: through a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.

The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.

Avoyan said he created his company for his daughter, an artist, and that his intention is "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."

The optimistic conversation unfolding beside the French Riviera felt light-years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.

"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next, Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to the adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.

One of the panel's buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he said. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.

A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying 'the algorithm is burying my video,'" Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."

What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?

Read this article:

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? - Hollywood Reporter

Schools ‘bewildered’ by very fast rate of change in AI education … – The Irish News

Schools are "bewildered" by the rate of change in artificial intelligence (AI) and believe it is moving far too quickly for government alone to provide the advice that is needed, leading head teachers have warned.

Their comments come after Prime Minister Rishi Sunak said "guardrails" are to be put in place to maximise the benefits of AI while minimising the risks to society.

Mr Sunak said the UK's regulation must evolve alongside the rapid advance of AI, with threats including those to jobs and from disinformation.

A letter to The Times, signed by more than 60 education figures, says: "Schools are bewildered by the very fast rate of change in AI, and seek secure guidance and counsel on the best way forward. But whose advice can we trust?

"We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools.

"Neither in the past has government shown itself capable or willing to do so."

The heads said they are pleased that the Government is now "grasping the nettle" but added: "The truth is that AI is moving far too quickly for government or Parliament alone to provide the real-time advice that schools need.

"We are announcing today our own cross-sector body composed of leading teachers in our schools, guided by a panel of independent digital and AI experts, to advise schools on which AI developments are likely to be beneficial, and which are damaging."

According to The Times, the heads, led by Sir Anthony Seldon, the headteacher of Epsom College, said schools must collaborate to ensure that AI works in their best interests and those of pupils, not of large education technology companies.

Mr Sunak has advocated the technology's benefits for national security and the economy, but growing concerns have been raised with the prominence of the ChatGPT bot, which has passed exams and can compose prose.

Former government chief scientific adviser Sir Patrick Vallance has said AI could have an impact on jobs comparable with the industrial revolution.

Earlier this month Geoffrey Hinton, the man widely seen as the "godfather of AI", warned that some of the dangers of AI chatbots are "quite scary" as he quit his job at Google.

Last week one of the pioneers of AI warned the Government is not safeguarding against the dangers posed by future super-intelligent machines.

Professor Stuart Russell told The Times that ministers were favouring a light touch on the burgeoning AI industry, despite warnings from civil servants that it could create an existential threat.

He told The Times a system similar to ChatGPT could form part of a super-intelligent machine which could not be controlled.

"How do you maintain power over entities more powerful than you forever?" he asked. "If you don't have an answer, then stop doing the research. It's as simple as that.

"The stakes couldn't be higher: if we don't control our own civilisation, we have no say in whether we continue to exist."

Go here to read the rest:

Schools 'bewildered' by very fast rate of change in AI education ... - The Irish News