Archive for the ‘Artificial Super Intelligence’ Category

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Hollywood Reporter

AI startup Respeecher re-created James Earl Jones' Darth Vader voice for the Disney+ series Obi-Wan Kenobi.

On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché du Film, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence for creating scripts, characters, videos, voices and graphics.

The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of its own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.

"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."

The panel also included Hovhannes Avoyan, the CEO of Picsart, an image-editing developer powered by AI, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup that makes technology that allows one person to speak using the voice of another person. The audience of about 150 people was full of AI early adopters: through a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.

The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.

Avoyan said he created his company for his daughter, an artist, and his intention is, he said, "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."

The optimistic conversation unfolding beside the French Riviera felt light years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.

"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.

One of the panel's buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he said. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.

A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying 'the algorithm is burying my video,'" Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."

What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?

Read this article:

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? - Hollywood Reporter

Schools ‘bewildered’ by very fast rate of change in AI education … – The Irish News

Schools are bewildered by the rate of change in artificial intelligence (AI) and believe it is moving far too quickly for government alone to provide the advice that is needed, leading head teachers have warned.

Their comments come after Prime Minister Rishi Sunak said guardrails are to be put in place to maximise the benefits of AI while minimising the risks to society.

Mr Sunak said the UK's regulation must evolve alongside the rapid advance of AI, with threats including those to jobs and the spread of disinformation.

A letter to The Times, signed by more than 60 education figures, says: "Schools are bewildered by the very fast rate of change in AI, and seek secure guidance and counsel on the best way forward. But whose advice can we trust?

"We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools.

"Neither in the past has government shown itself capable or willing to do so."

The heads said they are pleased that the Government is now "grasping the nettle" but added: "The truth is that AI is moving far too quickly for government or Parliament alone to provide the real-time advice that schools need.

"We are announcing today our own cross-sector body composed of leading teachers in our schools, guided by a panel of independent digital and AI experts, to advise schools on which AI developments are likely to be beneficial, and which are damaging."

According to The Times, the heads, led by Sir Anthony Seldon, the headteacher of Epsom College, said schools must collaborate to ensure that AI works in their best interests and those of pupils, not of large education technology companies.

Mr Sunak has advocated the technology's benefits for national security and the economy, but growing concerns have been raised alongside the prominence of the ChatGPT bot, which has passed exams and can compose prose.

Former government chief scientific adviser Sir Patrick Vallance has said AI could have an impact on jobs comparable with the industrial revolution.

Earlier this month Geoffrey Hinton, the man widely seen as the "godfather" of AI, warned that some of the dangers of AI chatbots are "quite scary", as he quit his job at Google.

Last week one of the pioneers of AI warned the Government is not safeguarding against the dangers posed by future super-intelligent machines.

Professor Stuart Russell told The Times ministers were favouring a light touch on the burgeoning AI industry, despite warnings from civil servants it could create an existential threat.

He told The Times a system similar to ChatGPT could form part of a super-intelligent machine which could not be controlled.

"How do you maintain power over entities more powerful than you forever?" he asked. "If you don't have an answer, then stop doing the research. It's as simple as that.

"The stakes couldn't be higher: if we don't control our own civilisation, we have no say in whether we continue to exist."

Go here to read the rest:

Schools 'bewildered' by very fast rate of change in AI education ... - The Irish News

Sam Altman is plowing ahead with nuclear fusion and his eye-scanning crypto venture – and, oh yeah, OpenAI – Fortune

OpenAI CEO Sam Altman helped bring ChatGPT to the world, which sparked the current A.I. race involving Microsoft, Google, and others.

But he's busy with other ventures that could be no less disruptive, and are linked in some ways. This week, Microsoft announced a purchasing agreement with Helion Energy, a nuclear fusion startup primarily backed by Altman. And Worldcoin, a crypto startup involving eye scans cofounded by Altman in 2019, is close to securing hefty new investments, according to Financial Times reporting on Sunday.

Before becoming OpenAI's leader, Altman served as president of the startup accelerator Y Combinator, so it's not entirely surprising that he's involved in more than one venture. But the sheer ambition of the projects, both on their own and collectively, merits attention.

Microsoft announced a deal on Wednesday in which Helion will supply it with electricity from nuclear fusion by 2028. That's bold considering nobody is yet producing electricity from fusion, and many experts believe it's decades away.

During a Stripe conference interview last week, Altman said the audience should be excited about the startup's developments and drew a connection between Helion and artificial intelligence.

"If you really want to make the biggest, most capable super intelligent system you can, you need high amounts of energy," he explained. "And if you have an A.I. that can help you move faster and do better material science, you can probably get to fusion a little bit faster too."

He acknowledged the challenging economics of nuclear fusion, but added, "I think we will probably figure it out."

He added, "And probably we will get to a world where in addition to the cost of intelligence falling dramatically, the cost of energy falls dramatically, too. And if both of those things happen at the same time – I would argue that they are currently the two most important inputs in the whole economy – we get to a super different place."

Worldcoin – still in beta but aiming to launch in the first half of this year – is equally ambitious, as Fortune reported in March. If A.I. takes away our jobs and governments decide that a universal basic income is needed, Worldcoin wants to be the distribution mechanism for those payments. If all goes to plan, it'll be bigger than Bitcoin and approved by regulators across the globe.

That might be a long way off if it ever occurs, but in the meantime the startup might have found a quicker path to monetization with World ID, a kind of badge you receive after being verified by Worldcoin – and a handy way to prove that you're a human rather than an A.I. bot when logging into online platforms. The idea is that your World ID would join or replace your usernames and passwords.

The only way to really prove a human is a human, the Worldcoin team decided, was via an iris scan. That led to a small orb-shaped device you look into, which converts a biometric scan into a code that serves as proof of personhood.

When you're scanned, verified, and onboarded to Worldcoin, you're given 25 proprietary crypto tokens, also called Worldcoins. Well over a million people have already participated, though of course the company aims to have tens and then hundreds of millions joining after beta. Naturally such plans have raised a range of privacy concerns, but according to the FT, the firm is now in advanced talks to raise about $100 million.
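To make that login idea concrete, here is a minimal sketch in Python of how a site might accept a proof of personhood in place of a password. It is purely illustrative and built on assumptions: the PersonhoodProof fields, the FakeWorldIDVerifier class, and the log_in function are hypothetical stand-ins for the flow described above, not Worldcoin's actual SDK, API, or cryptography.

```python
# Hypothetical sketch of a "log in with proof of personhood" flow.
# None of these names come from Worldcoin's real tooling; they are
# illustrative stand-ins for the idea described in the article.

from dataclasses import dataclass


@dataclass
class PersonhoodProof:
    """What a user's wallet might present instead of a password (hypothetical)."""
    world_id: str   # opaque identifier issued after the orb's iris scan
    signature: str  # stand-in for a cryptographic proof that the holder controls that ID


class FakeWorldIDVerifier:
    """Stand-in for the external service that checks whether a proof is genuine."""

    def __init__(self, issued_ids: set[str]) -> None:
        self._issued_ids = issued_ids

    def is_valid(self, proof: PersonhoodProof) -> bool:
        # A real verifier would check a signature against a registry of
        # issued credentials; here we only check membership in a toy set.
        return proof.world_id in self._issued_ids and bool(proof.signature)


def log_in(proof: PersonhoodProof, verifier: FakeWorldIDVerifier) -> str:
    """Replace the username/password step with a proof-of-personhood check."""
    if verifier.is_valid(proof):
        return f"session started for verified human {proof.world_id}"
    return "rejected: could not verify proof of personhood"


if __name__ == "__main__":
    verifier = FakeWorldIDVerifier(issued_ids={"wid_12345"})
    print(log_in(PersonhoodProof("wid_12345", "sig_abc"), verifier))  # accepted
    print(log_in(PersonhoodProof("wid_99999", "sig_xyz"), verifier))  # rejected
```

The design choice the article describes is simply swapping a "something you know" credential for a one-time "verified human" attestation issued after the scan.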

Originally posted here:

Sam Altman is plowing ahead with nuclear fusion and his eye-scanning crypto venture – and, oh yeah, OpenAI - Fortune

The Future of War Is AI – The Nation

EDITOR'S NOTE: This article originally appeared at TomDispatch.com. To stay on top of important articles like these, sign up to receive the latest updates from TomDispatch.com.

After almost 79 years on this beleaguered planet, let me say one thing: This can't end well. Really, it can't. And no, I'm not talking about the most obvious issues ranging from the war in Ukraine to the climate disaster. What I have in mind is that latest, greatest human invention: artificial intelligence.

It doesn't seem that complicated to me. As a once-upon-a-time historian, I've long thought about what, in these centuries, unartificial and – all too often – unartful intelligence has accomplished (and yes, I'd prefer to put that in quotation marks). But the minute I try to imagine what that seemingly ultimate creation, AI, already a living abbreviation of itself, might do, it makes me shiver. Brrr.

Let me start with honesty, which isn't an artificial feeling at all. What I know about AI you could put in a trash bag and throw out with the garbage. Yes, I've recently read whatever I could in the media about it and friends of mine have already fiddled with it. TomDispatch regular William Astore, for instance, got ChatGPT to write a perfectly passable critical essay on the military-industrial complex for his Bracing Views newsletter, and that, I must admit, was kind of amazing.

Still, it's not for me. Never me. I hate to say "never" because we humans truly don't know what we'll do in the future. Still, consider it my best guess that I won't have anything actively to do with AI. (Although my admittedly less than artificially intelligent spellcheck system promptly changed "chatbox" to "hatbox" when I was e-mailing Astore to ask him for the URL to that piece of his.)

But let's stop here a minute. Before we even get to AI, let's think a little about LTAI (Less Than Artificial Intelligence, just in case you don't know the acronym) on this planet. Who could deny that it's had some remarkable successes? It created the Mona Lisa, The Starry Night, and Diego and I. Need I say more? It's figured out how to move us around this world in style and even into outer space. It's built vast cities and great monuments, while creating cuisines beyond compare. I could, of course, go on. Who couldn't? In certain ways, the creations of human intelligence should take anyone's breath away. Sometimes, they even seem to give "miracle" a genuine meaning.

And yet, from the dawn of time, that same LTAI went in far grimmer directions, too. It invented weaponry of every kind, from the spear and the bow and arrow to artillery and jet fighter planes. It created the AR-15 semiautomatic rifle, now largely responsible (along with so many disturbed individual LTAIs) for our seemingly never-ending mass killings, a singular phenomenon in this peacetime country of ours.

And we're talking, of course, about the same Less Than Artificial Intelligence that created the Holocaust, Joseph Stalin's Russian gulag, segregation and lynch mobs in the United States, and so many other monstrosities of (in)human history. Above all, we're talking about the LTAI that turned much of our history into a tale of war and slaughter beyond compare, something that, no matter how advanced we became, has never – as the brutal, deeply destructive conflict in Ukraine suggests – shown the slightest sign of cessation. Although I haven't seen figures on the subject, I suspect that there has hardly been a moment in our history when, somewhere on this planet (and often that somewhere would have to be pluralized), we humans weren't killing each other in significant numbers.

And keep in mind that in none of the above have I even mentioned the horrors of societies regularly divided between and organized around the staggeringly wealthy and the all too poor. But enough, right? You get the idea.

Oops, I left one thing out in judging the creatures that have now created AI. In the last century or two, the intelligence that did all of the above also managed to come up with two different ways of potentially destroying this planet and more or less everything living on it. The first of them it created largely unknowingly. After all, the massive, never-ending burning of fossil fuels that began with the 19th-century industrialization of much of the planet was what led to an increasingly climate-changed Earth. Though we've now known what we were doing for decades (the scientists of one of the giant fossil-fuel companies first grasped what was happening in the 1970s), that hasn't stopped us. Not by a long shot. Not yet anyway.

Over the decades to come, if not taken in hand, the climate emergency could devastate this planet that houses humanity and so many other creatures. It's a potentially world-ending phenomenon (at least for a habitable planet as we've known it). And yet, at this very moment, the two greatest greenhouse gas emitters, the United States and China (that country now being in the lead, but the US remaining historically number one), have proven incapable of developing a cooperative relationship to save us from an all-too-literal hell on Earth. Instead, they've continued to arm themselves to the teeth and face off in a threatening fashion while their leaders are now not exchanging a word, no less consulting on the overheating of the planet.

The second path to hell created by humanity was, of course, nuclear weaponry, used only twice to devastating effect in August 1945 on the Japanese cities of Hiroshima and Nagasaki. Still, even relatively small numbers of weapons from the vast nuclear arsenals now housed on Planet Earth would be capable of creating a nuclear winter that could potentially wipe out much of humanity.

And mind you, knowing that, LTAI beings continue to create ever larger stockpiles of just such weaponry as ever more countries – the latest being North Korea – come to possess them. Under the circumstances and given the threat that the Ukraine War could go nuclear, it's hard not to think that it might just be a matter of time. In the decades to come, the government of my own country is, not atypically, planning to put another $2 trillion into ever more advanced forms of such weaponry and ways of delivering them.

Given such a history, you'd be forgiven for imagining that it might be a glorious thing for artificial intelligence to begin taking over from the intelligence responsible for so many dangers, some of them of the ultimate variety. And I have no doubt that, like its ancestor (us), AI will indeed prove anything but one-sided. It will undoubtedly produce wonders in forms that may as yet be unimaginable.

Still, let's not forget that AI was created by those of us with LTAI. If now left to its own devices (with, of course, a helping hand from the powers that be), it seems reasonable to assume that it will, in some way, essentially repeat the human experience. In fact, consider that a guarantee of sorts. That means it will create beauty and wonder and – yes! – horror beyond compare (and perhaps even more efficiently so). Lest you doubt that, just consider which part of humanity already seems the most intent on pushing artificial intelligence to its limits.

Yes, across the planet, departments of defense are pouring money into AI research and development, especially the creation of unmanned autonomous vehicles (think: killer robots) and weapons systems of various kinds, as Michael Klare pointed out recently at TomDispatch when it comes to the Pentagon. In fact, it shouldn't shock you to know that five years ago (yes, five whole years!), the Pentagon was significantly ahead of the game in creating a Joint Artificial Intelligence Center to, as The New York Times put it, "explore the use of artificial intelligence in combat." There, it might, in the end – and "end" is certainly an operative word here – speed up battlefield action in such a way that we could truly be entering unknown territory. We could, in fact, be entering a realm in which human intelligence in wartime decision-making becomes, at best, a sideline activity.

Only recently, AI creators, tech leaders, and key potential users, more than 1,000 of them, including Apple co-founder Steve Wozniak and billionaire Elon Musk, had grown anxious enough about what such a thing – such a brain, you might say – let loose on this planet might do that they called for a six-month moratorium on its development. They feared "profound risks to society and humanity" from AI and wondered whether we should even be developing "nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us."

The Pentagon, however, instantly responded to that call this way, as David Sanger reported in The New York Times: "Pentagon officials, speaking at technology forums, said they thought the idea of a six-month pause in developing the next generations of ChatGPT and similar software was a bad idea: The Chinese won't wait, and neither will the Russians." So, full speed ahead and skip any international attempts to slow down or control the development of the most devastating aspects of AI!

And I haven't even bothered to mention how, in a world already seemingly filled to the brim with mis- and disinformation and wild conspiracy theories, AI is likely to be used to create yet more of the same of every imaginable sort, a staggering variety of "hallucinations," not to speak of churning out everything from remarkable new versions of art to student test papers. I mean, do I really need to mention anything more than those recent all-too-realistic-looking photos of Donald Trump being aggressively arrested by the NYPD and Pope Francis sporting a luxurious Balenciaga puffy coat circulating widely online?

I doubt it. After all, image-based AI technology, including striking fake art, is on the rise in a significant fashion and, soon enough, you may not be able to detect whether the images you see are real or fake. The only way you'll know, as Meghan Bartels reports in Scientific American, could be thanks to AI systems trained to detect – yes! – artificial images. In the process, of course, all of us will, in some fashion, be left out of the picture.

And of course, that's almost the good news when, with our present all-too-Trumpian world in mind, you begin to think about how Artificial Intelligence might make political and social fools of us all. Given that I'm anything but one of the better-informed people when it comes to AI (though on Less Than Artificial Intelligence I would claim to know a fair amount more), I'm relieved not to be alone in my fears.

In fact, among those who have spoken out fearfully on the subject is the man known as the "godfather of AI," Geoffrey Hinton, a pioneer in the field of artificial intelligence. He only recently quit his job at Google to express his fears about where we might indeed be heading, artificially speaking. As he told The New York Times recently, "The idea that this stuff could actually get smarter than people – a few people believed that, but most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Now, he fears not just the coming of killer robots beyond human control but, as he told Geoff Bennett of the PBS NewsHour, "the risk of super intelligent AI taking over control from people. I think it's an area in which we can actually have international collaboration, because the machines taking over is a threat for everybody. It's a threat for the Chinese and for the Americans and for the Europeans, just like a global nuclear war was."

And that, indeed, is a hopeful thought, just not one that fits our present world of hot war in Europe, cold war in the Pacific, and division globally.

I, of course, have no way of knowing whether Less Than Artificial Intelligence of the sort I've lived with all my life will indeed be sunk by the AI carrier fleet or whether, for that matter, humanity will leave AI in the dust by, in some fashion, devastating this planet all on our own. But I must admit that AI, whatever its positives, looks like anything but what the world needs right now to save us from a hell on earth. I hope for the best and fear the worst as I prepare to make my way into a future that I have no doubt is beyond my imagining.

Go here to see the original:

The Future of War Is AI - The Nation

NFL fans outraged after ChatGPT names best football teams since 2000 including a surprise at No 1… – The US Sun

ARTIFICIAL intelligence has infuriated fans across the nation with its ranking of the ten best teams since 2000.

The controversial list has unsurprisingly angered fans on social media, being labeled "the dumbest take on football I've ever seen."

Leading the way in the list created by ChatGPT for NFL on FOX are the 2007 New England Patriots.

A powerhouse featuring the likes of Tom Brady, Randy Moss, Asante Samuel, Wes Welker, and Vince Wilfork among others, Bill Belichick's team went undefeated until the bitter end.

Eli Manning's New York Giants ultimately got the better of them in Super Bowl XLII, preventing what would have been only the second perfect season in league history.

The Patriots are followed by the 2013 Seattle Seahawks, who were led by then-second-year starting quarterback Russell Wilson.

Pete Carroll's 13-3 Seahawks team went on to hoist the Lombardi Trophy after the joint-third biggest Super Bowl blowout to date (43-8 over Peyton Manning's Denver Broncos).

Sean Payton's 2009 New Orleans Saints team rounded out the top three.

Led by Drew Brees in his prime, the Saints also got the better of a Peyton Manning-led team in the Super Bowl, beating the Indianapolis Colts 31-17.

New England returned in fourth thanks to their 14-2 2016 team, with which Brady won his fifth ring after one of the most famous comebacks in league history, against the Atlanta Falcons in Super Bowl LI.

Ray Lewis and Rod Woodson's legendary 2000 Baltimore Ravens complete the top five, having guided the franchise to a Super Bowl win in just its fifth season since moving from Cleveland.

The second half of the ranking starts with the second non-Super Bowl-winning team, the 2004 Philadelphia Eagles.

They are followed by another team to fall short at the final hurdle despite having a prime Cam Newton leading the way, the 2015 Carolina Panthers.

Loaded with talent, the 2002 Tampa Bay Buccaneers made the list at eight thanks to their 12-4 record and a Super Bowl XXXVII ring.

The 11-5 Pittsburgh Steelers of 2005, featuring the likes of Ben Roethlisberger and Hines Ward, follow, with the Patrick Mahomes-led 2019 Kansas City Chiefs closing out the top ten.

In response to the list, one unimpressed fan tweeted: "Woof. Terrible list. The '05 Steelers won in the most unimpressive season of football in recent memory.

"Them and the Seahawks played a dumpster fire Super Bowl. They won even though Roethlisberger's SB stats were:

"9-21, 123 yards, 2 interceptions."

Another said: "Nope. Where are the Peyton Manning led Broncos or Colts? Green Bay has been a perennial playoff/NFC Championship contender for near 20 years.

"Also no Ny Giants that was led by Eli Manning to the Super Bowl 3 different times and winning twice against Brady's Patriots."

As one added: "Can't accept the top team lost the Super Bowl."

While another simply said: "Absolutely not"

View post:

NFL fans outraged after ChatGPT names best football teams since 2000 including a surprise at No 1... - The US Sun