Archive for the ‘AI’ Category

AI presents political peril for 2024 with threat to mislead voters – The Associated Press

WASHINGTON (AP) Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

"We're not prepared for this," warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. "To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it's going to have a major impact."

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence.

Here are a few: automated robocall messages, in a candidate's voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

"What if Elon Musk personally calls you and tells you to vote for a certain candidate?" said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. "A lot of people would listen. But it's not him."

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper's reaction to the CNN town hall this past week with Trump, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his reelection campaign, starts with a strange, slightly warped image of Biden and the text "What if the weakest president we've ever had was re-elected?"

A series of AI-generated images follows: Taiwan under attack; boarded up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

"An AI-generated look into the country's possible future if Joe Biden is re-elected in 2024," reads the ad's description from the RNC.

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

"What happens if an international entity, a cybercriminal or a nation state, impersonates someone? What is the impact? Do we have any recourse?" Stoyanov said. "We're going to see a lot more misinformation from international sources."

AI-generated political disinformation already has gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show Trump's mug shot also fooled some social media users even though the former president didn't take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact.

Some states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other.

"It's important that we keep up with the technology," Clarke told The Associated Press. "We've got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don't have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive."

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them a deception with no place in legitimate, ethical campaigns.

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT every single day and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward.

Nellis' newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails - all typically tedious tasks on campaigns.

"The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket," he said.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.

___

Follow the AP's coverage of misinformation at https://apnews.com/hub/misinformation and coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence


Goldman Sachs says A.I. could push S&P 500 profits up by 30% in the next decade – CNBC


Goldman Sachs is bullish about artificial intelligence and believes the technology could help drive S&P 500 profits in the next 10 years.

"Over the next 10 years, AI could increase productivity by 1.5% per year. And that could increase S&P500 profits by 30% or more over the next decade," Goldman's senior strategist Ben Snider told CNBC Thursday.
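As a back-of-envelope check on the compounding in Snider's claim (a sketch only: straight compounding of the quoted 1.5% annual productivity gain yields roughly 16% over a decade, so Goldman's 30%-plus profit figure must layer margin and revenue effects on top, which are not modeled here):

```python
# Back-of-envelope: compound a 1.5% annual productivity gain over 10 years.
# Assumption: simple geometric compounding; profit effects beyond raw
# productivity (operating leverage, margins) are deliberately left out.
annual_gain = 0.015
years = 10
cumulative = (1 + annual_gain) ** years - 1
print(f"cumulative productivity gain over {years} years: {cumulative:.1%}")
# prints: cumulative productivity gain over 10 years: 16.1%
```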

The emergence of ChatGPT, the chatbot developed by OpenAI, has spurred a firestorm of interest in AI and the possible disruptions to the daily lives of many. It has also injected fresh excitement among investors eager for a fresh driver of profit growth at a time when rising borrowing costs and supply chain problems have tempered optimism.

"A lot of the favorable factors that led to that expansion (of S&P 500 earnings) seem to be reversing," Snider told CNBC on "Asia Squawk Box."

"But the real source of optimism now is productivity enhancements through artificial intelligence."

"It's clear to most investors that the immediate winners are in the technology sector," Snider added. "The real question for investors is who are going to be winners down the road."

He pointed out that "in 1999 or 2000 during the tech bubble, it would be very hard to envision Facebook or Uber changing the way we live our lives."

Snider recommended that investors spread their U.S. equity investments across cyclical and defensive sectors, touting the energy and health-care sectors for their attractive valuations.

In the shorter term, he said he expects the U.S. Federal Reserve has completed most of its monetary policy tightening.

"The question is: In which ways will that continue to affect the economy moving forward?" Snider said. "One sign of concern in the recent earnings season is that S&P 500 companies are starting to pull back a bit on corporate spending."

Elevated interest rates could be one reason, he said.

"If interest rates are high, as a company, you might be a little more averse to issuing debt and therefore you might pull back on your spending. And indeed if we look at S&P 500 buybacks, they were down 20% year-over-year in the first quarter of this year; that is one sign perhaps we haven't seen all the effects of this tightening cycle."


AI creator on the risks, opportunities and how it may make humans ‘boring’ – BBC

13 May 2023


AI boss: Worst case scenario it could control humanity

"Humans are a bit boring - it will be like, goodbye!" That's the personal prediction - that artificial intelligence (AI) will supplant humans in many roles - from one of the most important people you've probably never heard of.

Emad Mostaque is the British founder of the tech firm Stability AI. It popularised Stable Diffusion, a tool that uses AI to make images from simple text instructions by analysing images found online.

AI enables a computer to think or act more like a human. It includes what's called machine learning, when computers can learn what to do without being given exact instructions by a human sitting at a keyboard tapping in commands. Last month, there was a dramatic warning from 1,000 experts to press pause on its development, warning of potential risks, and saying the race to develop AI systems is out of control.
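The "learning without exact instructions" idea can be shown with a toy sketch (illustrative only, not from the article): instead of a human typing in a rule, the program infers one from example data.

```python
# Toy illustration of machine learning: rather than a human hard-coding the
# rule, the program infers it from examples. Here it learns the slope of a
# line through the origin by least squares.
def fit_slope(xs, ys):
    # Slope m minimising sum((y - m*x)^2); setting the derivative to zero
    # gives m = sum(x*y) / sum(x*x).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Example data roughly following y = 2x; the "rule" (slope 2) is never stated.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]
m = fit_slope(xs, ys)
print(f"learned slope: {m:.2f}")
# prints: learned slope: 2.00
```

The point of the sketch is the division of labour: the human supplies examples, and the program works out the pattern on its own.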

In an interview we'll show in full on Sunday, tech founder Mostaque questions what will happen "if we have agents more capable than us that we cannot control, that are going across the internet and they achieve a level of automation; what does that mean?"

"The worst case scenario is that it proliferates and basically it controls humanity."

That sounds terrifying, but he is not alone in pointing out the risk, that if we create computers smarter than ourselves we just can't be sure what will happen next.

Mostaque believes governments could soon be shocked into taking action by an event that makes the risks suddenly real. He points to the moment Tom Hanks contracted Covid-19 and millions sat up and paid attention.

When a moment like that arrives, governments will conclude "we need policy now", the 40-year-old says.

There's been a spike in concern for example after a Republican attack advert on Jo Biden was created using fake computer generated images.

When there's a risk to information that voters can trust, that's something governments have to respond to, says Mostaque.

Despite his concerns, Mostaque says that the potential benefits of AI for almost every part of our lives could be huge. Yet he concedes that the effect on jobs could be painful, at least at the start.

Mostaque says he believes AI "will be a bigger economic impact than the pandemic", adding that "it's up to us to decide which direction" this all goes in.


Some jobs will undoubtedly disappear; the bank Goldman Sachs suggested an almost incomprehensible 300m roles could be lost or diminished by the advancing technology.

While no one wants to be replaced by a robot, Mostaque's hope is that better jobs could be created because "productivity increases will balance out" and humans can concentrate on the things that make us human, and let machines do more of the rest. He agrees with the UK's former chief scientific advisor, Sir Patrick Vallance, that the advance of AI and its impacts could prove even bigger than the industrial revolution.

Mostaque is an unassuming mathematician, the founder of a company he only started in 2020 that has already been valued at $1bn; with more cash flooding in, including from Hollywood star Ashton Kutcher, it is likely soon to be worth very much more. Some speculation has put the value as high as $4bn.

Unlike some of his competitors he is determined his technology will remain open source - in other words anyone can look at the code, share it, and use it. In his view, that's what should give the public a level of confidence in what's going on.

"I think there shouldn't have to be a need for trust," he says.

"If you build open models and you do it in the open, you should be criticised if you do things wrong and hopefully lauded if you do some things right."

But his business also raises profound questions about ownership, and what's real. There's legal action underway against the company by the photo agency Getty Images, which claims the rights to the images it sells have been infringed.


In response, Mostaque says: "What if you have a robot that's walking around and looking at things, do you have to close its eyes if it sees anything?"

That's hardly likely to be the end of that conversation.

The entrepreneur is convinced that the scale of what's coming is enormous. He reckons that in 10 years time, his company and fellow AI leaders, ChatGPT and DeepMind, will even be bigger than Google and Facebook. Predictions about technology are as tricky as predictions about politics - educated guesses that could turn out to be totally wrong. But what is clear is that a public conversation about the risks and realities of AI is now underway. We might be on the cusp of sweeping changes too big for any one company, country or politician to manage.

The first steam train puffed along the tracks in Darlington more than 50 years after the steam engine was patented by James Watt. This time we're unlikely to have anything like as long to get used to these new ideas, and it's unlikely to be boring!

You can watch much more of our conversation with Emad Mostaque on tomorrow's Sunday with Laura Kuenssberg live on BBC One or here on iPlayer.


Ashton Kutcher raised a $243 million investment fund in just five weeks that will focus on the next absolute transformation in tech – Fortune

Ashton Kutcher, the Hollywood actor and venture capital investor, raised the money for his firms new AI fund quickly.

"We pulled the fund together in about five weeks," Kutcher said Thursday in a Bloomberg Television interview. "We have a base of LPs that have been with us for years on end."

Kutcher's new fund plans to put $243 million toward artificial intelligence startups, the tech industry's current hottest category. The portfolio already includes investments in AI startup darlings OpenAI, Stability AI Ltd. and Anthropic.

With the new fund, assets under management at Los Angeles-based Sound Ventures LLC are about $1 billion, the firm said. Kutcher said the firm had surveyed its portfolio companies to see how they were embracing AI, and that the sector would mark the "next absolute transformation" for technology.

"We've been investing in AI for the last seven years," Kutcher said. "But when we saw GPT be launched, we realized that this was an absolute breakthrough."

He acknowledged the so-called hype cycle that washes across technology investing, most recently with the rush into crypto, a field where Sound Ventures also has been active. The blockchain technology at the heart of cryptocurrency has value in a number of applications, he said, while tokenization in many areas went too far.

Regulation of AI is needed badly, he said, just as it is in the crypto industry.


ChatFished: How to Lose Friends and Alienate People With A.I. – The New York Times

Five hours is enough time to watch a Mets game. It is enough time to listen to the Spice Girls' "Spice" album (40 minutes), Paul Simon's "Paul Simon" album (42 minutes) and Gustav Mahler's third symphony (his longest). It is enough time to roast a chicken, text your friends that you've roasted a chicken and prepare for an impromptu dinner party.

Or you could spend it checking your email. Five hours is about how long many workers spend on email each day. And 90 minutes on the messaging platform Slack.

It's a weird thing, workplace chatter like email and Slack: It's sometimes the most delightful and human part of the work day. It can also be mind-numbing to manage your inbox, to the extent you might wonder, couldn't a robot do this?

In late April, I decided to see what it would be like to let artificial intelligence into my life. I resolved to do an experiment. For one week, I would write all my work communication (emails, Slack messages, pitches, follow-ups with sources) through ChatGPT, the artificial intelligence language model from the research lab OpenAI. I didn't tell colleagues until the end of the week (except in a few instances of personal weakness). I downloaded a Chrome extension that drafted email responses directly into my inbox. But most of the time, I ended up writing detailed prompts into ChatGPT, asking it to be either witty or formal depending on the situation.
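The workflow described here, asking a chat model to draft a reply in a chosen tone, can be sketched roughly like this (a hedged illustration: the message format follows OpenAI's chat-completions convention, the helper and model name are hypothetical, and actually sending the request would need an API key and an HTTP call, omitted here):

```python
import json

# Hypothetical helper that assembles a tone-controlled drafting request in
# the OpenAI chat-completions message format. Nothing is sent anywhere;
# this only builds the request payload.
def build_draft_request(incoming_message, tone="formal"):
    prompt = (
        f"Draft a reply to this message in a {tone} tone, "
        f"in two or three sentences:\n\n{incoming_message}"
    )
    return {
        "model": "gpt-3.5-turbo",  # illustrative model name
        "messages": [
            {"role": "system", "content": "You draft workplace messages."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_draft_request("Can we move our meeting to Friday?", tone="witty")
print(json.dumps(request, indent=2))
```

Swapping the `tone` argument between "witty" and "formal" is the whole trick; everything else about the message stays the same.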

What resulted was a roller coaster, emotionally and in terms of the amount of content I was generating. I started the week inundating my teammates (sorry) to see how they would react. At a certain point, I lost patience with the bot and developed a newfound appreciation for phone calls.

My bot, unsurprisingly, couldn't match the emotional tone of any online conversation. And I spent a lot of the week, because of hybrid work, having online conversations.

The impulse to chat with teammates all day isn't wrong. Most people know the thrill (and also, usefulness) of office friendships from psychologists, economists, TV sitcoms and our own lives; my colleague sends me photos of her baby in increasingly chic onesies every few days, and nothing makes me happier. But the amount of time workers feel they must devote to digitally communicating is undoubtedly excessive, and for some, it is easy to make the case for handing it over to artificial intelligence.

The release of generative A.I. tools has raised all sorts of enormous and thorny questions about work. There are fears about what jobs will be replaced by A.I. in 10 years. Paralegals? Personal assistants? Movie and television writers are currently on strike, and one issue they're fighting for is limiting the use of A.I. by the studios. There are also fears about the toxic and untruthful information A.I. can spread in an online ecosystem already rife with misinformation.

The question driving my experiment was far narrower: Will we miss our old ways of working if A.I. takes over the drudgery of communication? And would my colleagues even know, or would they be Chatfished?

My experiment started on a Monday morning with a friendly Slack message from an editor in Seoul who had sent me the link to a study analyzing humor across more than 2,000 TED and TEDx Talks. "Pity the researchers," the editor wrote to me. I asked ChatGPT to say something clever in reply, and the robot wrote: "I mean, I love a good TED Talk as much as the next person, but that's just cruel and unusual punishment!"

While not at all resembling a sentence I would type, this seemed inoffensive. I hit send.

I had begun the experiment feeling that it was important to be generous in spirit toward my robot co-conspirator. By Tuesday morning, though, I found that my to-do list was straining the limits of my robot's pseudo-human wit. It so happened that my colleagues on the Business desk were planning a party. Renee, one of the party planners, asked me if I could help draft the invitation.

"Maybe with your journalistic voice, you can write a nicer sentence than I just have," Renee wrote to me on Slack.

I couldn't tell her that my use of journalistic voice was a sore subject that week. I asked ChatGPT to craft a funny sentence about refreshments. "I am thrilled to announce that our upcoming party will feature an array of delicious cheese plates," the robot wrote. "Just to spice things up a bit (pun intended), we may even have some with a business-themed twist!"

Renee was unimpressed and, ironically, wrote to me: "OK, wait, let me get the ChatGPT to make a sentence."

Meanwhile, I had exchanged a series of messages with my colleague Ben about a story we were writing together. In a moment of anxiety, I called him to let him know it was ChatGPT writing the Slack messages, not me, and he admitted that he had wondered whether I was annoyed at him. "I thought I'd broken you!" he said.

When we got off the phone, Ben messaged me: "Robot-Emma is very polite, but in a way I'm slightly concerned might hide her intention to murder me in my sleep."

"I want to assure you that you can sleep peacefully knowing that your safety and security are not at risk," my bot replied. "Take care and sleep well."

Given the amount of time I spend online talking to colleagues (about the news, story ideas, occasionally "Love Is Blind"), it was disconcerting stripping those communications of any personality.

But it's not at all far-fetched. Microsoft earlier this year introduced a product, Microsoft 365 Copilot, that could handle all the tasks I asked ChatGPT to do and far more. I recently saw it in action when Microsoft's corporate vice president, Jon Friedman, showed me how Copilot could read emails he'd received, summarize them and then draft possible replies. Copilot can take notes during meetings, analyze spreadsheet data and identify problems that might arise in a project.

I asked Mr. Friedman if Copilot could mimic his sense of humor. He told me that the product wasn't quite there yet, although it could make valiant comedic attempts. (He has asked it, for example, for pickleball jokes, and it delivered: "Why did the pickleball player refuse to play doubles? They couldn't dill with the extra pressure!")

Of course, he continued, Copilot's purpose is loftier than mediocre comedy. "Most of humanity spends way too much time consumed with what we call the drudgery of work, getting through our inbox," Mr. Friedman said. "These things just sap our creativity and our energy."

Mr. Friedman recently asked Copilot to draft a memo, using his notes, recommending one of his employees for a promotion. The recommendation worked. He estimated that two hours' worth of work was completed in six minutes.

To some, though, the time savings aren't worth the peculiarity of outsourcing relationships.

"In the future, you're going to get an email and someone will be like 'Did you even read it?' And you'll be like no, and then they'll be like 'Well, I didn't write the response to you,'" said Matt Buechele, 33, a comedy writer who also makes TikToks about office communications. "It'll be robots going back and forth to each other, circling back."

Mr. Buechele, in the middle of our phone interview, asked me unprompted about the email I had sent to him. "Your email style is very professional," he said.

I confessed that ChatGPT had written the message to him requesting an interview.

"I was sort of like, 'This is going to be the most awkward conversation of my life,'" he said.

This confirmed a fear I'd been developing that my sources had started to think I was a jerk. One source, for example, had written me an effusive email thanking me for an article I'd written and inviting me to visit his office when I was next in Los Angeles.

ChatGPT's response was muted, verging on rude: "I appreciate your willingness to collaborate."

I was feeling mournful of my past exclamation-point-studded internet existence. I know people think exclamation points are tacky. The writer Elmore Leonard advised measuring out two or three per 100,000 words of prose. Respectfully, I disagree. I often use two or three per two or three words of prose. I'm an apologist for digital enthusiasm. ChatGPT, it turns out, is more reserved.

For all the irritation I developed toward my robot overlord, I found that some of my colleagues were impressed by my newly polished digital persona, including my teammate Jordyn, who consulted me on Wednesday for advice on an article pitch.

"I have a story idea I'd love to chat with you about," Jordyn wrote to me. "It's not urgent!!"

"I'm always up for a good story, urgent or not!" my robot replied. "Especially if it's a juicy one with plot twists and unexpected turns."

After a few minutes of back-and-forth, I was desperate to talk with Jordyn in person. I was losing patience with the bot's cloying tone. I missed my own stupid jokes, and (comparatively) normal voice.

More alarmingly, ChatGPT is prone to hallucinating, meaning putting words and ideas together that don't actually make sense. While writing a note to a source about the timing for an interview, my bot randomly suggested asking him whether we should coordinate our outfits in advance so that our auras and chakras wouldn't clash.

I asked ChatGPT to draft a message to another colleague, who knew about my experiment, telling him I was in hell. "I'm sorry, but I cannot generate inappropriate or harmful content," the robot replied. I asked it to draft a message explaining that I was losing my mind. ChatGPT couldn't do that either.

Of course, many of the A.I. experts I consulted were undeterred by the notion of shedding their personalized communication style. "Truthfully, we copy and paste a lot already," said Michael Chui, a McKinsey partner and expert in generative A.I.

Mr. Chui conceded that some people see signs of dystopia in a future where workers communicate mostly through robots. He argued, though, that this wouldn't look all that unlike corporate exchanges that are already formulaic. "I recently had a colleague send me a text message saying, 'Hey, was that last email you sent legit?'" Mr. Chui recalled.

It turned out that the email had been so stiff that the colleague thought it was written through ChatGPT. Mr. Chui's situation is a bit particular, though. In college, his freshman dorm voted to assign him a prescient superlative: "Most likely to be replaced by a robot of his own making."

I decided to end the week by asking the deputy editor of my department what role he saw for A.I. in the newsroom's future. "Do you think there's a possibility that we could see AI-generated content on the front page one day?" I wrote over Slack. "Or do you think that there are some things that are just better left to human writers?"

"Well, that doesn't sound like your voice!" the editor replied.

A day later, my experiment complete, I typed back my own response: "That's a relief!!!"
