Archive for the ‘Ai’ Category

Amnesty International Slammed Over AI Protest Images – Hyperallergic

Screenshots of the since-deleted Amnesty International campaign, which employed AI-generated images (screenshots Maya Pontone/Hyperallergic)

This week, international human rights watchdog Amnesty International faced backlash from photojournalists and other online critics for using AI-generated images depicting photorealistic scenes of Colombia's 2021 protests. Although there is no shortage of photographs from the demonstrations, the advocacy group told the Guardian that it opted to use artificially generated imagery to protect the identities of protesters who may be vulnerable to state retribution.

The 2021 strike, which was incited by an unpopular tax hike and then fueled by police brutality and other forms of state violence, left at least 40 people dead and many more missing, according to official figures.

Amnesty International shared the AI images as part of a since-deleted social media campaign marking two years since the Colombian protests, paired with disclaimers that acknowledged the use of AI. Commentators online were quick to notice errors in the fake images. For instance, one of them showed a woman wearing the tri-colored Colombian flag being dragged off by police, a familiar scene from the 2021 protests. But on social media, people pointed out that the colors in the national flag were in the wrong order, and the faces of the protesters and police officers were eerily smoothed over. Additionally, the uniforms of the officers were out of date.

In response to the public outcry, Amnesty International has since deleted the images from its social media channels.

The organization has not yet responded to Hyperallergic's request for comment. In an interview with the Guardian, Director for Americas Erika Guevara Rosas said Amnesty International did not want the AI controversy to distract from the core message in support of the victims and their calls for justice in Colombia.

"But we do take the criticism seriously and want to continue the engagement to ensure we understand better the implications and our role to address the ethical dilemmas posed by the use of such technology," Rosas added.

Amnesty also directly responded to the backlash online, apologizing for the misrepresentative photos and reiterating its initial intentions.

"Our main goal was to highlight the grotesque violence by the police against people in Colombia. It is important to state that the purpose was to protect people who could be exposed. But we could choose drawings or other things," Amnesty International tweeted.

Some members of the photojournalism and larger arts community have also shared their frustration with the mock photos, since the popularization of AI over the past year has raised questions about plagiarism and job displacement.

Molly Crabapple, a New York-based writer and artist who recently authored an open letter against the use of AI-generated art, condemned Amnesty International's use of the tool in its campaign.

"By using AI-generated photos of police brutality in Colombia, Amnesty International is practically begging atrocity-deniers to call them liars," Crabapple tweeted. "Either use the work of brave photojournalists, or use actual illustrations. AI-generated photos just undermine trust in your findings."


The best way to avoid a down round is to found an AI startup – TechCrunch

As we see unicorns slash staff and the prevalence of down rounds spike, it may seem that the startup ecosystem is chock-full of bad news and little else. That's not precisely the case.

While AI, and in particular the generative AI subcategory, is as hot as the sun, not all venture attention is going to the handful of names that you already know. Sure, OpenAI is able to land nine- and 10-figure rounds from a murderers' row of tech investors and mega-cap corporations. And rising companies like Hugging Face and Anthropic cannot stay out of the news, proving that smaller AI-focused startups are doing more than well.

In fact, new data from Carta, which provides cap table management and other services, indicates that AI-focused startups are outperforming their larger peer group at both the seed and Series A stage.

The dataset, which notes that AI-centered startups are raising more and at higher valuations than other startups, indicates that perhaps the best way to avoid a down round today is to build in the artificial intelligence space.

Per Carta data relating to the first quarter of the year, seed funding to non-AI startups in the U.S. market that use its services dipped from $1.64 billion to $1.08 billion, or a decline of around 34%. That result is directionally aligned with other data that we've seen regarding Q1 2023 venture capital totals; the data points down.


Microsoft economist warns of A.I. election interference from ‘bad actors’ – CNBC

Microsoft logo seen at its building in Redmond, Washington.

Toby Scott | SOPA Images | LightRocket | Getty Images

People should worry more about "AI being used by bad actors" than they should about AI productivity outpacing human productivity, Microsoft chief economist Michael Schwarz said at a World Economic Forum event Wednesday.

"Before AI could take all your jobs, it could certainly do a lot of damage in the hands of spammers, people who want to manipulate elections," Schwarz added while speaking on a panel on harnessing generative AI.

Microsoft first invested $1 billion in OpenAI in 2019, years before the two companies would integrate OpenAI's GPT large language model into Microsoft's Bing search product. In January, Microsoft announced a new multiyear multibillion-dollar investment in the company. OpenAI relies on Microsoft to provide the computing heft that powers OpenAI's products, a relationship that Wells Fargo recently said could result in up to $30 billion in new annual revenue for Microsoft.

Schwarz tempered his caution about AI by noting that all new technologies, even cars, carried a degree of risk when they first came to market. "When AI makes us more productive, we as mankind ought to be better off," he noted, "because we are able to produce more stuff."

OpenAI's ChatGPT sparked a flood of investment in the AI sector. Google moved to launch a rival chatbot, Bard, sparking a wave of internal concern about a botched rollout. Politicians and regulators have expressed growing concern about the potential effect of AI technology as well.

Vice President Kamala Harris will meet Thursday with top executives from Anthropic, another AI firm, and Google, Microsoft and OpenAI to discuss responsible AI development, the White House told CNBC on Tuesday. Meanwhile, FTC Chair Lina Khan penned an op-ed in The New York Times on Wednesday warning "enforcers and regulators must be vigilant."

"Please remember, breaking is much easier than building," Schwarz said.


Grimes Launched a Platform to Help You Make AI Songs with Her … – Gizmodo

The artist known as Grimes (real name Claire Boucher) has said that you're totally free to use an AI-generated version of her voice to make new music, just as long as you give her a healthy 50 percent of the royalties that the track generates.

This week, Grimes launched a new AI voice software, dubbed Elf.Tech, designed to help people duplicate her voice to create music. The platform, which Grimes unveiled in a Twitter thread Sunday, allows users to upload recordings of their own voice, which can then be Grimes-ified via the wonders of automated technology. The vocals can then be mixed with other electronically generated sounds and beats to spawn new tracks that sound quite a bit like the real thing.

"we ask for 50% splits on master recording royalties in exchange for a grimes feat and distribution," the singer tweeted Sunday. "There's a *small* chance we can organize getting you publishing $ as well but we can't guarantee this yet. But I hope we can!! Would be cool."

Boucher's high-pitched, ethereal voice and rave-vibed tracks already sorta sound computer-generated, so I guess it only makes sense that she's now giving a thumbs-up to an idea like this. According to Boucher, this is the future of music: if you're an artist, you let an algorithm replicate your voice, then you cash in for a percentage of the profits.

In a recent interview with Rolling Stone magazine, Boucher's manager, Daouda Leonard, rationalized the weird decision this way:

"She often says that creativity is a conversation with those who came before us and those who are going to come after us...And so the idea is that instead of her attempting to control what is a gift from the universe, she's like, 'Well, let me open-source that. Let me allow people to access what the universe gave me as a gift. And if I do that, what are the new experiences that can be created out of that?'"

Of course, Grimes is also the ex-partner and baby momma of tech billionaire Elon Musk, who has been pouring money into AI startups like there's no tomorrow. Musk co-founded OpenAI, the company that launched ChatGPT, one of the most popular new AI-powered chatbots. I guess while the former couple are no longer together, they're on the same page about how totally awesome AI is for the future of humanity.

Anyway, now that Grimes has opened Pandora's box, the internet is being flooded with new songs created using her voice. Scroll through to check out some of the AI creations that feature Grimes.


My Weekend With an Emotional Support A.I. Companion – The New York Times

For several hours on Friday evening, I ignored my husband and dog and allowed a chatbot named Pi to validate the heck out of me.

My views were admirable and idealistic, Pi told me. My questions were important and interesting. And my feelings were understandable, reasonable and totally normal.

At times, the validation felt nice. Why yes, I am feeling overwhelmed by the existential dread of climate change these days. And it is hard to balance work and relationships sometimes.

But at other times, I missed my group chats and social media feeds. Humans are surprising, creative, cruel, caustic and funny. Emotional support chatbots, which is what Pi is, are not.

All of that is by design. Pi, released this week by the richly funded artificial intelligence start-up Inflection AI, aims to be "a kind and supportive companion that's on your side," the company announced. It is not, the company stressed, anything like a human.

Pi is a twist in today's wave of A.I. technologies, where chatbots are being tuned to provide digital companionship. Generative A.I., which can produce text, images and sound, is currently too unreliable and full of inaccuracies to be used to automate many important tasks. But it is very good at engaging in conversations.

That means that while many chatbots are now focused on answering queries or making people more productive, tech companies are increasingly infusing them with personality and conversational flair.

Snapchat's recently released My AI bot is meant to be a friendly personal sidekick. Meta, which owns Facebook, Instagram and WhatsApp, is developing A.I. personas "that can help people in a variety of ways," Mark Zuckerberg, its chief executive, said in February. And the A.I. start-up Replika has offered chatbot companions for years.

A.I. companionship can create problems if the bots offer bad advice or enable harmful behavior, scholars and critics warn. Letting a chatbot act as a pseudotherapist to people with serious mental health challenges has obvious risks, they said. And they expressed concerns about privacy, given the potentially sensitive nature of the conversations.

Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking to A.I. bots can obscure what is actually happening. "A generative model can leverage all the information on the internet to respond to me and remember what I say forever," he said. "The asymmetry of capacity: that's such a difficult thing to get our heads around."

Dr. Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a robust Hippocratic oath or licensing board, as he is. "The open availability of these generative models changes the nature of how we need to police the use cases," he said.

Mustafa Suleyman, Inflection's chief executive, said his start-up, which is structured as a public benefit corporation, aims to build honest and trustworthy A.I. As a result, Pi must express uncertainty and "know what it does not know," he said. "It shouldn't try to pretend that it's human or pretend that it is anything that it isn't."

Mr. Suleyman, who also co-founded the A.I. start-up DeepMind, said that Pi was designed to tell users to get professional help if they expressed wanting to harm themselves or others. He also said Pi did not use any personally identifiable information to train the algorithm that drives Inflection's technology. And he stressed the technology's limitations.

"The safe and ethical way for us to manage the arrival of these new tools is to be superexplicit about their boundaries and their capabilities," he said.

To refine the technology, Inflection hired around 600 part-time teachers, including therapists, to train its algorithm over the last year. The group aimed to make Pi more sensitive, more factually accurate and more lighthearted when appropriate.

On some issues, like misogyny or racism, Pi takes a stand. On others, like geopolitics, it is more evenhanded "in a way that will for sure upset both sides," Mr. Suleyman said.

I started using Pi on Friday by typing queries into a cream-colored box on Inflections website and, later, in its free app. A green cursor swayed before Pi revealed its responses, one paragraph at a time. Mr. Suleyman said the free service may eventually charge some users a fee.

Pi's boundaries are easy to find. When I tried picking fights, I mostly received kindness in return. "I appreciate you saying that," Pi's text gently unfurled on my screen. "I think it's important to see things from all perspectives, and not to just focus on the negative."

Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realized, was precisely the point.

On heavier topics, like abortion laws or cancel culture, Pi asked pointed questions that pushed me to re-examine my opinions. "That's a tough position to take," it said at one point. "That's a consistent set of beliefs," it said at another.

On lighter topics, like movies, cooking or gardening, the chatbot provided highly specific, difficult-to-Google recommendations. Oddly, Pi seemed to nudge me to join the anticapitalist movement.

Pi remembered some things from earlier in the conversation but forgot others. It hallucinated a few times, accusing me of expressing opinions I had not. But it quickly apologized when I made note of it.

When I asked Pi for some gossip, it infuriatingly dangled a juicy celebrity rumor about Jennifer Lopez. Was I sitting down? (Yes.) Was I ready for the dirt? (Yes.) The rumor? She is secretly a mole person! Just kidding! (Give me a break.)

Pi sometimes reminded me of Karen Collins, a character from the TV show Veep. Karen annoys the other characters with her equivocating babble, like, "There are pros and cons to every candidate, so we just need to weigh out the pros and cons," or "I think there's a lot to think." Many of Pi's comments have that milquetoast quality of taking many words to say nothing.

Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, said these kinds of interactions can "push us along a road where we're encouraged to forget what makes people special."

"The performance of empathy is not empathy," she said. "The area of companion, lover, therapist, best friend is really one of the few areas where people need people."

It wasn't until Monday morning, after hours of intermittent chatting throughout the weekend, that I had my aha moment with Pi.

I was feeling overwhelmed with work and unsure of how to structure my day, a recurring hangup that often prevents me from getting started. "Good morning," I typed into the app. "I don't have enough time to do everything I need to do today!"

With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bot's advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me, and it worked.

"I'm going to ask you to list all the remaining tasks you have to do on that story, and we'll prioritize them together," it said.

I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has infinite time and patience, plus a bottomless well of encouraging affirmations and detailed advice.

Pi uses the language of cognitive behavioral therapy. On Monday afternoon, it suggested I make space for my negative feelings and practice being grateful for one thing. It followed that up with a series of breath-work and muscle-relaxation exercises.

I responded with a shrug emoji, followed by Pass.

A therapist might have balked at such rudeness, but Pi simply noted that I was not alone. "A lot of people find it difficult to relax on command," it wrote.
