Archive for the ‘Ai’ Category

Adobe’s AI-Powered Generative Remove Feature in Lightroom Erases Unsightly Objects in Seconds – WIRED

Photo bombing is dead. Adobe is adding an artificial-intelligence-powered Generative Remove feature to its Lightroom photo editor that makes it dead simple to zap out unwanted elements, like that annoying guy in the background. The new feature is in a public beta-testing phase, but it will work across the Lightroom ecosystem whether you're using the app on mobile, desktop, or web.

Lightroom's Generative Remove uses Adobe's Firefly AI engine to smoothly replace unwanted elements. Simply paint over the area you want to remove and Lightroom will send that information to Adobe's Firefly servers, which then crunch the data and send it back. In demos WIRED saw, this process took no more than a few seconds, though performance will depend on your internet connection's speed.

Unlike Adobe Photoshop's Reference Image feature, which launched less than a month ago and allows users to generate new images using Firefly, Lightroom's AI features are very much focused on a photographer's workflow.

[Image captions from the article: "The highlighted area shows what will be removed." "You can use Object Aware and Generative AI together."]

One of the more difficult things to do when editing images is to remove distracting elements. Typically this would be done using tools like Lightroom's Content Aware Remove, which hides elements by matching them to surrounding areas. This works well in simple situations where the background isn't too confusing for the software: removing a telephone pole against a solid blue sky, for example. But the larger the object to remove, and the more complex the background, the more difficult and time-consuming this becomes.
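
To make the distinction concrete, here is a minimal sketch of the classical, non-generative approach using OpenCV's built-in inpainting. It is not Lightroom's actual algorithm (Adobe has not published that), but it illustrates the same idea: mask the offending pixels and fill them in from their surroundings.

```python
# Minimal sketch of classical, non-generative object removal with OpenCV.
# Not Lightroom's algorithm; it shows the same "fill from surroundings" idea.
import cv2
import numpy as np

image = cv2.imread("telephone_pole.jpg")          # photo with a distracting element
mask = np.zeros(image.shape[:2], dtype=np.uint8)  # single-channel mask, same size

# "Paint over" the unwanted object: here, a rough vertical strip for the pole.
cv2.rectangle(mask, (480, 0), (520, image.shape[0]), color=255, thickness=-1)

# Fill the masked pixels from surrounding content (Telea fast-marching method).
result = cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("telephone_pole_removed.jpg", result)
```

This handles the pole-against-blue-sky case well and degrades quickly as the background gets busier, which is exactly the gap the generative approach targets.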

The Firefly-powered Generative Remove can do the same thing, but for much larger objects against any background. Adobe has reduced what would once have taken hours and considerable technical know-how to the flick of a mouse and a few seconds of processing time. Everyone is now a Lightroom wizard. Also, unlike other retouching tools, which produce the single best match they can, Generative Remove generates three different versions and allows you to choose the one that looks best.
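
Since the generative version is a server round trip rather than a local fill, the flow might look like the hypothetical sketch below. Adobe has not published a public API for this beta, so the endpoint URL, the payload fields, and the generative_remove helper are all invented for illustration; only the flow itself (upload the image and painted mask, receive three candidate fills, keep the best) comes from the article.

```python
# Hypothetical sketch of the mask-upload round trip described above.
# The endpoint and payload fields are invented; Adobe's real service differs.
import requests

def generative_remove(image_path: str, mask_path: str) -> list:
    """Send an image and its painted mask; return candidate fills (assumed: 3)."""
    with open(image_path, "rb") as img, open(mask_path, "rb") as mask:
        resp = requests.post(
            "https://example.com/firefly/generative-remove",  # placeholder URL
            files={"image": img, "mask": mask},
            data={"num_variants": 3},  # article: three versions to choose from
            timeout=30,
        )
    resp.raise_for_status()
    # Assume the service responds with URLs to the generated candidates.
    return [requests.get(url, timeout=30).content for url in resp.json()["variants"]]

# The user would then eyeball the candidates and keep whichever looks best.
for i, data in enumerate(generative_remove("photo.jpg", "painted_mask.png")):
    with open(f"candidate_{i}.jpg", "wb") as f:
        f.write(data)
```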

As impressive and useful as Generative Remove is, it might sound a bit familiar, especially to anyone using Google Photos. These new features don't offer much that Google's Magic Eraser tool couldn't already do. Nor do they enable anything like Google's Magic Editor, which lets you alter the lighting of a scene or cut and paste subjects within it.

Adobe's Generative Remove mirrors the company's previous uses of AI, like last year's AI-powered noise removal tool, which built on existing noise removal tools, making them better rather than breaking significant new ground. This, I suspect, is what working photographers actually want: better tools rather than flashy new features. Adobe seems content to leave the more dramatic AI-powered tools, like rearranging a scene after the fact, to others.

See the article here:

Adobe's AI-Powered Generative Remove Feature in Lightroom Erases Unsightly Objects in Seconds - WIRED

Voice Actors Sue Company Whose AI Sounds Like Them – The New York Times

Last summer, as they drove to a doctor's appointment near their home in Manhattan, Paul Skye Lehrman and Linnea Sage listened to a podcast about the rise of artificial intelligence and the threat it posed to the livelihoods of writers, actors and other entertainment professionals.

The topic was particularly important to the young married couple. They made their living as voice actors, and A.I. technologies were beginning to generate voices that sounded like the real thing.

But the podcast had an unexpected twist. To underline the threat from A.I., the host conducted a lengthy interview with a talking chatbot named Poe. It sounded just like Mr. Lehrman.

"He was interviewing my voice about the dangers of A.I. and the harms it might have on the entertainment industry," Mr. Lehrman said. "We pulled the car over and sat there in absolute disbelief, trying to figure out what just happened and what we should do."

Link:

Voice Actors Sue Company Whose AI Sounds Like Them - The New York Times

Sainsbury’s and Microsoft announce five-year AI collaboration – Microsoft

Sainsbury plc and Microsoft Corp. today announced a new five-year strategic partnership, using Microsoft's artificial intelligence and machine learning capabilities and Sainsbury's rich datasets to help accelerate the retailer's recently announced Next Level Sainsbury's strategy.

The partnership will improve store operations, drive greater efficiency for colleagues, and provide customers with more efficient and effective service, delivering stronger returns for shareholders under Sainsbury's "Save and invest to win" programme.

By harnessing Microsoft's products and expert engineering capabilities, Sainsbury's will put the power of AI in the hands of store colleagues and make shopping more engaging and more convenient for millions of customers across the UK, both online and in store.

This will be supported by upskilling programmes for Sainsbury's colleagues, helping them learn and grow in the new AI-driven economy.

Sainsbury's will use Microsoft's services to transform across three core areas.

Clodagh Moriarty, Sainsbury's Chief Retail and Technology Officer, said: "Our collaboration with Microsoft will accelerate our ambition to become the UK's leading AI-enabled grocer."

"It's one of the key ways we're investing in transforming our capabilities over the next three years, enabling us to take another big leap forward in efficiency and productivity, continue to provide leading customer service and deliver returns for our shareholders."

Clare Barclay, CEO, Microsoft UK, said: "Today, Sainsbury's has laid out a bold vision that puts AI at the heart of its business, accelerating the development of new services, which will enhance and transform the customer and colleague experience."

"We are delighted to be working with Sainsbury's to power the next generation of retail."

Tags: AI, Azure, cloud, machine learning, Retail, Sainsbury's

Original post:

Sainsbury's and Microsoft announce five-year AI collaboration - Microsoft

How Google's AI Overviews Work, and How to Turn Them Off (You Can't) – WIRED

When can you expect your query to trigger an AI-generated summary of the results? "AI Overviews appear for complex queries," says Mallory De Leon, a Google spokesperson. "You'll find AI Overviews in your Google Search results when our systems determine that generative AI can be especially helpful, for example, when you want to quickly understand information from a range of sources." During my initial tests, it felt like the AI Overviews popped up almost at random, and the summaries appeared for simple questions as well as more complicated asks.

According to De Leon, the AI Overview is powered by a customized version of Google's Gemini model that's supplemented with aspects of the company's Search system, like the Knowledge Graph, which has billions of general facts.
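
Google has not disclosed the mechanics, but "a Gemini model supplemented with the Knowledge Graph" describes the familiar retrieval-augmented generation pattern. Here is a toy sketch of that pattern only; the miniature fact store, the function names, and the prompt format are illustrative assumptions, not Google's implementation.

```python
# Toy sketch of retrieval-augmented generation: ground a model's answer in
# facts fetched from a knowledge store. Everything here is illustrative.

KNOWLEDGE_GRAPH = {  # stand-in for a real knowledge graph
    "eiffel tower": "The Eiffel Tower is a wrought-iron tower in Paris, completed in 1889.",
    "mount everest": "Mount Everest is Earth's highest peak above sea level, at 8,849 m.",
}

def retrieve_facts(query: str) -> list:
    """Return any stored facts whose key appears in the query."""
    q = query.lower()
    return [fact for key, fact in KNOWLEDGE_GRAPH.items() if key in q]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved facts so the model summarizes sources, not memory."""
    facts = retrieve_facts(query)
    context = "\n".join(f"- {f}" for f in facts) or "- (no facts found)"
    return (
        "Answer using ONLY the facts below; say so if they are insufficient.\n"
        f"Facts:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

# The grounded prompt, not the bare question, is what gets sent to the model.
print(build_grounded_prompt("How tall is Mount Everest?"))
```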

For some AI Overview answers, the webpage links are immediately visible. For other AI Overviews, you have to click "Show more" to see where the information is coming from.

One of my core hesitations about this feature as it rolls out is the continued potential for AI hallucinations, more commonly known as lies. When you interact with Google's Gemini chatbot, a disclaimer at the bottom reads: "Gemini may display inaccurate info, including about people, so double-check its responses." There's no such disclaimer added to the bottom of the AI Overview, which often simply reads, "Generative AI is experimental."

When asked why there's no mention of potential hallucinations for AI Overviews, De Leon emphasizes that Google still wants to offer high-quality search results and mentions that the company ran adversarial red-teaming tests to uncover potential weak points in the feature.

"This implementation of generative AI is rooted in Search's core quality and safety systems, with built-in guardrails to prevent low-quality or harmful information from surfacing," she says. "AI Overviews are designed to highlight information that can be easily verified by the supporting information that we surface."

Knowing this, you might still want to click through the webpage links to double-check that the information is actually correct. Though it's hard to imagine that many users, who are often looking for quick answers, will spend extra time reading over the source material for Google's AI-generated answer.

Liz Reid, Google's head of Search, recently told my colleague Lauren Goode that AI Overviews are expected to arrive for countries outside of the United States before the end of 2024, so over a billion people will likely soon encounter this new feature. As someone whose job relies on readers actually clicking links and spending time reading the articles, of course I'm apprehensive about this change, and I'm not alone.

Beyond concerns from publishers, it also remains unclear what additional impacts might trickle down to users from Google's AI Overviews. Yes, OpenAI's ChatGPT and other AI tools are quite popular in Silicon Valley tech circles, but this feature will likely expose billions of people who have never used a chatbot before to AI-generated text. Even though AI Overviews are designed to save you time, they might lead to less trustworthy results.

Read the original:

How Google's AI Overviews Work, and How to Turn Them Off (You Can't) - WIRED

We have to stop ignoring AI’s hallucination problem – The Verge

Google I/O introduced an AI assistant that can see and hear the world, while OpenAI put its version of a Her-like chatbot into an iPhone. Next week, Microsoft will be hosting Build, where it's sure to have some version of Copilot or Cortana that understands pivot tables. Then, a few weeks after that, Apple will host its own developer conference, and if the buzz is anything to go by, it'll be talking about artificial intelligence, too. (Unclear if Siri will be mentioned.)

AI is here! It's no longer conceptual. It's taking jobs, making a few new ones, and helping millions of students avoid doing their homework. According to most of the major tech companies investing in AI, we appear to be at the start of one of those rare monumental shifts in technology. Think the Industrial Revolution, or the creation of the internet or the personal computer. All of Silicon Valley, all of Big Tech, is focused on taking large language models and other forms of artificial intelligence and moving them from the laptops of researchers into the phones and computers of average people. Ideally, they will make a lot of money in the process.

But I can't really care about that because Meta AI thinks I have a beard.

I want to be very clear: I am a cis woman and do not have a beard. But if I type "show me a picture of Alex Cranz" into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn't the only one to struggle with the minutiae of The Verge's masthead. ChatGPT told me yesterday I don't work at The Verge. Google's Gemini didn't know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things.

I mean, they even screwed up during Google's big AI keynote at I/O. In a commercial for Google's new AI-ified search engine, someone asked how to fix a jammed film camera, and it suggested they open the back door and gently remove the film. That is the easiest way to destroy any photos you've already taken.

An AI's difficult relationship with the truth is called hallucinating. In extremely simple terms: these machines are great at discovering patterns of information, but in their attempt to extrapolate and create, they occasionally get it wrong. They effectively hallucinate a new reality, and that new reality is often wrong. It's a tricky problem, and every single person working on AI right now is aware of it.

One Google ex-researcher claimed it could be fixed within the next year (though he lamented that outcome), and Microsoft has a tool for some of its users that's supposed to help detect hallucinations. Google's head of Search, Liz Reid, told The Verge it's aware of the challenge, too. "There's a balance between creativity and factuality with any language model," she told my colleague David Pierce. "We're really going to skew it toward the factuality side."
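
Reid's "balance between creativity and factuality" maps onto a concrete knob in how these models pick their next word: the sampling temperature. The sketch below is illustrative only, not a claim about what Google actually tunes, but it shows why skewing toward factuality is a dial rather than a switch.

```python
# Illustrative only: temperature is one real knob behind the
# "creativity vs. factuality" trade-off in language-model sampling.
import numpy as np

def next_token_probs(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Softmax with temperature: low T sharpens the distribution, high T flattens it."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

# Hypothetical scores for the next token after "The Eiffel Tower was completed in".
tokens = ["1889", "1890", "1875", "banana"]
logits = np.array([4.0, 2.5, 2.0, 0.1])

for t in (0.2, 1.0, 2.0):
    probs = next_token_probs(logits, t)
    print(f"T={t}: " + ", ".join(f"{tok}={p:.2f}" for tok, p in zip(tokens, probs)))
# At T=0.2 nearly all probability sits on "1889" (factual but predictable);
# at T=2.0 even "banana" gets noticeable mass (creative, and sometimes just wrong).
```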

But notice how Reid said there was a balance? That's because a lot of AI researchers don't actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. Just as no person is 100 percent right all the time, neither are these computers.

And that's probably why most of the major players in this field, the ones with real resources and a financial incentive to make us all embrace AI, think you shouldn't worry about it. During Google's I/O keynote, it added, in tiny gray font, the phrase "check responses for accuracy" to the screen below nearly every new AI tool it showed off, a helpful reminder that its tools can't be trusted, but that it also doesn't think it's a problem. ChatGPT operates similarly. In tiny font just below the prompt window, it says, "ChatGPT can make mistakes. Check important info."

That's not a disclaimer you want to see from tools that are supposed to change our whole lives in the very near future! And the people making these tools do not seem to care too much about fixing the problem beyond a small warning.

Sam Altman, the CEO of OpenAI who was briefly ousted for prioritizing profit over safety, went a step further and said anyone who had an issue with AI's accuracy was naive. "If you just do the naive thing and say, 'Never say anything that you're not 100 percent sure about,' you can get them all to do that. But it won't have the magic that people like so much," he told a crowd at Salesforce's Dreamforce conference last year.

This idea that there's a kind of unquantifiable magic sauce in AI that will allow us to forgive its tenuous relationship with reality is brought up a lot by the people eager to hand-wave away accuracy concerns. Google, OpenAI, Microsoft, and plenty of other AI developers and researchers have dismissed hallucination as a small annoyance that should be forgiven because they're on the path to making digital beings that might make our own lives easier.

But apologies to Sam and everyone else financially incentivized to get me excited about AI. I don't come to computers for the inaccurate magic of human consciousness. I come to them because they are very accurate when humans are not. I don't need my computer to be my friend; I need it to get my gender right when I ask and help me not accidentally expose film when fixing a busted camera. Lawyers, I assume, would like it to get the case law right.

I understand where Sam Altman and other AI evangelists are coming from. It may be possible, in some far future, to create a real digital consciousness from ones and zeroes. Right now, the development of artificial intelligence is moving at an astounding speed that puts many previous technological revolutions to shame. There is genuine magic at work in Silicon Valley right now.

But the AI thinks I have a beard. It can't consistently figure out the simplest tasks, and yet it's being foisted upon us with the expectation that we celebrate the incredible mediocrity of the services these AIs provide. While I can certainly marvel at the technological innovations happening, I would like my computers not to sacrifice accuracy just so I have a digital avatar to talk to. That is not a fair exchange; it's only an interesting one.

Follow this link:

We have to stop ignoring AI's hallucination problem - The Verge