Archive for the ‘AI’ Category

How to Use A.I. to Edit and Generate Stunning Photos – The New York Times

Hello! Welcome back to On Tech: A.I., a pop-up newsletter that teaches you about artificial intelligence, how it works and how to use it.

In last week's newsletter, I shared the golden prompts for getting the most helpful answers from chatbots like ChatGPT, Bing and Bard. Now that you're familiar with the general principle of building a relationship with A.I. (the more specific and detailed the instructions you give, the better the results you'll get), let's move on to a slightly different realm.

Much of the hype and fear around generative A.I. has centered on text. But there have also been rapid and dramatic developments in systems that can generate images. In many cases, these share a similar structure with text-based generative A.I., but they can also be much weirder and lend themselves to some very fun creative pursuits.

Image generators are trained on billions of images, which enables them to produce new creations that were once the sole dominion of painters and other artists. Sometimes experts can't tell the difference between A.I.-created images and actual photographs (a circumstance that has fueled dangerous misinformation campaigns in addition to fun creations). And these tools are already changing the way that creative professionals do their jobs.

Compared with products like ChatGPT, image-generating A.I. tools are not as well developed. They require jumping through a few more hoops and may cost a bit of money. But if you're interested in learning the ropes, there's no better time to start.

Last week, Adobe added a generative A.I. feature to a beta version of Photoshop, its iconic graphics software, and creators on social networks like TikTok and Instagram have been buzzing about it ever since.

I have a fair amount of experience with Photoshop. When I tested the new feature, called generative fill, I was impressed with how quickly and competently the A.I. carried out tasks that would have taken me at least an hour to do on my own. In less than five minutes and with only a few clicks, I used the feature to remove objects, add objects and swap backgrounds.

(To experiment with these tools yourself, start by signing up for a free trial of Adobe Creative Cloud. Then, install the new Adobe Photoshop beta, which includes generative fill.)

Once you have Photoshop beta installed, import a photo and try these tricks:

To change a background, click the object selection icon (it has an arrow pointed at a box), then, under the Select menu, click "inverse" to select the background. Next, click the generative fill box and type in a prompt, or leave it blank to let Photoshop come up with a new background concept for you.

I used these steps to edit a photo of my corgi, Max. I typed "kennel" for the prompt and clicked "generate" to replace the background.

Photo editors at The New York Times do not enhance or alter photos, or generate images using artificial intelligence. But my first thought after testing generative fill was that photo editors working in other contexts, like marketing, could soon be out of work. When I shared this theory with Adobe's chief technology officer, Ely Greenfield, he said that it might make photo editing more accessible, but he was optimistic that humans would still be needed.

"I can make really pretty images with it, but frankly, I still make boring images," he said. "When I look at the content that artists create when you put this in their hands versus what I create, their stuff is so much more interesting because they know how to tell a story."

I confess that what I've done with generative fill is far less exciting than what others have been posting on social media. Lorenzo Green, who tweets about A.I., posted a collage of famous album covers, including Michael Jackson's "Thriller" and Adele's "21," that were expanded with generative fill. The results were quite entertaining.

(One note: If installing Photoshop feels daunting, a quicker way to test Adobe's A.I. is to visit the Adobe Firefly website. There, you can open the generative fill tool, upload an image and click the "add" tool to trace around a subject, such as a dog. Then click "background" and type in a prompt like "beach.")

Tools like DALL-E and Midjourney can create entirely new images in seconds. They work similarly to chatbots: You type in a text prompt, and the more specific it is, the better.

To write a quality prompt, start with the medium you'd like to emulate, followed by the subject and any extra details. For example, typing "a photograph of a cat wearing a sweater in a brightly lit room" into the DALL-E prompt box will generate a set of images matching that description.
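If you would rather send prompts programmatically than through the web interface, OpenAI also exposes image generation through an API. Below is a minimal sketch using the openai Python package; the image size and the assumption that your API key is set in the OPENAI_API_KEY environment variable are illustrative details, not something the newsletter specifies.

```python
# Minimal sketch: generating a set of four images from a text prompt
# via OpenAI's Images API. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.images.generate(
    prompt="a photograph of a cat wearing a sweater in a brightly lit room",
    n=4,               # matches the set of four images one credit buys
    size="1024x1024",  # illustrative choice; other sizes are available
)

# Each result carries a temporary URL to one generated image.
for image in response.data:
    print(image.url)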

DALL-E, which is owned by OpenAI, the maker of ChatGPT, was one of the first widely available A.I. image generators that was simple for people to use. For $15, you get 115 credits; one credit can be used to generate a set of four images, which works out to roughly 13 cents per set, or about 3 cents per image.

Midjourney, another popular image generator, is a work in progress, so the user experience is not as polished. The service costs $10 a month, and entering prompts can be a little more complicated because it requires joining a separate messaging app, Discord. Nonetheless, the service can create high-quality, realistic images.

To use it, join Discord and then request an invitation to the Midjourney server. After joining the server, type "/imagine" followed by a prompt inside the chat box. I typed "/imagine a manga cover of a corgi in a ninja turtle costume" and generated a set of convincing images.
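For reference, a Midjourney request is just a Discord slash command. A sketch of the format appears below; the trailing parameters for aspect ratio (--ar) and model version (--v) are common Midjourney options added here for illustration, not ones the article mentions.

```
/imagine prompt: a manga cover of a corgi in a ninja turtle costume --ar 2:3 --v 5
```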

Though it's fine to type in a basic request, some have found obscure prompts that generated exceptional results (Beebom, a tech blog, has a list of examples). At Columbia University, Lance Weiler is teaching students how to leverage A.I., including Midjourney, to produce artwork.

Whichever tool you use, bear in mind that the onus is on you to use this tech responsibly. Technologists warn that image generators can increase the spread of deepfakes and misinformation. But the tools can also be used in positive and constructive ways, like making family photos look better and brainstorming artistic concepts.

Next week, I'll share some tips on how to use A.I. to speed up aspects of office jobs, such as drafting talking points and generating presentation slides.

In case you're wondering, the delightfully demented image at the top of this newsletter was created by a human, the illustrator Charles Desmarais, not by A.I.

See the original post:

How to Use A.I. to Edit and Generate Stunning Photos - The New York Times

James Cameron’s ‘Avatar’ scores wacky new Wes Anderson … – Space.com

Director Wes Anderson's resume of wildly original and uniquely cinematic fare has spawned a fad of fan-made AI trailers.

These amateur works target the quirky filmmaker's recognizable style as seen in movies like "Rushmore," "The Royal Tenenbaums," "The Life Aquatic With Steve Zissou," "Isle of Dogs," "Moonrise Kingdom," "The Grand Budapest Hotel" and his new sci-fi flick, "Asteroid City."

Fresh off of their "Star Wars: The Galactic Menagerie" parody video cleverly employing cutting-edge AI tools, the creative folks at the YouTube channel Curious Refuge have turned their attention to director James Cameron's blockbuster film "Avatar," and the results offer a sly lampoon of Anderson's trademark visual trickery, brash color palette, singular wit and symmetrical camera framing.

This latest alt-history teaser presents a timeline in which Anderson helmed "The Peculiar Pandora Expedition: An Avatar Story," a madcap sci-fi odyssey to that lush tropical moon, packed with an eccentric ensemble cast of Anderson regulars like Bill Murray, Timothée Chalamet, Adrien Brody, Tilda Swinton, Luke Wilson, Owen Wilson, Anjelica Huston, Gwyneth Paltrow, Jason Schwartzman and Willem Dafoe.

Related: 1st 'Asteroid City' trailer reveals Wes Anderson's take on a space-age alien encounter

Here's the creators' official description:

Embark on a captivating journey to the enchanting world of Pandora, reimagined through the unique and imaginative lens of Wes Anderson in "The Peculiar Pandora Expedition." This extraordinary fan-made trailer offers a fresh perspective on James Cameron's epic masterpiece, blending Anderson's distinctive style with the awe-inspiring landscapes and extraordinary creatures of Pandora.

Follow Jake Sully, a former Marine, as he ventures into this mesmerizing land alongside the strong-willed Neytiri. Together, they discover the peculiar wonders, vibrant colors, and extraterrestrial flirtations that define Pandora. With Anderson's keen eye for detail and storytelling, he brings a human touch to this eccentric world, giving us a fresh and captivating take on the Na'vi and their mystical environment.

Experience the breathtaking beauty of Pandora, from its majestic floating mountains to its lush and diverse flora. Marvel at the unique creatures that inhabit this world, as Jake's journey uncovers secrets and challenges him to choose between his own kind and the people he has come to love.

"The Peculiar Pandora Expedition" is a testament to Anderson's visionary mind and meticulous craftsmanship, delivering an adventure filled with passion, vibrant colors, and thought-provoking themes. Join us as we celebrate the magic of imagination and the power of cinematic storytelling.

Other AI-spawned homage videos still making the cyberspace rounds are trailers for supposed Wes Anderson versions of "The Lord of the Rings," "The Hunger Games," "The Shining," "Gremlins" and "Harry Potter."

This newest "Avatar" creation might be the most esoteric of the whole bunch. But the trend is getting a bit repetitious and long in the tooth, which could be precisely what these hyped digital art offerings are attempting to point out.

Cameron's second "Avatar" film, "Avatar: The Way of Water," splashes onto Disney+ and Max beginning on Wednesday (June 7).


Read the original:

James Cameron's 'Avatar' scores wacky new Wes Anderson ... - Space.com

Grimes used AI to clone her own voice. We cloned the voice of a … – NPR

In Part 1 of this series, AI proved that it could use real research and real interviews to write an original script for an episode of Planet Money.

Our next task was to teach the computer how to sound like us. How to read that script aloud like a Planet Money host.

On today's show, we explore the world of AI-generated voices, which have become so lifelike in recent years that they can credibly imitate specific people. To test the limits of the technology, we attempt to create our own synthetic voice by training a computer on recordings of former Planet Money host Robert Smith. Then we introduce synthetic Robert to his very human namesake.

There are a lot of ethical, and economic, questions raised by a technology that can duplicate anyone's voice. To help us make sense of it all, we seek the advice of an artist who has embraced AI voice clones: the musician Grimes.

(This is part two of a three-part series.)

This episode was produced by Emma Peaslee and Willa Rubin, with help from Sam Yellowhorse Kesler. It was edited by Keith Romer and fact-checked by Sierra Juarez. Engineering by James Willetts. Jess Jiang is our acting executive producer.

We built a Planet Money AI chat bot. Help us test it out: Planetmoneybot.com.

Help support Planet Money and get bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney.

Always free at these links: Apple Podcasts, Spotify, Google Podcasts, NPR One or anywhere you get podcasts.

Find more Planet Money: Facebook / Instagram / TikTok / Our weekly Newsletter.

Music: "Hi-Tech Expert," "Lemons and Limes," and "Synergy in Numbers."

Go here to read the rest:

Grimes used AI to clone her own voice. We cloned the voice of a ... - NPR

Google’s AI-powered search experience is way too slow – The Verge

The worst thing about Google's new AI-powered search experience is how long you have to wait.

Can you think of the last time you waited for a Google Search result? For me, searches are generally instant. You type a thing in the search box, Google almost immediately spits out an answer to that thing, and then you can click some links to learn more about what you searched for or type something else into the box. It's a virtuous, useful cycle that has turned Google Search into the most visited website in the world.

Google's Search Generative Experience, on the other hand, has loading animations.

Let me back up a little. In May, Google introduced an experimental feature called Search Generative Experience (SGE) that uses Google's AI systems to summarize search results for you. The idea is that you won't have to click through a list of links or type something else in the search box; instead, Google will just tell you what you're looking for. In theory, that means your search queries can be more complex and conversational (a pitch we've heard before!), but Google will still be able to answer your questions.

If you've opted in to SGE, which is only available to people who sign up for Google's waitlist in its Search Labs, AI summaries will appear right under the search box. I've been using SGE for a few days, and I've found the responses themselves have been generally fine, if cluttered. For example, when I searched "where can I watch Ted Lasso?" the AI-generated response that appeared was a few sentences long and factually accurate. It's on Apple TV Plus. Apple TV Plus costs $6.99 per month. Great.


But the answers are often augmented with a bunch of extra stuff. On desktop, Google displays source information as cards on the right, even though you can't easily tell which pieces of information come from which sources (another button can help you with that). On mobile (well, only in the Google app for now), the cards appear below the summarized text. Below the query response, you can click a series of potential follow-up prompts, and under all of that is a standard Google search result, which can be littered with additional info boxes.

That extra stuff in an SGE result isn't quite as helpful as it should be, either. When it showed off SGE at I/O, Google also demonstrated how the tool could auto-generate a buying guide on the fly, so I thought "where can I buy Tears of the Kingdom?" would be a softball question. But the result was a mess: giant sponsored cards above the result, a confusing list of suggested retail stores that didn't actually take me to listings for the game, a Google Map pinpointing those retail stores and, off to the right, three link cards where I could find my way to buying the game. A search for a used iPhone 13 Mini in red didn't go much better. I should have just scrolled down.

An increasingly cluttered search screen isn't exactly new territory for Google. What bothers me most about SGE is that its summaries take a few seconds to show up. As Google is generating an answer to your query, an empty colored box appears, with loading bars fading in and out. When the search result finally loads, the colored box expands and Google's summary pops in, pushing the list of links down the page. I really don't like waiting for this; if I weren't testing specifically for this article, for many of my searches I'd immediately scroll away from the generative AI response so I could click on a link.

Confusingly, SGE broke down for me at weird times, even with some of the top-searched terms. The words "YouTube," "Amazon," "Wordle," "Twitter," and "Roblox," for example, all returned an error message: "An AI-powered overview is not available for this search." "Facebook," "Gmail," "Apple," and "Netflix," on the other hand, all came back with perfectly fine SGE-formatted answers. But for the queries that were valid, the results took what felt like forever to show up.

When I was testing, the Gmail result showed up fastest, in about two seconds. Netflix's and Facebook's took about three and a half seconds, while Apple's took about five seconds. The single-word queries that failed all took more than five seconds trying to load before showing the error message, which was incredibly frustrating when I could have just scrolled down to click a link. The Tears of the Kingdom and iPhone 13 Mini queries both took more than six seconds to load (an internet eternity!).

When I have to wait that long and I'm not specifically running test queries, I just scroll down past the SGE results to get to something to read or click on. And when I have to tap my foot waiting for SGE answers that are often filled with cruft I don't want to sift through, it all just makes the search experience worse for me.

Maybe I'm just stuck in my ways. I like to investigate sources for myself, and I'm generally distrustful of the things AI tools say. But as somebody who has wasted eons of his life looking at loading screens in streaming videos and video games, having to do so on Google Search is a deal-breaker for me. And when the results don't feel noticeably better than what I could get just by looking at what Google offered before, I don't think SGE is worth waiting for.

Read the original post:

Google's AI-powered search experience is way too slow - The Verge

Politicians Need to Learn How AI Works, Fast – WIRED

This week, US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation and generally "go quite wrong," in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. But the hearing also saw agreement that no one wants to kneecap a technology that could potentially increase productivity and give the US a lead in a new technological revolution.

Worried senators might consider talking to Missy Cummings, a onetime fighter pilot and engineering and robotics professor at George Mason University. She studies the use of AI and automation in safety-critical systems, including cars and aircraft, and earlier this year returned to academia after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla's Autopilot and self-driving cars. Cummings' perspective might help politicians and policymakers trying to weigh the promise of much-hyped new algorithms against the risks that lie ahead.

Cummings told me this week that she left the NHTSA with a sense of profound concern about the autonomous systems that are being deployed by many car manufacturers. "We're in serious trouble in terms of the capabilities of these cars," Cummings says. "They're not even close to being as capable as people think they are."

I was struck by the parallels with ChatGPT and similar chatbots stoking excitement and concern about the power of AI. Automated driving features have been around for longer, but like large language models they rely on machine learning algorithms that are inherently unpredictable, hard to inspect and require a different kind of engineering thinking from that of the past.

Also like ChatGPT, Tesla's Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. There was a permissive regulatory environment around autonomous cars in the mid-2010s, with government officials loath to apply the brakes to a technology that promised to be worth billions for US businesses.

After billions spent on the technology, self-driving cars are still beset by problems, and some auto companies have pulled the plug on big autonomy projects. Meanwhile, as Cummings says, the public is often unclear about how capable semiautonomous technology really is.

In one sense, it's good to see governments and lawmakers being quick to suggest regulation of generative AI tools and large language models. The current panic is centered on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including confidently fabricating facts.

At this week's Senate hearing, Altman of OpenAI, which gave us ChatGPT, went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. "My worst fear is that we, the field, the technology, the industry, cause significant harm to the world," Altman said during the hearing.

The rest is here:

Politicians Need to Learn How AI Works, Fast - WIRED