Archive for the ‘Ai’ Category

Grimes used AI to clone her own voice. We cloned the voice of a … – NPR

In Part 1 of this series, AI proved that it could use real research and real interviews to write an original script for an episode of Planet Money.

Our next task was to teach the computer how to sound like us. How to read that script aloud like a Planet Money host.

On today's show, we explore the world of AI-generated voices, which have become so lifelike in recent years that they can credibly imitate specific people. To test the limits of the technology, we attempt to create our own synthetic voice by training a computer on recordings of former Planet Money host Robert Smith. Then we introduce synthetic Robert to his very human namesake.
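For readers curious what this kind of voice cloning can look like in code, here is a minimal sketch using the open-source Coqui TTS library and its XTTS voice-cloning model. This is only an illustration, not the tool or workflow NPR used; the reference recording and file paths are placeholders.

    from TTS.api import TTS

    # Load a multilingual voice-cloning model (weights download on first use)
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # Clone the voice heard in reference.wav and read one line of script with it
    tts.tts_to_file(
        text="This is Planet Money. Today, a computer tries to sound like a host.",
        speaker_wav="reference.wav",  # a short, clean recording of the target voice
        language="en",
        file_path="synthetic_host.wav",
    )

Services aiming for broadcast-quality clones typically train on far more audio than a single short clip, which is part of what the team explores in this episode.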

There are a lot of ethical, and economic, questions raised by a technology that can duplicate anyone's voice. To help us make sense of it all, we seek the advice of an artist who has embraced AI voice clones: the musician Grimes.

(This is part two of a three-part series. For part one of our series, click here)

This episode was produced by Emma Peaslee and Willa Rubin, with help from Sam Yellowhorse Kesler. It was edited by Keith Romer and fact-checked by Sierra Juarez. Engineering by James Willetts. Jess Jiang is our acting executive producer.

We built a Planet Money AI chat bot. Help us test it out: Planetmoneybot.com.

Help support Planet Money and get bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney.


Always free at these links: Apple Podcasts, Spotify, Google Podcasts, NPR One or anywhere you get podcasts.

Find more Planet Money: Facebook / Instagram / TikTok / Our weekly Newsletter.

Music: "Hi-Tech Expert," "Lemons and Limes," and "Synergy in Numbers."

Go here to read the rest:

Grimes used AI to clone her own voice. We cloned the voice of a ... - NPR

Google’s AI-powered search experience is way too slow – The Verge

The worst thing about Google's new AI-powered search experience is how long you have to wait.

Can you think of the last time you waited for a Google Search result? For me, searches are generally instant. You type a thing in the search box, Google almost immediately spits out an answer to that thing, and then you can click some links to learn more about what you searched for or type something else into the box. It's a virtuous, useful cycle that has turned Google Search into the most visited website in the world.

Google's Search Generative Experience, on the other hand, has loading animations.

Let me back up a little. In May, Google introduced an experimental feature called Search Generative Experience (SGE) that uses Google's AI systems to summarize search results for you. The idea is that you won't have to click through a list of links or type something else in the search box; instead, Google will just tell you what you're looking for. In theory, that means your search queries can be more complex and conversational (a pitch we've heard before!) but Google will still be able to answer your questions.

If you've opted in to SGE, which is only available to people who sign up for Google's waitlist on its Search Labs, AI summaries will appear right under the search box. I've been using SGE for a few days, and I've found the responses themselves have been generally fine, if cluttered. For example, when I searched "where can I watch Ted Lasso?" the AI-generated response that appeared was a few sentences long and factually accurate. It's on Apple TV Plus. Apple TV Plus costs $6.99 per month. Great.


But the answers are often augmented with a bunch of extra stuff. On desktop, Google displays source information as cards on the right, even though you can't easily tell which pieces of information come from which sources (another button can help you with that). On mobile (well, only the Google app for now), the cards appear below the summarized text. Below the query response, you can click a series of potential follow-up prompts, and under all of that is a standard Google search result, which can be littered with additional info boxes.

That extra stuff in an SGE result isn't quite as helpful as it should be, either. When it showed off SGE at I/O, Google also showed how the tool could auto-generate a buying guide on the fly, so I thought "where can I buy Tears of the Kingdom?" would be a softball question. But the result was a mess, littered with giant sponsored cards above the result, a confusing list of suggested retail stores that didn't actually take me to listings for the game, a Google Map pinpointing those retail stores, and off to the right, three link cards where I could find my way to buying the game. A search for a used iPhone 13 Mini in red didn't go much better. I should have just scrolled down.

An increasingly cluttered search screen isn't exactly new territory for Google. What bothers me most about SGE is that its summaries take a few seconds to show up. As Google is generating an answer to your query, an empty colored box will appear, with loading bars fading in and out. When the search result finally loads, the colored box expands and Google's summary pops in, pushing the list of links down the page. I really don't like waiting for this; if I weren't testing specifically for this article, for many of my searches, I'd be immediately scrolling away from most generative AI responses so I could click on a link.

Confusingly, SGE broke down for me at weird times, even with some of the top-searched terms. The words YouTube, Amazon, Wordle, Twitter, and Roblox, for example, all returned an error message: "An AI-powered overview is not available for this search." Facebook, Gmail, Apple, and Netflix, on the other hand, all came back with perfectly fine SGE-formatted answers. But for the queries that were valid, the results took what felt like forever to show up.

When I was testing, the Gmail result showed up fastest, in about two seconds. Netflix's and Facebook's took about three and a half seconds, while Apple's took about five seconds. But for these single-word queries that failed, they all took more than five seconds to try and load before showing the error message, which was incredibly frustrating when I could have just scrolled down to click a link. The Tears of the Kingdom and iPhone 13 Mini queries both took more than six seconds to load, an internet eternity!

When I have to wait that long when I'm not specifically doing test queries, I just scroll down past the SGE results to get to something to read or click on. And when I have to tap my foot to wait for SGE answers that are often filled with cruft that I don't want to sift through, it's all just making the search experience worse for me.

Maybe I'm just stuck in my ways. I like to investigate sources for myself, and I'm generally distrustful of the things AI tools say. But as somebody who has wasted eons of his life looking at loading screens in streaming videos and video games, having to do so on Google Search is a deal-breaker for me. And when the results don't feel noticeably better than what I could get just by looking at what Google offered before, I don't think SGE is worth waiting for.

Read the original post:

Google's AI-powered search experience is way too slow - The Verge

Politicians Need to Learn How AI WorksFast – WIRED

This week, US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation, and generally "go quite wrong," in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. But the hearing also saw agreement that no one wants to kneecap a technology that could potentially increase productivity and give the US a lead in a new technological revolution.

Worried senators might consider talking to Missy Cummings, a onetime fighter pilot and engineering and robotics professor at George Mason University. She studies the use of AI and automation in safety-critical systems, including cars and aircraft, and earlier this year returned to academia after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla's Autopilot and self-driving cars. Cummings' perspective might help politicians and policymakers trying to weigh the promise of much-hyped new algorithms against the risks that lie ahead.

Cummings told me this week that she left the NHTSA with a sense of profound concern about the autonomous systems that are being deployed by many car manufacturers. "We're in serious trouble in terms of the capabilities of these cars," Cummings says. "They're not even close to being as capable as people think they are."

I was struck by the parallels with ChatGPT and similar chatbots stoking excitement and concern about the power of AI. Automated driving features have been around for longer, but like large language models they rely on machine learning algorithms that are inherently unpredictable, hard to inspect, and require a different kind of engineering thinking than in the past.

Also like ChatGPT, Tesla's Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. There was a permissive regulatory environment around autonomous cars in the mid-2010s, with government officials loath to apply the brakes to a technology that promised to be worth billions for US businesses.

After billions spent on the technology, self-driving cars are still beset by problems, and some auto companies have pulled the plug on big autonomy projects. Meanwhile, as Cummings says, the public is often unclear about how capable semiautonomous technology really is.

In one sense, it's good to see governments and lawmakers being quick to suggest regulation of generative AI tools and large language models. The current panic is centered on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including confidently fabricating facts.

At this week's Senate hearing, Altman of OpenAI, which gave us ChatGPT, went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. "My worst fear is that we, the field, the technology, the industry, cause significant harm to the world," Altman said during the hearing.

The rest is here:

Politicians Need to Learn How AI WorksFast - WIRED

How generative A.I. and low-code are speeding up innovation – CNBC


Independently, generative artificial intelligence and low-code software are two highly sought-after technologies. But experts say that together, the two harmonize in a way that accelerates innovation beyond the status quo.

Low-code development allows people to build applications with minimal hand-written code, relying instead on visual tools and other models to develop. While the intersection of low-code and AI feels natural, it's crucial to consider nuances like data integrity and security to ensure a meaningful integration.

Microsoft's Low-Code Signals 2023 report says 87% of chief innovation officers and IT professionals believe "increased AI and automation embedded into low-code platforms would help them better use the full set of capabilities."

According to Dinesh Varadharajan, CPO at low-code/no-code work platform Kissflow, the convergence of AI and low-code enables systems to manage the work rather than humans having to work for the systems.

Additionally, rather than the AI revolution replacing low-code, Varadharajan said, "One doesn't replace the other, but the power of two is going to bring a lot of possibilities."

Varadharajan notes that as AI and low-code technology come together, the development gap closes. Low-code software increases the accessibility of development across organizations (often to so-called citizen developers) while generative AI increases organizational efficiency and congruence.

According to Jim Rose, CEO of an automation platform for software delivery teams called CircleCI, these large language models that serve as the foundation of generative AI platforms will ultimately be able to change the language of low-code. Rather than building an app or website through a visual design format, Rose said, "What you'll be able to do is query the models themselves and say, for example, 'I need an easy-to-manage e-commerce shop to sell vintage shoes.'"

Rose agrees that the technology has not quite reached this point, in part because "you have to know how to talk" to generative AI to get what you're looking for. Kissflow's Varadharajan says he can see AI taking over task management within a year, and perhaps intersecting with low-code in a more meaningful way not long after.
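As a rough illustration of the kind of prompt-driven scaffolding Rose describes, a conversational model can be asked to draft an application outline directly. The sketch below is hypothetical: it uses the OpenAI Python client, and the model name and prompt are placeholders, not a workflow any of the platforms mentioned here actually exposes.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask the model to scaffold an app the way a low-code builder might
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "I need an easy-to-manage e-commerce shop to sell vintage shoes. "
                       "Propose the pages, data model, and admin workflow.",
        }],
    )

    print(response.choices[0].message.content)

The output is still prose and starter structure that a person has to wire together, which is consistent with Rose's point that the technology has not yet reached push-button app generation.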

Like anything involving AI, there are plenty of nuances that business leaders must take into account for successful implementation and iteration of AI-powered low-code.

Don Schuerman, CTO of enterprise software company Pega, prioritizes what he calls "a responsible and ethical AI framework."

This includes the need for transparency. In other words, can you explain how and why AI is making a particular decision? Without that clarity, he says, companies can end up with a system that fails to serve end users in a fair and responsible way.

This melds with the need for bias testing, he added. "There are latent biases embedded in our society, which means there are latent biases embedded in our data," he said. "That means AI will pick up those biases unless we are explicitly testing and protecting against them."
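What that testing looks like varies by team, but one simple, widely used check is to compare a model's outcomes across demographic groups. Below is a minimal sketch with made-up data; the column names, groups, and the four-fifths threshold are illustrative only, not Pega's methodology.

    import pandas as pd

    # Hypothetical model decisions joined with applicant demographics
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
        "approved": [1,   1,   0,   1,   0,   0,   0,   1,   1,   0],
    })

    # Approval rate per group, then each rate relative to the best-served group
    rates = decisions.groupby("group")["approved"].mean()
    ratios = rates / rates.max()

    # Four-fifths rule of thumb: flag groups whose rate falls below 80% of the maximum
    flagged = ratios[ratios < 0.8]
    print(rates, flagged, sep="\n")

A check like this only surfaces disparities; deciding whether they reflect latent bias and how to correct them is exactly where Schuerman's "human in the loop" comes in.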

Schuerman is a proponent of "keeping the human in the loop," not only for checking errors and making changes, but also to consider what machine learning algorithms have not yet mastered: customer empathy. By prioritizing customer empathy, organizations can maintain systems and recommend products and services actually relevant to the end user.

For Varadharajan, the biggest challenge he foresees with the convergence of AI and low-code is change management. Enterprise users, in particular, are used to working in a certain way, he says, which could make them the last segment to adopt the AI-powered low-code shift.

Whatever risks a company is dealing with, maintaining the governance layer is what will help leaders keep up with AI as it evolves. "Even now, we are still grappling with the possibilities of what generative AI can do," Varadharajan said. "As humans, we will also evolve. We will figure out ways to manage the risk."

While many generative AI platforms stem from open-source models, CircleCI's Rose says there's a successor of a different kind to come. "The next wave is closed-loop models that are trained against proprietary data," he said.

Proprietary data and closed-loop models will still have to reckon with the need for transparency, of course. Yet the ability for organizations to keep data secure in this small-model style could quickly shift the capacities of generative AI across industries.

Generative AI and low-code software put innovation on a freeway, as long as organizations don't compromise on the responsibility factor, experts said. In the modern era, innovation speed is a must-have to be competitive. Just look at Bard, Google's offering that is set to compete with OpenAI's ChatGPT in the generative AI space.

According to Schuerman, with AI and low-code, "I'm starting out further down the field than I did before." By shortening the path from an idea to experimentation and ultimately to a live product, he said, AI-powered low-code accelerates the speed of innovation.

Link:

How generative A.I. and low-code are speeding up innovation - CNBC

Would you trust an AI doctor? New research shows patients are split – University of Arizona

Artificial intelligence-powered medical treatment options are on the rise and have the potential to improve diagnostic accuracy, but a new study led by University of Arizona Health Sciences researchers found that about 52% of participants would choose a human doctor rather than AI for diagnosis and treatment.

The paper, "Diverse Patients' Attitudes Towards Artificial Intelligence (AI) in Diagnosis," was published today in the journal PLOS Digital Health.

The research was led by Marvin J. Slepian, MD, JD, Regents Professor of Medicine at the UArizona College of Medicine – Tucson and member of the BIO5 Institute, and Christopher Robertson, JD, professor of law and associate dean for strategic initiatives at Boston University. The research team found that most patients aren't convinced the diagnoses provided by AI are as trustworthy as those delivered by human medical professionals.

"While many patients appear resistant to the use of AI, accuracy of information, nudges and a listening patient experience may help increase acceptance," Dr. Slepian said of the study's other primary finding: that a human touch can help clinical practices use AI to their advantage and earn patients' trust. "To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required."

In the National Institutes of Health-funded study, participants were placed into scenarios as mock patients and asked whether they would prefer to have an AI system or a physical doctor for diagnosis and treatment, and under what circumstances.

In the first phase, researchers conducted structured interviews with actual patients, testing their reactions to current and future AI technologies. In the second phase of the study, researchers polled 2,472 participants across diverse ethnic, racial and socioeconomic groups using a blinded, randomized survey that tested eight variables.

Overall, participants were almost evenly split, with more than 52% choosing human doctors as a preference versus approximately 47% choosing an AI diagnostic method. If study participants were prompted that their primary care physicians felt AI was superior and helpful as an adjunct to diagnosis, or otherwise nudged to consider AI as good, the acceptance of AI by study participants on re-questioning increased. This signaled the significance of the human physician in guiding a patient's decision.
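To get a feel for how "almost evenly split" this is, one can run a quick proportion test against a 50/50 split. The counts below are approximated from the reported percentages and sample size, so treat this purely as an illustration, not a reanalysis of the study.

    from statsmodels.stats.proportion import proportions_ztest

    n = 2472                       # participants in the survey phase
    human_pref = round(0.52 * n)   # ~1285 preferring a human doctor (approximate)

    # Does the observed preference differ from an even 50/50 split?
    z_stat, p_value = proportions_ztest(count=human_pref, nobs=n, value=0.5)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")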

Disease severity (leukemia versus sleep apnea) did not affect participants' trust in AI. Compared to white participants, Black participants selected AI less often and Native Americans selected it more often. Older participants were less likely to choose AI, as were those who self-identified as politically conservative or viewed religion as important.

The racial, ethnic and social disparities identified suggest that differing groups will warrant specific sensitivity and attention in informing them of the value and utility of AI to enhance diagnoses.

"I really feel this study has the import for national reach. It will guide many future studies and clinical translational decisions even now," Dr. Slepian said. "The onus will be on physicians and others in health care to ensure that information that resides in AI systems is accurate, and to continue to maintain and enhance the accuracy of AI systems as they will play an increasing role in the future of health care."

Co-authors include Andrew Woods, JD, Milton O. Riepe professor of law and co-director of the TechLaw program at the UArizona James E. Rogers College of Law; Kelly Bergstrand, PhD, associate professor of sociology and anthropology at the University of Texas at Arlington; Jess Findley, JD, PhD, professor of practice and director of bar and academic success at UArizona James E. Rogers College of Law; and Cayley Balser, JD, postgraduate at Innovation for Justice, housed at both the UArizona James E. Rogers College of Law and the University of Utah David Eccles School of Business.

This research was funded in part by the National Institutes of Health under award no. 3R25HL126140-05S1.

Go here to see the original:

Would you trust an AI doctor? New research shows patients are split - University of Arizona