Archive for the ‘Ai’ Category

YouTube is going all in on AI with background and video topic … – The Verge

More content on YouTube is going to be created at least in part using generative AI.

The video platform announced several new AI-powered tools for creators at its annual Made on YouTube event on Thursday. Among the features coming later this year or next are AI-generated photo and video backgrounds, AI video topic suggestions, and music search.

A new feature called Dream Screen will create AI-generated videos and photos that creators can place in the background of their YouTube Shorts. Initially, creators will be able to type in prompts to generate backgrounds; eventually, YouTube says, creators will be able to remix and edit their existing content using AI tools to create something new.

At Made on YouTube, the company demonstrated Dream Screen, generating backgrounds in seconds based on short prompts.

AI tools will also begin informing what kind of content creators make. A new AI feature in YouTube Studio will generate topic ideas and outlines for potential videos. The AI suggestions will be personalized to individual creators, YouTube says, and based on what's already trending with audiences. Additionally, an AI-powered music recommendation system will take a written description of a creator's video and suggest audio to use.

Finally, YouTube announced an AI dubbing feature that will allow creators to dub their videos into other languages. YouTube brought over the Aloud team from its Area 120 incubator earlier this year to help make the feature.

The shift in how digital creators make content has been well underway since the explosion of cheap generative AI tools over the last year. As YouTube parent company Google pours money into its generative AI systems, YouTube has also slowly introduced AI-powered tools, including video summaries. On Google's biggest product, Search, the company is already testing AI-generated results at the top of the page in the form of the Search Generative Experience.

The slew of new AI-powered YouTube products could mark a shift in how creators plan, make, and structure their content. AI-driven insights will likely shift what kind of content creators double down on, and AI-generated content, already viral on YouTube, will become more common. In response to the spread of convincing synthetic material, other platforms like TikTok have already introduced labels that identify AI-generated material as such.

YouTube is also making it easier for creators to make Shorts with a new YouTube Create app that it announced at the event.

Correction September 21st, 11:10AM ET: Removed an incorrect reference to the product as Green Screen instead of Dream Screen.

Opinion: I asked AI about myself. The answers were all wrong – The Virginian-Pilot

My interest in artificial intelligence was piqued after a colleague told me he was using it for research and writing. Before I used AI for my own work, I decided to test its authenticity with a question I could verify. I asked OpenAI's ChatGPT about my own identity, expecting a text version of a selfie. After a week of repeating the same question, the responses were confounding and concerning.

ChatGPT answered "Who is Philip Shucet?" by listing 15 distinct positions I supposedly held at one time or another. The positions included specific titles, job responsibilities and employment dates. But only three of the 15 jobs were accurate. The other 12 were fabrications; the positions were real, but I was never in any of them. The misinformation included jobs in two states I never lived in, as well as a congressional appointment to the Amtrak Review Board. How could AI be so wrong?

Although newsrooms, boardrooms and classrooms are buzzing with stories, AI is not new. The first chatbot, Eliza, was created in 1966 by Joseph Weizenbaum at MIT. Weizenbaum, who died in 2008, became skeptical of artificial intelligence, telling the New Age Journal in 1985, "The dependence on computers is merely the most recent, and the most extreme, example of how man relies on technology in order to escape the burden of acting as an independent agent."

Was Weizenbaum sending a warning that technology might make us lazy?

In an interview about AI on a March segment of 60 Minutes, Brad Smith, president of Microsoft, told Lesley Stahl that a benefit of AI could be "looking at forms to see if they've been filled out correctly." But what if the form is a resume created by AI? Can AI check its own misinformation? What happens when an employment record is tainted with false information created by AI? Can job recruiters rely on AI queries? Can employers rely on recruiters who use AI? And who is accountable when someone is hired based on misinformation generated by a machine and not by a human?

In the same 60 Minutes segment, Ellie Pavlick, an assistant professor at Brown, told Stahl, "It (AI) doesn't really understand what it is saying is wrong." If AI doesn't know when it is wrong, how can anyone rely on AI to be correct?

In May, two New York attorneys used ChatGPT to write a court brief. The brief cited cases that didn't exist. One of the attorneys, Steven Schwartz, told the judge that he "failed miserably" to do his own research to make sure the information was correct. The judge fined each attorney $5,000.

I asked ChatGPT about the consequences of giving out bad information. ChatGPT answered by saying that false information results in misrepresentation, confusion, legal concerns and emotional distress, and erodes trust in AI. If ChatGPT understands the implications of false information, why does it continue to provide fabrications when a search engine could easily provide correct information? Because, as I know now, ChatGPT is not a search engine. I know because I asked.

ChatGPT says it is a language model designed to understand and generate human-like text based on input. ChatGPT says it doesn't crawl the web or search the Internet. Instead, it generates responses based on patterns and information it learned from the text it was trained on.
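To make that distinction concrete, here is a minimal sketch of what a single query to a chat model looks like in code, using the OpenAI Python client as an assumed example; the model name and the question are illustrative, and nothing in the call searches the web or verifies the answer.

# Minimal sketch: one call to a chat model via the OpenAI Python client (assumed v1.x API).
# The model generates text from learned patterns; there is no web-search or fact-check step.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": "Who is Philip Shucet?"}],
)

# Whatever comes back is generated text, not a verified record; it can be fluent and still wrong.
print(response.choices[0].message.content)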

If AI needs to be trained, then there's a critical human element of accountability we can't ignore. So I started training ChatGPT by correcting it each time it answered with false information. After a week of training, ChatGPT was still returning a mix of accurate and inaccurate information, sometimes repeating fabrications. I'm still sending back correct information, but I'm ready to bring this experiment to an end for now.

This wasn't a test of ego; it was a test of reliability and trust. Three correct answers out of 15 works out to a 20% accuracy rate, and that is a failing grade.

In 1976, Weizenbaum wrote, "No other organism, and certainly no computer, can be made to confront genuine human problems in human terms." I'm not a Luddite. But as technology continues to leap forward further and faster, let's remember that we are in control of the information that defines us. We are the trainers.

Philip Shucet is a journalist. He previously held positions as the commissioner of VDOT, president and CEO of Hampton Roads Transit, and CEO of Elizabeth River Crossings. He has never held a congressional appointment.

ICYMI: As California Fires Worsen, Can AI Come to the Rescue … – Office of Governor Gavin Newsom

WHAT YOU NEED TO KNOW: No other jurisdiction in the world comes close to California's use of technology and innovation, including AI, to fight fires.

SACRAMENTO – Short answer: yes.

California is leveraging technologies like AI to fight fires faster and smarter, saving countless lives and communities from destruction.

As reported by the Los Angeles Times, CAL FIRE recently launched a pilot program that uses AI to monitor live camera feeds and issue alerts when anomalies are detected. Already, the program has successfully alerted CAL FIRE to 77 fires before any 911 calls were made.

This program is made possible by record investments by Governor Newsom and the Legislature in wildfire prevention and response, totaling $2.8 billion.

IN CASE YOU MISSED IT:

As California Fires Worsen, Can AI Come to the Rescue?

By Hayley Smith

Los Angeles Times

Just before 3 a.m. one night this month, Scott Slumpff was awakened by the ding of a text message.

"An ALERTCalifornia anomaly has been confirmed in your area of interest," the message said.

Slumpff, a battalion chief with the California Department of Forestry and Fire Protection, sprang into action. The message meant the agency's new artificial intelligence system had identified signs of a wildfire with a remote mountaintop camera in San Diego County.

Within minutes, crews were dispatched to the burgeoning blaze on Mount Laguna, squelching it before it grew any larger than a 10-foot-by-10-foot spot.

"Without the alert, we wouldn't have even known about the fire until the next morning, when people are out and about seeing smoke," Slumpff said. "We probably would have been looking at hundreds of acres rather than a small spot."

The rapid response was part of a new AI pilot project operated by Cal Fire in partnership with UC San Diego's ALERTCalifornia system, which maintains 1,039 high-definition cameras in strategic locations throughout the state.

The AI constantly monitors the camera feeds in search of anomalies such as smoke, and alerts Cal Fire when it detects something. A red box highlights the anomaly on a screen, allowing officials to quickly verify and respond.
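For readers curious how such a monitoring loop might be structured, here is a minimal, hypothetical sketch; it is not the ALERTCalifornia or Cal Fire code. The camera feed URL, the smoke_detector model and the send_alert function are stand-ins for illustration, with OpenCV assumed for reading frames and drawing the red box.

# Hypothetical sketch of a camera-feed anomaly monitor; not the actual Cal Fire / ALERTCalifornia system.
# Assumes OpenCV (cv2) for frame capture and drawing; smoke_detector is a placeholder for a trained model.
import cv2

ALERT_THRESHOLD = 0.8  # assumed confidence cutoff for raising an alert

def smoke_detector(frame):
    """Placeholder for a trained smoke/anomaly model.
    Returns (confidence, box), where box is (x, y, width, height) or None."""
    return 0.0, None  # a real model would score the frame here

def send_alert(camera_id, confidence):
    # Stand-in for paging an emergency command center
    print(f"Anomaly detected on camera {camera_id} (confidence {confidence:.2f})")

def monitor(feed_url, camera_id):
    capture = cv2.VideoCapture(feed_url)
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # feed ended or dropped
        confidence, box = smoke_detector(frame)
        if box is not None and confidence >= ALERT_THRESHOLD:
            x, y, w, h = box
            # Draw a red box around the anomaly so an operator can verify it quickly
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
            send_alert(camera_id, confidence)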

The project rolled out just two months ago to six Cal Fire emergency command centers in the state. But the proof of concept has already been so successful, correctly identifying 77 fires before any 911 calls were logged, that it will soon roll out to all 21 centers.

"The success of this project is the fires you never hear about," said Phillip SeLegue, staff chief of fire intelligence with Cal Fire.

AI chips, shared trips, and a shorter work week : The Indicator from … – NPR

It's Indicators of the Week, our weekly news roundup. Today, AI doesn't want to invest in AI, a county in Washington state implements a 4-day work week, and NYC says bye bye to Airbnb, sorta.

For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org.

Music by Drop Electric. Find us: TikTok, Instagram, Facebook, Newsletter.

How Schools Can Survive A.I. – The New York Times

Last November, when ChatGPT was released, many schools felt as if they'd been hit by an asteroid.

In the middle of an academic year, with no warning, teachers were forced to confront the new, alien-seeming technology, which allowed students to write college-level essays, solve challenging problem sets and ace standardized tests.

Some schools responded, unwisely, I argued at the time, by banning ChatGPT and tools like it. But those bans didn't work, in part because students could simply use the tools on their phones and home computers. And as the year went on, many of the schools that restricted the use of generative A.I., as the category that includes ChatGPT, Bing, Bard and other tools is called, quietly rolled back their bans.

Ahead of this school year, I talked with numerous K-12 teachers, school administrators and university faculty members about their thoughts on A.I. now. There is a lot of confusion and panic, but also a fair bit of curiosity and excitement. Mainly, educators want to know: How do we actually use this stuff to help students learn, rather than just try to catch them cheating?

I'm a tech columnist, not a teacher, and I don't have all the answers, especially when it comes to the long-term effects of A.I. on education. But I can offer some basic, short-term advice for schools trying to figure out how to handle generative A.I. this fall.

First, I encourage educators, especially in high schools and colleges, to assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they're being physically supervised inside a school building.

At most schools, this won't be completely true. Some students won't use A.I. because they have moral qualms about it, because it's not helpful for their specific assignments, because they lack access to the tools or because they're afraid of getting caught.

But the assumption that everyone is using A.I. outside class may be closer to the truth than many educators realize. ("You have no idea how much we're using ChatGPT," read the title of a recent essay by a Columbia undergraduate in The Chronicle of Higher Education.) And it's a helpful shortcut for teachers trying to figure out how to adapt their teaching methods. Why would you assign a take-home exam, or an essay on Jane Eyre, if everyone in class, except perhaps the most strait-laced rule followers, will use A.I. to finish it? Why wouldn't you switch to proctored exams, blue-book essays and in-class group work, if you knew that ChatGPT was as ubiquitous as Instagram and Snapchat among your students?

Second, schools should stop relying on A.I. detector programs to catch cheaters. There are dozens of these tools on the market now, all claiming to spot writing that was generated with A.I., and none of them work reliably well. They generate lots of false positives and can be easily fooled by techniques like paraphrasing. Don't believe me? Ask OpenAI, the maker of ChatGPT, which discontinued its A.I. writing detector this year because of a low rate of accuracy.

It's possible that in the future, A.I. companies may be able to label their models' outputs to make them easier to spot, a practice known as watermarking, or that better A.I. detection tools may emerge. But for now, most A.I. text should be considered undetectable, and schools should spend their time (and technology budgets) elsewhere.

My third piece of advice, and the one that may get me the most angry emails from teachers, is that teachers should focus less on warning students about the shortcomings of generative A.I. than on figuring out what the technology does well.

Last year, many schools tried to scare students away from using A.I. by telling them that tools like ChatGPT are unreliable, prone to spitting out nonsensical answers and generic-sounding prose. These criticisms, while true of early A.I. chatbots, are less true of today's upgraded models, and clever students are figuring out how to get better results by giving the models more sophisticated prompts.

As a result, students at many schools are racing ahead of their instructors when it comes to understanding what generative A.I. can do, if used correctly. And the warnings about flawed A.I. systems issued last year may ring hollow this year, now that GPT-4 is capable of getting passing grades at Harvard.

Alex Kotran, the chief executive of the AI Education Project, a nonprofit that helps schools adopt A.I., told me that teachers needed to spend time using generative A.I. themselves to appreciate how useful it could be and how quickly it was improving.

"For most people, ChatGPT is still a party trick," he said. "If you don't really appreciate how profound of a tool this is, you're not going to take all the other steps that are going to be required."

There are resources for educators who want to bone up on A.I. in a hurry. Mr. Kotrans organization has a number of A.I.-focused lesson plans available for teachers, as does the International Society for Technology in Education. Some teachers have also begun assembling recommendations for their peers, such as a website made by faculty at Gettysburg College that provides practical advice on generative A.I. for professors.

In my experience, though, there is no substitute for hands-on experience. So Id advise teachers to start experimenting with ChatGPT and other generative A.I. tools themselves, with the goal of getting as fluent in the technology as many of their students already are.

My last piece of advice for schools that are flummoxed by generative A.I. is this: Treat this year, the first full academic year of the post-ChatGPT era, as a learning experience, and don't expect to get everything right.

There are many ways A.I. could reshape the classroom. Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, thinks the technology will lead more teachers to adopt a flipped classroom, having students learn material outside class and practice it in class, which has the advantage of being more resistant to A.I. cheating. Other educators I spoke with said they were experimenting with turning generative A.I. into a classroom collaborator, or a way for students to practice their skills at home with the help of a personalized A.I. tutor.

Some of these experiments won't work. Some will. That's OK. We're all still adjusting to this strange new technology in our midst, and the occasional stumble is to be expected.

But students need guidance when it comes to generative A.I., and schools that treat it as a passing fad or an enemy to be vanquished will miss an opportunity to help them.

"A lot of stuff's going to break," Mr. Mollick said. "And so we have to decide what we're doing, rather than fighting a retreat against the A.I."
