Archive for the ‘Ai’ Category

AI chips, shared trips, and a shorter work week : The Indicator from … – NPR

It's Indicators of the Week, our weekly news roundup. Today, AI doesn't want to invest in AI, a county in Washington state implements a 4-day work week, and NYC says bye bye to Airbnb, sorta.

For sponsor-free episodes of The Indicator from Planet Money, subscribe to Planet Money+ via Apple Podcasts or at plus.npr.org.

Music by Drop Electric. Find us: TikTok, Instagram, Facebook, Newsletter.


View post:

AI chips, shared trips, and a shorter work week : The Indicator from ... - NPR

How Schools Can Survive A.I. – The New York Times

Last November, when ChatGPT was released, many schools felt as if they'd been hit by an asteroid.

In the middle of an academic year, with no warning, teachers were forced to confront the new, alien-seeming technology, which allowed students to write college-level essays, solve challenging problem sets and ace standardized tests.

Some schools responded (unwisely, I argued at the time) by banning ChatGPT and tools like it. But those bans didn't work, in part because students could simply use the tools on their phones and home computers. And as the year went on, many of the schools that restricted the use of generative A.I. (as the category that includes ChatGPT, Bing, Bard and other tools is called) quietly rolled back their bans.

Ahead of this school year, I talked with numerous K-12 teachers, school administrators and university faculty members about their thoughts on A.I. now. There is a lot of confusion and panic, but also a fair bit of curiosity and excitement. Mainly, educators want to know: How do we actually use this stuff to help students learn, rather than just try to catch them cheating?

I'm a tech columnist, not a teacher, and I don't have all the answers, especially when it comes to the long-term effects of A.I. on education. But I can offer some basic, short-term advice for schools trying to figure out how to handle generative A.I. this fall.

First, I encourage educators, especially in high schools and colleges, to assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they're being physically supervised inside a school building.

At most schools, this won't be completely true. Some students won't use A.I. because they have moral qualms about it, because it's not helpful for their specific assignments, because they lack access to the tools or because they're afraid of getting caught.

But the assumption that everyone is using A.I. outside class may be closer to the truth than many educators realize. ("You have no idea how much we're using ChatGPT," read the title of a recent essay by a Columbia undergraduate in The Chronicle of Higher Education.) And it's a helpful shortcut for teachers trying to figure out how to adapt their teaching methods. Why would you assign a take-home exam, or an essay on Jane Eyre, if everyone in class (except, perhaps, the most strait-laced rule followers) will use A.I. to finish it? Why wouldn't you switch to proctored exams, blue-book essays and in-class group work, if you knew that ChatGPT was as ubiquitous as Instagram and Snapchat among your students?

Second, schools should stop relying on A.I. detector programs to catch cheaters. There are dozens of these tools on the market now, all claiming to spot writing that was generated with A.I., and none of them work reliably well. They generate lots of false positives and can be easily fooled by techniques like paraphrasing. Don't believe me? Ask OpenAI, the maker of ChatGPT, which discontinued its A.I. writing detector this year because of a low rate of accuracy.

It's possible that, in the future, A.I. companies may be able to label their models' outputs to make them easier to spot (a practice known as watermarking), or that better A.I. detection tools may emerge. But for now, most A.I. text should be considered undetectable, and schools should spend their time (and technology budgets) elsewhere.

My third piece of advice (and the one that may get me the most angry emails from teachers) is that teachers should focus less on warning students about the shortcomings of generative A.I. than on figuring out what the technology does well.

Last year, many schools tried to scare students away from using A.I. by telling them that tools like ChatGPT are unreliable, prone to spitting out nonsensical answers and generic-sounding prose. These criticisms, while true of early A.I. chatbots, are less true of today's upgraded models, and clever students are figuring out how to get better results by giving the models more sophisticated prompts.

As a result, students at many schools are racing ahead of their instructors when it comes to understanding what generative A.I. can do, if used correctly. And the warnings about flawed A.I. systems issued last year may ring hollow this year, now that GPT-4 is capable of getting passing grades at Harvard.

Alex Kotran, the chief executive of the AI Education Project, a nonprofit that helps schools adopt A.I., told me that teachers needed to spend time using generative A.I. themselves to appreciate how useful it could be and how quickly it was improving.

"For most people, ChatGPT is still a party trick," he said. "If you don't really appreciate how profound of a tool this is, you're not going to take all the other steps that are going to be required."

There are resources for educators who want to bone up on A.I. in a hurry. Mr. Kotran's organization has a number of A.I.-focused lesson plans available for teachers, as does the International Society for Technology in Education. Some teachers have also begun assembling recommendations for their peers, such as a website made by faculty at Gettysburg College that provides practical advice on generative A.I. for professors.

In my experience, though, there is no substitute for hands-on experience. So I'd advise teachers to start experimenting with ChatGPT and other generative A.I. tools themselves, with the goal of getting as fluent in the technology as many of their students already are.

My last piece of advice for schools that are flummoxed by generative A.I. is this: Treat this year (the first full academic year of the post-ChatGPT era) as a learning experience, and don't expect to get everything right.

There are many ways A.I. could reshape the classroom. Ethan Mollick, a professor at the University of Pennsylvania's Wharton School, thinks the technology will lead more teachers to adopt a "flipped classroom" (having students learn material outside class and practice it in class), which has the advantage of being more resistant to A.I. cheating. Other educators I spoke with said they were experimenting with turning generative A.I. into a classroom collaborator, or a way for students to practice their skills at home with the help of a personalized A.I. tutor.

Some of these experiments won't work. Some will. That's OK. We're all still adjusting to this strange new technology in our midst, and the occasional stumble is to be expected.

But students need guidance when it comes to generative A.I., and schools that treat it as a passing fad or an enemy to be vanquished will miss an opportunity to help them.

"A lot of stuff's going to break," Mr. Mollick said. "And so we have to decide what we're doing, rather than fighting a retreat against the A.I."

Read the original here:

How Schools Can Survive A.I. - The New York Times

Young professionals are turning to AI to create headshots. But there … – NPR

The photo on the left was what Sophia Jones fed the AI service. It generated the two images on the right. (Photo: Sophia Jones)

Sophia Jones is juggling a lot right now. She just graduated from her master's program, started her first full-time job with SpaceX and recently got engaged. But thanks to technology, one thing isn't on her to-do list: getting professional headshots taken.

Jones is one of a growing number of young professionals who are relying not on photographers to take headshots, but on generative artificial intelligence.

The process is simple enough: Users send in up to a dozen images of themselves to a website or app. Then they pick from sample photos with a style or aesthetic they want to copy, and the computer does the rest. More than a dozen of these services are available online and in app stores.

For Jones, the use of AI-generated headshots is a matter of convenience, because she can tweak images she already has and use them in a professional setting. She found out about AI-generated headshots on TikTok, where they went viral recently, and has since used them in everything from her LinkedIn profile to graduation pamphlets, and in her workplace.

So far no one has noticed.

"I think you would have to do some serious investigating and zooming in to realize that it might not truly be me," Jones told NPR.

Still, many of these headshot services are far from perfect. Some of the generated photos give users extra hands or arms, and they have consistent issues around perfecting teeth and ears.

These issues are likely a result of the data sets that the apps and services are trained on, according to Jordan Harrod, a Ph.D. candidate who is popular on YouTube for explaining how AI technology works.

Harrod said some AI technology being used now is different in that it learns what styles a user is looking for and applies them "almost like a filter" to the images. To learn these styles, the technology combs through massive data sets for patterns, which means the results are based on the things it's learning from.

"Most of it just comes from how much training data represents things like hands and ears and hair in various different configurations that you'd see in real life," Harrod said. And when the data sets underrepresent some configurations, some users are left behind or bias creeps in.

Rona Wang is a postgraduate student in a joint MIT-Harvard computer science program. When she used an AI service, she noticed that some of the features it added made her look completely different.

"It made my skin kind of paler and took out the yellow undertones," Wang said, adding that it also gave her big blue eyes when her eyes are brown.

Others who have tried AI headshots have pointed out similar errors, noticing that some websites make women look curvier than they are and that they can wash out complexions and have trouble accurately depicting Black hairstyles.

"When it comes to AI and AI bias, it's important for us to be thinking about who's included and who's not included," Wang said.

For many, the decision may come down to cost and accessibility.

Grace White, a law student at the University of Arkansas, was an early adopter of AI headshots, posting about her experience on TikTok and attracting more than 50 million views.

The close-up photo on the right was one of 10 real images that Grace White submitted to an AI service, which generated the two images on the left. (Photo: Grace White)

Ultimately, White didn't use the generated images and opted for a professional photographer to take her photo, but she said she recognizes that not everyone has the same budget flexibility.

"I do understand people who may have a lower income, and they don't have the budget for a photographer," White said. "I do understand them maybe looking for the AI route just to have a cheaper option for professional headshots."

Go here to see the original:

Young professionals are turning to AI to create headshots. But there ... - NPR

Generative AI and data analytics on the agenda for Pamplin’s Day … – Virginia Tech

On Friday, Sept. 8, the second annual Day for Data symposium will gather industry leaders and academia together for a practical exploration of business analytics. The event is scheduled from 8 a.m. to 4 p.m. EDT in Virginia Tech's Owens Ballroom.

"Virginia Tech is a leader in advanced analytics programs and capabilities," said Jay Winkeler, executive director of the Center for Business Analytics. "Building off the success from last year, Day for Data will be bigger and bolder, with a focus on the AI [artificial intelligence] revolution happening all around us."

The conference, hosted by the Pamplin College of Business's Center for Business Analytics, is an opportunity for shared learning and thought leadership in the field of business analytics. Corporate leaders and university faculty converge to fill a robust agenda with expertise in a wide range of topics including generative AI and large language models, advanced data analytics, digital privacy, business leadership and intelligence, and more.

Beyond the rich learning component, Day for Data also lends itself to opportunities for professional advancement. With a strong turnout expected from both academia and industry, the event offers students a chance to see the real-world applications of their studies and companies an opportunity to scout for emerging talent.

"The interaction between students, faculty, and corporations is critical to harnessing the power of analytics and showing how skilled professionals translate analytics into meaningful business decisions," said Winkeler. "For industry professionals, it is a chance to tell their success stories and gain critical exposure to a talented student and faculty population."

The symposium will begin with opening remarks by Saonee Sarker, Richard E. Sorensen Dean for the Pamplin College of Business, followed by a keynote address from Andrew Allwine, senior director of data optimization for Norfolk Southern. During the session, Allwine will share his strategies for aggregating and translating complex datasets into actionable insights and tangible return on investment for organizational decision-makers.

Key contributions by faculty working within Pamplin include a session led by Voices of Privacy, an initiative spearheaded by Professors France Bélanger and Donna Wertalik that seeks to prepare society to manage information privacy amid the challenging modern digital landscape, as well as a research poster session highlighting the latest research in the field.

After a lunch and networking break, Keith Johnson, director of solutions architecture for partner systems integrators at Amazon Web Services, will deliver a presentation and live demonstration of Amazon's latest innovations with generative AI and large language models. Tracy Jones, data strategy and management executive for Guidehouse, will follow with a session on the opportunities and threats of artificial intelligence implementation, including case studies of organizations that neglected ethical principles and suffered consequences.

Both experts will return to join Kevin Davis, chief growth officer for MarathonTS, and Cayce Myers, director of graduate studies for the School of Communication at Virginia Tech, for a panel discussion and interactive conversation on artificial intelligence, including ethical, legal, and technical considerations. Day for Data will conclude with a networking reception.

Day for Data 2023 is sponsored by Norfolk Southern, Guidehouse, MarathonTS, Ernst & Young, and Amazon Web Services.

For more information on Day for Data and to register, please visit the event page.

Read the original post:

Generative AI and data analytics on the agenda for Pamplin's Day ... - Virginia Tech

AI helps robots manipulate objects with their whole bodies – MIT News

Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box.

Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To the robot, each spot where the box could touch (any point on the carrier's fingers, arms, and torso) represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.

Now MIT researchers have found a way to simplify this process, known as contact-rich manipulation planning. They use an AI technique called smoothing, which summarizes many contact events into a smaller number of decisions, to enable even a simple algorithm to quickly identify an effective manipulation plan for the robot.

While still in its early days, this method could potentially enable factories to use smaller, mobile robots that can manipulate objects with their entire arms or bodies, rather than large robotic arms that can only grasp using fingertips. This may help reduce energy consumption and drive down costs. In addition, this technique could be useful in robots sent on exploration missions to Mars or other solar system bodies, since they could adapt to the environment quickly using only an onboard computer.

"Rather than thinking about this as a black-box system, if we can leverage the structure of these kinds of robotic systems using models, there is an opportunity to accelerate the whole procedure of trying to make these decisions and come up with contact-rich plans," says H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this technique.

Joining Suh on the paper are co-lead author Tao Pang PhD '23, a roboticist at the Boston Dynamics AI Institute; Lujie Yang, an EECS graduate student; and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research appears this week in IEEE Transactions on Robotics.

Learning about learning

Reinforcement learning is a machine-learning technique where an agent, like a robot, learns to complete a task through trial and error with a reward for getting closer to a goal. Researchers say this type of learning takes a black-box approach because the system must learn everything about the world through trial and error.

It has been used effectively for contact-rich manipulation planning, where the robot seeks to learn the best way to move an object in a specified manner.

But because there may be billions of potential contact points that a robot must reason about when determining how to use its fingers, hands, arms, and body to interact with an object, this trial-and-error approach requires a great deal of computation.

"Reinforcement learning may need to go through millions of years in simulation time to actually be able to learn a policy," Suh adds.
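To see why that is so expensive, here is a minimal sketch of the black-box loop in Python. The one-dimensional task, reward function, and trial budget are all invented for illustration; this is plain trial-and-error search, not the researchers' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(action):
    # Hidden objective the agent cannot inspect directly; it only sees
    # noisy reward samples. The true optimum is at action = 0.7.
    return -(action - 0.7) ** 2 + rng.normal(0.0, 0.01)

# Black-box trial and error: propose actions at random, observe the
# reward, and keep whichever action scored best.
best_action, best_reward = None, -np.inf
for trial in range(500):
    candidate = rng.uniform(0.0, 1.0)
    r = reward(candidate)
    if r > best_reward:
        best_action, best_reward = candidate, r

print(f"best action found: {best_action:.2f}")  # lands near 0.7
```

Five hundred samples are plenty in one dimension, but the budget explodes as the dimensions (fingers, arms, torso, contact points) multiply, which is the cost Suh is describing.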

On the other hand, if researchers specifically design a physics-based model using their knowledge of the system and the task they want the robot to accomplish, that model incorporates structure about this world that makes it more efficient.

Yet physics-based approaches aren't as effective as reinforcement learning when it comes to contact-rich manipulation planning. Suh and Pang wondered why.

They conducted a detailed analysis and found that a technique known as smoothing is what enables reinforcement learning to perform so well.

Many of the decisions a robot could make when determining how to manipulate an object aren't important in the grand scheme of things. For instance, each infinitesimal adjustment of one finger, whether or not it results in contact with the object, doesn't matter very much. Smoothing averages away many of those unimportant, intermediate decisions, leaving a few important ones.

Reinforcement learning performs smoothing implicitly by trying many contact points and then computing a weighted average of the results. Drawing on this insight, the MIT researchers designed a simple model that performs a similar type of smoothing, enabling it to focus on core robot-object interactions and predict long-term behavior. They showed that this approach could be just as effective as reinforcement learning at generating complex plans.
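That weighted averaging is what randomized smoothing makes explicit. Below is a minimal sketch, assuming a toy one-dimensional "contact" reward with a hard discontinuity; the reward function, noise scale, and step size are all invented for illustration, not taken from the paper. Gaussian perturbations of the decision variable are averaged to estimate a smoothed reward, and the same samples yield a zeroth-order gradient estimate that plain gradient ascent can follow even where the true reward is flat or discontinuous.

```python
import numpy as np

def contact_reward(q):
    # Toy nonsmooth objective: zero reward until the finger position q
    # makes contact (a hard step at q = 1.0), then a smooth peak at
    # q = 1.2. A stand-in for a contact-rich cost, invented here.
    return np.where(q > 1.0, 1.0 - (q - 1.2) ** 2, 0.0)

def smoothed_reward_and_grad(q, rho=0.1, n_samples=2000):
    # Randomized smoothing: average the reward over Gaussian
    # perturbations of q. The same weighted average of outcomes gives
    # a zeroth-order (score-function) estimate of the gradient.
    rng = np.random.default_rng(0)          # fixed samples, for stability
    w = rng.normal(0.0, rho, size=n_samples)
    f = contact_reward(q + w)
    return f.mean(), (f * w).mean() / rho**2

# Gradient ascent on the smoothed reward. Starting out of contact at
# q = 0.9, the true gradient is exactly zero, but the smoothed
# gradient still points toward the contact region.
q = 0.9
for _ in range(50):
    _, grad = smoothed_reward_and_grad(q)
    q += 0.05 * grad
print(f"final q = {q:.3f}")  # ends in contact near the peak; smoothing
                             # biases the optimum slightly past q = 1.2
```

A larger noise scale makes the flat, no-contact region easier to escape but blurs the objective more, pushing the smoothed optimum further from the true peak; choosing that scale is part of designing the smoothing.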

"If you know a bit more about your problem, you can design more efficient algorithms," Pang says.

A winning combination

Even though smoothing greatly simplifies the decisions, searching through the remaining decisions can still be a difficult problem. So, the researchers combined their model with an algorithm that can rapidly and efficiently search through all possible decisions the robot could make.

With this combination, the computation time was cut down to about a minute on a standard laptop.

They first tested their approach in simulations where robotic hands were given tasks like moving a pen to a desired configuration, opening a door, or picking up a plate. In each instance, their model-based approach achieved the same performance as reinforcement learning, but in a fraction of the time. They saw similar results when they tested their model in hardware on real robotic arms.

"The same ideas that enable whole-body manipulation also work for planning with dexterous, human-like hands. Previously, most researchers said that reinforcement learning was the only approach that scaled to dexterous hands, but Terry and Tao showed that by taking this key idea of (randomized) smoothing from reinforcement learning, they can make more traditional planning methods work extremely well, too," Tedrake says.

However, the model they developed relies on a simpler approximation of the real world, so it cannot handle very dynamic motions, such as objects falling. While effective for slower manipulation tasks, their approach cannot create a plan that would enable a robot to toss a can into a trash bin, for instance. In the future, the researchers plan to enhance their technique so it could tackle these highly dynamic motions.

"If you study your models carefully and really understand the problem you are trying to solve, there are definitely some gains you can achieve. There are benefits to doing things that are beyond the black box," Suh says.

This work is funded, in part, by Amazon, MIT Lincoln Laboratory, the National Science Foundation, and the Ocado Group.

See more here:

AI helps robots manipulate objects with their whole bodies - MIT News