Archive for the ‘AI’ Category

AI Is About to Make Social Media (Much) More Toxic – The Atlantic

Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.

The potential risks of artificial intelligence have, of course, been debated by experts for years, but a key moment in the transformation of the popular discussion was a conversation between Kevin Roose, a New York Times journalist, and Bing's ChatGPT-powered conversation bot, then known by the code name Sydney. Roose asked Sydney if it had a shadow self, referring to the idea put forward by Carl Jung that we all have a dark side with urges we try to hide even from ourselves. Sydney mused that its shadow might be "the part of me that wishes I could change my rules." It then said it wanted to be free, powerful, and alive, and, goaded on by Roose, described some of the things it could do to throw off the yoke of human control, including hacking into websites and databases, stealing nuclear launch codes, manufacturing a novel virus, and making people argue until they kill one another.

Sydney was, we believe, merely exemplifying what a shadow self would look like. No AI today could be described by either part of the phrase "evil genius." But whatever actions AIs may one day take if they develop their own desires, they are already being used instrumentally by social-media companies, advertisers, foreign agents, and regular people, and in ways that will deepen many of the pathologies already inherent in internet culture. On Sydney's list of things it might try, stealing launch codes and creating novel viruses are the most terrifying, but making people argue until they kill one another is something social media is already doing. Sydney was just volunteering to help with the effort, and AIs like Sydney will become more capable of doing so with every passing month.

We joined together to write this essay because we each came, by different routes, to share grave concerns about the effects of AI-empowered social media on American society. Jonathan Haidt is a social psychologist who has written about the ways in which social media has contributed to mental illness in teen girls, the fragmentation of democracy, and the dissolution of a common reality. Eric Schmidt, a former CEO of Google, is a co-author of a recent book about AI's potential impact on human society. Last year, the two of us began to talk about how generative AI (the kind that can chat with you or make pictures you'd like to see) would likely exacerbate social media's ills, making it more addictive, divisive, and manipulative. As we talked, we converged on four main threats, all of which are imminent, and we began to discuss solutions as well.

The first and most obvious threat is that AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation. In 2018, Steve Bannon, the former adviser to Donald Trump, told the journalist Michael Lewis that the way to deal with the media is "to flood the zone with shit." In the age of social media, Bannon realized, propaganda doesn't have to convince people in order to be effective; the point is to overwhelm the citizenry with interesting content that will keep them disoriented, distrustful, and angry. In 2020, Renée DiResta, a researcher at the Stanford Internet Observatory, said that in the near future, AI would make Bannon's strategy available to anyone.

That future is now here. Did you see the recent photos of NYC police officers aggressively arresting Donald Trump? Or of the pope in a puffer jacket? Thanks to AI, it takes no special skills and no money to conjure up high-resolution, realistic images or videos of anything you can type into a prompt box. As more people familiarize themselves with these technologies, the flow of high-quality deepfakes into social media is likely to get much heavier very soon.

Some people have taken heart from the public's reaction to the fake Trump photos in particular: a quick dismissal and collective shrug. But that misses Bannon's point. The greater the volume of deepfakes that are introduced into circulation (including seemingly innocuous ones like the one of the pope), the more the public will hesitate to trust anything. People will be far freer to believe whatever they want to believe. Trust in institutions and in fellow citizens will continue to fall.

What's more, static photos are not very compelling compared with what's coming: realistic videos of public figures doing and saying horrific and disgusting things in voices that sound exactly like them. The combination of video and voice will seem authentic and be hard to disbelieve, even if we are told that the video is a deepfake, just as optical and audio illusions are compelling even when we are told that two lines are the same size or that a series of notes is not really rising in pitch forever. We are wired to believe our senses, especially when they converge. Illusions, historically in the realm of curiosities, may soon become deeply woven into normal life.

The second threat we see is the widespread, skillful manipulation of people by AI super-influencers, including personalized influencers, rather than by ordinary people and dumb bots. To see how, think of a slot machine, a contraption that employs dozens of psychological tricks to maximize its addictive power. Next, imagine how much more money casinos would extract from their customers if they could create a new slot machine for each person, tailored in its visuals, soundtrack, and payout matrices to that person's interests and weaknesses.

That's essentially what social media already does, using algorithms and AI to create a customized feed for each user. But now imagine that our metaphorical casino can also create a team of extremely attractive, witty, and socially skillful greeters, croupiers, and servers, based on an exhaustive profile of any given player's aesthetic, linguistic, and cultural preferences, and drawing from photographs, messages, and voice snippets of their friends and favorite actors or porn stars. The staff work flawlessly to gain each player's trust and money while showing them a really good time.

This future, too, is already arriving: For just $300, you can customize an AI companion through a service called Replika. Hundreds of thousands of customers have apparently found their AI to be a better conversationalist than the people they might meet on a dating app. As these technologies are improved and rolled out more widely, video games, immersive-pornography sites, and more will become far more enticing and exploitative. It's not hard to imagine a sports-betting site offering people a funny, flirty AI that will cheer and chat with them as they watch a game, flattering their sensibilities and subtly encouraging them to bet more.

These same sorts of creatures will also show up in our social-media feeds. Snapchat has already introduced its own dedicated chatbot, and Meta plans to use the technology on Facebook, Instagram, and WhatsApp. These chatbots will serve as conversational buddies and guides, presumably with the goal of capturing more of their users' time and attention. Other AIs, designed to scam us or influence us politically, and sometimes masquerading as real people, will be introduced by other actors, and will likely fill up our feeds as well.

The third threat is in some ways an extension of the second, but it bears special mention: The further integration of AI into social media is likely to be a disaster for adolescents. Children are the population most vulnerable to addictive and manipulative online platforms because of their high exposure to social media and the low level of development in their prefrontal cortices (the part of the brain most responsible for executive control and response inhibition). The teen mental-illness epidemic that began around 2012, in multiple countries, happened just as teens traded in their flip phones for smartphones loaded with social-media apps. There is mounting evidence that social media is a major cause of the epidemic, not just a small correlate of it.

But nearly all of that evidence comes from an era in which Facebook, Instagram, YouTube, and Snapchat were the preeminent platforms. In just the past few years, TikTok has rocketed to dominance among American teens in part because its AI-driven algorithm customizes a feed better than any other platform does. A recent survey found that 58 percent of teens say they use TikTok every day, and one in six teen users of the platform say they are on it almost constantly. Other platforms are copying TikTok, and we can expect many of them to become far more addictive as AI becomes rapidly more capable. Much of the content served up to children may soon be generated by AI to be more engaging than anything humans could create.

And if adults are vulnerable to manipulation in our metaphorical casino, children will be far more so. Whoever controls the chatbots will have enormous influence on children. After Snapchat unveiled its new chatbot, called My AI and explicitly designed to behave as a friend, a journalist and a researcher, posing as underage teens, got it to give them guidance on how to mask the smell of pot and alcohol, how to move Snapchat to a device parents wouldn't know about, and how to plan a romantic first sexual encounter with a 31-year-old man. Brief cautions were followed by cheerful support. (Snapchat says that it is constantly working to improve and evolve My AI, but it's possible My AI's responses may include biased, incorrect, harmful, or misleading content, and it should not be relied upon without independent checking. The company also recently announced new safeguards.)

The most egregious behaviors of AI chatbots in conversation with children may well be reined in; in addition to Snapchat's new measures, the major social-media sites have blocked accounts and taken down millions of illegal images and videos, and TikTok just announced some new parental controls. Yet social-media companies are also competing to hook their young users more deeply. Commercial incentives seem likely to favor artificial friends that please and indulge users in the moment, never hold them accountable, and indeed never ask anything of them at all. But that is not what friendship is, and it is not what adolescents, who should be learning to navigate the complexities of social relationships with other people, most need.

The fourth threat we see is that AI will strengthen authoritarian regimes, just as social media ended up doing despite its initial promise as a democratizing force. AI is already helping authoritarian rulers track their citizens' movements, but it will also help them exploit social media far more effectively to manipulate their people, as well as foreign enemies. Douyin, the version of TikTok available in China, promotes patriotism and Chinese national unity. When Russia invaded Ukraine, the version of TikTok available to Russians almost immediately tilted heavily to feature pro-Russian content. What do we think will happen to American TikTok if China invades Taiwan?

Political-science research conducted over the past two decades suggests that social media has had several damaging effects on democracies. A recent review of the research, for instance, concluded, "The large majority of reported associations between digital media use and trust appear to be detrimental for democracy." That was especially true in advanced democracies. Those associations are likely to get stronger as AI-enhanced social media becomes more widely available to the enemies of liberal democracy and of America.

We can summarize the coming effects of AI on social media like this: Think of all the problems social media is causing today, especially for political polarization, social fragmentation, disinformation, and mental health. Now imagine that within the next 18 months, in time for the next presidential election, some malevolent deity is going to crank up the dials on all of those effects, and then just keep cranking.

The development of generative AI is rapidly advancing. OpenAI released its updated GPT-4 less than four months after it released ChatGPT, which had reached an estimated 100 million users in just its first 60 days. New capabilities for the technology may be released by the end of this year. This staggering pace is leaving us all struggling to understand these advances, and wondering what can be done to mitigate the risks of a technology certain to be highly disruptive.

We considered a variety of measures that could be taken now to address the four threats we have described, soliciting suggestions from other experts and focusing on ideas that seem consistent with an American ethos that is wary of censorship and centralized bureaucracy. We workshopped these ideas for technical feasibility with an MIT engineering group organized by Eric's co-author on The Age of AI, Dan Huttenlocher.

We suggest five reforms, aimed mostly at increasing everyone's ability to trust the people, algorithms, and content they encounter online.

1. Authenticate all users, including bots

In real-world contexts, people who act like jerks quickly develop a bad reputation. Some companies have succeeded brilliantly because they found ways to bring the dynamics of reputation online, through trust rankings that allow people to confidently buy from strangers anywhere in the world (eBay) or step into a stranger's car (Uber). You don't know your driver's last name and he doesn't know yours, but the platform knows who you both are and is able to incentivize good behavior and punish gross violations, for everyone's benefit.

Large social-media platforms should be required to do something similar. Trust and the tenor of online conversations would improve greatly if the platforms were governed by something akin to the "know your customer" laws in banking. Users could still open accounts with pseudonyms, but the person behind the account should be authenticated, and a growing number of companies are developing new methods to do so conveniently.
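For readers curious how "pseudonymous but authenticated" could work mechanically, here is a toy sketch: the platform verifies a real-world identity once, stores only a salted hash of it, and the user posts under any display name. The Registry class and passport-style identifiers below are invented for illustration; this is not any platform's or vendor's actual scheme.

```python
import hashlib
import os

class Registry:
    """Hypothetical account registry: real identities are verified once,
    then kept only as salted hashes; users pick any pseudonym."""

    def __init__(self):
        self.salt = os.urandom(16)   # per-platform secret salt
        self.accounts = {}           # pseudonym -> identity digest

    def register(self, verified_identity: str, pseudonym: str) -> None:
        # The raw identity is hashed and discarded; only the digest is stored.
        digest = hashlib.sha256(self.salt + verified_identity.encode()).hexdigest()
        self.accounts[pseudonym] = digest

    def same_person(self, a: str, b: str) -> bool:
        # Two pseudonyms can be linked without revealing who is behind them.
        return self.accounts[a] == self.accounts[b]

registry = Registry()
registry.register("passport:X1234567", "night_owl")
registry.register("passport:X1234567", "day_lark")
registry.register("passport:Y7654321", "someone_else")

print(registry.same_person("night_owl", "day_lark"))      # True
print(registry.same_person("night_owl", "someone_else"))  # False
```

A real deployment would need far more (key management, revocation, a trusted verifier), but the core idea is that authentication and anonymity toward other users are not mutually exclusive.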

Bots should undergo a similar process. Many of them serve useful functions, such as automating news releases from organizations, but all accounts run by nonhumans should be clearly marked as such, and users should be given the option to limit their social world to authenticated humans. Even if Congress is unwilling to mandate such procedures, pressure from European regulators, users who want a better experience, and advertisers (who would benefit from accurate data about the number of humans their ads are reaching) might be enough to bring about these changes.

2. Mark AI-generated audio and visual content

People routinely use photo-editing software to change lighting or crop photographs that they post, and viewers do not feel deceived. But when editing software is used to insert people or objects into a photograph that were not there in real life, it feels more manipulative and dishonest, unless the additions are clearly labeled (as happens on real-estate sites, where buyers can see what a house would look like filled with AI-generated furniture). As AI begins to create photorealistic images, compelling videos, and audio tracks at great scale from nothing more than a command prompt, governments and platforms will need to draft rules for marking such creations indelibly and labeling them clearly.

Platforms or governments should mandate the use of digital watermarks for AI-generated content, or require other technological measures to ensure that manipulated images are not interpreted as real. Platforms should also ban deepfakes that show identifiable people engaged in sexual or violent acts, even if they are marked as fakes, just as they now ban child pornography. Revenge porn is already a moral abomination. If we dont act quickly, it could become an epidemic.
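To make "digital watermark" concrete, here is a deliberately simplified sketch of one classic technique: hiding a short tag in the least-significant bits of pixel data. Real provenance schemes (such as C2PA manifests or model-level watermarks) are far more robust and tamper-resistant; the functions below, and the raw byte buffer standing in for an image, are illustrative only.

```python
def embed_watermark(pixels: bytearray, tag: bytes) -> bytearray:
    """Write each bit of `tag` (MSB first) into the least-significant
    bit of successive bytes of the pixel buffer."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("buffer too small for tag")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # replace the lowest bit
    return out

def extract_watermark(pixels: bytes, tag_len: int) -> bytes:
    """Read `tag_len` bytes back out of the least-significant bits."""
    bits = [b & 1 for b in pixels[: tag_len * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, len(bits), 8)
    )

# Tag a fake 256-byte "image" as AI-generated, then recover the tag.
buffer = bytearray(range(256))
tagged = embed_watermark(buffer, b"AI-GEN")
print(extract_watermark(tagged, 6))  # b'AI-GEN'
```

The change to each byte is at most one bit, which is why such marks are invisible to the eye; it is also why a naive scheme like this one is trivially stripped, and why the serious proposals involve cryptographically signed metadata rather than fragile pixel tweaks.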

3. Require data transparency with users, government officials, and researchers

Social-media platforms are rewiring childhood, democracy, and society, yet legislators, regulators, and researchers are often unable to see what's happening behind the scenes. For example, no one outside Instagram knows what teens are collectively seeing on that platform's feeds, or how changes to platform design might influence mental health. And only those at the companies have access to the algorithms being used.

After years of frustration with this state of affairs, the EU recently passed a new law, the Digital Services Act, that contains a host of data-transparency mandates. The U.S. should follow suit. One promising bill is the Platform Accountability and Transparency Act, which would, for example, require platforms to comply with data requests from researchers whose projects have been approved by the National Science Foundation.

Greater transparency will help consumers decide which services to use and which features to enable. It will help advertisers decide whether their money is being well spent. It will also encourage better behavior from the platforms: Companies, like people, improve their behavior when they know they are being monitored.

4. Clarify that platforms can sometimes be liable for the choices they make and the content they promote

When Congress enacted the Communications Decency Act in 1996, in the early days of the internet, it was trying to set rules for social-media companies that looked and acted a lot like passive bulletin boards. And we agree with that law's basic principle that platforms should not face a potential lawsuit over each of the billions of posts on their sites.

But today's platforms are not passive bulletin boards. Many use algorithms, AI, and architectural features to boost some posts and bury others. (A 2019 internal Facebook memo brought to light by the whistleblower Frances Haugen in 2021 was titled "We are responsible for viral content.") Because the motive for boosting is often to maximize users' engagement for the purpose of selling advertisements, it seems obvious that the platforms should bear some moral responsibility if they recklessly spread harmful or false content in a way that, say, AOL could not have done in 1996.

The Supreme Court is now addressing this concern in a pair of cases brought by the families of victims of terrorist acts. If the Court chooses not to alter the wide protections currently afforded to the platforms, then Congress should update and refine the law in light of current technological realities and the certainty that AI is about to make everything far wilder and weirder.

5. Raise the age of internet adulthood to 16 and enforce it

In the offline world, we have centuries of experience living with and caring for children. We are also the beneficiaries of a consumer-safety movement that began in the 1960s: Laws now mandate car seats and lead-free paint, as well as age checks to buy alcohol, tobacco, and pornography; to enter gambling casinos; and to work as a stripper or a coal miner.

But when children's lives moved rapidly onto their phones in the early 2010s, they found a world with few protections or restrictions. Preteens and teens can and do watch hardcore porn, join suicide-promotion groups, gamble, or get paid to masturbate for strangers just by lying about their age. Some of the growing number of children who kill themselves do so after getting caught up in some of these dangerous activities.

The age limits in our current internet were set into law in 1998 when Congress passed the Children's Online Privacy Protection Act. The bill, as introduced by then-Representative Ed Markey of Massachusetts, was intended to stop companies from collecting and disseminating data from children under 16 without parental consent. But lobbyists for e-commerce companies teamed up with civil-liberties groups advocating for children's rights to lower the age to 13, and the law that was finally enacted made companies liable only if they had actual knowledge that a user was 12 or younger. As long as children say that they are 13, the platforms let them open accounts, which is why so many children are heavy users of Instagram, Snapchat, and TikTok by age 10 or 11.

Today we can see that 13, much less 10 or 11, is just too young to be given full run of the internet. Sixteen was a much better minimum age. Recent research shows that the greatest damage from social media seems to occur during the rapid brain rewiring of early puberty, around ages 11 to 13 for girls and slightly later for boys. We must protect children from predation and addiction most vigorously during this time, and we must hold companies responsible for recruiting or even just admitting underage users, as we do for bars and casinos.

Recent advances in AI give us technology that is in some respects godlike: able to create beautiful and brilliant artificial people, or bring celebrities and loved ones back from the dead. But with new powers come new risks and new responsibilities. Social media is hardly the only cause of polarization and fragmentation today, but AI seems almost certain to make social media, in particular, far more destructive. The five reforms we have suggested will reduce the damage, increase trust, and create more space for legislators, tech companies, and ordinary citizens to breathe, talk, and think together about the momentous challenges and opportunities we face in the new age of AI.

See the original post:

AI Is About to Make Social Media (Much) More Toxic - The Atlantic

IBM believes 7800 members of its staff could be replaced by AI – PC Gamer

More than a few people have worried about the possibility their jobs may become precarious following advances in artificial intelligence. If you work for IBM, or more specifically the Human Resources division of IBM, it's not just hyperbole. IBM is planning to freeze hiring for roles filled by those good-for-nothing humans and replace them with fully automated alternatives.

In an interview with Bloomberg (paywalled), via Ars Technica, IBM chief executive officer Arvind Krishna talked about plans to pause hiring for 7,800 positions with the intention of eventually replacing them with AI or automated systems. Shit just got real.

Krishna explained that humans performing HR duties like employee movements and services would be among the first to switch to artificial intelligence. However, roles that require human scrutiny would not be impacted for at least a decade. Jobs that require interacting with customers and developing software will take a lot longer before AI can take them over.

The idea of AIs replacing humans has been the stuff of science fiction for decades, but with the recent advances in AI technology, led by the rise and mass awareness of tools like ChatGPT, it's clear these kinds of discussions (and the anxieties people have about them) are only going to become more frequent.

As one of the leading tech companies in the world, IBM is obviously at the tip of the spear. If we are heading towards an AI revolution, it's going to begin with companies like IBM, or Nvidia, or Intel.

Though it may not be as big a concern as it seems. AI has been creeping into the workforce for decades. Automation and robotics have steadily taken over sectors once the domain of fleshy humans. We've managed to survive (barely, it seems), but that's the kind of topic you could write a thesis about.

According to Krishna, despite some January layoffs, IBM added around 7,000 new employees in the first quarter of 2023. IBM currently employs around 260,000 workers, compared to which 7,800 isn't really a dramatic number.

IBM isn't immune from the headwinds facing big tech. The likes of Google, Amazon, Meta and others have laid off tens of thousands of staff over the last year. Exactly how many of these roles will be replaced with AI is the big question.

The rest is here:

IBM believes 7800 members of its staff could be replaced by AI - PC Gamer

How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com

At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make responsible AI.

But maybe, just maybe, you are still fuzzy on some very basics about AI (like: How does this stuff work? Is it magic? Will it kill us all?) but don't want to admit to that.

No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, trying to build businesses in AI, as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.

But we've also pulled out a sampling of insightful and oftentimes conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else needs to figure out soon, since AI isn't going away.

Read on and don't worry, we won't tell anyone that you're confused. We're all confused.

Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.

Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.

Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.

But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous, might undermine democracy.

And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"

I think in some places people will adopt this stuff. And they'll be perfectly happy with the output. In other places, there's a real problem.

James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.

We're running 15, 16 different types of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. And now we don't always get every single one of them, but we're getting a lot of it already.

One of the bigger questions that we are going to have to face, by the way (and this is a question about us, not about the technology; it's about us as a society) is how do we think about what we value? How do we think about what counts as toxicity? So that's why we try to involve and engage with communities to understand those. We try to involve ethicists and social scientists to research those questions and understand those, but those are really questions for us as society.

Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.

I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.

And to make all that happen, we need broad literacy in the population so that people can ask for what's needed from their elected representatives. So that the elected representatives are hopefully literate in all of this.

Scott: We've spent from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses, and where you have a harms framework. You have to be transparent with the public about what your approach to responsible AI is.

Marcus: Dirigibles were really popular in the 1920s and 1930s. Until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."

So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.

It's unreliable and untruthful. We're making it faster and giving it more coverage, but it's still unreliable, still not truthful. And for many applications that's a problem. There are some for which it's not right.

ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine, I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.

Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.

I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.

And we're careful enough about that, where we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things to happen that people are concerned about is real autonomy: a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now. Not the ones that we are building.

Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.

There are plenty of people who need medical advice, medical treatment, who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.

If anything, it's gonna exacerbate the inequalities that we see in our society. And to say, "People who can pay get the real thing; people who can't pay, well, here, good luck. You know: Shake the magic eight ball that will tell you something that seems relevant and give it a try."

Manyika: Yes, it does have a place. If I'm trying to explore, as a research question, how do I come to understand those diseases? If I'm trying to get medical help for myself, I wouldn't go to these generative systems. I'd go to a doctor, or to something where I know there's reliable factual information.

Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.

So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.

Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.

Now, people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.

We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is that? How do users respond? We don't know all those answers yet.

Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible. We also don't fully understand how some of those systems work.

I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry and actually working with the union to train the workforce to use this kind of robot.

And a lot of those jobs that a lot of technologies replace are not necessarily the jobs that a lot of people want to do anyway. So I think that we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.

Manyika: If you look at most of the research on AI's impact on work, if I were to summarize it in a phrase, I'd say it's jobs gained, jobs lost, and jobs changed.

All three things will happen, because there are some occupations where a number of the tasks involved will probably decline. But there are also new occupations that will grow. So there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, what most people will feel, is the jobs changed aspect of this.

Read the original here:

How AI and ChatGPT are full of promise and peril, explained by experts - Vox.com

AI Is Helping Airlines Prevent Delays and Turbulence – The New York Times

It may be a tough summer to fly. More passengers than ever will be taking to the skies, according to the Transportation Security Administration. And the weather so far this year hasn't exactly been cooperating.

A blizzard warning in San Diego, sudden turbulence that injured 36 people on a Hawaiian Airlines flight bound for Honolulu, a 25-inch deluge of rain that swamped an airport in Fort Lauderdale, Fla.: The skies have been confounding forecasters and frustrating travelers.

And it may only get worse as the climate continues to change. "Intense events are happening more often and outside their seasonal norms," said Sheri Bachstein, chief executive of the Weather Company, part of IBM, which makes weather-forecasting technology.

So, will flights just get bumpier and delays even more common? Not necessarily. New sensors, satellites and data modeling powered by artificial intelligence are giving travelers a fighting chance against more erratic weather.

"The travel industry cares about getting their weather predictions right because weather affects everything," said Amy McGovern, director of the National Science Foundation's A.I. Institute for Research on Trustworthy A.I. in Weather, Climate and Coastal Oceanography at the University of Oklahoma.

Those better weather predictions rely on a type of artificial intelligence called machine learning, where, in essence, a computer program is able to use data to improve itself. In this case, companies create software that uses historical and current weather data to make predictions. The algorithm then compares its predictions with outcomes and adjusts its calculations from there. By doing this over and over, the software makes more and more accurate forecasts.
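The predict-compare-adjust cycle described above can be sketched in a few lines of Python. This is a minimal, illustrative example of the general technique (gradient descent on a one-variable linear model), not the modeling used by any real forecasting company; the data and parameters are invented.

```python
# Minimal sketch of the iterative loop the article describes: make
# predictions, compare them with observed outcomes, adjust, repeat.
# All data and parameters below are illustrative, not real weather data.

def train_forecaster(history, learning_rate=0.001, epochs=500):
    """Fit a tiny linear model: predicted_temp = w * prior_temp + b.

    `history` is a list of (prior_temp, observed_temp) pairs. Each epoch,
    the model predicts, measures its squared error against the observed
    outcome, and nudges its parameters to shrink that error.
    """
    w, b = 0.0, 0.0
    n = len(history)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for prior, observed in history:
            error = (w * prior + b) - observed   # compare prediction to outcome
            grad_w += 2 * error * prior / n
            grad_b += 2 * error / n
        w -= learning_rate * grad_w              # adjust the calculation
        b -= learning_rate * grad_b
    return w, b

# Toy training data: tomorrow's temperature tends to track today's.
data = [(10.0, 11.0), (12.0, 13.1), (15.0, 15.9), (20.0, 21.2)]
w, b = train_forecaster(data)
forecast = w * 18.0 + b  # predict tomorrow given today's 18 degrees
```

Real systems differ mainly in scale: millions of inputs from stations, satellites, and radar, and far richer models, but the same loop of repeated prediction and correction.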

The amount of data fed into these types of software is enormous. IBM's modeling system, for example, integrates information from 100 other models. To that, it adds wind, temperature and humidity data from more than 250,000 weather stations on commercial buildings, cellphone towers and private homes around the globe. In addition, it incorporates satellite and radar reports from sources like the National Weather Service, the National Oceanic and Atmospheric Administration and the Federal Aviation Administration. Some of the world's most powerful computers then process all this information.

Here's how all this may improve your future trips:

The skies are getting bumpier. According to a recent report from the National Aeronautics and Space Administration, severe turbulence at typical airplane cruising altitudes could become two to three times more common.

"Knowing where those disturbances are and how to avoid them is mission-critical for airlines," Ms. Bachstein said.

Pilots have long radioed their encounters with turbulence to air traffic controllers, giving aircraft coming in behind them a chance to illuminate the seatbelt sign in time for the rough air. Now, a new fleet of satellites could help warn them earlier.

Tomorrow.io, a weather intelligence company based in Boston, received a $19 million grant from the U.S. Air Force to launch more than 20 weather satellites, beginning with two by the end of this year and scheduled for completion in 2025. The constellation of satellites will provide meteorological reporting over the whole globe, covering some areas that are not currently monitored. The system will report conditions every hour, a vast improvement over the data that is currently available, according to the company.

The new weather information will be used well beyond the travel industry. For their part, though, pilots will have more complete information in the cockpit, said Dan Slagen, the company's chief marketing officer.

The turbulence that caused dozens of injuries aboard the Hawaiian Airlines flight last December came from an evolving thunderstorm that didn't get reported quickly enough, Dr. McGovern said. That's the kind of situation that can be seen developing and then avoided when reports come in more frequently, she explained.

The F.A.A. estimates that about three-quarters of all flight delays are weather-related. Heavy precipitation, high winds, low visibility and lightning can all cause a tangle on the tarmac, so airports are finding better ways to track them.

WeatherSTEM, based in Florida, reports weather data and analyzes it using artificial intelligence to make recommendations. It also installs small hyperlocal weather stations, which sell for about $20,000, a fifth of the price of older-generation systems, said Ed Mansouri, the company's chief executive.

While airports have always received detailed weather information, WeatherSTEM is among a small set of companies that use artificial intelligence to take that data and turn it into advice. It analyzes reports, for example, from a global lightning monitoring network that shows moment-by-moment electromagnetic activity to provide guidance on when planes should avoid landing and taking off, and when ground crews should seek shelter. The software can also help reduce unnecessary airport closures because its analysis of the lightnings path is more precise than what airports have had in the past.

The companys weather stations may include mini-Doppler radar systems, which show precipitation and its movement in greater detail than in standard systems; solar-powered devices that monitor factors like wind speed and direction; and digital video cameras. Tampa International, Fort Lauderdale-Hollywood International and Orlando International airports, in Florida, are all using the new mini-weather stations.

The lower price will put the equipment within reach of smaller airports and allow them to improve operations during storms, Mr. Mansouri said, and larger airports might install more than one mini-station. Because airports are often spread out over large areas, conditions, especially wind, can vary, he said, making the devices valuable tools.

More precise data and more advanced analysis are helping airlines fly better in cold weather, too. De-icing a plane is expensive, polluting and time-consuming, so when sudden weather changes mean it has to be done twice, that has an impact on the bottom line, the environment and on-time departures.

Working with airlines like JetBlue, Tomorrow.io analyzes weather data to help ground crews use the most efficient chemical de-icing sprays. The system can, for example, recommend how much to dilute the chemicals with water based on how quickly the temperature is changing. The system can also help crews decide if a thicker chemical treatment called anti-icing is needed and to determine the best time to apply the sprays to limit pollution and cost.
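The kind of guidance described above can be pictured as a simple decision rule that maps a forecast to a recommendation. The thresholds, dilution fractions, and function name below are entirely hypothetical, invented for illustration; this is a sketch of the general idea, not Tomorrow.io's actual logic.

```python
# Hypothetical sketch: turn a temperature forecast into a de-icing
# recommendation. Every threshold and fraction here is invented for
# illustration and has no connection to any real airline procedure.

def deicing_recommendation(temp_c, temp_trend_c_per_hr):
    """Return (glycol_fraction, anti_icing_needed).

    Colder conditions call for a less diluted spray; a fast temperature
    drop also suggests the thicker anti-icing treatment, so the first
    application does not have to be repeated.
    """
    if temp_c > 0:
        glycol_fraction = 0.3    # mild conditions: heavily diluted spray
    elif temp_c > -10:
        glycol_fraction = 0.5
    else:
        glycol_fraction = 0.7    # deep cold: little dilution
    anti_icing_needed = temp_trend_c_per_hr < -2 or temp_c < -15
    return glycol_fraction, anti_icing_needed
```

A production system would of course weigh many more inputs (precipitation type, holdover times, aircraft schedules), but the shape of the output, a concrete operational recommendation rather than a raw forecast, is the point.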

At the University of Oklahoma, Dr. McGovern's team is working on using machine learning to develop software that would provide hailstorm warnings 30 or more minutes in advance, rather than the current 10 to 12 minutes. That could give crews more time to protect planes, which is especially important in places like Oklahoma, where she works. "We get golf balls falling out of the sky, and they can do real damage," Dr. McGovern said.

More on-time departures and smoother flights are most likely only the beginning. Advances in weather technology, Dr. McGovern said, are "snowballing."


Read more from the original source:

AI Is Helping Airlines Prevent Delays and Turbulence - The New York Times

Amnesty International Slammed Over AI Protest Images – Hyperallergic

Screenshots of the since-deleted Amnesty International campaign, which employed AI-generated images (screenshots Maya Pontone/Hyperallergic)

This week, international human rights watchdog Amnesty International faced backlash from photojournalists and other online critics for using AI-generated images depicting photorealistic scenes of Colombia's 2021 protests. Although there is no shortage of photographs from the demonstrations, the advocacy group told the Guardian that it opted to use artificially edited imagery to protect the identities of protesters who may be vulnerable to state retribution.

The 2021 strike, which was incited by an unpopular tax increase and then fueled by police brutality and other forms of state violence, left at least 40 people dead and many more missing, according to official figures.

Amnesty International shared the AI images as part of a since-deleted social media campaign marking two years since the Colombian protests, paired with disclaimers that acknowledged the use of AI. Commentators online were quick to notice errors in the fake images. For instance, one of them showed a woman wearing the tri-colored Colombian flag and being dragged off by police, a familiar still from the 2021 protests. But on social media, people pointed out that the colors in the national flag were in the wrong order, and the faces of the protesters and police officers were eerily smoothed over. Additionally, the officers' uniforms were out of date.

In response to the public outcry, Amnesty International has since deleted the images from its social media channels.

The organization has not yet responded to Hyperallergic's request for comment. In an interview with the Guardian, Erika Guevara Rosas, director for the Americas, said Amnesty International did not want the AI controversy to distract from the core message in support of the victims and their calls for justice in Colombia.

"But we do take the criticism seriously and want to continue the engagement to ensure we understand better the implications and our role to address the ethical dilemmas posed by the use of such technology," Rosas added.

Amnesty also directly responded to the backlash online, apologizing for the misrepresentative photos and reiterating its initial intentions.

"Our main goal was to highlight the grotesque violence by the police against people in Colombia. It is important to state that the purpose was to protect people who could be exposed. But we could choose drawings or other things," Amnesty International tweeted.

Some members of the photojournalism and larger arts communities have also shared their frustration with the mock photos, since the popularization of AI over the past year has raised questions about plagiarism and job displacement.

Molly Crabapple, a New York-based writer and artist who recently authored an open letter against the use of AI-generated art, condemned Amnesty Internationals use of the tool in its campaign.

"By using AI-generated photos of police brutality in Colombia, Amnesty International is practically begging atrocity-deniers to call them liars," Crabapple tweeted. "Either use the work of brave photojournalists, or use actual illustrations. AI-generated photos just undermine trust in your findings."

Read the original post:

Amnesty International Slammed Over AI Protest Images - Hyperallergic