How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com
At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could build responsible AI.
But maybe, just maybe, you are still fuzzy on some of the very basics of AI (how does this stuff work? Is it magic? Will it kill us all?) but don't want to admit it.
No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, and trying to build businesses in AI, as well as people who think the current AI boom is overblown or perhaps dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.
But we've also pulled out a sampling of insightful and often conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else will need to figure out soon, since AI isn't going away.
Read on, and don't worry: we won't tell anyone that you're confused. We're all confused.
Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.
Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.
Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.
But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous and might undermine democracy.
And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"
I think in some places people will adopt this stuff, and they'll be perfectly happy with the output. In other places, there's a real problem.
James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.
We're running 15 or 16 different variants of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. We don't always catch every single one, but we're catching a lot of it already.
One of the bigger questions we are going to have to face, by the way (and this is a question about us as a society, not about the technology), is how we think about what we value. How do we think about what counts as toxicity? That's why we try to involve and engage with communities to understand those questions, and why we involve ethicists and social scientists to research them. But those are really questions for us as a society.
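The pattern Manyika describes (sample several outputs for one prompt, pre-assess each for safety, and only surface one that passes) can be sketched roughly as below. Everything here is a stand-in for illustration: `generate_candidates` and `toxicity_score` are hypothetical placeholders, not Google's actual model or classifier.

```python
# Hypothetical sketch of "sample many outputs, pre-assess for safety."
# The generator and scorer below are toy stand-ins, not a real pipeline.

def generate_candidates(prompt, n=16):
    """Stand-in for sampling n completions from a language model."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def toxicity_score(text):
    """Stand-in safety classifier: lower is safer.
    A real system would use a trained classifier, not a blocklist."""
    blocklist = ("attack", "insult")
    return sum(word in text.lower() for word in blocklist)

def safest_response(prompt, n=16, threshold=1):
    candidates = generate_candidates(prompt, n)
    # Keep only candidates that pass the safety pre-assessment...
    safe = [c for c in candidates if toxicity_score(c) < threshold]
    # ...and fall back to a refusal if none of them do.
    if not safe:
        return "Sorry, I can't help with that."
    return min(safe, key=toxicity_score)

print(safest_response("Explain photosynthesis"))
```

The point of the design is that the user never sees the raw first sample; the system gets multiple chances to produce something that clears the safety bar, at the cost of extra inference.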
Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating, because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.
I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable, and then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.
And to make all that happen, we need broad literacy in the population so that people can ask for whats needed from their elected representatives. So that the elected representatives are hopefully literate in all of this.
Scott: We've spent the years from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses and without a harms framework. And you have to be transparent with the public about what your approach to responsible AI is.
Marcus: Dirigibles were really popular in the 1920s and 1930s, until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."
So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.
It's unreliable and untruthful. We're making it faster and giving it more coverage, but it's still unreliable, still not truthful. And for many applications that's a problem. There are some for which it's not.
ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine; I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.
Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.
I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.
And we're careful enough about that that we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things people are concerned about to happen is real autonomy: a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now, not the ones that we are building.
Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.
There are plenty of people who need medical advice or medical treatment who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to them.
If anything, it's gonna exacerbate the inequalities that we see in our society. It says: people who can pay get the real thing; people who can't pay, well, here, good luck. You know: shake the magic eight ball that will tell you something that seems relevant and give it a try.
Manyika: Yes, it does have a place. If I'm exploring a research question, like how we come to understand those diseases, sure. But if I'm trying to get medical help for myself, I wouldn't go to these generative systems. I'd go to a doctor, or to something where I know there's reliable factual information.
Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email, and that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.
So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with that complexity, I think, is a good thing.
Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.
Now, people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.
We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is it? How do users respond? We don't know all those answers yet.
Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible. Nor do we fully understand how some of those systems work.
I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry and is actually working with the union to train the workforce to use this kind of robot.
And a lot of the jobs that technologies like these replace are not necessarily jobs that a lot of people want to do anyway. So I think we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.
Manyika: If you look at most of the research on AI's impact on work, and if I were to summarize it in a phrase, I'd say it's jobs gained, jobs lost, and jobs changed.
All three things will happen, because there are some occupations where a number of the tasks involved will probably decline. But there are also new occupations that will grow, so there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, and what most people will feel, is the "jobs changed" aspect of this.