How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com
At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make responsible AI.
But maybe, just maybe, you are still fuzzy on some very basic questions about AI (How does this stuff work? Is it magic? Will it kill us all?) but don't want to admit it.
No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, and trying to build businesses in AI, as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.
But we've also pulled out a sampling of insightful and oftentimes conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else needs to figure out soon, since AI isn't going away.
Read on, and don't worry: We won't tell anyone that you're confused. We're all confused.
Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.
Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.
Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.
But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous and might undermine democracy.
And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"
I think in some places people will adopt this stuff. And they'll be perfectly happy with the output. In other places, there's a real problem.
James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.
We're running 15 or 16 different variants of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. Now, we don't always catch every single one of them, but we're catching a lot of it already.
One of the bigger questions that we are going to have to face, by the way (and this is a question about us, not about the technology; it's about us as a society), is how do we think about what we value? How do we think about what counts as toxicity? That's why we try to involve and engage with communities to understand those questions. We try to involve ethicists and social scientists to research and understand them, but those are really questions for us as a society.
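The candidate-sampling safety check Manyika describes can be sketched roughly as follows. This is a minimal illustration, not Google's actual pipeline: `generate_candidates` and `toxicity_score` are hypothetical stand-ins for a model sampler and a learned safety classifier. The idea is simply to sample several completions for one prompt, score each for toxicity, and surface only a safe one.

```python
def generate_candidates(prompt: str, n: int = 16) -> list[str]:
    """Stand-in for sampling n completions of one prompt from a model."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]


def toxicity_score(text: str) -> float:
    """Stand-in for a safety classifier (0.0 = benign, 1.0 = toxic)."""
    return 0.9 if "toxic" in text.lower() else 0.1


def safest_response(prompt: str, n: int = 16, threshold: float = 0.5):
    """Pre-assess all candidates and return the lowest-scoring one,
    or None if even the best candidate exceeds the toxicity threshold."""
    candidates = generate_candidates(prompt, n)
    best = min(candidates, key=toxicity_score)
    return best if toxicity_score(best) < threshold else None
```

The key design point matches the quote: the response a user sees is not necessarily the first thing the model produced, but the best-scoring survivor of a batch that was filtered before display.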
Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating, because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.
I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.
And to make all that happen, we need broad literacy in the population so that people can ask for what's needed from their elected representatives, and so that the elected representatives are hopefully literate in all of this.
Scott: We've spent from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses and without a harms framework. You have to be transparent with the public about what your approach to responsible AI is.
Marcus: Dirigibles were really popular in the 1920s and 1930s, until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."
So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.
It's unreliable and untruthful. We're making it faster and giving it more coverage, but it's still unreliable, still not truthful. And for many applications that's a problem. There are some for which it's not.
ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine; I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.
Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.
I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.
And we're careful enough about that that we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things to happen that people are concerned about is real autonomy: a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now. Not the ones that we are building.
Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.
There are plenty of people who need medical advice and medical treatment who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.
If anything, it's gonna exacerbate the inequalities that we see in our society. It's to say: People who can pay get the real thing; people who can't pay, well, here, good luck. You know: Shake the magic eight ball that will tell you something that seems relevant and give it a try.
Manyika: Yes, it does have a place. If I'm trying to explore, as a research question, how do I come to understand those diseases? If I'm trying to get medical help for myself, I wouldn't go to these generative systems. I'd go to a doctor, or I'd go to something where I know there's reliable factual information.
Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.
So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.
Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.
Now, people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.
We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is that? How do users respond? We don't know all those answers yet.
Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible. We also don't fully understand how some of those systems work.
I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry and actually working with the union to train the workforce to use this kind of robot.
And a lot of those jobs that a lot of technologies replace are not necessarily the jobs that a lot of people want to do anyway. So I think that we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.
Manyika: If you look at most of the research on AI's impact on work, if I were to summarize it in a phrase, I'd say it's jobs gained, jobs lost, and jobs changed.
All three things will happen, because there are some occupations where a number of the tasks involved will probably decline. But there are also new occupations that will grow. So there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, and what most people will feel, is the "jobs changed" aspect of this.