How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com
At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make responsible AI.
But maybe, just maybe, you're still fuzzy on some very basics about AI (how does this stuff work? Is it magic? Will it kill us all?) but don't want to admit it.
No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, and trying to build businesses in AI, as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.
But we've also pulled out a sampling of insightful and oftentimes conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else needs to figure out soon, since AI isn't going away.
Read on, and don't worry: we won't tell anyone that you're confused. We're all confused.
Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.
Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.
Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.
But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous and might undermine democracy.
And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"
I think in some places people will adopt this stuff. And they'll be perfectly happy with the output. In other places, there's a real problem.
James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.
We're running 15 or 16 different variants of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. We don't always catch every single one of them, but we're already catching a lot.
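The pattern Manyika describes, generating many candidate outputs and pre-screening each for safety before one is shown, can be sketched in a few lines. Everything below is illustrative, not Google's actual pipeline: `generate_candidates` is a stand-in for a real model call, and the keyword-based scorer is a toy substitute for a real toxicity classifier.

```python
def generate_candidates(prompt, n=16):
    # Stand-in for a real model: in practice each call would return a
    # different sampled completion for the same prompt.
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def toxicity_score(text, blocklist=("hate", "slur")):
    # Toy scorer: fraction of words that appear on a blocklist.
    # A production system would use a trained classifier instead.
    words = text.lower().split()
    return sum(w in blocklist for w in words) / max(len(words), 1)

def safest_output(prompt, n=16, threshold=0.0):
    # Generate n candidates, score each, and return the least toxic
    # one, but only if it falls under the safety threshold.
    candidates = generate_candidates(prompt, n)
    scored = sorted((toxicity_score(c), c) for c in candidates)
    best_score, best = scored[0]
    return best if best_score <= threshold else None
```

The key design choice is that filtering happens before anything reaches the user, so a single bad sample among the 16 never surfaces, at the cost of running the model many times per prompt.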
One of the bigger questions that we are going to have to face, by the way (and this is a question about us, not about the technology; it's about us as a society), is how do we think about what we value? How do we think about what counts as toxicity? That's why we try to involve and engage with communities to understand those questions, and why we try to involve ethicists and social scientists to research and understand them. But those are really questions for us as a society.
Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.
I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.
And to make all that happen, we need broad literacy in the population so that people can ask for what's needed from their elected representatives. So that the elected representatives are hopefully literate in all of this.
Scott: We've spent from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses, and where you have a harms framework. You have to be transparent with the public about what your approach to responsible AI is.
Marcus: Dirigibles were really popular in the 1920s and 1930s. Until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."
So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.
It's unreliable and untruthful. We're making it faster and giving it more coverage, but it's still unreliable, still not truthful. And for many applications that's a problem. There are some for which it's not.
ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine, I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.
Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.
I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.
And we're careful enough about that: we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things people are concerned about to happen is real autonomy, a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now. Not the ones that we are building.
Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.
There are plenty of people who need medical advice, medical treatment, who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.
If anything, it's gonna exacerbate the inequalities that we see in our society. And to say: people who can pay get the real thing; people who can't pay, well, here, good luck. You know: Shake the magic eight ball that will tell you something that seems relevant and give it a try.
Manyika: Yes, it does have a place. If I'm trying to explore, as a research question, how do I come to understand those diseases? But if I'm trying to get medical help for myself, I wouldn't go to these generative systems. I'd go to a doctor, or to something where I know there's reliable factual information.
Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.
So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.
Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.
Now people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.
We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is that? How do users respond? We don't know all those answers yet.
Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible. We also don't fully understand how some of those systems work.
I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry and actually working with the union to train the workforce to use this kind of robot.
And a lot of those jobs that a lot of technologies replace are not necessarily the jobs that a lot of people want to do anyway. So I think that we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.
Manyika: If you look at most of the research on AI's impact on work, if I were to summarize it in a phrase, I'd say it's jobs gained, jobs lost, and jobs changed.
All three things will happen because there are some occupations where a number of the tasks involved in those occupations will probably decline. But there are also new occupations that will grow. So there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, what most people will feel, is the "jobs changed" aspect of this.