How AI and ChatGPT are full of promise and peril, explained by experts – Vox.com

At this point, you have tried ChatGPT. Even Joe Biden has tried ChatGPT, and this week, his administration made a big show of inviting AI leaders like Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman to the White House to discuss ways they could make responsible AI.

But maybe, just maybe, you're still fuzzy on some of the very basics of AI (like: How does this stuff work? Is it magic? Will it kill us all?) but don't want to admit to that.

No worries. We have you covered: We've spent much of the spring talking to people working in AI, investing in AI, and trying to build businesses in AI, as well as people who think the current AI boom is overblown or maybe dangerously misguided. We made a podcast series about the whole thing, which you can listen to over at Recode Media.

But we've also pulled out a sampling of insightful and oftentimes conflicting answers we got to some of these very basic questions. They're questions that the White House and everyone else needs to figure out soon, since AI isn't going away.

Read on, and don't worry, we won't tell anyone that you're confused. We're all confused.

Kevin Scott, chief technology officer, Microsoft: I was a 12-year-old when the PC revolution was happening. I was in grad school when the internet revolution happened. I was running a mobile startup right at the very beginning of the mobile revolution, which coincided with this massive shift to cloud computing. This feels to me very much like those three things.

Dror Berman, co-founder, Innovation Endeavors: Mobile was an interesting time because it provided a new form factor that allowed you to carry a computer with you. I think we are now standing in a completely different time: We've now been introduced to a foundational intelligence block that has become available to us, one that basically can lean on all the publicly available knowledge that humanity has extracted and documented. It allows us to retrieve all this information in a way that wasn't possible in the past.

Gary Marcus, entrepreneur; emeritus professor of psychology and neural science at NYU: I mean, it's absolutely interesting. I would not want to argue against that for a moment. I think of it as a dress rehearsal for artificial general intelligence, which we will get to someday.

But right now we have a trade-off. There are some positives about these systems. You can use them to write things for you. And there are some negatives. This technology can be used, for example, to spread misinformation, and to do that at a scale that we've never seen before, which may be dangerous and might undermine democracy.

And I would say that these systems aren't very controllable. They're powerful, they're reckless, but they don't necessarily do what we want. Ultimately, there's going to be a question: "Okay, we can build a demo here. Can we build a product that we can actually use? And what is that product?"

I think in some places people will adopt this stuff. And they'll be perfectly happy with the output. In other places, there's a real problem.

James Manyika, SVP of technology and society, Google: You're trying to make sure the outputs are not toxic. In our case, we do a lot of generative adversarial testing of these systems. In fact, when you use Bard, for example, the output that you get when you type in a prompt is not necessarily the first thing that Bard came up with.

We're running 15 or 16 different variations of the same prompt to look at those outputs and pre-assess them for safety, for things like toxicity. Now, we don't always catch every single one of them, but we're catching a lot of it already.
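What Manyika is describing is, at bottom, a generate-and-filter loop: sample a batch of candidate answers to one prompt, score each for toxicity, and only surface an answer that passes. Here is a minimal Python sketch of that pattern. To be clear, generate_candidates and toxicity_score are hypothetical stand-ins we made up for illustration; Google has not published the internals of Bard's safety pipeline.

    import random

    def generate_candidates(prompt: str, n: int = 16) -> list[str]:
        # A real system would sample a language model with temperature > 0
        # so each candidate differs; faked here to keep the sketch runnable.
        return [f"candidate {i} for: {prompt}" for i in range(n)]

    def toxicity_score(text: str) -> float:
        # Stand-in for a trained safety classifier: 0.0 means benign,
        # 1.0 means toxic. A random stub keeps the example self-contained.
        return random.random()

    def safest_response(prompt: str, n: int = 16, threshold: float = 0.5) -> str | None:
        # Generate n candidates, drop any that score at or above the
        # threshold, and return the least toxic survivor, if any.
        scored = [(toxicity_score(c), c) for c in generate_candidates(prompt, n)]
        safe = [pair for pair in scored if pair[0] < threshold]
        if not safe:
            return None  # refuse rather than return a flagged answer
        return min(safe)[1]

    print(safest_response("Explain how mRNA vaccines work."))

The detail worth noticing in this sketch is the fallback: when no candidate clears the threshold, the system refuses outright instead of shipping its least-bad option.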

One of the bigger questions that we are going to have to face, by the way (and this is a question about us, not about the technology; it's about us as a society), is how do we think about what we value? How do we think about what counts as toxicity? So that's why we try to involve and engage with communities to understand those. We try to involve ethicists and social scientists to research those questions and understand those, but those are really questions for us as a society.

Emily M. Bender, professor of linguistics, University of Washington: People talk about democratizing AI, and I always find that really frustrating, because what they're referring to is putting this technology in the hands of many, many people, which is not the same thing as giving everybody a say in how it's developed.

I think the best way forward is cooperation, basically. You have sensible regulation coming from the outside so that the companies are held accountable. And then you've got the tech ethics workers on the inside helping the companies actually meet the regulation and meet the spirit of the regulation.

And to make all that happen, we need broad literacy in the population so that people can ask for what's needed from their elected representatives. So that the elected representatives are hopefully literate in all of this.

Scott: We've spent from 2017 until today rigorously building a responsible AI practice. You just can't release an AI to the public without a rigorous set of rules that define sensitive uses and where you have a harms framework. You have to be transparent with the public about what your approach to responsible AI is.

Marcus: Dirigibles were really popular in the 1920s and 1930s. Until we had the Hindenburg. Everybody thought that all these people doing heavier-than-air flight were wasting their time. They were like, "Look at our dirigibles. They scale a lot faster. We built a small one. Now we built a bigger one. Now we built a much bigger one. It's all working great."

So, you know, sometimes you scale the wrong thing. In my view, we're scaling the wrong thing right now. We're scaling a technology that is inherently unstable.

It's unreliable and untruthful. We're making it faster and giving it more coverage, but it's still unreliable, still not truthful. And for many applications that's a problem. There are some for which it's not.

ChatGPT's sweet spot has always been making surrealist prose. It is now better at making surrealist prose than it was before. If that's your use case, it's fine; I have no problem with it. But if your use case is something where there's a cost of error, where you do need to be truthful and trustworthy, then that is a problem.

Scott: It is absolutely useful to be thinking about these scenarios. It's more useful to think about them grounded in where the technology actually is, and what the next step is, and the step beyond that.

I think we're still many steps away from the things that people worry about. There are people who disagree with me on that assertion. They think there's gonna be some uncontrollable, emergent behavior that happens.

And we're careful enough about that: we have research teams thinking about the possibility of these emergent scenarios. But the thing that you would really have to have in order for some of the weird things to happen that people are concerned about is real autonomy: a system that could participate in its own development and have that feedback loop where you could get to some superhumanly fast rate of improvement. And that's not the way the systems work right now. Not the ones that we are building.

Bender: We already have WebMD. We already have databases where you can go from symptoms to possible diagnoses, so you know what to look for.

There are plenty of people who need medical advice, medical treatment, who can't afford it, and that is a societal failure. And similarly, there are plenty of people who need legal advice and legal services who can't afford it. Those are real problems, but throwing synthetic text into those situations is not a solution to those problems.

If anything, it's gonna exacerbate the inequalities that we see in our society. It amounts to saying: people who can pay get the real thing; people who can't pay, well, here, good luck. You know: shake the magic eight ball that will tell you something that seems relevant and give it a try.

Manyika: Yes, it does have a place, if I'm trying to explore, as a research question, how I come to understand those diseases. If I'm trying to get medical help for myself, I wouldn't go to these generative systems. I'd go to a doctor or to something where I know there's reliable factual information.

Scott: I think it just depends on the actual delivery mechanism. You absolutely don't want a world where all you have is some substandard piece of software and no access to a real doctor. But I have a concierge doctor, for instance. I interact with my concierge doctor mostly by email. And that's actually a great user experience. It's phenomenal. It saves me so much time, and I'm able to get access to a whole bunch of things that my busy schedule wouldn't let me have access to otherwise.

So for years I've thought, wouldn't it be fantastic for everyone to have the same thing? An expert medical guru that you can go to, one that can help you navigate a very complicated system of insurance companies and medical providers and whatnot. Having something that can help you deal with the complexity, I think, is a good thing.

Marcus: If it's medical misinformation, you might actually kill someone. That's actually the domain where I'm most worried about erroneous information from search engines.

Now, people do search for medical stuff all the time, and these systems are not going to understand drug interactions. They're probably not going to understand particular people's circumstances, and I suspect that there will actually be some pretty bad advice.

We understand from a technical perspective why these systems hallucinate. And I can tell you that they will hallucinate in the medical domain. Then the question is: What becomes of that? What's the cost of error? How widespread is that? How do users respond? We don't know all those answers yet.

Berman: I think society will need to adapt. A lot of those systems are very, very powerful and allow us to do things that we never thought would be possible. By the way, we don't yet understand what is fully possible. We also don't fully understand how some of those systems work.

I think some people will lose jobs. Some people will adjust and get new jobs. We have a company called Canvas that is developing a new type of robot for the construction industry, and it is actually working with the union to train the workforce to use this kind of robot.

And a lot of the jobs that these technologies replace are not necessarily the jobs that people want to do anyway. So I think that we are going to see a lot of new capabilities that will allow us to train people to do much more exciting jobs as well.

Manyika: If you look at most of the research on AI's impact on work, if I were to summarize it in a phrase, I'd say it's: jobs gained, jobs lost, and jobs changed.

All three things will happen, because there are some occupations where a number of the tasks involved will probably decline. But there are also new occupations that will grow. So there's going to be a whole set of jobs gained and created as a result of this incredible set of innovations. But I think the bigger effect, quite frankly, and what most people will feel, is the "jobs changed" aspect of this.
