Opinion: I asked AI about myself. The answers were all wrong

My interest in artificial intelligence was piqued after a colleague told me he was using it for research and writing. Before using AI for my own work, I decided to test its accuracy with a question I could verify. I asked OpenAI's ChatGPT about my own identity, expecting a text version of a selfie. After a week of repeating the same question, the responses were confounding and concerning.

ChatGPT answered "Who is Philip Shucet?" by listing 15 distinct positions I supposedly held at one time or another. The positions included specific titles, job responsibilities and employment dates. But only three of the 15 jobs were accurate. The other 12 were fabrications; the positions were real, but I never held any of them. The misinformation included jobs in two states where I never lived, as well as a congressional appointment to the Amtrak Review Board. How could AI be so wrong?

Although newsrooms, boardrooms and classrooms are buzzing with stories about it, AI is not new. The first chatbot, Eliza, was created in 1966 by Joseph Weizenbaum at MIT. Weizenbaum, who died in 2008, became skeptical of artificial intelligence, telling the New Age Journal in 1985: "The dependence on computers is merely the most recent, and the most extreme, example of how man relies on technology in order to escape the burden of acting as an independent agent."

Was Weizenbaum sending a warning that technology might make us lazy?

In a March segment of 60 Minutes about AI, Brad Smith, president of Microsoft, told Lesley Stahl that one benefit of AI could be "looking at forms to see if they've been filled out correctly." But what if the form is a resume created by AI? Can AI check its own misinformation? What happens when an employment record is tainted with false information created by AI? Can job recruiters rely on AI queries? Can employers rely on recruiters who use AI? And who is accountable when someone is hired based on misinformation generated by a machine and not by a human?

In the same 60 Minutes segment, Ellie Pavlick, an assistant professor at Brown University, told Stahl, "It (AI) doesn't really understand what it is saying is wrong." If AI doesn't know when it is wrong, how can anyone rely on AI to be correct?

In May, two New York attorneys used ChatGPT to write a court brief. The brief cited misinformation from cases that didn't exist. One of the attorneys, Steven Schwartz, told the judge that he failed miserably to do his own research to make sure the information was correct. The judge fined each attorney $5,000.

I asked ChatGPT about the consequences of giving out bad information. ChatGPT answered that false information leads to misrepresentation, confusion, legal concerns and emotional distress, and erodes trust in AI. If ChatGPT understands the implications of false information, why does it continue to provide fabrications when a search engine could easily supply correct information? Because, as I know now, ChatGPT is not a search engine. I know because I asked.

ChatGPT says it is a language model designed to understand and generate human-like text based on input. ChatGPT says it doesn't crawl the web or search the internet. Instead, it generates responses based on patterns and information it learned from the text it was trained on.

If AI needs to be trained, then there's a critical human element of accountability we can't ignore. So I started training ChatGPT by correcting it each time it answered with false information. After a week of training, ChatGPT was still returning a mix of accurate and inaccurate information, sometimes repeating fabrications. I'm still sending back correct information, but I'm ready to bring this experiment to an end, for now.

This wasn't a test of ego; it was a test of reliability and trust. A 20% accuracy rate is a failing grade.

In 1976, Weizenbaum wrote: "No other organism, and certainly no computer, can be made to confront genuine human problems in human terms." I'm not a Luddite. But as technology continues to leap further and faster ahead, let's remember that we are in control of the information that defines us. We are the trainers.

Philip Shucet is a journalist. He previously held positions as the commissioner of VDOT, president and CEO of Hampton Roads Transit, and CEO of Elizabeth River Crossings. He has never held a congressional appointment.
