Is ChatGPT Safe? 6 Cybersecurity Risks of OpenAI’s Chatbot – MUO – MakeUseOf

Although many digital natives praise ChatGPT, some fear it does more harm than good. News reports about crooks hijacking AI have been making the rounds on the internet, increasing unease among skeptics. Some even consider ChatGPT a dangerous tool.

AI chatbots aren't perfect, but you don't have to avoid them altogether. Here's everything you should know about how crooks abuse ChatGPT and what you can do to stop them.

Most front-end security concerns about ChatGPT stem from speculation and unverified reports. The platform only launched in November 2022, after all. It's natural for new users to have misconceptions about the privacy and security of unfamiliar tools.

According to OpenAI's terms of use, here's how ChatGPT handles user data.

Rumors say that ChatGPT sells personally identifiable information (PII).

The platform was launched by OpenAI, a reputable AI research lab backed by tech investors like Microsoft and Elon Musk. ChatGPT should only use customer data to provide the services stated in its privacy policy.

Moreover, ChatGPT asks for minimal information. You can create an account with just your name and email address.

OpenAI keeps ChatGPT conversations secure, but it reserves the right to monitor them. AI trainers continuously look for areas of improvement. Since the platform was trained on vast yet limited datasets, resolving errors, bugs, and vulnerabilities requires system-wide updates.

However, OpenAI can only monitor conversations for research purposes. Distributing or selling them to third parties violates its own terms of use.

According to the BBC, OpenAI trained ChatGPT on 300 billion words. It collects data from public web pages, like social media platforms, business websites, and comment sections. Unless you've gone off the grid and erased your digital footprint, ChatGPT likely has your information.

While ChatGPT isn't inherently dangerous, the platform still presents security risks. Crooks can bypass restrictions to execute various cyberattacks.

Instead of spending hours writing emails, crooks use ChatGPT. It's fast and accurate. Advanced language models (such as GPT-3.5 and GPT-4) can produce hundreds of coherent, convincing phishing emails within minutes. They can even adopt unique tones and writing styles.

Since ChatGPT makes it harder to spot hacking attempts, take extra care before answering emails. As a general rule, avoid divulging information. Note that legitimate companies and organizations rarely ask for confidential PII through random emails.

Learn to spot hacking attempts. Although email providers filter spam messages, some crafty ones could fall through the cracks. You should still know what phishing messages look like.
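As a rough illustration of what spotting phishing looks like in practice, the snippet below scores an email against a few classic red flags: urgency language, requests for credentials, and links pointing at raw IP addresses. The phrase lists and weights are hypothetical examples, not a real ruleset; production spam filters rely on far richer signals.

```python
import re

# Hypothetical, simplified phishing indicators for illustration only.
URGENCY_PHRASES = ["act now", "account suspended", "verify immediately", "urgent"]
CREDENTIAL_REQUESTS = ["password", "social security", "credit card number"]

def phishing_score(email_text: str) -> int:
    """Count naive phishing indicators in an email body."""
    text = email_text.lower()
    score = 0
    score += sum(phrase in text for phrase in URGENCY_PHRASES)
    score += sum(phrase in text for phrase in CREDENTIAL_REQUESTS)
    # Links to raw IP addresses instead of domain names are a classic red flag.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    return score

suspicious = phishing_score(
    "URGENT: your account suspended. Verify immediately at http://192.168.4.2/login "
    "and confirm your password."
)
print(suspicious)  # 6 — multiple red flags stack up
```

A benign message ("Lunch on Friday?") scores zero here; the point is that even crude heuristics catch clumsy phishing, while AI-written emails are dangerous precisely because they avoid such obvious tells.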

ChatGPT itself isn't open source, but comparably capable open-source LLMs are freely available, and anyone can modify them. Coders proficient in large language models (LLMs) and machine learning often integrate pre-trained AI models into their legacy systems. Training AI on new datasets alters its functionality. For instance, a chatbot becomes a pseudo-fitness expert if you feed it recipes and exercise routines.
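To see how training on new data shifts a model's behavior, here is a toy bigram predictor — nothing like a real LLM, just a minimal sketch. The corpora are made up for illustration: after "fine-tuning" on extra text, the most likely next word changes.

```python
from collections import Counter, defaultdict

def train(corpus_words, model=None):
    """Count next-word frequencies; further training shifts the counts."""
    model = model or defaultdict(Counter)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the most frequent word observed after `word`."""
    return model[word].most_common(1)[0][0]

base = "eat well and sleep well and rest".split()
model = train(base)
print(predict(model, "eat"))  # well

# "Fine-tune" on fitness-flavored data: the prediction changes.
fitness = "eat protein eat protein eat protein".split()
model = train(fitness, model)
print(predict(model, "eat"))  # protein
```

The same principle, scaled up enormously, is what lets criminals repurpose an open-source model by training it on stolen data.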

Although collaborative and convenient, open-sourcing leaves technologies vulnerable to abuse. Skilled criminals already exploit ChatGPT. They train it on large volumes of stolen data, turning the platform into a personal database for fraud.

Remember: you have no control over how crooks operate. The best approach is to contact the Federal Trade Commission (FTC) once you notice signs of identity theft.

ChatGPT writes usable code snippets in different programming languages. Most samples require minimal modifications to function properly, especially if you structure a concise prompt. You could leverage this feature to develop apps and sites.

Since ChatGPT was trained on billions of words, it also knows about illicit practices, like developing malware and viruses. OpenAI prohibits the chatbot from writing malicious code. But crooks bypass these restrictions by restructuring prompts and asking precise questions.

The photo below shows ChatGPT rejecting a request to write code for malicious purposes.

Meanwhile, the photo below shows that ChatGPT will give you harmful information if you phrase your prompts the right way.

Unethical bloggers spin content using ChatGPT. Since the platform runs on advanced LLMs, it can quickly rephrase thousands of words and avoid plagiarism flags.

ChatGPT rephrased the text below in 10 seconds.

Of course, spinning still counts as plagiarism. Paraphrased AI articles sometimes rank by chance, but Google generally prefers original content from reputable sources. Cheap tricks and SEO hacks can't beat high-quality, evergreen writing.

Also, Google releases multiple core updates annually. It will soon focus on removing lazy, unoriginal AI-generated pieces from SERPs.
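One reason spun text rarely fools anyone for long: its vocabulary overlap with the source usually stays high. The toy check below uses Jaccard similarity over word sets to flag a paraphrase that reuses most of the original wording; commercial plagiarism detectors are far more sophisticated, so treat this purely as a sketch.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-set overlap between two texts: 0.0 (disjoint) to 1.0 (identical)."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a and not set_b:
        return 1.0
    return len(set_a & set_b) / len(set_a | set_b)

original = "search engines prefer original content from reputable sources"
spun = "search engines generally prefer original content from trusted sources"
print(round(jaccard_similarity(original, spun), 2))  # 0.7
```

A score this high between two "different" articles is exactly the kind of signal that gets spun content flagged.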

AI language models have no intentions of their own. They generate answers by analyzing user requests against the patterns in their training data.

Take ChatGPT as an example. When you send a prompt, it responds based on the datasets OpenAI used for training.

While ChatGPT's content policies block inappropriate requests, users bypass them with jailbreak prompts. They feed it precise, clever instructions. ChatGPT produces the response below if you ask it to portray a psychopathic fictional character.

The good news is OpenAI hasn't lost control of ChatGPT. Its ongoing efforts to tighten restrictions stop ChatGPT from producing unethical responses, regardless of user input. Jailbreaking won't be as easy moving forward.

The rapid growth of new, unfamiliar technologies like ChatGPT creates opportunities for quid pro quo attacks. They're social engineering tactics wherein crooks lure victims with fake offers.

Most people haven't explored ChatGPT yet. And hackers exploit the confusion by spreading misleading promotions, emails, and announcements.

The most notorious cases involve fake apps. New users don't know they can only access ChatGPT through OpenAI's official website. They unknowingly download spammy programs and extensions.

Most fake apps just want downloads, but others steal personally identifiable information. Crooks lace them with malware and phishing links. For example, in March 2023, a fake ChatGPT Chrome extension stole Facebook credentials from 2,000+ users daily.

To combat quid pro quo attempts, avoid third-party apps. OpenAI has never released an authorized mobile app, computer program, or browser extension for ChatGPT. Anything claiming to be one is a scam.

ChatGPT is not a threat by itself. The system has vulnerabilities, but it won't compromise your data. Instead of fearing AI technologies, research how crooks incorporate them into social engineering tactics. That way, you can proactively protect yourself.

But if you still have doubts about ChatGPT, try Bing. The new Bing features an AI-powered chatbot that runs on GPT-4, pulls data from the internet, and follows strict security measures. You might find it more suited to your needs.
