Artificial intelligence, possible recession driving record fraud rates … – Fox Business

On 'Kennedy,' Dr. Robert Marks discusses a Stanford survey that says 36% of researchers are concerned artificial intelligence could bring 'nuclear-level catastrophe.'

According to a new report, artificial intelligence (AI), a possible recession and a return to pre-pandemic activity are driving record fraud rates across the globe.

Pindrop, a global leader in voice technology, has released its annual Voice Intelligence & Security Report following an analysis of five billion calls and three billion fraud catches.

Fraud reports typically rise significantly during an economic downturn, and the report claims historical data suggests that insurance claims and fraud will skyrocket in 2023.

ROMANCE SCAMS COST AMERICANS $1B IN 2022, A NEW RECORD

Photo illustration showing the ChatGPT and OpenAI research laboratory logo and inscription on a smartphone screen against a blurry background. OpenAI is the artificial intelligence research lab behind the ChatGPT app. (Nicolas Economou/NurPhoto via Getty Images)

With the pandemic winding down and economic conditions shifting, fraudsters have shifted focus away from government payouts and back to more traditional targets, such as contact centers.

But fraudsters are using new tactics to attack their old marks, including the use of personal user data acquired from the dark web, new AI models for synthetic audio generation and more. These factors have led to a 40% increase in fraud rates against contact centers in 2022 compared to the year prior.

The report found that fraudsters leveraging fast-learning AI models to create synthetic audio and content have already led to far-reaching consequences in the world of fraud. Although deepfakes and synthetic voices have existed for nearly 30 years, bad actors have made them more persuasive by pairing the tech with smart scripts and conversational speech.

Recently, Vice News used a synthetically generated voice, created with tools from ElevenLabs, to utter the fixed passphrase "My voice is my password" and was able to bypass the voice authentication system at Lloyds Bank.

UNBRIDLED AI TECH RISKS SPREAD OF DISINFORMATION, REQUIRING POLICY MAKERS STEP IN WITH RULES: EXPERTS

Scammers will often resort to "phishing," which is a nefarious information gathering technique that uses fraud and trickery to fool people into handing over contact details, financial documents and payments. (iStock)

Arizona mother Jennifer DeStefano recounted a terrifying experience when phone scammers used AI technology to make her think her teenage daughter had been kidnapped.

The call came amid a rise in "spoofing" schemes in which fraudsters use voice cloning technology to claim they have kidnapped a loved one and demand ransom money.

But Pindrop says these technologies are not frequently used against the average citizen or consumer; rather, they are deployed in spear phishing schemes that target high-profile victims, such as CEOs and other C-suite executives.

For example, a bad actor or team of fraudsters could use a CEO's voice to ask another executive to wire millions of dollars for a fake offer to buy a company.

"It's actually the voice of the CEO, even in the case of the CEO having an accent, or even in the case that the CEO doesn't have public facing audio," Pindrop Co-Founder and CEO Vijay Balasubramanian told Fox News Digital.

That voice audio is typically obtained from private recordings and internal all-hands messages.

Pindrop notes that such tech could become more pervasive and help amplify other established fraud techniques.

CHATGPT FACING POTENTIAL DEFAMATION LAWSUIT AFTER FALSELY LABELING AUSTRALIAN MAYOR AS BRIBERY CONVICT

Verizon Business CEO Tami Erwin shares tips for protecting against cyber threats and encourages creating a security framework.

These established techniques include large-scale vishing/smishing efforts, victim social engineering and interactive voice response (IVR) reconnaissance. According to Pindrop, these tactics have caused permanent damage to brand reputations, forced consumer abandonment and resulted in the loss of billions of dollars.

Data breaches have affected more than 300 million victims since 2020, and data compromises are at an all-time high, with over 1,800 incidents reported in each of 2021 and 2022.

"It always starts with reconnaissance," Balasubramanian said.

An IVR is the automated system companies use to guide callers through their contact center: press one for billing information, for example, or press two for your balance. These systems have become more conversational because of AI.
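For readers unfamiliar with the term, here is a minimal sketch of the menu logic a traditional touch-tone IVR runs. The menu options, the handle_ivr_input function and the balance figure are hypothetical illustrations, not drawn from any bank's actual system.

```python
# Minimal sketch of a traditional touch-tone IVR menu (hypothetical options,
# not modeled on any specific bank's system).

def handle_ivr_input(digit: str, account_balance: float) -> str:
    """Route a caller's keypad press to a canned response."""
    menu = {
        "1": "Transferring you to billing information.",
        "2": f"Your current balance is ${account_balance:,.2f}.",
        "0": "Please hold for the next available agent.",
    }
    # Anything outside the menu falls back to a re-prompt; the difference
    # between this fallback and a real account response is exactly what
    # fraudsters probe for during reconnaissance.
    return menu.get(digit, "Sorry, I didn't recognize that option. Please try again.")


if __name__ == "__main__":
    print(handle_ivr_input("2", 1542.17))
    print(handle_ivr_input("9", 1542.17))
```

A conversational, AI-driven IVR replaces the fixed keypad menu with speech recognition and intent matching, but it still exposes the same underlying account lookups, which is what the reconnaissance described below relies on.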

CHATGPT BEING USED TO WRITE MALWARE, RANSOMWARE: REPORTS

A person receives a potential spam phone call on their cell phone. (iStock)

"They're taking a social security number that they have and they will go to every single bank and punch in that social security number. And the response of that system is one of two things. I don't recognize what that is, or hey, welcome thank you for being a valued customer. Your account balance is x," Balasubramanian said.

After acquiring all this account information, fraudsters target the accounts with the highest balances.

They then send a convincing message claiming there is a fraudulent charge, padded with information mined from the IVR systems by bots. The message asks the account holder to divulge further information, such as a credit card number or CVV, which finally lets the fraudster access the account and drain the funds.
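As a purely illustrative aside, one generic fraud-analytics countermeasure to this kind of automated IVR probing is a velocity check: flag a single caller that queries balances for many distinct identifiers in a short window. The sketch below is hypothetical; the flag_suspicious_callers function, thresholds and field layout are assumptions, not anything taken from Pindrop's report.

```python
# Minimal sketch of a velocity check against IVR reconnaissance: flag callers
# that look up balances for many distinct identifiers in a short window.
# Thresholds and field names are hypothetical, not from the Pindrop report.

from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
MAX_DISTINCT_IDS = 3  # more distinct SSN/account lookups than this is suspicious


def flag_suspicious_callers(events):
    """events: iterable of (caller_id, queried_identifier, timestamp) tuples."""
    history = defaultdict(list)  # caller_id -> list of (timestamp, identifier)
    flagged = set()
    for caller_id, identifier, ts in sorted(events, key=lambda e: e[2]):
        history[caller_id].append((ts, identifier))
        # Keep only lookups inside the sliding one-hour window.
        recent = [(t, i) for t, i in history[caller_id] if ts - t <= WINDOW]
        history[caller_id] = recent
        if len({i for _, i in recent}) > MAX_DISTINCT_IDS:
            flagged.add(caller_id)
    return flagged


if __name__ == "__main__":
    now = datetime.utcnow()
    # One caller probing six different identifiers within a few minutes.
    events = [("+15550100", f"ID{n}", now + timedelta(minutes=n)) for n in range(6)]
    print(flag_suspicious_callers(events))  # {'+15550100'}
```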

To prevent scams using synthetic voices, pitch manipulation and replay attacks, Pindrop says companies need to detect voice liveness in sync with automatic speech recognition (ASR) and audio analytics that determine the speaker's environment and contextual audio.
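The report does not publish Pindrop's detection internals, but the general shape of such a defense is score fusion: combine a liveness estimate, the ASR transcript and audio-environment features into one risk decision. The sketch below is a hypothetical illustration; the CallSignals fields, the call_risk function, and all weights and thresholds are assumptions, not Pindrop's models or API.

```python
# Hypothetical sketch of fusing liveness, ASR and audio-environment signals
# into a single call-risk decision. All scores, names and thresholds are
# placeholders; they are not Pindrop's actual models or API.

from dataclasses import dataclass


@dataclass
class CallSignals:
    liveness_score: float      # 0.0 = likely synthetic/replayed, 1.0 = likely live
    asr_transcript: str        # what automatic speech recognition heard
    noise_consistency: float   # 0.0 = environment shifts mid-call, 1.0 = stable


SCRIPTED_PHRASES = ("my voice is my password",)  # fixed passphrases are easy to clone


def call_risk(signals: CallSignals) -> str:
    """Return 'block', 'step-up', or 'allow' for an inbound call."""
    risk = 0.0
    risk += (1.0 - signals.liveness_score) * 0.5
    risk += (1.0 - signals.noise_consistency) * 0.3
    if any(p in signals.asr_transcript.lower() for p in SCRIPTED_PHRASES):
        risk += 0.2  # a verbatim passphrase plus low liveness suggests a replay or clone
    if risk >= 0.6:
        return "block"
    if risk >= 0.3:
        return "step-up"  # e.g., require an out-of-band one-time passcode
    return "allow"


if __name__ == "__main__":
    print(call_risk(CallSignals(0.15, "My voice is my password", 0.4)))   # block
    print(call_risk(CallSignals(0.9, "I'd like to check my balance", 0.95)))  # allow
```

In this sketch, "step-up" stands in for any extra out-of-band check rather than outright blocking, so a legitimate caller on a noisy line is inconvenienced instead of locked out.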

GET FOX BUSINESS ON THE GO BY CLICKING HERE

Unfortunately, research suggests that consumers in states that impose enhanced restrictions on the use of biometrics (such as California, Texas, Illinois and Washington) are twice as likely to experience fraud. While these states enact such laws to protect consumer data, the legislation often makes no exception for companies' cybersecurity measures, which need voice analytics to adequately protect company and consumer data.

"If I target a consumer from those states, they most likely don't have advanced analytics performed on the voice, they are not looking for deep fakes. They are not checking if the voice is distorted," Balasubramanian said. "They aren't looking for any of that, so it's easy to steal money for those consumers."
