Are AI-Engineered Threats FUD or Reality? – Dark Reading

The moment generative AI applications hit the market, the pace of business changed, not only for security teams but for cybercriminals too. Today, not embracing AI innovations can mean falling behind your competitors and putting your cyber defense at a disadvantage against AI-powered cyberattacks. But when discussing how AI will or won't impact cybercrime, it's important that we look at things through a pragmatic and sober lens, not one that feeds into hype reading more like science fiction.

Today's AI advancements and maturity signal a significant leap forward for enterprise security. Cybercriminals can't easily match the size and scale of enterprises' resources, skills, and motivation, making it harder for them to keep up with the current speed of AI innovation. Private venture investment in AI exploded to $93.5 billion in 2021; the bad guys don't have that level of capital. Nor do they have the manpower, computing power, and innovations that afford commercial companies and governments more time and opportunity to fail quickly, learn fast, and get it right first.

Make no mistake, though: Cybercrime will catch up. This is not the first time the security industry has had a brief edge. When ransomware started driving more defenders to adopt endpoint detection and response technologies, attackers needed some time to figure out how to circumvent and evade those detections. That interim "grace period" gave businesses time to better shield themselves. The same applies now: Businesses need to capitalize on their lead in the AI race, advancing their threat detection and response capabilities and leveraging the speed and precision that current AI innovations afford them.

So how is AI changing cybercrime? It won't change it substantially anytime soon, but it will scale it in certain instances. Let's take a look at where malicious use of AI will and won't make the most immediate impact.

In recent months, we've seen claims regarding various malicious use cases of AI, but just because a scenario is possible does not make it probable. Take fully automated malware campaigns, for example: logic says it is possible to leverage AI to achieve that outcome, but given that leading tech companies have yet to pioneer fully automated software development cycles, it's unlikely that financially constrained cybercrime groups will achieve this sooner. Even partial automation can enable the scaling of cybercrime, however; it's a tactic we've already seen used in Bazar campaigns. This is not an innovation but a tried-and-true technique that defenders are already taking on.

Another use case to consider is AI-engineered phishing attacks. Not only is this one possible, but we're already beginning to see these attacks in the wild. This next generation of phishing may achieve higher levels of persuasiveness and click-rate, but a human-engineered phish and an AI-engineered phish still drive toward the same goal. In other words, an AI-engineered phish is still a phish searching for a click, and it requires the same detection and response readiness.

However, while the problem remains the same, the scale is vastly different. AI acts as a force multiplier for phishing campaigns, so if an enterprise is seeing a spike in inbound phishing emails and those malicious emails are significantly more persuasive, it's likely looking at a high click-rate probability and potential for compromise. AI models can also increase targeting efficacy, helping attackers determine the most susceptible target for a specific phish within an organization and ultimately reach a higher ROI from their campaigns. Phishing attacks have historically been among the most successful tactics attackers use to infiltrate enterprises. The scaling of this type of attack emphasizes the critical role that EDR, MDR, XDR, and IAM technologies play in detecting anomalous behavior before it achieves impact.
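The "spike in inbound phishing emails" signal described above can be caught with very simple volume baselining. The following is a minimal sketch, not any vendor's detection logic: the function name `spike_alerts` and the z-score-over-a-trailing-window approach are illustrative assumptions, standing in for the far richer anomaly detection that EDR/XDR products perform.

```python
from statistics import mean, stdev

def spike_alerts(daily_counts, window=7, threshold=3.0):
    """Flag days whose reported-phish volume is a z-score outlier
    relative to the trailing `window` days of baseline traffic."""
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; no meaningful z-score
        z = (daily_counts[i] - mu) / sigma
        if z > threshold:
            alerts.append((i, daily_counts[i], round(z, 1)))
    return alerts

# A steady baseline of ~20 phishing reports/day, then a campaign lands:
counts = [19, 21, 20, 22, 18, 20, 21, 20, 19, 95]
print(spike_alerts(counts))
```

Even a crude baseline like this makes the scaling effect visible: an AI-amplified campaign shows up as a statistical outlier in volume long before click-rate data comes in.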

AI poisoning attacks, in other words, programmatically manipulating the code and data on which AI models are built, may be the "holy grail" of attacks for cybercriminals. The impact of a successful poisoning attack could range anywhere from misinformation attempts to Die Hard 4.0. Why? Because by poisoning the model, an attacker can make it behave or function in whatever way they want, and it's not easily detectable. However, these attacks aren't easy to carry out: they require gaining access to the data the AI model is training on at the time of training, which is no small feat. As more models become open source, the risk of these attacks will increase, but it will remain low for the time being.
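To make the poisoning mechanism concrete, here is a deliberately tiny illustration, not from the article and not representative of real model complexity. The assumption is a 1-D nearest-centroid classifier (names like `centroid_classify` are invented for this sketch): by injecting a handful of mislabeled training points, an attacker drags a class centroid so that an input the clean model flags as malicious is quietly reclassified as benign.

```python
def centroid_classify(train, x):
    """1-D nearest-centroid classifier: label x by the class whose
    mean training feature value is closest to x."""
    by_label = {}
    for value, label in train:
        by_label.setdefault(label, []).append(value)
    means = {label: sum(v) / len(v) for label, v in by_label.items()}
    return min(means, key=lambda label: abs(means[label] - x))

# Clean training set: low scores are benign, high scores malicious.
clean = [(1, "benign"), (2, "benign"), (3, "benign"),
         (8, "malicious"), (9, "malicious"), (10, "malicious")]
print(centroid_classify(clean, 7))     # close to the malicious centroid

# Poisoned set: three injected high-score points mislabeled "benign"
# drag the benign centroid toward the malicious region.
poisoned = clean + [(9, "benign"), (10, "benign"), (10, "benign")]
print(centroid_classify(poisoned, 7))  # the same input now slips through
```

The point of the toy is the asymmetry the article describes: the poisoned model looks intact and keeps classifying, so the manipulation is hard to spot, yet the attacker needed write access to the training data to pull it off.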

While it's important to separate hype from reality, it's also important to ensure we're asking the right questions about AI's impact on the threat landscape. There are lots of unknowns regarding AI's potential; how it may change adversaries' goals and objectives is one we mustn't overlook. It remains to be seen how new capabilities may serve new purposes for adversaries and recalibrate their motives.

We may not see an immediate spike in novel AI-enabled attacks, but the scaling of cybercrime thanks to AI will have a substantial impact on organizations that aren't prepared. Speed and scale are intrinsic characteristics of AI, and just as defenders are seeking to benefit from them, so are attackers. Security teams are already understaffed and overwhelmed; a spike in malicious traffic or incident response engagements adds substantial weight to their workload.

This reaffirms more than ever the need for enterprises to invest in their defenses, using AI to drive speed and precision in their threat detection and response capabilities. Enterprises that take advantage of this "grace period" will find themselves much more prepared and resilient for the day attackers actually do catch up in the AI cyber race.
