Artificial Intelligence: Your friend in the fight against cyberattacks – The Times of India Blog
Before the turn of the millennium, a cybercriminal was a lone wolf: a hacker who felt the urge to expose the lacunae in a computer network or operating system. Financial gain was rarely the motive for a cyberattack. However, the technological advancements following the millennium bug bred a new generation of cybercriminals.
Today, cybercriminals are no longer lone wolves; they operate in highly skilled criminal rings with access to shared data, tools, expertise, and malicious artificial intelligence (AI). By weaponizing AI for malicious purposes, they can increase the scale of their operations and mount a wider range of cyberattacks. Recent studies confirm this weaponization of AI: according to a Forrester report, 77 per cent of business leaders surveyed across the world expect weaponized AI to lead to a rise in the scale of cyberattacks.
Weaponized AI is taking many forms
Adversarial AI, which exploits an AI model's inherent trait of learning, has come to the fore and is posing new threats. Through malicious inputs, adversarial AI can disrupt a single device or an entire group of devices that rely on an AI model. AI malware, which hides deep within a seemingly innocent application to avoid detection, uses AI models to detect whether it has reached a specific target. Then there are AI-powered botnets that harness the power of AI to adapt faster than a cybersecurity team can react. As cybercriminals evolve their attacks using weaponized or malicious AI, existing defences often fail to identify these adversaries.
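To make the evasion idea concrete, here is a minimal, hypothetical sketch; the scorer, its weights, the feature names, and the threshold are all invented for illustration and do not come from any real product. An attacker who learns which feature pulls a model's "malicious" score down can nudge that feature until a sample slips under the detection threshold:

```python
# Hypothetical sketch of an evasion attack on a toy linear malware scorer.
# All weights, features, and the threshold are invented for illustration.

WEIGHTS = {"entropy": 2.0, "packed": 3.0, "num_imports": -0.5}
THRESHOLD = 1.0  # score above this => flagged as malicious

def score(sample):
    """Linear 'maliciousness' score: weighted sum of the sample's features."""
    return sum(WEIGHTS[k] * sample[k] for k in WEIGHTS)

def evade(sample, step=0.1, max_iters=100):
    """Greedily perturb the one feature the attacker controls
    ('num_imports', which has a negative weight) until the sample
    slips under the detection threshold."""
    adv = dict(sample)
    for _ in range(max_iters):
        if score(adv) < THRESHOLD:
            break
        # Negative weight => increasing this feature lowers the score.
        adv["num_imports"] += step
    return adv

malware = {"entropy": 0.9, "packed": 1.0, "num_imports": 2.0}
evasive = evade(malware)  # flagged before, invisible after
```

The same greedy logic is what makes adversarial AI dangerous at scale: the perturbation loop is trivially automated, while the perturbed sample still behaves maliciously.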
Alert fatigue is real
While cyberattacks become more elaborate and sophisticated, the tools needed to fight them are becoming more complex, and at the same time it is becoming increasingly difficult to find people with the right skills. In an IBM Resilient and Ponemon study, 75 per cent of respondents said they faced moderate to high difficulty in hiring and retaining skilled cybersecurity personnel. The fact that skills in cybersecurity roles need to evolve continuously in line with the threat landscape further compounds the talent challenge. AI harvests information to help security analysts work faster and more efficiently. Security analysts can apply AI to train computers in the language of security using techniques such as natural language processing.
Another impediment for companies is the lack of timely insights that can help them arrive at the best conclusions and business choices. They often struggle to synthesize the required insights as the context becomes more complicated. Simply put, they are unable to access enough data in time. In a study by cybersecurity firm Fidelis, 83 per cent of surveyed companies admitted they could not process even half the alerts they received daily. Moreover, companies encounter roadblocks in their response as cyberattacks now happen at greater speed. Significantly, studies show that the longer it takes to address a data breach, the more expensive it becomes to remediate.
To scale at the same pace as cyberattacks and combat advanced tools and malware, companies need to augment traditional programming, which can only avert known patterns or threats, by combining their cybersecurity operations with AI.
AI-powered Behavioural Analysis can be a trusted advisor
Cognitive systems that learn and reason from their interactions with humans can augment rule-based programming, and their capability grows with every interaction. With AI and analytics, companies can improve threat-detection time and accuracy. They can use predictive analytics to identify network anomalies, detect malware, and analyze user behaviour patterns to determine risky users within the company, potentially thwarting fraud and insider threats. By applying AI to behavioural biometrics, they can identify users more accurately based on their keyboard strokes, mouse movements, or use of mobile devices. This enhances cybersecurity while fostering an improved and seamless user experience. One area organizations need to focus on while applying AI is taking a closer look at their own models as well as their partners' AI security tools, to ensure they are trustworthy and that AI bias does not affect security outcomes. Close monitoring of the algorithms and the input data, coupled with training for security teams in the diverse facets of a problem, is critical to keeping AI bias in check.
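The behavioural-biometrics idea can be sketched very simply. The following toy example (the timing data and the 3-sigma rule are illustrative assumptions, not a real biometric product) learns one user's typing rhythm and flags sessions whose inter-keystroke intervals deviate sharply from that baseline:

```python
# Minimal sketch of behavioural anomaly detection on keystroke timing.
# The enrolment data and the 3-sigma rule are illustrative assumptions.
import statistics

def build_profile(intervals):
    """Summarize a user's typing rhythm as the mean and standard
    deviation of their inter-keystroke intervals (milliseconds)."""
    return statistics.mean(intervals), statistics.stdev(intervals)

def is_anomalous(profile, interval, sigma=3.0):
    """Flag an interval that deviates more than `sigma` standard
    deviations from the enrolled profile."""
    mean, stdev = profile
    return abs(interval - mean) > sigma * stdev

# Hypothetical enrolment data for one user (ms between keystrokes).
baseline = [110, 105, 98, 120, 115, 102, 108, 112, 99, 117]
profile = build_profile(baseline)

print(is_anomalous(profile, 111))  # typical rhythm -> False
print(is_anomalous(profile, 420))  # wildly different rhythm -> True
```

A production system would model many more signals (mouse movement, device posture) and update the profile continuously, but the core pattern is the same: learn normal behaviour, then score deviations from it.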
Another important aspect is that AI can enable the contextualization of data insights and machine learning logic to help companies prioritize the most important threat alerts. AI and analytics allow security orchestration to automatically block threats, correct problems, respond to attacks and automate low-level alerts based on prior examples.
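Contextual prioritization of alerts can be sketched as a simple scoring-and-sorting step. Everything below (the boost factors, the context signals, the alert records) is hypothetical, meant only to show the shape of the idea: base severity is amplified by context, and the queue is reordered so analysts see the riskiest alerts first.

```python
# Hypothetical sketch of contextual alert prioritization. Boost factors
# and context signals are invented for illustration.

CONTEXT_BOOSTS = {"critical_asset": 2.0, "repeat_offender": 1.5}

def priority(alert):
    """Base severity (1 low .. 10 high), multiplied by a boost for
    each context signal attached to the alert."""
    score = alert["severity"]
    for signal in alert.get("context", []):
        score *= CONTEXT_BOOSTS.get(signal, 1.0)
    return score

alerts = [
    {"id": "A1", "severity": 8, "context": []},
    {"id": "A2", "severity": 4, "context": ["critical_asset", "repeat_offender"]},
    {"id": "A3", "severity": 5, "context": ["critical_asset"]},
]

# Highest-priority alerts first: context lifts a "medium" alert on a
# critical asset above a context-free "high" one.
triage_queue = sorted(alerts, key=priority, reverse=True)
```

The point of the example is the inversion: alert A2 has the lowest raw severity but, given its context, lands at the top of the queue.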
Productized AI to stop weaponized AI
Companies can rely on good AI to create models that can tackle adversarial AI and AI-powered attacks. AI models can be hardened to make them more robust against malicious inputs from adversarial AI. Companies can look at using AI in detectors to go beyond rule-based security, reasoning, and automation, and so enhance the effectiveness of their security operations. Good AI outcomes cannot be built overnight; they need the same rigour and training as any other good product development process. The approach of building customized and automated AI to identify attacks might sound like a silver bullet, but companies should recognize the fallacy of this approach: AI is a security asset, and in conjunction with their security teams and traditional programming, it can help them fight on the evolving cyber battlefield. One thing is certain: AI will tip the balance in this cyberwar. What remains to be seen is how companies productize and befriend AI to tip that balance in their favour.
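One common hardening idea is to train on perturbed copies of the data, in the spirit of adversarial training, so the decision boundary is less brittle near the edge an attacker probes. The toy below is a loose sketch of that data-augmentation idea under invented numbers: a one-dimensional detector whose threshold, after augmentation, sits lower and so still catches a malicious sample that has been nudged downward.

```python
# Loose sketch of hardening via perturbation-augmented training.
# The samples, the epsilon, and the midpoint "training" rule are all
# invented for illustration; real adversarial training perturbs inputs
# against the model's own gradient.
import random

random.seed(0)  # deterministic for the example

def train_threshold(benign, malicious):
    """'Train' a 1-D detector: place the threshold midway between
    the highest benign score and the lowest malicious score."""
    return (max(benign) + min(malicious)) / 2

def perturb(samples, eps=0.3):
    """Generate shifted copies of each sample within an eps-ball,
    standing in for adversarially crafted variants."""
    return [x + random.uniform(-eps, eps) for x in samples]

benign = [0.1, 0.2, 0.3]
malicious = [0.9, 1.0, 1.1]

naive = train_threshold(benign, malicious)
# Hardened: train on the originals plus their perturbed variants, so
# the threshold accounts for malicious samples shifted toward benign.
hardened = train_threshold(benign + perturb(benign),
                           malicious + perturb(malicious))
```

Because the augmented training set includes malicious samples shifted toward the benign region, the hardened threshold ends up below the naive one, which is exactly the margin an evasion attack tries to exploit.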
Views expressed above are the author's own.