The use of machine learning has skyrocketed over the past few years, as advances in the technology have made high-performance computing accessible to almost every business. Businesses now use machine learning in cybersecurity, social networks, e-commerce websites, search engines, video streaming platforms and more. As organizations and users increasingly rely on machine learning-based applications, security experts have begun warning about adversaries abusing the technology.
Attackers can use data poisoning to severely degrade machine learning systems, which are highly vulnerable to data manipulation. Cybersecurity experts refer to such attacks, which exploit the way models learn from data, as adversarial machine learning. Adversarial machine learning can be a massive threat to an organization's business operations: affected applications may produce inaccurate results that drastically disrupt business processes. Business leaders need to be mindful of data poisoning so they can create proactive strategies to prevent and mitigate such attacks.
Before creating effective strategies to protect machine learning systems, it is essential to understand what data poisoning is and how it can affect businesses. Data poisoning attacks contaminate a machine learning model's training data, severely impairing the model's ability to produce accurate predictions. To achieve this, attackers insert custom-made adversarial data into the data sets used to train the model, and the manipulated data is almost undetectable. The duration of a data poisoning attack varies with a model's training cycle; in some cases, a successful attack may take weeks.
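To make the mechanics concrete, here is a minimal sketch of a label-flipping poisoning attack built on scikit-learn's synthetic data. The 20 percent flip rate and the choice of logistic regression are illustrative assumptions, not details of any real incident.

```python
# Minimal sketch: label-flipping data poisoning on a synthetic data set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison: flip the labels of 20% of the training samples, as an attacker
# with write access to the training pipeline might.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=len(poisoned_y) // 5, replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```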
Data poisoning attacks can be performed in either a black box or a white box scenario. In a black box scenario, the attacker has no inside knowledge and instead targets models whose classifiers learn from user feedback. In a white box scenario, the attacker illegally gains access to the model and its training data at some point in the supply chain, a risk that grows when the data is gathered from many sources.
Data poisoning attacks can allow attackers to extract confidential information from the training data via corrupted data samples. Attackers can also disguise inputs to trick a machine learning model into misclassifying them. In addition, data poisoning enables adversaries to reverse-engineer a machine learning model, replicating and analyzing it locally to prepare more advanced attacks.
Attackers are already using data poisoning to target big players in the tech industry that use machine learning in cybersecurity. A few years ago, Google revealed that Gmail's spam filter had been compromised at least four times, with numerous spam emails going unmarked. Attackers sent millions of emails designed to throw off the classifier and alter its definition of a spam email, allowing them to slip through malicious emails containing malware and other cybersecurity threats undetected.
Another example of data poisoning is Microsoft's Twitter chatbot, Tay, which was programmed to learn from and engage in casual conversation on Twitter. Malicious users fed offensive tweets into Tay's algorithm, turning the innocent chatbot offensive, and Microsoft had to shut Tay down just 16 hours after launch.
Preventing and mitigating data poisoning can be extremely tricky. Contaminated data is almost impossible to detect, and machine learning models are retrained on new data sets at intervals that depend on their use cases. Because data poisoning is a gradual process that unfolds over a number of training cycles, it is difficult to identify when a model's accuracy has begun to degrade.
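One way to notice gradual degradation is to score every retrained model against a trusted, frozen validation set that poisoned inputs can never touch. A minimal sketch, assuming such a holdout exists; the two-point alert threshold is an arbitrary illustration.

```python
# Sketch: score each retrained model against a trusted, frozen validation
# set and alert when accuracy drifts below the recorded baseline.
from sklearn.metrics import accuracy_score

DROP_THRESHOLD = 0.02  # alert if accuracy falls two points below baseline

def check_retrained_model(model, X_val, y_val, baseline_accuracy):
    """Return (accuracy, alert) for a newly retrained model."""
    accuracy = accuracy_score(y_val, model.predict(X_val))
    return accuracy, accuracy < baseline_accuracy - DROP_THRESHOLD
```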
Mitigating the damage done by data poisoning requires a time-consuming process: a historical analysis of all inputs to the affected classifiers to recognize and eliminate bad data samples, followed by retraining the machine learning model from a version that predates the attack. This procedure becomes incredibly complicated and expensive when dealing with large amounts of data or a large number of poisoning attacks, and as a result, an affected model may never be fully fixed.
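The recovery loop might look something like the following sketch, which audits historical training batches with an off-the-shelf outlier detector before retraining. The IsolationForest here is a stand-in for whatever per-classifier audit an organization actually uses.

```python
# Sketch: audit historical training batches, drop flagged samples, retrain.
import numpy as np
from sklearn.ensemble import IsolationForest

def audit_and_rebuild(historical_batches, train_fn):
    """historical_batches: chronological list of (X, y) arrays.
    train_fn: callable that fits a fresh model on the cleaned data."""
    clean_X, clean_y = [], []
    for X, y in historical_batches:
        keep = IsolationForest(random_state=0).fit(X).predict(X) == 1  # -1 = outlier
        clean_X.append(X[keep])
        clean_y.append(y[keep])
    return train_fn(np.vstack(clean_X), np.concatenate(clean_y))
```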
Considering the time-consuming and complicated process of detecting and mitigating data poisoning, businesses need a proactive approach to protecting machine learning models. Business leaders should account for the vulnerabilities of machine learning in their organization's cybersecurity strategy, consulting cybersecurity experts as needed to design measures that cover their machine learning systems.
Countering the Underrated Threat of Data Poisoning Facing Your Organization
Organizations can consider the following techniques to protect machine learning models from data poisoning:
Machine learning engineers and developers should focus on blocking attack attempts and detecting polluted data inputs before the next training cycle begins. To do this, they can perform regression testing, input validity checking, manual moderation, anomaly detection and rate limiting. This approach is simpler and more effective than trying to fix compromised models.
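As an illustration of input validity checking, the hypothetical gatekeeper below rejects candidate training samples that fail basic schema and range checks before they reach the training queue. The feature count, bounds and label set are made up for the example.

```python
# Hypothetical input-validity gate for candidate training samples.
NUM_FEATURES = 20
FEATURE_MIN, FEATURE_MAX = -10.0, 10.0
VALID_LABELS = {0, 1}

def validate_sample(features, label):
    """Return True only if a candidate training sample passes basic checks."""
    if len(features) != NUM_FEATURES:
        return False
    if not all(FEATURE_MIN <= f <= FEATURE_MAX for f in features):
        return False
    return label in VALID_LABELS

candidates = [([0.5] * 20, 1), ([999.0] * 20, 1), ([0.1] * 20, 7)]
accepted = [(x, y) for x, y in candidates if validate_sample(x, y)]
print(len(accepted))  # 1 -- out-of-range and unknown-label samples are rejected
```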
Developers can restrict how many inputs each unique user may contribute to the training data and assess the value of each input before it is used. A small group of users should not account for the majority of a machine learning model's training data. Developers can also compare newly trained classifiers with older ones by rolling the new version out to only a small set of users, as sketched below.
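Both controls can be sketched in a few lines, assuming each training sample arrives tagged with a user ID. The one percent cap and the promote-only-if-not-worse rule are illustrative choices, not recommendations from the article.

```python
# Sketch: per-user contribution cap plus a promotion check for retrains.
from collections import Counter
from sklearn.metrics import accuracy_score

MAX_USER_SHARE = 0.01  # no single user may supply more than 1% of the data

def enforce_user_cap(samples):
    """samples: list of (user_id, features, label); drop excess per user."""
    cap = max(1, int(len(samples) * MAX_USER_SHARE))
    counts, kept = Counter(), []
    for user_id, features, label in samples:
        if counts[user_id] < cap:
            counts[user_id] += 1
            kept.append((features, label))
    return kept

def safe_to_promote(new_model, old_model, X_val, y_val):
    """Only roll the retrained classifier out if it is not worse."""
    return (accuracy_score(y_val, new_model.predict(X_val))
            >= accuracy_score(y_val, old_model.predict(X_val)))
```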
Attackers need access to a lot of confidential information to execute a successful data poisoning attack, so organizations should be careful about sharing sensitive data and should enforce strong access controls for both the machine learning model and its data. Because protecting models and data is tied to how an organization handles cybersecurity in general, business leaders need to make safeguarding machine learning models part of the organization-wide cybersecurity strategy. Businesses can also restrict user permissions, enable multi-factor authentication, and use data and file versioning to keep data sets safer.
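Data and file versioning can be as simple as recording a hash of each approved training file and refusing to train when a hash no longer matches. A sketch, with a hypothetical JSON manifest format:

```python
# Sketch: data-set versioning via SHA-256 digests of approved files.
import hashlib
import json
import pathlib

def sha256_of(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def verify_dataset(manifest_path):
    """manifest: {"train.csv": "<sha256 at approval time>", ...}"""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    tampered = [f for f, digest in manifest.items() if sha256_of(f) != digest]
    if tampered:
        raise RuntimeError(f"refusing to train, files changed: {tampered}")
```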
Organizations regularly perform penetration tests against their systems and networks to identify vulnerabilities as part of their cybersecurity strategy. They can conduct similar tests on machine learning models to integrate machine learning into cybersecurity measures. Developers need to attack their own machine learning models to understand their vulnerabilities. Based on the insights gained from this technique, they can build defensive strategies to protect training data sets. Such attacks would also help developers identify what poisoned data points look like, allowing them to design mechanisms to discard contaminated data points.
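A self-attack harness might inject a known fraction of flipped labels into a copy of the training set and then measure whether a candidate defense actually flags them. The sketch below uses a simple nearest-neighbor label-agreement check as the stand-in defense; the thresholds are illustrative.

```python
# Self-attack harness: flip known labels, then test whether a defense
# flags them. The nearest-neighbor agreement check is one simple defense.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
rng = np.random.default_rng(1)
flipped = rng.choice(len(y), size=50, replace=False)   # the known "poison"
y_poisoned = y.copy()
y_poisoned[flipped] = 1 - y_poisoned[flipped]

# Defense under test: flag points whose label disagrees with most neighbors.
_, idx = NearestNeighbors(n_neighbors=11).fit(X).kneighbors(X)
neighbor_labels = y_poisoned[idx[:, 1:]]               # column 0 is the point itself
agreement = (neighbor_labels == y_poisoned[:, None]).mean(axis=1)
flagged = np.where(agreement < 0.3)[0]

caught = len(set(flagged) & set(flipped))
print(f"flagged {len(flagged)} points; {caught}/{len(flipped)} were true poisons")
```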
In a recent talk at the USENIX Enigma conference, Hyrum Anderson, Microsoft's principal architect of Trustworthy Machine Learning, presented a red team exercise in which his team reverse-engineered a machine learning model used by a resource provisioning service. Although the team didn't have direct access to the model, they found enough information about how it gathered its data to build a local replica, which let them test attacks without being detected by the actual system. Armed with what they learned, the team executed a successful attack that compromised the live system.
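The talk did not publish code, but the surrogate-modeling step can be sketched in general terms: query a black-box model, record its answers, and fit a local replica to the observed pairs. In this toy version, `victim_predict` is a hypothetical stand-in for the remote service; nothing here reflects the team's actual exercise.

```python
# Toy sketch: fit a local replica to a black-box model's observed answers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# The "victim": a model whose internals the attacker cannot see.
X_sec, y_sec = make_classification(n_samples=500, n_features=8, random_state=2)
_secret_model = RandomForestClassifier(random_state=2).fit(X_sec, y_sec)

def victim_predict(X):
    return _secret_model.predict(X)  # attacker only sees these outputs

# The attacker probes the service and fits a replica to its answers.
probes = rng.normal(size=(2000, 8))
replica = DecisionTreeClassifier(random_state=2).fit(probes, victim_predict(probes))

# Fidelity: how often the replica agrees with the black box on new inputs.
test = rng.normal(size=(500, 8))
fidelity = (replica.predict(test) == victim_predict(test)).mean()
print(f"replica matches the black box on {fidelity:.0%} of test queries")
```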
Businesses can perform similar processes to identify weaknesses in their machine learning systems and develop effective security measures. Regularly testing machine learning models will help organizations protect their models against several existing cyber attacks as well as new attacks created by adversaries.
Developers and engineers can occasionally alter the machine learning algorithms and classifiers they use. If these changing algorithms and models are kept secret, they become harder to identify and attack; this is a moving-target strategy for protecting machine learning models. To execute it effectively, businesses may need to hire additional developers and cybersecurity experts to alter the models and test them for vulnerabilities.
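A toy version of the moving-target idea: keep a pool of independently trained classifiers and route each query to a randomly chosen one, so an attacker never probes a single stable decision boundary. The models and data below are illustrative.

```python
# Toy moving-target defense: randomly rotate among several classifiers.
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=12, random_state=3)
pool = [
    LogisticRegression(max_iter=1000).fit(X, y),
    RandomForestClassifier(random_state=3).fit(X, y),
    SVC(random_state=3).fit(X, y),
]

def predict(sample: np.ndarray) -> int:
    """Route each prediction to a randomly selected model from the pool."""
    return random.choice(pool).predict(sample.reshape(1, -1))[0]

print(predict(X[0]))  # the answering model changes from call to call
```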
Adversarial machine learning may not seem like an immediate threat right now, but as machine learning is adopted across industries, it will become a force to reckon with. Data poisoning could prove especially dangerous in machine learning-based self-driving cars, where human lives are at risk. Hence, it is essential to start integrating machine learning into the cybersecurity workflow to ensure the safety of the data sets that machine learning systems depend on. There aren't yet sophisticated tools to protect machine learning models against data poisoning, since cybersecurity experts have only begun pointing out such threats in recent years. For now, businesses have to rely on holistic cybersecurity strategies that account for the safety of machine learning models, until far more sophisticated protective tools arrive.