Staying ahead of threat actors in the age of AI – Microsoft
Over the last year, the speed, scale, and sophistication of attacks have increased alongside the rapid development and adoption of AI. Defenders are only beginning to recognize and apply the power of generative AI to shift the cybersecurity balance in their favor and keep ahead of adversaries. At the same time, it is also important for us to understand how AI can be potentially misused in the hands of threat actors. In collaboration with OpenAI, today we are publishing research on emerging threats in the age of AI, focusing on identified activity associated with known threat actors, including prompt injections, attempted misuse of large language models (LLMs), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape. You can read OpenAI's blog on the research here. Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' use of AI. However, Microsoft and our partners continue to study this landscape closely.
The objective of Microsoft's partnership with OpenAI, including the release of this research, is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse. As part of this commitment, we have taken measures to disrupt assets and accounts associated with threat actors, improve the protection of OpenAI LLM technology and users from attack or abuse, and shape the guardrails and safety mechanisms around our models. In addition, we are deeply committed to using generative AI to disrupt threat actors and leverage the power of new tools, including Microsoft Copilot for Security, to elevate defenders everywhere.
The progress of technology creates a demand for strong cybersecurity and safety measures. For example, the White House's Executive Order on AI requires rigorous safety testing and government supervision for AI systems that have major impacts on national and economic security or public health and safety. Our actions enhancing the safeguards of our AI models and partnering with our ecosystem on the safe creation, implementation, and use of these models align with the Executive Order's call for comprehensive AI safety and security standards.
In line with Microsoft's leadership across AI and cybersecurity, today we are announcing principles shaping Microsoft's policy and actions mitigating the risks associated with the use of our AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and the cybercriminal syndicates we track.
These principles include:
Microsoft remains committed to responsible AI innovation, prioritizing the safety and integrity of our technologies with respect for human rights and ethical standards. The principles announced today build on Microsoft's Responsible AI practices, our voluntary commitments to advance responsible AI innovation, and the Azure OpenAI Code of Conduct. We are following these principles as part of our broader commitments to strengthening international law and norms and to advancing the goals of the Bletchley Declaration endorsed by 29 countries.
Because Microsoft and OpenAI's partnership extends to security, the companies can take action when known and emerging threat actors surface. Microsoft Threat Intelligence tracks more than 300 unique threat actors, including 160 nation-state actors, 50 ransomware groups, and many others. These adversaries employ various digital identities and attack infrastructures. Microsoft's experts and automated systems continually analyze and correlate these attributes, uncovering attackers' efforts to evade detection or expand their capabilities by leveraging new technologies. Consistent with preventing threat actors' actions across our technologies and working closely with partners, Microsoft continues to study threat actors' use of AI and LLMs, partner with OpenAI to monitor attack activity, and apply what we learn to continually improve defenses. This blog provides an overview of observed activities collected from known threat actor infrastructure as identified by Microsoft Threat Intelligence, then shared with OpenAI to identify potential malicious use or abuse of their platform and protect our mutual customers from future threats or harm.
Recognizing the rapid growth of AI and the emergent use of LLMs in cyber operations, we continue to work with MITRE to integrate these LLM-themed tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK framework or MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) knowledge base. This strategic expansion reflects a commitment to not only track and neutralize threats, but also to pioneer the development of countermeasures in the evolving landscape of AI-powered cyber operations. A full list of the LLM-themed TTPs, which include those we identified during our investigations, is summarized in the appendix.
The threat ecosystem over the last several years has revealed a consistent theme of threat actors following trends in technology in parallel with their defender counterparts. Threat actors, like defenders, are looking at AI, including LLMs, to enhance their productivity and take advantage of accessible platforms that could advance their objectives and attack techniques. Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent. On the defender side, hardening these same security controls from attacks and implementing equally sophisticated monitoring that anticipates and blocks malicious activity is vital.
While different threat actors' motives and complexity vary, they have common tasks to perform in the course of targeting and attacks. These include reconnaissance, such as learning about potential victims' industries, locations, and relationships; assistance with coding, such as improving software scripts and developing malware; and assistance with learning and using native languages. Language support is a natural feature of LLMs and is attractive for threat actors with a continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships.
Importantly, our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely. At the same time, we feel this is important research to publish to expose early-stage, incremental moves that we observe well-known threat actors attempting, and to share with the defender community information on how we are blocking and countering them.
While attackers will remain interested in AI and probe technologies' current capabilities and security controls, it's important to keep these risks in context. As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts.
The threat actors profiled below are a sample of observed activity we believe best represents the TTPs the industry will need to better track using MITRE ATT&CK framework or MITRE ATLAS knowledge base updates.
Forest Blizzard (STRONTIUM) is a Russian military intelligence actor linked to GRU Unit 26165, who has targeted victims of both tactical and strategic interest to the Russian government. Their activities span a variety of sectors including defense, transportation/logistics, government, energy, non-governmental organizations (NGOs), and information technology. Forest Blizzard has been extremely active in targeting organizations in and related to Russia's war in Ukraine throughout the duration of the conflict, and Microsoft assesses that Forest Blizzard operations play a significant supporting role to Russia's foreign policy and military objectives both in Ukraine and in the broader international community. Forest Blizzard overlaps with the threat actor tracked by other researchers as APT28 and Fancy Bear.
Forest Blizzard's use of LLMs has involved research into various satellite and radar technologies that may pertain to conventional military operations in Ukraine, as well as generic research aimed at supporting their cyber operations. Based on these observations, we map and classify these TTPs using the following descriptions:
Similar to Salmon Typhoon's LLM interactions, Microsoft observed engagement from Forest Blizzard that was representative of an adversary exploring the use cases of a new technology. As with other adversaries, all accounts and assets associated with Forest Blizzard have been disabled.
Emerald Sleet (THALLIUM) is a North Korean threat actor that has remained highly active throughout 2023. Their recent operations relied on spear-phishing emails to compromise and gather intelligence from prominent individuals with expertise on North Korea. Microsoft observed Emerald Sleet impersonating reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. Emerald Sleet overlaps with threat actors tracked by other researchers as Kimsuky and Velvet Chollima.
Emerald Sleet's use of LLMs has been in support of this activity and involved research into think tanks and experts on North Korea, as well as the generation of content likely to be used in spear-phishing campaigns. Emerald Sleet also interacted with LLMs to understand publicly known vulnerabilities, to troubleshoot technical issues, and for assistance with using various web technologies. Based on these observations, we map and classify these TTPs using the following descriptions:
All accounts and assets associated with Emerald Sleet have been disabled.
Crimson Sandstorm (CURIUM) is an Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps (IRGC). Active since at least 2017, Crimson Sandstorm has targeted multiple sectors, including defense, maritime shipping, transportation, healthcare, and technology. These operations have frequently relied on watering hole attacks and social engineering to deliver custom .NET malware. Prior research also identified custom Crimson Sandstorm malware using email-based command-and-control (C2) channels. Crimson Sandstorm overlaps with the threat actor tracked by other researchers as Tortoiseshell, Imperial Kitten, and Yellow Liderc.
The use of LLMs by Crimson Sandstorm has reflected the broader behaviors that the security community has observed from this threat actor. Interactions have involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine. Based on these observations, we map and classify these TTPs using the following descriptions:
All accounts and assets associated with Crimson Sandstorm have been disabled.
Charcoal Typhoon (CHROMIUM) is a Chinese state-affiliated threat actor with a broad operational scope. They are known for targeting sectors that include government, higher education, communications infrastructure, oil & gas, and information technology. Their activities have predominantly focused on entities within Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, with observed interests extending to institutions and individuals globally who oppose China's policies. Charcoal Typhoon overlaps with the threat actor tracked by other researchers as Aquatic Panda, ControlX, RedHotel, and BRONZE UNIVERSITY.
In recent operations, Charcoal Typhoon has been observed interacting with LLMs in ways that suggest a limited exploration of how LLMs can augment their technical operations. This has consisted of using LLMs to support tooling development, scripting, understanding various commodity cybersecurity tools, and for generating content that could be used to social engineer targets. Based on these observations, we map and classify these TTPs using the following descriptions:
All associated accounts and assets of Charcoal Typhoon have been disabled, reaffirming our commitment to safeguarding against the misuse of AI technologies.
Salmon Typhoon (SODIUM) is a sophisticated Chinese state-affiliated threat actor with a history of targeting US defense contractors, government agencies, and entities within the cryptographic technology sector. This threat actor has demonstrated its capabilities through the deployment of malware, such as Win32/Wkysol, to maintain remote access to compromised systems. With over a decade of operations marked by intermittent periods of dormancy and resurgence, Salmon Typhoon has recently shown renewed activity. Salmon Typhoon overlaps with the threat actor tracked by other researchers as APT4 and Maverick Panda.
Notably, Salmon Typhoon's interactions with LLMs throughout 2023 appear exploratory and suggest that this threat actor is evaluating the effectiveness of LLMs in sourcing information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement with LLMs could reflect both a broadening of their intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies.
Based on these observations, we map and classify these TTPs using the following descriptions:
Salmon Typhoon's engagement with LLMs aligns with patterns observed by Microsoft, reflecting traditional behaviors in a new technological arena. In response, all accounts and assets associated with Salmon Typhoon have been disabled.
In closing, AI technologies will continue to evolve and be studied by various threat actors. Microsoft will continue to track threat actors and malicious activity misusing LLMs, and work with OpenAI and other partners to share intelligence, improve protections for customers, and aid the broader security community.
Using insights from our analysis above, as well as other potential misuse of AI, we're sharing the below list of LLM-themed TTPs that we map and classify to the MITRE ATT&CK framework or MITRE ATLAS knowledge base to equip the community with a common taxonomy to collectively track malicious use of LLMs and create countermeasures: