Archive for the ‘Artificial Intelligence’ Category

How Artificial Intelligence Can Help Fight Fires in the West – Governing

(TNS) With wildfires becoming bigger and more destructive as the West dries out and heats up, agencies and officials tasked with preventing and battling the blazes could soon have a new tool to add to their arsenal of prescribed burns, pickaxes, chain saws and aircraft.

The high-tech help could come by way of an area not normally associated with fighting wildfires: artificial intelligence. And space.

Lockheed Martin Space, based in Jefferson County, Colo., is tapping decades of experience managing satellites, exploring space and providing information for the U.S. military to offer ground crews more accurate data more quickly. The company is talking to the U.S. Forest Service, university researchers and a Colorado state agency about how its technology could help.

"The scenario that wildland fire operators and commanders work in is very similar to that of the organizations and folks who defend our homeland and allies. It's a dynamic environment across multiple activities and responsibilities," said Dan Lordan, senior manager for AI integration at Lockheed Martin's Artificial Intelligence Center.

Lockheed Martin aims to use its technology developed over years in other areas to reduce the time it takes to gather information and make decisions about wildfires, said Rich Carter, business development director for Lockheed Martin Space's Mission Solutions.

The concept of a regular fire season has all but vanished as drought and warmer temperatures make Western lands ripe for ignition. At the end of December, the Marshall fire burned 991 homes and killed two people in Boulder County. The Denver area just experienced its third-driest April on record, with only 0.06 of an inch of moisture, according to the National Weather Service.

Colorado had more fire-weather alerts this April than in any April in the past 15 years. Crews have quickly contained wind-driven fires that forced evacuations along the Front Range and on the Eastern Plains. But six families in Monte Vista lost their homes in April when a fire burned part of the southern Colorado town.

Since 2014, the Colorado Division of Fire Prevention and Control has flown planes equipped with infrared and color sensors to detect wildfires and provide the most up-to-date information possible to crews on the ground. The onboard equipment is integrated with the Colorado Wildfire Information System, a database that provides images and details to local fire managers.

"Last year we found almost 200 new fires that nobody knew anything about," said Bruce Dikken, unit chief for the agency's multi-mission aircraft program. "I don't know if any of those 200 fires would have become big fires. I know they didn't become big fires because we found them."

When the two Pilatus PC-12 airplanes began flying in 2014, Colorado was the only state with such a program conveying the information in near real time, Dikken said. Lockheed Martin representatives have recently spent time in the air on the planes to see whether the company's AI can speed up the process.

"We don't find every single fire that we fly over and it can certainly be faster if we could employ some kind of technology that might, for instance, automatically draw the fire perimeter," Dikken said. "Right now, it's very much a manual process."
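
To make the automation Dikken describes concrete, here is a minimal sketch of how a perimeter might be drawn automatically from a single airborne infrared frame. The threshold, file name and OpenCV-based approach are illustrative assumptions, not the state's or Lockheed Martin's actual pipeline.

```python
# Hypothetical sketch: trace a fire perimeter from one airborne infrared
# frame by thresholding hot pixels and extracting the outer contour.
# The threshold value and file name are illustrative assumptions.
import cv2
import numpy as np

def fire_perimeter(ir_frame: np.ndarray, hot_threshold: int = 200) -> np.ndarray:
    """Return the largest hot region's outline as an (N, 2) pixel polygon."""
    # Isolate pixels radiating above the assumed "actively burning" level.
    _, mask = cv2.threshold(ir_frame, hot_threshold, 255, cv2.THRESH_BINARY)
    # Close small gaps so one fire reads as one region, not many fragments.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.empty((0, 2), dtype=np.int32)
    # Take the largest contour as the main fire perimeter.
    return max(contours, key=cv2.contourArea).reshape(-1, 2)

ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed 8-bit IR frame
assert ir is not None, "expected an infrared frame on disk"
print(f"perimeter vertices: {len(fire_perimeter(ir))}")
```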

Something like the 2020 Cameron Peak fire, which at 208,663 acres is Colorado's largest wildfire, could take hours to map, Dikken said.

And often the people on the planes are tracking several fires at the same time. Dikken said the faster they can collect and process the data on a fire's perimeter, the faster they can move to the next fire. "If it takes a couple of hours to map a fire, what I drew at the beginning may be a little bit different now," he said.

Lordan said Lockheed Martin engineers who have flown with the state crews, using the video and images gathered on the flights, have been able to produce fire maps in as little as 15 minutes.

The company has talked to the state about possibly carrying an additional computer that could help crunch all that information and transmit the map of the fire while still in flight to crews on the ground, Dikken said. The agency is waiting to hear the results of Lockheed Martin's experiences aboard the aircraft and how the AI might help the state, he added.

"They have a strong interest in applying their skills and capabilities to the wildland fire problem, and I think that would be welcome," said Mark Finney, a research forester with the U.S. Forest Service's fire sciences laboratory in Missoula.

The lab in Missoula has been involved in fire research since 1960 and developed most of the fire-management tools used for operations and planning, Finney said. "We're pretty well situated to understand where new things and capabilities might be of use in the future and some of these things certainly might be."

However, Lockheed Martin is focused on technology, and "that's not really been where the most effective use of our efforts would be," Finney said.

"Prevention and mitigation and preemptive kind of management activities are where the great opportunities are to change the trajectory we're on," Finney said. Improving reactive management is unlikely to yield huge benefits because the underlying source of the problem is the fuel structure across large landscapes as well as climate change.

Logging and prescribed burns, or fires started under controlled conditions, are some of the management practices used to get rid of fuel sources or create a more diverse landscape. But those methods have sometimes met resistance, Finney said.

As bad as the Cameron Peak fire was, Finney said the prescribed burns the Arapaho and Roosevelt National Forests did through the years blunted the blaze's intensity and changed the flames' movement in spots.

Unfortunately, they hadn't had time to finish their planned work, Finney said.

Lordan said the value of artificial intelligence, whether in preventing fires or responding to a fire, is producing accurate and timely information for fire managers, what he called "actionable intelligence."

One example, Lordan said, is information gathered and managed by federal agencies on the types and conditions of vegetation across the country. He said updates are done every two to three years. Lockheed Martin uses data from satellites managed by the European Space Agency that update the information about every five days.

Lockheed is working with Nvidia, a California software company, to produce a digital simulation of a wildfire based on an area's topography, condition of the vegetation, wind and weather to help forecast where and how it will burn. After the fact, the companies used the information about the Cameron Peak fire, plugging in the more timely satellite data on fuel conditions, and generated a video simulation that Lordan said was similar to the actual fire's behavior and movement.
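
The simulation itself is proprietary, but a toy cellular-automaton sketch conveys the general idea of forecasting spread from fuel, wind and terrain. Every parameter below is an invented placeholder, not Lockheed Martin's or Nvidia's model.

```python
# Minimal cellular-automaton sketch of wind-biased fire spread, illustrating
# the general idea behind simulation-based forecasts. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
N = 100
fuel = rng.uniform(0.2, 1.0, size=(N, N))   # fuel condition per cell (0=bare, 1=dense)
burning = np.zeros((N, N), dtype=bool)
burning[N // 2, N // 2] = True               # ignition point
burned = np.zeros_like(burning)

# Spread probability is biased by an assumed eastward wind: cells downwind of
# a burning cell ignite more often than cells upwind.
wind = {(-1, 0): 0.3, (1, 0): 0.3, (0, -1): 0.1, (0, 1): 0.6}

for step in range(60):
    new = np.zeros_like(burning)
    ys, xs = np.nonzero(burning)
    for y, x in zip(ys, xs):
        for (dy, dx), w in wind.items():
            ny, nx = y + dy, x + dx
            if 0 <= ny < N and 0 <= nx < N and not burned[ny, nx]:
                if rng.random() < w * fuel[ny, nx]:   # more fuel, easier ignition
                    new[ny, nx] = True
    burned |= burning
    burning = new & ~burned                  # cells burn once, then are spent

print(f"{burned.sum()} of {N * N} cells burned after 60 steps")
```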

While appreciating the help technology provides, both Dikken with the state of Colorado and Finney with the Forest Service said there will always be a need for ground-truthing by people.

Applying AI to fighting wildfires isn't about taking people out of the loop, Lockheed Martin spokesman Chip Eschenfelder said. "Somebody will always be in the loop, but people currently in the loop are besieged by so much data they can't sort through it fast enough. That's where this is coming from."

©2022 MediaNews Group, Inc. Distributed by Tribune Content Agency, LLC.

Continue reading here:
How Artificial Intelligence Can Help Fight Fires in the West - Governing

The Genesis of Artificial Intelligence and Digital Twins – InformationWeek

Artificial intelligence, machine learning, and digital twins -- why are we hearing so much about them and why do they suddenly seem critical? The simplest explanation is this: When something is too complex for a human to easily process or there is too little time for a human to make a critical decision, the only choice is to remove the human. That requires the ability to replicate the thought process a human might go through, which requires a lot of data and a deep understanding of the decision environment.

So why now? For decades, we saw huge advancements come primarily from the integration and shrinking of electronics. Smaller products, consuming less power, and offering dramatic increases in functionality per square inch were the hallmarks of technology progress.

Software applications also have evolved over the decades, one of the most notable ways being the dramatic acceleration of the application adoption cycle. In the past two decades alone, users have shifted at alarmingly fast rates from treating applications as novelties, to using them as a convenience, and then to expecting them to work flawlessly all the time. At each adoption stage, a user's expectations rise, meaning the product must evolve and mature at very fast, scalable rates.

The combination of the hardware and software trends formed a convergence of product development requirements. New critical-need applications suddenly must offer greater real-time processing capacity, time-sensitive decision-making, high to very high availability, and platform-generated decisions that are correct every time.

While most people think of AI primarily as an end-user resource, AI has become necessary for faster product design and development. From the earliest stage of a chipset design or layout of a circuit through end-product validation, emulators have become necessary for building complex interfaces and environments. These emulators, known as digital twins, are a virtual manifestation of a process, environmental condition, or protocol capable of serving as a known good signal. In test terms, a digital twin can be a simple signal generator, a full protocol generator or a complete environment emulator. Digital twins allow developers to rapidly create a significantly wider range of test conditions to validate their product before shipping. High-performance digital twins typically contain their own AI engines for troubleshooting and regression testing new product designs.
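
As a rough illustration of the "known good signal" idea, the following sketch emulates a noisy sensor so a device under test can be validated against a wider range of conditions than bench hardware allows. The names and the toy checker are assumptions, not any vendor's API.

```python
# Hedged sketch of a digital twin as a known-good signal generator: the twin
# emulates a sensor's output, including noise and dropout, so a device's
# decoder can be stress-tested. Everything here is illustrative.
import math
import random

def twin_sensor_signal(seconds: float, hz: float, noise: float, dropout_rate: float):
    """Yield (timestamp, value) samples emulating a noisy, occasionally
    dropping sensor; the twin serves as the reference the device is tested against."""
    t = 0.0
    while t < seconds:
        if random.random() > dropout_rate:            # emulate intermittent dropout
            value = math.sin(2 * math.pi * hz * t) + random.gauss(0, noise)
            yield t, value
        t += 0.001                                    # 1 kHz sample clock

def validate(device_decode, stress_levels):
    """Run the device's decoder against progressively harsher twin conditions."""
    for noise, dropout in stress_levels:
        samples = list(twin_sensor_signal(1.0, hz=50, noise=noise, dropout_rate=dropout))
        ok = device_decode(samples)
        print(f"noise={noise} dropout={dropout}: {'pass' if ok else 'fail'}")

# Example: a trivial "device" that just checks it received enough samples.
validate(lambda s: len(s) > 900, stress_levels=[(0.01, 0.0), (0.1, 0.05), (0.3, 0.2)])
```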

The shift to AI-driven development and digital twins has become necessary due to the amount of functionality and autonomous decision-making expected in new products. Basic design principles specify features and functionality of a product, then set up tests to validate them. The sheer number and complexity of interface standards makes that virtually impossible to construct by hand. By using digital twins, a much wider set of functional tests can be programmed in much less time. AI functionality then automates test processes based on what it discovers and predicts actions that might be needed. To understand this better, it's useful to understand the core of what makes any AI possible.

In its simplest form, software decision-making starts with algorithms. Basic algorithms run a set of calculations, and if you know what constitutes acceptable results, you can create a finite state machine using decision tree outcomes. This would hardly be considered intelligent. By adding a notion of state, however, and inserting a feedback loop, your basic algorithm can make outcome decisions a function of the current conditions compared to the current state. Combine this while evolving the decision tree into a behavior tree and you have formed the genesis of AI.
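
That progression can be shown in a few lines. The sketch below, under illustrative assumptions, moves from a stateless rule, to the same rule with a state feedback loop, to a tiny behavior-tree node that composes such rules.

```python
# Sketch of the progression described above; all rules and thresholds are
# invented for illustration.

def stateless_rule(reading: float) -> str:
    # Plain algorithm: the same input always yields the same output.
    return "alert" if reading > 0.8 else "ok"

class StatefulRule:
    """The same check, but outcomes now depend on current conditions
    compared with remembered state (a feedback loop)."""
    def __init__(self):
        self.prev = 0.0
    def step(self, reading: float) -> str:
        rising_fast = reading - self.prev > 0.3
        self.prev = reading                    # feed the outcome back as state
        return "alert" if (reading > 0.8 or rising_fast) else "ok"

class Selector:
    """Minimal behavior-tree node: try children in order and return the first
    non-"ok" result; composing rules this way is the structure the column
    calls the genesis of AI."""
    def __init__(self, *children):
        self.children = children
    def step(self, reading: float) -> str:
        for child in self.children:
            result = child.step(reading) if hasattr(child, "step") else child(reading)
            if result != "ok":
                return result
        return "ok"

tree = Selector(StatefulRule(), stateless_rule)
for r in (0.2, 0.6, 0.95):
    print(r, tree.step(r))
```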

The need for AI and digital twins is real, and when you question the veracity of one -- yours or someone else's -- go back to its genesis, otherwise known as the data. Data sources are the foundation of any digital assessment tool, and those sources determine the potential of an algorithm's modeling accuracy. If multiple data-rich sources are available, then the accuracy potential is high. If only basic data is available, the resulting algorithm or digital twin will not be accurate. This is something you can assess yourself.

We are at the early stage of AI, which means lots of products will be making lots of claims. Understanding what a product is supposed to deliver will allow you to assess it. Understanding which data sources it processes will tell you how accurately it can deliver the results the vendor promises. Digital twins are much further along in maturity -- especially those that emulate specific elements rather than entire ecosystems. Remember, though, that the more finite the environment, the more likely the digital twin will accurately replicate it.

We all want to understand how something works and how it produces its outcomes. With an understanding of the basic elements inside every AI system and digital twin, you can ask questions about their fundamental elements. If you get stuck, use the steps as a guide for questions to ask of the vendor. Most will share all or some of the key background or parameters to help you understand. If they don't, their competitors will.

Read the rest here:
The Genesis of Artificial Intelligence and Digital Twins - InformationWeek

Startups apply artificial intelligence to supply chain disruptions – Reuters

LONDON, May 3 (Reuters) - Over the last two years a series of unexpected events has scrambled global supply chains. Coronavirus, war in Ukraine, Brexit and a container ship wedged in the Suez Canal have combined to delay deliveries of everything from bicycles to pet food.

In response, a growing group of startups and established logistics firms has created a multi-billion dollar industry applying the latest technology to help businesses minimize the disruption.

Interos Inc, Fero Labs, KlearNow Corp and others are using artificial intelligence and other cutting-edge tools so manufacturers and their customers can react more swiftly to supplier snarl-ups, monitor raw material availability and get through the bureaucratic thicket of cross-border trade.

The market for new technology services focused on supply chains could be worth more than $20 billion a year in the next five years, analysts told Reuters. By 2025, more than 80% of new supply chain applications will use artificial intelligence and data science in some way, according to tech research firm Gartner.

"The world's gotten too complex to try to manage some of these things on spreadsheets," said Dwight Klappich, a Gartner analyst.

Interos, valued at more than $1 billion in its latest funding round, is one of the most successful in the nascent market. The Arlington, Virginia-based company says it has mapped out 400 million businesses globally and uses machine learning to monitor them on behalf of corporate customers, alerting them immediately when fire, flood, hacking or any other event causes a potential disruption.

Before Russian tanks rolled into Ukraine in February, the company had assessed the impact of an invasion. Interos said it identified about 500 U.S. firms with direct supplier relations with companies in Ukraine. Further down the chain Interos found 20,000 U.S. firms had links to second-tier suppliers in Ukraine and 100,000 U.S. firms had links to third-tier suppliers.

Chief Executive Jennifer Bisceglie said after the war started 700 companies approached Interos for help in assessing their exposure to suppliers in Ukraine and Russia. She said the company is developing a new product to game out other hypothetical supply chain disruption scenarios, such as China invading Taiwan, for customers to understand their exposure to risk and where to find alternative suppliers.

"Supply chain shocks are inevitable," Bisceglie told Reuters. "But I think we're going to get better at minimizing these disruptions."

U.S. airline Delta Air Lines Inc (DAL.N), which spends more than $7 billion a year on catering, uniforms and other goods on top of its plane and fuel budget, is one company using Interos to keep track of its 600 primary suppliers and 8,000 total suppliers.

"We're not expecting to avoid the next crisis," said Heather Ostis, Deltas supply chain chief. "But we're expecting to be a lot more efficient and effective than our competitors in how we assess risk when that happens."

Santa Clara, California-based KlearNow sells a platform that automates cumbersome paper-dominated customs clearance processes.

That has been a lifesaver for EED Foods, based in Doncaster, England, which imports Czech and Slovak sweets and smoked meats for expat customers in Britain.

"Before Brexit we were very scared we would have to shut down," said Elena Ostrerova, EED's purchasing manager. "But instead we are busy as never before."

Ostrerova says her company is still growing at an annual rate of 40% after Brexit's new customs rules took effect in early 2021, partly because some competitors gave up rather than tackle the onerous new paperwork for importing from the European Union.

She said KlearNow's customs clearance platform keeps track of its hundreds of shipments from Central Europe, tallying totals on thousands of items, correcting mistakes on everything from country of origin to gross net weight, and providing an entry number - under which all the information about a shipment is contained - for the company hauling it to Britain.

"We have minimum human involvement," Ostrerova said, which saves the company time and the cost of manual data input.

Berk Birand, CEO of New York-based Fero Labs, said the coronavirus pandemic highlighted the need for manufacturers to adapt to changing suppliers so that they can continue to make identical products, no matter the origin of the raw materials.

The startup's platform uses machine learning to monitor and adapt to how raw materials from different suppliers affect product quality, from varying impurities in steel to the level of viscosity in a surfactant, a key ingredient in shampoo. The system then communicates with plant engineers to tweak manufacturing processes so that product consistency is maintained.
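
A hedged sketch of that closed loop, using synthetic data and scikit-learn rather than Fero Labs' actual product: fit a model mapping an impurity level and a controllable process setting to quality, then search the setting for a new supplier's material. All variables and numbers are invented.

```python
# Illustrative only (not Fero Labs' system): learn how impurity and a process
# setting drive quality, then look for a setting that holds quality when a
# new supplier's feedstock arrives with higher impurity.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
impurity = rng.uniform(0.0, 2.0, 200)        # impurity level in the feedstock
temperature = rng.uniform(300, 400, 200)     # process setting under our control
quality = 90 - 4 * impurity + 0.05 * (temperature - 350) + rng.normal(0, 0.5, 200)

X = np.column_stack([impurity, temperature])
model = LinearRegression().fit(X, quality)

# A new supplier's batch has higher impurity; sweep the setting and keep the
# range the model predicts will stay above the quality target.
new_impurity = 1.8
temps = np.linspace(300, 400, 101)
preds = model.predict(np.column_stack([np.full_like(temps, new_impurity), temps]))

target = 85.0
feasible = temps[preds >= target]
if feasible.size:
    print(f"settings predicted to hold quality >= {target}: "
          f"{feasible.min():.0f}-{feasible.max():.0f}")
else:
    print("no setting recovers the target quality; flag for process engineers")
```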

Dave DeWalt, founder of venture capital firm NightDragon, which led Interos' $100 million Series C funding round last year, says regulators are going to take much greater interest in supply chain risk.

"If you have a supply chain issue that could cost you major shareholder value, you'll have a major responsibility too," DeWalt said. "I believe that's coming in the near future."

Major logistics firms are also deploying machine learning to boost their competitiveness. U.S. truck fleet operator Ryder System Inc (R.N) uses the real-time data from its fleet, and those of its customers and partners, to create algorithms for predicting traffic patterns, truck availability and pricing.

Silicon Valley venture capital firm Autotech Ventures has invested in both KlearNow and newtrul, which aggregates data from transport management systems in America's highly fragmented trucking sector to predict pricing changes.

"Mapping your supply chain and interconnectivity at the individual part level is the Holy Grail," said Autotech partner Burak Cendek.

Reporting by Nick Carey; Editing by Bill Rigby

Read more:
Startups apply artificial intelligence to supply chain disruptions - Reuters

Applications for artificial intelligence in Department of Defense cyber missions – Microsoft On the Issues – Microsoft

Editor's note: On May 3, Eric Horvitz, Chief Scientific Officer, testified before the U.S. Senate Armed Services Committee Subcommittee on Cybersecurity for a hearing on the use of AI in Department of Defense cyber missions. Read Eric Horvitz's written testimony below and watch the hearing here.

Chairman Manchin, Ranking Member Rounds, and Members of the Subcommittee, thank you for the opportunity to share insights about the impact of artificial intelligence (AI) on cybersecurity. I applaud the Subcommittee for its foresight and leadership in holding a hearing on this critically important topic. Microsoft is committed to working collaboratively with you to help ensure new advances in AI and cybersecurity benefit our country and society more broadly.

My perspective is grounded in my experiences working across industry, academia, scientific agencies, and government. As Microsoft's Chief Scientific Officer, I provide leadership and perspectives on scientific advances and trends at the frontiers of our understanding, and on issues and opportunities arising at the intersection of technology, people, and society. I have been pursuing and managing research on principles and applications of AI technologies for several decades, starting with my doctoral work at Stanford University. I served as a Commissioner on the National Security Commission on AI (NSCAI), was president of the Association for the Advancement of Artificial Intelligence (AAAI), and chaired the Section on Computing, Information, and Communication of the American Association for the Advancement of Science (AAAS). I am a member of the National Academy of Engineering (NAE) and the American Academy of Arts and Sciences. I currently serve on the President's Council of Advisors on Science and Technology (PCAST) and on the Computer Science and Telecommunications Board (CSTB) of the National Academies of Sciences.

I will cover in my testimony four key areas of attention at the intersection of AI and cybersecurity that warrant deeper understanding and thoughtful action: advancing cybersecurity with AI, AI-powered cyberattacks, special vulnerabilities of AI systems, and AI in malign information operations.

Before covering these topics, I will provide brief updates on the cybersecurity landscape and on recent progress in AI. I'll conclude my testimony with reflections on directions ahead.

1. Cybersecurity's changing landscape

Attacks on computing systems and infrastructure continue to grow in complexity, speed, frequency, and scale. We have seen new attack techniques and the exploitation of new attack surfaces aimed at disrupting critical infrastructure and accessing confidential data.[1] In 2021 alone, the Microsoft 365 Defender suite, supported by AI techniques, blocked more than 9.6 billion malware threats, 35.7 billion phishing and malicious emails, and 25.6 billion attempts to hijack customer accounts targeting both enterprise and consumer devices.[2],[3] Multiple independent reports have characterized the nature and status of different forms of cyberattack.[4] As detailed in Microsoft's recent Digital Defense Report,[5] cyber criminals and nation-state actors continue to adapt their techniques to exploit new vulnerabilities and counter cyber defenses.

To help mitigate these concerning trends, the U.S. government has taken significant steps forward to secure our cyber ecosystem. Congress enacted several recommendations that came out of the Cyberspace Solarium Commission, such as creating the Office of the National Cyber Director and enacting cyber incident reporting legislation. Almost a year ago, the Administration issued Executive Order (E.O.) 14028, Improving the Nation's Cybersecurity, which directs agencies to develop and implement a variety of initiatives to raise the bar on cybersecurity in areas such as supply chain security, and requires agencies to adopt a zero-trust model. Microsoft has worked diligently to meet deadlines specified in the E.O. on cybersecurity and we support these efforts to encourage a cohesive response to evolving cyber threats.

We expect to face continuing efforts by creative and tireless state and non-state actors who will attempt to attack computing systems with the latest available technologies. We need to continue to work proactively and reactively to address threats and to note changes in systems, technologies, and patterns of usage. On the latter, cybersecurity challenges have been exacerbated by the increasing fluidity between online work and personal activities as daily routines have become more intertwined.[6] The large-scale shift to a paradigm of hybrid work coming with the COVID-19 pandemic has moved workers further away from traditional, controlled environments. Cybersecurity solutions must enable people to work productively and securely across various devices from a variety of non-traditional locations.

2. Advancements in Artificial Intelligence

Artificial intelligence is an area of computer science focused on developing principles and mechanisms to solve tasks that are typically associated with human cognition, such as perception, reasoning, language, and learning. Numerous milestones have been achieved in AI theory and applications over the 67 years since the phrase "artificial intelligence" was first used in a funding proposal that laid out a surprisingly modern vision for the field.[7]

Particularly stunning progress has been made over the last decade, spanning advances in machine vision (e.g., object recognition), natural language understanding, speech recognition, automated diagnosis, reasoning, robotics, and machine learning (procedures for learning from data). Many impressive gains across subdisciplines of AI are attributed to a machine learning methodology named deep neural networks (DNNs). DNNs have delivered unprecedented accuracy when fueled by large amounts of data and computational resources.

Breakthroughs in accuracy include performances that exceed human baselines for a number of specific benchmarks, including sets of skills across vision and language subtasks. While AI scientists remain mystified by the powers of human intellect, the rate of progress has surprised even seasoned experts.

Jumps in core AI capabilities have led to impressive demonstrations and real-world applications, including systems designed to advise decision makers, generate textual and visual content, and to provide new forms of automation, such as the control of autonomous and semi-autonomous vehicles.

AI technologies can be harnessed to inject new efficiencies and efficacies into existing workflows and processes. The methods also can be used to introduce fundamentally new approaches to standing challenges. When deployed in a responsible and insightful manner, AI technologies can enhance the quality of the lives of our citizenry and add to the vibrancy of our nation and world. For example, AI technologies show great promise in enhancing healthcare via providing physicians with assistance on diagnostic challenges, guidance on optimizing therapies, and inferences about the structure and interaction of proteins that lead to new medications.

AI advances have important implications for the Department of Defense, our intelligence community, and our national security more broadly. Like any technology, the rising capabilities of AI are available to friends and foes alike. Thus, in addition to harnessing AI for making valuable contributions to people and society, we must continue to work to understand and address the possibilities that the technologies can be used by malevolent actors and adversaries to disrupt, interfere, and destroy. AI has important implications for cybersecurity as the technologies can provide both new powers for defending against cyberattacks and new capabilities to adversaries.

3. Advancing Cybersecurity with AI

The value of harnessing AI in cybersecurity applications is becoming increasingly clear. Amongst many capabilities, AI technologies can provide automated interpretation of signals generated during attacks, effective threat incident prioritization, and adaptive responses to address the speed and scale of adversarial actions. The methods show great promise for swiftly analyzing and correlating patterns across billions of data points to track down a wide variety of cyber threats on the order of seconds. Additionally, AI can continually learn and adapt to new attack patterns, drawing insights from past observations to detect similar attacks that occur in the future.

3.1 Assisting and Complementing Workforce

The power of automation and large-scale detection, prioritization, and response made possible by AI technologies can not only relieve the burden on cybersecurity professionals but also help with the growing workforce gap. On the challenges to the current cyber workforce: the U.S. Bureau of Labor Statistics estimates cybersecurity job opportunities will grow 33% from 2020 to 2030, more than six times the national average.[8] However, the number of people entering the field is not keeping pace. There is a global shortage of 2.72 million cybersecurity professionals, according to the 2021 (ISC)2 Cybersecurity Workforce Study released in October 2021.[9]

Organizations that prioritize cybersecurity run security operations teams 24/7. Still, there are often far more alerts to analyze than there are analysts to triage them, resulting in missed alerts that evolve into breaches. Trend Micro released a survey in May 2021 of security operations center decision makers that showed that 51% feel their team is overwhelmed with the overall volume of alerts, 55% are not confident in their ability to efficiently prioritize and respond to alerts, and that 27% of their time is spent dealing with false positives.[10]

AI technologies enable defenders to effectively scale their protection capabilities and to orchestrate and automate time-consuming, repetitive, and complicated response actions. These methods can enable cybersecurity teams to handle large volumes of classical threats in more relevant time frames with less human intervention and better results. Such support with scaling on the essentials can free cybersecurity professionals to focus on and prioritize those attacks that require specialized expertise, critical thinking, and creative problem solving. However, additional attention should also be given to general cybersecurity training, security awareness, secure development lifecycle practices, and simulated training modules, including using AI to run intelligent and personalized simulations.

3.2 AI at Multiple Stages of Security

Today, AI methods are being harnessed across all stages of security, including prevention, detection, investigation and remediation, discovery and classification, threat intelligence, and security training and simulations. I will discuss each of these applications in turn.

Prevention. Prevention encompasses efforts to reduce the vulnerability of software to attack, including user identities and data, computing system endpoints, and cloud applications. AI methods are currently used in commercially available technologies to detect and block both known and previously unknown threats before they can cause harm. In 2021, AV-Test Institute observed over 125 million new malware threats.[11] The ability of machine learning techniques to generalize from past patterns to catch new malware variants is key to being able to protect users at scale.

As an example, last year Microsoft 365 Defender successfully blocked a file that would later be confirmed as a variant of the GoldMax malware. Defender had never seen the new variant of GoldMax. The malware was caught and blocked leveraging the power of an AI pattern recognizer working together with a technology known as fuzzy hashing, a means for taking a fingerprint of malware.[12] It is important to note that GoldMax is malware that persists on networks, feigning to be a scheduled task by impersonating the activities of systems management software. Such hiding out as a scheduled task is part of the tools, tactics, and procedures of NOBELIUM, the Russian state actor behind the attacks against SolarWinds in December 2020 and which the U.S. government and others have identified as being part of Russia's foreign intelligence service known as the SVR.
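
For intuition, here is a simplified, self-contained illustration of the fuzzy-hashing idea (context-triggered piecewise hashing, the family that tools such as ssdeep belong to). Production fingerprints, and the deep-learning pairing described above, are far more sophisticated; every parameter below is an invented toy.

```python
# Toy fuzzy hashing: split data at content-defined boundaries and hash each
# chunk. A local edit changes only nearby chunks, so similarity stays high,
# whereas an exact whole-file hash changes completely.
import hashlib
import random

def fuzzy_chunks(data: bytes, window: int = 7, boundary_mod: int = 64) -> set:
    chunks, start = set(), 0
    for i in range(window, len(data)):
        # Boundary condition depends only on local content (a toy rolling value),
        # so boundaries re-synchronize after an edit.
        if sum(data[i - window:i]) % boundary_mod == 0:
            chunks.add(hashlib.sha256(data[start:i]).hexdigest()[:12])
            start = i
    chunks.add(hashlib.sha256(data[start:]).hexdigest()[:12])
    return chunks

def similarity(a: bytes, b: bytes) -> float:
    ca, cb = fuzzy_chunks(a), fuzzy_chunks(b)
    return len(ca & cb) / max(1, len(ca | cb))   # Jaccard overlap of chunk digests

random.seed(0)
base = bytes(random.randrange(256) for _ in range(8000))   # stand-in "malware"
variant = base[:3000] + b"PATCHED" + base[3000:]           # small, localized edit

print("exact SHA-256 equal:", hashlib.sha256(base).digest() == hashlib.sha256(variant).digest())
print(f"fuzzy similarity:    {similarity(base, variant):.2f}")
```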

In other work, we have found that AI methods can improve our ability to detect sophisticated phishing attacks. Phishing attacks center on social engineering, where an attacker creates a fake webpage or sends a fraudulent message designed to trick a person into revealing sensitive data to the attacker or to deploy malicious software on the victim's device, such as ransomware. To help protect people from harmful URLs, AI pattern recognizers have been deployed in browsers and other applications as part of their security services. AI methods can improve detection while lowering false positive rates, which can frustrate end users.[13]
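
A toy stand-in for such a URL recognizer (not the transformer-based system cited above) can be built from character n-grams and logistic regression. The URLs and labels below are invented for illustration; real deployments train on millions of labeled examples.

```python
# Minimal sketch of an AI pattern recognizer for malicious URLs:
# character n-gram features plus logistic regression. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

urls = [
    "https://login.contoso.com/account",                  # assumed benign
    "https://www.example.com/docs/getting-started",       # assumed benign
    "http://c0ntoso-login.example.ru/verify?acct=1",      # assumed phishing lookalike
    "http://secure-update-account.example.tk/login.php",  # assumed phishing
]
labels = [0, 0, 1, 1]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),  # character 3-5 grams
    LogisticRegression(),
)
clf.fit(urls, labels)

for u in ["http://contoso-login.example.tk/verify.php",
          "https://www.example.com/about"]:
    p = clf.predict_proba([u])[0, 1]
    print(f"{u} -> phishing probability {p:.2f}")
```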

Detection. Detection involves identifying and alerting on suspicious behaviors as they happen. The goal is to quickly respond to attacks, including identifying the scale and scope of an attack, closing the attacker's entry, and remediating footholds that the attacker may have established. The key challenge with detecting suspicious activity is to find the right balance between providing enough coverage via seeking high rates of accurate security alerts versus false alarms. AI methods are being leveraged in detection to (1) triage attention to alerts about potential attacks, (2) identify multiple attempts at breaches over time that are part of larger and lengthier attack campaigns, (3) detect fingerprints of the activities of malware as it operates within a computer or on a network, (4) identify the flow of malware through an organization,[14] and (5) guide automated approaches to mitigation when a response needs to be fast to stop an attack from propagating. For example, an automated system can shut down network connectivity and contain a device if a sequence of alerts is detected that is known to be associated with ransomware activity, much as a bank might decline a credit card transaction that appears fraudulent.
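
The ransomware example might look something like the following sketch. The alert names, window size, and isolate() stub are assumptions for illustration, not any product's API.

```python
# Illustrative automated response: if a device emits an alert sequence known
# to precede ransomware, contain it immediately rather than wait for triage.
from collections import deque

RANSOMWARE_PRELUDE = ("shadow_copy_deleted", "mass_file_rename", "c2_beacon")

class DeviceMonitor:
    def __init__(self, device_id: str, window: int = 10):
        self.device_id = device_id
        self.recent = deque(maxlen=window)   # sliding window of recent alerts

    def on_alert(self, alert: str):
        self.recent.append(alert)
        if self._matches_prelude():
            self.isolate()

    def _matches_prelude(self) -> bool:
        # True if the known prelude appears, in order, within the window.
        it = iter(self.recent)
        return all(any(a == step for a in it) for step in RANSOMWARE_PRELUDE)

    def isolate(self):
        # Stub: a real system would cut network connectivity and page the SOC.
        print(f"[auto-response] isolating {self.device_id} and alerting analysts")

mon = DeviceMonitor("laptop-042")
for alert in ["login", "shadow_copy_deleted", "mass_file_rename",
              "dns_query", "c2_beacon"]:
    mon.on_alert(alert)
```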

There are several technologies available today to help detect attacks. I will use Microsoft 365 Defender capabilities as an example. A set of neural network models are used to detect a potential attack underway by fusing multiple signals about activities within a computing system, including processes being started and stopped, files being changed and renamed, and suspicious network communication.[15], [16] In addition, probabilistic algorithms are used to detect high likelihoods of lateral movement on a network.[17] Lateral movement refers to malware, such as ransomware, moving from machine to machine as it infects an organization. The goal is to detect signals of concerning patterns of spread and to shut down the infection by isolating potentially infected machines and alerting security experts to investigate. As numerous legitimate operations can appear like lateral movement of malware, simplistic approaches can have high false-positive rates. AI systems can help to raise the rate of capture and block these spreading infections, while reducing false positives.[18]
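
A toy version of such a probabilistic detector can score each machine-to-machine logon by how rare that edge has been historically and flag chains of rare hops. The events, counts, and threshold below are invented, not Microsoft's algorithms.

```python
# Illustrative lateral-movement scoring: rare logon edges accumulate
# "surprise"; a connected chain of rare hops looks like malware spreading,
# while any single odd logon is common and stays below threshold.
from collections import Counter
import math

# Historical (source, destination) logon counts; invented numbers.
history = Counter({("ws1", "file-srv"): 500, ("ws2", "file-srv"): 480,
                   ("admin-ws", "dc1"): 60})
total = sum(history.values())

def surprise(src: str, dst: str) -> float:
    """Negative log-probability of the edge under history (add-one smoothed);
    larger means rarer, which is to say more suspicious."""
    return -math.log((history[(src, dst)] + 1) / (total + 1))

suspicion, chain = 0.0, []
for src, dst in [("ws1", "ws2"), ("ws2", "admin-ws"), ("admin-ws", "dc1")]:
    s = surprise(src, dst)
    suspicion += s
    chain.append((src, dst, round(s, 1)))
    if suspicion > 12:                      # invented alerting threshold
        print(f"possible lateral movement, isolate and investigate: {chain}")
        break
```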

As a recent example, in March 2022, Microsoft leveraged its AI models to identify an attack attributed to a Russian actor that Microsoft tracks as Iridium, also referred to as Sandworm. The US government has attributed Iridium activity to a group allegedly based at GRU Unit 74455 of the Main Directorate of the General Staff of the Armed Forces of the Russian Federation. The actor deployed wiper malware at a Ukrainian shipping company based in Lviv. Wiper malware erases data and programs on the computers that it infects. The first documented encounter of this malware was on a system running Microsoft Defender with Cloud Protection enabled. The ensemble of machine learning models in Defender, combined with signals across client and cloud, allowed Microsoft to block this malware at first sight.

Investigation and remediation. Investigation and remediation are methods used following a breach to provide customers with a holistic understanding of the security incident, including the extent of the breach, which devices and data were impacted, how the attack propagated through the customer environment, and the attribution of the threat.[19] Gathering and synthesizing telemetry from multiple sources is tedious. Efforts to date include multiple tools to collect telemetry from within and across organizations. The use of AI for investigation and remediation is a promising and open area of research.[20],[21]

Threat intelligence. Threat intelligence enables security researchers to stay on top of the current threat landscape by tracking active malicious actors, at times deliberately engaging with them and studying their behavior. Today, Microsoft actively tracks 40+ active nation-state actors and 140+ threat groups across 20 countries.[22],[23] AI methods help to identify and tag entities from multiple feeds and intelligence sharing across agencies. AI models show promise with their ability to learn and make inferences about high-level relationships and interactions by identifying similarities across different campaigns for enhancing threat attribution.[24],[25]
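
One simple, hedged illustration of similarity-based attribution: vectorize free-text descriptions of intrusions and surface the most similar past campaigns. The snippets below are invented, and real systems operate over far richer structured telemetry.

```python
# Toy attribution aid: rank past campaigns by textual similarity of their
# observed tools, tactics, and infrastructure. Campaign text is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_campaigns = {
    "campaign-A": "spearphish lnk loader scheduled task persistence c2 over https",
    "campaign-B": "supply chain trojanized installer dll sideloading cloud c2",
    "campaign-C": "wiper mbr overwrite smb lateral movement stolen admin creds",
}
new_intrusion = "scheduled task persistence, https c2 beacon, spearphish attachment"

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(past_campaigns.values()) + [new_intrusion])
sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for name, score in sorted(zip(past_campaigns, sims), key=lambda t: -t[1]):
    print(f"{name}: similarity {score:.2f}")
```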

Recommendations: Advance development and application of AI methods to defend against cyberattacks

4. AI-powered cyberattacks

While AI is improving our ability to detect cybersecurity threats, organizations and consumers will face new challenges as cybersecurity attacks increase in sophistication. To date, adversaries have commonly employed software tools in a manual manner to reach their objectives. They have been successful in exfiltrating sensitive data about American citizens, interfering with elections, and distributing propaganda on social media without the sophisticated use of AI technologies.[26],[27],[28] While there is scarce information to date on the active use of AI in cyberattacks, it is widely accepted that AI technologies can be used to scale cyberattacks via various forms of probing and automation. Multiple research and gaming efforts within cybersecurity communities have demonstrated the power of using AI methods to attack computing systems. This area of work is referred to as offensive AI.[29],[30]

4.1 Approaches to offensive AI

Offensive AI methods will likely be taken up as tools of the trade for powering and scaling cyberattacks. We must prepare ourselves for adversaries who will exploit AI methods to increase the coverage of attacks, the speed of attacks, and the likelihood of successful outcomes. We expect that uses of AI in cyberattacks will start with sophisticated actors but will rapidly expand to the broader ecosystem via increasing levels of cooperation and commercialization of their tools.[31]

Basic automation. Just as defenders use AI to automate their processes, so too can adversaries introduce efficiencies and efficacies for their own benefit. Automating attacks using basic pre-programmed logic is not new in cybersecurity. Many malware and ransomware variants over the last five years have used relatively simple sets of logical rules to recognize and adapt to operating environments. For example, it appears that attacking software has checked time zones to adapt to local working hours and customized behavior in a variety of ways to avoid detection or take tailored actions to adapt to the target computing environment.[32],[33] On another front, automated bots have begun to proliferate on social media platforms.[34] These are all rudimentary forms of AI that encode and harness an attackers expert knowledge. However, substantial improvements in AI technology make plausible malicious software that is much more adaptive, stealthy, and intrusive.[35]

Authentication-based attacks. AI methods can be employed in authentication-based attacks, where, for example, recently developed AI methods can be used to generate synthetic voiceprints to gain access through an authentication system. Compelling demonstrations of voice impersonations to fool an authentication system were presented during the Capture the Flag (CTF) cybersecurity competition at the 2018 DEF CON meeting.[36]

AI-powered social engineering. Human perception and psychology are weak links in cyber-defense. AI can be used to exploit this persistent vulnerability. We have seen the rise of uses of AI for social engineering, aiming the power of machine learning at influencing the actions of people to perform tasks that are not in their interest. As an example, AI methods can be used to generate ultra-personalized phishing attacks capable of fooling even the most security conscious users. A striking 2018 study demonstrated how AI methods could be used to significantly raise the probability that end users would click on malevolent links in social media posts. The AI system learned from publicly available data including online profiles, connections, content of posts, and online activity of targeted individuals. Machine learning was used to optimize the timing and content of messages with a goal of maximizing click-through rates, with significant results.[37] A 2021 study demonstrated that the language of emails could be crafted automatically with large-scale neural language models and that the AI-generated messages were more successful than the human-written messages by a significant margin.[38] In a related direction, Microsoft has tracked groups that use AI to craft convincing but fake social media profiles as lures.

4.2 AI-powered cyberattacks on the frontier

The need to prepare for more sophisticated offensive AI was highlighted in presentations at a National Academies of Sciences workshop on offensive AI that I co-organized in 2019. The workshop, sponsored by the Office of the Director of National Intelligence, led to a report available from the Academies.[39] The report includes discussion of the applications of AI methods across the cyber kill-chain, including the use of AI methods in social engineering, discovery of vulnerabilities, exploit development and targeting, and malware adaptation, as well as in methods and tools that can be used to target vulnerabilities in AI-enabled systems, such as autonomous systems and controls used in civilian and military applications.

The cybersecurity research community has demonstrated the power of AI and other sophisticated computational methods in cyberattacks. Adversaries can harness AI to efficiently guess passwords, to attack industrial control systems without raising suspicions, and to create malware that evades detection or prevents inspection.[40],[41],[42],[43],[44],[45] AI-enabled bots can also automate network attacks and make it difficult to extinguish the attacker's command and control channels.[46] In another direction, a competitor at a DARPA Cyber Grand Challenge exercise in 2016[47] demonstrated how machine learning could be used to generate chaff traffic: decoy patterns of online activity that resemble the distribution of events seen in real attacks, for distraction and cover-up of actual attack strategies.[48]

It is safe to assume that AI will improve the success, impact, and scope of the full breadth of threats present today. AI will also introduce new challenges, including special cyber vulnerabilities introduced with general uses of AI components and applications, which create new apertures for adversaries to exploit.

Recommendations: Prepare for malicious uses of AI to perform cyberattacks

5. Special vulnerabilities of AI systems

The power of AI and the growing reliance on it generate a perfect storm for a new type of cyber-vulnerability: attacks targeted directly at AI systems and components. With attention focused on developing and integrating AI capabilities into applications and workflows, the security of AI systems themselves is often overlooked. However, adversaries see the rise of new AI attack surfaces growing in diversity and ubiquity and will no doubt pursue these vulnerabilities. Attacks on AI systems can come in the form of traditional vulnerabilities, via basic manipulations and probes, and via a new, troubling category: adversarial AI.

5.1 Attacks on AI Supply Chains

AI systems can be attacked via targeting traditional security weaknesses and software flaws, including attacks on the supply chain of AI systems, where malevolent actors gain access to and manipulate insecure AI code and data. As an example, in 2021, a popular software platform used to build neural networks was found to have 201 traditional security vulnerabilities, such as memory corruption and code execution.[50] Researchers have demonstrated how adversaries could use existing cyberattack toolkits to attack core infrastructure of the software running AI systems.[51] Multiple components in the supply chain of AI systems can be modified or corrupted via traditional cyberattacks. As an example, data sets used to train AI systems are rarely under version control in the same way that source code is. Researchers from NYU found that most AI frameworks downloaded from a popular algorithm repository do not check the integrity of AI models, in contrast to traditional software, where cryptographic verification of executables and libraries has been standard practice for well over a decade.[52]

5.2 Adversarial AI

Adversarial AI or adversarial machine learning methods harness more sophisticated AI techniques to attack AI systems. Several classes of adversarial AI have been identified, including adversarial examples, the use of basic policies or more sophisticated machine learning methods to fool AI systems with inputs that cause the systems to fail to function properly. A second type of attack is called data poisoning, where data used to train AI systems are poisoned with streams of data that inject erroneous or biased training data into data sets, changing the behavior or degrading the performance of AI systems.[53] A third type of attack, called model stealing, seeks to learn details about the underlying AI model used in an AI system.[54] A fourth category of attack, called model inversion, seeks to reconstruct the underlying private data that is used to train the target system.[55]

With adversarial examples, basic manipulations or more sophisticated applications of AI methods are used to generate inputs that are custom-tailored to cause failures in targeted AI systems. Goals of these attacks include disrupting automated message classifiers, the perceptions of machine vision systems, and the recognition of words in utterances by speech recognition systems.

As an example of basic manipulations of inputs, a group, alleged to be within the Chinese government, attempted to amplify propaganda on Uyghurs by bypassing Twitter's anti-spam algorithm via appending random characters at the end of tweets.[56] The approach was viewed as an attempt to mislead the algorithm into thinking each tweet was unique and legitimate. In another example, researchers from Skylight appended benign code from a gaming database to Wannacry ransomware to cause the machine-learning-based antivirus filter to classify the modified ransomware as benign.[57] In related work on the fragility of AI systems, researchers showed that simply rotating a scan of a skin lesion confuses a computer recognition system into classifying the image as malignant.[58]

In uses of AI to generate adversarial examples, researchers have demonstrated stunning examples of failures. In one approach, adversarial methods are used to inject patterns of pixels into images to change what an AI system sees. While the changes with AI inferences are dramatic, the changes to the original images are not detectable by humans. Sample demonstrations include the modification of a photo of a panda leading an AI system to misclassify the panda as a gibbon and changes to a stop sign to misclassify it as a yield sign.[59],[60] Similar demonstrations have been done in the realm of speech recognition, with the injection of hidden acoustical patterns in speech that changes what a listening system hears.[61] Attacks leading to such misclassifications and malfunctions can be extremely costly, particularly in high-stakes domains like defense, transportation, healthcare, and industrial processes.
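
The mechanism behind such demonstrations can be shown on a toy linear scorer using the fast gradient sign method (FGSM); the cited panda/gibbon result applies the same idea to deep networks on images. All numbers below are illustrative.

```python
# Minimal FGSM-style sketch: a tiny, bounded perturbation flips the decision
# of a toy linear classifier while changing each feature imperceptibly.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)        # weights of a "trained" linear classifier (toy)
x = rng.normal(size=1000)
x *= 10.0 / (w @ x)              # rescale so the clean score is exactly +10 ("class A")

eps = 0.05                       # per-feature perturbation budget (tiny)
# For a linear score w.x the gradient w.r.t. the input is w, so the step that
# lowers the score fastest under a max-per-feature bound is -eps * sign(w).
x_adv = x - eps * np.sign(w)

print(f"clean score:            {w @ x:+.1f}")
print(f"adversarial score:      {w @ x_adv:+.1f}  (negative means the label flipped)")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```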

Challenges of adversarial AI and a set of recommendations are called out in the final report of the National Security Commission on AI (NSCAI).[62] I chaired the lines of effort on directions with developing and fielding trustworthy, responsible, and ethical AI applications, leading to chapters 7 and 8 of the report and the appendix on NSCAI's recommendations on key considerations for fielding AI systems that align with democratic values, civil liberties, and human rights.[63],[64],[65] Chapter 7 of the report covers rising concerns with adversarial AI, including the assessment that "the threat is not hypothetical: adversarial attacks are happening and already impacting commercial ML systems." In support of this statement, over the last five years, the Microsoft cybersecurity team has seen an uptick in adversarial AI attacks.[66] I believe the trend will continue.

5.3 Efforts to Mitigate Adversarial AI

Pursuit of resistant systems. Computer science R&D has been underway on methods for making AI systems more resistant to adversarial machine learning attacks. One area of work centers on raising the robustness of systems to attacks with adversarial inputs as described above.[67],[68] Approaches include special training procedures that incorporate adversarial examples, validation of inputs to identify properties that can reveal signs of an attack, and changes to the overall approach to building models, such as modifying the objective functions used in the optimization procedures that create the models so that more robust models result. While these techniques and the research directions behind them are promising, the challenges of adversarial examples persist, given the large space of inputs to machine learning procedures. Thus, it is important to continue to invest in R&D on adversarial AI, to perform ongoing studies with red-teaming exercises, and to remain vigilant.
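
As a concrete instance of the first approach named above, the numpy-only sketch below trains a toy classifier on clean batches plus FGSM-perturbed copies of them (adversarial training). The data and hyperparameters are invented.

```python
# Toy adversarial training: each step, craft bounded worst-case perturbations
# of the training inputs and fit on those too, so the decision boundary gains
# margin against small input manipulations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic labels
w, lr, eps = np.zeros(20), 0.5, 0.1

def grad_w(w, X, y):
    p = 1 / (1 + np.exp(-(X @ w)))     # logistic predictions
    return X.T @ (p - y) / len(y)      # cross-entropy gradient w.r.t. weights

def fgsm(w, X, y, eps):
    # Move each example in the direction that most increases its loss,
    # within an L-infinity ball of radius eps.
    p = 1 / (1 + np.exp(-(X @ w)))
    return X + eps * np.sign(np.outer(p - y, w))

for _ in range(300):
    # Train on clean and adversarial batches together.
    w -= lr * (grad_w(w, X, y) + grad_w(w, fgsm(w, X, y, eps), y)) / 2

acc_clean = (((X @ w) > 0).astype(float) == y).mean()
acc_attack = (((fgsm(w, X, y, eps) @ w) > 0).astype(float) == y).mean()
print(f"accuracy on clean inputs: {acc_clean:.2f}")
print(f"accuracy under FGSM:      {acc_attack:.2f}")
```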

5.4 Tracking, Awareness, and Resources

Front-line awareness. Despite the opportunities that adversarial AI methods will provide to state and non-state actors for manipulating and disrupting critical AI systems, and rising evidence of real-world attacks with adversarial AI, the idea of protecting AI systems from these attacks has been largely an afterthought. There is an urgency to be aware and to be ready to respond to adversarial AI threats, especially those used in critical areas such as defense. A Microsoft survey of 28 organizations in 2020 showed that, despite the rise in attacks on AI systems, companies are still unaware of these kinds of intentional failures to AI systems and are massively underinvested in tools and processes to secure AI systems. Ryan Fedasiuk, a noted researcher at Georgetown's Center for Security and Emerging Technology specializing in China's AI operations, notes that Chinese military officers have explicitly called out that U.S. defenses are susceptible to data poisoning, going so far as to call data integrity the Achilles' heel of the U.S. joint all-domain command and control strategy.[69]

Resources and Engagement. Microsoft, along with MITRE and 16 other organizations created the Adversarial ML Threat Matrix to catalog threats to AI systems.[70] The content includes documentation of case studies where attacks have been made on commercial AI systems. For engineers and policymakers, Microsoft, in collaboration with Berkman Klein Center at Harvard University, released a taxonomy of machine learning failure modes.[71] For security professionals, Microsoft has open-sourced Counterfit, its own tool for assessing the posture of AI systems.[72] For the broader community of cybersecurity practitioners interested in AI and security, Microsoft hosts the annual Machine Learning Evasion Competition as a venue to exercise their muscle in attacking and securing AI systems.[73] Within the Federal government, the DoD has listed safety and security of AI systems in its core AI principles.[74] And there is encouraging activity by NIST on an AI Risk Assessment Framework to address multiple dimensions of AI systems, including robustness and security.[75]

Recommendations: Raise awareness and address vulnerabilities of AI systems

6. AI in Malign Information Operations

Advances in machine learning and graphics have boosted the abilities of state and non-state actors to fabricate and distribute high-fidelity audiovisual content, referred to as synthetic media and deepfakes. AI technologies for generating deepfakes can now fabricate content that is indistinguishable from real-world people, scenes, and events, threatening national security. Advances that several years ago could only be found within the walls of computer science laboratories, or in demonstrations that surprised attendees at academic AI conferences, are now widely available in tools that create audio and audiovisual content that can be used to drive disinformation campaigns.

6.1 Challenges of Synthetic Media

Advances in the capabilities of generative AI methods to synthesize a variety of signals, including high-fidelity audiovisual imagery, have significance for cybersecurity. When personalized, the use of AI to generate deepfakes can raise the effectiveness of social-engineering operations (discussed above) in persuading end-users to provide adversaries with access to systems and information.

On a larger scale, the generative power of AI methods and synthetic media have important implications for defense and national security. The methods can be used by adversaries to generate believable statements from world leaders and commanders, to fabricate persuasive false-flag operations, and to generate fake news events. Recent demonstrations include the multiple manipulated and more sophisticated deepfakes that have come to the fore over the course of the Russian attack on Ukraine, including a video of President Volodymyr Zelenskyy appearing to call for surrender.[76]

The proliferation of synthetic media has had another concerning effect: malevolent actors have labeled real events as fake, taking advantage of new forms of deniability coming with the loss of credibility in the deepfake era. Video and photo evidence, such as imagery of atrocities, are being called fake. Known as the liar's dividend, the proliferation of synthetic media emboldens people to claim real media as fake, and creates plausible deniability for their actions.[77]

We can expect synthetic media and its deployment to continue to grow in sophistication over time, including the persuasive interleaving of deepfakes with unfolding events in the world and real-time synthesis of deepfakes. Real-time generation could be employed to create compelling, interactive imposters (e.g., appearing in teleconferences and guided by a human controller) that appear to have natural head pose, facial expressions, and utterances. Looking further out, we may have to face the challenge of synthetic fabrications of people that can engage autonomously in persuasive real-time conversations over audio and visual channels.

6.2 Direction: Digital Content Provenance

A promising approach to countering the threat of synthetic media can be found in a recent advance named digital content provenance technology. Digital content provenance leverages cryptography and database technologies to certify the source and history of edits (the provenance) of any digital media. This can provide glass-to-glass certification of content, from the photons hitting the light-sensitive surfaces of cameras to the light emitted from the pixels of displays, for secure workloads. We pursued an early vision and technical methods for enabling end-to-end tamper-proof certification of media provenance in a cross-team effort at Microsoft.[78],[79] The aspirational project was motivated by our assessment that, in the long term, neither humans nor AI methods would be able to reliably distinguish fact from AI-generated fictions, and that we must prepare with urgency for the expected trajectory of increasingly realistic and persuasive deepfakes.

After taking the vision to reality with technical details and the implementation of prototype technologies for certifying the provenance of audiovisual content, we worked to build and contribute to cross-industry partnerships, including Project Origin, the Content Authenticity Initiative (CAI), and the Coalition for Content Provenance and Authenticity (C2PA), a multistakeholder coalition of industry and civil society organizations.[80],[81],[82],[83] In January 2022, C2PA released a specification of a standard that enables the interoperability of digital content provenance systems.[84],[85] Commercial production tools are now becoming available in accordance with the C2PA standard that enable authors and broadcasters to assure viewers about the originating source and history of edits to photo and audiovisual media.
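
The core mechanism is easy to see in miniature: hash the media bytes, sign a manifest over that hash and the declared edit history, and verify both later. The sketch below is a simplified illustration of that idea, not a C2PA implementation, and all names and fields are invented.

```python
# Simplified content-provenance illustration: bind a signed manifest
# (source, edit history) to the exact media bytes, and verify both the
# signature and the hash at display time.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()   # the publisher's key pair
public_key = signing_key.public_key()

def make_manifest(media: bytes, source: str, edits: list):
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),  # binds manifest to these bytes
        "source": source,
        "edits": edits,                               # declared edit history
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, signing_key.sign(payload)

def verify(media: bytes, manifest: dict, signature: bytes) -> bool:
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)         # is the manifest intact?
    except InvalidSignature:
        return False
    return hashlib.sha256(media).hexdigest() == manifest["sha256"]  # are the bytes?

video = b"...raw media bytes..."
manifest, sig = make_manifest(video, "Example Broadcaster", ["crop", "color-balance"])
print("untampered verifies:", verify(video, manifest, sig))
print("tampered verifies:  ", verify(video + b"x", manifest, sig))
```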

The final report of the NSCAI recommends that digital content provenance technologies should be pursued to mitigate the rising challenge of synthetic media. In Congress, the bipartisan Deepfake Task Force Act (S. 2559) proposes the establishment of the National Deepfake and Digital Provenance Task Force.[86] Microsoft and its media provenance collaborators encourage Congress to move forward with standing up a task force to help identify and address the challenges of synthetic media, and we would welcome the opportunity to provide assistance and input into the work.

Recommendations: Defend against malign information operations

Summary

In my testimony, I have covered the status, trends, examples, and directions ahead for the rising opportunities and challenges at the intersection of AI and cybersecurity. AI technologies will continue to be critically important for enhancing cybersecurity in military and civilian applications. AI methods are already qualitatively changing the game in cyber defense. Technical advances in AI have helped in numerous ways, spanning our core abilities to prevent, detect, and respond to attacks, including attacks that have never been seen before. AI innovations are amplifying and extending the capabilities of security teams across the country.
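As one concrete illustration, detecting attacks that have never been seen before is often framed as anomaly detection over behavioral signals. The sketch below uses scikit-learn's IsolationForest on invented sign-in features; it is a toy example, not the pipeline of any particular product.

import numpy as np
from sklearn.ensemble import IsolationForest

# One row per sign-in event; columns are illustrative behavioral features:
# failed attempts in the last hour, distinct source IPs, MB transferred, hour of day.
rng = np.random.default_rng(0)
normal_events = rng.normal(loc=[2, 1, 50, 13], scale=[1, 0.5, 10, 4], size=(500, 4))
suspicious_event = np.array([[40, 12, 300, 3]])  # burst of failures from many IPs at 3 a.m.

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_events)
print(model.predict(suspicious_event))  # [-1] flags the event as anomalous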

On the other side, state and non-state actors are beginning to leverage AI in numerous ways. They will draw new powers from fast-paced advances in AI and will continue to add new tools to their armamentarium. We need to double down on our attention to, and investment in, the threats and opportunities at the convergence of AI and cybersecurity. Significant investments in workforce training, monitoring, engineering, and core R&D will be needed to understand, develop, and operationalize defenses for the breadth of risks we can expect from AI-powered cyberattacks. These risks include new kinds of attacks, including some aimed squarely at AI systems themselves. The DoD, federal and state agencies, and the nation need to stay vigilant and stay ahead of malevolent adversaries. This will take greater investment in, and commitment to, fundamental research and engineering on AI and cybersecurity, and to building and nurturing our cybersecurity workforce so our teams can be more effective today and well prepared for the future.

Thank you for the opportunity to testify. I look forward to answering your questions.

[1] https://www.microsoft.com/security/blog/2021/12/15/the-final-report-on-nobeliums-unprecedented-nation-state-attack/

[2] https://news.microsoft.com/wp-content/uploads/prod/sites/626/2022/02/Cyber-Signals-E-1-218.pdf, page 3

[3] https://www.microsoft.com/en-us/research/group/m365-defender-research/

[4] 2018-Webroot-Threat-Report_US-ONLINE.pdf

[5] Microsoft Digital Defense Report, October 2021

[6] https://www.microsoft.com/security/blog/2021/05/12/securing-a-new-world-of-hybrid-work-what-to-know-and-what-to-do/

[7] J. McCarthy, M.L. Minsky, N. Rochester, and C.E. Shannon, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, Dartmouth College, May 1955. http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

[8] https://www.bls.gov/ooh/computer-and-information-technology/information-security-analysts.htm

[9] https://www.isc2.org/News-and-Events/Press-Room/Posts/2021/10/26/ISC2-Cybersecurity-Workforce-Study-Sheds-New-Light-on-Global-Talent-Demand

[10] https://newsroom.trendmicro.com/2021-05-25-70-Of-SOC-Teams-Emotionally-Overwhelmed-By-Security-Alert-Volume

[11] https://www.av-test.org/en/statistics/malware/

[12] https://www.microsoft.com/security/blog/2021/07/27/combing-through-the-fuzz-using-fuzzy-hashing-and-deep-learning-to-counter-malware-detection-evasion-techniques

[13] https://www.microsoft.com/en-us/research/publication/urltran-improving-phishing-url-detection-using-transformers/

[14] https://dl.acm.org/doi/10.1145/3471621.3471858

[15] https://www.microsoft.com/security/blog/2020/07/23/seeing-the-big-picture-deep-learning-based-fusion-of-behavior-signals-for-threat-detection/

[16] https://www.microsoft.com/security/blog/2020/08/27/stopping-active-directory-attacks-and-other-post-exploitation-behavior-with-amsi-and-machine-learning/

[17] https://www.microsoft.com/security/blog/2019/12/18/data-science-for-cybersecurity-a-probabilistic-time-series-model-for-detecting-rdp-inbound-brute-force-attacks/

[18] https://www.microsoft.com/security/blog/2020/06/10/the-science-behind-microsoft-threat-protection-attack-modeling-for-finding-and-stopping-evasive-ransomware/

[19] https://www.microsoft.com/security/blog/2021/12/02/structured-threat-hunting-one-way-microsoft-threat-experts-prioritizes-customer-defense/

[20] https://www.microsoft.com/security/blog/2020/07/09/inside-microsoft-threat-protection-correlating-and-consolidating-attacks-into-incidents/

[21] https://www.microsoft.com/security/blog/2020/07/29/inside-microsoft-threat-protection-solving-cross-domain-security-incidents-through-the-power-of-correlation-analytics/

[22] https://www.microsoft.com/security/blog/2022/02/03/cyber-signals-defending-against-cyber-threats-with-the-latest-research-insights-and-trends/

Here is the original post:
Applications for artificial intelligence in Department of Defense cyber missions - Microsoft On the Issues - Microsoft

BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events – Business Wire

COLUMBIA, Md.--(BUSINESS WIRE)--BigBear.ai (NYSE: BBAI), a leader in AI-powered analytics and cyber engineering solutions, announced that company executives are embarking on a thought-leadership campaign across multiple global industry events. The campaign will emphasize how the company's advancements in AI technologies will impact federal and commercial markets in the coming months.

At these events, BigBear.ai leaders will highlight the capabilities of BigBear.ai's newly acquired company, ProModel Corporation; the importance of defining responsible AI usage; and how federal and commercial organizations leverage AI and ML.

The events BigBear.ai is scheduled to address include:

CTMA Partners Meeting May 3-5, 2022: Virginia Beach, VA

Due to the rapid deployment and advancement of sensor technologies, artificial intelligence, and data science, the Department of Defense has turned to a more predictive approach to maintaining technology assets. The department's recently revamped condition-based maintenance plus (CBM+) policy will accelerate the adoption, integration, and use of these emerging technologies while shifting its strategic approach from largely reactive to proactive maintenance. Participating in a panel session on this trend, BigBear.ai Senior Vice President of Analytics Carl Napoletano will highlight ProModel's commercial capabilities and ProModel Government Services' legacy capabilities in the federal space.

DIA Future Technologies Symposium May 11-12, 2022: Virtual Event

BigBear.ai's Senior Vice President of Analytics, Frank Porcelli, will brief the DIA community on BigBear.ai's AI-powered solutions in this virtual presentation. After providing a high-level overview and demonstration of the company's AI products (Observe, Orient, and Dominate), Frank will offer insights into how AI technologies are being leveraged in the federal sector.

Conference on Governance of Emerging Technologies and Science May 19-20, 2022: Phoenix, Arizona

Newly appointed BigBear.ai General Counsel Carolyn Blankenship will attend the ninth edition of Arizona State's annual conference, which examines how to create sustainable governance solutions that address new technologies' legal, regulatory, and policy ramifications. During her presentation, Carolyn will detail the importance of intellectual property (IP) law in AI and the responsible use of AI and other emerging technologies. Prior to starting as General Counsel, Carolyn organized and led the Thomson Reuters cross-functional team that outlined the organization's first set of Data Ethics Principles.

Automotive Innovation Forum May 24-25, 2022: Munich, Germany

ProModel was among the select few organizations invited to attend Autodesk's Automotive Innovation Forum 2022, a premier industry event celebrating new automotive plant design and manufacturing technology solutions. Michael Jolicoeur, Director of ProModel's Autodesk Business Division, will headline a panel at the conference and highlight the latest industry trends in automotive factory design and automation.

DAX 2022 June 4, 2022: University of Maryland, Baltimore County, Baltimore, Maryland

Three BigBear.ai experts - Zach Casper, Senior Director of Cyber; Leon Worthen, Manager of Strategic Operations; and Sammy Hamilton, Data Scientist/Engagement Engineer - will headline a panel discussion exploring the variety of ways AI and ML are deployed throughout the defense industry. The trio will discuss how AI and ML solve pressing cybersecurity problems facing the Department of Defense and the intelligence community.

To connect with BigBear.ai at these events, send an email to events@bigbear.ai.

About BigBear.ai

BigBear.ai delivers AI-powered analytics and cyber engineering solutions to support mission-critical operations and decision-making in complex, real-world environments. BigBear.ai's customers, which include the US Intelligence Community, the Department of Defense, and the US Federal Government, as well as customers in manufacturing, logistics, commercial space, and other sectors, rely on BigBear.ai's solutions to see and shape their world through reliable, predictive insights and goal-oriented advice. Headquartered in Columbia, Maryland, BigBear.ai has additional locations in Virginia, Massachusetts, Michigan, and California. For more information, please visit: http://bigbear.ai/ and follow BigBear.ai on Twitter: @BigBearai.

Link:
BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events - Business Wire