AI Helping to Refine Intelligence Analysis – GovernmentCIO Media & Research
Artificial intelligence and machine learning capabilities are allowing analysts to produce faster, more streamlined assessments.
America's national security organizations have begun applying AI to produce intelligence assessments more quickly and effectively.
Speaking at the GovernmentCIO Media & Research AI: National Security virtual event, Director of the National Security Agency (NSA) Research Directorate Mark Segal discussed how these new capabilities are helping intelligence analysts process and sort large quantities of often complex and disparate information.
In outlining the NSA's research priorities, Segal noted that AI and machine-learning capabilities have already shown promise for better organizing the large pools of variable data their analysts sort through in producing regular assessments.
"One of the challenges that we have found AI to be particularly useful for is looking through the sheer amount of data that's created every day on this planet. Our analysts are looking at some of this data trying to understand it, and understand what its implications are for national security. The amount of data that we have to sort is going up pretty dramatically, but the number of people that we have who are actually looking at this data is pretty constant. So we're constantly looking for tools and technologies to help our analysts more effectively go through huge piles of data," Segal said.
This application of AI to analysis also has the potential to expedite the delivery of actionable intelligence to policymakers, who can reach decisions more quickly and confidently when the available information has been sorted effectively.
"We analyze information and then provide that analysis to policymakers. For example, let's say we're looking at a large pile of documents and trying to understand what the intentions of another country are by looking through that data quickly. We want to zoom in immediately on the most important parts of that data, and have our skilled analysts say, 'We think this entity is doing a specific thing,' and then leave that to the policymakers to determine how we might respond," Segal said.
Segal cautioned that agency technologists need to start with a realistic understanding of AI and machine learning to make the most effective use of these new capabilities, and to see them in terms of how they can concretely refine internal processes and advance their organizations' key aims.
"One of the biggest risks about AI right now is that there's this huge amount of hype surrounding it. AI is a tool just like any other tool. And the way that you use a tool is to figure out where it would be effective, and where it would actually help solve a problem. In our research organization, one of the things that we try to do is actually look at the technology in order to apply it to real problems and analyze the results in a scientifically rigorous manner," Segal said.
Segal also cautioned agencies to avoid building undue biases into their algorithms, as these built-in flaws, left uncorrected, ultimately distort the resulting analysis in ways that range from ineffective to potentially dangerous.
"A lot of machine-learning algorithms are trained on data, and one of the challenges that can emerge there is that if the data is biased, it's going to affect the output," Segal said. "For example, with facial-recognition software, if the training data only has people that have a certain hair type, or a certain skin color, or certain facial features, it will not work in practice because when you encounter other data that you've not seen before, the algorithm will behave in unpredictable ways."
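The kind of skew Segal describes can often be surfaced before any model is trained simply by inspecting the training data's attribute distribution. The sketch below illustrates one way to do that; the manifest file, its columns, and the 5% threshold are hypothetical, not anything the NSA has described.

```python
# Minimal sketch: surface skew in a hypothetical face-image training manifest
# before training, so under-represented groups are visible up front.
import pandas as pd

# "training_manifest.csv" and its columns are illustrative placeholders.
manifest = pd.read_csv("training_manifest.csv")  # columns: image_path, skin_tone, hair_type

for attribute in ["skin_tone", "hair_type"]:
    counts = manifest[attribute].value_counts(normalize=True)
    print(f"\n{attribute} distribution:")
    print(counts.to_string())
    # Flag any category below 5% of the data -- an arbitrary threshold used
    # here only to illustrate the check.
    rare = counts[counts < 0.05]
    if not rare.empty:
        print(f"Warning: under-represented {attribute} values: {list(rare.index)}")
```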
One of the most promising applications NSA researchers have begun exploring is automated data sorting, using AI to sift through large quantities of documents and identify relevant information far more quickly than a human worker would be able to.
"Imagine you've got a very large pile of documents, and in some of these documents there are really important things you want analysts to look at while some of the other documents are completely irrelevant. So one of the ways that we've used AI and machine learning in particular is we can have a trained human look at a subset of these documents and train a model to say which ones are really important and which ones are less important. Once you've trained a model and have enough data that you train the model successfully, you can go through a much larger collection of documents much more quickly than a human being could do it," Segal said.
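The workflow Segal outlines maps onto a standard supervised text-classification loop: analysts label a small subset, a model learns from those labels, and the model then ranks a much larger collection. A minimal sketch follows, assuming scikit-learn; the toy documents and labels are purely illustrative and do not represent any real collection.

```python
# Sketch of the triage workflow: train on an analyst-labeled subset, then
# score a larger unlabeled collection so analysts read the likely-important
# documents first. Library choice and data are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Small analyst-labeled subset (1 = important, 0 = irrelevant) -- toy examples.
labeled_docs = [
    "shipment of restricted components scheduled next week",
    "quarterly cafeteria menu and parking updates",
    "meeting notes on foreign procurement network",
    "office holiday party planning thread",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(labeled_docs, labels)

# Score the (in practice much larger) unlabeled collection and sort by the
# model's estimated probability of importance.
unlabeled_docs = [
    "invoice referencing dual-use equipment transfer",
    "reminder to submit timesheets by Friday",
]
scores = model.predict_proba(unlabeled_docs)[:, 1]
for doc, score in sorted(zip(unlabeled_docs, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```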
Another concrete use case that aligns AI with operational efficiency is using tailored algorithms to convert speech to text.
"If you can do that, you can make that text searchable, which once again makes the analyst more productive. So instead of listening to thousands of hours of audio to hear one relevant audio clip, you put in a few keywords and scan all this processed text," Segal said.
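The idea is simply to transcribe audio once and then search the text. A minimal sketch of that pattern is below, assuming the open-source openai-whisper package; the NSA's actual tooling is not public, and the audio file name and keywords are placeholders.

```python
# Sketch of speech-to-text followed by keyword search, using the open-source
# openai-whisper package as a stand-in for whatever tooling is actually used.
import whisper

model = whisper.load_model("base")          # small general-purpose model
result = model.transcribe("recording.wav")  # hypothetical audio file
transcript = result["text"].lower()

# Once the audio is text, a keyword scan replaces hours of listening.
keywords = ["shipment", "meeting", "transfer"]  # illustrative search terms
hits = [kw for kw in keywords if kw in transcript]
print("keywords found:", hits or "none")
```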
Segal emphasized that no matter how advanced these capabilities become, national security institutions should continue evaluating AI both for potential biases and against the central criterion of whether these new uses serve their longstanding mission.
"I think the main way that we do that is when we try these experiments, pilot studies and different techniques, we have a way of quantitatively measuring its effectiveness. When it proves to be effective, we refine the techniques. And when it proves not to be effective, we take a step back and think about why it failed," Segal said.
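One common way to make that measurement quantitative is to score a pilot's output against an analyst-labeled hold-out set and compare the result to a pre-agreed bar. The sketch below illustrates the pattern; the metrics, threshold, and toy values are assumptions for illustration, not the agency's actual evaluation criteria.

```python
# Sketch of a quantitative pilot check: compare a model's predictions to
# analyst ground truth, then decide whether to refine or step back.
from sklearn.metrics import precision_score, recall_score

# Analyst ground truth vs. the pilot model's predictions (toy values).
analyst_labels    = [1, 0, 1, 1, 0, 0, 1, 0]
model_predictions = [1, 0, 0, 1, 0, 1, 1, 0]

precision = precision_score(analyst_labels, model_predictions)
recall = recall_score(analyst_labels, model_predictions)
print(f"precision={precision:.2f} recall={recall:.2f}")

# Illustrative decision rule: refine only if both metrics clear the bar;
# otherwise investigate why the pilot fell short.
if precision >= 0.8 and recall >= 0.8:
    print("effective: refine the technique")
else:
    print("not yet effective: examine the failure modes")
```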