
Artificial Intelligence/Machine Learning and the Future of National Security

AI is a once-in-a-lifetime commercial and defense game changer

By Steve Blank

Hundreds of billions of dollars in public and private capital are being invested in AI and Machine Learning companies. The number of patents filed in 2021 was more than 30 times higher than in 2015, as companies and countries across the world have realized that AI and Machine Learning will be a major disruptor and could change the balance of military power.

Until recently, the hype exceeded reality. Today, however, advances in AI in several important areas (here, here, here, here and here) equal and even surpass human capabilities.

If you haven't paid attention, now's the time.

AI and the DoD

The Department of Defense has concluded that AI is such a foundational set of technologies that it created a dedicated organization -- the Joint Artificial Intelligence Center (JAIC) -- to enable and implement artificial intelligence across the Department. The JAIC provides the infrastructure, tools, and technical expertise for DoD users to successfully build and deploy their AI-accelerated projects.

Some specific defense-related AI applications are listed later in this document.

We're in the Middle of a Revolution

Imagine it's 1950, and you're a visitor who traveled back in time from today. Your job is to explain the impact computers will have on business, defense and society to people who are using manual calculators and slide rules. You succeed in convincing one company and a government to adopt computers and learn to code much faster than their competitors/adversaries. And they figure out how they could digitally enable their business: supply chain, customer interactions, etc. Think about the competitive edge they'd have by today in business or as a nation. They'd steamroll everyone.

That's where we are today with Artificial Intelligence and Machine Learning. These technologies will transform businesses and government agencies. Today, hundreds of billions of dollars in private capital have been invested in thousands of AI startups. The U.S. Department of Defense has created a dedicated organization to ensure its deployment.

But What Is It?

Compared to the classic computing we've had for the last 75 years, AI has led to new types of applications, e.g. facial recognition; new types of algorithms, e.g. machine learning; new types of computer architectures, e.g. neural nets; new hardware, e.g. GPUs; new types of software developers, e.g. data scientists; all under the overarching theme of artificial intelligence. The sum of these feels like buzzword bingo. But they herald a sea change in what computers are capable of doing, how they do it, and what hardware and software is needed to do it.

This brief will attempt to describe all of it.

New Words to Define Old Things

One of the reasons the world of AI/ML is confusing is that it's created its own language and vocabulary. It uses new words to define programming steps, job descriptions, development tools, etc. But once you understand how the new world maps onto the classic computing world, it starts to make sense. So first, a short list of some key definitions.

AI/ML - a shorthand for Artificial Intelligence/Machine Learning

Artificial Intelligence (AI) - a catchall term used to describe intelligent machines that can solve problems, make or suggest decisions, and perform tasks that have traditionally required humans. AI is not a single thing, but a constellation of different technologies.

Machine Learning (ML) - a subfield of artificial intelligence. Humans combine data with algorithms (see here for a list) to train a model using that data. The trained model can then make predictions on new data (is this picture a cat, a dog or a person?) or support decision-making (like understanding text and images) without being explicitly programmed to do so.
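To make that concrete, here is a minimal train-then-predict sketch in Python (assuming scikit-learn and NumPy are installed; the toy features, labels and library choice are illustrative, not something the definition above prescribes):

```python
# A minimal train-then-predict sketch. The feature vectors and labels below
# are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Labeled examples: each row is a feature vector, each label its category
# (0 = dog, 1 = cat).
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]])
y_train = np.array([0, 0, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)        # training: the model learns from examples

X_new = np.array([[0.85, 0.15]])   # data the model has never seen
print(model.predict(X_new))        # prediction: which category is it? (expected: 0)
```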

Machine learning algorithms - computer programs that adjust themselves to perform better as they are exposed to more data.

The "learning" part of machine learning means these programs change how they process data over time. In other words, a machine-learning algorithm can adjust its own settings, given feedback on its previous performance in making predictions about a collection of data (images, text, etc.).

Deep Learning/Neural Nets - a subfield of machine learning. Neural networks make up the backbone of deep learning. (The "deep" in deep learning refers to the depth of layers in a neural network.) Neural nets are effective at a variety of tasks (e.g., image classification, speech recognition). A deep learning neural net algorithm is given massive volumes of data and a task to perform, such as classification. The resulting model is capable of solving complex tasks such as recognizing objects within an image and translating speech in real time. (In reality, the neural net is a logical concept that gets mapped onto a physical set of specialized processors. See here.)
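For a feel of what those layers are, here is a sketch of a two-layer network's forward pass (plain NumPy; the layer sizes and random weights are purely illustrative):

```python
# "Layers" in a neural net: two sets of weights applied in sequence.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                           # input, e.g. four pixel values

W1, b1 = rng.random((8, 4)), rng.random(8)  # first (hidden) layer
W2, b2 = rng.random((3, 8)), rng.random(3)  # second (output) layer

h = np.maximum(0.0, W1 @ x + b1)            # hidden activations (ReLU)
scores = W2 @ h + b2                        # one score per output class
print(scores.argmax())                      # index of the highest-scoring class
```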

Data Science - a new field of computer science. Broadly, it encompasses data systems and processes aimed at maintaining data sets and deriving meaning out of them. In the context of AI, it's the practice of the people who are doing machine learning.

Data Scientists - responsible for extracting insights that help businesses make decisions. They explore and analyze data using machine learning platforms to create models about customers, processes, risks, or whatever they're trying to predict.

What's Different? Why is Machine Learning Possible Now?

To understand why AI/Machine Learning can do these things, let's compare them to computers before AI came on the scene. (Warning: simplified examples below.)

Classic Computers

For the last 75 years computers (we'll call these classic computers) have both shrunk to pocket size (iPhones) and grown to the size of warehouses (cloud data centers), yet they all continued to operate essentially the same way.

Classic Computers - Programming

Classic computers are designed to do anything a human explicitly tells them to do. People (programmers) write software code (programming) to develop applications, thinking a priori about all the rules, logic and knowledge that need to be built into an application so that it can deliver a specific result. These rules are explicitly coded into a program using a software language (Python, JavaScript, C#, Rust, …).
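For contrast with the machine learning approach described later, here is a toy example of such explicitly coded rules (the rules themselves are invented for illustration):

```python
# Classic programming: a person writes the rules explicitly, a priori.
def looks_like_spam(subject: str) -> bool:
    banned_phrases = ["free money", "act now", "winner"]
    # Every rule here was thought up by a programmer, not learned from data.
    return any(phrase in subject.lower() for phrase in banned_phrases)

print(looks_like_spam("You are a WINNER - act now!"))  # True
print(looks_like_spam("Meeting notes for Tuesday"))    # False
```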

Classic Computers - Compiling

The code is then compiled using software to translate the programmer's source code into a version that can be run on a target computer/browser/phone. For most of today's programs, the computer used to develop and compile the code does not have to be that much faster than the one that will run it.

Classic Computers - Running/Executing Programs

Once a program is coded and compiled, it can be deployed and run (executed) on a desktop computer, phone, in a browser window, a data center cluster, in special hardware, etc. Programs/applications can be games, social media, office applications, missile guidance systems, bitcoin mining, or even operating systems, e.g. Linux, Windows, iOS. These programs run on the same type of classic computer architectures they were programmed on.

Classic Computers - Software Updates, New Features

For programs written for classic computers, software developers receive bug reports, monitor for security breaches, and send out regular software updates that fix bugs, increase performance and at times add new features.

Classic Computers - Hardware

The CPUs (Central Processing Units) used to write and run these classic computer applications all have the same basic design (architecture). The CPUs are designed to handle a wide range of tasks quickly in a serial fashion. These CPUs range from the Intel x86 chips and the ARM cores in the Apple M1 SoC to the z15 in IBM mainframes.

Machine Learning

In contrast to programming classic computers with fixed rules, machine learning is just what it sounds like: we can train/teach a computer to learn by example, by feeding it lots and lots of examples. (For images, a rule of thumb is that a machine learning algorithm needs at least 5,000 labeled examples of each category in order to produce an AI model with decent performance.) Once it is trained, the computer runs on its own and can make predictions and/or complex decisions.

Just as traditional programming has three steps - first coding a program, next compiling it, and then running it - machine learning also has three steps: training (teaching), pruning, and inference (predicting by itself).

Machine Learning - Training

Unlike programming classic computers with explicit rules, training is the process of teaching a computer to perform a task, e.g. recognize faces or signals, understand text, etc. (Now you know why you're asked to click on images of traffic lights, crosswalks, stop signs, and buses, or to type the text of a scanned image, in reCAPTCHA.) Humans provide massive volumes of training data (the more data, the better the model's performance) and select the appropriate algorithm to find the best optimized outcome.

(See the detailed machine learning pipeline later in this section for the gory details.)

By running an algorithm selected by a data scientist on a set of training data, the machine learning system generates the rules embedded in a trained model. The system learns from examples (training data), rather than being explicitly programmed. (See the Types of Machine Learning section for more detail.) This self-correction is pretty cool. An input to a neural net results in a guess about what that input is. The neural net then takes its guess and compares it to a ground truth about the data, effectively asking an expert, "Did I get this right?" The difference between the network's guess and the ground truth is its error. The network measures that error and walks the error back over its model, adjusting weights to the extent that they contributed to the error.
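Here is a minimal sketch of that guess-compare-adjust loop for a single "neuron" (NumPy only; the toy data, learning rate, and iteration count are invented for illustration):

```python
# Guess, compare to ground truth, measure the error, adjust the weights.
import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([0., 1., 1., 1.])                          # ground truth (logical OR)

w, b, lr = np.zeros(2), 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    guess = sigmoid(X @ w + b)      # the network's guess for each input
    error = guess - y               # "Did I get this right?"
    # Walk the error back: adjust each weight in proportion to how much
    # it contributed to the error.
    w -= lr * (X.T @ error) / len(y)
    b -= lr * error.mean()

print(np.round(sigmoid(X @ w + b)))  # -> approximately [0. 1. 1. 1.]
```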

Just to make the point again: the algorithms, combined with the training data - not external human computer programmers - create the rules that the AI uses. The resulting model is capable of solving complex tasks such as recognizing objects it's never seen before, translating text or speech, or controlling a drone swarm.

(Instead of building a model from scratch, for common machine learning tasks you can now buy pretrained models from others - see here and here - much like chip designers buying IP Cores.)
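As a sketch of what using someone else's pretrained model looks like (assuming PyTorch and torchvision are installed; the exact argument names vary across torchvision versions, and this is just one of many model sources):

```python
# Download weights someone else already trained, then run inference.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

fake_image = torch.rand(1, 3, 224, 224)   # stand-in for a real photo
with torch.no_grad():
    scores = model(fake_image)
print(scores.argmax(dim=1))               # predicted ImageNet class index
```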

Machine Learning Training - Hardware

Training a machine learning model is a very computationally intensive task. AI hardware must be able to perform thousands of multiplications and additions in a mathematical process called matrix multiplication. It requires specialized chips to run fast. (See the AI hardware section for details.)
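The core arithmetic behind that statement is matrix multiplication. A quick sense of the scale, in NumPy (the sizes are arbitrary examples):

```python
# Even one layer on one batch is a lot of multiply-adds.
import numpy as np

batch   = np.random.rand(256, 1024)   # 256 training examples, 1024 features each
weights = np.random.rand(1024, 512)   # one layer's weight matrix

activations = batch @ weights         # 256 x 512 x 1024 ≈ 134 million multiply-adds
print(activations.shape)              # (256, 512)
```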

Machine Learning - Simplification via pruning, quantization, distillation

Just like classic computer code needs to be compiled and optimized before it is deployed on its target hardware, machine learning models are simplified and modified (pruned) to use less computing power, energy, and memory before they're deployed to run on their hardware.
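A toy illustration of one such step, magnitude pruning, where the smallest weights are simply zeroed out (the 80% figure and random weights are invented; real pipelines typically also fine-tune after pruning):

```python
# Zero out the smallest weights so the deployed model needs less
# compute and memory.
import numpy as np

weights = np.random.randn(1000)              # one layer's trained weights
cutoff = np.percentile(np.abs(weights), 80)  # keep only the largest 20%

pruned = np.where(np.abs(weights) < cutoff, 0.0, weights)
print(f"{(pruned == 0).mean():.0%} of weights set to zero")
```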

