
Brinks Home Security Will Leverage AI to Drive Customer Experience – Security Sales & Integration

A partnership with startup OfferFit aims to unlock new insights into customer journey mapping with an AI-enabled, self-learning platform.

DALLAS: Brinks Home Security has embarked on what it terms an artificial intelligence (AI) transformation in partnership with OfferFit to deliver true one-to-one marketing personalization, according to an announcement.

Founded last year, OfferFit uses self-learning AI to personalize marketing offers down to the individual level. Self-learning AI allows companies to scale their marketing offers using real-time results driven by machine learning.

Self-learning AI, also called reinforcement learning, first came to national attention through DeepMind's AlphaGo program, which beat human Go champion Lee Sedol in 2016. While the technology has been used in academic research for years, commercial applications are just starting to be implemented.

Brinks Home Security CEO William Niles approached OfferFit earlier this year about using the AI platform to test customer marketing initiatives, according to the announcement. The pilot program involved using OfferFit's proprietary AI to personalize offers for each customer in the sample set.

At first, the AI performed no better than the control. Within two weeks, however, it had reached twice the performance of the control population, and by the end of the third week it had reached four times the result of the control group, the announcement states.

Brinks Home Security is now looking to expand use cases to other marketing and customer experience campaigns with the goal of providing customers with relevant, personalized offers and solutions.

"The companies that flourish in the next decade will be the leaders in AI adoption," Niles says. "Brinks Home Security is partnering with OfferFit because we are on a mission to have the best business intelligence and marketing personalization in the industry."

Personalization is a key component in creating customers for life. The consumer electronics industry, in particular, has a huge opportunity to leverage this type of machine learning to provide customers with more meaningful company interactions, not only at the point of sale but elsewhere in the customer lifecycle.

"Our goal is to create customers for life by providing a premium customer experience," says Jay Autrey, chief customer officer at Brinks Home Security. "To achieve that, we must give each customer exactly the products and services they need to be safe and comfortable in their home. OfferFit lets us reach true one-to-one personalization."

The Brinks Home Security test allowed OfferFit to see its AI adapting through a real-world case. Both companies see opportunities to expand the partnership and its impact on the customer lifecycle.

"We know that AI is the future of marketing personalization, and pilot programs like the one that Brinks Home Security just completed demonstrate the value that machine learning can have for a business and its customers," comments OfferFit CEO George Khachatryan.


The Future is Unmanned – The Maritime Executive

Why the Navy should build unmanned fighters as well as unmanned vessels

Back to the future: the X-47B unmanned fighter prototype aboard the carrier USS George H.W. Bush, 2013 (USN)

By CIMSEC 02-28-2021 08:02:00

[By Trevor Phillips-Levine, Dylan Phillips-Levine, and Walker D. Mills]

In August 2020, USNI News reported that the Navy had initiated work to develop its first new carrier-based fighter in almost 20 years. While the F-35C Lightning II will still be in production for many years, the Navy needs another fighter ready to replace the bulk of the F/A-18E/F Super Hornets and EA-18G Growlers by the mid-2030s. This new program will design that aircraft. While this is an important development, it will be to the Navy's detriment if the Next Generation Air Dominance (NGAD) program yields a manned fighter.

Designing a next-generation manned aircraft would be a critical mistake. Every year remotely piloted aircraft (RPAs) replace more manned aviation platforms, and artificial intelligence (AI) grows increasingly capable. By the mid-2030s, when the NGAD platform is expected to begin production, a manned platform will be obsolete on arrival. To ensure the Navy maintains a qualitative and technical edge in aviation, it needs to invest in an unmanned-capable aircraft today. Recent advances and long-term trends in automation and computing make it clear that such an investment is not only prudent but necessary to maintain capability overmatch and avoid falling behind.

Artificial Intelligence

This year, an AI designed by a team from Heron Systems defeated an Air Force pilot, call sign "Banger," 5-0 in a simulated dogfight run by DARPA. Though the dogfight was simulated and had numerous constraints, it was only the latest in a long string of AI successes in competitions against human masters and experts.

Since 1997, when IBM's Deep Blue beat reigning world chess champion Garry Kasparov over six games in Philadelphia, machines have been on a winning streak against humans. In 2011, IBM's Watson won Jeopardy!. In 2017, DeepMind's (Google) AlphaGo beat the world's number one Go player at the complex Chinese board game. In 2019, DeepMind's AlphaStar beat one of the world's top-ranked StarCraft II players, a real-time computer strategy game, 5-0. Later that year, an AI from Carnegie Mellon named Pluribus beat six professionals in a game of Texas Hold'em poker. On the lighter side, an AI writing algorithm nearly beat the writing team for the game Cards Against Humanity in a competition to see who could sell more card packs in a Black Friday write-off. After the contest, the company's statement read: "The writers sold 2% more packs, so their jobs will be replaced by automation later instead of right now. Happy Holidays."

It's a joke, but the company is right. AI is getting better every year, and human abilities will continue to be bested by AI in increasingly complex and abstract tasks. History shows that human experts have been repeatedly surprised by AI's rapid progress, and their predictions of when AI will reach human parity in specific tasks often come true years or even a decade early. We can't make the same mistake with unmanned aviation.

Feb. 11, 1996: Garry Kasparov, left, reigning world chess champion, plays against IBM's Deep Blue in the second game of a six-game match in Philadelphia. Moving the chess pieces for Deep Blue is Feng-hsiung Hsu, architect and principal designer of the Deep Blue chess machine. (H. Rumph, Jr./AP File)

Most of these competitive AIs use machine learning. A subset of machine learning is deep reinforcement learning, which uses biologically inspired evolutionary techniques to pit a model against itself over and over. Models that are more successful at accomplishing the specific goal, such as winning at Go or identifying pictures of tigers, continue on. It is like a giant bracket, except that the AI can compete against itself millions or even billions of times in preparation to compete against a human. Heron Systems' AI, which defeated the human pilot, had run over four billion simulations before the contest. The creators called it "putting a baby in the cockpit": the AI was given almost no instructions on how to fly, so even basic practices like not crashing into the ground had to be learned through trial and error.
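The self-play idea described above can be sketched at toy scale. The following is a hypothetical illustration, not Heron Systems' system: tabular Q-learning that trains entirely against itself on the simple take-away game Nim (21 flags; players alternate removing 1 to 3; whoever takes the last flag wins), using a negamax-style update so a single shared table plays both sides of the bracket.

```python
import random

def train(episodes=40000, alpha=0.5, eps=0.3, start=21, seed=0):
    """Self-play Q-learning on 21-flag Nim: remove 1-3 flags, last flag wins."""
    rng = random.Random(seed)
    # Q[s][a-1]: value, for the player to move, of removing a flags from a pile of s.
    Q = [[0.0] * 3 for _ in range(start + 1)]
    for _ in range(episodes):
        s = start
        while s > 0:
            moves = list(range(1, min(3, s) + 1))
            if rng.random() < eps:
                a = rng.choice(moves)                       # explore
            else:
                a = max(moves, key=lambda m: Q[s][m - 1])   # exploit
            # Negamax target: a winning move is worth +1; otherwise the
            # opponent moves next, so our value is minus their best value.
            if s - a == 0:
                target = 1.0
            else:
                target = -max(Q[s - a][m - 1]
                              for m in range(1, min(3, s - a) + 1))
            Q[s][a - 1] += alpha * (target - Q[s][a - 1])
            s -= a  # hand the smaller pile to the opponent
    return Q

def best_move(Q, s):
    """Greedy move from a pile of s flags under the learned table."""
    return max(range(1, min(3, s) + 1), key=lambda m: Q[s][m - 1])

Q = train()
```

Given only win/loss feedback, the trained table rediscovers the classic strategy of leaving the opponent a multiple of four flags (for example, taking 1 from a pile of 5), a rule that was never coded in.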

This type of training has advantages: algorithms can come up with moves that humans have never thought of, or use maneuvers humans would not choose. In the Go matches between Lee Sedol and AlphaGo, the AI made a move on turn 37 of game two that shocked the audience and Sedol. Fan Hui, a three-time European Go champion and spectator of the match, said, "It's not a human move. I've never seen a human play this move." It is possible that the move had never been played before in the history of the game. In the AlphaDogfight competition, the AI favored aggressive head-on gun attacks, a tactic considered high-risk and prohibited in training. Most pilots wouldn't attempt it in combat. But an AI could. AI algorithms can develop and employ maneuvers that human pilots wouldn't think of or wouldn't attempt, and they can be especially unpredictable in combat against humans precisely because they aren't human.

An AI also offers significant advantages over humans in piloting an aircraft because it is not limited by biology. An AI can make decisions in fractions of a second and simultaneously receive input from any number of sensors. It never has to move its eyes or turn its head to get a better look. In high-speed combat, where margins are measured in seconds or less, this speed matters. An AI also never gets tired; it is immune to the human factors of being a pilot. It is impervious to emotion, mental stress, and, arguably the most critical inhibitor, the biological stresses of high-G maneuvers. Human pilots have a limit to their continuous high-G endurance. In the AlphaDogfight, both the AI and "Banger," the human pilot, spent several minutes in continuous high-G maneuvers. While such maneuvers would be fine for an AI, in real combat they would likely induce loss of consciousness (G-LOC) in human pilots.

Design and Mission Profiles

Aircraft, apart from remotely piloted aircraft (RPAs), are designed with a human pilot in mind. It is inherent to the platform that it must carry a human pilot and devote space and systems to all the necessary life-support functions. Many of the maximum tolerances an aircraft can withstand are bottlenecked not by the aircraft itself but by its pilot. An unmanned aircraft does not have to protect or carry a human pilot; it can be designed solely for the mission.

Aviation missions are also limited by the endurance of human pilots; there is a finite number of hours a human can remain combat effective in a cockpit. Using unmanned aircraft changes that equation so that the limit is the capability of the aircraft and its systems. Like surveillance drones, AI-piloted aircraft could remain on station far longer than human-piloted aircraft and, with air-to-air refueling, possibly for days.

The future operating environment will be less and less forgiving for human pilots. Decisions will be made at computational speeds that outpace a human OODA loop. Missiles will fly at hypersonic speeds, and directed-energy weapons will strike targets at the speed of light. Lockheed Martin has set a goal of mounting lasers on fighter jets by 2025. Autonomous aircraft piloted by AI will have distinct advantages in this environment because they can react faster and sustain that reaction speed indefinitely. The Navy designed the Phalanx system to be autonomous in the 1970s, and embedded doctrine statements into the Aegis combat system, because it did not believe humans could react fast enough in the missile-age threat environment. The future will be even less forgiving, with hypersonic threats and decisions made at the speed of AI that will often trump those made at human speeds in combat.

Unmanned aircraft are also inherently more risk-worthy than manned aircraft. Commanders with unmanned aircraft can take greater risks and plan more aggressive missions that would have carried an unacceptably low probability of return for manned missions. This flexibility will be essential in rolling back and dismantling modern air defenses and anti-access/area-denial networks.

Unmanned is Already Here

The U.S. military already flies hundreds of large RPAs like the MQ-9 Reaper and thousands of smaller RPAs like the RQ-11 Raven. It uses these aircraft for reconnaissance, surveillance, targeting, and strike. The Marine Corps has flown unmanned cargo helicopters in Afghanistan, and other cargo-carrying RPAs and autonomous aircraft have proliferated in the private sector. These aircraft have been displacing human pilots in the cockpit for decades, with human pilots now operating from the ground. The dramatic proliferation of unmanned aircraft over the last two decades has touched every major military and conflict zone. Even terrorists and non-state actors are leveraging unmanned aircraft for both surveillance and strike.

Apart from NGAD, the Navy is going full speed ahead on unmanned and autonomous vehicles. Last year it awarded a $330 million contract for a medium-sized autonomous vessel. In early 2021, the Navy plans to run a large Fleet Battle Problem exercise centered on unmanned vessels. The Navy has also begun to supplement its MH-60S squadrons with the unmanned MQ-8B, whose chief advantage over the manned helicopter is its long on-station time. The Navy continues to invest in its unmanned MQ-4C maritime surveillance drones and has now flight-tested the unmanned MQ-25 Stingray aerial tanker. In fact, the Navy has pursued unmanned and autonomous vehicles so aggressively that Congress has tried to slow its adoption and restrict some funding.

The Air Force, too, has been investing in unmanned combat aircraft. The unmanned "loyal wingman" drone is already being tested, and in 2019 the service released its Artificial Intelligence Strategy, arguing that AI is "a capability that will underpin our ability to compete, deter and win." The service is also moving forward with testing its Golden Horde, an initiative to create a lethal swarm of autonomous drones.

The Marine Corps has also decided to bet heavily on an unmanned future. In the recently released Force Design 2030 report, the Commandant of the Marine Corps calls for doubling the Corps' unmanned squadrons. Marines are also designing unmanned ground vehicles that will be central to their new operating concept, Expeditionary Advanced Base Operations (EABO), and new, large unmanned aircraft. Department of the Navy leaders have said they would not be surprised if as much as 50 percent of Marine Corps aviation were unmanned relatively soon. The Marine Corps is also investing in a new family of systems to meet its requirement for ship-launched drones. With so much investment in other unmanned and autonomous platforms, why is the Navy not moving forward on an unmanned NGAD?

Criticism

An autonomous, next-generation combat aircraft for the Navy faces several criticisms, some valid and some not. Critics can rightly point out that AI is not ready yet. While this is certainly true, it likely will be ready by the mid-2030s, when the NGAD is reaching production. Fifteen years ago, engineers were proud of building a computer that could beat Garry Kasparov at chess. Today, AIs have mastered ever more complex real-time games and aerial dogfighting. One can only expect AI to make a similar, if not greater, leap in the next 15 years. We need to be future-proofing combat aircraft now. So the question should not be "Is AI ready now?" but "Will AI be ready in 15 years, when NGAD is entering production?"

Critics of lethal autonomy should note that it is already here. Loitering munitions are only the most recent manifestation of weapons without a human in the loop. The U.S. military has employed autonomous weapons ever since Phalanx was deployed on ships in the 1970s, and more recently with anti-ship missiles featuring intelligent seeker heads. The Navy is also simultaneously investing in autonomous surface vessels and unmanned helicopters, proving that there is room for lethal autonomy in naval aviation.

Some have raised concerns that autonomous aircraft can be hacked and that RPAs can have their command-and-control links broken, jammed, or hijacked. But these concerns are no more valid for unmanned aircraft than for manned aircraft. Modern 5th-generation aircraft are full of computers and networked systems and use fly-by-wire controls. A hacked F-35 would be hardly different from a hacked unmanned aircraft, except that there is a human trapped aboard. RPAs, for their part, have lost-link protocols that can return them safely to base if they lose contact with a ground station.

Unfortunately, perhaps the largest obstacle to an unmanned NGAD is imagination. Simply put, it is difficult for Navy leaders, often pilots themselves, to imagine a computer doing a job they have spent years mastering and often consider as much an art as a science. But these arguments sound eerily similar to those made by mounted cavalry commanders in the lead-up to the Second World War. As late as 1939, Army General John K. Herr argued that tanks could not replace horses on the battlefield, writing: "We must not be misled to our own detriment to assume that the untried machine can displace the proved and tried horse." Similarly, the U.S. Navy was slow to adopt and trust search radars in the Second World War. Of the experience at Guadalcanal, historian James D. Hornfischer wrote, "The unfamiliar power of a new technology was seldom a match for a complacent human mind bent on ignoring it." Today we cannot make the same mistakes.

Conclusion

The future of aviation is unmanned aircraft whether remotely piloted, autonomously piloted, or a combination. There is simply no reason that a human needs to be in the cockpit of a modern, let alone next-generation aircraft. AI technology is progressing rapidly and consistently ahead of estimates. If the Navy waits to integrate AI into combat aircraft until it is mature, it will put naval aviation a decade or more behind.

Platforms being designed now need to be engineered to incorporate AI and future advances. Human pilots will not be able to compete with mature AI; already, pilots are losing to AI in dogfights, arguably the most complex part of their skill set. The Navy needs to design the next generation of combat aircraft for unmanned flight, or it risks making naval aviation irrelevant in the future aerial fight.

Trevor Phillips-Levine is a lieutenant commander in the United States Navy. He has flown the F/A-18 Super Hornet in support of operations New Dawn and Enduring Freedom and is currently serving as a department head in VFA-2.

Dylan Phillips-Levine is a lieutenant commander in the United States Navy. He has flown the T-6B Texan II as an instructor and the MH-60R Seahawk. He is currently serving as an instructor in the T-34C-1 Turbo-Mentor as an exchange instructor pilot with the Argentine navy.

Walker D. Mills is a captain in the Marines. An infantry officer, he is currently serving as an exchange instructor at the Colombian naval academy. He is an Associate Editor at CIMSEC and an MA student at the Center for Homeland Defense and Security at the Naval Postgraduate School.

This article appears courtesy of CIMSEC and may be found in its original form here.

The opinions expressed herein are the author's and not necessarily those of The Maritime Executive.


Examining the world through signals and systems – MIT News

There's a mesmerizing video animation on YouTube of simulated self-driving traffic streaming through a six-lane, four-way intersection. Dozens of cars flow through the streets, pausing, turning, slowing, and speeding up to avoid colliding with their neighbors, and not a single car stops. But what if even one of those vehicles was not autonomous? What if only one was?

In the coming decades, autonomous vehicles will play a growing role in society, whether keeping drivers safer, making deliveries, or increasing accessibility and mobility for elderly or disabled passengers.

But MIT Assistant Professor Cathy Wu argues that autonomous vehicles are just part of a complex transport system that may involve individual self-driving cars, delivery fleets, human drivers, and a range of last-mile solutions to get passengers to their doorstep, not to mention road infrastructure like highways, roundabouts, and, yes, intersections.

Transport today accounts for about one-third of U.S. energy consumption. The decisions we make today about autonomous vehicles could have a big impact on this number, ranging from a 40 percent decrease in energy use to a doubling of energy consumption.

So how can we better understand the problem of integrating autonomous vehicles into the transportation system? Equally important, how can we use this understanding to guide us toward better-functioning systems?

Wu, who joined the Laboratory for Information and Decision Systems (LIDS) and MIT in 2019, is the Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering as well as a core faculty member of the MIT Institute for Data, Systems, and Society. Growing up in a Philadelphia-area family of electrical engineers, Wu sought a field that would enable her to harness engineering skills to solve societal challenges.

During her years as an undergraduate at MIT, she reached out to Professor Seth Teller of the Computer Science and Artificial Intelligence Laboratory to discuss her interest in self-driving cars.

Teller, who passed away in 2014, met her questions with warm advice, says Wu. He told her: "If you have an idea of what your passion in life is, then you have to go after it as hard as you possibly can. Only then can you hope to find your true passion."

Anyone can tell you to go after your dreams, but his insight was that dreams and ambitions are not always clear from the start. It takes hard work to find and pursue your passion.

Chasing that passion, Wu would go on to work with Teller, as well as in Professor Daniela Rus' Distributed Robotics Laboratory, and finally as a graduate student at the University of California at Berkeley, where she won the IEEE Intelligent Transportation Systems Society's best PhD award in 2019.

In graduate school, Wu had an epiphany: She realized that for autonomous vehicles to fulfill their promise of fewer accidents, time saved, lower emissions, and greater socioeconomic and physical accessibility, these goals must be explicitly designed for, whether through physical infrastructure, the algorithms used by vehicles and sensors, or deliberate policy decisions.

At LIDS, Wu uses a type of machine learning called reinforcement learning to study how traffic systems behave, and how autonomous vehicles in those systems ought to behave to get the best possible outcomes.

Reinforcement learning, most famously used by AlphaGo, DeepMind's human-beating Go program, is a powerful class of methods that captures the idea behind trial and error: given an objective, a learning agent repeatedly attempts to achieve it, failing and learning from its mistakes in the process.
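That trial-and-error loop can be made concrete with the simplest reinforcement learning setting, a multi-armed bandit. This is an illustrative sketch, not code from Wu's research: an epsilon-greedy agent repeatedly tries three actions with unknown payoff probabilities (the values below are made up), fails often early on, and learns from the outcomes which action best serves its objective.

```python
import random

def run_bandit(means=(0.2, 0.5, 0.8), steps=5000, eps=0.1, seed=1):
    """Epsilon-greedy learning on a Bernoulli bandit with the given payoff means."""
    rng = random.Random(seed)
    est = [0.0] * len(means)   # running estimate of each action's value
    n = [0] * len(means)       # how many times each action was tried
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(means))                     # explore at random
        else:
            a = max(range(len(means)), key=lambda i: est[i])  # exploit best so far
        reward = 1.0 if rng.random() < means[a] else 0.0      # noisy outcome
        n[a] += 1
        est[a] += (reward - est[a]) / n[a]                    # incremental mean update
    return est

est = run_bandit()
```

After a few thousand trials the agent's estimates single out the highest-paying action, purely from sampled outcomes; the same loop, with a far richer state and action space, underlies the traffic experiments described next.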

In a traffic system, the objectives might be to maximize the overall average velocity of vehicles, to minimize travel time, to minimize energy consumption, and so on.

When studying common components of traffic networks such as grid roads, bottlenecks, and on- and off-ramps, Wu and her colleagues have found that reinforcement learning can match, and in some cases exceed, the performance of current traffic control strategies. More importantly, reinforcement learning can shed new light on complex networked systems that have long evaded classical control techniques. For instance, if just 5 to 10 percent of vehicles on the road were autonomous and used reinforcement learning, that could eliminate congestion and boost vehicle speeds by 30 to 140 percent. And the learning from one scenario often translates well to others. These insights could soon help inform public policy or business decisions.

In the course of this research, Wu and her colleagues helped improve a class of reinforcement learning methods called policy gradient methods. Their advancements turned out to be a general improvement to most existing deep reinforcement learning methods.
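Policy gradient methods like those the team improved can be illustrated at toy scale. The sketch below is a generic REINFORCE-style learner on a two-action problem, an assumption for illustration rather than the paper's algorithm: it parameterizes a softmax policy, samples actions, and nudges the parameters so that actions with above-baseline reward become more likely.

```python
import math
import random

def reinforce(rewards=(0.0, 1.0), steps=2000, lr=0.1, seed=2):
    """REINFORCE with a softmax policy over two actions and a running-mean baseline."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]          # one preference parameter per action
    baseline = 0.0              # running mean of rewards, for variance reduction
    for t in range(1, steps + 1):
        z = [math.exp(x) for x in theta]
        probs = [x / sum(z) for x in z]             # softmax policy
        a = 0 if rng.random() < probs[0] else 1     # sample an action
        r = rewards[a]
        baseline += (r - baseline) / t
        # grad of log pi(a) w.r.t. theta[i] for softmax: 1[i == a] - probs[i]
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - probs[i]
            theta[i] += lr * (r - baseline) * grad  # ascend the policy gradient
    z = [math.exp(x) for x in theta]
    return [x / sum(z) for x in z]                  # final action probabilities

probs = reinforce()
```

After training, the policy concentrates nearly all its probability on the higher-reward action. Improvements of the kind the paragraph mentions typically target the variance and stability of exactly this gradient estimate.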

But reinforcement learning techniques will need to be continually improved to keep up with the scale and shifts in infrastructure and changing behavior patterns. And research findings will need to be translated into action by urban planners, automakers, and other organizations.

Today, Wu is collaborating with public agencies in Taiwan and Indonesia to use insights from her work to guide better dialogues and decisions. By changing traffic signals or using nudges to shift drivers' behavior, are there other ways to achieve lower emissions or smoother traffic?

"I'm surprised by this work every day," says Wu. "We set out to answer a question about self-driving cars, and it turns out you can pull apart the insights, apply them in other ways, and then this leads to new exciting questions to answer."

Wu is happy to have found her intellectual home at LIDS. She describes it as a very deep, intellectual, friendly, and welcoming place. And she counts among her research inspirations MIT course 6.003 (Signals and Systems), a class she encourages everyone to take, taught in the tradition of professors Alan Oppenheim (Research Laboratory of Electronics) and Alan Willsky (LIDS). "The course taught me that so much in this world could be fruitfully examined through the lens of signals and systems, be it electronics or institutions or society," she says. "I am just realizing as I'm saying this that I've been empowered by LIDS thinking all along!"

Research and teaching through a pandemic haven't been easy, but Wu is making the best of a challenging first year as faculty. ("I've been working from home in Cambridge; my short walking commute is irrelevant at this point," she says wryly.) To unwind, she enjoys running, listening to podcasts on topics ranging from science to history, and reverse-engineering her favorite Trader Joe's frozen foods.

She's also been working on two Covid-related projects born at MIT: One explores how data from the environment, such as data collected by internet-of-things-connected thermometers, can help identify emerging community outbreaks. Another asks whether it's possible to ascertain how contagious the virus is on public transport, and how different factors might decrease the transmission risk.

"Both are in their early stages," Wu says. "We hope to contribute a bit to the pool of knowledge that can help decision-makers somewhere. It's been very enlightening and rewarding to do this and see all the other efforts going on around MIT."


DeepMind Proposes Graph-Theoretic Investigation of the Multiplayer Games Landscape – Synced

In the mid-1960s, computer science and AI researchers adopted the pet name "drosophila" for the game of Chess, a reference to the fruit flies commonly used in genetic research. American geneticist Thomas Hunt Morgan made critical contributions to that field by studying his famous fly room, and AI researchers today believe multiplayer games like Chess can provide similarly accessible and relatively simple experimental environments for shaping useful knowledge about complex systems.

In recent years researchers have made multiplayer games a hot testbed for AI research, using reinforcement learning techniques to create superhuman agents in Chess, Go, StarCraft II and others.

This progress, however, can be better informed by characterizing games and their topological landscape, proposes the paper Navigating the Landscape of Multiplayer Games, recently published in Nature Communications. In the work, researchers from DeepMind and Universidade de Lisboa introduce a graph-based toolkit for analyzing and comparing games in this regard.

Understanding and decomposing the characterizing features of games can be leveraged for downstream training of agents via curriculum learning, which seeks to enable agents to learn increasingly complex tasks. The researchers say it has become increasingly important to identify a framework that can taxonomize, characterize, and decompose complex AI tasks, and they turned to multiplayer games for reference. They define the core challenge as a "Problem Problem": the engineering problem of generating large numbers of interesting adaptive environments to support research.

The researchers start with a fundamental question: What makes a game interesting enough for an AI agent to learn to play? They propose that answering this requires techniques that can characterize and enable discovery over the topological landscape of games, whether they are interesting or not.

The team combined graph and game theory to analyze the structure of general-sum, multiplayer games. They used the new toolkit to characterize games, looking first at motivating examples and canonical games with well-defined structures, then extending to larger-scale empirical games datasets. The games' graph representations can offer researchers various insights, such as the strong transitive relationships revealed in AlphaGo, the DeepMind program that defeated Go grandmaster Lee Sedol in 2016.

The study surveys the landscape of games and develops techniques to help with understanding the space of games, the downstream training of agents in game settings, and interest-improving algorithmic development. The team says the work opens paths for further exploration of the theoretical properties of graph-based games analysis and the Problem Problem and task theory, and can benefit related studies on the geometry and structure of games.

The paper Navigating the Landscape of Multiplayer Games is published in Nature Communications.

Reporter: Fangyu Cai | Editor: Michael Sarazen



There’s No Turning Back on AI in the Military – WIRED

For countless Americans, the United States military epitomizes nonpareil technological advantage. Thankfully, in many cases, we live up to it.

But our present digital reality is quite different, even sobering. Fighting terrorists for nearly 20 years after 9/11, we remained a flip-phone military in what is now a smartphone world. Infrastructure to support a robust digital force remains painfully absent. Consequently, service members lead personal lives digitally connected to almost everything and military lives connected to almost nothing. Imagine having some of the world's best hardware (stealth fighters or space planes) supported by the world's worst data plan.

Meanwhile, the accelerating global information age remains dizzying. The year 2020 is on track to produce 59 zettabytes of data. That's 59 followed by 21 zeroes, over 50 times the number of stars in the observable universe. On average, every person online contributes 1.7 megabytes of content per second, and counting. Taglines like "data is the new oil" emphasize the economic import, but not its full potential. "Data is more" reverently captures its ever-evolving, artificially intelligent future.


Will Roper is the Air Force and Space Force acquisition executive.

The rise of artificial intelligence has come a long way since 1945, when visionary mathematician Alan Turing hypothesized that machines would one day perform intelligent functions, like playing chess. Aided by meteoric advances in data processing, a million-billion-fold over the past 70 years, Turing's vision was achieved only 52 years later, when IBM's Deep Blue defeated reigning world chess champion Garry Kasparov with select moves described as "almost human." But this impressive feat would be dwarfed in 2016, when Google's AlphaGo shocked the world with a beyond-human, even "beautiful" move on its way to defeating 18-time world Go champion Lee Sedol. That now-famous move 37 of game two was the death knell of human preeminence in strategy games. Machines now teach the world's elite how to play.

China took more notice of this than usual. We've become frustratingly accustomed to China copying or stealing US military secrets; two decades of post-9/11 operations provide a lot of time to watch and learn. But China's ambitions far outstrip merely copying or surpassing our military. AlphaGo's victory was a Sputnik moment for the Chinese Communist Party, triggering its own NASA-like response: a national Mega-Project in AI. Though there is no moon in this digital space race, its giant leap may be the next industrial revolution. The synergy of 5G and cloud-to-edge AI could radically evolve the internet of things, enabling ubiquitous AI and all the economic and military advantages it could bestow. It's not just our military that needs digital urgency: Our nation must wake up fast. The only thing worse than fearing AI itself is fearing not having it.

There is a gleam of hope. The Air Force and Space Force had their own "move 37" moment last month during the first AI-enabled shoot-down of a cruise missile at blistering machine speeds. Though it happened in a literal flash, this watershed event was seven years in the making, integrating technologies as diverse as hypervelocity guns, fighters, computing clouds, virtual reality, 4G LTE and 5G, and even Project Maven, the Pentagon's first AI initiative. In the blink of a digital eye, we birthed an internet of military things.

Working at unprecedented speeds (at least for the Pentagon), the Air Force and Space Force are expanding this IoT.mil across the military, and not a moment too soon. With AI surpassing human performance in more than just chess and Go, traditional roles in warfare are not far behind. "Whose AI will overtake them?" is an operative question in the digital space race. Another is how our military finally got off the launch pad.

More than seven years ago, I spearheaded the development of hypervelocity guns to defeat missile attacks with low-cost, rapid-fire projectiles. I also launched Project Maven to pursue machine-speed targeting of potential threats. But with no defense plug-n-play infrastructure, these systems remained stuck in airplane mode. The Air Force and Space Force later offered me the much-needed chance to create that digital infrastructure (cloud, software platforms, enterprise data, even coding skills) from the ground up. We had to become a good software company to become a software-enabled force.
