Archive for the ‘Artificial Intelligence’ Category

Shaping an Australian Navy Approach to Maritime Remotes, Artificial Intelligence and Combat Grids – Second Line of Defense

By Robbin Laird

During my visit to Australia last October, I had a chance to talk with a number of people about the country's evolving approach to maritime remotes and their role within fifth-generation warfare, or what I refer to as building a distributed integratable force, or an integrated distributed force.

Towards the end of my stay, I was able to discuss these issues with Commander Paul Hornsby, the Royal Australian Navy's lead on maritime remotes and the key presenter on this topic at the Seapower Conference held in Sydney in early October.

We discussed a number of issues, but I am going to focus on where maritime remotes fit within the evolving strategic thinking of the Royal Australian Navy and its contribution to the ADF.

The broad point is that Australia is focusing on robotics and artificial intelligence more generally in its economy, with clear opportunities for innovation to flow between the civil and military sectors. Australia is a large island continent with a relatively small population. For both economic and defense reasons, Australia needs to extend the capabilities of its skilled manpower with robotic and AI capabilities. For the Navy, this means shaping a much larger fleet in terms of a significant web of maritime remotes working interactively with the various manned assets operating in an area of interest.

Commander Hornsby highlighted the 2018 Australian Robotics Roadmap as an indicator of the Australian approach to cross-leveraging robotic systems and AI. As the report noted:

Robotics can be the force multiplier needed to augment Australia's highly valued human workforce and to enable persistent, wide-area operations in air, land, sea, subsurface, space and cyber domains.

A second broad point is that Australia is working closely with core allies to forge a common R&D pool and to cross-learn from one another with regard to the operation of maritime remotes and their ability to deliver capabilities to the operational forces.

An example of the cross-learning and collaborative approach was Autonomous Warrior 2018. The exercise was a milestone in allied cooperation, according to Lt. Andrew Herring, in an article published on November 24, 2018.

When more than 50 autonomous technologies and over 500 scientists, technicians and support staff came together for AUTONOMOUS WARRIOR 2018 (AW18) in Jervis Bay, ACT, it marked the culmination of four years' collaboration between the militaries, defence scientists and defence industries of five nations.

Today, Navy's Deputy Director Mine Warfare, Diving and Special Ops Capability, Commander Paul Hornsby, and Defence Science and Technology's (DST) Trusted Autonomous Systems Program Leader, Professor Jason Scholz, are exploring autonomous technologies with US Air Force Research Lab's Senior Engineering Research Manager, Dr Mark Draper, and Dr Philip Smith from the UK's Defence Science and Technology Laboratory.

The four, with their respective organisations, are collaborating under the Five Eyes Technical Cooperation Program (TTCP), which shares information and ideas among defence scientists from Australia, UK, USA, Canada and New Zealand, pursuing strategic challenges in priority areas.

Among them is TTCP's Autonomy Strategic Challenge, which aims to integrate autonomous technologies to operate together in different environments.

AUTONOMOUS WARRIOR 2018 includes the Strategic Challenge's fifth and final scientific trial, "Wizard of Aus", a software co-development program aimed at managing autonomous vehicles from a shared command and control system that integrates with combat systems used by Five Eyes nations.

US Air Force Research Lab's Dr Mark Draper summarises AW18's ambitious objective: "What we are trying to achieve here is force multiplication and interoperability, where multiple unmanned systems from different countries (in the air, on the ground, on the surface of the water or even underwater) would all be controlled and managed by one person sitting at one control station."

Two systems together

To achieve this, two systems have come together: AIM and MAPLE.

Allied IMPACT, known as AIM, combines best-of-breed technologies from Australia, the United Kingdom, the United States and Canada.

"We've brought these technologies together and integrated them into one control station, and we are testing its effectiveness in reasonable and realistic military scenarios," Dr Draper said.

Australia has led development of three of AIM's eight modules: the Recommender, which uses artificial intelligence to analyse information and recommend actions to commanders; the Narrative, which automatically generates multimedia briefings about emerging operational situations; and DARRT, which enables real-time test and evaluation of autonomous systems.

The Maritime Autonomous Platform Exploitation (MAPLE) system is a UK-led project providing the information architecture required to integrate a diverse mix of live unmanned systems into a common operating picture that is fed into the AIM Command and Control Station.

"The sort of software co-development we are doing here is not usually done," UK Defence Scientist Dr Philip Smith said.

The evaluation team is using real-time data logging to evaluate system performance, apply lessons learned and improve the software.

"This is also giving us detailed diagnostics to determine where to focus effort for future development," he said.

Revolutionary potential

DST's Professor Jason Scholz is optimistic about the potential for these technologies beyond AW18.

"This activity has demonstrated what can be achieved when a spirit of cooperation, understanding and support exists between military personnel, scientists, engineers and industry.

"Systems became more reliable as the exercise progressed, with improvements made daily.

"These highly disruptive technologies can potentially revolutionise how armed forces operate. The sort of cooperation we've seen at AW18 is vital for bringing these technologies into service.

"It would be interesting to run a similar activity with these rapidly evolving technologies in two or three years," Professor Scholz said.

Lasting impact

Commander Hornsby, who has been the ADF lead for AW18 and is developing Navy's autonomous systems strategy, says the activity has raised awareness among Australia's Defence Force and defence industry.

The nearly 1000 visitors to AW18 gained fresh insights into the technology's current state of development and its potential to enhance capability.

"As a huge continent occupied by a relatively small population with a mid-sized defence force by world standards, the force multiplier effect of autonomous systems is vital, which is why Australia is a leading developer."

The evaluations done at AW18 are also important internationally.

"The world is watching AW18 closely because Australia offers the most challenging operating conditions for unmanned technologies. If they can make it here, they can make it anywhere," Commander Hornsby said.

Autonomous Warrior 2018 was a major demonstration and evaluation of the potential of robotic, autonomous and uninhabited systems, in support of Defence operations in coastal environments. It combined a dynamic exhibition, trials and exercising of in-service systems.

Australian industry contributed semi-autonomous vehicles for use in AW18 and developed data interfaces to enable control by Five Eyes systems. Contributing companies included Bluezone Group, Ocius, Defendtex, Australian Centre for Field Robotics, Silverton and Northrop Grumman. Vehicles were also contributed by Australian, NZ, US and UK government agencies.

In our discussion, Commander Hornsby noted that collaborative R&D and shared experience were key elements of the Australian approach, but that Australia has unique operating conditions in its surrounding waters: systems that work elsewhere will not necessarily succeed in the much more challenging waters off Northern and Western Australia, which are precisely the areas where the deployment of maritime remotes is a priority.

But one must remember that the maritime remote effort is a question of payloads and platforms, not simply of building platforms. Rear Admiral Mark Darrah, US Navy, made a comment about unmanned air systems that applies equally to maritime remotes: "Many view UAS as a capability when in fact it should be viewed as a means of employing payloads to achieve particular capabilities."

Commander Hornsby's approach to maritime remotes is very much in this spirit: looking at different platforms, in terms of speed, range, endurance and other performance parameters, measured against the kind of payload each platform might be able to carry.

Calculations of payload/platform pairings and their potential impacts then need to be measured against the kinds of missions they are capable of performing. In this sense, matching the payload/platform dyad to the mission or task suggests a prioritization for the Navy and the ADF in putting the particular capability into operation.

This also means that different allied navies might well have different views of their priority requirements, which could lead to very different timelines with regard to deployment of particular maritime remotes.

And if the sharing approach prevails, allied nations could field cross-cutting capabilities when deployed together, and could open acquisition and export opportunities with one another.

Commander Hornsby breaks out the missions for AUV and UUV employment in the following manner:

Home & Away operations

Depending on the combination, these provide: Deterrence, Sea Control, Sea Denial, Power Projection or Force Protection

What this means is that different payload/platform combinations can perform these different missions more or less effectively. And quite obviously, the concept of operations for each mission or task that includes maritime remotes needs to be shaped so that the remotes' capabilities are properly incorporated.
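To make that matching concrete, here is a minimal sketch of how a payload/platform pairing might be scored against different missions. All platform names, payloads, performance figures and weights below are invented for illustration; they are not drawn from RAN planning documents.

```python
from dataclasses import dataclass

@dataclass
class Pairing:
    """An illustrative payload/platform combination."""
    platform: str        # a notional USV or UUV class
    payload: str         # a notional sensor or effector package
    endurance_hrs: float
    range_nm: float

# Hypothetical per-mission weights: how much each performance
# parameter matters to that mission (values invented).
MISSION_WEIGHTS = {
    "sea_denial":       {"endurance_hrs": 0.7, "range_nm": 0.3},
    "force_protection": {"endurance_hrs": 0.4, "range_nm": 0.6},
}

def mission_score(p: Pairing, mission: str) -> float:
    """Crude weighted score of one pairing against one mission."""
    w = MISSION_WEIGHTS[mission]
    # Normalise against arbitrary reference values so the terms are comparable.
    return (w["endurance_hrs"] * p.endurance_hrs / 100.0
            + w["range_nm"] * p.range_nm / 500.0)

candidates = [
    Pairing("notional-USV", "towed passive sonar", 240.0, 400.0),
    Pairing("notional-UUV", "mine-hunting sonar", 60.0, 120.0),
]

for mission in MISSION_WEIGHTS:
    best = max(candidates, key=lambda p: mission_score(p, mission))
    print(f"{mission}: prefer {best.platform} with {best.payload}")
```

A real prioritization would fold in many more parameters (depth rating, stealth, C2 bandwidth, cost), but the shape of the calculation, pairings scored per mission, is the point.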

Commander Hornsby highlighted these points in a 2016 briefing as well.

But importantly, maritime remotes should not be looked at in isolation from the operation of the distributed force, and from the question of how integratable data can be accumulated and communicated to allow for C2 that can shape effective concepts of operations.

This means that how maritime remotes are worked as an interactive grid is a key part of shaping an effective way ahead. It allows for creative mixing and matching of remotes with manned assets, and for shaping decision-making at the tactical edge. Remotes and AI capabilities are not ends in and of themselves; they are key parts of the evolving C2/ISR capabilities that are reshaping the concepts of operations of the combat force.

In that 2016 briefing, Commander Hornsby provided an example of the kind of grid which maritime remotes enable:

To use an example in the European context: as the fourth battle of the Atlantic shapes up, if the allies can field cross-cutting maritime remote payload/platform capabilities, and can operate them in the waters the Russians intend to use for operations against NATO, then a new grid could be created. That grid would yield significant ISR data which could be communicated through UUV and USV networks to various parts of the 21st-century integrated distributed combat force.

Such an approach is clearly crucial for Australia as it pushes out its defense perimeter but needs to enhance maritime security and defense of its ports and adjacent waters. And that defense will highlight a growing role for maritime remotes.

As Robert Slaven of L3Harris Technologies, a former member of the Royal Australian Navy, has put it:

"The remotes can be distributed throughout the area of interest and be there significantly in advance of when we have to create a kinetic effect. In fact, they could be operating months or years in advance of shaping the decision of what kind of kinetic effect we would need in a crisis situation.

"We need to learn how to work the machines to shape our understanding of the battlespace and to shape the kind of C2 which could direct the kind of kinetic or non-kinetic effect we are trying to achieve."

The featured photo shows Head of Royal Australian Navy Capability, Rear Admiral Peter Quinn, AM, CSC, RAN (right), with Australian Defence Force personnel and industry partners, watching the Defendtex Tempest Unmanned Aerial Vehicle display during AUTONOMOUS WARRIOR 2018 at HMAS Creswell.

Also, see the following:

Manned-Unmanned Teaming: Shaping Future Capabilities

Artificial intelligence, geopolitics, and information integrity – Brookings Institution

Much has been written, and rightly so, about the potential that artificial intelligence (AI) can be used to create and promote misinformation. But there is a less well-recognized but equally important application for AI in helping to detect misinformation and limit its spread. This dual role will be particularly important in geopolitics, which is closely tied to how governments shape and react to public opinion both within and beyond their borders. And it is important for another reason as well: While nation-state interest in information is certainly not new, the incorporation of AI into the information ecosystem is set to accelerate as machine learning and related technologies experience continued advances.

The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.

This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote, or impede, information integrity.

Read and download the full article, "Artificial intelligence, geopolitics, and information integrity."

Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security – Security Intelligence

Artificial intelligence (AI) isn't new. What is new is the growing ubiquity of AI in large organizations. In fact, by the end of this year, I believe nearly every type of large organization will find AI-based cybersecurity tools indispensable.

Artificial intelligence is many things to many people. One fairly neutral definition is that it's a branch of computer science that focuses on intelligent behavior, such as learning and problem solving. Now that cybersecurity AI is mainstream, it's time to stop treating AI like some kind of magic pixie dust that solves every problem and start understanding its everyday necessity in the new cybersecurity landscape. 2020 is the year large organizations will come to rely on AI for security.

AI isn't magic, but for many specific use cases, the right tool for the job will increasingly involve AI. Here are six reasons why that's the case.

The monetary calculation every organization must make is the cost of security tools, programs and resources on one hand versus the cost of failing to secure vital assets on the other. That calculation is becoming easier as the potential cost of data breaches grows. And these costs aren't stemming from the cleanup operation alone; they may also include damage to the brand, drops in stock prices and loss of productivity.

The average total cost of a data breach is now $3.92 million, according to the 2019 Cost of a Data Breach Report. That's an increase of nearly 12 percent since 2014. The rising costs are also global, as Juniper Research predicts that the business costs of data breaches will exceed $5 trillion per year by 2024, with regulatory fines included.

These rising costs are partly due to the fact that malware is growing more destructive. Ransomware, for example, is moving beyond preventing file access and toward going after critical files and even master boot records.

Fortunately, AI can help security operations centers (SOCs) deal with these rising risks and costs. Indeed, the Cost of a Data Breach Report found that cybersecurity AI can decrease average costs by $230,000.

The percentage of state-sponsored cyberattacks against organizations of all kinds is also growing. In 2019, nearly one-quarter (23 percent) of breaches analyzed by Verizon were identified as having been funded or otherwise supported by nation-states or state-sponsored actors, up from 12 percent in the previous year. This is concerning because state-sponsored attacks tend to be far more capable than garden-variety cybercrime attacks, and detecting and containing these threats often requires AI assistance.

An arms race between adversarial AI and defensive AI is coming. That's just another way of saying that cybercriminals are coming at organizations with AI-based methods sold on the dark web to avoid setting off intrusion alarms and defeat authentication measures. So-called polymorphic malware and metamorphic malware change and adapt to avoid detection, with the latter making more drastic and hard-to-detect changes with its code.

Even social engineering is getting the artificial intelligence treatment. We've already seen deepfake audio attacks where AI-generated voices impersonating three CEOs were used against three different companies. Deepfake audio and video simulations are created using generative adversarial network (GAN) technologies, where two neural networks train each other (one learning to create fake data and the other learning to judge its quality) until the first can create convincing simulations.
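To illustrate the adversarial training just described, here is a minimal GAN loop in PyTorch. It is a generic sketch of the technique, not of any specific deepfake system; the toy "real data", network sizes and hyperparameters are all invented.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator: the generator maps noise to a fake
# "sample" vector; the discriminator scores samples as real or fake.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for real data (e.g. audio features); here just shifted noise.
    return torch.randn(n, 32) + 2.0

for step in range(1000):
    # 1) Train the discriminator to tell real from generated samples.
    real = real_batch()
    fake = G(torch.randn(32, 16)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(32, 16))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The two losses pull against each other: as the discriminator improves, the generator is forced to produce ever more convincing fakes, which is exactly the dynamic behind deepfake quality gains.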

GAN technology can, in theory and in practice, be used to generate all kinds of fake data, including fingerprints and other biometric data. Some security experts predict that future iterations of malware will use AI to determine whether they are in a sandbox or not. Sandbox-evading malware would naturally be harder to detect using traditional methods.

Cybercriminals could also use AI to find new targets, especially internet of things (IoT) targets. This may contribute to more attacks against warehouses, factory equipment and office equipment. Accordingly, the best defense against AI-enhanced attacks of all kinds is cybersecurity AI.

Large organizations are suffering from a chronic expertise shortage in the cybersecurity field, and this shortage will continue unless things change. To that end, AI-based tools can enable enterprises to do more with the limited human resources already present in-house.

The Accenture Security Index found that more than 70 percent of organizations worldwide struggle to identify what their high-value assets are. AI can be a powerful tool for identifying these assets for protection.

The quantity of data that has to be sifted through to identify threats is vast and growing. Fortunately, machine learning is well-suited to processing huge data sets and eliminating false positives.
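As a sketch of that idea, the snippet below trains a classifier to triage alerts so that only likely true positives reach an analyst. The features, synthetic labels and threshold are invented; a real deployment would train on an organization's own labeled alert history.

```python
# Minimal sketch of ML-assisted alert triage (all data here is synthetic).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Invented alert features: events/minute, distinct hosts touched, off-hours score.
X = rng.random((5000, 3))
# Synthetic ground truth: only extreme activity is a real threat.
y = ((X[:, 0] > 0.9) & (X[:, 1] > 0.8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Only alerts the model scores above a threshold go to a human analyst;
# the rest are deprioritized as probable false positives.
scores = clf.predict_proba(X_te)[:, 1]
to_analyst = scores > 0.5
print(f"{to_analyst.mean():.1%} of alerts escalated to analysts")
```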

In addition, rapid in-house software development may be creating many new vulnerabilities, but AI can find errors in code far more quickly than humans. Embracing rapid application development (RAD) therefore requires the use of AI for bug fixing.

These are just two examples of how growing complexity can inform and demand the adoption of AI-based tools in an enterprise.

There has always been tension between the need for better security and the need for higher productivity. The most usable systems tend to be the least secure, and the most secure systems are often unusable. Striking the right balance between the two is vital, but achieving this balance is becoming more difficult as attack methods grow more aggressive.

AI will likely come into your organization through the evolution of basic security practices. For instance, consider the standard security practice of authenticating employee and customer identities. As cybercriminals get better at spoofing users, stealing passwords and so on, organizations will be more incentivized to embrace advanced authentication technologies, such as AI-based facial recognition, gait recognition, voice recognition, keystroke dynamics and other biometrics.

The 2019 Verizon Data Breach Investigations Report found that 81 percent of hacking-related breaches involved weak or stolen passwords. To counteract these attacks, sophisticated AI-based tools that enhance authentication can be leveraged. For example, AI tools that continuously estimate risk levels whenever employees or customers access resources from the organization could prompt identification systems to require two-factor authentication when the AI component detects suspicious or risky behavior.
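A hedged sketch of such continuous risk estimation follows: an anomaly detector trained on a user's historical login telemetry scores each new attempt, and high-risk attempts are stepped up to a second factor. The features and the 0.5 threshold are purely illustrative assumptions.

```python
# Sketch of risk-based ("step-up") authentication using anomaly detection.
import numpy as np
from sklearn.ensemble import IsolationForest

# Train on a user's login history: [hour of day, geo distance (km),
# keystroke-cadence deviation]. Synthetic data stands in for real telemetry.
rng = np.random.default_rng(1)
history = np.column_stack([
    rng.normal(9, 2, 500),      # usually logs in around 09:00
    rng.exponential(5, 500),    # usually close to the home office
    rng.normal(0, 1, 500),      # usual typing dynamics
])
detector = IsolationForest(random_state=1).fit(history)

def authenticate(attempt: np.ndarray) -> str:
    # score_samples is high for normal points; negate it to get a risk score.
    risk = -detector.score_samples(attempt.reshape(1, -1))[0]
    return "require 2FA" if risk > 0.5 else "allow"    # illustrative threshold

print(authenticate(np.array([3.0, 4000.0, 3.5])))   # odd hour, far away
print(authenticate(np.array([9.5, 2.0, 0.1])))      # routine login
```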

A big part of the solution going forward is leveraging both AI and biometrics to enable greater security without overburdening employees and customers.

One of the biggest reasons why employing AI will be so critical this year is that doing so will likely be unavoidable. AI is being built into security tools and services of all kinds, so it's time to change our thinking around AI's role in enterprise security. Where it was once an exotic option, it is quickly becoming a mainstream necessity. How will you use AI to protect your organization?


China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) – The National Interest Online

Artificial intelligence (AI) is increasingly embedded into every aspect of life, and China is pouring billions into its bid to become an AI superpower. China's three-step plan is to pull equal with the United States in 2020, start making major breakthroughs of its own by mid-decade, and become the world's AI leader in 2030.

There's no doubt that Chinese companies are making big gains. Chinese government spending on AI may not match some of the most-hyped estimates, but China is providing big state subsidies to a select group of AI national champions: Baidu in autonomous vehicles (AVs), Tencent in medical imaging, Alibaba in smart cities and Huawei in chips and software.

State support isn't all about money. It's also about clearing the road to success -- sometimes literally. Baidu ("China's Google") is based in Beijing, where the local government has kindly closed more than 300 miles of city roads to make way for AV tests. Nearby Shandong province closed a 16 mile mountain road so that Huawei could test its AI chips for AVs in a country setting.

In other Chinese AV test cities, the roads remain open but are thoroughly sanitized. Southern China's tech capital, Shenzhen, is the home of AI leader Tencent, which is testing its own AVs on Shenzhen's public roads. Notably absent from Shenzhen's major roads are motorcycles, scooters, bicycles, or even pedestrians. Two-wheeled vehicles are prohibited; pedestrians are comprehensively corralled by sidewalk barriers and deterred from jaywalking by stiff penalties backed up by facial recognition technology.

And what better way to jump-start AI for facial recognition than by having a national biometric ID card database where every single person's face is rendered in machine-friendly standardized photos?

Making AI easy has certainly helped China get its AI strategy off the ground. But like a student who is spoon-fed the answers on a test, a machine that learns from a simplified environment won't necessarily be able to cope in the real world.

Machine learning (ML) uses vast quantities of experiential data to train algorithms to make decisions that mimic human intelligence. Type something like "ML 4 AI" into Google, and it will know exactly what you mean. That's because Google learns English in the real world, not from memorizing a dictionary.

It's the same for AVs. Google's Alphabet cousin Waymo tests its cars on the anything-goes roads of everyday America. As a result, its algorithms have learned how to deal with challenges like a cyclist carrying a stop sign. Everything that can happen on America's roads, will happen on America's roads. Chinese AI is learning how to drive like a machine, but American AI is learning how to drive like a human -- only better.

American, British, and (especially) Israeli facial recognition AI efforts face similar real-world challenges. They have to work with incomplete, imperfect data, and still get the job done. What's more, they can't throw up too many false positives -- innocent people identified as threats. China's totalitarian regime can punish innocent people with impunity, but in democratic countries, even one false positive could halt a facial recognition roll-out.

It's tempting to think that the best way forward for AI is to make it easy. In fact, the exact opposite is true. Like a muscle pushed to exercise, AI thrives on challenges. Chinese AI may take some giant strides operating in a stripped-down reality, but American AI will win the race in the real world. Reality is complicated, and if there's one thing Americans are good at, it's dealing with complexity.

Salvatore Babones is an adjunct scholar at the Centre for Independent Studies and an associate professor at the University of Sydney.


Gift will allow MIT researchers to use artificial intelligence in a biomedical device – MIT News

Researchers in the MIT Department of Civil and Environmental Engineering (CEE) have received a gift to advance their work on a device designed to position living cells for growing human organs using acoustic waves. The Acoustofluidic Device Design with Deep Learning project is being supported by Natick, Massachusetts-based MathWorks, a leading developer of mathematical computing software.

"One of the fundamental problems in growing cells is how to move and position them without damage," says John R. Williams, a professor in CEE. "The devices we've designed are like acoustic tweezers."

Inspired by the complex and beautiful patterns in the sand made by waves, the researchers' approach is to use sound waves controlled by machine learning to design complex cell patterns. The pressure waves generated by acoustics in a fluid gently move and position the cells without damaging them.
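The underlying physics can be illustrated with a one-dimensional standing wave: cells with positive acoustic contrast are pushed toward pressure nodes, so the node spacing sets the pattern pitch. The sketch below assumes water at roughly room temperature and a 2 MHz drive, both assumptions rather than project parameters.

```python
# Illustrative 1-D standing-wave calculation: cells collect at pressure nodes.
import numpy as np

c = 1480.0          # speed of sound in water, m/s (assumed)
f = 2.0e6           # drive frequency, 2 MHz (typical for acoustofluidics)
wavelength = c / f  # ~0.74 mm

# Pressure p(x) ~ cos(k x): nodes where cos(k x) = 0,
# i.e. x = (2n + 1) * wavelength / 4, spaced half a wavelength apart.
k = 2 * np.pi / wavelength
channel_len = 2e-3  # a 2 mm channel (assumed)
n = np.arange(0, int(channel_len / (wavelength / 2)) + 1)
nodes = (2 * n + 1) * wavelength / 4
nodes = nodes[nodes < channel_len]
print("cells collect near (mm):", np.round(nodes * 1e3, 3))
```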

The engineers developed a computer simulator to create a variety of device designs, which were then fed to an AI platform to understand the relationship between device design and cell positions.
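A minimal sketch of that simulator-plus-learning workflow might look like the following, where a stand-in `simulate` function plays the role of the physics solver and a small neural network learns the design-to-pattern mapping. Everything here (shapes, the placeholder physics, the training setup) is hypothetical, not the team's actual pipeline.

```python
# Sketch of a surrogate model: simulator maps design params to cell
# positions, and a network learns that mapping from simulated examples.
import torch
import torch.nn as nn

def simulate(design: torch.Tensor) -> torch.Tensor:
    """Stand-in for the physics simulator: design params -> cell positions."""
    return torch.sin(design @ torch.ones(4, 8))  # placeholder physics

designs = torch.rand(2000, 4)   # e.g. notional transducer settings
positions = simulate(designs)   # where cells end up for each design

model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(designs), positions)
    loss.backward()
    opt.step()

# Once trained, the surrogate can cheaply rank many candidate designs,
# searching for one whose predicted cell pattern matches a target.
```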

"Our hope is that, in time, this AI platform will create devices that we couldn't have imagined with traditional approaches," says Sam Raymond, who recently completed his doctorate working with Williams on this project. Raymond's thesis, "Combining Numerical Simulation and Machine Learning," explored the application of machine learning in computational engineering.

"MathWorks and MIT have a 30-year-long relationship that centers on advancing innovations in engineering and science," says P.J. Boardman, director of MathWorks. "We are pleased to support Dr. Williams and his team as they use new methodologies in simulation and deep learning to realize significant scientific breakthroughs."

Williams and Raymond collaborated with researchers at the University of Melbourne and the Singapore University of Technology and Design on this project.
