Archive for the ‘Artificial Intelligence’ Category

Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security – Security Intelligence

Artificial intelligence (AI) isn't new. What is new is the growing ubiquity of AI in large organizations. In fact, by the end of this year, I believe nearly every type of large organization will find AI-based cybersecurity tools indispensable.

Artificial intelligence is many things to many people. One fairly neutral definition is that it's a branch of computer science that focuses on intelligent behavior, such as learning and problem solving. Now that cybersecurity AI is mainstream, it's time to stop treating AI like some kind of magic pixie dust that solves every problem and start understanding its everyday necessity in the new cybersecurity landscape. 2020 is the year large organizations will come to rely on AI for security.

AI isn't magic, but for many specific use cases, the right tool for the job will increasingly involve AI. Here are six reasons why that's the case.

The monetary calculation every organization must make is the cost of security tools, programs and resources on one hand versus the cost of failing to secure vital assets on the other. That calculation is becoming easier as the potential cost of data breaches grows. And these costs aren't stemming from the cleanup operation alone; they may also include damage to the brand, drops in stock prices and loss of productivity.

The average total cost of a data breach is now $3.92 million, according to the 2019 Cost of a Data Breach Report. That's an increase of nearly 12 percent since 2014. The rising costs are also global, as Juniper Research predicts that the business costs of data breaches will exceed $5 trillion per year by 2024, with regulatory fines included.

These rising costs are partly due to the fact that malware is growing more destructive. Ransomware, for example, is moving beyond preventing file access and toward going after critical files and even master boot records.

Fortunately, AI can help security operations centers (SOCs) deal with these rising risks and costs. Indeed, the Cost of a Data Breach Report found that cybersecurity AI can decrease average costs by $230,000.

The percentage of state-sponsored cyberattacks against organizations of all kinds is also growing. In 2019, nearly one-quarter (23 percent) of breaches analyzed by Verizon were identified as having been funded or otherwise supported by nation-states or state-sponsored actors, up from 12 percent in the previous year. This is concerning because state-sponsored attacks tend to be far more capable than garden-variety cybercrime attacks, and detecting and containing these threats often requires AI assistance.

An arms race between adversarial AI and defensive AI is coming. That's just another way of saying that cybercriminals are coming at organizations with AI-based methods sold on the dark web to avoid setting off intrusion alarms and defeat authentication measures. So-called polymorphic malware and metamorphic malware change and adapt to avoid detection, with the latter making more drastic and hard-to-detect changes to its code.

Even social engineering is getting the artificial intelligence treatment. We've already seen deepfake audio attacks where AI-generated voices impersonating three CEOs were used against three different companies. Deepfake audio and video simulations are created using generative adversarial network (GAN) technologies, where two neural networks train each other (one learning to create fake data and the other learning to judge its quality) until the first can create convincing simulations.
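To make that training dynamic concrete, here is a minimal, hedged sketch of a GAN. The framework (PyTorch) and the toy one-dimensional "real" data are assumptions on my part; the article names no implementation. The generator learns to produce samples the discriminator can no longer distinguish from the real distribution:

```python
# Minimal GAN sketch (illustrative; not any attacker's or vendor's code).
# Generator: noise -> fake sample. Discriminator: sample -> real/fake score.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0      # "real" data: samples from N(4, 1)
    fake = generator(torch.randn(64, 8)) # generator's attempt at fakes

    # Discriminator step: score real as 1, fake as 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# Generated samples should now cluster near the real mean of 4.
print(generator(torch.randn(1000, 8)).mean().item())
```

Swap the one-dimensional samples for audio or image frames and this same adversarial loop is what underpins the deepfakes described above.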

GAN technology can, in theory and in practice, be used to generate all kinds of fake data, including fingerprints and other biometric data. Some security experts predict that future iterations of malware will use AI to determine whether they are in a sandbox or not. Sandbox-evading malware would naturally be harder to detect using traditional methods.

Cybercriminals could also use AI to find new targets, especially internet of things (IoT) targets. This may contribute to more attacks against warehouses, factory equipment and office equipment. Accordingly, the best defense against AI-enhanced attacks of all kinds is cybersecurity AI.

Large organizations are suffering from a chronic expertise shortage in the cybersecurity field, and this shortage will continue unless things change. To that end, AI-based tools can enable enterprises to do more with the limited human resources already present in-house.

The Accenture Security Index found that more than 70 percent of organizations worldwide struggle to identify what their high-value assets are. AI can be a powerful tool for identifying these assets for protection.

The quantity of data that has to be sifted through to identify threats is vast and growing. Fortunately, machine learning is well-suited to processing huge data sets and eliminating false positives.
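As a hedged illustration of that point, the sketch below trains a model to triage a large volume of alerts and uses a confidence threshold to suppress false positives. The tooling (scikit-learn) and the synthetic data are assumptions; the article cites no specific implementation:

```python
# Alert triage sketch: escalate only high-confidence detections.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic "alert" features; y=1 marks a genuine threat (~5% of alerts).
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Escalate only alerts the model scores above 0.8; the rest are auto-triaged.
scores = model.predict_proba(X_test)[:, 1]
escalated = scores > 0.8
false_positives = np.sum(escalated & (y_test == 0))
missed = np.sum(~escalated & (y_test == 1))
print(f"escalated={escalated.sum()} "
      f"false_positives={false_positives} missed={missed}")
```

Raising the threshold trades fewer false positives for more missed threats, which is exactly the tuning decision an overloaded SOC needs machine help with.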

In addition, rapid in-house software development may be creating many new vulnerabilities, but AI can find errors in code far more quickly than humans. Embracing rapid application development (RAD) all but requires AI-assisted bug fixing.

These are just two examples of how growing complexity can inform and demand the adoption of AI-based tools in an enterprise.

There has always been tension between the need for better security and the need for higher productivity. The most usable systems are often the least secure, and the most secure systems are often unusable. Striking the right balance between the two is vital, but achieving this balance is becoming more difficult as attack methods grow more aggressive.

AI will likely come into your organization through the evolution of basic security practices. For instance, consider the standard security practice of authenticating employee and customer identities. As cybercriminals get better at spoofing users, stealing passwords and so on, organizations will be more incentivized to embrace advanced authentication technologies, such as AI-based facial recognition, gait recognition, voice recognition, keystroke dynamics and other biometrics.

The 2019 Verizon Data Breach Investigations Report found that 81 percent of hacking-related breaches involved weak or stolen passwords. Organizations can counter these attacks with sophisticated AI-based tools that enhance authentication. For example, AI tools that continuously estimate risk levels whenever employees or customers access resources from the organization could prompt identification systems to require two-factor authentication when the AI component detects suspicious or risky behavior.
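A minimal sketch of what such continuous risk estimation might look like follows. The signals, weights, and threshold here are purely illustrative assumptions, not any vendor's scoring model; a production system would learn the score from historical access data:

```python
# Risk-adaptive authentication sketch: step up to 2FA when risk is high.
from dataclasses import dataclass

@dataclass
class AccessAttempt:
    new_device: bool
    unusual_hour: bool
    geo_distance_km: float   # distance from the user's usual location
    failed_logins_24h: int

def risk_score(a: AccessAttempt) -> float:
    """Combine signals into a 0..1 risk estimate (hand-tuned toy weights)."""
    score = 0.0
    score += 0.35 if a.new_device else 0.0
    score += 0.20 if a.unusual_hour else 0.0
    score += min(a.geo_distance_km / 5000, 1.0) * 0.25
    score += min(a.failed_logins_24h / 5, 1.0) * 0.20
    return score

def requires_second_factor(a: AccessAttempt, threshold: float = 0.5) -> bool:
    return risk_score(a) >= threshold

# A login from a new device far from home triggers step-up authentication.
attempt = AccessAttempt(new_device=True, unusual_hour=False,
                        geo_distance_km=4200, failed_logins_24h=1)
print(requires_second_factor(attempt))  # True
```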

A big part of the solution going forward is leveraging both AI and biometrics to enable greater security without overburdening employees and customers.

One of the biggest reasons why employing AI will be so critical this year is that doing so will likely be unavoidable. AI is being built into security tools and services of all kinds, so it's time to change our thinking around AI's role in enterprise security. Where it was once an exotic option, it is quickly becoming a mainstream necessity. How will you use AI to protect your organization?

Read the original:
Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security - Security Intelligence

China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) – The National Interest Online

Artificial intelligence (AI) is increasingly embedded into every aspect of life, and China is pouring billions into its bid to become an AI superpower. China's three-step plan is to pull even with the United States in 2020, start making major breakthroughs of its own by mid-decade, and become the world's AI leader in 2030.

There's no doubt that Chinese companies are making big gains. Chinese government spending on AI may not match some of the most-hyped estimates, but China is providing big state subsidies to a select group of AI national champions, like Baidu in autonomous vehicles (AVs), Tencent in medical imaging, Alibaba in smart cities, and Huawei in chips and software.

State support isn't all about money. It's also about clearing the road to success -- sometimes literally. Baidu ("China's Google") is based in Beijing, where the local government has kindly closed more than 300 miles of city roads to make way for AV tests. Nearby Shandong province closed a 16-mile mountain road so that Huawei could test its AI chips for AVs in a country setting.

In other Chinese AV test cities, the roads remain open but are thoroughly sanitized. Southern China's tech capital, Shenzhen, is the home of AI leader Tencent, which is testing its own AVs on Shenzhen's public roads. Notably absent from Shenzhen's major roads are motorcycles, scooters, bicycles, or even pedestrians. Two-wheeled vehicles are prohibited; pedestrians are comprehensively corralled by sidewalk barriers and deterred from jaywalking by stiff penalties backed up by facial recognition technology.

And what better way to jump-start AI for facial recognition than by having a national biometric ID card database where every single person's face is rendered in machine-friendly standardized photos?

Making AI easy has certainly helped China get its AI strategy off the ground. But like a student who is spoon-fed the answers on a test, a machine that learns from a simplified environment won't necessarily be able to cope in the real world.

Machine learning (ML) uses vast quantities of experiential data to train algorithms to make decisions that mimic human intelligence. Type something like "ML 4 AI" into Google, and it will know exactly what you mean. That's because Google learns English in the real world, not from memorizing a dictionary.

It's the same for AVs. Google's Alphabet cousin Waymo tests its cars on the anything-goes roads of everyday America. As a result, its algorithms have learned how to deal with challenges like a cyclist carrying a stop sign. Everything that can happen on America's roads will happen on America's roads. Chinese AI is learning how to drive like a machine, but American AI is learning how to drive like a human -- only better.

American, British, and (especially) Israeli facial recognition AI efforts face similar real-world challenges. They have to work with incomplete, imperfect data, and still get the job done. What's more, they can't throw up too many false positives -- innocent people identified as threats. China's totalitarian regime can punish innocent people with impunity, but in democratic countries, even one false positive could halt a facial recognition rollout.

It's tempting to think that the best way forward for AI is to make it easy. In fact, the exact opposite is true. Like a muscle pushed to exercise, AI thrives on challenges. Chinese AI may take some giant strides operating in a stripped-down reality, but American AI will win the race in the real world. Reality is complicated, and if there's one thing Americans are good at, it's dealing with complexity.

Salvatore Babones is an adjunct scholar at the Centre for Independent Studies and an associate professor at the University of Sydney.

Visit link:
China Will Lose the Artificial Intelligence (AI) Race (And Why America Will Win) - The National Interest Online

Gift will allow MIT researchers to use artificial intelligence in a biomedical device – MIT News

Researchers in the MIT Department of Civil and Environmental Engineering (CEE) have received a gift to advance their work on a device designed to position living cells for growing human organs using acoustic waves. The project, "Acoustofluidic Device Design with Deep Learning," is being supported by Natick, Massachusetts-based MathWorks, a leading developer of mathematical computing software.

"One of the fundamental problems in growing cells is how to move and position them without damage," says John R. Williams, a professor in CEE. "The devices we've designed are like acoustic tweezers."

Inspired by the complex and beautiful patterns in the sand made by waves, the researchers' approach is to use sound waves controlled by machine learning to design complex cell patterns. The pressure waves generated by acoustics in a fluid gently move and position the cells without damaging them.

The engineers developed a computer simulator to create a variety of device designs, which were then fed to an AI platform to understand the relationship between device design and cell positions.
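The article does not publish the team's code, but the general pattern, a simulator generating (design, outcome) pairs and a network learning the mapping between them, can be sketched as below. Everything here is a toy assumption: the "simulator" stands in for the team's acoustofluidic solver, and the two design parameters are hypothetical:

```python
# Surrogate-model sketch: learn the simulator's design -> positions mapping.
import torch
import torch.nn as nn

def simulate_device(params: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for the physics simulator: params -> cell positions."""
    freq, amplitude = params[:, 0:1], params[:, 1:2]
    return torch.cat([torch.sin(freq) * amplitude,
                      torch.cos(freq) * amplitude], dim=1)

# Generate a training set of (design, outcome) pairs from the simulator.
designs = torch.rand(4096, 2) * 3.0
positions = simulate_device(designs)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for epoch in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(designs), positions)
    loss.backward()
    opt.step()

print(f"surrogate fit error: {loss.item():.5f}")
# Once trained, the surrogate can be searched far faster than the simulator
# to propose designs that place cells in a desired pattern.
```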

"Our hope is that, in time, this AI platform will create devices that we couldn't have imagined with traditional approaches," says Sam Raymond, who recently completed his doctorate working with Williams on this project. Raymond's thesis, "Combining Numerical Simulation and Machine Learning," explored the application of machine learning in computational engineering.

"MathWorks and MIT have a 30-year-long relationship that centers on advancing innovations in engineering and science," says P.J. Boardman, director of MathWorks. "We are pleased to support Dr. Williams and his team as they use new methodologies in simulation and deep learning to realize significant scientific breakthroughs."

Williams and Raymond collaborated with researchers at the University of Melbourne and the Singapore University of Technology and Design on this project.

See the original post:
Gift will allow MIT researchers to use artificial intelligence in a biomedical device - MIT News

Automation and AI sound similar, but may have vastly different impacts on the future of work – Brookings Institution

Last November, Brookings published a report on artificial intelligence's impact on the workplace that immediately raised eyebrows. Many readers, journalists, and even experts were perplexed by the report's primary finding: that, for the most part, it is better-paid, better-educated white-collar workers who are most exposed to AI's potential economic disruption.

This conclusion, by authors Mark Muro, Robert Maxim, and Jacob Whiton, seemed to fly in the face of the popular understanding of technology's future effects on workers. For years, we've been hearing about how these advancements will force mainly blue-collar, lower-income workers out of jobs, as robotics and technology slowly consume those industries.

In an article about the November report, The Mercury News outlined this discrepancy: "The study released Wednesday by the Brookings Institution seems to contradict findings from previous studies, including Brookings' own, that showed lower-skilled workers will be most affected by robots and automation, which can involve AI."

One of the previous studies that article refers to is likely Brookings's January 2019 report (also written by Muro, Maxim, and Whiton) titled "Automation and Artificial Intelligence: How machines are affecting people and places." And indeed, in apparent contradiction of the AI report, the earlier study states, "The impacts of automation in the coming decades will be variable across occupations, and will be visible especially among lower-wage, lower-education roles in occupations characterized by rote work."

So how do we square these two seemingly disparate conclusions? The key is in distinguishing artificial intelligence from automation, two similar-sounding concepts that nonetheless will have very different impacts on the future of work here in the U.S. and across the globe. Highlighting these distinctions is critical to understanding what types of workers are most vulnerable, and what we can do to help them.

The difference in how we define automation versus AI is important to how we judge their potential effects on the workplace.

Automation is a broad category describing an entire class of technologies rather than just one, hence much of the confusion surrounding its relationship to AI. Artificial intelligence can be a form of automation, as can robotics and softwarethree fields that the automation report focused on. Examples of the latter two forms could be machines that scurry across factory floors delivering parts and packages, or programs that automate administrative duties like accounting or payroll.

Automation substitutes for human labor in tasks both physical and cognitive, especially those that are predictable and routine. Think machine operators, food preparers, clerks, or delivery drivers. Activities that seem relatively secure, by contrast, include "the management and development of people; applying expertise to decisionmaking, planning and creative tasks; interfacing with people; and the performance of physical activities and operating machinery in unpredictable physical environments," the automation report specified.

In the more recent AI-specific report, the authors focused on the subset of AI known as machine learning, or using algorithms to find patterns in large quantities of data. Here, the technology's relevance to the workplace is less about tasks and more about intelligence. Instead of the routine, AI theoretically substitutes for more interpersonal duties such as human planning, problem-solving, or perception.

And what are some of the topline occupations exposed to AI's effects, according to Brookings research? Market research analysts and marketing specialists (planning and creative tasks, interfacing with people), sales managers (the management and development of people), and personal financial advisors (applying expertise to decisionmaking). The parallels between what automation likely won't affect and what AI likely will affect line up almost perfectly.

Machine learning is especially useful for prediction-based roles. "Prediction under conditions of uncertainty is a widespread and challenging aspect of many information-sector jobs in health, business, management, marketing, and education," wrote Muro, Maxim, and Whiton in a recent follow-up to their AI report. These predictive, mostly white-collar occupations seem especially poised for disruption by AI.

Some news outlets grasped this difference between the AI and the automation reports. In The New York Times's Bits newsletter, Jamie Condliffe wrote: "Previously, similar studies lumped together robotics and A.I. But when they are picked apart, it makes sense that A.I., which is about planning, perceiving and so on, would hit white-collar roles."

A very clear way to distinguish the impacts of the two concepts is to observe where Brookings Metro research anticipates those impacts will be greatest. The metro areas where automation's potential is highest include blue-collar or service-sector-centric places such as Toledo, Ohio; Greensboro, N.C.; Lakeland-Winter Haven, Fla.; and Las Vegas.

The top AI-exposed metro area, by contrast, is the tech hub of San Jose, Calif., followed by other large cities such as Seattle and Salt Lake City. Places less exposed to AI, the report says, range from bigger, service-oriented metro areas such as El Paso, Texas; Las Vegas; and Daytona Beach, Fla., to smaller leisure communities including Hilton Head and Myrtle Beach, S.C., and Ocean City, N.J.

AI will also likely have different impacts on different demographics than other forms of automation. In their report on the broader automation field, Muro, Maxim, and Whiton found that 47% of Latino or Hispanic workers are in jobs that could, in part or wholly, be automated. American Indians had the next highest automation potential, at 45%, followed by Black workers (44%), white workers (40%), and Asian Americans (39%). Reverse that order, and you'll come very close to the authors' conclusion on AI's impact on worker demographics: Asian Americans have the highest potential exposure to AI disruption, followed by white, Latino or Hispanic, and Black workers.

For all of these differences, one important similarity does exist for both AI's and broader automation's impact on the workforce: uncertainty. Artificial intelligence's real-world potential is clouded in ambiguity, and indeed, the AI report used the text of AI-based patents to attempt to foresee its usage in the workplace. The authors hypothesize that, far from taking over human work, AI may end up complementing labor in fields like medicine or law, possibly even creating new work and jobs as demand increases.

As new forms of automation emerge, they too could end up having any number of potential long-term impacts, including, paradoxically, increasing demand and creating jobs. "Machine substitution for labor improves productivity and quality and reduces the cost of goods and services," the authors write. "This may, though not always, and not forever, have the impact of increasing employment in these same sectors."

As policymakers draw up potential solutions to protect workers from technological disruption, it's important to keep in mind the differences between advancements like AI and automation at large, and who, exactly, they're poised to affect.

Link:
Automation and AI sound similar, but may have vastly different impacts on the future of work - Brookings Institution

Artificial intelligence requires trusted data, and a healthy DataOps ecosystem – ZDNet

Lately, we've seen many "x-Ops" management practices appear on the scene, all derivatives of DevOps, which seeks to coordinate the output of developers and operations teams into a smooth, consistent and rapid flow of software releases. Another emerging practice, DataOps, seeks to achieve a similarly smooth, consistent and rapid flow of data through enterprises. Like many things these days, DataOps is spilling over from the large Internet companies, which process petabytes and exabytes of information on a daily basis.

Such an uninhibited data flow is increasingly vital to enterprises seeking to become more data-driven and scale artificial intelligence and machine learning to the point where these technologies can have strategic impact.

Awareness of DataOps is high. A recent survey of 300 companies by 451 Research finds 72 percent have active DataOps efforts underway, and the remaining 28 percent are planning to do so over the coming year. A majority, 86 percent, are increasing their spend on DataOps projects over the next 12 months. Most of this spending will go to analytics, self-service data access, data virtualization, and data preparation efforts.

In the report, 451 Research analyst Matt Aslett defines DataOps as "The alignment of people, processes and technology to enable more agile and automated approaches to data management."

The catch is that "most enterprises are unprepared, often because of behavioral norms -- like territorial data hoarding -- and because they lag in their technical capabilities -- often stuck with cumbersome extract, transform, and load (ETL) and master data management (MDM) systems," according to Andy Palmer and a team of co-authors in their latest report, Getting DataOps Right, published by O'Reilly. Across most enterprises, data is siloed, disconnected, and generally inaccessible. There is also an abundance of data that is completely undiscovered, of which decision-makers are not even aware.

Here are some of Palmer's recommendations for building and shaping a well-functioning DataOps ecosystem:

Keep it open: The ecosystem in DataOps "should resemble DevOps ecosystems in which there are many best-of-breed free and open source software and proprietary tools that are expected to interoperate via APIs." This also includes carefully evaluating and selecting from the raft of tools that have been developed by the large internet companies.

Automate it all: The collection, ingestion, organizing, storage and surfacing of massive amounts of data at as close to a real-time pace as possible has become almost impossible for humans to manage. Let the machines do it, Palmer urges. Areas ripe for automation include "operations, repeatability, automated testing, and release of data." Look to the ways DevOps is facilitating the automation of the software build, test, and release process, he points out.
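As a hedged example of the "automated testing ... and release of data" idea, the sketch below gates a batch of records behind simple pandas checks. The checks and column names are illustrative assumptions; real pipelines often use a dedicated validation framework:

```python
# Data release-gate sketch: block a batch that fails quality checks.
import pandas as pd

def validate(batch: pd.DataFrame) -> list[str]:
    """Run release-gate checks on an incoming batch; return failures."""
    failures = []
    if batch["customer_id"].isna().any():
        failures.append("null customer_id")
    if batch["customer_id"].duplicated().any():
        failures.append("duplicate customer_id")
    if not batch["amount"].between(0, 1_000_000).all():
        failures.append("amount out of range")
    return failures

batch = pd.DataFrame({"customer_id": [1, 2, 2], "amount": [10.0, 99.0, -5.0]})
problems = validate(batch)
print("release blocked:" if problems else "release ok", problems)
```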

Process data in both batch and streaming modes: While DataOps is about real-time delivery of data, there's still a place -- and reason -- for batch mode as well. "The success of Kafka and similar design patterns has validated that a healthy next-generation data ecosystem includes the ability to simultaneously process data from source to consumption in both batch and streaming modes," Palmer points out.
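One way to read that recommendation: write the transformation once and run it in either mode. In this sketch a Python generator stands in for a Kafka topic so the example runs without a broker, which is an assumption on my part; a real deployment would use a Kafka consumer or a stream processor:

```python
# Batch-plus-streaming sketch: one transformation, two execution modes.
from typing import Iterable, Iterator

def transform(record: dict) -> dict:
    """The single source-to-consumption transformation both modes share."""
    return {"user": record["user"].lower(), "spend": round(record["spend"], 2)}

def run_batch(records: list[dict]) -> list[dict]:
    return [transform(r) for r in records]          # all at once, e.g. nightly

def run_stream(source: Iterable[dict]) -> Iterator[dict]:
    for record in source:                           # one record at a time
        yield transform(record)

history = [{"user": "ALICE", "spend": 12.504}, {"user": "Bob", "spend": 3.1}]
print(run_batch(history))                           # batch mode
for out in run_stream(iter(history)):               # streaming mode
    print(out)
```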

Track data lineage: Trust in the data is the single most important element in a data-driven enterprise, which may simply cease to function without it. That's why well-thought-out data governance and a metadata (data about data) layer are important. "A focus on data lineage and processing tracking across the data ecosystem results in reproducibility going up and confidence in data increasing," says Palmer.
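A minimal sketch of what lineage tracking can look like at the application level is shown below. The record schema is an illustrative assumption rather than a standard, and real systems typically persist these entries in a metadata store instead of an in-memory list:

```python
# Lineage-tracking sketch: each step records what it read, wrote, and ran.
import hashlib
import json
import time

LINEAGE_LOG: list[dict] = []

def record_lineage(step: str, inputs: list[str], output: str,
                   code_version: str) -> None:
    entry = {
        "step": step,
        "inputs": inputs,
        "output": output,
        "code_version": code_version,
        "timestamp": time.time(),
    }
    # Content-derived ID so identical runs are reproducible and comparable.
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode())
    entry["id"] = digest.hexdigest()[:12]
    LINEAGE_LOG.append(entry)

record_lineage("clean_orders", ["raw/orders.csv"],
               "staging/orders.parquet", "v1.4.2")
record_lineage("daily_revenue", ["staging/orders.parquet"],
               "marts/revenue.parquet", "v1.4.2")

# Trace where marts/revenue.parquet came from:
for e in LINEAGE_LOG:
    if e["output"] == "marts/revenue.parquet":
        print(e["step"], "<-", e["inputs"])
```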

Have layered interfaces: Everyone touches data in different ways. "Some power users need to access data in its raw form, whereas others just want to get responses to inquiries that are well formulated," Palmer says. That's why a layered set of services and design patterns is required for the different personas of users. Palmer says there are three approaches to meeting these multilayered requirements.

Business leaders are increasingly leaning on their technology leaders and teams to transform their organizations into data-driven digital entities that can react to events and opportunities almost instantaneously. The best way to accomplish this -- especially with the meager budgets and limited support that get thrown in with this mandate -- is to align the way data flows from source to storage.

Continue reading here:
Artificial intelligence requires trusted data, and a healthy DataOps ecosystem - ZDNet