As adoption of artificial intelligence accelerates, can the technology be trusted?

The list of concerns around the use of artificial intelligence seems to grow with every passing week.

Issues around bias, the use of AI for deepfake videos and audio, misinformation, governmental surveillance, security and the failure of the technology to properly identify the simplest of objects have created a cacophony of concern about the technology's long-term future.

One software company recently released a study showing that only 25% of consumers would trust a decision made by systems using AI, and another report commissioned by KPMG International found that a mere 35% of information technology leaders had a high level of trust in their own organization's analytics.

It's a bumpy journey for AI as the technology world embarks on a new decade, and key practitioners in the space are well aware that trust will ultimately determine how widely and quickly the technology is adopted throughout the world.

"We want to build an ecosystem of trust," Francesca Rossi, AI ethics global leader at IBM Corp., said at the digital EmTech Digital conference on Monday. "We want to augment human intelligence, not replace it."

The EmTech Digital event, restructured into a three-day digital conference by MIT Technology Review after plans to hold it this month in San Francisco were canceled, was largely focused on trust in AI and how the tech industry was seeking to manage a variety of issues around it.

One of those issues is the use of deepfake AI tools to create genuine-appearing videos or audio to deceive viewers. The use of deepfake videos has been rising rapidly, according to recent statistics provided by Deeptrace, which found an 84% rise in false video content compared with a year earlier.

"Today more than ever we cannot believe what we see, and we also cannot believe what we hear," Delip Rao, vice president of research at AI Foundation, said during an EmTech presentation on Tuesday. "This is creating a credibility crisis."

To help stem the flow of deepfakes into the content pool, the AI Foundation has launched a platform, Reality Defender, that uses deepfake detection methods provided by various partners, including Google LLC and Facebook Inc. The nonprofit group recently extended its detection technology to cover 2020 election campaigns in the U.S. as well.

"As a generation, we have consumed more media than any generation before us, and we're hardly educated about how we consume it," Rao said. "We cannot afford to be complacent. The technology behind deepfakes is here to stay."

AI has also come under fire for its use in facial recognition systems, powered by a significant rise in the installation of surveillance cameras globally. A recent report by IHS Markit showed that China leads the world with 349 million surveillance cameras. The U.S. has 70 million cameras, yet on a per capita basis it is close to China, with roughly 4.6 people per installed camera.
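For a rough sense of that per capita comparison, a back-of-the-envelope calculation looks like the sketch below. The camera counts come from the IHS Markit figures cited above; the population figures (roughly 328 million for the U.S. and 1.4 billion for China) are assumptions, not numbers from the report.

```python
# Back-of-the-envelope check of the per capita figures cited above.
# Camera counts are the IHS Markit numbers from the article; the
# population figures are rough assumptions, not from the report.
cameras = {"China": 349_000_000, "U.S.": 70_000_000}
population = {"China": 1_400_000_000, "U.S.": 328_000_000}

for country in cameras:
    ratio = population[country] / cameras[country]
    print(f"{country}: about {ratio:.1f} people per installed camera")

# China comes out near 4 people per camera and the U.S. near 4.7,
# in line with the roughly 4.6 figure cited for the U.S.
```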

The rise of AI-equipped cameras and facial recognition software has led to the development of a cottage industry on both sides of the equation. One Chinese AI company, SenseTime, claims to have developed an algorithm that can identify a person whose facial features are obscured by a surgical mask and use thermal imaging to determine body temperature.

Meanwhile, a University of Maryland professor has developed a line of clothing, including hoodies and t-shirts, emblazoned with patterns specially designed to defeat surveillance camera recognition systems. All of that underscores the growing societal challenges faced by practitioners in the AI field.

The other complex problem affecting the AI industry involves cybersecurity. As adoption grows and the tools improve, the use of AI is not limited to white hat defenders. Black hat hackers have access to AI as well, and they are capable of using it.

Cybersecurity vendor McAfee Inc. has seen evidence that hackers may be employing AI to identify victims likely to be vulnerable to attack, according to Steve Grobman, senior vice president and chief technology officer at McAfee. Malicious actors can also use the technology to generate customized content as a way to sharpen spear phishing lures.

"AI is a powerful tool for both the defenders and the attackers," Grobman said. "AI creates a new efficiency frontier for the attacker. We're seeing a constant evolution of attack techniques."

The trust issues surrounding AI represent an important focus right now because the AI train has left the station and a lot of passengers are on board for the ride. AI has become a key element in improving operational efficiency for many businesses and a number of speakers at the event outlined how enterprises are employing the technology.

Frito-Lay Inc. uses AI to analyze weather patterns and school schedules to determine when its corn chip inventory should be increased on store shelves. Global healthcare provider Novartis AG is using AI to support clinical trials and determine injection schedules for people with macular degeneration.

And when engineers at shipping giant DHL International saw how AI could be used to detect cats in YouTube videos, they wondered if the same approach could be taken to inspect shipping pallets for stackability in cargo planes.

"These are small decisions we're doing for load efficiency on over 500 flights per night," said Ben Gesing, DHL's director and head of trend research. "At DHL, no new technology has been as pervasive or as fast-growing as AI."
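DHL hasn't described its model in detail, but the general pattern Gesing alludes to — reusing an off-the-shelf image classifier, of the kind that detects cats in videos, for a narrow visual task such as judging pallet stackability — can be sketched roughly as follows. The pallet data loader and the two-class labels here are hypothetical placeholders, not anything DHL has published.

```python
# Rough transfer-learning sketch: start from a pretrained image classifier
# and retrain only a small head for a binary "stackable / not stackable" label.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # keep the generic visual features
model.fc = nn.Linear(model.fc.in_features, 2)      # new head for the two pallet classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Training loop sketch; `pallet_loader` is a hypothetical DataLoader that
# would yield (image_batch, label_batch) pairs of labeled pallet photos.
# for images, labels in pallet_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```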

Perhaps even more intriguing was the recent news that Salesforce Inc. has employed AI to undertake major research on protein generation. Earlier this month, Salesforce published a study which detailed a new AI system called ProGen that can generate proteins in a controllable fashion.

In a presentation Tuesday, Salesforce Chief Scientist Richard Socher described how the company views AI as existing in two states. One is the science fiction state, in which dreams of self-driving cars and big medical breakthroughs reside. The other is the electricity state, which uses technology such as natural language understanding to power chatbots.

"AI is in this dual state right now," Socher said. "At Salesforce, we're trying to tackle both of those states. I truly believe that AI will impact every single industry out there."

If Socher is right, then every industry is going to have to find a way to engender trust in how it uses the technology. One EmTech speaker presented results from a recent Deloitte study which found that only one in five CEOs and executives polled had an ethical AI framework in place.

"There are challenges ahead of us," said Xiaomeng Lu, senior policy manager at Access Partnership. "We can't run away. We have to tackle them head-on."
