Deepfakes – The Danger Of Artificial Intelligence That We Will Learn To Manage Better – Forbes
Deepfakes are scarily simple to create. But will this technology create a reality of alternative facts where truth goes to die? No. Deepfakes are a technology, and more widespread abuse is expected with more widespread availability. Over time, we will adopt better transparency, better detection, and, most importantly, each of us will become more aware and thus better equipped to fight the abuse of deepfakes.
Deepfakes are here to stay.
What are deepfakes?
Deepfakes are a way to manipulate images. Image manipulation is not new. Stalin removed Nikolai Yezhov from still images. Today, the underlying technology that creates deepfakes is Artificial Intelligence (AI). AI-supported deepfake technology offers improved capabilities - but it also increases the scale for manipulation and bad actor intervention. Starting with obvious examples: Check out this TikTok compilation about Tom Cruise. FAKE. Or this video of Barack Obama calling Trump a total and complete dipshit. Also, FAKE. The list goes on. There was once a meme in which Nicolas Cage became the fake leading actor of a series of different movies (video compilation). Today, anyone can create a deepfake. No programming skills are needed. If you're interested in learning more about the technology behind deepfakes in detail, take a look at this Forbes article.
Deepfakes are widely used today - and they're here to stay. It's AI, and thus the same technology that helps read human emotions (for example, for people with Autism) or helps identify obstacles on the road (for example, a Duck Chase). Deepfakes are also used in Hollywood: Look no further than the Star Wars franchise, the de-aging in Marvel movies, or the late Paul Walker in Fast & Furious 7. Deepfakes have the potential to replace high-end CGI, which would save millions of dollars and countless hours of processing time in filmmaking.
But deepfakes can be - and are - abused. Unfortunately, this type of abuse of technology is nothing new. In my 2014 book Ask Measure Learn, I wrote: [...] when email suddenly made it possible to communicate with large numbers of strangers for free, it immediately led to the problem of unsolicited commercial email, better known as spam. When computers could communicate openly through networks, it spawned viruses and Trojan horses. And now that we live in a society of social media channels and information on demand, this world has become flooded with phoney or even fraudulent information. Spam has grown into social spam. [...]
During the 2013 Strata Conference (video), I presented state-of-the-art, AI-driven Bot conversations and the ways to detect them. Our technology has evolved over the last decade, and in addition to fake emails, fake Amazon reviews, and fake bot conversations, we are now dealing with deepfakes. This is pretty annoying. But, on the bright side, the way to deal with this threat remains the same: detection, transparency, regulation, and education.
(1) Detection
A growing number of researchers are studying fake news or building technologies to identify deepfakes. Big Tech firms like Google, Microsoft, and Meta have openly condemned deepfake technology and are creating tools to detect it. Microsoft is creating new anti-deepfake technology to fight misinformation (Microsoft Video Authenticator). YouTube, owned by Google, reiterated in February 2020 that it will not host deepfake videos related to the U.S. election, voting procedures, or the 2020 U.S. census.
With millions of dollars in prizes, Meta's deepfake detection challenge has encouraged researchers and developers to create algorithms to fight deepfakes. The challenge's launch came after the release of a large dataset of visual deepfakes produced in collaboration with Jigsaw, Google's internal technology incubator. In addition, this large dataset of deepfakes was incorporated into a benchmark made freely available to researchers for developing synthetic video detection systems.
In a rapidly evolving, attention-driven world, detecting deepfakes is more important than ever. However, we are nowhere close to reliably distinguishing real content from fake. Some of the best tools out there are Counter.social, deeptrace, Reality Defender, and Sensity.ai (which claims to be the world's first deepfake detection tool). Still, the best deepfake detector is only 65% accurate. Even Azure Cognitive Services was fooled over 78% of the time.
The future of these detectors will likely mirror bot detectors, spam detectors, or any other cyber threat detectors. Each evolution will spark a counter-reaction. It will be an arms race, or put differently: future deepfake detection will be about as good as your email spam detector - and we all know that we still get spam from Nigerian princes in our inboxes.
(2) Transparency
Social networks have allowed us to connect with everyone. This new connectedness means every fringe opinion will find its audience. Lies and fake news have become a business model. Chaos and mistrust were the consequence. To instill trust, social media companies created verified accounts (for example, on Twitter). In a world of deepfake abundance, verified accounts alone will not be sufficient. For instance, even the verified RealDonaldTrump account tweeted a deepfake of Pelosi, which was widely shared and re-shared. In order to eliminate such second-order effects of social networks, a different level of transparency is required.
Perhaps we can establish trust if we know the source of a given video. One proposition to combat deepfakes points to the usage of NFTs (Non-Fungible Tokens) as a possible solution: If everyone registered their videos as NFTs, it would be easy to find and compare different sources of the same moment.
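The mechanism behind such provenance schemes can be sketched in a few lines. This is a simplified, hypothetical illustration (not any specific NFT platform's API): a publisher records a cryptographic fingerprint of the original clip in a trusted registry, and anyone can later check whether a copy they received matches it - any manipulation, however small, changes the fingerprint.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest serving as a content fingerprint."""
    return hashlib.sha256(data).hexdigest()


def matches_registry(data: bytes, registered: set[str]) -> bool:
    """Check whether a clip's fingerprint appears in a trusted registry."""
    return fingerprint(data) in registered


# The publisher registers the original clip's fingerprint
# (on a blockchain, this is roughly what minting an NFT records).
original = b"...original video bytes..."
registry = {fingerprint(original)}

# A viewer verifies a received copy against the registry.
tampered = b"...manipulated video bytes..."
print(matches_registry(original, registry))  # True
print(matches_registry(tampered, registry))  # False
```

A real system would also need tamper-resistant storage for the registry and robust (perceptual) fingerprints that survive legitimate re-encoding - exact hashes break on any recompression - but the core idea of comparing a clip against a registered source is this simple.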
(3) Regulation
Misuse of deepfakes without clear identification should be outlawed - and regulators around the globe have started to take them seriously. Already in 2019, there were about a dozen federal- and state-level bills to regulate deepfakes. These laws range from criminalizing the use of a woman's likeness in a pornographic film without her consent (Virginia Law), to the appropriate use of a deceased person's data (New York Law), to dealing with cheap fakes (low-tech digital frauds not requiring AI). When we look at how personal data is being used, and realize the value of a digital identity, it becomes clear that privacy is a key pillar of a digitally safe environment. To read more on this, see my take on privacy and data.
Spam and fake news are nothing new.
(4) Education
Coming back to the Nigerian Prince: These scams typically start with an email. The fraudsters offer a share of a vast investment opportunity but, in turn, need some money from you. There are many different versions of this internet scam, and it keeps coming back. But today, many won't fall for it anymore. Why? Because you have all heard about it (if not, you just did here). Education is the best protection. And just as with the Nigerian Prince, we need the same degree of education for deepfakes.
With that purpose in mind, Channel 4 in the UK created a deepfake of the Queen's speech last Christmas. It was hilarious and brought deepfakes into the public discussion. For my course at Cornell, where I teach MBA candidates to design products that use data and AI, we start each term with a one-minute video summary that Prithvi and I created to welcome the students to the course. Yes, it's fun. But it's also a reminder to all of us that great technology can be abused.
This article was written with Prithvi Sriram, who has not only been a student of the course but also helped create toolkits that future students can use to get their hands on Deep Learning. He currently works at Infinitus Systems, a late-stage Series B healthcare startup, where he was the founding member of the analytics team.
Please see this Forbes article if you are looking for an overview of different Deep Learning tool sets for deepfakes.