Deep Fakes and Shallow Measures: Why Is Social Media Struggling to Spot the Fakes?

Facebook is one of the largest social media platforms in the world, perhaps the biggest. That scale, however, has not kept it clear of controversy. In 2019, a video surfaced on the platform featuring Nancy Pelosi, then Speaker of the US House of Representatives, slurring her words and speaking unusually slowly, which raised doubts about her mental fitness. It later emerged that the video had been manipulated: the culprits had simply slowed down genuine footage of Pelosi speaking. The video was not fabricated outright, but it was certainly manipulated to push a narrative. Facebook eventually labelled it misleading, but the damage was already done. The incident highlights the need to regulate AI content on social media while posing a question: are social media platforms doing enough to distinguish real content from AI-generated fakes?

Today, AI-generated content has become almost indistinguishable from the real thing. This allows miscreants to manufacture misinformation at scale, content with the potential to sow chaos and push false narratives, disturbing the peace of a nation or the world at large.

Rise of AI Fakes

AI technology has improved dramatically in the last few years, enabling the creation of strikingly realistic deep fakes. The volume of deep fakes, synthetic media, and other AI-driven content has grown so fast that some sources estimate that by 2025, 90% of all content on the internet will be AI-generated, a genuine cause for concern.

According to a study by Sensity, an AI monitoring platform, 96% of deep fake content on the internet involves non-consensual pornography, mostly targeting women. The larger concern, however, is not isolated misuse but the systematic exploitation of AI to push an agenda or a narrative in sectors like politics and business.

This was evident in the 2020 US presidential election. It was reported that more than 35% of all politically charged posts were deep fakes propagating misinformation to build a false narrative. Worse, even after users commented that a post was a deep fake, platforms like Facebook, Instagram, and YouTube failed to act. Whether that was deliberate or an oversight is a debate of its own.

Current State of Social Media Platforms

After facing backlash over the Pelosi video, Facebook implemented a few measures. In 2020, the company announced that it would remove deep fake videos that met certain criteria, such as being generated with AI models or machine learning techniques. However, the rule doesn't apply to satire, parody, or misinformation that isn't classified as manipulated media, leaving a gaping loophole that bad actors can still exploit.

X, formerly Twitter, introduced a policy in early 2020 called "Synthetic and Manipulated Media," under which misleading AI-generated content is labelled or removed. Yet in 2022, the company came under fire when a deep fake video of Ukrainian President Volodymyr Zelenskyy apparently surrendering to Russia stayed online for over 16 hours before being taken down, raising questions about the efficacy of its content moderation tools. Since then, the company has leaned on Community Notes to flag posts that are false, AI-generated, or both.

YouTube, for its part, has positioned itself as a key player in the fight against AI fakes but has often lagged in rapid response. Though it has developed AI tools to detect manipulated media, research by MIT Technology Review found that 15% of deep fake videos on the platform went undetected by YouTube's automated systems in 2023.

Are These Measures Enough?

Although almost every social media platform has deployed some technology to flag AI content, experts believe the measures are inadequate. According to a survey by the Brookings Institution, 78% of people believe that tech companies are not doing enough to combat the problem.

For starters, the sheer scale of content posted on platforms like X, Facebook, and YouTube is extraordinary. Their AI detection systems are effective at catching obvious manipulations, but subtler, lightly edited content still slips through the cracks. Another reason is the loopholes built into the rules themselves. The Pelosi video is again a perfect example: although it was classified as manipulated, it wasn't taken down because it was deemed "satire" rather than a source of misinformation.
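To make the first problem concrete, here is a minimal sketch, in Python with OpenCV, of the kind of cheap heuristic that can catch an "obvious" manipulation like the slowed-down Pelosi clip. This is not any platform's actual pipeline; the file name and the 20% cutoff are illustrative assumptions. The idea is that slowing footage down without reshooting it usually means repeating or interpolating frames, so an unusually high share of near-duplicate consecutive frames is a red flag.

```python
# Toy heuristic (not any platform's real detector): flag videos that may have
# been slowed down by measuring how many consecutive frames are near-identical.
import cv2


def duplicate_frame_ratio(path: str, diff_threshold: float = 1.0) -> float:
    """Return the fraction of consecutive frame pairs that are near-identical."""
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    if not ok:
        raise ValueError(f"Could not read video: {path}")
    near_duplicates, pairs = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Mean absolute pixel difference between consecutive frames; values
        # near zero suggest a repeated (or interpolated) frame.
        if cv2.absdiff(frame, prev).mean() < diff_threshold:
            near_duplicates += 1
        pairs += 1
        prev = frame
    cap.release()
    return near_duplicates / max(pairs, 1)


if __name__ == "__main__":
    # "suspect.mp4" and the 0.2 cutoff are hypothetical placeholders.
    ratio = duplicate_frame_ratio("suspect.mp4")
    if ratio > 0.2:
        print(f"{ratio:.0%} near-duplicate frames: possible slowdown edit.")
    else:
        print(f"{ratio:.0%} near-duplicate frames: no slowdown signal.")
```

A genuinely sophisticated deep fake, of course, leaves no such crude artefact, which is exactly why detection at scale is so much harder than this.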

The biggest weakness of these detection systems, however, is that many deep fakes are now too sophisticated for them to tell apart from authentic content. According to Deeptrace, an AI forensics company, 50% of AI-generated fakes on platforms in 2022 were not detected until they had gained substantial traction.

Role of Government

So far, the platforms' detection systems have proven inadequate, and governments have begun to intervene. The European Union's Digital Services Act (DSA), which took effect for large platforms in 2023, imposes stricter obligations on social platforms to combat AI-driven misinformation; platforms that fail to comply face fines of up to 6% of their annual revenue. Similarly, the state of California has introduced a law against deep fakes during elections.
