Real or Deepfake? Identifying synthetic media in the age of AI

Is what you’re seeing real?

In this era of “fake news,” fighting misinformation is paramount. One of the biggest current threats is the deepfake: synthetic media created with artificial intelligence. These videos can be almost indistinguishable from genuine footage and can easily spread false information. For now, the best defense against deepfakes is learning how to spot them and prevent their dissemination.

How are deepfakes created?

Deepfakes are typically created with a machine learning technique called “generative adversarial networks” (GANs), in which two neural networks compete: a generator produces synthetic images of a person, while a discriminator tries to distinguish them from real photos of that person taken at many different angles. Through this back-and-forth, the generator learns what the person looks like well enough that the system can produce video of them doing or saying something they’ve never done or said.
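The adversarial setup can be sketched in a few lines. This is a toy illustration of the two competing objectives only, not a real deepfake pipeline: the “networks” here are single linear layers, and all names, shapes, and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator(z, w):
    # Stand-in "network": a single linear layer mapping noise -> sample.
    return z @ w

def discriminator(x, v):
    # Scores a sample in (0, 1): closer to 1 means "looks real".
    return sigmoid(x @ v)

# Toy "real" data and randomly initialised parameters (illustrative sizes).
real = rng.normal(loc=2.0, size=(8, 4))   # 8 real samples
noise = rng.normal(size=(8, 3))           # latent noise vectors
w = rng.normal(size=(3, 4)) * 0.1         # generator weights
v = rng.normal(size=(4, 1)) * 0.1         # discriminator weights

fake = generator(noise, w)

# Discriminator objective: maximise log D(real) + log(1 - D(fake)).
d_loss = -(np.log(discriminator(real, v)).mean()
           + np.log(1.0 - discriminator(fake, v)).mean())

# Generator objective: fool the discriminator, i.e. maximise log D(fake).
g_loss = -np.log(discriminator(fake, v)).mean()

print(f"discriminator loss: {d_loss:.3f}, generator loss: {g_loss:.3f}")
```

In a real GAN, gradient updates alternately lower each of these losses, so the generator’s output improves precisely because the discriminator keeps getting better at catching it.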

What threats do deepfakes pose?

The speed of the internet means misinformation can spread like wildfire. Deepfake videos of political and public figures threaten careers and credibility, falsely influence supporters, and can even erode international relations. For example, imagine a doctored video of the President of the United States confirming alien contact at Area 51. Deepfakes have also fostered a deep-seated distrust of media that extends to otherwise credible sources and threatens the democratization of information.

How can you spot a deepfake?

There are a few tricks to spotting today’s deepfakes. One flaw in many deepfakes is a lack of blinking: photos of an individual with their eyes closed are rarely published, so models trained on public images often fail to reproduce the natural human behavior of blinking. However, since creators of deepfakes evolve constantly alongside those combating them, this isn’t a foolproof tell.
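One way detectors check for the blinking tell is the “eye aspect ratio” (EAR) computed over six eye landmarks, as produced by a facial landmark model such as dlib’s; the EAR drops sharply when the eye closes, so a face whose EAR never dips may not be blinking naturally. The sketch below uses made-up landmark coordinates purely to illustrate the calculation.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # Vertical landmark distances over the horizontal distance
    # (eye aspect ratio, Soukupova & Cech, 2016).
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Open eye: vertical distances are large relative to the eye's width.
open_eye = [(0, 0), (2, 3), (4, 3), (6, 0), (4, -3), (2, -3)]
# Nearly closed eye: the vertical distances collapse.
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]

ear_open = eye_aspect_ratio(*open_eye)
ear_closed = eye_aspect_ratio(*closed_eye)

BLINK_THRESHOLD = 0.2  # an illustrative cut-off; tune per dataset
print(f"open EAR={ear_open:.2f}, closed EAR={ear_closed:.2f}")
print("blink detected" if ear_closed < BLINK_THRESHOLD else "no blink")
```

Run over every frame of a video, an EAR that never crosses the threshold during a clip of normal length is one signal (among several) that the footage may be synthetic.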

A few other tricks that can be used are:

  • Examining the source and its credibility
  • Checking the metadata of the video with a tool such as InVID
  • Using blockchain-powered tools to authenticate it
  • Using reverse image search engines (like Google Images), since many deepfakes are based on footage already available online

Going forward

Though deepfakes pose a significant threat in modern society, researchers are working to stay one step ahead. Artificial intelligence may enable the development of deepfakes, but it is also the tool that has the power to spot and discredit them. In the meantime, it’s important to stay educated on how to identify credible media, and counteract “fake news.”
