What is deepfake technology and how does it work?

We’ve all heard about deepfakes.

They’ve made Elon Musk sing a Soviet space song and turned Barack Obama into Black Panther, among other parodies and memes.

They’ve also been used to commit crimes and been the subject of several controversies when their creators tried to make them pass as legitimate.

For example, during the 2020 US presidential campaign, there were several deepfake videos of Joe Biden falling asleep, getting lost, and misspeaking. These videos were intended to bolster the rumor that he was in cognitive decline because of his age.

Deepfakes have also been used to create pornography of female celebrities (a form of image-based sexual abuse) and to spread misinformation through “sock puppet” accounts and false witnesses.

As a result, companies like Facebook and Adobe, and even individuals, are trying to develop more effective techniques to detect deepfakes.

What is a deepfake, and how does it work?

The term “deepfake” was created from the words “deep learning” and “fake”.

Deep learning is a type of machine learning based on artificial neural networks, which are inspired by the human brain. The method is used to teach machines how to learn from large amounts of data via multi-layered structures of algorithms.
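
To give a concrete, if toy-sized, sense of what a multi-layered structure of algorithms looks like, here is a minimal sketch in Python (NumPy only; the task, layer sizes, and learning rate are arbitrary choices for illustration, not part of any deepfake tool) of a two-layer network learning the XOR function from four examples:

```python
import numpy as np

# Toy training data: the XOR function (inputs -> labels).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))   # first layer weights
W2 = rng.normal(size=(8, 1))   # second layer weights

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass through two layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(3))  # approaches [[0], [1], [1], [0]]
```

Deepfake systems rest on the same principle, only with much deeper networks, millions of parameters, and images rather than four toy examples.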

Deepfakes usually employ a deep-learning network called a variational autoencoder, a type of artificial neural network commonly used for facial recognition.

Autoencoders can encode and compress input data, reducing it to a lower-dimensional latent space, and then reconstruct it to deliver output data based on that latent representation.

In the case of deepfakes, autoencoders are used to detect facial features, suppressing visual noise and “non-face” elements in the process. The latent representation holds these essential facial data, which the autoencoder uses to build a more versatile model that enables the “face swap” by relying on features the two faces have in common.
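
As a rough illustration of how that scheme enables a face swap, here is a hedged sketch in PyTorch. The layer sizes, image resolution, and training loop are assumptions made for this example, and random tensors stand in for aligned face crops; the essential idea is a single shared encoder, one decoder per identity, and a decoder swap at inference time:

```python
import torch
import torch.nn as nn

# Shared encoder: compresses a 64x64 RGB face crop to a latent vector.
encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
    nn.Linear(512, 128),  # the latent space
)

def make_decoder():
    # Each identity gets its own decoder from the latent space back to pixels.
    return nn.Sequential(
        nn.Linear(128, 512), nn.ReLU(),
        nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, 3, 64, 64)  # stand-ins for aligned crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-ins for aligned crops of person B

for step in range(100):
    # Each decoder learns to reconstruct its own person from the shared latent.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad(); loss.backward(); opt.step()

# The swap: encode person A's face, then decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Because both identities pass through the same encoder, the latent space tends to capture what the faces have in common, such as pose, expression, and lighting, which is what makes the swapped output look coherent.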

To make the results more realistic, deepfakes also use Generative Adversarial Networks (GANs).

GANs train a “generator” to create new images from the latent representation of the source image, and a “discriminator” to evaluate the realism of the generated materials.

Standard GAN architecture. Image: Wikimedia Commons.

If the generator’s image does not pass the discriminator’s test, the generator is pushed to produce new images until one of them “fools” the discriminator.
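
A bare-bones version of that loop might look like the following PyTorch sketch, with deliberately tiny networks and random placeholder data rather than the architecture of any real deepfake system:

```python
import torch
import torch.nn as nn

latent_dim = 64

# Generator: maps a latent vector to a flattened 32x32 grayscale image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, 32 * 32), nn.Tanh())
# Discriminator: scores how "real" an image looks.
D = nn.Sequential(nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, 32 * 32)  # placeholder for a batch of real images

for step in range(200):
    # 1. Train the discriminator to tell real images from generated ones.
    z = torch.randn(16, latent_dim)
    fake = G(z).detach()
    d_loss = bce(D(real_images), torch.ones(16, 1)) \
           + bce(D(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator.
    z = torch.randn(16, latent_dim)
    g_loss = bce(D(G(z)), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

With each iteration, the discriminator gets better at spotting generated images and the generator gets better at producing ones that survive the test.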

How long does it take to make a deepfake?

The process of making a deepfake might sound complicated, but practically anyone can create one: many tools are available for the job, and little technical knowledge is required to use them.

How long it takes to make a deepfake depends on the deepfake software used.

Standard face swap. Image: Stephen Wolfram/Der Store Danske.

The complexity of the deepfake is also a determining factor. High-quality deepfakes are usually made on powerful computers that can render projects faster, but complex deepfake videos can still take hours to render, while simple face-swapping can be done in 30 minutes. A simpler deepfake can even be created in a few seconds using deepfake apps for smartphones.

Is deepfake technology legal?

Deepfakes have inspired a number of legislative reforms around the world.

  • In California, two bills from 2019 ban the use of deepfakes in politics and pornography. The states of Virginia and Texas have similar regulations against deepfakes.
  • The US Congress is also considering the approval of the Deep Fakes Accountability Act (officially titled 'Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2021' or H.R. 2395), a bill that is meant to “combat the spread of disinformation through restrictions on deep-fake video alteration technology”. The bill proposes disclosure and watermark requirements for deepfake video makers and establishes penalties (fines and up to 5 years in prison) for those who don’t meet these requirements.
  • At the beginning of 2022, China’s State Internet Information Office launched the “Provisions on the Administration of Deep Synthesis Internet Information Services”, a draft of regulations for deepfakes and other artificially generated content.
  • The European Parliament has modified the Digital Services Act to impose the use of labels on deepfake videos. The Digital Services Act will come into effect by January 1, 2024.

Who invented deepfake technology?

No single person can be credited for inventing deepfake technology as it is based on several previous technologies, such as artificial neural networks (ANNs) and artificial intelligence (AI).

In general, the development of this type of synthetic media can be traced back to the 1990s. But deepfake technology as we know it today often relies on GANs, and GANs didn’t exist until 2014 when they were invented by computer scientist Ian Goodfellow.

The word “deepfake” emerged in 2017.

How do you detect a deepfake?

Some deepfakes are easier to detect than others. It all depends on the quality and complexity of the falsified material. Moderately trained, or even untrained, people can detect lower-quality deepfakes with the naked eye, by taking into account subtle details.

Some deepfakes use filters that make the false faces look blurrier in some areas. Others have slight inconsistencies in symmetry, color, lighting, sharpness, or texture. Some deepfake videos might shimmer or flicker due to these inconsistencies “accumulating” frame to frame.

Artificial intelligence and neural networks are being trained for automated deepfake detection, but the efficacy of these methods often depends on the realism of deepfakes, which are always evolving. And because some deepfake detection methods are similar to the ones used to create deepfakes, the false videos can be improved as new detection methods appear. As a result, there is a constant tug of war where no one ever wins.
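
At its simplest, an automated detector is a binary classifier trained on labeled real and fake face crops. The sketch below (PyTorch; the network, input size, and placeholder data are illustrative assumptions) shows the general shape of such a system:

```python
import torch
import torch.nn as nn

# A small binary classifier over 64x64 RGB face crops:
# label 1 = real, label 0 = deepfake.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: in practice these would be labeled crops from
# datasets such as FaceForensics++ or the Deepfake Detection Challenge.
crops = torch.rand(32, 3, 64, 64)
labels = torch.randint(0, 2, (32, 1)).float()

for step in range(100):
    logits = detector(crops)
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# At inference time, a sigmoid over the logit gives the probability of "real".
prob_real = torch.sigmoid(detector(crops[:1]))
```

Detectors of this kind are trained on large labeled datasets and tend to lose accuracy as new generation methods appear, which is exactly the arms race described above.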

“What makes the deepfake research area more challenging is the competition between the creation and detection and prevention of deepfakes, which will become increasingly fierce in the future,” says Amit Roy-Chowdhury, a professor of electrical and computer engineering and head of the Video Computing Group at UC Riverside. Roy-Chowdhury helped create the Expression Manipulation Detection (EMD) method, a system that spots specific areas within an image that have been altered. He adds: “With more advances in generative models, deepfakes will be easier to synthesize and harder to distinguish from real.”
