How Does Deepfake Technology Work and Should I Be Worried About It?

July 14, 2022     |    5 minute read



If you’ve spent time on social media, you’ve probably come across a potentially concerning technological advancement: the deepfake. Deepfakes can strike viewers as anything from entertaining to concerning, or even scary. A political figure spreading propaganda they never actually spread. A historical figure delivering a speech they never gave. A famous actor doing or saying things they never did or said. A few questions may be turning over in your mind: How are these images created? What are the security implications of deepfakes? Should I be worried? Rest assured, while deepfake technology is advancing, there is no need to panic about what it could mean for the average person’s security.

What is Deepfake Technology?

The term “deepfake” combines “deep learning” and “fake.” While there is no single, universally agreed-upon definition, a deepfake generally replaces a person in an existing video with someone else’s likeness. Essentially, a deepfake is a photo, audio clip, or video that has been manipulated by machine learning (ML) and artificial intelligence (AI) to make it appear to be something it is not.

Deepfakes are not videos that have simply been reworked in video editing software. They are usually generated by specialized applications or algorithms that blend existing footage with newly synthesized video. These deepfake applications, rooted in machine learning, deconstruct the subtle features of someone’s face and learn how to manipulate them under the particular conditions of the video. Those manipulations can then be integrated into a second video, producing an entirely new creation.

Deepfake Apps and Software

The inner workings of a deepfake algorithm are complex, but the “secret sauce” has two important components. The first is that the algorithm develops a deep understanding of one person’s face and learns how to map those attributes onto another face. Since most people have mouths, eyes, and noses in roughly the same places, a deepfake algorithm can analyze the characteristics of that anatomy and learn them to an exceptional level of detail. It then manipulates the features in a second video to match the features seen in the first, all while preserving the original video’s general style and look.
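
To make that mapping concrete, below is a minimal sketch of the shared-encoder, twin-decoder design used by many face-swap tools, written in Python with PyTorch. Everything here is illustrative and assumed rather than taken from any particular product: the tiny network sizes, the 64x64 input, and names like decoder_a are hypothetical, and real systems add face alignment, blending masks, and far larger models.

```python
# Illustrative sketch: one shared encoder learns facial structure common to
# both identities; each identity gets its own decoder. Swapping = encode a
# frame of person A, then decode it with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a compact feature vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuild a face from the shared features; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's face crops
decoder_b = Decoder()  # would be trained only on person B's face crops

# The swap itself: person B's identity rendered with person A's pose/expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # placeholder for an aligned face crop
with torch.no_grad():
    swapped_face = decoder_b(encoder(frame_of_a))
```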

The second component is that these algorithms are composed of two parts that work in opposition to each other, an architecture commonly known as a generative adversarial network (GAN). As one part manufactures phony data, the other is trained to flag that phony data, improving the results by pointing out what appears fake. In this way, a deepfake program can act as its own coach and teacher to improve its output.
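
Here is a hedged sketch of that adversarial back-and-forth, again in Python with PyTorch. The generator G and discriminator D below are deliberately toy-sized stand-ins, and only the essential training loop is shown.

```python
# The "coach and teacher" loop: D learns to flag fakes, and its feedback
# pushes G to produce fakes that are harder to flag.
import torch
import torch.nn as nn

latent_dim = 100
G = nn.Sequential(nn.Linear(latent_dim, 784), nn.Tanh())  # toy generator
D = nn.Sequential(nn.Linear(784, 1))                      # toy discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(32, 784)  # placeholder for real training images

for step in range(100):
    # 1) Train the discriminator: real data is labeled 1, generated data 0.
    fakes = G(torch.randn(32, latent_dim)).detach()  # freeze G this step
    d_loss = (bce(D(real_images), torch.ones(32, 1))
              + bce(D(fakes), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to make D label its fakes as "real".
    fakes = G(torch.randn(32, latent_dim))
    g_loss = bce(D(fakes), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```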

The result is a synthetic video that can be used with good or bad intent. It’s not hard to imagine why such a video might be dangerous. There’s the obvious risk that a person’s synthetic words or actions could incite someone to do something bad or dangerous. But an additional risk is that synthetic videos might start to undermine the believability of genuine ones. Privacy experts are understandably concerned that a deepfake might be used to spread misinformation on social media or to bypass security measures like biometric authentication platforms.

What Are Other Concerns?

Morphs, Presentation Attacks, and Other Evolving Threats

The concerns don’t stop at deepfakes. A morph is a type of biometric attack that combines the faces of two or more individuals into one unique face. Because the morph can contain elements of both an authorized user’s and an unauthorized user’s face, facial recognition could potentially be tricked into granting access fraudulently. Morphs may also be used to fabricate identity documents, such as passports, for individuals who cannot lawfully obtain one or cross borders. In this scenario, a morph is created by combining the likeness of the person who cannot get a passport with that of a person who can. The morphed image is then used to enroll for a new passport, and once the passport is issued, the unauthorized traveler can attempt to use it to bypass border security.
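
At its simplest, a morph is a weighted blend of two face images. The Python sketch below shows only that naive pixel-level idea, assuming two pre-aligned, same-size images with hypothetical file names; real morphing tools also warp facial landmarks into alignment, which is what makes the result plausible to a matcher (and what morph-detection systems look for).

```python
# Naive illustration of the core idea behind a morph: a per-pixel weighted
# average of two face images. Assumes the faces are already aligned and the
# images are the same size; file names are hypothetical.
import numpy as np
from PIL import Image

face_a = np.asarray(Image.open("person_a.png").convert("RGB"), dtype=np.float32)
face_b = np.asarray(Image.open("person_b.png").convert("RGB"), dtype=np.float32)

alpha = 0.5  # equal contribution from each identity
morph = (alpha * face_a + (1.0 - alpha) * face_b).astype(np.uint8)
Image.fromarray(morph).save("morph.png")
```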

Despite this troubling potential, the news on deepfakes, morphs, and other evolving threats isn’t all bad. For one, the kind of unsupervised learning being perfected in deepfake algorithms and applications has real potential for good. Machine learning similar to that used in deepfake technology can help self-driving cars recognize their surroundings, including pedestrians, and can help improve voice search and virtual reality applications.

One reason deepfakes have focused on celebrities and historical figures is that a large amount of background data is available on them. A machine learning algorithm like those used in deepfake processing must deeply understand how a subject looks, which means analyzing footage of the subject from different angles, in different lighting, and under varied conditions. For the average person, there is rarely enough such data available to make them a viable target of a deepfake attack.

The news is also promising for organizations and individuals that use biometric authentication to secure their assets. Best-in-class biometric frameworks use liveness detection, which determines whether the user is a living, breathing person being presented live to the imaging device, or whether it is a presentation or spoof attack designed to breach the system. Whether the input is a simple photo spoof, a deepfake, or a morph, reputable biometric authentication systems are well equipped to distinguish a live person from a facsimile of one.
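
For a sense of how one common form of liveness detection works, here is a hedged, vendor-neutral Python sketch of a challenge-response flow. The helper functions are hypothetical placeholders rather than any real product’s API; the point is the structure: ask for a random action that a replayed or pre-rendered video cannot anticipate, then combine the result with passive cues.

```python
# Hypothetical active liveness check. detect_action and
# passive_liveness_score stand in for real video-analysis models.
import random

CHALLENGES = ["blink", "turn_head_left", "turn_head_right", "smile"]

def detect_action(frames, action):
    """Placeholder: would verify the requested action occurred on camera."""
    raise NotImplementedError

def passive_liveness_score(frames):
    """Placeholder: would score texture, depth, and reflection cues (0..1)."""
    raise NotImplementedError

def liveness_check(capture_frames):
    # A random challenge defeats static photos and pre-recorded replays,
    # because the attacker cannot know which action will be requested.
    challenge = random.choice(CHALLENGES)
    frames = capture_frames(challenge)
    if not detect_action(frames, challenge):
        return False  # the requested action never happened live
    # Even a deepfake that can mimic the action must still survive
    # passive texture/depth analysis of the captured frames.
    return passive_liveness_score(frames) >= 0.9
```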

For more insight on evolving threats and how to address them with biometrics, download our whitepaper. 

Download the White Paper:

Presentation Attacks, Deepfakes and Morphs

It has always been vitally important to protect the sensitive and valuable assets of organizations and their customers from outside threats and unwanted access. In today’s world this is even more pronounced, with identity theft at an all-time high and data breaches up 17%[1] since 2020 alone.

Passwords have typically formed the backbone of most access security authentication methods, but with 61% of data breaches resulting from weak or stolen passwords[2], organizations are looking for more secure alternatives. Biometric authentication, which takes advantage of a person’s unique physical characteristics to grant access to secure information or assets, virtually eliminates the problems associated with password-based methods, giving organizations a much more secure option.

As organizations shift from password-based to biometric authentication to improve security, however, hackers and other malicious parties have attempted to follow suit with new attack methods designed to thwart these measures and gain access fraudulently. These methods include presentation attacks, deepfakes, and morphs, and their rising frequency has produced heightened awareness and fear. Thankfully, much of that fear rests on misconceptions about how great a threat these methods truly pose. Organizations armed with accurate information about each of these evolving threats can protect themselves effectively and assure customers and interested parties that their assets remain secure.