Deepfakes are images, audio recordings and videos that have been altered to make a person appear to say or do something they never actually said or did.

While fake or “doctored” images have a long history, today’s technology helps fraudsters create deepfake video and audio that can be difficult to detect. And because social media enables content to reach huge audiences quickly, deepfakes are sometimes even reported as breaking news by media outlets that have been duped.

“Deepfakes pose a unique threat because they use people we know and trust to mislead or create chaos,” said Debbie Guild, PNC’s chief security officer. “It’s important that people critically evaluate what they’re seeing and hearing and keep watch for the things that don’t seem quite right.”

How it works

The word deepfake combines “deep learning,” the machine learning technique used to create them, with an old-fashioned reference to something being “fake.”

Fraudsters start by taking advantage of photos or video of their target available on the internet or social media. The fraudster then manipulates that content into a new image or voice recording in which the subject appears to say or do something they never did, or something out of character for them.

The process itself is technical: deepfakes are generated using artificial intelligence and machine learning algorithms that imitate real humans. Yet they can be created with readily available software.

“The danger here is that the barrier to entry to creating a deepfake is minimal,” Guild said. “It may seem like you need to have extensive training to create one, but editing technology has made it so that people with basic computer skills can sometimes pull these off.”

Who gets hurt

Those most often victimized are politicians, celebrities and high-level business executives, due in part to the availability of images, videos or audio recordings of them already on the internet. The fraudster’s goal can range from a simple prank or social media engagement to more sinister ends such as spreading misinformation, ruining reputations or inflicting financial harm.

Well-known deepfakes have targeted high-profile individuals for a variety of reasons, both nefarious and well-intentioned. Recent deepfakes have mimicked President Donald Trump, President Barack Obama and Facebook CEO Mark Zuckerberg; a Florida art museum even created educational deepfakes of artist Salvador Dali, who died in 1989.

Don’t assume the impact of deepfakes is limited to those in the spotlight, however. Deepfakes have been used to trick employees into taking inappropriate actions by imitating legitimate messages from CEOs, resulting in significant costs and job losses. At a minimum, consumers or voters may be misinformed about an important topic or person due to deepfakes.

How to detect

Generally, deepfake images, audio and video can be difficult to identify, but there are things you can be on the lookout for:

  • Badly synced sound and video
  • Blurriness where the face meets the neck and hair
  • Box-like shapes or other cropped effects around the mouth, eyes and neck
  • Changes in the background and/or lighting
  • Face discolorations
  • Irregular blinking or no blinking
  • Lower-quality sections in the same video
  • Movements that aren’t natural

Actions you can take

  • Be cautious about video and photos that you share via social media.
  • Challenge what you see and hear, especially if the message is highly unusual for the person.
  • Confirm the authenticity of the message, especially before taking action.
  • Do not deviate from routine controls. If you receive a phone call from a superior with a highly unusual request, follow company protocol to double check the order before executing.

“With the growing popularity of social media and extent of available online content, deepfakes are something we will have to contend with for the foreseeable future,” Guild said. “By taking a critical look at what digital media we are consuming and sharing, we can do our part to limit their harmful effects.”