Understanding Deepfakes: Insights on Detection and Prevention
This week, the UK government unveiled plans to criminalise the creation of sexually explicit "deepfake" images in England and Wales through new legislation. Building upon the Online Safety Act passed last year, which prohibited the sharing of deepfakes, this new law aims to explicitly address the act of creation.
Deepfake technology is primarily used to generate pornographic content, disproportionately affecting women. Its widespread use also heightens concerns about misinformation and manipulation: by producing remarkably believable videos and images, deepfakes can deceive individuals and distort public perception.
What is a Deepfake?
A deepfake is a video of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information. Deepfakes use footage, pictures, and audio recordings of real people, and at times completely newly generated images, to create incredibly realistic fakes that can make anyone appear to say and do things they never did. Deepfakes can be used to:
Spread misinformation: Malicious actors can create deepfake videos of politicians or celebrities to manipulate public opinion or disrupt voting. (e.g. in the build-up to elections, fake campaign videos could be generated to appeal to different audiences or discredit politicians.)
Damage reputations: Deepfakes can be used to create synthetic sexual content of individuals, causing personal and professional harm. (e.g. intimate imagery created for personal use or for abusing others through non-consensual sharing, colloquially known as ‘revenge porn’.)
Commit financial fraud: Deepfakes can be used to impersonate someone in a video call to trick them into authorising financial transactions.
Blackmail: Deepfakes can also be used for blackmail, where perpetrators threaten to release fabricated videos or images unless their demands are met. In previous sextortion/webcam blackmail cases, the threat might have been to share footage recorded in an earlier video call, but the rise of deepfakes means that anyone could potentially be targeted in this way.
Facilitate online abuse and harassment: Deepfakes can amplify the harmful effects of online abuse and harassment by creating fabricated content intended to demean, intimidate, or exploit individuals, undermining victims' ability to seek justice.
Deepfakes pose a significant threat because they can be so convincing. Even subtle manipulations can erode trust in the media and public figures. Additionally, the technology is becoming increasingly accessible, making it easier for anyone to create them.
How to Spot a Deepfake:
There are some fairly simple things you can look for when trying to spot a deepfake:
Unnatural facial features: From unnatural eye movements to emotionless faces, there are lots of red flags you can look out for. Is the head position slightly off? Perhaps a facial feature doesn't look quite right; is that hair too smooth to be real, or is there some form of distortion or disjointed movement when the face moves? These are all signs that an image could have been digitally altered.
Awkward-looking body or posture: Another sign is if a person's body shape seems unnatural or there are inconsistencies between body and face placement. This can be easier to spot because deepfake technology usually focuses on facial features rather than the whole body.
Strange colouring: Inconsistent skin tone, discolouration, and odd lighting or shadows are all signs that the imagery isn't real.
Blurring: If the edges of images are not aligned, e.g. where someone’s face and neck meet their body, you’ll know something’s not right.
Inconsistent audio and noise: Deepfake creators usually spend more time on the video images than on the audio. This can result in audio features being given less attention, so watch out for poor lip-syncing, robotic-sounding voices, strange background noise, or even the absence of audio.
Reverse image searches: A search for the original image, or a reverse image search through a search engine (e.g. Google Images), can unearth similar videos online to help determine whether an image, audio clip, or video has been altered in any way.
Trustworthy media outlets do not feature the video: If a video shows a person in the public eye saying or doing something shocking or important, the news will probably be reporting on it. If there are no results from trusted media sources talking about it, this could indicate the video is a deepfake.
Protecting Yourself from Deepfakes:
While deepfakes present a challenge, there are steps you can take to protect yourself:
Be Critical: Don't take everything online at face value. Look for signs of manipulation, like unnatural movements or inconsistencies in lighting or audio.
Verify Information: Don't rely on a single source, especially for sensational claims. Cross-reference information from reputable sources. If you are still unsure, seek clarification: ask a tech-savvy friend or member of your community.
Beware of Suspicious Links: Deepfakes can be used to lure people to malicious websites. Don't click on suspicious links.
Stay Informed: Deepfake detection methods evolve as the technology does. Keep yourself updated on the latest advancements.
What can you do if you’ve found out Deepfakes have been made featuring you?
Don't Engage: Don't share the deepfake or respond to the sender. Sharing it can amplify its reach.
Report the Deepfake: Most platforms have reporting mechanisms for misinformation and inappropriate content. Report the deepfake to the platform where you encountered it.
Gather Evidence: If possible, and only if it's imagery of an adult, take screenshots of the deepfake and save the online link. This can be helpful if you need to report it to the authorities. If you suspect the imagery involves a child, do not download or screenshot it, as possessing such material is a criminal offence (Child Sexual Abuse Material). In this case, report the content directly to the platform and to the IWF (UK) or CyberTipline (USA), but don't save any evidence on your devices.
Seek support: Deepfakes can be upsetting. Talk to a trusted friend, family member, or mental health professional for support. If the content created is synthetic sexual content (intimate imagery), take a look at our Intimate Abuse guide for further help and contact the Revenge Porn Helpline for further advice and support.
Report to Authorities: If the deepfake is being used for illegal purposes, report it to the police.
Organisations That Can Help:
The Cyber Helpline (UK & USA): Provides free, expert help and advice to victims of cybercrime and online harm, including guidance on how to proceed.
IWF (UK): Identifies and removes global online child sexual abuse imagery.
Revenge Porn Helpline (UK): UK service supporting adults (aged 18+) who experience intimate image abuse, also known as revenge porn.
National Centre for Missing and Exploited Children (NCMEC) (US): While focused on child exploitation, NCMEC offers resources for reporting online harassment, which can include deepfakes.
Cyber Civil Rights Initiative (CCRI) (US): Advocates for digital privacy and can be a resource for those facing online harassment.
INHOPE (Europe): A European network of hotlines for reporting illegal online content and online harm.
Navigating the Future of Deepfakes:
The phenomenon of deepfakes underscores the urgent need for heightened awareness, critical thinking, and proactive measures to safeguard against online deception and protect individuals from the harmful effects of misinformation, abuse, harassment, and exploitation. By staying informed and exercising caution, we can navigate the complex landscape of deepfake threats and preserve the integrity of digital discourse.
There is a glimmer of hope on the horizon. In the UK, the Online Safety Act, passed last year, made the sharing of deepfakes illegal, and a new law currently making its way through parliament will make it an offence to create a sexually explicit deepfake, even without any intention to share it. This has the potential to stop the problem at its source. As with all such laws, there are loopholes to be navigated (mainly that intent to cause distress will need to be proved for a successful prosecution), but there is still time, and the hope is that some of these issues will be ironed out by the time the new law comes into force.
Remember, the digital landscape is ever-evolving, and continuous vigilance is key to staying ahead of potential threats. For more in-depth information and additional resources, visit our TCH guides page.
Authors: Onyedikachi Ugwu & Kathryn Goldby
Access our guides for dealing with cybercrime: