Deepfakes are no longer a theoretical threat. AI-generated fake videos of politicians, celebrities, and ordinary people are shared millions of times daily. Voice cloning is used in grandparent scams and CEO fraud. Non-consensual intimate deepfakes have devastated real people's lives. This guide gives you the practical tools to detect, respond to, and protect yourself from deepfakes.
Types of Deepfakes You Will Encounter
Voice cloning: AI clones a real person's voice from publicly available audio (social media, YouTube). Used in grandparent scams ("I'm in jail, send money"), CEO fraud (calling finance teams to authorize wire transfers), and romance scams. These clones can be extremely convincing.
Non-consensual intimate imagery (NCII): AI-generated explicit images of real people who never consented. Used for harassment, sextortion, and revenge porn. This is a growing crisis in schools, where teen victims are increasingly common.
Face-swap video: A person's face is replaced in existing video footage. Used for disinformation, political manipulation, celebrity impersonation, and putting real people in compromising situations.
Synthetic faces: AI-generated profile photos for fake social media accounts, romance scam profiles, and fraudulent review accounts. These are photorealistic faces of people who do not exist.
How to Spot a Deepfake Video
Deepfakes have improved dramatically but still have tells:
- Face edges: Look for slight blurring or artifacts around the hairline, ears, and jaw. The skin may appear unnaturally smooth or have an AI "shine."
- Teeth and eyes: Teeth may look misshapen, or the mouth may show too many or too few teeth. Eyes may not blink at natural intervals, or blinks may look irregular (a rough blink-rate sketch appears at the end of this section).
- Lighting inconsistencies: The face lighting may not match the background lighting, especially on the ears and neck.
- Audio sync: Lip movements may be slightly off from the audio track, especially in fast speech or words with complex mouth shapes (F, P, B sounds).
- Accessories: Earrings, glasses, and hair may blur or distort at edges, especially during movement.
- Background: Objects near the face may warp or distort slightly when the face moves.
- Emotional incongruence: Facial expressions may not match the emotional content of the speech.
High-quality deepfakes generated by state-of-the-art models can fool human viewers more than 90% of the time. Visual inspection alone is not enough: always use detection tools and apply common-sense context checks.
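For the technically inclined, the blink tell above can be checked programmatically. The sketch below is a rough heuristic, assuming the mediapipe package (legacy Face Mesh solution) and opencv-python are installed; the landmark indices and threshold are common conventions rather than calibrated values, and the video filename is a placeholder.

```python
# Rough blink-rate heuristic: people typically blink every few seconds,
# while some deepfakes blink rarely or with oddly regular timing.
from math import dist

import cv2
import mediapipe as mp

# MediaPipe Face Mesh indices commonly used for the left eye:
# outer corner, two upper-lid points, inner corner, two lower-lid points.
LEFT_EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(landmarks, idx):
    pts = [(landmarks[i].x, landmarks[i].y) for i in idx]
    # Eyelid opening relative to eye width; drops sharply during a blink.
    return (dist(pts[1], pts[5]) + dist(pts[2], pts[4])) / (2 * dist(pts[0], pts[3]))

def blink_count(video_path, ear_threshold=0.20):
    cap = cv2.VideoCapture(video_path)
    blinks, eye_closed = 0, False
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue
            ear = eye_aspect_ratio(result.multi_face_landmarks[0].landmark, LEFT_EYE)
            if ear < ear_threshold and not eye_closed:
                blinks, eye_closed = blinks + 1, True
            elif ear >= ear_threshold:
                eye_closed = False
    cap.release()
    return blinks

print(blink_count("suspect_clip.mp4"), "blinks detected")  # placeholder filename
```

A very low count over a long clip, or perfectly even spacing between blinks, is a reason to look more closely, not proof of fakery.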
How to Spot AI-Generated Images
- Hands: AI still struggles with hands. Look for too many or too few fingers, fused fingers, or unnatural hand poses.
- Text: AI-generated images with text often show garbled, misspelled, or nonsensical letters.
- Symmetry: Real faces have natural asymmetry. AI faces are often too perfectly symmetrical.
- Background details: Objects in the background may be blurred, distorted, or physically impossible.
- Watermarks and metadata: Check for subtle AI watermarks and for provenance metadata embedded in the file (Midjourney, DALL-E, and Stable Diffusion tools sometimes embed generation details); see the metadata sketch after this list.
- Reverse image search: Search the image on Google, TinEye, or Yandex. If it does not appear anywhere else, it may be newly generated.
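If you are comfortable with a little code, you can inspect a downloaded image's embedded metadata yourself. This is a minimal sketch, assuming Pillow is installed (pip install Pillow); the filename is a placeholder, field names vary by generator, and missing metadata proves nothing, since screenshots and re-uploads strip it.

```python
# Print any embedded metadata that might reveal how an image was made.
# Some Stable Diffusion front ends store prompt "parameters" in PNG text
# chunks; other tools write EXIF fields such as Software or ImageDescription.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image_metadata(path):
    img = Image.open(path)

    # PNG text chunks and similar string metadata exposed via img.info.
    for key, value in img.info.items():
        if isinstance(value, str):
            print(f"info[{key}]: {value[:200]}")

    # EXIF fields, if present.
    for tag_id, value in img.getexif().items():
        print(f"EXIF {TAGS.get(tag_id, tag_id)}: {value}")

inspect_image_metadata("downloaded_image.png")  # placeholder filename
```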
Deepfake Detection Tools
- Microsoft Video Authenticator: Analyzes videos and images for manipulation signs. Available to journalists and NGOs.
- Intel FakeCatcher: Real-time detection using blood flow analysis in video (AI-generated faces lack real blood flow signals).
- Sensity AI: Enterprise-grade detection for images and video (Reality Defender is a separate, similar enterprise service). Free tier available.
- Google About This Image: In Google Image results, open the three-dot menu (or right-click an image in Chrome) and choose "About this image" to see when the image was first indexed and where else it has appeared.
- Hive Moderation API: Detects AI-generated images. Free tier available for developers.
- AI or Not (aiornot.com): Simple tool to check if an image is AI-generated. Free.
- FotoForensics: Performs error level analysis (ELA) and metadata inspection on JPEGs to reveal signs of image manipulation; a rough ELA sketch follows this list.
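Error level analysis, the core technique behind FotoForensics, is simple enough to reproduce yourself. The sketch below is a rough illustration, assuming Pillow is installed; the resave quality, brightness scale, and filenames are arbitrary choices, and ELA output always needs careful interpretation.

```python
# Error level analysis (ELA): re-save a JPEG at a known quality and amplify
# the per-pixel difference. Regions that were pasted in or re-edited often
# recompress differently and show up brighter or darker than their surroundings.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, resave_quality=90, scale=15):
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload the recompressed copy.
    resaved_path = path + ".ela_tmp.jpg"
    original.save(resaved_path, "JPEG", quality=resave_quality)
    resaved = Image.open(resaved_path).convert("RGB")

    # Amplify the difference so uneven compression becomes visible.
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")  # placeholders
```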
Protecting Against Voice Clone Scams
Voice cloning scams are among the most effective deepfake attacks because people trust the sound of a familiar voice. Protection strategies:
Establish a secret code word with family members — something specific, not a common word. If you receive an emergency call from a family member, ask for the code word. A real family member will know it. An AI voice clone will not.
- If a caller claims to be family and asks for money or secrets, hang up and call them back on a known number
- Be especially cautious of calls from unknown numbers claiming to be people you know
- Scammers create artificial urgency — "I'm in jail," "I was in an accident" — to prevent verification. Slow down.
- Businesses should verify any unusual financial requests through a second, independent channel
Protecting Your Image from Deepfakes
- Limit public photos: Every public photo is potential training data for deepfake models. Review your social media privacy settings — see our social media privacy guide.
- Use Glaze and Nightshade: Free tools that add invisible perturbations to images, making them harder to use for AI training while looking normal to humans.
- Google yourself regularly: Search your name and reverse-search your photos to detect unauthorized use.
- Watermark professional images: Visible and invisible watermarks make unauthorized deepfakes harder to create and make it easier to prove ownership of the originals (a simple watermarking sketch follows this list).
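As one example of the visible-watermarking step above, here is a short Pillow sketch; the text, placement, and filenames are placeholders, and invisible (steganographic) watermarking requires dedicated tools.

```python
# Adds a semi-transparent text watermark to the lower-right corner of a photo.
from PIL import Image, ImageDraw

def add_visible_watermark(in_path, out_path, text="(c) Your Name 2026"):
    img = Image.open(in_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Measure the text with the default bitmap font, then place it with a margin.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    margin = 10
    x = img.width - (right - left) - margin
    y = img.height - (bottom - top) - margin
    draw.text((x, y), text, fill=(255, 255, 255, 128))  # white at ~50% opacity

    Image.alpha_composite(img, overlay).convert("RGB").save(out_path, "JPEG")

add_visible_watermark("portrait.jpg", "portrait_watermarked.jpg")  # placeholders
```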
If You Are Targeted by a Deepfake
- Document everything: Screenshot the content, URLs, account names, and timestamps. Do this before reporting, since content may be removed (a small evidence-capture sketch follows this list).
- Report to the platform: All major social media platforms have policies against non-consensual deepfakes. Use the platform's abuse reporting tools.
- Cyber Civil Rights Initiative: cybercivilrights.org provides a crisis helpline and resources for NCII victims: 1-844-878-CCRI.
- Law enforcement: Over 20 US states have specific laws against non-consensual deepfakes. File a report with local police and potentially the FBI's Internet Crime Complaint Center (IC3.gov).
- Legal counsel: Defamatory or harassing deepfakes may support civil claims. Many attorneys offer free initial consultations.
- Support: Being targeted by a deepfake is traumatic. Seek support from trusted people and mental health resources.
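If you can run a script, a timestamped local copy of the offending page strengthens the documentation step above. This is a minimal sketch, assuming the requests library is installed; the URL and filenames are placeholders, and it complements rather than replaces screenshots, which capture what you actually saw.

```python
# Save a copy of a page along with a UTC timestamp and SHA-256 hash,
# so you can later show exactly what was captured and when.
import hashlib
import json
from datetime import datetime, timezone

import requests

def capture_evidence(url, out_prefix="evidence"):
    response = requests.get(url, timeout=30)
    body = response.content

    # Keep the raw page exactly as received.
    with open(f"{out_prefix}.html", "wb") as f:
        f.write(body)

    record = {
        "url": url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "status_code": response.status_code,
        "sha256": hashlib.sha256(body).hexdigest(),
    }
    with open(f"{out_prefix}.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

print(capture_evidence("https://example.com/offending-post"))  # placeholder URL
```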
Frequently Asked Questions
Can I spot a deepfake just by looking?
Not reliably in 2026. Early deepfakes had obvious tells — blurry edges, unnatural blinking, mismatched lighting. Modern deepfakes generated by the latest AI models are nearly photorealistic. However, subtle artifacts still exist: inconsistent ear details, unnatural skin texture at the hairline, teeth that look slightly off, and lighting that does not match the background. Context clues — does this look out of character? was this shared by an anonymous account? — are often more reliable than visual inspection alone.
What are the best free deepfake detection tools?
Microsoft's Video Authenticator, Intel's FakeCatcher, and Sensity AI offer detection capabilities, though access to some is limited to organizations such as newsrooms and NGOs. Hive Moderation provides a free API tier. Google's About This Image feature in Search shows where an image has appeared and when it was first indexed. DuckDuckGo's image search offers a setting to filter out likely AI-generated results. No tool is 100% accurate; use multiple sources and apply common sense.
What should I do if a deepfake of me exists?
First, preserve evidence — take screenshots and save URLs. Report to the platform hosting the content (most have policies against non-consensual deepfakes). If the content is sexual, report to the Cyber Civil Rights Initiative (cybercivilrights.org). Contact local law enforcement — over 20 US states have laws specifically criminalizing non-consensual deepfakes. Consider contacting an attorney if the deepfake is defamatory or used for harassment.
Can deepfake audio be detected?
AI-cloned voice audio is harder to detect than video deepfakes because we have fewer audio analysis tools readily available. Warning signs: slight robotic quality, unnatural breathing patterns, audio that does not match expected emotion, and calls from unknown numbers asking for urgent action. The best defense is to establish a family code word — a secret phrase that only real family members know — to verify identity during emergency calls.
Are deepfakes illegal?
Laws vary by jurisdiction and use case. Non-consensual intimate deepfakes (NCII) are illegal in most US states and the UK. Deepfakes used for fraud or defamation are illegal under existing fraud and defamation laws. Election deepfakes are increasingly regulated. However, detection and prosecution remain difficult. The legal landscape is evolving rapidly — several federal deepfake bills are pending in the US Congress.