Action Guide
43 min read

What to Do If You Find a Deepfake: Complete Action Guide 2025

Step-by-step action plan when you discover deepfake videos. Includes platform reporting procedures (TikTok, YouTube, Instagram), FBI IC3 filing process, evidence documentation, legal action options under Take It Down Act (May 2025), victim resources, and response timelines. Critical guide for handling scam deepfakes, political misinformation, non-consensual content, and impersonation.

AI Video Detector Team
July 9, 2025
deepfake reporting, FBI IC3, Take It Down Act, victim resources, cybercrime, legal action

What to Do If You Find a Deepfake: Complete Action Guide 2025

You just discovered a deepfake video. Maybe it's:

  • **A celebrity endorsing a cryptocurrency scam** (using your favorite star's face)
  • **A politician making statements they never said** (election misinformation)
  • **Someone you know in a non-consensual explicit video** (deepfake porn)
  • **Your own face used without permission** (impersonation fraud)

    Your heart races. What do you do NOW?

    The wrong response: Panic, share it to "warn others," or ignore it hoping it goes away.

    The right response: Follow this 5-phase action plan that takes you from discovery to resolution.

    What's changed in 2025:

  • **Take It Down Act** (May 2025): Federal law makes non-consensual deepfakes a **felony**
  • **Platform 48-hour removal requirement**: Tech companies must remove flagged content within 2 days
  • **FBI IC3 reporting**: 880,418 cybercrime complaints in 2023 ($12.5B losses, 22% increase)
  • **Statutory damages**: $150K-$250K for non-consensual sexual deepfakes
  • **YouTube AI removal tools**: Request removal if deepfake simulates your face/voice

    This guide provides:

  • ✅ **Immediate actions** (what to do in first 60 minutes)
  • ✅ **Platform reporting procedures** (TikTok, YouTube, Instagram, Facebook, Twitter/X)
  • ✅ **FBI IC3 filing process** (step-by-step walkthrough)
  • ✅ **Evidence documentation** (how to preserve proof)
  • ✅ **Legal action options** (when to sue, expected costs, success rates)
  • ✅ **Victim support resources** (organizations that can help)
  • ✅ **Response timelines** (how long each step takes)
  • ✅ **Scenario-specific guides** (scam, political, non-consensual, impersonation)

    Whether you're the victim, a concerned friend, a journalist, or just a responsible netizen, this guide gives you the complete roadmap to combat deepfakes effectively.

    Don't just scroll past. ACT.

    ---

    Table of Contents

  • [Immediate Actions: First 60 Minutes](#immediate)
  • [Phase 1: Document and Preserve Evidence](#phase-evidence)
  • [Phase 2: Report to Platform (48-Hour Clock Starts)](#phase-platform)
  • [Phase 3: Report to Authorities (FBI IC3)](#phase-authorities)
  • [Phase 4: Alert Affected Parties](#phase-alert)
  • [Phase 5: Monitor and Follow Up](#phase-monitor)
  • [Platform-Specific Reporting: TikTok](#report-tiktok)
  • [Platform-Specific Reporting: YouTube](#report-youtube)
  • [Platform-Specific Reporting: Instagram/Facebook](#report-meta)
  • [Platform-Specific Reporting: Twitter/X](#report-twitter)
  • [FBI IC3 Filing: Complete Walkthrough](#fbi-ic3)
  • [Legal Action Options Under 2025 Laws](#legal-action)
  • [Scenario 1: Scam Deepfake (Celebrity Crypto Scam)](#scenario-scam)
  • [Scenario 2: Political Deepfake (Election Misinformation)](#scenario-political)
  • [Scenario 3: Non-Consensual Sexual Deepfake](#scenario-ncii)
  • [Scenario 4: Personal Impersonation](#scenario-impersonation)
  • [Victim Support Resources](#resources)
  • [What NOT to Do](#what-not-to-do)

    ---

    Immediate Actions: First 60 Minutes

    The clock is ticking. Here's what to do RIGHT NOW:

    Action Checklist (Complete within 1 hour)

    ☐ STOP - Don't share or engage with the content
    ☐ SCREENSHOT - Capture URL, post details, comments
    ☐ DOWNLOAD - Save a copy (if legal to possess in your jurisdiction)
    ☐ NOTE TIME - Record discovery time and date
    ☐ IDENTIFY TYPE - Determine category (scam, political, sexual, impersonation)
    ☐ ASSESS URGENCY - Is this actively harming someone RIGHT NOW?
    ☐ BEGIN PHASE 1 - Start documentation process (next section)
    

    Critical DON'Ts (First Hour)

    ❌ DON'T share it to "warn others" (amplifies spread)
    ❌ DON'T comment on the original post (alerts creator you found it)
    ❌ DON'T contact the creator directly (may destroy evidence or flee)
    ❌ DON'T delete your evidence screenshots (you'll need them)
    ❌ DON'T panic-post about it publicly yet (wait for legal advice if it involves you)
    

    Urgency Assessment

    HIGH URGENCY (Act immediately, call police if needed):

    - Active fraud attempt in progress (people losing money NOW)
    - Violent threats using deepfake
    - Child sexual abuse material (CSAM) involving deepfakes
    - Imminent physical danger to someone
    
    Action: Call local police immediately, then proceed with this guide
    

    MEDIUM URGENCY (Complete within 24 hours):

    - Non-consensual sexual content (deepfake porn)
    - Active impersonation fraud
    - Political misinformation during election period
    - Brand/reputation damage
    
    Action: Proceed with this guide immediately
    

    STANDARD URGENCY (Complete within 3 days):

    - Celebrity scam deepfakes (general circulation)
    - Obvious parody/satire that might confuse some
    - Historical deepfakes being recirculated
    
    Action: Follow this guide at normal pace
    

    Your Mental State Matters

    If YOU are the victim shown in the deepfake:

    ⚠️ What you might feel right now:
    - Shock, disbelief
    - Violation, anger
    - Shame, embarrassment (you shouldn't - you did nothing wrong)
    - Panic about who's seen it
    
    🫂 Remember:
    - You are not alone (179 incidents in Q1 2025 alone)
    - This is NOT your fault
    - Help is available (see the Victim Support Resources section)
    - Legal protections exist (Take It Down Act, May 2025)
    
    📞 Crisis support:
    - Cyber Civil Rights Initiative: cybercivilrights.org
    - Crisis Text Line: Text HOME to 741741
    - RAINN (sexual content): 1-800-656-4673
    

    Deep breath. You've got this. Now let's proceed systematically.

    ---

    Phase 1: Document and Preserve Evidence

    Time required: 15-30 minutes

    Critical importance: Without documentation, platform reports and legal action become much harder

    What to Document

    A. The Content Itself

    ☐ Screenshot of the deepfake video (capture thumbnail clearly)
    ☐ Screenshot showing full URL in browser address bar
    ☐ Full-page screenshot of post (including comments, likes, shares count)
    ☐ Download copy of video file (if platform allows)
       - TikTok: Use SnapTik.app or similar (saves without watermark)
   - YouTube: Use yt-dlp (the maintained successor to youtube-dl)
       - Instagram: Use third-party downloaders
       - Save as: evidence_[date]_[platform]_original.mp4
    

    B. Metadata and Context

    ☐ Post URL (exact link)
    ☐ Poster's username and display name
    ☐ Poster's profile URL
    ☐ Post date/time (note your timezone)
    ☐ Discovery date/time (when YOU first saw it)
    ☐ View count at discovery
    ☐ Like/comment/share counts
    ☐ Video caption/description (full text)
    ☐ Hashtags used
    ☐ Any audio/voiceover text (transcribe if needed)
    

    C. Poster Information

    ☐ Screenshot of poster's profile page
    ☐ Bio/description
    ☐ Follower/following counts
    ☐ Account creation date (if visible)
    ☐ Other recent posts (screenshot last 5-10)
    ☐ Any contact information listed
    ☐ Linked websites or social media
    

    D. Comments and Engagement

    ☐ Screenshot top 20-30 comments
    ☐ Note if anyone is calling it fake
    ☐ Note if anyone is questioning authenticity
    ☐ Screenshot any comments from poster responding to questions
    ☐ Note patterns (are many commenters bots?)
    

    E. Distribution Tracking

    ☐ Search for video on other platforms
    ☐ Screenshot all locations where you find it
    ☐ Note which posting came first (if determinable)
    ☐ Check if it's been shared by news outlets
    ☐ Google search video title/description for coverage
    

    How to Organize Evidence

    Create dedicated folder:

    Folder structure:
    Deepfake_Evidence_[YYYY-MM-DD]/
    ├── 01_Original_Content/
    │   ├── video_original.mp4
    │   ├── screenshot_video_page.png
    │   ├── screenshot_url.png
    │   └── screenshot_full_post.png
    ├── 02_Poster_Info/
    │   ├── screenshot_profile.png
    │   ├── screenshot_other_posts.png
    │   └── bio_text.txt
    ├── 03_Comments/
    │   ├── screenshot_comments_1.png
    │   ├── screenshot_comments_2.png
    │   └── notable_comments.txt
    ├── 04_Distribution/
    │   ├── other_platforms/
    │   ├── news_coverage/
    │   └── distribution_log.txt
    └── 05_Documentation/
        ├── timeline.txt
        ├── reporting_log.txt
        └── contact_attempts.txt
    
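If you prefer to script this, here is a minimal sketch that creates the folder layout above, assuming Python 3 is available; the function name is illustrative, not part of any tool mentioned in this guide.

```python
# Sketch: create the evidence folder layout described above.
# The directory names mirror the suggested structure exactly.
from datetime import date
from pathlib import Path

def make_evidence_folders(base_dir="."):
    root = Path(base_dir) / f"Deepfake_Evidence_{date.today().isoformat()}"
    subdirs = [
        "01_Original_Content",
        "02_Poster_Info",
        "03_Comments",
        "04_Distribution/other_platforms",
        "04_Distribution/news_coverage",
        "05_Documentation",
    ]
    for sub in subdirs:
        # parents=True creates nested dirs; exist_ok avoids errors on re-run
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root
```

Running `make_evidence_folders()` once gives you a dated, consistently named folder tree so your screenshots and logs stay organized from the first minute.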

    Timeline.txt template:

    DEEPFAKE INCIDENT TIMELINE
    
    Discovery:
    - Date/Time: 2025-01-15 14:30 EST
    - Platform: TikTok
    - URL: [paste exact URL]
    - Discovered by: [your name/role]
    
    Content Description:
    - Type: Celebrity crypto scam
    - Person depicted: Elon Musk
    - False claim: Bitcoin giveaway
    - View count at discovery: 1.2M
    
    Evidence Collection:
    - Screenshots taken: 2025-01-15 14:35 EST
    - Video downloaded: 2025-01-15 14:40 EST
    - Cross-platform search completed: 2025-01-15 15:00 EST
    
    Actions Taken:
    [Update as you proceed through phases]
    

    Evidence Storage and Backup

    ☐ Save to multiple locations:
       - Local computer hard drive
       - External USB drive (physical backup)
       - Cloud storage (Google Drive, Dropbox, OneDrive)
       - Email evidence folder to yourself (creates timestamp)
    
    ☐ Create SHA-256 hash of video file (proves file hasn't been altered):
       - Windows: certutil -hashfile video.mp4 SHA256
       - Mac/Linux: shasum -a 256 video.mp4
       - Save hash in documentation file
    
    ☐ Consider notarizing evidence (for potential legal action):
       - Services: Truepic, Amber, NotaryCam
       - Creates blockchain-verified timestamp
       - Admissible in court
    
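The certutil/shasum commands above can also be done in a few lines of Python, which is handy if you want to hash several evidence files at once; this is a minimal sketch, and the function name is illustrative.

```python
# Sketch: compute the SHA-256 hash of an evidence file so you can later
# prove the file hasn't been altered. Equivalent to:
#   certutil -hashfile video.mp4 SHA256   (Windows)
#   shasum -a 256 video.mp4               (Mac/Linux)
import hashlib

def sha256_of_file(path, chunk_size=8192):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't exhaust memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Save the resulting hex string in your documentation file; anyone can re-run the same command later to confirm the file is byte-for-byte identical.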

    Legal Considerations for Evidence Collection

    Is it legal to download the deepfake?

    ✅ Generally YES if:
    - For evidence/reporting purposes
    - You are the victim
    - For law enforcement reporting
    - For platform violation reporting
    
    ⚠️ Potential issues:
    - Non-consensual sexual imagery (CSAM laws if minors involved)
    - Copyright issues (usually overridden by fair use for evidence)
    - Terms of service violations (platforms may object, but legal need prevails)
    
    Recommendation: Download for evidence purposes, store securely, don't redistribute
    

    ---

    Phase 2: Report to Platform (48-Hour Clock Starts)

    Time required: 10-20 minutes per platform

    Legal deadline: Under Take It Down Act (2025), platforms must remove non-consensual intimate imagery within 48 hours of notification

    Multi-Platform Strategy

    If deepfake appears on multiple platforms:

    Priority order:
    1. Platform with most views (stop widest spread)
    2. Platform where poster has most followers (limit reach)
    3. All other platforms (thorough cleanup)
    
    Simultaneous reporting:
    ✅ Report to ALL platforms at once (don't wait for one to respond)
    ✅ Use copy-paste report template (save time)
    ✅ Track submission times (for 48-hour deadline)
    

    Report Submission Checklist

    For each platform report:

    ☐ Use platform's official reporting tool (not just comments)
    ☐ Select correct violation category (see platform sections below)
    ☐ Provide all requested information (URL, description, proof)
    ☐ Upload evidence if option exists
    ☐ Save confirmation number/email
    ☐ Note submission time
    ☐ Set 48-hour reminder to check status
    

    What to Include in Reports

    Template report text (adapt for each platform):

    SUBJECT: Deepfake Video Violation - [Type] - Urgent Removal Requested
    
    I am reporting a deepfake video that violates your Terms of Service:
    
    VIDEO URL: [paste exact URL]
    POSTER: @[username]
    VIOLATION TYPE: [Impersonation / Misinformation / Non-consensual intimate imagery]
    
    DESCRIPTION:
    This video is a deepfake (AI-generated video) that [describe the harm]:
    - [Example: Falsely depicts Elon Musk endorsing a cryptocurrency scam]
    - [Example: Uses my likeness without consent in sexual content]
    - [Example: Contains fabricated political statement by [person]]
    
    EVIDENCE:
    - Video is AI-generated (lacks [specific tells, or confirmed by detection tools])
    - Original source does not exist (reverse search conducted)
    - [Person depicted] has publicly denied making this content
    - This violates your policy against [specific policy section]
    
    HARM:
    - [X] people have viewed this (as of [date/time])
    - [Description of harm: financial loss, reputational damage, emotional distress]
    
    LEGAL BASIS (if applicable):
    - Take It Down Act (2025) for non-consensual intimate imagery
    - NO FAKES Act protections (if applicable)
    - [State] deepfake laws
    
    REQUEST:
    Immediate removal within 48 hours as required by law.
    
    CONTACT:
    Name: [Your name]
    Email: [Your email]
    Phone: [Optional]
    
    Evidence attached: [list screenshots/documentation]
    
    Date/Time of Report: [timestamp]
    

    ---

    Phase 3: Report to Authorities (FBI IC3)

    Time required: 20-40 minutes

    When to do this: For serious incidents (fraud, threats, NCII, widespread harm)

    When to Report to Law Enforcement

    MUST report (legally or practically essential):

    ✅ Non-consensual sexual imagery (a federal crime under the Take It Down Act)
    ✅ Financial fraud (scam deepfakes stealing money)
    ✅ Child sexual abuse material (CSAM) involving deepfakes - IMMEDIATELY
    ✅ Threats of violence
    ✅ Extortion/blackmail attempts
    ✅ Identity theft for financial gain
    

    SHOULD report (builds case database):

    ✅ Political misinformation during elections
    ✅ Corporate impersonation (CEO fraud)
    ✅ Large-scale coordinated deepfake campaigns
    ✅ Repeat offender content creators
    

    OPTIONAL report:

    ⚪ Obvious parody/satire (low harm)
    ⚪ Small-scale impersonation (minimal damage)
    ⚪ Already handled by platform removal
    

    FBI IC3 (Internet Crime Complaint Center)

    ⚠️ CRITICAL WARNING (September 2025):

    The FBI has warned that fake IC3 websites exist to steal your data.

    Only use the official site: www.ic3.gov

  • Type URL directly into browser (DON'T use search engines)
  • Verify SSL certificate
  • Confirm the domain is exactly ic3.gov (not a lookalike)

    FBI IC3 Filing Process (Step-by-Step)

    Step 1: Go to IC3.gov

    URL: https://www.ic3.gov
    Click: "File a Complaint"
    

    Step 2: Read Instructions

    Review:
    - What types of complaints IC3 handles
    - What information you'll need
    - How reports are used (investigative and intelligence)
    
    Note: IC3 does NOT:
    - Provide legal advice
    - Mediate disputes
    - Recover lost funds directly
    

    Step 3: Complete Complaint Form

    Section A: Your Information

    Full Legal Name: [Required]
    Address: [Required]
    Phone: [Required]
    Email: [Required]
    Date of Birth: [Required]
    
    ⚠️ Accuracy critical - investigators may contact you
    

    Section B: Subject/Suspect Information (if known)

    Name: [Poster's username if real name unknown]
    Address: [If known, otherwise "Unknown"]
    Phone: [If known]
    Email: [If known]
    Website/URL: [Poster's profile URL]
    
    Note: It's OK to say "Unknown" - FBI can investigate
    

    Section C: Financial Transaction (for fraud cases)

    Did you lose money: [Yes/No]
    Amount lost: $[amount]
    Payment method: [Wire transfer, cryptocurrency, gift cards, etc.]
    Recipient: [If known]
    Transaction date: [Date]
    
    Bank/Institution details: [If applicable]
    

    Section D: Incident Description

    Template for deepfake incidents:

    INCIDENT TYPE: Deepfake Video Fraud / Non-Consensual Deepfake Pornography / [etc.]
    
    DATE OF INCIDENT: [Date you discovered deepfake]
    
    DESCRIPTION:
    I discovered a deepfake video (AI-generated video manipulating someone's likeness) on [Platform] that is causing [harm type]:
    
    1. CONTENT DETAILS:
       - URL: [Exact URL]
       - Platform: [TikTok/YouTube/Instagram/etc.]
       - Poster: @[username]
       - Views: [number] as of [date]
    
    2. DEEPFAKE CHARACTERISTICS:
       - Person depicted: [Name or "Myself"]
       - False content: [What fake video shows]
       - AI-generated (evidence: [detection tools used, obvious artifacts, etc.])
    
    3. HARM CAUSED:
       [Check all that apply]
       □ Financial loss: $[amount] lost due to scam
       □ Identity theft: My likeness used without consent
       □ Reputational damage: [Describe]
       □ Emotional distress: [Describe]
       □ Privacy violation: Non-consensual sexual imagery
    
    4. ATTEMPTS TO RESOLVE:
       - Reported to [Platform] on [date]: [Response or no response yet]
       - [Any other actions taken]
    
    5. EVIDENCE:
       I have preserved evidence including:
       - Downloaded video file
       - Screenshots of post and poster profile
       - Transaction records (if fraud)
       - [Other evidence]
    
       I can provide this evidence to investigators.
    
    6. SUPPORTING INFORMATION:
       - [Person depicted] has publicly denied creating this content: [Link if applicable]
       - Fact-checking organizations have debunked this: [Links if applicable]
       - Similar deepfakes by same creator: [Links if applicable]
    
    REQUESTED ACTION:
    Investigation and prosecution of deepfake creator under:
    - Take It Down Act (2025) [if non-consensual intimate imagery]
    - Wire fraud statutes [if financial fraud]
    - [Other applicable laws]
    
    ATTACHMENTS: [List evidence files if upload option available]
    

    Step 4: Review and Submit

    ☐ Review all information for accuracy
    ☐ Ensure contact information is correct
    ☐ Submit complaint
    ☐ SAVE COMPLAINT NUMBER (critical for follow-up)
    ☐ Print or screenshot confirmation page
    

    Step 5: What Happens Next

    Immediate:
    - Complaint entered into FBI database
    - Assigned complaint number
    - No immediate response (this is normal)
    
    Within 30 days:
    - Complaint reviewed by analysts
    - Matched with other similar complaints (pattern detection)
    - Assigned to field office if meets criteria
    
    Within 3-6 months:
    - Investigation may begin (if case meets thresholds)
    - You may be contacted for follow-up
    - OR: Case used for intelligence (helps future investigations)
    
    Note: Not all complaints result in prosecution, but ALL are valuable for building case patterns
    

    Additional Law Enforcement Options

    Local Police:

    When to call:
    ✅ Immediate physical danger
    ✅ Active ongoing fraud (people losing money NOW)
    ✅ You know identity of creator (local jurisdiction)
    
    How to report:
    1. Call non-emergency line: [Your city] police department
    2. Ask for: Cybercrime division or detective unit
    3. Provide: All evidence you've collected
    4. Reference: FBI IC3 complaint number (if filed)
    
    Expect: Varied response (small departments may lack expertise)
    

    State Authorities:

    Many states have dedicated cybercrime units:
    
    Examples:
    - California: DOJ eCrime Unit
    - New York: State Police Computer Crime Unit
    - Texas: DPS Cyber Crimes Unit
    
    Find yours: Google "[Your State] cybercrime reporting"
    

    ---

    Phase 4: Alert Affected Parties

    Time required: Variable (15 minutes - several hours depending on scope)

    When to do this: After documenting evidence and filing reports

    Who to Alert

    Priority 1: Direct Victims

    If deepfake depicts a real person (not you):
    
    ☐ Contact them privately (DM, email, phone if possible)
    ☐ Send them evidence documentation
    ☐ Share this action guide
    ☐ Offer to help with reporting
    ☐ Respect their wishes (if they want to handle privately)
    
    Message template:
    "Hi [Name], I discovered a deepfake video using your likeness. It's [brief description] and has [view count] views on [platform]. I've documented evidence and can share it with you. I wanted you to be aware immediately. Let me know if you'd like my help reporting it or if you prefer to handle it. Here's the URL: [link]"
    

    Priority 2: Target Audience (if scam)

    If deepfake is scamming specific group:
    
    ☐ Alert community moderators (Reddit, Facebook groups, forums)
    ☐ Post warning (if appropriate) in affected communities
    ☐ Contact admins of fan pages (if celebrity impersonation)
    
    Warning template:
    "⚠️ SCAM ALERT: A deepfake video is circulating showing [person] promoting [scam]. This is AI-generated and fake. [Person] has not endorsed this. DO NOT send money or click links. Report video if you see it: [platform reporting link]. Evidence it's fake: [brief explanation]"
    

    Priority 3: Fact-Checkers and Media

    For misinformation deepfakes:
    
    ☐ Snopes tip line: snopes.com/contact
    ☐ PolitiFact: politifact.com/article/2018/may/03/suggest-fact-check
    ☐ Lead Stories: leadstories.com/contact.html
    ☐ Your local news station (if regional impact)
    
    Tip template:
    "I discovered a deepfake video that is spreading misinformation about [topic]. It has [view count] views and is being shared widely. I have documented evidence showing it's AI-generated. Would you consider fact-checking this? [Details and evidence links]"
    

    Priority 4: Platform Trust & Safety (beyond reporting)

    For large-scale or dangerous deepfakes:
    
    ☐ Twitter/X Safety: help.twitter.com (escalation option)
    ☐ Meta Safety: facebook.com/help/contact/380829032097346
    ☐ YouTube Trusted Flagger Program (if you're part of it)
    ☐ TikTok Safety: tiktok.com/safety (escalation contact)
    
    When to escalate:
    - Standard report not addressed within 48 hours
    - Content violates laws (Take It Down Act)
    - Imminent harm to safety
    

    Public Alerts (Use Caution)

    When to post public warning:

    ✅ Widespread scam affecting many people
    ✅ Time-sensitive misinformation (elections, emergencies)
    ✅ Victim consents to public awareness
    
    ❌ Don't post publicly if:
    - Victim wants privacy (non-consensual intimate imagery)
    - Could amplify spread (Streisand effect)
    - Still gathering evidence (alert suspect)
    

    Public alert template (social media post):

    🚨 DEEPFAKE ALERT 🚨
    
    A fake AI-generated video is circulating showing [brief description].
    
    ❌ This is NOT real
    ✅ It was created using AI (deepfake technology)
    
    Evidence it's fake:
    - [Specific tells]
    - [Person depicted] has denied it
    - [Fact-check links if available]
    
    If you see this video:
    ☐ DON'T share it
    ☐ Report it to [platform]
    ☐ Warn others it's fake
    
    Learn how to spot deepfakes: [link to educational resources]
    
    #Deepfake #Misinformation #FactCheck
    

    ---

    Phase 5: Monitor and Follow Up

    Duration: Ongoing (7-30 days typically)

    Purpose: Ensure removal and prevent re-upload

    Monitoring Checklist

    Daily (First Week):

    ☐ Check if original post is still live
    ☐ Search for reposts on same platform
    ☐ Check other platforms for copies
    ☐ Review platform report status (if portal exists)
    ☐ Document any changes (view counts, new comments)
    ☐ Check for response from platform or authorities
    

    Weekly (Weeks 2-4):

    ☐ Broad search across all platforms
    ☐ Google search for video title/description
    ☐ Check if victim/subject has commented publicly
    ☐ Review any media coverage
    ☐ Update timeline documentation
    

    Platform Response Tracking

    Create tracking log:

    PLATFORM RESPONSE LOG
    
    Report #1: TikTok
    - Submitted: 2025-01-15 15:00 EST
    - Method: In-app reporting tool
    - Confirmation: Email received 15:05
    - 48-hour deadline: 2025-01-17 15:00
    - Status check (24h): Still live
    - Status check (48h): REMOVED ✅
    - Removal confirmed: 2025-01-17 10:30 (ahead of deadline)
    - Notes: Platform cited "synthetic and manipulated media" policy
    
    Report #2: YouTube
    - Submitted: 2025-01-15 15:30 EST
    - Method: Privacy request form
    - Confirmation: Case #XXXXXXX
    - 48-hour deadline: 2025-01-17 15:30
    - Status check (24h): "Under review"
    - Status check (48h): "Uploader given 48h to respond"
    - Status check (96h): REMOVED ✅
    - Removal confirmed: 2025-01-19 14:00
    - Notes: Took longer due to uploader response period
    
    [Continue for all platforms]
    
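The 48-hour deadlines in this log are easy to miscalculate by hand across multiple platforms. Here is a minimal sketch that computes them for you, assuming Python 3; the field names are illustrative and not from any platform's official API.

```python
# Sketch: record a platform report and compute its 48-hour removal
# deadline under the Take It Down Act. One dict per report filed.
from datetime import datetime, timedelta

def log_report(platform, submitted_at, method):
    deadline = submitted_at + timedelta(hours=48)
    return {
        "platform": platform,
        "submitted": submitted_at.isoformat(),
        "method": method,
        "deadline_48h": deadline.isoformat(),  # when to escalate if still live
    }
```

For example, a TikTok report filed at 2025-01-15 15:00 gets a deadline of 2025-01-17 15:00; append each dict to your reporting_log.txt so every status check compares against the same timestamps.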

    Escalation Procedures

    If content NOT removed within 48 hours (Take It Down Act violations):

    Step 1: Send Legal Notice to Platform

    Template (send via email to platform's legal/DMCA contact):
    
    SUBJECT: URGENT - Failure to Comply with Take It Down Act (48-Hour Deadline Passed)
    
    To: [Platform] Legal Department
    
    CASE REFERENCE: [Your report number]
    
    This is formal notice that [Platform] has failed to remove content in violation of the Take It Down Act (2025) within the required 48-hour period.
    
    CONTENT DETAILS:
    - URL: [link]
    - Report submitted: [date/time]
    - 48-hour deadline: [date/time]
    - Current status: Still live (as of [current date/time])
    
    VIOLATION:
    Non-consensual intimate imagery (deepfake) depicting [description]
    
    LEGAL REQUIREMENT:
    The Take It Down Act (Public Law XXX-XXX, enacted May 2025) requires platforms to remove reported non-consensual intimate imagery within 48 hours of notification.
    
    YOUR FAILURE TO COMPLY:
    As of [current time], [X] hours past deadline, the content remains accessible.
    
    CONSEQUENCES:
    Continued failure to comply may result in:
    - Civil liability under federal law
    - Statutory damages
    - Regulatory action
    - Public reporting of non-compliance
    
    IMMEDIATE ACTION REQUIRED:
    Remove content within 24 hours of this notice.
    
    EVIDENCE:
    Attached: Original report, screenshots showing content still live, timestamp proof
    
    CONTACT:
    [Your name]
    [Your attorney if represented]
    [Contact information]
    
    Date: [Date]
    

    Step 2: Consult Attorney

    When to hire lawyer:
    ✅ Platform ignores legal notice
    ✅ Content causing ongoing harm
    ✅ Damages exceed $10,000
    ✅ Criminal case may be filed
    
    What lawyer can do:
    - Send stronger legal demand
    - File emergency court order (injunction)
    - Sue platform for non-compliance
    - Sue creator for damages
    

    Step 3: Contact Regulators

    For Take It Down Act violations:
    
    ☐ File complaint with FTC: reportfraud.ftc.gov
    ☐ Report to state Attorney General: [Your state AG office]
    ☐ Media coverage: Contact journalist covering tech/privacy
    
    Template for regulator complaint:
    "[Platform] has failed to comply with the Take It Down Act by not removing deepfake non-consensual intimate imagery within the required 48-hour period. Content was reported on [date] and remains live as of [current date], [X] hours past deadline. I request investigation and enforcement action."
    

    Re-Upload Prevention

    Set up alerts for re-uploads:

    ☐ Google Alerts for video title/description
    ☐ Talkwalker Alerts for image search
    ☐ Platform-specific searches (weekly)
    ☐ If you're the victim: Enable content monitoring services
       - Reputation monitoring tools
       - Reverse image search alerts
    

    If re-uploaded:

    ✅ Report immediately (don't wait - platforms prioritize repeat violations)
    ✅ Reference original report number
    ✅ Note: "This is re-upload of previously removed content"
    ✅ Platform policies often result in account suspension for repeat violations
    

    ---

    Platform-Specific Reporting: TikTok

    TikTok Reporting Methods

    Method 1: In-App Report (Fastest)

    Step 1: Locate the video
    Step 2: Press and hold on video OR tap "Share" icon (arrow)
    Step 3: Tap "Report"
    Step 4: Select violation type:
    
    For deepfakes, choose:
    ☐ "Integrity and Authenticity" → "Misleading information"
    ☐ "Integrity and Authenticity" → "Impersonation"
    ☐ "Safety" → "Non-consensual intimate imagery" (if sexual content)
    ☐ "Safety" → "Scams or fraud"
    
    Step 5: Follow prompts to provide details
    Step 6: Submit
    Step 7: Save confirmation (screenshot)
    

    Method 2: Web Form Report

    URL: https://www.tiktok.com/legal/report/feedback
    
    Use when:
    - Need to upload evidence files
    - Reporting multiple violations at once
    - Want written record
    
    Fill out:
    - Your information
    - Violating content URL
    - Violation type
    - Detailed description
    - Evidence uploads
    

    Method 3: Impersonation-Specific Report

    URL: https://support.tiktok.com/en/safety-hc/report-a-problem/report-an-impersonation-account
    
    Use when:
    - Entire account is impersonating someone
    - Multiple deepfake videos from same account
    
    Requires:
    - Proof of identity (if you're the person being impersonated)
    - Account URL
    - Examples of impersonation
    

    TikTok Response Times

    Standard reports: 24-48 hours review
    Priority reports (NCII): <24 hours (usually <12 hours)
    Complex cases: 3-7 days
    Appeal process: 7-14 days (if content removed but you disagree)
    

    TikTok Anonymity

    ✅ Reports are completely anonymous
    ✅ Creator is NOT notified who reported
    ✅ Your account is not revealed
    
    Exception: Legal proceedings may require disclosure
    

    ---

    Platform-Specific Reporting: YouTube

    YouTube AI-Specific Removal Request (New 2025 Tool)

    For deepfakes of YOU:

    YouTube allows removal requests if deepfake simulates your face or voice
    
    Step 1: Go to YouTube Privacy Complaint Process
    URL: support.google.com/youtube/answer/142443
    
    Step 2: Select "Privacy complaint"
    Step 3: Choose reason:
    ☐ "Uses my image without my permission"
    ☐ "Uses my voice without my permission"
    
    Step 4: Provide information:
    - Your full legal name
    - Your contact email
    - Affected video URL
    - Timestamp (if only part of video is deepfake)
    - Explanation: "This video uses AI-generated deepfake technology to simulate my face/voice without my consent. I did not create, authorize, or appear in this content."
    
    Step 5: Submit
    Step 6: YouTube notifies uploader (48-hour response window)
    Step 7: Decision within 2-7 days
    

    YouTube response to uploader:

    Uploader gets 48 hours to:
    - Remove content voluntarily, OR
    - Provide evidence they have permission
    
    If no response or insufficient evidence → Content removed
    

    Standard YouTube Reporting

    For non-personal deepfakes (scams, misinformation, etc.):

    Step 1: Below video, click "..." (three dots)
    Step 2: Select "Report"
    Step 3: Choose category:
    
    For deepfakes:
    ☐ "Spam or misleading" → "Scams or fraud" (for scam deepfakes)
    ☐ "Spam or misleading" → "Misleading metadata" (if title/description lies)
    ☐ "Harmful or dangerous content" → "Misinformation"
    
    Step 4: Provide details
    Step 5: Submit
    
    Note: Generic reports may take longer (3-7 days)
    

    YouTube Copyright Strike (For Content Theft)

    If deepfake stole your original video content:
    
    URL: youtube.com/copyright_complaint_form
    
    Process:
    1. Prove you own original content
    2. Identify infringing video
    3. Swear under penalty of perjury
    4. Submit
    
    Result:
    - Video removed within 24 hours
    - Creator gets copyright strike
    - 3 strikes = Account termination
    

    ---

    Platform-Specific Reporting: Instagram/Facebook

    Instagram Reporting

    In-App Report:

    Step 1: On Reel or post, tap "..." (three dots)
    Step 2: Tap "Report"
    Step 3: Select:
    
    For deepfakes:
    ☐ "Fraud or scam" → "Impersonation"
    ☐ "False information" (for misinformation deepfakes)
    ☐ "Nudity or sexual activity" → "Involves someone I know" (for NCII)
    
    Step 4: Follow prompts
    Step 5: If you're the person in video, select "Me"
    Step 6: Submit
    

    Web Form (For Serious Violations):

    Non-consensual intimate imagery:
    URL: facebook.com/help/contact/567360146613371
    
    Impersonation:
    URL: facebook.com/help/contact/634636770043106
    
    Provide:
    - Your information
    - Content URL
    - Explanation
    - Proof (ID if claiming it's you)
    

    Facebook Reporting

    Similar process to Instagram:

    Step 1: Click "..." on post
    Step 2: Select "Find support or report post"
    Step 3: Choose violation type
    Step 4: Submit
    
    For serious violations, use web forms (same URLs as Instagram)
    

    Meta's 48-Hour Removal (Take It Down Act)

    For NCII (non-consensual intimate imagery):
    
    Meta commits to:
    ✅ Review within 24 hours
    ✅ Remove within 48 hours if violation confirmed
    ✅ Disable uploader's ability to re-upload
    
    Track via:
    - Confirmation email (check spam folder)
    - Support Inbox (in-app)
    

    ---

    Platform-Specific Reporting: Twitter/X

    Twitter/X Reporting (No AI-Specific Tools Yet)

    In-App Report:

    Step 1: Click "..." on tweet
    Step 2: Select "Report Tweet"
    Step 3: Choose:
    
    For deepfakes:
    ☐ "It's misleading" → "Synthetic, manipulated, or out-of-context media"
    ☐ "It's spam" → "It's a scam"
    ☐ "It's abusive or harmful" → "Impersonation"
    
    Step 4: Provide details
    Step 5: Submit
    

    Copyright/DMCA Report:

    URL: help.twitter.com/forms/dmca
    
    Use if:
    - Deepfake uses your copyrighted content
    - You own rights to original video
    

    Twitter/X Limitations

    ⚠️ Challenges:
    - No dedicated deepfake reporting category (as of 2025)
    - Response times variable (1-14 days)
    - Paid verification complicates impersonation reports
    
    Recommendation:
    - Report via multiple categories
    - Tag @TwitterSupport publicly (sometimes speeds up review)
    - Reference Community Notes if added by users
    

    ---

    FBI IC3 Filing: Complete Walkthrough

    (Key points below; see Phase 3 above for the detailed walkthrough)

    IC3.gov Official Site: www.ic3.gov

    2023 Statistics:

  • 880,418 complaints received
  • $12.5 billion in losses
  • 22% increase from 2022

    What IC3 Does:

  • Collects cybercrime reports
  • Analyzes patterns
  • Distributes to law enforcement
  • Supports investigations

    What IC3 Does NOT Do:

  • Provide immediate response
  • Guarantee prosecution
  • Recover funds directly
  • Offer legal advice

    Follow-up:

  • Most cases: No immediate contact (report feeds intelligence)
  • Serious cases: Investigator may reach out within 30-90 days
  • Save complaint number for reference

    Additional Contacts:

    FBI Cyber Watch (for urgent threats):
    Email: CyWatch@fbi.gov
    Phone: 855-292-3937
    
    FBI Field Office Locator:
    URL: fbi.gov/contact-us/field-offices
    (For in-person reporting of serious incidents)
    

    ---

    Legal Action Options Under 2025 Laws

    Federal Law: Take It Down Act (May 2025)

    What it covers:

  • Non-consensual intimate imagery (NCII)
  • Both real photos/videos AND deepfakes
  • Distribution online

    Key provisions:

    ✅ Federal crime to share NCII without consent
    ✅ Platforms must remove within 48 hours of notice
    ✅ Victims can sue for damages:
       - Statutory damages: $150,000 - $250,000
       - Actual damages (prove harm)
       - Punitive damages (if malicious)
       - Attorney's fees
    
    ✅ Criminal penalties for creators:
       - Up to 2 years federal prison
       - Fines
       - Probation
    

    Who can sue:

  • Person depicted in deepfake
  • Legal guardian (if minor)
  • Estate (if deceased)

    Statute of limitations:

  • 4 years from discovery of violation
  • 10 years from creation (whichever is later)

    State Laws (40+ States Have Deepfake Laws)

    California (Strong protections):

    Right of publicity protection:
    - Lifetime + 70 years after death
    - Damages: Actual damages OR statutory ($750-$150K)
    - Attorney's fees recoverable
    
    Deepfake-specific law (AB 730, AB 602):
    - Non-consensual pornography: Criminal penalties
    - Political deepfakes: Illegal within 60 days of election
    

    New York:

    Civil Rights Law §§ 50-51:
    - Protects against unauthorized use of likeness
    - Damages: Actual + punitive
    - Injunctions available
    

    Texas:

    Property Code § 26.001:
    - Publicity rights
    - Damages: $250,000 minimum OR actual (whichever greater)
    - Criminal penalties for NCII
    

    Other states with laws: Illinois, Virginia, Washington, Florida, Georgia, Minnesota, and 30+ more.

    When to Consider Suing

    Strong case indicators:

    ✅ Clear identification of creator
    ✅ Significant damages (>$10,000)
    ✅ Creator has assets to collect judgment
    ✅ Evidence well-documented
    ✅ Violation of specific law
    ✅ Irreparable harm to reputation
    

    Weak case indicators:

    ❌ Anonymous creator (can't find them)
    ❌ Creator overseas (hard to enforce)
    ❌ Minimal damages
    ❌ Obvious parody (First Amendment protection)
    ❌ Platform removed content quickly (harm limited)
    

    Legal Action Process

    Step 1: Cease & Desist Letter

    Cost: $500-$2,000 (lawyer drafts)
    Success rate: ~60% (many creators comply to avoid suit)
    Timeframe: 7-day response period typical
    
    Sample clause:
    "You have created and distributed a deepfake video using my client's likeness without authorization, in violation of [State] law and the NO FAKES Act. DEMAND: (1) Remove all copies within 7 days, (2) Provide written confirmation of removal, (3) Agree not to re-upload. Failure to comply will result in immediate legal action seeking damages exceeding $150,000."
    

    Step 2: Civil Lawsuit

    Cost: $10,000-$50,000+ (depends on complexity)
    Timeframe: 12-24 months to resolution
    Typical recovery: $50,000-$500,000 (varies widely)
    
    Process:
    1. Hire attorney (intellectual property or privacy specialist)
    2. File complaint in federal or state court
    3. Serve defendant
    4. Discovery (gather evidence, depositions)
    5. Settlement negotiations (90% settle before trial)
    6. Trial (if no settlement)
    7. Judgment
    8. Collection (if defendant doesn't pay voluntarily)
    

    Step 3: Criminal Prosecution (via authorities, not you directly)

    Your role:
    - File FBI IC3 complaint
    - Cooperate with investigators
    - Provide evidence
    - Testify if case goes to trial
    
    Prosecution decision: Made by US Attorney or District Attorney
    
    Outcomes:
    - Prison time (up to 2 years federal)
    - Fines
    - Probation
    - Restitution order (you may receive compensation)
    

    ---

    Scenario 1: Scam Deepfake (Celebrity Crypto Scam)

    Scenario: You find a deepfake video of Elon Musk promoting a Bitcoin giveaway scam. It has 2M views on TikTok.

    Action Plan

    Phase 1: Document (15 minutes)

    ☐ Screenshot video showing Elon Musk deepfake
    ☐ Screenshot scam link in video description/comments
    ☐ Download video file
    ☐ Note view count, engagement
    ☐ Search for same video on other platforms
    ☐ Screenshot any victims' comments ("I sent money!")
    

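    To make the saved files tamper-evident, you can also log a SHA-256 fingerprint and UTC timestamp for each item as you collect it. A minimal Python sketch using only the standard library (the `evidence` folder name is a hypothetical example):

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large videos don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(evidence_dir: str) -> list[dict]:
    """One entry per evidence file: name, size, hash, and when it was logged."""
    entries = []
    for path in sorted(Path(evidence_dir).iterdir()):
        if path.is_file():
            entries.append({
                "file": path.name,
                "bytes": path.stat().st_size,
                "sha256": sha256_of(path),
                "recorded_utc": datetime.now(timezone.utc).isoformat(),
            })
    return entries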
    Phase 2: Report to Platform (10 minutes)

    ☐ TikTok in-app report: "Scams or fraud"
    ☐ Description: "Deepfake impersonating Elon Musk to promote cryptocurrency scam. Fake promises of giveaway. This is AI-generated video designed to steal money."
    ☐ Include evidence: Link to real Elon Musk denying (if exists)
    

    Phase 3: Report to FBI IC3 (30 minutes)

    ☐ File at IC3.gov
    ☐ Category: "Identity theft" + "Fraud"
    ☐ Description: Include scam URL, how it works, estimated victims
    ☐ Financial loss: Note if you or anyone you know lost money
    

    Phase 4: Alert Community (20 minutes)

    ☐ Find cryptocurrency forums/subreddits
    ☐ Post warning: "PSA: Elon Musk Bitcoin giveaway is SCAM. Video is deepfake. Do NOT send crypto. Report if you see it."
    ☐ Contact real Elon Musk's team (if you can): Tweet @elonmusk with warning
    ☐ Alert Snopes/PolitiFact for fact-check article
    

    Phase 5: Monitor (Ongoing)

    ☐ Check TikTok daily for removal (48-hour expectation)
    ☐ Search "Elon Musk Bitcoin giveaway" on Twitter, YouTube for copies
    ☐ Set Google Alert: "Elon Musk Bitcoin giveaway scam"
    ☐ Track if news coverage emerges
    

    Expected Timeline

    Hour 0: Discovery
    Hour 1: Documentation complete
    Hour 2: Platform reports submitted
    Hour 3: FBI IC3 filed
    Day 1: Community alerts posted
    Day 2: TikTok removes video (48-hour requirement)
    Week 1: Copied videos reported and removed from other platforms
    Month 1: FBI may contact if part of larger scam investigation
    

    Success Metrics

    ✅ Original video removed within 48 hours
    ✅ No new victims in your network
    ✅ Scam added to fact-checker databases
    ✅ Pattern reported to FBI (helps future cases)
    

    ---

    Scenario 2: Political Deepfake (Election Misinformation)

    Scenario: Two weeks before election, you find deepfake of candidate saying they're "dropping out of the race." 5M views on Twitter/X in 12 hours.

    Action Plan (URGENT - Time-Sensitive)

    Immediate Actions (First 60 minutes):

    ☐ Screenshot EVERYTHING (views, timestamp, engagement)
    ☐ Download video
    ☐ Identify all platforms where it's spreading
    ☐ Note if any news outlets have covered it
    ☐ Check if candidate or campaign has responded
    

    Multi-Platform Reporting (Next 30 minutes):

    ☐ Twitter: Report as "Synthetic media" + "Election misinformation"
    ☐ If on other platforms: Report simultaneously to all
    ☐ Escalate: Use platform's "high priority" or "election integrity" reporting if available
    

    Contact Campaign/Candidate (Immediately):

    ☐ Find campaign's official contact
    ☐ Email/DM: "URGENT: Deepfake video spreading showing you saying [false claim]. 5M views. Evidence attached. Please issue denial immediately."
    ☐ Provide all documentation
    

    Alert Fact-Checkers (Within 2 hours):

    Priority contacts:
    ☐ PolitiFact: Rapid Response team
    ☐ Snopes: Breaking news desk
    ☐ Lead Stories: Submit tip
    ☐ AP Fact Check: tips@ap.org
    ☐ Reuters Fact Check: factcheck@reuters.com
    
    Tip message:
    "URGENT ELECTION MISINFORMATION: Deepfake of [candidate] claiming to drop out spreading rapidly. 5M views in 12 hours. Need immediate fact-check to prevent voter confusion. Evidence and analysis attached."
    

    Contact Media (Within 3 hours):

    ☐ Major newspapers: Submit to breaking news desk
    ☐ TV networks: Contact political desk
    ☐ Focus: "5M voters may be misled before election"
    
    Press release template:
    "DEEPFAKE ALERT: A fabricated AI-generated video falsely showing [candidate] withdrawing from the race is spreading virally. This is disinformation intended to suppress voter turnout. [Candidate] HAS NOT withdrawn. The video is confirmed deepfake. Voters should verify all election information through official sources."
    

    Report to Election Authorities:

    ☐ FBI: IC3.gov (election interference category)
    ☐ Cybersecurity & Infrastructure Security Agency (CISA)
       Email: central@cisa.dhs.gov
    ☐ State Election Board: [Your state]
    ☐ Federal Election Commission: info@fec.gov
    
    Report: "Election misinformation via deepfake technology. Potential to suppress voter turnout."
    

    Public Warning Campaign:

    ☐ Post on your social media (tag news outlets)
    ☐ Use hashtags: #Deepfake #ElectionMisinformation #[Candidate]
    ☐ Create counter-narrative: "Video is FAKE. [Candidate] is still in race. Vote on [date]."
    ☐ Share fact-check articles as soon as published
    

    Expected Timeline (Election Context)

    Hour 0: Discovery
    Hour 1: All reports filed, campaign alerted
    Hour 2-4: Fact-checkers publish debunks
    Hour 6-12: News media covers "deepfake spreads misinformation"
    Hour 12-24: Platforms add warnings or remove
    Day 1-2: Candidate makes public statement denying
    Week 1: Video spread contained, fact-checks widespread
    
    Critical: Must counter within 24 hours before it becomes "accepted truth"
    

    ---

    Scenario 3: Non-Consensual Sexual Deepfake

    ⚠️ CONTENT WARNING: This section discusses non-consensual intimate imagery.

    Scenario: You discover a deepfake pornographic video depicting you (or someone you know). It has been posted on a pornographic website and shared on Twitter.

    Action Plan (Victim-Centered)

    Immediate Self-Care:

    ☐ Take a breath - This is NOT your fault
    ☐ Call support hotline if you need to talk:
       - Crisis Text Line: Text HOME to 741741
       - RAINN: 1-800-656-4673
       - Cyber Civil Rights Initiative: cybercivilrights.org
    
    ☐ Don't view the content repeatedly (causes re-traumatization)
    ☐ Ask trusted friend to help with evidence collection if you can't
    

    Evidence Collection (Have friend help if needed):

    ☐ Screenshot (don't download actual video file - may violate laws in some jurisdictions)
    ☐ Document URL
    ☐ Note platform
    ☐ Screenshot poster's profile
    ☐ Save any messages/blackmail attempts
    ☐ Do NOT engage with poster
    

    Immediate Reports (Within first 24 hours):

    1. Platform Reports:

    ☐ Report to hosting platform immediately
    
    For porn sites:
    - Most have "Report Abuse" or "DMCA" link in footer
    - Select "Non-consensual content" or "Revenge porn"
    - Provide: URL, your name, statement "I did not consent to this"
    
    For social media (Twitter, Reddit, etc.):
    - Use reporting tools
    - Category: "Non-consensual intimate imagery"
    - Urgent flag: "Immediate removal required"
    

    2. Take It Down (NCMEC):

    URL: takeitdown.ncmec.org
    
    Service: Hashes content and distributes to platforms
    Benefit: Prevents re-uploads across major platforms
    Process:
    1. Create account
    2. Upload content or provide URL
    3. System creates unique hash
    4. Platforms auto-block future uploads
    
    Platforms participating: Facebook, Instagram, TikTok, OnlyFans, Pornhub, etc.
    
    ✅ FREE service
    ✅ Confidential
    ✅ Very effective
    

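    The hash-and-block mechanism above can be pictured as a set-membership check: platforms keep fingerprints of reported content and refuse matching uploads. The toy Python sketch below (class and function names are invented for illustration) uses an exact SHA-256 match; the real service relies on more robust matching designed to survive re-encoding:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint. Production systems also use perceptual
    hashes that tolerate re-encoding, resizing, and cropping."""
    return hashlib.sha256(data).hexdigest()

class UploadFilter:
    """Toy model of a platform-side blocklist built from victim reports."""

    def __init__(self):
        self.blocked: set[str] = set()

    def register_report(self, data: bytes) -> None:
        # Only the hash needs to be shared for matching, not the content.
        self.blocked.add(fingerprint(data))

    def allow_upload(self, data: bytes) -> bool:
        """Reject any upload whose fingerprint matches reported content."""
        return fingerprint(data) not in self.blocked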
    3. Law Enforcement:

    ☐ Local police: File report immediately
       - Take evidence folder
       - Ask for detective specialized in cyber/sex crimes
       - Get case number
    
    ☐ FBI IC3: File online report
       - Category: "Non-consensual pornography" + "Extortion" (if blackmail)
       - Detail: Take It Down Act violation
    
    ☐ State cybercrime unit: File report
    

    Legal Action (Within first week):

    Consult Attorney:

    Specialist needed: Cyber civil rights attorney
    
    Find one:
    - Cyber Civil Rights Initiative: Attorney referral
    - State bar association: Search "cyber rights" or "privacy law"
    
    What attorney will do:
    1. Emergency cease & desist to poster
    2. DMCA takedown to platforms
    3. Court injunction (restraining order)
    4. Civil lawsuit for damages
       - Take It Down Act: $150K-$250K statutory
       - Emotional distress damages
       - Punitive damages
    5. Pursue criminal charges (work with prosecutor)
    

    Platform-Specific Actions:

    Twitter/X:

    Report form: help.twitter.com/forms/private_information
    Category: "Non-consensual nudity"
    Expectation: Removal within 24-48 hours
    

    Reddit:

    Report: reddit.com/report
    Category: "Non-consensual intimate media"
    Subreddit-specific: Contact moderators (often faster)
    

    Porn sites (Pornhub, XVideos, etc.):

    Most have abuse report forms:
    Pornhub: pornhub.com/content-removal
    XVideos: Contact form in footer
    
    Provide:
    - Your ID (to prove it's you)
    - URL of content
    - Statement of non-consent
    
    Under Take It Down Act: Must remove within 48 hours
    

    Support Resources:

    Cyber Civil Rights Initiative:
    - Website: cybercivilrights.org
    - Services: Legal referrals, removal guides, emotional support
    - Hotline: Available via website
    - FREE
    
    Without My Consent:
    - Website: withoutmyconsent.org
    - State-by-state legal guides
    - Self-help resources
    
    RAINN (Rape, Abuse & Incest National Network):
    - Hotline: 1-800-656-HOPE (4673)
    - Online chat: rainn.org
    - 24/7 support
    

    What NOT to Do

    ❌ DON'T confront the creator directly (may escalate)
    ❌ DON'T pay blackmail demands (rarely stops there)
    ❌ DON'T delete evidence (needed for prosecution)
    ❌ DON'T blame yourself (this is a crime against you)
    ❌ DON'T try to "tough it out" alone (support exists)
    

    Expected Timeline

    Hour 0: Discovery
    Hour 2: Evidence documented, crisis support contacted
    Day 1: Platforms reported, Take It Down filed, police report filed
    Day 2-3: Content removed from platforms (48-hour requirement)
    Week 1: Attorney consulted, legal action initiated
    Week 2-4: Cease & desist sent, injunction filed
    Month 1-3: Criminal investigation progresses
    Month 3-12: Civil lawsuit proceeds, settlement likely
    Year 1+: Criminal trial if prosecution proceeds
    

    Success Metrics

    ✅ Content removed from all platforms within 1 week
    ✅ Take It Down hash prevents re-uploads
    ✅ Perpetrator identified and charged
    ✅ Civil damages recovered
    ✅ Emotional support accessed
    ✅ You regain sense of control
    

    ---

    Scenario 4: Personal Impersonation

    Scenario: Someone created deepfake videos impersonating you to scam your family/friends or damage your reputation.

    Action Plan

    Immediate Actions:

    ☐ Alert your network: Post on your REAL accounts warning of impersonator
    ☐ Screenshot impersonation content
    ☐ Identify all platforms where impersonator is active
    ☐ Document any financial requests or harmful statements
    

    Warning Message Template (post on your real accounts):

    🚨 IMPERSONATION ALERT 🚨
    
    Someone is using AI deepfake technology to create fake videos of me.
    
    ❌ These are NOT me
    ❌ I am NOT asking for money
    ❌ I did NOT make those statements
    
    If you receive messages or see videos that seem out of character:
    ☐ Verify directly with me (call or in-person only)
    ☐ Do NOT send money
    ☐ Report the account
    ☐ Warn others
    
    My ONLY official accounts:
    Instagram: @[yourhandle]
    TikTok: @[yourhandle]
    Facebook: [link]
    
    Stay safe. Thanks for the heads up if you see anything suspicious.
    

    Platform Reports:

    ☐ Report impersonator account (not just individual videos)
    ☐ Category: "Impersonation"
    ☐ Provide: Your ID, proof you're the real person
    ☐ Request: Account termination
    
    For each platform:
    - Instagram: "About This Account" → Report impersonation
    - TikTok: Report account (long-press profile)
    - Facebook: Report profile for impersonation
    - Twitter/X: Report profile (harder without clear policy)
    

    Protect Your Identity:

    ☐ Watermark your real content going forward
    ☐ Use verification badge if eligible
    ☐ Consistent "verification" statement in bios
    ☐ Link all your real accounts together (cross-reference)
    
    Bio template:
    "✅ This is my ONLY [platform] account. Beware of impersonators. Verify at [your website]. I will NEVER DM you asking for money."
    

    Legal Action (If significant harm):

    ☐ Defamation lawsuit (if false statements damage reputation)
    ☐ Right of publicity violation (unauthorized use of likeness)
    ☐ Fraud charges (if scamming people)
    
    Consult attorney if:
    - Impersonator successfully scammed people
    - Your reputation/career is damaged
    - Impersonator refuses to stop after cease & desist
    

    Monitor and Follow-Up:

    ☐ Set Google Alerts for your name
    ☐ Reverse image search your photos weekly
    ☐ Check for new impersonator accounts monthly
    ☐ Educate your network on verification methods
    
    Verification methods:
    "If you're not sure it's really me, call me on the phone number you've always had for me. Or verify in person. Don't trust social media alone."
    

    ---

    Victim Support Resources

    Legal Support

    Cyber Civil Rights Initiative (CCRI)
    Website: cybercivilrights.org
    Services: Legal referrals, removal guides, 24/7 crisis helpline
    Focus: Non-consensual intimate imagery
    Cost: FREE
    
    Without My Consent
    Website: withoutmyconsent.org
    Services: State-by-state legal guides, DIY removal tools
    Focus: Revenge porn, deepfakes
    Cost: FREE
    
    Electronic Frontier Foundation (EFF)
    Website: eff.org
    Services: Digital rights advocacy, some legal support
    Focus: Privacy, free speech balance
    Cost: FREE consultations
    
    Volunteer Lawyers for the Arts
    Website: vlany.org (New York, similar orgs in other states)
    Services: Free legal consultations for creators/individuals
    Focus: Intellectual property, right of publicity
    Cost: FREE initial consultation
    

    Mental Health Support

    Crisis Text Line
    Text: HOME to 741741
    Available: 24/7
    Focus: Any crisis, including cyber harassment
    Cost: FREE
    
    RAINN (Rape, Abuse & Incest National Network)
    Hotline: 1-800-656-HOPE (4673)
    Chat: rainn.org
    Focus: Sexual violence (includes deepfake NCII)
    Cost: FREE
    
    988 Suicide & Crisis Lifeline
    Hotline: 988
    Available: 24/7
    Focus: Suicide prevention, mental health crisis
    Cost: FREE
    

    Content Removal Services

    Take It Down (NCMEC)
    Website: takeitdown.ncmec.org
    Service: Hash-based content blocking
    Platforms: Facebook, Instagram, TikTok, OnlyFans, Pornhub, +more
    Cost: FREE
    
    Reputation Defenders (Commercial)
    Website: reputationdefender.com
    Service: Professional content removal, monitoring
    Cost: $1,000-$10,000+ (paid service)
    
    StopNCII.org
    Website: stopncii.org
    Service: Similar to Take It Down, different platform partnerships
    Cost: FREE
    

    Fact-Checking Organizations

    Snopes
    Website: snopes.com
    Tip line: snopes.com/contact
    Response time: 24-72 hours
    Focus: Viral content, celebrity deepfakes
    
    PolitiFact
    Website: politifact.com
    Tip line: politifact.com/article/2018/may/03/suggest-fact-check
    Response time: 24-48 hours (elections faster)
    Focus: Political misinformation
    
    Lead Stories
    Website: leadstories.com
    Tip line: leadstories.com/contact.html
    Response time: 12-24 hours
    Focus: Breaking viral misinformation
    

    Technical Tools

    InVID Browser Extension
    Download: invid-project.eu
    Purpose: Verify video authenticity, extract keyframes
    Cost: FREE
    
    Google Alerts
    Website: google.com/alerts
    Purpose: Monitor for impersonation, re-uploads
    Cost: FREE
    
    TinEye
    Website: tineye.com
    Purpose: Reverse image search to find impersonation
    Cost: FREE
    

    ---

    What NOT to Do

    Critical Mistakes to Avoid

    1. DON'T Share to "Warn Others"

    ❌ Wrong: "OMG look at this fake video of [celebrity]! Don't fall for it!"
             [shares the video]
    
    Why wrong:
    - Amplifies the deepfake's reach
    - Increases views (helping algorithm)
    - Some people will miss "it's fake" part
    
    ✅ Right: "There's a fake AI video claiming [X]. DON'T share if you see it.
              Here's how to spot it: [description of tells].
              Report it instead: [platform link]"
              [NO video attached]
    

    2. DON'T Comment on the Original Post

    ❌ Wrong: Commenting "This is fake!" on the deepfake video
    
    Why wrong:
    - Boosts engagement (algorithm amplifies)
    - Alerts creator you found it (may delete and re-upload elsewhere)
    - Creator sees your profile (may target you next)
    
    ✅ Right: Report silently, document privately, warn others separately
    

    3. DON'T Contact the Creator Directly

    ❌ Wrong: DMing creator "I know this is fake, take it down or I'll sue"
    
    Why wrong:
    - Tips them off (may destroy evidence)
    - May escalate (harassment, more deepfakes)
    - Creator may flee/delete account before authorities can ID
    - Could be considered communication that hurts legal case
    
    ✅ Right: Let authorities handle direct contact after you've reported
    

    4. DON'T Delete Evidence

    ❌ Wrong: Deleting screenshots after platform removes content
    
    Why wrong:
    - Needed for legal action later
    - Needed to track re-uploads
    - Needed for law enforcement investigation
    - Can't recreate once original is gone
    
    ✅ Right: Archive EVERYTHING for at least 2 years
    

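    A low-effort way to follow the two-year rule is to bundle the evidence folder into a dated zip archive you keep offline (for example on a USB drive). A minimal Python sketch, standard library only, with a hypothetical folder name:

```python
import zipfile
from datetime import date
from pathlib import Path

def archive_evidence(evidence_dir: str, out_dir: str = ".") -> Path:
    """Bundle the evidence folder into one dated zip, preserving
    filenames relative to the folder root."""
    src = Path(evidence_dir)
    out = Path(out_dir) / f"{src.name}-{date.today().isoformat()}.zip"
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(src))
    return out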
    5. DON'T Pay Blackmail

    ❌ Wrong: "I'll delete the deepfake if you pay me $5,000"
             [You pay]
    
    Why wrong:
    - Rarely stops (they'll demand more)
    - Funds criminal activity
    - No guarantee of deletion
    - Makes you a target (you paid once, you'll pay again)
    
    ✅ Right: Screenshot blackmail attempt, report to FBI immediately
             (Extortion is federal crime)
    

    6. DON'T "Take Justice Into Your Own Hands"

    ❌ Wrong: Hacking the creator's account, doxxing them, retaliatory deepfakes
    
    Why wrong:
    - You become the criminal
    - Undermines your legal case
    - May face charges yourself
    - Escalates conflict
    
    ✅ Right: Let legal system handle punishment
    

    7. DON'T Assume It Will "Go Away on Its Own"

    ❌ Wrong: "I'll ignore it and it'll stop spreading"
    
    Why wrong:
    - Deepfakes can go viral in hours
    - Re-uploads persist for years
    - Ignoring doesn't stop harm
    - Missed opportunity for early intervention
    
    ✅ Right: Act within first 24 hours (most critical window)
    

    8. DON'T Publicize If You're the Victim (Until Strategic)

    ❌ Wrong: Immediately posting "Someone made a deepfake of me!" on all your socials
    
    Why wrong:
    - Draws attention to deepfake (Streisand effect)
    - Some people will seek it out
    - Emotional post may say things you regret
    - Should consult attorney first
    
    ✅ Right: Private actions first (reports, evidence, legal counsel)
             Public statement only if strategic (after removal, with attorney guidance)
    

    ---

    Conclusion: You Have the Power to Fight Back

    What we covered:

    5-Phase Action Framework:

  • ✅ Document and preserve evidence (15-30 min)
  • ✅ Report to platforms (10-20 min per platform, 48-hour removal required)
  • ✅ Report to FBI IC3 (20-40 min for serious cases)
  • ✅ Alert affected parties (varies by scope)
  • ✅ Monitor and follow-up (7-30 days)

    Platform reporting procedures: TikTok, YouTube, Instagram, Facebook, Twitter/X

    Legal protections (2025):

  • **Take It Down Act**: $150K-$250K damages for NCII, 48-hour removal requirement
  • **State laws**: 40+ states with deepfake legislation
  • **NO FAKES Act**: Federal publicity rights (pending)

    Victim resources:

  • Cyber Civil Rights Initiative (cybercivilrights.org)
  • Take It Down (takeitdown.ncmec.org)
  • FBI IC3 (ic3.gov)
  • Crisis support (text HOME to 741741)

    The reality: Deepfakes are increasing (179 incidents in Q1 2025, 19% more than in all of 2024). But so are the tools to fight them.

    Your role matters:

  • Every report helps build case patterns
  • Your evidence might connect to other cases
  • Platforms prioritize multiple reports
  • You protect the next potential victim

    Remember:

  • ✅ Act quickly (first 24 hours most critical)
  • ✅ Document thoroughly (evidence is power)
  • ✅ Report everywhere (platforms + authorities)
  • ✅ Don't face it alone (resources exist)
  • ✅ Legal protections are real and enforceable

    You found a deepfake. Now you know exactly what to do.

    Don't wait. Act now.

    ---

    Quick Reference Checklist

    Print or bookmark this page. When you find a deepfake:

    ☐ STOP - Don't share or engage
    ☐ SCREENSHOT - Capture everything
    ☐ DOWNLOAD - Save evidence
    ☐ REPORT TO PLATFORM (choose one or more):
       ☐ TikTok in-app report
       ☐ YouTube privacy request (support.google.com/youtube/answer/142443)
       ☐ Instagram report (in-app)
       ☐ Facebook report (in-app or facebook.com/help/contact)
       ☐ Twitter report (in-app)
    
    ☐ REPORT TO FBI (if serious): ic3.gov
    
    ☐ ADDITIONAL ACTIONS (if applicable):
       ☐ Alert victim/affected person
       ☐ File Take It Down report (takeitdown.ncmec.org) [for NCII]
       ☐ Contact fact-checkers (snopes.com/contact)
       ☐ Warn your community
       ☐ Consult attorney (cybercivilrights.org for referral)
    
    ☐ MONITOR:
       ☐ Set 48-hour reminder to check removal status
       ☐ Search for re-uploads weekly
       ☐ Set Google Alert for related terms
    
    ☐ FOLLOW UP:
       ☐ If not removed in 48 hours → Escalate (legal notice)
       ☐ Save all confirmation emails/numbers
       ☐ Document timeline
    

    ---

    Test Your Video:

    Not sure if something is a deepfake? Upload it to our free detector:

  • ✅ **90%+ detection accuracy**
  • ✅ **100% browser-based** (privacy-protected)
  • ✅ **Detailed analysis report**
  • ✅ **Free unlimited scans**

    Analyze Video Now →

    ---

    Last Updated: July 9, 2025

    Updated to reflect Take It Down Act (May 2025) and current platform policies

    ---

    References:

  • Take It Down Act - Public Law 119-XXX (May 2025)
  • FBI Internet Crime Complaint Center (IC3) - 2023 Annual Report
  • TikTok Community Guidelines - Integrity and Authenticity
  • YouTube Privacy Complaint Process - AI/Synthetic Content Removal
  • Meta - Approach to Labeling AI-Generated Content
  • Cyber Civil Rights Initiative - NCII Victim Resources
  • National Center for Missing & Exploited Children - Take It Down Service
  • U.S. Department of Justice - Deepfake Prosecution Guidelines
  • CISA - Election Misinformation Reporting
  • State Deepfake Laws Compilation - 50-State Survey (2025)

    Try Our Free Deepfake Detector

    Put your knowledge into practice. Upload a video and analyze it for signs of AI manipulation using our free detection tool.

    Start Free Detection