Launching 2026 · Early Access Open

Deepfake hiring fraud
is up 3,000%.

Federal agencies warn that AI-generated candidates are infiltrating hiring pipelines at scale. SmartShield™ detects deepfake video, cloned voices, AI-written applications, and stolen identities — before they reach your team.

1 in 4 candidates will be fake by 2028 — Gartner
300+ US firms hired imposters linked to state actors
70 min to create a fake candidate — Palo Alto Networks
SmartShield™ — Resume Authenticity Scan
SCANNING · resume_john_d.pdf
94% AI-Generated
John Doe
Software Engineer · San Francisco, CA
AI-Generated — 97% confidence
Results-driven software engineer with 6+ years of experience building scalable, high-performance applications. Passionate about leveraging cutting-edge technologies to deliver innovative solutions that drive business growth.
Experience
Senior Engineer — Acme Corp 2021–Present
Spearheaded the development of a microservices architecture that reduced system latency by 40%, resulting in significant improvements to user engagement and platform reliability. AI-Generated — 92%
Managed team of 4 engineers across 2 time zones. Likely Human
AI-Generated — 89% confidence
Proficient in Python, JavaScript, TypeScript, React, Node.js, AWS, Docker, Kubernetes, CI/CD pipelines, and agile methodologies with a proven track record of delivering complex projects on time.
Perplexity
12.3 /100
Uniformity
High
Burstiness
0.08 /1.0
HIGH RISK — 3 of 4 sections flagged as AI-generated
v2.4

The threat is real — and accelerating

3,000% increase in deepfake fraud attempts — 2023
76% of hiring managers say AI makes verifying candidate authenticity harder
50% of businesses have faced AI-driven deepfake fraud
$16.6B in total fraud losses reported to the FBI in 2024

Sources: Keepnet 2026, Resume Genius 2024, CBS News 2025, FBI IC3 2024

Four Layers of Detection

Every signal. Every format. Every stage.

SmartShield™ scans across the entire candidate journey — from application to offer.

Most Common Threat

AI-Generated Text Detection

Detects ChatGPT, Claude, and other LLM-written content in resumes, cover letters, assessment responses, and written submissions. Uses perplexity scoring, linguistic fingerprinting, and stylometric analysis to flag AI-authored content with confidence levels.

Resumes & Cover Letters Assessment Answers Written Submissions
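The perplexity and burstiness metrics shown in the scan mockup above can be illustrated in a few lines. This is a toy sketch, not SmartShield's production model — real detectors score perplexity under a large language model rather than the simple unigram model assumed here:

```python
import math
import re
import statistics

def burstiness(text):
    # Ratio of sentence-length std-dev to mean. Human prose mixes short
    # and long sentences (high burstiness); LLM output is often uniform.
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

def unigram_perplexity(text, corpus_counts, corpus_total):
    # Toy perplexity under an add-one-smoothed unigram model built from a
    # reference corpus. Lower perplexity = more "predictable" text.
    words = text.lower().split()
    vocab = len(corpus_counts) + 1  # +1 for unseen words
    log_prob = sum(
        math.log((corpus_counts.get(w, 0) + 1) / (corpus_total + vocab))
        for w in words
    )
    return math.exp(-log_prob / max(len(words), 1))
```

A production system would compute perplexity with a transformer language model and combine many such signals before emitting a confidence score.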

Video Deepfake Detection

Real-time and asynchronous analysis of video interviews. Scans facial landmark consistency, micro-expression patterns, lip-sync alignment, and pixel-level artifacts to detect face-swaps, AI-generated faces, and manipulated video.

Live Interviews Recorded Video Async Submissions

Voice Clone Detection

Analyzes voice frequency patterns, breath dynamics, prosody, and spectral features to detect AI-synthesized or cloned voices. Catches candidates using voice changers or AI voice-over during phone screens and interviews.

Phone Screens Video Audio Track Voice Messages

Identity Verification

Matches government-issued ID against the live person on camera. Liveness detection ensures the candidate is physically present — not a photo, video replay, or mask. Catches stolen and synthetic identities before onboarding.

ID + Face Match Liveness Detection Synthetic ID Flagging

Live Interview Scanning

Deepfake detection that runs while you interview.

SmartShield™ monitors video interviews in real time — analyzing facial landmark consistency, lip-sync alignment, voice biometrics, and micro-expression patterns frame-by-frame. Works live or on recorded submissions.

Facial Landmark Analysis

Frame-by-frame tracking of 468+ facial landmarks to detect inconsistencies invisible to the human eye.
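One simple consistency signal — sketched here under the assumption that landmarks arrive as per-frame (x, y) arrays from a tracker such as MediaPipe Face Mesh, not SmartShield's actual pipeline — is residual landmark motion after subtracting the face's overall movement:

```python
import numpy as np

def landmark_jitter(frames):
    # frames: array of shape (T, N, 2) -- N facial landmarks tracked
    # across T video frames. Face-swaps often show individual landmarks
    # jumping out of step with the face's global motion between frames.
    deltas = np.linalg.norm(np.diff(frames, axis=0), axis=2)  # (T-1, N)
    global_motion = deltas.mean(axis=1, keepdims=True)        # per-frame avg
    residual = np.abs(deltas - global_motion)
    return float(residual.max())  # large value = suspicious jump
```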

Lip-Sync Detection

Compares audio waveform timing against lip movement patterns to catch AI voice-overs and dubbed video.
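The idea can be illustrated as a correlation between the audio loudness envelope and the mouth aperture measured from landmarks — a sketch with assumed per-frame inputs; production systems use learned audio-visual models rather than raw correlation:

```python
import numpy as np

def lipsync_score(audio_envelope, mouth_opening):
    # Both inputs are per-video-frame series: audio loudness and lip
    # aperture. Genuine speech correlates strongly; AI voice-over or
    # dubbed video typically does not.
    a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-9)
    m = (mouth_opening - mouth_opening.mean()) / (mouth_opening.std() + 1e-9)
    return float((a * m).mean())  # Pearson correlation, in [-1, 1]
```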

Voice Biometrics

Spectral analysis of voice frequency, breath dynamics, and prosody to detect AI-synthesized or cloned voices.
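Spectral flatness is one classic feature of this kind — shown here as an illustration, not as SmartShield's feature set. Natural speech carries noise-like components (breath, room tone) that some synthesized voices lack:

```python
import numpy as np

def spectral_flatness(signal, frame_size=512):
    # Geometric mean / arithmetic mean of the power spectrum of one frame.
    # Near 0 for purely tonal signals; near 1 for noise-like signals
    # (breath and room noise push real recordings upward).
    spectrum = np.abs(np.fft.rfft(signal[:frame_size])) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(spectrum))) / spectrum.mean())
```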

Live & Async Support

Runs during live video calls or on uploaded recordings from HireVue, Spark Hire, Zoom, and other platforms.

Fraud Evidence Report

Every flag comes with proof — not just a score.

SmartShield™ doesn't just flag candidates. It generates a detailed evidence report for every scan — showing exactly what was detected, where, and with what confidence. Your recruiters review clear findings, not opaque AI scores.

Confidence scores for each detection layer
Timestamped evidence with visual annotations
Exportable PDF for legal and compliance records
Full audit trail for every fraud attempt
SmartShield™ Evidence Report
Scan #4821 · Mar 3, 2026
John D. — Software Engineer
Remote · Video Interview · Mar 3 at 2:14 PM
HIGH RISK — Fraud Indicators Detected
2 of 4 detection layers flagged anomalies
AI Text (Resume)
94% AI
Video Deepfake
87% fake
Voice Biometrics
88% real
ID Verification
96% match
Candidate consent: ✓ Obtained · BIPA compliant
Export PDF →

Built for Compliance from Day One

SmartShield™ includes a built-in candidate consent workflow — candidates are informed and opt in before any biometric analysis begins. Designed to comply with BIPA (Illinois), GDPR (EU), EEOC anti-discrimination guidelines, and state-level biometric data laws.

BIPA Compliant GDPR Ready EEOC Aligned Consent Workflow Built In

Risk Calculator

What would one fraudulent hire cost you?

Most companies don't calculate the risk until after it happens. See your exposure now.

Your Hiring Pipeline

Annual hires: 100 (slider range 10–1,000)
Average salary: $85,000 (slider range $30K–$200K)
Expected fraud rate: 5% (slider range 1%–25%; 25% is Gartner's 2028 projection)
Estimated Annual Fraud Exposure: $1,275,000
Fraudulent hires / year: 5
Cost per incident: $255,000
Data breach risk: High
Legal liability

Cost per incident includes salary paid, investigation costs, remediation, potential data breach liability, and reputational damage. Based on industry estimates of 3× annual salary for fraud-related separations. Federal agencies now consider lack of verification controls a negligent hiring factor.
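The calculator's arithmetic is straightforward; here it is as a sketch using the example figures above (100 hires, $85,000 average salary, 5% fraud rate, 3× salary per incident):

```python
def fraud_exposure(annual_hires, avg_salary, fraud_rate, cost_multiple=3):
    # Cost per incident ~ 3x annual salary: salary paid, investigation,
    # remediation, breach liability, and reputational damage.
    fraudulent_hires = round(annual_hires * fraud_rate)
    cost_per_incident = avg_salary * cost_multiple
    return fraudulent_hires, cost_per_incident, fraudulent_hires * cost_per_incident

hires, per_incident, exposure = fraud_exposure(100, 85_000, 0.05)
# -> 5 fraudulent hires, $255,000 per incident, $1,275,000 annual exposure
```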

Integrations

Standalone or embedded. Your choice.

Use SmartShield™ as a standalone scan platform, or embed it directly into your existing interview and ATS tools.

Spark Hire
Zoom
Teams
Workday
REST API

Standalone Mode

Upload recordings, resumes, or connect your interview links. SmartShield™ scans and returns a full evidence report.

Embedded Mode

Integrate directly into your video interview platform or ATS. Scans run automatically — recruiters see results inline.

Common Questions

Everything you need to know

Does SmartShield™ work on live interviews, recorded submissions, or both?

Both. SmartShield™ can run real-time analysis during live video interviews, catching deepfakes as they happen. It also supports asynchronous scanning — upload a recorded video, audio file, or written submission and receive a full evidence report within minutes.

Do candidates know they're being scanned?

Yes — transparency is built in. SmartShield™ includes a consent workflow that informs candidates before any biometric analysis begins. This is required for BIPA compliance and aligns with GDPR. Candidates who decline consent are flagged for manual review, not automatically rejected.

Can it detect resumes and cover letters written by AI?

Yes. Our text analysis engine uses perplexity scoring, linguistic fingerprinting, and stylometric analysis to detect content generated by ChatGPT, Claude, Gemini, and other LLMs. It works on resumes, cover letters, assessment responses, and any written submission in your pipeline. Each scan returns a confidence score showing the probability of AI authorship.

What happens when SmartShield™ flags a candidate?

The recruiter receives a detailed evidence report with confidence scores, flagged anomalies, and timestamped evidence. SmartShield™ never auto-rejects — human review is always in the loop. The report is exportable as PDF for legal records, and every scan is logged in a fraud attempt audit trail for compliance.

When will SmartShield™ be available?

SmartShield™ is currently in development with a planned launch in 2026. Early access members get priority onboarding, feature input, and preferred pricing. The threat is growing now — join the waitlist to be ready when it arrives.

Don't wait until it happens to you

Your next hire might
not be real.

Be the first to deploy AI fraud detection in your hiring pipeline. Early members get priority access, feature input, and preferred pricing.

No commitment. We'll reach out to discuss your needs.