Fake Candidates, Real Consequences: Why Workforce Identity Verification Must Evolve
A growing number of cyber incidents are no longer beginning with breached credentials. They’re starting much earlier, by targeting the workforce itself during the hiring process.
Recruiters and HR teams are raising urgent concerns about candidates who seem legitimate on paper but turn out to be someone else entirely. Behind polished résumés and convincing video interviews, attackers are using deepfakes, synthetic identities, and stolen credentials to land jobs under false pretenses. Some are in it for financial gain. Others, like North Korean operatives posing as U.S.-based IT contractors, have far more dangerous motives.

In December 2024, the U.S. Department of Justice indicted 14 such operatives who had infiltrated U.S. companies, exfiltrated sensitive data, and funneled $88 million back to their government. They didn’t breach the system. They got hired into it.
According to research from Unit 42 at Palo Alto Networks, attackers are leveraging AI-generated visuals, forged documents, and even deepfaked interviews to pose as legitimate candidates. The result? Undetected synthetic identities capable of bypassing traditional background checks and security gates, slipping through the hiring funnel. (Source: Unit 42, Palo Alto Networks)
The Deepfake Problem: Cheaper, Faster, Scarier
Creating a convincing deepfake used to take time, money, and skill. Not anymore.
In November 2024, TechCrunch highlighted just how low the barrier to entry has dropped: producing a deepfake of U.S. Vice President Kamala Harris took just five dollars and ten minutes.
With just a few seconds of audio taken from an Instagram Live video, a TikTok post, or even a voice note, fraudsters can create a believable voice clone that can be manipulated to suit their needs. (Source: McAfee)
This technology is now easily accessible to anyone with an internet connection. It is being used to fool video interview platforms, bypass onboarding processes, and impersonate job candidates at scale. Deepfakes are also becoming more sophisticated, harder to detect, more realistic, and increasingly tailored to specific targets. And it doesn't stop there. They are fueling disinformation campaigns, interfering with elections, enabling extortion schemes, driving fraud, and spreading defamation. The implications for organizations and society are vast and increasingly difficult to ignore.
Traditional IAM Systems Are Not Built for the Deepfake Era
Most IAM (Identity and Access Management) and MFA (Multi-Factor Authentication) systems assume the user is real. They verify credentials, devices, or static biometrics like fingerprints or uploaded selfies. But these measures can be:
- Spoofed with generative AI and deepfake video
- Stolen via social engineering or phishing
- Bypassed with SIM swaps or FIDO2 key theft
Even worse, these systems typically don’t verify identity at the point where risk is first introduced: during hiring and onboarding.
Meanwhile, operational burdens are growing. According to Gartner, 40 percent of help desk calls are still password-related. Forrester estimates that each password reset can cost organizations up to $70. These access gaps are draining time and resources, but the stakes go far beyond inefficiency. They are opening the door to serious vulnerabilities, with consequences such as reputational damage, regulatory non-compliance, legal penalties, and erosion of customer trust.
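To make those figures concrete, here is a minimal back-of-the-envelope calculation combining the Gartner and Forrester numbers cited above. The annual ticket volume is an assumption for illustration only; plug in your own help desk data.

```python
# Rough annual cost of password resets, using the cited industry figures.
annual_tickets = 10_000   # ASSUMED help-desk ticket volume (substitute your own)
password_share = 0.40     # Gartner: ~40% of help desk calls are password-related
cost_per_reset = 70       # Forrester: up to $70 per reset (USD)

annual_reset_cost = annual_tickets * password_share * cost_per_reset
print(f"${annual_reset_cost:,.0f}")  # $280,000
```

Even at modest ticket volumes, the recurring cost of password-related support alone runs into six figures, before counting any of the security exposure described above.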
Candidate Verification Is the First Line of Defense Against AI-Driven Impersonation
Organizations must shift their approach to security. It is no longer enough to verify credentials alone. Companies must verify the identity of the person behind those credentials, ensuring that a real human is on the other side of the screen and that they are exactly who they claim to be.
This requires real-time identity verification at every critical moment along the employee journey, including hiring, onboarding, and account recovery. These are points where attackers are most likely to exploit gaps in trust.
That is where Incode Workforce comes in. Our platform brings AI-powered biometric, real-time, liveness-verified identity checks into your existing workflows. By validating both the authenticity of the user and the truth of their claimed identity, Incode Workforce helps organizations stop imposters before they ever gain access.
Incode Workforce: Intelligent Identity Verification for Today’s Threats
Incode Workforce integrates directly into existing IAM systems to confirm a person’s true, unique identity, not just what they know, like a password, or what they carry, like an ID that could be lost, stolen, or forged.
Stay Ahead of Deepfakes and Identity Threats
Legacy tools miss what Incode catches. Powered by multi-signal AI and our proprietary Trust Graph, Incode blocks 99% of spoofing and synthetic fraud attempts before they become a threat.
Using advanced facial recognition, passive liveness detection, and behavioral fraud signals, Incode detects and stops deepfakes, impersonators, and synthetic identities in real time, helping organizations stay a step ahead of the fastest-evolving identity threats.
Simple and Secure User Experience
Employees and candidates complete a one-time enrollment using a live selfie and government-issued ID. Later, when identity needs to be verified, such as during a password reset or access escalation, they simply take a quick selfie. The process is completed in seconds, all within an intuitive platform. No passwords or codes required.
Device-independent and Fully Automated
Incode does not require special hardware or tokens. It works across devices and platforms, and can be triggered automatically by your IAM system during high-risk workflows.
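Conceptually, triggering verification from an IAM system works like a step-up gate: routine events pass through, while high-risk events (password resets, access escalations) must clear a liveness-verified identity check first. The sketch below illustrates that pattern with entirely hypothetical names; it is not Incode's actual SDK or API.

```python
from dataclasses import dataclass

# Hypothetical event types an IAM system might flag as high-risk.
HIGH_RISK_EVENTS = {"password_reset", "access_escalation", "new_device_login"}

@dataclass
class AccessEvent:
    user_id: str
    event_type: str

def requires_step_up(event: AccessEvent) -> bool:
    """Return True when the event should trigger biometric re-verification."""
    return event.event_type in HIGH_RISK_EVENTS

def handle_event(event: AccessEvent, verify_identity) -> str:
    """Gate the workflow: low-risk events pass through unchanged;
    high-risk events must first clear the injected identity check
    (in production, a call to the verification vendor's API)."""
    if not requires_step_up(event):
        return "allowed"
    return "allowed" if verify_identity(event.user_id) else "denied"

if __name__ == "__main__":
    # Stand-in verifiers simulate a passed and a failed liveness check.
    ok = handle_event(AccessEvent("alice", "password_reset"), lambda uid: True)
    blocked = handle_event(AccessEvent("mallory", "password_reset"), lambda uid: False)
    routine = handle_event(AccessEvent("bob", "profile_view"), lambda uid: False)
    print(ok, blocked, routine)  # allowed denied allowed
```

The key design point is that the verification step is injected by policy at specific workflow events rather than bolted onto every login, which is what keeps user friction low.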
Why This Matters to Security and HR Professionals
For Information Security leaders, synthetic identities and deepfake-enabled impersonation introduce new vulnerabilities that traditional access controls cannot detect. Incode Workforce adds an intelligent identity layer that enhances security without increasing user friction, ensuring only real individuals can access critical systems.
For HR professionals, the rise of fake candidates jeopardizes hiring integrity and increases the risk of insider threats. It also exposes companies to reputational harm, legal liability, and compliance gaps. Verifying identity upfront protects hiring pipelines, preserves company trust, and ensures qualified candidates are who they say they are.
Both teams play a vital role in workforce integrity. Incode Workforce empowers them to partner more effectively in reducing risk from day one.
The Way Forward
The workforce is now a frontline target for cybercriminals. Deepfakes, synthetic identities, and AI-driven impersonation are redefining how attackers infiltrate organizations, not by breaking in but by getting hired.
Outdated identity and access tools are falling short. The new threat landscape demands real-time, human-centric verification that works from day one and across the employee lifecycle.
Incode Workforce delivers exactly that. It brings security, simplicity, and intelligence to every access point, empowering both IT and HR teams to prevent impersonation and build trust with every hire.
Workforce identity verification is not just a response to the deepfake era. It is the right move for any organization that wants to stay secure, stay compliant, and stay ahead.
Sources
- U.S. DOJ: North Korean IT Worker Fraud
- Unit 42: Synthetic Identity Creation
- TechCrunch: Deepfake Accessibility
- McAfee: Beware the Artificial Impostor
- Gartner IAM Trends
- Forrester Research