How Employers Are Confronting Deepfake Interview Fraud
The interview begins. But the candidate's eye and mouth movements look unnatural and out of sync with his speech. The edges of his face are distorted. His clothes don't move when he shifts in his chair.
These are signs that many recruiters are beginning to recognize and take seriously. Signs that their job candidate isn’t real. Especially in remote, tech-heavy roles, deepfake candidates and AI-enabled interview fraud are no longer edge cases. They have become part of the everyday risk landscape for hiring teams.
According to Greenhouse’s AI in Hiring Report, 91% of U.S. hiring managers report that they have either suspected or caught AI-driven candidate misrepresentation.
Greenhouse, a recruiting software company based in New York City, found that the most common forms of AI-enabled interview fraud are candidates’ use of voice cloning, AI scripts during interviews, and deepfake technology to pose as someone else or have someone else pose as them.
“We’re seeing a spectrum of AI use,” said Daniel Chait, CEO and co-founder of Greenhouse. “It’s on the rise because AI use has become more pervasive, hiring has moved increasingly to being online rather than in person, and hiring volumes have increased, creating the conditions for this type of candidate behavior.”
Chait said that there’s less concern about job applicants using AI to write cover letters or tailor resumes. But on the other end, “you have persistent efforts by North Korea and cybercriminals to infiltrate companies. And in between is a very large gray area that’s changing fast.”
When AI Help Becomes AI Deception
Job seekers use AI tools to mass-apply, tailoring resumes to job descriptions. Candidates routinely use AI tools to prepare for interviews. But some are using AI to help them respond to questions during an interview or take an assessment.
"Some companies are saying that it is important for us to hire people who are fluent in AI and it's OK to use it in some cases, while others are saying 'we're trying to assess you, not the AI,'" Chait said. "The tools are changing rapidly, societal expectations of what's ethical are shifting, and it's very much a cat-and-mouse game," he added.
But there is one line nearly everyone agrees should not be crossed: pretending to be someone else. Using deepfake technology to mask who is actually taking the interview “is definitely not okay. And it is happening,” Chait said.
Remote technology and customer service roles are the most common targets, according to experts.
‘Something Just Feels Off’
For fully remote companies like Zapier, based in San Francisco, the risk isn’t theoretical. “We’ve seen the whole range,” said Anita Chandrasekhar, global head of talent strategy and operations at Zapier. “From low-risk, high-noise bot applications that overwhelm recruiters to more serious cases of fraud that are getting better and harder to detect. And impersonation, where one person interviews and a completely different person shows up to do the job.”
Chandrasekhar noted that candidate deception didn’t start with AI. But AI has “made it easier to conduct and harder to spot.”
Zapier’s response has been deliberately multi-layered. At the top of the funnel, the company runs checks on work history, IP addresses, location data, and social profile consistency. The company flags candidates whose IP location doesn’t match where they are interviewing from, recognizing that they could be traveling and applying or doing the interview from another location.
Zapier expects cameras to be on during interviews, records the interviews, and trains recruiters to watch for signals such as typing sounds before answers, long pauses that suggest listening to a script, or inconsistencies between interview stages.
“A big red flag is someone who consistently has excuses for why they can’t turn on their camera for an interview,” Chandrasekhar said. “Recorded video interviews have become a great tool to detect fraud. We send the videos from the recruiter screen to the hiring manager to check if the same person shows up for the later interview,” she said.
“With deepfakes, there’s a mismatch in speaking and gesturing,” she added. “Something just feels off. The initial deepfakes were easier to spot, but they have gotten more sophisticated.”
Chait added that companies are training their teams to look for suspicious behavior like darting eyes and long pauses, in addition to implementing remote proctoring tools to ensure integrity.
According to the Greenhouse study, 87% of U.S. recruiters say they have tightened their screening process in some way due to candidate fraud.
As for returning to in-person job interviews, Chait said that companies are thinking about it but are put off by the cost and the message that action may send to top talent.
Technology and Training
Recruiting technology providers are racing to keep up with candidate cheating, developing fraud detection tools to spot deepfakes.
Greenhouse has added identity verification in partnership with CLEAR that can be triggered at any stage of the hiring process, ensuring that the person interviewed is actually the candidate and the same person who shows up on day one. Another Greenhouse tool analyzes a candidate’s digital footprint, scanning IP addresses, emails, resume patterns, and LinkedIn data for signals commonly associated with fraud.
“The technology is adaptive,” Chait said. “It’s constantly building protection against new attack vectors.”
Chandrasekhar said that “we’ve gotten ahead of this as much as we can and we will continue to evolve as the fraud does. Process, training, and technology all have to work together.”
She added that interview training is ongoing and that Zapier continually updates its operating procedure for responding to red flags. For example, when interviewers notice a candidate using a blurred background, "we ask them to disable it for a second and then they can put it back. If they are using a deepfake, that will be a challenge."
For the sake of candidate experience, Zapier's recruiters and hiring managers are directed not to end an interview or confront the candidate over suspected fraud; there could be a legitimate explanation for what the interviewer is observing.
Chandrasekhar also emphasized the importance of explaining the “why” behind these controls. Candidates are more willing to accept intrusive camera requirements, recordings, and identity checks when they understand that these policies are there to protect both sides from fraud, she said.
This article is courtesy of Society for Human Resource Management (SHRM)