
Detecting AI Misuse in Recruitment Interviews
Learn how AI detectors identify fraudulent candidate responses in interviews, ensuring authentic hiring.
Candidate fraud occurs when individuals misrepresent themselves at any point during the job application process. Lying about one's skills, experience, and education is nothing new, but generative AI has dramatically lowered the effort required to mislead employers. These tools can now produce convincing text, images, videos, and more. Because the problem has grown into a serious threat to employers, hiring teams must learn how to identify candidate fraud and adopt strategies to stop it.
Candidate fraud is rising quickly as generative AI and remote hiring have lowered the skill level required to fake resumes, cover letters, headshots, and even live video interviews. Although virtual recruiting is a boon for companies, it also requires a more rigorous approach to identity verification. Sourcing, interviewing, and hiring remotely, even for in-office or hybrid roles, can invite fraudulent activity into the recruiting and hiring process.
Fake applicants, meaning individuals who commit fraud in one or more parts of their application, can rise to the top of the candidate pool. Gartner predicts that by 2028, one in four job applicants will be fake.
AI tools and remote hiring are the primary factors, but they are not the only contributors to this growing problem. Companies in many industries are hiring fewer people and relying on automated AI tools to reduce labor costs. Anthropic's CEO, Dario Amodei, predicts that AI could eliminate half of entry-level white-collar jobs, a shrinking of opportunity that may push more applicants toward fraud. Combined with employers demanding a broader range of skills and experience, desperate job seekers and even criminal actors have turned to advanced tools to trick hiring teams.
Deepfake and AI-based fraud encompasses any fraudulent content or activity generated with AI tools. The term "deepfake" generally refers to AI-generated audio, video, or images that look and sound real; the "deep" comes from the deep learning models used to create them. This content can be difficult to detect if you have never encountered it, are unfamiliar with the common signs of AI generation, or lack technology that automatically flags potential AI-generated content.
HR teams are increasingly experiencing fraud at every step of the hiring process, although most commonly within resumes and cover letters. For example, the cybersecurity company Pindrop explained in a blog post that one job posting received over 800 applications in a short span of time. After performing a deeper analysis on the applicants, Pindrop found that over one-third had fake profiles.
Pindrop’s case is not the norm, but situations like that are growing more frequent, particularly for companies that use job boards like LinkedIn or Indeed to automate many parts of the recruitment process.
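Hiring teams fielding hundreds of applications can run a simple first-pass screen before any deeper analysis. The sketch below is purely illustrative and assumes applications have already been parsed into dictionaries; it groups applicants who reuse the same contact details, one common signal of bulk-submitted fake profiles. Real fraud analysis of the kind Pindrop performs is far more sophisticated than this.

```python
from collections import defaultdict

def cluster_by_contact(applications):
    """Group application IDs that reuse the same phone number or email.

    `applications` is a list of dicts with 'id', 'email', and 'phone'
    keys. Returns only clusters containing more than one application,
    since shared contact details across "different" candidates are a
    common sign of bulk-submitted fake profiles.
    """
    clusters = defaultdict(set)
    for app in applications:
        for key in ("email", "phone"):
            value = app.get(key, "").strip().lower()
            if value:
                clusters[(key, value)].add(app["id"])
    return {k: ids for k, ids in clusters.items() if len(ids) > 1}

apps = [
    {"id": "a1", "email": "pat@example.com", "phone": "555-0100"},
    {"id": "a2", "email": "sam@example.com", "phone": "555-0100"},
    {"id": "a3", "email": "lee@example.com", "phone": "555-0199"},
]
print(cluster_by_contact(apps))  # a1 and a2 share a phone number
```

A check like this only surfaces the clumsiest fraud, but it is cheap to run over an entire applicant pool before reviewers spend time on individual files.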
Most companies should be aware of and ready for AI fraud, but should not panic at the prospect of fake candidates. Although Gartner predicts that a quarter of all applicants will be fake by 2028, generative AI still has many limitations that significantly reduce the threat to companies recruiting and hiring new employees.
At present, deepfakes and fake identities are relatively easy to detect, both visually and with the right software. Attention-grabbing headlines about state operatives using deepfakes to infiltrate companies often overstate how widespread deepfakes are. There are very real consequences to hiring the wrong person based on false application information, but most of those consequences have always existed, even without deepfake candidates or AI-generated applicant fraud.
The primary task for companies now is learning to identify the common signs of AI-generated content, whether with dedicated AI-detection software or with tools that have integrated AI-detection features.
Companies must now train hiring teams to recognize the most common types of candidate fraud. These include (but are not limited to) all of the following:
Resume fraud is among the most common types of candidate fraud and significantly predates generative AI. While not all resume fraud is easy to detect, hiring teams can use several strategies to identify potential issues, such as checking for discrepancies between a candidate's resume and their professional profile on sites like LinkedIn.
Other red flags on a resume include inconsistent dates, vague bullet points, or noticeably atypical accomplishments for the person’s background and work history.
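Some of these checks can be automated. As an illustrative sketch rather than a production screening tool, a script could scan the employment dates parsed from a resume and flag overlapping stints or long unexplained gaps; the data shape and the 180-day gap threshold here are assumptions, not validated cutoffs.

```python
from datetime import date

def flag_date_issues(jobs, max_gap_days=180):
    """Flag overlapping stints and long gaps in a work history.

    `jobs` is a list of (title, start, end) tuples with datetime.date
    values. Returns human-readable flags for a reviewer to follow up on.
    """
    flags = []
    ordered = sorted(jobs, key=lambda j: j[1])  # sort by start date
    for (t1, s1, e1), (t2, s2, e2) in zip(ordered, ordered[1:]):
        if s2 < e1:
            flags.append(f"overlap: '{t1}' and '{t2}'")
        elif (s2 - e1).days > max_gap_days:
            flags.append(f"gap of {(s2 - e1).days} days before '{t2}'")
    return flags

history = [
    ("Analyst", date(2019, 1, 1), date(2021, 6, 30)),
    ("Engineer", date(2021, 1, 1), date(2023, 2, 28)),  # starts before the analyst role ends
    ("Manager", date(2024, 6, 1), date(2025, 1, 31)),   # long unexplained gap
]
print(flag_date_issues(history))
```

Flags like these are prompts for a conversation with the candidate, not proof of fraud; overlapping roles and career gaps often have legitimate explanations.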
Credential fraud occurs when a candidate falsifies their educational background, including certificates and degrees. Entry-level job candidates have been known to embellish their GPAs or claim academic honors they did not receive. Many employers fail to verify education claims, making this type of fraud relatively easy to commit.
Identity fraud occurs when a candidate’s identity is either stolen or completely fabricated. There are multiple ways in which this can occur. For example, a candidate may use someone else’s Social Security number to fake employment verification checks. Identity fraud in the hiring process has become more sophisticated, as many candidates can now create fraudulent social media profiles and generate fake videos, voice recordings, and images to create an entire identity that does not exist or is not theirs.
Much like resume fraud, fake references have been a problem among job applicants for decades. Commonly, candidates will supply either fake names and numbers, hoping that potential employers won’t check, or they will have friends or family members pose as former managers.
The problem has only grown with generative AI, as candidates can now easily fake audio and video. Any number listed could lead to a phone the candidate owns, and the voice on the other end could be a generative AI tool trained to provide the answers you want to hear.
Interview fraud covers any type of candidate deception during the interview itself. This can include using outside help (such as a more qualified friend or an AI chatbot) to answer questions. There are even well-documented instances of candidates using on-camera stand-ins who pretend to be the person you are trying to hire.
Fraud during interviews can be as simple as someone reading answers generated by ChatGPT while you ask them questions. It can also be as complex as candidates faking their image, voice, and background in a video to deceive hiring managers, particularly for remote jobs.
Companies must be especially cautious to avoid hiring fake candidates. While minor forms of candidate fraud cause a few headaches, fake applicants carry real security and financial risks for the company.
More broadly, failing to effectively verify a job candidate’s identity can lead to any of the following:
- Data breaches: Imposters can gain access to sensitive systems and data.
- Regulatory fines: Hiring applicants with stolen identities can result in violating various privacy and hiring laws, potentially putting the company at risk for fines.
- Wasted resources: Screening, interviewing, and onboarding new employees takes time and costs the company money. Fake candidates can waste these resources and require companies to go through the hiring process again.
- Damage to the company's brand and reputation: News that the company hired a "ghost worker" or was tricked by a fraudulent applicant can cause both existing and potential employees to lose trust in leadership's decision-making.
Strong security policies and a well-established brand reputation can help companies avoid these issues or mitigate their impact, but they are not as effective as eliminating fake candidates from the application pool as early as possible.
The best way to detect and prevent candidate fraud is to learn about the most common signs that fraud is occurring and implement sound strategies to prevent fraudulent candidates from working their way through the process. A dedicated plan should be established and then taught to hiring teams to minimize the potential impact of fake candidates.
Catching deepfakes can seem daunting, but even the most sophisticated fraudulent content can be uncovered, either manually or with dedicated software. AI-generated content has many known limitations that make fraudsters easier to catch.
A well-trained hiring team can identify many types of AI fraud and deepfakes, even without the assistance of fraud-detecting software. However, HR software with fraud detection capabilities will help flag most fraudulent content submitted by job applicants.
There are multiple ways to prevent candidate fraud, several of which are specific to the type of fraud you may encounter.
While manual verification has its place, smart hiring teams are turning to technology to stay one step ahead of candidate fraud. These digital tools work hand-in-hand with your existing screening process to catch dishonest behavior before it impacts your hiring decisions. From blocking cheating attempts during assessments to confirming candidates are who they claim to be, technology fills the gaps that manual review might miss. Here's how the right tech solutions can protect your assessment process and help you evaluate genuine candidate abilities.
- Question Security
- Browser Control & Monitoring
- Identity Verification
You can identify a fake candidate by checking references, using identity verification tools, and conducting live interviews using video interview platforms that can detect anomalies in the candidate’s video feed. Structured interview questions or assessments can also identify candidates with fake skills or credentials.
The most effective way to assess candidates is through structured interview questions and live assessments. This approach reduces the risk of candidates using outside tools or people to answer for them. Hiring teams should also watch for signs of assistance during live tests, including notable delays before answers arrive.
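The "notable delays" signal can even be roughed out in code. This hypothetical sketch assumes each answer in a live assessment has been timestamped, then flags responses whose latency is far above the candidate's own median, which may indicate consultation of an outside tool. The multiplier and floor are assumptions for illustration, not validated cutoffs.

```python
import statistics

def flag_slow_answers(latencies_sec, factor=3.0, floor_sec=20.0):
    """Return indices of answers that took unusually long.

    An answer is flagged when its latency exceeds `factor` times the
    candidate's own median latency AND an absolute floor, so a
    uniformly slow but thoughtful candidate is not penalized.
    """
    median = statistics.median(latencies_sec)
    return [
        i for i, t in enumerate(latencies_sec)
        if t > factor * median and t > floor_sec
    ]

# Answers 1-4 took roughly 10 seconds; answer 5 took 95, a possible lookup.
print(flag_slow_answers([8.0, 11.0, 9.5, 12.0, 95.0]))  # [4]
```

As with resume checks, a flagged answer is a prompt for a follow-up question, not evidence of cheating on its own; some questions legitimately take longer to answer.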
Candidate fraud occurs when an individual fakes their identity, skills, experience, credentials, and references during the hiring process. Candidate fraud may even include impersonation during interviews, where real people take the place of the candidate, or through deepfakes of video and audio generated by AI tools.