
Candidate Fraud: How to Prevent It

Written by

Sam Cook

Reviewed by

VidCruiter Editorial Team

Last Modified

Jun 26, 2025




Candidate fraud occurs when individuals misrepresent themselves at any point during the job application process. Lying about one’s skills, experiences, and education is nothing new. However, generative AI has made it significantly easier for candidates to mislead employers, as these tools can now create convincing text, images, videos, and more. As the threat grows, hiring teams must educate themselves on how to identify candidate fraud and the strategies they can use to stop it.


Is Candidate Fraud on the Rise?

Candidate fraud is rising quickly as generative AI and remote hiring have lowered the skill level required to fake resumes, cover letters, headshots, and even live video interviews. Although virtual recruiting is a boon for companies, it also requires a more rigorous approach to identity verification. Sourcing, interviewing, and hiring remotely, even for in-office or hybrid roles, can invite fraudulent activity into the recruiting and hiring process. 

Fraudulent applicants, meaning individuals who commit fraud in one or more parts of their application, can rise to the top of the candidate pool. Gartner predicts that by 2028, one in four job applicants will be fake.

AI tools and remote hiring are primary factors, but they are not the only contributors to this growing problem. Companies in many industries are hiring fewer people and relying on AI-powered automation to reduce labor costs. In fact, Anthropic’s CEO, Dario Amodei, predicts that AI could eliminate half of entry-level white-collar jobs, a situation that may increase applicant reliance on fraudulent tactics. That decrease in job availability cuts across many industries. Faced with fewer openings and growing employer demand for a broader range of skills and experience, desperate job seekers and even criminal actors have turned to advanced tools to trick hiring teams.

 

Deepfake and AI-Based Fraud

Deepfake and AI-based fraud encompasses any fraudulent content or activity generated using AI tools. The term “deepfake,” a blend of “deep learning” and “fake,” generally applies to AI-generated audio, video, or images that look and sound real. These fakes can be difficult to detect if you have never encountered them, are not familiar with the common signs that fraud is occurring, or lack technology that can automatically flag potential AI-generated content.

HR teams are increasingly experiencing fraud at every step of the hiring process, although most commonly within resumes and cover letters. For example, the cybersecurity company Pindrop explained in a blog post that one job posting received over 800 applications in a short span of time. After performing a deeper analysis on the applicants, Pindrop found that over one-third had fake profiles.

Pindrop’s case is not the norm, but situations like that are growing more frequent, particularly for companies that use job boards like LinkedIn or Indeed to automate many parts of the recruitment process.


Should Your Company Be Worried About AI Fraud?

Most companies should be aware of and ready for AI fraud, but should not panic at the prospect of fake candidates. Although Gartner predicts that a quarter of all applicants will be fake by 2028, generative AI still has many limitations that significantly reduce the threat to companies recruiting and hiring new employees.

At present, deepfakes and fake identities are relatively easy to detect both visually and with the right software. Attention-grabbing headlines about state operatives using deepfakes to infiltrate companies often overstate how widespread deepfakes are. There are very real consequences to hiring the wrong person based on false application information, but most of those risks have always existed, even without deepfake candidates or AI-generated applicant fraud.

The primary task for companies now is learning to identify common signs of AI-generated content, whether with dedicated AI-detection software or with tools that have integrated AI-detection features.

Common Types of Candidate Fraud

Companies must now train hiring teams to recognize the most common types of candidate fraud. These include (but are not limited to) all of the following:

Resume Fraud

Resume fraud is among the most common types of candidate fraud and significantly predates generative AI tools. While not all resume fraud is easy to spot, hiring teams can use various strategies to identify potential issues, such as discrepancies between a candidate’s resume and their professional profile on sites like LinkedIn.

Other red flags on a resume include inconsistent dates, vague bullet points, or noticeably atypical accomplishments for the person’s background and work history.
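As an illustration of the date-consistency check above, here is a minimal sketch that flags overlapping employment periods, assuming resume entries have already been parsed into structured records (the `find_overlaps` helper and the sample data are hypothetical; real resume parsing is out of scope):

```python
from datetime import date

def find_overlaps(jobs):
    """Flag pairs of resume entries whose employment dates overlap.

    `jobs` is a hypothetical list of (title, start, end) tuples
    assumed to be parsed from a resume beforehand.
    """
    jobs = sorted(jobs, key=lambda j: j[1])  # sort by start date
    overlaps = []
    for (title1, _start1, end1), (title2, start2, _end2) in zip(jobs, jobs[1:]):
        if start2 < end1:  # next job starts before the previous one ends
            overlaps.append((title1, title2))
    return overlaps

jobs = [
    ("Analyst", date(2018, 1, 1), date(2020, 6, 30)),
    ("Senior Analyst", date(2020, 1, 1), date(2022, 3, 31)),  # overlaps the first
    ("Manager", date(2022, 4, 1), date(2024, 12, 31)),
]
print(find_overlaps(jobs))  # [('Analyst', 'Senior Analyst')]
```

A flagged overlap is not proof of fraud (concurrent part-time roles exist), but it is a cheap signal worth a follow-up question.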


Credential Fraud

Credential fraud occurs when a candidate falsifies their educational background, including certificates and degrees. Entry-level job candidates have been known to embellish their GPAs or claim academic honors they did not receive. Many employers fail to verify education claims, making this type of fraud relatively easy to commit.

Identity Fraud

Identity fraud occurs when a candidate’s identity is either stolen or completely fabricated. There are multiple ways in which this can occur. For example, a candidate may use someone else’s Social Security number to fake employment verification checks. Identity fraud in the hiring process has become more sophisticated, as many candidates can now create fraudulent social media profiles and generate fake videos, voice recordings, and images to create an entire identity that does not exist or is not theirs.

Reference Fraud

Much like resume fraud, fake references have been a problem among job applicants for decades. Commonly, candidates will supply either fake names and numbers, hoping that potential employers won’t check, or they will have friends or family members pose as former managers. 

The problem has only grown with generative AI tools, as candidates can now easily fake audio and video. Any number listed could lead you to a phone the candidate owns, and the voice on the other end could be a generative AI tool trained to provide the answers you want to hear.


Interview Fraud

Interview fraud covers any type of candidate deception during the interview. This can include candidates using outside help (like a more qualified friend or an AI chatbot) to answer questions. There are even well-reported instances of candidates using on-camera stand-ins who pretend to be the person you are trying to hire.

Fraud during interviews can be as simple as someone reading answers generated by ChatGPT while you ask them questions. It can also be as complex as candidates faking their image, voice, and background in a video to deceive hiring managers, particularly for remote jobs.

Risks of Candidate Fraud for Employers

Companies must be especially cautious to avoid hiring fake candidates. While minor candidate fraud may cause only a few headaches, fake applicants carry real security and financial risks for the company.

More broadly, failing to effectively verify a job candidate’s identity can lead to any of the following:

Data breaches

Imposters can gain access to sensitive systems and data.

 

Regulatory fines

Hiring applicants with stolen identities can result in violating various privacy and hiring laws, potentially putting the company at risk for fines.

Wasted resources

Screening, interviewing, and onboarding new employees takes time and costs the company money. Fake candidates can waste these resources and require companies to go through the hiring process again.

Damage to the company's brand and reputation

News that the company hired a “ghost worker” or was tricked by a fraudulent applicant can cause both existing and potential employees to lose trust in the leadership’s decision-making capabilities.                             

Strong security policies and a well-established brand reputation can help companies avoid these issues or mitigate their impact, but they are not as effective as eliminating fake candidates from the application pool as early as possible.

How to Detect and Prevent Candidate Fraud

The best way to detect and prevent candidate fraud is to learn about the most common signs that fraud is occurring and implement sound strategies to prevent fraudulent candidates from working their way through the process. A dedicated plan should be established and then taught to hiring teams to minimize the potential impact of fake candidates.


How to Spot Fake Candidates

Catching deepfakes can seem difficult, but even the most sophisticated fraudulent content can be uncovered either manually or with dedicated software. This is because AI-generated content has many known limitations that can be exploited to easily catch fraudsters.

Common Red Flags of AI-Generated Fraud

Cover letters and job applications:
  • Generic phrasing and overused buzzwords
  • Lack of personalized details
  • Overuse of certain grammatical constructions
  • Repetitive phrasing or sentence structure
  • Lack of contextual understanding
Professional profiles:
  • Suspiciously new profile on networks like LinkedIn
  • Unnatural headshots
  • Suspicious or inconsistent career path
  • Generic endorsements (specifically on LinkedIn)
  • Very few connections on professional networks
  • No posting history or lack of activity on professional networks
Video interviews:
  • Unnatural facial movements
  • Inconsistencies and anomalies in lighting and shadows
  • Behavioral inconsistencies
  • Audio and video out of sync (not attributable to connection issues)
  • Notable lags between questions and responses
Phone interviews:
  • Inconsistent voice patterns
  • Lack of voice and tone variation
  • Strange background noises
  • Notable lags between questions and responses

A well-trained hiring team can identify many types of AI fraud and deepfakes, even without the assistance of fraud-detecting software. However, HR software with fraud detection capabilities will help flag most fraudulent content submitted by job applicants.

Preventing Candidate Fraud

There are multiple ways to prevent candidate fraud, several of which are specific to the type of fraud you may encounter.

Pre-Interview

  • Resume: Hiring teams should always take the time to verify credentials listed on a candidate’s resume. Software-based approaches can automate the process and improve the accuracy of verification. Consider using background check tools or services and leveraging video interviewing software.

 

  • References: If references are requested, your team should also take the time to verify them. Doing so will easily expose many low-effort fraudulent applications.

 

  • Anti-cheating confidentiality agreements: These agreements require candidates to sign a document promising not to cheat, share test questions, or help others cheat. By signing, candidates acknowledge they understand the rules and agree to face consequences like disqualification or score cancellation if they're caught violating them. This creates a legal deterrent and establishes clear consequences for fraudulent behavior. 

During Interview

  • Deepfake video identification: Catching AI-generated deepfakes in a video interview can be challenging, but some strategies exist, such as having candidates put their hand in front of their face during video interviews (which will disrupt face-swapping filters). 



 

  • Video filters: While interviews are in progress, consider asking candidates to remove any filters they may have up, even if temporarily. Refusal to do so could be a red flag. Many video interview platforms now have built-in ID verification software that can automatically match the face on the camera to the photo ID provided and can detect suspicious behavior, including multiple faces on a screen or lip movements that don’t match the audio.



 

  • Structured interviews: Structured interviews can help hiring teams identify anomalies during the interview process. People who commit interview fraud often study top answers to common interview questions they’ve found online, but they can’t predict follow-up questions or in-depth clarifications. For example, you might prepare a technical or behavioral problem-solving question that requires on-the-spot thinking. 

 

  • Skills assessments: Whenever possible or necessary, incorporate a sample assignment or skills test. Coding exercises and writing prompts are common, but you should use whichever type of test is relevant for the role. Many video platforms allow you to embed these tasks into the interview. Imposters will likely struggle to match the skills they claimed to have on their resume to the questions asked in the interview assessment.

ID Verification

  • Employers should always use identity verification tools that detect potential fraud involving personally identifiable information (PII), such as Social Security numbers. At some point in the application process, applicants should be asked to upload a photo of their driver’s license or passport. This data can be cross-checked in public databases to ensure it’s both legitimate and visually matches the candidate.
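As a sketch of the cross-checking step, the comparison itself can be as simple as diffing extracted fields against the application; the `id_mismatches` helper and both dictionaries below are hypothetical, and real document extraction (OCR) and database lookups are out of scope:

```python
def id_mismatches(application, id_document):
    """Report fields that differ between a candidate's application
    and the data extracted from their photo ID.

    Both arguments are hypothetical dicts of already-extracted
    fields; comparison is case- and whitespace-insensitive.
    """
    shared_fields = application.keys() & id_document.keys()
    return {
        field: (application[field], id_document[field])
        for field in shared_fields
        if application[field].strip().lower() != id_document[field].strip().lower()
    }

application = {"name": "Jane Doe", "dob": "1990-04-02"}
id_document = {"name": "Jane Doe", "dob": "1988-11-15"}  # mismatched birth date
print(id_mismatches(application, id_document))
# {'dob': ('1990-04-02', '1988-11-15')}
```

Any mismatch returned this way should trigger a manual review rather than an automatic rejection, since extraction errors are common.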

Post-Interview

  • Follow-up checks: Verify information you received during the interview by checking with the candidate’s previous employers and educational institutions. If you suspect visual deepfakes, ask previous employers to verify that the candidate’s headshot matches the individual they worked with.

 

  • Check network and device metadata (if available): Log the IP address and geolocation from the live interview. Discrepancies in location can be a sign of fraud.
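A minimal sketch of such a discrepancy check, assuming each logged session has already been resolved to a country via a GeoIP lookup (the `flag_location_mismatch` helper and the sample data are hypothetical):

```python
def flag_location_mismatch(claimed_country, sessions):
    """Flag interview sessions whose IP-derived country differs
    from the candidate's claimed location.

    `sessions` is a hypothetical list of (session_name, ip, country)
    tuples; in practice the country would come from a GeoIP service.
    """
    return [
        (name, ip, country)
        for name, ip, country in sessions
        if country != claimed_country
    ]

sessions = [
    ("screening call", "203.0.113.7", "US"),
    ("technical interview", "198.51.100.22", "RO"),  # unexpected country
]
flags = flag_location_mismatch("US", sessions)
print(flags)  # [('technical interview', '198.51.100.22', 'RO')]
```

As with other signals, a mismatch warrants follow-up rather than rejection, since VPNs and travel produce legitimate discrepancies.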

During Onboarding

  • Monitor competency: During the first 90 days, monitor the new hire’s skills-related competency, for example by having them complete live, supervised projects. Determine whether the skills stated on their resume are actually demonstrated on the job. An obvious lack of skills for the role is a strong indication of fraud.

 

  • Limit access to sensitive systems: Limit the new hire’s access to secure systems by provisioning their account with the bare minimum access. Check for spikes in off-hours access or large data exports.

How to Use Technology to Prevent Candidate Fraud

While manual verification has its place, smart hiring teams are turning to technology to stay one step ahead of candidate fraud. These digital tools work hand-in-hand with your existing screening process to catch dishonest behavior before it impacts your hiring decisions. From blocking cheating attempts during assessments to confirming candidates are who they claim to be, technology fills the gaps that manual review might miss. Here's how the right tech solutions can protect your assessment process and help you evaluate genuine candidate abilities.

Question Security:

  • Copy-paste restrictions - Blocks candidates from copying test questions to share with others or pasting pre-written answers from external sources. This ensures responses are original and prevents candidates from building question banks to share with future applicants.
  • Word count limits - Forces candidates to provide brief, original responses instead of copying lengthy text from websites or documents. Limited word counts make it harder to paste irrelevant content and encourage focused, authentic answers.
  • Spellcheck disabling - Reveals candidates' true writing abilities by removing automated spelling and grammar assistance. This helps employers assess actual communication skills without the help of browser tools or writing software.

Browser Control & Monitoring:

  • Screen/browser locking - Automatically ends the assessment session if candidates try to navigate away from the test page, preventing them from searching Google, using ChatGPT, or accessing other resources. This ensures candidates can only focus on the assessment materials provided.

Identity Verification:

  • Pre-recorded interviews with configuration settings - Offers stronger identity confirmation than traditional phone screenings through visual verification and controlled recording parameters. Candidates must appear on camera and follow specific instructions, making it much harder for impostors to participate.
  • Deepfake detection technology - Uses advanced algorithms to identify artificially generated or manipulated video and audio content, catching candidates who might use fake personas or AI-generated responses to misrepresent themselves.
  • Background checks and IP address monitoring - You may also want to use background check tools that verify a candidate’s listed credentials, as well as automated reference-checking software that detects common signs of reference and credential fraud, such as identical IP addresses between job applicants and their references.
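The identical-IP check mentioned above can be sketched roughly as follows, assuming application and reference-check submissions are logged as (role, name, ip) records (the `shared_ip_pairs` helper and the sample events are hypothetical):

```python
from collections import defaultdict

def shared_ip_pairs(events):
    """Flag IP addresses used by both a candidate and one of their
    references, a common sign of fabricated references.

    `events` is a hypothetical list of (role, name, ip) tuples
    pulled from application and reference-check logs.
    """
    by_ip = defaultdict(list)
    for role, name, ip in events:
        by_ip[ip].append((role, name))
    return {
        ip: people
        for ip, people in by_ip.items()
        # flag only IPs that saw both a candidate and a reference
        if {role for role, _ in people} >= {"candidate", "reference"}
    }

events = [
    ("candidate", "A. Smith", "192.0.2.10"),
    ("reference", "Former Manager", "192.0.2.10"),  # same IP as the candidate
    ("reference", "Second Reference", "198.51.100.5"),
]
print(shared_ip_pairs(events))
# {'192.0.2.10': [('candidate', 'A. Smith'), ('reference', 'Former Manager')]}
```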

Frequently Asked Questions

How can you identify fake candidates?

You can identify a fake candidate by checking references, using identity verification tools, and conducting live interviews using video interview platforms that can detect anomalies in the candidate’s video feed. Structured interview questions or assessments can also identify candidates with fake skills or credentials.

How do you test candidates to detect fraud?

The most effective way to assess candidates is through structured interview questions and live interview assessments. This strategy will help remove the risk of candidates using outside tools or people to answer questions for them. Hiring teams should also look for indications of assistance during live tests or questions, including notable delays in receiving answers.

What is candidate fraud?

Candidate fraud occurs when an individual fakes their identity, skills, experience, credentials, or references during the hiring process. It can even include impersonation during interviews, whether by a real person standing in for the candidate or through AI-generated deepfake video and audio.