Predictive Validity
Predictive Validity Definition
How can HR define predictive validity? It assesses how effectively HR technologies (recruitment software, employee assessment tools, or performance management systems) predict employees' future performance.
Along with concurrent validity, predictive validity (PV) is a subtype of criterion validity. Its applications extend beyond recruitment: organizations in healthcare, psychology, and education also test for predictive validity.
High predictive validity means the recruitment software or HR technology accurately and reliably predicts job performance, employee retention, and even leadership potential.
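In practice, predictive validity is often estimated as the correlation between scores collected before hiring and a criterion measured later, such as performance ratings. The sketch below illustrates that idea with hypothetical assessment scores and one-year performance ratings; the numbers are invented for illustration only.

```python
# Minimal sketch: estimating a predictive validity coefficient as the
# correlation between pre-hire assessment scores and later performance
# ratings. All data below are hypothetical.
from scipy.stats import pearsonr

# Pre-hire assessment scores for eight hires (hypothetical)
assessment_scores = [62, 74, 81, 55, 90, 68, 77, 85]

# Performance ratings for the same hires after one year (hypothetical)
performance_ratings = [3.1, 3.6, 4.2, 2.8, 4.5, 3.3, 3.9, 4.4]

r, p_value = pearsonr(assessment_scores, performance_ratings)
print(f"Validity coefficient r = {r:.2f} (p = {p_value:.3f})")
# A strong positive r suggests the assessment has high predictive validity.
```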
Predictive Validity Examples
Applicant Tracking Systems (ATS)
Within an ATS, a recruiter analyzes candidates’ resumes, past job experience, and behavioral assessments against predefined criteria to predict which applicants are most likely to succeed in a given role. If the candidates the tool flags as high performers go on to deliver strong results, the ATS has high predictive validity.
AI-Powered Recruiting
AI tools use algorithms to analyze resume keywords, assess candidate skills, and predict a candidate’s likelihood of success in a role. These tools rely on historical data to build predictive models, comparing past candidate profiles with those of high-performing employees to forecast on-the-job success.
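A minimal sketch of this predictive-modeling idea follows: a simple model is trained on historical candidate features and whether those hires became high performers, then used to score a new candidate. The features, labels, and candidate values are hypothetical, and real AI recruiting tools use far richer signals and larger datasets.

```python
# Minimal sketch: fit a simple classifier on historical candidate data
# (hypothetical features and labels) and forecast success for a new applicant.
from sklearn.linear_model import LogisticRegression

# Historical candidates: [years_experience, skills_match_pct, assessment_score]
X_history = [
    [2, 60, 55],
    [5, 80, 72],
    [7, 90, 88],
    [1, 40, 50],
    [4, 70, 65],
    [8, 85, 90],
]
# 1 = became a high performer, 0 = did not (hypothetical labels)
y_history = [0, 1, 1, 0, 0, 1]

model = LogisticRegression().fit(X_history, y_history)

# Forecast the probability of success for a new applicant
new_candidate = [[6, 75, 80]]
probability = model.predict_proba(new_candidate)[0][1]
print(f"Predicted probability of success: {probability:.2f}")
```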
Pre-Employment Assessments
Cognitive ability and skills tests are used to predict future job success, especially in roles that require quick thinking and complex decision-making. Skills testing lets organizations assess whether a candidate can handle tasks similar to those they will be required to perform on the job.
Employee Engagement Software and Retention
Survey tools and feedback apps that monitor employee engagement can predict which employees are at risk of leaving the organization. For example, an engagement survey shows high predictive validity for turnover if employees with low engagement scores do in fact prove more likely to quit within the year.
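One simple way to check this is to compare actual turnover between employees who scored low and high on an earlier engagement survey. The sketch below uses hypothetical scores and outcomes purely for illustration.

```python
# Minimal sketch: do low engagement scores, measured earlier, line up with
# who actually left within the year? Scores and outcomes are hypothetical.
engagement_scores = [35, 82, 48, 90, 41, 77, 30, 68]   # survey scores (0-100)
left_within_year = [1, 0, 1, 0, 0, 0, 1, 0]            # 1 = quit, 0 = stayed

low = [quit for score, quit in zip(engagement_scores, left_within_year) if score < 50]
high = [quit for score, quit in zip(engagement_scores, left_within_year) if score >= 50]

print(f"Turnover among low-engagement employees:  {sum(low) / len(low):.0%}")
print(f"Turnover among high-engagement employees: {sum(high) / len(high):.0%}")
# A much higher quit rate in the low-engagement group supports high
# predictive validity of the engagement survey for retention risk.
```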
Related Terms
Criterion Validity
is how well scores on one measure predict an outcome captured by a separate criterion measure. There are two subtypes of criterion validity: concurrent validity and predictive validity.
Convergent Validity
is the degree to which a measure of a construct correlates with measures of related constructs that it should, in theory, be associated with.
Concurrent Validity
evaluates how well a test's results correspond with a criterion measured at the same time, such as the current performance of existing employees.
Measurement Validity
refers to the extent to which a tool accurately measures what it is intended to measure. There are four types of measurement validity: construct validity, face validity, criterion validity, and content validity.