What Risks Are Involved When Using AI in Human Resources?

Written by

Lauren Barber

Reviewed by

VidCruiter Editorial Team

Last Modified

Dec 4, 2024

To strategically incorporate artificial intelligence (AI) into your hiring process, you need to understand and work to mitigate the risks that come with using this technology. AI in recruiting can come with many benefits, but it’s also capable of perpetuating inequality. Protect your organization by using low-risk AI-augmented solutions.

There’s no question that AI is part of the future of HR. More than ever, talent acquisition teams and HR departments are expected to leverage technology to conduct interviews, enhance diversity, equity, inclusion, and accessibility, and improve key performance indicators. AI-augmented HR tech solutions, especially those that integrate with your applicant tracking system (ATS), are promising a lot — but it’s up to those purchasing the software to determine whether the vendor’s claims are true.

There’s great potential for using AI in hiring, but you must ask questions, do research, and manage the risks.

How AI in HR Works

AI adds complexity behind the scenes in order to simplify HR tasks. For an HR tech solution to be AI-augmented, it has to use machine learning (ML). Instead of following explicitly programmed rules, an ML model learns how to perform tasks from data.

The more data you give an AI system, the more accurate its predictions tend to be; the less data you provide, the less accurate they become.

There are two types of data AI can learn from: real and synthetic. Real data is specific to an organization, gathered over time, and based on real events. Synthetic data is artificially created, which makes managing privacy much easier but also makes it difficult to know whether it reflects real underlying trends.

Unlike humans, AI can process hundreds of thousands of data points (or more) rapidly. However, there are a few downsides.

No data set, real or synthetic, will be perfect, so AI will never be 100% accurate. Unless you proactively identify and correct specific biases in a data set, the AI will reflect whatever biases already exist.

For example, if one department in your company predominantly hires people in a certain age group, and you are using AI for screening resumes, the tool may be more likely to detect and select candidates for roles in that department from that age group.
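To make that concrete, here's a minimal sketch, using scikit-learn and entirely fabricated data, of how a screening model can absorb an age skew from historical hires through a correlated feature like years of experience, even when age itself is never an input:

```python
# Minimal sketch: a screening model trained on skewed historical data
# reproduces that skew. All numbers and features here are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Fabricated candidate pool: age drives years of experience.
age = rng.integers(22, 60, size=n)
experience = (age - 22) * rng.uniform(0.3, 1.0, size=n)
skill = rng.normal(50.0, 10.0, size=n)

# Historical labels encode a bias: past hires skewed under 35
# regardless of skill, plus a little noise.
hired = ((skill > 45) & (age < 35)) | (rng.random(n) < 0.05)

# Age is deliberately excluded from the features, yet the model
# still learns to prefer low experience, a proxy for youth.
X = np.column_stack([experience, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

pred = model.predict(X)
print(f"selection rate, under 35:  {pred[age < 35].mean():.1%}")
print(f"selection rate, 35 and up: {pred[age >= 35].mean():.1%}")
```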

Even if you did obtain the perfect data set, it’s still possible for biases to be inherent in the AI. There’s no question AI can do so much to help HR departments, but it comes with limitations to be aware of.


Is it possible to create a bias-free algorithm?

Some organizations are trying. Amazon is working on an AI “fairness metric” called conditional demographic disparity, which requires the algorithm’s outcomes to satisfy certain conditions (Harvard Business Review). While these efforts acknowledge the problem, there’s still no categorical way to define the conditions that lead to equitable hiring choices. Until the code is cracked, the responsibility to oversee AI for bias remains in human hands.
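To make the metric concrete, here's a minimal sketch of one published formulation of demographic disparity and its conditional variant (the formulation documented for Amazon SageMaker Clarify); Amazon's production implementation may differ.

```python
# One published formulation of (conditional) demographic disparity, as
# documented for Amazon SageMaker Clarify; production metrics may differ.
from collections import defaultdict

def demographic_disparity(outcomes, group):
    """DD for `group`: its share of rejections minus its share of acceptances.
    `outcomes` is a list of (group_label, was_accepted) pairs."""
    rejected = [g for g, ok in outcomes if not ok]
    accepted = [g for g, ok in outcomes if ok]
    if not rejected or not accepted:
        return 0.0
    return rejected.count(group) / len(rejected) - accepted.count(group) / len(accepted)

def conditional_demographic_disparity(records, group):
    """CDD: DD averaged across strata (e.g., department), weighted by size.
    `records` is a list of (stratum, group_label, was_accepted) triples."""
    strata = defaultdict(list)
    for stratum, g, ok in records:
        strata[stratum].append((g, ok))
    total = len(records)
    return sum(len(rows) / total * demographic_disparity(rows, group)
               for rows in strata.values())

# Toy check: a positive CDD means the group is over-represented among
# rejections relative to acceptances, after conditioning on department.
records = [
    ("engineering", "women", True), ("engineering", "women", False),
    ("engineering", "men", True),   ("engineering", "men", True),
    ("sales", "women", True),       ("sales", "men", False),
]
print(f"CDD for women: {conditional_demographic_disparity(records, 'women'):+.3f}")
```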

Risks of Using AI in HR

Before adopting AI into your hiring processes, here are some risks to be aware of.

Perpetuating or Reinforcing Bias or Discrimination

According to IHRIM, in order to truly eliminate bias from AI, we’d need to “perfectly identify all of the factors (called features in AI) that describe the thing in the real world that we want to model; collect enormous amounts of error-free data to train a model; and then, with 100% accuracy, predict something.”

Many organizations don’t realize it, but creating and capturing data sets that are more consistent and comparable starts with how you interview. When done correctly, the structured interview methodology can help standardize and enhance the interview process.

Increase the predictive validity of your hiring process

Learn how to conduct a structured interview →

Discrimination can be an unfortunate byproduct of how AI learns. For instance, AI tools trained on homogeneous data inputs commonly result in adverse impact or outputs that lack diversity (Human Resource Executive).
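A long-standing way to check for this kind of adverse impact is the EEOC's four-fifths rule: if any group's selection rate falls below 80% of the highest group's rate, the selection procedure is flagged for review. A minimal sketch, with fabricated numbers:

```python
# Four-fifths rule check (EEOC Uniform Guidelines): flag any group whose
# selection rate is below 80% of the most-selected group's rate.
def impact_ratios(selected: dict, applicants: dict, threshold: float = 0.8):
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: {"rate": rate, "ratio": rate / best, "flagged": rate / best < threshold}
            for g, rate in rates.items()}

# Fabricated example: an AI screen advances 48 of 60 applicants from one
# group but only 12 of 40 from another.
for group, result in impact_ratios({"group_a": 48, "group_b": 12},
                                   {"group_a": 60, "group_b": 40}).items():
    print(group, result)  # group_b: ratio 0.375 -> flagged
```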

Compromising Privacy

The more data collected using AI, the more protection from unethical data practices is required. Not only does this data need to be stored securely, but it must also be collected and used fairly so privacy rights aren’t violated.

In 2022, the US White House Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights. It outlines five core protections all consumers are entitled to, one of which is data privacy and agency over how data gets used.

Without obtaining consent and offering the candidate the correct protections, using facial recognition AI during video interviews can compromise a candidate’s privacy and leave employers exposed to lawsuits.

Legal Issues

For the reasons above, plus the unpredictable nature of generative AI (ML that can create content based on the data it has access to) and the black-box nature of vendor algorithms, using AI in HR can put your organization at greater risk of legal issues. However, this greatly depends on what type of AI hiring tool you use, as not all are created equal.

Using AI in specific ways for hiring and recruiting can also conflict with federal, state, or local laws where you operate, which can also lead to legal issues. Remember to research AI regulations in your area before deciding on a solution.


Examples of legal issues

If you use high-risk AI, you automatically accept the responsibility of staying up to date with evolving regulations and managing overlapping laws or restrictions. Failing to do so can end in a lawsuit, as the two examples below show.

In 2019, Illinois passed the Artificial Intelligence Video Interview Act (AIVI), becoming the first state to regulate the use of algorithms and other forms of AI to analyze applicants during video interviews (ToTalent). Several states have followed suit since then.

Kristen Deyerler filed a class action lawsuit in January 2022 against a software vendor that sells video interviewing solutions with facial recognition AI.

The lawsuit contained three major arguments:

  1. It claimed that the company illegally collected facial data without proper permission when the plaintiff interviewed for a job in 2019 using the platform.
  2. It accused the company of violating the Biometric Information Privacy Act.
  3. It alleged that the company failed to provide a publicly available retention schedule and/or guidelines for permanently destroying the biometrics.

In May 2023, Brendan Baker launched a class-action lawsuit against CVS after completing a video interview on a platform that uses AI facial recognition to assess honesty, amongst other qualities. The case was brought to court on the basis that it’s illegal for employers in Massachusetts to use a lie detector to screen job applicants (Boston Globe).

In the US, the Federal Trade Commission holds both users and vendors accountable. Just because a high-risk AI tool is commonly used doesn’t mean using it won’t violate federal, state, or city laws.

The State of AI in HR

Despite the risks, HR departments that avoid AI altogether may end up trying to catch up to the competition. Statista says, “businesses that begin using AI early will find it easier to reap the benefits.”

The Equal Employment Opportunity Commission (EEOC) and the Department of Justice urge employers to ask vendors questions and analyze the AI technology they use to make employment decisions. Employers are just as liable as vendors for any negative outcomes related to AI recruiting tools.

With the pressure to incorporate AI into HR processes at an all-time high, it’s important to proceed with caution: the stakes are just as high.


Low- vs. High-Risk AI-Enabled Hiring Solutions

Risk, in this context, refers to how likely it is that issues around bias, data privacy, or laws and regulations will actually materialize.

There is some degree of risk in every decision we make. But there’s a big difference between an AI solution with a high probability of undesirable outcomes (triggering a discrimination lawsuit, for example) and one with a low probability.

Here are some examples of low- and high-risk tasks AI can complete.

Low Organizational Risk AI Tasks:

  • Process performance analysis based on predictive validity
  • Bias tracking using outcome data
  • Content curation

High Organizational Risk AI Tasks:

  • Candidate performance analysis based on facial recognition
  • Candidate-focused interview analysis
  • Chatbot screening interviews

These examples show that risk increases or decreases depending on how employment decisions are made and who (or what) the AI is assessing.

Low-Risk: Human-led, process-focused tools

High-Risk: AI-led, candidate-focused tools

Interview intelligence is a low-risk AI tool. It enhances a structured interview process by providing feedback that the HR department can use to optimize each step and the outcomes. Interview intelligence can tell you what elements should be changed to optimize rater performance (content selection, panel selection, etc.). It can also help teams optimize interview scheduling, among other functions to improve the overall interviewing process.

High-risk AI tools to look out for include resume scanners, gamified online tests that assess job skills, and video interviewing software that tracks speech patterns or facial expressions. These tools often offer little transparency into exactly how they work because their algorithms are proprietary.

For high-risk AI to be accurate, your real data has to be job-relevant, and the only way to ensure that is to conduct a job analysis as the first step in your hiring process. However, the job analysis is often skipped, which undermines the validity of the AI’s decision-making.

Be aware of automation bias

When humans give undue weight to AI-generated employment decisions or recommendations, it’s called automation bias, and it’s common. Selections, scores, or rankings aren’t inherently better or more precise just because they came from a computer or an algorithm.


AI-Led vs. Human-Led Hiring Tools

A recent survey projected that by 2024, roughly 4 in 10 companies would use AI for job interviews. Of that number, 15% of employers said they would rely on AI to make hiring decisions without human input (ResumeBuilder).

AI-led tools are attractive because of their efficiency, but they increase organizational risk because an algorithm assumes responsibility for assessing, advancing, or rejecting candidates. A lack of human oversight and formal auditing can lead to bias or discrimination. With AI-led tools, you don’t always know how the AI makes decisions; the logic isn’t transparent, so it can be difficult to defend decisions you don’t understand. The risk is too high to let AI tools make decisions autonomously.

Not all AI-augmented hiring tools are designed to make decisions for you, so you can still take advantage of AI without the enhanced risk. Human-led HR tech tools still require you to make employment decisions, but they provide valuable information based on your unique organizational data that allows the hiring team to make faster, more informed decisions.

Here’s the difference between AI-led vs. human-led AI hiring tools:

AI-led = AI has the autonomy to make decisions without human oversight
Human-led = AI provides recommendations to help the hiring team make better decisions
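In code, the distinction looks roughly like the sketch below. Everything in it (names, scores, the 0.7 threshold) is invented for illustration and not drawn from any specific vendor's tool.

```python
# Toy contrast between the two designs; all names, scores, and the 0.7
# threshold are invented for illustration, not from any real product.
from dataclasses import dataclass

@dataclass
class ScreenResult:
    candidate_id: str
    score: float      # model output, e.g. fit against a structured rubric
    rationale: str    # whatever evidence the tool can surface

def ai_led_decision(result: ScreenResult) -> str:
    # High risk: the algorithm advances or rejects with no human in the loop.
    return "advance" if result.score >= 0.7 else "reject"

def human_led_decision(result: ScreenResult, reviewer_decision: str) -> str:
    # Lower risk: the tool only recommends; a named human makes the call.
    suggestion = "advance" if result.score >= 0.7 else "review further"
    print(f"{result.candidate_id}: model suggests '{suggestion}' ({result.rationale})")
    return reviewer_decision

r = ScreenResult("c-102", 0.64, "thin rubric coverage on question 3")
print(ai_led_decision(r))                # the algorithm rejects outright
print(human_led_decision(r, "advance"))  # a human weighs context and decides
```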

Scrutinize AI vendors thoroughly and ask whether the tool you are considering is AI-led or human-led. AI-led tools can expose you to litigation, significantly altering the risk you take on.

US regulations in progress around AI and ML decision making

In January 2023, the EEOC put forward a draft of the Strategic Enforcement Plan (SEP) for 2023-2027. AI was identified as an area of focus, especially the “use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions.”

This shows that AI decision-making in hiring is being monitored by the EEOC due to its high-risk nature.


Candidate-Focused vs. Process-Focused: What’s The Difference?

Process-focused AI directs the assessment internally, at your hiring process and interviewers; candidate-focused AI directs the assessment externally, at the candidate. Both are used to optimize hiring and recruitment processes, but the outcomes are different.

Candidate-focused AI solutions use AI to assess a candidate’s written, verbal, or video performance in a video interview to make or suggest employment decisions. Candidate-focused AI evaluates candidates based on a mix of pre-defined parameters, the model’s training data, and the AI’s inherent characteristics. Even the vendors of these types of tools can’t always predict how they will respond or what exactly they will assess in specific situations.

Process-focused AI solutions collect data on the hiring process and interviewer performance to ensure the process adheres to best practices, incorporates organization-specific data, and is optimized to identify candidates who meet the organization’s preferred criteria.

For context, the Harvard Business Review reported that Amazon shifted from using candidate-focused AI tools for hiring to using process-focused AI tools to “detect flaws in its current recruiting approach.”

Real-time vs. post-interview feedback

Let’s say you have a process-focused software solution that uses AI to assess interview duration. If it can provide real-time interview coaching, you can use this tool two different ways.

You can enable real-time feedback that notifies the interviewer when the interview goes over time (whether the hiring manager or the candidate is the cause), or you can make this data accessible only to certain people once the interview is complete. In both cases, the information can be used for oversight, coaching, and training, letting you apply the data in the way that works best for your organization.
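As a sketch, a duration monitor supporting both modes might look like the following; the class and its interface are hypothetical, not a real product API.

```python
# Hypothetical sketch of the two delivery modes; this class and its
# interface are invented for illustration, not a real product API.
import time

class InterviewDurationMonitor:
    def __init__(self, limit_minutes: float, realtime: bool):
        self.limit_seconds = limit_minutes * 60
        self.realtime = realtime      # True: nudge the interviewer live
        self.started_at = time.monotonic()
        self.overran = False

    def check(self):
        """Called periodically while the interview is running."""
        if self.overran:
            return
        if time.monotonic() - self.started_at > self.limit_seconds:
            self.overran = True
            if self.realtime:
                print("Notice to interviewer: interview is over its scheduled time.")

    def report(self) -> dict:
        """Post-interview mode: shared only with authorized reviewers,
        for coaching, training, and process oversight."""
        elapsed = (time.monotonic() - self.started_at) / 60
        return {"elapsed_minutes": round(elapsed, 1), "overran": self.overran}
```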

Frequently Asked Questions

Should employers disclose when they use AI in HR?

The EEOC encourages employers to tell candidates how AI is used in the process and what it’s measuring. But this isn’t just about building trust; in some places, transparency around AI in HR is written into law. Employers in New York City, for example, are legally required to disclose AI’s role in the hiring process to candidates (CBS). To be on the safe side, explain what the AI tool is and what it does, and ask for consent to use it in the hiring process.

How is AI used in human resource management?

AI, ML, or automation can be used to manage almost every area of HR, including recruiting, tracking performance, onboarding and offboarding employees, capturing employee engagement, training, answering employment-related FAQs (via chatbot), and forecasting and workforce planning. According to SHRM, the most popular area of HR to automate or enhance with AI is recruiting and hiring (79%), with learning and development in second place (41%).

What are the disadvantages of AI in hiring?

Disadvantages are more subjective, whereas risks are more objective. Between the two, there is some overlap (bias, privacy concerns), but there are also a few disadvantages that are not considered risks: the need to regularly audit AI-augmented tools, the learning curve and costs related to implementing a complex new tool, and the diffusion of accountability.