VidCruiter Logo

Is Your Recruiting Ready for AI Regulation?

Written by Jasmine Williams
Reviewed by Dr. Andrew Buzzell
Last Modified: Dec 11, 2024

Artificial intelligence (AI) regulation will govern how AI technologies can be used in recruitment processes. AI has the potential to streamline recruiting and boost efficiency: many companies already use AI tools to screen candidates, schedule interviews, and power ‘virtual assistants’ that communicate with applicants. However, AI also comes with its fair share of risks that can lead to adverse impacts. AI regulation aims to mitigate these risks and reduce the likelihood of them occurring.

Why AI Use in Hiring is Being Regulated

Recruitment automation and AI recruiting applications are not new phenomena. Applicant tracking systems, for instance, are AI-adjacent tools that organizations have been using for decades to streamline hiring processes. However, the rise of generative AI and large language model (LLM) technologies in the past few years has led to the rapid adoption of AI recruitment tools.

In turn, governing bodies in the US, Canada, and the EU, among others, have identified AI-supported hiring and employee evaluations as being at high risk of violating existing regulations in new and often unexpected ways, infringing on established rights, and having a variety of negative social impacts. These jurisdictions have regulated, or are in the process of regulating, these technologies to protect people from the dangers of AI. Here are a few of the most common areas of concern regarding AI in hiring.


Ethical Considerations

Before implementing AI recruitment tools, it is crucial to address several ethical considerations to ensure fairness and transparency and to mitigate unintended outcomes that pose AI risks in the hiring process.

Irrelevant and Invalid Decisions

Without extensive onboarding, testing, analysis, and ongoing monitoring, AI systems may make decisions and produce recommendations based on correlations and patterns with little relevance to applicants' aptitudes and qualifications. Some algorithms might favor applicants who talk louder, live in a particular zip code, mention skills in a specific order, or share some other property that the AI arbitrarily boosts, often without this being evident. The result is lower-quality hires and higher turnover, along with exposure to legal and regulatory risk.

Privacy and AI Security Risks

AI recruitment tools often have access to sensitive personal information. Without proper safeguards, adding AI to these tools can expose personal information and subject candidates and organizations to serious privacy and security threats, such as phishing attacks and data breaches.

Bias and Fairness 

AI tools can acquire biases from their training data, their developers, or even spontaneously. For example, if an employer trains an AI tool mostly on data from male candidates, it could start discriminating against female applicants. Similarly, AI recruitment technology trained primarily on non-disabled candidates may not correctly analyze the body language, speech, or behavior of people with disabilities. This hiring bias can significantly undermine a company's diversity, equity, and inclusion efforts and lead to more homogeneous workplaces.

Algorithmic Transparency

Genuine transparency is needed from both AI vendors and the companies using AI tools in recruiting, including when the tools are used, for what purposes, and how data is processed, shared, and stored. Without transparency, candidates cannot properly consent to the use of these tools or understand how they are affected by them.

Human Intervention

There are many examples of automated employment decision tools making erratic, arbitrary, and often discriminatory decisions against job candidates. Without human intervention, candidates may be unfairly rejected for reasons that would not affect their job performance. It is often unclear to recruiters and applicants alike to what extent human oversight is required, or actually implemented.

Scale of Impact

One biased hiring manager could harm a few hundred or a few thousand applicants. A biased AI recruitment platform used by every hiring manager in a large company, or across multiple organizations, could hurt millions. As more organizations adopt AI recruitment technologies, the scale of potential harm grows.

Who gets left out when AI tools make decisions?

A 2021 study led by a Harvard Business School professor found that automated decision software can exclude more than 10 million “hidden workers” from consideration in hiring.

Legal Compliance in AI Hiring

Many of the concerns above are not just ethical considerations; they touch on legally protected rights that apply to AI in recruitment. These are the four domains of law most relevant to AI in HR and recruiting.

Data Protection Laws

Regulations such as the EU’s and UK’s General Data Protection Regulation (GDPR), as well as US federal and state privacy laws (e.g., the California Consumer Privacy Act (CCPA)), govern how AI recruitment technologies collect and process personal information.

Human Rights Protection

The UK’s Human Rights Act 1998 and Equality Act 2010, Title VII of the Civil Rights Act of 1964 in the US, and the Equality Framework Directive 2000/78/EC in the EU all apply to AI tools due to their risk of bias and discrimination.

AI-Centric Legislation

More laws controlling the use of AI in various contexts, including HR and recruitment, are coming into effect. The EU’s AI Act, the US’s Blueprint for an AI Bill of Rights and proposed Algorithmic Accountability Act, and Canada’s proposed Artificial Intelligence and Data Act (AIDA) are all examples of this regulation.

AI Recruitment Laws

The Equal Employment Opportunity Commission (EEOC) has issued specific guidance addressing AI in hiring. Various US states and cities have also enacted laws and proposed legislation to control AI’s use in hiring. We will explore more of these regulations in the next section.

Jurisdictional AI Hiring Regulations

In the US, governing bodies at various levels are creating more artificial intelligence legislation to control how AI is applied to hiring and recruitment and to ensure it’s used ethically, safely, and responsibly. Here are some of the most notable AI laws and guidelines related to hiring.

The EEOC

In May 2023, the EEOC issued guidance on AI in hiring, focused on compliance with Title VII and the Americans with Disabilities Act (ADA).

In terms of Title VII compliance, the EEOC guidelines state that an employer can be liable if an algorithmic decision-making tool discriminates against candidates based on characteristics like race, color, religion, sex, or national origin, even if an outside vendor built the tool. The EEOC encourages employers to regularly analyze their AI tools to ensure they are not biased against protected groups.

Regarding the ADA, the guidelines state that employers must provide reasonable accommodations to candidates with disabilities (for example, visual disabilities) whom AI tools may not assess accurately.

The four-fifths rule

This is a general rule of thumb from the EEOC that organizations can use to determine whether an AI tool is biased against certain groups. The rule states that the selection rate of a minority group should be at least four-fifths (80%) of the selection rate of the majority group. The rule isn’t a definitive test, but an indicator of potential bias that warrants further investigation, especially since it can produce both false negatives and false positives.
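
To make the rule concrete, below is a minimal Python sketch that computes each group's selection rate, compares it to the highest-rate group, and flags impact ratios below the four-fifths threshold. The applicant and selection counts are hypothetical, invented purely for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(groups: dict) -> None:
    """Compare each group's selection rate to the highest-rate group.

    groups maps a group label to a (selected, applicants) tuple.
    An impact ratio below 0.8 is an indicator of potential adverse
    impact to investigate, not a definitive finding of bias.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    benchmark = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / benchmark
        flag = "INVESTIGATE" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.1%}, impact ratio={ratio:.2f} [{flag}]")

# Hypothetical counts: (selected, applicants)
four_fifths_check({
    "Group A": (48, 100),  # 48.0% selection rate (the benchmark)
    "Group B": (30, 90),   # 33.3% -> impact ratio 0.69, flagged
})

Because small samples make these ratios noisy, real audits typically pair this rule of thumb with statistical significance tests rather than relying on the ratio alone.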

The Federal Trade Commission (FTC)

The FTC’s guidance on AI maintains that it will regulate companies making and using AI tools the same way it regulates any other business. Companies must be careful not to mislead consumers about AI and must notify customers when they’re engaging with AI instead of humans.

Companies also cannot use AI’s “black box” nature as a defense against an FTC claim. They are responsible for understanding how their AI tools work, providing appropriate training to employees using the tools, conducting regular audits and impact assessments, and correcting any incorrect or unfair algorithmic decisions. 

Illinois, US

Illinois’ Artificial Intelligence Video Interview Act requires employers to notify applicants when they’re using AI for video interviewing, provide information explaining how the AI works and how it will evaluate candidates, and obtain their consent for AI evaluation. Additionally, employers relying solely on AI analysis of video interviews to screen candidates must collect and report race and ethnicity data on all candidates selected for in-person interviews, rejected, and ultimately hired.

Maryland, US

Maryland’s House Bill 1202 states that employers can’t use facial recognition services to create a facial template (i.e., a machine-interpretable pattern of facial features) during a job interview without the applicant’s consent.

New York City, NY

New York City’s Automated Employment Decision Tools law requires employers to conduct bias audits on automated employment decision tools, including AI and similar technologies. Employers must publish the audit results, or a link to them, on their websites, including selection or scoring rates for different gender, race, and ethnicity categories. They must also notify employees and candidates when they use these tools and provide alternative selection processes or accommodations on request. Lastly, employers can’t use an automated employment decision tool whose most recent bias audit is more than a year old.
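
For tools that score candidates rather than make outright selections, one common way to produce the disclosed rates is to compute, for each category, the share of candidates scoring above the overall median, then compare each category to the highest-rate category. Below is a simplified Python sketch of that calculation; the categories and scores are hypothetical.

from statistics import median

candidates = [
    # (category, score) -- illustrative data only
    ("Category 1", 82), ("Category 1", 75), ("Category 1", 68),
    ("Category 2", 71), ("Category 2", 59), ("Category 2", 64),
]

cutoff = median(score for _, score in candidates)

# Scoring rate: share of each category scoring above the overall median.
rates = {}
for category in {c for c, _ in candidates}:
    scores = [s for c, s in candidates if c == category]
    rates[category] = sum(s > cutoff for s in scores) / len(scores)

benchmark = max(rates.values())
for category, rate in sorted(rates.items()):
    print(f"{category}: scoring rate={rate:.1%}, "
          f"impact ratio={rate / benchmark:.2f}")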


Case Study: Regulation in Action

An e-learning company settled an EEOC lawsuit claiming that its recruiting software discriminated against older applicants. According to the EEOC, the software automatically rejected female applicants over the age of 55 and male applicants over the age of 60, screening out more than 200 candidates on this basis.

In one instance, a candidate who applied with her actual birth date was automatically rejected. When she resubmitted the same application a day later with her birth date adjusted to appear younger, she was offered an interview.

As part of the settlement agreement, the company had to pay $365,000 in monetary relief, post notice about the settlement, adopt anti-discrimination and complaint policies, conduct manager training, and submit reports to the EEOC.

What the Future of Regulation Looks Like

While Illinois, Maryland, and New York City are among the first jurisdictions to enact AI regulation, they won’t be the last. Similar legislation is pending in New Jersey, California, and Massachusetts, as well as internationally in Canada, the EU, and beyond.

Overall, this regulation will likely require employers to be more attentive to the validity of AI hiring tools, and more transparent about their AI use and the potential impact of automated decision-making technologies. They will also need to be more diligent about auditing these tools for potential bias and AI security risks.  

How To Prepare for AI Hiring Regulation

As more governing bodies create and enforce AI recruitment laws, employers can prepare for AI hiring regulations by creating internal policies and checklists to ensure compliance. Here are a few recommended best practices.

  • Regularly test your AI tools. Organizations should test tools for potential bias and data privacy risks before deployment and periodically afterward. Since this is a relatively new field, there are no universal audit criteria, and a tool that passes one vendor’s audit could fail another’s. Organizations should therefore consider conducting multiple internal and third-party audits.
  • Maintain transparency. Employers should clearly understand how their AI software works and have procedures in place to explain it to consumers, candidates, employees, and regulators. They should also consider publicly publishing AI audit results and their AI tool’s code.
  • Lawfully collect and process candidate and employee data. Employers should obtain valid consent where required, protect personal information, and limit the amount of information they collect, use, and disclose through AI tools to only what’s necessary.
  • Offer accommodations. Give candidates and employees the option to opt out of AI evaluations and offer alternative selection processes if asked. 
  • Keep humans ‘in the loop’. Don’t rely on AI alone to screen, reject, or hire candidates. A human should always be involved to help prevent inaccurate or discriminatory results, and there should always be a recruiter or hiring manager accountable for explaining the reasoning behind hiring decisions (see the sketch after this list).
  • Stay ahead of regulations and legal implications. The regulatory landscape for AI use in recruitment is continuously evolving, with different regional regulations. Employers should stay informed about current laws and upcoming changes to ensure compliance in every jurisdiction that they operate and recruit in. Regularly review legal updates and consult with experts to anticipate how new regulations may impact hiring practices. 
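
To illustrate the ‘humans in the loop’ point above, here is a hypothetical Python sketch of a screening workflow in which the AI only ever produces a recommendation, and a named, accountable reviewer must record an outcome and a documented rationale before any candidate is rejected or advanced. The names, thresholds, and fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    ai_score: float          # e.g., 0.0-1.0 from a screening model
    ai_recommendation: str   # "advance" or "review" -- never "reject"
    reviewer: str | None = None
    outcome: str | None = None
    rationale: str | None = None

def screen(candidate_id: str, ai_score: float, threshold: float = 0.5) -> Decision:
    """Turn a model score into a recommendation -- never a rejection."""
    rec = "advance" if ai_score >= threshold else "review"
    return Decision(candidate_id, ai_score, rec)

def human_review(d: Decision, reviewer: str, outcome: str, rationale: str) -> Decision:
    """Record the accountable reviewer and their documented reasoning."""
    if not rationale:
        raise ValueError("A documented rationale is required for every decision.")
    d.reviewer, d.outcome, d.rationale = reviewer, outcome, rationale
    return d

d = screen("cand-042", ai_score=0.41)  # below threshold -> "review", not "reject"
d = human_review(d, reviewer="j.smith", outcome="advance",
                 rationale="Relevant experience the model under-weighted.")
print(d)

The key design choice is that nothing in this workflow turns a model score directly into a rejection: a rejection can only exist as a human-recorded outcome with a reviewer and rationale attached, which is also what makes decisions explainable to candidates and regulators.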

Following these guidelines can help ensure your organization uses AI for recruitment ethically, safely, and responsibly, and can help protect you from future litigation.

Where to start with an AI hiring policy?

When developing an AI framework for recruiting, make sure you do a regulatory review and compliance check, develop a response plan, and create a process for continuous improvement.


Frequently Asked Questions

What is AI regulation?

AI regulation is the development of policies and laws by governing bodies (e.g., municipal, state, and federal governments, professional organizations, commissions, etc.) that control the use of AI. Regulation can help ensure this technology is used appropriately and can mitigate AI’s dangers and adverse impacts.

Should AI be regulated in recruiting?

Despite its many benefits, AI technology, if left unchecked, could lead to severe consequences. For example, AI tools can exacerbate data privacy and security risks in HR and recruitment and make discriminatory hiring decisions. Regulating AI, and ensuring reasonable and consistent public expectations, can help protect both organizations and individuals from these harmful effects.

Why is AI hard to regulate?

AI is hard to regulate because the term itself is vague. Many tools and applications fall under the umbrella of AI, and some are riskier than others. It’s also difficult for regulatory bodies to keep pace with technological advancements and to collaborate across regions and nations to create far-reaching standards.