Artificial Intelligence (AI) regulation will govern how AI technologies can be used in recruitment processes. AI technology has the potential to streamline recruiting and boost efficiency. For example, many companies currently use AI tools to screen candidates, schedule interviews, and power ‘virtual assistants’ that communicate with applicants. However, AI also comes with its fair share of risks that can lead to adverse impacts. AI regulation aims to mitigate these risks and reduce the likelihood of harm.
Recruitment automation and AI recruiting applications are not new phenomena. Applicant tracking systems, for instance, are AI-adjacent tools that organizations have been using for decades to streamline hiring processes. However, the rise of generative AI and large language model (LLM) technologies in the past few years has led to the rapid adoption of AI recruitment tools.
In turn, governing bodies in the US, Canada, and the EU, among others, have identified AI-supported hiring and employee evaluations as being at high risk of violating existing regulations in new and often unexpected ways, infringing on established rights, and having a variety of negative social impacts. These jurisdictions have regulated, or are in the process of regulating, these technologies to protect people from the dangers of AI. Here are a few of the most common areas of concern regarding AI in hiring.
Before implementing AI recruitment tools, it is crucial to address several ethical considerations to ensure fairness and transparency and to mitigate the unintended outcomes that make AI risky in the hiring process.
Without extensive onboarding, testing, analysis, and ongoing monitoring, AI systems may make decisions and produce recommendations based on correlations and patterns with little relevance to applicants' aptitudes and qualifications. Some algorithms might favor applicants who talk louder, live in a particular zip code, mention skills in a specific order, or have some other property the AI arbitrarily boosts, often without this being evident. The result is lower-quality hires, higher turnover, and exposure to legal and regulatory risk.
AI recruitment tools often have access to sensitive personal information. Without proper safeguards, adding AI to these tools can expose personal information and subject candidates and organizations to serious privacy and security threats, such as phishing attacks and data breaches.
AI tools can acquire biases from their training data, developers, or even spontaneously. For example, if an employer trains an AI tool on mostly male candidate data, it could start discriminating against female applicants. Additionally, AI recruitment technology trained primarily on non-disabled candidates may not correctly analyze or understand the body language, speech, or behavior of people with disabilities. This hiring bias can significantly affect a company's diversity, equity, and inclusion efforts and lead to more homogeneous workplaces.
There is a need for genuine transparency from both AI vendors and companies using AI tools in recruiting, including when the tools are used, what their purposes are, and how data is processed, shared, and stored. Without transparency, it's difficult for candidates to give meaningful consent to the use of these tools or to understand how they are affected by them.
There are many examples of automated employment decision tools making erratic, arbitrary, and often discriminatory decisions that rule against job candidates. Without human intervention, candidates may be unfairly rejected for reasons that would not impact their job performance. It is often unclear to recruiters and applicants alike how much human oversight is required, or whether it is actually implemented.
One biased hiring manager could harm a few hundred or a few thousand applicants. However, a biased AI recruitment platform used by all hiring managers in a large company or multiple organizations could hurt millions of applicants. As more organizations use AI recruitment technologies, this risk grows exponentially.
Who gets left out when AI tools make decisions?
A 2021 study by a Harvard Business School professor found that automated decision software can exclude more than 10 million “hidden workers” from hiring discussions.
Many of the concerns above are not just ethical considerations but legally protected rights that could apply to AI in recruitment. These are the four key domains of law that are most relevant to AI in HR and recruitment.
Data Protection Laws
Regulations such as the EU and UK’s General Data Protection Regulation (GDPR), along with US privacy laws at the federal and state levels (e.g., the California Consumer Privacy Act (CCPA)), apply to AI recruitment technologies and govern how these systems collect and process personal information.
Human Rights Protection
The UK’s Human Rights Act 1998 and Equality Act 2010, Title VII of the Civil Rights Act of 1964 in the US, and the Equality Framework Directive 2000/78/EC in the EU all apply to AI tools due to their risk of bias and discrimination.
AI-Centric Legislation
More laws are coming into effect that will control the use of AI in various contexts, including HR and recruitment. The EU’s AI Act, the US’s Blueprint for an AI Bill of Rights and proposed Algorithmic Accountability Act, and Canada’s proposed Artificial Intelligence and Data Act (the “AIDA”) are all examples of this regulation.
AI Recruitment Laws
The Equal Employment Opportunity Commission (EEOC) has issued specific guidance addressing AI in hiring. Various US states and cities have also enacted laws and proposed legislation to control AI’s use in hiring. We will explore more of these regulations in the next section.
In the US, governing bodies at various levels are creating more artificial intelligence legislation to control how AI is applied to hiring and recruitment and to ensure it’s used ethically, safely, and responsibly. Here are some of the most notable AI laws and guidelines related to hiring.
In May 2023, the EEOC issued guidance around AI in hiring focused on compliance with Title VII and the Americans with Disabilities Act (ADA).
In terms of Title VII compliance, the EEOC guidelines state that an employer would be liable if an algorithmic decision-making tool discriminates against candidates based on characteristics like race, color, religion, sex, or national origin, even if an outside vendor made the tool. The EEOC encourages employers to regularly analyze their AI tools to ensure they’re not biased against protected groups.
Regarding the ADA, the guidelines state that employers must provide reasonable accommodations to candidates with disabilities whom AI tools may not accurately assess.
The four-fifths rule
This is a general rule of thumb from the EEOC that organizations can use to determine whether an AI tool may be biased against certain groups. The rule states that the selection rate of a minority group should be at least four-fifths (80%) of the selection rate of the majority group. This rule isn’t a definitive test, but simply an indicator of potential bias that you should investigate further, especially since it can produce both false negatives and false positives.
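To make the arithmetic concrete, here is a minimal Python sketch of the four-fifths check. The group labels and applicant counts are hypothetical, invented purely for illustration.

```python
# Minimal sketch of the EEOC four-fifths (80%) rule of thumb.
# All group labels and counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> None:
    """groups maps a group label to (number selected, total applicants)."""
    rates = {name: selection_rate(sel, total) for name, (sel, total) in groups.items()}
    benchmark = max(rates.values())  # rate of the most-selected group
    for name, rate in rates.items():
        impact_ratio = rate / benchmark
        flag = "OK" if impact_ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
        print(f"{name}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")

# Hypothetical applicant data: (selected, total applicants) per group
four_fifths_check({"Group A": (48, 100), "Group B": (30, 100)})
```

In this example, Group B’s selection rate falls well below four-fifths of Group A’s, so the result would be flagged for further investigation rather than treated as proof of bias.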
The FTC’s guidance around AI maintains that it will regulate companies using and making AI tools the same way it regulates any other business. Companies need to be careful not to mislead consumers about AI and must notify customers when they’re engaging with AI instead of humans.
Companies also cannot use AI’s “black box” nature as a defense against an FTC claim. They are responsible for understanding how their AI tools work, providing appropriate training to employees using the tools, conducting regular audits and impact assessments, and correcting any incorrect or unfair algorithmic decisions.
Illinois’ Artificial Intelligence Video Interview Act requires employers to notify applicants when they’re using AI for video interviewing, provide information explaining how the AI works and how it will evaluate candidates, and obtain their consent for AI evaluation. Additionally, employers relying solely on AI analysis of video interviews to screen candidates must collect and report race and ethnicity data on all candidates selected for in-person interviews, rejected, and ultimately hired.
Maryland’s House Bill 1202 states that employers can’t use facial recognition services to create a facial template (i.e., a machine-interpretable pattern of facial features) during a job interview without the applicant’s consent.
New York City’s Automated Employment Decision Tools law requires employers to conduct bias audits on automated employment decision tools, including AI and similar technologies. Employers must post or link to the audit results on their websites and disclose selection or scoring rates for different gender, race, and ethnicity categories. They also need to notify employees or candidates when they use these tools and provide alternative selection processes or accommodations when requested. Lastly, employers can’t use automated employment decision tools that haven’t been audited within the past year.
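For tools that produce a score rather than a simple select/reject outcome, a bias audit generally compares average scores across demographic categories. Below is a minimal Python sketch of that comparison; the category names and scores are invented for illustration, and the precise metrics an audit must report are defined in the law’s implementing rules.

```python
# Sketch: comparing average scores across categories, as a bias audit
# for a scoring tool might. Category names and scores are hypothetical.
from statistics import mean

scores = {
    "Category A": [82, 75, 90, 68],
    "Category B": [70, 65, 72, 60],
}

averages = {category: mean(values) for category, values in scores.items()}
top = max(averages.values())  # average score of the highest-scoring category
for category, avg in averages.items():
    print(f"{category}: average score {avg:.1f}, impact ratio {avg / top:.2f}")
```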
An e-learning company settled a lawsuit with the EEOC, which claimed that the company’s recruiting software discriminated against older applicants. According to the EEOC, the software automatically rejected female applicants over the age of 55 and male applicants over the age of 60, screening out more than 200 candidates on that basis.
Specifically, the EEOC stated that one candidate who applied with her actual birth date was automatically rejected. When she resubmitted the same application a day later with a birth date that made her appear younger, she was offered an interview.
As part of the settlement agreement, the company had to pay $365,000 in monetary relief, post notice of the settlement, devise non-discrimination and complaint policies, conduct manager training, and submit reports to the EEOC.
While Illinois, Maryland, and New York City may be some of the first localities to enact AI regulation, they won’t be the last. Similar legislation is pending in New Jersey, California, and Massachusetts, as well as internationally in Canada, the EU, and beyond.
Overall, this regulation will likely require employers to be more attentive to the validity of AI hiring tools, and more transparent about their AI use and the potential impact of automated decision-making technologies. They will also need to be more diligent about auditing these tools for potential bias and AI security risks.
As more governing bodies create and enforce AI recruitment laws, employers can prepare for AI hiring regulations by creating internal policies and checklists to ensure compliance. Here are a few recommended best practices.
Following these guidelines can help ensure your organization uses AI for recruitment ethically, safely, and responsibly and protect you from future litigation.
Where to start with an AI hiring policy?
When developing an AI framework for recruiting, make sure you do a regulatory review and compliance check, develop a response plan, and create a process for continuous improvement.
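As one illustration, those steps can be captured in a simple, reviewable checklist. The Python sketch below is a hypothetical starting point; the phases and items are assumptions for demonstration, not a compliance standard or legal advice.

```python
# Illustrative starting checklist for an AI hiring policy. The phases and
# items are assumptions for demonstration purposes, not legal advice.
ai_hiring_policy_checklist = {
    "regulatory review": [
        "Inventory every AI or automated tool used in hiring",
        "Map each tool to the laws that apply in your jurisdictions",
    ],
    "compliance check": [
        "Schedule independent bias audits at least annually",
        "Verify candidate notification and consent workflows",
    ],
    "response plan": [
        "Define an escalation path for suspected discriminatory outcomes",
        "Document remediation steps and responsible owners",
    ],
    "continuous improvement": [
        "Monitor selection and scoring rates by category over time",
        "Review vendor updates and model retraining for new risks",
    ],
}

for phase, items in ai_hiring_policy_checklist.items():
    print(phase.title())
    for item in items:
        print(f"  [ ] {item}")
```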
AI regulation is the development of policies and laws by governing bodies (e.g., municipal, state, and federal governments, professional organizations, commissions, etc.) that control the use of AI. Regulation can help ensure this technology is used appropriately and manage AI’s dangers and adverse impacts.
Despite its many benefits, AI technology, if left unchecked, could lead to severe consequences. For example, AI tools can exacerbate data privacy and security risks in HR and recruitment and make discriminatory hiring decisions. Regulating AI, and ensuring reasonable and consistent public expectations, can help protect both organizations and individuals from these harmful effects.
AI is hard to regulate because the term itself is vague. Many tools and applications fall under the umbrella of AI, and some are riskier than others. It’s also difficult for regulatory bodies to keep pace with technological advancements and to coordinate across regions and nations on far-reaching standards.