AI Recruiting:
Are We Ready For It?
Artificial intelligence (AI) within recruitment is becoming increasingly common – both as a topic of discussion and as an obtainable solution for hiring teams looking to transform their processes.
But is AI really a smarter recruiting practice? We explore the ways AI is currently being used in recruitment workflows and its perceived benefits, along with the drawbacks and potential legal issues of using currently-available AI recruitment technology.
- Chapter 1: What is AI recruiting?
- Chapter 2: How is AI being used in recruiting?
- Chapter 3: How does an AI interview work?
- Chapter 4: Why are companies using AI in recruiting?
- Chapter 5: Will AI replace recruiters?
- Chapter 6: What are some challenges of using AI in recruiting?
- Chapter 7: The legal and ethical implications of using AI in recruitment
- Chapter 8: Laws being created to regulate AI in recruiting
- Chapter 9: Using AI? Here’s what you need to be doing…
- Chapter 10: What are some efficient alternatives to using AI in recruiting?
- Chapter 11: Does AI have a place in recruiting?
What is AI recruiting?
AI recruiting refers to the use of artificial intelligence (AI) to automate parts of the hiring process, from sourcing and screening to interviewing and evaluating candidates.
It’s important to note upfront that not all recruitment tech solutions are examples of AI. For a technology to be considered AI-powered, it needs to feature components of machine learning (the system learns and improves by gathering data, rather than being explicitly programmed).
Here’s an example of an AI recruiting process vs automation that’s used within recruiting tech:
AI-powered recruiting
Learns the desired skill set and features for a role from a growing number of data sets, and uses that information to scan applications and advance qualified candidates.
Automation in recruiting
(not AI-powered)
Enables humans to make faster decisions by using preset rules to prioritize applications and advance qualified candidates.
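The distinction above can be sketched in code. This is a minimal illustration, not any vendor's actual implementation: all rules, weights, and field names are invented. Rule-based automation applies thresholds a human wrote explicitly, while an ML-style scorer weights features using values that would come from training data.

```python
# Hypothetical sketch: preset-rule automation vs. an ML-style scorer.
# Rule-based automation: humans define the rules explicitly.
def rule_based_advance(candidate):
    """Advance candidates using fixed, human-written rules."""
    return candidate["years_experience"] >= 3 and "python" in candidate["skills"]

# ML-style scoring: weights come from training data, not a programmer.
# These weights are invented for illustration.
LEARNED_WEIGHTS = {"years_experience": 0.4, "skill_match": 0.6}

def ml_style_score(candidate, required_skills):
    """Score a candidate by weighting features with learned weights."""
    skill_match = len(set(candidate["skills"]) & set(required_skills)) / len(required_skills)
    experience = min(candidate["years_experience"] / 10, 1.0)  # cap at 10 years
    return (LEARNED_WEIGHTS["years_experience"] * experience
            + LEARNED_WEIGHTS["skill_match"] * skill_match)

candidate = {"years_experience": 5, "skills": ["python", "sql"]}
print(rule_based_advance(candidate))                            # True
print(round(ml_style_score(candidate, ["python", "sql", "r"]), 2))  # 0.6
```

The practical difference: the rule-based filter behaves identically forever, while the ML scorer's weights shift as new hiring data arrives — which is both its power and, as later chapters discuss, its risk.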
How is AI being used in recruiting?
Here is how some companies are using AI within their recruitment practices:
- Write ‘enticing’ job posts
Using data findings, AI can suggest the most effective words and phrases to include in a job posting based on the job title, industry, and location.
- Automatic job matching
In some cases, AI is hard at work before a candidate has even applied. AI has the ability to determine the requirements of a job role based on previous or similar applicants for the role. This data is used to dictate which candidates see a job posting based on their experience, knowledge, and skills.
- Communicating with candidates via chatbot
AI can be used to mimic human conversational abilities, using technology such as natural language understanding (NLU) to comprehend a candidate’s text-based messages and determine how to respond. These chatbots can be used to schedule interviews or determine job fit. They can even recommend other jobs that might be a fit and encourage applicants to apply.
- Filtering candidates
AI can evaluate candidates’ resumes using an algorithm that scans for keywords relevant to a job role or preferred skill set. Some AI software can also analyze candidate data to build a picture of their online presence for a broader view of their history and abilities.
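Keyword-based resume filtering can be illustrated with a minimal sketch. The keyword list and resume text here are invented; real screening software is far more sophisticated, but the core idea of matching resume terms against a target set is the same.

```python
import re

# Hypothetical keyword screen: keywords and resume text are invented.
KEYWORDS = {"python", "sql", "etl", "airflow"}

def keyword_score(resume_text, keywords=KEYWORDS):
    """Return the fraction of target keywords found in the resume text."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(keywords & words) / len(keywords)

resume = "Built ETL pipelines in Python; optimized SQL queries."
print(keyword_score(resume))  # 0.75 -> 3 of 4 keywords matched
```

Note how brittle this is: a candidate who wrote “data pipelines” instead of “ETL” scores lower despite equivalent experience — a small-scale version of the misinterpretation problems discussed later.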
- Video interviewing candidates
AI software can analyze video interview transcripts using natural language processing (NLP), without recruiters needing to be present. Candidates complete a one-way AI-based video interview by recording their answers to preset questions. AI performs an algorithmic analysis on the recording and determines the candidate’s outcome.
Here are some features that are analyzed in an AI-based video interview:
- Word and phrase choices
- Tone of voice
- Body language
- Facial expressions
- Emotional responses
- Eye movement
- Communication skills
- Level of interest
- Level of confidence
Do all video interview platforms use AI to evaluate a candidate? No.
Many pre-recorded video interviews – including all interviews conducted on VidCruiter’s platform – do not use AI as of today. Non-AI platforms offer a convenient solution that allows candidates to interview at a time that works best for them, and recruiters to evaluate the videos when they’re available to do so.
Leading video interview providers use a structured interview methodology, which includes a structured rating guide and standard rating scale. This helps to facilitate a fair and comparable interview process that allows recruiters to evaluate efficiently, every time.
Interested in a non-AI interviewing approach?
Learn about the available alternatives.
How does an AI interview work?
Within the recruiting process, AI “robots” can speak to and understand candidates through the use of conversational AI. Conversational AI is used to facilitate human-like chatbot conversations with candidates, and aspects of it are also used in AI-evaluated audio and video interviews.
How does conversational AI work? The AI-powered application receives the spoken word (or written text) and transcribes it into a machine-readable text. Next, the application uses natural language understanding (NLU), which is the first component of natural language processing (NLP), to understand the intent of the text. Generally, in interviews, the system isn’t required to make a response, so NLP is used to evaluate the text, based on AI algorithms.
In circumstances where a response is required (e.g. chatbot interactions), the system’s dialogue management (DM) formulates a response and converts it into an understandable format using natural language generation (NLG), the other component of NLP. The application delivers the response to the user via text, or text-to-speech, depending on the conversation style.
Lastly, the application uses machine learning (ML) to improve the responses for future interactions by accepting corrections and carrying context from one conversation to the next.
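The pipeline described above — transcription, intent detection, dialogue management, and response generation — can be sketched as a chain of stages. Every function below is a deliberately simplistic stub (real systems use ML models at each step), and the intents and replies are invented for illustration.

```python
# Hypothetical sketch of the conversational-AI stages described above.

def speech_to_text(audio):
    """ASR: transcribe audio into machine-readable text (stubbed)."""
    return audio  # pretend the input is already transcribed text

def understand_intent(text):
    """NLU: map free text to an intent label (stubbed keyword match)."""
    if "schedule" in text.lower():
        return "schedule_interview"
    return "unknown"

def generate_response(intent):
    """DM + NLG: choose and phrase a reply for the detected intent."""
    replies = {
        "schedule_interview": "Sure - what times work for you this week?",
        "unknown": "Sorry, could you rephrase that?",
    }
    return replies[intent]

text = speech_to_text("Can we schedule my interview for Friday?")
print(generate_response(understand_intent(text)))
```

Errors compound along this chain: if the ASR stage mis-transcribes a word, the NLU stage classifies the wrong text, and every downstream decision inherits the mistake.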
How conversational AI is used to interact with candidates
In theory, conversational AI is an efficient and convenient way to filter and engage with candidates. In real-world use, however, it has limitations. AI applications often fail to properly understand different languages, dialects, and accents, meaning some transcriptions are full of incorrect information, which can ultimately introduce bias. Even in text-based conversations, sarcasm, emojis, and slang can confuse AI and cause misinterpretations.
Would you know if you were speaking to a robot?
Probably not…
In a recent study, 72% of candidates thought that they had spoken with a recruiter, even though they were notified upfront that the chatbot was a virtual assistant.
Why are companies using AI in recruiting?
Companies seek out AI to assist with their recruiting for the following reasons:
“Speeds up time-to-hire”
AI takes over some processes, so fewer resources are needed for recruiting tasks.
“Improves candidate experience”
AI can be used to communicate with candidates in a timely manner and help improve the candidate’s interviewing experience.
“Improves quality of hires”
Hundreds of data points are collected for each candidate interaction, allowing AI to objectively identify top talent.
“Minimizes hiring bias”
AI software can be programmed to ignore demographic information such as gender, race, and age; however, aspects of AI can also introduce bias.
Will AI replace recruiters?
AI allows hiring teams to remove many of the repetitive, time-consuming processes from their recruitment workflows. Companies that produce AI recruiting software say this gives recruiters more time to focus on engaging with candidates and training hiring teams.
Are we heading towards a dystopian future where robots are in full control of corporate hiring? The short answer is no, nor should any company want AI to replace its human workforce. AI can’t replicate the social skills, empathy, and negotiating abilities needed for a successful recruitment workflow, particularly while AI recruiting is still in its infancy.
What are some challenges of using AI in recruiting?
AI technology is a double-edged sword in most use cases. Within recruiting, AI can help introduce efficiencies and eradicate certain time-consuming tasks. However, the software can also create new – sometimes serious – challenges to be aware of:
- AI needs a lot of data to be accurate
Machine learning (the component of AI that allows algorithms to be improved) requires a lot of data to accurately mimic the intelligence of humans. For example, AI that’s used to screen applications would need to screen potentially hundreds of thousands of resumes for a specific role to be as accurate as a human recruiter. Its intelligence is always limited to the data source available, therefore at first, the AI tool may be less than helpful, and even potentially biased.
- AI can learn bias from previous data
Companies that create AI recruitment software often share how AI can eliminate bias from the hiring process through its use of factual information, rather than the subjective, and sometimes biased decisions found in human evaluations. However, saying AI can eliminate bias is avoiding a large part of how AI works – it’s trained to find patterns in previous behavior. As mentioned above, AI extracts insights from large amounts of data, then makes predictions based on its findings. This is what makes AI recruiting so powerful, but it can also make its algorithms heavily susceptible to learning from past biases.
For example, if a company has more male than female employees, an AI-powered tool can easily favor male candidates to match the current identity of the company, so long as there isn’t a regularization term to stop the system from doing so. In a harder-to-detect example, say many employees graduated from the same university. This could be due to its proximity, or because of a referral program. The AI software could notice this trend, and form a pattern to favor graduates of that university or those with similar backgrounds. This pattern could end up being highly discriminatory towards non-college grads and certain demographics that were less likely to attend that specific university.
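The mechanism behind that first example can be shown with a toy model. The hiring data below is invented, and the “model” is deliberately naive — but it captures how pattern-matching on historical outcomes turns a demographic skew into a scoring penalty.

```python
from collections import Counter

# Invented historical hiring data: 8 of 10 past hires were male.
past_hires = ["M"] * 8 + ["F"] * 2

# A naive "model" that scores candidates by how often their group
# appears among past hires - a pattern, not a qualification.
hire_rates = Counter(past_hires)

def naive_score(group):
    """Score a candidate's group by its frequency among past hires."""
    return hire_rates[group] / len(past_hires)

print(naive_score("M"))  # 0.8
print(naive_score("F"))  # 0.2 - the historical skew becomes a penalty
```

Real models don't score group membership this directly, but the same pattern can leak in through correlated features (names, hobbies, universities), which is exactly what makes it hard to detect.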
Amazon’s AI hiring bias
In 2014, Amazon created its own AI-powered recruiting tool to help screen resumes, scoring them from one to five stars. Its algorithm used all resumes submitted to the company over a ten-year period to learn how to determine the best candidates. As there was a much lower proportion of women working in the company at the time, the algorithm picked up on the male dominance and presumed it was a factor in success.
Amazon made edits to the software to rectify the issue, but there was no guarantee that the machines wouldn’t sort candidates in another way that could be discriminatory. The project was abandoned a few years later.
- AI lacks the human touch
It goes without saying, but humans are complex. AI can screen a candidate’s skills and abilities relative to the role, but the system would struggle to analyze many aspects of a candidate’s emotional intelligence that could help them succeed in the company. For example, an AI interviewing platform that analyzes facial expressions and tone of voice along with the candidate’s response can’t determine exactly what a smile and a formal tone mean – is the candidate sincere and serious? Or are they trying to be friendly while their tone makes them seem distant? Perhaps it also depends on the question asked. AI doesn’t yet have the technology to fully understand the nuances of social cues, and cannot reliably map these signals to the presence or absence of specific skill sets.
Secondly, AI cannot build a rapport with a candidate. As we’re currently experiencing a candidate-driven market, companies need to be able to truly connect with top talent – failure to do so could result in high candidate drop-off. To win them over, recruiters need to show interest and empathy, and remember details from previous conversations – even if AI could replicate these traits, the system would entirely lack authenticity.
- AI can misinterpret human speech
AI recruiting tools that screen, interview, or evaluate applicants use automated speech recognition (ASR) software – the same technology found in voice recognition services. This software listens to the applicant’s spoken response and converts the voice data into computer-readable text data. In theory, this allows companies to rely on AI to capture a candidate’s complete response and evaluate them fairly and objectively.
However, anyone who’s used a leading voice recognition service, such as Alexa, Siri, or Google, will know that not every word is interpreted correctly – in fact, entire sentences can be misinterpreted, leading to an incorrect response from the platform. Certain minority groups are more prone to these errors. A study conducted by Stanford University found that five leading ASR systems (Apple, IBM, Microsoft, Google, and Amazon) showed substantial racial discrepancies, with an average of 35% of words incorrectly transcribed for Black speakers compared with 19% for white speakers.
If the leading ASR systems can’t always recognize and contextualize voice commands, how can an AI software company, with far less funding, create an algorithm that can properly analyze lengthy and often complex interview responses? Unfortunately, they can’t. Even a leading AI-driven interviewing provider states that its software has a word error rate (WER) of ‘less than 10%’ for native American English speakers – so roughly 1 in every 10 words is incorrectly transcribed. The WER was higher for speakers outside of the U.S., depending on their country of origin (e.g. 12% WER for Canadian English speakers and 22% WER for participants with a Chinese accent).
This means that in an AI-powered interview, the software will fail to understand roughly 10% of a candidate’s response, and may misinterpret up to a quarter of the response from a non-native English speaker.
How does an AI interviewing platform make errors?
Let’s say a candidate speaks 17 words in their interview response, and the automated speech recognition makes 3 errors. That works out to an 18% word error rate (3 ÷ 17 ≈ 0.18).
Interview question:
“Tell me about your educational background…”
Spoken word:
“So I got a 1350 on my SAT which got me into UT Austin to study psychology…”
Computer-generated transcription:
“So I got a 1350 on my AC, which got me into you see Austin to study psychology…”
Outcome:
The system failed to acknowledge the candidate’s credentials or university experience due to misinterpreting abbreviations.
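The word error rate in the example above can be computed directly. WER is conventionally defined as the word-level edit distance (substitutions + insertions + deletions) divided by the number of words in the reference; the sketch below implements it with a standard dynamic-programming edit distance.

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions)
    divided by reference length, via word-level Levenshtein distance."""
    ref = reference.lower().replace(",", "").split()
    hyp = hypothesis.lower().replace(",", "").split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

spoken = "So I got a 1350 on my SAT which got me into UT Austin to study psychology"
heard  = "So I got a 1350 on my AC, which got me into you see Austin to study psychology"
print(round(wer(spoken, heard) * 100))  # 18
```

The three edits here are “SAT” → “AC” (substitution) and “UT” → “you see” (substitution plus insertion), giving 3 ÷ 17 ≈ 18% — matching the worked example above.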
- It can be hard to get buy-in
Not everyone is interested in using AI within the recruitment process – many companies are comfortable with traditional, or less intrusive, hiring methods and aren’t looking for a change. Additionally, candidates are often hesitant to complete an AI-based interview:
- Push-back from HR teams
Whenever people are asked to make technological advances in their processes – even if they’re told it will make their lives easier – it comes with an inevitable push-back. Any change requires more training and new processes. Additionally, recruiters may be hesitant to embrace AI as they’re fearful of their jobs becoming automated and, ultimately, obsolete.
“AI is only as good as the input that it has. We’ve seen very public issues of organizations using it in hiring decisions and there was bias in the process. I don’t know if AI is necessarily ready to make hiring decisions, and what’s more interesting is how many people are prepared for and willing to allow it to make those decisions.”
Jon Thurmond
HOST OF #HRSOCIALHOUR HALF HOUR PODCAST
- Failing to win over candidates
There’s a lot of conflicting information regarding how candidates feel about AI in recruiting. One survey from 2016 (which is still heavily cited by AI-powered recruitment companies today) asked 200 candidates how they felt about AI recruitment: 58% were comfortable interacting with AI technologies to answer initial questions. However, in 2018, a survey of 2,000 Americans reported that 88% would feel uncomfortable if an AI interview application were used during their candidate screening process.
For an authentic, up-to-date response, it doesn’t take long to find direct comments on forums such as Reddit, with the vast majority providing negative feedback. Users call AI interviewing “dehumanizing”, “the worst interviewing experience”, and “a pure waste of time”.
The legal and ethical implications of using AI in recruitment
AI usage in recruitment has been on the radar of U.S. federal regulators for a long time, but in recent years, issues surrounding AI usage have gained a lot of traction. Back in 2020, ten U.S. Senators sent a letter to the EEOC (Equal Employment Opportunity Commission) about the use and impacts of hiring technologies and the commission’s ability to conduct much-needed oversight and research on the topic.
In 2022, the EEOC and the U.S. Department of Justice issued guidance to employers explaining how AI can lead to hiring discrimination against those with disabilities, potentially violating the Americans with Disabilities Act.
The guidance document, “Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring”, explains how AI technology tools have been found to be discriminatory, including examples such as job advertisements shown only to targeted groups, game-like online tests that assess skill sets, and video interview software that uses AI to measure speech patterns or facial expressions.
The EEOC and the U.S. Department of Justice warn how AI in recruitment leads to discrimination and can violate the ADA.
Laws being created to regulate AI in recruiting
As AI becomes more prolific within the recruiting space, laws and regulations are being implemented to safeguard the rights of candidates. Here are a few examples of how lawmakers are reacting to AI recruiting tools:
- In 2020, Illinois became the first state to regulate the increasing usage of artificial intelligence in recruitment practices, cracking down on the use of AI in video interviews with the introduction of the “Artificial Intelligence Video Interview Act“.
- Also in 2020, Maryland passed a law that requires notice and consent from candidates prior to using facial recognition technology during a job interview.
- In 2022, the European Commission proposed regulations to address the use of AI in the EU. The rules will use a risk-based approach, with AI recruitment considered “high risk”.
- Starting in early 2023, New York City employers will be banned from using AI to screen candidates unless the technology has undergone a “bias audit” within the year prior to its use.
Gender and racial bias – a big AI problem beyond recruitment
In 2021, the Berkeley Haas Center for Equity, Gender, and Leadership reported that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias.
A 2018 study by MIT and Stanford University showed that facial recognition algorithms had a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.
Using AI? Here’s what you need to be doing…
If your company uses AI within its recruitment lifecycle, here are a few things you should consider to ensure you’re in control and in compliance, and you’re providing a transparent hiring experience:
- Fully understand the algorithms being used
In the same way there are guidelines surrounding how a candidate is traditionally screened and evaluated, recruiting teams and other stakeholders should be fully aware of the factors being considered by the AI algorithm. Consider all inputs being fed into the screening and evaluating software – is all information job-related? Also, look at the data being created by the system – does it comply with data governance standards?
- Audit AI tools on a regular basis
Tools that use AI can adapt to their own findings, meaning algorithms can progress over time. Therefore, teams can’t simply conduct an initial analysis to ensure the results aren’t biased or disadvantageous to a specific group. These tools need to be regularly audited to make sure the AI isn’t unintentionally learning immoral or even illegal algorithms from the data it’s receiving.
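A recurring audit like the one described can start with a simple adverse-impact check. The sketch below applies the four-fifths rule from the EEOC's Uniform Guidelines: a group whose selection rate falls below 80% of the highest group's rate is flagged for review. The applicant and advancement counts are invented for illustration.

```python
# Invented audit data: applicants and advancements per group.
outcomes = {
    "group_a": {"applied": 200, "advanced": 60},  # 30% selection rate
    "group_b": {"applied": 150, "advanced": 27},  # 18% selection rate
}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` (the
    four-fifths rule) of the highest group's selection rate."""
    rates = {g: d["advanced"] / d["applied"] for g, d in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top < threshold for g, rate in rates.items()}

print(adverse_impact_flags(outcomes))
# group_b's ratio is 0.18 / 0.30 = 0.6, below 0.8 -> flagged
```

Because the algorithm keeps learning, a check like this needs to run on every audit cycle, not just at deployment — a tool that passed last quarter can drift into adverse impact this quarter.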
- Understand that outsourced tools don’t eliminate liability risks
There are only a select few companies with the resources to develop AI tools internally, so most companies use outsourced, third-party recruiting solutions. However, using third-party software doesn’t exempt companies from liability risks, such as allegations of discriminatory hiring practices found within the software. Companies need to ensure recruiters and third-party vendors are compliant with all existing, relevant employment laws.
- Share how AI is used within your hiring process
Commonly, candidates want to know the full ins and outs of the recruitment process – not only to help them succeed but also to build trust. As a matter of transparency, think about what is communicated about the use of AI in the hiring process. Consider informing candidates ahead of time that AI will be used to screen or evaluate their application (in some cases, this could be a legal requirement).
Leading companies taking the high ground
Walmart, Meta, Deloitte, and IBM are some of the leading companies that have joined the newly formed Data & Trust Alliance – an organization that’s helping companies to learn, develop and adopt responsible data and AI practices. Their first initiative is helping companies evaluate recruitment vendors by detecting and monitoring bias in their algorithms.
What are some efficient alternatives to using AI in recruiting?
Using AI within a recruitment process has some benefits, but it can also present serious challenges and risks. Fortunately, there are ways to create a more efficient hiring experience without the use of AI. We’ve looked at the top reasons teams turn to AI-powered recruiting and provided solutions that offer comparable benefits, without the implications of AI.
Recruiting goal: Speed up time-to-hire
Solutions available without using AI:
- Pre-recorded video interviewing: Evaluate candidates’ pre-recorded interviews at any time
- Automated scheduling: Real-time availability shared with candidates
- Automated reference checking: Reference forms are automatically sent to referees
- Automated communication/reminders: Timely reminders and email/SMS notifications sent to all parties

Recruiting goal: Improve candidate experience
Solutions available without using AI:
- Applicant empowerment: Allow candidates to select how they’re most comfortable conducting their interview – virtual or on-site
- Automated communication/reminders: Provide consistent communication to keep candidates in the know
- Mobile-first platform: Allow candidates to apply and interview via mobile

Recruiting goal: Improve the quality of hires
Solutions available without using AI:
- Advanced searchability: Run mass Boolean searches of applications and resumes for job-specific keywords and skill sets
- Pre-recorded video interviewing: Get deep insights into candidates from day one
- Structured interview methodology: Understand candidates better with a predictive validity of up to 65%
- Skills testing and proctoring: Assess candidates fairly using proctored skills tests or work samples

Recruiting goal: Minimize hiring bias
Solutions available without using AI:
- Built-in rating guides and rating scales: Evaluate fairly within the interview with HR-approved evaluation tools
- Structured interviewing: A methodology that ensures an equal interview for every candidate
- Diversified evaluators: Interviews can be recorded and reviewed by a diversified panel
- Accessibility-friendly system: Candidates can use screen readers and opt in to other accessibility features
Does AI have a place in recruiting?
AI has incredible potential when it comes to HR and recruiting. The current software has shown it can tackle long-standing, common recruiting challenges by speeding up time-to-hire and eradicating low-value administrative tasks. However, AI software within recruiting is still in its infancy, and because of the technical developments still required, using AI has created its own set of challenges.
It’s likely that regulatory issues and allegations of unfair hiring algorithms will plague AI-powered recruitment software for some time. Companies that use AI in their recruitment processes will likely face severe push-back internally and externally as people become more aware of AI’s presence and power.
AI has a place in recruiting when technological advancements allow the software to exceed the screening and evaluating methods used by human recruiters. It’s unlikely that this level of technology will exist in the next few years, or possibly even in the next decade. Until then, there are many non-AI recruitment solutions available that can offer the same benefits – and more – without the serious drawbacks.