
Why You Need an AI Framework for Recruiting

Written by Lauren Barber

Reviewed by VidCruiter Editorial Team

Last Modified Mar 25, 2024

An artificial intelligence (AI) framework for recruiting captures an organization’s philosophy on AI use in hiring. With a framework in place, you can make consistent decisions about how AI should and shouldn’t be used throughout the recruiting process.

To create an AI recruiting framework, you need to start by conducting an audit to understand what AI tools are being used, how, and why. After all, it’s hard to create effective policies when you haven’t established a baseline of how AI is currently being used.
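
As a starting point, here is a minimal sketch (in Python) of the kind of inventory record such an audit might produce. The field names and the example tool are hypothetical; adapt them to what your audit actually finds.

```python
# A hypothetical audit record for one AI tool; adapt the fields as needed.
from dataclasses import dataclass

@dataclass
class AIToolAuditRecord:
    tool_name: str                   # e.g., a resume-screening service
    hiring_stage: str                # sourcing, screening, interviewing, ...
    purpose: str                     # why the team adopted it
    data_accessed: list[str]         # candidate data the tool touches
    decision_authority: str          # "human", "ai", or "blended"
    vendor_bias_audit: bool = False  # has the vendor audited it for bias?

inventory = [
    AIToolAuditRecord(
        tool_name="ResumeRanker",  # hypothetical tool name
        hiring_stage="screening",
        purpose="shortlist applicants by keyword relevance",
        data_accessed=["resume text"],
        decision_authority="blended",
    ),
]

# Flag gaps the audit should surface before policy-writing begins.
for record in inventory:
    if not record.vendor_bias_audit:
        print(f"Follow up: {record.tool_name} has no bias audit on file.")
```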

Key Considerations For AI Recruiting

Two main principles guide the development of your AI framework for recruiting: people-first processes and decision-making authority. Your position on these two concepts shapes your framework, which in turn informs your policy-making and drives your everyday decisions.


People-First Processes

A people-first process prioritizes equal treatment and protects candidate, employee, and organizational well-being.

Here are some best practices to help you create people-first processes.


Privacy and Confidentiality

Go above and beyond to protect candidate privacy.

  • Ensure that AI tools only access relevant information and avoid invading candidates’ privacy unnecessarily (a data-minimization sketch follows this list).
  • Implement AI tools that follow data protection regulations.
  • Have a robust personal information protection policy before using AI in selection processes.
  • Create an organizational data protection policy to protect proprietary company information.
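
For the first practice above, here is a minimal data-minimization sketch in Python. The allowlisted fields are assumptions for illustration; in practice they should come from the role’s job analysis.

```python
# Hypothetical field names; tailor the allowlist to the role's job analysis.
JOB_RELEVANT_FIELDS = {"skills", "work_history", "certifications"}

def minimize(candidate_record: dict) -> dict:
    """Return a copy containing only the fields the AI tool needs."""
    return {k: v for k, v in candidate_record.items() if k in JOB_RELEVANT_FIELDS}

candidate = {
    "name": "A. Candidate",
    "date_of_birth": "1990-01-01",          # not job-relevant; withheld
    "skills": ["Python", "SQL"],
    "work_history": ["Analyst, 2018-2023"],
}

payload = minimize(candidate)  # only skills and work_history are passed on
print(payload)
```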

Transparency

Candidates deserve to know how AI is being used and should have the right to opt out of being analyzed by it. It’s impossible to eliminate bias from AI selection and evaluation tools, so candidates who are vulnerable to that bias should have the option to opt out.

  • Tell candidates exactly how you use AI tools in your assessment process. Any scoring or evaluation metrics should be explainable (a sketch of an explainable score report follows this list).
  • Offer the opportunity to select a recruitment pathway where the candidate does not come in contact with AI.
  • Give everyone on the hiring team the information they need to understand how AI is used in the recruitment process and provide accurate answers to candidate questions as they arise.
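
As a sketch of what “explainable” can mean in practice, the Python below assembles a score report that ties every number to a named, job-related criterion. The criteria and weights are hypothetical.

```python
# Hypothetical criteria and weights; in practice these come from a job analysis.
CRITERIA_WEIGHTS = {"sql_proficiency": 0.5, "communication": 0.3, "domain_knowledge": 0.2}

def explainable_score(criterion_scores: dict) -> dict:
    """Combine per-criterion scores into a total, keeping the breakdown."""
    total = sum(CRITERIA_WEIGHTS[c] * s for c, s in criterion_scores.items())
    return {
        "total": round(total, 2),
        "breakdown": [
            {"criterion": c, "score": s, "weight": CRITERIA_WEIGHTS[c]}
            for c, s in criterion_scores.items()
        ],
    }

report = explainable_score(
    {"sql_proficiency": 4.0, "communication": 3.5, "domain_knowledge": 4.5}
)
# report["breakdown"] gives one entry per job-related criterion, so a
# recruiter can explain exactly where the total came from.
print(report)
```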

Diversity, Equity, Inclusion, and Accessibility (DEIA)

AI-driven recruiting should promote diversity, not inhibit it.

  • Treat every candidate the same way by only assessing criteria directly related to job performance.
  • Remain vigilant and monitor all AI-based recruiting processes for biases. If something seems off, it might be! Audits don’t detect everything.
  • Provide equal access to the recruiting process, regardless of technological proficiency. Offer on-demand support and all reasonable accommodations that may be necessary.

Being intentional about these three things will strengthen your employer brand and help you make smart policies and decisions around AI moving forward.

Decision-Making Authority

AI-augmented recruiting tools are now widely used to make decisions in recruiting. According to Pew Research, “These tools will provide ever-larger volumes of information to people that, at minimum, will assist them in exploring choices and tapping into expertise as they navigate the world.”

So who (or what) is making hiring decisions? Humans or AI? Or both? The answer depends on the tool and on how your process is structured.

Decision-making authority can exist on a spectrum with human control on one end and AI autonomy on the other. In the middle of this spectrum, there is flexibility for the decision-maker to be different based on the context, or to use AI-generated data to complement human decisions.

Assigning agency, on the other hand, is more concrete. Agency represents complete autonomy over a specific decision or set of decisions in the hiring process. Don’t be fooled: agency is always assigned by humans, so humans are completely responsible for the AI’s actions, even if they take place independently.

Human Agency

Humans are the sole decision-makers with complete visibility over the process. AI may be involved in collecting, processing, and arranging information for human decision-makers, but it does not make the determinations itself. Pay attention to how AI arranges, manipulates, or prepares the information.

Human agency can expose a process to the decision-maker's personal biases because it essentially relies on human experience and intuition. However, this can be counteracted with emotional intelligence, DEIA training, interview training, and empathy.

AI Agency

Humans participate in the process (to some extent), but the decisions are made by AI. AI choices are based on algorithms and foundation models that can be trained or optimized to modify outcomes, often involving data analysis, pattern recognition, and predictive modeling. These systems are likely to be highly regulated in the future, and in many jurisdictions they must already be audited for potential bias.

If adopted at scale, AI agency has the potential to introduce systemic bias across many organizations sharing the same or similar components, code, data treatments, data sets, and foundation models.

There are concerns in the scientific and regulatory community about how much agency should be given to AI. Keeping records of human oversight and intervention over decision-making processes may be valuable if you choose AI agency. Be sure to develop an organizational policy outlining the boundaries around AI agency before using any tools.


Recruiting tools are often designed for centralized rather than personalized control (Pew Research). In practice, that means agency over a task like resume screening is assigned wholesale to either humans or AI, making it hard to customize control for an individual job scenario. Many AI tools are simply not designed with blended agency in mind.

Embrace The Decision-Making Spectrum

The spectrum of human to AI decision-making should be front and center when approaching an AI tool. Recruitment processes with blended agency maintain structure where it matters, while still being dynamic and nuanced.

Different tasks within the selection process are better served by human or AI decision-making. Blended agency allows organizations to personalize decision-making while also remaining informed, logical, and aligned with the greater organizational strategy.
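
To make blended agency concrete, here is a minimal Python sketch in which each hiring stage declares who holds decision authority, and AI-proposed decisions wait for human confirmation where the stakes warrant it. The stage names and assignments are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical stage names and assignments; adjust to your own process.
DECISION_AUTHORITY = {
    "interview_scheduling": "ai",      # low stakes: automate fully
    "resume_screening": "blended",     # AI ranks, a human confirms
    "final_hiring_decision": "human",  # high stakes: humans only
}

def decide(stage, ai_recommendation, human_decision=None):
    authority = DECISION_AUTHORITY[stage]
    if authority == "human":
        if human_decision is None:
            raise ValueError(f"{stage} requires a human decision")
        return human_decision
    if authority == "blended":
        # AI proposes; the call stays pending until a human confirms or overrides.
        return human_decision if human_decision is not None else "pending_review"
    return ai_recommendation  # "ai": autonomous, but should still be logged

print(decide("resume_screening", ai_recommendation="advance"))  # -> pending_review
```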

Decision-making is getting more complicated

A 2021 Gartner survey found that 65% of decisions involve more stakeholders or choices than they did two years ago. The business world is becoming more complex and uncertain, and so is decision-making.

When you use AI, you can’t always explain why it makes certain decisions. While this may seem like it removes a level of responsibility, organizations are liable for the decisions their AI tools make. As a result, there is greater regulatory pressure to be able to explain or justify decisions.

For example, the California Supreme Court ruled that businesses performing certain job-related functions for employers can be held directly liable for discrimination. The decision affects several industries, from AI vendors to recruiters and screeners, and it has nationwide reach: organizations only need to work with California companies, not employ California workers, for the law to apply to them.


Finding the Right Blend of Human and AI Agency

The best way to understand where your organization sits on the decision-making spectrum is to look at your overarching goals.

If your recruiting goal for the organization is to make faster decisions, you’re likely to be more aligned with AI agency. In practice, this would be a recruiting tool powered by autonomous AI.

If you want to prioritize human contact and involvement, you would more likely align with human agency. This would mean having a fully manual recruiting process without AI inputs.

If your goal is to make the best possible decisions and continuously improve your decision-making processes, you’d likely want to embrace blended agency.

The Risks Of Recruiting Using AI

There are many potential benefits of using AI in recruitment. However, when you incorporate AI into your hiring process, you accept the performance, security, economic, societal, and enterprise risks that come with it.

If any of these potential risks turn into real issues, it could damage your organization’s brand and reputation.

  • Unknowingly making poor hiring decisions
  • Unknowingly leaking sensitive data
  • Compromising candidate experience by creating a negative sentiment
  • Exposing the organization to illegal activities
  • Perpetuating social inequalities
  • Creating a homogenous workforce

The Artificial Intelligence Ethics Framework For The Intelligence Community recommends considering the scale and likelihood of the risk and asking, “Do likely negative impacts outweigh likely positive impacts?” (Intelligence.gov)

To understand your organization’s appetite for risk, look at the type of risk introduced by specific AI functions.

Process-Specific Risks

In your AI framework for recruiting, consider detailing your organization’s level of AI involvement for every process listed below.

Candidate Sourcing, Screening, Evaluations, and Filtering

Any of these processes, including chatbot interviews, can be affected by bias inherent in the AI system and by a lack of transparency around what your AI is doing and why. Tools that rely on shared data can also introduce bias into your process or perpetuate systemic bias.

Be especially cautious with evaluations: even vendors may not always know how the AI recognizes, categorizes, or weighs a particular criterion. Many vendors recognize the danger of using AI in high-risk situations that could impact people’s individual rights or well-being. For example, OpenAI and Google both explicitly recommend against using AI to assess candidate eligibility in their terms and conditions.

Candidate Experience

AI can elevate the recruitment experience or create negative sentiment, depending on how you use it. Most automated processes that contribute to a better candidate experience, including candidate communications, interview scheduling, and status updates that help interviewers keep the process moving forward, are relatively safe.

Be cautious of using chatbots to assess or handle candidate concerns and questions, as the cost of creating a negative sentiment might not be worth the time saved. Make sure it’s easy to reach a person and get help.


Candidates experience automation fatigue, too

Sometimes, the technology that intends to help becomes a barrier. Consider the last time you called a customer service line for help. Maybe you patiently went through a few automated prompts, hoping the answer you needed was buried somewhere in the menus, only to end up in an endless loop trying to figure out how to speak to someone.

Now put a candidate in that situation, and replace the automated phone system with a chatbot interview, where the stakes are higher. While this interview type has its benefits, much is lost in translation during a chatbot interview (just think how often Siri, Alexa, or Google Assistant misunderstands you). Save candidates the frustration and make it easy to reach a real person if needed.

Content Curation

Using AI to produce job descriptions, interview questions, and rating guides can be safe, but the accuracy can be questionable, and there is potential to introduce bias.

In order to follow best practices, you would need to validate the interview content produced by AI. First, you’d need the role-specific competencies identified from a well-executed job analysis. From there, you’d ideally have an industrial and organizational psychologist review and validate everything to ensure the language is inclusive and unbiased.

It may be faster and easier to produce interview content using AI, but you still want to have it reviewed and validated by an expert to ensure it’s right for the role.

Candidate Sentiment Analysis and Personality Tests

AI tools that evaluate candidates using facial recognition and speech pattern analysis can introduce bias and result in disproportionately unfavorable assessments. While these practices can appear to generate efficiencies, they’ve been shown to perform poorly and have resulted in lawsuits against vendors and hiring organizations.

Bias Identification And Mitigation Tools

Some argue that the potential to introduce bias outpaces the ability to monitor and mitigate bias. These tools are often ‘black-box’ technologies. The lack of transparency makes it difficult for human operators to trust or challenge the results.

Algorithmic bias detection tools and the 4/5 rule

Many algorithmic tools that claim to detect bias rely on the 4/5 rule.

But how accurate are these tools? Do they really help prevent potential discrimination? And is the 4/5 rule the best way to legally establish the threshold for bias?

The answer is no to all of the above. Here’s why:

Issue 1: According to the Equal Employment Opportunity Commission, the 4/5 rule is merely a “rule of thumb.”
This calculation is a rudimentary measuring instrument that isn’t meant to prove adverse impact. It’s a preliminary screen for discrimination whose findings, if statistically significant, would prompt a deeper investigation.
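
For illustration, here is a minimal Python sketch of the four-fifths calculation itself. The group names and counts are hypothetical; note that a flagged ratio is only a signal to investigate, not proof of adverse impact.

```python
# Hypothetical applicant and selection counts for two groups.
applied = {"group_a": 100, "group_b": 80}
selected = {"group_a": 48, "group_b": 24}

# Selection rate per group, compared against the highest group's rate.
rates = {g: selected[g] / applied[g] for g in applied}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    verdict = "review further" if impact_ratio < 0.8 else "passes the rule of thumb"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {verdict}")
```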

Issue 2: These tools can miss potential biases in non-selection.
Is your selection method job-relevant and equitable if it misses qualified people?

Selection at all stages of the hiring process needs to be assessed for bias. Algorithmic bias detection tools assess the applicants who get advanced, but they don’t consider the applicants who were not selected (What if the algorithm is missing out on good applicants?). You might think you’re protected, but because these tools don’t examine the very top of the recruitment funnel, there’s a lot that can be missed.

Issue 3: The 4/5 rule doesn’t hold up in court as proof of fair practices and processes.
In discrimination cases, contextual factors typically come into play, and issues get resolved based on statistical significance, not the 4/5 rule. The evidence produced by the 4/5 calculation is often an oversimplification for two reasons: it misses bias in non-selection, and a single metric can’t stand in for the complexities of the law that inspired it.
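
By contrast, here is a sketch of the kind of statistical-significance check that typically carries weight: a two-proportion z-test on the same hypothetical counts, using only the Python standard library.

```python
from math import erf, sqrt

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """z statistic and two-sided p-value for a difference in selection rates."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    p_pool = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Same hypothetical counts as the sketch above.
z, p = two_proportion_z(48, 100, 24, 80)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the gap is unlikely by chance
```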

All these issues multiply in cases where people are not overseeing the process, and AI is reproducing the same decision-making procedure for each applicant.


Current and Future Legal Compliance

Take a proactive approach to stay compliant and future-proof your adoption of AI for recruitment.

Understanding Current Regulatory Landscapes

The use of AI in the recruitment space has been relatively unregulated until recently. Part of responsible AI use is mapping out the overlapping jurisdictions that may affect how and if your organization adopts AI processes into your recruiting strategy.

Ensure your AI use for recruitment complies with:

  • Local and regional laws
  • National laws
  • International regulations
  • Industry-specific bodies and associations

Multinational corporations need to be aware of international regulations regarding data privacy, AI ethics, and employment. Work towards a centralized compliance strategy that works for all jurisdictions, starting with the strictest.

Anticipating Future Regulations

Regulations don’t change swiftly, but they are constantly evolving.

Here are a few strategies to stay ahead:

  • Pay attention to public consultation processes and monitor industry publications.
  • Establish a person or team to monitor and interpret emerging AI-related regulations.
  • Review and adjust the AI framework in accordance with any regulatory changes.

Build redundancies into your process for protection in case a practice you rely on changes or is later deemed unjust or illegal. It’s better to be safe than caught on the back foot.


Candidate Use of AI

With the wide-scale adoption of generative AI tools, it’s inevitable that candidates will use them in the recruitment process. Candidates commonly use AI for resume optimization, content generation (for the job application), interview preparation, and to create portfolio or assessment submissions.

This raises an important question: Will you ban candidate use of generative AI in the hiring process, or should you devise strategies to ensure an authentic application process?

Before hastily banning the use of generative AI in your hiring process or trying to ignore it entirely, be mindful that it can impact the diversity of your applicant pool.

Whatever position you take on the use of generative AI, the most important thing is to make the terms of use crystal clear, both internally and externally, especially if internal and external expectations differ. For example, candidates shouldn’t be told they can’t use specific AI tools in their application when they’re expected to know how to use those same tools in the role.

If employees use AI tools, you might want to consider allowing candidates to use them too (with the same guardrails in place).


Is AI detection software the solution for screening out candidates who use AI?

No, AI software can’t reliably screen out candidates who use AI (yet). According to USA Today, the creators of popular AI detection tools, including OpenAI, Turnitin, and GPTZero, have warned against making definitive decisions with their detectors due to possible inaccuracies in the software.

Communicating Your Position on Generative AI to Candidates

Clearly communicate to candidates how your organization uses AI in the recruiting process and your expectations of them. Transparency builds trust. Express your stance in the job description or application and other phases of the recruiting process.

If your organization prioritizes human-AI collaboration, it's essential to show this. Likewise, if individual authenticity is a core value, ensure it's genuinely reflected in your communications.

Considerations When Writing AI Policies

Once you determine where your organization and its activities fit within the broader landscape of regulations, industry norms, and best practices, you'll need to articulate a vision for how you will safely and effectively use AI.

An AI policy is a practical set of guidelines or rules that aligns with your AI framework for recruiting; it doesn’t need to be a comprehensive strategic document. AI use, especially in recruiting, can be complex, so write your policies in a way that any employee can understand and put into action.


Practical implications of AI-based recruiting

A 2022 research review that looked at the use of AI in recruiting had this to say:

“Even if AI software vendors advertise the avoidance of human bias, algorithms may be biased due to technical shortcomings, such as biased training sets or algorithmic design. Problems become even more complex when algorithms are based on ML and develop individually, so that developers are no longer able to explain how the AI has come to its decisions.

Moreover, companies should be aware that the validity of the decisions made is not only determined by the AI itself, but also the underlying criteria used to predict job performance, which may not be scientifically validated.”

Anna Lena Hunkenschroer and Christoph Lütge,

Journal of Business Ethics, Vol. 178, No. 4

Considerations When Implementing An AI Recruiting Framework

Once your AI recruiting framework is complete, there are a few final things to do before distribution.

Do a Regulatory Review and Compliance Check

If you haven’t already done this, evaluate your process to ensure your use of AI complies with regulations, laws, and other organizational policies outside of your department. Consider all the locations where you do business, as there are AI-driven processes (using facial recognition for candidate screening, for example) that are fine in some places but not others.

Develop a Response Plan

Recruiting involves large amounts of sensitive personal data, and the nature of machine learning and AI-assisted tools is to consume, process, and distribute large amounts of data. It’s imperative to have a well-designed response plan to carry out in an emergency.

Your response plan might include immediate corrective actions, transparent communication with affected parties, ownership and accountability of the process and the remedy, and strategies to prevent future occurrences. Proactively build a team in advance and work with your communications or marketing department to develop scenario-specific key messages that reflect what’s in your AI framework.

Create a Process for Continuous Improvement

Put a process in place to regularly assess the effectiveness of your AI recruiting framework and any tools you may be using. Incorporate feedback from candidates, recruiters, and hiring managers as much as possible.

Stay Up-To-Date With Industry and Regulations

AI and its applications in recruiting are evolving rapidly. Regularly update the AI framework based on new research, tools, and best practices.

Inform Stakeholders About the Distribution Plan

Distribute the policy internally to ensure adoption across all teams. To encourage staff to take the policy seriously, consider requiring them to sign and confirm they’ve read and understood the policy.

Having policies in place increases accountability and prevents shadow AI, which are “the AI systems, solutions, and services used or developed within an organization without explicit organizational approval or oversight” (Tech Policy Press).

Ensure hiring managers, HR personnel, and candidates know how AI is used in the processes they come in contact with. Staff need to be clear on what they are responsible for and what responsible AI use looks like.

Preparation needs to come before AI adoption

According to an article for Harvard Business Review by Tim Fountaine, Brian McCarthy, and Tamim Saleh, leaders need to get employees on board when introducing AI into the business.

Here are four things they recommend devoting early attention to:

  1. Choose the right AI tools based on feasibility, time investment, and value.
  2. Explain to employees and stakeholders the vision, why now, and what’s in it for them.
  3. Anticipate unique barriers to change, like the fear of becoming obsolete (Colorado Biz).
  4. Budget time for integration and adoption in addition to budgeting money for the technology.

Conclusion

Accenture’s 2022 Tech Vision research found that 35% of global consumers trust how organizations implement AI, and 77% think organizations must be held accountable for their misuse of AI.

Creating a framework for AI-based recruitment is the safe and smart way to prepare to use AI tools, set internal expectations, protect candidates, and respect the ethical, social, and legal responsibilities of using this technology.
