Striking the Balance Between Automation and Human Decision-Making
As AI continues to revolutionize the recruitment industry, one of the biggest challenges we face is figuring out where the line should be drawn between automation and human decision-making. Across organizations, recruitment leaders are grappling with a particular question:
How far should we allow AI to go in the hiring process?
The consensus in the industry seems to be that AI is invaluable when it comes to streamlining administrative tasks – scheduling interviews, aggregating notes, or even crafting automated rejection emails – but what happens when AI starts making decisions that directly impact people’s careers? Should AI be used for that level of decision-making? And if so, how do we ensure that it’s done fairly, transparently, and with the candidate’s best interests at heart?
It’s a conversation that we need to be having more openly and one that, I believe, requires a balance between technological efficiency and human empathy.
AI’s Role in Enhancing Administrative Efficiency
Let’s start by acknowledging the areas where AI has already proven to be a game-changer in recruitment: the automation of repetitive, time-consuming tasks. For instance, I was recently speaking with a leader in the talent space about this, and his team has developed a tool that simplifies the job spec creation process. A hiring manager can input a job title and a code, and the system spits out a complete job description, suggested interview questions, and even a guide to the market. This tool demonstrates how AI can take on administrative tasks that would otherwise eat up valuable time.
AI is also being used effectively in scheduling interviews, summarizing interview notes, and automating the dispositioning process. I know of recruiting teams whose AI tools generate hyper-personalized rejection emails, using data from the recruitment system to explain why a candidate didn’t make the cut. These are practical, no-brainer applications of AI that free up recruiters to focus on what really matters – building relationships with candidates and hiring managers.
In this sense, AI acts as an enabler. It handles the busywork, so recruiters don’t have to. As the above examples show, the integration of AI into these processes has been widely accepted and even embraced. The administrative burden is lightened, and candidates benefit from a more streamlined, efficient hiring process.
Where Do We Draw the Line?
But then we get to the more contentious issue: using AI for decision-making. For instance, some companies are experimenting with using AI to automatically reject large portions of applicants in high-volume hiring scenarios. If you receive 2,000 applications, AI might rank and reject the bottom 70% based on a set of specific criteria. Here’s where the ethical question arises: Is AI making decisions that should be left to humans?
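To make the mechanics concrete, here is a minimal sketch of the triage described above: rank applicants by a score and automatically reject the bottom 70%. The scoring inputs and the `triage` function are hypothetical stand-ins – real systems use far more complex (and more opaque) models, which is exactly where the ethical question bites.

```python
def triage(applicants, scores, reject_fraction=0.70):
    """Rank applicants by score (descending) and split off the bottom reject_fraction.

    This is an illustrative sketch, not a real screening system: the scores
    here are assumed to come from some upstream AI model.
    """
    ranked = sorted(zip(applicants, scores), key=lambda pair: pair[1], reverse=True)
    cutoff = int(len(ranked) * (1 - reject_fraction))  # how many advance
    advanced = [name for name, _ in ranked[:cutoff]]
    rejected = [name for name, _ in ranked[cutoff:]]
    return advanced, rejected

# 10 applicants with model-assigned scores; only the top 30% advance.
advanced, rejected = triage(
    ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
    [88, 92, 25, 70, 55, 95, 40, 60, 30, 81],
)
```

Note that no human ever sees the seven rejected applications – the cutoff alone decides. That is the sense in which "just ranking" is already decision-making.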
Thom Staight, GM of Global Talent Acquisition in EMEA & Asia at Microsoft, spoke about this at one of our recent SocialTalent Live events. Microsoft made a deliberate choice not to use AI for decision-making in hiring given the risks associated with it. The technology can do it, but they’ve decided that it’s not appropriate. “Today, we absolutely wouldn’t use an AI tool to make a decision about a human,” Thom told us, “but I don’t rule out that it might involve an AI tool at some point in the coming years.”
And they’re not alone: many other organizations have made similar decisions, effectively saying, “We’ll use AI for everything except for decisions about people.”
But where do we draw the line between what is and isn’t decision-making in recruitment? Some might argue that having AI stack-rank applicants based on scores isn’t decision-making because the recruiter still makes the final call. But if those scores are influencing which candidates are reviewed and which are ignored, isn’t that a form of decision-making? If a recruiter sees that a candidate has a score of 25, they’re unlikely to take a second look, even though there might be something valuable about that candidate that the AI missed.
Learn more: Navigating the AI Revolution in Talent Acquisition
The Candidate’s Perspective: Transparency and Redress
This brings us to an important but often overlooked aspect of the AI discussion: the candidate’s visibility into how AI is being used in their hiring process. As recruiters, it’s easy to focus on how AI can make our jobs more efficient, but what about the candidate’s experience? Should candidates be informed when AI is involved in decisions that affect their job prospects? I would argue that they absolutely should.
Transparency is essential. If a candidate is rejected because of an automated process, they should know that, and they should have the right to challenge that decision. A good example would be offering candidates an option to have their application reviewed by a human if they believe the AI-made rejection was a mistake. This could be as simple as including a note in the rejection email:
“This decision was made using an automated process. If you would like a human to review your application, click here to request a second review.”
The idea of offering a “right to redress” isn’t just about being fair to candidates – it’s about fostering trust. Imagine you’re a candidate who has just been rejected by AI. Wouldn’t you feel more positive about the company if you knew there was a mechanism in place for you to challenge that decision and get a second chance?
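The disclosure-plus-redress flow described above can be sketched in a few lines. Everything here is hypothetical – the field names, the review queue, and the email wording are illustrative, not a real ATS integration – but it shows the two essential properties: automated rejections are labelled as such, and candidates can escalate them to a human.

```python
from dataclasses import dataclass

@dataclass
class Rejection:
    """A rejection record; flags whether the decision was automated."""
    candidate_id: str
    automated: bool                 # was this decision made by an AI process?
    review_requested: bool = False  # has the candidate asked for a human look?

# Hypothetical queue a human recruiter would work through.
human_review_queue: list[Rejection] = []

def rejection_notice(r: Rejection) -> str:
    """Build the rejection email body, disclosing automation when present."""
    body = "Thank you for applying. Unfortunately we won't be moving forward."
    if r.automated:
        body += ("\n\nThis decision was made using an automated process. "
                 "Reply to this email to request a second review by a human.")
    return body

def request_review(r: Rejection) -> None:
    """Escalate an automated rejection for human review (the 'right to redress')."""
    if r.automated and not r.review_requested:
        r.review_requested = True
        human_review_queue.append(r)

r = Rejection(candidate_id="cand-123", automated=True)
notice = rejection_notice(r)
request_review(r)
```

The design choice worth noting: the `automated` flag travels with the decision itself, so disclosure and redress can't silently be dropped downstream.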
Moreover, in some regions, such as the EU, laws are already in place to address this issue. Under the GDPR, if a decision about your application is made solely by automated means, you have the right to challenge it and have a human review it. And the recently published EU AI Act looks to put even more requirements in place around this. So it’s crucial that we, as an industry, stay ahead of these regulations and ensure we’re treating candidates with fairness and transparency.
Balancing AI with Human Judgment
Ultimately, I believe the future of recruitment lies in striking the right balance between AI and human judgment. AI is fantastic at handling the administrative side of recruiting, but when it comes to decisions that directly affect people’s careers, we need to proceed with caution.
Recruitment is a people business. No matter how sophisticated AI becomes, there will always be elements of hiring that require human empathy, intuition, and experience. The key is to integrate AI in ways that enhance the human touch, not replace it. Candidates want to feel like they’re interacting with people, not just machines. That’s where the “balance” comes in.
For example, asynchronous video interviews – where candidates record their answers to preset questions – have been controversial. The data would suggest that candidates generally dislike this approach, but hiring managers love it because it allows them to quickly filter through applicants. I was talking with Cheryl Petersen, the Regional Talent Resourcing Leader at Arup, recently, and it’s something her team has been struggling with. The solution? Adding a transparent disclaimer to their job ads stating that the first step in the process is a prerecorded video – and framing that video as an opportunity for candidates to say more about themselves than a resume allows.
“Sometimes it’s just about reframing and shaping how automation is being used in the process. It’s not to weed candidates out, but actually to bring them in to have a human conversation and a human interaction.”
Moving Forward with AI in Recruitment
AI is here to stay, and its role in recruitment will only grow in the coming years. But as we integrate these tools into our processes, we must remain mindful of the impact they have on both recruiters and candidates. The organizations that will thrive in this new AI-powered world are the ones that strike the right balance between automation and human judgment, and that do so with transparency and fairness.
As we move forward, let’s commit to giving candidates visibility into how AI is being used, and let’s ensure they have the opportunity to challenge decisions when necessary. By doing so, we can build a more equitable, inclusive hiring process – one that leverages the best of both technology and humanity.
This article originally appeared in Johnny Campbell’s Talent Leadership Insights LinkedIn newsletter. Click here to subscribe!