Humans vs. Machines – Striking the right balance in recruiting

Humans make errors; so do machines. How should the recruitment function navigate a future in which technology plays a key role?

“As human beings, what are the problems that we bring to the workplace?” Bianca Rehmer, Senior Manager – Employee Insights at Indeed, asked the audience at the People Matters Talent Acquisition League 2018.

In a session on “Humans vs. Machines – Striking the right balance in recruiting,” she pointed out that human beings can be inconsistent and biased. From systemic or institutional bias to affinity bias (the tendency to favor people we identify with), recruiters are prone to making mistakes.

A study of experienced parole judges in Israel found that judicial fatigue was linked to unfavorable rulings: prisoners who appeared first in the morning or immediately after the lunch break were granted parole about 65 percent of the time. According to one of the researchers, “The evidence suggested that when judges make repeated rulings, they show an increased tendency to rule in favor of the status quo.” The example shows that more than rational decision-making is at work – in this case, quite possibly mental fatigue.

It is this human error – hiring the wrong person because of bias – that has pushed companies toward new recruitment processes. From blind recruitment interviews to anonymized resumes, companies are trying to reduce human error in hiring.
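As a purely illustrative sketch (the record shape and field names below are hypothetical, not any real applicant-tracking API), anonymized screening can be thought of as stripping identity-revealing fields from a candidate record before a reviewer ever sees it:

```python
# Illustrative sketch of a blind screen: hide identity-revealing fields
# (which can trigger affinity or institutional bias) and surface only
# job-relevant ones. Field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    email: str
    university: str
    years_experience: int
    skills: list

# Fields a reviewer should NOT see during the first screen.
IDENTIFYING_FIELDS = {"name", "email", "university"}

def anonymize(candidate: Candidate) -> dict:
    """Return only the fields appropriate for a blind review."""
    return {field: value
            for field, value in vars(candidate).items()
            if field not in IDENTIFYING_FIELDS}

applicant = Candidate("A. Example", "a@example.com", "Example University",
                      7, ["sourcing", "interviewing"])
print(anonymize(applicant))
# {'years_experience': 7, 'skills': ['sourcing', 'interviewing']}
```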

But what about machines?

The use of technology is often touted as “data-driven, optimized and targeted,” but do algorithms always solve these problems? Evidence and experience suggest that machines aren’t foolproof either.

Sharing a personal anecdote about using the recommendations feature on a popular e-commerce platform, Bianca noted that while many recommendations were accurate, many others were not – a sign that the algorithms still need to evolve.

Algorithms are not immune to bias either; they often bake in human biases. Take, for example, Microsoft’s failed chatbot experiment on Twitter in 2016. The bot, Tay, which was meant to interact with and learn from users, was quickly taken offline after it began producing racist messages. It was a classic case of “garbage in, garbage out.”
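To make “garbage in, garbage out” concrete, here is a deliberately naive toy in Python – purely illustrative, and not how Tay actually worked: a bot that learns by memorizing user messages with no moderation layer, so its output is simply whatever it was fed most often.

```python
# A toy "learning" bot with no input filtering: its responses are just
# its most frequent training inputs, so hostile input dominates.
from collections import Counter

class NaiveLearningBot:
    def __init__(self):
        self.memory = Counter()

    def learn(self, message: str) -> None:
        # No moderation layer: every input is treated as trustworthy.
        self.memory[message] += 1

    def respond(self) -> str:
        if not self.memory:
            return "I have nothing to say yet."
        # The bot's "personality" is whatever it has heard most often.
        return self.memory.most_common(1)[0][0]

bot = NaiveLearningBot()
for msg in ["hello!", "hostile spam", "hostile spam", "hostile spam"]:
    bot.learn(msg)
print(bot.respond())  # -> "hostile spam": the loudest inputs win
```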

The problem with machines is that they, too, can be wrong, biased, narrow and scary.

The way forward

Bianca pointed out that algorithms leave out the human side: they often can’t answer “why,” and they need simple, clear objectives. That is where human recruiters are needed – to read between the lines.

In an era when we are testing self-driving cars, trust in technology is in a state of flux. That is why recruiters, too, must make the best of both worlds: their technology and their people.

She ended her session with a word of advice: “It is the humans who have the responsibility to understand what machines are capable of, to know what they are incapable of, and, most importantly, to use them ethically.”
