AI in Hiring: Reducing Bias or Making It Worse?

Martina Bretous

Using AI for hiring sounds like a foolproof plan. After all, humans are flawed and biased. Well, it turns out, so is AI.

Critics of artificial intelligence – giving the “We told you so” look as we speak – have long feared this technology would eliminate jobs from the workforce.

New research suggests AI is doing so in an unexpected way: by discriminating against qualified candidates based on gender, race, and age.

AI has the potential to level the playing field in recruitment – except that it’s trained by humans. Let’s break it down.

Where’s the bias coming from?

In theory, an AI-powered screening tool is perfect. It can’t think or pass judgment, so it can make objective decisions, right? Not exactly.

As AI adoption has risen, so has the number of reported incidents of bias. Here are a few from 2023 alone.

In June, Bloomberg reported that an analysis of 5,000 images generated by Stable Diffusion reinforced gender and racial stereotypes “worse than those found in the real world.”

High-paying jobs were consistently represented by subjects with lighter skin tones, while lower-paying jobs were associated with darker skin tones.

When it came to gender, similar stereotypes were observed. Roles like cashier and social worker were largely represented by women, while images of politicians and engineers almost exclusively featured men. In fact, most occupations in the dataset were depicted with male subjects.

A spokesperson for the startup Stability AI, which runs Stable Diffusion, told Bloomberg that all AI models have biases based on the datasets they’re trained on. And therein lies the problem.

Companies have or acquire datasets. They train AI models using those datasets. The AI models perform tasks based on observed patterns. A flawed dataset = a flawed model.
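
What does that look like in practice? Here’s a toy sketch in Python – entirely synthetic data and made-up feature names, not any vendor’s actual system – of a model trained on skewed historical hiring decisions. The model doesn’t correct the skew; it learns it.

```python
# Toy illustration: a classifier trained on biased hiring history.
# All data is synthetic; "skill" and "group" are hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)             # true qualification signal
group = rng.integers(0, 2, size=n)     # 0/1 stand-in for a demographic group

# The "flawed dataset": past hiring favored group 0 regardless of skill.
hired = (skill + 1.5 * (group == 0) + rng.normal(size=n) > 1).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns a large negative weight on `group`, reproducing
# the historical preference instead of removing it.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```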

So, what happens when those same AI models are used for hiring? Ask Workday, the enterprise software company offering HR and finance solutions.

They’re currently facing a class action lawsuit led by Derek Mobley, a job seeker who alleges the company’s AI screening tool discriminates against older, Black, and disabled applicants.

Mobley – a 40-year-old Black man diagnosed with anxiety and depression – alleges that since 2018, he has applied to roughly 100 positions and has been rejected every time, despite meeting the job qualifications. He represents an undisclosed number of people who reported the same discrimination.

A spokesperson for Workday says the company conducts regular internal audits and legal reviews to ensure compliance with regulations, and maintains that the lawsuit is without merit.

A jury will determine whether this is true. But it wouldn’t be the first time AI recruitment software showed bias.

In 2017, Amazon infamously scrapped its own AI-powered screening tool because it was less likely to deem female applicants qualified for jobs in an industry dominated by men.

And in July 2023, Stanford University researchers found that seven AI detection tools frequently misclassified writing by non-native English speakers as generated by AI.

Many AI detection tools measure text perplexity – roughly, how surprising each next word in a passage is to a language model. The more predictable the text, the lower its perplexity, and the more likely the tool is to flag it as AI-generated.

Because non-native English speakers may use simpler, more predictable wording due to a more limited vocabulary, detection tools are more likely to label these applicants’ work as composed by generative AI.
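
For the technically curious, here’s a minimal sketch of how a perplexity score can be computed, assuming the open-source Hugging Face transformers library and the public gpt2 checkpoint. Real detectors use their own models and thresholds; this only illustrates the general idea.

```python
# Perplexity sketch: how predictable is this text to a language model?
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average negative log-likelihood per token."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return mean cross-entropy.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Plain, formulaic phrasing scores low; unusual phrasing scores high.
# A detector treating "low perplexity" as "machine-written" will skew
# against writers who use simpler, more common vocabulary.
print(perplexity("I am writing to apply for this position."))
print(perplexity("Kaleidoscopic deadlines ricochet through my calendar."))
```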

In Stanford’s case, a grade is at stake. In the workplace, the stakes can be much higher for non-native job applicants who are deemed less qualified by a screening tool based on the words they use.

These incidents all point to the same problem: When an AI tool is trained on existing data where cultural, racial, and gender disparities are present, it will mirror the exact biases companies are attempting to remove.

What’s the government saying about AI recruitment bias?

For a while, lawmakers weren’t saying much.

In 2019, Illinois passed a law requiring employers to notify candidates when AI is used to analyze their video interviews.

Maryland followed suit by prohibiting the use of facial recognition during pre-employment interviewing without applicant consent.

Then, in 2021, the Equal Employment Opportunity Commission (EEOC) launched an agency-wide initiative called the “Artificial Intelligence and Algorithmic Fairness Initiative” to monitor and assess the use of technology in hiring practices.

That same year, the New York City Council passed a law requiring employers that use AI hiring technology to disclose it to applicants and to undergo annual bias audits.

Enforcement of this law started in July 2023, by the way. But not everyone is seeing this as a win.

Some critics of the law say it’s not specific enough, leaving room for loopholes. Others point out that it doesn’t cover discrimination based on age or disability.

As part of their AI initiative, the U.S. Justice Department and the EEOC also released a joint guide alerting employers to the risk of discrimination when using AI-powered tools for employment decisions.

They advise against “blind reliance on AI,” which could lead to civil rights violations. The same ones Workday is facing today.

More recently, the EEOC heard from a series of witnesses – including computer scientists, legal experts, and employer representatives – in a hearing to discuss the potential benefits and harms of AI in the workplace.

What’s next for AI in hiring?

Many companies across the U.S. have already integrated AI into their hiring processes. In a 2022 study of 1,688 respondents, SHRM found that 79% of companies used automation and/or AI for recruitment and hiring.

In the coming years, organizations should expect more oversight in their hiring practices.

The new law in New York City is expected to have a domino effect across the country, with California, New Jersey, Vermont, and New York state reportedly working on laws of their own to regulate AI use in hiring and recruitment.

Resource-heavy conglomerates like Amazon can build, train, and test their tools for better outcomes. But for companies purchasing tools through third-party vendors, the risk of bias – and civil rights violations – is even higher.

Thorough vetting and auditing processes are crucial for companies using any AI technology for employment purposes.
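
What might that auditing look like in practice? Here’s a minimal sketch, with hypothetical data and column names, of the selection-rate comparison at the heart of NYC-style bias audits:

```python
# Selection-rate audit sketch. "group" and "advanced" are hypothetical
# column names; 1 means the screening tool advanced the candidate.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "advanced": [ 1,   1,   0,   1,   0,   0,   0,   1,   1 ],
})

rates = results.groupby("group")["advanced"].mean()  # selection rate per group
impact_ratios = rates / rates.max()                  # ratio vs. best-off group

# Under the EEOC's four-fifths rule of thumb, ratios below 0.8 are a
# warning sign of disparate impact worth investigating.
print(impact_ratios.round(2))
```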

The takeaway for businesses? Soon, it won’t be enough to say, “It wasn’t us, AI did it.” Plan for the humanity on both sides of your new AI hiring software.
