[US] Lawsuit highlights role of AI in recruitment process discrimination

08 Mar 2024

In the US, a lawsuit has been filed alleging that the AI tools used by a recruitment platform have an algorithmic bias which is leading to discrimination in the hiring process, HR Leader reports.

According to reporting from Reuters, Workday is facing renewed claims that it uses AI tools that directly discriminate against applicants for roles at the major companies it recruits for. 

Derek Mobley reportedly filed a complaint in San Francisco Federal Court in which he claims to have been rejected for more than 100 jobs he applied for through Workday’s platform.

In the complaint, Mr Mobley, a black man, said that in using Workday’s platform for recruitment, employers are effectively handing over their authority to make hiring decisions to Workday, which enables these issues to occur.

“Because there are no guardrails to regulate Workday’s conduct, the algorithmic decision-making tools it utilises to screen out applicants provide a ready mechanism for discrimination,” Mr Mobley’s lawyers said in the complaint.

The company denied wrongdoing and stated when the lawsuit was filed that it engages in an ongoing “risk-based review process” to ensure that its products comply with applicable laws and aren’t engaging in any forms of discrimination.

Many employers and recruitment agencies use AI for the hiring process; 80 per cent of US employers have reportedly acknowledged taking advantage of this new technology. Such tech includes software made by Workday and other firms that can review multiple job applicants and screen out applicants for various reasons.

Algorithmic bias

This screening process is where problems can arise. AI-enabled recruitment has the potential to boost efficiency and reduce transactional work, but algorithmic bias can result in discriminatory hiring practices based on gender, race, and colour, which is precisely what Mr Mobley alleges in his lawsuit.

According to HR Leader, algorithmic bias refers to systematic and replicable errors in computer systems that produce unequal, discriminatory hiring outcomes based on legally protected characteristics, such as race and gender.

Zhisheng Chen, the author of the linked article and a researcher at Nanjing University of Aeronautics and Astronautics, explained the source of algorithmic bias.

“The primary source of algorithmic bias lies in partial historical data. The personal preferences of algorithm engineers also contribute to algorithmic bias,” Mr Chen said.

“Despite algorithms aiming for objectivity and clarity in their procedures, they can become biased when they receive partial input data from humans. Modern algorithms may appear neutral but can disproportionately harm protected class members, posing the risk of agentic discrimination.”

Technical measures such as constructing unbiased data sets and enhancing algorithmic transparency can reportedly be implemented to combat algorithmic hiring discrimination.
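One established way to make such outcomes transparent is the "four-fifths rule" from US EEOC guidance, which flags possible adverse impact when one group's selection rate falls below 80 per cent of the highest group's rate. The sketch below is purely illustrative, with invented applicant data and group labels; it is not a description of Workday's system or any real audit.

```python
# Illustrative audit sketch (hypothetical data): check a screening
# tool's outcomes against the EEOC "four-fifths rule".
from collections import Counter

# (group, passed_screen) pairs for hypothetical applicants
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, passed in outcomes if passed)

# Selection rate per group, and each rate relative to the best-performing group
rates = {g: selected[g] / applied[g] for g in applied}
best_rate = max(rates.values())
impact_ratios = {g: rate / best_rate for g, rate in rates.items()}

for group, ratio in impact_ratios.items():
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.2f}, "
          f"impact ratio {ratio:.2f} ({flag})")
```

In this made-up example, group_b's selection rate (0.25) is only a third of group_a's (0.75), well below the 0.8 threshold, so a reviewer would flag the tool for closer scrutiny. Real audits would, of course, involve far larger samples and statistical significance testing.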

Workday claims to engage in ongoing self-regulation of its AI tools through its “risk-based review process”. However, Mr Chen told HR Leader he believes self-regulation isn’t enough to truly confront and overcome this issue.

“Although self-regulation can help reduce discrimination and influence lawmakers, it has potential drawbacks. Self-regulation lacks binding power, necessitating external oversight through third-party testing and the development of AI principles, laws, and regulations by external agencies,” Mr Chen said.

Third-party oversight is reportedly key to removing algorithmic biases because it brings accountability. But the entrenched assumption that AI offers inherently “objective” and “neutral” decisions must also be overcome, because algorithms continue to reflect the biases of the people who build and train them.

Without accountability, recruitment algorithms can keep exacerbating inequalities and perpetuating discrimination against minority groups.


Source: HR Leader

(Link and quotes via original reporting)
