AI Could Be Biased Against Job Seekers. Here’s How We Can Protect Them.

Artificial intelligence (AI) tools are rapidly reshaping the world of work, particularly how workers are recruited, managed, and promoted.

According to the latest Responsible AI Index, in 2024 some 62% of Australian organizations used AI moderately or extensively in recruitment.

Many of these systems classify, rank and score applicants, evaluating their personality, behavior or abilities. They decide—or help a recruiter decide—who moves to the next stage in a hiring process and who does not.

But such systems pose distinct and novel risks of discrimination. They operate at a speed and scale that cannot be replicated by a human recruiter. Job seekers may not know they are being assessed by AI and the decisions of these systems are inscrutable.

My research examined this problem in detail.

The use of AI systems by employers in the hiring process, including CV screening, assessments, and video interviews, poses significant potential for discrimination against women, older workers, job applicants with disabilities, and individuals who speak English with an accent. Current laws have not yet been updated to address these issues adequately.

The rise of AI-powered interviewers

For my study, I interviewed recruiters, HR professionals, AI experts, developers, and career coaches. I also reviewed publicly available material from two leading providers of AI screening software in the Australian market.

I found the way these AI screening systems are used by employers risks reinforcing and amplifying discrimination against marginalized groups.

Discrimination may be embedded in the AI system via the data or the algorithmic model, or it might result from the way the system is used by an organization.

For example, the AI screening system may not be accessible to or validated for job seekers with disability.

One research participant, a career coach, explained that one of his neurodivergent clients, a top student in his university course, cannot get through personality assessments.

He believes the student's atypical answers have resulted in low scores and his failure to move to the next stage in recruitment processes.

Lack of transparency

The time limits for answering questions may not be sufficient or communicated to candidates.

One participant, also a career coach, explained that not knowing the time limit for responding to questions had resulted in some of her clients being "pretty much cut off halfway through" their answers.

Another stated: "[…] there's no transparency a lot of the time about what the recruitment process is going to be, so how can [job seekers with disability] […] advocate for themselves?"

New barriers to employment

AI screening systems can also create new structural barriers to employment. To undertake an AI assessment, job seekers need a phone, a secure internet connection, and digital literacy skills.

These systems could lead to candidates choosing not to apply for roles or withdrawing from the application process altogether.

The protections we have

Current federal and state anti-discrimination laws cover discrimination by employers using AI screening systems, but there are gaps. These laws must be clarified and strengthened to address this new form of discrimination.

For instance, these laws could be amended to create a presumption in any legal dispute that an AI system discriminated against a job applicant, shifting the burden onto employers to prove they did not discriminate.

At present, the burden of proving such discrimination falls on job applicants. But they are poorly placed to do so, because AI screening systems are complex and opaque.

Privacy law reforms should also include a right to an explanation when AI systems are used in the hiring process.

The Albanese government must also follow through on its commitment to introduce mandatory "guardrails" for "high-risk" AI applications, such as those used in hiring.

Safeguards must include a requirement that training data be representative and that the systems be accessible to people with disability and subject to regular independent audits.

We also urgently need guidelines for employers on how to comply with these laws when they use new AI technologies.

Should AI hiring systems be banned?

Some groups have called for a ban on the use of AI in employment in Australia.

In its Future of Work report, the House of Representatives Standing Committee recommended banning AI technologies that make final decisions in HR without human oversight.

There is merit in these proposals—at least, until appropriate safeguards are in place and we know more about the impacts of these systems on equality in the Australian workplace.

As one of my research participants acknowledged: "The world is biased and we need to improve that but […] when you take that and put it into code, the risk is that no one from a particular group can ever get through."

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Provided by The Conversation
