A third of Australian companies rely on artificial intelligence to help them hire the right person. But studies show it's not always a benign intermediary
Sun 26 Mar 2023 10.00 EDT
Michael Scott, the protagonist from the US version of The Office, is using an AI recruiter to hire a receptionist.
Guardian Australia applies.
The text-based system asks applicants five questions that delve into how they responded to past work situations, including dealing with difficult colleagues and juggling competing work demands.
Potential employees type their answers into a chat-style program that resembles a responsive help desk. The real and unnerving power of AI then kicks in, sending a score and traits profile to the employer, and a personality report to the applicant. (More on our results later.)
This demonstration, by the Melbourne-based startup Sapia.ai, resembles the initial structured interview process used by their clients, who include some of Australia's biggest companies, such as Qantas, Medibank, Suncorp and Woolworths.
The process would typically create a shortlist an employer can follow up on, with insights on personality markers including humility, extraversion and conscientiousness.
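Sapia has not published how its scoring works, but the general shape of the process described above can be sketched in code. In the illustration below, everything specific is invented: the trait names, keyword cues and scoring rule are hypothetical stand-ins for a trained language model. It shows only the input and output of such a pipeline, not the company's method.

```python
# Illustrative sketch only: Sapia's actual scoring model is proprietary.
# Free-text answers go in; a trait profile goes to the employer and a
# brief personality note goes back to the applicant. The keyword
# "signals" below are invented stand-ins for a trained language model.
from dataclasses import dataclass

@dataclass
class InterviewResult:
    trait_scores: dict      # fuller profile, sent to the employer
    personality_note: str   # brief feedback, sent to the applicant

# Hypothetical lexical cues per trait (a real system would not work this way).
SIGNALS = {
    "conscientiousness": {"planned", "deadline", "organised", "checked"},
    "agreeableness": {"listened", "helped", "team", "understood"},
}

def score_answers(answers):
    words = {w.strip(".,").lower() for answer in answers for w in answer.split()}
    scores = {trait: len(words & cues) / len(cues) for trait, cues in SIGNALS.items()}
    strongest = max(scores, key=scores.get)
    return InterviewResult(scores, f"Your answers suggest notable {strongest}.")

result = score_answers([
    "I planned the rollout and checked in with the team before each deadline.",
    "I listened to a frustrated colleague and helped them reprioritise.",
])
print(result.trait_scores, result.personality_note)
```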
For customer service roles, it is designed to help an employer know whether someone is amiable. For a manual role, an employer might want to know whether an applicant will turn up on time.
"You basically interview the world; everybody gets an interview," says Sapia's founder and chief executive, Barb Hyman.
The selling points of AI hiring are clear: it can automate costly and time-consuming processes for businesses and government agencies, especially in large recruitment drives for non-managerial roles.
Sapia's biggest claim, however, might be that it is the only way to give someone a fair interview.
"The only way to remove bias in hiring is to not use people right at the first gate," Hyman says. "That's where our technology comes in: it's blind; it's untimed; it doesn't use résumé data or your social media data or demographic data. All it is using is the text results."
Sapia is not the only AI company claiming its technology will reduce bias in the hiring process. A host of companies around Australia are offering AI-augmented recruitment tools, including not just chat-based models but also one-way video interviews, automated reference checks, social media analysers and more.
In 2022 a survey of Australian public sector agencies found at least a quarter had used AI-assisted tech in recruitment that year. Separate research from the Diversity Council of Australia and Monash University suggests that a third of Australian organisations are using it at some point in the hiring process.
Applicants, though, are often not aware that they will be subjected to an automated process, or on what basis they will be assessed within it.
The office of the Merit Protection Commissioner advises public service agencies that when they use AI tools for recruitment, there should be a "clear demonstrated connection between the candidate's qualities being assessed and the qualities required to perform the duties of the job".
The commissioners office also cautions that AI may assess candidates on something other than merit, raise ethical and legal concerns about transparency and data bias, produce biased results or cause statistical bias by erroneously interpreting socioeconomic markers as indicative of success.
There's good reason for that warning. AI's track record on bias has been worrying.
In 2017 Amazon quietly scrapped an experimental candidate-ranking tool that had been trained on CVs from the mostly male tech industry, effectively teaching itself that male candidates were preferable. The tool systematically downgraded women's CVs, penalising those that included phrases such as "women's chess club captain", and elevating those that used verbs more commonly found on male engineers' CVs, such as "executed" and "captured".
Research out of the US in 2020 demonstrated that facial-analysis technology created by Microsoft and IBM, among others, performed better on lighter-skinned subjects and men, with darker-skinned females most often misgendered by the programs.
Last year a study out of Cambridge University showed that AI is not a benign intermediary but that, by constructing associations between words and people's bodies, it helps to produce the "ideal candidate" rather than merely observing or identifying it.
Natalie Sheard, a lawyer and PhD candidate at La Trobe University whose doctorate examines the regulation of, and discrimination in, AI-based hiring systems, says the lack of transparency around these systems is a huge problem for equity.
"Messenger-style apps are based on natural language processing, similar to ChatGPT, so the training data for those systems tends to be the words or vocal sounds of people who speak standard English," Sheard says.
"So if you're a non-native speaker, how does it deal with you? It might say you don't have good communication skills if you don't use standard English grammar, or you might have different cultural traits that the system might not recognise because it was trained on native speakers."
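The failure mode Sheard describes can be made concrete with a deliberately naive example — not any vendor's actual code. Here a "communication skills" scorer whose known vocabulary comes only from standard English marks down an equally substantive answer written in a different dialect; the word list and scoring rule are invented for demonstration.

```python
# Invented for illustration (not Sapia's or anyone's real code): a naive
# scorer whose "known language" comes only from standard English text.
# Equally substantive answers in a different dialect score lower.
STANDARD_VOCAB = {
    "i", "finished", "the", "report", "before", "deadline",
    "and", "sent", "it", "to", "my", "manager",
}

def communication_score(answer):
    tokens = [t.strip(".,'!?").lower() for t in answer.split()]
    known = sum(t in STANDARD_VOCAB for t in tokens)
    return known / len(tokens)   # out-of-vocabulary words drag the score down

print(communication_score("I finished the report before the deadline."))     # 1.0
print(communication_score("I done finished the report 'fore the deadline.")) # lower
```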
Another concern is how physical disability is accounted for in something like a chat or video interview. And with the lack of transparency around whether assessments are being made with AI, and on what basis, it's often impossible for candidates to know that they may need reasonable adjustments to which they are legally entitled.
"There are legal requirements for organisations to adjust for disability in the hiring process," Sheard says. "But that requires people to disclose their disability straight up when they have no trust with this employer. And these systems change traditional recruitment practices, so you don't know what the assessment is all about, you don't know an algorithm is going to assess you or how. You might not know that you need a reasonable adjustment."
Australia has no laws specifically governing AI recruitment tools. While the department of industry has developed an AI ethics framework, which includes principles of transparency, explainability, accountability and privacy, the code is voluntary.
"There are low levels of understanding in the community about AI systems, and because employers are very reliant on these vendors, they deploy [the tools] without any governance systems," Sheard says.
"Employers don't have any bad intent, they want to do the right things but they have no idea what they should be doing. There are no internal oversight mechanisms set up, no independent auditing systems to ensure there is no bias."
Hyman says client feedback and independent research show that the broader community is comfortable with recruiters using AI.
"They need to have an experience that is inviting, inclusive and attracts more diversity," Hyman says. She says Sapia's untimed, low-stress, text-based system fits these criteria.
"You are twice as likely to get women and keep women in the hiring process when you're using AI. It's a complete fiction that people don't want it and don't trust it. We see the complete opposite in our data."
Research from the Diversity Council of Australia and Monash University is not quite so enthusiastic, showing a clear divide between employers and candidates: half of the employers surveyed were converted to the technology, but only a third of job applicants were. First Nations job applicants were among those most likely to be worried.
DCA recommends recruiters be transparent about the due diligence protocols they have in place to ensure AI-supported recruitment tools are bias-free, inclusive and accessible.
In the Sapia demonstration, the AI quickly generates brief notes of personality feedback at the end of the application for the interviewee.
This is based on how someone rates on various markers, including conscientiousness and agreeableness, which the AI matches with pre-written phrases that resemble something a life coach might say.
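That matching step can be pictured as a lookup from bucketed trait scores to canned phrases. In the sketch below the thresholds and most of the wording are invented — only the first phrase is quoted from the demo — and Sapia has not published how its matching actually works.

```python
# Sketch of score-to-phrase matching, with invented thresholds and mostly
# invented wording; the first phrase is the one the demo actually produced.
FEEDBACK_PHRASES = {
    ("confidence", "mid"): "You are self-assured but not overly confident.",
    ("conscientiousness", "high"): "You bring care and follow-through to your work.",
    ("agreeableness", "high"): "You keep the people around you on side.",
}

def bucket(score):
    return "high" if score >= 0.7 else "mid" if score >= 0.4 else "low"

def personality_feedback(trait_scores):
    return [
        FEEDBACK_PHRASES[(trait, bucket(score))]
        for trait, score in trait_scores.items()
        if (trait, bucket(score)) in FEEDBACK_PHRASES
    ]

print(personality_feedback({"confidence": 0.55, "conscientiousness": 0.82}))
```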
A more thorough assessment not visible to the applicant would be sent to the recruiter.
Sapia says its chat-interview software also analyses language proficiency and includes a profanity detector; the company says both are important considerations for customer-facing roles.
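How that profanity check is implemented is not public; the simplest plausible form is a word-list filter, sketched here with a placeholder list.

```python
# Placeholder sketch: the simplest form a profanity check could take.
# The word list is a stand-in; Sapia's implementation is not public.
PROFANITY = {"damn", "hell"}

def contains_profanity(answer):
    tokens = {t.strip(".,!?").lower() for t in answer.split()}
    return not PROFANITY.isdisjoint(tokens)

print(contains_profanity("The damn printer broke again."))  # True
```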
Hyman says the language analysis is based on the billion words of data collected from responses in the years since the tech company was founded in 2013. The data itself is proprietary.
So, could Guardian Australia work for Michael Scott at the fictional paper company Dunder Mifflin?
"You are self-assured but not overly confident," the personality feedback says in response to Guardian Australia's application in the AI demonstration.
It follows with a subtle suggestion that this applicant might not be a good fit for the receptionist role, which requires "repetition, routine and following a defined process".
But it has some helpful advice: "Potentially balance that with variety outside of work."
Looks like we're not a good fit for this job.