
“Invitation to Complete Video Interview”: The Hidden Dangers of AI Interviews

Josh Gacutan | Cyber & Technology Fellow

Artificial Intelligence (AI) is shaping our lives in ways we cannot see. With thousands of students applying for coveted summer internships and graduate roles every year, major employers are turning to AI systems to help determine who is fit for work and who isn’t. Yet many of these AI systems operate in a ‘black box’: we don’t know how these decisions are made.

HireVue, a US-based recruitment tech company, has applicants answer pre-recorded questions on camera. Its software then analyses tens of thousands of data points relating to an applicant’s speaking voice, word selection and facial expressions to infer what traits the applicant may have. Based on this analysis, HireVue produces an automatically generated “employability” score.

HireVue’s “AI-driven assessments” are used by over 700 companies worldwide.

At first glance, relying on these AI-driven assessments is hardly malicious. Employers can sift through applicants more quickly, reduce human error and ease the burden on overworked HR teams tasked with assessing hundreds of video interviews.

HireVue’s chief technology officer claims that “people are rejected all the time based on how they look, their shoes, how they tucked in their shirts and how ‘hot’ they are”, and that HireVue’s “algorithms eliminate most of that in a way that hasn’t been possible before”. The basic premise is that AI treats every applicant the same way, whereas a human interviewer might not.

This line of thinking, however, is misguided. As Ellen Broad, a leading expert in AI and data policy, explains, AI is made by humans. And humans make mistakes. Humans are both the source of the data that computers access and interpret, and the designers of the systems that do so. The same biases we hold are transferred into any AI systems we create.

HireVue, for example, uses the results of a select group of high-performing employees already in a workplace as the benchmark for deciding which ‘successful’ applicants progress to the next stage. While this may seem like an objective and fair measure, the training data (a select group of employees) could represent a skewed sample of the people who can do that job well. What if they all shared the same characteristics? Perhaps they’re predominantly male, private school educated and have similar sets of experiences. A system trained to recognise how that narrow group speaks and presents itself will penalise anyone who differs from it: non-native speakers, disabled applicants, or anyone else who does not express themselves in the same way as the ‘model’ employee. The sketch below illustrates the problem.
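To make that concrete, here is a minimal, purely illustrative sketch in Python. It does not reflect HireVue’s actual system: the data, the features and the ‘speech pattern’ trait are all invented for the example. A toy model is trained only on a handful of incumbent ‘top performers’ who happen to share one superficial trait, and it ends up scoring applicants on that trait rather than on skill alone.

```python
# Purely illustrative: a toy hiring model trained on a skewed sample.
# All data and features are invented; this is not HireVue's system.
from sklearn.linear_model import LogisticRegression

# Each row describes an applicant: [skill_score, pattern_match]
# pattern_match = 1 if they speak and express themselves like the
# incumbent group. Every incumbent "top performer" (label 1) happens
# to share this superficial trait.
X_train = [
    [0.90, 1], [0.85, 1], [0.80, 1],   # high performers: all share the trait
    [0.90, 0], [0.40, 1], [0.30, 0],   # everyone else
]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two applicants with identical skill; only the superficial trait differs.
matching_applicant = [0.90, 1]   # expresses themselves like the incumbents
different_applicant = [0.90, 0]  # e.g. a non-native speaker

for applicant in (matching_applicant, different_applicant):
    score = model.predict_proba([applicant])[0][1]
    print(applicant, round(score, 2))

# The second applicant receives a lower score despite identical skill,
# because the model has learned the incumbents' shared trait, not ability.
```

The point of the sketch is not the specific model but the training setup: whenever the benchmark group is narrow, any trait its members share becomes a proxy for ‘success’, whether or not it has anything to do with the job.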

Even if HireVue’s systems were set up ‘properly’ to hire the best and most diverse candidates, there is no way for independent researchers to assess their accuracy, reliability or validity. Automated decision-making systems are opaque by nature or by design: either the algorithms are too complex to understand, or the owners of the AI refuse to disclose them in order to protect their trade secrets. Without meaningful transparency, HireVue’s claims that its assessments reduce bias and discrimination and increase diversity remain unsubstantiated, and could amount to false or misleading representations.

What can be done?

Often an early mover, the US state of Illinois this year made it compulsory for employers using AI-based video interviews to obtain an applicant’s consent to being evaluated by AI before they complete the video interview. But there is an obvious problem with this. Students applying for graduate positions in one of the toughest job markets are unlikely to voice concerns about the fallibility of AI-driven assessments to their prospective employers. They have no choice but to consent, entrusting potentially life-changing decisions to automation.

Instead, individuals should have a right to know how AI systems make decisions about them. My research calls for Australia to follow the European Union’s lead and introduce a statutory right to an explanation for automated decisions, regardless of sector. This would force companies developing automated decision-making systems to rethink how to make their decisions transparent and explainable to the people they affect.

This is about more than updating existing legal frameworks. We need broader conversations about the values and ethics embedded in AI if we are to approach innovation in a responsible and sustainable manner.

Currently, employers, not applicants, are the “customers” whom AI hiring companies seek to court with promises of efficiency and cost savings. This needs to change: AI should not be developed in isolation. Tech companies need engineers, ethicists and lawyers working together, thinking beyond productivity and efficiency, to address the hidden dangers of automated decisions.

Josh Gacutan is the Cyber & Technology Fellow for Young Australians in International Affairs.
