One of the latest trends in HR is using artificial intelligence (AI) to conduct video job interviews. These AI video interview tools claim to help employers by screening candidates and generating recommendations. A handful of such platforms exist, and they work in a similar way: candidates record themselves answering a set of questions within a fixed time limit.
The video is then submitted for processing. Processing can include analysis of visual, vocal, and verbal cues, and in some cases the platform passes a report interpreting the candidate's performance to the employer. However, studies of these tools point to serious concerns about their bias and reliability. Using them could create more problems than they solve, and employers should think carefully before replacing existing processes with these tools.
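To make that workflow concrete, here is a minimal sketch of what such a scoring pipeline might look like. Everything in it is a hypothetical illustration: vendors do not publish their internals, and the feature names and weights below are invented. The point is structural: visual and vocal signals feed directly into the final number.

```python
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    # Hypothetical features a vendor might extract from a recorded answer.
    eye_contact_ratio: float   # visual: fraction of frames with detected eye contact
    speech_rate_wpm: float     # vocal: words per minute from the audio track
    keyword_overlap: float     # verbal: overlap between transcript and a "success" profile

def score_candidate(features: InterviewFeatures) -> float:
    """Combine the feature channels into a single 0-100 score.

    The weights are invented for illustration. Note that the visual and
    vocal channels (which can correlate with gender, race, accent, and
    disability) count for more than half of the result.
    """
    visual = features.eye_contact_ratio
    vocal = min(features.speech_rate_wpm / 150.0, 1.0)  # normalize against a target rate
    verbal = features.keyword_overlap
    return 100 * (0.3 * visual + 0.3 * vocal + 0.4 * verbal)

# Two candidates giving the same answer (identical keyword_overlap) can
# still receive very different scores because of delivery alone.
print(round(score_candidate(InterviewFeatures(0.9, 150, 0.8)), 1))  # ~89.0
print(round(score_candidate(InterviewFeatures(0.4, 100, 0.8)), 1))  # ~64.0
```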
Issues with AI video interviews
The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI video interview systems are embedded with gender bias, with about 26% displaying both gender and race bias. Of the systems that exhibited gender bias, 70% resulted in a lower quality of service for women and non-binary individuals. Voice recognition performed worse for women, and 61.5% of the systems deprioritized women's applications, resulting in an unfair allocation of resources, information, and opportunities. These results are troubling: companies could be unknowingly undermining their diversity and inclusion efforts by using these tools to evaluate candidates.
These AI video interview tools also do not appear to assess candidates accurately. The MIT Technology Review podcast tested software from two firms specializing in AI job interviews and found that predictions and job-matching scores varied widely, raising the question of what exactly these algorithms are evaluating if the same person gets such different results on different platforms. Notably, when the testers had their mock candidate answer every question in German, the tools still rated her highly on English proficiency and ranked her well in the overall applicant-sorting system. These results suggest the machines have more learning to do before they can assess candidates accurately.
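Employers evaluating a vendor can run a version of this consistency check themselves. The sketch below assumes you can obtain scores for the same set of recorded answers from two platforms; the candidate scores are invented example numbers. If the two sets of scores barely correlate, it is hard to argue that either platform is measuring anything stable.

```python
from statistics import correlation, mean  # statistics.correlation needs Python 3.10+

def consistency_check(scores_a: list[float], scores_b: list[float]) -> None:
    """Compare two platforms' scores for the same candidates' recordings.

    scores_a[i] and scores_b[i] are the scores each platform assigned to
    the i-th candidate's identical recorded answers (hypothetical data).
    """
    r = correlation(scores_a, scores_b)
    gap = mean(abs(a - b) for a, b in zip(scores_a, scores_b))
    print(f"Correlation between platforms: {r:.2f}")
    print(f"Mean score gap (0-100 scale): {gap:.1f}")
    if r < 0.5:
        print("Scores barely agree; unclear what either platform measures.")

# Invented example: the same nine candidates scored by two different tools.
consistency_check(
    [72, 85, 60, 91, 55, 78, 66, 88, 49],
    [58, 90, 81, 62, 70, 54, 77, 65, 83],
)
```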
How to stay EEOC compliant
While no machine learning is perfect, AI video interviewing systems that take appearance and voice into account are especially prone to problems. One of the best ways to steer clear of bias and EEOC compliance issues is to use a tool that measures all candidates equally and excludes factors that could be influenced by gender or race. A more traditional written psychometric assessment, for example, produces a report based solely on a candidate's responses and does not take into account how they look or speak.
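Whatever screening tool is chosen, its output can also be audited for adverse impact. The EEOC's Uniform Guidelines on Employee Selection Procedures use the "four-fifths rule" as a rule of thumb: if one group's selection rate is less than 80% of the highest group's rate, the procedure may be having an adverse impact. Below is a minimal sketch of that check; the group labels and counts are invented for illustration.

```python
def adverse_impact_check(selected: dict[str, int], applicants: dict[str, int]) -> None:
    """Apply the EEOC four-fifths rule to selection data broken out by group.

    selected[g] is how many applicants from group g passed the screen;
    applicants[g] is how many applied. The data below is hypothetical.
    """
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())  # highest group's selection rate
    for group, rate in rates.items():
        ratio = rate / top
        flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")

# Hypothetical numbers: group_b's rate (36%) is 60% of group_a's (60%),
# well under the four-fifths threshold, so it gets flagged.
adverse_impact_check(
    selected={"group_a": 48, "group_b": 27},
    applicants={"group_a": 80, "group_b": 75},
)
```

A ratio below 0.8 is not automatic proof of discrimination, but it is the standard signal that a selection procedure deserves closer scrutiny.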