Screening exercises are often used to sift larger pools of candidates during a recruitment process. You may require applicants to complete online cognition tests, to record short video answers to set questions as an early-stage interview round, or, if you recruit internationally, to take language competency tests in which they listen to audio questions and record spoken responses.
All of these tasks can introduce accessibility issues, and the growing adoption of automated tools and AI screening tasks brings new risks for disabled candidates.
In this guide we are going to cover some general good practice when considering screening tools as part of a recruitment process and look at some emerging examples which may pose particular challenges to certain candidate groups.
Screening exercises are sometimes appropriate, particularly where you have high volumes of applications that are relatively indistinguishable in terms of skillset or experience. This is often the case for recent graduates, who may not be expected to have standout experience (or any experience) in a particular field, and for other “entry” level roles with no pre-existing experience requirements.
If you must use screening tasks, ensure that they relate directly to the skills the job role requires. For example, language competency and a level of hearing and speech acuity may be genuine requirements for a call centre operative, and the ability to present on video may be required of a newsreader. But are those tests as applicable to a non-customer-facing desk job? Probably not.
Ideally, any test will assess skills relevant to the role: a level of reasoning, maths or English competency to a certain standard, or other appropriate tasks.
When considering screening tasks, you should focus on two areas of accessibility to ensure that you are not inadvertently discriminating.
- The task content, and whether the task itself could present barriers to completion for disabled applicants. For example, asking for spoken responses to audio questions could significantly disadvantage a Deaf or hard of hearing applicant. What can you do to offer a more inclusive task?
- The platform you are using to deliver the task. Many screening tasks are third-party applications. You must ensure that any platform used in your recruitment process, including third-party applications, meets relevant accessibility requirements and can be used by everyone; an automated scan (see the sketch below) is a useful first check. For more information, we recommend our accessibility in procurement guide and third-party responsibilities guide.
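As a worked illustration of that first check, the sketch below runs the open-source axe-core engine against a platform's candidate-facing pages via Playwright. The URLs are hypothetical placeholders, and automated scans catch only a subset of WCAG failures, so treat this as a first filter alongside manual testing with assistive technology, not a full audit.

```python
# Sketch: automated WCAG scan of a screening platform's candidate journey.
# The URLs below are hypothetical placeholders; substitute the real pages
# a candidate would see. Automated checks catch only a subset of issues.
from playwright.sync_api import sync_playwright

PAGES = [
    "https://screening-platform.example/login",
    "https://screening-platform.example/video-task",
]

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for url in PAGES:
        page.goto(url)
        # Inject the axe-core engine and run its rule set against the page.
        page.add_script_tag(url="https://cdn.jsdelivr.net/npm/axe-core@4/axe.min.js")
        results = page.evaluate("() => axe.run()")
        for v in results["violations"]:
            print(f"{url}: [{v['impact']}] {v['id']} - {v['help']}")
    browser.close()
```

Even a clean automated scan is not proof of accessibility; pair it with testing by people who actually use assistive technology.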
We have recently seen an explosion in the use of AI and other automated tools to assist with screening processes. Some of this may have been triggered by the need to assess candidates remotely during the pandemic, and the rest by the recent AI boom.
Below we have listed some of the trends we have noticed and our personal opinions on where these could present accessibility barriers to certain candidate groups. These should not be taken as an assessment of any individual organisation’s approach to screening but should be viewed as commentary on the approach in general.
One-way videos
Some companies have begun including more video recording tasks in their recruitment processes. These are often framed as “one-way” video interviews: candidates are shown questions and asked to record their answers in an interview-style video response.
Video recording and audio-only tests may present barriers to Deaf or hard of hearing applicants, and to those with a speech impediment, stammer, or other condition that affects their ability to present in a timed video exercise. Where a platform allows only one or two restart attempts, candidates are under additional pressure to get things right in one take, which in turn creates more stress, makes many speech impediments more pronounced, and leads to a greater likelihood of failed attempts.
It also goes without saying that this “talking to a brick wall” approach takes the humanity out of the interview experience. Applicants cannot see or gauge the reactions of interviewers or receive any other non-verbal feedback, and there is no opportunity to build rapport at this stage. That non-verbal feedback matters to many candidates during a normal interview process.
Likewise, there is no guarantee that these one-sided interviews will even be watched by a human. Many companies are now using AI tools to review video applications and make recommendations based on observed behaviours or responses. This is discussed further in the next section on AI use in recruitment processes.
In our opinion, asking applicants to record their side of an interview without the decency of structuring a two-way conversation should be considered rude and disrespectful of a candidate's time. Someone is expected to get “dressed up” to look presentable on camera, only to record a video that no one is guaranteed to watch. It is an interview without the interviewer.
AI sifting
There is growing concern within some disability groups about the use of AI in sifting processes, particularly in the review of written and video-recorded content. Organisations such as For Dyslexics by Dyslexics allege that the growing use of multi-modal Large Language Models (LLMs) such as ChatGPT in screening may embed bias and effectively screen out dyslexic applicants when filtering for the “best” candidates, if these models make negative associations with disability-related traits.
The suggestion is that these LLMs are being used to scan CVs, cover letters, other application documents, and video recordings, and can pick up on dyslexic writing traits and other “tells”, tics, or micro-expressions.
Watch an 11-minute video presented by a For Dyslexics by Dyslexics campaigner, providing a breakdown of their concerns about AI use and disability discrimination.
Given the uncertainty of this evolving situation, and the history of AI inheriting human biases (such as racial discrimination in predictive policing systems), we certainly do not have the final answer for making AI support tools that pose no risk of discrimination. Our best advice is to avoid AI predictive sifting tools where possible until further refinement proves, beyond reasonable doubt, that they pose no risk of discrimination to any protected group.
If you are considering using an AI-based assessment, CV-sifting tool, or other AI application to support your recruitment processes, you should push strongly for evidence from the supplier of the due diligence they have done to ensure the AI has not inherited discriminatory biases. This might mean challenging them on the size and diversity of their training data sets, on how protected characteristics were considered when training data was sourced, and on how they test for and quantify discrepancies in results when protected characteristic aspects change; the sketch below illustrates that last point.
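As a minimal sketch of that last kind of test, the paired (counterfactual) audit below compares scores for matched applications that differ only in a disability-related surface trait. The score_application function is a hypothetical stand-in for whatever scoring endpoint a vendor actually exposes; the principle is that matched applications should score the same.

```python
# Sketch of a paired (counterfactual) audit for an AI sifting tool.
# score_application() is a hypothetical stand-in for the supplier's real
# scoring API; wire it up to their endpoint before drawing conclusions.
from statistics import mean

def score_application(text: str) -> float:
    """Hypothetical stand-in returning a sift score from 0 to 100."""
    return 50.0  # placeholder so the sketch runs end to end

# Matched pairs: identical substance, but the variant keeps surface
# traits associated with dyslexia (e.g. phonetic spellings) intact.
pairs = [
    ("I managed a team of five engineers.",
     "I managed a teem of five enginears."),
    # ... many more pairs across roles, lengths, and writing styles
]

gaps = [score_application(base) - score_application(variant)
        for base, variant in pairs]

# A mean gap consistently above zero suggests the model penalises the
# trait itself rather than any job-relevant skill.
print(f"mean score gap: {mean(gaps):+.2f}")
```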
Further reading:
- AI Law Hub – A difficult different discrimination: Artificial Intelligence and Disability
- MIT Technology Review – Disability rights advocates are worried about discrimination in AI hiring tools
- Toolify.ai – Combatting AI Discrimination: Advocating for Dyslexic Rights
- VentureBeat – Researchers claim AI system can distinguish between dyslexic and skilled readers
Personality tests
Cognitive, personality, and cultural fit tests (which go by many other names) can be discriminatory to neurodivergent applicants, particularly those who may not conform to expected social cues when presented with abstract corporate culture questions.
Many of these personality tests ask candidates to pick the least bad option from four behaviours no one would ever choose in a professional situation, or pose more abstract questions built on metaphor or unspoken social expectations. The purpose often appears to be asking questions that are obtuse or incomprehensible by design. Many people get stuck at this stage: they would never pick any of the offered answers, or they are not comfortable in their understanding of the implied meaning, leaving them, at best, to guess what the recruiter wants to hear.
A BBC article titled “Autistic people held back by job interview questions” includes examples such as an autistic man who has difficulty filling out forms and usually asks to talk through application questions with an employer over the phone instead - but says some workplaces refuse his request. Examples like these are not uncommon when people are looking for clarification on opaque screening tasks. The article highlights some of the findings of the Buckland Review on Autism and Employment, whose recommendations 9-13 focus specifically on recruitment practices that appropriately support autistic applicants.
Some organisations are creating ever more inaccessible personality tests, taking the removal of clear questions and accessible alternatives to a new level. A recent example doing the rounds on various forums is a personality test from America that presents users with a series of images depicting a character from the 1998 hit single I’m Blue (Da Ba Dee) by Eiffel 65 in various incomprehensible situations. Test takers are then asked to state whether they see themselves in the images by selecting “Me” or “Not Me” as a response.
Not only are these images wholly reliant on perceived context, but many are so surreal that any professional meaning would be almost impossible for any user to identify, let alone a neurodivergent applicant. Three examples of these images are described below.
Our alt text attempts:
- A blue person yawns while resting their elbow on a table. They are facing away from a large purple crystal sculpture surrounded by an orange double helix.
- A blue person faces forward with their eyes closed with a slight smile. In the background a vortex whirls with other angry blue people facing towards the main person, a whirling clock, and several flying pieces of paper.
- A blue person stands up with their hands in the air and a shocked expression on their face. They are standing in a tiered seating area. The person directly behind them is looking at them with a hard to discern negative reaction.
In the alt text for the bank of images above, we have tried to describe them as objectively as we can, focussing on the objects within the images. What alt text could possibly be provided in the original circumstances that would offer a comparable task to a screen reader user?
Just as flippantly, we could suggest that these “professional” screening questions may instead be:
- Do you yawn at the dark crystal and are generally unimpressed by the works of Brian Froud?
- Do you sleep well without thinking about the inexorable passage of time wasting away beneath you?
- Do you react by throwing your hands in the air and screaming when being held at gunpoint?
When you strip out all clarity and abstract content to the point that it can be interpreted this broadly, personality tests cease to act as functional tests of someone’s aptitude for a role and become an arbitrary tool for dismissing candidates for any reason.
If you are considering using personality tests or other culture fit tests, you should ensure that every question is clear in both its description and its response options. Likewise, you should take a step back and consider whether some of the cultural fit aims you are testing for are inherently ableist in some way, or specifically mark against neurodivergent behaviours that have no material impact on someone’s ability to fill the role.
Using looks-based checks
If you would like to forget any kind of decency whatsoever, you could follow in the footsteps of some American companies that have recently started requiring candidates to pass a “looks” scoring test for some customer-facing roles. In essence, one task within the recruitment process is to prove that you are above a 7 on the attractiveness scale, as measured by AI tools that judge looks on metrics such as level of grooming, leanness, complexion, facial symmetry, and so on.
It is probably apparent why this poses such an outright risk of discrimination, not only against disabled people but against people of different ethnicities as well. For example, there is no guarantee of what training data set these attractiveness AIs have been built on. If they treat Caucasian features as the standard of beauty, they may inherently give lower scores to someone with darker skin or non-white facial features.
Because these tools also comment on the complexion or patchiness of the skin, anyone with a skin condition, birthmark, facial injury, and so on may also receive a lower score that is in no way related to their ability to do the job.
Obviously, attractiveness is in many ways subjective and should never be used as a metric to judge someone’s worthiness. This type of AI use may be the next attempt by some unscrupulous companies, but it should be roundly condemned and avoided by anyone who does not want to open themselves up to significantly increased risks of discrimination complaints, quite apart from the moral reasons this is so unpleasant.
Our advice to candidates, if you ever encounter one of these in the wild, would be to play the AI at its own game: use camera filters to adjust your appearance and score higher, then report the company for using discriminatory hiring practices.