Organizations are increasingly using algorithms and automated decision-making to assist them in making decisions about individuals, but to what extent is this a step in the right direction?
Many employers now include algorithms and automated decision-making in hiring and other personnel processes. The London School of Economics and Political Science recently reported that more than 60% of firms had adopted new digital technologies and management practices as a result of COVID-19. Our own Dentons AI Survey found that 60% of businesses are using or piloting AI today.
Whilst the use of these AI tools provides benefits to an organization, such as speed and cost savings, employers should be mindful of the legal implications of placing too much reliance on AI without an appropriate understanding of the legal risks and without suitable controls in place.
Consideration of data protection law
The UK GDPR (the retained EU GDPR as it applies in the UK post-Brexit) provides that data must be processed “lawfully, fairly and in a transparent manner”. The complexity of AI decision-making presents a challenge here: what matters most is that the logic of a decision (if not the underlying technicalities) can be explained clearly. Technologies are also being developed to assist with this.
In addition, businesses must ensure that the processing of employee data does not have an unjustified, adverse effect on the individual. This would be “unfair” processing. It would also likely mean they would not be able to rely on “legitimate interests” as a legal basis for processing (and another basis of processing would be necessary).
Special category data (e.g. health, race and religion) requires additional protection when organizations use algorithms to process it. Explicit consent is most likely required, unless the data is being used in a way that is necessary for compliance with employment law requirements. Those circumstances are likely to be limited in practice, and there are challenges in relying on consent in an employment context (see below). So, use of special category data to support AI needs to be considered very carefully.
In addition, there are rules on “automated decision-making”. UK GDPR specifically prohibits “solely automated decision-making that has a legal or similarly significant effect” unless:
- you have explicit consent from the individual;
- the decision is necessary to enter into or perform a contract; or
- it is authorized by domestic (UK) law applicable to the organization.
The first test here is whether the outcome of the AI decision has a “legal or similarly significant effect”. Not all AI decisions will satisfy this requirement, but many in an employment context will, such as applicant screening tools and role suitability assessments.
The grounds to permit this activity represent a high bar for employers to satisfy. Consent might appear the most relevant in an employment context, but there is a risk that the power imbalance between a job candidate and a prospective employer means consent is not considered freely given (and, as such, is invalid). Where consent is relied upon as a basis of processing, organizations also need to keep in mind that individuals are entitled to refuse or withdraw consent at any time, without suffering any detriment (in practice, that means they could have a right to switch to a process that does not involve automation).
What is “necessary” to enter into a contract can be difficult to establish. The Information Commissioner’s Office guidance states that the processing must be a targeted and proportionate step which is integral to delivering the contractual service or taking the requested action. This exemption will not apply if another decision-making process with human intervention was available.
So, targeted use of this technology in these vital decision-making processes – most likely to support, rather than replace, a human decision – will be necessary to avoid running into GDPR hurdles.
Before introducing algorithms and automated decision-making as part of any process, organizations must prepare a Data Protection Impact Assessment (DPIA) to identify, analyse and minimize the data protection risks and ensure compliance with UK GDPR.
Consideration of the Equality Act 2010
Algorithms are human-made and, as such, are inherently at risk of embedding bias. A significant concern arises if an algorithm inadvertently leads to discrimination in breach of the Equality Act.
For example, an automated recruitment system could discriminate if it:
- favors one gender over another (including scoring language more typically used by male candidates more highly than language more commonly used by female candidates);
- values length of service in past roles disproportionately over experience/skills, which could lead to age discrimination risks; or
- does not recognize overseas qualifications on a par with those from the UK (potentially exposing an employer to race discrimination claims).
Any automated decision-making process that does not build in disability discrimination safeguards and reasonable adjustments could also place the employer at risk. There are examples of individuals whose disability impacts their ability to satisfactorily complete multiple choice tests, despite being able to answer the same questions in free text. An automated process that does not build in flexibility (including appropriate triggers for human checks) could lead to equality concerns.
A robust AI tool may recommend candidates for recruitment that surprise an organization. We know that diverse teams work well, but that does not always play out in recruitment decisions. Diversity and a range of personality types can challenge existing (often unconscious) preferences related to team cohesion. This could leave recruiters wondering whether the AI tool has got it wrong and needs to be changed or overruled, or whether it has instead shone a spotlight on potential bias in the human decision-making process that has gone unchecked until now.
Takeaway considerations for employers
Bias and discrimination can unfortunately be found in AI tools, often stemming unintentionally from the humans who program them or inherent bias in the datasets used to “train” the technology.
Notwithstanding this, AI may also be the solution (or at least a helpful part of it) to achieve more equitable decisions. As technology continues to develop, algorithms can be programmed to detect and hopefully reduce discrimination and bias in decision-making. And, perhaps, we should be prepared to embrace some surprise outcomes from AI that in fact redress unidentified bias in the human decision-making process (robot 1:0 human).
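As a purely illustrative sketch of the kind of bias detection described above, the widely cited “four-fifths rule” compares selection rates between groups: if the lowest group's rate falls below 80% of the highest, the process may warrant review. The function names and sample data below are hypothetical assumptions, not a real tool or dataset, and this is a heuristic rather than a legal test under the Equality Act.

```python
# Hypothetical sketch: an adverse-impact ("four-fifths rule") check on
# recruitment outcomes, grouped by a protected characteristic.
# All names and data are illustrative assumptions, not a real dataset.

from collections import defaultdict

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 are often treated as a red flag (a US EEOC
    guideline, used here only as a rough illustration)."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Illustrative screening outcomes: (group, selected?)
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
print(f"Adverse impact ratio: {adverse_impact_ratio(outcomes):.2f}")
```

A check of this kind does not establish or rule out discrimination on its own, but it shows how an automated flag could trigger the human review that the preceding sections suggest is essential.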