

Automated decision-making based upon profiling, which may be inaccurate or unfair, is one of the great dangers of the digital age. It's easy to see the potential for serious harm: the allocation of a university place, a job, a mortgage, insurance or even lifesaving medical treatment on the basis of a computer's binary decision.

The power of AI and algorithms to produce world-changing outputs will be unleashed only if people trust those outputs and continue to allow their data to be used as an input.

There have been two recent developments in the UK in relation to individual rights and data which tech companies should factor into their planning.

Article 22 GDPR – UK divergence from Europe?  

Article 22 of the GDPR, which has become the global standard for data protection, is, of course, a key piece of legislation designed to manage the risks of automated decision-making.

Article 22(1) gives individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. Article 22(3) contains the right to obtain human intervention if certain conditions apply.

In practice, Article 22 has not had the impact its legislators may have hoped for.

This may be because the GDPR is principles-based and claimants appear to have been reluctant to incur the cost of establishing what its terms mean in individual cases.

We have a good idea on some points: for example, we know that "legal effect" means anything that impacts a person's legal status or rights, e.g. access to social welfare. Guidance also suggests that "significant effect" includes decisions which may affect finances, health, education or employment.

But other, trickier questions remain unanswered. How much human input is needed for something not to be automated? What about rules devised by humans but adapted by machine learning? How do we pinpoint when the decision is made: when a score is generated, or when the score is used? What if a decision is made by a human but their discretion is fettered upstream by an algorithm?

Recently, the UK Government – via its Taskforce on Innovation, Growth and Regulatory Reform report – proposed the removal of Article 22 from UK law altogether. If this step were "deemed too radical", it proposed reforming the law to "permit automated decision-making" and "remove human review of algorithmic decisions".

The outgoing Information Commissioner, Elizabeth Denham, has spoken out to explain why the right to human review must remain.

In her final speech as Information Commissioner, Denham described the proposals in relation to Article 22 as a "step backwards", pointing to the ICO's response to the consultation. In that response, the ICO suggested that simply removing the right to human review was not in people's interests and was likely to reduce trust in the use of AI. Instead, given the increase in decision-making in which a human is involved but the decision is still significantly shaped by AI or other automated systems, the ICO considered that the Government should extend Article 22 to cover partly, as well as wholly, automated decision-making in order to better protect people. The ICO also encouraged consideration of how the current approach to transparency could be strengthened to ensure that human review is meaningful.

As Denham pointed out in her speech, the fundamental principles – maintaining trust and asking whether data is being used fairly and transparently – remain constant however fast technologies and opportunities develop.

The consultation has now closed and the outcome is awaited.

UK Supreme Court offers roadmap for claims challenging certain business models in tech

In its decision in Lloyd v Google, the UK Supreme Court gave individuals a clear roadmap for challenging business models based upon collecting, sharing and misusing vast quantities of data. "Misuse" here means any interference with individuals' privacy rights that is not justified.

The court indicated that the English common law tort of misuse of private information would be the natural cause of action in cases where a defendant's "very purpose" was to wrongfully obtain and use private information to exploit its commercial value.

It also indicated its view that "user damages", whilst not available under data protection law, should be available for misuse of private information. "The law should not be prissy about awarding compensation based on the commercial value of the exercise of the right to control the use of the information," it said.

Because the cause of action is rooted in the common law, the courts can protect the right to privacy, where appropriate, with an intense focus upon all the facts of a particular case. This decision may act as a brake on the plans of some tech companies.

Conclusion

Taken together with global developments, such as the EU's Artificial Intelligence Act and the Australian Human Rights Commission's report on Human Rights and Technology, it's clear that tech companies need to consider their practices in relation to data carefully at this critical juncture in the development of the new landscape.

Key contacts


Kate Macmillan

Consultant, London
