Biometric myopia risks stifling the facial recognition debate

17 June 2020 | Australia
Legal Briefings – By Anna Jaffe and Kwok Tang

It is clear from recent events that, while the debate surrounding the collection of biometric data has increased and continues to increase in volume, it is arguably not expanding in scope, creating risks and missed opportunities for regulators and innovators.

2020 has been a rollercoaster ride for many reasons, including the ebb and flow of media headlines about — and support for bans on — the use of facial recognition technology.

The rush to suspend the use of facial recognition, and the use of reactive regulation, risks stifling the necessary responsible innovation to get the technology right.

The past six months have seen detailed journalistic investigations of Clearview AI, the deployment of surveillance technologies for COVID-19 response, and now increased scrutiny and suspicion of the use of facial recognition technology by law enforcement as a result of the recent protests.

Then, in the past week, technology companies such as IBM and Amazon announced that they would temporarily or permanently cease offering such technologies to law enforcement.

This means that the need to seriously consider the use of such technology in both the public and private sector is as urgent as ever.

In Australia, the Australian Human Rights Commission has proposed the introduction of a “legal moratorium on the use of facial recognition technology in decision making that has a legal, or similarly significant, effect for individuals.”

But it is time to reconsider these ‘default’ short-term, reactive responses to new and emerging technologies.

Although it is no less critical to respond to the issues raised by facial recognition technology, a fragmented and inconsistent approach limits the effectiveness of potential regulatory responses and makes it difficult to ‘future-proof’ the implementation of such technology.

This is not to suggest that all regulation impedes innovation. In fact, regulation can be an innovation ‘enabler’ for industry, helping participants establish or maintain consistency of operations and a social licence to operate, and creating an environment for industry to flourish.

However, to act as an enabler, regulation must be implemented in a way that advances a coherent and holistic global standard.

The imposition of a ban or moratorium on a specific technology is unlikely to further the development of a global standard but could instead stifle innovation (including responsible innovation).

The current regulatory approach to such technologies focuses too narrowly on specific aspects of the technology being regulated, rather than the manner in which it is used and the outcomes achieved by it. In relation to facial recognition technology, this means, first, that too little emphasis is placed on whether each implementation is effective, fit for purpose, or accompanied by appropriate safeguards.

Concerns about the accuracy of, and potential for bias in, biometric identification technologies were raised by the Parliamentary Joint Committee on Intelligence and Security in its rejection of proposed bills relating to the use of identity-matching services.

These concerns were later reinforced by overseas research which found that facial recognition algorithms used by law enforcement are up to 100 times more likely to misidentify Asian- and African-Americans and Native Americans than white men.

Secondly, the narrow focus on specific biometric identification technologies also leaves little room to consider the broader contexts in which such technologies (and the data collected by them) will be used.

Biometric information — which includes facial images but also other identifiers — is unique because it is easily accessed and collected, potentially even without the awareness of the subject of that information, and difficult to alter and conceal.

When this data is combined, it can create a detailed, highly targeted portrait of an individual that is capable of almost literally following that person around in their day-to-day life. This can facilitate simple, frictionless access to public and private sector services and benefits, and (as we have seen) can be deployed as part of broader public health efforts, but if not responsibly deployed, can also come at a cost to individual privacy.

To fully realise these benefits, we must carefully consider and implement a path forward, ideally without resorting to an outright ban or moratorium on specific technology.

This could take the form of an adaptable framework for the responsible use of new and emerging technologies that have the potential to significantly impact upon individual rights.

This framework can, and should, be crafted as part of a collaborative, multi-disciplinary and multi-stakeholder process to ensure the principles under the framework are appropriate, proportionate and contain necessary safeguards.

 

First published in the AFR on 16 June 2020.

To read more about this issue, please see Herbert Smith Freehills’ submission to the Australian Human Rights Commission on this topic.
