Policing biometrics – A post-facial recognition playbook emerges

17 August 2021 | Insight

As Australian watchdogs turn their attention to regulating biometrics, a broad consensus is emerging


The ease with which technology allows biometric information to be collected and accessed has raised widespread concern about possible harms and the significant potential impact on individual rights. In response, regulation of biometric technology has sharply increased in the past few years, with the number of new or proposed laws more than doubling since 2018. While awareness and the volume of regulation have increased, the regulatory landscape for biometrics remains somewhat unclear—although key principles are beginning to crystallise through proposed regulatory frameworks and approaches to other technologies.

BIOMETRIC DATA LEGISLATION


THE KEY TO EFFECTIVE REGULATION

As biometrics regulation has matured, there is now a general understanding that effective regulation relies on a clear understanding of the harmful uses of such technologies, in order to recognise and mitigate any potential harms. This focus on identifying harms, risks and corresponding safeguards is essential for regulation that outlasts the most recent phase of the news cycle.

This approach signals a progression from the immediate aftermath of recent public scrutiny over the development and deployment of facial recognition technology, where regulation often focussed on facial recognition without contemplating other kinds of biometric information and technology, and on preventing, rather than understanding, its use.

Notably, approximately half of the current actual or proposed regulations globally which apply to biometrics address facial recognition and fingerprint data specifically, with the remainder largely addressing easily accessible physiological data such as eye and voice information. Little consideration has been given in such regulations to data requiring more advanced technology, such as gait or vascular recognition. While the unique nature of biometric information means that specialised regulation is often necessary, regulation that is overly targeted and not developed proactively is unlikely to be sufficiently flexible to accommodate the potential breadth of biometric information.

These responses were also often introduced in a reactive manner, meaning that the measures adopted may not be suitable for the long term, as they fail to engage with the emerging spectrum of biometric data, technological developments and their various applications (indeed, in a rapidly changing technology landscape, regulations may already not be fit for purpose by the time they are introduced). In particular, moratoriums avoided the possibility of harm arising from facial recognition technology by forgoing it altogether (temporarily or otherwise), but failed to address how such technologies might eventually be used in an acceptable manner or to guide their development.

It is now clear that a sustainable approach to regulation requires the development of a framework which contemplates how biometric technology may actually be used in an acceptable manner, by engaging substantively with the specific risks of certain applications and building in principles, safeguards and obligations that mitigate those risks.

Two recent proposals, canvassed below, take significant steps in this direction.

The Human Rights and Technology Final Report published by the Australian Human Rights Commission (AHRC) earlier this year advances the thinking in Australia considerably. Amongst new and emerging technologies, the AHRC specifically considered the use of facial recognition and biometric systems, noting the high risk and likely impacts on human rights of such technology. While the AHRC does propose a temporary moratorium on the use of biometric systems in certain contexts where there is a high risk to human rights (such as policing and law enforcement), it also recommends the introduction of specific laws for those situations and identifies critical criteria for the assessment of human rights impacts. To develop specific protections, the AHRC considers that the human rights impacts of biometric technologies must be assessed through three critical lenses: the type of technology used (e.g. one-to-one vs one-to-many), the context for its use and the existing protections against the risk of harm (legal or otherwise).

The European Union has presented a more fully-fledged example of this risk-based regulatory approach in its Proposal for a Regulation laying down harmonised rules on artificial intelligence (the Proposal).

The Proposal includes a broad prohibition on the use of AI in ‘real time’ remote biometric identification when used in publicly accessible spaces for the purpose of law enforcement. It is flexible in its application to biometrics (determining that such prohibited systems should be identified functionally, ‘irrespectively of the particular technology, processes or types of biometric data used’), but specific about the nature and risks of the prohibited use (the high latency, low transparency, broad applicability and significant consequences of that use being particularly intrusive on individual rights and freedoms).

Specific exceptions to the prohibition are available where use is ‘strictly necessary to achieve a substantial public interest’, such that the inherent risks of that use are outweighed. However, even in such cases the Proposal recognises that the necessity and proportionality of use will vary among the permitted exceptions. The threshold for proceeding without express and specific prior authorisation is onerous: the urgency must be such that it is ‘effectively and objectively impossible’ to obtain authorisation beforehand.

Outside of the prohibition, the Proposal also responds to other risks specific to biometric systems, with remote biometric systems subject to specific requirements for logging capabilities, human oversight and conformity assessments by notified bodies. More broadly, natural persons must be notified when they interact with emotion recognition or biometric categorisation systems regardless of whether the system is high risk, due to the specific risk of impersonation or deception.

These two proposals exemplify, but are by no means definitive of, the emergent approach to biometrics that decentres facial recognition and the technical capability of biometric systems, and instead uses a risk-based lens to identify harmful applications and mitigate potential harms.

Both private and public organisations should be proactive in considering how a risk-based approach shapes their engagement with biometric technology. The development of responses in other areas of risk that have undergone significant change from rapidly emerging technologies, such as cyber or privacy, suggests that measures adopted for specific technologies or threats are unlikely to be successful or sustainable. The focus should now be on developing principles for the use of biometric systems and strengthening the risk assessment matrix to be prepared for the post-facial recognition landscape.
