A fragmented and narrow focus on facial recognition tools risks undermining their potential

In the past six months, the global furore over both public and private sector use of facial recognition technology has only increased. However, although this debate is growing in volume, it is arguably not expanding in scope.

The current regulatory approach focuses too narrowly on specific aspects of the technology being regulated, rather than the manner in which it is used and the outcomes it achieves. Not only does this limit the effectiveness of potential responses by regulators, but it also makes it difficult for innovators to truly plan for and ‘future-proof’ the implementation of such technology. This inability to fully grasp the significance of the issue at hand increases the risk of losing a real and present opportunity to set a clear, coherent and comprehensive standard for responding to the challenges of new technologies.

In the past month alone, the New York Times reported that an American start-up, Clearview AI, claimed extensive relationships with numerous North American law enforcement agencies for use of its proprietary personal identification service, through which an officer can upload a person’s photo and, in real time, obtain access to matching photos scraped from social media and other websites. Although subsequent investigations have suggested that Clearview’s claims about both its law enforcement relationships and its technical capabilities may be inaccurate, this is only one example of the almost-daily headlines about the use of technology by governments and private sector entities to identify individuals, whether in isolation or as part of a crowd. These headlines have been met with proposals to introduce various restrictions on the use of such technology, ranging from self-regulatory principles and frameworks to outright bans, but no coordinated global consensus has yet emerged.

What is clear from the steps that have been taken to date is that much of the discussion around the use of facial recognition technology, and the corresponding proposals for its regulation, has been reactive in nature and accordingly far too narrow in approach.

So what do we talk about when we talk about ‘facial recognition’? At the outset, many proposals focus specifically (and in some cases exclusively) on the use of facial images for identity matching, but facial images constitute only one type of data that can be used for this purpose. Significant advances in storage and analytics technology mean that the collection and use of biometric information is growing not only in variety but also in volume. ‘Biometric information’ in this context includes facial images, but ranges from fingerprints and iris patterns to other identifiers less discernible to the naked eye yet just as unique to an individual (for example, patterns of human movement); these identifiers are both universal and easily, publicly accessible. Despite this, references abound to ‘facial recognition’ in isolation.

This narrow focus is also clear in the way that current proposals for regulatory reform target what the relevant technology is, rather than what it does or could do in each case. This leads to two separate, but equally problematic, outcomes. The first is that too little emphasis is placed on whether each such technology is effective, fit for purpose or accompanied by appropriate safeguards. Concerns about the accuracy of, and potential for bias in, biometric identification technologies were raised by Australia’s bipartisan Parliamentary Joint Committee on Intelligence and Security in its late-2019 rejection of proposed Australian legislation relating to the use of identity-matching services (particularly in relation to facial images). Those concerns have been reinforced by findings of the US National Institute of Standards and Technology that facial recognition algorithms used by law enforcement are up to 100 times more likely to misidentify Asian-Americans, African-Americans and Native Americans than white men. Similar concerns have been raised in reporting on Clearview’s facial matching technology.

This narrow focus on specific technologies also leaves little room to consider the broader contexts in which such technologies (and the data collected by them) will be used. Biometric information is itself unique because it is easily accessed and collected, potentially even without the awareness of the subject of that information, and difficult to alter or conceal. However, the true power of this information may only be realised when it is combined with other information collected about the same person, creating a detailed, highly individualised and targeted portrait (the ‘mosaic effect’ of combined data points) that is capable of almost literally following that person around in their day-to-day life.

Ultimately, if this overly narrow or fragmented approach continues, these technologies, and their uptake by both the public and private sectors, are likely only to continue expanding to fill these unregulated spaces.


What do we mean by ‘biometric information’?

What characterises information as ‘biometric information’?

Biometric identification uses physiological information about a natural person (known as biometric information, or biometric identifiers) to identify that person. The kinds of biometric information available for collection and use are growing exponentially alongside the development of technical capabilities and the technological infrastructure required to support such use. These range from facial images, fingerprints and iris patterns to others less discernible to the naked human eye. For example, French and Australian researchers have discovered that individuals may have unique ‘muscle activation signatures’ apparent in their movement, which may be used for identification purposes.

What is unique about biometric information?

Biometric information is both universal and accessible: each individual has unique biometric identifiers, and such information is easily accessible yet difficult to change or conceal. Unlike a password or an alias, biometric identifiers are not intentionally created by the individual. The nature of such information also means that the individual does not necessarily need to be informed about, or give consent to, the collection and use of data about them.

Further, although facial images have long been available and attached to government and other identifying documents, they can now be captured by any number of cameras in public or private environments. Similar technologies are an increasing part of the infrastructure around us: some banks record the way that individuals scroll, click and move their mouse or fingers while using banking platforms, creating haptic and gestural profiles that have been used to detect fraud (a simple sketch of this kind of profiling follows below). Not only is it easier than ever to capture biometric information, but comparisons of biometric templates, and correlations within the abundance of available data, in turn create even more data about an individual.
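To make the idea of a gestural profile concrete, the sketch below shows one way timestamped cursor samples might be reduced to a handful of behavioural features. It is a purely illustrative reconstruction, not any bank’s actual system: the function name, event format and chosen features are all assumptions.

```python
# Illustrative only: reducing timestamped cursor samples to a simple
# behavioural 'profile'. Real fraud-detection systems use far richer
# features; everything here is a hypothetical simplification.
import numpy as np

def gesture_profile(events: list[tuple[float, float, float]]) -> dict[str, float]:
    """events: (timestamp_in_seconds, x, y) cursor samples from one session."""
    t, x, y = (np.array(col, dtype=float) for col in zip(*events))
    dt = np.diff(t)                                # time between samples
    speed = np.hypot(np.diff(x), np.diff(y)) / dt  # pixels per second
    accel = np.diff(speed) / dt[1:]                # change in speed
    return {
        "mean_speed": float(speed.mean()),
        "speed_variability": float(speed.std()),
        "mean_abs_accel": float(np.abs(accel).mean()),
    }
```

A session’s profile can then be compared against the account holder’s historical profiles, with large deviations flagged as possible fraud.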

How might biometric information be collected?

Clearview has reportedly set itself apart from competing biometric identification systems by scraping images from the open web to gather a total of three billion photos, a far larger database than the 640 million photos available to the FBI or the 1.8 million photos in the San Diego police’s recently shuttered mobile facial recognition system. These photos are also more diverse than the mugshots or DMV photos used by most other facial recognition databases, potentially providing images from a variety of angles, points in time and environments, including photos of (or featuring) a person that were uploaded by someone else.

The automated collection of large volumes of information via ‘scraping’ (usually from a website) is prohibited by many major platforms, including Facebook, YouTube, Instagram and Twitter. However, scraping is technically difficult to prevent while maintaining the usability of these services on the open web (see the sketch below), and has only been actively policed by digital platforms in recent years. Furthermore, data scraped at one point in time remains available even where it has since been taken down or restricted at the original point of upload.
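The sketch below illustrates why prevention is difficult: a scraper issues the same kind of HTTP request an ordinary browser does, so blocking one without degrading the other is hard. This is a generic, minimal example with a placeholder URL, not Clearview’s pipeline, and running it against a real platform would typically breach that platform’s terms of service.

```python
# Minimal, generic sketch of image scraping: the request below is
# indistinguishable in kind from one sent by a normal browser.
# 'example.com' is a placeholder, not a real target.
import requests
from bs4 import BeautifulSoup

def collect_image_urls(page_url: str) -> list[str]:
    """Fetch one public page and return the image URLs it references."""
    html = requests.get(page_url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [img["src"] for img in soup.find_all("img", src=True)]

urls = collect_image_urls("https://example.com/profiles")
```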

However, Clearview is not the only organisation scraping online personal photos, nor is scraping the only way in which biometric information may be collected, and subsequently used, without an individual’s consent. In mid-2019, Microsoft removed public access to its database of 10 million scraped photos of 100,000 public figures. Other methods of re-purposing available information can also bypass consent: in the US, the FBI and ICE can search state databases of driver’s licence photos.

What are the applications of biometric identification technology?

How does biometric identification technology work?

Biometric identification technology can operate on a:

  • ‘one-to-many’ basis (such as Clearview’s solution), where a person’s biometric identifier is captured and probabilistically compared to biometric information in an existing database to find the closest, though not necessarily exact, match; or
  • ‘one-to-one’ basis (such as Apple’s Face ID on its mobile devices), which determines whether an individual’s biometrics match their existing enrolled information in order to verify that they are who they purport to be (a simplified sketch of both modes follows below).
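To make the distinction concrete, here is a minimal sketch of both modes over toy embedding vectors. Real systems derive these embeddings from trained neural networks; the function names and the 0.9 verification threshold are assumptions for illustration, not any vendor’s implementation.

```python
# Toy illustration of the two biometric matching modes over embedding vectors.
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two biometric embeddings (1.0 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_one_to_many(probe: np.ndarray, database: dict[str, np.ndarray]) -> tuple[str, float]:
    """Identification: rank every enrolled identity against the probe and
    return the closest (not necessarily exact) match."""
    name = max(database, key=lambda k: similarity(probe, database[k]))
    return name, similarity(probe, database[name])

def verify_one_to_one(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.9) -> bool:
    """Verification: does the probe match the single enrolled template
    closely enough to confirm the claimed identity?"""
    return similarity(probe, enrolled) >= threshold
```

The practical difference is visible in the return types: identification yields a best guess ranked by similarity across a whole database, while verification yields a yes/no answer against one claimed identity.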

Artificial intelligence (AI) now enables this matching to be performed by algorithms complex and efficient enough to match ‘one-to-many’ identifiers and produce results with minimal delay and input: Clearview was reportedly able to identify a reporter even when their nose and the bottom of their face were covered.

One of the principal appeals of AI is the technology’s capacity to learn for itself what is and is not useful in accomplishing a particular task. However, this ability to self-select and weigh data points often means that it is difficult to know what criteria an algorithm is using to reach its result: the creators and users of these systems see inputs and outputs, but not the process for getting from one to the other (the ‘black box’ problem). As a result, biometric identification systems are particularly vulnerable to poor design choices, errors and attacks that affect, or introduce bias into, both the decision-making components of the technology and the biometric information collected and stored within it.

The opacity of AI decision-making is likely to increase as systems become more sophisticated and are exposed to (and ‘trained’ upon) more information. In addition, as more biometric identification systems become commercially available, this technology may increasingly be used by laypersons who are unable to discern (or may be less concerned with) how these systems work. At the same time, ‘automation bias’ may lead users to place too much confidence in the results achieved by the technology, regardless of its real-world accuracy or effectiveness.

How is biometric identification technology currently used?

One of the most popular applications of biometric identification technology is for law enforcement purposes. For example, Clearview pitches itself as a tool for law enforcement to identify suspects or perpetrators. However, neither the biometric information collected nor the identification technology employed by Clearview itself limits its potential uses and applications: the current exclusive application of Clearview’s technology to policing and security is a decision made by the business behind it.

Consistent with the general lack of consensus on the use and applications of biometric identification, current use of the technology by both the public and private sectors varies widely and is not limited to police and law enforcement. For example:

  • In terms of police use, San Diego shut down its mobile facial recognition services after seven years due to a temporary state-wide Californian ban taking effect at the start of 2020, whereas the widespread roll-out of similar services in London was confirmed in early 2020.
  • China has introduced a law that requires anyone purchasing a SIM card or registering for new mobile phone services to provide a facial scan.
  • Blurring the line between government and private biometric identification, the Department of Home Affairs has submitted that the identification services contemplated under its proposed legislation could be used for age verification to determine access to online wagering and pornography, and be made more broadly available to private sector participants that apply for access.

What are the current regulatory responses and approaches?

Alongside the rapid growth of the information and technology available for biometric identification, there is increasingly widespread public concern about whether, and in what circumstances, the use of biometric identification technology is necessary, as well as what happens after an individual is identified.

What concerns are regulators grappling with?

In many contexts, public concern has arisen from the lack of detail and transparency in relation to the technology itself. The broadest criticism of the proposed Australian legislation was that the identification services it contemplated lacked sufficient detail across the board. The Parliamentary Committee noted that the rejected bills did not clearly specify the technology that would be used in the identification services. As a result, Australian citizens could not determine how their rights and responsibilities might be affected; concerned parties had insufficient detail to engage with how information would be shared and used; and the Department of Home Affairs’ unwillingness to reveal its technology vendors made it impossible to verify the accuracy of the technology. In Clearview’s case, until the New York Times report, there was little awareness among the general public that such technology was being used at all.

This lack of transparency often extends to what the technology could do in future, as well as its current capabilities. For example, the New York Times found that Clearview’s source code contained language that would allow it to be integrated with augmented-reality glasses, enabling identification of every person in the wearer’s field of view. Although the underlying identification technology supported this, and an internal prototype had been developed, Clearview merely clarified that it had no current plans to introduce such a service. Similarly, the Department of Home Affairs reported that the proposed ‘identity-matching’ services in Australia would not be used for mass surveillance because of the Department’s lack of current capability and resources, rather than because of any restriction within the proposed laws themselves.

What current regulatory proposals are under discussion?

Governments are facing active challenges to their proposed uses of biometric identification and to the adequacy of the legal protections against corresponding potential harms. The UK and US governments face legal challenges from civil rights organisations seeking information about, respectively, the facial recognition technology used in visa applications and the surveillance conducted by the Justice Department, DEA and FBI. These challenges have extended to calls to impose moratoriums and outright bans on the use of this technology, including:

  • NYU’s AI Now Institute has pushed for a ban on facial recognition in sensitive social and political contexts, by both governments and private actors, until the risks are sufficiently understood and regulated;
  • the Australian Human Rights Commission has also suggested a moratorium on potentially harmful uses of facial recognition until the introduction of a legal human rights framework;
  • some American jurisdictions are banning or restricting the use of facial recognition by certain actors or in certain applications; and
  • in early 2020, it emerged that the EU may be considering a five-year ban on the use of facial recognition by public agencies, until a framework for ethical AI use is formulated.

From an industry perspective, some technology providers have expressed support for proposed legal restrictions on the widespread use of facial recognition technology. Google supports a temporary moratorium on the use of facial recognition, citing concerns about the technology as the reason it has not rolled facial recognition out across its own services; Microsoft supports limited bans on private use, but believes that moratoria would impede the development of both the technology and strategies to deal with its negative repercussions. Similar discussions are likely to ensue as other forms of biometric identification come to the fore, again in both private and public spheres.

 
