
Professor Julia Hörnle: 'Think you're just a face in the crowd? Not necessarily'

In this article, Professor Julia Hörnle of QMUL's School of Law considers the impact of rapidly developing face recognition technologies on privacy.

The widespread practice of uploading photographs to Internet social networking and commercial sites has converged with advances in face recognition technologies to create a situation in which an individual can no longer be just a face in the crowd. Despite the intrusive potential of face recognition technologies (FRT), the unauthorised application of such technologies to online digital images to obtain identity information is neither specifically prohibited nor a central part of the international law reform discourse. A compelling issue facing nations around the world is therefore how to respond to this evolved digital landscape and properly govern the use of face recognition technologies so as to protect the privacy of their citizens.

To address this issue of the proper governance of face recognition technologies, it is perhaps useful to begin by considering how modern digital communications have undermined the traditional philosophical paradigm of privacy law. Traditionally, privacy laws were conceived to protect a vulnerable observed party from a powerful observer, a dynamic reflected in the entrenchment of the Big Brother motif in contemporary culture.

Yet in the digital realm, as has been noted in the literature, entities are simultaneously the observed and the observer, intruding upon the privacy of others even as their own privacy is violated. Further, whilst our present laws largely seek to protect privacy by controlling the flow of information, imposing duties and responsibilities on those who receive, transmit and store such information, the rapid and widespread electronic dissemination of information is increasingly rendering such attempts at control illusory. In such a changed landscape, a new objective for privacy laws is perhaps to curtail the intrusive impact of FRT by creating protected or safe spaces for the exchange of information.

Finally, it is relevant to address changing societal conventions as to digital privacy. It is often said that digital natives, those who played on the Internet whilst watching Sesame Street, have a different conception of privacy from digital immigrants, those who transitioned to the Internet during their adult lives. Whilst this may be something of a simplification, in designing a regulatory framework for FRT it is still useful to examine to what extent such changing norms have undermined the traditional public policy basis of privacy laws.

Coherent and consistent law

When a new technology emerges, it is often tempting to focus on the particular social harms and problems generated by the technology and to invent a whole new governance framework. Such an approach, as we have seen in other areas such as digital copyright law, commonly leads to a mosaic-like regulatory framework, where each piece of law retains its beauty and integrity but the whole is fragmented. A better approach might be to analyse the operation and effect of the new technology, identify analogous technologies which are already the subject of mature laws, and then use these established laws to guide the governance of the new technology. In this way, we can ensure that our laws move forward as a coherent and consistent whole, with new and emerging technologies being systematically encompassed within a broader overarching governance framework.

Adopting such an analogous-technology approach, we can perhaps glean some insights for the future regulation of face recognition technologies from the existing regulation of telecommunications interception and access (TIA). TIA laws essentially govern the use of listening devices and interception technologies that can be used to de-privatise what would otherwise be private information. They do so not through provisions aimed at controlling the data flow, but rather through regulation of the use of the technology itself, creating, in effect, protected spaces on telecommunications networks.

Whilst TIA laws admittedly relate to the exercise of investigative powers by public law enforcement officers, and FRT laws will have to encompass both relations between private entities and relations between the state (public law enforcement) and individuals, the structure and design of TIA laws can inform the design of FRT laws on such complex issues as what constitutes valid consent, the relevance of the distinction between public and private communications, and the nature and ambit of permissible exceptions to laws prohibiting the use of FRT.

A complex undertaking

Integrating and analysing these and other considerations to formulate a nuanced and comprehensive governance framework for face recognition technologies is a complex undertaking. Whilst the Australian Privacy Act 1988 (Cth) governs the collection, use, storage and disclosure of personal information about individuals, Australia, unlike the EU, does not have dedicated data protection legislation.

In the EU, data protection laws (Directive 95/46/EC and its national implementations) applying to private entities will have to respect principles such as informed consent (although other justifications might be applicable), purpose limitation, and data minimisation. Questions arise as to the meaning of these principles in the age of Big Data, as is reflected in the current debates about the EU General Data Protection Regulation, which will replace the 1995 Directive.

Big Data and face recognition technologies raise the question of whether consent is a meaningful justification for the processing of facial recognition data. The user is by definition unsure what he or she is consenting to. Consent to the publication and republication of a photo on another profile, for example, is one thing, but aggregating information across the Internet and re-identifying individuals through face recognition technology from a single tagged photo goes much further, beyond the imagination of the average user. Powerful FRT means that users cannot foresee how and by whom their personal identifying information will be used; hence the limits of consent as a justification for such processing. Interception by law enforcement is, and probably has to be, clandestine. FRT is also clandestine in the sense that it leads to unexpected outcomes of re-identification. Hence users need protected, private spaces where FRT cannot be used.

  • This post first appeared on the Oxford University Blog
  • The article was co-authored by Niloufer Selvadurai, associate professor at Macquarie University

About the author

Julia Hörnle is professor of internet law at Queen Mary University of London and Managing Editor of the International Journal of Law and Information Technology. She is an expert in internet law, focusing on regulatory issues, consumer protection, jurisdiction, and online dispute resolution.