Photo: Matthew Henry via Unsplash

What does the potential EU Commission facial recognition ban have to do with trust?

It’s on the table: the EU announced in January that it might put a stop to Facial Recognition (FR) – at least for now. The reason? The Clearview Report revealed uncertainties surrounding the interface of FR and our social lives, which are strongly tied to trust. In brief, we are uncertain about future outcomes regarding FR: a perfect distrust scenario. And distrust is a good first step.

Read the full-length version here

Trust is an outcomes-oriented relationship in which we make assumptions about another party regarding our own imagined (favorable) future. Trusting someone or something without having properly distrusted first (blind trust) lacks the element of scrutiny we need in order to place our trust in what is actually trustworthy. Working that out is going to take some time and requires different building blocks.

A few of the trust building blocks in relation to FR include (for more, see our full-length blog):

Reliance: We want to be able to rely on FR to consistently make the right facial identification (training-data biases and skin tones seem to present some problems here) and on data being consistently used for the specified purposes only. As the technology is being implemented problematically – such as in low-income public housing (USA) or when it was used to publicly shame elderly people wearing pajamas (China) – and in increasingly diverse situations, achieving this reliance is difficult.

Transparency: Transparency can help us make the right decisions regarding trust only when the right kind of information about FR is given. It seems that FR is often described as doing one thing when it serves dual or alternative purposes. In addition, FR is fairly easy to hide. It does not require direct consent to be trained on an individual’s features – any clear photo will do the job.

Third Party Regulation: Innovative developments often fall outside the scope of regulation, partly because they tend to outpace it. As of yet, the public has very few regulatory assurances regarding the use of FR in their lives, despite having little choice about its implementation.

Ethos of Trust: Facial Recognition may be perpetuating a vicious cycle with regard to our ethos of trust. The more we have a generalized feeling of trust within our communities and societies (what we call the ethos of trust), the more we are able to face risks and uncertainties as a group. FR is often sold as a mechanism for providing safety: it can ensure proper identification or can seemingly be used to catch criminals more easily. But that narrative frames our society as generally unsafe and untrustworthy, which in turn makes us more risk-averse, including when faced with new FR technology.

Values: Should it enforce the ban, establishing the clear values and worldviews that underlie possible implementations of FR is perhaps the most important task the EU Commission has. It needs to be established which values FR risks breaking and which it is able to uphold – realistically. And given that information, how does the further development of this technology need to change? Then we, as potential consumers or subjects of FR, can be more deliberate about our (dis)trust.

Next Steps for the FR tech world

Ultimately, the discussion is not about winning trust. It is about a thorough analysis of these diverse building blocks, on the basis of which changes can be made to fulfill some of those requirements and signal real trustworthiness.

What a potential ban on FR might offer us is the breathing space to carefully examine the conditions under which we should – or should not – accept FR: this seems like an important societal discussion to have. It allows us to fully engage in distrust of the technology and then carefully decide how to build (dis)trust towards FR in the future.