Image: People on a crosswalk (Ryoji Iwata via Unsplash)

Coded Bias: A call to activism for a positive digital future

There is a persistent belief (and hope) that AI will rid society of discrimination and personal biases, and create a fair and just world. A friend of mine recently described his love for machine learning. He enthused about the beauty of data: free of value judgement, free of ideology, an entity pure and neutral.
He couldn’t be further from the truth.

An algorithm is more than just data. Or rather: data is not neutral. Ideology, personal experience and biases come into play at every step of its creation. That artificial intelligence is biased and produces discriminatory outcomes for marginalised groups is indisputable. We need greater public awareness of those dangers, and we need to tread very carefully when it comes to applying these technologies: a message taken up by the trending documentary Coded Bias (available on Netflix). It outlines just how problematic AI can be, with particular focus on one application: facial recognition. Badass female data scientists and grassroots organisations bring to the public consciousness that the technology is being rolled out without guidelines, legal frameworks or supervision. With artificial intelligence at the base of cutting-edge surveillance technology, we are facing the civil rights struggle of a generation. Researcher Joy Buolamwini issues a call to action and warns that apathy and a feeling of powerlessness are society’s biggest enemies when it comes to averting the social risks of digitalisation.

Facial recognition, a wind of change?

Coded Bias lays out how the technology is used in policing and in monitoring poor communities. It shows streets routinely yet covertly scanned by the London Metropolitan Police, and people who try to hide their faces automatically flagged as highly suspicious and even issued fines, for reasons left unclear. If you have nothing to hide, there’s nothing to worry about. Right?
Chances are that if your face is compared against Interpol’s watchlist database, there will be a match, supposedly with a match confidence of over 90%. I make the wild assumption that most readers of this blog are not internationally wanted murderers or terrorists. The UK-based civil liberties group Big Brother Watch ran a freedom of information campaign and found that 98% of those matches are in fact incorrect. Police forces around the world have rolled out this technology without a legal basis, framework or any oversight.
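
How can a system that reports over 90% match confidence still be wrong 98% of the time? The base rate does the damage: almost nobody walking past the camera is actually on a watchlist, so even a small false-match rate swamps the handful of genuine matches. The numbers below are purely illustrative, not taken from the film or from Big Brother Watch, but this minimal sketch shows how the two statistics can coexist.

```python
# Back-of-the-envelope illustration (all numbers hypothetical):
# a seemingly accurate face-matching system still produces mostly false
# alerts when almost nobody in the scanned crowd is on the watchlist.

crowd_size = 100_000       # faces scanned in a day (hypothetical)
on_watchlist = 10          # people in the crowd who really are listed (hypothetical)
true_match_rate = 0.90     # chance a listed person is correctly flagged (hypothetical)
false_match_rate = 0.005   # chance an innocent passer-by is wrongly flagged (hypothetical)

true_alerts = on_watchlist * true_match_rate
false_alerts = (crowd_size - on_watchlist) * false_match_rate
share_wrong = false_alerts / (true_alerts + false_alerts)

print(f"Alerts raised: {true_alerts + false_alerts:.0f}")
print(f"Share of alerts that are wrong: {share_wrong:.0%}")  # roughly 98% with these numbers
```

With these invented inputs the system raises about 500 alerts, of which only nine point at people actually on the list: high per-match "confidence" and an overwhelmingly wrong alert stream are not a contradiction.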

The documentary takes a deep dive into how facial recognition technology is gaining wider application, while studies repeatedly find that it is not fit for purpose: the current technology is rather bad at recognising women and darker skin tones. Joy Buolamwini, who first raised the alarm on this inadequacy, was initially ridiculed, then discredited, and with time (and a good portion of social media outrage) became a leading voice in the discussion on banning the technology altogether. IBM acted on her findings, and its tools are now far more accurate across a diversity of faces: accuracy for lighter-skinned female faces went from 92.9% to 100%, for darker-skinned males from 88% to 98%, and for darker-skinned females from 65.3% to 96.5%. But do we even want this technology to be more accurate, or to exist at all?
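
Gaps like these are easy to miss if a system is only judged by one aggregate accuracy number. Audits of the kind Buolamwini carried out report accuracy separately per demographic group; the sketch below illustrates that idea with entirely made-up audit records, not real benchmark data.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, prediction_correct).
# In a real audit the groups come from annotated benchmark images and
# correctness from the classifier's actual output.
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("lighter_female", True), ("lighter_female", True), ("lighter_female", False),
    ("darker_male", True), ("darker_male", False), ("darker_male", True),
    ("darker_female", False), ("darker_female", False), ("darker_female", True),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / len(results)
print(f"Overall accuracy: {overall:.0%}")  # looks respectable in aggregate
for group in totals:
    # the disparity only becomes visible once results are disaggregated
    print(f"{group}: {correct[group] / totals[group]:.0%}")
```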

First, we must think about where facial recognition is used. There are certainly applications where facial recognition can serve society, though what counts as a positive use may depend on where you stand politically. Yet at what human cost? More accuracy and perfect classification enable surveillance of specific social groups. The social credit score in China has caused many a dystopian shudder in western media, yet facial recognition and algorithmic scoring come into play in western societies as well; the organisations behind it are just less open about it. Facebook has filed a patent that would use its database of 2.6 billion faces for a tool to give your face a trustworthiness score or measure your “mood” when you enter a shop. Scoring individuals happens all the time: algorithms determine the ads we are shown, the prices we get when shopping online, the information we consume, our credit limit or how likely it is that we are profiled as a criminal. “The key difference between the United States and China”, says futurist Amy Webb, “is that China is more transparent about it.”

Do we live in a lawless wild west, where organisations can apply new technologies any way they want, without oversight or limits? In June 2020 the scene changed. Black Lives Matter protests made visible the problems that emerge when racist policing is coupled with high-tech surveillance. Scientists spoke out in the US Congress and, following San Francisco’s example, a number of US cities have put a ban or moratorium on police use of facial recognition. A bill to ban its use by federal law enforcement has also been introduced.
In Big Tech, IBM has disrupted its own business model, stopping research on facial recognition and no longer offering these services. Amazon introduced a year-long pause, but this runs out soon, in June 2021.

The harms of digital technologies exacerbate discrimination and further disadvantage marginalised groups, and so far there is a lack of government guidelines and oversight. The EU Commission is aware of the bias issues. Yet when it comes to AI, “high-risk applications” take centre stage in the debate, high risk meaning serious harm to life and bodily integrity, as might occur through self-driving cars. Application and context matter, though, and the distinction between high risk and not is problematic. The recent Artificial Intelligence Act proposal, a 108-page document, lays down a legal framework that attempts to regulate an emerging technology before it becomes mainstream. A first of its kind, the draft sets rules for AI applications beyond self-driving cars: scoring exams, mortgages, hiring decisions. Live facial recognition in public spaces is to be banned altogether, with some exceptions, namely national security. However, situations where your face is wrongly matched, or where you are excluded from participation in society, from economic opportunities or from the chance of a better livelihood, receive far less attention. The draft is open to a lot of interpretation and much is left to the discretion of companies and tech developers. Civil society groups say the policy proposal needs to go further and establish firm boundaries and red lines of what is acceptable.

Beyond outcomes

Coded Bias offers a hopeful message of change and a shift in the distribution of power: a retelling of David against Big Tech, only that David in this case is a black, female scientist representing groups whose voices are usually ignored. The film’s focus, however, lies strongly on the negative consequences of AI and only lightly touches on some other important questions. To understand the power dynamics behind negative outcomes of AI applications, we need to ask ourselves three questions: What is designed? (And which applications of the technology are even thought of?) Who is it designed for? And how is it designed?

What is being designed?
Facial recognition software currently caters to private and public surveillance efforts. Law enforcement, secret services, the military and private security systems try to sell us the fallacious logic that more surveillance will lead to more safety. That betrays a stunning lack of imagination about how AI and facial recognition could be used. The tech elite proclaims that there is only one economic model, surveillance capitalism, and that it is the only possible path to the public good. Alternative uses are not much discussed. For example, AI-based facial recognition supports the ICRC’s efforts in restoring family links, making humanitarian work better, faster and more effective. Reuniting loved ones displaced by conflict or natural disasters is a core role of the Red Cross Red Crescent Movement and one of its oldest activities; the website Trace the Face has revolutionised this work.

Who is it being designed for?
With a relatively homogeneous tech workforce, the question arises: whose needs are catered for by current technological advancements and research, and whose needs are left out? Technology surveils the already highly surveilled: poor, marginalised communities, and exclusion from mainstream society is traditionally (in most countries) defined along ethnic lines. This makes it all the more problematic that the technology works so badly for people of colour. A darker side of machine learning is deepfakes. Their danger to democracy and potential to spread misinformation are obvious. But these tools were first and foremost developed to undress women: an estimated 90-95% of deepfakes are used for revenge porn, as an instrument of gendered power and control.

How is it designed?
So how does it come to that? The potential for bias enters machine learning at every step of its creation: the data selected, how they are categorised, the labels used, the definitions given, for whom the technology is developed and how it is applied. At each step the question arises: who has been left out? The data that exists is almost never as thorough and representative as one might imagine. It may be rich in data on one specific demographic (the usual suspect: white and male) but inadequate and sparse when it comes to anyone else. Make sure to check out Invisible Women by Caroline Criado Perez on the gender data gap and the critical consequences it has for the wellbeing and survival of women.
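
One concrete way this plays out at the very first step, the data: a training set can be “big” overall and still barely cover some groups, which is exactly when per-group error rates diverge. A minimal, hypothetical sketch of the kind of representation check a team could run before training (the group labels and counts are invented for illustration):

```python
from collections import Counter

# Hypothetical demographic labels attached to a face dataset's images.
# Real datasets would need consented, carefully defined annotations.
labels = ["lighter_male"] * 7000 + ["lighter_female"] * 2000 \
       + ["darker_male"] * 700 + ["darker_female"] * 300

counts = Counter(labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:15s} {n:6d} images  ({n / total:.1%} of the data)")

# A model trained on this skew has seen over 20x more lighter male faces
# than darker female faces: "big data" that is still thin for some groups.
```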

Dr Alexa Hagerty, from the University of Cambridge, states that “…we are beginning to realise we are not really ‘users’ of technology, we are citizens in a world being deeply shaped by technology, so we need to have the same kind of democratic, citizen-based input on these technologies as we have on other important things in societies…”. If an automated decision impacts lives, we need to be able to ask “how did it come to that conclusion?” With AI, knowledge has been concentrated in the hands of a few people, and with it power. A place at the table matters, because this technology affects all of us. Diverse teams help, yet among leading AI researchers fewer than 12% are women. At least half the genius of society is missing from the equation. The onus should also be on tech companies to become welcoming, creative and safe spaces. Further, diversity goes beyond intersectional lines of gender and race to include professional background: people who know how discrimination works in society.

If disengagement with the social implications of technology is our enemy, as the film states, then the solution is public participation. The debate about where we want our societies to go with new technologies needs to be democratised. Activism starts with recognising a problem and talking about it. The communities most affected by surveillance technology are poorer, feminised, ethnically diverse and include people with disabilities. Yet most people outside the tech bubble do not know what machine learning is or understand how AI works. Basic digital and AI literacy classes for all can help non-techy people take part in this societal debate; a project from Finland, open to the world, is a great place to start.

We at ethix aim to stimulate debate, transmit understanding and involve people who tend to see digitalisation as something that happens to them, not something they can actively shape according to their needs. Through our event series “Tech & Society Breakfast” we aim to build a bridge between academia, business and a broader public for transdisciplinary discussions. DigitalLabor On Tour, a project brought to life together with the think tank Dezentrum, takes the debate on digital transformation outside the bubble and into villages all around Switzerland. Because the digital future is something to be negotiated with the inclusion of everyone.

Coded Bias is more than a film; it is a growing movement to raise awareness and gather political momentum for guardrails and policies for AI applications. The documentary has triggered debate: it has successfully brought a very serious topic outside of the usual ethical tech bubble, onto the front pages of mainstream newspapers and into the minds of the many.