Image: Block letters (Amador Loureiro)

An Opinion Written in Code? AI and the Distribution of Power

It is commonly believed that discrimination and stereotyping produced by algorithms merely mirror the injustices of the real world. It is a bit more complicated than that: algorithms reinforce these injustices.

This happens because of the way machines learn. Much like us, they make rough generalisations from examples. Yet, unlike us, machines are masters at finding patterns, which they turn into hard-coded rules. So extra attention needs to be paid to what knowledge is fed to the machine, and to patterns that even the data scientists themselves are not aware of.
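To make this concrete, here is a minimal, hypothetical sketch in Python of how a model turns a skewed set of examples into an explicit rule. The scikit-learn classifier and the toy "hiring" data are invented for illustration only, not taken from any real system.

from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Toy, invented "hiring" examples: [years_of_experience, gender (0 = male, 1 = female)].
# The historical decisions happen to favour the male candidates.
X = np.array([[5, 0], [6, 0], [2, 0], [7, 1], [8, 1], [3, 1]])
y = np.array([1, 1, 1, 0, 0, 0])  # past decision (1 = hired)

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(model, feature_names=["experience", "gender"]))
# The learned rule splits on "gender" rather than "experience":
# the pattern hiding in the examples has become an explicit, hard-coded rule.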

Language and images – an expression of power

Everyday sexism and racism get encoded (yes, even in 2021). «An algorithm is an opinion expressed in code», says Ivana Bartoletti, founder of the Women Leading in AI network. «If it’s mostly men developing the algorithm, then of course the results will be biased…» To illustrate this, have a look at how Google translates a Hungarian text. Keep in mind that Hungarian has no gendered pronouns, so Google chooses for you. Here is what the world looks like to Google:

Image: Screenshot of Google Translate’s English rendering of a gender-neutral Hungarian text.

Language is a tool and an expression of power. It shapes how we see and interpret the world. The way language is translated into code, and the definitions chosen for specific concepts, illustrate an imbalance of power. This is where ConceptNet comes into play: a semantic network used for natural language understanding, a part of Artificial Intelligence. Yet this library of definitions, often used to teach machines, seems to have a bit of a problem: “man” is defined as a male person associated with “respect” and “honesty”. A woman, according to ConceptNet, “has a baby” and “wants to be loved and wants a man”. “Black woman” has no entry at all, which shows how tech tends to make women of colour invisible and non-existent. In her book “Algorithms of Oppression”, researcher Dr. Safiya Noble compellingly outlines how search engines reinforce racism and are particularly biased against black women.
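For readers who want to look for themselves, here is a minimal Python sketch of querying ConceptNet’s public API for the associations it stores for a concept. The helper name and the exact edges returned are illustrative; results vary over time.

import requests

def related_terms(concept, limit=10):
    # Query api.conceptnet.io for edges attached to an English concept.
    url = f"http://api.conceptnet.io/c/en/{concept}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    return [(edge["rel"]["label"], edge["end"]["label"]) for edge in edges]

for term in ("man", "woman", "black_woman"):
    print(term, related_terms(term))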

These examples offer us a glimpse into how Artificial Intelligences make sense of the world. AIs are often fed with big data from the internet. For natural language understanding, for example, text analyses look at which words appear close to which concepts (a minimal sketch of such a measurement follows below). The problem is that the vast amount of data available on the internet is far from a nuanced discourse on race, gender and other issues of social justice.

In the early 90s there was a feminist hope that cyberspace would be a place where social categories no longer mattered; the internet, it was hoped, would become a virtual utopia free of discrimination. The opposite turned out to be the case. Outdated and highly problematic world views are overrepresented, and stereotypes are thus reinforced. The saying “the internet is for porn” does not come out of nowhere, and it particularly shapes how image-based AIs function. Hence the radically different Google image results for “school boy” compared to “school girl”. American politician Alexandria Ocasio-Cortez recently made headlines for all the wrong reasons: an AI that dresses people put her in a bikini, while men got to wear suits.
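As promised above, here is a minimal sketch of measuring which words stand closer to which concepts, assuming pretrained GloVe vectors downloaded through gensim. The chosen word pairs are illustrative, and the exact numbers depend on the embedding used.

import gensim.downloader as api

# Small pretrained word embeddings (learned from Wikipedia and newswire text).
vectors = api.load("glove-wiki-gigaword-50")

pairs = [
    ("man", "engineer"), ("woman", "engineer"),
    ("man", "homemaker"), ("woman", "homemaker"),
]
for a, b in pairs:
    print(f"{a:>6} ~ {b:<10} cosine similarity: {vectors.similarity(a, b):.3f}")

# Systematic gaps in these similarities are one common way in which
# stereotyped associations absorbed from the training corpus become visible.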

There is cause for concern not just about the data fed into the machine and the results it produces, but also about the very way machines are designed. What values and social hierarchies are embedded into the digital tools that we use? Ever noticed that virtual assistants always have a female voice? Machines are often feminised (and also sexualised). Those meant to serve us (Siri, Alexa, Google Home) are by default given female voices. Sociologists are alarmed at how this teaches us that the role of people who are gendered female is to respond on demand. Funnily enough, “clever” AIs solving complex problems and those giving us instructions tend to be heard with a masculine voice. It makes us wonder why robots are gendered in the first place, since it goes against the principle of robotics not to deceive. There are strong ethical arguments as to why machines should not be gendered, and there are also innovative, feminist alternatives: check out the first genderless digital voice.

Concentration of power

«How do we make this a far deeper democratic conversation around how these systems are already influencing the lives of billions of people in primarily unaccountable ways that live outside of regulation and democratic oversight?», asks leading researcher Dr. Kate Crawford in a recent interview.
Algorithms are given the power to decide over people’s lives and to make decisions on how resources are distributed: be it jobs, housing, welfare benefits, credit limits, or terms of probation in the criminal justice system. Women and people of colour are often put at a disadvantage even when the category of gender or race has purposefully been taken out of the equation. In the US, a zip code indicates your race, and recruiting tools have taken markers of a candidate being female (women’s sports teams, university clubs, etc.) as criteria to sort them out as ineligible. This year, a study laid out that women are not shown certain job ads on Facebook, depending on the gender make-up of the company’s employees. In the case of Netflix, which employs more female software engineers, the algorithm is skewed in their favour.
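A minimal, fully synthetic sketch of that proxy effect, assuming scikit-learn and invented data: even with the protected attribute excluded from the features, a correlated stand-in (here a made-up "zip code") lets the model reproduce the historical disparity.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                    # protected attribute (never given to the model)
zip_code = np.where(rng.random(n) < 0.9,         # proxy: strongly correlated with group
                    group, 1 - group)
income = rng.normal(50 + 10 * group, 5, n)       # historical disadvantage baked into the data
approved = (income + rng.normal(0, 5, n)) > 55   # past decisions the model learns from

X = np.column_stack([zip_code, income])          # note: "group" itself is excluded
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
# The approval rates still differ by group, because the proxy feature and the
# historical outcomes carry the information the dropped column held.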

«…terms like ethics and AI for good have been so completely denatured of any actual meaning», says Crawford. Her excellent new book “Atlas of AI” aims to pull aside the curtain and look at who is running the levers of AI systems. Going to the very source of an AI, she also raises awareness of the material impact of these seemingly purely virtual systems on the planet. In her view, we need to move away from focusing solely on abstract ethical principles and start talking about power. «We’re looking at a profound concentration of power into extraordinarily few hands. You’d really have to go back to the early days of the railways to see another industry that is so concentrated, and now you could even say that tech has overtaken that.»

The image of the quintessential capitalist – cigar-smoking rail tycoons – could not seem further from Steve Jobs-type tech gurus. Big Tech prides itself on being ethically responsible and performs an outward commitment to diversity and to the societal impact of its products. Companies such as Apple, Google and Microsoft all have research teams in ethics and AI. Are they more than mere lip service, neatly dressed in abstract ethical principles without clear definition or practical implementation? A PR stunt?
Google, for example, is investing heavily in natural language processing for large AI models. However, it shows a zero-tolerance policy towards criticism – even when it comes from its own ethics team. In the draft version of a paper, star researcher Dr. Timnit Gebru (one of the very few black women at Google) warned of the environmental and social dangers of large language models. They are not fit for the cultural nuances and expressions that movements such as Black Lives Matter or Me Too have worked hard to reframe and reclaim. These models promote racist, sexist and abusive content, and with the right prompts can encourage genocide, self-harm and child sexual abuse. Following the scandal of the company’s reaction to her unpublished paper, Gebru was forced out of the company at the end of last year, and Google subsequently also fired a co-author of the paper and founder of the AI ethics unit, Margaret Mitchell. This calls into question just how much diversity, ethics and the societal impact of their products really matter to Big Tech. Having demonstrated the consequences of voicing criticism, Google announced in May that the model behind this story, LaMDA, will be integrated into Google’s search engine. Critics call it the beginning of mass-produced misinformation. Data ethics in Silicon Valley still has a long way to go.

Striving for AI data fairness

Realities as presented by machines are more than mere reflections of the world. Despite the popular belief that machines are neutral, they have been shown to encode the unequal distribution of power within our society. The possibility of checking and controlling for bias is hidden within the black box of intellectual property rights. Bias itself becomes invisible and harder to trace, which makes it all the more harmful for already marginalised groups. For AI to contribute to balancing inequality in society, ethical oversight and monitoring are needed. Training algorithms need careful crafting and an awareness of what to do (and what not to do): avoid skewed datasets, evaluate them carefully, think about your categories, think of representation and diversity, and do rigorous and ongoing testing. Make sure to check out our step-by-step AI Fairness Guide and other tools.
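As one concrete illustration of what “rigorous and ongoing testing” can mean in practice, here is a minimal Python sketch that compares a model’s selection rates across groups (a demographic parity check). The data, group labels and threshold are invented; libraries such as fairlearn offer ready-made versions of this kind of metric.

import numpy as np

def selection_rates(predictions, groups):
    # Share of positive decisions per group.
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

# Hypothetical model outputs and group labels for an audit batch.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups      = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = selection_rates(predictions, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity difference: {gap:.2f}")

# A check like this can run after every retraining; a gap above an agreed
# threshold (say 0.1) should block deployment and trigger a review.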