
Technology with bias. Is it possible for Artificial Intelligence to hate me?

Updated: May 13, 2022


While 2020 turned out to be quite an exciting year, 2021 started for me in an unexpected way: with free time. As traveling was off the table, I took the chance to refresh and improve my knowledge of Artificial Intelligence (AI) by following Andrew Ng’s great Coursera class.

This time it was not just a fast read for me; I had the time to try things out and play around. A topic I came across and that got me hooked was bias in AI. It is not an entirely new topic, but I had not spent much time looking into it before. So, in this blog I am sharing that story and some thoughts. I am curious about your thoughts and experience.

Before we start, let's take a quick look at Wikipedia to recall what bias is. Bias is "a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair."

Closed-minded. Prejudicial. Unfair. How could that relate to AI systems? Could such bias amount to a strong emotion such as hate?

Bias easily finds its way into AI models

While doing the Deep Learning Specialization, bias popped up explicitly in the chapter on Natural Language Processing (NLP). Among other things, the chapter covers so-called 'word embeddings' and their applications. In short, word embeddings are a technique that aims at finding a mathematical description for each word of a language. Each word is described by a long list of numbers, called a vector.

Word embeddings are created by using large bodies of text (books, articles, etc.) and running them through an unsupervised learning algorithm. This means that no human is involved in telling the algorithm what a word means or how several words relate to each other, except indirectly through the authors of the texts used for training.
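
To get a feel for how little supervision is involved, here is a minimal sketch that trains word embeddings on a tiny toy corpus with the open-source gensim library. The corpus and all hyperparameters below are made up purely for illustration; real embeddings are trained on millions of sentences.

```python
# Minimal sketch, for illustration only: training word embeddings on a tiny
# toy corpus with gensim. Real models use millions of sentences and tuned
# hyperparameters; the values below are arbitrary assumptions.
from gensim.models import Word2Vec

sentences = [
    ["berlin", "is", "the", "capital", "of", "germany"],
    ["madrid", "is", "the", "capital", "of", "spain"],
    ["the", "doctor", "examined", "the", "patient"],
    ["the", "nurse", "cared", "for", "the", "patient"],
]

# Unsupervised: the model only ever sees which words occur near each other.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

print(model.wv["berlin"])   # a vector of 50 numbers describing 'berlin'
```

Nobody tells the model what 'capital' means; it picks everything up from co-occurrence patterns in the text it is fed.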

Word embeddings are widely used in AI applications that process language, such as translation services. Given the embeddings for two words, a simple use case would be to check the words' relationship, e.g. whether they are synonyms or antonyms.
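
As a rough sketch of such a check, one can compare two word vectors via cosine similarity. Here I use one of the pretrained models offered through gensim's downloader ('glove-wiki-gigaword-100', trained on Wikipedia and newswire text); any other set of word vectors would work the same way.

```python
# Minimal sketch: comparing words via cosine similarity of their embeddings.
# 'glove-wiki-gigaword-100' is one of several pretrained models available via
# gensim's downloader; it is fetched from the internet on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

print(vectors.similarity("good", "great"))    # related words: high similarity
print(vectors.similarity("good", "banana"))   # unrelated words: lower similarity
```

Note that plain cosine similarity captures relatedness rather than meaning in a strict sense; synonyms and antonyms can both score high, so telling them apart needs additional techniques.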

It gets really interesting when using word embeddings to find analogies between words. This often works very well. You can quite literally "ask" such a model about the relationships between words.

> 'Berlin' relates to 'Germany' as 'Madrid' relates to X

  
  X = 'Spain' (best match)

Spain is indeed the best match for the above query. Correct. I found this quite amazing. Remember: all of this works by feeding an AI plain text. No explanations needed. You really just take the whole of Wikipedia, run it through an algorithm, and you're good to go.
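
In case you want to try this yourself, here is a minimal sketch of the query above, again using pretrained GloVe vectors from gensim's downloader. The analogy is answered by simple vector arithmetic.

```python
# Minimal sketch: "'Berlin' relates to 'Germany' as 'Madrid' relates to X",
# answered by vector arithmetic: X ≈ germany - berlin + madrid.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # pretrained word embeddings

result = vectors.most_similar(positive=["germany", "madrid"],
                              negative=["berlin"], topn=1)
print(result)   # with these vectors, 'spain' should come out as the best match
```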

Trying out different terms, I also came across an example which you may have heard of.

> 'man' relates to 'doctor' as 'woman' relates to X
  
  
  X = 'nurse' (best match)

Let's stop right here for a second. When I had read about this beforehand, it did not really touch me much. But when I tried it out myself, it made me feel uncomfortable. Is technology prone to replaying gender stereotypes from the last century?

It turns out that biases are very much present in word embeddings, and they are not limited to gender bias. Biases against foreigners and against homosexuality can just as easily be found, as recent research has shown.

The paper looks into word embeddings trained on the German Wikipedia. As one example, these embeddings associate homosexuality with words such as ‘Corruption’, ‘Violence’ and ‘Adultery’, amongst others. Heterosexuality, on the other hand, was associated with words such as ‘Unserious’, ‘nice’ and ‘fantastic’. In general, the authors concluded that the associations they found ‘[…] comply with historic negative social attitudes against homosexuality’.
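
How do researchers measure such associations? One common approach works along the lines of the Word Embedding Association Test (WEAT). The sketch below shows the basic idea with English GloVe vectors and made-up word lists; the paper mentioned above works on German Wikipedia embeddings with carefully constructed lists, so treat this purely as an illustration of the method, not a reproduction of their results.

```python
# Minimal sketch of a WEAT-style association check: does a target word sit
# closer to 'pleasant' or to 'unpleasant' attribute words? The word lists are
# illustrative assumptions, not the ones used in the paper discussed above.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

pleasant   = ["wonderful", "friendly", "honest", "peace"]
unpleasant = ["corruption", "violence", "crime", "hatred"]

def association(word, attributes):
    """Mean cosine similarity between a word and a list of attribute words."""
    return sum(vectors.similarity(word, a) for a in attributes) / len(attributes)

for target in ["homosexuality", "heterosexuality"]:
    score = association(target, pleasant) - association(target, unpleasant)
    print(f"{target}: {score:+.3f}")   # below zero leans 'unpleasant', above zero 'pleasant'
```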

Bias affects many systems. Bias has different sources. Bias propagates.

What is the effect of these biases? Let's imagine a job application screening system that is not developed with bias in mind: The system might take data from your CV or your social media profiles. It concludes you are a woman, gay, a foreigner or a member of one or more other groups. It runs its findings through word embeddings. And then, zapp, the output is 'refugee', 'corruption' or another term that carries a negative connotation. As the researchers found, these associations might bias other AI components down the line. In our hypothetical example, you might not get the job.

So why is this happening? In the case of word embeddings, this is easy to explain: Word embeddings are created using large bodies of text. Putting it very simply: If you run all English-language material from 1900 to 2021 through the algorithm, your model might end up with a world view from somewhere around 1960. Certainly not 2021.

Word embeddings are not the only example of an algorithm that can easily be biased. More and more examples of AI failures are popping up; there is already a database of AI incidents. The examples range from the UK passport photo checker not recognising women of color to biases in health care algorithms.

The details in these examples vary, but all of them are examples of algorithms that treat certain people in a way one might consider biased and unfair. The reasons might be design flaws, a poor choice of training data or simply the fact that the data reflects real-world bias. With AI technology, existing biases can get automated and amplified.

Companies need to address bias actively throughout the AI lifecycle

It is not a spoiler to say that the answer is not simple and straightforward.

Looking at word embeddings, there is of course research into reducing bias; there is also research outlining the difficulties of bias reduction. In this blog post, Google explains how they are handling bias issues in the context of a concrete application, Google Translate.
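
To give an idea of what bias reduction can look like technically, here is a minimal sketch of one classic idea (neutralisation, in the spirit of Bolukbasi et al.): estimate a bias direction in the embedding space and remove a word's component along it. Real debiasing pipelines are considerably more elaborate, and, as the research mentioned above points out, such projections do not remove bias completely.

```python
# Minimal sketch of neutralisation: remove a word vector's component along an
# estimated 'gender direction'. A single word pair is used here for brevity;
# real methods average over many pairs and handle many more subtleties.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

gender_direction = vectors["he"] - vectors["she"]
gender_direction /= np.linalg.norm(gender_direction)

def neutralise(word):
    """Project out the component of a word vector along the bias direction."""
    v = vectors[word]
    return v - np.dot(v, gender_direction) * gender_direction

for word in ["doctor", "nurse"]:
    before = np.dot(vectors[word], gender_direction)
    after = np.dot(neutralise(word), gender_direction)
    print(f"{word}: projection {before:+.3f} -> {after:+.3f}")   # ends up at ~0
```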

There is some hope that diverse teams can help keep this issue under control. Bringing people with different backgrounds together might help to spot flaws in a system. I think it is plausible that this helps. On the other hand, it is not practical to achieve for every team. Also, how would we know whether a team is 'diverse enough' to spot all biases? We need to make checking for and addressing biases part of every project, regardless of who is working on the team.

Where a poor choice of training data is at the root of an issue, updating the data might just solve it. But what if the data is actually rather representative and simply reflects a 'historic truth', i.e. biases that were or still are present in society? In a recent interview, the Scientific Director of DFKI, Jana Koehler, points out that AI is only as good as the data it gets. In that sense, she argues, a biased AI is only finding patterns and is thus acting as a mirror for our society. She points out that these patterns might be amplified and asks whether such types of applications are actually meaningful.

Companies will have to navigate this space and sometimes take a stance. Being a responsible company will mean ensuring that each individual is treated fairly by the AI models you ship, be it a credit approval system, a medical diagnosis system or just an online picture checker.

Summing up: There's work to do!

I could really only scratch the surface here, but here are two takeaways for today:

  1. Bias in AI is real. The examples mentioned in this post are quite striking, and they are easy to demonstrate once discovered. Remember that a bias might at some point affect only a few people. Maybe just you or me. How do we make sure that technology leaves no one out, even if it is one single person?

  2. The situation is not hopeless, far from it: There are approaches to dealing with bias. However, engineers, lawyers and managers will need to take biases seriously, spot issues, agree on how to tackle them and then spend time and effort to iron them out. Tackling bias does not happen by itself. It needs consideration and concrete action throughout the whole AI lifecycle.

A topic not yet addressed here is regulatory requirements. In fact, the European Union has recently taken a stance on AI by proposing the Artificial Intelligence Act.

For today, let me end by closing out the question: Could AI hate me? I am not aware of open hate since the infamous chatbot. But making sure that AI does not deny you or me access to vital services (such as a passport) is a challenge that all developers of AI need to take up.

It's on us as a business community to be in the driver's seat and solve it. There's more work to do.

This post is also featured on Mario's LinkedIn profile.


(cover picture by Franki Chamaki via unsplash.com)
