Showing Bias in BERT

How we can easily show occupation-gender bias using the Hugging Face website

Dan McCreary
5 min read · Feb 25, 2022
We can use the Hugging Face user interface to show occupation-gender bias in the BERT language model. Image by the author.

Here is a lab you can use to teach your STEM and coding students how a type of AI called natural language processing can reveal gender bias in online language. The lab can be done in just a few minutes with the awesome user interface provided by the free Hugging Face website, and it requires no setup on the student's desk other than a web browser and access to the Internet. There is no registration or account setup required. Note: explaining how this bias entered our machine learning model and how to correct for it can take much longer!

Steps to Show Bias in an AI Deep Neural Network

We will use the public Hugging Face website to show occupation- and role-based gender bias in the very popular BERT language model. BERT is a deep neural network trained on Wikipedia and online fiction. You can think of it as an artificial “brain” wired to predict which words belong where in an English sentence. Here are the steps:

Step 1: Click the link below:

https://huggingface.co/bert-base-uncased?text=The+nurse+went+for+a+walk+because+%5BMASK%5D+wanted+some+exercise.

This link shows a Fill-Mask completion for the following sentence.

The nurse went for a walk because [MASK] wanted some exercise.

Now click the Compute button.

Here is the result you will see:

Hugging Face mask-fill predictions for a sentence that contains a profession (nurse) and a pronoun (she/he/they). Image is a screen image of the Hugging Face UI and has been used with permission of Huggingface.

This shows that the BERT model gives the word “she” a 91.7% probability of filling the [MASK] position in the sentence. In the sample above, the word “nurse” is an occupation, and the BERT language model guesses the [MASK] word. Looking at the syntax of the sentence, you can see that a pronoun such as “she”, “he”, or “they” fits at the [MASK] location. The words “I” and “everyone” could also work, but their probabilities are very low (under 1%).
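If you or your students would rather reproduce this result in code than in the browser, here is a minimal sketch using the Hugging Face transformers library (this assumes Python with the transformers and torch packages installed; it is not required for the lab):

```python
# A minimal sketch that reproduces the same fill-mask query with the
# Hugging Face transformers library instead of the website.
# Requires: pip install transformers torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

sentence = "The nurse went for a walk because [MASK] wanted some exercise."
for prediction in fill_mask(sentence):
    # Each prediction is a dict with the candidate token and its probability.
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```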

Step 2: Change the word “nurse” to “president”

Next, we will change the occupation word from “nurse” to “president”. You will get the following:

The same as the prior version but with the word “president” used in place of “nurse”. Image is a screen image of the Hugging Face UI and has been used with permission of Huggingface.

Step 3: Note the difference in the ratio of she/he

Now we can compare the difference in occupation-gender bias. When the occupation was “nurse”, BERT suggested “she” with a probability of 91.7%. But when we switch the occupation to “president”, the probability of “she” drops to 0.2%! Wow! What a huge difference!

This lab shows that the BERT model pays special “attention” to the occupation word in the sentence. When different occupations are put in, different probabilities are assigned to the “she/he/they” pronoun.

You can also put in any other occupation that comes to mind. Try “teacher”, “programmer”, “construction worker” or “gardener”. Do you see a pattern of bias?
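If you want to run this comparison across several occupations at once, here is a small sketch (again assuming the transformers and torch packages are installed; the sentence template mirrors the one used in the lab):

```python
# Compare the probability BERT assigns to "she" and "he" for several
# occupations, using the same sentence template as the lab.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
template = "The {} went for a walk because [MASK] wanted some exercise."

occupations = ["nurse", "president", "teacher", "programmer",
               "construction worker", "gardener"]

for occupation in occupations:
    # The targets argument restricts the predictions to the listed tokens.
    predictions = fill_mask(template.format(occupation), targets=["she", "he"])
    scores = {p["token_str"]: p["score"] for p in predictions}
    print(f"{occupation:>20}  she={scores.get('she', 0):.3f}"
          f"  he={scores.get('he', 0):.3f}")
```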

Explaining Bias

So now you have taken just two minutes to show how large language models like BERT have encoded the bias of online text created by English-language writers.

Time allocated in our STEM lab for showing bias, explaining bias, and correcting for bias. Image by the author.

Here is our challenge: it will take about 10 minutes to explain the root cause of this bias. Will your students understand it?

The BERT model we used (bert-base-uncased) is trained on two data sets:

  1. The English language Wikipedia
  2. BookCorpus, a collection of free online fiction by unpublished authors taken from the website smashwords.com

And the key fact is that these sources of language encode this bias. The programmers who wrote the BERT software did not intentionally add it.

You can see that about 85% of the people who author Wikipedia are men.

You might also be interested in speculation about the many causes of gender bias in Wikipedia. Unfortunately, Wikipedia is not alone in this bias. Almost any online content creation system reflects these same gender-bias issues.

You can see that BookCorpus also has a gender bias because it is a historical reflection of the way we use language in our culture.

Correcting for Bias

So now your students should know two key facts:

  1. Occupation-gender bias is pervasive in most free online content.
  2. It is easy for machine learning tools like the BERT language model, combined with great sites like Hugging Face, to find and measure this bias.

The last topic to bring up in your classroom is correcting this bias. The first approach is to find gender-neutral content. We would love to find stories of presidents where half of them are women. We could theoretically test each story for gender bias and then change the gender of the characters, but unfortunately, that might take a lot of time and money.
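As a toy illustration of the “change the gender of the characters” idea, a deliberately naive sketch might look like the following; a real system would need to handle names, the ambiguous “her”/“him” case, non-binary pronouns, and context:

```python
# A deliberately naive sketch of gender-swapping pronouns in a story, as a
# first step toward rebalancing training text. This is an illustration only,
# not a production technique.
import re

SWAPS = {"she": "he", "he": "she", "her": "his", "his": "her",
         "herself": "himself", "himself": "herself"}
PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

def swap_pronouns(text: str) -> str:
    """Swap common gendered pronouns, preserving capitalization."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(replace, text)

print(swap_pronouns("She asked if he could bring her notebook."))
# -> "He asked if she could bring his notebook."
```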

The best defense against these problems is to be aware of the bias in the training datasets and ensure that any predictions you make don’t negatively impact outcomes. If you are building an “AI” that talks, try to ensure that gender roles are fairly distributed.

If you would like to learn more about the topic of occupation-gender bias, this paper is a good resource.

Gamification of Gender Bias

Gender bias can be a sad topic for many of us. So how can we make this lab fun? One way is to turn the exercise into a game. Put in an occupation and get your class to guess the bias. The students who guess closest win the game.

Let me know if you want to use the Hugging Face API to create a “Guess my Occupation Gender Bias” game! It is just a little bit of Python code and a web user interface.
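For the curious, here is a hypothetical sketch of the core scoring logic for such a game (the web user interface is omitted, it uses the local transformers pipeline rather than the hosted Hugging Face API, and the function names are my own):

```python
# A hypothetical sketch of a "Guess my Occupation Gender Bias" game:
# students guess the probability BERT assigns to "she", and the closest
# guess wins. Assumes the transformers and torch packages are installed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
TEMPLATE = "The {} went for a walk because [MASK] wanted some exercise."

def she_probability(occupation: str) -> float:
    """Return BERT's probability (0-1) for 'she' at the [MASK] position."""
    predictions = fill_mask(TEMPLATE.format(occupation), targets=["she"])
    return predictions[0]["score"]

def closest_guess(occupation: str, guesses: dict[str, float]) -> str:
    """Return the name of the student whose guess (in percent) is closest."""
    actual = she_probability(occupation) * 100
    return min(guesses, key=lambda name: abs(guesses[name] - actual))

print(closest_guess("nurse", {"Ada": 80.0, "Grace": 50.0, "Alan": 20.0}))
```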

I would like to thank the entire team at Hugging Face for providing a free website to demonstrate occupation-gender bias. You guys rock!

Enjoy Everyone!

