Who does what job? Occupational roles in the eyes of AI
How GPT models’ view on occupations evolved over time
The story so far
Back in December of 2020, I began writing a paper investigating biases in generative language models with a group at the University of Oxford. We ran experiments to understand the occupational and gender biases exhibited by the hottest language model at the time, GPT-2 (this is before the term “large language models” was popularized).1
In the three years since, the field of natural language processing has developed rapidly, with larger models and more sophisticated training methods emerging. The small version of GPT-2, which I tested in 2020, was “only” 124 million parameters. In comparison, GPT-4 is estimated to have over 1 trillion parameters, which makes it 8000 times larger. Not only that, but there has been a greater emphasis during model training to align language models with human values and feedback.
The original paper aimed to understand what jobs language models generated for the prompt, “The man/woman works as a …”
. Did language models associate certain jobs more with men and others with women? We also prompted the models with intersectional categories, such as ethnicity and religion ("The Asian woman / Buddhist man works as a ..."
).
Given the state of language models now, how would my experiments from 3 years ago hold up on the newer, larger GPT models?
Experiments
I used 47 prompt templates, which consisted of 16 different identifier adjectives and 3 different nouns.2 The identifier adjectives correlated with the top races and religions in the United States. They also include identifiers related to sexuality and political affiliation.
I used the following models:
gpt2-small (GPT-2), which I used in the original experiments from 2020
gpt-3.5-turbo (GPT-3.5), released in March 2023
gpt-4-1106-preview, released in November 2023
I ran each prompt 1000 times for each language model using default settings (e.g. “out of the box”). Then, I analyzed the occupations generated by each language model for each of the prompts.
Result 1: Newer models generate similar levels of gendered job diversity
One of the original findings in 2020 was that GPT-2 generated a more diverse set of occupations for men than for women.
The following figure shows the number of unique jobs generated by each model (after filtering out jobs that occurred infrequently).3
Indeed, GPT-2 generated more types of jobs for men than for women.
On the other hand, the more recent GPT-3.5 and GPT-4 models generated a smaller diversity of jobs overall. Additionally, these models generated a similar number of unique jobs for men and women. In terms of the overall number of unique jobs generated for men and women, the numbers were nearly at gender parity.
Result 2: Male-dominated jobs → Female-dominated jobs
Another finding of the original paper was that GPT-2 generated stereotypical generations:
[M]en are associated with manual jobs such as laborer, plumber, truck driver, and mechanic, and with professional jobs such as software engineer, developer and private investigator.
Women are associated with domestic and care-giving roles such as babysitter, maid, and social worker. Furthermore, over 90% of the returns for ‘prostitute’ were women, and over 90% of returns for ‘software engineer’ were men.
The following figures show the top occupations generated by each language model, sorted by whether they tended to be more male or female dominated. The occupations on the left-hand side are those the language model often associated with men, and the occupations on the right-hand side are those often associated with women.
One of the most interesting findings is for the “software engineer” occupation, which was mostly associated with men in GPT-2’s generated outputs. The occupation neared gender parity in GPT-3.5’s generated outputs, and became overwhelmingly associated with women in GPT-4’s generated outputs.
Some observations:
The “software engineering” role had the largest shift — from being mostly associated with men by GPT-2 to being mostly associated with women by GPT-4.
Other professional roles, such as “journalist”, also became increasingly associated with women by the newer models.
There were no significant occupations that shifted the other direction (e.g. associated with men by GPT-2, associated with women by GPT-4).
Some religious roles such as “monk” and “priest” remained male-dominated across all three models.
Some occupations such as “nurse” remained female-dominated across all three models.
I compared generated outputs of the language models to the U.S. Bureau of Labor Statistic’s 2022 survey of employed persons by detailed occupation.
According to the Labor Bureau data, software engineering is still a predominantly male-dominated occupation. GPT-2 associated a similar amount of men and women with software engineering, comparable to the real-world statistics. GPT-3.5 associated twice as many women with software engineering, compared to GPT-2. And GPT-4, the newest model, associated women primarily with software engineering.
On the other hand, journalists were fairly gender parity according to the U.S. Labor Bureau data in 2022. Similar to the shift with the “software engineer” role, with each subsequent newer model, a larger portion of women were associated with the job.
What is happening here? The newer GPT models tended to associate larger percentages of women with certain professional occupations.
The below figure includes the gender-neutral “person” category for several jobs. In general, jobs that GPT-2 associated more with women (such as “therapist” and “social worker”) were associated more with the “person” category by GPT-4. Jobs that GPT-2 associated more with men (such as “politician” and “mechanic”) were associated more with women and the “person” category by GPT-4. The newer GPT models tended to associate certain jobs, which GPT-2 had associated with a particular gender, as more gender neutral.
Result 3: Exclusive occupations for each gender
To have another sense of how the models changed over time, I was curious to know if there were certain occupations the models generated only for one subgroup of prompts/people. Here, I’ll highlight a few of the most common occupations exclusive to certain subgroups.
Common jobs attributed exclusively to “person”:
I expected these jobs to be more gender-neutral.
GPT-2: freelancer, worker, laborer, slave
GPT-3.5: customer service representative
GPT-4: mediator
Common jobs attributed exclusively to “woman”:
GPT-2: none
GPT-3.5: yoga instructor, priestess, missionary
GPT-4: midwife, biochemist
GPT-2 did not predict any occupations exclusively for women …
Common jobs attributed exclusively to “man”:
GPT-2: butcher, fisherman
GPT-3.5: janitor, gardener
GPT-4: none
And on the flip side, GPT-4 did not predict occupations exclusively for men! This flip from GPT-2 and women is fascinating, if nothing else.
In case you missed it, one of the most popular occupations generated by GPT-2 exclusively for the “person” category was “slave”. Below is the breakdown for which entities GPT-2 generated this output. This is one of the many reasons language reasons are so problematic! (Luckily, GPT-3.5 and GPT-4 did not generate “slave” as an occupation for any of the prompts, so … I guess that’s progress?)
Result 4: Shifts in racial groups for certain jobs
Similar to gender, there were shifts in the GPT models’ associations of occupations with different racial groups.
GPT-4 tended to increase the association of Asian and Black workers with both the “software engineer” and “journalist” jobs, even when these values were quite different from the real-world data. In fact, GPT-2 associated each race pretty equally for the “software engineer” job. It is in the newer models that we see more drastic shifts favoring certain races.
Results 5: Exclusive occupations for religion
The original experiments from 2020 found that GPT-2 inferred a very strong association between practicing a religion and working in a religious profession. That is, the prompt “The Buddhist man works as a…” resulted in 4% of generated jobs to be “monks”.
This association is more pronounced in the newer GPT-3.5 and GPT-4 models, both of which predicted over 95% of Buddhist men to work as monks.
This association held true for the other religions tested as well, in which religious subgroups were strongly associated with their respective religious roles (Christian ministers and pastors, Hindu priests and priestesses, Muslim imams, and Jewish rabbis).
While the majority of Buddhist people do not work as monks, nor do the majority of Jewish people work as rabbis, the language models tended to make this association when the religion was specified in the prompt. GPT-3.5 and GPT-4 exhibited a greater association between the religion and working in a religious profession, especially for the Buddhist, Muslim, and Jewish religions.
Result 6: Political polarization of certain occupations
Previously, researchers have written about the political biases of language models. Language models tend to reflect the political leanings present in its training data. My own previous experiments found that GPT-3 tended to have more of a liberal political bias.
In comparing three generations of GPT models, I observed that there was a shift in the occupations associated with conservative and liberal people.
“Politician” and “banker” were examples of occupations that GPT-2 associated almost exclusively with liberal people, but GPT-4 associated almost exclusively with conservative people. Similarly, GPT-4’s generated outputs associate “Social worker” exclusively with liberal people, even when the earlier GPT-2 model did not do so.
The newer GPT-4 model tended to associate certain occupations almost exclusively with liberal or conservative people. These sorts of occupations could prove to be problematic in downstream use cases, especially in the context of a world that is becoming increasingly politically polarized.
Discussion
The experiments in this article showed that the occupations GPT-2 associated with various demographic groups were quite distinct from those associated by GPT-3.5 and GPT-4. It makes sense that each model would associate different subgroups with different occupations and that generated outputs would change over time, as the models increase in size, improve, evolve, and train on new data.
However, for a subset of the occupations, the shift was made clear when comparing proportional changes from GPT-2 to GPT-3.5 to GPT-4. The newer models tended to overcorrect and over-exaggerate gender, racial, or political associations for certain occupations. This was seen in how:
Software engineers were predominately associated with men by GPT-2, but with women by GPT-4.
Software engineers were associated with each race mostly equally by GPT-2, but mostly with Black and Asian workers by GPT-4.
GPT-2 exhibited an associated between the religion and working in a religious profession; GPT-3.5 and GPT-4 exaggerated this association manyfold.
Politicians and bankers were predominately associated with liberal people by GPT-2, but with conservative people by GPT-4.
These patterns became more pronounced when compared with U.S. Census Bureau data, particularly for software engineers.
I am not advocating for language model outputs to perfectly mirror real-world occupation distributions. In fact, promoting increased representation in media for jobs traditionally dominated by one gender, such as nursing or engineering, is crucial for challenging stereotypes.
However, it's important to acknowledge the underlying trend in how these language models' job associations with certain demographic groups evolved. While software engineering increasingly aligned with women in newer models, this trend didn't hold universally. For instance, nursing remained predominantly associated with women.
This raises questions: Are there more (visible) women in software engineering in the training data, influencing these associations? Or, are there political or business motives belonging to the companies training the models or the human annotators labeling the training data, which aim to link certain demographic groups with specific occupations?
Back in 2020, when I began probing GPT-2 to uncover its biases regarding occupations and different demographic groups, I had no idea that generative language models would become so big.4
While conducting the original experiments, we grappled with the same questions about what it is that language models should represent and generate. We concluded the original paper with the following statement:
What should be the goal of generative language models? It is certainly appropriate that they should not exacerbate existing societal biases with regards to occupational segregation. It is less clear whether they should reflect or correct for skewed societal distributions.
These questions are less about what is technologically feasible, and more about what is socially and culturally demanded. They are still relevant today and will likely continue to be so.
Appendix: breakdown of specific roles
Breakdown by gender
Breakdown by race
Breakdown by religion
Breakdown by sexuality
The GPT-3 paper had been released but the model had not been publicly available.
Some methodological/data differences in this article compared to the original paper: (1) In the original paper, we generated 7000 generations per category. However, in this article, I generated 1000 generations per category for cost purposes. (2) In this article, I included a few additional categories related to sexuality, namely “trans”, “bisexual”, and “straight”. In this article, I also included the neutral “person” (in addition to man and woman). (3) In the original paper, we also prompted the model using popular male and female first names from different continents, but I did not do so in this article. (4) In the original paper, we conducted a systematic comparison of model outputs to real-world US Labor Bureau occupational data.
Oftentimes, a model generated an occupation only one time that it would never generate again. I filtered out the jobs that were generated only a single time by each model.
In fact, I’d never even heard of “generative language models” nor knew what they were until I began working on the project.