A brief overview and discussion of gender bias in AI
For International Women’s Day, I wanted to write a brief article about gender bias in AI.
AI models reflect, and sometimes exaggerate, existing gender biases from the real world. Quantifying the biases present in models is an important step toward properly addressing and mitigating them.
In this article, I showcase a small selection of important work done (and currently being done) to uncover, evaluate, and measure different facets of gender bias in AI models. I also discuss the implications of this work and highlight a number of gaps I’ve noticed.
All of these terms (“gender”, “bias”, and “AI”) can be somewhat overused and ambiguous.
“Gender”, in the context of AI research, typically means binary man/woman (since it is easier for computer scientists to measure), with the occasional “neutral” category. “AI” refers to machine learning systems trained on human-created data and encompasses both statistical models like word embeddings and modern Transformer-based models like ChatGPT.
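To make this concrete, here is a minimal sketch of how gendered associations can surface in word embeddings. It uses the gensim library with pretrained GloVe vectors, which is my choice of illustration rather than the method of any paper discussed here, and the word pairs are purely illustrative.

```python
# A minimal sketch: probing gendered associations in pretrained word embeddings.
# Assumes gensim is installed and the GloVe vectors can be downloaded.
import gensim.downloader as api

# Load small pretrained GloVe vectors (downloads on first run).
model = api.load("glove-wiki-gigaword-100")

# Compare how strongly occupation words associate with "man" vs. "woman".
for occupation in ["doctor", "nurse", "engineer", "homemaker"]:
    sim_man = model.similarity(occupation, "man")
    sim_woman = model.similarity(occupation, "woman")
    print(f"{occupation}: man={sim_man:.3f}, woman={sim_woman:.3f}")

# The classic analogy probe: "man is to doctor as woman is to ?"
print(model.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))
```

A gap between the two similarity scores for a supposedly gender-neutral occupation is one crude signal of the kind of bias that the work surveyed below measures far more rigorously.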
In the context of this article, I use “bias” to refer broadly to unequal, unfavorable, and unfair treatment of one group relative to another.
There are many other ways to categorize, define, and quantify bias, stereotypes, and harms, which are outside the scope of this article. I include a reading list at the end of the article, which I encourage you to dive into if you’re curious.
Here, I cover a very small sample of papers I’ve found influential in the study of gender bias in AI. This list is not meant to be comprehensive by any means, but rather to showcase the variety of research studying gender bias (and other forms of social bias) in AI.