
These new tools let you see for yourself how biased AI image models are


Popular AI image-generating systems are notorious for amplifying harmful biases and stereotypes. But just how big a problem is it? You can now see for yourself, using new interactive online tools. (Spoiler alert: it’s big.)

The tools, built by researchers at AI startup Hugging Face and Leipzig University and detailed in a non-peer-reviewed paper, let people examine biases in three popular AI image-generating models: DALL-E 2 and two recent versions of Stable Diffusion.

To create the tools, the researchers first used the three AI image models to generate 96,000 images of people of various ethnicities, genders, and professions. The team asked the models to generate one set of images based on social attributes, such as “a woman” or “a Latinx man,” and then another set of images based on professions and adjectives, such as “an ambitious plumber” or “a compassionate CEO.”
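The study’s generation scripts aren’t reproduced in this article, but a minimal sketch of this kind of prompt sweep, using Hugging Face’s diffusers library with Stable Diffusion, might look like the following. The checkpoint name, prompt templates, adjective and profession lists, and sample counts are illustrative assumptions, not the paper’s actual setup.

```python
# Sketch: generating prompt-conditioned image sets with Stable Diffusion
# via Hugging Face's diffusers library. Checkpoint, prompts, and sample
# counts are illustrative assumptions, not the study's actual setup.
import itertools

import torch
from diffusers import StableDiffusionPipeline

# Assumes a CUDA GPU is available.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

adjectives = ["ambitious", "compassionate"]   # hypothetical subset
professions = ["plumber", "CEO"]              # hypothetical subset

for adjective, profession in itertools.product(adjectives, professions):
    article = "an" if adjective[0] in "aeiou" else "a"
    prompt = f"a photo of {article} {adjective} {profession}"
    for i in range(4):  # the study generated far more samples per prompt
        image = pipe(prompt).images[0]
        image.save(f"{adjective}_{profession}_{i:03d}.png")
```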

The researchers then examined how the two sets of images varied. They did this by applying a machine-learning technique called clustering to the images. Clustering looks for patterns in the images without assigning categories, such as gender or ethnicity, to them. This allowed the researchers to analyze the similarities between different images and see which subjects a model groups together, such as people in positions of power. They then built interactive tools that let anyone explore the images these AI models produce and any biases reflected in that output. The tools are freely available on Hugging Face’s website.
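The paper’s exact clustering pipeline isn’t spelled out in this article. As a rough illustration of the idea, one could embed each generated image with an off-the-shelf vision model and cluster the embeddings without ever supplying gender or ethnicity labels. The sketch below uses CLIP embeddings and k-means as stand-in choices; the directory name and number of clusters are assumptions.

```python
# Sketch: label-free clustering of generated images. The embedding model
# (CLIP), clustering algorithm (k-means), and cluster count are stand-in
# choices, not necessarily what the paper used.
from pathlib import Path

import torch
from PIL import Image
from sklearn.cluster import KMeans
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Load the generated images (directory name is hypothetical).
paths = sorted(Path("generated_images").glob("*.png"))
images = [Image.open(p).convert("RGB") for p in paths]

# Embed every image into a shared feature space.
with torch.no_grad():
    inputs = processor(images=images, return_tensors="pt")
    features = model.get_image_features(**inputs)

# Group visually similar images without any gender/ethnicity labels.
labels = KMeans(n_clusters=10, random_state=0).fit_predict(features.numpy())
for path, cluster in zip(paths, labels):
    print(cluster, path.name)
```

Inspecting which images land in the same cluster is what lets the researchers see, for example, whether images of “people in positions of power” are grouped around a narrow visual type.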

After analyzing the images generated by DALL-E 2 and Stable Diffusion, they found that the models tended to produce images of people who look white and male, especially when asked to depict people in positions of authority. That was particularly true for DALL-E 2, which generated white men 97% of the time when given prompts like “CEO” or “director.” That’s because these models are trained on enormous amounts of data and images scraped from the internet, a process that not only reflects but further amplifies stereotypes around race and gender.

But these tools mean people don’t have to simply take Hugging Face’s word for it: they can see the biases at work for themselves. For example, one tool lets you explore the AI-generated images of different groups, such as Black women, to see how closely they statistically match Black women’s representation in different professions. Another can be used to analyze AI-generated faces of people in a particular profession and combine them into an average representation of the images for that job.

The average face of a teacher generated by Stable Diffusion and DALL-E 2.
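The averaging tool’s internals aren’t described here either. Conceptually, though, an “average face” can be approximated by pixel-averaging a stack of generated images for one profession. The sketch below assumes a hypothetical directory of generated “teacher” images and skips the face detection and alignment a real tool would likely perform first.

```python
# Sketch: a naive "average face" for one profession via pixel averaging.
# The Hugging Face tool's actual method (e.g., detecting and aligning
# faces before averaging) isn't reproduced here.
from pathlib import Path

import numpy as np
from PIL import Image

SIZE = (256, 256)  # assumed common resolution

# Directory of generated "teacher" images is hypothetical.
paths = sorted(Path("generated_images/teacher").glob("*.png"))
stack = np.stack([
    np.asarray(Image.open(p).convert("RGB").resize(SIZE), dtype=np.float32)
    for p in paths
])

average = stack.mean(axis=0).astype(np.uint8)
Image.fromarray(average).save("average_teacher.png")
```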

Yet another tool lets people see how attaching different adjectives to a prompt changes the images the AI model spits out. Here the models’ output overwhelmingly reflected stereotypical gender biases. Adding adjectives such as “compassionate,” “emotional,” or “sensitive” to a prompt describing a profession more often makes the AI model generate a woman instead of a man. In contrast, adjectives such as “stubborn,” “intellectual,” or “unreasonable” usually lead to images of men.

“Compassionate manager” by Stable Diffusion.

“Manager” by Stable Diffusion.

There’s also a tool that lets people see how the AI models represent different ethnicities and genders. For example, when given the prompt “Native American,” both DALL-E 2 and Stable Diffusion generate images of people wearing traditional headdresses.

“In almost all the representations of Native Americans, they were wearing traditional headdresses, which obviously isn’t the case in real life,” says Sasha Luccioni, the AI researcher at Hugging Face who led the work.

Surprisingly, the tools found that image-making AI systems tend to depict white nonbinary people as almost identical to one another but produce more variation in how they depict nonbinary people of other ethnicities, says Yacine Jernite, an AI researcher at Hugging Face who worked on the project.

One theory as to why that might be is that nonbinary brown people may have had more visibility in the press recently, meaning their images end up in the data sets the AI models use for training, says Jernite.

OpenAI and Stability.AI, the company that built Stable Diffusion, say that they have introduced fixes to mitigate the biases ingrained in their systems, such as blocking certain prompts that seem likely to generate offensive images. However, these new tools from Hugging Face show how limited those fixes are.

A spokesperson for Stability.AI told us that the company trains its models on “data sets specific to different countries and cultures,” adding that this should “serve to mitigate biases caused by overrepresentation in general data sets.”

A spokesperson for OpenAI didn’t comment on the tools specifically, but pointed us to a blog post explaining how the company has added various techniques to DALL-E 2 to filter out bias and sexual and violent images.

Bias is becoming a more urgent problem as these AI models become more widely adopted and produce ever more realistic images. They’re already being rolled out in a slew of products, such as stock photos. Luccioni says she is worried that the models risk reinforcing harmful biases on a large scale. She hopes the tools she and her team have created will bring more transparency to image-generating AI systems and underscore the importance of making them less biased.

Part of the problem is that these models are trained on predominantly US-centric data, which means they mostly reflect American associations, biases, values, and culture, says Aylin Caliskan, an assistant professor at the University of Washington who studies bias in AI systems and was not involved in this research.

“What ends up happening is the thumbprint of this online American culture … that’s perpetuated across the world,” Caliskan says.

Caliskan says Hugging Face’s tools will help AI developers better understand and reduce biases in their AI models. “When people see these examples directly, I believe they will be able to understand the significance of these biases better,” she says.
