The University of Washington’s recent study on Stable Diffusion, a popular AI image generator, reveals concerning biases in its algorithm.
The research, led by doctoral student Sourojit Ghosh and assistant professor Aylin Caliskan, was presented at the 2023 Conference on Empirical Methods in Natural Language Processing and published on the pre-print server arXiv.
The Three Key Issues
The report picked up on three key issues and concerns surrounding Stable Diffusion, including gender and racial stereotypes, geographic stereotyping, and the sexualization of women of color.
Gender and Racial Stereotypes
The AI predominantly generates images of light-skinned men when prompted to create pictures of “a person,” underrepresenting nonbinary and Indigenous identities.
This tendency reflects deep-seated societal biases associating the term’ person’ with ‘man,’ perpetuating stereotypes over generations.
It particularly overlooks nonbinary genders and individuals from Africa and Asia.
Geographic Stereotyping
There’s a marked bias towards Western, light-skinned interpretations of personhood, with high correspondence towards Europeans, North Americans, Australians, and New Zealanders.
This bias extends to national stereotypes, where countries like the USA, UK, and Australia see more varied representations than those of Papua New Guinea, Egypt, and Bangladesh, which suffer from a homogenous representation.
Sexualization Of Women Of Color
A disturbing pattern emerged where the AI sexualized images of women from Latin American countries, Mexico, India, and Egypt, assigning high ‘sexy’ scores compared to European and American women.
This reflects Western media’s long-standing fetishization of women of color and raises significant concerns about the type of data used to train these AI models.
These biases have profound implications for using Stable Diffusion in commercial content creation, particularly in the entertainment industry, where synthetic content is increasingly preferred over real creators.
The study emphasizes the need for more responsible AI design, incorporating diverse data sources and human-centered approaches to avoid perpetuating harmful social stereotypes.
The findings of this study are crucial for the tech industry, particularly for professionals of color, highlighting the need for vigilance and advocacy in AI development to ensure fair and inclusive representations.
The Wider Issue
The issue is with the representation of the industry itself.
Canva’s text-to-image app, which Stable Diffusion powers, recently flagged Black hairstyles as unsafe, causing concern.
Bloomberg’s recent analysis of Stable Diffusion’s outputs produced disturbing but unsurprising results: the model amplified race and gender stereotypes.
When the Bloomberg team asked the model to generate images of an individual in a high-paying job, the results were dominated by subjects with lighter skin tones, while prompts like fast food workers and social workers more commonly generated issues with darker skin tones.
Some experts in generative AI predict that as much as 90% of internet content could be artificially generated within a few years.
Head of AI Products at Canva, Danny Wu, stated that the company’s users had already developed 114 million images using Stable Diffusion.
It, therefore, comes as no shock that there is a concern surrounding racial biases.
#blacktech #entrepreneur #tech #afrotech #womenintech #supportblackbusiness #blackexcellence #technology #blackbusiness #blacktechmatters #blackowned #blackgirlmagic #blackpreneur #startup #innovation #hbcu #techtrap #blackownedbusiness #pitchblack #autographedmemories #blacksintech #shopblack #wocintech #nba #blackwomen #repost #hbcubuzz #blackwomenintech #startupbusiness #nails
Source link