The Evolution of Digital Blackface: How AI-Generated Content is Perpetuating Racism

James Carter | Discover Headlines

The rise of digital blackface, a phenomenon in which non-Black individuals use AI-generated content to mimic and mock Black culture, has gained momentum in recent years. The trend is particularly visible on social media platforms, where users create and share AI-generated videos and images that perpetuate racist stereotypes. As The Guardian has reported, digital blackface has become increasingly prevalent as users exploit AI tools to produce and spread racist content.

One notable example of digital blackface is the wave of AI-generated videos mocking Black people who rely on government assistance programs such as SNAP benefits. These videos, which often feature Black women and are designed to elicit outrage and ridicule, have been widely shared on social media platforms. According to Safiya Umoja Noble, a UCLA gender studies professor and the author of Algorithms of Oppression, such videos are part of a larger pattern of digital blackface that draws on racist and sexist stereotypes centuries in the making.

The term "digital blackface" was first coined in a 2006 academic paper, which described the phenomenon as a form of Black cultural commodification repurposed for non-Black expression online. Examples include posts written in African American Vernacular English, the use of darker-skinned emojis, and reaction memes featuring Black celebrities. Mia Moody, a Baylor University journalism professor, describes digital blackface as a form of cultural appropriation in which non-Black individuals gain cultural capital by mimicking Black culture.

Roots of Minstrelsy

The roots of digital blackface reach back to the minstrel shows of the 19th century, in which white performers smeared grease paint on their faces and performed exaggerated routines of Black laziness, buffoonery, and hypersexuality. Thomas D. Rice, a Manhattan playwright, popularized the character of Jim Crow, whose name later became shorthand for the forced racial segregation policies of the American South. Minstrel shows were a dominant form of American entertainment, their imagery echoed in newspaper cartoons and popular radio shows like Amos 'n' Andy.

Even as minstrelsy faded from the spotlight, its toxic residue lingered in American culture, from the shuffling crows of Disney's Dumbo to Ted Danson's infamous 1993 blackface roast of Whoopi Goldberg. The internet has since given that tradition new reach, with users creating and sharing AI-generated content that perpetuates the same racist stereotypes.

AI-Generated Content

AI-generated content has made digital blackface easier than ever to produce and share. OpenAI's text-to-video app Sora has been used to create hyperrealistic videos that sully the image of Martin Luther King Jr., sparking ethical debates around synthetic resurrection. Conservative influencers have used AI-generated content to conflate the legacies of King and Charlie Kirk, while the Trump White House has used doctored images to smear Black activists such as Nekima Levy Armstrong.

According to Noble, the use of AI-generated content has made it easier for the state to bend reality to fit its imperatives. She notes that tech companies have lined up behind the White House, enabling the dissemination of digital blackface and other forms of racist content. Moody, however, remains hopeful that the current fascination with digital blackface will soon be as outdated and uninviting as the analog variant.

Efforts to Address Digital Blackface

Tech firms have made some efforts to stem the tide: OpenAI, Google, and Midjourney disallow deepfakes of King and other American icons, Meta has deleted AI blackface characters, and Instagram and TikTok have attempted to scrub viral digital blackface videos. These efforts have met with limited success, however, and digital blackface continues to fuel racism and harassment directed at Black users.

Black in AI and the Distributed AI Research Institute (Dair) are among the handful of affinity groups that have pushed for diversity and community input in AI model-building to address programming bias. The AI Now Institute and Partnership on AI have highlighted the risks of AI systems learning from marginalized communities' data and noted that tech companies could provide mechanisms such as data opt-outs to help limit harmful or exploitative uses.

Conclusion

The evolution of digital blackface is a disturbing trend that highlights the need for greater awareness and regulation of AI-generated content. As Noble notes, digital blackface is a symptom of a larger problem in which the state bends reality to fit its imperatives. Addressing its root causes, including the perpetuation of racist stereotypes and the exploitation of Black culture, is essential to creating a more just and equitable online environment.
