The Evolution of Digital Blackface: A Threat to Black Identity and Culture

James Carter | Discover Headlines
The proliferation of digital blackface has become a pressing concern in the United States, driven by the rise of generative AI video tools and their rapid spread across social media platforms. According to Safiya Umoja Noble, a UCLA gender studies professor and author of Algorithms of Oppression, the creation and dissemination of digital blackface content has accelerated significantly over the past two years. As reported by The Guardian, this phenomenon is not only a form of cultural appropriation but also a means of perpetuating racist and sexist stereotypes.

The term digital blackface was coined in a 2006 academic paper to describe the repurposing of Black cultural expression for non-Black online identity. Mia Moody, a Baylor University journalism professor, notes that this phenomenon has evolved over time, from the use of bitmojis and African American Vernacular English to the creation of AI-generated avatars modeled on Black faces. The net effect is a distorted representation of Blackness, stripped of cultural context and obligation.

The recent surge in digital blackface content has been fueled by the accessibility of AI video tools, such as OpenAI's text-to-video app Sora. This has led to the creation of deepfakes, including those that sully the image of Martin Luther King Jr. and other prominent Black figures. Bernice King, MLK's daughter, has criticized these synthetic videos as "foolishness." The Trump White House has also been implicated in the creation and dissemination of digital blackface content, including a doctored photo of Minnesota activist Nekima Levy Armstrong.

Historical Context

Digital blackface has its roots in the minstrel revues of the early 19th century, where white performers used grease paint and exaggerated routines to caricature Black features. This form of entertainment was a dominant force in American culture, with newspaper cartoons and radio shows like Amos 'n' Andy perpetuating racist stereotypes. Black performers were forced to adopt minstrel elements to gain a foothold in the entertainment industry, further erasing their personhood.

Tom Fletcher, a vaudeville minstrel and actor, explained that the objectives of minstrelsy were to make money and break down ill feelings towards Black people. However, the toxic residue of minstrelsy has lingered in American culture, from Disney's Dumbo to Ted Danson's infamous 1993 blackface roast of Whoopi Goldberg.

Academic Perspectives

Researchers like Noble and Joy Buolamwini have been sounding the alarm about the inherent racial biases in AI algorithms related to medical treatment, loan applications, hiring decisions, and facial recognition. The proliferation of digital blackface has exposed the darker side of AI-generated content, with tech firms struggling to stem the tide of racist and sexist stereotypes.

Noble notes that the impact of AI-generated digital blackface is difficult to quantify, but its use by the Trump administration highlights its potential as a powerful tool of official disinformation. The Obama Truth Social entry, which revived a slur that has festered in darker online corners, is a prime example of this phenomenon.

Personal Accounts

Black users are disproportionately affected by digital blackface, with many experiencing personalized abuse and harassment. Noble argues that this phenomenon is a manifestation of the state bending reality to fit its imperatives, with tech companies lining up behind the White House to facilitate propaganda.

Moody, however, remains hopeful that the current fascination with digital blackface will fade. She believes that as people become more aware of the implications of AI technology, they will move on to other forms of expression, leaving digital blackface behind as dated and unappealing.

Industry and Labour Market Implications

The creation and dissemination of digital blackface content have significant implications for the labour market, particularly in the tech industry. The lack of diversity in AI development teams has led to the creation of biased algorithms, which perpetuate racist and sexist stereotypes. Initiatives like Black in AI and the Distributed AI Research Institute (Dair) are pushing for diversity and community input in AI model-building to address programming bias.

The AI Now Institute and the Partnership on AI have highlighted the risks of AI systems learning from marginalized communities' data, noting that tech companies could offer mechanisms such as data opt-outs to limit harmful or exploitative uses. However, adoption of such measures has been slow, and the problem of digital blackface persists.
