Can We Make AI Brown?

With the rise of ChatGPT, DALL-E, and Midjourney, addressing bias in artificial intelligence is more pressing than ever. But the solution isn’t so clear.

A series of Midjourney images created by artist Madhav Kohli, Twitter user Aslan Pahari, and a since-deleted r/Bangladesh Redditor on ‘Bengali dining’

Allana Akhtar | August 9, 2023

What does a South Asian woman look like? Artificial intelligence thinks it knows.

In a viral TikTok with 1.3 million views and counting, AI produced images of what it assumes women from Sri Lanka, India, Bhutan, Pakistan, Bangladesh, Nepal, Afghanistan, and the Maldives look like. With their contoured cheekbones and ruby-red lips, the women are strikingly beautiful — but far from reflective of each country’s diverse population. The light-skinned Pakistani woman, for instance, implies that everyone from the country is fair. Other AI queries produce more disturbing results: when a journalist asked ChatGPT, both the crown jewel and hypebeast poster child of the generative AI revolution, to determine which type of flight passenger would pose a security risk, the algorithm said the risk increases “if the traveler is from a country” — including Afghanistan — “that is known to produce terrorists.”

Bias in AI isn’t news; algorithms are known to perpetuate discrimination against minority groups (who could forget when, in 2015, Google Photos tagged photos of Black people as gorillas?). As generative AI gets smarter, however, the issue is also coming to the forefront for South Asians — technically not a minority when it comes to the global population. South Asians face a puzzling conundrum: if creators who could offer diverse perspectives deny AI their creations, they risk machines telling incomplete versions of their stories. If they submit their work, they often don’t get paid for it.
