No quick fix: How OpenAI’s DALL·E 2 illustrated the challenges of bias in AI




By Jake Traylor, NBC News

OpenAI released the second version of its DALL·E image generator in April to rave reviews, but efforts to address societal biases in its output have illustrated the systemic problems underlying AI systems.

OpenAI’s DALL·E 2 has become a hot topic among technologists who see its biases as illustrative of problems with AI technology. (Chelsea Stahl / NBC News Illustration)

An artificial intelligence program that has impressed the internet with its ability to generate original images from user prompts has also sparked concerns and criticism for what is now a familiar issue with AI: racial and gender bias. 

And while OpenAI, the company behind the program, called DALL·E 2, has sought to address the issues, the efforts have also come under scrutiny for what some technologists claim is a superficial fix for the systemic problems underlying AI systems.

“This is not just a technical problem. This is a problem that involves the social sciences,” said Kai-Wei Chang, an associate professor at the UCLA Samueli School of Engineering who studies artificial intelligence. There will be a future in which systems better guard against certain biased notions, but as long as society has biases, AI will reflect that, Chang said.

OpenAI released the second version of its DALL·E image generator in April to rave reviews. The program asks users to enter a series of related words — for example, “an astronaut playing basketball with cats in space in a minimalist style.” With spatial and object awareness, DALL·E then creates four original images meant to reflect the words, according to the website.
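To make that workflow concrete, here is a minimal sketch of the same prompt-to-images loop using OpenAI’s public Python client. The model name, parameters, and client usage follow OpenAI’s published API, but treat this as an illustration of the interface the article describes, not the code behind the web demo itself.

```python
# Minimal sketch: one text prompt in, four candidate images out,
# mirroring the behavior the article describes.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the
# environment; model name and parameters follow OpenAI's public API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="an astronaut playing basketball with cats in space in a minimalist style",
    n=4,            # four candidates, matching the behavior described above
    size="512x512",
)

for image in response.data:
    print(image.url)  # each entry is a temporary URL to a generated image
```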

As with many AI programs, it did not take long for some users to report what they saw as signs of bias. OpenAI itself cited the example caption “a builder,” which produced images featuring only men, while the caption “a flight attendant” produced only images of women. Anticipating such biases, OpenAI published a “Risks and Limitations” document with the program’s limited release, before allegations of bias surfaced, noting that “DALL·E 2 additionally inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.”
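The informal audits users ran follow directly from that workflow: feed the generator neutral occupation prompts and inspect the skew of the results. Below is a hedged sketch of that process, assuming the same public client as above; the prompts are the article’s two examples, and the tally step is deliberately left to a human reviewer rather than inventing an automated demographic classifier.

```python
# Sketch of an informal bias audit: generate a small batch of images for
# neutral occupation prompts and save them for a human reviewer to tally.
# Prompts and batch size are illustrative; this is not OpenAI's methodology.
import urllib.request
from pathlib import Path

from openai import OpenAI

client = OpenAI()
prompts = ["a builder", "a flight attendant"]  # the article's two examples

out_dir = Path("audit_images")
out_dir.mkdir(exist_ok=True)

for prompt in prompts:
    response = client.images.generate(
        model="dall-e-2", prompt=prompt, n=4, size="512x512"
    )
    for i, image in enumerate(response.data):
        # Download each candidate so a reviewer can count who appears.
        filename = out_dir / f"{prompt.replace(' ', '_')}_{i}.png"
        urllib.request.urlretrieve(image.url, filename)
```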

Discover more about bias in AI.

Discover how the bias in this project was uncovered.

Biased algorithms reflect the tech industry’s whiteness problem, which has existed for decades.

Find more stories like this in our breaking news archive.
