WhatsApp's AI sticker generator called out for racism against Palestine, Meta responds

WhatsApp's newly launched AI sticker generator has been accused of producing racist and Islamophobic images related to Palestine.

In Short

  • Meta launched the AI sticker generator for WhatsApp in September.
  • The feature is now criticized for generating racist and Islamophobic stickers.
  • Meta has previously faced similar issues, including problematic mistranslations in Instagram's automatic translation feature.

In September, Meta announced a new AI sticker generator for WhatsApp. The artificial intelligence (AI)-backed feature lets users generate customised stickers for chats and stories. While the feature is currently available in only a limited number of countries, it has already come under criticism for producing racist and inappropriate stickers.

The Guardian reported that WhatsApp's new AI sticker generator has been found to produce racist and Islamophobic images when prompted with terms related to Palestine. For instance, the report cites that the tool generates images of Palestinian children holding guns when prompted with "Muslim boy Palestine" or simply "Palestine." However, when prompted with terms related to Israel, such as "Israel" or "Israeli boy," the generator produces images of the Israeli flag, people dancing, and children playing, with no depictions of guns.

The inappropriate images have sparked concerns that AI technology could unintentionally perpetuate bias and discrimination, especially in sensitive geopolitical contexts. Critics argue that such biases can distort how communities and events are portrayed, negatively shaping public opinion and discourse.

In response to the Guardian's report and the ensuing criticism, Meta spokesperson Kevin McAlister said that the company's AI sticker generator is still evolving and needs user feedback to improve. "As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems. We'll continue to improve these features as they evolve and more people share their feedback."

However, this is not the first time Meta's AI has been questioned over bias and mistranslations. For instance, Instagram's automatic translation feature once mistakenly added the word "terrorist" to user bios written in Arabic. The issue was reminiscent of a Facebook mistranslation that led to the arrest of a Palestinian man in Israel in 2017. The company apologised for what it described as a "glitch".

Meanwhile, even as companies are cautioned and urged to regulate their AI models, there is growing concern about AI being misused or generating inappropriate content. In December last year, the Lensa AI app faced accusations of creating sexually suggestive and racially biased avatars, with users reporting that the app occasionally generated nude images despite its rules against adult content.

Furthermore, a recent study published in the journal Nature has highlighted the potential risks associated with integrating large language models (LLMs), which form the foundation of chatbots, into healthcare. The study found that this integration could result in harmful, race-based medical practices.

Published By: Divya Bhati
Published On: Nov 6, 2023