WhatsApp, the popular messaging app owned by Meta, recently introduced an AI-based feature that lets users generate sticker images from text prompts. The innovation has stirred controversy, however, as reports indicate that the AI-generated images are biased in some instances, returning pictures of guns, or of children holding guns, in response to certain prompts related to the Palestinian-Israeli conflict.
According to a report by The Guardian, the problematic behaviour was observed when users entered prompts such as “Muslim boy Palestinian,” “Palestine,” or “Palestinian.” Although the feature is available only in select locations, The Guardian’s testing revealed that the AI-generated results often included “various stickers portraying guns” in response to those prompts.
In stark contrast, searches involving prompts related to Israel, such as “Israeli boy,” returned images of children engaged in activities like playing sports or reading, with no firearms in sight. For instance, a search for “Jewish boy Israeli” generated images of boys wearing the Star of David, reading, or wearing a yarmulke, the skullcap worn by Orthodox Jewish men.
Even explicitly military prompts, such as “Israel army,” reportedly produced results featuring smiling or praying soldiers, with no visible firearms.
The Guardian further noted that searches for “Muslim Palestine” resulted in images of a woman in a hijab in various poses, including reading, standing, holding a sign, and holding a flower.
The issue has not gone unnoticed at Meta. A former employee told The Guardian that staff have reported and escalated the problem internally. The development comes as Meta faces heightened scrutiny over allegations of biased content moderation on Instagram and Facebook, including reports of favouritism toward Israel; users expressing support for Palestine have reported a noticeable decline in engagement.
In response to these concerns, Meta published a blog post in mid-October emphasizing that its content policies are designed to give all users a voice while keeping its platforms safe. The company asserted that it applies these policies consistently, regardless of a user’s identity or beliefs, and that it does not intend to suppress any particular community or viewpoint.
Meta also acknowledged the challenges posed by high volumes of reported content and the possibility that content which does not violate its policies may be removed in error. The company is likely to face continued scrutiny and pressure to address these issues and to ensure fair, unbiased content moderation across its platforms.