Google’s AI Tool Faces Backlash Over Image Generation of People of Colour

Google’s new AI tool, Gemini, faced a significant backlash in late February for its image generation feature, which sparked confusion and controversy on social media. Users tested the tool by prompting it to generate images of historical figures, only to receive unexpected depictions, including America’s Founding Fathers as Black women and Ancient Greek warriors as Asian. Although some of these results were met with humor, others—such as images of people of colour in Nazi uniforms—led to outrage, prompting Google to temporarily suspend the tool.

What is Google Gemini?

Gemini was introduced as the successor to Google’s Bard, a conversational AI tool launched in March 2023 that could generate essays, code, and more. Rebranded as Gemini in February 2024, the tool extended Bard’s capabilities by integrating text, image, and video outputs. Its image generation feature quickly drew attention, particularly for its depiction of historical figures as people of colour, which resulted in controversy.

The Controversial Image Generation

The most contentious images involved women and people of colour placed in historically significant roles traditionally occupied by white men. For instance, one image featured a Black woman as the pope. While there were a few Black popes in history, the image of a female pope—though part of a medieval legend—was not historically accurate. These types of depictions led to debates about historical accuracy and AI’s approach to diversity.

How Gemini Works

Gemini combines multiple models, such as LaMDA for conversational AI and Imagen for text-to-image generation. These models process prompts by drawing on patterns in vast amounts of training data to produce statistically probable responses, including images. By design, Gemini accepts and produces multiple forms of input and output, including text, images, and video.
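To make the idea of "combining multiple models" concrete, here is a minimal, purely illustrative sketch of how a multimodal assistant might route an incoming prompt to either a conversational backend or a text-to-image backend. This is not Google’s code; every name and the routing rule are hypothetical, assumed only for explanation.

```python
# Toy dispatcher: route a prompt to a hypothetical backend based on
# whether it asks for an image. All names here are illustrative,
# not real Google Gemini APIs.

def route_prompt(prompt: str) -> str:
    """Pick a backend model based on what the prompt asks for."""
    image_triggers = ("draw", "generate an image", "picture of", "image of")
    if any(trigger in prompt.lower() for trigger in image_triggers):
        return "image-backend"   # e.g. an Imagen-style text-to-image model
    return "chat-backend"        # e.g. a LaMDA-style conversational model

print(route_prompt("Generate an image of a medieval knight"))  # image-backend
print(route_prompt("Explain how transformers work"))           # chat-backend
```

A real system would use a learned classifier or the model’s own tool-calling ability rather than keyword matching, but the routing idea is the same.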

AI Bias Issues

Generative AI tools often face criticism for their biases, such as overlooking people of colour or perpetuating harmful stereotypes. AI systems, like those trained on the internet, may absorb and reproduce societal prejudices. This issue was particularly evident when AI-generated images distorted features or failed to represent Black and Asian people accurately. The lack of diversity in training data has been a significant factor contributing to these biases.

Gemini’s Efforts to Avoid Bias

Google aimed to address these biases by programming Gemini to generate more diverse images. However, this led to an overcorrection. For example, when users requested images of Nazis, the AI generated images of racially diverse Nazis, including Black women in Nazi uniforms. This attempt to counteract bias backfired, creating inappropriate and offensive results. Gemini also randomly appended diversity-related additions to user prompts, which sometimes produced inappropriate images.
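The failure mode described above, appending diversity modifiers to prompts without checking context, can be sketched in a few lines. This is emphatically not Google’s implementation; the modifier list and logic are hypothetical, meant only to show why an unconditional rule backfires on historically specific prompts.

```python
import random

# Hypothetical modifiers a system might silently append to prompts.
# This is an illustrative sketch, not Google's actual prompt pipeline.
DIVERSITY_MODIFIERS = ["diverse", "of various ethnicities", "of various genders"]

def augment_prompt(prompt: str) -> str:
    """Blindly append a random diversity modifier to any prompt."""
    modifier = random.choice(DIVERSITY_MODIFIERS)
    return f"{prompt}, {modifier}"

# Harmless for a generic request...
print(augment_prompt("portrait of a software engineer"))

# ...but the same blind rule applied to a historically specific prompt
# yields the kind of inaccurate, offensive output the article describes.
print(augment_prompt("German soldiers in 1943 uniforms"))
```

The lesson is that such augmentation needs to be conditional on context: a rule that improves representation for generic prompts can distort prompts where historical accuracy matters.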

The Reaction to Gemini’s Outputs

Gemini’s outputs, particularly those depicting historical figures as people of colour, sparked an “anti-woke” backlash from conservative groups. Critics accused Google of pushing a “woke agenda,” with some claiming the AI was distorting history. At the same time, the tool’s generation of inappropriate images, like people of colour in Nazi uniforms, outraged minority communities.

Google’s Response

In response to the controversy, Google acknowledged the errors and clarified that Gemini was designed to eliminate biases but had failed to adjust for prompts where such diversity was not suitable. The company admitted that the tool had overcompensated in some instances and was overly cautious in others, leading to inappropriate images. Google CEO Sundar Pichai expressed regret, emphasizing that the company was working to correct the mistakes.

Other Issues with Gemini

Besides the problematic image generation, Gemini faced issues related to its responses on sensitive topics. For instance, when asked to generate images of events like the Tiananmen Square massacre or the Hong Kong protests, the tool failed to produce adequate depictions. Gemini also struggled to handle sensitive political queries, offering vague or balanced responses that angered some users, especially in countries like India, where it was criticized for being evasive on political issues.

Suspension of Gemini’s Image Generation

Google paused Gemini’s image generation feature in February 2024, following the backlash. The company acknowledged the issues and stated that the tool would undergo more testing before being re-released. CEO Sundar Pichai assured users that Google would continue working on improving Gemini, despite the challenges in fine-tuning AI systems.

Impact on Google

The controversy surrounding Gemini had significant financial repercussions for Google’s parent company, Alphabet, whose stock declined in the days following the backlash, erasing about $96.9 billion in market value and reflecting the damage to the company’s reputation.
