Google apologizes after Gemini AI generates images of Nazis as people of colour

Google has paused the ability of its Gemini AI tool (formerly Bard) to generate images of people. The company also issued an apology after the tool was found to be producing historically inaccurate illustrations, including depictions of Nazi soldiers as people of colour.


As reported by The Verge, a number of prompts led the bot to respond with historical inaccuracies. People of colour were reportedly shown in illustrations generated for queries about German soldiers in 1943, as well as for requests to show U.S. Senators in the 1800s.

“We’re already working to address recent issues with Gemini’s image generation feature,” said Google on X. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

In an earlier statement, Google had acknowledged that Gemini's AI image generation produces images of a wide range of people.

“That’s generally a good thing because people around the world use it,” Google said. “But it’s missing the mark here.”

AI image generators have been found to perpetuate racist stereotypes, and one of Google’s AI principles is to “avoid creating or reinforcing unfair bias.” But as Jack Krawczyk, who works on Gemini, pointed out on X, “historical contexts have more nuance to them.”

“As part of our AI principles we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously,” Krawczyk wrote. “We will continue to do this for open-ended prompts.”
