Google CEO Sundar Pichai has sent a note to the company addressing the controversy over Gemini’s image generation. Pichai called the AI app’s problematic responses around race unacceptable and vowed to make structural changes to fix the problem. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai wrote. Google confirmed the news to exchange4media.
Gemini recently came under netizens’ scrutiny after it declined to depict white people in some results and inserted images of women and people of color when prompted to create images of Vikings, Nazis, and the Pope. Google even had to suspend Gemini’s image creation tool following the backlash.
Gemini was also found to be producing questionable text responses, such as equating Elon Musk’s influence on society with Adolf Hitler’s, which drew sharp criticism, especially from conservatives who accused Google of anti-white bias.
In his note, Pichai acknowledged that no AI is perfect, especially at this emerging stage of the industry’s development. He added, “But we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”
Going forward, the tech giant will be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. Pichai further said, “We should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.”
exchange4media has accessed Pichai’s full note, which Google has confirmed:
“I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.
Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.
Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.
We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.
Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.
We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.”