Google CEO Admits Its AI Totally Screwed Up


Google has had a terrible week and a half.

Last week, its recently renamed Gemini AI engine started spitting out pictures of racially diverse Nazis, confounding the internet with its tastelessness and historical inaccuracy.

“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Google senior VP Prabhakar Raghavan admitted in a Friday blog post.

The news got an immense amount of mainstream media attention, ballooning the incident into a PR crisis of epic proportions. In response, Google blocked the system from generating any images of people at all.

More embarrassing revelations followed, like when the chatbot refused to say whether genocidal dictator Adolf Hitler was worse than Tesla CEO Elon Musk.

Perhaps unsurprisingly, Google CEO Sundar Pichai is furious, calling out the screwup in an email sent to staff on Tuesday and obtained by Semafor.

“I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong,” he wrote, adding that Google’s “mission to organize the world’s information and make it universally accessible and useful is sacrosanct.”

It’s an especially precarious position for the tech giant considering Google has been on the back foot throughout the emerging AI race, struggling to keep up with fierce competition from the likes of OpenAI.

In response, Pichai tried to calm staffers and reassure them that Gemini hadn’t completely lost its mind.

“Our teams have been working around the clock to address these issues,” he wrote. “We’re already seeing a substantial improvement on a wide range of prompts.”

However, his explanation for the bizarre incident was vague and unsatisfying.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” Pichai argued. “And we’ll review what happened and make sure we fix it at scale.”

In short, the incident hammers home a lesson that powerful tech companies are learning over and over: it’s easy enough to whip up an AI that generates text or images, but extremely difficult to test it for all the problems that users could have — and, as Google seems to be learning in real time, embarrassing when the AI screws up hyper-publicly.

More on Gemini: Google Blocked Gemini From Generating Images of Humans, But It Still Does Clowns

