Following the ChatGPT breakthrough, Google stopped listening to its own experts for fear of being displaced.
In March, shortly before introducing Bard, its artificial intelligence chatbot, to the public, Google asked employees to test the tool. What no executive at the company could have imagined was the reaction it garnered among programmers.
Google, which over the years had spearheaded much of the research in AI, had not yet integrated a consumer-facing version of generative AI into its products when ChatGPT was released.
Until then, the company had been wary of the technology's power and the ethical questions that would come with integrating it into search and other flagship products, according to employees. But the fear of being displaced by the competition swept that caution aside.
Now, its workers maintain that those concerns were ignored in a frantic attempt to catch up with ChatGPT and fend off the threat it poses to Google's search business.
One employee concluded that Bard was "a pathological liar," according to screenshots of the internal discussion. Another called it "shocking," according to a Bloomberg report.
The ethics task force that Google vowed to fortify is now disempowered and demoralized, current and former workers said.
Employees responsible for the safety and ethical implications of new products have been told not to get in the way or try to kill any of the generative AI tools in development, they said.
Google intends to revitalize its search business around this cutting-edge technology, which could bring generative AI to millions of phones and homes around the world, ideally before OpenAI, backed by Microsoft Corp. (MSFT), beats the company to it.
One employee wrote that when asked for tips on how to land a plane, Bard often gave advice that would lead to a crash. Another said it gave answers about scuba diving "that would likely result in serious injury or death."
“AI ethics has taken a backseat,” said Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google director. “If ethics is not placed above profit and growth, it won’t work in the end.”
The perils of working in AI
Silicon Valley as a whole continues to struggle to reconcile competitive pressures with safety. Researchers building AI outnumber those focused on safety by a ratio of 30 to 1, the Center for Humane Technology said in a recent presentation, underscoring the often lonely experience of voicing concerns inside a large organization.
Large language models, the technologies on which ChatGPT and Bard are based, ingest huge volumes of digital text from news articles, social media posts, and other Internet sources, and then use that written material to train software that predicts and generates content on its own when given a prompt or query.
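To make that prediction loop concrete, here is a minimal, purely illustrative sketch in Python: a toy model that counts which word tends to follow which in a tiny invented corpus, then extends a prompt one word at a time. It is not Google's or OpenAI's code; real systems replace the counting table with a neural network trained on vastly more text, but the core mechanic of predicting the next token from what came before is the same.

```python
# Toy next-word predictor (illustrative only; the corpus and all names
# are invented for this sketch). Real LLMs learn a neural network over
# billions of documents, but the generation loop is conceptually similar.
import random
from collections import defaultdict, Counter

corpus = (
    "google released bard to the public . "
    "openai released chatgpt to the public . "
    "the public tested the chatbot ."
)

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    out = [prompt_word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no known continuation; stop
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

print(generate("google"))  # e.g. "google released chatgpt to the public ."
```

The point the toy preserves is that generation is iterative: each new token is drawn from a probability distribution conditioned on the text produced so far, which is also why such systems can fluently produce confident-sounding but wrong answers.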
But ChatGPT's remarkable debut upended all of that. In February, Google launched an advertising blitz for its generative AI products, promoting the Bard chatbot. It also announced that YouTube creators would be able to swap outfits in videos or create "fantastic cinematic scenarios" using generative AI.
Two weeks later, it announced new AI features for Google Cloud, showing how users of Docs and Slides would be able, for example, to create sales training documents and presentations or compose emails.
On the same day, the company announced that it would incorporate generative AI into its healthcare offerings. Employees say they are concerned that the speed of development is not leaving enough time to study potential harms.