- Some Google employees have been raising the alarm about the company’s AI development, per the NYT.
- Two workers tried to stop the company from releasing its AI chatbot, Bard, the publication reported.
- The pair were concerned the chatbot generated dangerous or false statements, it added.
Some Google employees have been raising the alarm about the company’s artificial intelligence development, The New York Times reported.
According to the Times report, two Google employees who were tasked with reviewing AI products tried to stop the company from releasing its AI chatbot, Bard. The pair were concerned the chatbot generated dangerous or false statements, per the report.
In March, the two reviewers working under Jen Gennai, the director of Google’s Responsible Innovation group, recommended blocking Bard’s release in a risk evaluation, two people familiar with the process told the publication. The employees felt that despite safeguards, the chatbot was not ready, it added.
However, The New York Times reported that the people it spoke to said Gennai altered the document to remove the recommendation and downplay the chatbot’s risks.
Gennai told The Times she had “corrected inaccurate assumptions, and actually added more risks and harms that needed consideration.” She said reviewers were not supposed to weigh in on whether to proceed. Google told The Times it had released Bard as a limited experiment because of these debates.
Representatives for Google did not immediately respond to Insider’s request for comment. Insider was unable to reach Gennai by email. She did not immediately respond to a request for comment via a LinkedIn message.
In recent months, the tech world has been rushing to deploy generative AI products. The race was seemingly prompted by the release and viral popularity of OpenAI’s ChatGPT, but the speed of development is raising alarm elsewhere.
In March, several AI heavyweights signed an open letter calling for a six-month pause on advanced AI development. The letter said AI companies were locked in an “out-of-control race” and cited profound risks to society from the advanced technology.
John Burden, one of the letter’s signatories and a research associate at the Centre for the Study of Existential Risk, previously told Insider the rate of AI development had picked up at an unprecedented speed.
“Things that five years ago would have seemed unrealistic to expect in the next decade have come and gone,” he said. “On a bigger scale, we just aren’t ready for the impact that this technology might have — considering we don’t really know how these models are doing what they are doing.”