It would have gone much better for Google if the Brain team had been permitted to apply their work, but they were repeatedly blocked on stated grounds of AI safety and worries that negative PR would damage existing business lines. I think this is probably the biggest missed business opportunity of the past decade, and much of the blame for losing key talent and Google's head start in LLMs ultimately rests with Sundar, though the managers in between share their part of it.
They should have been spun out with a budget of a few billion dollars, no oversight, Google keeping 90% (or some other very high %) ownership, and a right of first refusal to buy them back.
It is a model that has worked again and again to escape the problem of corporate bureaucracy.
Interestingly, Google itself followed this model with Waymo. Employees got Waymo stock, not Google stock, and there was even speculation that Waymo took on outside investors. It's strange to me that they didn't consider that generative AI would be as game-changing as self-driving cars, especially since the tech was right in front of them.
Funnily enough, the same AI safety teams that held Google back from using large transformers in products are also largely responsible for the Gemini image generation debacle.
It is tough to find the right balance, though, because AI safety is not something you want to brush off.
> AI safety is not something you want to brush off
It really depends on what exactly is meant by "safety", because the word is used with several different (and largely unrelated) meanings in this context.
The actual value of the kind of "safety" that led to the Gemini debacle is very unclear to me.