I'm less concerned with the AI teams lobotomizing utility and more concerned with the damage to language, particularly redefining the term "safe" to mean something like "what we deem suitable".
That said, a time when zero actual "safety" is at stake might be the best time to experiment with how to build safety latches and where to put them, for when we reach a point where "safety" means actual safety. I'm even OK with models that default to parental-control behavior for practice, provided it can be switched off.