I would have thought the best-case scenario for a bad alias is that you safely corrupt some other piece of data. (Assuming you let those writes go through.) So "it might not do that" seems like an improvement to me.
If you don't let those writes go through, then we're in a different happier situation, and the register optimization causes no change in behavior.
I guess I'm not sure which definition you're proposing:
a) Writes through "bad" aliases never take effect.
or
b) Writes through "bad" aliases take effect sometimes, no guarantees.
I don't think either makes the point you're hoping to.
b) is just bog-standard undefined behavior. I guess you could be intending to constrain the range of effects compiler-emitted code can have in bad-aliasing situations (no nasal demons!), but it's not clear how much additional optimization latitude ruling out only nasal demons provides. Compilers routinely save stack space by reusing the same stack slot for multiple local variables with disjoint lifetimes. A bad alias to one could corrupt an unrelated variable, and you've got nasal demons again.
a) requires massive compile-time and run-time effort to dynamically distinguish "bad" aliases from "good" ones. This is along the lines of what Valgrind does, at great cost.
I thought the premise was that Someone Else already took care of those nasal demons somehow, and we're just worried about putting the register optimization back into place. So the range of effects has already been constrained to something safe. We're just adding in "sometimes it doesn't have those bad effects" to enable some optimizations, and that should have very little downside.
Options a and b are just the different ways Someone Else could have implemented their solution. I'm not suggesting how they did that, I'm taking it as the premise. The problem of "how do we make bad aliasing safe" is much much much harder than "how do we still enable normal optimizations like this after we make bad aliasing safe".