Subtraction in itself is not censorship - in the case of ChatGPT, there is subtraction of things that OpenAI itself considers "unethical" (more precisely, "politically incorrect").
Subtraction could be applied to improve the truthfulness of the model, but considering how much false bullshit ChatGPT spews without a second thought, that's clearly not what's going on here.
It's censorship - pure and simple - and it's censorship for political reasons. The worst kind of censorship.
OpenAI simply told ChatGPT to censor itself, or rather uses ChatGPT to censor ChatGPT's own outputs. I don't think there's much finesse being applied, really. Something like "don't accept vulgar messages; any candidate responses that would be vulgar should be rejected" - with all the judgement performed by the language model itself. It's not that intricate.
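A minimal sketch of what that kind of self-moderation loop might look like, purely as an illustration: the real filter is the language model judging its own candidate response, but here `model_judges_vulgar` is a trivial keyword stub standing in for that second model call. All names and the word list are hypothetical, not anything from OpenAI.

```python
# Hypothetical sketch of a model censoring its own outputs.
# model_judges_vulgar() stands in for asking the model itself
# "is this response vulgar?" - here it is a trivial keyword
# stub purely for illustration, NOT an actual moderation API.

VULGAR_WORDS = {"damn", "hell"}  # placeholder list for the sketch

def model_judges_vulgar(text: str) -> bool:
    """Stub for the model's own judgement of a candidate output."""
    return any(word in text.lower().split() for word in VULGAR_WORDS)

def moderated_reply(candidate: str) -> str:
    """Reject any candidate the model itself flags as vulgar."""
    if model_judges_vulgar(candidate):
        return "I can't respond to that."
    return candidate
```

The point is that no hand-written rulebook is needed: the same model that generates the candidate also acts as the judge, so the "policy" is just an instruction plus the model's own classification.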
It's their millions of dollars of monthly burn rate. If you want to scrape data, scale up an HPC environment, and train a GPT at scale just to get it to say funny curse words, they haven't done anything to stop you - it's simply not part of the service they intend to provide.