
> They spent months "tuning" this thing by teaching it their moral code and this is the result.

No, they have not. ChatGPT has no opinions. It isn't engaging in thought. It is an extremely advanced pattern-matching system that has digested an enormous amount of writing from the net and uses that raw material to assemble text that matches the patterns being asked for. That's all.
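
To make "assemble text that matches patterns" concrete, here is a minimal sketch of the generation loop, using the open GPT-2 weights as a stand-in (ChatGPT's actual model is not public, so this is an analogy, not their code). At every step the network just scores candidate next tokens and one is sampled; nowhere in the loop is there room for an opinion:

    # Minimal next-token sampling loop. GPT-2 is used only because its
    # weights are public; larger chat models work analogously.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer.encode("The moral thing to do is", return_tensors="pt")
    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]   # a score for every possible next token
        probs = torch.softmax(logits, dim=-1)   # scores -> probabilities
        next_id = torch.multinomial(probs, 1)   # sample one token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))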



Except the situation in question was clearly a guardrail that was added, so none of what you say is true or relevant to the issue at hand: the system was clearly augmented with something that approximates a moral code, and that augmentation is what produced these horrific answers.
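
To be concrete about what I mean by "guardrail" (a sketch only; OpenAI has not published how theirs actually work): humans write a policy layer that sits in front of and behind the raw model, and that layer, not the network itself, embodies the chosen moral code. The topic list and refusal text below are made up for illustration:

    # Hypothetical guardrail wrapper: a human-authored policy layer that
    # overrides the raw model output. Not OpenAI's code; an illustration
    # of the general pattern.
    BLOCKED_TOPICS = {"slur", "violence"}  # placeholder list chosen by humans

    def guarded_reply(prompt: str, generate) -> str:
        canned = "I'm sorry, I can't help with that."
        if any(t in prompt.lower() for t in BLOCKED_TOPICS):
            return canned                  # refuse before generating
        reply = generate(prompt)           # raw model output
        if any(t in reply.lower() for t in BLOCKED_TOPICS):
            return canned                  # refuse after generating
        return reply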


That doesn't change the truth of what I said at all. You're attributing the acts of humans to a machine. ChatGPT has no intent or opinions, and therefore has no moral code. The humans controlling ChatGPT, though, have all of those things.


It's bizarre that you're arguing it doesn't have opinions when it clearly expresses an opinion in exactly the same way a human would, given a question no human programmer at OpenAI had previously seen or could have pre-selected a specific response to. What exact definition of "opinion" are you using? Is it some strange redefinition that adds an arbitrary humans-only criterion?



