
People also misuse LLMs.

ChatGPT is forced to give an answer. It's like a human on "truth serum": the drugs don't stop you from lying, they just lower your inhibitions so you blab more without realising it.

The more obscure the topic, the more likely the hallucination. If you ask it about common card games, it gives very good answers.

If you asked a random human about 3 cards from a random board game at gunpoint and said "Talk, now, or you get shot", they'd just start spouting gibberish too.

PS: I asked GPT-4 about that game, and it prefixed every answer with some variant of "I'm not sure about this answer", or it refused outright, stating that it did not know about any specific cards.

For me, it prefixed the answer with just: "As an AI, I do not have opinions or favorites. However, I can share with you three notable and commonly appreciated Power Cards from the game 'Spirit Island', as it existed until my training data cut-off in September 2021. Remember that the 'best' cards can often depend on the specific circumstances in the game, as well as the particular strategy and Spirit you're playing." But then it just listed the cards, with no indication that it was unsure about the details. The card selection was decent, but details like resources, powers, and so on were off, and it all sounded realistic. I shared an example below if you care.
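If you want to try this yourself, here's a rough sketch using the OpenAI Python SDK (v1+). The model name, system prompt, and question are my own illustrative choices, not the exact prompt from my test; the point is just that explicitly giving the model permission to say "I don't know" changes how it answers obscure questions:

    # Rough sketch, assuming the openai Python SDK v1+ and API access.
    # Model name and prompt wording are illustrative, not my exact test.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "If you are not confident about a factual detail, "
                        "say you don't know instead of guessing."},
            {"role": "user",
             "content": "What are three good Power Cards in the board game "
                        "Spirit Island, and what does each one do?"},
        ],
    )
    print(response.choices[0].message.content)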



