
I agree that next-word prediction, while technically correct, doesn’t capture the full nature of what LLMs are optimized for, and the article gets this wrong. In fact they’re optimized for sycophancy and human preference, to produce plausible, feel-good slop that looks good and makes you read it uncritically: the high-fructose corn syrup of reading.

So things like brainstorming or summarization actually give horrible results, optimized to make you feel smart rather than to help you learn or critically appraise anything.

OTOH, for most actual facts, I think LLMs are pretty good and continue to get better (as long as you’re asking direct questions about a real thing).

So yeah, they’re not just next-word predictors, even if that describes how they work; they’re something much more insidious, optimized by world experts to be more convincing than you, whether right or wrong. If your boss is citing LLMs, you’ve already lost; just move on.
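
To make the "optimized for human preference" point concrete: after next-token pretraining, these models are typically fine-tuned against human preference data (reward modeling / RLHF). Here is a minimal sketch of the pairwise Bradley-Terry loss a reward model is trained on; the scores and function name are illustrative assumptions, not anyone's actual training code:

    import math

    def pairwise_preference_loss(score_chosen, score_rejected):
        # Bradley-Terry / logistic loss: -log sigmoid(chosen - rejected).
        # Training on this pushes the reward model to score the response
        # human raters preferred above the one they rejected.
        return math.log(1.0 + math.exp(-(score_chosen - score_rejected)))

    # Toy numbers: when the model mis-ranks the pair, the loss is large.
    print(pairwise_preference_loss(0.2, 1.1))   # ~1.24
    print(pairwise_preference_loss(2.0, -0.5))  # ~0.08

The preference labels come from people picking which of two responses they like better, which is exactly where a bias toward answers that merely read well can leak in.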



> In fact they’re optimized for sycophancy and human preference, to produce plausible, feel-good slop that looks good and makes you read it uncritically: the high-fructose corn syrup of reading.

I'm scared that my instinctive thought was that an LLM absolutely could come up with that metaphor.



