
I think this is one of those "controversial" topics where we're meant to be particularly careful to make substantial comments.


I think it's substantial to say that AI is currently overhyped because it's hitting a weak spot in human cognition. We sympathize with inanimate objects. We see faces in random patterns.

If a machine spits out some plausible-looking text (or some cookie-cutter code copy-pasted from Stack Overflow), the human brain is basically hardwired to go "wow, this is a human friend!" The current LLM trend seems designed to capitalize on this tendency to sympathize.

This is the same thing that made chatbots seem amazing 30 years ago. There's a minimum amount of "humanness" you have to put in the text and then the recipient fills in the blanks.


> If a machine spits out some plausible-looking text (or some cookie-cutter code copy-pasted from Stack Overflow)

This is not a reasonable take on the current capabilities of LLMs.


It’s certainly been my experience with the technology.


But if nearly everyone else is saying this has real value to them and it's produced meaningful code way beyond what's in SO, then doesn't that just mean your experience isn't representative of the overall value of LLMs?


It could also mean a lot of people are misattributing the source of that utility.




