
> In fact it’s not even AI in the more general sense, it’s almost entirely just LLMs that get discussed.

"AGI is right around the corner" "No it's not" "Yes it is, LLMs are the future." "We don't even know if AGI is possible." "LLMs are the future." "No they aren't." "AGI is right around the corner..."

or

"LLMs are really useful." "No they're not" "Yes they are." "No they aren't." with a little bit of "They sucked the last time I used them." "Did you use them recently?" "That's what someone said last time." "But LLMs are really useful" ...

over and over and over.

It isn't even that it's mostly just LLMs being discussed, it's how they're being discussed: they're effectively just a proxy for optimists and pessimists to argue over which worldview is better.

If we were talking about AI in general and not LLMs, the same conversation structures would still pop up.



No, because those conversations are specific to off-the-shelf models used in generalised ways, which basically only applies to LLMs.

When you look at other applications of AI, such as models trained for medical usage or AI used in Hollywood (something I personally have professional experience in), it's a completely different story, because they're bespoke models trained and used for specialised edge cases, rather than Joe Bloggs using an off-the-shelf package to do the same thing as the previous person, albeit slightly differently.



