There's a big difference between those two examples.

Mars is hard, but there are paths forward: more efficient engines, higher-energy-density fuels, lighter materials, better shielding, and so on. With enough time and money it's achievable, and we have an understanding of how to get from what we have now to what makes Mars possible.

With LLMs, there is no path from LLM -> AGI. No amount of time, money, or compute will make that happen. They are fundamentally a very 'simple' tool that is only really capable of one thing: predicting text. There is no intelligence. There is no understanding. There is no creativity or problem solving or thought of any kind. They just spit out text based on weighted probabilities. If you want AGI, you have to go in a completely different direction that has no relationship with LLM tools.
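
To make the "weighted probabilities" point concrete, here's a toy sketch of the generation loop in Python. The tokens and probability table are invented for illustration; a real LLM conditions on a long context with billions of parameters, but the shape of the loop is the same:

    import random

    # Hypothetical learned table: P(next token | current token).
    # A real model computes this distribution over ~100k tokens
    # from the entire preceding context.
    bigram_probs = {
        "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
        "cat":  {"sat": 0.6, "ran": 0.4},
        "dog":  {"ran": 0.7, "sat": 0.3},
        "moon": {"rose": 1.0},
        "sat":  {"down": 1.0},
        "ran":  {"home": 1.0},
        "rose": {"slowly": 1.0},
    }

    def generate(token, steps=3):
        out = [token]
        for _ in range(steps):
            dist = bigram_probs[token]
            # No reasoning, no world model: just a weighted draw
            # from the learned distribution.
            token = random.choices(list(dist), weights=list(dist.values()))[0]
            out.append(token)
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat down"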

Don't get me wrong, the work that's been done so far took a long time and is incredibly impressive, but it's a lot more smoke and mirrors than most people realize.



I'll grant that we could send humans to Mars sooner if we really wanted to. My point is that not achieving a bigger dream doesn't make current progress a hype wave followed by a winter.

And "LLM's just make plausible looking but incorrect text" is silly when that text is more correct than the average adult a large percentage of the time.


_Unless_ intelligence is really mostly an emergent property of something very similar to language, in which case we're most of the way there.



