
> LLMs don't learn from a project.

How long do you think that will remain true? I've bootstrapped some workflows with Claude Code where it writes a markdown file at the end of each session for its own reference in later sessions. It worked pretty well. I assume other people are developing similar memory systems that will be more useful and robust than anything I could hack together.
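
Roughly the shape of it, as a toy sketch (made-up file name and helper; in practice Claude Code writes and reads the notes file itself rather than running a script like this):

    # Sketch: append a dated session summary to a per-project notes file
    # so a later session can read it back in. Names are hypothetical.
    from datetime import date
    from pathlib import Path

    NOTES = Path("PROJECT_NOTES.md")

    def save_session_notes(summary: str) -> None:
        """Append a dated summary for the next session to pick up."""
        entry = f"\n## Session {date.today().isoformat()}\n{summary}\n"
        with NOTES.open("a", encoding="utf-8") as f:
            f.write(entry)

    save_session_notes("Refactored auth module; tests in tests/test_auth.py still flaky.")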



For LLMs? Mostly permanently. This is a limitation of the architecture: the weights are frozen after training, so anything the model is supposed to "remember" has to be fed back in through the context window. Yes, there are workarounds, including ChatGPT's "memory" or your technique (which I believe are mostly equivalent), but they are limited, slow, and expensive.
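
Under the hood both amount to the same trick: stuff the saved notes back into the prompt on every call, so you pay for them in context tokens every single time. A rough sketch of what I mean (hypothetical names, not any particular API):

    # Sketch: "memory" as prompt stuffing. Every character of saved notes
    # is re-read (and re-billed as tokens) on every request.
    from pathlib import Path

    NOTES = Path("PROJECT_NOTES.md")
    MAX_NOTE_CHARS = 8_000  # crude budget so old notes don't crowd out the task

    def build_prompt(task: str) -> str:
        """Prepend saved notes to the prompt; their cost recurs on every call."""
        notes = NOTES.read_text(encoding="utf-8")[-MAX_NOTE_CHARS:] if NOTES.exists() else ""
        return f"Project notes from earlier sessions:\n{notes}\n\nCurrent task:\n{task}"

    print(build_prompt("Fix the flaky auth tests."))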

Many of the inventors of LLMs have moved on to (what they believe are) better models that would handle this kind of learning much better. I guess we'll see in 10-20 years whether they have succeeded.


Permanently.

There’s an interplay between two different ideas of reliability here.

LLMs can only produce output that falls somewhere within the boundaries of their training distribution.

We can get better at expanding the area within these boundaries.

It will still not be reliable in the way code is.



