Permanently.

There’s an interplay between two different ideas of reliability here.

LLMs can only produce output that falls somewhere within the boundaries of their training data.

We can get better at expanding the area those boundaries enclose.

But it still won't be reliable in the way code is.
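
Roughly, the contrast looks like this (a minimal sketch; toy_llm is a made-up stand-in for a sampled model call, not any real API):

    # Contrast: deterministic code vs. output drawn from a distribution.
    # The "model" here is only a toy distribution over completions, since the
    # point is just that repeated identical calls can differ.
    import random

    def add(a: int, b: int) -> int:
        # Ordinary code: the same inputs always give the same output.
        return a + b

    def toy_llm(prompt: str, temperature: float = 0.8) -> str:
        # Hypothetical stand-in for an LLM call: the answer is sampled from a
        # distribution shaped by training data, not computed deterministically.
        completions = ["4", "four", "2 + 2 = 4", "about 4"]
        weights = [0.6, 0.2, 0.15, 0.05]
        if temperature == 0:
            return completions[0]  # greedy decoding repeats itself...
        return random.choices(completions, weights=weights)[0]  # ...sampling does not

    assert all(add(2, 2) == 4 for _ in range(1000))          # holds unconditionally
    print({toy_llm("What is 2 + 2?") for _ in range(1000)})  # usually several distinct answers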


