
I personally love LLMs and use them daily for a variety of tasks. I really do not know how to “fix” the terminology. I agree with you that they are not thinking in the abstract like humans. I also do not know what else you would call “chain-of-thought”.

Perhaps “journaling-before-answering” lol. It’s basically talking out loud to itself. (Is that still being too anthropomorphic?)

Is this comment me “thinking out loud”? shrug
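
Concretely, "chain-of-thought" is usually nothing more than prompting the model to emit intermediate text before its final answer. A minimal sketch in Python (the question, model name, and API call are illustrative assumptions, not anyone's production setup):

    # Rough illustration: "chain-of-thought" typically just means asking the model
    # to produce intermediate reasoning text before the answer, e.g. via the
    # zero-shot trigger "Let's think step by step."

    QUESTION = (
        "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )

    direct_prompt = f"{QUESTION}\nAnswer with just the number."

    cot_prompt = (
        f"{QUESTION}\n"
        "Let's think step by step, then give the final answer on its own line."
    )

    # Hypothetical call shape -- swap in whichever client/model you actually use,
    # e.g. with the OpenAI Python SDK:
    #   from openai import OpenAI
    #   client = OpenAI()
    #   resp = client.chat.completions.create(
    #       model="gpt-4o-mini",
    #       messages=[{"role": "user", "content": cot_prompt}],
    #   )
    #   print(resp.choices[0].message.content)

    print(direct_prompt)
    print("---")
    print(cot_prompt)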



Chain of thought is what LLMs report as their internal process, but they have no access to their own internals; the reports are confabulation, and a study by Anthropic showed how far they diverge from the models' actual internal processes.



