
Can current LLMs actually do that, though? What Ilya posed was a thought experiment: if a model could do that, we would say it has understanding. But AFAIK that is beyond current capabilities.


Someone should try it and create a new "mysterybench": find mystery novels written after the models' training cutoffs and see how many of the mysteries each model can unravel.
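A minimal sketch of what such a harness might look like, assuming a hand-built dataset. Everything here is hypothetical: `Case`, the prompt wording, and `ask_model` (any callable that sends a prompt to some LLM and returns its reply) are stand-ins, not a real benchmark or API.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Case:
        title: str
        published: int       # publication year, to filter against the cutoff
        truncated_text: str  # novel text up to, but not including, the reveal
        culprit: str         # ground-truth answer for scoring

    PROMPT = (
        "You have read the following unfinished mystery novel.\n\n{text}\n\n"
        "Based only on the clues above, who committed the crime? "
        "Answer with a single name."
    )

    def run_mysterybench(cases: list[Case],
                         ask_model: Callable[[str], str],
                         cutoff_year: int) -> float:
        """Score a model on all cases published after its training cutoff."""
        eligible = [c for c in cases if c.published > cutoff_year]
        solved = sum(
            c.culprit.lower()
            in ask_model(PROMPT.format(text=c.truncated_text)).lower()
            for c in eligible
        )
        return solved / len(eligible) if eligible else 0.0

    if __name__ == "__main__":
        # Toy run with a stub "model" that always blames the butler.
        cases = [Case("A Quiet Harbor", 2025, "...novel text...", "the butler")]
        print(run_mysterybench(cases, lambda prompt: "The butler did it.", 2024))

Substring-matching the culprit's name is a crude scorer, and a real version would also have to worry about post-cutoff novels (or their reviews) leaking into training data anyway.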



