
I ran into one of the most frightening instances of this recently with Gemini 2.5 Pro.

It insisted that Go 1.25 had made a breaking change to the filepath.Join API. It hallucinated documentation to that effect on both the package documentation page and the release notes. It refused to use web search to correct itself. When I finally got it to read the page (by convincing it that it was another AI checking the previous AI's work), it claimed that the Go team had modified their release notes after the fact to remove the information about the breaking change.
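
For what it's worth, filepath.Join's documented behavior is unchanged: it joins its arguments with the OS path separator, ignores empty elements, and cleans the result. A quick sanity check (toy paths, nothing from my actual project):

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // Joins elements with the OS separator and cleans the result.
        // On Linux/macOS this prints "a/b/c".
        fmt.Println(filepath.Join("a", "b", "c"))

        // Empty elements are ignored, so this prints "a/b".
        fmt.Println(filepath.Join("a", "", "b"))
    }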

I find myself increasingly convinced that regardless of the “intelligence” of LLMs, they should be kept far away from critical systems.



I've found that when any of these agents start going down a really wrong path, you just have to start a new session. I don't think I've ever had success at "redirecting" one once it starts doing weird shit, and I assume this is a limitation of next-token prediction, since the wrong path is still in the context window. When this happens I often have success telling it to summarize the TODOs/next steps, editing them myself to remove any weird or incorrect goals, and then pasting them into a new session.


Like social media, they'll seem benign until they've enervated the populace and ushered in a digital fascism.



