Sometimes, it takes some effort to get to the rewarding part of an activity. A little pressure is not bad when it's helping you reach your goals. Millions of people force themselves to go to the gym.
People enjoy making things with their hands. They love conveying their emotions and adding their flair. If the masters did not deter people from picking up a paintbrush, why would AI slop?
This is a task I think is well suited to a small subagent. It can take the context beating of querying for relevant tools and return only what is necessary to the main agent thread.
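Roughly what I mean, as a sketch (the Tool type and find_relevant_tools helper are made up for illustration, not any real API): the subagent eats the cost of scanning the full tool registry and hands back only the top few matches.

    # Hypothetical sketch: a small "tool lookup" helper that absorbs the context
    # cost of scanning a large tool registry and returns only the relevant few.
    from dataclasses import dataclass

    @dataclass
    class Tool:
        name: str
        description: str

    def find_relevant_tools(query: str, registry: list[Tool], limit: int = 3) -> list[Tool]:
        # Naive keyword match; a real subagent would use the LLM to rank candidates.
        terms = query.lower().split()
        scored = [
            (sum(t in tool.description.lower() for t in terms), tool)
            for tool in registry
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [tool for score, tool in scored[:limit] if score > 0]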
When I have a bug I’m iterating on, it’s much easier and faster to have it write out a Playwright script. That way it doesn’t have to waste time or tokens performing the same actions over and over again.
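Something along these lines is what I have in mind (the URL and selectors below are invented for illustration); once the script exists, rerunning the repro costs no extra tokens.

    # Hypothetical repro script; replays the same steps every time.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("http://localhost:3000/admin")   # assumed local dev server
        page.fill("#search", "widget")             # assumed form field
        page.click("button[type=submit]")
        page.wait_for_selector(".results")
        page.screenshot(path="repro.png")          # capture the buggy state
        browser.close()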
Download the Godot docs and tell the skill to use them. It won’t be able to fit the entire docs in the context, but that’s not the point. Depending on the task, it will search for what it needs.
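The idea in sketch form (the docs path and helper name are assumptions; point it at wherever you unpacked the download):

    # Rough sketch of "search the docs, don't load them all".
    from pathlib import Path

    def search_docs(term: str, root: str = "docs/godot", limit: int = 5) -> list[str]:
        hits = []
        for path in Path(root).rglob("*.rst"):          # Godot docs ship as reStructuredText
            text = path.read_text(errors="ignore")
            if term.lower() in text.lower():
                hits.append(str(path))
                if len(hits) >= limit:
                    break
        return hits

    print(search_docs("NavigationAgent2D"))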
Presumably cargo clippy --fix was the intention. Not everything is auto-fixable, though, and that's what LLMs are useful for: the squishy, hard-to-autofix things.
You can use the built-in task agent. When you have a plan and are ready for Claude to implement it, just say something along the lines of “begin implementation, split each step into their own subagent, run them sequentially”.
Subagents are where Claude Code shines and Codex still lags behind. Claude Code can do some things in parallel within a single session with subagents; Codex cannot.
Yeah, in parallel. They don't call it yolo mode for nothing! I have Claude configured to commit units of work to git, and after reviewing the commits by hand, they're cleanly separated by file. The todos don't conflict in the first place, though; e.g. changes to the admin API code won't conflict with changes to the submission frontend code, so that's the limited human mechanism I'm using for that.
I'll admit it's a bit insane to have it make changes in the same directory simultaneously. I'm sure I could ask it to use git worktrees and work in separate directories, but I haven't needed to try that yet, so I won't comment on how well it would actually do with that.
What I'm saying is that the search results part of the page, with or without the summary, should in theory be the same.
So if the other person saw a difference in the results returned, it might only be because of the impact of the question mark character itself on the search index.
I personally love LLMs and use them daily for a variety of tasks. I really do not know how to “fix” the terminology. I agree with you that they are not thinking in the abstract like humans. I also do not know what else you would call “chain-of-thought”.
Perhaps “journaling-before-answering” lol. It’s basically talking out loud to itself. (Is that still being too anthropomorphic?)
Chain of thought is what LLMs report to be their internal process, but they have no access to their internal process. Their reports are confabulation, and a study by Anthropic showed how far they are from the actual internal processes.
It’s all a matter of perspective and personal goals, no?