Why not? If one of your goals is to try to relax and/or meditate more, then I feel it’s a valid list entry.

It’s all a matter of perspective and personal goals, no?


Because if you read the conclusion, they say that having this list puts more pressure on them.

You have missed a good opportunity to be curious, rather than judgmental. Mind the HN guidelines: https://news.ycombinator.com/newsguidelines.html

Sometimes, it takes some effort to get to the rewarding part of an activity. A little pressure is not bad when it's helping you reach your goals. Millions of people force themselves to go to the gym.

People enjoy making things with their hands. They love conveying their emotions and adding their flair. If the masters did not deter people from picking up a paintbrush, why would AI slop?


I personally replaced my Playwright MCP with this. It seems to use less context and is generally more reliable.

This is a task I think is suited to a small subagent. It can take the context beating to query for relevant tools and return only what is necessary to the main agent thread.


When I have a bug I’m iterating on, it’s much easier and faster to have it write out the Playwright script. That way it doesn’t waste time or tokens performing the same actions over and over again.

Think of it as TDD.
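
For example, something like this (a minimal sketch; the URL, selectors, and expected text are hypothetical stand-ins for whatever the bug involves):

    // repro.spec.ts: a throwaway script that replays the bug's repro steps,
    // so the agent doesn't re-drive the browser on every iteration.
    import { test, expect } from '@playwright/test';

    test('repro: submitting the form shows a confirmation', async ({ page }) => {
      await page.goto('http://localhost:3000/form');   // hypothetical route
      await page.fill('#email', 'user@example.com');   // hypothetical selector
      await page.click('button[type="submit"]');
      // The assertion that currently fails; iterate on the fix until it passes.
      await expect(page.locator('.confirmation')).toContainText('Thanks');
    });

Run it with npx playwright test repro.spec.ts and keep fixing until it goes green, the same loop as a failing unit test.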


Download the Godot docs and tell the skill to use them. It won’t be able to fit the entire docs in the context, but that’s not the point. Depending on the task, it will search for what it needs.


Claude can then proceed to fix the issues for you.


Presumably cargo clippy --fix was the intention. Not everything is auto-fixable, though, which is where LLMs are useful: the squishy, hard-to-autofix things.


You can use the built-in task agent. When you have a plan and are ready for Claude to implement it, just say something along the lines of “begin implementation, split each step into its own subagent, run them sequentially”.


Subagents are where Claude Code shines and Codex still lags behind. Claude Code can do some things in parallel within a single session with subagents; Codex cannot.


By parallel, do you mean editing the codebase in parallel? Does it use some kind of mechanism to prevent collisions (e.g. work trees)?


Yeah, in parallel. They don't call it yolo mode for nothing! I have Claude configured to commit units of work to git, and after reviewing the commits by hand, they're cleanly separated by file. The todos don't conflict in the first place, though; e.g. changes to the admin API code won't conflict with changes to the submission frontend code, so that's the limited human mechanism I'm using for that.

I'll admit it's a bit insane to have it make changes in the same directory simultaneously. I'm sure I could ask it to use git worktrees and have it work in separate directories, but I haven't needed to try that yet, so I won't comment on how well it would actually do.


I personally do not do any writes in parallel, but it works great for read operations like investigating multiple failing tests.


https://apps.apple.com/us/app/wipr-2/id1662217862

Wipr 2

It’s a one-time-purchase app that works on both Mac and iOS.


I've also been happy with Wipr on both those OSes.


It’s a feature of Kagi. Adding the question mark does invoke AI summaries.

https://help.kagi.com/kagi/ai/quick-answer.html


I know about the summary.

What I’m saying is that the search results part of the page should, in theory, be the same with or without the summary.

So if the other person saw a difference in the results returned, it might only be because of the impact of the question mark character itself on the search index.


Ahh thanks for the clarification, I misunderstood!


I personally love LLMs and use them daily for a variety of tasks. I really do not know how to “fix” the terminology. I agree with you that they are not thinking in the abstract like humans. I also do not know what else you would call “chain-of-thought”.

Perhaps “journaling-before-answering” lol. It’s basically talking out loud to itself. (Is that still being too anthropomorphic?)

Is this comment me “thinking out loud”? shrug


Chain of thought is what LLMs report to be their internal process, but they have no access to their internal process ... their reports are confabulation, and a study by Anthropic showed how far they are from the actual internal processes.

