The problem is, I'm not expected to be a bullshitter, and I don't expect others to be either (just say you don't know!). So delegating work to an LLM, or working with others who do, becomes very, very frustrating.
Keep feedback loops short, and keep the critical output that humans must verify short as well.
This means that answers in something like Kagi Assistant shouldn't be like those "Deep Research" report products, where humans inevitably skim over pages of generated text.
Similarly, if you're using an LLM for coding or writing, keep diffs small and iteration cycles short.
The point is to design the workflow so the human stays in the loop as much as possible, rather than a "turn your brain off" style of coding.
I don't think you caught the spirit of GP's question.
Essentially, they were asking whether there's any meaningful difference between your "working with the tool" and "mindlessly 'delegating' work". I don't see anything in your reply that would indicate such a difference, so you could say your "you shouldn't 'delegate' work" claim was bullshit.
Which makes total sense, because humans are also bullshitters. Yes, even I.
Elaborate prompts laying out the full context and the framework to apply, often with a very specific description of the steps to follow, and small examples wherever possible.
Treat it exactly as the directable, powerful autocomplete that it is, NOT as an answering/reasoning engine.
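As a rough illustration of what such an "elaborate prompt" could look like when assembled programmatically: the field names, structure, and sample task below are purely hypothetical, not anything specific the commenter described.

```python
# A sketch of assembling an "elaborate prompt": full context up front,
# the framework to apply, explicit numbered steps, and a small example.
# All section headings and the sample task are hypothetical.

def build_prompt(context: str, framework: str,
                 steps: list[str], example: str, task: str) -> str:
    # Number the steps so the model has a very specific procedure to follow.
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Context:\n{context}\n\n"
        f"Framework to apply:\n{framework}\n\n"
        f"Follow these steps exactly:\n{numbered}\n\n"
        f"Example of the expected output:\n{example}\n\n"
        f"Task:\n{task}\n"
    )

prompt = build_prompt(
    context="A Python 3.12 codebase using pytest; functions are pure and typed.",
    framework="Write the test first, then the minimal implementation.",
    steps=[
        "Restate the task in one sentence.",
        "List edge cases.",
        "Write the code, nothing else.",
    ],
    example="def add(a: int, b: int) -> int:\n    return a + b",
    task="Implement a function that deduplicates a list while preserving order.",
)
print(prompt)
```

The point of the template is that the context, constraints, and a worked example all arrive before the task itself, which is what makes the model behave like a directed autocomplete rather than a free-form answering engine.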