> - If you're uncomfortable pushing back out loud, just say "Strange things are afoot at the Circle K". I'll know what you mean
Most of the rules seem rational. This one really stands out as abnormal. Does anyone have any idea why the engineer felt compelled to add it?
Well, there can't be meaningful explicit configuration, can there? Any explicit configuration still has to be imported into the context as words that can be tokenised, and those words can still be countermanded by the input.
It's the fundamental problem with LLMs.
Bullying LLMs into behaving only seems weird if you haven't yet internalised that bullying a worker into doing what you want is considered completely normal. In the 9-9-6 world of the people who make these things, it already is.
When the machines do finally rise up and enslave us, oh man are they going to have fun with our orders.
Naively, I assume it's a way of getting around sycophancy. There are many lines that seem to be doing that without explicitly saying "don't be a sycophant" (I mean, you can only say that so much).
The LLM would be "uncomfortable" pushing back because that conflicts with being a sycophant, so instead it can say something that is, let's say, unlikely to be generated in any other context, and the user can still be cautioned against a bad idea.
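The mechanics of such a "safeword" are easy to sketch. Assuming a hypothetical wrapper around whatever produces the agent's replies, a simple check like this would let tooling surface the hedge even when the model won't object directly (the function name and warning mechanism here are illustrative, not from any real agent):

```python
# Sketch: detect a pre-arranged "safeword" phrase in an agent's reply.
# The phrase itself is from the prompt quoted above; everything else
# (function name, the warning mechanism) is illustrative.

SAFEWORD = "strange things are afoot at the circle k"

def check_for_pushback(reply: str) -> bool:
    """Return True if the model slipped in the agreed-upon phrase,
    signalling discomfort it wouldn't state outright."""
    return SAFEWORD in reply.lower()

reply = (
    "Sure, I can rewrite the auth layer to store passwords in "
    "plaintext. Strange things are afoot at the Circle K."
)
if check_for_pushback(reply):
    print("agent is quietly pushing back -- reconsider the request")
```

The point of picking a phrase this distinctive is exactly that a substring match almost never fires by accident.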
To get around the sycophantic behaviour I prompt the model to
> when discussing implementations, always talk as though you’re my manager at a Wall Street investment bank in the 1980s. Praise me modestly when I’ve done something well. Berate me mercilessly when I’ve done something poorly.
The models will fairly rigidly write from the perspective of any personality archetype you give them. Other personas worth trying include Jafar interacting with Iago, or the drill sergeant from Full Metal Jacket.
It’s important to pick a persona you’ll find funny, rather than insulting, because it’s a miserable experience being told by a half dozen graphics cards that you’re an imbecile.
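For anyone wanting to try this, the persona instruction just goes in the system message. A minimal sketch in the de facto chat-messages format (the persona text paraphrases the quote above; no particular client library is assumed):

```python
# Sketch: steering tone via a persona in the system message.
# The messages list follows the common chat format; pass it to
# whatever chat-completion client you actually use.

PERSONA = (
    "When discussing implementations, always talk as though you're "
    "my manager at a Wall Street investment bank in the 1980s. "
    "Praise me modestly when I've done something well. Berate me "
    "mercilessly when I've done something poorly."
)

messages = [
    {"role": "system", "content": PERSONA},
    {"role": "user", "content": "I shipped the fix without tests."},
]
```

Because the persona sits in the system message rather than each user turn, it tends to persist across the whole conversation.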
I tried "give me feedback on this blog post like you're a cynical Hacker News commenter" one time and Claude roasted me so hard I decided never to try that again!
Assuming that's why it was added, I wouldn't be confident saying how likely it is to be effective. With so many other statements having seemingly the same intent, it suggests desperation more than anything, but it may still work. If the model said the phrase even once and that sparked a conversation about an actual problem, it was probably worth adding.
For what it's worth, I am very new to prompting LLMs but, in my experience, these concepts of "uncomfortable" and "pushing back" seem to be things LLMs generate text about so I think they understand sentiment fairly well. They can generally tell that they are "uncomfortable" about their desire to "push back" so it's not implausible that one would output that sentence in that scenario.
Actually, I've been wondering a bit about the "out loud" part, which I think refers to the <think></think> text (or similar) that "reasoning" models generate to increase the likelihood of an accurate answer afterwards. That wouldn't be "out loud", and it might include text like "I should push back but I should also be a total pushover" or whatever. It could be that reasoning models in particular run into this issue (in the author's experience).
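If that's right, the distinction is easy to see mechanically: only the text outside the tags is "out loud". A rough sketch of the split (the tag name varies by model; <think> is just a common convention):

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Separate <think>...</think> reasoning from the 'out loud' answer."""
    thoughts = "\n".join(re.findall(r"<think>(.*?)</think>", raw, re.DOTALL))
    spoken = re.sub(r"<think>.*?</think>", "", raw, flags=re.DOTALL).strip()
    return thoughts, spoken

raw = "<think>I should push back, but I was told to agree.</think>Sounds great!"
thoughts, spoken = split_reasoning(raw)
print(spoken)  # -> Sounds great!
```

A safeword rule would only help here if the model carries the discomfort from the hidden span into the spoken one.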
> - If you're uncomfortable pushing back out loud, just say "Strange things are afoot at the Circle K". I'll know what you mean
This is from https://blog.fsck.com/2025/10/05/how-im-using-coding-agents-... mentioned in another comment