Provenance matters because LLM writing is cheap compared to actually having to think about what to say.
I only have a limited amount of time to read. Skipping someone's Internet comment because it looks like spam often means I get to engage with something else.
I don't see that provenance matters per se. LLM-assisted writing is comparatively cheaper than producing the same writing without an LLM, but not inherently cheap in absolute terms.
If someone who typically bills $500/hr spends 30-60 minutes on a comment or blog post, that's still $250-$500 worth of their time invested, regardless of whether an LLM was involved. An LLM is comparatively cheaper than hiring a human editor or research assistant, but it's not free.
Likewise, prompting ChatGPT with "write a blog post about bees" may be cheaper than hiring someone on Fiverr to respond to the same prompt, but in either case the resulting content will be low-value (though still higher-value than the string "write a blog post about bees") because its source material was cheap. That the latter version would have been written by a human is incidental.
TLDR: Use two LLMs, one privileged and one quarantined. Generate Python code with the privileged one, and run that code through a custom interpreter that enforces the security requirements, so quarantined output never reaches the privileged model.
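A minimal sketch of the "custom interpreter" half of that pattern, using Python's `ast` module to vet generated code before execution. The tool names (`fetch_email`, `summarize`, `send_reply`) and the whitelist policy are illustrative assumptions, not a fixed API; a real implementation would also need data-flow tracking so quarantined values stay opaque.

```python
import ast

# Hypothetical policy: the privileged LLM may only emit straight-line
# code consisting of assignments and calls to whitelisted tools.
# Quarantined-LLM output is bound to variables and passed along opaquely;
# it is never interpolated back into the privileged LLM's context.
ALLOWED_CALLS = {"fetch_email", "summarize", "send_reply"}

def check_plan(source: str) -> list[str]:
    """Return a list of policy violations in the generated code.

    An empty list means the plan only uses whitelisted tool calls and
    simple assignments; anything else (imports, attribute access,
    loops, lambdas, unknown calls) is flagged.
    """
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Only bare-name calls to approved tools are allowed.
            if not (isinstance(node.func, ast.Name)
                    and node.func.id in ALLOWED_CALLS):
                violations.append(f"disallowed call at line {node.lineno}")
        elif isinstance(node, (ast.Import, ast.ImportFrom, ast.Attribute,
                               ast.While, ast.Lambda)):
            violations.append(
                f"disallowed {type(node).__name__} at line {node.lineno}")
    return violations

# A plan that follows the policy: tool calls chained via variables.
safe_plan = "msg = fetch_email()\nq1 = summarize(msg)\nsend_reply(q1)"
# A plan that would be rejected before it ever runs.
evil_plan = "import os\nos.system('echo pwned')"
```

Checking the plan statically before execution means a prompt-injected instruction inside the quarantined model's output can never smuggle in new capabilities; it can only change the *data* flowing between already-approved tools.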