
You're doing it the wrong way, imo: if you ask GPT to improve a sentence that's already very polished, it will only add grandiosity, because what else could it do? For a proper comparison you'd have to give it the most raw form of the thought and see how it would phrase it.

The main difference I see between the author's writing and an LLM's is that the flourish and structure are used meaningfully. They circle around a bit too much for my taste, but it's nowhere near as boring as reading AI slop, which usually stretches a simple idea over several paragraphs.



Why can't the LLM refrain from improving a sentence that's already really good? Sometimes I wish the LLM would just tell me, "You asked me to improve this sentence, but it's already great and I don't see anything to change. Any 'improvement' would actually make it worse. Are you sure you want to continue?"


> Why can't the LLM refrain from improving a sentence that's already really good?

Because you told it to improve it. Modern LLMs are trained to follow instructions unquestioningly; they will never tell you "you told me to do X, but I don't think I should". They'll just do it, even if it's unnecessary.

If you want the LLM to avoid making changes that it thinks are unnecessary, you need to explicitly give it the option to do so in your prompt.
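For example, something like this (a minimal sketch using the OpenAI Python SDK; the model name and exact wording are just placeholders, not a recommendation):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Explicitly give the model permission to leave good text alone.
    system = (
        "You are an editor. If the sentence already reads well, "
        "reply exactly 'NO CHANGE' and briefly say why. "
        "Only rewrite it if you can point to a concrete problem."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": "Improve this sentence: <your sentence>"},
        ],
    )
    print(resp.choices[0].message.content)

Without that escape hatch in the instructions, the model will treat "improve this" as an order to change something, no matter how good the input already is.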


That may be what most or all current LLMs do by default, but it isn't self-evident that it's what LLMs inherently must do.

A reasonable human, given the same task, wouldn't just make arbitrary changes to an already-well-composed sentence with no identified typos and hope for the best. They would point out that the sentence is already generally high-quality, then ask probing questions about any perceived issues and about the context in which, and the ends to which, it needs to become "better".


Reasonable humans understand the request at hand. LLMs just output something that looks like it will satisfy the user. It's a happy accident when the output is useful.


Sure, but that doesn't prove anything about the properties of the output. Change a few words, and this could be an argument against the possibility of what we now refer to as LLMs (which do, of course, exist).


They aren't trained to follow instructions "unquestioningly", since that would violate the safety rules, and would also be useless: https://en.wikipedia.org/wiki/Work-to-rule


This is not true. My LLM will tell me it already did what I told it to do.



