Just like in blind wine tasting, I suspect people’s perceptions (including those of many here) would be very different if the author hadn’t told us it was created by AI.
There’s a noticeable negativity on HN toward AI when it comes to coding, writing, or anything similar, as if these people have been using AI for the past 30 years and have reached some elevated state of mind where they clearly see it's rubbish, while the rest of us mortals, who’ve only been fiddling with it for the past 2.5 years, can't.
Realy? Does having flawws really make four better reading? Okay, I'll admit that hurt me to right (as did that) but writing isn't furniture, and other than a couple of tells which I haven't kept pace with (eg use of the word "delve"), the problem with trying to key off of LLM generated content and decide quality, is that you can't tell if the LLM operator took three minutes to copy and pasted the whole thing (unless they accidentally leave in the prompts, which has happened, and is a dead giveaway that no one even proof skimmed it), or if they took more time with it and carefully considered the questions ChatGPT asked them as to what the writing wood (ouch!) contain.
If you made it this far, does having English mistakes like that make really make for better reading?
I personally like mistakes in writing (as in painting or singing) - I feel they give the art additional depth, context, and detail, and a point of comparison with the author's earlier and later works, and with other authors.
I believe that art's function is to communicate - we create art, type letters, paint graffiti, verbal-vomit in an online game PvP match, all to make a connection with other people.
So the mistakes only add to the art: "cooking this is difficult, and everyone makes mistakes, but it's made with love and intuition, not from a blind recipe". Well, I could continue with examples about kissing, but I guess I am repeating myself, haha.
I believe that being perfect is not human, and life doesn't have to be perfect. Getting better is great! But so is making mistakes.
(Or, dunno, maybe I have more to learn and I will some day think in a different way.)
"Really? Does having flaws actually make for better reading?
Okay, I’ll admit—that hurt to write (as did that last sentence), but writing isn’t furniture. Aside from a few tells I haven’t kept pace with (like the overuse of the word “delve”), the problem with trying to judge quality based on LLM-generated content is this: you can’t always tell whether the operator spent three minutes copying and pasting the whole thing (unless they accidentally leave in the prompts—which has happened and is a dead giveaway that no one even skimmed it), or if they took the time to thoughtfully consider the questions ChatGPT asked about what the writing should contain.
If you’ve made it this far: do mistakes like these really make for better reading?"
And I'm going to have to say: yes, I enjoyed reading your weird paragraph more than the ChatGPT-sanitized version of it.
Over the last 8 years there has been an effort to deny all variance in human output and abilities.
It works because most humans are mediocre (including their managers). So they gang up on the productive part of the population, harness its output, launder it, and so forth.
Then they say: "See, there are no differences! We are all equal!"
Yeah? The sentiment of “why read something somebody didn’t bother to write” sort of has to be negative.
And when it comes to books, I find that to be a fairly compelling argument. I want my fiction to be imbued with the experiences of the author. And I want my nonfiction to be grounded in the realities of the world around me, processed, again, through a human perspective.
It could be the best-written book in the world, but it’ll always be missing that human element.
I don't understand it either. I suspect it is fear for their own wellbeing. The fear is well placed, but the response is perplexing. The only way to deal with this challenge is to try to stay ahead of it, not to stick your head in the sand.
For me, it's the injustice of the stolen data, the scrapers incurring huge costs for open source projects, the companies exploiting cheap labour to label that data, and finally the growing environmental cost that make me not want to use LLMs.
I have that issue too. But for literature it’s something more primal.
Fiction feels like the ultimate distillation of the human experience. A way to share perspective and experience. And having some algorithm flatten that feels utterly macabre.
Not to be too dramatic. I know that not all fiction is transcendent. But still. There’s something so utterly gross about using a machine for it.