
The data in this research is interesting. The conclusions are very speculative and don't sound well informed. The problem with nudity, or clickbait in general, is simply that it appeals to most people, statistically, and thus tends to emerge on its own as an apparent ranking factor even when there is nothing in the algorithm itself to boost it specifically. The simpler models end up being the more clickbaity ones. If anything, when algorithms do take nudity into account explicitly, my guess is that it's primarily to prevent too much of it (e.g. porn).

Another problem with this sort of research in general is that it seems to take for granted a moral framework in which it is irresponsible to give people what they (statistically) most want to see, if some people don't want to see it or if it doesn't match what the people involved in production want to project. I'm not sure what grounds that framework; it seems to describe media broadly. I received a promotional copy of the Sports Illustrated swimsuit edition, and it's pretty transparent that it's produced with a "sex sells" mindset. I dislike it, but I dislike media censorship more, so I just don't use Instagram or subscribe to lewd magazines. This group would be more convincing if they explored the moral underpinnings of their outrage and whether what they recommend is morally motivated self-censorship by the platform ("even though we know users want to see skin, we think they shouldn't, so we won't show it"), plain old "white box" ranking algorithms like reverse chron (which is completely unbiased from the point of view of the metric in the nudity article, but just as susceptible to abuse in other ways), or something else...
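
To make the contrast concrete, here's a minimal sketch in Python of reverse-chron ordering versus engagement-optimized ranking. The posts, timestamps, and predicted_engagement scores are invented for illustration, not taken from the article or any real ranking system:

    # Hypothetical feed items; fields and scores are made up for this example.
    from datetime import datetime

    posts = [
        {"id": 1, "created": datetime(2023, 5, 1, 9, 0), "predicted_engagement": 0.2},
        {"id": 2, "created": datetime(2023, 5, 1, 12, 0), "predicted_engagement": 0.9},
        {"id": 3, "created": datetime(2023, 5, 1, 15, 0), "predicted_engagement": 0.4},
    ]

    # "White box" reverse chronological: newest first, no model in the loop.
    reverse_chron = sorted(posts, key=lambda p: p["created"], reverse=True)

    # Engagement-optimized: whatever a model predicts people will react to most.
    engagement_ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

    print([p["id"] for p in reverse_chron])      # [3, 2, 1]
    print([p["id"] for p in engagement_ranked])  # [2, 3, 1]

The point is just that reverse chron is transparent and hard to game via content choices, while the engagement-optimized ordering reflects whatever the metric rewards, clickbait included.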

EDIT: I don't mean this to defend the status quo. I'm just as outraged as the next person about exploitative social media. I just think it's not clear what the moral framing is and I think it's interesting to discuss it.



"irresponsible to give people what they want to see"

How do you know what people want? Because someone in charge told you so?

'Maximising engagement' is like how airports force you to go through a shopping mall on a winding path on the way to your flight. Or imagine you design a really confusing shopping center where people get lost, then measure the time wasted and call it 'engagement'.

They waste your time in hopes it will push you to buy something / click on ads.

It's not the first time business leadership has chased a fad on bad data; they do it all the time. Developers do the same fashion-chasing with their tech stacks.

We have proof that 'sex sells' is a myth, that open-plan offices are harmful to productivity, that working remotely is more productive, that 'saving' on IT equipment nets losses for the business, and that moving to microservices or agile or NoSQL doesn't save money.

Last time I conducted a poll, not a single person said their preferred content is misinformation and clickbait.


> How do you know what people want? Because someone in charge told you so?

No, I say this as an interpretation of people's behavior, assuming some degree of free agency. Users aren't coerced into using Instagram, so it likely means they use it because they want to.

> Last time I conducted a poll, not a single person said their preferred content is misinformation and clickbait.

I'm sure. There is a big difference between preferences as expressed in polls and preferences observed in actual behavior. People say they consume educational content far more than they actually do. Neither kind of preference is the absolute truth; they're both valid. Should companies optimize for opinion-poll outcomes instead? Is it really progress if people have a high opinion of a service but few use it, while another, exploitative service steps in to sweep users away?

What about regular old media doing rage-baiting of their own? Aren't Fox News or the NYT behaving the same way to some extent? I'm not sure people would say in a poll that the reason they watch the news is to be upset and outraged, but that's a lot of the apparent audience strategy.

IMO this is just a natural capitalistic response to incentives, and the systemic "fix" would be to regulate the media sector so that it is forced to stop feeding people's worst impulses. The only problem is that by American standards this would amount to censorship - the notion that "someone knows better" what is good for me. One person's clickbait-filtering strategy always ends up being someone else's muzzled speech, so it's easy for the cure to have unwanted side effects. It's a tough nut to crack, especially in the US, given the cultural and legal bent toward maximalist free speech.


You equate "what people are vulnerable to" with "what people want to see." "Engagement" isn't a measure of desire but of response; the user is given little choice in what they are shown. The person who you got to stare may not have wanted to see what you showed them in the first place.


> The person who you got to stare may not have wanted to see what you showed them in the first place.

Correct, but in the long term a service like Instagram has little ability to repeatedly show people things they don't want to see.

> You equate "what people are vulnerable to" with "what people want to see."

TBH that sounds paternalistic to me. I may look at someone reading the tabloids at the supermarket as a victim who is vulnerable to trashy content, but who am I to judge? They just like something different than I do. Nobody made them pick up the paper.



