I cannot make a sensible comment on AlgorithmWatch per se, but I think the old "we cannot reveal our algorithm, otherwise bad people will game it" argument has reached the end of its life.
These feed algorithms (and the related moderation rules) have an enormous effect on what information people globally receive. Studying them should be encouraged, and IMO the code itself should be open sourced or otherwise made available to study. If they take approaches that are game-able, maybe we just have to dial it down and not have such sophisticated algorithms.
Modern social media platforms are highly profitable, AI-accelerated mass-manipulation systems on steroids. These companies have little incentive to change their 'winning formula', and are obviously very reluctant to do so.
Misinformation is rampant and even rewarded, as studies have repeatedly shown. Continuous exposure to these centralised feeds, which provide an infinite amount of content and are managed by barely regulated for-profit companies, is worrying.
Also, most platforms are based in the US, a country with completely different social/cultural views (and laws) compared to the rest of the world. I'm glad I am not a US citizen, for example. Still, these platforms are used worldwide, so decisions made about the 'feed algorithm(s)' can have a big social/cultural impact on other countries. More studies (and regulation, and taxes) are probably a good idea.
How useful would open-sourcing the algorithm be without things like the ML models that feed its parameters, or the data used to train those models? It would be like knowing PageRank without any of the data and asking why one site is ranked above another in search results.
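To illustrate the PageRank analogy: the sketch below runs the same textbook power-iteration code on two made-up link graphs (both purely hypothetical) and gets opposite rankings. The code alone can't tell you why one page beats another; the answer lives in the data.

```python
# Hypothetical sketch: identical PageRank code, two different link graphs.
# The graphs and damping factor are illustrative, not any platform's real data.

def pagerank(links, d=0.85, iters=50):
    """Plain power-iteration PageRank over a dict {page: [outgoing links]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Same code, only the data differs:
graph_a = {"A": ["B"], "B": ["A"], "C": ["A"]}
graph_b = {"A": ["C"], "B": ["C"], "C": ["B"]}
print(pagerank(graph_a))  # A comes out on top
print(pagerank(graph_b))  # C comes out on top
```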
We don't need to know the details of the algorithm or the models when we can see how it behaves. E.g. another poster's example that IG keeps pushing bikini pics over other content: just knowing empirically, via whatever means, that the anecdote is true is super useful and can shape our understanding and our push-back.
It's more about knowing whether the algorithm is politically rigged to engineer public opinion by amplifying certain truths and decreasing the visibility of others.
Since I don't have any insight into their algorithm, I'll use a naive hypothetical example: say they have a system to downrank negative/harmful/spammy content, and it works by applying some kind of embedding model to posts and comparing their cosine similarity to a dictionary of known bad content. You have the code, but neither the trained embedding model nor the set of "bad" vectors. Can you answer the question: is it politically rigged?
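To make that concrete, here's a minimal sketch of that hypothetical scheme. Everything in it is invented: the random "model" and "bad vectors" stand in for the two artifacts an auditor wouldn't have even with the code in hand. Reading the code tells you the mechanism is cosine-similarity downranking; whether it's politically rigged lives entirely in the weights behind embed and in BAD_VECTORS, which you can't see.

```python
import numpy as np

# Sketch of the hypothetical downranking scheme described above. The random
# stand-ins below replace the proprietary model and "known bad" dictionary.

rng = np.random.default_rng(0)
DIM = 128

def embed(post_text: str) -> np.ndarray:
    """Stand-in for the trained embedding model (weights unknown to auditors)."""
    v = rng.standard_normal(DIM)  # a real system would run the post through a model
    return v / np.linalg.norm(v)

# Stand-in for the proprietary dictionary of "known bad" content vectors.
BAD_VECTORS = rng.standard_normal((50, DIM))
BAD_VECTORS /= np.linalg.norm(BAD_VECTORS, axis=1, keepdims=True)

def rank_penalty(post_text: str, threshold: float = 0.8) -> float:
    """Downrank a post if its embedding is close to any known-bad vector."""
    sims = BAD_VECTORS @ embed(post_text)  # cosine similarities (unit vectors)
    return -1.0 if sims.max() > threshold else 0.0

print(rank_penalty("some post"))  # the code is legible; the policy lives in the data
```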
That's why I tend to agree with the idea of treating it as a black box and observing what biases it exhibits.
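As a toy sketch of what that black-box approach could look like: post matched probe content, record the reach the platform gives each category, and test whether the gap is bigger than chance. The categories and impression counts below are invented purely for illustration.

```python
import random

# Hypothetical black-box audit: compare the reach (impressions) recorded
# for matched probe posts in two content categories. All numbers made up.

reach = {
    "bikini_pics":  [5400, 6100, 4800, 7200, 5900],
    "other_photos": [1200, 1500,  900, 1700, 1100],
}

def mean(xs):
    return sum(xs) / len(xs)

observed_gap = mean(reach["bikini_pics"]) - mean(reach["other_photos"])

# Permutation test: shuffle the category labels and count how often a gap
# at least this large appears by chance.
pooled = reach["bikini_pics"] + reach["other_photos"]
n = len(reach["bikini_pics"])
hits, trials = 0, 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed_gap:
        hits += 1

print(f"observed gap: {observed_gap:.0f} impressions, p ~ {hits / trials:.4f}")
```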