Oh, this is not a China thing. I've had a paper where several reviewers each suggested a bunch of references. Within each bunch, every suggested paper shared at least one author; across bunches, the author sets were pairwise disjoint. Draw your own conclusions.
Edit: just to be clear: I didn't at the time read that as "submission tax". More of, trying to be helpful and using things they personally were familiar with. Most, if not all, of the extra references would make our paper better... If we weren't fighting that damned page limit, that is.
Question is, why don't scientists just put everything on public platforms (read: github) and call it a day? Is it only a matter of funding, or do other factors also play a role in that?
Because nobody reads it there and, more importantly, funders don't recognise the work you've done there. The "prestige" (as indicated by the scientific-looking but mostly inaccurate "Impact Factor") of the journal you publish in determines how good they think your work is.
That's a problem that would fix itself the moment most useful research was mainly available on such platforms.
> more importantly, funders don't recognise the work you've done there
Once again, that sounds like mostly a problem that would disappear if a large migration to open platforms were to happen.
----
So it seems the main problem is that there's no incentive to be among the first to make the move? IIRC it's often the journals that don't want content to be published elsewhere, so I guess just doing both is also not that simple.
For all its faults, peer review is still the best mechanism to keep science on the right track.
What you propose would mean twitter or facebook replacing those journals: people with huge twitter followings, or "celebrity" scientists, would dominate science, and the work of people without such marketing skills would get drowned out.
(This is sort of true of the current system too, but I think the situation would be much worse in the new one.)
> For all its faults, peer review is still the best mechanism to keep science on the right track.
Peer review is often effective, but it can't reliably block fraudulent publications like those described in the posted article. Most bad papers are rejected, but the authors can always try again at another journal. Any paper will probably get published somewhere, eventually, even if only in a Hindawi or MDPI journal. The journals aren't accountable to anyone, and as long as they have enough good articles to serve as cover, academics will need to pay for access because citing relevant prior work is obligatory. The publishing system is very weak against fraud.
> people with huge twitter followings [...] would dominate science
Isn't that at its core the same as with scientific journals? People trust those journals to curate science in the same way you suggest twitter would come to curate it if science made the move online.
1. It's already possible to call attention to a paper through twitter, regardless of whether it's published in a journal or not. Paywalls gate-keep the content somewhat and make sharing harder, but that's a minor side effect of a very broken system.
2. Papers (and the data involved) being available on public platforms like github, which already have mechanisms for reporting and tracking issues as well as built-in review tools (in GitHub's case even a separate discussion feature now), would allow much quicker criticism of bad methodology.
3. Working with a VCS like git would automatically make it clear who wrote, edited or removed what.
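The provenance point above is easy to demonstrate: git stamps every commit with an author, so "who wrote what, when" is recoverable from any clone. A minimal sketch (requires git on the system; the repo, author name, and file are made up for the illustration):

```python
# Sketch: a VCS records who changed what. Creates a throwaway repo in a
# temp directory; "Alice" and paper.md are hypothetical example names.
import os
import subprocess
import tempfile

repo = tempfile.mkdtemp()

def git(*args):
    """Run a git command inside the throwaway repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=repo, check=True,
                          capture_output=True, text=True).stdout

git("init", "-q")
git("config", "user.name", "Alice")
git("config", "user.email", "alice@example.org")

with open(os.path.join(repo, "paper.md"), "w") as f:
    f.write("Methods: we measured X.\n")

git("add", "paper.md")
git("commit", "-q", "-m", "Draft methods section")

# Full edit history with authors, and per-line attribution:
log = git("log", "--format=%an: %s")   # "Alice: Draft methods section"
blame = git("blame", "paper.md")       # each line annotated with its author
print(log)
```

The same mechanism scales to the real case: `git log -p` shows exactly what each revision added or removed, and `git blame` attributes every surviving line to the commit (and author) that last touched it.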
Scientific data does not fit on most public platforms. GitHub in particular has tight limits on file size (100 MB per file), push size, bandwidth, and storage ($100 / TB / month). Which isn't that surprising; git is designed for code, not data.
Even if funders gave large sums of money dedicated to data publication, if recurring billing is involved it will eventually break as attention wanes. Data archives need to be managed by an institution or purchased with a single up-front fee, otherwise they won't stick around.
There's also the aspect that, even if you as an individual take it upon yourself to publish your data without institutional support, anyone who reads your paper will most likely ignore your dataset. Which is somewhat demotivating.
Funding for the project/department, as well as the personal career prospects of everyone involved, are tied to the publications. Various approaches to analysing those produce importance numbers. (Note: PageRank was an attempt to do the same for non-scientific publications, and we all know how that went.) Said numbers are picked by bureaucracies to determine the objective worth of groups and individuals. Growing said numbers is literally what the livelihood of academics, at least at some stages of their careers, depends on.
So, yes, that's fundamentally "a matter of funding". It can be fixed by academics and bureaucrats agreeing to switch to some other system, on an international level. I think if you got the top 20 countries to coordinate, the rest would follow suit. Any bets on when that will happen? ;)
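For the curious, the citation-based "importance numbers" mentioned above can be sketched as a PageRank-style power iteration over a citation graph. This is purely illustrative: real metrics (Eigenfactor and friends) differ in damping, normalisation, and data, and the four papers here are made up.

```python
# PageRank-style score over a tiny, hypothetical citation graph.
cites = {            # paper -> papers it cites
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],      # D cites C, but nothing cites D
}
papers = list(cites)
d = 0.85             # damping factor, as in classic PageRank
rank = {p: 1.0 / len(papers) for p in papers}

for _ in range(50):  # power iteration; converges well before 50 rounds
    new = {p: (1 - d) / len(papers) for p in papers}
    for p, refs in cites.items():
        for q in refs:
            # each paper splits its influence among the papers it cites
            new[q] += d * rank[p] / len(refs)
    rank = new

# C is cited by every other paper, so it ends up with the top score.
best = max(rank, key=rank.get)
print(best, round(rank[best], 3))
```

The funding problem is visible even in this toy: D does useful citing work but receives nothing, so its score stays at the floor regardless of effort.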