
I appreciate the detailed reply and wholeheartedly disagree with most of your points. Most foundational research also has no quantifiable ROI, and rightly so. Putting these details aside, let me put it in more practical terms, since both you and the other person who replied seem to be unaware of the unavoidable practicalities of funding evaluation.

As someone who has served on various evaluation panels, and also based on my own successful applications (the guidelines are always public), I want to stress that "popular" in the context of science evaluation really just means that the majority of evaluators think a proposal has more scientific merit, is original, can be expected to make substantial contributions, can be pulled off, is supported by the candidate's CV, and so on. Providing more funding for "misfits" and "outcasts" could mean two different things.

First, it could mean that less popular proposals get a higher chance of obtaining funding without jeopardizing the proposals the panel thinks have the most merit. Then this is merely a complicated way of saying that there should be more funding for everyone. I've never met a researcher who disagreed with that.

Second, it could mean that unpopular proposals of "misfit" candidates should be given precedence over more popular proposals in certain calls, for example by explicit guidelines or special grants. Although there are grant schemes for that (e.g. special grants for "high-risk" projects in the EU Horizon 2020 scheme), this would be almost contradictory as a general guideline for all calls. It just doesn't make sense. Scientific peers must at one point or another evaluate proposals to decide on grants. This cannot be done by politicians or mere administrators. It would be idiotic to ask them to favor proposals they think have no merit.

So, yes: Funding schemes for high-risk proposals should exist, and they do exist. But these are the exception, not the rule, just as successful outsiders in science are outliers rather than the rule. Most of science is an extremely collaborative effort. (The same holds for philosophy, which is not a science.)

Regarding replication efforts: If someone's area is deeply unpopular and most people think the area or approach has no merit, then nobody will replicate the work either. That's exactly my point. There needs to be a large enough number of scientists working on the same approach, with the corresponding networking and embedding in the scientific community, to ensure there are enough replication efforts and enough critical evaluation. Otherwise, the "misfit" is bound to publish in fringe journals with no quality control, and nobody will ever check whether the work makes sense. That's not good.



First off, why are you talking to me as if I don't have a PhD? I understand you've been in academia longer, but we are peers. You don't need to talk to me like I don't know the first thing about how academia works.

Second off, we do work in different fields. Do you truly believe the way things work in your field is going to directly apply to STEM fields? Because

  > Most foundational research also has no quantifiable ROI
This is a strange statement to me, even if we're applying it to pure mathematics. I even gave one example already.

  > If someone's area is deeply unpopular and most people think the area or approach has no merits, then nobody will replicate it either
This is completely orthogonal. There's a huge difference between nothing being replicated because no one wants to attempt it and no one replicating something because you are actively discouraged from doing so. A neutral incentive is not the same as a stick.

  > Otherwise, the "misfit" is bound to publish in fringe journals with no quality control and nobody will ever check whether the work makes sense.
Well, it seems you completely ignored my point about how I review. Those concerns go away if you start evaluating works on their own merit rather than comparatively. There's no physical limit to how many works a journal can publish, so there's no reason to target a rejection rate.

How do you not see this as a huge problem? If a *factually correct*[0] paper cannot be published in mainstream venues, then we have entirely fucked up the system.

[0] We're not talking philosophy here, where things are entirely subjective. We're talking about systems with verifiable results. If you disagree with this, then let's work with the philosophical thought experiment and pretend such verifiable results are possible.



