Sometimes echo chambers amplify silly noise; sometimes they just work because they reflect the truth.
HN is biased toward CS, and CS researchers know that papers on deep learning and computer vision are almost all available for free.
Reputation and testable software advantageously replace peer review. Researchers want citations, and being behind a paywall hinders that.
> So why aren’t there millions of publishers springing up to publish articles for $30 a pop (or even better - free!)?
Because we only need one. arXiv has 1,303,895 papers online, for free. Drafts are enough for serious research and collaboration to happen.
And this is not limited to CS. Here is what Krugman, a well-published, highly cited, Nobel Prize-winning economist, says about this:
> Now, these working papers have long since become more than working papers used to be in the days of yore, largely because publication takes so long. For decades, the journals have basically been tombstones — places that validate your work, that you can cite when seeking tenure, but not where people keep up with what’s happening now. Working papers have long been where the active discussion takes place. A case in point: I released this paper in working paper form in 1988; by the time it was formally published in 1991, there was already a huge derivative literature, which I ended up citing in the published version.

[1] https://krugman.blogs.nytimes.com/2013/04/22/understanding-t...
> and so far nothing has done much to dislodge the journal
That is because it is a racketeering scheme: to do real research, universities have to make deals to access the stash of old papers kept behind paywalls. These deals will often include discounts if they publish their papers with that publisher.
That is taking a monopoly position as proof that their product is superior.
> These deals will often include discounts if they publish their papers with that publisher.
I've never heard of such a scheme.
But on to your larger point: one thing I'd say is that the needs of academics vary greatly from field to field. There are reasons CS works totally differently than many other fields. There's a reason arXiv started in physics. In fields with logically provable assertions and testable algorithms, it is a lot easier to assess the quality of the content without relying on the peer-review and journal-acceptance signaling effect (although in CS, I'd argue, the conference acts just as strongly as that signaling effect).
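To make the "testable algorithms" point concrete, here is a toy sketch (all names hypothetical) of how a CS claim can be checked mechanically, with no referee in the loop: a routine a preprint claims to be a correct sort is verified against the obvious specification on random inputs.

```python
# Toy illustration: mechanically checking a claimed algorithm against its
# specification. `claimed_sort` is a hypothetical stand-in for code shipped
# alongside a preprint.
import random

def claimed_sort(xs):
    # Stand-in implementation; a real check would import the paper's code.
    return sorted(xs)

for _ in range(100):
    xs = [random.randint(0, 99) for _ in range(20)]
    out = claimed_sort(xs)
    assert out == sorted(xs)    # agrees with the reference result
    assert sorted(out) == out   # and is actually in order
print("claim verified on 100 random inputs")
```

This kind of property check is exactly what is harder to do in fields where the central claims are empirical rather than computational.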
Preprint servers are great, and certainly provide value and are a piece of the puzzle. In some fields (CS, Math, Physics) that can be a pretty large piece of the puzzle. But in many other fields there's a much greater need for expert curation and vetting.
> But in many other fields there's a much greater need for expert curation and vetting.
Let's recall something that is too often forgotten: peer review, while still useful, is NOT paid for by the publisher. It is done for free by the peers. Yes, the publisher manages the process and gets paid to organize it. The complex, highly educated job of peer review is done for free; the automatable, simple job of passing emails around is what gets paid. Don't worry, I am sure we can find university staff or even a simple emailing script that can replace this function easily.
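To give a sense of how automatable that "passing emails around" really is, here is a minimal sketch (all function names, data, and policies are hypothetical): round-robin submissions to reviewers while skipping authors, and draft the request messages. Actual delivery would go through something like `smtplib`, omitted here.

```python
# Minimal sketch of the coordination a publisher charges for: assign
# reviewers to submissions and draft the request emails. Hypothetical
# names and policy throughout; sending is deliberately left out.
from itertools import cycle

def assign_reviewers(submissions, reviewers, per_paper=2):
    """Round-robin each submission to `per_paper` reviewers,
    skipping any reviewer who authored the paper."""
    pool = cycle(reviewers)
    assignments = {}
    for sub in submissions:
        chosen = []
        while len(chosen) < per_paper:
            r = next(pool)
            if r != sub["author"] and r not in chosen:
                chosen.append(r)
        assignments[sub["title"]] = chosen
    return assignments

def draft_request(title, reviewer):
    # The email body a real script would hand to smtplib.
    return (f"To: {reviewer}\n"
            f"Subject: Review request: {title}\n\n"
            f"Please review within 6 weeks.")

subs = [{"title": "Paper A", "author": "alice"},
        {"title": "Paper B", "author": "bob"}]
print(assign_reviewers(subs, ["alice", "bob", "carol"]))
```

A real system needs deadline tracking and conflict-of-interest rules, but none of that requires a for-profit intermediary.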
I think that the reason it started with CS is much more mundane: computer scientists are much more likely to know that hosting a PDF is very easy and costs zero, and will probably be more confident in using tools like LaTeX.
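And hosting a PDF really is that easy: Python's standard library ships a static file server, so a directory of preprints becomes a working archive in a few lines. A minimal sketch (port and directory are arbitrary choices):

```python
# Serve a directory of PDFs over HTTP using only the standard library.
# SimpleHTTPRequestHandler has accepted a `directory` argument since
# Python 3.7. Port 8000 is an arbitrary choice; 0 picks a free port.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve(directory=".", port=8000):
    handler = lambda *args, **kw: SimpleHTTPRequestHandler(
        *args, directory=directory, **kw)
    return HTTPServer(("", port), handler)

# serve().serve_forever()  # uncomment to actually run the server
```

Obviously arXiv at scale needs more than this, but the marginal cost of distributing a paper is close to zero, which is the point.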