I've never, personally, had any direct utility from traditional academic peer review for my scientific work.
I've never used that peer review as any kind of signal or credential in judging a paper or deciding whether to read it.
Nearly all the papers I read are preprints.
Comments
But CS is a different beast: you can benchmark it, build it, test it at scale, collaborate online, and so on...
Probably except for security/crypto-related stuff.
In the case of crypto, the author may assume their solution works, but other experts might be able to crack it (a theoretical flaw, or just white-hat hacking).
And not for those who are well versed in critical appraisal or understand the ins and outs of some methods?
And/or keep asking for evidence where they should use common sense.
“There’s no evidence that a physical barrier would reduce exposure to bugs”
“There’s no evidence that cleaner air is better”
If that's the case for everything you did during the COVID crisis, that makes my point even more valid: you cannot discard peer review just because you're some 10,000x scientist. Who has the time for this? Peer review is just a proxy.
Now I read papers as if there are two parts: the written one, meant to convince reviewers, and a hidden one describing what the authors actually did.
But I'm not one of those people, and I don't recall ever talking to anyone in my field (AI/ML) who gave any indication that they used traditional peer review in their curation process.
I doubt a random reviewer would do a better job in their limited time.
That said, I wrote one paper where the peer reviews were really helpful. It was a paper on ecology, written from a physics point of view. The reviewers helped rewrite the paper so that the community could understand it.
I also trust my curated social media communities to poke at important papers and identify holes or missed opportunities.
I'm not referring to peer review as a way of improving one's own work based on feedback.
This property is very rare, even in statistics.
Also, most people cannot redo proofs, or experiments, in ML or elsewhere.
Its drawback is limited applicability to many problems. Its benefit is that it enables rapid progress because it creates good social dynamics.
Best paper awards can also be a great signal.
- primer on bertology
- active learning from web
- adarank
- adversarial training high stakes
- watermark for large language models