Journal of Negative Results
I’ve discussed this with friends and colleagues for many years now. I believe we even came up with a journal name: ‘ARSE - Annals of Research and Spurious Experiments’. Yes, it’s not the best acronym, but we wanted to get the word arse in there (and apparently we weren’t the only ones wanting to brighten up scientific titles [1][2]). Our suggested implementation may have been silly, but the idea itself was not such a terrible one. After all, how difficult can it be to start your own journal?
Firstly, so our logic went, journals charge authors huge fees to publish, don’t pay their reviewers, and then often charge readers for access to the publications on an article-by-article basis!
Secondly, all we would need would be some good marketing and ‘boom!’ - the articles would start rolling in. Everyone produces negative results, constantly. It’s difficult for me to give a good estimate for theoretical or computational research, but in experimental research I would say that well over 80% of our experiments fail (some would put the number even higher). Surely this is a huge waste of time and resources - surely many of these experiments are being repeated by other people in other labs around the world, with the same negative results?
Well, herein lies the problem, or one of the problems: how do you prove, with absolute certainty, that your experiment failed (and that you know why)? The issue with many of the experiments I perform is that so many different variables could affect the results without our knowing. Sometimes we scientists get a bit superstitious as a result, having ‘favourite’ pipettes or not wearing certain items of clothing on experiment days (surely everyone remembers Megan’s bad boots?). Therefore, unless every single detail was meticulously recorded (the weather that day, the humidity, the last time someone vacuumed the lab, which glassware was used, etc.), it would be very difficult to show conclusively that an experiment had failed.
Negative results are, in fact, already published in certain fields [3]; however, this is certainly not common practice, nor do most scientists see it as a priority. For example, the Journal of Negative Results in BioMedicine ceased publication on the 1st of September 2017 [4], and the Journal of Negative Results in Ecology and Evolutionary Biology does not seem to have attracted many contributions over the years [5]. To encourage people to publish negative results, the process would have to be made appealing and more accessible, bringing rewards similar to (or at least only slightly smaller than) those for publishing positive results. Clearly, such publications would also need to be made open access.
In general, I would argue that in the interests of open and transparent science, negative results should be published more commonly, as long as they are reported to a high standard - as is now the case for most journal publications, where the precise methods and reagents used are given. Think how great it would be to have hard, scientific evidence for why your seemingly identical experiment worked on one day and not the next, rather than putting it down to wearing your special red t-shirt on one of the days and not the other.
[1] https://www.popsci.com/article/science/13-weirdest-named-academic-journals
[3] https://www.nature.com/articles/471448e
[4] https://jnrbm.biomedcentral.com/
[5] http://www.jnr-eeb.org/index.php/jnr/issue/archive