Wednesday, October 16, 2013

An experiment on open access journals

As you may have already heard, Science magazine recently ran an experiment on open access journals. A spoof manuscript describing a novel cancer drug was submitted to 304 open access journals, and 70% of them accepted it. The manuscript contained many intentional errors that should have been easily caught by the peer review process.
As a fan of the ideals of open access publishing, I do believe this was an important finding. Clearly there are problems with the peer review process at these journals, and this needs to be addressed.
What I find strange is that the conclusions drawn from this experiment fail basic logic. The experiment had no controls: the spoof article was never submitted to closed access journals, so it is impossible to conclude that the acceptance of poor scientific manuscripts is specific to open access journals. Nor was this stunt a test of the open access ideology or business model; it was only a test of the peer review process at these particular journals. No doubt the open access journals that accepted the article failed the most basic requirement of scientific publishing, but Science magazine has also mistakenly accepted flawed papers. I found a more balanced assessment of the meaning of this experiment at National Geographic.