First order of business: yay for me! My first-author paper was finally accepted for publication by a pretty good journal and it's coming out next month! I'm so proud of it, and it's so adorable!
However, I have to say, my first foray into publishing has thoroughly disillusioned me concerning how review, publishing, etc. work (not that I was all that illusioned to begin with).
One of the scammy things is the money (of course). You (and I) pay taxes. Some of those taxes go to the NIH. The NIH funds my research with our tax money. In order to keep getting NIH funding and to advance my career, I have to publish the work I do with the NIH money. To publish, I have to pay the journal money (page/figure charges). Then if you, who paid the taxes that fund my research, want to see what I did within a year of publication, you have to pay the journal to access the content. Somehow the publishers are winning here.
And the winning keeps happening – because scientific journals are “peer-reviewed.” The people who review papers are other scientists who (in the vast majority of cases) are not being paid for their review work. In some cases, most of the editors are also unpaid scientists. While journal editing may give a bit of a resume boost, as far as I can tell, reviewing mostly doesn’t.
Now, the non-payment of reviewers makes the review process rather annoying. For this journal, I got to suggest 4 reviewers I wanted and 3 that I did not want. The journal may ask these people, but they might be too busy. So then the editors keep trying to find someone else competent and available to review. Once a reviewer agrees to review a paper, it's probably not at the top of their to-do list, because they have their real job. So sometimes this takes a while and the results may be less than stellar.
Take my first set of reviews (which got the paper rejected):
· Reviewer 1: Liked the paper, had a few questions/suggestions, about 1.5 pages.
· Reviewer 2: Did not read the legend for the graphs in figure 1, and consequently drew totally erroneous conclusions about how our data related to our conclusions. Hated the paper. All this in half a page of comments.
· Reviewer 3: Thought the paper was OK, but really wanted us to play up the specifics of another paper that was barely relevant. We suspect this reviewer was an author of said paper, given how specific he/she got about it. We had cited this paper mainly to say that they did their experiments under non-physiological conditions, which prevented direct comparison to our work.
Happily, pointing out that reviewer 2 hadn’t even read the legend to figure 1 allowed us to resubmit to the same journal. We did one additional experiment based on comments from reviewers 2 and 3, and our resubmission was accepted (yay!). And on a positive note, it really is a better paper thanks to some of the revisions and the additional experiment (so thanks, reviewers 1 and 3!).
However, even before submission, there was an experiment I KNEW we should have done. In fact, while we waited for review, I slaved away trying to make this additional experiment work, certain that at least one reviewer would see this hole and want it addressed. Not one reviewer noticed. (N.B. I still think it matters, and I’m still working on the experiment; my initial attempts failed, but I have now taken a different approach to testing the same hypothesis.)