Peer-reviewing a submitted article can be an interesting process. On one hand, you get a chance to glimpse the latest findings in the field before they become public (outside of the data appearing in an abstract or on a conference poster). There is also the challenge of putting your nose to the grindstone; a reviewer has to stay sharp and think around corners, pick out where fuzzy language might be masking a methodological problem, or find points of contention in the data that the authors missed. Yet he or she must also remain both tough and fair, so that if and when a paper is finally published it is worthy of high praise.
It can also be an annoying process, especially when a research group gets sloppy just because they have a lot of positive findings. So I thought I’d toss out a few suggestions for the world. Remember, reviewers can be your friends, as long as you’re thorough.
- If you get a decoupling of one pathology from your behavior of interest, but a second pathology remains coupled, just run the damn correlation. It only takes a few hours at most, and I’m gonna send it back and make you do it regardless.
- Make sure your graphical data doesn’t undermine your “representative” figures.
- Don’t just scatter your stats throughout the paper as if tossing parmesan on a plate of spaghetti. Clearly indicate which analyses you chose (and why) in the Methods section, even if you only write two sentences. Don’t put undiscussed post hoc analyses and data in the figure legends; keep ‘em grouped with the rest of the stats in the Results section and just report the p values in the legends.
- If you do get a relevant negative finding amidst your positive ones, don’t be afraid to discuss it and what it means in the context of the study… negative data is just as important! Don’t gloss over it.
- If the software program will run a crucial analysis automatically, put it in the damn paper.
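And on that "just run the damn correlation" point: it really is only a few lines of work. Here's a bare-bones sketch in plain Python (variable names and data are hypothetical, just to show the shape of it; your stats package of choice would also hand you the p value):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired measures: pathology score vs. behavior score per animal
pathology = [0.8, 1.2, 2.1, 2.6, 3.3, 4.0]
behavior = [12, 15, 19, 24, 27, 33]

print(f"r = {pearson_r(pathology, behavior):.3f}")
```

If one pathology correlates with the behavior and the other doesn't, that's a finding worth a sentence in the Results, not something to leave for the reviewer to ask about.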