This entry was inspired by Sci's running partner. I can definitely say I have been very lucky in my running partner, in that she is also a scientist and also pretty awesome. Talking about science and being awesome can really eat up a long run. And this past weekend, we ran one heck of a 30K, in under three hours. Couldn't have done it without her.
So the other day we were running and talking about science and awesomeness, and Running Partner said "yeah, I have all this data, and I did the comparison, and you know, no difference!! I worked so hard, man, it sucks. It's not publishable."
Now I hear you all say "but that's not true, negative data IS publishable". Yes, in a perfect world, it is. In the world where we actually live and do science, negative data gets into the lowest-possible journals, journals that are far more a negative mark on your career than a positive one.
And this bugs me a lot. Negative data SHOULD be publishable. It should be a feather in your cap, maybe not as long and fine as the positive data, but still, certainly not a pile of wasted time and resources. You still did the work. You still had a hypothesis and designed an experiment. You still analyzed and interpreted the results. The only difference is that your hypothesis was proven wrong. In the perfect world of the scientific method, this is FINE. This should be published.
But often, it's not. Why not? It's not sexy. A lot of times, there may not be an easily found mechanism for WHY something didn't work. Sometimes, there just aren't effects of a particular thing to be seen. And journals don't want to see that, not when there's lots of sexy positive results they can print.
As it is, a young researcher almost never publishes negative data. You can have negative data in a dissertation, but you're fooling yourself if you think it's going to get into a good journal. Those who can publish negative data are established researchers, people who have the name to back up their results (or lack thereof), and the experience to make a silk purse out of a sow's ear.
But I don't really think this is a good thing. In fact, I think it's TERRIBLE. When I'm starting a new research project, rummaging through the literature trying to find the answers to all the details so I can write my grant, I WANT to know the negative data! Negative data are just as important as positive results in allowing scientists to form a model and a hypothesis. They may not be pretty, but they still tell us something very important about the world: that there IS no effect of X on Y in this given situation. We NEED to know that. Otherwise we have gaping holes in our knowledge base, or we run the same experiments as a whole bunch of other labs, wasting time and resources to get the same negative results, which, of course, will not be published either.
Don't discount your negative data. It may not have told you what you wanted to know, but it definitely tells you something. And journals should not discount negative data either. In fact, Sci wants to start a journal. The Journal of Negative Results. We will only take papers with negative results. I bet pretty soon, we'll have a TON of citations, and a high impact factor. Because it may not be sexy, but it's still stuff we need to know.