What's Wrong with Negative Data?

Sep 28 2009 · Published under Academia

This entry inspired by Sci's running partner. I can definitely say I have been very lucky in my running partner, in that she is also a scientist and also pretty awesome. Talking about science and being awesome can really eat up a long run. And this past weekend, we ran one heck of a 30K, in under three hours. Couldn't have done it without her.
So the other day we were running and talking about science and awesomeness, and Running Partner said "yeah, I have all this data, and I did the comparison, and you know, no difference!! I worked so hard, man, it sucks. It's not publishable."

Now I hear you all say "but that's not true, negative data IS publishable". Yes, in a perfect world, it is. In the world where we actually live and do science, negative data ends up in the lowest possible journals, journals that are a net negative for your career rather than a positive.
And this bugs me a lot. Negative data SHOULD be publishable. It should be a feather in your cap, maybe not as long and fine a feather as positive data earns, but certainly not a pile of wasted time and resources. You still did the work. You still had a hypothesis and designed an experiment. You still analyzed and interpreted the results. The only difference is that your hypothesis was disproven. In the perfect world of the scientific method, this is FINE. This should be published.
But often, it's not. Why not? It's not sexy. A lot of times, there may not be an easily found mechanism for WHY something didn't work. Sometimes, there just aren't effects of a particular thing to be seen. And journals don't want to see that, not when there's lots of sexy positive results they can print.
As it is, a young researcher almost never publishes negative data. You can have negative data in a dissertation, but you're fooling yourself if you think it's going to get in a good journal. Those who can publish negative data are established researchers, people who have the name to back up their results (or lack thereof), and the experience to make a silk purse out of the sow's ear.
But I don't really think this is a good thing. In fact, I think it's TERRIBLE. When I'm starting a new research project, rummaging through the literature trying to find the answers to all the details so I can write my grant, I WANT to know the negative data! The negative data are just as important as positive results in allowing scientists to form a model and a hypothesis. They may not be pretty, but they still tell us something very important about the world: that there IS no effect of X on Y in this given situation. We NEED to know that. Otherwise we have gaping holes in our knowledge base, or perform the same negative controls as a whole bunch of other labs, wasting time and resources on getting the same negative results, which, of course, will not be published.
Don't discount your negative data. It may not have told you what you wanted to know, but it definitely tells you something. And journals should not discount negative data either. In fact, Sci wants to start a journal. The Journal of Negative Results. We will only take papers with negative results. I bet pretty soon, we'll have a TON of citations, and a high impact factor. Because it may not be sexy, but it's still stuff we need to know.
Who's in?

32 responses so far

  • Bob O'H says:

    There already are several Journals of Negative Results - for example this one in ecology & evolution (I'm only linking to it because I was a founder), and PLoS One should fulfill a similar function. The problem is getting submissions - it's still a lot of work to prepare a paper.

  • Snoof says:

    It strikes me as something particularly useful. Consider a situation where only positive data are published. Wouldn't it be likely that people would spend a lot of time treading ground that has been covered before, but don't _know_ it, because nobody ever wrote anything about their negative data? It'd be tremendously useful, if you're trying to accomplish anything new, to be able to go "Oh, no, Jones et al. wrote a paper on that in 2002; it's a nice idea but it didn't pan out."
    Plus, if you publish negative data and someone else points out that your methodology is _flawed_, you could end up with positive data after all.

  • Drosera says:

    Negative results are only interesting when the original hypothesis appeared to have some merit. Therefore, in a paper with negative results it should be made very clear why the hypothesis was stated at all.

  • I definitely think you should start that Journal of Negative Results. Not only will it be a valuable resource in stopping people from wasting their time by replicating experiments that just don't work, it will also show a realistic face of the whole research lifestyle - not all experiments work! If you've discovered that two things don't influence each other - hell, I think that's pretty nifty. Maybe not sexy, but nifty all the same.

  • Angela says:

    Or how about an open access repository for negative data? Peer reviewed, or maybe not even peer reviewed but just given a rating to start. I think the quantum computing people have started a rating scheme for arXiv papers. Maybe some rating scheme like that could be a start.

  • W L says:

    There are a few of these.
    “Rejecta Mathematica is a real open access online journal publishing only papers that have been rejected from peer-reviewed journals in the mathematical sciences.”
    “The Journal of Articles in Support of the Null Hypothesis publishes original experimental studies in all areas of psychology where the null hypothesis is supported. The journal emphasizes empirical reports with sound methods, sufficient power, with special preference if the empirical question is approached from several directions.”
    “The ‘Forum for Negative Results’ is a permanent special section of the Journal of Universal Computer Science (J.UCS), one of the oldest peer-reviewed electronic journals (started in 1994).”
    “A group of social scientists in Europe and the US has established a new journal of negative and unpublishable results in the social sciences. The mission of The Journal of Spurious Correlations (JSpurC) is to provide a legitimate venue for exploring pure and applied methodological questions in the social sciences[…]”
    “The Journal of Interesting Negative Results in Natural Language Processing and Machine Learning (ISSN 1916-7423) is an electronic journal, with a printed version to be negotiated with a major publisher once we have established a steady presence.”

  • Katie says:

    I will refine your request by saying there should be a journal of negative results for NEURO research. I think it would be tremendously helpful. Even if it was just short papers or an open repository like Angela suggested.

  • Scicurious says:

    Katie: yeah, NEURO. Comp Sci doesn't really help me much. 🙂
    And yes, a good hypothesis and an explanation of WHY you thought it would work are very important. Also a good section on pitfalls: why you think it didn't work, and what other possibilities might MAKE it work.

  • Comrade PhysioProf says:

    FYI, PLoS ONE applies rigorous peer review to all submitted papers, is indexed in PubMed, but does not apply any editorial filter of "interest/impact/etc". Thus, whether results are "negative" or "positive" plays no role in PLoS ONE peer review.
    If the experiments are properly controlled and performed and haven't been previously published, then the results are perfectly suitable for publication in PLoS ONE. And PLoS ONE appears to be attracting quite good work in the neuroscience area.

  • Becca says:

    I think the journal should be called "No effect? No problem!"

  • mpatter says:

    Quantitative biology articles can go on arXiv, but I don't get why they went that far without giving "proper" biology a place there. No equations? No preprint hosting for you!!

  • arvind says:

    Wait, what? You ran a 30k? Why are you even trying to diet, woman? If you run that much, just eat what you want!!

  • Diane G. says:

    A very real problem that has been acknowledged but seldom acted upon for decades if not centuries...usually called the "file drawer effect."
    (Pardon the non-academic refs--I don't have access to a journal site ATM.)
    Believe I first read about the FDE in one of S.J. Gould's Natural History essays, long ago...
    While paper publication is not always practical, it would at least be valuable for each discipline to create a database in which "negative result" experiments would be collected & curated...

  • Zen Faulkes says:

    Comrade PhysioProf wrote: "(W)hether results are 'negative' or 'positive' plays no role in PLoS ONE peer review."
    That has not been my experience.

  • JR says:

    "Rejecta Mathematica"
    Rejecta Mathematica isn't really the same thing. It publishes papers that have been rejected for whatever reason, including being too trivial, wrong or dealing with topics that do not fit in the journal it was originally submitted to.

  • MadScientist says:

    I'm just about to submit a paper with some negative results. Built spiffy instrument to test an idea, found that the idea just doesn't match reality; however, gizmo is still spiffy and has other uses. Negative results are good in some situations such as:
    1. trying out ideas in medicine
    2. testing ideas that people like to talk about but no one has got around to investigating
    I don't know if Michelson published the results of his ether experiment, but many people were certainly made aware of the result. Those experiments were among the most important of that century, and the results weren't what Michelson was hoping for. Michelson did develop some incredible tools for astronomy, though, and numerous versions of his original ether-detecting interferometer have become common laboratory items.

  • MadScientist says:

    Oh, I have to admit that I often don't discuss negative results at all but retain my notebooks. Maybe I'm just an A-hole, but I'm amused when people talk about some idea and I look through my notebooks, pull one out and can show them exactly why their idea won't work. Admittedly it's not as much fun as when someone explains a brilliant idea and you pull out a notebook and show that they're absolutely right (just a few years too late). On the other hand I do occasionally save people a lot of time and effort by showing that their idea can't work. I've also had a good laugh watching competitors do things entirely the wrong way while I've plodded on to get results.

  • cervantes says:

    Actually the leading medical journals have already heard this message and they are publishing quite a lot of negative findings. There's been a spate of them lately. The recognition that it's at least as important to know what doesn't work as it is to know what does work is rapidly transforming medical publishing.
    In less applied fields, I don't know -- no doubt this is still a problem -- and it certainly is far from completely fixed in clinical research. Industry finances a lot of research and obviously they have no interest in publishing negative results, but the journals are increasingly willing to print it if the researchers are willing to submit it.
    And yes, of course, there has to have been some reasonable prior expectation of an association before anybody will be interested in negative findings. But they are indeed just as much science, just as much findings, and just as much a contribution to knowledge as positive findings, in principle. Remember, however, that out of all the possible propositions in the universe, an infinitesimal proportion are true. So without a substantial prior probability, negative findings don't contribute much to knowledge.
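    cervantes' point about priors can be made concrete with one line of Bayes' rule. In this sketch the power and alpha values are illustrative assumptions, not numbers from any study: a negative result is informative roughly in proportion to how plausible the hypothesis was going in.

```python
def posterior_true(prior, power=0.8, alpha=0.05):
    """P(hypothesis true | the test failed to reject the null), by Bayes' rule."""
    p_neg_if_true = 1 - power        # type II error: a real effect was missed
    p_neg_if_false = 1 - alpha       # correct non-rejection of a true null
    p_neg = prior * p_neg_if_true + (1 - prior) * p_neg_if_false
    return prior * p_neg_if_true / p_neg

# A long-shot hypothesis barely moves: we already expected it to be false.
print(posterior_true(prior=0.01))   # ≈ 0.002
# A well-motivated hypothesis drops from 50% to about 17%: a real update.
print(posterior_true(prior=0.5))    # ≈ 0.174
```

    The asymmetry is the whole argument: the negative result on the long-shot hypothesis tells us almost nothing we didn't already believe, while the same result on the well-motivated one shifts our knowledge substantially.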

  • Pat Cahalan says:

    I'm in.
    If nothing else, it would save a lot of time to have a venue where you could go and find out if someone else already found out the nothing you're about to invest resources to rediscover.

  • lylebot says:

    “The Journal of Interesting Negative Results in Natural Language Processing and Machine Learning (ISSN 1916-7423) is an electronic journal, with a printed version to be negotiated with a major publisher once we have established a steady presence.”

    This journal is something of a joke in the circles I travel in (which are related to NLP and ML). It's incredibly easy to get negative results in NLP; I could fire up a couple of programs and get a few journal-article-pages worth of negative results in literally 30 minutes. It's a good idea in theory, as lots of grad students are trying the same experiments, and it would be nice to be able to point them to something that shows why they won't work. It falls down in practice because the bar for putting together a reasonable submission is very low.

  • Comrade PhysioProf says:

    "That has not been my experience."

    If you were the corresponding author, and your paper was rejected for this reason, did you appeal the decision?

  • Dario Ringach says:

    As already pointed out, negative data can be very important depending on the strength of the initial hypothesis. It is easy to invalidate stupid ideas, but it is good science to rule out the interesting ones. I am sure that if you can prove some major theory to be wrong, the result would be "Nature-able".

  • MadScientist says:

    @cervantes: Unfortunately the only means I see of dealing fairly with industry-sponsored studies is to bring in more regulation (and I'm one of those people who really hate regulation). We need global databases where corporations (and also academics) can file experimental methodology and proposed scheme of analysis along with what claims will be tested and how that is addressed by the method; all that must be done before an experiment is conducted. Results should then address the previously published claims. That should cut down on "result mining" and also possibly save an awful lot of money by having the methods criticized beforehand. Too much time and money is wasted on poorly planned experiments and there are too many propaganda exercises pretending to be scientific experiments. In such a case if results are negative and the people involved refuse to publish the results, a big red note goes in to say "no results divulged". So-called 'studies' which do not participate in the scheme should be given a credibility rating of 0.

  • efrique says:

    Having disincentives (even if they are only imagined disincentives!) to publish negative results virtually guarantees that a disturbingly high proportion of published positive results are wrong.
    Why? Because without publication of negative findings, people keep doing the experiment - most of them largely wasting effort redoing a largish number of experiments they don't even realize have been done... eventually /someone/, for whatever reason, be it data mining, type I error, fudging, or various other innocent or even nefarious possibilities, gets a positive result.
    Their study gets published. Twenty-eight independent sets of failed results are never reported. Another piece of WRONG "knowledge" is now in the top-flight journals.
    Top journals publishing sexy results greatly increases the chance that they're also publishing wrong results. The lesser journals may well have a higher ratio of "real" results.
    Thus it is incumbent on top journals not only to publish negative results - they need, if anything, to bias slightly in their favour, so as to bring their 'negative result' credentials up enough that people don't perceive it as a waste of time, career-wise, to even write the results up.
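    efrique's back-of-the-envelope argument can be simulated directly. A minimal Python sketch, with purely illustrative numbers: a thousand labs each test a true null hypothesis, and only the "significant" results are publishable. About 5% clear the bar by type I error alone, and under positive-results-only publishing, those false positives are the entire published record.

```python
import random

random.seed(1)

def run_experiment(n=30):
    """Simulate one two-group comparison where the true effect is zero.
    Returns True if a crude two-sided z-test comes out 'significant'."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = (2 / n) ** 0.5               # standard error; sd is known to be 1 here
    return abs(diff / se) > 1.96      # nominal alpha = 0.05

labs = 1000
significant = sum(run_experiment() for _ in range(labs))
print(f"{labs} labs tested a true null; {significant} got a 'publishable' positive")
print(f"false-positive rate ≈ {significant / labs:.3f} (nominal alpha = 0.05)")
```

    The twenty-eight unpublished failures in efrique's example are exactly the labs this simulation discards.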

  • trrll says:

    The real problem with negative results is that the absence of a positive result does not necessarily constitute a negative. A collaborator was showing me a result yesterday, asking why we couldn't put it in the paper. We'd looked at the effects of two drugs. One produced a significant effect; the other, a chemically related compound, did not. So why not show both results, to demonstrate a structure-activity relationship? That's important information, right? The problem is that while the second compound did not produce a significant effect compared to control, its mean was noticeably different from control, and it was also not significantly different from the compound that did have a significant effect. So it is not a negative result; it is inconclusive. Perhaps we could do more replicates and it would become significant. But we already did a reasonable number--there were just outliers that we had no grounds to reject. And anyway, it is not really statistically valid to add replicates arbitrarily until a result becomes significant--that introduces a statistical bias, and means that your p value will not be accurate.
    So it goes on the shelf until such time as we can do it over--which we probably won't, because it was a lot of work, and why invest more resources into something that is likely negative, or worse, might turn out inconclusive again? Perhaps we'll try it again with another dose of the same drug in hopes of getting something that is either positive or convincingly negative.
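    trrll's warning about adding replicates until significance appears can be checked by simulation. A toy Python sketch, with illustrative numbers: on a true null, re-testing after every added observation inflates the chance of ever declaring "significance" far above the nominal 5%, which is exactly why the resulting p value cannot be trusted.

```python
import random

random.seed(2)

def peeking_run(start_n=10, max_n=50):
    """One true-null experiment where we 'peek' after every added replicate.
    Returns True if the running z-test EVER crosses nominal significance."""
    data = [random.gauss(0, 1) for _ in range(start_n)]
    while len(data) <= max_n:
        n = len(data)
        mean = sum(data) / n
        se = 1 / n ** 0.5            # sd known to be 1 in this toy model
        if abs(mean / se) > 1.96:    # nominal alpha = 0.05 at each peek
            return True
        data.append(random.gauss(0, 1))
    return False

trials = 2000
hits = sum(peeking_run() for _ in range(trials))
print(f"nominal alpha is 0.05, but {hits / trials:.1%} of true-null runs "
      f"eventually hit 'significance' by adding replicates and re-testing")
```

    Sequential designs that allow interim looks exist, but they pay for each peek with a stricter threshold; testing repeatedly at an unadjusted 0.05, as simulated here, does not.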

  • I think that the hypothesis you are testing largely determines the impact of your negative results. Sometimes paradigm changing data can be negative.

  • Who's Yer Daddy? says:

    As a journal editor, I have no problem publishing "negative" results provided (1) the hypotheses are well supported by the literature and grounded in theory; (2) the methodology is sound and properly executed; (3) the paper is clearly written and well organized; and (4) implications for theory and future research are discussed. My experience is that papers often have both negative and positive results, though the balance will tilt more one way or the other.

  • Zen Faulkes says:

    Comrade PhysioProf: I bet the reviewers and editor would deny they disliked the paper because it contained negative results, but I think it was a very big factor.
    Nobody thinks they're biased, after all. It's always those other people.
    This is still an ongoing matter, so I don't want to comment in detail. And I accept that each case is different, the devil is in the details, I do not have an unbiased perspective, and so on. Nevertheless, my experience leads me to doubt that the reality at PLoS One concerning negative results is as straightforward as the stated policy.

  • Laura says:

    One of my professors has published in the Journal of Negative Results in BioMedicine. They cover neuro stuff. Here's a link to his article: http://www.jnrbm.com/content/5/1/16

  • Kamu says:

    I just need to ask the experts here a question. I am in a very difficult situation. I am a PhD student in Computer Science. My supervisor never paid attention to what I was doing or publishing; he never gave me feedback because he does not have time. Because of a problem in software developed by our lab, we published something whose results others might not be able to reproduce. By now it's too late, because I am going to defend my thesis next year. But I am always worried. I have other articles that are enough to fulfill the criteria for defending my PhD, but I am afraid of this whole situation. From childhood until now I have done very well, and if something happens, it will ruin my professional career. I can't do any work now because I am thinking about that paper all the time. Any guidance, please?
