Part of the problem is meta-analysis, the analysis of the combined results of previous studies. Meta-analysis can be a powerful tool for researchers, particularly when individual studies' sample sizes aren't large enough to reach statistically significant conclusions. Business Week has a nice article discussing meta-analyses (a meta-meta-analysis?) and some of the pitfalls of the approach, mainly the lack of raw data for analysis and the bias introduced by study selection. From the article:
The bottom line, really, is the need to look at the methodology (behind ANY study) before accepting a conclusion. "If people understand the process of science better, they'll be able to spot the gray reality behind the next black-and-white headline."
"We know there is publication bias," says Frank E. Harrell Jr., chair of biostatistics at Vanderbilt University. "It's much easier to get a study published that says 'something works!' than one saying 'Oops, the treatment had no effect.' Using published data alone thus typically makes the final result more positive."
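To see why combining studies adds statistical power, here is a minimal sketch of the most common pooling method, fixed-effect inverse-variance weighting. The effect sizes and standard errors are made-up numbers for illustration, not data from any real study:

```python
# Hypothetical per-study effect estimates and their standard errors
effects = [0.30, 0.10, 0.45, 0.20]
ses = [0.20, 0.15, 0.25, 0.10]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2,
# so more precise studies count for more
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

The pooled standard error comes out smaller than that of any single study, which is exactly why a meta-analysis can reach significance when no individual study does.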
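Harrell's point about publication bias can be demonstrated with a toy simulation: even when a treatment has zero true effect, averaging only the "significant" studies yields a positive result. All numbers here are illustrative assumptions:

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.0  # the treatment actually does nothing
SE = 0.1           # assumed sampling error of each study
N_STUDIES = 2000

# Each simulated study estimates the effect with random sampling noise
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# "Publication": only studies with z > 1.96 ("something works!") get published
published = [e for e in estimates if e / SE > 1.96]

print("mean of all studies:      ", round(statistics.mean(estimates), 3))
print("mean of published studies:", round(statistics.mean(published), 3))
```

The mean across all simulated studies hovers near zero, while the mean of the "published" subset is well above it, mirroring how relying on published data alone makes the final result look more positive than reality.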