Monday, August 06, 2007


One week caffeine is good for you, the next it isn't. One week smoking marijuana is worse than cigarettes, the next cigarettes are worse. Which is true? How is the general public supposed to sort out which medical claims are true and which aren't, when contradictory findings are blared across the front pages of the newspaper every week?

Part of the problem is meta-analysis, the analysis of the combined results of previous studies. Meta-analysis can be a powerful tool for researchers, particularly when individual sample sizes aren't large enough to reach statistically significant conclusions. Business Week has a nice article discussing meta-analyses (a meta-meta-analysis??) and some of the pitfalls associated with that approach, mainly the lack of raw data for analysis and the bias introduced by study selection. From the article:
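To make the pooling idea concrete, here's a minimal sketch of a fixed-effect (inverse-variance) meta-analysis, the simplest flavor of the technique. The study names, effect estimates, and standard errors are made up purely for illustration, not taken from any article:

```python
import math

# Hypothetical effect estimates (e.g., log odds ratios) and their
# standard errors from three small studies -- numbers are invented.
studies = [
    ("Study A", 0.30, 0.25),
    ("Study B", 0.10, 0.20),
    ("Study C", 0.45, 0.30),
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# 1 / SE^2, so more precise studies count for more.
weights = [1.0 / se**2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled estimate = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```

The point of the exercise: no single study here is precise on its own, but the pooled confidence interval is tighter than any individual one. Of course, this only helps if the studies going into the pool are a fair sample, which is exactly where the article says things go wrong.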

'"We know there is publication bias," says Frank E. Harrell Jr., chair of biostatistics at Vanderbilt University. It's much easier to get a study published that says, "something works!" than one saying, "Oops, the treatment had no effect." Using published data alone thus typically makes the final result more positive.'
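Harrell's point is easy to demonstrate with a toy simulation (mine, not the article's): give a treatment a true effect of exactly zero, run many small studies, and then "publish" only the ones that come out positive and significant. Pooling just the published studies manufactures a large effect out of nothing. All numbers below are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # the treatment actually does nothing
N_STUDIES = 2000
SAMPLE_SE = 0.5     # standard error of each small study's estimate

all_results, published = [], []
for _ in range(N_STUDIES):
    est = random.gauss(TRUE_EFFECT, SAMPLE_SE)
    all_results.append(est)
    # Crude publication filter: only "positive and significant"
    # findings (z > 1.96) make it into print.
    if est / SAMPLE_SE > 1.96:
        published.append(est)

print(f"mean of all studies:       {statistics.mean(all_results):+.3f}")
print(f"mean of published studies: {statistics.mean(published):+.3f}")
```

The mean over all studies hovers near zero, while the mean over the "published" subset lands well above one: a meta-analysis fed only the literature would conclude the useless treatment works.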

The bottom line, really, is the need to look at the methodology (behind ANY study) before accepting a conclusion. "If people understand the process of science better, they'll be able to spot the gray reality behind the next black-and-white headline."


Bayman said...

Sounds to me like meta-analysis is a fancy word for uselessness. Why on earth would it be desirable to summarize findings if the data are "unbiased", that is, if they argue equally for and against the hypothesis, like a bunch of random information? If the data made it through peer review to be published, they have in theory been put through a "reality filter". Shouldn't we expect them to indeed be biased to reflect reality, since expert reviewers have judged that the authors' experimental approach is sound?

Clearly, only one of these sets of data is correct (for or against the hypothesis); the other should never have been published in the first place and should not be included in a "meta-analysis", but ignored. So I agree, individual findings need to be treated and evaluated individually.

Clearly there is a bias in science toward sexy "positive" results that uphold the establishment's status quo; this is a continual challenge. Hopefully, though, in most cases it is still only "positive" results that are true getting through peer review, and not "positive" results that are untrue. Ideally, if proper review standards are applied, "publication bias" would simply mean a bias toward reality or truth.

It's a nice concept to try to increase statistical power by pooling results of different studies - but in practice how useful is it to throw all the data from experiments performed under different conditions into the same bag? Better to just make individual studies meaningful by using big enough sample sizes in the first place.

Anonymous said...

Look, studies can be evaluated individually, but you run into a little problem called generalizability. If your study is performed under one setting under specific conditions, then your results only stand for that setting under those conditions. Meta-analyses pool findings so that you don't have to go digging around to find specifics. If I'm interested in the relationship between A and B in setting C, for example, how relevant is a study done finding relationships between A and B in setting D? Meta-analyses allow for examinations in a broader context. Don't knock it.