Derek Lowe makes the very good point that much of the medical research published in professional journals is not only unreliable, but downright dishonest.
The problems are many: publication bias (negative findings don't get written up and reported as often), confirmation bias, and desire to stand out/justify the time and money/get a grant renewal. And then there's good old lack of statistical power. Ioannidis and his colleagues have noted that far too many studies that appear in the medical journals are underpowered, statistically, relative to the claims made for them. The replication rates of such findings are not good.
Interestingly, drug research probably comes out of his analysis looking as good as anything can. A large confirmatory Phase III study is, as you'd hope, the sort of thing most likely to be correct, even given the financial considerations involved. Even then, though, you can't be completely sure - but contrast that with a lot of the headline-grabbing studies in nutrition or genomics, whose results are actually more likely to be false than true.
There's more at the link.
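To put some rough numbers on that 'lack of statistical power' point, here's a quick simulation sketch. The sample sizes, the effect size, and the assumption that only 10% of tested effects are real are my own illustrative guesses, not figures from Dr. Lowe or Dr. Ioannidis; the point is simply to show how small studies, filtered through publish-only-the-positives bias, produce findings that mostly fail to replicate.

```python
# Rough sketch: underpowered studies + publication bias => poor replication.
# All parameters below are illustrative assumptions, not data from the articles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_hypotheses = 5000   # candidate effects tested across a field
prior_true = 0.10     # assume only 10% of tested effects are real
effect_size = 0.3     # modest true effect, in standard-deviation units
n_per_arm = 30        # small study: 30 subjects per arm

published_true = published_false = replicated = 0

for _ in range(n_hypotheses):
    is_real = rng.random() < prior_true
    mu = effect_size if is_real else 0.0

    # Original small study; only "significant" results get written up.
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(mu, 1.0, n_per_arm)
    if stats.ttest_ind(control, treated).pvalue < 0.05:
        published_true += int(is_real)
        published_false += int(not is_real)

        # Independent replication attempt at the same small sample size.
        control2 = rng.normal(0.0, 1.0, n_per_arm)
        treated2 = rng.normal(mu, 1.0, n_per_arm)
        if stats.ttest_ind(control2, treated2).pvalue < 0.05:
            replicated += 1

published = published_true + published_false
print(f"published 'findings': {published}")
print(f"  actually real:      {published_true / published:.0%}")
print(f"  replicated:         {replicated / published:.0%}")
```

With numbers like these, well under half of the 'published' results are real, and the replication rate is worse still - which is more or less the pattern the articles describe.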
This is of particular interest to me, as someone who has had two major (i.e. life-changing) surgeries. The first was to fuse my spine and attempt (unsuccessfully) to compensate for permanent nerve damage in my lower spine and left leg, back in 2004; the second was a quadruple bypass in October last year. As a result, I regularly look for information on new treatments that might help me, or factors I need to take into account in my lifestyle (for example, how long does heart bypass surgery remain effective? Can it be repeated, if necessary?). I've found a great many conflicting answers, many of them 'conditioned' by reference to some new 'miracle' drug that's supposed to do great things, but whose efficacy in general use has not yet been proven.
Dr. Lowe references a 2005 article by John P. A. Ioannidis, 'Why Most Published Research Findings Are False', and an article in The Atlantic profiling Dr. Ioannidis. The first article identifies six corollaries, or factors for concern (a rough numerical illustration follows the list):
- The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
- The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
- The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
- The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
- The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
- The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
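As I read the 2005 paper, those corollaries all fall out of one simple calculation: if R is the prior odds that a tested relationship is real, 1 - beta the study's power, and alpha the significance threshold, then the chance that a 'significant' finding is actually true works out to (1 - beta)R / ((1 - beta)R + alpha). The sketch below just plugs in illustrative numbers of my own choosing (the scenario labels and the R/power values are my assumptions, not the paper's) to show how small studies and long-shot exploratory fields drag that probability down.

```python
# Post-study probability that a "significant" finding is true, using the
# simple bias-free form of the calculation in the Ioannidis paper:
#   PPV = power * R / (power * R + alpha)
# R = prior odds a tested relationship is real; power = 1 - beta.
# The scenario values below are illustrative guesses, not the paper's.

def ppv(R, power, alpha=0.05):
    return power * R / (power * R + alpha)

scenarios = {
    "large confirmatory Phase III trial": dict(R=1.0,   power=0.80),
    "small underpowered clinical study":  dict(R=0.10,  power=0.30),
    "exploratory genomics scan":          dict(R=0.001, power=0.20),
}

for name, params in scenarios.items():
    print(f"{name}: {ppv(**params):.1%} chance a positive result is true")
```

That's the same contrast Dr. Lowe draws: a big confirmatory trial starts with decent odds and decent power, so a positive result usually means something, while a headline-grabbing exploratory result usually doesn't.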
The second article shows how Dr. Ioannidis has found these conclusions to hold across the board in medical research. The two articles together raise many interesting points . . . and, when you think about it, their findings can be applied to many fields, not just medical or scientific research. How much business (i.e. market) research, or political polling, can be shown to exhibit the same flaws? Just look at 'climate change' research - doesn't it demonstrate the reality of those problems in almost every study?
Food for thought.
Peter