Sun, Sep-28-08, 21:46
melibsmile
I'm aghast

I realize that I am seeing this thread several months after the fact, but I felt the need to comment. Speaking as someone trained in epidemiology and biostatistics, I find this study's leaps of logic very upsetting. Unfortunately, many medical doctors who conduct research have little to no training in research methodology and therefore tend to assume causation when there is no evidence to support that conclusion. Just because intensive therapy is associated with increased mortality does not mean that it caused the increase in mortality.

No real conclusions can be drawn from this study because of severe confounding by treatment: the variability in the medications received and their dosing was not controlled for in the analysis. Without accounting for this, it is quite possible that the increased mortality was caused entirely by the intensive-therapy medications and not by the lowering of the A1c. The assumption that an association between a lower A1c and increased mortality automatically means the lower A1c caused the increase in mortality is extremely dangerous--this type of thinking is how the medical establishment got wrapped up in the low-fat hypothesis.
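To make the confounding point concrete, here is a minimal simulation sketch in Python (the numbers are invented for illustration and have nothing to do with the actual trial data): mortality is driven entirely by the intensive-therapy medication, which also lowers A1c, yet a naive comparison makes the lower-A1c group look deadlier.

[code]
# Hypothetical sketch of confounding by treatment (made-up numbers, not trial data).
# Mortality depends only on the intensive-therapy medication, never on A1c itself,
# yet a naive comparison of A1c groups shows higher mortality at lower A1c.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

intensive = rng.integers(0, 2, size=n)                # 1 = assigned intensive therapy
a1c = 8.0 - 1.5 * intensive + rng.normal(0, 0.7, n)   # intensive therapy lowers A1c
p_death = 0.02 + 0.03 * intensive                     # mortality driven by therapy alone
died = rng.random(n) < p_death

low = a1c < 7.0
print("Mortality with A1c < 7.0:  %.3f" % died[low].mean())
print("Mortality with A1c >= 7.0: %.3f" % died[~low].mean())
# Lower A1c is *associated* with higher mortality even though A1c has no causal
# effect here -- the association is driven entirely by the medication.
[/code]

Stratifying or adjusting by the medication variable would make the spurious association disappear; the unadjusted contrast simply cannot tell you whether the A1c itself did anything.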

The fact that NEJM published this piece of rubbish in this form is disheartening. They should require that a statistician and an epidemiologist review manuscripts from clinical trials to prevent physicians from making these kinds of leaps of logic.

As an aside, the "disclosures" statements are in general useless. They require the authors to disclose their financial interests, but they don't restrict them from publishing at all. This system obviously doesn't work and should be scrapped in favor of something more effective.

Ok now I am going to fume somewhere else.

--Melissa
-----------------------------------------------------------------
A couple of additional thoughts, as I've mused on this over the last couple of hours.

First, this study really encapsulates the inherent tradeoffs of stopping a trial early. Most clinical trials have very detailed rules on when the study must be stopped--usually if there is an established risk to some or all of the participants or if the results are so unequivocal that continuing to withhold the treatment from the other groups would be unethical. This means that the Data Safety Monitoring Board (every study has one) has to review the data from the study periodically.

Statistically, this involves tradeoffs--every interim look spends part of the overall type I error budget, so the more often you look at the data, the stricter the significance threshold has to be at each analysis and the harder it becomes to declare a significant result. Also, when the study is stopped early, the resulting data do not look the way they were planned to--in terms of the length of follow-up, the number of data points per participant, and so on. Most of the design calculations (sample size, power) are based on completing the full study as originally envisioned, so when the study is cut short, you often do not have enough data to answer definitively the questions you set out to address.
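A toy illustration of the "every look costs you" point (a Python sketch with invented numbers, not the monitoring plan this trial actually used): if a monitoring board peeks at a trial with no real treatment effect five times and declares success whenever an unadjusted test gives p < 0.05, the false positive rate ends up well above the nominal 5%.

[code]
# Toy illustration: repeated interim looks at a trial with NO true treatment effect,
# each tested naively at p < 0.05 (invented setup, not an actual monitoring plan).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_per_arm, n_looks = 2000, 500, 5
false_positives = 0

for _ in range(n_trials):
    treat = rng.normal(0.0, 1.0, n_per_arm)      # no real difference between arms
    control = rng.normal(0.0, 1.0, n_per_arm)
    for look in range(1, n_looks + 1):
        k = look * n_per_arm // n_looks          # patients accrued by this interim look
        p = stats.ttest_ind(treat[:k], control[:k]).pvalue
        if p < 0.05:                             # naive, unadjusted threshold
            false_positives += 1
            break

print("False positive rate with 5 naive looks: %.3f" % (false_positives / n_trials))
# Well above 0.05 -- which is why group-sequential designs (O'Brien-Fleming, Pocock,
# alpha-spending) force stricter thresholds at each interim analysis.
[/code]

That inflation is exactly what group-sequential stopping boundaries are built to control, and the price is that each individual look has to clear a tougher bar.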

Last year I gave a presentation to my Intervention Trial Design class at Berkeley on Institutional Review Boards, which exist at every institution that conducts human subjects research. If anyone is interested in reading more about how these studies are evaluated, I can upload it.