Tuesday, June 8, 2010

Should reform be based on statistical significance?

Having spent roughly half my career teaching research methods and statistical analysis to medical professionals, I believe anyone who makes decisions in health care must understand the difference between statistical significance and real-world importance. Statistical significance is expressed as a probability (p): roughly, how likely it is that chance alone would produce a result at least as large as the one observed in a test of an experimental effect—such as a new drug, a change in care delivery, or an alternative mechanism for reimbursement—if that effect actually made no difference. Statistical significance increases as the p value declines.

However, statistical significance often has no practical importance for our daily lives or long-range plans. I’ve taught hundreds of students to resist the temptation to overreact to studies based solely on the statistical significance of the findings. Presentations at last week’s annual meeting of the American Society of Clinical Oncology illustrated this very important point. Several researchers reported that cancer patients who took new drugs lived significantly longer, in the statistical sense, than comparable patients who took a placebo or an older medication under controlled conditions.

The statistically significant difference sounds impressive, all other things being equal, but should we immediately start paying for a new drug if it extends life by only three months and costs $50,000? Of course not! Today’s push for health reform is based on widespread agreement that our country cannot afford to spend more on medical care, and we could surely find a more productive way to spend an extra $50,000 if we had it.
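The tension between the two kinds of significance is easy to demonstrate numerically. The sketch below (all numbers hypothetical, written in Python with only the standard library) simulates survival times for two large trial arms in which a drug adds about three months of life on average: with enough patients, the p-value comes out vanishingly small even though the clinical gain is exactly the modest three months in question.

```python
import math
import random

random.seed(0)

n = 20_000  # hypothetical patients per arm
# Hypothetical survival times in months; the new drug adds ~3 months on average.
control = [random.gauss(24.0, 12.0) for _ in range(n)]
treated = [random.gauss(27.0, 12.0) for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs, m):
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

m_c, m_t = mean(control), mean(treated)
diff = m_t - m_c
# Standard error of the difference in means (Welch-style, unpooled variances)
se = math.sqrt(sample_var(control, m_c) / n + sample_var(treated, m_t) / n)
z = diff / se
# Two-sided p-value via the normal approximation (fine at this sample size)
p = math.erfc(abs(z) / math.sqrt(2))

print(f"difference in means: {diff:.1f} months")
print(f"two-sided p-value:   {p:.1e}")
```

The p-value here is astronomically small, so the result is as "statistically significant" as results get; whether three extra months justifies the cost is a separate question the p-value cannot answer.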

Today’s economic realities and political circumstances are forcing us to learn to live within our means.  We cannot adopt some new approach to medical care just because it is supported by statistically significant research.  (For the record, I am a very strong supporter of medical research.  This blog post questions the use of research reports, not the research itself.)

To complicate matters, a recent article in the Journal of the American Medical Association (JAMA, 26 May 2010, pp. 2058-2064; http://jama.ama-assn.org/cgi/content/abstract/303/20/2058) suggests that reports and interpretations of studies with statistically non-significant findings are frequently inconsistent with the actual results. In other words, more than a few of today’s “scientific” publications convey impressions not supported by the data.

We now run the risk of overreacting not only to good research but to bad research as well. I hope that you will join me in pressing our policy-makers to put statistical significance into proper perspective. Or am I the only one who fears that policy analysts are putting too much faith in data and too little in a strategic vision of an efficient and effective health system with limited resources? Please share your thoughts.
