ADAM PARTNERS


Mathematical Models, as the vocabulary of the MACHINE

In MATHEMATICAL MODELS on July 31, 2011 at 10:04 pm
[Image: Hippocrates stares down at the UCSF doctors, c...]

HIPPOCRATES NO LONGER MATTERS?

Mathematical Models, as the vocabulary of the MACHINE, cast some magical spell over people. As soon as the words are uttered, Mathematical Models take the place of common sense, though they are common sense twice removed. And most of the time they cannot be falsified, as they predict FUTURE events that have implications for the here and now. Nobody cares in twenty or thirty years; by then the models are smiled at as archaic attempts. In the case of medicine, models are not that lucky, and we see the real limitations they suffer from. They are VERY WRONG most of the time. That’s the problem.

READ:

Tools designed to predict mortality may not forecast a patient’s demise very accurately, researchers have found.

In a meta-analysis of 118 predictive models, the median area under the curve (AUC) was just 0.77, John Ioannidis, MD, of the University of Ioannina in Greece, and colleagues reported online in the Archives of Internal Medicine.
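A quick note on the metric, since the article assumes it: the AUC is the probability that a randomly chosen patient who died was given a higher risk score than a randomly chosen survivor, so 0.50 is coin-flip discrimination and 1.00 is perfect. A minimal Python sketch with toy data (not the study's) showing what a score distribution near the reported median looks like:

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical risk scores: non-survivors score higher on average than survivors.
rng = np.random.default_rng(0)
died = rng.normal(loc=1.0, scale=1.0, size=200)       # 200 deaths
survived = rng.normal(loc=0.0, scale=1.0, size=800)   # 800 survivors

y_true = np.concatenate([np.ones(200), np.zeros(800)])
y_score = np.concatenate([died, survived])

# AUC = P(score of a random death > score of a random survivor); ~0.76 here.
print(round(roc_auc_score(y_true, y_score), 2))

Even this apparently decent separation leaves many survivors scoring above many of the deceased, which is roughly what an AUC of 0.77 looks like in practice.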

“Most of the tools included in our analysis are not sufficiently accurate for wide use in clinical practice,” they wrote.

“I think that doctors should continue to use prognostic models but they should understand better what their performance is and its limitations,” Ioannidis told MedPage Today in an email. “Typically, prognostic performance is limited, which means that putting too much trust in a model can lead to a wrong conclusion about whether a patient will die or not.”

A multitude of prognostic tools have been developed to predict death in a wide variety of conditions, as forecasting mortality can help patients make sound medical decisions and hold realistic expectations.

To assess the performance of these models, the researchers reviewed 94 studies spanning 240 assessments of 118 predictive tools. All were published in 2009; evaluating the entire predictive literature would be a huge task requiring hundreds of researchers, they explained.

The most commonly evaluated models included the Acute Physiology And Chronic Health Evaluation (APACHE) II model and the MELD score (Model for End-Stage Liver Disease).
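The MELD score is one of the few tools here with a compact, published closed-form formula, which makes it easy to see what these models actually compute. A minimal Python sketch of the commonly quoted version (the official UNOS implementation adds capping, rounding, and dialysis rules that are simplified here):

import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl):
    # Commonly quoted MELD formula; lab values below 1.0 are floored at 1.0
    # so the logarithms stay non-negative, and creatinine is capped at 4.0.
    b = max(bilirubin_mg_dl, 1.0)
    i = max(inr, 1.0)
    c = min(max(creatinine_mg_dl, 1.0), 4.0)
    return round(3.78 * math.log(b) + 11.2 * math.log(i) + 9.57 * math.log(c) + 6.43)

print(meld(bilirubin_mg_dl=2.0, inr=1.5, creatinine_mg_dl=1.2))  # -> 15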

The majority of the studies were done in the U.S. or Europe, had a prospective design, and pertained to acute disease conditions — mainly in critical illness; gastroenterology; and cardiovascular, infectious, and malignant diseases.

The median sample size of the studies was 502 patients, with a median number of deaths of 71.

Ioannidis and colleagues found that the AUC of the models ranged from 0.43 to 0.98, with a median of 0.77.

A total of 40% of the tools had an AUC of 0.80 or better, but only 10% had an AUC above 0.90.

Only 10 of the tools were evaluated in four studies or more. Of these 10, only one had a summary AUC above 0.80, the researchers said.

There was also marked heterogeneity in the performance of these tools across diverse settings and studies, which could be due to selective analysis and reporting, leading to exaggerated estimates of predictive discrimination, they explained.
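Summary AUCs across studies, and the heterogeneity behind them, are typically estimated with a random-effects meta-analysis. The sketch below uses the standard DerSimonian-Laird estimator on hypothetical per-study AUCs (illustrative numbers, not the paper's data, and not necessarily the authors' exact method):

import numpy as np

def dersimonian_laird(effects, variances):
    # Random-effects pooled estimate plus the I^2 heterogeneity statistic.
    e = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed) ** 2)              # Cochran's Q
    k = len(e)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * e) / np.sum(w_star)
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return pooled, i2

aucs = [0.72, 0.81, 0.66, 0.79, 0.75]   # hypothetical AUCs for one tool
ses = [0.03, 0.04, 0.05, 0.03, 0.04]    # hypothetical standard errors
pooled, i2 = dersimonian_laird(aucs, [s ** 2 for s in ses])
print(f"pooled AUC {pooled:.2f}, I^2 {i2:.0f}%")

A high I^2 means the studies disagree more than sampling error alone can explain, which is exactly the pattern the authors flag.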

“Efforts at standardization of reporting are important in this regard,” they wrote.

Higher AUCs were associated with certain characteristics, including lower journal impact factor and larger sample size (P=0.01 and P=0.02, respectively).

The association with journal impact factor may be explained by the lower methodologic quality accepted in lower-impact journals, the researchers said.

Higher predictive value of the tools was also seen among the highest-risk patients, and among children (P=0.002 and P<0.001, respectively).

The study was limited because it was restricted to analyses published in a single year, and because it only evaluated AUC, even though there are other metrics to determine predictive value.
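For context, one widely used complement to the AUC is the Brier score, which checks calibration (whether predicted probabilities match observed outcomes) rather than just ranking. A minimal sketch with toy numbers, not from the study:

from sklearn.metrics import brier_score_loss

y_true = [1, 0, 0, 1, 0]             # 1 = died, 0 = survived
y_prob = [0.9, 0.2, 0.4, 0.6, 0.1]   # predicted probabilities of death
# Mean squared error of the probabilities: 0 is perfect, and 0.25 matches
# always guessing 50%.
print(brier_score_loss(y_true, y_prob))  # 0.076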

Still, the researchers concluded that efforts are needed to “organize and synthesize the predictive literature,” in order to ultimately “enhance the evidence derived from predictive research and to establish standard methods for developing, evaluating, reporting, and eventually adopting new predictive tools in clinical practice.”

In the meantime, they said, clinicians “should be cautious about adopting new, initially promising predictive tools, especially complex ones based on expensive measurements that have not been extensively validated and shown to be consistently useful in practice.”


FROM

Kristina Fiore, Staff Writer, MedPage Today
Published: July 26, 2011
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and
Dorothy Caputo, MA, RN, BC-ADM, CDE, Nurse Planner