ADAM PARTNERS

PLEASE SIT DOWN. MACHINE WILL SEE YOU SHORTLY, NUMBER 445343-01.

In ARTIFICIAL INTELLIGENCE, WARNING on July 31, 2011 at 10:29 pm
An electronic medical record example

FIRST RULE OF ROBOTICS: DO NO HARM TO HUMANS. THE SECOND RULE . . .


IF YOU READ THIS carefully, you find that Johns Hopkins is saying, in effect: “More technology, less doctor = less technology, more doctor. War is peace and peace is war.” Can you spot the tiny little bit of falsehood contained in the piece below?

Consult with your doctors early and often. That’s not just good advice for patients; it’s what Stephanie Reel, the top IT officer at Johns Hopkins Medicine, says healthcare technology leaders must do to master intelligent medicine.

Speaking at this week’s InformationWeek Healthcare IT Leadership Forum in New York, Reel, head of IT at Johns Hopkins Medicine since 1994 and Chief Information Officer at the University since 1999, said the institution’s success in technology innovation is directly attributable to its habit of involving clinicians in IT projects. That point was backed up by Dr. Peter Greene, Johns Hopkins’ Chief Medical Information Officer, who joined a panel discussion I led exploring “What’s Next In Intelligent Medicine.”


There have been plenty of innovations at Johns Hopkins Medicine, a $5 billion-a-year organization that includes a renowned medical school, five hospitals, a network of physician offices, and massive research operations. The institution was among the pioneers of electronic health records (EHRs) through a clinical information system deployed in the early 1990s. The effort succeeded, Reel says, because it was initially supported by half a dozen clinicians who worked with IT to develop the system.

This interdisciplinary group has since grown to include about 75 people, and it still meets every month to “listen to the people on the front lines who are trying to make a difference,” Reel said.

Johns Hopkins’ clinical information system has evolved to embrace the latest EHR technologies, and it has also become the foundation for what Johns Hopkins calls “smart order sets.” These order sets have built-in checks, balances, and analytics to ensure that the appropriate procedures, tests, and protocols are followed for each patient.

Among the hundreds of smart order sets now in use at Johns Hopkins, one guides decisions on appropriate regimens for diabetics. Hundreds of variables and possible recommendations are preprogrammed into the order set, but the right regimen is determined through the combination of known patient history, up-to-the-moment clinical measures, and the clinician’s answers to a series of questions the system asks conditionally, based on the patient data and the answers already given.
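To make the mechanism concrete, here is a minimal sketch of what conditional order-set logic of this kind could look like. It is purely illustrative: the field names, thresholds, and recommendations are all invented, not Johns Hopkins’ actual clinical rules.

```python
# Purely illustrative sketch of conditional order-set logic.
# Field names, thresholds, and recommendations are invented,
# not Johns Hopkins' actual clinical rules.

def recommend_regimen(patient, answers):
    """Combine known history, current measures, and clinician answers."""
    # Known patient history can short-circuit the questioning entirely.
    if patient["on_insulin_pump"]:
        return "Continue home pump settings; request endocrine consult"

    # Up-to-the-moment clinical measures drive which branch is taken.
    if patient["latest_glucose_mg_dl"] > 300 or patient["ketones_present"]:
        return "IV insulin protocol; recheck glucose hourly"

    # A follow-up question is asked only when the data make it relevant.
    if patient["npo_status"] and answers.get("procedure_planned_today"):
        return "Hold scheduled insulin; use correction scale only"

    return "Weight-based basal-bolus regimen"

patient = {"on_insulin_pump": False, "latest_glucose_mg_dl": 180,
           "ketones_present": False, "npo_status": True}
print(recommend_regimen(patient, {"procedure_planned_today": True}))
```

The real systems differ mainly in scale: hundreds of such branches, each vetted before deployment.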

Smart order sets are developed by specialists and extensively studied by peer-review groups before they are embedded into patient care workflows. “The challenge is that you have to do a lot of custom work that isn’t included in off-the-shelf EHR products, so you can’t take on everything,” said Greene.

Johns Hopkins has prioritized based on risk, developing smart order sets for high-morbidity scenarios such as diabetic management and anticoagulation management.

For example, the institution has been widely recognized for its work on preventing venous thromboembolism (VTE), a dangerous blood-clotting condition whose incidence has been reduced by embedding intelligent risk-factor algorithms into admission, post-operative, and patient-transfer order sets.

The approach has significantly raised VTE assessment rates, and VTE incidents among at-risk patients have dropped accordingly, a huge achievement when lives are at stake.

One big risk to all this work is alert fatigue — the common problem whereby so-called intelligent systems and devices fire off so many alerts they are simply ignored. To minimize this, Johns Hopkins has built role-based and history-driven rules into many of its smart order sets.

Cardiologists, for example, would be assumed to be aware of the dangers of ordering both Coumadin and amiodarone for the same patient, whereas an intern would be shown an alert. But if an intern had already cleared such an alert for a particular patient on a particular day, the system recognizes that history and won’t alert that intern again that day regarding that patient.
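The suppression logic the article describes fits in a few lines. In this toy sketch, the drug pair, the role distinction, and the one-day memory come from the article; the function, data structures, and names are invented for illustration.

```python
from datetime import date

# Toy sketch of role-based, history-driven alert suppression.
# The Coumadin/amiodarone pair, the role distinction, and the one-day
# memory come from the article; everything else is invented.

KNOWN_INTERACTIONS = {frozenset(("coumadin", "amiodarone"))}
EXPERT_ROLES = {"cardiologist"}   # assumed to already know this interaction
shown_today = set()               # (user, patient, pair, date) already alerted

def should_alert(user, role, patient_id, drug_a, drug_b):
    pair = frozenset((drug_a.lower(), drug_b.lower()))
    if pair not in KNOWN_INTERACTIONS:
        return False
    if role in EXPERT_ROLES:      # role-based suppression
        return False
    key = (user, patient_id, pair, date.today())
    if key in shown_today:        # history-driven suppression
        return False
    shown_today.add(key)
    return True

print(should_alert("intern_a", "intern", "pt42", "Coumadin", "Amiodarone"))    # True: first time
print(should_alert("intern_a", "intern", "pt42", "Coumadin", "Amiodarone"))    # False: already shown today
print(should_alert("dr_b", "cardiologist", "pt42", "Coumadin", "Amiodarone"))  # False: expert role
```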

At the cutting edge of intelligent medicine is personalized care. Smart order sets are part of Johns Hopkins’ strategy on that front, and the institution also has at least half a dozen departments working on other forms of personalized care.

The Department of Oncology, for instance, is exploring the use of genomics — study of the DNA of individual patients and of cancer cells. Even today, the presence of specific genetic biomarkers can trigger patient-specific recommendations about using, or not using, certain drugs, tests and protocols.

In the future, the DNA of both the patient and his or her cancer will be “readily available and integrated into every decision we’re making about your care,” Greene said, though he acknowledged that might be years from now.

Given that there are 3 billion base pairs in the human genome, the most advanced work will involve big-data computing. Oncology researchers at Johns Hopkins are collaborating with the university’s Department of Astronomy, which has a data center with rack upon rack of graphics processing units (GPUs, not CPUs) that are routinely applied to large-scale computational astronomy calculations.

Reel and Greene encouraged their peers to push the use of predictive analytics and data on the clinical side of their operations. Electronic health records and intelligent medicine aren’t where they should be, Reel said, in part because the financial incentives have favored administrative uses of the technology — reducing cost rather than improving diagnostics and clinical care.

“We talk often about productivity gains in medicine because of the introduction of technology, and systems tend to reward quantity rather than quality,” she said.

The next step in intelligent IT support for medicine, she said, would be to work with clinicians to minimize time-wasting usability, interoperability, and security hurdles so they can spend less time interacting with technology while still getting ever-smarter decision support. That, she said, will give doctors more time with their patients.

Software fingers fake entries

In ARTIFICIAL INTELLIGENCE on July 31, 2011 at 10:16 pm

GABLES GREAT HALL, CORNELL UNIVERSITY

Just as some models do not work well, some forms of Artificial Intelligence do seem to work very well indeed. And sadly, we flesh-and-blood people lose when models fail, and we lose when AI does work, because in both instances we become a function of the machine.

READ:

If you’re like most people, you give yourself high ratings when it comes to figuring out when someone’s trying to con you. Problem is, most people aren’t actually good at it, at least as far as detecting fake positive consumer reviews goes.

Fortunately, technology is poised to make up for this all-too-human failing. Cornell University researchers have developed software that they say can detect fake reviews (PDF). The researchers tested the system with reviews of Chicago hotels. They pooled 400 truthful reviews with 400 deceptive reviews produced for the study, then trained their software to spot the difference.

The software got it right about 90 percent of the time. This is a big improvement over the average person, who can detect fake reviews only about 50 percent of the time, according to the researchers.
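The paper reports a support vector machine trained over n-gram (and psycholinguistic) features. As a rough sketch of that style of classifier, here is a scikit-learn pipeline; the toy reviews below are invented and far too few to learn anything real, whereas the study trained on 800 reviews with cross-validation.

```python
# Rough sketch of an n-gram review classifier in the spirit of the
# Cornell work. The toy reviews are invented; the real study used
# 800 Chicago hotel reviews and cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

truthful = ["Room was small but clean; check-in took ten minutes.",
            "Parking cost $30 a night, though the bed was comfortable."]
deceptive = ["My family and I had the most amazing stay of our lives!",
             "Everything was absolutely perfect and we will return forever!"]

texts = truthful + deceptive
labels = [0] * len(truthful) + [1] * len(deceptive)   # 1 = deceptive

# Unigrams and bigrams, weighted by TF-IDF, fed to a linear SVM.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["The staff made my family's stay absolutely perfect!"]))
```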

They say people fall into two camps. One type accepts too much at face value and doesn’t reject enough fake reviews. The second type is overly skeptical and rejects too many real McCoys. Despite their very different approaches, each camp is right about half the time.

The Cornell system is similar to software that sniffs out plagiarism. While the plagiarism software learns to spot the type of language a specific author uses, the Cornell software learns to spot the type of language people use when they’re being deceptive in writing a review, said Myle Ott, the Cornell computer science graduate student who led the research.

The software showed that fake reviews are more like fiction than the real reviews they’re designed to emulate, according to the researchers. In part, deceptive writers used more verbs than real review writers did, while the real writers used more punctuation than the deceptive writers. The deceptive writers also focused more on family and activities while the real writers focused more on the hotels themselves.
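Those two cues are easy to measure directly. A small sketch using NLTK’s part-of-speech tagger (assuming NLTK and its tokenizer and tagger models are installed) computes a verb rate and a punctuation rate per review; the example texts are invented.

```python
# Sketch: measuring the verb-vs-punctuation cues described above.
# Assumes NLTK with its tokenizer and tagger models downloaded, e.g.
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import string
import nltk

def verb_and_punct_rates(text):
    tokens = nltk.word_tokenize(text)
    verbs = sum(1 for _, tag in nltk.pos_tag(tokens) if tag.startswith("VB"))
    punct = sum(1 for tok in tokens
                if all(ch in string.punctuation for ch in tok))
    return verbs / len(tokens), punct / len(tokens)

real = "Room 1204: clean, quiet; $25/night parking, decent gym."
fake = "We laughed, relaxed and enjoyed every single moment with our kids!"
print("real (verb rate, punct rate):", verb_and_punct_rates(real))
print("fake (verb rate, punct rate):", verb_and_punct_rates(fake))
```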

The research team’s next steps are to use the technique with other types of service reviews, like restaurant reviews, and eventually try it with product reviews. The idea is to make it harder for unscrupulous sellers to spam review sites with fictitious happy customers.

Of course, just about any technology can be used for good or evil. The Cornell fake review spotter “could just as easily be used to train people to avoid the cues to deception that we learned,” Ott said.

This could lead to an arms race between fake review producers and fake review spotters. Ott and his colleagues are gearing up for it. “We’re considering… seeing if we can learn a new set of deception cues, based on fake reviews written by people trained to beat our original system,” he said.


Cornell software fingers fake online reviews | Crave – CNET.


Mathematical Models, as the vocabulary of the MACHINE

In MATHEMATICAL MODELS on July 31, 2011 at 10:04 pm

HIPPOCRATES NO LONGER MATTERS?

Mathematical Models, as the vocabulary of the MACHINE, cast some kind of magical spell over people. As soon as the words are uttered, Mathematical Models take the place of common sense, though they are common sense twice removed. And most of the time they cannot be falsified, as they predict FUTURE events that have implications for the here and now. Nobody cares twenty or thirty years later, when the models are smiled at as archaic attempts. In the case of medicine, models are not that lucky, and we see the real limitations they suffer from. They are VERY WRONG most of the time. That’s the problem.

READ:

Tools designed to predict mortality may not forecast a patient’s demise very accurately, researchers have found.

In a meta-analysis of 118 predictive models, the median area under the curve (AUC) was just 0.77, John Ioannidis, MD, of the University of Ioannina in Greece, and colleagues reported online in the Archives of Internal Medicine.

“Most of the tools included in our analysis are not sufficiently accurate for wide use in clinical practice,” they wrote.

“I think that doctors should continue to use prognostic models but they should understand better what their performance is and its limitations,” Ioannidis told MedPage Today in an email. “Typically, prognostic performance is limited, which means that putting too much trust in a model can lead to a wrong conclusion about whether a patient will die or not.”
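For readers outside biostatistics, the AUC has a concrete reading: it is the probability that a randomly chosen patient who died was scored as higher-risk than a randomly chosen survivor, so 0.5 is a coin flip and 1.0 is perfect discrimination. A toy illustration with invented scores:

```python
# Toy illustration of what an AUC in this range means. The scores and
# outcomes are invented; roc_auc_score is scikit-learn's standard metric.
from itertools import product
from sklearn.metrics import roc_auc_score

died_scores     = [0.9, 0.8, 0.5, 0.3]             # patients who died
survived_scores = [0.7, 0.6, 0.4, 0.2, 0.15, 0.1]  # patients who survived

# AUC as a ranking probability: the fraction of (died, survived) pairs
# in which the patient who died received the higher risk score.
pairs = list(product(died_scores, survived_scores))
print(sum(d > s for d, s in pairs) / len(pairs))   # 19/24 ≈ 0.79

# The same number via the ROC-curve computation.
labels = [1] * len(died_scores) + [0] * len(survived_scores)
print(roc_auc_score(labels, died_scores + survived_scores))  # ≈ 0.79
```

A median of 0.77 therefore means that a typical tool ranks roughly one high-risk/low-risk pair in four the wrong way round.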

A multitude of prognostic tools have been developed in order to predict death in a wide variety of conditions, as forecasting mortality can help patients make sound medical decisions and have realistic expectations.

In order to assess the performance of these models, the researchers reviewed 94 studies, spanning 240 assessments of 118 predictive tools. All were published in 2009; to evaluate the entire predictive literature would be a huge task requiring hundreds of researchers, they explained.

The most commonly evaluated models included the Acute Physiology And Chronic Health Evaluation (APACHE) II model and the MELD score (Model for End-Stage Liver Disease).

The majority of the studies were done in the U.S. or Europe, had a prospective design, and pertained to acute disease conditions — mainly in critical illness; gastroenterology; and cardiovascular, infectious, and malignant diseases.

The median sample size of the studies was 502 patients, with a median number of deaths of 71.

Ioannidis and colleagues found that the AUC of the models ranged from 0.43 to 0.98, with a median of 0.77.

A total of 40% of the tools had an AUC of 0.80 or better, but only 10% had an AUC above 0.90.

Only 10 of the tools were evaluated in four studies or more. Of these 10, only one had a summary AUC above 0.80, the researchers said.

As well, there was marked heterogeneity of these tools across diverse settings and studies, which could be due to selective analysis and reporting of studies, leading to exaggerated results of predictive discrimination, they explained.

“Efforts at standardization of reporting are important in this regard,” they wrote.

Higher AUCs were associated with certain characteristics, including lower journal impact factor and larger sample size (P=0.01 and P=0.02, respectively).

The association with journal impact factor may be explained by lower methodologic quality accepted in those journals, the researchers said.

Higher predictive value of the tools was also seen among the highest-risk patients, and among children (P=0.002 and P<0.001, respectively).

The study was limited because it was restricted to analyses published in a single year, and because it only evaluated AUC, even though there are other metrics to determine predictive value.

Still, the researchers concluded that efforts are needed to “organize and synthesize the predictive literature,” in order to ultimately “enhance the evidence derived from predictive research and to establish standard methods for developing, evaluating, reporting, and eventually adopting new predictive tools in clinical practice.”

In the meantime, they said, clinicians “should be cautious about adopting new, initially promising predictive tools, especially complex ones based on expensive measurements that have not been extensively validated and shown to be consistently useful in practice.”


FROM

Kristina Fiore, Staff Writer, MedPage Today
Published: July 26, 2011
Reviewed by Robert Jasmer, MD; Associate Clinical Professor of Medicine, University of California, San Francisco and
Dorothy Caputo, MA, RN, BC-ADM, CDE, Nurse Planner
