Study highlights 'lurking question' of measuring EHR effectiveness: The science in Medical Informatics is dead

The science in Medical Informatics is dead.

I'm not even going to use academic fabric softener in my assertion, e.g., "may be dead," "appears to be dead," or "is it dead?" (as a question).

It's dead.

When HIT experts recommend moving the goalposts when existing studies don't give the results they'd like to see, rather than first and foremost critically and rigorously examining why we're seeing unexpected results, science is dead.


Study highlights 'lurking question' of measuring EHR effectiveness

December 22, 2010 | Molly Merrill, Associate Editor

WASHINGTON – Hospitals' use of electronic health records has had just a limited effect on improving the quality of medical care nationwide, according to a study by the nonprofit RAND Corporation.

The study, published online by the American Journal of Managed Care, is part of a growing body of evidence suggesting that new methods should be developed to measure the impact of health information technology on the quality of hospital care.


[In other words, we're not getting the results we thought and hoped we'd get with "Clinical IT 1.0", so let's alter the study methodologies and endpoints -- rather than use the results we have to identify the causes and improve the technology to see if we can do better with "Clinical IT 2.0."

Further, it's not as if there's no other data on why health IT might not work as hoped - ed.]

Most of the current knowledge about the relationship between health IT and quality comes from a few hospitals that may not be representative, such as large teaching hospitals or medical centers that were among the first to adopt electronic health records.


[This implies "other" "representative" hospitals are either not doing it right, or the technology is ill suited for them and may never work. Which is it? We really need to know before we proceed with hundreds of billions more in this "Grand Experiment" - ed.]

The RAND study is one of the first to look at a broad set of hospitals to examine the impact that adopting electronic health records has had on the quality of care.

The research included 2,021 hospitals – about half the non-federal acute care hospitals nationally. Researchers determined whether each hospital had EHRs and then examined performance across 17 measures of quality for three common illnesses – heart failure, heart attack and pneumonia. The period studied spanned from 2003 to 2007.

The number of hospitals using either basic or advanced electronic health records rose sharply during the period, from 24 percent in 2003 to nearly 38 percent in 2006.

[How many billions of dollars diverted from patient care needs does that represent? - ed.]

Researchers found that the quality of care provided for the three illnesses generally improved among all types of hospitals studied from 2004 to 2007. The largest increase in quality was seen among patients treated for heart failure at hospitals that maintained basic electronic health records throughout the study period.

However, quality scores improved no faster at hospitals that had newly adopted a basic electronic health record than at hospitals that did not adopt the technology.

[In other words, the improvements or lack thereof had little to do with electronic vs. paper record keeping - ed.]

In addition, at hospitals with newly adopted advanced electronic health records, quality scores for heart attack and heart failure improved significantly less than at hospitals that did not have electronic health records.

[In other words, the clinical IT was probably impairing doctors compared to simpler paper methods and good HIM personnel - ed.]

EHRs had no impact on the quality of care for patients treated for pneumonia.

Researchers say the mixed results may be attributable to the complex nature of healthcare.

[That is likely true, but the mixed results may also be due, perhaps even in major part, to poorly designed and/or poorly implemented IT - ed.]

Focusing attention on adopting EHRs may divert staff from other quality improvement efforts.

[That speaks to EHR overkill, poor usability, unfitness for purpose, and other issues that may or may not be remediable in the short or even the long term - ed.]

In addition, performance on existing hospital quality measures may be reaching a ceiling where further improvements in quality are unlikely.

[That speaks to a low or even negative ROI for the hundreds of billions of dollars being diverted to the IT sector - ed.]

"The lurking question has been whether we are examining the right measures to truly test the effectiveness of health information technology," said Spencer S. Jones, the study's lead author and an information scientist at RAND. "Our existing tools are probably not the ones we need going forward to adequately track the nation's investment in health information technology."


["Probably" not the ones we need? How can the authors know this? This is not science, it is speculation.
Further, I'd say the scientific imperative, before we design "the right measures" to "truly" test the effectiveness of HIT, is to understand why the measures we're using now are not showing the desired results. Perhaps they are perfectly adequate and are revealing crucial flaws, overestimations and false assumptions that need to be dealt with now, not after another round of billions is spent - ed.]

New performance measures that focus on areas where EHRs are expected to improve care should be developed and tested, according to researchers.

[In pharma clinical trials, this is akin to what is known as "changing the study methodologies and endpoints" - a form of manipulating clinical research, usually with the true ultimate endpoint of money - ed.]

For example, EHRs are expected to lower the risk of adverse drug interactions, but existing quality measures do not examine the issue.


[I believe the studies of CPOE done to date have not been consistently supportive, and in fact suggest that CPOE might create new medication errors - ed.]

"With the federal government making such a large investment in this technology, we need to develop a new set of quality measures that can be used to establish the impact of electronic health records on quality," Jones said.


[This is truly putting the cart before the horse, as I wrote here. The studies showing the benefit should have long preceded the "large investments" that were decided upon - ed.]

Support for the study was provided by RAND COMPARE (Comprehensive Assessment of Reform Efforts). RAND developed COMPARE to provide objective facts and analysis to inform the dialogue about health policy options. COMPARE is funded by a consortium of individuals, corporations, corporate foundations, private foundations and health system stakeholders.

Other authors of the study are John L. Adams, Eric C. Schneider, Jeanne S. Ringel and Elizabeth A. McGlynn.

The overarching assumption is that the metrics are wrong, not the quality and fitness for purpose of the technology, a 'wrongness' that is painfully obvious from the aforementioned other literature, e.g., link, link, link, and the many posts at this blog referring to still other literature. Are the authors unaware, one might ask? I know they are not. (Or are they blinded? That is, could external pressures be affecting their thought processes? The arguments might not unreasonably be construed as skewed from that perspective.)

Recognizing the atrocious user experience and mission-hostile nature of the technology (link), how disruptive it is (link, link), and how poorly it is often implemented by domain amateurs to support the financial battles of payers rather than the cognitive processes of clinicians, I am amazed there are any signs of improvement at all, rather than outright deterioration. (That there is not outright deterioration of care demonstrates, if anything, the hard mental labor and ingenuity of clinicians in working around the technology's deficits.)

I note that the last time RAND looked at such matters there were problems with pro-health IT bias, among other issues (see my Feb. 2009 post "Heartland Institute Research & Commentary: Health Information Technology").

Carl Sagan wrote that science is a candle in the dark in a demon-haunted world.

It seems the demons are winning.

-- SS