It's Remarkable That EHRs Can't Do What Med Students Are Taught in Years 3 and 4 ... And Remarkable That Academics Don't Push This Information to the Public
The following article's full text from the journal Applied Clinical Informatics (ACI) is not freely available, but the abstract says all that needs to be said:
Clinical Summarization Capabilities of Commercially-available and Internally-developed Electronic Health Records (Applied Clinical Informatics, Vol. 3, Issue 1, 2012)
A. Laxmisan (1), A. B. McCoy (2), A. Wright (3), D. F. Sittig (2)
(1) Houston VA Health Services Research and Development Center of Excellence, Michael E. DeBakey Veterans Affairs Medical Center and Section of Health Services Research, Department of Medicine, Baylor College of Medicine, Houston, TX; (2) School of Biomedical Informatics, The University of Texas Health Science Center at Houston (UTHealth), Houston, TX; (3) Brigham and Women’s Hospital, Harvard Medical School, Boston, MA
Summary
Objective: Clinical summarization, the process by which relevant patient information is electronically summarized and presented at the point of care, is of increasing importance given the increasing volume of clinical data in electronic health record systems (EHRs). There is a paucity of research on electronic clinical summarization, including the capabilities of currently available EHR systems.
Methods: We compared different aspects of general clinical summary screens used in twelve different EHR systems using a previously described conceptual model: AORTIS (Aggregation, Organization, Reduction/Transformation, Interpretation and Synthesis).
Results: We found a wide variation in the EHRs’ summarization capabilities: all systems were capable of simple aggregation and organization of limited clinical content, but only one demonstrated an ability to synthesize information from the data.
Conclusion: Improvement of the clinical summary screen functionality for currently available EHRs is necessary. Further research should identify strategies and methods for creating easy to use, well-designed clinical summary screens that aggregate, organize and reduce all pertinent patient information as well as provide clinical interpretations and synthesis as required.
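(An aside for readers unfamiliar with the AORTIS model named in the Methods above: the following is a minimal sketch, in Python, of what its stages - aggregation, organization, reduction/transformation, interpretation and synthesis - might look like when applied to toy laboratory and vital-sign data. The data values, reference ranges and function names are my own illustrative assumptions; this is not the study's method nor any vendor's implementation.)

# Illustrative sketch of the AORTIS stages over hypothetical data.
# Everything here (feeds, values, ranges, function names) is made up for illustration.

from collections import defaultdict

# Hypothetical raw results from two mock source systems (to be Aggregated).
LAB_FEED = [
    {"patient": "A", "test": "creatinine", "value": 2.1, "day": 1},
    {"patient": "A", "test": "creatinine", "value": 2.8, "day": 3},
    {"patient": "A", "test": "potassium", "value": 5.9, "day": 3},
]
VITALS_FEED = [
    {"patient": "A", "test": "systolic_bp", "value": 92, "day": 3},
]

# Hypothetical reference ranges used in the Interpretation step.
REFERENCE_RANGES = {"creatinine": (0.6, 1.3), "potassium": (3.5, 5.0), "systolic_bp": (100, 140)}


def aggregate(*feeds):
    """Aggregation: pull results together from multiple sources."""
    return [row for feed in feeds for row in feed]


def organize(results):
    """Organization: group results by test name, ordered by day."""
    grouped = defaultdict(list)
    for row in results:
        grouped[row["test"]].append(row)
    for rows in grouped.values():
        rows.sort(key=lambda r: r["day"])
    return grouped


def reduce_to_latest(grouped):
    """Reduction/Transformation: keep only the most recent value per test."""
    return {test: rows[-1] for test, rows in grouped.items()}


def interpret(latest):
    """Interpretation: flag values outside the (hypothetical) reference ranges."""
    flags = {}
    for test, row in latest.items():
        low, high = REFERENCE_RANGES[test]
        if not (low <= row["value"] <= high):
            flags[test] = row["value"]
    return flags


def synthesize(flags):
    """Synthesis: a one-line, human-readable summary of the flagged findings."""
    if not flags:
        return "No abnormal results in the summarized data."
    findings = ", ".join(f"{test}={value}" for test, value in sorted(flags.items()))
    return f"Abnormal on most recent values: {findings}."


if __name__ == "__main__":
    grouped = organize(aggregate(LAB_FEED, VITALS_FEED))
    print(synthesize(interpret(reduce_to_latest(grouped))))
    # Prints: Abnormal on most recent values: creatinine=2.8, potassium=5.9, systolic_bp=92.

Even this toy pipeline goes beyond simple aggregation and organization, which is roughly where the study found most commercial systems stop.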
Put more directly, EHRs were poor at presenting relevant clinical information in summary form at the point of care.
(They're even worse after care is finished; see my Feb. 2011 post "Electronic Medical Records: Two Weeks, Two Reams" on the piles of 'legible gibberish' put out as official medical records of entire hospital admissions.)
The implication is that these systems cause physicians to waste time weeding through inordinate amounts of data to find what is important for helping patients and preventing errors. They are therefore highly likely to increase clinician cognitive burden, and quite possibly to reduce the quality of care.
But don't just take my word for that.
The 2009 National Research Council report made similar observations about EHRs, in research overseen by two of HIT's preeminent pioneers, Drs. Octo Barnett and William Stead. See "Current Approaches to Healthcare Information Technology are Insufficient" and the accompanying full NRC report here, which states:
"Current efforts aimed at the nationwide deployment of health care information technology (IT) will not be sufficient to achieve medical leaders' vision of health care in the 21st century and may even set back the cause ... The report describes difficulties with data sharing and integration, deployment of new IT capabilities, and large-scale data management. Most importantly, current health care IT systems offer little cognitive support; clinicians spend a great deal of time sifting through large amounts of raw data (such as lab and other test results) and integrating it with their medical knowledge to form a whole picture of the patient. Many care providers told the committee that data entered into their IT systems was used mainly to comply with regulations or to defend against lawsuits, rather than to improve care. As a result, valuable time and energy is spent managing data as opposed to understanding the patient."
I find several aspects of the new ACI study and its findings remarkable:
First:
Clinical summarization is an essential skill for patient pickups (e.g., taking over the care of a hospitalized patient or performing a consult), patient handoffs (such as when a patient is placed in the care of a covering physician), and other related information-transfer purposes.
It is a skill taught (at least when I attended medical school) in the third and fourth years, i.e., MS3 and MS4, the clinical years of training.
I was tested on my ability to perform this task adequately, and the quality of my clinical summaries was also evaluated during internship and residency training as one of many criteria for successful completion.
The authors' results therefore indicate that EHR designers and implementers cannot, or will not, incorporate the clinical summarization skills required of third- and fourth-year medical students and interns into their products, even after several decades of product development.
I find this astonishing.
Second:
While I applaud the authors of the new ACI study for performing such work, which in the current environment of hyper-enthusiastic cybernetic technophilia in medicine might well provoke industry pushback, I find another aspect of their work astonishing (perhaps 'disappointing' is a better term):
They only published their findings in ACI.
ACI is an excellent new journal, but as a relatively specialized journal in a specialized domain, its content will probably reach on the order of thousands of people in a substantive way.
This very blog has surpassed one million "hits", we estimate; my Drexel University site on HIT difficulties (including its predecessors) has probably surpassed the quarter million mark.
My point is that, at a time when the Institute of Medicine, NIST and others have advised further study of whether health IT is safe and efficacious (because they don't really know) in order to determine whether FDA or other governmental regulation is needed, research findings such as those of Archana Laxmisan, MD, Dean Sittig, PhD, and Adam Wright, PhD deserve far wider dissemination than a highly specialized, relatively new informatics journal can provide.
Let me go further and say that physicians, by the very nature of their MD degrees, and non-physician informaticists such as Sittig and Wright, should feel ethically compelled to actively disseminate this type of finding about an experimental, suboptimal and potentially dangerous medical device (per FDA CDRH's characterization of health IT as such) more widely.
They should not leave it to bloggers such as myself to do that work for them.
This is not to single out these authors, this journal, or this article; the phenomenon is common. Health IT research findings of potentially great import sit in scientific journals that the lay citizenry rarely, if ever, sees, despite the importance of their being informed about the technology's actual and potential downsides.
How about the New York Times or Wall Street Journal? Even YouTube? These observations are not at all meant to impugn publication in ACI, whose editor-in-chief I deeply respect for publishing articles that could be viewed as "negative" about HIT. Wider diffusion in non-academic print media and/or "New Media", however, would get messages such as those of the clinical summarization deficiency study out in a way that would have far more social impact - and help protect patients.
In other words, the authors of informatics studies need to actively "get out more."
After all, the industry and government spare no effort in positive publicity, even making unabashed marketing claims not backed by the scientific literature, such as in the Feb. 24, 2012 HHS press release at this link:
“We know that broader adoption of electronic health records can save our health care system money, save time for doctors and hospitals, and save lives,” said [HHS] Secretary Sebelius.
Unlike the relationships between, say, thiazide diuretics such as hydrochlorothiazide and reductions in blood pressure, or between sterile surgical technique and lower infection rates, I don't think that in 2012 we "know" that at all (link).
-- SS
Feb. 26, 2012 addendum:
Regarding my final point about the HHS statement, a reader who wishes to remain unnamed, familiar with efforts outside the US, offers the following interesting observations:
"This [HHS statement from Sebelius] is political spin to try to keep some momentum in a project that is obviously failing. What is more they KNOW it is failing and can read the IOM report and the negative US and UK reports on EHR's.
The White House is clearly desperate to calm voter and healthcare professional opinion and this is another naked attempt to project the inevitable crisis to beyond the end of this year...well at least early November.
I know that this is true because Sebelius' language is exactly the same as that used by the UK Dept. of Health just before NPfIT completely folded...
Also, implementation rates of HIT projects must be carefully scrutinized, as they reflect the percentage of hospitals where something, usually anything, has been switched on rather than the proportion where a full EHR, PACS, CPOE, etc. has been achieved. A good question to ask is: how many units are now paperless?"
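The reader's caution about implementation statistics can be made concrete with a toy calculation. The numbers below are entirely hypothetical, chosen only to illustrate the arithmetic: a "something has been switched on" adoption rate can be several times the proportion of hospitals that have achieved a full, paperless deployment.

# Hypothetical counts for illustration only; not actual survey data.
hospitals_surveyed = 200
hospitals_with_any_module_live = 170   # e.g., one departmental system switched on
hospitals_fully_paperless = 22         # full EHR + CPOE + PACS, no paper chart

switched_on_rate = hospitals_with_any_module_live / hospitals_surveyed
full_deployment_rate = hospitals_fully_paperless / hospitals_surveyed

print(f"'Adoption' as commonly reported: {switched_on_rate:.0%}")    # 85%
print(f"Fully paperless hospitals:       {full_deployment_rate:.0%}")  # 11%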
-- SS