I have posted two guest posts by Dr. Scott Monteith, a psychiatrist/informaticist, at the Jan. 2011 post "Interesting HIT Testimony to HHS Standards Committee, Jan. 11, 2011, by Dr. Monteith" and the Dec. 2010 post "Meaningful Use and the Devil in the Details: A Reader's View".
Here is another, with his permission. He is responding to a talking point from a health IT commentary website that was distributed among the AMIA evaluations special interest group readership.
Dr. Monteith asks some very probing questions. He writes:
I would like to respond to what I see as one of the most important and challenging “real-life” issues confronting clinicians, one captured in this excerpt [below, from the multi-vendor-sponsored site HIStalk - ed.]:
HIStalk: ... Somewhere between “we vendors are doing the best we can given a fiercely competitive market, economic realities, and slow and often illogical provider procurement processes that don’t reflect what those providers claim they really want” and “we armchair quarterback critics think vendors are evil and the answer is free, open source applications written by non-experts willing to work for free under the direct supervision of the FDA” is the best compromise.
That is, this excerpt performs the helpful task of framing “the best compromise” somewhere between two extreme viewpoints.
It would be helpful (at least for me) if this group could discuss what “the best compromise” actually ‘looks like’ in practice. How does one actually understand and live within “the best compromise”?
Let’s start with a relatively simple scenario:
What should clinicians do when they are working with EHRs that have known “problems” that are putting patients at risk, and those problems are not being promptly addressed, either directly or indirectly (for example, through an acceptable “work-around” or other adjustments to the EHR or to local business processes)?
Should the clinician continue to use the EHR and…
- assume that others (e.g., vendor, IT department, administration, etc.) will fix the problem(s)?
- report the problem(s)? Once? Twice? Three or more times? (How many?) To whom?
- inform the patient of the known problem(s) (if the problem(s) apply to the patient)?
- inform the patient that we do not have a good understanding of how to balance or even understand the risks posed by the EHR, given the dearth of peer-reviewed literature and algorithms? (What is “acceptable risk” for a given EHR problem? Does the EHR-related problem’s risk/benefit analysis change if the patient is in the hospital for a simple, non-life-threatening problem vs. a complex, life-threatening problem?)
- give the patient the option to NOT use the EHR? (Note that we almost always give patients the choice to refuse other “risky ventures” such as diagnostic procedures and treatments.)
- inform their medical malpractice insurance company of the EHR-related problems?
- submit the problem(s) to the organization’s ethics committee, if there is one?
- report the problem(s) to the organization’s risk management staff?
- report the problem(s) in writing or verbally?
- stop using the EHR?
Etc. (including some combination of the above).
Can providers (especially physicians) legitimately rationalize, given our ethical obligations (to patients and our colleagues) and legal obligations (to patients and the states where we are licensed), the use of tools that pose risks to patients and providers, when those risks are not spelled out, not well understood, not peer-reviewed, etc.?
(Obviously everything we do has risks, but we are obligated to reveal and discuss those risks as noted above. Further, the risks/benefits of a given diagnostic or treatment intervention are the product of peer-reviewed algorithms. Are patients aware of the risks associated with their doctor’s or hospital’s EHR?)
Again, the above excerpt suggests that there is a “best compromise.” But what is/are “the best compromise(s)”?
I joke with friends that I am a “radical moderate” – that is, I usually find myself committed to the “middle ground” in most complex and thoughtful discussions. But when it comes to EHRs, I am finding it difficult to define or understand what a “moderate” or an acceptable “best compromise” looks like.
Given the current EHR exuberance driven by ONC’s incentive dollars and vendor profits (or hoped-for profits), we all know that the “politically correct” approach is to “go along” and be an “early adopter” (without too many protests). But is the politically correct approach really the “best compromise,” especially in light of our ethical and legal obligations?
I am anxious to hear what other people think about this matter. I am sincerely seeking help in better understanding a sensible, real-life “best compromise” for those of us in the trenches.
Note that if we cannot define “the best compromise,” then what does that say about us? How can we justify “getting on board” with patient care tools (e.g., EHRs, eRx’ing, etc.) that pose risks (known and unknown), with no clear process for informing patients, no mechanism for giving patients the choice to decline these e-tools, no clear evidence-based risk/benefit analyses, etc.?
My own pithy, initial responses are as follows:
Re: Given the current EHR exuberance driven by ONC’s incentive dollars and vendor profits (or hoped-for profits), we all know that the “politically correct” approach is to “go along” and be an “early adopter” (without too many protests). But is the politically correct approach [i.e., "go along to get along" - ed.] really the “best compromise,” especially in light of our ethical and legal obligations?
No, for the reasons after your comma. [i.e., in light of our ethical & legal obligations - ed.]
One should look to the past for lessons in what the "compromises" might be.
How about the Flexner Report of 1910 as a start?
The treatises on human research protections, penned largely after the Tuskegee experiments and the horrors of WW2, such as those listed on the NIH web page "Office of Human Subjects Research, Regulations and Ethical Guidelines," might also shed some light:
- 45 CFR 46, Protection of Human Subjects
- The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research
- The Nuremberg Code: Directives for Human Experimentation
I further opine:
Re: How can we justify “getting on board” with patient care tools (e.g., EHRs, eRx’ing, etc.) that are posing risks (known and unknown), with no clear processes for informing patients, not giving patients their choice to use these e-tools, no clear evidence-based risk/benefit analyses, etc.?
Perhaps with the line that "I never make mistakes ... everything I do is an experiment."
The technology is experimental. Perhaps the best way forward is to treat it as such.
In medicine I think there's a rich history of how to conduct proper (and improper) experimental research.
Again, Dr. Monteith raises some critical questions that need to be answered.
Or, more correctly, needed to be answered a long time ago, and long before planned national rollouts of healthcare information technology.
-- SS