
Know-Nothing, or Industry Shill? You Be The Judge.

I have not been writing much the past few weeks due to other concerns, and will probably not write much this summer.

However, I have been commenting on various posts on other blogs.  One resultant thread stands out as yet another example of a likely industry shill or sockpuppet defending the state of health IT, oddly at a blog on pharma (the same blog that was the topic of my post "More 'You're Too Negative, And You Don't Provide The Solution To The Problems You Critique', This Time re: Pharma").

Industry-sponsored sockpuppetry is a perverse form of stealth marketing or lobbying: it works by trying to discredit detractors.

The following exchanges meet the sockpuppetry criteria once pointed out by business professional and HC Renewal reader Steve Lucas in 2010 in a post about an industry sockpuppet caught red-handed through IP forensics here:

... In reading this thread of comments I have to believe [anonymous commenter moniker] "IT Guy" is a salesperson. My only question is: Were you assigned this blog or did you choose it? We had this problem a number of years ago where a salesperson was assigned a number of blogs with the intent of using up valuable time in trying to discredit the postings.

In my very first sales class we learned to focus on irrelevant points, constantly shift the discussion, and generally try to distract criticism. I would say that HCR is creating heat for IT Guy’s employer and the industry in general.

I find it sad that a company would allow an employee to attack anyone in an open forum. IT Guy needs to check with his superiors to find out if they approve of this use of his time, and I hope he is not using a company computer, unless once again this attack is company sanctioned.

In the hopes that continued exposure of this nonsense can educate and thus help immunize against its effects, I present this:

At "In the Pipeline", a blog on medicinal chemistry (the science of drug making) and other pharma topics, a rebuttal to a claim that over 500,000 people (not 50K) might have died due to VIOXX was posted entitled "500,000 Excess Deaths From Vioxx? Where?"

That 500K possibility appeared on a UK site 'THE WEEK With the FirstPost' at "When half a million Americans died and nobody noticed."  The author of the FirstPost piece started out by raising the point made by publisher Ron Unz that life in China might be more valued than that in the U.S., where major pharma problems and scandals generally meet what this blog calls "the anechoic effect."  (In China, Unz noted, perpetrators of scandalous drug practices actually get arrested and suffer career repercussions.)

FirstPost notes:

ARE American lives cheaper than those of the Chinese? It's a question raised by Ron Unz, publisher of The American Conservative, who has produced a compelling comparison between the way the Chinese dealt with one of their drug scandals – melamine in baby formula - and how the US handled the Vioxx aspirin-substitute disaster ... (Unz) "The inescapable conclusion is that in today's world and in the opinion of our own media, American lives are quite cheap, unlike those in China." 

Not to argue the merits of the order-of-magnitude-expanded VIOXX claim, which I disagree with, but having concern for the general state of ethics in biomedicine in the U.S., I posted the following comment in the comment thread of the rebuttal post at "In the Pipeline" at this link:

6. MIMD on May 30, 2012 11:38 AM writes...

While I agree the VIOXX numbers here are likely erroneous, the point of the cheapening of the value of American life is depressingly accurate.

For instance, look how readily companies lay people off, ruining them, and perhaps forcing them out of the workforce forever.

Also, currently being pushed by HHS is a medical device for rapid national implementation known to cause injury and death. The government is partially financing it to the tune of tens of billions of dollars, probably with Chinese money no less.  [Either that, or with freshly-printed money adding to the trillions of $ in our deficit - ed.]

There are financial penalties for medical refuseniks (non-adopters).

However, FDA, the Institute of Medicine and others readily admit in publication they have no idea of the magnitude of the harm because of lack of data collection, impediments to information diffusion and even legal censorship of the harms data. In effect, we don't even know if the benefits exceed the harms, and FDA and IOM admit it. FDA in fact refers to the known injuries and deaths from this device as "likely the tip of the iceberg."

Perhaps to some it's no longer a big deal if people are injured and/or die in data gathering for this medical enterprise.

E.g., see "FDA Internal Memo on H-IT Risks", and the Inst. of Medicine report on the same issues here.
It's all for the greater social good, they might say.
 
The following anonymous reply ensued:
10. Watson on May 30, 2012 1:47 PM writes...

@6 You keep using that word - "device" - I do not think it means what you think it means
 
I replied:

11. MIMD on May 30, 2012 3:48 PM writes...
 #10

'medical device' is the term chosen by FDA and SMPA (EU).
But that's a distraction from the points I raise in the linked post about the experiment.

To which this confused misdirection came forth from the ether:

12. Watson on May 30, 2012 4:46 PM writes...

The linked article is discussing the poor state of "medical device records" because of a lack of uniform specifications with respect to Health Information Technology, i.e. how these technologies code data and the challenges of making the data obtained uniform across a wide variety of implementations and vendors. [Erroneous, incomplete misdirection - ed.]

It seems that the concern, far from being that Health Information Technology is "killing" people, is that the Medical Device Records may contain duplicate reports for adverse health events because of health care providers encoding the data more than once for each event.  [What in the world? - ed.] This problem with replication exists because there are different health record systems where this data needs to be input, and perhaps the same patient uses different physicians who have different systems, but all of which are required to report adverse events. [I have little idea what this even means - ed.]
In other words, "Health Information Technology" is not some monolithic "device", and your conflation of "HIT" which is more properly an abstract term with the "devices" which are used to generate some forms of patient data is in my view the real distraction. [The "real" distraction from the ethical issues of the HIT experiment is terminology about medical devices?  Misdirection again from the ethical issue, and of a perverse nature - ed.]

Yes, some of the "devices" (a blood pressure monitor for example) may have underlying issues, which the FDA regulations for "medical device records" are designed to identify. The FDA, as a governmental entity has no constitutional power to mandate certain devices or implementations are to be used.  [Now we're in la-la land of misinformation and distraction- ed.] The power that the FDA does have is to inspect that the manufacturer of a device keeps appropriate medical device records (e.g. a lot of syringes, or a batch of formulated drug) and addresses any complaints about the device to the satisfaction of the FDA.

My replies:

17. MIMD on May 31, 2012 8:51 PM writes...

#12

It seems that the concern, far from being that Health Information Technology is "killing" people, is that the Medical Device Records may contain duplicate reports for adverse health events because of health care providers encoding the data more than once for each event

Yes, fix just that little problem and then the problems with clinical IT are solved! (Actually, I'm not even sure what you're referring to, but the evidence is that fixing it as you suggest is the cure.)  [Sarcasm - ed.]

The FDA, as a governmental entity has no constitutional power to mandate certain devices or implementations are to be used.

You are also right about FDA. They were completely toothless even in this situation. [Sarcasm again - ed.]

18. MIMD on May 31, 2012 9:10 PM writes...

#12

In other words, "Health Information Technology" is not some monolithic "device", and your conflation of "HIT" which is more properly an abstract term with the "devices" which are used to generate some forms of patient data is in my view the real distraction.

Those who conducted the Tuskegee experiments probably felt the same way.

It's all about definitions, not ethics, and not data - which FDA as well as IOM or the National Academies, our highest scientific body, among others, admits, as in the linked posts in #6, is quantitatively and structurally lacking on risks and harms.

I don't really mean to laugh at you, not knowing how little you really know about the Medical Informatics domain, but you bring to mind this Scott Adams adage on logical fallacy:

FAILURE TO RECOGNIZE WHAT’S IMPORTANT
Example: My house is on fire! Quick, call the post office and tell them to hold my mail!

And with that, I move on, letting others enjoy the risible comments surely to follow! :-)

I could not have been more correct.

In typical industry shill/sockpuppet fashion comes this, with clear evidence of a not-so-clever liar which I've bolded:

19. Watson on June 1, 2012 12:35 AM writes...
@18

I read the articles you originally linked to, and my comments were based upon trying to interpret your meaning from those selections. I worked in the industry and had to deal with GMP, and had to make sure to follow all of the guidelines with respect to medical device manufacturing and electronic records. I understand the terminology very well. Luckily, I never had to deal with "health IT", but I did have to pore over enough pages of Federal Register legalese to know that what is sufficient is not necessarily what is best.  [Right.  See below - ed.]

Is that risible enough for you?

The link that you provided in @17 was a much more concrete example, and if you had referenced it in your original post, would have cleared up much of the confusion that I (and I assume @9) faced in understanding what it was you were trying to convey. It would have been useful if you had explained which device or devices you were talking about. If you had more than conjecture to back up the Chinese money trail, and if you had provided an example of a company that has been damaged by being a refusenik, those would have supported your argument as well. [Continuing haphazardly with the irrelevant as a distraction in an attempt to shift the focus from the ethical issue of nationally implementing HIT in a relative risk-information vacuum, having weakly conceded the main argument's been lost - ed.]

Straw man and ad hominem fallacies are pretty transparent around here, and I wish you the best with both. [Another attempt at diversion - ed.]

I assure you that I recognize what's important, that I have ethics, and that I care about people having reliable healthcare. [This seems a form of post-argument-lost attempt to seize the moral high ground - ed.]

I then point out the nature of what is likely a bold-faced lie.  Someone who's read the Federal Register in depth would likely know FDA's authority is not a "Constitutional power", as bolded below:

20. MIMD on June 1, 2012 6:44 AM writes...

@19

The FDA taxonomy of HIT safety issues in the leaked Feb. 2010 "for internal use only" document "H-IT Safety Issues" available at the link in my post #6 is quite clear:
- errors of commission
- errors of omission or transmission
- errors in data analysis
- incompatibility between multi-vendor software applications or systems

This is further broken down in Appendices B and C, with actual examples.

Both this FDA internal report and the public IOM report of 2011 (as well as Joint Commission Sentinel Events Alert on health IT of a few years ago, and others) make it abundantly clear there is a dearth of data on the harms, due to multiple cultural, structural and legal impediments to information diffusion.

Yes, it's in the linked IOM report at #6 entitled "Health IT and Patient Safety: Building Safer Systems for Better Care". See for instance the summary pg. S-2, where IOM states about limited transparency on H-IT risk that "these barriers to generating evidence pose unacceptable risks to safety."

Argue with them, not me.

Back to my original point: national rollout of this medical device (whatever you call it is irrelevant to my point, but see Jeff Shuren's statement to that effect here) under admitted conditions of informational scarcity regarding risks and harms represents a cheapening of the value of patients' lives. Cybernetics Over All.

As to your other misdirection, spare me the lecture. It's not ad hominem to call statements like "The FDA, as a governmental entity has no constitutional power to mandate certain devices or implementations are to be used" for what they are - laughable (and I am being generous).

FDA's authority is statutory, not written in the Constitution. Same with their parent, HHS. To get quite specific, on human subjects experimentation, which the H-IT national experiment is, the statutory authority for HHS research subject protections regulations derives from 5 U.S.C. 301; 42 U.S.C. 300v-1(b); and 42 U.S.C. 28. [The USC or United States Code is the codification by subject matter of the general and permanent laws of the United States.  HHS revised and expanded its regulations for the protection of human subjects in the late 1970s and early 1980s. The HHS regulations on human research subjects protections themselves are codified at 45 CFR (Code of Federal Regulations) part 46, subparts A through D. See http://www.hhs.gov/ohrp/humansubjects/guidance/. - ed.]

A real scientist would have known things like this before posting, or have made it their business to know.
Tell me: are you in sales? Not to point fingers, but with your dubious evasion of the ethical issue that was the sole purpose of my post, and your other postings using misdirection and logical fallacy to distract, you fit that mindset.

You certainly don't sound like a scientist. Any med chemist worth their salt (pun intended) would have absorbed the linked reports and ethical issues accurately, the first time.

I then pointed out that I had moved this 'discussion' to the HC Renewal blog, as it is not relevant per se to pharma, the major concern of In the Pipeline.

So which is it: an industry shill/sockpuppet (as in the perverse example at this link), or just a dull, ill-informed but opinionated person who happens to read blogs for medicinal chemists (where layoffs have been rampant in recent years), takes issue with my attacks on those practices, and defends health IT like a shill?

You be the judge.

Whether a shill or know-nothing contributed the cited comments, it is my hope this post contributes to an understanding of pro-industry sockpuppetry.

-- SS

Experiments on Top of Experiments: Threats to Patients Safety of Mobile e-Health Devices - No Surprise to Me

As noted by columnist Neil Versel at MobiHealthNews.com in a Mar. 14, 2012 post "Beware virtual keyboards in mobile clinical apps":

... Remember the problems Seattle Children’s Hospital had with trying to run its Cerner EMR, built for full-size PC monitors, on iPads? The hospital tried to use the iPad as a Citrix terminal emulator, so the handful of physicians and nurses involved in the small trial had to do far too much scrolling to make the tablet practical for regular use in this manner.

[From that post: As CIO magazine reported last week, iPads failed miserably in a test at Seattle Children’s Hospital. “Every one of the clinicians returned the iPad, saying that it wasn’t going to work for day-to-day clinical work,” CTO Wes Wright was quoted as saying. “The EMR apps are unwieldy on the iPad.” - ed.]


Thank heaven it was a small trial, instead of a typical forced rollout to an entire clinical community. Someone seems to have grasped the experimental nature of the effort.

Well, there may be a greater risk than just inconvenience when tablets and smartphones stand in for desktop computers. According to a report from the Advisory Board Co., “[A] significant threat to patient safety is introduced when desktop virtualization is implemented to support interaction with an EMR using a device with materially less display space and significantly different support for user input than the EMR’s user interface was designed to accommodate.”

The report actually is a couple months old, but it hasn’t gotten the publicity it probably deserves. We are talking about more than user inconvenience here. There are serious ramifications for patient safety, and that should command people’s attention.


Unfortunately, far too little about health IT safety commands people's attention. Either 1) it is merely assumed that health IT is inherently beneficent, or 2) the risks are deliberately ignored for - I'm sorry to note - profit and career advancement.

How many CIOs or even end users have considered another one of the unintended consequences of running non-native software on a touch-screen device, that the virtual, on-screen keyboard can easily take up half the display? “Pop-up virtual keyboards obscure a large portion of the device’s display, blocking information the application’s designer intended to be visible during data entry,” wrote author Jim Klein, a senior research director at the Washington-based research and consulting firm.


How many CIO's have considered unintended consequences of such experiments-on-top-of-experiments (i.e., handheld or other lilliputian computing devices on top of the HIT experiment itself)? Probably few to none.

The typical hospital CIO, usually from an MIS background and generally lacking any meaningful background in research, computer science, medicine, medical informatics, social informatics, human-computer interaction, and other research domains, is in effect a "turnkey, shrinkwrapped-software implementer." In fact, most have backgrounds woefully inadequate for any type of clinical device leadership role. They may even lack a degree of any kind, as major HIT recruiters expressed in the following philosophy ca. 2000:


"I don't think a degree gets you anything," says healthcare recruiter Lion Goodman, president of the Goodman Group in San Rafael, California about CIO's and other healthcare MIS staffers. Healthcare MIS recruiter Betsy Hersher of Hersher Associates, Northbrook, Illinois, agreed, stating "There's nothing like the school of hard knocks." In seeking out CIO talent, recruiter Lion Goodman "doesn't think clinical experience yields [hospital] IT people who have broad enough perspective. Physicians in particular make poor choices for CIOs. They don't think of the business issues at hand because they're consumed with patient care issues," according to Goodman. (Healthcare Informatics, "Who's Growing CIO's".)


I wonder just how many CIO's "from the school of hard knocks" were put into action by those groups.

Back to the MobiHealthNews.com article:


... Klein said that users have two choices to deal with a display that’s much smaller than the software was designed for. The first is to zoom out to view the whole window or desktop at once, but then, obviously, users have to squint to see everything, and it becomes easy to make the wrong selection from drop-down menus and radio buttons.

Or, users can zoom in on a small part of the screen. “This option largely, if not completely, eliminates the context of interaction from the user’s view, including possible computer decision-support guidance and warnings, a dangerous trade-off to be sure,” Klein wrote.

In either case, the virtual keyboard makes it even more difficult to read important data that clinicians need to make informed decisions about people’s health and to execute EMR functions as designed.

Good observations. Two points:

1. As far back as the mid-1990s, in my teaching of postdoctoral fellows in my role as Yale faculty in Medical Informatics, I uniformly presented the following 'diagram' regarding my beliefs about handheld devices (then commonly known as PDAs) as tools for significant EHR interaction, due to their limited screen real estate:



Mid 1990's wisdom: small handhelds as desktop replacements at the bedside - just say "no"


This was before today's hi-res screens on small devices, but the limited real estate and its ramifications were obvious to critical thinkers who knew both medicine and medical informatics, even in the mid 1990's.

In the same time frame, experiments with HP95 handheld PCs running DOS failed just as miserably at the hospital where I later became CMIO. (One benefit: I did get to salvage two of the devices from the trash bin for my obsolete computer collection!):


HP95LX - full DOS computer equivalent to an IBM PC (except, of course, for screen size).


Therefore, IMO the Advisory Board's 2012 findings merely verify what was obvious almost two decades ago.

Small devices are adjuncts only, suitable for limited uses (and even then, in my view, only after extensive RCTs with informed consent).

2. Another issue that arises is more fundamental. The article notes:

“It seems clear that running even a well-designed user interface on a device significantly different than the class of devices it was intended to be run on will lead to additional medical errors,” Advisory’s Klein commented.

The critical thinking person's question is: who knows if the "class of device" the app was "intended to run on" is itself appropriate or optimal?

Commercial clinical IT (with the exception of devices that require special resolutions, pixel densities, contrast ratios etc., such as PACS imaging systems) is usually designed for commercially available hardware.

That means the same size/type of computer monitor you obtain at Best Buy or Wal Mart.

Is that sufficient? Is that optimal?

As in my Feb. 2012 post "EHR Workstation Designed by Amateurs", who really knows?


Click to enlarge. A workstation in an actual tertiary-care hospital ICU, 2011. How many things are wrong here besides the limited display size? See aforementioned post "EHR Workstation Designed by Amateurs".


These systems are not robustly cross-tested in multiple configurations - for example, multiple large-screen environments vs. a single screen.

In summary, performing an experiment with small devices on top of another experiment - the use of cybernetic intermediaries (HIT) in healthcare that is already known to pose patient risk - is exceptionally unwise.

It would be best to decide what the optimal workstation configuration is, as applicable to different clinical environments, in limited RCT's with experts in HCI strongly involved, before putting patients at additional risk with lilliputian information devices that only a health IT Ddulite could love.

Ddulites: Hyper-enthusiastic technophiles who either deliberately ignore or are blinded to technology's downsides, ethical issues, and repeated local and mass failures.

-- SS

A Critical Review of a Critical Review of e-Prescribing ... Or Is It CPOE?

In PLoS Medicine, the following article was recently published by researchers at the University of New South Wales in Australia:

Westbrook JI, Reckmann M, Li L, Runciman WB, Burke R, et al. (2012) Effects of Two Commercial Electronic Prescribing Systems on Prescribing Error Rates in Hospital In-Patients: A Before and After Study. PLoS Med 9(1): e1001164. doi:10.1371/journal.pmed.1001164


The section I find most interesting is this:

We conducted a before and after study involving medication chart audit of 3,291 admissions (1,923 at baseline and 1,368 post e-prescribing system) at two Australian teaching hospitals. In Hospital A, the Cerner Millennium e-prescribing system was implemented on one ward, and three wards, which did not receive the e-prescribing system, acted as controls. In Hospital B, the iSoft MedChart system was implemented on two wards and we compared before and after error rates. Procedural (e.g., unclear and incomplete prescribing orders) and clinical (e.g., wrong dose, wrong drug) errors were identified. Prescribing error rates per admission and per 100 patient days; rates of serious errors (5-point severity scale, those ≥3 were categorised as serious) by hospital and study period; and rates and categories of postintervention “system-related” errors (where system functionality or design contributed to the error) were calculated.

Here is my major issue:

Unless I am misreading, this research took place in hospitals (i.e., on hospital "wards") and does not seem to focus on (or even refer to) discharge prescriptions.

I think it would be reasonable to say that what are referred to as "e-Prescribing" systems are systems used at discharge, or in outpatient clinics/offices, to communicate with a retail pharmacy that is not involved in inpatient care.

From the U.S. Centers for Medicare and Medicaid Services (CMS), for example:

E-Prescribing - a prescriber's ability to electronically send an accurate, error-free and understandable prescription [theoretically, that is - ed.] directly to a pharmacy from the point-of-care

I therefore think the terminology used in the article as to the type of system studied is not well chosen. I believe it could mislead readers not experienced with the various 'species' of health IT.

This study appears to be of an inpatient Computerized Practitioner Order Entry (CPOE) system, not e-Prescribing.

Terminology matters. For example, in the U.S. the HHS term "certification" is misleading purchasers about the quality, safety and efficacy of health IT. HIT certification as it exists today (granted via ONC-Authorized Testing and Certification Bodies) is merely a features-and-functionality "certification of presence." It is not like an Underwriters Laboratories (UL) safety certification that an electrical appliance will not electrocute you.

(This is not to mention the irony that one major aspect of Medical Informatics research is to remove ambiguity from medical terminology, e.g., via the decades-old Unified Medical Language System project or UMLS. However, as I've often written, the HIT domain lacks the rigor of medical science itself.)

I note that if this were a grant proposal for studying e-Prescribing, I would return it with a low ranking and a reviewer comment that the study proposed is actually of CPOE.

That said, looking at the nature of this study:

The conclusion of this paper was as follows. I am omitting some of the actual numbers, such as confidence intervals, for clarity; see the full article, available freely at the above link, for those data:

Use of an e-prescribing system was associated with a statistically significant reduction in error rates in all three intervention wards. The use of the system resulted in a decline in errors at Hospital A from 6.25 per admission to 2.12 and at Hospital B from 3.62 to 1.46. This decrease was driven by a large reduction in unclear, illegal, and incomplete orders. The Hospital A control wards experienced no significant change. There was limited change in clinical error rates, but serious errors decreased by 44% across the intervention wards compared to the control wards.

Both hospitals experienced system-related errors (0.73 and 0.51 per admission), which accounted for 35% of postsystem errors in the intervention wards; each system was associated with different types of system-related errors.

I note that "system related errors" were defined as errors "where system functionality or design contributed to the error." In other words, these were unintended adverse events as a result of the technology itself.

The authors conclude:

Implementation of these commercial e-prescribing systems resulted in statistically significant reductions in prescribing error rates. Reductions in clinical errors were limited in the absence of substantial decision support, but a statistically significant decline in serious errors was observed.

The authors do acknowledge some limitations of their (CPOE) study:

Limitations included a lack of control wards at Hospital B and an inability to randomize wards to the intervention.

Thus, this was mainly a pre-post observational study, certainly not a randomized controlled clinical trial.

Not apparently accounted for, either, were potential confounding variables related to the CPOE implementation process (as in this comment thread).

In that thread I wrote to a commenter [a heckler, actually, apparently an employee of HIT company Meditech] with a stated absolute faith in pre-post studies that:

... A common scenario in HIT implementation is to first do a process improvement analysis to improve processes prior to IT implementation, on the simple calculus that "bad processes will only run faster under automation." There are many other changes that occur pre- and during implementation, such as training, raising the awareness of medical errors, hiring of new support staff, etc.

There can easily be scenarios (I've seen them) where poorly done HIT's distracting effects on clinicians are moderated to some extent by process and other improvements. Such factors need to be analyzed quite carefully, datasets and endpoints developed, and data carefully collected; the study design and preparation need to occur before the study even begins. Larger sample sizes will not eliminate the possible confounding effects of these factors and many more not listed here.

The belief that simple A/B pre-post tests that look at error rate comparisons are adequate is seductive, but it is wrong.

Stated simply, in pre-post trials the results may be affected by changes that occur other than the intervention. HIT implementation does not involve just putting computers on desks, as I point out above.
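
To make that concrete, here is a toy sketch (illustrative Python with entirely hypothetical numbers, not data from the Westbrook et al. study) of how a naive pre-post comparison folds the effect of co-interventions into the apparent benefit of the IT itself:

# Toy illustration with hypothetical numbers: a naive pre-post comparison
# attributes the whole change to the IT intervention, even when co-interventions
# (process redesign, training, new staff) moved the baseline at the same time.

errors_per_admission_before = 6.0   # hypothetical baseline error rate
effect_of_process_redesign  = -1.5  # hypothetical change from non-IT co-interventions
effect_of_it_system         = -2.0  # hypothetical true effect of the IT itself
new_it_caused_errors        = +0.6  # hypothetical "system-related" errors introduced

errors_per_admission_after = (errors_per_admission_before
                              + effect_of_process_redesign
                              + effect_of_it_system
                              + new_it_caused_errors)

naive_pre_post_estimate = errors_per_admission_before - errors_per_admission_after
true_net_it_effect = -(effect_of_it_system + new_it_caused_errors)

print(f"Naive pre-post 'IT benefit': {naive_pre_post_estimate:.1f} errors/admission")
print(f"True net IT effect:          {true_net_it_effect:.1f} errors/admission")
# The naive estimate (2.9) silently bundles the 1.5 gained from process redesign
# into the apparent IT benefit (really 1.4) - exactly the confounding described above.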

In other words, the study was essentially anecdotal.

The lack of RCTs in health IT is, in general, a violation of traditional medical research methodology for studying medical devices. That issue is not limited to this article, of course.

Next, on ethics:

CPOE has already been demonstrated in situ to create all sorts of new potential complications, such as in Koppel et al.'s "Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors", JAMA. 2005;293(10):1197-1203. doi: 10.1001/jama.293.10.1197, which concluded:

In this study, we found that a leading CPOE system often facilitated medication error risks, with many reported to occur frequently. As CPOE systems are implemented, clinicians and hospitals must attend to errors that these systems cause in addition to errors that they prevent.

CPOE technology, at best, should be considered experimental in 2012.

In regard to e-Prescribing proper, there's this: Errors Occur in 12% of Electronic Drug Prescriptions, Matching Handwritten, and this: Upgrading e-prescribing system can bump up error risk, to consider; in other words, the literature is conflicting, confirming that the technology remains experimental.

This current study confirmed that some (CPOE) errors that would not have occurred with paper did occur with cybernetics, amounting to "35% of postsystem errors in the intervention wards."

In other words, patient Jones was now subjected to a cybernetic error that would not have occurred with paper, in the hopes that patients Smith and Silverstein would be spared errors that might have occurred without cybernetic aid.

Even though the authors observe that "human research ethics approval was received from both hospitals and the University of Sydney", patient Jones did not provide informed consent to experimentation with what really are experimental medical devices, as I've written often on this blog [see note 1], so I'm not certain the full set of ethical issues has been well addressed. This is not limited to this occasion, however; it represents a pervasive, continual worldwide oversight with regard to clinical IT.

Furthermore, and finally: of considerable concern is another common limitation of all health IT studies, which I believe is often willful.

What really should be studied before justifications are given to spend tens of millions of dollars/Euros/whatever on CPOE or other clinical IT is this:

The impact of possible non-cybernetic interventions (e.g., additional humans and processes) to improve "medication ordering" (either CPOE or e-Prescribing) that might be FAR LESS EXPENSIVE, and that might have far fewer IT-caused unintended adverse consequences, than cybernetic "solutions."

Instead, pre-post studies are used to justify expenditures of millions (locally) and tens or hundreds of billions (nationally), with results sometimes like this affecting an entire country.

There is something very wrong with this, both scientifically and ethically.

-- SS

Note:

[1] If these devices are not experimental, why are so many studying them to see if they actually work, to see if they pose unknown dangers, and to try to understand the conflicting results in the literature? More at this query link: http://hcrenewal.blogspot.com/search/label/Healthcare%20IT%20experiment


Addendum Feb. 10, 2012:

An anonymous commenter points out an interesting issue. They wrote:

The study was flawed due to its failure to consider delays in care and medication administration as an error caused by these experimental devices.

Delays are widespread with CPOE devices. One emergency room resorted to paper file cards and vacuum tubes to communicate urgency with the pharmacy. Delays were for hours.

I agree that lack of consideration of a temporal component, i.e., delays due to technology issues, is potentially significant.

I, for example, remember a more than five-minute delay in getting sublingual nitroglycerin to a relative with apparent chest pain due to IT-related causes. The problem turned out to be gastrointestinal, not cardiac; however, in another patient, the hospital might not be so lucky.

Addendum Feb. 12, 2012:

A key issue in technology evaluation studies is to separate the effects of the technology intervention from other, potentially confounding variables which always exist in a complex sociotechnical system, especially in a domain such as medicine. This seems uncommonly done in HIT evaluation studies. Not doing so will likely inflate the apparent contribution of the technology.

A "control ward" where the same education and training, process re-engineering, procedural improvements, etc. were performed as compared to the "intervention ward" (but without actual IT use) would probably be better suited to pre-post studies such as this.

A "comparison ward" where human interventions were implemented, as opposed to cybernetic, would be a mechanism to determine how efficacious and cost-effective the IT was compared to less expensive non-cybernetic alternatives.

-- SS

London Ambulance Service: Would You Like Some Death And Mayhem With Your American Healthcare IT?

It seems American companies are good at producing really noisome commercial healthcare IT and foisting it on other countries, as outlined at "Is clinical IT mayhem good for [the IT] business? UK CfH leader Richard Granger speaks out" and at "Cerner's Blitzkrieg on London: Where's the RAF?".

Yet another example: Software for the London Ambulance Service (LAS). From Wikipedia:

The London Ambulance Service NHS Trust (LAS) is the largest "free at the point of contact" emergency ambulance service in the world. It responds to medical emergencies in Greater London, England, with its ambulances and other response vehicles and over 5,000 staff at its disposal.

Thanks to the U.S., the inhabitants of London are now the unconsenting subjects of an American IT beta-testing experiment that could cost them their lives.

From E-Health Insider.com:

LAS plans for IT go-live and failure
E-Health Insider.com
25 January 2012
Shanna Crispin

London Ambulance Service NHS Trust may terminate its contract with American supplier Northrop Grumman if a second attempt to go-live with a new dispatch system fails.

The trust initially attempted to launch the CommandPoint computer aided dispatch system in early June last year.

However, the technical switch-over to the new system had disastrous effects; with the system failing, staff having to use pen and paper, and then finally aborting the go-live by reverting to the old CTAK dispatch system.


Health IT can kill you even before you ever reach the hospital...

An investigation into the incident has found the response to calls was delayed by more than three hours in some cases. One patient has lodged a legal claim for the delay he experienced, and the service has received four additional complaints.

A patient died in one of the calls affected. However, a separate investigation concluded that it could not be determined whether they would have survived if the response had been faster.


In other words, the patient very well might have survived without long delays for the ambulance to arrive.

Board papers drawn up for a board meeting next week say an investigation into the 8 June go-live attempt concluded that critical configuration issues were not identified during the testing phase.

It also found there were no operational procedures in place in the event of a critical system failure and that the product failed to deliver the system, technical and operational functionality expected.


At least in this case, an absolution of the software itself was not made. One wonders if the vendor was "held harmless" contractually for this somber outcome.


The trust has since been working to further test the system, and is planning to go-live again on 28 March.

However, the trust’s director of information management and technology, Peter Suter, said if that go-live failed then “the contract with Northrop Grumman would need to be reconsidered.”


That will make two chances to get it right. In life-critical IT, I would only have given one.

A defective first-responder system is, on first principles, a public health menace. There is nothing to argue here, nothing to discuss on that point.

I note that disrupting the first-responder system in London would be the envy of terrorists, especially at the time of the London 2012 Olympic Games. However, who needs them when you have U.S. IT personnel who create a system as described?


The trust completed testing the software prior to Christmas, when it began training staff. Leading up the March go-live, the software will be subjected to four separate live runs, with the system staying live for progressively longer periods of time.

If the system fails to go live in March, the trust will abandon any further attempts to go-live before the Olympics in July.


That's still very tight timing to discover all the bugs in an IT system, in preparation for expected increased need during the Olympics...


Instead, it will keep operating the current CTAK system. However, the trust decided to procure a new system in 2007 because CTAK was deemed ‘unstable’ and in need of replacement.

An analysis of the CTAK system has now determined it is stable enough to handle the increased pressure during the Olympics, which is estimated to be an increase of 5.6% to 8.9% on top of the usual volume for this time of year.


One wonders if the "new system" was not needed at all, but was instead sold as vaporware by impressively attired, good-haired, shiny-toothed, fast-talking salespeople to hapless decision-makers with all sorts of promises of cybernetic and financial miracles.

(I've been in that game before from both sides - as a potential customer, and as part of a health IT sales team.)

One also wonders if, should the system be dismantled due to a second failure, the British taxpayers who paid for it will get their money refunded.

-- SS

Dr. Scott Monteith: On a New Wonder "Drug" on the Market

I've posted several guest posts by Dr. Scott Monteith, a psychiatrist/informaticist, at Healthcare Renewal.

These include the Mar. 2011 post "On 'The Best Compromise' on Physicians and Use of Troublesome Health IT", the Jan. 2011 post "Interesting HIT Testimony to HHS Standards Committee, Jan. 11, 2011, by Dr. Monteith" and the Dec. 2010 post "Meaningful Use and the Devil in the Details: A Reader's View".

Here's a new guest post from Dr. Monteith regarding a new "Wonder Drug" on the market:

New Drug on the market!

This new drug claims to be incredibly effective. It’s very expensive, but the sellers assure us that it’s worth it!

Using the drug is complicated, and the health care team will likely experience side effects including frustration, reduced efficiency, fatigue, lack of attention, confusion, and cognitive dissonance (as the drug often “appears” not to work -- but it really does work and we know because the manufacturers consistently tell us it does).

By the way, the sellers of this new drug say that side effects are the fault of the people who use the drug, not the drug itself.

The drug isn’t proven to work, but its investors view it as promising and tell us that this small detail can be ignored. Besides, it would be costly to prove that it’s effective (remember, it’s already a very expensive drug).

And there are case studies showing that it appears to work. Some people reportedly really like it (these people, in fact, often become drug salesmen). Never mind that there are also large numbers of case studies indicating that it’s ineffective and dangerous, and can even kill people.

[Of course, cases of success are scientifically robust and universally valid; cases of adverse events are merely anecdotal (link) - ed.]

Nor do we have clear best practices related to correct dosages and when to take it, but the drug’s producers tell us that more of the drug is generally better than less. Everyone will have to figure out these minor details on their own. It’s recommended that you hire experts to help you with this detail. No problem – there are experts around every corner (they’re called “consultants”).

Please join the manufacturers in their campaign to use and promote this new drug – the government has! Uncle Sam will pay you an incentive to use this new experimental drug over the next few years. If you refuse to use it, however, beware! Uncle Sam will penalize caregivers who refuse to use the drug by reducing their payments by 1% in the first year, and then accelerating these reductions in years 2-5.

We also need to work hard to keep groups of citizens from regulating it, or asking inconvenient questions like “can you please prove that it works before we take it and risk side effects and spend large amounts of money on it?” (You know, the same kind of nasty questions that we ask of other health care interventions.)

Almost forgot…once you start the drug, it's very hard to quit using it. No, it's not heroin. Don't be silly!

Finally, we’re working hard to keep the manufacturers safe from any liability associated with the drug. Yes, yes, I know – they claim it’s safe and works and all problems are related to the people who take it. But these “safe harbors” (whether through government intervention or strong user agreements) are necessary ... just in case ...

By now I’m sure that you want to run out to your pharmacy to buy this wonderful new drug. You may also want to invest in its maker as well. So here are the details…

The generic name is "HIT." The trade name is "EHR." Its stock symbol is ONC.

I think the point is very well made.

-- SS

Why 99 Percent of the Irrationally Exuberant About Health IT Need To Be Removed From Healthcare

At Roy Poses' cross post "Why 99 percent of health care should be angry" over at the KevinMD blog, I introduced a comment into the "eruption of controversy" (his term here) caused by his post.

My comment was on the topic of government and health IT:

As one of Roy Poses' co-bloggers and a Medical Informaticist, I can say with certainty that government involvement in healthcare has been disastrous. Specifically, via ONC, ARRA and the HITECH Act, prematurely pushing still-experimental healthcare information technology on an unsuspecting medical profession (for the most part) and public. See "An updated reading list on health IT" at http://tinyurl.com/emrreadingl..., .

A reply typical of the irrationally exuberant was added to the thread (emphases mine):

What are you basing your "certainty" on? The examples discussed in the links sound like a case of bad configuration of an EMR. It could also be a just a poor solution from a vendor. Do you know if these were even a certified applications? I would like to suggest not painting all EMR implementations and the overall value of EMR’s from a single, albeit tragic, example. [I.e., an "anecdote" - ed.] A well implemented EMR, configured in collaboration with an organization’s physicians, has been repeatedly proven to reduce medical and medication errors. Why would any educated person, including legislators and executives, support the use of a tool that would increase harm, not safety.

Education aside, we will all be patients at some point so our innate need for self preservation would seem contrarian to arbitrary investments in useless technology to manage our care. Our current health delivery method produces far more harm than the new technology being implemented to address it. We need to embrace technology and make it work for us rather than putting our heads in the sand. Take the following quote as an example:

"That it will ever come into general use, notwithstanding its value, is extremely doubtful because its beneficial application requires much time and gives a good bit of trouble, both to the patient and to the practitioner because its hue and character are foreign and opposed to all our habits and associations." - The London Times, 1834 commenting on the "stethoscope"

Note that this reply came after I presented a link to a long list of articles, more than 50, with links to each article or its abstract for ease of reference, and a personal account of healthcare IT failure.

The articles challenge the beliefs in technological determinism common about health IT, i.e., that computers + medicine 'automagically' lead to better medicine, because, well, of the addition of computers, which must improve medicine, just - because.

The reason I write that the reply was typical of the irrationally exuberant is the interrelationship between irrationality, logical fallacy, and absence of evidence. These characteristics are usually present in the writings proffered by those so afflicted - and, I should add, by those with vested interests in health IT, a.k.a. conflicts of interest.

I replied:


You are lacking references supporting your arguments, which in themselves display logical fallacy.

I urge all readers to see my linked references list at the top of this thread, examine some of them (such as Jon Patrick's work on gross EHR defects, the ECRI Institute's Top Ten List of Healthcare Hazards, Romano et al.'s "Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality" and others).

There are articles from reputable sources indicating today's health IT, lacking cognitive support and other necessities for clinicians (such as per the National Research Council itself in an investigation led by health IT pioneers Octo Barnett and William Stead, see http://www8.nationalacademies.... ) does not improve quality of care, and can cause harm.

These articles should raise caution in any physician, nurse or hospital contemplating use of this technology. These reports should not be cavalierly ignored, but should be a flag for great caution. The point is, with the literature conflicting, the technology should be considered experimental and caution used when deployed on human subjects. That includes both patients and clinicians, the former who can be injured or killed, the latter whose careers can be ruined through computer-caused or computer-aggravated errors. [Note: in health IT experiments, clinicians are, in fact, also experimental subjects - ed.]

Re: "The examples discussed in the links sound like a case of bad configuration of an EMR" - you omit the existence of clinical IT defects and problems such as poor software engineering causing unreliability, mission hostile human-computer interfaces (e.g., see http://www.tinyurl.com/hostile... ), incorrect or incomplete decision support algorithms, terminological problems, and other issues. You seem to indicate the findings in the reading list may be "anecdotal." A crushing reply to that line of thought, from an expert in Australia, is here: http://hcrenewal.blogspot.com/... .

As far as "certification" of HIT, this has little if anything to do with safety, reliability, usability, etc. ( e.g,, see http://hcrenewal.blogspot.com/... ). "Certification" of health IT is not validation of safety, usability, efficacy, etc., but a pre-flight checklist of features, interoperability, security and the like. The certifiers admit this explicitly. See the CCHIT web pages for example.

You use the logical fallacy of "appeal to authority" - or show severe naivete - in asking "why would any educated person, including legislators and executives, support the use of a tool that would increase harm, not safety."

"We need to embrace technology and make it work for us rather than putting our heads in the sand" - I ask - why now, if the technology is not ready? This seems like an appeal to novelty and perhaps the bandwagon fallacy (see http://www.nizkor.org/features... ).

Regarding your 1834 London Times quote, that was in a time before the human subjects experimentation guidelines such as the Belmont Report, World Medical Association Declaration of Helsinki, Guidelines for Conduct of Research Involving Human Subjects at NIH, the Nuremberg Code, and others came into being.

That said, the use of the 1834 stethoscope analogy is a type of red herring fallacy (http://www.nizkor.org/features... ). A stethoscope and enterprise clinical IT have little in common, the latter being potentially harmful to the point of causing patient death through interference in clinical care. (I note that if the 1834 story was brought up as an allusion to doctors and nurses who dislike today's IT being "Luddites" or the like, then that's an ad hominem fallacy.)

We as a society have supposedly learned something since 1834 regarding experimental medical devices. Or have we? FDA's Jeffrey Shuren MD, JD, Director of CDRH, has admitted explicitly that health IT systems are medical devices with definite, but unknown, levels of risk - the exact words were that FDA's statistics "may represent only the tip of the iceberg in terms of the HIT-related problems that exist." That is prima facie evidence the devices are experimental.

However, FDA refrains from regulating them under the FD&C Act, as it does pharma IT, other medical devices, drugs, etc., because health IT is a political "hot potato" (see http://hcrenewal.blogspot.com/... , http://hcrenewal.blogspot.com/..., and http://hcrenewal.blogspot.com/... ).

As is customary at Healthcare Renewal, at those three posts are links to source, quoted in full context.

I've replied to so many irrationally exuberant commenters on this very blog that I could have authored the reply above in my sleep.

Two points:

1. My reply and its links (and the source those links lead to) can and should be used as a "template" by clinicians to educate themselves, to reply to the health IT irrationally exuberant in their organizations, and to those in government prematurely pushing this technology onto clinicians;

2. The health IT irrationally exuberant, being irrational, ill-informed, and often markedly resistant to education, need to be removed from healthcare entirely. Their cavalier attitudes about cybernetic medical experiments are dangerous, and have no place in medical affairs. Such people impede, rather than help remediate, the quality, safety, usability, and efficacy of health IT. In doing so, they contribute to increased risk and to actual patient harm. The irrationally exuberant are part of the problem, not part of the solution.

-- SS

Spend Billions More on HIT When Public-Health Services Get Crunched by Budget Woes?

I have the answer to these cutbacks of public health services.

In the midst of economic chaos, let's spend tens of billions or even better, hundreds of billions of dollars more on experimental healthcare IT.

(It worked out so well for the UK's NPfIT, we should follow the NHS's example of how to wisely spend our crucial healthcare billions.)

I will comment no further:

Wall Street Journal Health Blog
October 5, 2011, 10:00 AM ET
Public-Health Services Get Crunched by Budget Woes
By Betsy McKay

Immunizations, emergency preparations for hurricanes, and restaurant inspections are among local public-health services being cut back or eliminated amid budget constraints.

Some 55% of the nation’s county and city health departments reduced or eliminated at least one program between July 2010 and June 2011, and the public-health workforce continued to shrink, according to a new survey by the National Association of County and City Health Officials.

The cuts hit maternal and child health services (at 21% of the departments reporting cuts), personal health services (20%), emergency preparedness (20%), chronic disease screenings (17%), and food safety (11%), among other programs.

Health departments lost 5,400 jobs in the first half of this year, after losing 6,000 in all of 2010. There are currently about 120,600 local health department employees across the country after those cutbacks. While the workforce has been shrinking since 2008, the downsizing “is now eating into program capacity,” says Robert Pestronk, NACCHO’s executive director.

Particularly worrying are the cutbacks in emergency-preparedness programs, he says. “It’s troublesome given what we’re seeing in terms of weather conditions and threats in communities,” he tells the Health Blog. Health-department employees help plan for emergencies such as hurricanes, make sure supplies are in place and work as responders.

Meantime, cutbacks in immunization programs are making it harder for some children to get needed polio, tetanus and other preventive vaccinations, while reductions in food-safety programs mean fewer restaurant inspections or staff to interview people sickened in a food-borne illness outbreak, Pestronk says.

These woes aren’t limited to local health departments. The Centers for Disease Control and Prevention has seen its budget for preparedness and response fall by more than $350 million since 2005, to about $832 million in fiscal 2011. That challenges the CDC’s ability to respond to a pandemic like the type featured in the recent bio-thriller “Contagion,” Rear Admiral Ali Khan, the CDC’s chief of public health preparedness and response, told the Health Blog at a screening of the film.

Also see my March 2010 post "Hospitals Under the Knife: Sacrificing Hospital Jobs for the Extravagance of Healthcare IT?" where I observed:

... In effect, NY hospital physicians, nurses and support staff will lose their jobs due to budget shortfalls, at the same time the NY hospitals have been spending hundreds of millions of dollars on the extravagance of experimental clinical IT systems whose benefit is still an unknown.

Perhaps some of those millions could have been better spent on human beings, such as employees or better yet, patient care.

-- SS

Blake Medical Center (Bradenton, Fla.) Ignores Health IT Warning Letter From 100 Staff Physicians

In an article in the Bradenton Herald, Bradenton, FL, I found the following passage I bolded below truly striking:

Digital doctors: Will technology help or harm?
Sept. 4, 2011


BRADENTON -- At Blake Medical Center, the prognosis for pen and paper is poor: Doctors’ traditional tools for tracking cases and ordering medications and procedures are being phased out in favor of computers.

... Blake [Medical Center], which has been building its EHR system for years, launched a feature in April for doctors to enter their daily progress notes electronically. In June, it added a feature for ordering medications and procedures via computer. It’s part of a national push, called hCare, by Blake’s parent company HCA.

But many doctors were reluctant to give up their pens. More than 100 staff physicians signed a letter asking for the computerization project to be put on hold, saying the system is cumbersome and likely to induce errors.

Wow. Physicians with guts.

If I were an executive at this hospital, I'd make sure I were fully insured and my assets were in my spouse's name, especially in lawsuit-happy Florida.

If a patient injury or death occurs related to the EMR issues raised in the letter from 100 staff physicians (issues that would, or should, seriously concern if not outright alarm any reasonable person), there could be charges of negligence, including criminal negligence, against the administration.


Criminal negligence: The failure to use reasonable care to avoid consequences that threaten or harm the safety of the public and that are the foreseeable outcome of acting in a particular manner ... Criminal negligence is negligence that is aggravated, culpable or gross.

A jury will not be happy with the letter being ignored, either.


... The project’s supporters acknowledge doctors and nurses have made mistakes as they learn the system, though they are unaware of any resulting in harm to patients. [Is that how you want your healthcare to proceed? - ed.]


In other words, no patient exposed to this experiment and its risks (a key issue here) was known to "hit the jackpot" - yet:

[Image: The EHR Slot Machine of Risk, from my March 2011 post "On an EMR Forensic Evaluation by Professor Jon Patrick from Down Under: More Thoughts." Caption: Congratulations! You've hit the EHR mis-processing jackpot! Perhaps today is a good day to die...]


But they contend that, in the long run, an electronic system will be safer than using paper records -- something critics grudgingly admit.

Still, even as a $27 billion federal program is encouraging hospitals and doctors to launch EHR systems, no regulatory agency tests or regulates them. So a crucial question remains unanswered: Does it truly improve care? [Nobody really knows - e.g., see recent post here, and literature list here - ed.]


Critics readily admit health IT has the potential to improve healthcare, but contend that the technology is not nearly ready to do so, especially on a national basis, and remains experimental. Far more work is needed. For example, according to the National Research Council of the United States:

Current efforts aimed at the nationwide deployment of health care information technology (IT) will not be sufficient to achieve medical leaders' vision of health care in the 21st century and may even set back the cause, says a new report from the National Research Council. The report, based partially on site visits to eight U.S. medical centers considered leaders in the field of health care IT, concludes that greater emphasis should be placed on information technology that provides health care workers and patients with cognitive support, such as assistance in decision-making and problem-solving.

... In the long term, success will depend upon accelerating interdisciplinary research in biomedical informatics, computer science, social science, and health care engineering.

Critics also contend that organizations experimenting with this technology without patient informed consent, on the basis of some future good, need lessons on the ethics of human experimentation.

Risk to patients seems to be of little concern to this industry.

It also now seems that hospital executives have become so arrogant that they fail to recognize the risks to themselves in ignoring their own medical staffs on HIT issues.

Perhaps they think they will be able to simply blame the physicians, using clinicians as scapegoats, but with official sites like this now coming online ... I think that excuse will rapidly lose traction.

-- SS

Sept. 19 addendum:

A major motivator for ignoring the physicians' warnings at this HCA (Hospital Corp. of America) hospital may be financial. See my Aug. 2011 post "Why EHR's Are Mission Hostile."

-- SS

Debt Ceiling deal could endanger health care law - and it would be beneficial if health IT/HITECH were part of the trimmings

A story entitled "Deal could endanger health care law," about the recent political deal to raise the debt ceiling, appeared in Politico (hat tip: Drudge Report):

Deal could endanger health care law

By JENNIFER HABERKORN | 8/3/11 11:28 PM EDT

Politico.com

The debt ceiling agreement could jeopardize millions of dollars, and perhaps billions, in initiatives from President Barack Obama’s health care reform law if the super committee can’t come up with required spending cuts.

Many of the pots of money in the law — one of the Democrats’ most prized pieces of legislation — could get trimmed by the debt deal’s sequestration, or triggered cuts. The funds for prevention programs and community health centers, grants to help states set up insurance exchanges and co-ops, and money to help states review insurance rates could be slashed across the board if the panel can’t find enough cuts this fall.


My suggestion:

Put health IT and the HITECH Act (the health IT component that 'somehow' found its way into the fantastically successful American Recovery and Reinvestment Act of 2009) on the table.

Health IT devices are medical devices that, in their current state of development and in a lawless regulatory environment, are dangerous, unregulated and unproven (e.g., see the 'Reading List' here).

Cutting HIT funding would probably also help avoid waste.

Note what just occurred in the UK:

Time For A Summer Vacation at ONC - And Ethical Education of Health IT Zealots

A Microsoft Developer Network (MSDN) blogger, "Family Health Guy" (a.k.a. Sean Nolan), writes in a July 31, 2011 post entitled "Time for a summer vacation at ONC" that:

I’ve spoken at some length about my enthusiasm for the current leadership at HHS and ONC. President Obama has both directly and indirectly engaged some really gifted individuals to help us address healthcare challenges through the use of information technology --- which is awesome ... I’ve had the good fortune to participate in a few of these, and it’s been some of the most rewarding work of my career.

Truth is, I’m not used to seeing such great work out of government. So I’m a bit reluctant to throw out what could be perceived as a negative message --- but after my own two-week vacation thinking about it I’m convinced the time is right to ask ONC:

PLEASE, TAKE A BREAK!
JUST STOP TALKING FOR AWHILE AND LET US IMPLEMENT STUFF.

Why stop now? I think the answer is increasingly clear. Between Meaningful Use Stage 1, the Direct Project and the Health Data Initiative, government has kicked industry out of a funk it’s been in for the previous decade, and we’re seeing a ton of really exciting and positive innovation. But nobody, and certainly not ONC, knows at a detailed level how to turn that innovation into ubiquitous market reality.

What we need now is a period of implementation, competition and iteration to figure out how to deliver on the promise.


I find this attitude to be - well, let's just say, at the risk of throwing out what could be perceived as a negative message - psychopathic.

We're talking about real, live human patients as the subject of our "figuring."

For during the period of implementation, competition and iteration in which "we" will be figuring out how to deliver on "the promise" of experimental health IT medical devices, we shall be putting non-consenting patients at risk, injuring them with our IT lessons learned, and burying our more serious mistakes.

A far better way to figure out "how to deliver on the promise" would be to follow the protocols and procedures used to figure out how to deliver on the promise of other medical devices. These protocols and procedures, and ethical guidelines as well (link), were developed over decades in response to abuses of the past. (The Tuskegee experiments come to mind.)

That calls for controlled clinical trials under informed consent for safety and efficacy, careful study of the results, weeding out of unproven or dangerous technologies via regulation, and all this even before wide-scale rollouts of these health IT medical devices. Even after rollout, there needs to be a robust post-marketing surveillance process.

That's how it works with other medical devices and with pharmaceuticals. HIT medical devices deserve no special accommodation for any logical or ethical reason I can think of.

A lesson here: computer people, and health IT zealots of any background, need to be kept at arm's length from clinical settings, and on very short leashes in those settings.

-- SS

Aug. 3 addendum: the "period of implementation, competition and iteration to figure out how to deliver on the promise" in the UK did not turn out very well.

ePrescribing: Assuming Like This Makes An Ass (And Maybe a Corpse) Out Of You And Me

No surprise here to anyone with engineering sense (e.g., in my case, arising from my decades of experience in amateur radio, operating many different brands and types of extremely complex radio equipment):

American Medical News (American Medical Association)
Upgrading e-prescribing system can bump up error risk


Some of those risks can put patients in jeopardy in the first few weeks of implementation, a study finds.

By PAMELA LEWIS DOLAN, amednews staff. Posted June 13, 2011.

Switching to new or upgraded electronic-prescribing systems may pose patient safety risks during the transition period, despite the advanced clinical decision support tools offered by the newly implemented technology.

Let me translate this euphemism: "Patient safety risks" = risks that patients will be maimed or killed.

Many hospitals and physician practices are upgrading or switching their ePrescribing systems to meet meaningful use incentive requirements.

The collateral damage and/or roadkill on the way to "meaningful use" might be you or your loved one...

Physicians from Weill Cornell Medical College and New York-Presbyterian Hospital, both in New York, authored a report published online April 16 in the Journal of General Internal Medicine that identified challenges associated with switching to new e-prescribing technology. Some of those issues may pose safety threats during the first few weeks after implementation, the study found.

This is accurate, if you consider three months to a year of increased medical error risk "a few weeks":

Researchers examined prescribing errors that occurred from February 2008 through August 2009 at an academic-affiliated ambulatory clinic that switched from an older electronic medical record system to a new one with advanced clinical decision support for ePrescribing. They measured errors that occurred with the old system prior to implementing the new system, 12 weeks post-implementation of the new system and errors that took place one year later.

The report showed that the largest number of errors occurred before implementation, when the old system was still in use. The number of overall errors dropped from 36% to 12% one year later. [12% = more than one in ten. Why not near zero a.k.a. six-sigma for the billions of dollars spent? - ed.]

The most common errors, those caused by improper abbreviations, fell from 24% to 6% in one year. But the number of non-abbreviation errors, such as those associated with directions, frequency and dosage mistakes [just minor "issues" - ed], increased in the first 12 weeks of implementation of the new system.

[Remember, though. Three months = "just a few weeks" - ed.]


Experimental technology-related injury and roadkill figures were apparently not included.
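
To make the quoted figures concrete, here is a minimal back-of-the-envelope sketch in Python. The 36% and 12% overall error rates are the ones reported in the article; the annual prescription volume is a purely hypothetical number chosen for illustration, not a figure from the study.

# Back-of-the-envelope arithmetic on the quoted e-prescribing error rates.
# The 36% (old system) and 12% (one year after the switch) overall error
# rates are from the article; the annual prescription volume below is a
# HYPOTHETICAL assumption used only to show what these rates mean in
# absolute numbers of erroneous prescriptions.

error_rate_old = 0.36           # overall error rate on the old system (quoted)
error_rate_new = 0.12           # overall error rate one year post-switch (quoted)
annual_prescriptions = 100_000  # hypothetical clinic volume, NOT from the study

errors_old = annual_prescriptions * error_rate_old
errors_new = annual_prescriptions * error_rate_new

print(f"Hypothetical erroneous prescriptions/year, old system: {errors_old:,.0f}")
print(f"Hypothetical erroneous prescriptions/year, new system: {errors_new:,.0f}")
print(f"Relative reduction: {1 - error_rate_new / error_rate_old:.0%}, "
      f"yet still about 1 erroneous prescription in every {1 / error_rate_new:.0f}")

Even granting the large relative improvement, at these rates roughly one prescription in eight still contains an error one year after go-live - a long way from the near-zero, 'six-sigma' performance one might expect for the billions of dollars being spent.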

Now for the assumption:

Study co-author Rainu Kaushal, MD, MPH, chief of the Division of Quality and Medical Informatics at Weill Cornell Medical College, said she knew the transition from paper to electronic was difficult for most physicians. She was surprised to learn that even for experienced e-prescribers, the move to a new system can be challenging. Dr. Kaushal, who was involved in the transition studied for the report, said she found the experience to be "exceedingly difficult."

"We thought it would be more of a seamless transition because people were already accustomed to sitting in front of a computer, entering in orders and so on, so they didn't have to get used to that piece," she said.

Where do foolhardy assumptions such as "We thought it would be more of a seamless transition because people were already accustomed to sitting in front of a computer, entering in orders and so on" come from?

Do these 'experts' have absolutely no engineering sense - no sense, for example, that when you have learned to use one complex machine (a virtual one, in the case of an EMR), using another that differs in many ways is not 'automagically' a piece of cake?

By that flaw in logic, a Cessna pilot should have no problems rapidly learning to fly a 747 - or the Space Shuttle.

"But each electronic system has its nuances and learning how to utilize it and optimize the physician-computer interaction takes time. Every time a switch is made there are important issues that arise."

"Important issues" is a rather kind euphemism. Let me state the same concept via a dysphemism::

"Each electronic system has its nuances and learning how to utilize it and optimize the physician-computer interaction takes time. Every time a switch is made there are important clinical mistakes these changes cause clinicians to make. But mistakes never, ever, ever cause patients to be maimed or killed - at least not that we're going to admit publicly. Besides, these adverse outcomes are justified anyway, because we have the right to play God with peoples' lives through IT experimentation, using unregulated, unvetted medical devices without informed consent (link), for the greater good that will necessarily follow via technological determinism."


I think this is a more honest way to state the nature of the "issues" here.

Also see my June 2011 post "
Electronic medication prescribing: The Magic Bullet Theory of IT-Enabled Transformation once again bites the dust in the real world of medicine" for more on ePrescribing, and more on the change of health IT from clinician adviser to clinician governor. From that post:

As many as 12 percent of the drug prescriptions sent electronically to pharmacies contain errors, a rate that matches handwritten orders for medicine from physicians, researchers said.

-- SS