
Giant GSK Settlement Provides Reminder of the Pervasiveness of Stealth Marketing

The latest and biggest legal settlement involving health care to hit the news, that between GlaxoSmithKline (GSK) and the US government, has many familiar elements. As summarized by the New York Times,
In the largest settlement involving a pharmaceutical company, the British drugmaker GlaxoSmithKline agreed to plead guilty to criminal charges and pay $3 billion in fines for promoting its best-selling antidepressants for unapproved uses and failing to report safety data about a top diabetes drug, federal prosecutors announced Monday. The agreement also includes civil penalties for improper marketing of a half-dozen other drugs.

As was the case for nearly every other legal settlement we have discussed,
No individuals have been charged in any of these cases.
Thus, how well even such a large settlement will deter future wrong-doing is not clear.

Nonetheless, the documents released with the settlement provide a good record of how pervasive systematic, deceptive stealth marketing campaigns have become in health care.

In particular, the official "complaint" filed by the US Department of Justice emphasized all these elements in the stealth marketing of paroxetine (Paxil, Seroxat in the UK) to adolescents.

Manipulation of Clinical Research

We have frequently discussed how health care corporations, particularly pharmaceutical, biotechnology and device companies, now sponsor the majority of clinical research.  Their control of the design, implementation, analysis and dissemination of clinical research allows manipulation that increases the likelihood that the results will be favorable to their vested interests, usually the products and services they sell.

We have previously discussed the manipulation of Study 329 to promote the marketing of Paxil (look here and here).  However, the US DOJ document makes these concerns more official.  It included:

Manipulation of Study Endpoints

Study 329 was a randomized controlled trial of Paxil vs imipramine vs placebo for depression in adolescents. The two primary endpoints pre-specified to the US Food and Drug Administration were "the degree to which a patient's Hamilton Rating Scale for Depression ('HAM-D') total score changed from baseline"; and "the patient's 'response' to medication, as defined as (a) a 50% or greater reduction in the patient's HAM-D score, or (b) a HAM-D score of less than or equal to 8." However, initial analysis by GSK failed to show that Paxil improved either of these two endpoints. The company concluded "it would be commercially unacceptable to include a statement that efficacy had not been demonstrated, as this would undermine the profile of [Paxil]."

So the published analysis emphasized secondary outcomes: the "Study 329 investigators later added several additional efficacy measures not specified in the protocol. Paxil separated statistically from placebo on certain of these measures." Adding numerous post-hoc measures increased the likelihood of finding a difference on at least one due to chance alone.
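The danger of adding endpoints after the fact is easy to quantify. The following sketch uses generic numbers (not the actual Study 329 analyses) to show the chance of at least one spuriously "significant" result when k independent endpoints are each tested at the conventional 0.05 level on a drug with no true effect:

```python
# Familywise false-positive risk: chance of at least one spuriously
# "significant" endpoint when k independent endpoints are each tested
# at alpha = 0.05 and the drug truly has no effect on any of them.
def familywise_error(k, alpha=0.05):
    """Probability of >= 1 false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 4, 8, 19):
    print(f"{k:2d} endpoints -> {familywise_error(k):.0%} chance of a false positive")
```

With eight endpoints, roughly a one-in-three chance of a spurious "win" exists even for a completely ineffective drug.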

Manipulation of Data

Initial analysis of the data suggested that patients given Paxil experienced 11 serious adverse events, including five that appeared related to suicidal ideation or action. When the FDA later reexamined the data, "upon closer examination the number of possible suicide-related events among the Study 329 Paxil patients increased beyond the five patients GSK described in the JAACAP article as having 'emotional lability.' While collecting safety information for the FDA, GSK admitted that there were four more possible suicide-related events among Paxil patients in Study 329. In addition, the FDA later identified yet another possible suicide-related event in the Study 329 Paxil patients, which was also not among the 11 serious adverse events listed in the JAACAP article. Thus, altogether, 10 of the 93 Paxil patients in Study 329 experienced a possible suicidal event, compared to one in 87 patients on placebo. This is a fundamentally different picture of Paxil's pediatric safety profile than the one painted by the JAACAP article...."
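The contrast the DOJ describes can be checked against the reported counts. The following stdlib-Python sketch computes the crude risk ratio and a one-sided Fisher exact p-value from the 10/93 versus 1/87 figures quoted above; it is an illustration, not a reproduction of the FDA's own analysis:

```python
from math import comb

# Counts reported in the DOJ complaint (possible suicide-related events):
# 10 of 93 Paxil patients vs 1 of 87 placebo patients.
paxil_events, paxil_n = 10, 93
placebo_events, placebo_n = 1, 87

risk_ratio = (paxil_events / paxil_n) / (placebo_events / placebo_n)

# One-sided Fisher exact p-value via the hypergeometric tail:
# the probability that >= 10 of the 11 total events land in the Paxil
# arm if treatment assignment were unrelated to the events.
N = paxil_n + placebo_n            # 180 patients total
K = paxil_events + placebo_events  # 11 events total
p_value = sum(
    comb(K, x) * comb(N - K, paxil_n - x) for x in range(paxil_events, K + 1)
) / comb(N, paxil_n)

print(f"risk ratio ≈ {risk_ratio:.1f}, one-sided Fisher p ≈ {p_value:.4f}")
```

The roughly nine-fold excess of possible suicidal events in the Paxil arm is very unlikely to be a chance finding, which underscores how different the true safety picture was from the published one.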

Manipulation of Dissemination

The report describing the results of Study 329 (Keller MB, Ryan ND, Strober M et al.  Efficacy of paroxetine in the treatment of adolescent major depression: a randomized controlled trial.  J Am Acad Child Adolesc Psychiatry 2001; 40: 762-772.  Link here. ) was written under the control of GSK. "In April 1998, GSK hired Scientific Therapeutics Information, Inc (STI) to prepare a journal article about Study 329. GSK worked closely with STI on the article by providing a draft clinical report to 'serve as a template for the proposed publication.'"

The published report of Study 329 "mischaracterized the results." "Although the ... article identified the study's two primary endpoints in the abstract, the article did not explicitly state that Paxil failed to show superiority to placebo on either of the primary efficacy measures." Also, "the article did not explicitly identify the two protocol-specified primary outcome measures - or that Paxil failed to show superiority to placebo on these two measures. Instead the article claimed that there were eight efficacy measures and that Paxil was statistically superior to placebo on four of them." In addition, "while the article listed the five protocol-defined secondary endpoints, the text of the article omitted any discussion regarding three of the secondary measures on which Paxil failed to statistically demonstrate its superiority to placebo and instead focused on the five secondary measures that GSK added belatedly and never incorporated into the Study 329 protocol. The article claimed that these five secondary measures had been identified 'a priori,' therefore incorrectly suggesting that all secondary endpoints had been part of the original study protocol." In other words, the final published article contained multiple outright falsehoods that exaggerated the drug's efficacy.

Furthermore, initial analysis showed that patients given Paxil had more serious adverse events than others. An initial draft of the study article stated, "serious adverse events occurred in 11 patients in the paroxetine group, 5 in the imipramine group, and 2 in the placebo group." These included "headache during down-titration (1 patient), and various psychiatric events (10 patients): worsening depression (2); emotional lability (e.g., suicidal ideation/ gestures, overdoses), (5); conduct problems or hostility (e.g., aggressiveness, behavioral disturbance in school) (2); and mania (1)." As noted above, the number of suicide related events was actually double that noted in this draft as "emotional lability."  However, the published version of the report "falsely state[d] that only one of the 11 serious adverse events in Paxil patients was considered related to treatment...."  Nor did it mention the true number of events related to suicidal ideation or action.

The article only "listed at most five possibly suicidal events among Paxil patients, brushed those off as unrelated to Paxil, and concluded that treating children with Paxil was safe."

Later, GSK marketing materials described the results of the study thus,
This 'cutting-edge,' landmark study is the first to compare efficacy of an SSRI and a TCA with placebo in the treatment of major depression in adolescents. Paxil demonstrates REMARKABLE Efficacy and Safety in the treatment of adolescent depression.
Thus the control exerted by GSK over the published article, despite its apparent academic authorship, enabled the company to promote a drug that was not efficacious and had major adverse events as remarkably safe and effective, a totally deceptive result that would mislead any health care professional who used the article to guide clinical practice.

Suppression of Clinical Research

GSK sponsored two other studies of Paxil in pediatric populations, Studies 377 and 701. As stated in the Department of Justice's Criminal Complaint against GSK,
GSK Did Not Publicize the Results of Studies 377 and 701
43. GSK learned the results of Study 377 in 1998 and the results of Study 701 in 2001. Paxil failed to demonstrate efficacy on any of the endpoints of either study.
44. GSK did not hire a contractor to help write medical articles about the results of Studies 377 and 701, as it had with Study 329.
45. GSK did not inform its sales representatives about the results of Studies 377 and 701.
Thus, GSK managed to conceal the fact that the majority of the studies it sponsored of Paxil in adolescent patients showed no evidence that the drug worked, again seriously distorting the evidence base on which clinicians made decisions, and doubtless leading to the use of a dangerous, ineffective drug by numerous vulnerable patients.

Bribing Physicians to Prescribe

GSK's sales representatives reflected in their call notes their use of money, gifts, entertainment and other kickbacks to induce doctors to prescribe GSK drugs....

One really creative way to pay physicians to be exposed to marketing:
For example, in or about 2000 or 2001, GSK used 'Reprint Mastery Training Programs' or 'RMTS' to further promote drugs by purporting to pay physicians to train sales representatives to review reprints of studies. Although the training was purportedly for the representatives, in fact, the sales force was already familiar with the materials. GSK typically paid physicians $250 to $500 to review the reprints.
Thus GSK simply paid physicians to use its drug, a practice characterized as kickbacks in the official complaint.  An article in the Guardian noted that the US Attorney involved in the case put it even more bluntly,
The sales force bribed physicians to prescribe GSK products using every imaginable form of high-priced entertainment, from Hawaiian vacations [and] paying doctors millions of dollars to go on speaking tours, to tickets to Madonna concerts.

Use of Key Opinion Leaders as Disguised Marketers

GSK also created a group of national 'key opinion leaders' ('KOLs') who were paid generous consulting fees. GSK selected many of these physicians based on their prescribing habits and influence within the community and used the speaker fees paid to these physicians to induce and reward prescribing of GSK's products. GSK used these individuals to communicate marketing messages focused on the drug's marketing campaigns at the time, including off-label uses. Some physicians on GSK's speaker's board have been paid more than a million dollars for speaking on behalf of the company and recommending its drugs.

Thus key opinion leaders were paid specifically to market drugs, and were rewarded, in effect bribed, for prescribing them.

Consulting Fees as Kickbacks

In general,
In order to induce physicians to prescribe and recommend its drugs, GSK paid kickbacks to health professionals in various forms, including speaking or consulting fees, travel, entertainment, gifts, grants, and sham advisory boards, training,....

In particular,
During 2000 and 2001 at least, GSK also utilized events termed 'advisory boards' or consultant meetings and forums to disseminate its promotional message. Although these boards were purportedly composed of 'thought leaders' for the purpose of obtaining advice from the physicians, in fact, the 'advisory boards' were little more than promotional events coupled with financial inducement to prescribing and influential physicians.

Also,
GSK typically paid the physician between $250 and $750 to attend each local 'advisory' meeting. The payments did not reflect the value of services. The physician was not required to do anything but show up. GSK had no legitimate business reason to hire thousands of 'advisors' to 'consult' with the company about a single drug.

Manipulation of Continuing Medical Education

GSK also used so-called CME and CME Express programs and other sham training for marketing purposes, and to promote off-label uses for the GSK prescription drugs.

Furthermore,
These CME programs purported to be independent education free of company influence, but in fact functioned as GSK promotional programs disguised as medical education. GSK maintained control and influence over the purportedly independent CME programs through speaker selection, and influence over content and audience, among other things. Although third party vendors were usually also involved, they served only as artificial 'firewalls' that did not insulate the program from GSK's influence.

Summary

The legal documentation of the GSK settlement demonstrated how one drug company used an integrated, systematic campaign incorporating deception and bribery to sell drugs. Its elements included manipulation and suppression of the clinical research it sponsored, paying key opinion leaders to be disguised drug marketers, outright payments to physicians to prescribe drugs, and manipulation, again using payments to physicians, of supposedly independent continuing medical education. 

Note that while I summarized the elements of the stealth marketing campaign to sell Paxil, particularly for use in pediatric patients, the US government complaint also documents similar activities used to sell other drugs.  Furthermore, other stealth marketing campaigns have come to light through legal action, and many other instances of manipulation and suppression of clinical research, use of KOLs as disguised marketers, kickbacks and bribes, and manipulation of CME have been documented.

This means that any claims that:
- commercially sponsored clinical research provides clear, unbiased data that should drive clinical decisions
- health care professionals and academics paid as consultants by commercial health care firms are not influenced by these payments, and can provide clear, unbiased opinions
- commercially sponsored medical education provides clear, unbiased teaching
unfortunately must be viewed with extreme skepticism. This is particularly unfortunate given that most clinical research is now supported by commercial sponsors, and the majority of influential academics in medicine get some form of payments from the health care industry (look here).

Of course, there are some physicians who consult for commercial firms who actually provide clinical or scientific advice or assistance, and some commercially sponsored activities are honest. But we must wonder down what garden path all those advocates of increasing industry "collaboration" to promote "innovation," who regard conflicts of interest as "inevitable" and "manageable," are leading us (e.g., look here and here).

Although the current settlement will require a huge payment, as I have said many times before (as early as 2008, here), do not expect such settlements to deter future bad behavior like that listed above.  The cost of the settlement will actually be spread among all company shareholders, all company employees, and likely patients and taxpayers.  However, the settlement will entail no specific negative consequences to the people who authorized, directed, or implemented the bad behavior.  In particular, executives whose remuneration was swollen by proceeds from the sales of affected drugs, and the health care professionals who willingly accepted what the US Attorney called bribes will not pay any sort of penalty.  The bad behavior listed above was doubtless personally very profitable for some people.  Unless people who indulge in such behavior face the possibility of penalties worse than their expected gains, expect such bad behavior to continue.

In fact, as the New York Times reported,
critics argue that even large fines are not enough to deter drug companies from unlawful behavior. Only when prosecutors single out individual executives for punishment, they say, will practices begin to change.

'What we’re learning is that money doesn’t deter corporate malfeasance,' said Eliot Spitzer, who, as New York’s attorney general, sued GlaxoSmithKline in 2004 over similar accusations involving Paxil. 'The only thing that will work in my view is C.E.O.’s and officials being forced to resign and individual culpability being enforced.'

True health care reform would strive to eliminate important conflicts of interest affecting clinical research and medical education.  Specifically, it would prevent corporations that sell health care products or services from controlling clinical research meant to evaluate these products or services.  It would seek to eliminate serious conflicts of interest affecting health care professionals.  Finally, it would prevent vested interests from controlling medical education.  Not that I expect any such reform in the near future; it would be too threatening to those who have personally benefited from the current system.

Hat tip to Dr Howard Brody whose Hooked: Ethics, Medicine and Pharma blog scooped me on the details of the settlement relevant to study 329.

ADDENDUM (11 July, 2012) - see also this post on the 1BoringOldMan blog.

"More Marketing Than Science" - An Anonymous Confession About Deceptive Marketing Published in the British Medical Journal

The British Medical Journal just published an anonymous article by a pharmaceutical company insider that explained once again how pharmaceutical companies turn research studies, apparently scholarly articles, and medical education into stealth marketing efforts.  (See Anonymous.  Post-marketing observational studies: my experience in the drug industry.  Brit Med J 2012; 344: 28.  Link here.)

We have previously discussed examples of health care corporate insiders confessing their individual efforts to turn medical research and education into marketing.  For example, peruse this.  We have also discussed how documents made public through litigation have revealed marketing plans for specific drugs that used apparently academic, educational, or scholarly publications and venues to market without revealing this transformation.  For example, see the Neurontin marketing plan (see post here), and the Lexapro marketing plan (see post here).  We have also discussed numerous examples of manipulation of particular research studies by those with vested interests, and outright suppression of studies whose results did not favor such vested interests.

Yet I suspect that majorities of health care academics and professionals, health care policy makers, and the public at large do not realize, or would not admit, that the evidence base for making health care decisions, and the general academic and professional discourse, have been so corrupted.  So it is worthwhile to review once again how an insider summarized this corruption.

Research Studies Designed Primarily as Marketing Vehicles

In general, the anonymous author suggested that at least some studies were done for marketing, not scientific purposes:
some of the studies I worked on were not designed to determine the overall risk:benefit balance of the drug in the general population. They were designed to support and disseminate a marketing message.

Whether it was to highlight a questionable advantage over a 'me-too' competitor drug or to increase disease awareness among the medical community (particularly in so called invented diseases) and in turn increase product penetration in the market, the truth is that these studies had more marketing than science behind them.

Furthermore, the studies were supervised not by physicians or scientists, but by marketers in the marketing department,
Although the medical department developed the publication plans, designed the study, performed the statistical analysis, and wrote the final paper (which when published was passed on to marketing and sales to be used as marketing material), the marketing team responsible for that product were directly involved in all stages. They also closely supervised the content of other educational 'scientific' materials produced in the medical department and intended for potential prescribers. Instructions from marketing to the medical staff involved were clear: to ensure that the benefits of the drug were emphasised and the disadvantages were minimised where possible.

Manipulation of Research Design, Implementation, or Analysis

The author described how the marketers manipulated research studies so they would produce the results desired from a marketing perspective, regardless of their underlying truth,
Since marketing claims needed to be backed-up scientifically, we occasionally resorted to 'playing' with the data that had originally failed to show the expected result. This was done by altering the statistical method until any statistical significance was found. Such a result might not have supported the marketing claim, but it was always worth giving it a go to see what results you could produce. And it was possible because the protocols of post-marketing studies were lax, and it was not a requirement to specify any statistical methodology in detail. On the other hand, the studies were hypothesis testing (such as cohort studies, case-control studies) rather than hypothesis generating (such as case reports or adverse events reports), so playing with the data felt uncomfortable.

Other practices to ensure the marketing message was clear in the final publication included omission of negative results, usually in secondary outcome measures that had not been specified in the protocol, or inflating the importance of secondary outcome measures if they were positive when the primary measure was not.

So to summarize, the marketers would control the statistical analyses, promoting multiple analyses to attempt to come up with the "right" result that would support the marketing message (although the more kinds of analyses one tries, the more likely one is to come up with false results by chance alone). Presumably the marketers did not care whether or not the results were really true, which is perhaps why even they felt "uncomfortable" in some circumstances.

They would also foster the suppression of negative results, and the dredging of data for extra outcome measures when analysis showed no advantage in terms of the real primary outcomes. Suppression of negative results could be viewed as plain lying. Deliberate analysis of multiple end-points again risks identifying random error as true results.
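The false-positive risk of such repeated re-analysis can be illustrated with a small simulation. Under a true null hypothesis a valid test's p-value is uniformly distributed, so we can model "altering the statistical method until any statistical significance was found" as drawing k independent p-values per dataset and keeping the best one. This is a simplification (real re-analyses of the same data are correlated, which changes the exact numbers but not the direction of the effect):

```python
import random

random.seed(0)

# Under a true null hypothesis, each valid test's p-value is uniform on
# [0, 1]. Model trying k analysis methods per null dataset and declaring
# "success" if any one of them reaches p < 0.05.
def fraction_spurious(k, n_trials=100_000, alpha=0.05):
    hits = sum(
        1 for _ in range(n_trials)
        if any(random.random() < alpha for _ in range(k))
    )
    return hits / n_trials

for k in (1, 5, 10):
    print(f"{k:2d} analyses -> ~{fraction_spurious(k):.0%} of null datasets look 'significant'")
```

Even five attempted analyses make a "significant" finding on truly null data more likely than one in five.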

The Role of Key Opinion Leaders

The author described how key opinion leaders, that is, health care professionals thought to be especially influential on practice or policy, were hired to become marketers, presumably without this role being revealed.
Every big international observational study had a large advisory board. This was critical since the success of a newly launched drug in the market would depend on how many key opinion leaders were part of the study. Not only would they add credibility to the results, but they would also be key in influencing decision makers and other prescribers. In regional studies with thousands of patients, the study’s advisory board was formed by at least one key opinion leader from each country in that region, ensuring that areas important in terms of possible sales were covered. The contributions of the key opinion leaders to the study were always positive, but in my experience more directed towards designing new studies to answer their specific clinical questions rather than critically appraising our results and conclusions. In general, the relationship was amicable. We took them to the best hotels and restaurants during our advisory board meetings, and they appeared as authors in our research. Later, they would act as the company’s 'ambassadors,' giving conferences, teaching doctors, or talking to the media about the benefits of the drug.

Note that even the anonymous author could not bring him or herself to call the key opinion leaders salespeople, or marketers, but used the ambiguous term "ambassadors." Nonetheless, the role of key opinion leaders described would clearly be that of marketers or salespeople. However, it is extremely doubtful that any of these KOLs felt they had to declare that they were paid salespeople when presenting at conferences, teaching doctors, or talking to the media.

I would suggest that their actions would therefore fit Transparency International's definition of corruption, "abuse of entrusted power for private gain." The KOLs are entrusted to be professional, and in many cases, scholarly. Using a professional or scholarly guise to act as a salesperson appears to be abuse of that entrusted power, in my humble opinion. In nearly every case, KOLs are paid, often handsomely, by the companies whose products they are selling. Thus, key opinion leaders acting as described by the anonymous author appear to be ethically corrupt.

Summary

The evidence of unethical marketing practices by commercial health care firms is mounting. Although I am most familiar with evidence from the US, it increasingly appears that these practices are global, often carried out by companies that are not US based, and meant to influence practice and policy world-wide.

As the evidence mounts, it becomes increasingly clear that many such marketing practices are corrupt, at least in an ethical sense. Whether they may break laws in particular countries is a question for someone else.

The latest BMJ article is a reminder of how skeptical the shrinking group of health care professionals who do not have conflicts of interest, are not biased in favor of particular products, and put patients' interests first must be about ALL published research, scholarly publications, and apparently educational activities. Advocates of true evidence-based medicine must be extremely careful to try to use the least biased and manipulated parts of the evidence base.

The mounting evidence suggests that at a minimum, all research reports, scholarly articles, media reports, conferences, and educational programs should provide full, detailed disclosure of all conflicts of interest. Perhaps having to make a declaration like "I am paid 50,000 Euros a year by the marketing department of company X to help market drug Y" before a supposedly educational talk would make some health care professionals think twice about such relationships.

However, even such detailed disclosure may not be sufficient to hamper marketing practices that now appear overtly corrupt. In my humble opinion, it is time to consider a global ban on the funding or influencing of human research by companies and other organizations which stand to gain financially according to the results of the research, or whose products and services are subject of such research. It is also time to consider a global ban on the funding or influencing of health care education by such organizations.

However, so many people are making so much money from the current practices that I doubt such proposals would get much support, or even public notice beyond this humble blog.

Pseudo-Evidence Based Medicine Should be a Global Health Concern

We have frequently advocated for evidence-based medicine (EBM), that is, medicine based on judicious use of the best available evidence from clinical research, critically reviewed and derived from systematic search, combined with biomedical knowledge and understanding of patients' values and preferences.  However, EBM risks being turned into pseudo-evidence based medicine due to systematic manipulation and distortion of the clinical research evidence base.  Dr Wally Smith wrote about pseudo-evidence based medicine, which he defined as "the practice of medicine based on falsehoods that are disseminated as truth," in the British journal Clinical Governance (Smith WR. Pseudoevidence-based medicine: what it is, and what to do about it. Clinical Governance 2007; 12: 42-52. Also see this post).

Now it appears that this issue is causing concern in the Cochrane Collaboration, the main voluntary international group promoting EBM.  An article from December, 2011 in the Indian Journal of Medical Ethics outlined why we need to question the trustworthiness of the global clinical evidence base.  (See Tharyan P. Evidence-based medicine: can the evidence be trusted?  Ind J Med Ethics 2011; 8: 201-207.  Link here.)  It merits further review.

The Role of Vested Interests

Dr Tharyan, its author, emphasized that a major threat to the integrity of the clinical research data base is the influence on clinical research of those with vested interests in marketing particular products, e.g., drugs, devices, etc.
The motives for conducting research are often determined by considerations other than the advancement of science or the promotion of better health outcomes. Many research studies are driven by the pressure to obtain post-graduate qualifications, earn promotions, obtain tenured positions, or additional research funding; many others are conducted for financial motives that benefit shareholders, or lead to lucrative patents.

The Importance of Pervasive Conflicts of Interest

Early on, the author discussed how the distortion of the clinical research data base arises from the pervasive web of conflicts of interest in medicine and health care:
This hijacked research agenda perpetuates further research of a similar nature that draws more researchers into its lucrative embrace, entrenching the academic direction and position statements of scientific societies and academic associations. Funders and researchers are also deterred from pursuing more relevant research, since the enmeshed relationship between academic institutions and industry determines what research is funded (mostly drugs at the expense of other interventions), and even how research is reported; thus hijacking the research agenda even further away from the interests of science and society.


The article then goes on to show how the influence of vested interests may distort the design, implementation, analysis, and dissemination of research.

Distortion of Research Design and Implementation

The Research Question

Dr Tharyan noted
The majority of clinical trials conducted world-wide are done to obtain regulatory approval and a marketing licence for new drugs. These regulations often require only the demonstration of the superiority of a new drug over placebo and not over other active interventions in use. It is easier and cheaper to conduct these trials in countries with lower wages, lax regulatory requirements, and less than optimal capacity for ethical oversight. It is therefore not surprising that the focus of research does not reflect the actual burden of disease borne by people in the countries that contribute research participants, nor address the leading causes of the global burden of disease. Some 'seeding' trials, conducted purportedly for the purpose of surveillance for adverse effects, are often only a ploy to ensure brand loyalty among participating clinician-researchers.

Insufficient Sample Size

The article stated,
Many trials do not report calculations on which the sample size was estimated, often leading to sample sizes insufficient to detect even important differences between interventions (for primary, let alone secondary outcomes)

I would add that such small trials are particularly bad at detecting important adverse effects of the interventions being promoted.
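To make the sample-size point concrete, here is the standard normal-approximation formula for the per-arm size needed to compare two proportions, with two-sided alpha of 0.05 and 80% power. The proportions are illustrative, not figures from any particular trial; note how much larger a trial must be to detect a difference in uncommon adverse events than a difference in common efficacy outcomes:

```python
from math import ceil

# Rough per-arm sample size to detect a difference between two event
# proportions (two-sided alpha = 0.05 -> z = 1.96; 80% power -> z = 0.84),
# using the usual normal-approximation formula. Illustrative sketch only.
def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A common efficacy difference (40% vs 55% response) needs a few hundred
# patients; a rarer harm (1% vs 3% adverse events) needs several-fold more.
print(n_per_arm(0.40, 0.55))  # 170 per arm
print(n_per_arm(0.01, 0.03))  # 765 per arm
```

A trial sized only for its efficacy endpoint is thus badly underpowered for harms, which is one way small trials leave adverse effects undetected.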

Excessively Stringent Enrollment Criteria

As Dr Tharyan wrote,
Most RCTs funded by industry and academia are designed to demonstrate if a new drug works, for licensing and marketing purposes. In order to maximise the potential to demonstrate a 'true' drug effect, homogenous patient populations; placebo controls; very tight control over experimental variables such as monitoring, drug doses, and compliance; outcomes addressing short term efficacy and safety; and methods to minimise bias required by regulatory agencies are used to demonstrate if, and how, the drug works under ideal conditions.

The problem is that very few patients in clinical practice resemble those enrolled in such trials, so the generalizability of the trials' results is actually dubious. Ideally, clinicians practicing evidence-based medicine could refer to trials that include patients similar to those for whom they care.
Practical or pragmatic clinical trials are designed to provide evidence for clinicians to treat patients seen in day-to-day clinical practice, and evaluate their effectiveness under 'real-world' conditions. These trials use few exclusion criteria and include people with co-morbid conditions, and all grades of severity. They compare active interventions that are standard practice, and in the flexible doses and levels of compliance seen in usual practice. They utilise outcomes that clinicians, patients, and their families consider important, such as satisfaction, adverse events, return to work, and quality of life. Recommendations exist on their design and reporting, but such trials are rare.

Comparisons to Placebo, not the Other Interventions Clinicians Might Realistically Consider

Per the article,
Industry sponsored trials rarely involve head-to head comparisons of active interventions, particularly those from other drug companies, thus limiting our ability to understand the relative merits of different interventions for the same condition.

The result may thus be a number of studies showing particular interventions appear to be better than nothing, but few if any studies that would help clinicians decide which intervention would be best for a particular patient.

Inappropriate Comparators

However, when studies are done comparing the intervention of interest to other active interventions, the details of the choice of comparators are often managed so that the comparators are likely to appear worse.
Even if active interventions are compared in industry-sponsored trials, the research agenda has devised ways in which the design of such trials is manipulated to ensure superiority of the sponsor’s drug. If one wants to prove better efficacy, then the comparator drug is a drug that is known to be less effective, or used in doses that are too low, or used in non-standard schedules or duration of treatment. If one wants to show greater safety, then the comparator is a drug with more adverse effects, or one that is used in toxic doses. Follow up, also, is typically too short to judge effectiveness over longer periods of time.

Meaningless Outcomes

A favorite tactic used in the design of trials influenced by vested interests is to choose outcomes that are likely to show results favorable to the product being promoted, but have no meaning for patients or clinicians. There are three ways this is commonly done.

The first is to use rating scales that are very sensitive to small perturbations, but not meaningful in terms of how patients feel or function:
The choice of outcome measures used often ensures statistically significant results in advance, at the expense of clinically relevant or clinically important results. Outcomes likely to yield clinically meaningless results include the use of rating scales (depression, pain, etc.). These scales yield continuous measures usually summarised by means and standard deviations, rather than the dichotomous measures clinicians use such as: clinically improved versus not improved. These rating scales, however extensively validated, are hardly ever used in routine clinical practice. A difference of a few points on these scales results in statistically significant differences (low p values), that have little clinical significance to patients.

Another is to use surrogate outcomes,
Other outcomes commonly used are surrogate outcomes; outcomes that are easy to assess but serve only as proxy indicators of what ought to be assessed, since the real outcome of interest may take a long time to develop. These are mostly continuous measures that require smaller sample sizes (blood sugar levels, blood pressure, lipid levels, CD4 counts, etc.). These measures easily achieve statistical significance but do not result in meaningful improvements (reduction in mortality, reduction in complications, improved quality of life) in patients’ lives, when the interventions are used (often extensively) in clinical practice.

Finally, there are composite outcomes,
The use of composite outcomes, where many outcomes (primary, secondary and surrogate outcomes) are clubbed together (e.g.: mortality, non-fatal stroke, fatal stroke, blood pressure, creatinine values, rates of revascularisation) as a single primary outcome, can also mislead. Such trials also require smaller sample sizes, and increase the likelihood of statistically significant results. However, if the composite outcome includes those of little clinical importance (lowered blood pressure, or creatinine values), the likelihood of real benefit (reduction in mortality, or strokes, or hospitalisation) and the potential for harm (increase in non-fatal strokes or all-cause mortality) are masked.

Distortion of Analysis

Relative Instead of Absolute Risks

A favorite trick is to emphasize relative risks rather than absolute risks. For example, if a treatment reduces the risk of a bad outcome from 2 in 10,000 patients (0.02%) to 1 in 10,000 (0.01%), the relative risk reduction is 50%, but only 1 in 10,000 patients experienced a benefit.
The use of estimates of relative effects of interventions, such as relative risks (RR) and odds ratios (OR) with their 95% confidence intervals, provides estimates of relative magnitudes of the differences and whether these exclude chance, as well as if these differences were nominal or likely to be clinically important.

However, even relative risks can be misleading since they ignore the baseline risk of developing the event without the intervention. The absolute risk reduction (ARR) is the difference in risk of the event in the intervention group and the control group, and is more informative since it provides an estimate of the magnitude of the risk reduction, as well the baseline risk (the risk without the intervention, or the risk in the control group). Systematic enquiry demonstrates that on average, people perceive risk reductions to be larger and are more persuaded to adopt a health intervention when its effect is presented as relative risks and relative risk reduction (a proportional reduction) rather than as absolute risk reduction; though this may be misleading.
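The contrast between relative and absolute framing can be made concrete with a small calculation. This is a sketch using the hypothetical 2-in-10,000 versus 1-in-10,000 example above; the function name and number-needed-to-treat output are my additions, not from the article:

```python
# Sketch of relative vs absolute risk framing, using the hypothetical
# 2-in-10,000 vs 1-in-10,000 example from the text.

def risk_summary(risk_control: float, risk_treated: float) -> dict:
    """Return relative risk (RR), relative risk reduction (RRR),
    absolute risk reduction (ARR), and number needed to treat (NNT)."""
    rr = risk_treated / risk_control   # relative risk
    rrr = 1 - rr                       # relative risk reduction
    arr = risk_control - risk_treated  # absolute risk reduction
    nnt = 1 / arr                      # number needed to treat for one benefit
    return {"RR": rr, "RRR": rrr, "ARR": arr, "NNT": nnt}

s = risk_summary(risk_control=2 / 10_000, risk_treated=1 / 10_000)
print(f"Relative risk reduction: {s['RRR']:.0%}")   # sounds dramatic
print(f"Absolute risk reduction: {s['ARR']:.4%}")   # far less impressive
print(f"Number needed to treat:  {s['NNT']:,.0f}")
```

The same trial can thus be advertised as a "50% risk reduction" or described as one avoided event per 10,000 patients treated; both are arithmetically correct, but only the absolute figure conveys the baseline risk.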

Sub-Group Analysis

Another statistical trick used to present favourable outcomes for interventions is the use of spurious subgroup analyses, where observed treatment effects are evaluated for differences across baseline characteristics ( such as sex, or age, or in other subpopulations). While they are useful, if limited to a few biologically plausible subgroups, specified in advance, and reported as a hypothesis for confirmation in future trials; they are often used in industry-sponsored trials to present favourable outcomes, when the primary outcome(s) are not statistically significant.
A major statistical point is that the more ways one subdivides the study population, the more likely it is to find a difference between those receiving different interventions by chance alone.
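The arithmetic behind that point can be sketched as follows. This is an illustration under the simplifying assumption of independent tests at the conventional 0.05 significance level; real subgroup analyses are usually correlated, but the direction of the effect is the same:

```python
# Probability of at least one false-positive subgroup finding, assuming
# k independent tests at alpha = 0.05 and no true treatment effect.
# (Illustrative sketch; real subgroup tests are rarely independent.)

def p_any_false_positive(k: int, alpha: float = 0.05) -> float:
    """Chance that at least one of k null tests reaches 'significance'."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} subgroups -> {p_any_false_positive(k):.0%} chance "
          f"of a spurious 'significant' result")
```

Under these assumptions, running 20 subgroup analyses gives roughly a 64% chance of at least one spurious "significant" finding even when the drug truly does nothing.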

Distortion of Dissemination

Poor Exposition

As noted by Dr Tharyan,
Evidence also shows that the published reports are not always consistent with their protocols, in terms of outcomes, as well as the analysis plan, and this again is determined by the significance of the results. Harms are very poorly reported in trials compared to results for efficacy; and are also often suppressed or minimised.

Ghost-Writing

Also, those with vested interests may try to ensure maximally favorable dissemination by employing ghost-writers,
Other tactics used to influence evidence-informed decision making include ghost-writing, where pharmaceutical companies hire public-relations firms who 'ghost-write' articles, editorials, and commentaries under the names of eminent clinicians; a strategy that was detected by one survey in 75% of industry-sponsored trials, where the ghost author was not named at all, and in 91% when the ghost author was only mentioned in the acknowledgement section. Detecting such conflicts of interest is difficult, since they are rarely acknowledged due to the secrecy that shrouds the nexus between academia and industry in clinical trials.

Industry-sponsored trials often place various constraints on clinical investigators on publication of their results; these publication arrangements are common, allow sponsors control of how, when, and what is published; and are frequently not mentioned in published articles.
Ghost-writers under the direct control of those with vested interests could more efficiently bias their writing in favor of their sponsors than could even academics constrained by contractual obligations to sponsors.

Suppression of Research

When all else fails, the crudest form of distorted dissemination is suppression: studies that, despite all the manipulations noted above, fail to produce results favorable to the product being promoted are simply never published:
A considerable body of work provides direct empirical evidence that studies that report positive or significant results are more likely to be published; and outcomes that are statistically significant have higher odds of being fully reported, particularly in industry funded trials.

By the way, the latest demonstration of suppression of research was a study by Turner et al (Turner EH, Knoepflmacher D, Shapley L. Publication bias in antipsychotic trials: an analysis of efficacy comparing the published literature to the US Food and Drug Administration database. PLoS Medicine 2012; 9(3): e1001189. Link here). Of the 24 trials of atypical antipsychotics registered in the US FDA database, 20 were published and 4 were not. Fifteen of the 20 published trials (75%) were positive, that is, had results in favor of the sponsors' drugs according to the FDA review, but only 1 of the 4 unpublished trials (25%) was positive. In other words, 15 of the 16 positive trials were published, but only 5 of the 8 non-positive ones were.
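Those counts can be cross-checked with simple arithmetic, using only the figures quoted above (the variable names are mine):

```python
# Cross-check of the publication-bias counts quoted from Turner et al. (2012):
# 24 FDA-registered trials; 20 published (15 positive), 4 unpublished (1 positive).

published_positive, published_total = 15, 20
unpublished_positive, unpublished_total = 1, 4

positive_total = published_positive + unpublished_positive       # 15 + 1 = 16
nonpositive_total = (published_total - published_positive) + \
                    (unpublished_total - unpublished_positive)   # 5 + 3 = 8

pub_rate_positive = published_positive / positive_total          # 15/16
pub_rate_nonpositive = (published_total - published_positive) / nonpositive_total  # 5/8

print(f"Positive trials published:     {published_positive}/{positive_total} "
      f"({pub_rate_positive:.1%})")
print(f"Non-positive trials published: {published_total - published_positive}/"
      f"{nonpositive_total} ({pub_rate_nonpositive:.1%})")
```

A reader scanning only the published literature would see 15 positive trials against 5 non-positive ones, a far rosier picture than the full FDA record supports.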

Summary

Thus, Dr Tharyan provided a nice summary of many of the ways the clinical research database can be manipulated or distorted to serve vested interests. Such distortions risk transforming evidence-based medicine into pseudo-evidence-based medicine.

We have repeatedly discussed (see links above) most of these issues. Because we are located in the US, and speak mainly English, this blog may have given the impression that the lack of trustworthiness of the clinical research evidence base is primarily a US problem. The article by Tharyan emphasizes, however, that it is a global problem.

While this problem now seems to have the attention of the Cochrane Collaboration, it remains relatively anechoic in global health circles. It appears to be no more prominent on the agendas of global health organizations than is health care corruption (look here). Yet the two issues are highly related. Most of the distortions in the global clinical evidence database may be driven by conflicts of interest, and conflicts of interest are risk factors for corruption. Distortions of the global clinical research database that lead to the use of expensive but ineffective or dangerous interventions, when other options would work as well or better, cause needless suffering and death and, by unnecessarily raising the costs of care, decrease access, especially for the poor.

Also note that conflicts of interest may be one reason that all these problems remain so anechoic (look here).

True global health care reform requires addressing health care corruption, but also conflicts of interest and their role in the distortion and manipulation of the clinical research database. I still live in hope that some academic health care institutions, professional societies, health care charities and donors, and patient advocacy groups will gain enough fortitude to stand up for accountability, integrity, transparency, and honesty in health care.

Guest Blog: Health Care in Dangerous Times

Health Care Renewal presents another guest blog by Steve Lucas, a retired businessman who formerly worked in real estate and construction, has a long-standing interest in business ethics, and has long observed the health care scene.

Health Care Renewal has often covered the disconnect between the stated goals of companies and the realities of their day to day operations. This raised the following question: Has medicine moved from being dysfunctional to being dangerous?

There is certainly no lack of material to support this question as in the last two weeks we can find examples of pharma/biotech/device companies all engaged in questionable behavior.

Medtronic and Manipulation of Study Data

In the print media, The Wall Street Journal, a pro-business newspaper, regularly highlights stories questioning the actions of companies.

In the June 29 story titled "Medtronic Surgeons Held Back, Study Says" by John Carreyrou and Tom McGinty, we find that doctors being paid by Medtronic held back information regarding negative outcomes of a bone growth product.
'Medtronic paid millions to doctors and those same doctors, oddly enough, published the 'science' Medtronic needed to sell a product,' says Paul Thacker, a former aide to Sen. Charles Grassley…'

Chantix's Cardiac Adverse Effects

The July 5 story, "Pfizer Drug Tied To Heart Risk," by Thomas M. Burton covers the increased cardiovascular problems with Chantix that only now seemingly have become evident.

However, J. Taylor Hays, a Mayo Clinic doctor who has received funding from Pfizer for research on Chantix, responded in a commentary, 'The risk for serious cardiovascular events is low and is greatly outweighed by the benefits of diminishing the truly 'heartbreaking' of smoking.'
The article then continues:
'Some people have had changes in behavior, hostility, agitation, depressing mood, suicidal thoughts or actions while using Chantix to help them quit smoking,' Pfizer says in safety information.

Overuse of Cardiac Stents

In the July 6 article, "Heart Treatment Overused," by Ron Winslow and John Carreyrou, we find the overuse of stents and the profit potential for doctors and hospitals.

Outside of heart attacks, doctors are often quick to use a common $20,000 procedure to treat patients suffering from coronary artery disease, a new study suggests.

Untested Imported Drug Ingredients

In the July 6 op-ed, "Beware the Risk of Generic Drugs," Roger Bate reinforces a point made often on Health Care Renewal when he covers the importation of untested or tainted products used in our pharmaceuticals.
China is now the largest supplier of pharmaceutical chemicals – hundreds of tons annually – to the world. And pharmaceutical companies that buy these chemicals do not test them.

Moving on to widely read blogs,

Ghost Writing and Risperdal

1BoringOldMan, in his post on bipolar kids and the doctors involved in the Harvard debacle, discussed an article favorable to the drug ostensibly written by Harvard Professor Joseph Biederman:

This covers the reworking of a previously done study to promote the use of drugs in children with this printed at the bottom of the first page:

"Printed in the USA. Reproduction in whole or part is not permitted. Copyright © 2006 Excerpta Medica, Inc."

1BoringOldMan continues in this post: since we have identified children as bipolar, we are free to ignore other factors and simply medicate them to death.

As reported by '60 Minutes' in September of last year, Rebecca Riley died on December 13, 2006, at her home in Hull, Massachusetts, due to an overdose of psychiatric drugs. The drugs — Depakote (divalproex; Abbott), Seroquel (quetiapine; AstraZeneca), and clonazepam — were prescribed by Tufts psychiatrist Kayoko Kifuji for the child’s bipolar disorder, which was diagnosed at the age of 2 years.
This covers the death of a child, a child, using the above ghost-written drug information.

Bayer's Use of Social Media to Market Drugs

Pharma does adjust to the times and the market; if something is not explicitly forbidden, then it is fair game.

Per a post in Pharmalot entitled, "To Tweet or not to Tweet"

'To Tweet or not to Tweet?' That is a question that Bayer Healthcare will be pondering for some time. The drugmaker was upbraided by the UK’s Prescription Medicines Code of Practice Authority for recently Tweeting about two medicines, which was deemed to be a cause for concern since the information went directly to the public.

Much like shooting a gun into a crowd and then claiming one had no way of knowing someone would be injured, a tweet is sent with the full knowledge that the first thing everybody will do is hit the forward-to-all button.

Pharma's Public Relations People Infiltrate Patient Support Groups

Per a post on the HealthNewsReview blog, we have one of the most frightening posts possible, since it shows pharma following individuals and a willingness to intrude into their personal lives.

Marilyn Mann, on a Facebook page she set up to support parents of children with lipid disorders, is approached by a drug company PR person to promote a drug.

Hi Marilyn,

A few months ago, I had emailed you about some research I was doing about a new treatment for FH. I am now working with a pharmaceutical company, and the company currently has a drug in development to help treat people with severe FH that may not be responding to current therapies.

In the comment section we find this:

I'm a communications consultant to many big pharma firms and I'm not sure the PR person in this case did anything wrong. She was upfront and polite about her role, and asked if any patients would do what many have done, and be interviewed to raise the profile of FH.
The owner of the group politely declined as was her right.
It would have been a different case if there had been any subterfuge involved, but there wasn't.
I think on this occasion you have aimed at the wrong target.
Good luck with future postings.

What do all of these references have in common? Senior executives and health care academics, removed from the day-to-day work of helping people, are able to claim they are not responsible while making ever larger incomes.

Natrecor Shown Not to Work

The corporate culture of medicine has become so perverse that even selling a drug that has no benefit becomes acceptable.

Per this HealthDay News item in the Heart Health Center:

Study Finds Heart Failure Drug Ineffective

Billions wasted on Natrecor in decade it took to find out it doesn't work, expert says.

By Steven Reinberg, HealthDay News

WEDNESDAY, July 6 (HealthDay News) — The heart failure drug Natrecor (nesiritide) is ineffective and linked to increased rates of potentially dangerous low blood pressure, a new study finds.

Summary: The Most Dangerous Game

So, back to my original question: Has medicine become dysfunctional or dangerous? I would contend dangerous. There is nothing magical about the above listed articles or posts. I am not a professional medical person, nor an academic, only someone concerned by the continued decline of medicine and medical care due to a corporate culture that promotes profit above all else.

When I meet doctors socially, they all speak of being small businessmen. Time and time again we see hospital administrators of all types speaking of being the CEOs of multi-billion-dollar organizations.

Drug companies speak of blockbuster drugs as being those with sales of over one billion dollars and fines become the cost of doing business.

Today we have a small but vocal group of people who feel all drugs should be offered to the public and that it is up to patients to decide if they are appropriate. They also want the government and insurance companies to pay for this return to the pre-FDA days.

How will the FDA itself withstand the onslaught of new technologies, given our current federal budget concerns? Tweets and Facebook represent only the beginning of a whole new wave of ways to communicate.

My personal opinion is this is a very dangerous time for doctors and patients. Doctors are being pushed into corporate practices where financial gain is the main driver, not health care. Information given to doctors can be so tainted by commercial interest as to be of no value.

Patients have no way of knowing where the doctor's loyalty lies: with them, or with the practice. DTC ads are full of half-truths and fear-mongering. Patients need to bring even more information and skepticism to the doctor/patient relationship.

I fear we do live in dangerous times. The word “good” has left much of medicine.

Steven Lucas MBA

The University of Minnesota, Where Nothing Can Go Wrong, Go Wrong, Go Wrong...

As noted on the Periodic Table blog, the administration of the University of Minnesota continues to believe all is well with its clinical research activities.  A recent internal review said there was nothing more to investigate about the unfortunate death of a psychiatric patient years before. So should we all be relieved?

An extensive review of the case suggests we should not be relieved at all. The case raised important concerns about the validity of clinical research, and about whether it violates the trust of its patient-subjects. These concerns had not been addressed before the university's most recent review, and thus seem even more pointed after this recent non-investigation.

Background: the Untimely Death of Dan Markingson

In May, 2008, the (Minnesota) Pioneer Press ran a series of articles about the untimely death of Dan Markingson, which occurred while he was enrolled in a randomized trial sponsored by AstraZeneca (the CAFE study) at a site at the University of Minnesota.  The first article in the series made the following major points:

Mr Markingson had given his consent to be enrolled despite evidence that he was actively psychotic
He started having visions of killing his mother in the storm. Markingson was taken Nov. 12, 2003, to Regions Hospital in St. Paul, but it had no open psychiatric beds. He was then transferred to the University of Minnesota Medical Center, Fairview.

Weiss said discussions about research started right away at the hospital. Markingson was placed in Fairview's Station 12, a new unit at the time created to treat psychotic patients and screen them for research. Olson and Dr. Charles Schulz, head of the U's psychiatry department, helped launch the unit in part to enhance the hospital's startup schizophrenia program and meet the U's mandate to bring in more research dollars.

Olson first recommended on Nov. 14 that a Dakota County District Court commit Markingson to the state treatment center in Anoka because he was not fit to make decisions about his care. He wrote to the court that Markingson was convinced his delusions were real and that he wasn't mentally ill.

The doctor changed his opinion about the commitment in less than a week, telling the court Markingson had started to acknowledge the need for help.

Reversals by patients are common, Olson explained in an interview with the Pioneer Press last month. Schizophrenics often arrive for treatment with delusions and denial but change their outlook while hospitalized.

A judge agreed Nov. 20 with Olson's new recommendation, requiring Markingson to follow the doctor's treatment plan. The next day, Markingson signed a consent form to be part of a national anti-psychotic drug study, Comparison of Atypicals for First Episode, or CAFE.

His mother's multiple complaints that, while in the study, Markingson was not getting better and not getting proper treatment were ignored
Weiss' letters to Olson and Schulz, who was a co-investigator in the study, urged them to consider different treatment options for her son, which would have disqualified him from the study. But the doctors were unconvinced by her pleas.

In particular, she wrote with strange prescience,
'Do we have to wait until he kills himself or someone else,' she asked three weeks before his suicide, 'before anyone does anything.'
There was evidence that Markingson was not getting optimal treatment

In retrospect, it was not even clear that Markingson was taking his study medications prior to his suicide:
An autopsy showed no medication in Markingson's bloodstream, and a coroner's photo showed a sealed bottle of his medication. Had he been taking his drugs?

Study officials could have been fooled. They only counted drugs left in pill bottles instead of testing blood levels in patients.
Suggestions that financial conflicts of interest influenced trial investigators' actions

The initial news article raised questions whether the study investigator had been unduly influenced to keep Markingson in the study by financial concerns:
CAFE was an early opportunity at the U for Olson to add research experience to his academic credentials. The U had recruited him in 2001 for his expertise in schizophrenia.

It was a slow start. Olson recruited one patient in 2002, and CAFE study leaders considered dropping him altogether, according to monthly recruiting summaries. Olson and the university had been dropped from a previous study because of low recruiting numbers, the doctor later said in his court deposition.

Exchanges between local and national study officials made it clear that there was pressure for results and a 'risk' that the study would be shut down if it didn't recruit enough patients.

Note that:
As Subject 13, Markingson was worth $15,000 to the U, with some of that going to Olson's salary and the psychiatry department. Switching or adding medications could have disqualified Markingson and halted payments to Olson and the department from AstraZeneca.

Overall, the study offered $327,000 to the U and an opportunity to raise the profile of its schizophrenia program.

An accompanying Pioneer Press article indicated that both Dr Olson, and the Chair of Psychiatry, Dr S Charles Schulz, were receiving considerable financial support from AstraZeneca and other pharmaceutical companies at the time of the study.
Olson received $220,000 from six companies since 2002, including $149,000 from AstraZeneca, according to the state records. Schulz received $562,000, including $112,000 as a researcher and consultant to AstraZeneca.

Olson said his AstraZeneca money went straight to the U but did support his salary. Markingson's full participation in the yearlong study meant up to $15,000 for the university.
Did the lawsuit's results indicate nothing was wrong?

Mr Markingson's mother sued the University of Minnesota and AstraZeneca, but (per the first Pioneer Press article),
The lawsuit ended this year after a judge ruled that the university had statutory immunity from such lawsuits and that AstraZeneca shouldn't stand trial because there was no convincing proof that its drug caused Markingson's death. Weiss settled with Olson, the only defendant left. She said she was granted $75,000, which went entirely toward legal bills.
Note that the results did not address the university's or its administration's role.

Dr Carl Elliott Takes Another Look

Thus the case appeared to end, with no real reconsideration of how medical schools' dependence on commercial funding of clinical studies, and individual faculty members' financial relationships with drug, device, and biotechnology firms, may affect research done on human beings.

However, in September, 2010, Mother Jones published an article by Dr Carl Elliott, a University of Minnesota bioethicist, which raised further questions about the case.
I talked to several university colleagues and administrators, trying to learn what had happened. Many of them dismissed the story as slanted and incomplete. Yet the more I examined the medical and court records, the more I became convinced that the problem was worse than the Pioneer Press had reported. The danger lies not just in the particular circumstances that led to Dan's death, but in a system of clinical research that has been thoroughly co-opted by market forces, so that many studies have become little more than covert instruments for promoting drugs

Major design defects of the CAFE study:
It barred subjects from being taken off their assigned drug; it didn't allow them to be switched to another drug if their assigned drug was not working; and it restricted the number of additional drugs subjects could be given to manage side effects and symptoms such as depression, anxiety, or agitation. Like many clinical trials, the study was also randomized and double-blinded: Subjects were assigned a drug randomly by a computer, and neither the subjects nor the researchers knew which drug it was. These restrictions meant that subjects in the CAFE study had fewer therapeutic options than they would have had outside the study.

In fact, the CAFE study also contained a serious oversight that, if corrected, would have prevented patients like Dan from being enrolled. Like other patients with schizophrenia, patients experiencing their first psychotic episode are at higher risk of killing themselves or other people. For this reason, most studies of antipsychotic drugs specifically bar researchers from recruiting patients at risk of violence or suicide, for fear that they might kill themselves or someone else during the study. Conveniently, however, the CAFE study only prohibited patients at risk of suicide, not homicide. This meant that Dan—who had threatened to slit his mother's throat, but had not threatened to harm himself—was a legitimate target for recruitment.

As Dr Elliott noted, this appeared to be yet another example of manipulation of clinical research designed to make the sponsors' products look better, a topic we have frequently discussed on Health Care Renewal:
A 2006 study in The American Journal of Psychiatry, which looked at 32 head-to-head trials of atypicals, found that 90 percent of them came out positively for whichever company had designed and financed the trial. This startling result was not a matter of selective publication. The companies had simply designed the studies in a way that virtually ensured their own drugs would come out ahead—for instance, by dosing the competing drugs too low to be effective, or so high that they would produce damaging side effects. Much of this manipulation came from biased statistical analyses and rigged trial designs of such complexity that outside reviewers were unable to spot them. As Dr. Richard Smith, the former editor of the British Medical Journal, has pointed out, 'The companies seem to get the results they want not by fiddling the results, which would be far too crude and possibly detectable by peer review, but rather by asking the 'right' questions.'

This was likely what was going on with the CAFE study:
Although the documents unsealed in the Seroquel litigation do not specifically mention the CAFE study in which Dan was enrolled, they do suggest that AstraZeneca planned to establish Seroquel as the 'atypical of choice in first-episode schizophrenia,' according to a 2000 'Seroquel Strategy Summary.' A later document titled 'Seroquel PR Plan 2001' discusses the agenda for an advisory panel meeting in Hawaii. Among the potential topics were the marketing of Seroquel to first-episode patients, adolescents, and the elderly. The document refers to these populations as 'vulnerable patient groups.'

Even more alarming are internal documents suggesting that AstraZeneca was designing clinical trials as a covert method of marketing Seroquel. In 1997, when Dr. Andrew Goudie, a psychopharmacologist at the University of Liverpool, asked AstraZeneca to fund a research study he was planning, a company official replied that 'R&D is no longer responsible for Seroquel research—it is now the responsibility of Sales and Marketing.' The official also noted that funding decisions would depend on whether the study was likely to show a 'competitive advantage for Seroquel.'
Were study subjects protected?
So, as Dr Elliott wrote,
Many clinical studies place human subjects at risk—at a minimum, the risk of mild discomfort, and at worst, the risk of serious pain and death. Bioethicists and regulators spend a lot of time and energy debating the degree of risk that ought to be permitted in a study, how those risks should be presented to subjects, and the way those risks should be balanced against the potential benefits a subject might receive. What is simply assumed, without much consideration at all, is that the research is being conducted to produce scientific knowledge. This assumption is codified in a number of foundational ethics documents, such as the Nuremberg Code, which was instituted following Nazi experiments on concentration camp victims. The Nuremberg Code stipulates that an 'experiment should be such as to yield fruitful results for the good of society,' and 'the degree of risk to be taken should never exceed that determined by the humanitarian importance of the problem to be solved by the experiment.'

But what if a research study is not really aimed at producing genuine scientific knowledge at all? The documents emerging in litigation suggest that pharmaceutical companies are designing, analyzing, and publishing trials primarily as a way of positioning their drugs in the marketplace. This raises a question unconsidered in any current code of research ethics. How much risk to human subjects is justified in a study whose principal aim is to 'generate commercially attractive messages'?

Conflicts of interest
Of course, university faculty, pushed by leaders with their eyes on the bottom line to bring in more external funds to support their careers (see this post), may not be too critical of the intricate designs of the studies they need to continue their academic careers, or of whether such studies are really meant to promote science and improve patient care rather than position products in the marketplace, especially when the same companies are paying them as consultants, speakers, etc.

In fact, Dr Elliott found reasons to make such concerns specific to the case of Mr Markingson's untimely death:
Olson had another financial reason to maintain good relations with AstraZeneca. According to a disclosure statement for a 2006 conference, he was a member of the AstraZeneca 'speaker's bureau,' giving paid talks for the company. He had similar arrangements with Eli Lilly and Janssen, the makers of the other atypicals being tested in the CAFE study, as well as Bristol-Myers Squibb and Pfizer. In addition, Olson was working as a paid consultant for Lilly, Janssen, Bristol-Myers Squibb, and Pfizer.

Bioethicists Demand an Investigation

So eight University of Minnesota bioethicists, including Dr Elliott, wrote a letter to the University administration demanding an investigation, as reported in December, 2010, by the Minneapolis-St Paul Star Tribune,
In a letter to the board Monday, the professors questioned whether U psychiatrists lacked ethical judgment in enrolling the victim, Dan Markingson, a schizophrenic who may have lacked the wherewithal to consent to research. They also questioned whether financial incentives from AstraZeneca, the drugmaker funding the study, presented conflicts for the researchers, Dr. Stephen Olson and Dr. S. Charles Schulz.

At the time, the administration promised a serious response:
U leaders will take the letter seriously and take the protection of human research subjects seriously, said the U's general counsel, Mark Rotenberg.

But then almost immediately indicated its bias:
'The fact that this is tragic doesn't mean the treating physicians did anything wrong,' he said.

What, Us Worry?

It did not take long for Mark Rotenberg to decide that there was nothing more to worry about. As reported in February, 2011, by the Pioneer Press:
in a Monday letter to Elliott and colleagues, the chairman of the U's board of regents wrote 'we do not believe further university resources should be expended re-reviewing a matter such as this, which has already received such exhaustive analysis by independent authoritative bodies.'

'Our general counsel has provided us with the extensive reviews of this case that were performed over the years by a number of independent experts and governmental units,' chairman Clyde Allen Jr. said in the letter. 'Each and every one of these reviews resulted in the same conclusion: there was no improper or inappropriate care provided to Mr. Markingson, nor is there evidence of misconduct or violation of applicable laws or regulations.'

Of course, since Mr Rotenberg is responsible for, among other things, reducing the university's legal liability, one could see how he might not want to delve further into this case.  As we noted earlier, it is not clear that previous "exhaustive" investigations asked the questions that needed to be asked, or had access to all the relevant data.  The issues are not whether there was criminal conduct, or even civil liability, but whether the university is presiding over good science and the protection of research subjects.

So we should be worried, of course, that commercial firms sponsor research on human beings mainly to serve marketing objectives, and that university faculty and administrators go along, allowing their formerly prestigious universities' names to be attached to the research in exchange for the money that keeps them living in the style to which they are accustomed. We ought to be particularly worried when these universities seem to forget their mission to find and disseminate new knowledge in favor of defending the work that continues to bring in the money.

Thus, physicians, researchers, patients, and the public ought to be very skeptical about clinical research sponsored by commercial firms with vested interests in the research turning out a particular way, and even about research not sponsored by such firms but done by researchers who have personal financial ties to them. Furthermore, patients ought to be extremely skeptical about the motives of researchers who want to enroll them in trials when the researchers have financial ties to commercial firms whose products could be promoted through such trials, and especially when such firms are sponsoring the trials.

As a long-time advocate for evidence-based medicine, whose advancement depends on the continuing creation of valid evidence from clinical research, I find it heart-breaking to have to make these recommendations, but they will be necessary until there is better assurance that clinical research is being done to advance science and patient care, not the commercial interests of sponsors and researchers.

Until academic medicine becomes more open about how and why it is doing clinical research, such skepticism is warranted.

However, I will end with a ray of hope. If the administrators and faculty do not get it, the student journalists do. Read these words in an editorial in the Minnesota Daily:
Of course, the University has maintained neither it nor anyone involved in the case did anything wrong, an odd claim to make after the Minnesota Legislature unanimously passed a law that prohibits exactly what happened and named the law after Markingson.

The University seems to think that because it was not held liable in court for Markingson’s death, it did nothing wrong. This is false; it is a cynical excuse to keep corporate drug money flowing into the University.

The regents’ decision fundamentally undermines our mission: Supposedly, the University is 'dedicated to … the search for truth.' But the letter makes it clear that corporate research cash is more important to the University than patient safety and transparency.

Refusing to set up an independent investigation is a willfully ignorant attempt to sweep the Markingson case under the rug and damages the integrity of the entire University.

Perhaps it is time for the state legislature to take another look at this issue.

True health care reform would separate clinical research, that is, research done on human beings, from the commercial interests of health care corporations and the people who work for them.

ADDENDUM (18 March, 2011) - See this post by Naomi Freundlich on the Health Beat blog.

St Jude Medical Settles Again

Here we go again to kick off the week: legal settlements, cardiac devices, alleged kickbacks, and manipulated research, as reported by Bloomberg:

Another Legal Settlement by St Jude Medical

St. Jude Medical Inc. agreed to pay $16 million to settle a U.S. government probe of claims the company paid kickbacks to doctors who implanted its heart devices in patients.

The accord resolves a five-year investigation of St. Jude’s marketing practices for defibrillators and pacemakers.

This was not even the first recent settlement by this particular company:
In June, St. Jude agreed to pay the federal government $3.7 million to resolve a separate whistleblower case over claims that it made illegal payments to hospitals in Kentucky and Ohio that used the company’s heart devices.

Our relevant post about that previous settlement was here.

Kickbacks, Manipulated Research

The $16 million settlement stemmed from a case filed by Charles Donigian, a former St. Jude technician from St. Louis, who accused the company of using kickbacks to market products.

The kickbacks, which ranged as high as $2,000 per patient, came in the form of 'sham fees' for phony clinical-research studies on the devices, Donigian said in his suit.

There was slightly more detail in a report in TheHeart.org:
According to the DoJ, the company used postmarketing studies and a registry as vehicles to reward physicians participating in those studies to implant the devices. 'In each case, St Jude paid each participating physician a fee that ranged up to $2000 per patient,' a statement from US Attorney Carmen M Ortiz notes. 'The United States alleges that St Jude solicited physicians for the studies in order to retain their business and/or convert their business from a competitor's product.'

The statement also observes: 'Although St Jude collected data and information from participating physicians, it knowingly and intentionally used the studies and registry as a means of increasing its device sales by paying certain physicians to select St Jude pacemakers and ICDs for their patients.'

Summary

It all is getting so old, isn't it? Yet another pharmaceutical, biotechnology or device company settles a case alleging payments to physicians to induce them to use its products. The twist here is that the form of the payment allegedly also manipulated the clinical research process.

There have been other cases of manipulation or suppression of research about cardiac devices. For example, see our recent post about how Guidant, now a subsidiary of Boston Scientific, was put on probation for its actions in another such case.

The allegations amounted to serious external threats to physicians' professionalism: payments to promote use of health care products whether or not their implantation was in patients' best interests, and manipulation of clinical research that could have further corrupted the clinical evidence base, and presumably could have broken the trust of clinical research subjects. Left unsaid in the brief articles is how willingly the physicians gave in to these threats to their professionalism for the dollars involved.

Yet despite the apparently corrosive quality of the bad behavior, the penalties to the company were trivial. Per St Jude Medical's most recent financial profile, its yearly revenue was over $4.5 billion.  Two settlements totaling less than $20 million amount to a relatively small cost of doing business.  And note that St Jude Medical is not admitting it did anything wrong.  Per TheHeart.org, its statement was:
We are pleased to have reached a settlement agreement with the DoJ that fully resolves the postmarket-study matter in Boston. The company maintains that its postmarket studies and registries are legitimate clinical studies designed to gather important scientific data, and St Jude Medical does not admit liability or wrongdoing by entering into this agreement. The company entered into a settlement agreement to avoid the potential costs and risks associated with litigation. This settlement brings the previously reported postmarket-study investigation to a close.

So it all gets so tiresome. The continuing march of legal settlements like this does provide evidence of how professionalism and evidence-based medical practice are under threat. However, as we have said ad infinitum, such settlements are likely to do nothing to alleviate that threat.

To repeat the conclusion of our last post about St Jude Medical, the usual sorts of legal settlements we have described do not seem to be an effective way to deter future unethical behavior. Even large fines (and the one described above would be peanuts to a large health care corporation) can be regarded just as a cost of doing business. Furthermore, the fine's impact may be diffused over the whole company, and ultimately comes out of the pockets of stockholders, employees, and customers alike. It provides no negative incentives for those who authorized, directed, or implemented the behavior in question. My refrain has been: we will not deter unethical behavior by health care organizations until the people who authorize, direct or implement bad behavior fear some meaningfully negative consequences. Real health care reform needs to make health care leaders accountable, and especially accountable for the bad behavior that helped make them rich.

Study highlights 'lurking question' of measuring EHR effectiveness: The science in Medical Informatics is dead

The science in Medical Informatics is dead.

I'm not going to even use academic fabric softener in my assertion, e.g., "may be", "appears to be", or "is it?" (as a question) dead.

It's dead.

When HIT experts recommend moving the study goalposts because existing studies don't give the results they'd like to see, rather than first and foremost critically and rigorously examining why we're seeing unexpected results, science is dead.


Study highlights 'lurking question' of measuring EHR effectiveness

December 22, 2010 | Molly Merrill, Associate Editor

WASHINGTON – Hospitals' use of electronic health records has had just a limited effect on improving the quality of medical care nationwide, according to a study by the nonprofit RAND Corporation.

The study, published online by the American Journal of Managed Care, is part of a growing body of evidence suggesting that new methods should be developed to measure the impact of health information technology on the quality of hospital care.


[In other words, we're not getting the results we thought and hoped we'd get with "Clinical IT 1.0", so let's alter the study methodologies and endpoints --- rather than using the results we have to identify the causes and improve the technology to see if we can do better with "Clinical IT 2.0."

Further, it's not as if there's no other data on why health IT might not work as hoped - ed.]

Most of the current knowledge about the relationship between health IT and quality comes from a few hospitals that may not be representative, such as large teaching hospitals or medical centers that were among the first to adopt electronic health records.


[This implies "other" "representative" hospitals are either not doing it right, or the technology is ill suited for them and may never work. Which is it? We really need to know before we proceed with hundreds of billions more in this "Grand Experiment" - ed.]

The RAND study is one of the first to look at a broad set of hospitals to examine the impact that adopting electronic health records has had on the quality of care.

The research included 2,021 hospitals – about half the non-federal acute care hospitals nationally. Researchers determined whether each hospital had EHRs and then examined performance across 17 measures of quality for three common illnesses – heart failure, heart attack and pneumonia. The period studied spanned from 2003 to 2007.

The number of hospitals using either basic or advanced electronic health records rose sharply during the period, from 24 percent in 2003 to nearly 38 percent in 2006.

[How many billions of dollars diverted from patient care needs does that represent? - ed.]

Researchers found that the quality of care provided for the three illnesses generally improved among all types of hospitals studied from 2004 to 2007. The largest increase in quality was seen among patients treated for heart failure at hospitals that maintained basic electronic health records throughout the study period.

However, quality scores improved no faster at hospitals that had newly adopted a basic electronic health record than in hospitals that did not adopt the technology.

[In other words, the improvements or lack thereof had little to do with electronic vs. paper record keeping - ed.]

In addition, at hospitals with newly adopted advanced electronic health records, quality scores for heart attack and heart failure improved significantly less than at hospitals that did not have electronic health records.

[In other words, the clinical IT was probably impairing doctors compared to simpler paper methods and good HIM personnel - ed.]

EHRs had no impact on the quality of care for patients treated for pneumonia.

Researchers say the mixed results may be attributable to the complex nature of healthcare.

[That is likely true, but maybe the mixed results are also -- and even more likely in major part -- due to poorly designed and/or poorly implemented IT - ed.]

Focusing attention on adopting EHRs may divert staff from focusing on other quality improvement efforts.

[That speaks to poor EHR overkill, poor usability, unfitness for purpose, and other issues that may or may not be remediable in short or even long term - ed.]

In addition, performance on existing hospital quality measures may be reaching a ceiling where further improvements in quality are unlikely.

[That speaks to a low or even negative ROI for the hundreds of billions of dollars being diverted to the IT sector - ed.]

"The lurking question has been whether we are examining the right measures to truly test the effectiveness of health information technology," said Spencer S. Jones, the study's lead author and an information scientist at RAND. "Our existing tools are probably not the ones we need going forward to adequately track the nation's investment in health information technology."


["Probably" not the ones we need? How can the authors know this? This is not science, it is speculation.
Further, I'd say the scientific imperative before we design "the right measures" to "truly" test the effectiveness of HIT is to understand why the measures we're using now are not showing the desired results, because perhaps they are perfectly adequate and are revealing crucial flaws, overestimations and false assumptions that need to be dealt with, now, not after another round of billions is spent - ed.]

New performance measures that focus on areas where EHRs are expected to improve care should be developed and tested, according to researchers.

[In pharma clinical trials, this is akin to what is known as "changing the study methodologies and endpoints" - a form of manipulating clinical research, usually with the true ultimate endpoint of money - ed.]

For example, EHRs are expected to lower the risk of adverse drug interactions, but existing quality measures do not examine the issue.


[I believe the studies that have been done of CPOE have not been consistently supportive, and in fact show CPOE might create new med errors - ed.]

"With the federal government making such a large investment in this technology, we need to develop a new set of quality measures that can be used to establish the impact of electronic health records on quality," Jones said.


[This is truly putting the cart before the horse as I wrote here. The studies showing the benefit should have long preceded the "large investments" that were decided upon - ed.]

Support for the study was provided by RAND COMPARE (Comprehensive Assessment of Reform Efforts). RAND developed COMPARE to provide objective facts and analysis to inform the dialogue about health policy options. COMPARE is funded by a consortium of individuals, corporations, corporate foundations, private foundations and health system stakeholders.

Other authors of the study are John L. Adams, Eric C. Schneider, Jeanne S. Ringel and Elizabeth A. McGlynn.
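Methodologically, the adopter-versus-non-adopter comparison described in the article is, in essence, a difference-in-differences contrast: the change in quality scores among hospitals that adopted EHRs minus the change among hospitals that did not. A minimal sketch, with entirely hypothetical numbers rather than the study's actual data:

```python
def difference_in_differences(adopter_pre, adopter_post,
                              control_pre, control_post):
    """Change in adopters' quality scores minus change in non-adopters'.
    A positive value would suggest adoption was associated with faster
    improvement; a value near zero mirrors the reported finding that
    adopters improved no faster than non-adopters."""
    return (adopter_post - adopter_pre) - (control_post - control_pre)

# Hypothetical mean composite quality scores (0-100), not the study's data:
effect = difference_in_differences(
    adopter_pre=82.0, adopter_post=88.0,   # adopters improved 6 points
    control_pre=81.0, control_post=87.0,   # non-adopters also improved 6
)
print(f"difference-in-differences estimate: {effect:+.1f}")
```

When both groups improve at the same rate, as in this sketch, the estimate is zero: the improvement cannot be attributed to the technology.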

The overarching assumption is that the metrics are wrong, not the quality and fitness for purpose of the technology, whose 'wrongness' is painfully obvious from the aforementioned other literature, e.g., link, link, link, and the many posts at this blog referring to other literature. Are the authors unaware, one might ask? I know they are not. (Or are they blinded? That is, could there be external pressures affecting their thought processes? The arguments might not unreasonably be construed as skewed from that perspective.)

Recognizing the atrocious user experience and mission-hostile nature of the technology (link), how disruptive it is (link, link), and how often it is poorly implemented by domain amateurs to support the financial battles of payers rather than the cognitive processes of clinicians, I am amazed there is any sign of improvement at all, not outright deterioration. (That there is not outright deterioration of care reflects, if anything, the hard mental labor and ingenuity of clinicians working around the technology's deficits.)

I note that the last time RAND looked at such matters there were problems with pro-health IT bias, among other issues (see my Feb. 2009 post "Heartland Institute Research & Commentary: Health Information Technology").

Carl Sagan wrote that science is a candle in the dark in a demon-haunted world.

It seems the demons are winning.

-- SS