Word of Caution: Proving “Less is More” is More or Less Easy
Let’s talk about Comparative Effectiveness, as it seems to be the buzzword of the day. As cardiologists, we’re used to this type of research; in fact, I doubt any other field has performed more randomized controlled trials. But why call it Comparative Effectiveness? I can’t help but worry that the driving force is less about science than about cost, and that what we are really talking about is Cost Effectiveness. For a field as rich in innovative and expensive technology as interventional cardiology, this is potentially an important distinction.
One big problem with Comparative Effectiveness research is that it is always harder to prove one therapy better than another than to show the two are similar, because randomized trials default to the null hypothesis of “no difference” unless it is overturned by a perfectly devised and executed protocol. In addition, hard endpoints such as death and myocardial infarction (although clearly the most important) are reduced by such a slim margin with even the most proven of therapies that a trial has very little wiggle room before the null hypothesis becomes unavoidable.
Let’s take the COURAGE trial as an example, since it epitomizes the types of trials we may be faced with going forward. To show that PCI reduces death and myocardial infarction, the study needs to be perfect. That’s a difficult thing to do for such a controversial topic. You need (1) the correct patient population, one that both addresses the question and is generalizable to the “real world”; (2) the correct “guess” as to the magnitude of treatment benefit; (3) adequate power to detect a difference; (4) the correct years of follow-up; (5) treatment in both arms that represents the “best” care for each approach; (6) lack of crossover between arms; (7) complete follow-up of all patients; (8) an impartial steering committee; and finally, (9) consecutive and rapid patient enrollment that avoids selection bias. This last point is very important, as trials that take too long to enroll, or that have a low “enrolled-to-screened” ratio, typically end up enrolling low-risk patients, with hard endpoint event rates too low for the initial power calculation to hold. Trials that don’t live up to these standards invariably default to the null hypothesis, in this case that PCI has no meaningful benefit.
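To see just how little wiggle room a hard-endpoint trial has, consider a back-of-the-envelope power calculation. The sketch below (plain Python, using the standard two-proportion normal approximation) assumes purely hypothetical event rates, not COURAGE’s actual figures: a 5% rate of death or myocardial infarction under medical therapy, reduced to 4% with PCI.

```python
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-arm sample size needed to detect a difference
    between two event rates (two-sided test, normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z(power)            # critical value for desired power
    p_bar = (p1 + p2) / 2        # pooled event rate under the null
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical rates: 5% hard-endpoint rate with medical therapy,
# 4% with PCI (a 20% relative risk reduction).
n = sample_size_two_proportions(0.05, 0.04)
print(round(n))  # on the order of 6,700 patients per arm
```

Even that optimistic 20% relative reduction demands thousands of patients per arm, and if selective enrollment drags the true event rate below the assumed 5%, the trial becomes underpowered and the null hypothesis wins by default.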
The danger inherent in Comparative Effectiveness, therefore, is that most new technology, however great, faces an uphill battle to show effectiveness, especially on hard clinical endpoints, within the confines of a randomized controlled trial. Moreover, in this day and age, when mainstream media and the internet broadcast trial conclusions before they can be adequately dissected, “no difference” for a technology or for invasive management versus medical therapy (or the standard of care) may signal the immediate death of something truly great for patients, whether or not that conclusion came from a well-designed, well-executed and accurately interpreted trial. That’s good news for Cost Effectiveness, but potentially bad news for interventional cardiology and the patients we serve.
I get offers to participate in trials all the time, as I’m sure many of you do as well. But we as an interventional cardiology community should think long and hard about what we want to compare, which endpoints are important, whether such comparisons are realistic from a protocol and patient enrollment standpoint, and how to make sure that the trials we choose are only given credence if they prove themselves internally valid.
In short, we’re going to have to pay more attention to trial design, development, selection and enrollment going forward, keeping very much in mind that proving “less is more” is more or less easy.
Srihari S. Naidu, MD, FACC, FAHA, FSCAI is Director of the Cardiac Catheterization Laboratory, Interventional Cardiology Fellowship Program and Hypertrophic Cardiomyopathy Center at Winthrop University Hospital, and Assistant Professor of Medicine at SUNY – Stony Brook School of Medicine.