Back when I was a medical student, resident, and then fellow, I was taught that evidence-based medicine held the solution to all medical conundrums. If we didn't know which treatment was better, why not do a well-constructed clinical trial to figure it out? So I spent my time not only learning how to be a good clinician, but also memorizing the specific trials that speak to unique situations, so that I could develop both a gestalt about doing what's right for my patient and a set of rules that shouldn't be crossed.

Now, almost 7 years into my stint as an attending physician, and knocking on the door of Associate Professorship, I have realized that life is not quite as black and white as I once thought. Why? Because not only must trials be planned perfectly to accurately compare two strategies or treatments, but there are so many assumptions in terms of frequency of endpoints, statistical power, unbiased enrollment, and unanticipated confounders that designing and executing an accurate clinical trial remains very much a crapshoot in many cases. This is most evident for the most important questions: those so vital that we hesitate even to randomize patients, yet for which the truth would make it all worthwhile in the end. Unfortunately, these are the very trials that take 5-10 years to enroll, randomize a highly selected population, have significant crossover, and in the end typically show "no difference" between arms when evaluated by intention-to-treat (ITT). The most recent examples are the STICH Trial comparing CABG to optimal medical therapy in patients with LV dysfunction, and the PROTECT II Trial comparing IABP to Impella in high-risk PCI, but earlier examples include the COURAGE Trial and the SHOCK Trial. All of these trials addressed very important clinical issues, in many respects trying to see whether we could validate the clinical practice of aggressive management in various common situations.
Yet the vast majority of patients screened could not be enrolled, leaving a highly selected population to randomize in whom the clinician already deemed the two strategies likely equivalent; indeed, those patients in whom the answer seemed obvious were not randomized. You can see the inherent problem with this, one that no amount of statistical adjustment could ever correct. In the end, therefore, is it simply possible that nothing trumps clinical judgment and decades of experience in the most important situations? Is it enough to say, "We know from years of experience that patients with left main and multi-vessel disease and reduced LV function benefit from revascularization with CABG"? We certainly feel it's enough for left main disease, as those patients were not allowed in the trial.

I think we may need to get comfortable with the fact that clinical trials probably should not be performed in the most dangerous or high-risk patient populations, where experience has taught us what to do. Or, if we do perform them, we must get comfortable with a "no difference" primary endpoint, yet continue to examine the trial for secondary findings. Although we are taught not to "read into" trials when the primary endpoint is not met, this is simply not reasonable for trials that are likely never to be repeated. From the SHOCK Trial, for example, we ultimately learned that outcomes improve with revascularization at 6 months; similarly, the STICH Trial may prove overall benefit for CABG in secondary analyses, while the PROTECT II Trial may prove overall benefit for Impella support at longer follow-up. We should be mindful of these lessons as more "difficult to enroll" trials are completed and presented, such as FREEDOM.

Dr. Srihari S. Naidu is Director of the Cardiac Catheterization Laboratory, Interventional Cardiology Fellowship Program, and Hypertrophic Cardiomyopathy Center at Winthrop University Hospital, and Assistant Professor of Medicine at SUNY – Stony Brook School of Medicine.