The Case of the Anatomic Heart

Our obsession with diagnostic certainty has led us down many false paths and blind alleyways in the history of medicine. Nowhere is this more true than in cardiovascular health. The small successes we have obtained when treating the highest-acuity patients have been enthusiastically, and incorrectly, applied across the entire spectrum of coronary artery disease. Focusing too heavily on the anatomical definition of this disease state has limited our true understanding of the pathological process causing heart disease, and in turn limited our comprehension of how best to intervene. Despite a large body of evidence contradicting the theory, we have held fast to the “clogged pipe” model.

PCI found its initial success in the treatment of ST-elevation myocardial infarctions, the most clinically obvious pathological end result of cardiovascular disease. In such cases we discovered that this invasive procedure was only slightly better than our systemic attempts to open the stenotic vessel using aspirin and thrombolytic therapy (1). In fact, most data comparing PCI to thrombolytics in patients suffering from acute ST-elevation MIs revealed that close to 50 people had to be treated with cardiac catheterization for one to benefit (1). Considering that thrombolytic therapy is only moderately better than aspirin alone, perhaps the pedestal we currently reserve for PCI in the management of ACS is not deserved (2).

The COURAGE trial, appropriately named for challenging the doctrine that coronary artery disease is best managed through invasive techniques, compared the efficacy of PCI vs “optimal” medical management in patients with stable coronary artery disease (3). The authors randomized patients with EKG evidence of ischemia at rest, or ischemia induced by some form of provocative testing, and at least one culprit lesion of 70% occlusion or greater, to either PCI or medical management. These are essentially the very patients we hope to identify through admission and provocative testing. No difference was found in the rates of death or MI during the follow-up period (median 4.6 years) between the patients who received PCI and those who underwent medical management (3).

These findings are consistent throughout the literature examining PCI vs medical management in patients with stable coronary artery disease. In a meta-analysis by Stergiopoulos et al of all 13 trials examining this question, no difference could be found between medical management and aggressive interventional procedures (4). Even in patients who are enzyme positive but otherwise clinically stable, no definitive benefits have been demonstrated with aggressive utilization of PCI. In fact, when urgent PCI is empirically mandated, there is an increase in early mortality (5). In Emergency Department patients who have been ruled out for acute disease by EKG and enzymes, further evaluations for anatomic disease not only identify diminishingly small numbers of true positives, but the interventions proposed do not result in clinically meaningful improvements in outcomes. Clearly we have overreached our meager successes and applied a crash procedure to a far different pathology than the one in which it originally found its success.

This obvious lack of efficacy has not gone unnoticed. Many have suggested the need for a more refined method of identifying high-risk lesions that would benefit from an invasive approach. Fractional Flow Reserve (FFR) is a technique that has been proposed as the answer to clarify which lesions would benefit from stent placement. This invasive technique is performed in concert with standard PCI and allows the interventionalist to measure the blood pressure proximal and distal to the stenotic lesion. These values are turned into a ratio in the hopes of numerically quantifying coronary flow. Anything under 0.80 is deemed an ischemic stenosis and as such would benefit from the placement of a stent. Despite its physiologic plausibility, trials examining its efficacy have been less than stellar. The initial two studies comparing FFR-guided PCI to traditional PCI, DEFER and FAME, demonstrated a decreased rate of myocardial infarctions at 2-year follow-up (6,7). Initially these results seem to favor FFR-guided PCI, but upon closer inspection the data reveal that the difference between the groups consisted primarily of a decrease in procedure-related events, suggesting that the only benefit FFR provides is to inhibit the ocular-stenotic reflex, quite prevalent in the modern Interventional Cardiologist (8). Neither of these studies addresses the important question: how does FFR-guided PCI compare to conservative medical management alone?
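For the quantitatively inclined, the arithmetic behind that threshold is simple: FFR is the ratio of mean pressure distal to the stenosis (Pd) to aortic pressure (Pa) during maximal hyperemia. A minimal sketch, with invented pressure values purely for illustration:

```python
# Illustrative sketch of the FFR arithmetic. The pressure values below are
# hypothetical, chosen only to demonstrate the calculation.

ISCHEMIC_THRESHOLD = 0.80  # ratios below this are classified as ischemic stenoses

def fractional_flow_reserve(pd_mmhg: float, pa_mmhg: float) -> float:
    """Return the ratio of distal (Pd) to aortic (Pa) mean pressure
    measured during maximal hyperemia."""
    return pd_mmhg / pa_mmhg

def is_ischemic(ffr: float) -> bool:
    """Apply the conventional 0.80 cutoff."""
    return ffr < ISCHEMIC_THRESHOLD

ffr = fractional_flow_reserve(pd_mmhg=68, pa_mmhg=100)
print(f"FFR = {ffr:.2f}, ischemic: {is_ischemic(ffr)}")  # FFR = 0.68, ischemic: True
```

A perfectly patent vessel would give a ratio near 1.0; the controversy discussed above is not the arithmetic but whether acting on the 0.80 cutoff changes outcomes.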

Introducing FAME-2. Like COURAGE before it, FAME-2 sought to answer whether FFR adds anything to medical management alone. The preliminary data were published in 2012 after the trial was halted prematurely (9), but the official 2-year follow-up results were recently published in the NEJM (10). The authors randomized patients with angiographically stentable lesions, and either classic anginal symptoms or positive findings on provocative testing after a negative ED workup, to either standard medical management or FFR-guided PCI. The authors utilized a composite endpoint of cardiovascular death, MI, or urgent revascularization. The trial was stopped early after enrolling only half its intended sample size, 1,220 patients, due to an unacceptable number of events in the medical management group. Taken at face value this seems like an overwhelming endorsement of FFR’s clinical utility: long-awaited proof that downstream testing after a negative ED workup results in clinically important benefits, and a justification for the substantial number of low-risk patients we admit to the hospital each day. Despite these seemingly positive results, this trial does not justify our risk-averse strategy. Though there was an 11.4% absolute difference in the rate of primary events (8.1% vs 19.5%) between the groups, this margin consisted entirely of an increased frequency of urgent revascularizations. There was no difference in the number of deaths or MIs between the groups. The majority of these excess revascularizations were performed for persistent symptoms, demonstrating that this claimed efficacy was primarily due to our biases rather than any overwhelming benefit of FFR-guided PCI.
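To put that composite difference in number-needed-to-treat terms, a back-of-envelope calculation on the event rates quoted above (my own arithmetic, not a figure from the paper):

```python
# Back-of-envelope absolute risk reduction (ARR) and number needed to
# treat (NNT) from the FAME-2 composite event rates quoted above.
control_rate = 0.195    # composite events, medical management group (19.5%)
treatment_rate = 0.081  # composite events, FFR-guided PCI group (8.1%)

arr = control_rate - treatment_rate  # the 11.4% absolute difference
nnt = 1 / arr                        # roughly 9 patients per composite event avoided
print(f"ARR = {arr:.1%}, NNT = {nnt:.1f}")  # ARR = 11.4%, NNT = 8.8
```

An NNT of roughly 9 sounds impressive, but because the entire margin was urgent revascularization, it describes repeat procedures avoided, not deaths or MIs prevented.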

Clearly FFR does not provide us with the clarity we seek. Even the theoretical ground FFR stands upon is thin, as the ischemic threshold we currently use was derived by comparing FFR values to a gold standard of non-invasive perfusion imaging, a test of questionable clinical value (11). FFR may very well still play a role in the management of coronary artery disease, but its relevance in the management of ACS is insignificant.

How this changes the long-term management of coronary artery disease is unclear, but what is becoming increasingly apparent is that an anatomic definition of coronary disease, following a negative Emergency Department workup for ACS, provides no further clinical benefit. A number of trials have demonstrated that the addition of angiography or CT angiography does nothing to further risk stratify these patients (12-16). The rate of true positive disease in this cohort is diminishingly low (17). Even when the rare patient is found to truly have anatomically defined disease, direct invasive interventions add little clinical benefit over aggressive medical therapy. Anatomic investigation may very well remain an important component in the management of cardiovascular disease, not to identify those patients who would truly benefit from cardiac catheterization, but to distinguish which patients require aggressive medical management. Surely this is not a priority in the Emergency Department evaluation of ACS.

More than ever we require a practical outlook when it comes to resource application in Emergency Medicine. Trying harder and doing more rarely lead to improved patient-oriented outcomes. In the case of Emergency Department management of ACS, it is imperative we admit that our current strategy has failed. We are striving to identify an exceedingly rare population in the hopes of offering an intervention that provides insignificant patient-oriented benefits. Despite our technical mastery, technological advances, and intellectual machinations, PCI remains a crash procedure that has only demonstrated proven benefit in the sickest cohorts of CAD. Outside the confines of ST-elevation MI we have yet to identify a population that consistently benefits from this invasive approach to management. Continually insisting on tilting against the massive windmill that is Heart Disease, with a lance poorly equipped for the purpose, has led us too far down the path of madness. Surely it’s time to turn around and start the long walk back to sanity…

Sources Cited:

  1. Cucherat M, Bonnefoy E, Tremeau G. Primary angioplasty versus intravenous thrombolysis for acute myocardial infarction. Cochrane Database Syst Rev. 2000;2:CD001560.
  2. ISIS-2 (Second International Study of Infarct Survival) Collaborative Group. Randomised trial of intravenous streptokinase, oral aspirin, both, or neither among 17,187 cases of suspected acute myocardial infarction: ISIS-2. Lancet. 1988;2(8607):349-60.
  3. Boden WE, O’Rourke RA, Teo KK, et al. Optimal medical therapy with or without PCI for stable coronary disease. N Engl J Med. 2007;356(15):1503-16.
  4. Stergiopoulos K, Brown DL. Initial coronary stent implantation with medical therapy vs medical therapy alone for stable coronary artery disease: meta-analysis of randomized controlled trials. Arch Intern Med. 2012;172(4):312.
  5. Mehta SR, Cannon CP, Fox KA, et al. Routine vs selective invasive strategies in patients with acute coronary syndromes: a collaborative meta-analysis of randomized trials. JAMA. 2005;293(23):2908-17.
  6. Pijls NH, van Schaardenburgh P, Manoharan G, et al. Percutaneous coronary intervention of functionally nonsignificant stenosis: 5-year follow-up of the DEFER Study. J Am Coll Cardiol. 2007;49:2105-11.
  7. Tonino PA, De Bruyne B, Pijls NH, et al. Fractional flow reserve versus angiography for guiding percutaneous coronary intervention. N Engl J Med. 2009;360(3):213-24.
  8. Bradley SM, Spertus JA, Kennedy KF, et al. Patient selection for diagnostic coronary angiography and hospital-level percutaneous coronary intervention appropriateness: insights from the national cardiovascular data registry. JAMA Intern Med. 2014;174(10):1630-9.
  9. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001.
  10. De Bruyne B, Fearon WF, Pijls NH, et al. Fractional flow reserve-guided PCI for stable coronary artery disease. N Engl J Med. 2014;371(13):1208-17.
  11. Pijls NH, De Bruyne B, Peels K, et al. Measurement of fractional flow reserve to assess the functional severity of coronary-artery stenoses. N Engl J Med. 1996;334:1703-8.
  12. deFilippi CR, Rosanio S, Tocchi M, et al. Randomized comparison of a strategy of predischarge coronary angiography versus exercise testing in low-risk patients in a chest pain unit: in-hospital and long-term outcomes. J Am Coll Cardiol. 2001;37(8):2042-9.
  13. Goldstein JA, Gallagher MJ, O’Neill WW, Ross MA, O’Neil BJ, Raff GL. A randomized controlled trial of multi-slice coronary computed tomography for evaluation of acute chest pain. J Am Coll Cardiol. 2007;49(8):863-71.
  14. Goldstein JA, Chinnaiyan KM, Abidov A, et al. The CT-STAT (Coronary Computed Tomographic Angiography for Systematic Triage of Acute Chest Pain Patients to Treatment) trial. J Am Coll Cardiol. 2011;58(14):1414-22.
  15. Hoffmann U, Truong QA, Schoenfeld DA, et al. Coronary CT angiography versus standard evaluation in acute chest pain. N Engl J Med. 2012;367(4):299-308.
  16. Litt HI, Gatsonis C, Snyder B, et al. CT angiography for safe discharge of patients with possible acute coronary syndromes. N Engl J Med. 2012;366(15):1393-403.
  17. Hermann LK, Newman DH, Pleasant WA, et al. Yield of routine provocative cardiac testing among patients in an emergency department-based chest pain unit. JAMA Intern Med. 2013;173(12):1128-33.


“The Adventure of the Sussex Vampire”


A brief forethought: this post will stray from the usual whimsical rants regarding recent literature in Emergency Medicine. Instead we will focus on the far more practical topic of the insertion of the Midline catheter. Of note, I am overwhelmingly biased in favor of these devices and have taken such a fanciful liking to them that I can’t possibly be an objective critic. With this in mind…


The dichotomy that is venous access has become far more ambiguous since the establishment of ultrasound (US) to identify vascular targets. Prior to US, admission to the circulatory system was gained via superficial peripheral veins located by direct visualization, palpation or through central veins large enough to cannulate by blindly sticking a needle in the area of their anticipated anatomic location.

Since employing US to sonographically identify a deeper set of peripheral veins, the clear-cut boundaries that once separated central and peripheral access have become blurred. Not only has US improved our success with both these traditional techniques (1,2), it has introduced a new target for our blood-hungry beveled needle tips to pursue. A larger set of peripheral veins that were once too deep for direct visualization and too small for blind exploration have, thanks to US, become an appealing target. To our surprise and dismay, these deep peripheral lines are far less durable than their size and integrity would suggest (3). In fact, the rate of line failure within the first 24 hours (46%) is unacceptably high for clinical use (3,4).

This failure is more likely due to a deficit in our equipment than to a deficiency in these anatomic structures. Our current line of peripheral catheters is not designed to access such deep vessels; these catheters were intended to cannulate superficial veins. When employed on deeper veins they typically do not have adequate length to sufficiently seat the catheter tip in the vessel. This creates two problems. First, the catheter will often become dislodged by even the smallest movement from the patient, as the elastic recoil of the soft tissue surrounding the precariously placed catheter pulls it from its target vessel. Second, the steep trajectory taken to ensure the catheter reaches these vessels often results in the needle damaging the posterior wall of the vessel. This would not be of great importance if the catheter possessed the necessary length to thread beyond the damaged endothelium. Unfortunately, even our long peripheral catheters lack the required length and end up sitting right next to this damaged portion of the vessel. These deficits have led to an abnormally high failure rate.

The deep venous or Midline catheters offer a viable solution to these obstacles. A number of recent studies have demonstrated the successful use of a small wire to thread a single-lumen catheter into the deep veins of the upper extremity. This technique allows for secure access to these deeper vessels while avoiding the risks associated with central line insertion. These studies, examining the insertion of such lines, have confirmed their superior durability when compared to standard catheters, with far fewer line-related infections than observed with central venous catheterization (7).

The most recent of these trials, by Meyer et al, examines the use of arterial catheters inserted into either the cephalic or basilic veins (5). In the study’s participating hospital it was routine for the ICU to provide “line consultations”, during which the Critical Care physician would be called to the ward to obtain venous access in particularly difficult patients. Difficult access was defined in this cohort as three failed attempts to insert a peripheral line by an experienced nurse, after which all lines were inserted by a single operator, the study’s second author, Dr. Pierrick Cronier, using either 18 or 20 gauge arterial catheters 8 to 11 cm in length. Catheters were placed using the Seldinger technique under full aseptic conditions.

All 29 lines were placed successfully, the majority in the basilic vein (66%) and the remainder in the cephalic vein (34%). Only three catheters were removed early. One was accidentally removed by the patient, up until which point it had been functioning normally. The remaining two catheters were removed due to absence of blood return and were found to be occluded. Two additional catheters were found to be colonized with coagulase. Using ultrasound, no thrombophlebitis was visualized on any catheter prior to removal.

The small sample size and use of a single skilled operator limit the conclusions that can be drawn from such a study and thus this trial does very little to answer the more pertinent questions Emergency Physicians have regarding the use of Midlines.

What is the optimal catheter for Midline insertion?

In the Meyer et al cohort the authors studied catheters intended for arterial use (5). In earlier cohorts both Elia et al and Mills et al employed single-lumen central venous catheters of varying lengths (12 and 15 cm respectively) with equal success, while others have used catheters specifically designed for Midline insertion (3,6,7). No one study directly compares the various catheter choices, but all demonstrate similar efficacy and safety profiles in their independent cohorts. There is a Goldilocks effect in the sense that the longer the catheter inserted, the greater the durability of the line, and yet if the catheter tip extends proximally enough to be seated in the subclavian vein, the rate of catheter-related infections approaches numbers associated with the use of central line catheters (8). Seemingly no one catheter type appears superior to any other. It is important to remember that unlike central lines, we are inserting catheters into much smaller, thin-walled vessels and thus should choose a catheter insertion kit nimble enough to navigate such delicate structures.

Can vasopressors be safely administered through Midline catheters?

Most of the trials examining Midline catheters focused on the rate of successful insertion and line safety. All of these trials treated Midlines as if they were peripheral lines (3,4,5,6,7), and as such there is no specific evidence evaluating the safety of administering vasopressor medications. This lack of evidence does not invalidate their use. The most commonly cited concern is that, due to the depth of these venous structures, an infiltrative event may go unrecognized long enough to cause significant damage before the infusion could be stopped. This would be a valid concern except that, unlike their more superficial cousins, Midlines very rarely infiltrate. When compared to traditionally inserted peripheral lines, Midlines fail far less often and in far different ways. Peripheral lines tend to infiltrate or become dislodged because of the short distance between insertion site and distal catheter tip, in conjunction with the delicate nature of the vessels. Conversely, Midlines fail later in their course, primarily due to distally located occlusions. The failure rate of Midline catheters is approximately 14%, with a median time to failure of 6.19 days (3). In all the cohorts examining the insertion and use of Midlines, only two infiltrative events occurred. In Mills et al, this event transpired directly after a difficult insertion performed by one of their less experienced practitioners (6). In El-Shafey et al, a single Midline catheter infiltrated shortly after its insertion (4). Both of these events occurred soon after insertion and quickly became clinically obvious. The much-feared “occult” infiltration has yet to manifest in any cohort examining the use of Midline catheters. That being said, it is important to note that not all Midlines are created equal. The small tortuous vein in which one experiences great difficulty threading a catheter is a far different line than a large, easily compressible vein that threads easily with adequate blood return.

Finally, the generalizability of these trials is limited by the expertise of the practitioners inserting these lines. The majority of these trials called upon a few experienced practitioners to perform each catheterization. Only one trial, Elia et al, utilized a variety of practitioners (attendings, residents and nurses) with varying levels of experience (3). This variability is reflected in its lower success rate when compared to similar cohorts (86% vs 93% and 100%) (5,6). From my personal experience there is a learning curve when first beginning to insert these types of lines. The delicate act of threading the wire is far more difficult than in the insertion of a central line. Because of the small size of these vessels there is a greater likelihood that the tip of the needle will become dislodged when detaching the syringe. Once this skill is perfected, the placement of these lines becomes far simpler.

Our threshold for changing practice is frustratingly fickle. We have accepted that the use of US to identify deeper peripheral vessels will both improve peripheral line insertion and decrease the need for central line placement. Yet our success is limited by our use of traditional techniques and unsuitable equipment, neither of which was intended to cannulate such deep vasculature. Likewise, we incorrectly apply the rules that govern superficial peripheral vasculature to these deeper, more durable vessels. In order to optimize our success with US-guided peripheral access, it is imperative we realize these vessels are unlike their more superficial brethren, and we must adapt our ideology, methods and tools accordingly.

Sources Cited:

1. Leung J, Duffy M, Finckh A. Real-time ultrasonographically-guided internal jugular vein catheterization in the emergency department increases success rates and reduces complications: a randomized, prospective study. Ann Emerg Med. 2006;48(5):540-7.

2. Costantino TG, Parikh AK, Satz WA, Fojtik JP. Ultrasonography-guided peripheral intravenous access versus traditional approaches in patients with difficult intravenous access. Ann Emerg Med. 2005;46(5):456-61.

3. Elia F, Ferrari G, Molino P, et al. Standard-length catheters vs long catheters in ultrasound-guided peripheral vein cannulation. Am J Emerg Med. 2012;30(5):712-6.

4. Eid Mohamed El-Shafey, Tarek F. Tammam. Ultrasonography-Guided Peripheral Intravenous Access: Regular Technique Versus Seldinger Technique in Patients with Difficult Vascular. European Journal of General Medicine. 2012; Vol. 9, No. 4 .

5. Meyer P, Cronier P, Rousseau H, et al. Difficult peripheral venous access: Clinical evaluation of a catheter inserted with the Seldinger method under ultrasound guidance. J Crit Care. 2014;29(5):823-7.

6. Mills CN, Liebmann O, Stone MB, Frazee BW. Ultrasonographically guided insertion of a 15-cm catheter into the deep brachial or basilic vein in patients with difficult intravenous access. Ann Emerg Med. 2007;50(1):68-72.

7. Mermel LA, Parenteau S, Tow SM. The risk of midline catheterization in hospitalized patients. A prospective study. Ann Intern Med. 1995; 123:841-4.

8. Kearns PJ, Coleman S, Wehner JH. Complications of long arm-catheters: a randomized trial of central vs peripheral tip location. JPEN J Parenter Enteral Nutr. 1996;20(1):20-4.

A Case of Shadows


In medicine we frequently propagate half-truths and unsubstantiated certainties. Thus truth is a relative experience, dependent primarily on how we choose to define it rather than on any concrete state of reality. Increasingly we have favored a technological definition of truth over the clinical perspective. As such we are driven to act in disease states that are often best treated by blissful ignorance. Where we draw the line between clinically relevant and subclinical disease seems dependent on our own comfort with uncertainty. Given this culture, it is not surprising that bedside ultrasound (US) has become a popular tool to evaluate the majority of ailments that may show up in the Emergency Department. With our expanding technical skills, so too has grown our comfort in using this modality to make clinical decisions. At this point, such a level of technical proficiency has been achieved that we have outpaced the literature base required to guide these decisions. Until recently, the majority of the literature addressing bedside US has been limited by its use of surrogate endpoints and disease-oriented definitions of success. Thus we stand at a crossroads in Emergency Medicine. This is not intended to discredit bedside US as a modality, but rather is a commentary on its users, and our inability to separate clinically relevant reality from the pixelated truth we see on our monitors. To ask the question: how exactly should we determine our sonographic definition of truth?

A recent article published in The Lancet Respiratory Medicine, by Laursen et al, is the first randomized controlled trial examining the effects of bedside US utilization on patient outcomes (1). Up until the publication of this article, the efficacy of US was evaluated through studies addressing its diagnostic accuracy. US was compared to a more traditional diagnostic tool, often using an impossible gold standard. In many cases US proved comparable or even superior to the traditional diagnostic modality. These types of studies helped us define the potential utility of bedside US, but we have outgrown these humble beginnings. What is now required are trials examining the patient-centered effects of incorporating bedside US into our practice.

The findings of the Laursen trial were covered in more detail in my previous post on EM Literature Of Note, and examined in an even more expert fashion by Simon Carley on the St. Emlyn’s blog. I have included an excerpt from my post as a summation of these findings:

Authors randomized patients presenting to the ED with signs or symptoms concerning for a respiratory etiology to either a standard work up as determined by the treating physician or the addition of POCUS performed by a single experienced operator. The US protocol consisted of sonographic examination of the heart, lungs and lower extremity deep veins to identify possible causes of patients’ symptoms. The authors’ primary outcome was the percentage of patients with a correct presumptive diagnosis 4 hours after presentation to the Emergency Department as determined by two physicians blinded to ED POCUS findings, but with access to the records of the entire hospital stay.

Using this POCUS protocol the authors found stunning success in their primary endpoint. Specifically, the rate of correct diagnoses made at 4-hours in the POCUS group was 88% compared to 63.7% in the standard work up group. Furthermore 78% of the patients in the POCUS group received “appropriate” treatment in the Emergency Department compared to 56.7% in the standard work up group.

Though promising, these benefits did not translate into improvements in true patient-oriented outcomes. Though not statistically significant, the observed in-hospital and 30-day mortality trended towards harm in the POCUS arm (8.2% vs 5.1% and 12% vs 7% respectively). Nor was there any meaningful difference in length of stay or hospital-free days between the POCUS group and the control group. Even more concerning was the significant increase in downstream testing that occurred in patients randomized to the POCUS group, specifically the number of chest CTs (8.2% vs 1.9%), echocardiograms (10.1% vs 3.8%) and diagnostic thoracocenteses (5.7% vs 0%).

It is important to note that the pathologies found in the POCUS group were not false positives. These patients had additional diagnostic tests confirming the validity of the bedside findings. As such this is not a question of technical competency, but rather a question of clinical relevancy. The significant increase in diagnostic proficiency found in the POCUS group did not result in improved patient-oriented outcomes; in fact there were trends towards harm in both in-hospital and 30-day mortality. This, of course, may be statistical whimsy. Future trials may show it to be nothing more than the random noise generated by a small sample size, but these findings are concerning for a certain degree of overdiagnosis.

The Laursen trial is not a solitary signal standing out from a crowd of contrary data. There have been signs throughout the US literature demonstrating the potential for overdiagnosis, and though not definitive, this study certainly supports that hypothesis. When US is compared to CXR for the diagnosis of pneumonia, it reveals far more pathology (2). Does this mean we have been missing a large portion of pneumonias in otherwise well-appearing patients, or is this an example of overdiagnosis? Likewise, US is a far more sensitive modality for identifying pneumothoraxes when compared to CXR (3). And yet, like the pneumothoraxes discovered on CT but not seen on CXR, there is a question of whether such lesions require any intervention at all. What we do with this information is hard to say. None of these trials are robust enough to draw definitive conclusions. Despite their many flaws, surely we can no longer say with overwhelming certainty that ultrasound is free and harmless. As with any other test, it is only as good as the practitioners who use it.

A recent article by Kenji et al, published in The Journal of Critical Care, revealed bedside US to be a far more successful tool when used to guide care (4). These authors, utilizing a before-and-after design, examined the use of bedside echocardiography (echo) to guide resuscitative strategies in ICU patients presenting with pressor-dependent shock. Patients were prospectively evaluated over a 1-year period, the first 6 months constituting the standard care group and the following 6 months the echo-guided group. The standard care group used the Surviving Sepsis protocol to guide resuscitation, while the echo-guided group followed a protocol involving evaluation of cardiac function and IVC collapsibility. Echo evaluations were conducted by one of three intensivists with expertise in the use of bedside echocardiography. None of the physicians performing the echo exams were the primary physicians caring for the patients; rather, they made recommendations based on their findings. These recommendations were consistent with one of four scenarios:
1. If LV function was normal and IVC full, fluid was stopped and pressors continued
2. If LV function was normal and IVC was collapsible, a fluid bolus of 20-40 ml/kg was administered
3. If LV function was impaired and IVC was collapsible, 10-20 ml/kg was administered and dobutamine was initiated
4. If LV function was impaired and IVC was full, fluid was restricted and dobutamine was initiated.
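The four scenarios above reduce to a simple two-variable decision table. A sketch of that logic, with names of my own invention (the study specified these scenarios in prose, not code):

```python
# Sketch of the echo-guided decision table described above. The dataclass
# and function names are illustrative, not from the study protocol itself.
from dataclasses import dataclass

@dataclass
class Recommendation:
    bolus_ml_per_kg: tuple  # (low, high) fluid bolus range; (0, 0) = stop/restrict
    start_dobutamine: bool

def echo_guided_plan(lv_function_normal: bool, ivc_collapsible: bool) -> Recommendation:
    if lv_function_normal and not ivc_collapsible:
        return Recommendation((0, 0), False)    # stop fluid, continue pressors
    if lv_function_normal and ivc_collapsible:
        return Recommendation((20, 40), False)  # fluid bolus 20-40 ml/kg
    if not lv_function_normal and ivc_collapsible:
        return Recommendation((10, 20), True)   # 10-20 ml/kg plus dobutamine
    return Recommendation((0, 0), True)         # restrict fluid, start dobutamine

plan = echo_guided_plan(lv_function_normal=False, ivc_collapsible=True)
print(plan)  # Recommendation(bolus_ml_per_kg=(10, 20), start_dobutamine=True)
```

The point of laying it out this way is that each branch answers a specific clinical question, which is precisely the contrast with the shotgun approach discussed later.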

The primary outcome the authors examined was 28-day mortality. Secondary endpoints measured were the amount of fluid administered over the first four days of treatment, organ dysfunction, and days free of renal replacement therapy. A total of 220 patients were examined (110 in the standard therapy group and 110 in the echo-guided group). The vast majority of the patients evaluated were in vasodilatory shock, followed by a small minority in cardiogenic shock and a handful of patients in mixed or hemorrhagic shock. 25% of the patients in the echo-guided group were found to have severely impaired left ventricular function, and only 35% were deemed to require fluid augmentation as determined by IVC collapsibility. As such, patients in the echo-guided group received significantly less fluid over the first day of therapy (49 ml/kg vs 66 ml/kg) and were more likely to be started on dobutamine than those in the standard care group (22% vs 12%).

28-day mortality was 66% vs 56% in the standard and echo-guided groups respectively. This 10% difference reached statistical significance with a p-value of 0.04. Furthermore, patients in the echo-guided group had more days free of renal replacement therapy (RRT) and less grade 3 acute kidney injury (AKI).

This trial is by no means without its limitations. The before-and-after design and small sample size, not to mention the questionable efficacy of dobutamine, limit the strength of the conclusions that can be drawn. Despite these drawbacks, like the Laursen et al trial, the Kanji trial sets an important precedent in the US literature. Rather than examining US's utility using a surrogate disease-oriented endpoint, both of these trials investigated the effect US had on patient-oriented outcomes, specifically mortality.
Though these two trials examine two very different aspects of bedside ultrasonography, their distinction serves to illustrate our point appropriately. In the Laursen et al trial all patients presenting with respiratory signs or symptoms underwent a protocolized ultrasonographic investigation independent of their individual presentations. This shotgun distribution of sound waves is the equivalent of throwing a bunch of labs at a belly-pain patient and seeing what sticks. Finding something on US and then retrospectively fitting the patients to these findings will inevitably lead us down many false paths. Kanji et al also used a standardized protocol, but unlike the Laursen trial, they asked a specific clinical question pertinent to the patient's presentation and used US to answer it.

As with any form of testing, the acuity of the patient and the pretest probability of disease determine the performance of the investigation. Even the most specific tests, if used on the wrong population, will identify more false positives than true disease. It is my belief that bedside US is even more susceptible to these conditional circumstances. In the crashing trauma patient US becomes an invaluable tool to swiftly rule out tension pathology as the cause of the physiological insult (3). Conversely, when used in a patient with a more clinically benign presentation, the high sensitivity we so recently relied on becomes a detraction, as it is now prone to finding pneumothoraxes of little clinical relevance. Overall the sensitivity of US in the identification of appendicitis is fair, but as the disease process progresses and the clinical suspicion increases, the sensitivity of the test becomes far more clinically useful (5). The EFAST exam, when applied to patients with traumatic injury, has a poor sensitivity for identifying injury (6,7), but when used to identify the cause of a crashing trauma patient's hypotension it is clinically invaluable (8). Interestingly, in the hypotensive patient, where the pretest probability of clinically relevant pathology is extremely high, the potential for overdiagnosis from empirically applying standardized screening protocols such as the EFAST or RUSH exam becomes much less relevant.
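This interplay between pretest probability and test performance is Bayes' theorem at work. A small illustration (the 95%/90% test characteristics and the two pretest probabilities are round numbers chosen for illustration, not figures from any of the cited studies):

```python
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    # Bayes' theorem: the fraction of positive tests that reflect true disease.
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical test, 95% sensitive and 90% specific, in two settings:
# in a crashing patient (pretest probability ~50%) most positives are real...
high_risk = positive_predictive_value(0.95, 0.90, 0.50)  # ~0.90
# ...but in a clinically benign presentation (~2%) most positives are false.
low_risk = positive_predictive_value(0.95, 0.90, 0.02)   # ~0.16
```

Nothing about the probe or the operator changed between the two lines; only the population did.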

How do we move forward? US has traditionally been examined as a diagnostic test, meaning its utility is routinely compared to a gold standard. US studies of pneumonia, pneumothorax, appendicitis, or peritoneal injury are commonly evaluated against CT. Bedside echo is typically compared to comprehensive echocardiography as interpreted by an “expert cardiologist”, and measurements of fluid responsiveness are likened to invasive hemodynamic monitoring. Each of these gold standards possesses its own flaws. CT scans are prone to overdiagnosis (3,4), cardiologists disagree with each other as often as they disagree with emergency physicians when diagnosing heart failure (9), and the invasive hemodynamic measurements used to judge US's ability to assess fluid responsiveness have not been shown to improve patient-oriented outcomes when examined clinically (10). We need to utilize patient-relevant outcomes when evaluating the use of bedside US in order to assess its true value as a diagnostic tool. Future research should randomize patients with US-positive, CXR-negative pneumonias to antibiotic therapy or placebo, compare conservative management to chest tube insertion in patients found to have pneumothoraxes on US but not CXR, and assess fluid responsiveness in the hemodynamically volatile patient by examining mortality outcomes when US findings are used to guide therapy.

It is an exciting time in the world of point-of-care US. There are great minds with extraordinary vision pushing this field forward every day, and it is a privilege to experience this progression. But as technology advances and the quality of our point-of-care machinery improves, overdiagnosis will become an ever more imperative concern. If we choose to stick our heads in the sand, holding fast to the unquestionable certainty found in our pareidolic interpretation of shadows, we will surely redefine medical truth for the worse. Just as CTPA once changed the diagnosis of pulmonary embolism from a clinically relevant, dangerous disease to a primarily irrelevant disease-oriented definition, point-of-care US will identify a large quantity of subclinical disease of questionable clinical bearing. Conversely, if we choose to continue to question the proper application of point-of-care US and focus not only on our procedural expertise but on our medical stewardship, we will progress the field of bedside US and improve patient care. If we are to claim clinical expertise, our knowledge must extend beyond technical proficiency and integrate the wisdom needed to interpret these shadows.

Sources Cited:


  1. Laursen et al. Point-of-care ultrasonography in patients admitted with respiratory symptoms: a single-blind, randomised controlled trial. Lancet Respir Med. 2014;2(8):638-646.
  2. Bourcier et al. Performance comparison of lung ultrasound and chest x-ray for the diagnosis of pneumonia in the ED. Am J Emerg Med. 2014;32(2):115-8.
  3. Alrajab et al. Pleural ultrasonography versus chest radiography for the diagnosis of pneumothorax: review of the literature and meta-analysis. Crit Care. 2013;17(5):R208.
  4. Kanji et al. Limited echocardiography-guided therapy in subacute shock is associated with change in management and improved outcomes. J Crit Care. 2014;29(5):700-5.
  5. Bachur et al. The effect of abdominal pain duration on the accuracy of diagnostic imaging for pediatric appendicitis. Ann Emerg Med. 2012;60(5):582-590.e3.
  6. Quinn et al. What is the utility of the Focused Assessment with Sonography in Trauma (FAST) exam in penetrating torso trauma?. Injury. 2011;42(5):482-7.
  7. Becker et al. Is the FAST exam reliable in severely injured patients?. Injury. 2010;41(5):479-83.
  8. Laselle et al. False-negative FAST examination: associations with injury characteristics and patient outcomes. Ann Emerg Med. 2012;60(3):326-34.e3.
  9. Januzzi et al. The N-terminal Pro-BNP investigation of dyspnea in the emergency department (PRIDE) study. Am J Cardiol. 2005;95(8):948-54.
  10. Harvey et al. Assessment of the clinical effectiveness of pulmonary artery catheters in management of patients in intensive care (PAC-Man): a randomised controlled trial. Lancet. 2005;366(9484):472-7.

A Secondary Analysis of the Adventure of the Crooked Man


Removing a cervical collar in the early aftermath of a traumatic injury is becoming an increasingly difficult task. With ever more sensitive imaging modalities we have progressively devalued the traditional methods used to evaluate the integrity of the spinal column in favor of more technologically advanced ones. Despite decades of success in treating this pathology, and clear evidence that clinically relevant spinal injuries present with obvious clinical signs, we have let anecdotal evidence get the best of us. With this in mind, we now turn to the enigma that is the neurologically intact patient with persistent midline tenderness and no evidence of pathology on cervical CT.

As we concluded in our previous post, in the neurologically intact patient with persistent midline tenderness, MRI identifies far more injuries than CT. In a cohort of 178 prospectively gathered patients with isolated persistent midline tenderness and a negative CT, Ackland et al reported 78 (44%) with injuries identified on MRI (1). Although the majority required no intervention, 33 (18.5%) required use of a collar and 5 (2.8%) required surgical management. These findings, taken at face value, are concerning to say the least, and do not fit with our clinical experience. In fact there is reasonable evidence demonstrating that this increased signal found on MRI is merely the noise of an overly sensitive test applied to an extraordinarily low-risk population. MRI is prone to overcalling pathology. Even a surprising number of asymptomatic healthy controls, with no history of acute trauma, will have radiologically significant pathology found on MRI (2). Furthermore, when findings on MRI are compared to the injuries identified during surgical exploration, MRI demonstrates a propensity for identifying lesions where none exist (specificities ranging from 59.0% to 80.5%) (3,4). Given this, it no longer seems appropriate to consider MRI the gold standard for defining disease in acute spinal trauma. Rather, we should examine clinical follow-up and functional patient-oriented outcomes. Simply put, what would happen to these patients if we just left well enough alone?

A recent article published in JAMA Surgery by Resnick et al attempted to examine this very question (5). The authors investigated the utility of MRI in patients with persistent midline tenderness or sensory deficits and normal CT findings, only instead of using MRI as the gold standard they used the patients' discharge diagnosis. In this prospectively gathered cohort the authors included all patients with a GCS of 15 who were not intoxicated and had no distracting injuries. Of the 830 patients included in this trial, 164 (19.8%) had cervical spine injuries. 23 (2.8%) of these were deemed clinically significant, all of which were identified on the initial CT scan. Only 15 (2.2%) of the patients had injuries identified exclusively on MRI, none of which were deemed clinically relevant.

Unfortunately, due to the pragmatic nature of this trial, not all patients received an MRI; the decision was left up to each individual treating physician. Ultimately 100 of the 830 patients received an MRI during their hospital stay. The most common reason an MRI was ordered was equivocal findings on CT, followed by persistent midline tenderness or sensory deficits concerning enough for the treating physician to require further investigation. Similar to the Ackland study, 46% of the patients who underwent MRI were found to have additional findings that were not seen on CT. The majority of these were ligamentous and soft tissue injuries, and none altered clinical management. As in the Ackland study, MRI identified far more pathology, very little of which was clinically relevant.

Compared to MRI, CT was 90.9% sensitive (95% CI 85.3-94.8%) for identifying cervical spine injury. When discharge diagnosis was used as the gold standard, Resnick et al assert 100% sensitivity and specificity for diagnosing clinically important cervical spine injuries. Unfortunately, long-term follow-up to test the validity of these findings was not performed. Nor was there a sufficient number of patients with serious cervical injuries in this cohort to claim 100% sensitivity with any certainty (the lower confidence bound falls as low as 85.1%). A total of 5 patients were discharged home wearing C-collars for comfort; the rest of the patients had their collars removed before discharge. In the patients with negative CTs and persistent midline tenderness, removing the collar prior to discharge did not result in catastrophic injury, and there were no reports of patients readmitted to these medical centers with obvious cervical spinal injuries. We are unable to determine how these patients did in the short term after discharge. There may, though unlikely, have been a catastrophic injury missed that presented to a different hospital. It is also unclear if any minor injuries that may have benefited from earlier intervention went undetected, though this latter scenario is even less likely, as C-collar use for comfort has for the most part been debunked as a useful therapy (6).
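That wide interval around "100% sensitive" falls directly out of the small number of events. A sketch of the arithmetic, assuming the exact (Clopper-Pearson) interval method (other interval methods give slightly different bounds, which may explain the small discrepancy with the quoted 85.1%):

```python
def exact_lower_bound(successes: int, n: int, alpha: float = 0.05) -> float:
    """95% exact (Clopper-Pearson) lower confidence bound for a proportion.

    When every case is detected (successes == n), the bound reduces to the
    closed form (alpha / 2) ** (1 / n); the general case requires the
    inverse beta distribution and is not sketched here.
    """
    if successes != n:
        raise NotImplementedError("only the all-successes case is sketched")
    return (alpha / 2) ** (1 / n)

# 23 of 23 clinically important injuries detected reads as "100% sensitive",
# yet the exact lower bound of the confidence interval is only about 85%.
lower = exact_lower_bound(23, 23)
```

With more events the bound tightens quickly, which is exactly why a cohort with only 23 serious injuries cannot support a claim of perfect sensitivity.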

How we use this information is still not entirely clear. All midline tenderness is not created equal, and a certain degree of clinical judgment should be applied when evaluating these patients. Maybe patients with persistent tenderness who are unable to actively range their necks through 45 degrees of rotation (a retrospective application of the Canadian C-Spine Rule) are more concerning. Maybe those with bilateral paresthesias are the ones who merit further investigation. Maybe, as the Ackland study demonstrated, patients with severe cervical spondylosis on CT scan cannot be cleared by this modality. Performing MRIs on the majority of these patients will lead to a significant increase in pathological diagnoses. Most of these will be of little clinical significance, and the few true positives are likely to reveal themselves clinically during the patient's stay in the Emergency Department. If we insist on imaging all patients with persistent pain or tenderness, we risk exposing a group of patients, the large majority of whom are without true clinical disease, to potentially harmful interventions. Some will be asked to follow up with spine surgeons for further downstream testing. Some will be given a hard collar for 10 weeks and exposed to all the associated morbidity. Others will be exposed to surgical procedures that may very well not be clinically required. All will be turned into patients, given a label, diagnosed with a disease whose major determinants of long-term prognosis are the patient's mental well-being and financial security (7).

We live in a world of ever-advancing medical technology, a world where the boundaries between states of disease and health are becoming increasingly less defined. It is easy to demonize non-specific laboratory investigations like D-Dimer or procalcitonin for their intellectual dishonesty. Likewise, the CT scan is an equally natural scapegoat because of its accessibility and the obvious concerns of radiation. Although each of these culprits is responsible in its own way for the crisis we currently face, the real perpetrator of overdiagnosis is information and the ambiguity it hurls at us. We have clearly demonstrated that modern medicine in its current form is incapable of standing idle; our desire to act far overwhelms our powers of reason. Though the current data cannot definitively negate the utility of MRI in the neurologically intact patient with persistent midline tenderness, we can say its indications are few and far between. Used empirically, it will surely lead to far more harm than good.

Sources cited:

  1. Ackland HM, Cameron PA, Varma DK, et al. Cervical spine magnetic resonance imaging in alert, neurologically intact trauma patients with persistent midline tenderness and negative computed tomography results. Ann Emerg Med. 2011;58:521-30.
  2. Anderson S, et al. Are there cervical spine findings at MR imaging that are specific to acute symptomatic whiplash injury? A prospective controlled study with four experienced blinded readers. Radiology. 2012;262(2):567-75.
  3. Rihn JA, et al. Using Magnetic Resonance Imaging to Accurately Assess Injury to the Posterior Ligamentous Complex of the Spine: A Prospective Comparison of the Surgeon and Radiologist. J Neurosurg Spine. 2010;12:391-396.
  4. Rihn JA, et al. Assessment of the Posterior Ligamentous Complex Following Acute Cervical Trauma. J Bone Joint Surg Am. 2010;92(3):583-9.
  5. Resnick S, et al. Clinical Relevance of Magnetic Resonance Imaging in Cervical Spine Clearance: A Prospective Study. JAMA Surg. Published online July 30, 2014.
  6. Verhagen AP, et al. Conservative treatments for whiplash. Cochrane Database Syst Rev. 2007;(2).
  7. Outcomes at 12 Months After Early Magnetic Resonance Imaging in Acute Trauma Patients With Persistent Midline Cervical Tenderness and Negative Computed Tomography. Spine. 2013;38(13):1068-1081.

“The Adventure of the Red-Headed League”

A peasant traveling home at dusk sees a bright light traveling along ahead of him. Looking closer, he sees that the light is a lantern held by a ‘dusky little figure’, which he follows for several miles. All of a sudden he finds himself standing on the edge of a vast chasm with a roaring torrent of water rushing below him. At that precise moment the lantern-carrier leaps across the gap, lifts the light high over its head, lets out a malicious laugh and blows out the light, leaving the poor peasant a long way from home, standing in pitch darkness at the edge of a precipice.

                                            -Welsh tale describing the Will-o'-the-Wisp


So much of what we do in Emergency Medicine is translating shades of grey into dichotomous patient-oriented decisions. Truth in medicine is a fluid, tenuous state, very rarely encountered in the chaos of the Emergency Department. More often than not we are forced to act in varying states of uncertainty. Naturally we search out specific data points in this fog of ambiguity that we believe will provide guidance through the unknown. And yet, some of these beacons are just as likely to lead us astray as they are to provide safe passage.

One such variable is a history of loss of consciousness (LOC) in a patient suffering from minor head trauma. Despite a multitude of contradictory data, LOC has persisted in the mind of the practitioner (oftentimes in isolation) as a relevant branch-point in deciding who does and does not require further downstream investigations (2). The most recent excavation of the PECARN dataset, published in JAMA Pediatrics, should serve to remind us that just because a variable is found to have a statistical association with the endpoint in question does not necessarily mean it is a useful factor to guide clinical decision-making (2).

In this latest dive into the PECARN dataset, Lee et al set out to examine how influential LOC was in predicting clinically important traumatic brain injury (ciTBI). In the original derivation and validation cohort, by Kuppermann et al, LOC was identified as one of the six variables with a strong enough predictive value to be included in the formal decision rule (1). The original PECARN data set was a mammoth undertaking, which prospectively evaluated 42,412 pediatric patients presenting to the Emergency Department after experiencing a minor head injury. Of this group only 780 patients (1.8%) were found to have any evidence of TBI on CT. Only 376 (0.9%) of these patients had injuries of clinical relevance, of which only 60 patients (0.14%) required any form of neurosurgical intervention. Given this extremely low rate of ciTBI, one could argue that the PECARN authors had already identified a cohort of patients at incredibly low risk for relevant injury and any further risk stratification would be futile. Despite this, the original authors derived and internally validated two age-specific (<2 years old and ≥2 years old) decision rules that boasted negative predictive values of 100% and 99.95% respectively. This remains the most robust clinical decision rule derived to date in the pediatric population, despite lacking sufficient external validation, incomplete follow-up (1/5 of the 64.7% of patients who did not undergo definitive testing were lost to follow-up), and the fact that the rule was outperformed by physicians' unstructured judgment (1).

Lee et al sought to improve, at least conceptually, on the diagnostic characteristics of the PECARN decision rules by addressing the added value isolated LOC provides in identifying patients with ciTBI. The authors defined isolated LOC in two specific fashions. In one, termed PECARN-isolated LOC, they identified patients who experienced LOC without any of the other factors that make up the PECARN decision rules. The second utilized an expanded definition of isolated LOC, which included predictors from other commonly used decision rules for head injury (NEXUS II, the New Orleans Criteria, and the Canadian CT Head Rule). It is important to note that the expanded definition of LOC did not include mechanism of injury as a relevant predictor of ciTBI (2).

Of the 42,412 patients, 6,286 (15.4%) were found to have suspected or confirmed LOC. An interesting side note: of the 6,286 patients with LOC, 5,010 had a head CT performed, and in the majority the treating physician recorded the history of LOC as the primary reason for the scan (demonstrating that even in this cohort LOC was considered a clinically important factor for predicting injury). Of the patients with a history of LOC, PECARN-isolated LOC was present in 2,780 (47.5%). In this subgroup, the incidence of TBI on CT was 1.9% and the incidence of ciTBI was 0.5%. Unfortunately the expanded definition of isolated LOC was far less useful, as only 576 (9.4%) of patients with LOC met its criteria, most likely due to the inclusion of “any traumatic scalp findings” as a relevant predictor. Of those that did meet these stringent standards, only 0.9% were found to have TBI on CT and 0.2% had a clinically relevant injury. In the PECARN cohort, if LOC were used independently as a decision point for head CT, the sensitivity and specificity for identifying ciTBI would be 49.5% and 85.4% respectively. Clearly not the beacon of light we presume.

What is important to remember is that a statistically significant odds ratio found using a multivariable regression model does not directly translate into a clinically useful predictor. Multivariable regression, in all its forms, is a statistical attempt to isolate one variable's ability to predict the outcome in question. Essentially it is the graphical illustration (the slope of the line indicating the strength of the association) of how one variable affects another while a mathematical attempt is made to control for other factors (3). Despite its statistical authority, finding an independent association between a variable and the outcome in question is not the same as studying a group of patients otherwise well except for the variable in question (LOC, for example). Moreover, the odds ratio typically reported as the result of a multivariable regression model does not intuitively convey the clinical relevance of this correlation (3).
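The distinction can be made concrete with a simple 2×2 table. Below, hypothetical counts chosen to roughly reproduce the reported ~49.5% sensitivity and ~85.4% specificity show how a predictor can carry a convincingly elevated odds ratio while still missing half the true cases:

```python
def two_by_two_stats(tp: int, fp: int, fn: int, tn: int):
    """Odds ratio, sensitivity, and specificity from a 2x2 table.

    Rows: predictor present/absent; columns: outcome present/absent.
    """
    odds_ratio = (tp * tn) / (fp * fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return odds_ratio, sensitivity, specificity

# Hypothetical counts in the spirit of the LOC data: the association is
# real (odds ratio well above 1), yet the predictor misses half the cases.
or_, sens, spec = two_by_two_stats(tp=50, fp=6000, fn=51, tn=36000)
```

An odds ratio near 6 looks impressive on a regression table; a sensitivity near 50% is useless at the bedside. Both describe the same data.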

The utility of isolated LOC for predicting clinically important TBI seems to have undergone this very mathematical augmentation. Although LOC has consistently demonstrated a statistically independent association with ciTBI, when applied clinically in patients with isolated LOC its predictive value is minimal. In the derivation cohort of the Canadian CT Head Rule, Stiell et al found LOC was independently associated with ciTBI (4). However, when used clinically they found only 0.4% of patients with LOC had a clinically relevant injury requiring intervention, and most of these could be identified simply by assessing the patient's mental status in the ED (5). In the NEXUS II cohort, LOC was identified as a predictor of ciTBI but failed to maintain clinical relevance when assessed using a multivariable model (6). Additionally, if LOC had been used to decide which patients in this cohort would receive further imaging, it would have yielded a sensitivity and specificity of 48% and 63% respectively (6). In the original PECARN cohort, the predictors that identified the bulk of the patients with ciTBI were altered mental status (AMS) and clinically obvious signs of skull fracture. If patients did not present altered or with obvious signs of skull fracture, their risk of ciTBI was incredibly low (0.9% in the under-2 group and 0.8% in the over-2 group). The remainder of the predictors found in the PECARN decision rules, including LOC, did very little to further risk stratify patients (1).

What this can be reduced down to is our fear of the clinically occult head bleed, based on the idea that the skull is a lead box, blocking the transmission of the potential chaos within from our external eye until it is too late to intervene. This fear is driven by anecdote, passed down from attending to resident in some form of modern-day oral history. Clearly these stories are not supported by the literature; the reality is that cases of clinically occult intracranial bleeding are rare and often identifiable by high-risk features (elderly, anticoagulant use, etc). A history of LOC in an otherwise well-appearing patient provides us with little guidance in identifying these rare cases. Moreover, the lack of LOC does not safely eliminate the risk of significant injury. Oftentimes its absence will give us a false sense of security and, like the solitary peasant, lead us far from home, standing in pitch darkness on the edge of a cavernous precipice…


Sources Cited:

  1. Kuppermann  N, Holmes  JF, Dayan  PS,  et al; Pediatric Emergency Care Applied Research Network (PECARN).  Identification of children at very low risk of clinically-important brain injuries after head trauma: a prospective cohort study. Lancet. 2009;374(9696):1160-1170.
  2. Lee LK, Monroe D, Bachman MC, et al. Isolated Loss of Consciousness in Children With Minor Blunt Head Trauma. JAMA Pediatr. Published online July 07, 2014. doi:10.1001/jamapediatrics.2014.361.
  3. Barrett TW, et al. Is the Golden Hour Tarnished? Registries and Multivariable Regression. Ann Emerg Med. 2010;56(2):188-200.
  4. Stiell IG, Wells GA, Vandemheen K, et al. The Canadian CT Head Rule for patients with minor head injury. Lancet. 2001;357:1391-1396.
  5. Stiell IG, Clement CM, Rowe BH, et al. Comparison of the Canadian CT Head Rule and the New Orleans Criteria in Patients With Minor Head Injury. JAMA. 2005;294(12):1511-1518. doi:10.1001/jama.294.12.1511
  6. Mower WR, Hoffman JR, Herbert M, et al, Developing a Decision Instrument to Guide Computed Tomographic Imaging of Blunt Head Injury Patients. J Trauma. 2005 Oct;59(4):954-9. (Nexus II)



“The Adventure of the Golden Standard”


We have all been told ghost stories and fairy tales, campfire fables intended to frighten the gullible populace into behaving in a manner deemed appropriate. Even in Emergency Medicine we have our fair share of ghost stories. Most notably, we are taught from an early age to fear and respect the clinically occult pulmonary embolism: a disease process so cryptic in nature it can go undetected throughout a patient's Emergency Department stay, yet deadly enough to strike a patient down shortly after discharge. Though such a monster exists, at least anecdotally, it certainly does not strike with the alacrity these tales would have you believe. As with any evil spirit that cannot be detected through normal measures, we have developed our own set of wards and charms in the hopes of keeping this demon at bay. One of our more frequently (over)used charms of this type is the serum D-Dimer. Armed with its protection we go to work every day, ready to battle the mythical beast that is the clinically occult pulmonary embolism.

A recent publication in JAMA by Righini et al sought to expand D-Dimer's role in the eradication of venous thromboembolism (VTE) (1). In order to address the poor specificity of D-Dimer, experts have suggested increasing the threshold at which the assay is considered positive. Some have recommended doubling the threshold traditionally considered normal, while others propose an age adjustment to account for the natural increase in serum levels with aging. Most of the data examining these strategies is retrospective in nature (2), and until this recent JAMA paper we had no prospective literature validating their efficacy. Righini et al examined the age-adjusted strategy, using a level of 10 multiplied by the patient's age (in years) as their threshold for a positive D-Dimer. Patients whose D-Dimer level was below their age-adjusted threshold had no further testing performed, while those above this threshold went on to more definitive testing. Using a gold standard of PEs diagnosed by CT pulmonary angiography (CTPA), V/Q scan, or 3-month follow-up, the authors examined the age-adjusted approach. The authors claim a missed VTE rate at 3-month follow-up of 0.3%. Additionally, employing this age-adjusted threshold in low-risk patients over 75 years of age increased the specificity of the assay from 6.9% to 29.7%. Seemingly a landmark trial, this publication should reduce testing and allow D-Dimer to be more clinically applicable in an older population. Unfortunately this paper's success may just as likely be due to a low-risk cohort, an imperfect gold standard, and a limited definition of clinically positive events during follow-up.
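In code, the age-adjusted rule amounts to a one-line change of threshold. A sketch (the 500 conventional cutoff and the restriction of the adjustment to patients over 50 reflect my reading of the published protocol; units are assay-dependent, so treat the numbers as illustrative):

```python
def d_dimer_threshold(age_years: int, conventional: float = 500.0) -> float:
    """Age-adjusted D-Dimer cutoff in the style of Righini et al.

    Patients over 50 use age x 10 (never below the conventional cutoff);
    younger patients keep the fixed conventional cutoff. The 500 figure
    is assay-dependent and assumed here for illustration.
    """
    if age_years > 50:
        return max(conventional, age_years * 10.0)
    return conventional
```

So an 80-year-old's cutoff rises from 500 to 800, which is precisely where the gain in specificity, and the risk of newly missed events, comes from.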

Though D-Dimer has experienced some degree of success in the recent literature, it has not always garnered such favor. In fact, D-Dimer never achieved the diagnostic accuracy necessary for universal clinical use. Even the most sensitive assays were found to be incapable of safely ruling out pulmonary embolism in an undifferentiated cohort of patients suspected of having a PE (3,4,5,15). In one of the few trials that randomized hospital wards to encourage use of D-Dimer, compared to control wards where D-Dimer testing was discouraged, the authors found that widespread utilization did the exact opposite of what was intended. Not only did the evaluation of PE nearly double in the experimental arm compared to the control, but the number of V/Q scans also increased. Even more surprising, while the experimental arm diagnosed and treated significantly more patients for PE (160 vs 94), there was no difference in 3-month mortality or recurrent VTE (6). And yet despite these obvious flaws we could not let go. The physiological reasoning and clinical convenience of such a test were too attractive to abandon this assay as a failure. Instead we adapted our patients to fit the test. With a few minor adjustments of incidence, a small modification in the gold standard, and a certain amount of looking the other direction when it came to clinical follow-up, the D-Dimer was transformed into a highly sensitive assay capable of ruling out PE and reducing invasive testing.

We know from early studies of D-Dimer assays that its sensitivity is only sufficient to rule out PE in cohorts in which the pre-test probability is around 10-15% (3,15). Traditionally this was accomplished by using a low-risk Wells score of 2 or less. This strategy was first validated in a study by Wells et al published in Annals of Internal Medicine in 2001 (5), in which the authors hypothesized that using a low-risk Wells score of 0-2 in conjunction with a D-Dimer assay would reduce further downstream testing. The overall incidence of pulmonary embolism in this cohort was 9.5%. As expected, D-Dimer performed admirably in such a low-risk cohort. The overall negative predictive value was 97.3%, which was powered primarily by the scarcity of disease in the low-risk group (1.3%). In fact, when the test was used in the moderate- and high-risk groups its negative predictive value fell to 93.9% and 88.5% respectively. The overall sensitivity of the D-Dimer in the entire cohort was only 78.5%. Such statistical machinations are relevant because the success of D-Dimer in the modern literature is driven largely by the utilization of negative predictive value in combination with low-risk cohorts to overestimate D-Dimer's diagnostic capabilities. This acceptance of the negative predictive value as the endpoint of significance has tainted the literature examining D-Dimer's effectiveness. Though Wells et al were forthright in reporting the true test characteristics of D-Dimer, later studies have not been so transparent. Most notable was the validation cohort published by the Christopher group in 2006 (7). In this cohort the authors set out to demonstrate that patients with Wells scores up to 4 could be safely excluded using a D-Dimer. Similar to the Wells et al cohort, these authors used the 3-month VTE event rate in patients discharged with a negative D-Dimer. Unlike the Wells cohort, patients with a Wells score less than 4 and a negative D-Dimer had no further testing.
The authors claimed success, emphasizing that the 3-month event rate in the negative D-Dimer group was only 0.5%. Again, this negative predictive value is powered by the low incidence of disease in the cohort (12.1%). The actual sensitivity in this subgroup was 95%. This pattern is consistent throughout the PE literature. The incidence of pulmonary embolism in prospective cohorts has been progressively decreasing over the past few decades. In the original PIOPED cohort, published by Stein et al in 1990, the high-risk, intermediate-risk and low-risk groups had rule-in rates of 68%, 30% and 9% respectively (8). In contrast, the PERC validation cohort, published in 2008 by Kline et al, had rule-in rates of 31.1%, 10.4%, and 3% respectively (9). Obviously this decrease in incidence is due to our dwindling risk tolerance and the subsequent inclusion of a far lower-risk patient population in the diagnostic pathway. This dilution of the disease state and the focus on negative predictive value as the metric of choice provide a false impression of D-Dimer's capabilities. The test appears to be safe for use in moderate-risk patients when in reality very few moderate-risk patients have been included in these cohorts.
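The arithmetic behind this criticism is worth making explicit: negative predictive value is a function of prevalence as much as of the assay itself. A minimal sketch, using the 78.5% overall sensitivity reported by Wells et al; the 45% specificity is an assumed placeholder for illustration, not a figure from the trial:

```python
# How prevalence, not assay quality, drives negative predictive value.
# Sensitivity is the overall figure from the Wells et al cohort;
# specificity is assumed for illustration only.

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value via Bayes' theorem."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

SENS = 0.785  # reported overall sensitivity
SPEC = 0.45   # assumed specificity (placeholder)

# Low-risk subgroup incidence (1.3%) vs a hypothetical high-risk one (30%)
print(f"NPV at 1.3% prevalence: {npv(SENS, SPEC, 0.013):.1%}")
print(f"NPV at 30% prevalence:  {npv(SENS, SPEC, 0.30):.1%}")
```

With identical test characteristics, the NPV swings from roughly 99% down to roughly 83% purely on the back of prevalence, which is exactly why a low-incidence cohort flatters the assay.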

The second major flaw in the modern D-Dimer literature is the gold standard used to define these thromboembolic events. Most notable for our current discussion is the use of CTPA as the gold standard for diagnosing PE and the discrete, yet real, increase in overdiagnosis that has resulted from its adoption. The reclassification of clinically insignificant clot burden as a pathological state not only leads to overtreatment, transforming healthy people into patients, but also makes it incredibly difficult to assess the effectiveness of any diagnostic pathway. To understand the repercussions the adoption of CTPA as the accepted gold standard has had on clinical research, we must first address its limitations. In PIOPED II, the largest trial examining the diagnostic characteristics of CTPA, published in the NEJM in 2006, Stein et al found that in patients at low risk of pulmonary embolism by clinical assessment, CTPA diagnosed far more PEs than the composite reference standard (a normal DSA or V/Q scan, a low-probability V/Q with a Wells score <2, or a negative lower-extremity ultrasound). In fact, in patients with a Wells score <2, 42% of the PEs diagnosed by CTPA were false positives, a significant increase over what would have been considered a pulmonary embolism by the standard diagnostic criteria of the day. Conversely, in high-risk patients CTPA was not sensitive enough to safely rule out PE. In patients with a Wells score >6, 40% of the negative CTPAs were false negatives (10).

Despite these significant flaws, CTPA has now become the gold standard against which the D-Dimer is judged. It is a standard prone to overdiagnosing low-risk patients with clinically irrelevant emboli and underdiagnosing high-risk patients with clinically relevant ones. Not only is this a poor standard to guide clinical judgment, when used as the gold standard comparator it leads to an overestimation of D-Dimer's utility. Early examinations of the accuracy of various D-Dimer assays found at best a moderate ability to rule out PE. When pre-CTPA gold standards were used (DSA, V/Q scan and serial ultrasound), a negative D-Dimer was not sufficient to rule out disease in a high-risk patient in whom PE was suspected (5). In such cohorts, only in patients with a Wells score <2 could a D-Dimer be used to rule out PE. And so a portion of the PEs in moderate-risk patients that D-Dimer misses are in turn missed by the CTPA as well, and are therefore never counted as false negatives. This overestimates the sensitivity of the D-Dimer assay. Similarly, CTPA, like D-Dimer, tends to overdiagnose pulmonary embolism in the low-risk patient, which helps mask the true extent of D-Dimer's poor specificity. Overall, CTPA is a gold standard designed to present an overly optimistic view of the D-Dimer assay.
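The way an imperfect reference standard inflates measured sensitivity can be sketched numerically. All counts below are hypothetical, chosen only to illustrate the mechanism of correlated misses:

```python
# Hypothetical counts illustrating verification bias from an imperfect
# gold standard whose misses overlap with the index test's misses.
true_pe = 100        # patients who truly have a PE
ddimer_misses = 15   # of those, false negatives on D-Dimer
ctpa_misses = 12     # of those, missed by the CTPA "gold standard"
shared_misses = 10   # D-Dimer misses that CTPA also fails to detect

true_sensitivity = (true_pe - ddimer_misses) / true_pe

# Measured sensitivity counts only CTPA-positive patients as diseased,
# so any D-Dimer miss hidden among the CTPA misses simply disappears.
ctpa_positive = true_pe - ctpa_misses
visible_ddimer_misses = ddimer_misses - shared_misses
measured_sensitivity = (ctpa_positive - visible_ddimer_misses) / ctpa_positive

print(f"true sensitivity:     {true_sensitivity:.1%}")      # 85.0%
print(f"measured sensitivity: {measured_sensitivity:.1%}")  # 94.3%
```

A test that truly misses 15 of 100 emboli appears to miss only 5 of 88 when judged against a reference standard that shares 10 of those misses.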

The Righini et al trial committed all of the aforementioned errors in its examination of age-adjusted D-Dimer thresholds. Though the overall incidence of PE was high by modern standards (18.7%), the authors did not state the incidence of PE in the subgroup in which D-Dimer was used to rule out disease, so it is hard to determine how the acuity of the cohort affected the negative predictive value. The only criteria available for judging the acuity of each subgroup are the proportions of patients stratified to each risk group. In the Righini study only 12.8% of patients had a Wells score greater than 4 (1). In contrast, 33.2% of patients in the Christopher cohort had a Wells score greater than 4 (7). The mortality in the high-risk group following a negative CTPA at 3-month follow-up was 1.2% in the Righini cohort compared to 8.6% in the Christopher study. This suggests the Righini cohort comprised a far healthier patient population than the Christopher trial. Following in the Christopher trialists' footsteps, the authors used positive findings on CTPA, or any event or death during the 3-month follow-up period deemed due to VTE (as determined by three independent experts blinded to the patient's initial diagnostic workup), as their surrogate gold standard. Though the authors claim that only one event was missed at 3-month follow-up in the patients discharged from the ED using the age-adjusted threshold, further examination reveals that seven deaths and seven suspected VTEs in fact occurred in this group, only one of which was deemed to be VTE-related by the expert panel. Though none of the seven deaths were judged to be related to pulmonary emboli, a number were caused by COPD and end-stage cancer, both of which are easily confused with pulmonary embolism and commonly entered as the default diagnosis on death records (13).
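For context, the age-adjusted rule tested in ADJUST-PE is itself simple: in patients older than 50, the cutoff becomes age × 10 µg/L (FEU) rather than the conventional fixed 500 µg/L. A minimal sketch of that rule:

```python
def d_dimer_cutoff(age: int) -> int:
    """Age-adjusted D-Dimer threshold (ADJUST-PE), in ug/L FEU:
    age x 10 for patients older than 50, else the fixed 500."""
    return age * 10 if age > 50 else 500

# A 78-year-old with a D-Dimer of 720 ug/L falls below the age-adjusted
# cutoff (780) but above the conventional one (500).
print(d_dimer_cutoff(40), d_dimer_cutoff(78))  # 500 780
```

Every patient reclassified as "negative" by this rule is one whose clinical follow-up, rather than imaging, becomes the de facto reference standard.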

In 1977, Annals of Internal Medicine published an editorial by Dr. Eugene Robin on the then-current state of PE management. Though all the diagnostic tests used to differentiate disease from non-disease have since changed, the flaws in management have persisted (11). Specifically, we continue to obsess over diagnosing clinically unimportant pulmonary embolisms in the young and healthy while simultaneously ignoring the sick, vulnerable patients in whom PE is far more likely and clinically relevant. In August 2013, den Exter et al published an article in Blood supporting Dr. Robin's thesis (12). In this paper the authors examined the factors associated with recurrent pulmonary emboli and mortality in a cohort of 3,728 patients undergoing a workup for PE. The authors found that clot location, clot burden, and even identification of clot on CTPA were not important factors in predicting clinical outcomes at follow-up. In fact, mortality during the follow-up period was 10.3% in those with a subsegmental PE vs 6.3% in those with a proximal PE vs 5.2% in those with a negative CTPA. The only factors that demonstrated clinically significant predictive value were history of malignancy, age, and history of heart failure. Simply put, elderly patients with comorbidities are at increased risk for clinically relevant pulmonary emboli. Similarly, the Christopher study reported that patients discharged after a negative CTPA had a mortality rate of 8.6%. No amount of testing can significantly modify this risk. Even those who do not have an embolic event diagnosed during their Emergency Department visit are at significant risk of experiencing one over the next 3 months. Clot burden, clot location, and even the presence of a clot on imaging did not predict clinical outcomes; patient variables did.

The D-Dimer assay is one of many flawed tests in a flawed system built to identify pulmonary emboli in the young and healthy, in whom the diagnosis is rarely of clinical importance. Like the PERC rule, and even to some extent the CTPA, D-Dimer performs best in this young, healthy cohort at low risk of clinical disease. Conversely, in the sick and vulnerable high-risk patients it is rarely negative, and even when it is, it does not possess the diagnostic qualifications to safely rule out the disease of concern. In fact, the only patient in whom D-Dimer can be consistently utilized is the young patient at low risk of pulmonary embolism. We are left with a test capable of ruling out pulmonary embolisms of little clinical significance and incapable of ruling out the disease in the patients about whom we should be truly concerned. Clearly, despite its best intentions, D-Dimer adds very little to the diagnostic pathway for PE. Playing with thresholds on the ROC curve does nothing to improve its test characteristics. Its success depends on its ability to ward off a fictitious disease in a healthy population that will likely do well no matter what. It is a test best suited to treat our own fears rather than our patients' maladies. Surely there is a better way to identify those who require workups for PE. Exactly what that consists of is still unclear, but certainly ghost stories, campfire tales, and even D-Dimer assays will provide no assistance.


Sources Cited:

1. Righini M et al. Age-Adjusted D-Dimer Cutoff Levels to Rule Out Pulmonary Embolism: The ADJUST-PE Study. JAMA. 2014;311(11):1117-1124.

2. Schouten HJ et al. Diagnostic accuracy of conventional or age adjusted D-dimer cut-off values in older patients with suspected venous thromboembolism: systematic review and meta-analysis. BMJ 2013;346:f2492

3. Ginsberg JS et al. Sensitivity and Specificity of a Rapid Whole-Blood Assay for D-Dimer in the Diagnosis for Pulmonary Embolism. Annals of Internal Medicine. 1998; 129(12): 1006-1011

4. Stein, PD et al. D-Dimer for the Exclusion of Acute Venous Thrombosis and Pulmonary Embolism. Annals of Internal Medicine. 2004; 140(8) 589-607

5. Wells PS et al. Excluding Pulmonary Embolism at the Bedside without Diagnostic Imaging: Management of Patient with Suspected Pulmonary Embolism Presenting to the Emergency Department by Using a Simple Clinical Model and D-Dimer. Annals of Internal Medicine. 2001; 135(2): 98-107

6. Goldstein NM et al. The Impact of the Introduction of a Rapid D-Dimer Assay on the Diagnostic Evaluation of Suspected Pulmonary Embolism. Arch Intern Med. 2001;161(4):567-571.

7. Writing Group for the Christopher Study Investigators. Effectiveness of Managing Suspected Pulmonary Embolism Using an Algorithm Combining Clinical Probability, D-Dimer Testing, and Computed Tomography. JAMA. 2006;295(2):172-179.

8. The PIOPED Investigators. Value of the Ventilation/Perfusion Scan in Acute Pulmonary Embolism: Results of the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED). JAMA. 1990;263(20):2753-2759.

9. Kline JA et al. Prospective multicenter evaluation of the pulmonary embolism rule-out criteria. Journal of Thrombosis and Haemostasis. 2008;6(5):772-780.

10. Stein PD et al. Multidetector Computed Tomography for Acute Pulmonary Embolism. N Engl J Med 2006; 354:2317-2327.

11. Robin ED. Overdiagnosis and Overtreatment of Pulmonary Embolism: The Emperor May Have No Clothes. Ann Intern Med. 1977;87:775-781.

12. den Exter PL et al. Risk profile and clinical outcome of symptomatic subsegmental acute pulmonary embolism. Blood. 2013;122(7):1144-1149.

13. Wexelman BA et al. Survey of New York City Resident Physicians on Cause-of-Death Reporting, 2010. Prev Chronic Dis. 2013;10:E76.

14. Sohne M et al. Accuracy of clinical decision rule, D-dimer and spiral computed tomography in patients with malignancy, previous venous thromboembolism, COPD or heart failure and in older patients with suspected pulmonary embolism. J Thromb Haemost 2006; 4: 1042–6.

15. Gibson NS et al. The Importance Of Clinical Probability Assessment In Interpreting A Normal D-Dimer In Patients With Suspected Pulmonary Embolism. Chest. 2008;134(4):789-793.

16. Righini M et al. Effects of age on the performance of common diagnostic tests for pulmonary embolism. Am J Med. 2000;109(5):357-361.




“The Adventure of the Dancing Men”



The illustrious Cardinal Commendoni suffered sixty epileptic paroxysms in the space of 24 hours, under which nature being debilitated and oppress’d he at leangth sank, and died. His skull being immediately taken off, I found his brain affected with a disorder of the hydrocephalous kind.         -Gavassetti, 1586




Status Epilepticus (SE) is a state that evokes an almost visceral sense of urgency. The physical manifestations of a mind in crisis are, if nothing else, strong motivators to action. We are trained to act with decisiveness and certainty, yet due to a paucity of high-quality trials, an ever-changing definition, and the use of surrogate endpoints in place of true evidence of benefit, our understanding of the management of status epilepticus has been severely constrained.

In a recent article published in JAMA, Chamberlain et al examined the efficacy of diazepam vs lorazepam in the treatment of status epilepticus in a pediatric population (1). The authors randomized 273 children, ranging from 3 months to 18 years old, experiencing an episode of status (defined as 5 minutes or longer of seizure activity or multiple seizures without a return to baseline) to receive either 0.2 mg/kg of diazepam or 0.1 mg/kg of lorazepam IV. Though the authors found no significant difference in their primary or secondary endpoints (seizure cessation within 10 minutes, rate of recurrence and time-to-seizure-cessation), certain limitations make it difficult to interpret the utility of this publication.

The authors enrolled patients with at least 5 minutes of seizure activity who had not received any anti-epileptic drugs (AEDs) en route to the hospital. These exclusion criteria obviously affected enrollment: over a 4-year period, of the 11,630 patients assessed for eligibility, only 273 were enrolled in the trial. 4,357 were excluded for no longer seizing upon arrival to the ED and 6,729 for other factors, presumably with a large proportion receiving AED treatment before arriving at the hospital. Obviously this undermines the trial's external validity, as the child seizing on presentation to the emergency department who has received AED treatment en route is a different, and far more commonly encountered, patient than one who has yet to receive any intervention. Thus the spectrum of disease encountered in this cohort is far less severe than in previous trials examining SE.

In fact this is only the latest of many changes in enrollment criteria for trials examining treatments for SE. The definition of status itself is continually in flux. In 1993 the American Epilepsy Society Working Group on Status Epilepticus defined status as “a seizure lasting 30 minutes or the occurrence of two or more seizures without recovery of consciousness in between” (2). Since this statement the temporal requirement for SE has become progressively more lax. The Working Group subsequently lowered the time requirement to 20 minutes. In 1998 the Veterans Affairs Status Epilepticus Cooperative Study Group (VASEC) published a study comparing various treatment options for SE (3). Their enrollment criteria defined SE as 10 minutes of continuous seizure activity or multiple seizures without a return to baseline in between. That same year Lowenstein et al published an article in the NEJM reviewing the etiology of SE and recommended the definition of status be changed to continuous seizures lasting 5 minutes or more (4). In 2001 the San Francisco Emergency Medical Services published the Pre-Hospital Treatment of Status Epilepticus (PHTSE) Trial, comparing the pre-hospital efficacy of diazepam, lorazepam and placebo. The authors adopted Dr. Lowenstein's suggestion, enrolling patients with seizure activity lasting more than 5 minutes (5). Since then, the majority of publications examining SE have used this 5-minute definition. Though Dr. Lowenstein's argument, that most seizures lasting more than 5 minutes require treatment, is a valid one, placing these patients in the same category as those with continuous seizures lasting greater than 30 minutes seems misguided. In fact, 30-day mortality fell from 37% in the Veterans cohort to 9.2% in the PHTSE cohort (3,5). Clearly the acuity of the patients included in these respective cohorts is significantly different.

The second limitation seen both in the Chamberlain trial and throughout the recent SE literature is the belief that time-to-seizure-cessation is a clinically relevant endpoint. Though there is relatively robust data describing the association between seizure length and poor outcomes (3,4), the converse claim, that chemically shortening seizure length will in turn improve outcomes, is inherently flawed. In the PHTSE Trial, seizures had terminated upon arrival to the hospital in 21% of patients in the placebo group compared to 42.6% and 59.1% in the diazepam and lorazepam groups respectively (5). Despite the obvious efficacy of both medications in shortening time-to-seizure-cessation, there was no statistical difference in mortality or functional neurological outcomes between the active and control groups. Likewise in the VASEC trial, the authors found lorazepam to be more efficacious than the other treatment strategies in stopping seizures in a timely manner. Despite this superiority in time-to-seizure-cessation, no mortality benefit was observed (3). They did find that patients resistant to first- and second-line agents were far more likely to have a malignant cause of their SE. Clearly the underlying disease process that results in refractory status is the cause of the bad outcomes.

The only seemingly clinically relevant endpoint included in the Chamberlain publication was the rate of ventilatory support (defined as the need for bag-valve-mask ventilation or endotracheal intubation) required in each group. This too was statistically equivalent: 16% and 17.6% of patients required some form of ventilatory assistance in the respective groups (1). A similar proportion of patients required ventilatory assistance in the VASEC, PHTSE and RAMPART cohorts (3,5,6). In fact, in the PHTSE cohort the need for intubation did not differ whether patients received lorazepam, diazepam or placebo (5), indicating that it is again the underlying pathology, rather than the medical intervention, that causes the subsequent airway compromise.

Our continued vacillations on the definition of SE have produced a much more benign disease process than the status of our forefathers. The acuity of the patients included in trials examining treatments for SE has been progressively decreasing over the past 15 years. In Chamberlain et al, 33% of the population's seizures were febrile in nature, which often require no further treatment. Compare that to the VASEC cohort, in which 33% had a life-threatening cause of their SE. Given this dilution, identifying benefit for any treatment in a modern-day SE cohort suffers from a significant Pollyanna effect. Additionally, our persistent assumption that time-to-seizure-cessation is a clinically relevant endpoint further obscures our understanding of any true treatment effect our various interventions may provide.

Chamberlain et al demonstrated that with today's broad spectrum of status, the choice of first-line benzodiazepine matters very little. Whether this is because the various medications are equally efficacious, or because in most cases the seizures will resolve no matter what treatment is given, is hard to say without a true placebo group. What is clear is that the underlying cause of the seizures is far more important than the choice of medication. Those who are resistant to first- and second-line treatments are far more likely to trace their lineage to the status of old, and to have a malignant cause that should be pursued.

Sources Cited:

1. Chamberlain et al. Lorazepam vs Diazepam for Pediatric Status Epilepticus: A Randomized Clinical Trial. JAMA. 2014;311(16):1652-1660.

2. Brodie MJ. Status epilepticus in adults. Lancet. 1990 Sep 1;336(8714):551-2.

3. VA Status Epilepticus Cooperative Study Group: A comparison of four treatments for generalized convulsive status epilepticus. N Engl J Med 1998;339: 792–798

4. Lowenstein DH, Alldredge BK.  Status epilepticus.  N Engl J Med. 1998; 338:970-976.

5. Alldredge BK, Gelb AM, Isaacs SM, et al. A comparison of lorazepam, diazepam, and placebo for the treatment of out-of-hospital status epilepticus. N Engl J Med 2001;345:631-7

6. Silbergleit R, Durkalski V, Lowenstein D, Conwit R, Pancioli A, Palesch Y, Barsan W; NETT Investigators. Intramuscular versus Intravenous Therapy for Prehospital Status Epilepticus. N Engl J Med. 2012 Feb 16;366(7):591-600.




“A Timely Reexamination of the Case of the Thirteen Watches”


“Doing the same thing over and over again and expecting different results.”

-Albert Einstein, on insanity

For nearly a decade now our mad dash to the cath lab has been based on flawed data and an illogical certainty that every moment of delay is detrimental to our patients. As such, we were completely flabbergasted when Menees et al published their findings in the NEJM in September 2013 (1). Despite a reduction in mean door-to-balloon time from 82 minutes to 67 minutes, no mortality benefit was demonstrated. After 4 years and just over 95,000 patients, the authors were unable to demonstrate a benefit associated with this dramatic decrease in time to revascularization. These findings should not be as surprising as they initially appear. In fact, there is a multitude of evidence demonstrating that “time is myocardium” is a far more complex phenomenon than door-to-balloon time can account for. Rather than taking a rational, data-driven approach to this pathology, we instead focused on the data that suited our desire to act. The evidence used to support our current STEMI guidelines is primarily based on an observational cohort published in JAMA in 2000 (2). This article by Cannon et al demonstrated a correlation between increased door-to-balloon times and increased mortality. The obvious shortcomings of these types of data sets, and the mountain of evidence demonstrating the far more complex reality of time is myocardium, can be found in a former post. What is important is how we utilized this limited data to serve our purposes and ignored the remainder of the evidence. With our blinders firmly attached, we chose to make door-to-balloon time the metric of choice when assessing quality in STEMI management.

Though the Menees cohort has reminded us that door-to-balloon time is very rarely an important metric, it is unlikely these findings will have any influence on changing our current practice. The momentum we have gained in this sprint toward futility has created a body with an inertial vector that is almost impossible to deflect. What this article should provide is a warning: an example of what happens when a healthcare system mobilizes extraordinary quantities of resources based on flawed surrogate outcomes. Currently we stand at a similar crossroads in yet another field of medicine. We are once again on the precipice of mobilizing these very same resources based on similarly flawed data. This time the question at hand is: is time brain?

In February of 2013 the results of 3 RCTs were published in the NEJM (3,4,5). They constitute the largest and highest quality trials examining the efficacy of endovascular interventions for acute ischemic stroke. All 3 trials were universally negative. Though each trial had its own unique design, none was able to demonstrate even a trend toward benefit when comparing endovascular interventions to IV tPA therapy alone. So much so that the authors of the largest of the trials, IMS-3, state in their conclusion that these therapies should not be utilized outside the purview of a randomized controlled trial. Yet despite these universally negative findings, there has been a great deal of pressure to once again create the infrastructure necessary to deliver eligible patients swiftly to endovascular-capable facilities. After all, every minute counts…

“Time is brain” has been a commonly accepted mantra of stroke management since the earliest inception of reperfusion therapies. And much like the overall efficacy of reperfusion therapy in acute CVA, the data addressing the time-is-brain hypothesis have yielded mixed results. In the arena of thrombolytic therapy, the largest, highest quality data sets have failed to uncover any convincing evidence that time to treatment is an important determinant of neurologic outcomes. The Cochrane Database examined all 26 trials comparing thrombolytics to placebo and found no evidence that time-to-treatment affected outcomes (6). IST-3, the largest trial to date examining thrombolytics in acute ischemic stroke, found no temporal relationship between improved outcome and time-to-treatment (7). Finally, in the original manuscript of the NINDS trial, the rallying cry for tPA apologists worldwide, the authors were unable to demonstrate that patients who received tPA in under 90 minutes fared better than those treated in the 90-180 minute window (8). In fact, when Dr. Jerry Hoffman and Dr. David Schriger reexamined the patient-level data from the NINDS cohort, they too found that time to treatment had no association with 3-month neurological outcomes (9). Moreover, when they accounted for the obvious baseline differences present in the NINDS trial (10) (using change in NIHSS at 3 months), the overall benefit of tPA also disappeared.

Other than a highly selective review of eight cherry-picked trials published in the Lancet in 2010 (11), no analysis of RCT data has demonstrated a temporal benefit to IV tPA therapy in acute ischemic CVA. A number of publications using registry data have attempted to examine the time-is-brain phenomenon. Two such studies were published a month apart in JAMA and JAMA Neurology (12,13). These trials used similar registries, similar methods, and similar statistical analyses, and yet found completely antipodal results. These contradictory findings are less a comment on the truth of the temporal relationship of revascularization than on the limitations of such data and how it will submit to even the smallest amount of statistical coercion. In a brilliantly written letter to the editor, Dr. Ryan Radecki addresses the flaws in the conclusions drawn by Saver et al, authors of the JAMA article that concluded time is in fact brain (14). Dr. Radecki writes:

Dr. Saver and colleagues used the Get With The Guidelines–Stroke (GWTG-Stroke) registry to investigate the association of time to tissue-type plasminogen activator (tPA) treatment and outcomes from stroke. However, the authors did not address the handling of transient ischemic attacks (TIAs) and stroke mimics within the registry, which is a potential confounder in the abstraction method used in the study.

Most recently the authors of IMS-3 published a secondary analysis of their cohort in an effort to demonstrate that, as with IV tPA therapy, time to reperfusion matters for endovascular treatments (15). Using the IMS-3 cohort, the authors examined the association of time to endovascular intervention with 3-month functional neurological outcomes. After retrospectively excluding patients found not to have large-vessel occlusions (of the proximal MCA or ICA terminus), Khatri et al found a statistically significant association between time-to-intervention and improved functional neurological outcomes. Like the Cannon and Saver cohorts, this data set is severely flawed. There is a multitude of reasons why patients may be delayed in receiving endovascular therapy; most obviously, they were sicker and required some form of stabilization before being transported to the intervention suite. In fact, after the authors accounted for some of these confounders using multivariable logistic regression, the overall effect of time to intervention on functional neurological outcomes translated to a coefficient of determination (R²) of 0.18, meaning that 18% of the variation in neurological outcomes at 3 months can be explained by time-to-intervention. The remaining 82% is determined by other factors. Clinically this is a very small effect, especially considering that the regression model did not account for the fact that the subgroup of patients treated in a more timely fashion was far more likely to include patients having a TIA or a stroke mimic, who will universally have a better outcome independent of the intervention they receive. More importantly, this was a negative study which found no difference in 3-month outcomes between patients who received IV thrombolytics plus endovascular treatment and those who received IV thrombolytics alone.
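To make the R² claim concrete: the coefficient of determination is simply the share of outcome variance a model's predictions capture. A toy computation with invented numbers, tuned so the result lands near the reported range:

```python
# Coefficient of determination on invented data: predictions that hug
# the mean explain only a small fraction of outcome variance.

def r_squared(y, y_hat):
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)              # total variance
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # unexplained
    return 1 - ss_res / ss_tot

outcomes = [2.0, 4.0, 6.0, 8.0]     # invented 3-month outcome scores
predictions = [4.7, 4.9, 5.1, 5.3]  # weak model: predictions hug the mean
print(round(r_squared(outcomes, predictions), 2))  # 0.19
```

An R² in this range means over 80% of the variation in outcomes is attributable to factors other than the predictor in question.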

This is observational data demonstrating a small association between time to treatment and improved outcomes. Using this data, the most we can say is that patients who take longer to receive a therapeutic intervention have worse neurological status at 3 months. The corollary statement, that reducing this time to treatment will improve neurological outcomes, cannot be made, and taking into account the totality of the data on reperfusion therapies for acute ischemic stroke, is most likely false.

Data sets like the Cannon and Saver articles should not be mistaken for investigations in search of scientific truths designed to answer clinically relevant questions. Rather, these publications are examples of how easily we can manipulate data to serve our purposes. Like its predecessors, the Khatri et al reanalysis of the IMS-3 cohort appears promising as long as you choose to ignore the control group. At this point all we have to support the efficacy of endovascular therapies in acute ischemic stroke are stories of patients rising from the cath lab table reciting poetry they never knew before their infarct, and perfusion studies taken after the intervention showing near-normal restoration of blood flow. Let us not confuse anecdote and pretty pictures with evidence of benefit. Using such data to justify the restructuring of our healthcare infrastructure is unwise. The resources required to train an army of interventionists to be ready at a moment's notice, equip a nation of cath labs to be accessible 24 hours a day, and mobilize a pre-hospital system to deliver these patients swiftly and safely to facilities capable of endovascular interventions would be massive. All for a treatment that has not only failed to demonstrate efficacy over our current “standard of care”, but rests on a theory of temporal urgency that has never been demonstrated in conclusive fashion. It is not hard to imagine that if we fail to heed the warnings of the Menees trial, in 13 years the NEJM will once again publish findings from a large national registry. Only on this occasion it will examine patients undergoing endovascular interventions for acute ischemic stroke. Like the Menees cohort, this registry will demonstrate that over a 5-year period we reduced time to intervention impressively, and yet despite this effort and the massive resources invested to achieve it, failed to demonstrate any improvement in neurological outcomes. We will be left wondering where we went wrong.

Here and now…


Sources Cited:

1. Menees DS et al. Door-to-balloon time and mortality among patients undergoing primary PCI. N Engl J Med. 2013 Sep 5;369(10):901-9

2. Cannon CP, Gibson CM, Lambrew CT, et al. Relationship of symptom-onset-to-balloon time and door-to-balloon time with mortality in patients undergoing angioplasty for acute myocardial infarction. JAMA. 2000; 283: 2941–2947.

3. Broderick JP, Palesch YY, Demchuk AM, et al. Endovascular therapy after intravenous t-PA versus t-PA alone for stroke. N Engl J Med 2013;368:893-903

4. Ciccone A, Valvassori L, Nichelatti M, et al. Endovascular treatment for acute ischemic stroke. N Engl J Med 2013;368:904-913

5. Kidwell CS, Jahan R, Gornbein J, et al. A trial of imaging selection and endovascular treatment for ischemic stroke. N Engl J Med 2013;368:914-923

6. Wardlaw JM, Murray V, Berge E, Del Zoppo GJ. Thrombolysis for acute ischaemic stroke. Cochrane Database Syst Rev. 2009 Oct 7;(4):CD000213.

7. The IST-3 collaborative group. The benefits and harms of intravenous thrombolysis with recombinant tissue plasminogen activator within 6 h of acute ischaemic stroke (the third international stroke trial [IST-3]): a randomised controlled trial. Lancet 2012; 379

8. The National Institute of Neurological Disorders and Stroke rt-PA Stroke Study Group. Tissue plasminogen activator for acute ischemic stroke. N Engl J Med 1995;333:1581-1587

9. Hoffman JR, Schriger DL. A graphic reanalysis of the NINDS Trial. Ann Emerg Med. 2009 Sep;54(3):329-36, 336.e1-35.

10. Mann, J. Efficacy of Tissue Plasminogen Activator (Tpa) for Stroke Truths about the NINDS study: setting the record straight. West J Med. May 2002; 176(3): 192–194.

11. Lees KR, Bluhmki E, von Kummer R, et al. Time to treatment with intravenous alteplase and outcome in stroke: an updated pooled analysis of ECASS, ATLANTIS, NINDS, and EPITHET trials. Lancet. 2010 May 15; 375(9727): 1695-703.

12. Saver JL, Fonarow GC, Smith EE, et al. Time to treatment with intravenous tissue plasminogen activator and outcome from acute ischemic stroke. JAMA. 2013 Jun 19; 309(23): 2480-8.

13. Ahmed N et al. Results of Intravenous Thrombolysis Within 4.5 to 6 Hours and Updated Results Within 3 to 4.5 Hours of Onset of Acute Ischemic Stroke Recorded in the Safe Implementation of Treatment in Stroke International Stroke Thrombolysis Register (SITS-ISTR): An Observational Study. JAMA Neurol. 2013;70(7):837-844.

14. Radecki RP.  Acute ischemic stroke and timing of treatment. JAMA. 2013 Nov 6;310(17):1855-6

15. Khatri P, et al. Time to angiographic reperfusion and clinical outcome after acute ischaemic stroke: an analysis of data from the Interventional Management of Stroke (IMS III) phase 3 trial. Lancet Neurol. Published online 28 April 2014.

“The Adventure of the Greek Interpreter Revisited”



If our affair with thrombolytics had not started off with the success it did, we might not still be trying to nostalgically relive our yesteryears of thrombolytic glory. Whether it was streptokinase, alteplase, or tenecteplase (TNK), thrombolytics have consistently demonstrated a mortality benefit when used in patients experiencing an ST-elevation infarction (1). If it were not for the superiority of PCI in both measures of efficacy and financial gain, our romance with thrombolytics might still be in full swing. Our initial triumph in STEMI patients has led us to believe in the efficacy of thrombolytics in all hypercoagulable disease states, despite their mediocre performance outside the confines of ACS.

When thrombolytics fell out of favor in the management of STEMI, supplanted by mechanical reperfusion therapy, it seemed only natural that we turn our focus to the treatment of acute ischemic stroke to fill our thrombophilic void. Though the efficacy of thrombolytics in CVA is still under debate, it is clear they have never demonstrated the mortality benefit exhibited in myocardial infarction (2). What we are left debating are small differences on scales measuring functional neurological outcomes. Scales so unreliable that two neurologists grading the very same patient, one after the other, often disagree by one or more points (3). Whether these potential improvements in neurological outcome are of clinical relevance or not, they are a far cry from the life-saving benefits thrombolytics provide in STEMI management.

Pulmonary embolism was another likely candidate for thrombolytic intervention. As clinicians, we have become hyperaware of and preoccupied with diagnosing even the most clinically irrelevant pulmonary emboli. When we do happen to stumble upon emboli of clinical import, we ironically have very little to offer the patient other than a hospital bed, IV heparin, and the promise of a six-month course of coumadin therapy. So the idea that thrombolytics may help dissolve these larger clots is an appealing one, to say the least. Despite the sparse evidence supporting their utility, and no mortality benefit demonstrated in patients with massive pulmonary embolism (4), thrombolytics have gained general acceptance in this subgroup. And though this "standard of care" is based more on our fear of watching the patient decompensate in front of us than upon proof of benefit, their role in the management of massive pulmonary embolism is now a class IIa recommendation in the AHA guidelines on the management of pulmonary embolism (5).

A looming question is whether patients with sub-massive pulmonary embolism are candidates for lytic therapy. The PEITHO trial is the largest RCT to have examined this question to date (6). PEITHO's results, originally released in abstract form last year, were finally published in full on April 10, 2014 by Meyer et al in the NEJM. This study randomized normotensive patients with radiographic evidence of PE and concern for right heart strain (positive troponin, BNP, or evidence of right heart strain on CT or ECHO) to either a thrombolytic strategy (TNK) or placebo. In "The Adventure of the Greek Interpreter" I discussed the results of this trial, but in brief they were disappointing. The authors claim success in a number of surrogate endpoints they categorized as "hemodynamic collapse". As readers we cannot help but feel cheated, as mortality between the groups was statistically equivalent. What the PEITHO trial did illustrate was that when patients are given thrombolytics, they bleed. Overall there was an approximately 9% absolute difference in major bleeding between the TNK and placebo groups (11.5% vs 2.4%). Additionally, there was an approximately 2% increase in ICH in patients given TNK.
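The weight of these bleeding rates is easier to appreciate when restated as an absolute risk difference and a number needed to harm. A quick sketch, using only the rates quoted above (the function names are my own, not trial nomenclature):

```python
import math

def risk_difference(rate_tx, rate_ctrl):
    """Absolute risk difference between treatment and control event rates."""
    return rate_tx - rate_ctrl

def number_needed(rate_tx, rate_ctrl):
    """Number needed to treat (or harm): 1 / |absolute risk difference|, rounded up."""
    return math.ceil(1 / abs(risk_difference(rate_tx, rate_ctrl)))

# PEITHO major bleeding: 11.5% with TNK vs 2.4% with placebo
ard_bleeding = risk_difference(0.115, 0.024)   # ~0.091, i.e. the ~9% difference above
nnh_bleeding = number_needed(0.115, 0.024)     # ~11: one extra major bleed per ~11 treated
```

In other words, for roughly every 11 patients given TNK, one suffered a major bleed who would not have on placebo, a harm any long-term benefit would have to overcome.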

And so, since the acute benefits of thrombolytics in pulmonary embolism are nothing short of sub-tacular, the debate on the utility of thrombolytics in sub-massive pulmonary emboli hinges on their ability to improve functional outcomes in the long term. The evidence supporting thrombolytics' efficacy in preventing post-embolic pulmonary hypertension is unconvincing at best. Unfortunately, the authors of the PEITHO trial failed to publish long-term functional outcomes. In the PEITHO trial design published in the American Heart Journal in 2012, the authors report that 6-month functional outcomes would be recorded, including NYHA classification and echocardiographic findings. A second publication on the PEITHO cohort including these results may very well answer some of the uncertainties we currently have (7).

Until then, the best evidence we have supporting the practice of thrombolytic therapy in acute pulmonary embolism is the MOPETT trial (8). In this trial, comprising 121 patients diagnosed with sub-massive pulmonary embolism and evidence of right heart strain, patients were randomized to either placebo or 50 mg of tPA ("half-dose" tPA). The authors found a staggering 41% absolute difference in their primary endpoint, the number of patients with pulmonary hypertension at 2 years post enrollment. As discussed in the original post, "The Adventure of the Greek Interpreter", the rate of pulmonary hypertension in the placebo arm was far higher than that observed in similar cohorts (9,10,11). These impressive results are far more likely due to the surrogate outcome the authors chose as their primary endpoint than to the efficacy of thrombolytics. Whereas most trials define pulmonary hypertension as echocardiographic evidence of pulmonary hypertension in a symptomatic patient, the authors of the MOPETT trial chose to use echocardiographic findings alone. In the asymptomatic patient, we are unsure of the clinical relevance this sonographic information provides in isolation.
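To appreciate just how staggering that figure is, convert it to a number needed to treat: a 41% absolute difference implies that treating roughly three patients prevents one case, an effect size almost never seen in cardiovascular trials. A back-of-the-envelope sketch using only the figure quoted above:

```python
import math

# MOPETT: ~41% absolute difference in pulmonary hypertension at follow-up
# (figure quoted in the text above; a rough illustration, not trial-level data)
arr = 0.41                # absolute risk reduction
nnt = math.ceil(1 / arr)  # number needed to treat: ~3 patients per case prevented
```

An NNT this small should itself raise suspicion that the endpoint, rather than the therapy, is doing the work.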

A recently published trial by Jeff Kline, the man who has defined pulmonary embolism for the past decade, hoped to delineate the clinical effect of thrombolytic therapy on the incidence of pulmonary hypertension after sub-massive pulmonary embolism (12). Named TOPCOAT, this trial examined thrombolytics' effects on functional outcomes 3 months after pulmonary embolism. Patients were randomized to either a single bolus of 30-50 mg of tenecteplase (TNK) or placebo. Unfortunately, interpreting the results is difficult due to the trial's premature stoppage (after only 83 patients) and its convoluted primary endpoint: a composite of recurrent PE, poor functional capacity (RV dysfunction with either dyspnea at rest or exercise intolerance), or an SF-36 Physical Component Summary (PCS) score <30 at 90-day follow-up. The authors also examined the rate of pulmonary hypertension as defined by echocardiographic findings.

In the TOPCOAT trial, the TNK arm certainly seemed to have slightly better functional outcomes at 90 days. The TNK group had fewer patients with a New York Heart Association (NYHA) functional class greater than 3 (2 with TNK vs 8 with placebo) and fewer patients with a perception-of-wellness score under 30 (0 with TNK vs 2 with placebo). None of these differences reached statistical significance, and overall the groups' functional outcomes were fairly similar: both arms of the trial had almost identical mean NYHA scores, VEINES-QOL scores, and SF-36 Mental Component scores. In fact, the number of patients with a poor functional outcome at 3 months, defined as NYHA >3 and evidence of right heart hypertrophy on echo (the traditional definition of post-embolic pulmonary hypertension), was identical (approximately 7.5%). If echocardiographic findings alone (similar to the MOPETT definition) had been used to diagnose post-embolic pulmonary hypertension, the incidence would have increased to 32.5%.

TOPCOAT, like MOPETT, suggests that thrombolytics may provide some benefit in long-term outcomes after sub-massive pulmonary embolism. Just how relevant these benefits are is still unclear. TOPCOAT further reinforces that the unrealistic findings in MOPETT were just that: too good to be true. Whether these benefits outweigh the 2% risk of ICH that PEITHO revealed is still unknown. Furthermore, it is still unclear who truly benefits from acute thrombolytic therapy. It may very well be that the young, healthy patient with no comorbidities and significant pulmonary reserve is unlikely to develop pulmonary hypertension, while the older patient with COPD or chronic heart failure is more at risk and more likely to benefit from thrombolytic therapy. Ironically, according to the PEITHO cohort, these are the very same patients at the highest risk for ICH.

Finally, the question arises whether the differences in doses and protocols used in the MOPETT, TOPCOAT, and PEITHO trials alter clinical outcomes and the incidence of ICH. Was the "half-dose" strategy used in the MOPETT trial the reason for this cohort's low rate of ICH, or was it just random chance and a small population size? From the existing data we are unable to resolve these uncertainties. Historically, these lines of inquiry have always proved fruitless. As far back as the GISSI-2 trial (13), examining thrombolytics in acute myocardial infarction, no particular thrombolytic agent demonstrated superiority over the others. Not only were the authors unable to demonstrate superiority of any particular agent, it didn't matter whether these clot busters were administered with or without heparin. Additionally, when the Cochrane Group examined thrombolytic therapy for acute ischemic stroke, they were unable to find a difference in efficacy between the individual thrombolytic agents or among the various dosing strategies utilized (14).

Like their use in acute ischemic stroke, the use of thrombolytics in sub-massive pulmonary embolism has failed to demonstrate the objective benefits we saw in acute myocardial infarction. Thus, as in CVA, we are left deciphering the relevance of subjective endpoints of uncertain value. At least in the arena of acute ischemic stroke we are familiar with the methods used to evaluate functional outcomes, and there is an accepted standard for a poor outcome (an mRS >2) with which we can judge performance. The instruments used to evaluate functional outcomes in post-pulmonary embolism patients are as yet unfamiliar. Furthermore, there has yet to be a consistent set of metrics, or a consistent time period, for measuring these outcomes. There does seem to be a consistent signal of benefit throughout the thrombolytic literature for pulmonary embolism; whether it is clinically relevant or outweighs the obvious harms is still uncertain. In theory, "half-dose" thrombolytic therapy seems physiologically plausible, but it is important and healthy that we maintain a robust state of skepticism until we have more than physiological reasoning and the warm memories of the golden years of thrombolytics supporting its use in sub-massive pulmonary embolism.

Sources Cited:

  1. Fibrinolytic Therapy Trialists’ (FTT) Collaborative Group. Indications for fibrinolytic therapy in suspected acute myocardial infarction: collaborative overview of early mortality and major morbidity results from all randomised trials of more than 1000 patients. Lancet. 1994 Feb 5;343(8893):311-22.
  2. Wardlaw JM, Murray V, Berge E, del Zoppo GJ. Thrombolysis for acute ischaemic stroke. Cochrane Database of Systematic Reviews 2009, Issue 4. Art. No.: CD000213. DOI: 10.1002/14651858.CD000213.pub2.
  3. Banks et al. Outcomes validity and reliability of the modified Rankin scale: implications for stroke clinical trials: a literature review and synthesis. Stroke. 2007 Mar;38(3):1091-6. Epub 2007 Feb 1.
  4. Wan S, Quinlan DJ, Agnelli G, Eikelboom JW. Thrombolysis compared with heparin for the initial treatment of pulmonary embolism: a meta-analysis of the randomized controlled trials. Circulation. 2004; 110: 744–749
  5. Jaff et al. Management of Massive and Submassive Pulmonary Embolism, Iliofemoral Deep Vein Thrombosis, and Chronic Thromboembolic Pulmonary Hypertension  A Scientific Statement From the American Heart Association. Circulation. 2011; 123: 1788-1830
  6. Meyer et al. Fibrinolysis for Patients with Intermediate-Risk Pulmonary Embolism N Engl J Med 2014; 370:1402-1411 April 10, 2014
  7. Steering Committee. Single-bolus tenecteplase plus heparin compared with heparin alone for normotensive patients with acute pulmonary embolism who have evidence of right ventricular dysfunction and myocardial injury: rationale and design of the Pulmonary Embolism Thrombolysis (PEITHO) trial. Am Heart J. 2012 Jan;163(1):33-38.e1. doi: 10.1016/j.ahj.2011.10.003.
  8. Sharifi et al.  Moderate pulmonary embolism treated with thrombolysis (from the “MOPETT” Trial). Am J Cardiol. 2013 Jan 15;111(2):273-7
  9. Pengo V, Lensing AW, Prins MH, et al.; Thromboembolic Pulmonary Hypertension Study Group. Incidence of Chronic Thromboembolic Pulmonary Hypertension after Pulmonary Embolism. N Engl J Med 2004; 350:2257-2264
  10. Kline JA, Steuerwald MT, Marchick MR, Hernandez-Nino J, Rose GA. Prospective evaluation of right ventricular function and functional status 6 months after acute submassive pulmonary embolism: frequency of persistent or subsequent elevation in estimated pulmonary artery pressure. Chest. 2009; 136: 1202–1210.
  11. Becattini C, Agnelli G, Pesavento R, et al. Incidence of chronic thromboembolic pulmonary hypertension after a first episode of pulmonary embolism. Chest 2006;130(1):172-175.
  12. Kline et al. Treatment of submassive pulmonary embolism with tenecteplase or placebo: cardiopulmonary outcomes at 3 months: multicenter double-blind, placebo-controlled randomized trial. J Thromb Haemost. 2014 Apr;12(4):459-68.
  13. Gruppo Italiano per lo Studio della Sopravvivenza nell’Infarto Miocardico . GISSI-2: a factorial randomised trial of alteplase versus streptokinase and heparin versus no heparin among 12 490 patients with acute myocardial infarction. Lancet 1990; 336: 65-71
  14. Wardlaw JM, Koumellis P, Liu M. Thrombolysis (different doses, routes of administration and agents) for acute ischaemic stroke. Cochrane Database Syst Rev. 2013 May 31


“The Case of the Dying Detective Continues…”

A picture of Florence Nightingale (1820-1910), "The Lady with the lamp", the English nurse, famous for her work during the Crimean War, is seen here in the hospital at Scutari, Turkey.

Survivors of Armageddon in any of its many forms, zombie, alien, or otherwise, are often left in a state of emotional turmoil. They face an uncertain future, the loss of loved ones, and the constant stress of imminent danger. Underneath the obvious anguish lies a deeper, more subtle, but equally distressing sentiment: uncertainty. Now faced with a world completely devoid of the values they once held dear, they are often incapable of finding meaning in this post-apocalyptic wasteland. On March 18, 2014, the publication of the ProCESS trial ushered in a new era of sepsis management (1). And yet, despite being the largest and highest-quality trial thus far to examine the efficacy of various strategies for managing the septic patient, it has done very little to illuminate what this post-Early Goal Directed Therapy (EGDT) era will entail.

In 2001, Rivers et al published the findings of a single-center, 263-patient RCT examining the efficacy of an Emergency Department-based protocol consisting of stepwise goals meant to optimize hemodynamics and tissue perfusion (2). Comparing this protocol to "standard care", the authors reported astounding results, with an absolute mortality benefit of 16% in favor of the protocol-based strategy. Goal-directed therapy, which had failed to demonstrate benefit in earlier trials when applied to ICU patients (11,12), now obtained incredible results when implemented in the Emergency Department. And thus the era of EGDT was born. Its acronym was the battle cry for Emergency Physicians near and far. Enforced, in some cases, in a militaristic fashion, it became the standard of care in Emergency Departments internationally.

However, there was unease among the troops in the form of a number of clinicians opposed to accepting EGDT in its entirety. After all, was it wise to globally adopt a protocol based on a single-center study with so few participants? They challenged the wisdom of unquestioningly applying EGDT as a bundled therapy. Though some components of EGDT undoubtedly benefit patients in septic shock (fluids, early antibiotics, and supportive care), others have proven to be of no benefit and in some cases harmful (dobutamine use and CVP monitoring) (3). These subtleties required further examination before adopting the bundle universally.

ProCESS sought to address these very concerns, and in a sense it was a success. In a 1:1:1 RCT design, Yealy et al compared the Rivers EGDT protocol to both a less invasive but still protocol-based strategy and a "usual care" group (care as determined by the attending physician). The authors found no difference in any of the endpoints measured. Most importantly, the primary endpoint, 60-day mortality, was 21.0%, 18.2%, and 18.9%, respectively. Although there were small differences in the total amount of fluid given within the first 6 hours, the main differences among the 3 groups were the use of vasopressors (significantly higher in the two protocol-based groups) and dobutamine (only used with any consistency in the EGDT group).

ProCESS exposes many important aspects of the management of sepsis. First, the importance of EGDT lies not in the execution of the bundle in its entirety, but rather in the value of early and aggressive fluid resuscitation and the necessity of early administration of broad-spectrum antibiotics. ProCESS also establishes that there is more than one way to manage the septic patient, providing evidence that the unstructured judgment of physicians is as effective as a standardized protocol in determining fluid status, hemodynamics, and tissue perfusion.

What the ProCESS trial fails to divulge is the most effective strategy to guide fluid therapy. The authors compared unstructured clinician judgment (not specifically defined) of fluid responsiveness to either CVP or SBP plus shock index, neither of which is a reliable indicator of true fluid responsiveness. We have known for some time now that, from a physiological standpoint, CVP is a poor marker of fluid responsiveness (4). Since the publication of the Rivers EGDT bundle, many more elegant and intrinsically accurate methods of assessing fluid responsiveness have been proposed.

Bedside ECHO, IVC ultrasound, and non-invasive CO monitors have all been suggested as alternatives to CVP monitoring, each found to be a more reliable predictor of fluid responsiveness. The trials that examine the accuracy of these methods in assessing fluid responsiveness have used the surrogate endpoint of cardiac output (CO), measured by pulmonary artery catheter (PAC) (5,6,7,8,9). The PAC has generally been viewed as the gold standard for measuring CO, and yet in the case of assessing fluid responsiveness in the septic patient it should itself be viewed as a surrogate endpoint. When treating a patient in septic shock, it is not critical to know their specific CO or how our fluid challenge affects it. What is important is how our fluid challenge affects the patient's morbidity and mortality. Though we assume that cardiac output and direct assessment of fluid responsiveness with a PAC are ideal metrics to follow, we have no real proof supporting this concept. In fact, the only real evidence we have has demonstrated just the opposite. A large multi-center RCT published by Richard et al in JAMA in 2003 examined this very question (10). In it, 681 ICU patients in shock (86% septic in origin) were randomized to have their treatment guided by PAC measurements or based solely on the clinical judgment of the treating physician. The trial failed to demonstrate any added clinical benefit from direct monitoring of a patient's cardiac output and fluid responsiveness. Thus, using the accuracy with which ECHO, IVC ultrasound, or non-invasive CO monitors predict PAC findings to decide the ideal strategy to guide fluid resuscitation, when direct measurement of these metrics via PAC was of no clinical benefit, seems logically flawed.

It is necessary to examine how ECHO, IVC ultrasound, and non-invasive CO monitors affect patient-oriented, clinically relevant endpoints. Rivers et al proposed CVP, and up until the publication of the ProCESS trial it was the only metric that, when used to guide fluid resuscitation in a clinical trial, improved mortality. The ProCESS trial has demonstrated that CVP is not superior to unstructured clinician judgment. Unfortunately, ProCESS fails to provide us with a better option. ECHO, IVC ultrasound, or non-invasive CO monitors may be more accurate guides, but until they are tested against clinician judgment using patient-oriented endpoints, it is hard to truly quantify their utility. In the ProCESS trial, mortality was unaffected among the groups despite the fact that there was over a liter's difference in the quantity of fluid administered (5,059 mL, 5,511 mL, and 4,362 mL, respectively). This may suggest that a precise measurement of fluid responsiveness is not necessary (1). Merely assessing for fluid tolerance, rather than responsiveness, using IVC ultrasound may be the simplest and most effective method to guide fluid administration.

ProCESS has ushered in a new era for the management of sepsis in the Emergency Department. Though the trial was able to clarify the importance of fluids and early antibiotics as the key components of the septic bundle, it has yielded little assistance on how best to guide the administration of said fluid. In this post-EGDT dystopia, it may be that a single metric will never be as powerful a tool as the flawed mind of the physician caring for the patient. The human brain, with all its beautiful imperfections, may prove superior to any single objective measurement. A new era indeed…


Sources Cited:

1. The ProCESS Investigators. A randomized trial of protocol-based care for early septic shock. N Engl J Med.  2014 March.

2. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med.  2001;345:1368-1377.

3. Marik et al. Early goal-directed therapy: on terminal life support? Am J Emerg Med. 2010 Feb;28(2):243-5.

4. Marik et al. Does the central venous pressure predict fluid responsiveness? An updated meta-analysis and a plea for some common sense. Crit Care Med. 2013 Jul;41(7):1774-81.

5. Marik et al. Noninvasive cardiac output monitors: a state-of-the-art review. J Cardiothorac Vasc Anesth. 2013 Feb;27(1):121-34.

6. Marik et al. Hemodynamic parameters to guide fluid therapy. Annals of Intensive Care. 2011, 1:1.

7. Barbier et al. Respiratory changes in inferior vena cava diameter are helpful in predicting fluid responsiveness in ventilated septic patients. Intensive Care Med 2004, 30:1740-1746.

8. Feissel et al. The respiratory variation in inferior vena cava diameter as a guide to fluid therapy. Intensive Care Med 2004, 30:1834-1837.

9. Biais et al. Changes in stroke volume induced by passive leg raising in spontaneously breathing patients: comparison between echocardiography and Vigileo/FloTrac device. Crit Care 2009, 13.

10. Richard et al. Early Use of the Pulmonary Artery Catheter and Outcomes in Patients With Shock and Acute Respiratory Distress Syndrome: A Randomized Controlled Trial. JAMA. 2003;290(20):2713-2720.

11. Hayes et al. Elevation of Systemic Oxygen Delivery in the Treatment of Critically Ill Patients. N Engl J Med. 1994 Jun;330(24):1717-22.

12. Gattinoni et al. A Trial of Goal-Oriented Hemodynamic Therapy in Critically Ill Patients. N Engl J Med. 1995 Oct;333(16):1025-32.