“The Adventure of the Red-Headed League”

A peasant traveling home at dusk sees a bright light traveling along ahead of him. Looking closer, he sees that the light is a lantern held by a ‘dusky little figure’, which he follows for several miles. All of a sudden he finds himself standing on the edge of a vast chasm with a roaring torrent of water rushing below him. At that precise moment the lantern-carrier leaps across the gap, lifts the light high over its head, lets out a malicious laugh and blows out the light, leaving the poor peasant a long way from home, standing in pitch darkness at the edge of a precipice.

                                            -Welsh tale describing Will-o’-the-Wisp

 

So much of what we do in Emergency Medicine is translating shades of grey into dichotomous, patient-oriented decisions. Truth in medicine is a fluid, tenuous state, very rarely encountered in the chaos of the Emergency Department. More often than not we are forced to act in varying states of uncertainty. Naturally we search out specific data points in this fog of ambiguity that we believe will provide guidance through the unknown. And yet some of these beacons are just as likely to lead us astray as they are to provide safe passage.

One such variable is a history of loss of consciousness (LOC) in a patient suffering from minor head trauma. Despite a multitude of contradictory data, LOC has persisted in the mind of the practitioner (oftentimes in isolation) as a relevant branch-point in deciding who does and does not require further downstream investigations (2). The most recent excavation of the PECARN dataset, published in JAMA Pediatrics, should serve to remind us that just because a variable has a statistical association with the endpoint in question does not mean it is a useful factor to guide clinical decision-making (2).

In this latest dive into the PECARN dataset, Lee et al set out to examine how influential LOC was in predicting clinically important traumatic brain injury (ciTBI). In the original derivation and validation cohort, by Kuppermann et al, LOC was identified as one of the six variables with a strong enough predictive value to be included in the formal decision rule (1). The original PECARN dataset was a mammoth undertaking, which prospectively evaluated 42,412 pediatric patients presenting to the Emergency Department after experiencing a minor head injury. Of this group only 780 patients (1.8%) were found to have any evidence of TBI on CT. Only 376 (0.9%) had injuries of clinical relevance, and only 60 patients (0.14%) required any form of neurosurgical intervention. Given this extremely low rate of ciTBI, one could argue that the PECARN authors had already identified a cohort of patients at incredibly low risk for relevant injury and that any further risk stratification would be futile. Despite this, the original authors derived and internally validated two age-specific (<2 years old and ≥2 years old) decision rules that boasted negative predictive values of 100% and 99.95% respectively. These remain the most robust clinical decision rules derived to date in the pediatric population, despite lacking sufficient external validation, incomplete follow-up (one-fifth of the 64.7% of patients who did not undergo definitive testing were lost to follow-up), and the fact that the rule was outperformed by physicians’ unstructured judgment (1).

Lee et al sought to improve, at least conceptually, on the diagnostic characteristics of the PECARN decision rules by addressing the added value isolated LOC provides in identifying patients with ciTBI. The authors defined isolated LOC in two ways. The first, termed PECARN-isolated LOC, identified patients who experienced LOC without any of the other factors that make up the PECARN decision rules. The second utilized an expanded definition of isolated LOC, which additionally required the absence of predictors from other commonly used decision rules for head injury (NEXUS II, the New Orleans Criteria, and the Canadian CT Head Rule). It is important to note that the expanded definition did not include mechanism of injury as a relevant predictor of ciTBI (2).

Of the 42,412 patients, 6,286 (15.4%) were found to have suspected or confirmed LOC. An interesting side note: of these 6,286 patients with LOC, 5,010 had a head CT performed, and for the majority the treating physician recorded the history of LOC as the primary reason for the scan (demonstrating that even in this cohort LOC was considered a clinically important factor for predicting injury). Of the patients with a history of LOC, PECARN-isolated LOC was present in 2,780 (47.5%). In this subgroup, the incidence of TBI on CT was 1.9% and the incidence of ciTBI was 0.5%. Unfortunately the expanded definition of isolated LOC was far less useful, as only 576 (9.4%) of patients with LOC met its criteria, most likely due to the inclusion of “any traumatic scalp findings” as a relevant predictor. Of those that did meet these exacting standards, only 0.9% were found to have TBI on CT and 0.2% had a clinically relevant injury. In the PECARN cohort, if LOC were used independently as a decision point for head CT, the sensitivity and specificity for identifying ciTBI would be 49.5% and 85.4% respectively. Clearly not the beacon of light we presume it to be.
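
To put those operating characteristics in perspective, here is a quick back-of-the-envelope calculation (a sketch in Python; the 0.9% prevalence is the overall ciTBI rate from the PECARN cohort quoted above, and the function is ours, written purely for illustration):

```python
def predictive_values(sens, spec, prev):
    """Post-test probabilities from test characteristics and disease prevalence."""
    tp, fn = sens * prev, (1 - sens) * prev              # diseased patients
    tn, fp = spec * (1 - prev), (1 - spec) * (1 - prev)  # non-diseased patients
    return tp / (tp + fp), tn / (tn + fn)                # (PPV, NPV)

# LOC as a standalone decision point for head CT in the PECARN cohort:
ppv, npv = predictive_values(sens=0.495, spec=0.854, prev=0.009)
print(f"PPV {ppv:.1%}, NPV {npv:.1%}")  # PPV 3.0%, NPV 99.5%

# The 99.5% NPV looks reassuring until you notice that, at a ciTBI
# prevalence of 0.9%, doing nothing at all already carries an "NPV" of 99.1%.
```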

What is important to remember is that a statistically significant odds ratio found using a multifactorial regression model does not directly translate into a clinically useful predictor. Multifactorial regression in all its forms is a statistical attempt to isolate one variable’s ability to predict the outcome in question. Essentially it is a graphical illustration (the slope of the line indicating the strength of the association) of how one variable affects another while a mathematical attempt is made to control for other factors (3). Despite its statistical authority, finding an independent association between a variable and the outcome in question is not the same as studying a group of patients who are otherwise well except for the variable in question (LOC, for example). Moreover, the odds ratio that is typically reported as the result of a multifactorial regression model does not intuitively convey the clinical relevance of this correlation (3).
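
A worked example makes the gap between statistical and clinical significance concrete. The odds ratio of 2 below is an invented, illustrative figure; the 0.9% baseline is the ciTBI rate from the PECARN cohort discussed above:

```python
def risk_from_odds_ratio(baseline_risk, odds_ratio):
    """Absolute risk in the exposed group implied by a baseline risk and an OR."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    exposed_odds = baseline_odds * odds_ratio
    return exposed_odds / (1 + exposed_odds)

# A "statistically significant" odds ratio of 2.0 sounds impressive...
print(f"{risk_from_odds_ratio(0.009, 2.0):.1%}")  # 1.8%
# ...but it only moves the absolute risk from 0.9% to 1.8%, a shift far too
# small, on its own, to dictate who does and does not warrant a CT.
```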

The utility of isolated LOC for predicting clinically important TBI seems to have undergone this very mathematical augmentation. Although LOC has consistently demonstrated a statistically independent association with ciTBI, when applied clinically in patients with isolated LOC its predictive value is minimal. In the derivation cohort of the Canadian CT Head Rule, Stiell et al found LOC was independently associated with ciTBI (4). However, when used clinically they found only 0.4% of patients with LOC had a clinically relevant injury requiring intervention, and most of these could be identified simply by assessing the patient’s mental status in the ED (5). In the NEXUS II cohort, LOC was identified as a predictor of ciTBI but failed to maintain clinical relevance when assessed using a multifactorial model (6). Additionally, if LOC were used to decide which patients in this cohort would receive further imaging, it would have resulted in a sensitivity and specificity of 48% and 63% respectively (6). In the original PECARN cohort, the predictors that identified the bulk of the patients with ciTBI were altered mental status (AMS) and clinically obvious signs of skull fracture. If patients did not present altered or with obvious signs of skull fracture, their risk of ciTBI was incredibly low (0.9% in the under-2 group and 0.8% in the over-2 group). The remaining predictors in the PECARN decision rules, including LOC, did very little to further risk stratify patients (1).

What this can be reduced to is our fear of the clinically occult head bleed, based on the idea that the skull is a lead box blocking the transmission of potential chaos within from our external eye until it is too late to intervene. This fear is driven by anecdote, passed down from attending to resident in a form of modern-day oral history. These stories are not supported by the literature; in reality, cases of clinically occult intracranial bleeding are rare and often identifiable by high-risk features (advanced age, anticoagulant use, etc). A history of LOC in an otherwise well-appearing patient provides us with little guidance in identifying these rare cases. Moreover, the lack of LOC does not safely eliminate the risk of significant injury. Oftentimes its absence will give us a false sense of security and leave us, like the solitary peasant, far from home, standing in pitch darkness on the edge of a cavernous precipice…

 

Sources Cited:

  1. Kuppermann N, Holmes JF, Dayan PS, et al; Pediatric Emergency Care Applied Research Network (PECARN). Identification of children at very low risk of clinically-important brain injuries after head trauma: a prospective cohort study. Lancet. 2009;374(9696):1160-1170.
  2. Lee LK, Monroe D, Bachman MC, et al. Isolated Loss of Consciousness in Children With Minor Blunt Head Trauma. JAMA Pediatr. Published online July 7, 2014. doi:10.1001/jamapediatrics.2014.361.
  3. Barrett TW, et al. Is the Golden Hour Tarnished? Registries and Multivariable Regression. Ann Emerg Med. 2010;56(2):188-200.
  4. Stiell IG, Wells GA, Vandemheen K, et al. The Canadian CT Head Rule for patients with minor head injury. Lancet. 2001;357:1391-1396.
  5. Stiell IG, Clement CM, Rowe BH, et al. Comparison of the Canadian CT Head Rule and the New Orleans Criteria in Patients With Minor Head Injury. JAMA. 2005;294(12):1511-1518. doi:10.1001/jama.294.12.1511.
  6. Mower WR, Hoffman JR, Herbert M, et al. Developing a Decision Instrument to Guide Computed Tomographic Imaging of Blunt Head Injury Patients. J Trauma. 2005;59(4):954-959. (NEXUS II)

 

 

“The Adventure of the Golden Standard”


We have all been told ghost stories and fairy tales, campfire fables intended to frighten a gullible populace into behaving in a manner deemed appropriate. Even in Emergency Medicine we have our fair share of ghost stories. Most notably, we are taught from an early age to fear and respect the clinically occult pulmonary embolism: a disease process so cryptic in nature it can go undetected throughout a patient’s Emergency Department stay, yet deadly enough to strike the patient down shortly after their discharge. Though such a monster exists, at least anecdotally, it certainly does not strike with the frequency these tales would have you believe. As with any evil spirit that cannot be detected through normal measures, we have developed our own set of wards and charms in the hopes of keeping this demon at bay. One of our more frequently (over)used charms of this type is the serum D-Dimer. Armed with its protection we go to work every day, ready to battle the mythical beast that is the clinically occult pulmonary embolism.

A recent publication in JAMA by Righini et al sought to expand D-Dimer’s role in the eradication of venous thromboembolism (VTE) (1). To address the poor specificity of D-Dimer, experts have suggested raising the threshold at which the assay is considered positive. Some have recommended doubling the threshold traditionally considered normal, while others propose an age-adjustment to account for the natural increase in serum levels with aging. Most of the data examining these strategies is retrospective in nature (2), and until this recent JAMA paper we had no prospective literature validating the approach. Righini et al examined the age-adjusted strategy, using a level of 10 multiplied by the patient’s age (in years) as their threshold for a positive D-Dimer. Patients whose D-Dimer level was below their age-adjusted threshold had no further testing performed, while those above this threshold went on to more definitive testing. Using a gold standard of PE diagnosed by CT pulmonary angiogram (CTPA), V/Q scan or 3-month follow-up, the authors examined the age-adjusted approach. They claim a missed VTE rate at 3-month follow-up of 0.3%. Additionally, employing this age-adjusted threshold in low-risk patients over 75 years of age increased the specificity of the assay from 6.9% to 29.7%. Seemingly a landmark trial, this publication should reduce testing and make D-Dimer more clinically applicable in an older population. Unfortunately, this paper’s success may just as likely be due to a low-risk cohort, an imperfect gold standard and a limited definition of clinically positive events during follow-up.
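
For concreteness, the threshold under study reduces to a one-line function (a sketch; the over-50 restriction and the conventional 500 µg/L cutoff below it are as specified in the ADJUST-PE protocol, and the function name is ours):

```python
def d_dimer_cutoff(age_years):
    """Age-adjusted D-Dimer cutoff in ug/L (FEU), per the ADJUST-PE strategy:
    age x 10 for patients older than 50, the conventional 500 otherwise."""
    return age_years * 10 if age_years > 50 else 500

# An 80-year-old with a D-Dimer of 650 ug/L is negative under the
# age-adjusted strategy (cutoff 800) but positive under the fixed one (500).
print(d_dimer_cutoff(80))  # 800
```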

Though D-Dimer has experienced some degree of success in the recent literature, it has not always garnered such favor. In fact, D-Dimer never achieved the diagnostic accuracy necessary for universal clinical use. Even the most sensitive assays were found to be incapable of safely ruling out pulmonary embolism in an undifferentiated cohort of patients suspected of having a PE (3,4,5,15). In one of the few trials that randomized hospital wards to encourage D-Dimer use, compared to control wards where D-Dimer testing was discouraged, the authors found that widespread utilization did the exact opposite of what was intended. Not only did the evaluation of PE nearly double in the experimental arm compared to the control, but the number of V/Q scans increased as well. Even more surprising, while the experimental arm diagnosed and treated significantly more patients for PE (160 vs 94), there was no difference in 3-month mortality or recurrent VTE (6). And yet despite these obvious flaws we could not let go. The physiological reasoning and clinical convenience of such a test were too attractive for us to abandon the assay as a failure. Instead we adapted our patients to fit the test. With a few minor adjustments of incidence, a small modification of the gold standard and a certain amount of looking the other way when it came to clinical follow-up, the D-Dimer was transformed into a highly sensitive assay capable of ruling out PE and reducing invasive testing.

We know from early studies of D-Dimer assays that its sensitivity is only sufficient to rule out PE in cohorts in which the pre-test probability is around 10-15% (3,15). Traditionally this was accomplished by using a low-risk Wells score of 2 or less. This strategy was first validated in a study by Wells et al published in Annals of Internal Medicine in 2001 (5), in which the authors hypothesized that using a low-risk Wells score of 0-2 in conjunction with a D-Dimer assay would reduce further downstream testing. The overall incidence of pulmonary embolism in this cohort was 9.5%. As expected, D-Dimer performed admirably in such a low-risk cohort. The overall negative predictive value was 97.3%, which was powered primarily by the scarcity of disease in the low-risk group (1.3%). In fact, when the test was used in the moderate- and high-risk groups its negative predictive value fell to 93.9% and 88.5% respectively. The overall sensitivity of the D-Dimer in the entire cohort was only 78.5%. Such statistical machinations are relevant because the success of D-Dimer in the modern literature is driven in large part by the use of negative predictive value, in combination with low-risk cohorts, to overestimate D-Dimer’s diagnostic capabilities. This acceptance of the negative predictive value as the endpoint of significance has tainted the literature examining D-Dimer’s effectiveness. Though Wells et al were forthright in reporting the true test characteristics of D-Dimer, later studies have not been so transparent. Most notable was the validation cohort published by the Christopher group in 2006 (7). In this cohort the authors set out to demonstrate that patients with Wells scores of 4 or less could safely have PE excluded using a D-Dimer. Similar to the Wells et al cohort, these authors used the 3-month VTE event rate in patients discharged with a negative D-Dimer. Unlike the Wells cohort, patients with a Wells score of 4 or less and a negative D-Dimer had no further testing. The authors claim success, emphasizing that the 3-month event rate in the negative D-Dimer group was only 0.5%. Again, this negative predictive value is powered by the low incidence of disease in the cohort (12.1%). The actual sensitivity in this subgroup was 95%.

This pattern is consistent throughout the PE literature. The incidence of pulmonary embolism in prospective cohorts has been progressively decreasing over the past few decades. In the original PIOPED cohort, published by Stein et al in 1990, the high-risk, intermediate-risk and low-risk groups had rule-in rates of 68%, 30% and 9% respectively (8). In contrast, the PERC validation cohort, published in 2008 by Kline et al, had rule-in rates of 31.1%, 10.4% and 3% respectively (9). Obviously this decrease in incidence is due to our dwindling risk tolerance and the subsequent inclusion of a far lower-risk patient population in the diagnostic pathway. This dilution of the disease state and the focus on negative predictive value as the metric of choice provide a false impression of D-Dimer’s capabilities. The test appears safe for use in moderate-risk patients when in reality very few moderate-risk patients have been included in these cohorts.
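
The arithmetic behind this is worth making explicit. The sketch below fixes the test characteristics and varies only the prevalence; the 78.5% sensitivity is the Wells et al figure quoted above, while the 40% specificity is an assumed, purely illustrative value:

```python
def npv(sens, spec, prevalence):
    """Negative predictive value from test characteristics and prevalence."""
    tn = spec * (1 - prevalence)  # true negatives per patient screened
    fn = (1 - sens) * prevalence  # false negatives per patient screened
    return tn / (tn + fn)

SENS = 0.785  # overall sensitivity reported by Wells et al (2001)
SPEC = 0.40   # assumed specificity, for illustration only

for p in (0.013, 0.095, 0.30):
    print(f"prevalence {p:5.1%} -> NPV {npv(SENS, SPEC, p):.1%}")
# prevalence  1.3% -> NPV 99.3%
# prevalence  9.5% -> NPV 94.7%
# prevalence 30.0% -> NPV 81.3%
```

The same assay looks nearly infallible at a 1.3% prevalence and frankly dangerous at 30%, without a single characteristic of the test itself changing.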

The second major flaw in the modern D-Dimer literature is the gold standard used to define these thromboembolic events. Most notable for our current discussion is the utilization of CTPA as the gold standard test for diagnosing PE and the discrete, yet real, increase in overdiagnosis that has resulted from its adoption. The reclassification of clinically insignificant clot burden as a pathological state not only leads to overtreatment, transforming healthy people into patients, but also makes it incredibly difficult for us to assess the effectiveness of any diagnostic pathway. To understand the repercussions the adoption of CTPA as the accepted gold standard has had on clinical research, we must first address its limitations. In PIOPED II, the largest trial examining the diagnostic characteristics of CTPA, published in the NEJM in 2006, Stein et al found that in patients with a low risk of pulmonary embolism by clinical assessment, CTPA diagnosed far more PEs than the composite reference standard (a normal DSA or V/Q scan, a low-probability V/Q with a Wells score <2, or a negative lower-extremity ultrasound). In fact, in patients with a Wells score <2, 42% of the PEs diagnosed by CTPA were false positive findings, a significant increase in what would have been considered a pulmonary embolism by the standard diagnostic criteria of the day. Conversely, in high-risk patients CTPA was not sensitive enough to safely rule out PE. In patients with a Wells score >6, 40% of the negative CTPAs were false negatives (10).

Despite these significant flaws, CTPA has now become the gold standard against which the D-Dimer is judged. It is a standard prone to overdiagnosing low-risk patients with clinically irrelevant emboli and underdiagnosing high-risk patients with clinically relevant ones. Not only is this a poor standard to guide clinical judgment, but when used as the gold standard comparator it leads to an overestimation of D-Dimer’s utility. Early examinations of the accuracy of various D-Dimer assays found at best a moderate ability to rule out PE. When pre-CTPA gold standards were used (DSA, V/Q scan and serial ultrasound), a negative D-Dimer was not sufficient to rule out disease in a high-risk patient in whom PE was suspected (5). In such cohorts, only in patients with a Wells score of 2 or less could a D-Dimer be utilized to rule out PE. And so a portion of the PEs in moderate-risk patients that would be missed by the D-Dimer, but caught by the more traditional composite endpoint, are in turn also missed by the CTPA. This overestimates the sensitivity of the D-Dimer assay. Similarly, like the D-Dimer, CTPA tends to overdiagnose pulmonary embolism in the low-risk patient, which helps mask the true extent of D-Dimer’s poor specificity. Overall, CTPA is a gold standard all but designed to present an overly optimistic view of the D-Dimer assay.
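
To see how a flawed reference standard flatters the assay, consider a toy cohort with entirely invented numbers, in which the small clots the D-Dimer misses are the very ones the CTPA misses, and the two tests share a handful of false positives:

```python
# Hypothetical 1,000-patient cohort; every figure below is invented.
true_pe          = 100  # actual PEs (10% prevalence)
shared_misses    = 10   # small clots missed by BOTH D-Dimer and CTPA
ctpa_false_pos   = 20   # non-PEs the CTPA reference labels positive
ddimer_pos_in_fp = 15   # of those, how many also have a raised D-Dimer

# Truth: the D-Dimer detects everything except the 10 small clots.
true_sensitivity = (true_pe - shared_misses) / true_pe             # 0.90

# Judged against CTPA: the shared misses drop out of the denominator,
# and the correlated false positives count as "detections".
ref_positive    = (true_pe - shared_misses) + ctpa_false_pos       # 110
ddimer_detected = (true_pe - shared_misses) + ddimer_pos_in_fp     # 105
measured_sensitivity = ddimer_detected / ref_positive

print(true_sensitivity, round(measured_sensitivity, 3))  # 0.9 0.955
```

The clots both tests miss simply vanish from the books; the D-Dimer is graded against an answer key that shares its blind spots.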

The Righini et al trial committed all of the aforementioned errors in its examination of age-adjusted D-Dimer thresholds. Though the overall incidence of PE was high by modern standards (18.7%), the authors did not specifically state the incidence of PE in the subgroup in which D-Dimer was used to rule out disease, and thus it is hard to determine how the acuity of the cohort affected the negative predictive value. The only criterion available for judging the acuity of each subgroup is the number of patients stratified to each respective risk group. In the Righini study only 12.8% of the patients had a Wells score greater than 4 (1). In contrast, 33.2% of the Christopher cohort had a Wells score greater than 4 (7). The mortality in the high-risk group following a negative CTPA at 3-month follow-up was 1.2% in the Righini cohort compared to 8.6% in the Christopher study. This suggests the Righini cohort comprised a far healthier patient population than the Christopher trial. Following in the Christopher trialists’ footsteps, the authors used positive findings on CTPA, or any event or death during the 3-month follow-up period deemed due to VTE (as determined by three independent experts blinded to the patient’s initial diagnostic workup), as their surrogate gold standard. Though the authors claim that only one event was missed at 3-month follow-up in the patients discharged from the ED using the age-adjusted threshold, further examination reveals that seven deaths and seven suspected VTEs in fact occurred in this group, only one of which was deemed VTE-related by the expert panel. Though none of the seven deaths were judged to be related to pulmonary emboli, a number were caused by COPD and end-stage cancer, both of which are easily confused with pulmonary embolism and commonly serve as the default diagnosis on death records (13).

In 1977, Annals of Internal Medicine published an editorial by Dr. Eugene Robin on the then-current state of PE management. Though all the diagnostic tests used to differentiate disease from non-disease have changed, the flaws in management have persisted (11). Specifically, we continue to obsess over diagnosing clinically unimportant pulmonary emboli in the young and healthy while simultaneously ignoring the sick, vulnerable patients in whom PE is far more likely and clinically relevant. In August 2013, den Exter et al published an article in Blood supporting Dr. Robin’s thoughts (12). In this paper the authors examined the factors associated with recurrent pulmonary emboli and mortality in a cohort of 3,728 patients undergoing a workup for PE. They found that clot location, clot burden and even the identification of clot on CTPA were not important factors in predicting clinical outcomes at follow-up. In fact, mortality during the follow-up period was 10.3% in those with a subsegmental PE vs 6.3% in those with a proximal PE vs 5.2% in those with a negative CTPA. The only factors that demonstrated clinically significant predictive value were a history of malignancy, age and a history of heart failure. Simply put, elderly patients with comorbidities are at increased risk for clinically relevant pulmonary emboli. Similarly, the Christopher study reported that patients discharged after a negative CTPA had a mortality rate of 8.6%. No amount of testing can significantly modify this risk. Even those who do not have an embolic event diagnosed during their Emergency Department visit are at significant risk of experiencing one over the next 3 months. Clot burden, clot location and even the presence of clot on imaging did not predict clinical outcomes; patient variables did.

The D-Dimer assay is one of many flawed tests in a flawed system built to identify pulmonary emboli in the young and healthy, in whom the diagnosis is rarely of clinical importance. Like the PERC rule, and even to some extent the CTPA, D-Dimer performs best in this young, healthy cohort at low risk of clinical disease. Conversely, in the sick and vulnerable high-risk patient it is rarely negative, and even when it is, it does not possess the diagnostic qualifications to safely rule out the disease of concern. In fact, the only patient in whom D-Dimer can be consistently utilized is the young patient at low risk of pulmonary embolism. We are left with a test capable of ruling out pulmonary emboli of little clinical significance and incapable of ruling out the disease in the patients about whom we should be truly concerned. Clearly, despite its best intentions, D-Dimer adds very little to the diagnostic pathway for PE. Playing with thresholds on the ROC curve does nothing to improve D-Dimer’s test characteristics. Its success depends on its ability to ward off a fictitious disease in a healthy population that will likely do well no matter what: a test best suited to treat our own fears rather than our patients’ maladies. Surely there is a better way to identify those who require workups for PE. Exactly what this consists of is still unclear, but certainly ghost stories, campfire tales and even D-Dimer assays will provide no assistance.

 

Sources Cited:

1. Righini M et al. Age-Adjusted D-Dimer Cutoff Levels to Rule Out Pulmonary Embolism: The ADJUST-PE Study. JAMA. 2014;311(11):1117-1124.

2. Schouten HJ et al. Diagnostic accuracy of conventional or age adjusted D-dimer cut-off values in older patients with suspected venous thromboembolism: systematic review and meta-analysis. BMJ 2013;346:f2492

3. Ginsberg JS et al. Sensitivity and Specificity of a Rapid Whole-Blood Assay for D-Dimer in the Diagnosis of Pulmonary Embolism. Annals of Internal Medicine. 1998;129(12):1006-1011.

4. Stein PD et al. D-Dimer for the Exclusion of Acute Venous Thrombosis and Pulmonary Embolism. Annals of Internal Medicine. 2004;140(8):589-607.

5. Wells PS et al. Excluding Pulmonary Embolism at the Bedside without Diagnostic Imaging: Management of Patients with Suspected Pulmonary Embolism Presenting to the Emergency Department by Using a Simple Clinical Model and D-Dimer. Annals of Internal Medicine. 2001;135(2):98-107.

6. Goldstein NM et al. The Impact of the Introduction of a Rapid D-Dimer Assay on the Diagnostic Evaluation of Suspected Pulmonary Embolism. Arch Intern Med. 2001;161(4):567-571.

7. Writing Group for the Christopher Study Investigators. Effectiveness of Managing Suspected Pulmonary Embolism Using an Algorithm Combining Clinical Probability, D-Dimer Testing, and Computed Tomography. JAMA. 2006;295(2):172-179.

8. The PIOPED Investigators. Value of the Ventilation/Perfusion Scan in Acute Pulmonary Embolism: Results of the Prospective Investigation of Pulmonary Embolism Diagnosis (PIOPED). JAMA. 1990;263(20):2753-2759.

9. Kline JA et al. Prospective multicenter evaluation of the pulmonary embolism rule-out criteria. Journal of Thrombosis and Haemostasis. 2008;6(5):772-780.

10. Stein PD et al. Multidetector Computed Tomography for Acute Pulmonary Embolism. N Engl J Med 2006; 354:2317-2327.

11. Robin ED. Overdiagnosis and Overtreatment of Pulmonary Embolism: The Emperor May Have No Clothes. Ann Intern Med. 1977;87:775-781.

12. den Exter PL et al. Risk profile and clinical outcome of symptomatic subsegmental acute pulmonary embolism. Blood. 2013;122(7):1144-1149.

13. Wexelman BA et al. Survey of New York City resident physicians on cause-of-death reporting, 2010. Prev Chronic Dis. 2013;10:E76.

14. Sohne M et al. Accuracy of clinical decision rule, D-dimer and spiral computed tomography in patients with malignancy, previous venous thromboembolism, COPD or heart failure and in older patients with suspected pulmonary embolism. J Thromb Haemost 2006; 4: 1042–6.

15. Gibson NS et al. The Importance Of Clinical Probability Assessment In Interpreting A Normal D-Dimer In Patients With Suspected Pulmonary Embolism. Chest. 2008;134(4):789-793.

16. Righini M et al. Effects of age on the performance of common diagnostic tests for pulmonary embolism. Am J Med. 2000;109(5):357-361.

 

 

 

“The Adventure of the Dancing Men”

 


The illustrious Cardinal Commendoni suffered sixty epileptic paroxysms in the space of 24 hours, under which nature being debilitated and oppress’d he at length sank, and died. His skull being immediately taken off, I found his brain affected with a disorder of the hydrocephalous kind.

                                            -Gavassetti, 1586

 

 

 

The state of status epilepticus (SE) is one which evokes an almost visceral response of urgency. The physical manifestations of a mind in crisis are, if nothing else, strong motivators to action. We are trained to act with decisiveness and certainty, yet due to a paucity of high-quality trials, an ever-changing definitional diagnosis, and the utilization of surrogate endpoints in place of true evidence of benefit, our understanding of the management of status epilepticus has been severely constrained.

In a recent article published in JAMA, Chamberlain et al examined the efficacy of diazepam vs lorazepam in the treatment of status epilepticus in a pediatric population (1). The authors randomized 273 children, ranging from 3 months to 18 years old, experiencing an episode of status (defined as 5 minutes or longer of seizure activity, or multiple seizures without a return to baseline) to receive either 0.2 mg/kg of diazepam or 0.1 mg/kg of lorazepam IV. Though the authors found no significant difference in their primary or secondary endpoints (seizure cessation within 10 minutes, rate of recurrence, and time-to-seizure-cessation), certain limitations make it difficult to interpret the utility of this publication.

The authors enrolled patients with at least 5 minutes of seizure activity who had not received any anti-epileptic drugs (AEDs) en route to the hospital. These exclusion criteria obviously affected enrollment: over a 4-year period, of the 11,630 patients assessed for eligibility, only 273 were enrolled in the trial. 4,357 were excluded for no longer seizing upon arrival to the ED and 6,729 for other factors, presumably in large part for receiving AED treatment before arriving at the hospital. This injures the trial’s external validity, as the child seizing on presentation to the Emergency Department who has already received AEDs en route is a different, and far more commonly encountered, patient than one who has yet to receive any intervention. Thus the spectrum of disease encountered in this cohort is far less severe than in previous trials examining SE.

In fact this is only the latest of many changes in enrollment criteria for trials examining the various treatments for SE. The definition of status itself is continually in flux. In 1993 the American Epilepsy Society Working Group on Status Epilepticus defined status as “a seizure lasting 30 minutes or the occurrence of two or more seizures without recovery of consciousness in between” (2). Since this statement, the temporal requirement has become progressively more lax. The Working Group subsequently lowered the time requirement to 20 minutes. In 1998 the Veterans Affairs Status Epilepticus Cooperative Study Group (VASEC) published a study comparing various treatment options for SE (3); their enrollment criteria defined SE as 10 minutes of continuous seizure activity or multiple seizures without a return to baseline in between. That same year Lowenstein et al published an article in the NEJM reviewing the etiology of SE and recommended the definition be changed to continuous seizure activity lasting 5 minutes or more (4). In 2001 the San Francisco Emergency Medical Services published the Pre-Hospital Treatment of Status Epilepticus (PHTSE) trial, comparing the pre-hospital efficacy of diazepam, lorazepam and placebo; its authors adopted Dr. Lowenstein’s suggestion, enrolling patients with seizure activity lasting greater than 5 minutes (5). Since then, the majority of publications examining SE have used this 5-minute definition. Though Dr. Lowenstein’s argument, that most seizures lasting more than 5 minutes require treatment, is a valid one, placing these patients in the same category as those with continuous seizures for greater than 30 minutes seems misguided. In fact, comparing the Veterans study to the PHTSE cohort, the 30-day mortality fell from 37% to 9.2% (3,5). Clearly the acuity of the patients included in these respective cohorts is significantly different.

The second limitation, seen both in the Chamberlain trial and throughout the recent SE literature, is the belief that time-to-seizure-cessation is a clinically relevant endpoint. Though there is relatively robust data describing the association between seizure length and poor outcomes (3,4), the converse statement, that chemically shortening seizure length will in turn improve outcomes, is inherently flawed. In the PHTSE trial, 21% of patients’ seizures had terminated upon arrival to the hospital in the placebo group, compared to 42.6% and 59.1% in the diazepam and lorazepam groups respectively (5). Despite the obvious efficacy of both these medications in shortening time-to-seizure-cessation, there was no statistical difference in mortality or functional neurological outcomes between the active and control groups. Likewise in the VASEC trial, the authors found lorazepam more efficacious than the other treatment strategies in stopping seizures in a timely manner, yet despite this superiority no mortality benefit was observed (3). They did find that those resistant to first- and second-line agents were far more likely to have a malignant cause of their SE. Clearly it is the underlying disease process that results in refractory status that causes the bad outcomes.

The only seemingly clinically relevant endpoint included in the Chamberlain publication was the rate of ventilatory support (defined as the need for bag-valve-mask ventilation or endotracheal intubation) required in each group. This too was statistically equivalent: 16% and 17.6% of patients required some form of ventilatory assistance in the respective groups (1). A similar proportion of patients required ventilatory assistance in the VASEC, PHTSE and RAMPART cohorts (3,5,6). In fact, in the PHTSE cohort the need for intubation did not differ whether patients received lorazepam, diazepam or placebo (5), again indicating that it is the underlying pathology rather than the medical intervention that causes the subsequent airway compromise.

Our continued vacillations in the definition of SE have produced a much more benign disease process than the status of our forefathers. The acuity of the patients included in trials examining treatments for SE has been progressively decreasing over the past 15 years. In Chamberlain et al, 33% of the population’s seizures were febrile in nature, which often require no further treatment. Compare that to the VASEC cohort, in which 33% had a life-threatening cause of their SE. Given this dilution, identifying benefit for any treatment in a modern-day SE cohort suffers from a significant Pollyanna effect. Additionally, our persistent assumption that time-to-seizure-cessation is a clinically relevant endpoint further obscures our understanding of any true treatment effect our various interventions may provide.

Chamberlain et al demonstrated that with today’s broad spectrum of status, the choice of first-line benzodiazepine matters very little. Whether this is because of the equal efficacy of the various medications or because in most cases the seizures will resolve no matter what treatment is given is hard to say without a true placebo group. What is clear is that the underlying cause of the seizures is far more important than the choice of medication. Those who are resistant to first- and second-line treatments are far more likely to trace their lineage to the status of old, and to have a malignant cause that should be pursued.

Sources Cited:

1. Chamberlain JM et al. Lorazepam vs Diazepam for Pediatric Status Epilepticus: A Randomized Clinical Trial. JAMA. 2014;311(16):1652-1660.

2. Brodie MJ. Status epilepticus in adults. Lancet. 1990;336(8714):551-552.

3. VA Status Epilepticus Cooperative Study Group: A comparison of four treatments for generalized convulsive status epilepticus. N Engl J Med 1998;339: 792–798

4. Lowenstein DH, Alldredge BK.  Status epilepticus.  N Engl J Med. 1998; 338:970-976.

5. Alldredge BK, Gelb AM, Isaacs SM, et al. A comparison of lorazepam, diazepam, and placebo for the treatment of out-of-hospital status epilepticus. N Engl J Med 2001;345:631-7

6. Silbergleit R, Durkalski V, Lowenstein D, Conwit R, Pancioli A, Palesch Y, Barsan W; NETT Investigators. Intramuscular versus Intravenous Therapy for Prehospital Status Epilepticus. N Engl J Med. 2012 Feb 16;366(7):591-600.

 

 

 

“A Timely Reexamination of the Case of the Thirteen Watches”


Doing the same thing over and over again and expecting different results.

                                            -Albert Einstein on insanity

For nearly a decade now our mad dash to the cath lab has been based on flawed data and an illogical certainty that every moment of delay is detrimental to our patients. As such, we were completely flabbergasted when Menees et al published their findings in the NEJM in September 2013 (1). Despite a reduction in mean door-to-balloon time from 82 minutes to 67 minutes, no mortality benefit was demonstrated. After 4 years and just over 95,000 patients, the authors were unable to demonstrate any benefit associated with this dramatic decrease in time to revascularization. These findings should not be as surprising as they initially appear. In fact, there is a multitude of evidence demonstrating that “time is myocardium” is a far more complex phenomenon than door-to-balloon time can account for. Rather than taking a rational, data-driven approach to this pathology, we instead focused on the data that suited our desire to act. The evidence used to support our current STEMI guidelines is primarily based on an observational cohort published in JAMA in 2000 (2). This article by Cannon et al demonstrated a correlation between increased door-to-balloon times and increased mortality. The obvious shortcomings of these types of data sets, and the mountain of evidence demonstrating the far more complex reality of time is myocardium, can be found in a former post. What is important is how we utilized this limited data to serve our purposes and ignored the remainder of the evidence. With our blinders firmly attached, we chose to make door-to-balloon time the metric of choice when assessing quality in STEMI management.

Though the Menees cohort has reminded us that door-to-balloon time is very rarely an important metric, it is unlikely these findings will have any influence on our current practice. The momentum we have gained in this sprint towards futility has created a body with an inertial vector that is almost impossible to deflect. What this article should provide is a warning: an example of what happens when a healthcare system mobilizes extraordinary quantities of resources based on flawed surrogate outcomes. Currently we stand at a similar crossroads in yet another field of medicine, once again on the precipice of mobilizing these very same resources based on similarly flawed data. This time the question at hand is: is time brain?

In February of 2013 the results of 3 RCTs were published in the NEJM (3,4,5). They constituted the largest and highest-quality trials examining the efficacy of endovascular interventions for acute ischemic stroke, and all 3 were negative. Though each trial had its own unique design, none was able to demonstrate even a trend towards benefit when comparing endovascular interventions to IV tPA therapy alone. So much so that the authors of the largest of the trials, IMS-3, state in their conclusion that these therapies should not be utilized outside the purview of a randomized controlled trial. Yet despite these uniformly negative findings, there has been a great deal of pressure to once again create the infrastructure necessary to deliver eligible patients swiftly to endovascular-capable facilities. After all, every minute counts…

“Time is brain” has been a commonly accepted mantra of stroke management since the earliest inception of reperfusion therapies. And much like the overall efficacy of reperfusion therapy in acute CVA, the data addressing the time-is-brain hypothesis have yielded mixed results. In the arena of thrombolytic therapy, the largest, highest-quality data sets have failed to uncover any convincing evidence that time to treatment is an important determinant of neurologic outcomes. The Cochrane Database examined all 26 trials comparing thrombolytics to placebo and found no evidence that time-to-treatment affected outcomes (6). IST-3, the largest trial to date examining thrombolytics in acute ischemic stroke, found no temporal relationship between improved outcome and time-to-treatment (7). Finally, in the original manuscript of the NINDS trial, the rallying cry of tPA apologists worldwide, the authors were unable to demonstrate that patients who received tPA in under 90 minutes fared better than those treated in the 90-180 minute window (8). In fact, when Dr. Jerry Hoffman and Dr. David Schriger reexamined the patient-level data from the NINDS cohort, they too found that time to treatment had no association with 3-month neurological outcomes (9). Moreover, when they accounted for the obvious baseline differences present in the NINDS trial (10) (using change in NIHSS at 3 months), the overall benefit of tPA also disappeared.

Other than a highly selected review of eight cherry-picked trials published in the Lancet in 2010 (11), no analysis of RCT data has demonstrated a temporal benefit to IV tPA therapy in acute ischemic CVA. A number of publications using registry data have attempted to examine the time-is-brain phenomenon. Two such studies were published a month apart from one another in JAMA and JAMA Neurology (12,13). These trials used similar registries, similar methods and similar statistical analyses, and yet found completely antipodal results. These contradictory findings are less a comment on the truth of the temporal relationship of revascularization than on the limitations of such data and how readily it submits to even the smallest amount of statistical coercion. In a brilliantly written letter to the editor, Dr. Ryan Radecki addresses the flaws in the conclusions drawn by Saver et al, the authors of the JAMA article that concluded time is in fact brain (14). Dr. Radecki writes:

Dr. Saver and colleagues used the Get With The Guidelines–Stroke (GWTG-Stroke) registry to investigate the association of time to tissue-type plasminogen activator (tPA) treatment and outcomes from stroke. However, the authors did not address the handling of transient ischemic attacks (TIAs) and stroke mimics within the registry, which is a potential confounder in the abstraction method used in the study.

Most recently the authors of IMS-3 published a secondary analysis of their cohort in an effort to demonstrate that, as with IV tPA therapy, time to reperfusion matters for endovascular treatments (15). Using the IMS-3 cohort, these authors examined the association between time to endovascular intervention and 3-month functional neurological outcomes. After retrospectively excluding patients found not to have large-vessel occlusions (of the proximal MCA or ICA terminus), Khatri et al found a statistically significant association between time-to-intervention and improved functional neurological outcomes. Like the Cannon and Saver cohorts, this data is severely flawed. There is a multitude of reasons why patients may be delayed in receiving endovascular therapy; most obviously, they were sicker and required some form of stabilization before being transported to the intervention suite. In fact, after the authors accounted for some of these confounders using multifactorial logistic regression, the overall effect that time to intervention had on functional neurological outcomes translated to a coefficient of determination (R²) of 0.18, meaning that 18% of the variation in neurological outcomes at 3 months can be explained by time-to-intervention; the remaining 82% is determined by other factors. Clinically, a very small effect. Especially when one considers that this regression model did not account for the fact that the subgroup of patients treated in a more timely fashion was far more likely to include patients having a TIA or a stroke mimic, who will universally have a better outcome independent of the intervention they receive. More importantly, this was a negative study which found no difference in 3-month outcomes between patients who received IV thrombolytics plus endovascular treatment and those who received IV thrombolytics alone.
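
As a reminder, the coefficient of determination is simply the share of outcome variance the model accounts for (the formula below is its standard least-squares form; logistic models report pseudo-R² analogues, but the interpretation offered above is the same):

$$R^{2} = 1 - \frac{\sum_{i}\left(y_{i}-\hat{y}_{i}\right)^{2}}{\sum_{i}\left(y_{i}-\bar{y}\right)^{2}}$$

An R² of 0.18 therefore leaves 82% of the variation in 3-month outcomes sitting in the residual, attributable to everything the model does not capture.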

This is observational data demonstrating a small association between time to treatment and improved outcomes. Using this data, the most we can say is that patients who take longer to receive a therapeutic intervention have worse neurological status at 3 months. The corollary statement, that reducing this time to treatment will improve neurological outcomes, cannot be made and, taking into account the summation of the data on reperfusion therapies for acute ischemic stroke, is most likely false.

Data sets like the Cannon and Saver articles should not be mistaken for investigations in search of scientific truth designed to answer clinically relevant questions. Rather, these publications are examples of how easily we can manipulate data to serve our purposes. Like its predecessors, the Khatri et al reanalysis of the IMS-3 cohort appears promising as long as you choose to ignore the control group. At this point all we have to support the efficacy of endovascular therapies in acute ischemic stroke are stories of patients rising from the cath lab table reciting poetry they never knew before their infarct, and perfusion studies taken after the intervention showing near-normal restoration of blood flow. Let us not confuse anecdote and pretty pictures with evidence of benefit. Using such data to justify the restructuring of our healthcare infrastructure is unwise. The resources required to train an army of interventionalists to be ready at a moment’s notice, equip a nation of cath labs to be accessible 24 hours a day, and mobilize a pre-hospital system to deliver these patients swiftly and safely to facilities capable of endovascular intervention would be massive. All for a treatment that has not only failed to demonstrate efficacy over our current “standard of care”, but rests on a theory of temporal urgency that has never been demonstrated in a conclusive fashion. It is not hard to imagine that if we fail to heed the warnings of the Menees trial, in 13 years the NEJM will once again publish findings from a large national registry. Only on this occasion it will examine patients undergoing endovascular interventions for acute ischemic stroke. Like the Menees cohort, this registry will demonstrate that over a 5-year period we reduced time to intervention impressively, and yet despite this effort and the massive resources invested to achieve it, no improvement in neurological outcomes will be found. We will be left wondering where we went wrong.

Here and now…

 

Sources Cited:

1. Menees DS et al. Door-to-balloon time and mortality among patients undergoing primary PCI. N Engl J Med. 2013 Sep 5;369(10):901-9

2. Cannon CP, Gibson CM, Lambrew CT, et al. Relationship of symptom-onset-to-balloon time and door-to-balloon time with mortality in patients undergoing angioplasty for acute myocardial infarction. JAMA. 2000; 283: 2941–2947.

3. Broderick JP, Palesch YY, Demchuk AM, et al. Endovascular therapy after intravenous t-PA versus t-PA alone for stroke. N Engl J Med 2013;368:893-903

4. Ciccone A, Valvassori L, Nichelatti M, et al. Endovascular treatment for acute ischemic stroke. N Engl J Med 2013;368:904-913

5. Kidwell CS, Jahan R, Gornbein J, et al. A trial of imaging selection and endovascular treatment for ischemic stroke. N Engl J Med 2013;368:914-923

6. Wardlaw JM, Murray V, Berge E, Del Zoppo GJ. Thrombolysis for acute ischaemic stroke. Cochrane Database Syst Rev. 2009 Oct 7;(4):CD000213.

7. The IST-3 collaborative group. The benefits and harms of intravenous thrombolysis with recombinant tissue plasminogen activator within 6 h of acute ischaemic stroke (the third international stroke trial [IST-3]): a randomised controlled trial. Lancet. 2012;379(9834):2352-2363.

8. The National Institute of Neurological Disorders and Stroke rt-PA Stroke Study Group. Tissue plasminogen activator for acute ischemic stroke. N Engl J Med 1995;333:1581-1587

9. Hoffman JR, Schriger DL. A graphic reanalysis of the NINDS Trial. Ann Emerg Med. 2009 Sep;54(3):329-36, 336.e1-35.

10. Mann J. Truths about the NINDS study: setting the record straight. West J Med. 2002;176(3):192-194.

11. Lees KR, Bluhmki E, von Kummer R, et al. Time to treatment with intravenous alteplase and outcome in stroke: an updated pooled analysis of ECASS, ATLANTIS, NINDS, and EPITHET trials. Lancet. 2010 May 15; 375(9727): 1695-703.

12. Saver JL, Fonarow GC, Smith EE, et al. Time to treatment with intravenous tissue plasminogen activator and outcome from acute ischemic stroke. JAMA. 2013 Jun 19; 309(23): 2480-8.

13. Ahmed N et al. Results of Intravenous Thrombolysis Within 4.5 to 6 Hours and Updated Results Within 3 to 4.5 Hours of Onset of Acute Ischemic Stroke Recorded in the Safe Implementation of Treatment in Stroke International Stroke Thrombolysis Register (SITS-ISTR): An Observational Study. JAMA Neurol. 2013;70(7):837-844.

14. Radecki RP.  Acute ischemic stroke and timing of treatment. JAMA. 2013 Nov 6;310(17):1855-6

15. Khatri P et al. Time to angiographic reperfusion and clinical outcome after acute ischaemic stroke: an analysis of data from the Interventional Management of Stroke (IMS III) phase 3 trial. Lancet Neurol. Published online April 28, 2014.


“The Adventure of the Greek Interpreter Revisited”


 

If our affair with thrombolytics had not started off with the success it did, we might not still be trying to nostalgically relive our yesteryears of thrombolytic glory. Whether it was streptokinase, alteplase or tenecteplase (TNK), thrombolytics have consistently demonstrated a mortality benefit when used in patients experiencing an ST-elevation myocardial infarction (1). If it were not for the superiority of PCI in both measures of efficacy and financial gain, our romance with thrombolytics might still be in full swing. Our initial triumph in STEMI patients has led us to believe in the efficacy of thrombolytics in all hypercoagulable disease states, despite their mediocre performance outside the confines of ACS.

When thrombolytics fell out of favor in the management of STEMI, supplanted by mechanical reperfusion therapy, it seemed only natural that we turn our focus to the treatment of acute ischemic stroke to fill our thrombophilic void. Though the efficacy of thrombolytics in CVA is still under debate, it is clear they have never demonstrated the mortality benefit exhibited in myocardial infarction (2). What we are left debating are small differences on scales measuring functional neurological outcomes. Scales so unreliable that two neurologists grading the very same patient, one after another, often disagree by one or more points (3). Whether or not these potential improvements in neurological outcome are of clinical relevance, they are a far cry from the life-saving benefits thrombolytics provide in STEMI management.

Pulmonary embolism was another likely candidate for thrombolytic intervention. As clinicians, we have become hyperaware of and preoccupied with diagnosing even the most clinically irrelevant pulmonary emboli. When we do happen to stumble upon emboli of clinical import, we ironically have very little to offer the patient other than a hospital bed, IV heparin and the promise of a six-month course of Coumadin therapy. So the idea that thrombolytics may help dissolve these larger clots is an appealing one, to say the least. Despite the sparse evidence supporting their utility and no mortality benefit demonstrated in patients with massive pulmonary embolism (4), thrombolytics have gained general acceptance in this subgroup. And though this “standard of care” is based more on our fear of watching the patient decompensate in front of us than upon proof of benefit, their use in the management of massive pulmonary embolism is now a class IIa recommendation in the AHA guidelines on the management of pulmonary embolism (5).

A looming question is whether patients with sub-massive pulmonary embolism are candidates for lytic therapy. The PEITHO trial was the largest RCT to have examined this question to date (6). PEITHO’s results, originally released in abstract form last year, were finally published in their totality on April 10th, 2014 by Meyer et al in the NEJM. This study randomized normotensive patients with radiographic evidence of PE and concern for right heart strain (positive troponin, BNP, or evidence of right heart strain on CT or echo) to either a thrombolytic strategy (TNK) or placebo. In “The Adventure of the Greek Interpreter” I discussed the results of this trial, but in brief it was disappointing. The authors claim success in a number of surrogate endpoints they categorized as “hemodynamic collapse”. As readers we cannot help but feel cheated, as the mortality between the groups was statistically equivalent. What the PEITHO trial did illustrate was that when patients are given thrombolytics, they bleed. Overall there was an approximately 9% difference in major bleeding between the TNK and placebo groups (11.5% vs 2.4%). Additionally there was an approximately 2% increase in ICH in those patients given TNK.

And so, since the acute benefits of thrombolytics in pulmonary embolism are nothing less than sub-tacular, the debate on their utility in sub-massive pulmonary emboli hinges on their ability to improve functional outcomes in the long term. The evidence supporting thrombolytics’ efficacy in preventing post-embolic pulmonary hypertension is unconvincing at best. Unfortunately the authors of the PEITHO trial have yet to publish long-term functional outcomes. In PEITHO’s trial design paper, published in the American Heart Journal in 2012, the authors report that 6-month functional outcomes would be recorded, including NYHA classification and echocardiographic findings. A second publication on the PEITHO cohort including these results may very well answer some of the uncertainties we currently have (7).

Until then, the best evidence we have supporting the practice of thrombolytic therapy in acute pulmonary embolism is the MOPETT trial (8). In this trial, comprising 121 patients diagnosed with sub-massive pulmonary embolism and evidence of right heart strain, patients were randomized to either placebo or 50 mg of tPA (“half-dose” tPA). The authors found a staggering 41% absolute difference in their primary endpoint, the number of patients with pulmonary hypertension at 2 years post-enrollment. As discussed in the original post, “The Adventure of the Greek Interpreter”, the rate of pulmonary hypertension in the placebo arm was far higher than the rate observed in similar cohorts (9,10,11). These impressive results are far more likely due to the surrogate outcome the authors chose as their primary endpoint than to the efficacy of thrombolytics. Whereas most trials define pulmonary hypertension by echocardiographic evidence in the symptomatic patient, the authors of the MOPETT trial chose to use echocardiographic findings alone. In the asymptomatic patient, we are unsure of the clinical relevance this echocardiographic information provides in isolation.
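
One way to appreciate how implausible that effect size is: convert it to a number needed to treat (simple arithmetic, sketched below):

```python
# A 41% absolute risk reduction implies a number needed to treat of roughly
# 2.4 -- one case of pulmonary hypertension prevented for every 2-3 patients
# lysed. Effect sizes this large are almost never real.
absolute_risk_reduction = 0.41
print(round(1 / absolute_risk_reduction, 1))  # 2.4
```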

A recently published trial by Jeff Kline, the man who has defined pulmonary embolism for the past decade, hoped to delineate the clinical effect of thrombolytic therapy on the incidence of pulmonary hypertension after sub-massive pulmonary embolism (12). Named TOPCOAT, this trial examined thrombolytics’ effects on functional outcomes 3 months after pulmonary embolism. Patients were randomized to either a single bolus of 30-50 mg of tenecteplase (TNK) or placebo. Unfortunately, interpreting the results is difficult owing to the trial’s premature termination (after only 83 patients) and its convoluted primary endpoint: a composite of recurrent PE, poor functional capacity (RV dysfunction with either dyspnea at rest or exercise intolerance), or an SF-36 Physical Component Summary (PCS) score <30 at 90-day follow-up. The authors also examined the rate of pulmonary hypertension as defined by echocardiographic findings.

In the TOPCOAT trial, the TNK arm certainly seemed to have slightly better functional outcomes at 90 days. The TNK group had fewer patients with a New York Heart Association (NYHA) functional class greater than 3 (8 in the placebo group vs 2 in the TNK group) and fewer patients with a perception-of-wellness score under 30 (2 vs 0). None of these differences reached statistical significance, and overall the groups’ functional outcomes were fairly similar; both arms of the trial had almost identical mean NYHA scores, VEINES-QOL scores, and SF-36 Mental Component scores. In fact the number of patients with a poor functional outcome at 3 months, defined as NYHA >3 with evidence of right heart hypertrophy on echo (the traditional definition of post-embolic pulmonary hypertension), was identical (approximately 7.5%). If echocardiographic findings alone (similar to the MOPETT definition) were used to diagnose post-embolic pulmonary hypertension, the incidence would have increased to 32.5%.
TOPCOAT, like MOPETT, demonstrated that thrombolytics may provide some benefit in long-term outcomes after sub-massive pulmonary embolism. Just how relevant these benefits are is still unclear. TOPCOAT further reinforces that the unrealistic findings in MOPETT were just that: too good to be true. Whether these benefits outweigh the 2% risk of ICH that PEITHO revealed is still unknown. Furthermore, it is still unclear who truly benefits from acute thrombolytic therapy. It may very well be that the young healthy patient with no comorbidities and significant pulmonary reserve is unlikely to develop pulmonary hypertension, while the older patient with COPD or chronic heart failure is more at risk and more likely to benefit from thrombolytic therapy. Ironically, according to the PEITHO cohort, these are the very same patients at the highest risk for ICH.

Finally, the question arises of whether the differences in the doses and protocols used in the MOPETT, TOPCOAT and PEITHO trials alter clinical outcomes and the incidence of ICH. Was the “half-dose” strategy used in the MOPETT trial the reason for this cohort’s low rate of ICH, or was it just random chance and a small population size? From the existing data we are unable to resolve these uncertainties. Historically, these lines of inquiry have proved fruitless. As far back as the GISSI-2 trial (13), examining thrombolytics in acute myocardial infarction, no particular thrombolytic agent demonstrated superiority over the others. Not only were the authors unable to demonstrate the superiority of any particular agent, it didn’t matter whether these clot busters were administered with or without heparin. Additionally, when the Cochrane Group examined thrombolytic therapy for acute ischemic stroke, they were unable to find a difference in efficacy between the individual thrombolytic agents or among the various dosing strategies utilized (14).

Like thrombolytics in acute ischemic stroke, their use in sub-massive pulmonary embolism has failed to demonstrate the objective benefits we saw with acute myocardial infarction. Thus, as with CVA, we are left deciphering the relevance of subjective endpoints of uncertain value. At least in the area of acute ischemic stroke we are familiar with the methods used to evaluate functional outcomes, and there are accepted standards (an mRS >2) for poor outcomes against which we can judge performance. The metrics used to evaluate functional outcomes in post-pulmonary embolism patients are as yet alien, and there has yet to be a consistent set of measures or time period utilized in assessing them. There does seem to be a consistent signal throughout the thrombolytic literature for pulmonary embolism; whether it is clinically relevant or outweighs the obvious harms is still uncertain. At least in theory, “half-dose” thrombolytic therapy seems physiologically plausible, but it is important and healthy that we maintain a robust state of skepticism until we have more than physiological reasoning and the warm memories of the golden years of thrombolytics supporting their use in sub-massive pulmonary embolism.

Sources Cited:

  1. Fibrinolytic Therapy Trialists’ (FTT) Collaborative Group. Indications for fibrinolytic therapy in suspected acute myocardial infarction: collaborative overview of early mortality and major morbidity results from all randomised trials of more than 1000 patients. Lancet. 1994 Feb 5;343(8893):311-22.
  2. Wardlaw JM, Murray V, Berge E, del Zoppo GJ. Thrombolysis for acute ischaemic stroke. Cochrane Database of Systematic Reviews 2009, Issue 4. Art. No.: CD000213. DOI: 10.1002/14651858.CD000213.pub2.
  3. Banks et al. Outcomes validity and reliability of the modified Rankin scale: implications for stroke clinical trials: a literature review and synthesis. Stroke. 2007 Mar;38(3):1091-6. Epub 2007 Feb 1.
  4. Wan S, Quinlan DJ, Agnelli G, Eikelboom JW. Thrombolysis compared with heparin for the initial treatment of pulmonary embolism: a meta-analysis of the randomized controlled trials. Circulation. 2004; 110: 744–749
  5. Jaff et al. Management of Massive and Submassive Pulmonary Embolism, Iliofemoral Deep Vein Thrombosis, and Chronic Thromboembolic Pulmonary Hypertension: A Scientific Statement From the American Heart Association. Circulation. 2011; 123: 1788-1830
  6. Meyer et al. Fibrinolysis for Patients with Intermediate-Risk Pulmonary Embolism. N Engl J Med 2014; 370:1402-1411. April 10, 2014
  7. Steering Committee. Single-bolus tenecteplase plus heparin compared with heparin alone for normotensive patients with acute pulmonary embolism who have evidence of right ventricular dysfunction and myocardial injury: rationale and design of the Pulmonary Embolism Thrombolysis (PEITHO) trial. Am Heart J. 2012 Jan;163(1):33-38.e1. doi: 10.1016/j.ahj.2011.10.003.
  8. Sharifi et al.  Moderate pulmonary embolism treated with thrombolysis (from the “MOPETT” Trial). Am J Cardiol. 2013 Jan 15;111(2):273-7
  9. Pengo V, Lensing AW, Prins MH, et al, for the Thromboembolic Pulmonary Hypertension Study Group. Incidence of Chronic Thromboembolic Pulmonary Hypertension after Pulmonary Embolism. N Engl J Med 2004; 350:2257-2264
  10. Kline JA, Steuerwald MT, Marchick MR, Hernandez-Nino J, Rose GA. Prospective evaluation of right ventricular function and functional status 6 months after acute submassive pulmonary embolism: frequency of persistent or subsequent elevation in estimated pulmonary artery pressure. Chest. 2009; 136: 1202–1210.
  11. Becattini C, Agnelli G, Pesavento R, et al. Incidence of chronic thromboembolic pulmonary hypertension after a first episode of pulmonary embolism. Chest 2006;130(1):172-175.
  12. Kline et al. Treatment of submassive pulmonary embolism with tenecteplase or placebo: cardiopulmonary outcomes at 3 months: multicenter double-blind, placebo-controlled randomized trial. J Thromb Haemost. 2014 Apr;12(4):459-68.
  13. Gruppo Italiano per lo Studio della Sopravvivenza nell’Infarto Miocardico . GISSI-2: a factorial randomised trial of alteplase versus streptokinase and heparin versus no heparin among 12 490 patients with acute myocardial infarction. Lancet 1990; 336: 65-71
  14. Wardlaw JM, Koumellis P, Liu M. Thrombolysis (different doses, routes of administration and agents) for acute ischaemic stroke. Cochrane Database Syst Rev. 2013 May 31

 

“The Case of the Dying Detective Continues…”

A picture of Florence Nightingale (1820-1910), “The Lady with the Lamp”, the English nurse famous for her work during the Crimean War, seen here in the hospital at Scutari, Turkey.

Survivors of Armageddon in any of its many forms, zombie, alien, or otherwise, are often left in a state of emotional turmoil. They face an uncertain future, the loss of loved ones, and the constant stress of imminent danger. Underneath the obvious anguish lies a deeper, more subtle but equally distressing sentiment: uncertainty. Now faced with a world completely devoid of the values they once held dear, they are often incapable of finding meaning in this post-apocalyptic wasteland. On March 18th, 2014, the publication of the ProCESS trial ushered in a new era of sepsis management (1). And yet, despite being the largest and highest quality trial thus far to examine the efficacy of various strategies for managing the septic patient, it has done very little to illuminate what this post-Early Goal Directed Therapy (EGDT) era will entail.

In 2001, Rivers et al published the findings of a single-center, 263-subject RCT examining the efficacy of an Emergency Department-based protocol consisting of stepwise goals meant to optimize hemodynamics and tissue perfusion (2). Comparing this protocol to “standard care”, the authors reported astounding results, with an absolute mortality benefit of 16% in favor of the protocol-based strategy. Earlier trials of goal-directed therapy had failed to demonstrate benefit when applied to ICU patients (11,12); the same concept now obtained incredible results when implemented in the Emergency Department. And thus the era of EGDT was born. This acronym was the battle cry for Emergency Physicians near and far. Enforced, in some cases, in a militaristic fashion, it became the standard of care in Emergency Departments internationally.

However, there was unease among the troops, in the form of a number of voices opposed to accepting EGDT in its entirety. After all, was it wise to globally adopt a protocol based on a single-center study with so few participants? They challenged the wisdom of the unquestioning application of EGDT as a bundled therapy. Though some components of EGDT undoubtedly benefit patients in septic shock (fluids, early antibiotics and supportive care), others have proven to be of no benefit and in some cases harmful (dobutamine use and CVP monitoring) (3). These subtleties required further examination before adopting the bundle universally.

ProCESS sought to address these very concerns, and in a sense it was a success. In a 1:1:1 RCT design, Yealy et al compared the Rivers EGDT protocol to both a less invasive but still protocol-based strategy and a “usual care” group (care as determined by the attending physician). The authors found no difference in any of the endpoints measured. Most importantly, the primary endpoint, 60-day mortality, was 21.0%, 18.2%, and 18.9%, respectively. Although there were small differences in the total amount of fluid given within the first 6 hours, the main differences among the 3 groups were the use of vasopressors (significantly higher in the two protocol-based groups) and dobutamine (only used with any consistency in the EGDT group).

ProCESS exposes many important aspects of the management of sepsis. First, the importance of EGDT lies not in the execution of the bundle in its entirety, but rather in the value of early and aggressive fluid resuscitation and the necessity of early administration of broad-spectrum antibiotics. ProCESS also establishes that there is more than one way to manage the septic patient, providing evidence that the unstructured judgment of physicians is as effective as a standardized protocol in determining fluid status, hemodynamics and tissue perfusion.

What the ProCESS trial fails to divulge is the most effective strategy to guide fluid therapy. The authors compared unstructured clinician judgment (not specifically defined) of fluid responsiveness to either CVP or SBP plus shock index, neither of which is a reliable indicator of true fluid responsiveness. We have known for some time now that, from a physiological standpoint, CVP is a poor marker of fluid responsiveness (4). Since the publication of the Rivers EGDT bundle, many more elegant and intrinsically accurate methods of assessing fluid responsiveness have been proposed.

Bedside ECHO, IVC ultrasound, and non-invasive CO monitors have all been suggested as alternatives to CVP monitoring (each found to be a more reliable predictor of fluid responsiveness). The trials that examine the accuracy of these methods in assessing fluid responsiveness have used the surrogate endpoint of cardiac output (CO), measured by pulmonary artery catheter (PAC) (5,6,7,8,9). The PAC has generally been viewed as the gold standard for measuring CO, and yet in the case of assessing fluid responsiveness in the septic patient it should be viewed as a surrogate endpoint. When treating a patient in septic shock it is not critical to know their specific CO or how our fluid challenge affects it. What is important is how our fluid challenge affects this patient’s morbidity and mortality. Though we assume that cardiac output and direct assessment of fluid responsiveness with a PAC are ideal metrics to follow, we have no real proof supporting this concept. In fact, the only real evidence we have demonstrates just the opposite. A large multi-center RCT published by Richard et al in JAMA in 2003 examined this very question (10). 681 ICU patients in shock (86% septic in origin) were randomized to have their treatment guided by PAC measurements or based solely on the clinical judgment of the treating physician. This trial failed to demonstrate any clinical benefit from the direct monitoring of a patient’s cardiac output and fluid responsiveness. Thus, using the accuracy with which ECHO, IVC ultrasound, or non-invasive CO monitors predict PAC findings to decide the ideal strategy to guide fluid resuscitation, when direct measurement of these metrics via PAC was of no benefit to clinical outcomes, seems logically flawed.

It is necessary to examine how ECHO, IVC ultrasound and non-invasive CO monitors affect patient-oriented, clinically relevant endpoints. Rivers et al proposed CVP, and up until the publication of the ProCESS trial, it was the only metric that, when used to guide fluid resuscitation in a clinical trial, improved mortality. The ProCESS trial has demonstrated that CVP is not superior to unstructured clinician judgment. Unfortunately, ProCESS fails to provide us with a better option. ECHO, IVC ultrasound, or non-invasive CO monitors may be more accurate guides, but until they are tested against clinician judgment using patient-oriented endpoints, it is hard to truly quantify their utility. In the ProCESS trial, mortality did not differ between groups despite over a liter’s difference in the quantity of fluid administered (5,059 mL, 5,511 mL, and 4,362 mL, respectively). This may suggest a precise measurement of fluid responsiveness is not necessary (1). Merely assessing for fluid tolerance rather than responsiveness, using IVC ultrasound, may be the simplest and most effective method to guide fluid administration.

ProCESS has ushered in a new era for the management of sepsis in the Emergency Department. Though this trial was able to clarify the importance of fluids and early antibiotics as key components of the septic bundle, it has yielded little assistance on how best to guide the administration of said fluid. In this post-EGDT dystopia, it may be that a single metric will never be as powerful a tool as the flawed mind of the physician caring for the patient. The human brain, with all its beautiful imperfections, may prove superior to any single objective measurement. A new era indeed…

 

Sources Cited:

1. The ProCESS Investigators. A randomized trial of protocol-based care for early septic shock. N Engl J Med.  2014 March.

2. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med.  2001;345:1368-1377.

3. Marik et al. Early goal-directed therapy: on terminal life support? Am J Emerg Med. 2010 Feb;28(2):243-5.

4. Marik et al. Does the central venous pressure predict fluid responsiveness? An updated meta-analysis and a plea for some common sense. Crit Care Med. 2013 Jul;41(7):1774-81.

5. Marik et al. Noninvasive cardiac output monitors: a state-of-the-art review. J Cardiothorac Vasc Anesth. 2013 Feb;27(1):121-34.

6. Marik et al. Hemodynamic parameters to guide fluid therapy. Annals of Intensive Care. 2011, 1:1.

7. Barbier et al. Respiratory changes in inferior vena cava diameter are helpful in predicting fluid responsiveness in ventilated septic patients. Intensive Care Med 2004, 30:1740-1746.

8. Feissel et al. The respiratory variation in inferior vena cava diameter as a guide to fluid therapy. Intensive Care Med 2004, 30:1834-1837.

9. Biais et al. Changes in stroke volume induced by passive leg raising in spontaneously breathing patients: comparison between echocardiography and Vigileo/FloTrac device. Crit Care 2009, 13.

10. Richard et al. Early Use of the Pulmonary Artery Catheter and Outcomes in Patients With Shock and Acute Respiratory Distress Syndrome: A Randomized Controlled Trial. JAMA. 2003;290(20):2713-2720.

11. Hayes et al. Elevation of Systemic Oxygen Delivery in the Treatment of Critically Ill Patients. N Engl J Med. 1994 Jun;330(24):1717-22.

12. Gattinoni et al. A Trial of Goal-Oriented Hemodynamic Therapy in Critically Ill Patients. N Engl J Med. 1995 Oct;333(16):1025-32.

“The Adventure in the Valley of Fear”


In the biomedical industry’s relentless war against clinical reasoning, a multitude of biomarkers have been developed that promise assistance in the diagnosis and management of sepsis. In this ocean of mediocrity, procalcitonin has risen to the top. Since its predictive value is only slightly better than chance alone, it behooves those promoting its value to accentuate our limitations as medical providers rather than emphasize its worth, questionable as it may be. Preying upon the clinical doubt that is inherent in the diagnosis and treatment of septic patients, the makers of procalcitonin assays choose to exploit these deeply rooted insecurities.

The first of these insecurities is whether our clinical judgment as Emergency Physicians is sensitive enough to identify the subtle signs of early sepsis. The proponents of procalcitonin argue that by the time the disease becomes obvious enough for our clumsy clinical faculties to identify, it is too late. The thought is that procalcitonin will identify these patients earlier in their disease process, allowing us to intervene in advance of clinically obvious sepsis. Wacker et al recently published a systematic review and meta-analysis considering this very subject (1). Published in The Lancet Infectious Diseases in May 2013, this meta-analysis examined 31 data sets of ED and ICU patients to determine procalcitonin’s diagnostic capabilities. The pooled sensitivity and specificity of procalcitonin were 77% and 79% respectively, with an AUC of 0.85. The authors point out that a great deal of heterogeneity exists among the included studies, owing to the multitude of cutoffs used in the individual trials, each one retrospectively selecting the threshold that provided the optimal performance of the assay in question. Given this, it is not unreasonable to conclude that when used clinically, procalcitonin will perform worse than even these mediocre diagnostic characteristics suggest. In fact, in an analysis of this very publication, Rücker et al examine what happens to the accuracy of procalcitonin when you statistically account for the bias introduced by retrospectively selecting cutoffs that optimize the sum of procalcitonin’s sensitivity and specificity (2). Using a method based on the Youden index (3), the authors calculate a more realistic sensitivity and specificity of procalcitonin: 72% and 73% respectively. Clearly not nearly accurate enough to be utilized clinically.
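
For readers unfamiliar with the Youden index, the sketch below shows how retrospectively choosing the cutoff that maximizes J = sensitivity + specificity - 1 flatters a test’s in-sample accuracy. The candidate cutoffs and operating points are invented for illustration; they are not the trial data.

```python
# Illustration of in-sample cutoff selection via the Youden index
# (J = sensitivity + specificity - 1). Picking the cutoff that
# maximizes J in the same data used to report accuracy introduces
# optimism; a new cohort will almost always perform worse.
# Candidate cutoffs and operating points below are invented.

candidates = {
    0.25: (0.85, 0.65),  # cutoff in ng/mL: (sensitivity, specificity)
    0.50: (0.77, 0.79),
    1.00: (0.65, 0.88),
    2.00: (0.50, 0.94),
}

def youden(sens: float, spec: float) -> float:
    return sens + spec - 1

best = max(candidates, key=lambda c: youden(*candidates[c]))
sens, spec = candidates[best]
print(f"In-sample optimal cutoff: {best} ng/mL "
      f"(sens={sens:.0%}, spec={spec:.0%}, J={youden(sens, spec):.2f})")
```

In essence, the Rücker et al analysis attempts to estimate that in-sample optimism and subtract it back out.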

Notably, this meta-analysis addresses how well procalcitonin identifies sepsis in a vacuum. It does not address whether procalcitonin adds anything to our clinical ability to identify these patients. To answer this we have to examine procalcitonin’s performance when compared to clinical judgment. In a study by Maisel et al published in the European Journal of Heart Failure, the authors examined procalcitonin’s ability to differentiate bacterial infectious causes of dyspnea from non-infectious or viral causes (4). Using the gold standard of final hospital diagnosis made by two cardiologists and one pulmonologist blinded to the results of the procalcitonin assay, the authors compared procalcitonin’s ability to identify pneumonia to Emergency Physicians’ unstructured judgment. Of the 1641 patients enrolled, 6.8% (112) had a final diagnosis of pneumonia. The remainder had congestive heart failure, COPD, asthma, bronchitis, ACS, influenza, or various other maladies.

Similar to the trials included in the meta-analysis, the authors of the Maisel trial retrospectively fitted the diagnostic threshold of procalcitonin to optimize its diagnostic capabilities. In this trial it happened to be 0.25 ng/mL. Ideally this cutoff should be validated in a new prospective cohort to see if the threshold remains stable. Even with these bolstered numbers the test performed horribly. The AUC of procalcitonin for the diagnosis of pneumonia was 0.723, clinically useless, especially given that physician gestalt performed far better (AUC 0.84). When a decision tool was built using multivariate logistic regression combining clinical judgment and procalcitonin values, the AUC increased to 0.863. Though the authors claimed statistical significance, clinically this adds very little to physician judgment alone, especially given that the procalcitonin threshold was retrospectively fitted for optimal performance. The authors attempt to account for this by performing a bootstrap analysis, in which the AUCs of clinical judgment and clinical judgment plus procalcitonin were 0.834 and 0.857 respectively. Not only did clinical judgment outperform procalcitonin, but so did chest x-ray findings. The authors report the ability of chest x-ray to diagnose pneumonia in this cohort with an AUC of 0.79. They go on to report the added benefit procalcitonin provides to chest x-ray (AUC of 0.864). They failed to report the combined diagnostic abilities of clinical judgment and chest x-ray, the classical way in which pneumonia is diagnosed. Despite the authors’ statistical chicanery, they fail to provide a convincing argument for the utility of procalcitonin.

In an ironic juxtaposition, proponents of procalcitonin not only attack our failings in identifying patients who require antibiotic therapy, but further allege that, on the rare occasions we do provide adequate antibiotic coverage, we do so for longer than is required. In this vicious attack on our antibiotic stewardship, procalcitonin apologists testify that this biomarker’s mediocre specificity, equaled only by its mundane sensitivity, can somehow guide the course of antibiotic use in sepsis. Despite these claims, multiple studies examining this question have failed to show a benefit. The most notable is the PRORATA trial, published in The Lancet in 2010 (5). In this trial, Bouadma et al compared 28- and 60-day mortality and antibiotic use in ICU patients when procalcitonin was used to guide therapy versus unstructured judgment. The authors randomized 621 patients to either traditional management or management dictated by a procalcitonin-based protocol. The authors found a statistically significant increase in the number of days without antibiotics in the procalcitonin group without a statistically significant increase in mortality. What the authors fail to highlight is that, though 28-day mortality was not statistically different (21.2% in the procalcitonin group vs 20.4% in the control group), the procalcitonin group performed worse overall. 60-day mortality was 30% in the procalcitonin group compared to 26.1% in the control group. The relapse rate was 6.5% in the procalcitonin group compared to 5.1% in the controls. The incidence of superinfection was 34.5% in the procalcitonin group compared to 30.9% in the controls. Length of stay in the ICU was 15.9 days in the procalcitonin group compared to 14.4 in the controls. Though procalcitonin may reduce the utilization of antibiotics, it does not appear this reduction is of any clinical benefit.

In the PASS trial, published in Critical Care Medicine in 2011, Jensen et al examined a similar procalcitonin-based protocol to guide antibiotic stewardship (6). Like the PRORATA trial, the authors randomized 1200 ICU patients with suspected infections to either standard care or a procalcitonin-based protocol. In this cohort the authors were not able to identify a decrease in antibiotic use in the procalcitonin group; in fact, the procalcitonin-based protocol seemed to increase antibiotic use in this Danish population. Though the mortality rate was similar in the two groups (31.6% vs 32%), patients in the procalcitonin group spent more days on the ventilator, more days in the ICU and more days with organ failure.

Clearly we are not as bad at the diagnosis and management of sepsis as the makers of procalcitonin assays would have you believe. Whatever deficits we do possess have not been corrected by the addition of procalcitonin. Brain natriuretic peptide (BNP) forced its way into clinical use in a fashion similar to the one procalcitonin is now attempting. As a marker of a disease process with moderate prognostic capabilities, BNP, when examined in a vacuum using retrospectively determined thresholds, performs adequately, but when tested in the clinical arena it does nothing to augment medical decision-making (7). Sepsis is a bad disease with unfortunate outcomes. These outcomes should not be viewed as a comment on our clinical abilities, but rather as a statement of the severity of the disease itself: not our inadequacies but medical realities. Turning to an expensive biomarker incapable of providing the certainty desired is obviously not the answer. The schoolyard bully tactics of those promoting procalcitonin should not persuade us otherwise.

This post is dedicated to Dr. Eric Wasserman

 

Sources Cited:

1. Wacker et al. Procalcitonin As a Diagnostic Marker For Sepsis: a Systematic Review and Meta-Analysis. Lancet Infect Dis. 2013 May;13(5):426-35

2. Rücker et al. Procalcitonin As a Marker For Sepsis. Lancet Infect Dis. 2013;13(12):1012-1013

3. Rücker G, Schumacher M. Summary ROC curve based on the weighted Youden index for selecting an optimal cutpoint in meta-analysis of diagnostic accuracy. Stat Med 2010; 29: 3069-3078

4. Maisel et al. Use of procalcitonin for the diagnosis of pneumonia in patients presenting with a chief complaint of dyspnoea: results from the BACH (Biomarkers in Acute Heart Failure) trial. Eur J Heart Fail. Mar 2012; 14(3): 278–286.

5. Bouadma et al. Use of Procalcitonin to Reduce Patients’ Exposure to Antibiotics in Intensive Care Units: a Multicenter Randomized Controlled Trial. The Lancet Vol 375. Feb 6th 2010

6. Jensen et al. Procalcitonin-guided interventions against infections to increase early appropriate antibiotics and improve survival in the intensive care unit: a randomized trial. Crit Care Med. 2011 Sep;39(9):2048-58.

7. Hohl CM. Should natriuretic peptide testing be incorporated into emergency medicine practice? CJEM. 2006 Jul;8(4):259-61.

“The Case of the Uncertain Principle”


Atoms or elementary particles themselves are not real; they form a world of potentialities or possibilities rather than one of things or facts.  -Werner Heisenberg

The uncertainty principle states that there is a limit to the precision with which the position and momentum of any subatomic particle can be measured. Their location and velocity can only be described in degrees of probability, rather than with the certainties we are accustomed to. This nanoscopic world cannot be predicted by classical physics nor understood using the anecdotal experiences of everyday life. What, you may ask, does this have to do with the practice of Emergency Medicine? Although some would argue that Schrodinger and his cat would have made wonderful Emergency Physicians, until now Emergency Medicine and quantum mechanics have occupied their own mutually separate sectors of space. With the arrival of high-sensitivity troponin assays and the uncertainty that comes with interpreting their results, these independent circles may have come closer to intersecting than we ever would have anticipated.

Ideally, during an acute myocardial infarction, serum levels of cardiac-specific troponin rise incrementally. We utilize this predicted rise and its high specificity to confirm our suspicion of myocardial necrosis. More recently, as troponin assays have become more sensitive, they have been utilized to further risk stratify chest pain patients at low risk for ACS (9). Given the delayed fashion of troponin’s release into the bloodstream, a single troponin is not sensitive enough to effectively rule out ACS (2), and thus emergency providers have taken to measuring troponin levels in a serial fashion. Traditional recommendations state providers should allow at least 3-6 hours between measurements to ensure identification of patients who present early in their course (6). The hope of those who support the high-sensitivity assays is that these tests will identify patients earlier in their disease process and reduce the time required between serial measurements, leading to faster, more accurate dispositions.

The difficulty with the increasing sensitivity of troponin assays is two-fold. First, the newest generation of assays can now detect troponin levels in well over 50% of the general population (1). In fact, in an article published in the NEJM in 2009 by Reichlin et al, the Roche high-sensitivity troponin assay, at its limit of detection (LOD), found 87% of the cohort to have measurable troponin levels (2). This baseline troponinemia makes differentiating ACS from baseline noise a difficult proposition. The standard approach of utilizing the 99th percentile, the troponin level below which 99% of a healthy cohort will fall, is only moderately more effective. In this same trial, using the 99th percentile, Reichlin et al raised the specificity of the assay to 80%, but at the cost of missing 5% of the acute myocardial infarctions (2). To better distinguish this baseline troponinemia from the disease state in question, a delta troponin approach has been proposed (4). The delta troponin strategy asserts that, rather than basing decisions on an absolute threshold, trending the changes in troponin level with serial measurements may be a more accurate method of differentiating ACS from baseline troponinemia. Given that the high-sensitivity assays are capable of measuring levels of troponin far smaller than our standard assays, they seem the ideal tool for the delta strategy.

Unfortunately things are not so simple. The second concern with the use of high-sensitivity troponin is the inherent imprecision of the assays themselves. All assays, both standard and high-sensitivity, demonstrate a certain degree of test-retest variability. At high values of serum troponin this variability is inconsequential, but at the very low levels with which we are attempting to trend incremental changes, this imprecision becomes increasingly important (7). The 10% coefficient of variation is a measurement attempting to quantify this variability: it is the serum troponin level below which the variability of the assay is greater than 10%. Simply put, it is the level at which the imprecision of the assay becomes clinically significant (6). The accuracy of assays in which the 99th percentile lies below the 10% coefficient of variation will suffer (2). Similarly, as you attempt to measure smaller absolute changes in troponin between serial measurements, this imprecision will undermine your efforts (3).
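
To see why this matters for a delta strategy, consider what a 10% analytical CV implies for the noise in a two-draw difference. The numbers below are illustrative placeholders, not the characteristics of any specific assay:

```python
# What assay imprecision does to a small delta troponin. The
# coefficient of variation (CV) is the assay's standard deviation
# divided by the measured concentration. All values here are
# illustrative placeholders, not any specific assay's specs.

import math

cv = 0.10      # 10% analytical CV at this concentration
level = 0.014  # hypothetical troponin level, micrograms/L

sd_single = cv * level
sd_delta = math.sqrt(2) * sd_single  # SD of the difference between
                                     # two independent measurements

print(f"SD of a single measurement: {sd_single:.4f} ug/L")
print(f"SD of a two-draw delta:     {sd_delta:.4f} ug/L")
# ~0.002 ug/L of pure measurement noise, the same order of magnitude
# as the 0.007 ug/L delta threshold discussed below.
```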

These flaws are illustrated nicely in an article published in Circulation in 2011, in which Reichlin et al attempt to validate the delta troponin strategy (4). The authors compare the absolute and relative changes of two troponin assays at 1 and 2 hours after presentation to the “gold standard” of the diagnosis of myocardial infarction made by 2 cardiologists using standard troponin assays drawn at six and nine hours after presentation. Using an absolute change of 0.007 micrograms/L at 2 hours after presentation, the hs-TnT assay had a sensitivity of 89% and a specificity of 93%. In the subgroup of patients who presented with initial troponin levels above the 99th percentile, the delta troponin method produced a sensitivity and specificity of 90% and 87% respectively. In the subgroup who presented with an initial troponin level below the 99th percentile, the delta troponin provided a negative predictive value of 100%. Unfortunately, in this group a positive delta troponin meant very little, providing a positive predictive value of only 22%. Compared to the diagnostic characteristics of a single troponin measurement taken at presentation (sensitivity of 95%, specificity of 80%, NPV of 99% and PPV of 50%), very little is gained from this delta troponin strategy (2).
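
The dismal positive predictive value in the low-initial-troponin subgroup is simple Bayesian arithmetic: at low disease prevalence, even a reasonably specific test generates mostly false positives. A minimal sketch, using the sensitivity and specificity quoted above and assumed illustrative prevalence values:

```python
# Positive predictive value as a function of prevalence, showing why
# a positive delta troponin means little in a low-risk subgroup.
# Sensitivity and specificity are the delta troponin figures quoted
# above; the prevalence values are assumed for illustration.

def ppv(sens: float, spec: float, prev: float) -> float:
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

for prev in (0.02, 0.05, 0.20, 0.50):
    print(f"prevalence {prev:>4.0%}: PPV = {ppv(0.89, 0.93, prev):.0%}")
# At ~2% prevalence the PPV falls to ~21%, in line with the 22%
# reported in the low-initial-troponin subgroup.
```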

Thus you are left in perpetual uncertainty, unsure whether a low troponin level represents the early rise associated with myocardial necrosis or the baseline troponinemia present in so many patients. Likewise, you are equally uncertain whether the change in levels at 2 hours is further confirmation of infarction or simply due to the random imprecision of the assay itself. In a recently published article, Pretorius et al turn to statistical modeling in an attempt to resolve this ambiguity (5). The reference change value (RCV) is a mathematical concept contrived to combat this variability. It is a calculation that takes into account analytical imprecision and estimates of within-subject biological variation, and computes the amount of change above which a difference is unlikely to be due to chance alone (8). Pretorius et al have attempted to apply this concept to the delta troponin strategy (5). Using the RCV, the authors calculate a z-score and propose a threshold of 1.96, above which the delta troponin measurement should be considered positive. This is the level that corresponds to a p-value of 0.05, or a 5% probability that the change in troponin level was due to chance alone. This strategy of course speaks to the more nerdy among us, but how well does it perform clinically?
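
The RCV itself is a simple formula (Fraser, reference 8): RCV = sqrt(2) x Z x sqrt(CVA^2 + CVI^2), where CVA is the analytical CV and CVI the within-subject biological CV. Below is a minimal sketch of the calculation with assumed CV values; the exact implementation in Pretorius et al may differ in detail:

```python
# Minimal sketch of the reference change value (RCV) and a derived
# z-score for a serial change (Fraser, ref 8). The CV values are
# assumed placeholders; Pretorius et al's exact implementation may
# differ in detail.

import math

def rcv(cv_analytical: float, cv_biological: float, z: float = 1.96) -> float:
    """Two-sided RCV as a fraction of the measured value."""
    return math.sqrt(2) * z * math.sqrt(cv_analytical**2 + cv_biological**2)

def delta_z(first: float, second: float, cv_a: float, cv_i: float) -> float:
    """z-score for the observed change between two serial results."""
    mean = (first + second) / 2
    sd_delta = math.sqrt(2) * mean * math.sqrt(cv_a**2 + cv_i**2)
    return abs(second - first) / sd_delta

cv_a, cv_i = 0.10, 0.15  # assumed analytical and biological CVs
print(f"RCV: {rcv(cv_a, cv_i):.0%} of the measured value")

z = delta_z(0.010, 0.017, cv_a, cv_i)  # hypothetical serial results, ug/L
print(f"Observed delta z-score: {z:.2f} "
      f"({'significant' if z > 1.96 else 'within expected variation'})")
```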

Using a prospectively gathered cohort of Emergency Department chest pain patients, Pretorius et al retrospectively applied their z-score, comparing its performance to the absolute and relative changes of the 2-hour troponin assay levels. As a tool to rule in myocardial infarction, the z-score method outperformed both the absolute and relative change methods. Among the three high-sensitivity assays examined, the z-score’s specificity was found to be 94%, 97%, and 98% respectively, each one outperforming its absolute and relative change counterparts. Unfortunately, when using the z-score, the sensitivities of each assay (79%, 77%, and 69% respectively) suffered. Interestingly, most of the AMI patients the z-score missed had initial troponin levels well above the diagnostic threshold of the 99th percentile and would probably not have required a second troponin value to confirm the diagnosis. Of note, this was a retrospective application of this statistical method, and it will have to be tested prospectively on a novel cohort before it can be applied clinically. In addition, its performance was evaluated in an undifferentiated chest pain population. Where the z-score may potentially provide benefit is in the clinically ambiguous patient.

The most obvious question is how clinically relevant any of this is. The major weakness of all these trials is that each of the various assays and techniques was evaluated in a vacuum. What is important to the Emergency Physician is not how these high-sensitivity assays perform in isolation, but how much they add to our clinical evaluation and EKG findings. The majority of ACS cases will be identified by clinical exam and EKG. When these factors are taken into consideration, the troponin assays provide very little additional diagnostic utility. Than et al utilized the strategy of clinical risk stratification in combination with EKG and serial troponin measurements at 2-hour intervals (9). In doing so, the authors identified 99.7% of 30-day major adverse cardiac events (MACEs). Taken together, TIMI risk score and EKG identified 98.3% of these events without the help of the troponin assay. The added value of the standard assay was minimal. When a high-sensitivity troponin assay was applied to the same cohort, it added no statistically or clinically relevant diagnostic utility (10).

Up to this point, the benefits that high-sensitivity assays provide have been abstract in nature. As far back as the publication of Dr. Hector Pope’s trial in the NEJM, Emergency Physicians were accurately identifying the large majority of ACS patients. Those physicians ultimately missed 19 out of 10,689 patients (11). To improve on this performance would be a Herculean task, and increasing the sensitivity of our troponin assays does not seem to be the answer. Using minute changes in troponin values to guide our treatment is fraught with uncertainty; these changes are as likely to be due to random chance as to true myocardial necrosis. Like the subatomic particle, discrete troponin values and their respective momentum can only be reported in varying degrees of uncertainty. After all, as Werner Heisenberg wrote when describing the uncertainty principle, “In the sharp formulation of the law of causality (‘if we know the present exactly, we can calculate the future’) it is not the conclusion that is wrong but the premise.”

Sources Cited:

1. Lippi G, Cervellin G. Do we really need high-sensitivity troponin immunoassays in the emergency department? Maybe not. Clin Chem Lab Med. 2014 Feb 1;52(2):205-12.

2. Reichlin et al. Early diagnosis of myocardial infarction with sensitive cardiac troponin assays. N Engl J Med. 2009;361(9):858-867.

3. Mueller M, Biener M, Vafaie M, et al. Absolute and relative kinetic changes of high-sensitivity cardiac troponin T in acute coronary syndrome and in patients with increased troponin in the absence of acute coronary syndrome. Clin Chem 2012; 58: 209–218.

4. Reichlin et al. Utility of absolute and relative changes in cardiac troponin concentrations in the early diagnosis of acute myocardial infarction. Circulation. 2011 Jul 12;124(2):136-45.

5. Pretorius et al. Towards a consistent definition of a significant delta troponin with z-scores: a way out of chaos? Eur Heart J Acute Cardiovasc Care. 2013 Dec 17.

6. Thygesen K, Alpert JS, White HD. Joint ESC/ACCF/AHA/WHF Task Force for the Redefinition of Myocardial Infarction, Jaffe AS, Galvani M, Katus HA, Newby LK et al. Universal definition of myocardial infarction. Circulation. 2007;116:2634–2653

7. Panteghini et al. Evaluation of Imprecision for Cardiac Troponin Assays at Low-Range Concentrations.  Clinical Chemistry. February 2004 vol. 50 no. 2 327-332

8. Fraser, CG. Reference change values. Clin Chem Lab Med. 2011 Sep 30;50(5):807-12.

9. Than et al. 2-Hour accelerated diagnostic protocol to assess patients with chest pain symptoms using contemporary troponins as the only biomarker: the ADAPT trial. J Am Coll Cardiol. 2012 Jun 5;59(23):2091-8

10. Cullen et al. Validation of High-Sensitivity Troponin I in a 2-Hour Diagnostic Strategy to Assess 30-Day Outcomes in Emergency Department Patients With Possible Acute Coronary Syndrome. J Am Coll Cardiol, Volume 62, Issue 14, 1 October 2013, Pages 1242-1249

11. Pope et al. Missed Diagnoses of Acute Cardiac Ischemia in the Emergency Department. N Engl J Med 2000; 342:1163-1170 April 20, 2000

 

“The Adventure of the Second Stain”


Subarachnoid hemorrhage (SAH) is one of the more angst-inducing pathologies an Emergency Physician faces on a daily basis, and a disease for which we have a well-established diagnostic pathway. The strategy of non-contrast CT followed by a lumbar puncture (LP) has been shown to effectively eliminate the risk of SAH (1). And yet there is great resistance to performing an LP on the large majority of patients suspected of SAH. In a paper by Perry et al published in CJEM, the authors demonstrated that only 27% of the patients who received a CT to rule out SAH had a subsequent LP, and in the patients who did receive an LP, the length of stay increased significantly (2). Because of this, various alternative strategies have been proposed to eliminate the need for a lumbar puncture in the Emergency Department. One of these strategies involves a non-contrast CT followed by a CT angiogram (CTA). The thought process is that if no blood is identified on the CT and no aneurysm can be seen on the CTA, then even if the patient does have a SAH it is not of aneurysmal origin and therefore requires no further management.

There are a number of problems with a CT/CTA protocol. The first and most obvious is that CTA is an anatomic test meant to identify the presence of an aneurysm. It does not tell you whether that aneurysm is the cause of the presenting headache. Given this, using CT/CTA as a diagnostic pathway to rule out SAH will inevitably lead to the frequent misclassification of incidental aneurysms in otherwise benign headaches. Studies have found the rate of asymptomatic unruptured cerebral aneurysms in the general population to be as high as 7% (3). In assuming all aneurysms discovered by CTA are the cause of the presenting headache, we will transform people into patients and cause unnecessary downstream testing and interventions.

In addition to this decrease in specificity, there is no evidence CT/CTA adds any diagnostic value to our current management of suspected SAH. Currently, our knowledge of the diagnostic accuracy of CTA for SAH is based primarily on small cohort trials comparing CTA to the gold standard of digital subtraction angiography (DSA) in patients who have already been diagnosed with SAH. In these trials CTA was utilized to identify an aneurysmal cause of a known SAH. Even in this population, the sensitivity of CTA ranges from 81% to 99% (4-7). In fact, in some cases even the initial DSA can be falsely negative secondary to arteriospasm, only appearing positive on a delayed DSA 2-3 days later (11). The failure of logic occurs when attempting to apply this data to the undifferentiated headache patient in the Emergency Department. These studies do nothing to inform us how CTA performs when used in series with CT for the diagnosis of SAH in undiagnosed patients presenting with thunderclap headache. If one assumes independence of the two tests and applies the most optimistic test characteristics for CTA, then the post-test probability following a negative CT and CTA may be clinically useful. Unfortunately, given the current data, we are unable to assume independence between these two tests.

Conditional independence is the assumption that two diagnostic tests have different factors determining their individual diagnostic accuracy. People who argue for the CT/CTA strategy assume that since CT and CTA look for different findings (blood vs aneurysm), their abilities to identify SAH are independent and additive. On the other hand, if the aneurysmal bleeds that are missed by CTA are the same bleeds that are commonly read as negative on CT, then this protocol will add little to the sensitivity of CT alone. This concept is demonstrated nicely in a paper by McCormack et al, published in Academic Emergency Medicine (8). In this paper the authors calculate the post-test probability following a negative CT and CTA given 25%, 50%, and 75% dependence. In the case where the tests have only 25% dependence, the post-test probability following a negative CT and CTA would be 0.29% (or 1 in 344 patients). On the other hand, if you were to assume 75% dependence between the tests, the post-test probability would be 0.86% (1 in 116 patients), very close to the post-test probability of a negative CT alone.
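
The arithmetic behind this kind of calculation is worth seeing explicitly. The sketch below parametrizes dependence as the fraction of CT’s misses that CTA also misses, over and above what independence would predict; the pretest probability and sensitivities are assumed illustrative figures, not McCormack et al’s inputs, and specificity is ignored for simplicity:

```python
# Sketch of how conditional dependence erodes the value of a second
# test. Pretest probability and sensitivities are assumed figures,
# not those used by McCormack et al; specificity is treated as
# perfect to keep the arithmetic simple.

pretest = 0.10    # assumed pretest probability of SAH
sens_ct = 0.93    # assumed sensitivity of non-contrast CT
sens_cta = 0.95   # assumed sensitivity of CTA for aneurysmal SAH

def posttest_after_two_negatives(dependence: float) -> float:
    """Post-test probability of SAH after negative CT and CTA.

    dependence = 0: CTA misses its usual independent share of what CT
    missed. dependence = 1: CTA misses exactly the bleeds CT missed,
    adding nothing to CT alone.
    """
    miss_ct = 1 - sens_ct
    # Fraction of CT's misses that CTA also misses:
    miss_both = miss_ct * ((1 - sens_cta) + dependence * sens_cta)
    false_neg = pretest * miss_both
    true_neg = 1 - pretest  # perfect specificity assumed
    return false_neg / (false_neg + true_neg)

for d in (0.0, 0.25, 0.50, 0.75):
    print(f"dependence {d:.0%}: post-test probability "
          f"{posttest_after_two_negatives(d):.2%}")
```

Even with these made-up inputs the pattern matches McCormack et al’s point: as dependence rises, the post-test probability after a negative CT/CTA drifts back toward that of a negative CT alone.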

This of course comes back to the question: what is an acceptable miss rate? At what threshold does the harm of over-testing, false positives, and needless interventions overcome the harm of a missed SAH? Perry et al’s data on the sensitivity of non-contrast CT alone, performed within the first 6 hours of symptom onset, demonstrate that a negative CT already stratifies these patients into a fairly low-risk cohort (9). What to do after a negative CT is still up for debate. If you believe that a negative CT places a patient below the test threshold, and any further testing will do more harm than good, then there is reasonable data to support this decision (10). In the case of the high-risk patient in whom CT alone is not adequate to rule out SAH, the LP is the appropriate next step. Although LP has a high rate of false positive findings in its own right (a specificity of only 67%) (1), it has a demonstrated negative predictive value above that of CT alone (1).

Without formal investigations evaluating the performance of CT/CTA in the diagnosis of SAH, we are incapable of knowing its true diagnostic utility. If the CT-followed-by-CTA strategy turns out to have a significant degree of conditional dependence, not only will this protocol increase the rate of false positive findings by identifying incidental aneurysms, it will add no diagnostic capability above CT alone. It is clear this strategy provides very little immediate benefit and will surely lead to far more downstream harm.

Sources Cited:

1. Perry JJ, Spacek A, Forbes M, et al. Is the combination of negative computed tomography result and negative lumbar puncture result sufficient to rule out subarachnoid hemorrhage? Ann Emerg Med. 2008; 51:707–13.

2. Perry et al.. Diagnostic test utilization in the emergency department for alert headache patients with possible subarachnoid hemorrhage. CJEM 2002;4(5):333-337

3. Ming-Hua  et al Prevalence of Unruptured Cerebral Aneurysms in Chinese Adults Aged 35 to 75 Years A Cross-sectional Study. Annals of Internal Medicine. 2013 Oct;159(8): 514-521.

4. MacKinnon et al. Acute subarachnoid haemorrhage: is a negative CT angiogram enough? Clinical Radiology, Vol. 68, Issue 3, 232-238

5. Ergun et al. Diagnostic Value of 64-slice CTA in Detection of Intracranial Aneurysms in Patients with SAH and Comparison of the CTA Results with 2D-DSA and Intraoperative Findings

6. Kokkinis et al. The Role of 3D-Computed Tomography Angiography (3D-CTA) in Investigation of Spontaneous Subarachnoid Haemorrhage: Comparison with Digital Subtraction Angiography (DSA) and Surgical Findings. British Journal of Neurosurgery. Vol. 22:71-78

7. Westerlaan et al. Multislice CT Angiography in the Selection of Patients with Ruptured Intracranial Aneurysms Suitable for Clipping or Coiling. Neuroradiology (2007) 49:997-1007

8. McCormack et al. Can Computed Tomography Angiography of the Brain Replace Lumbar Puncture in the Evaluation of Acute-onset Headache After a Negative Noncontrast Cranial Computed Tomography Scan? Acad Emerg Med. 2010;17(4):444-451

9. Perry JJ, Stiell IG, Sivilotti ML, et al. Sensitivity of computed tomography performed within six hours of onset of headache for diagnosis of subarachnoid haemorrhage: prospective cohort study. BMJ. 2011;343:d4277

10. LP for Subarachnoid Hemorrhage: The 700 Club. Emergency Physicians Monthly. December 4th, 2012. http://www.epmonthly.com/features/current-features/lp-for-subarachnoid-hemorrhage-the-700-club/

11. Agid et al. Negative CT Angiography Findings in Patients with Spontaneous Subarachnoid Hemorrhage: When is Digital Subtraction Angiography Still Needed? Am J Neuroradiol 31:696-705, April 2010

“The Adventure of the Resident Patient”


In a properly automated and educated world, then, machines may prove to be the true humanizing influence. It may be that machines will do the work that makes life possible and that human beings will do all the other things that make life pleasant and worthwhile.   -ISAAC ASIMOV

 

The term cyborg, short for cybernetic organism, was first coined by Manfred Clynes and Nathan Kline in a 1960 article entitled “Cyborgs and Space” (1). In this article Clynes and Kline suggest that for prolonged space travel to be possible, it is more logical to alter man to meet the requirements of an extraterrestrial environment than to attempt to provide an earthly environment in space. The concept slowly gained fame and recognition, and in 2010 the Cyborg Foundation was founded by cyborg activists Neil Harbisson and Moon Ribas with the simple mission to help humans become cyborgs, to promote the use of cybernetics as part of the human body and (of course) to defend cyborg rights (2). In modern medicine the man-machine interface is now a reality in the form of Mechanical Circulatory Support (MCS) devices, more commonly known as ventricular assist devices (VADs). Though the average LVAD patient is in no way similar to the marching hordes of Cybermen depicted in Doctor Who, or the unstoppable chiseled T-800 from the Terminator series, they are in their own right miracles of modern medicine. Constructed using a small centrifugal pump and inlet and outlet flow grafts, these devices in their various iterations have proven capable of extending and improving the lives of patients in severe heart failure. Since their initial approval in 2001, the use of LVADs has steadily grown. Originally intended for patients who were not candidates for transplant, they soon provided a bridge to transplant in patients waiting for a donor. Safer, smaller, more durable devices were built, with more compact battery packs to encourage a more “active lifestyle”. This public success hit its first obstacle in November 2013 with the publication of an NEJM article warning us of an alarming increase in pump thrombosis in the current model of ventricular assist devices.

In what is essentially a phase IV post-marketing trial, Starling et al published the findings from three major LVAD centers (3). Using data extracted from each institution’s respective registry, the authors examined the results of every device inserted at these three centers from January 2004 to May 2013. Patients’ baseline characteristics, LDH levels and outcome measures were all recorded. Confirmed pump thrombosis was defined as a thrombus found on the blood-contacting surfaces of the HeartMate II, its inflow cannula, or its outflow conduit at pump replacement, urgent transplantation, or autopsy. The authors found an alarming increase in the rate of pump thrombosis beginning in 2011. The increase was seen across all three sites and could not be isolated to individual treatment protocols or a specific surgical technique. In a postscript to the article, the authors describe an additional 150 VADs implanted at the University of Pennsylvania from 2004 to 2013. Though these devices were not included in the official data set, they too showed an alarming rise in the rate of pump thrombosis after 2011, further demonstrating this is not a site-specific phenomenon. Even the national registry, the INTERMACS database, confirmed a similar trend in the rate of thrombosis since 2011, with a 6-month incidence of 5% (4). According to Starling’s data, what was once a 2% rate of thrombosis over a 12-month period ballooned to close to 8% after 2011. The majority of this increased risk is seen within the first 30 days (1.4%), gradually decreasing over the initial 6 months and finally plateauing at a much smaller but still elevated risk of 0.4% per month.

It is important to remember that the evidence for the use of MCS devices is based primarily on a single open-label trial, the REMATCH trial, which compared a pulsatile LVAD to optimized medical management in 129 severely ill heart failure patients (5). To date this is the only RCT comparing MCS devices with optimal medical management. Published in the NEJM in 2001, this trial found the LVAD superior to medical management in both mortality and quality of life measures. It wasn’t that the VAD patients did well; on the contrary, they did horribly. Over the first year, 48% of the patients in the VAD group had died. By two years the mortality rate had reached 77%. The majority of these deaths were due to complications of the VAD itself: 41% due to sepsis and 17% due to device failure. This of course would be an utter failure were it not for the fact that the control group fared far worse. At one year, only 25% of the control patients were still alive. By two years their mortality had reached a staggering 92%, almost all of these deaths due to terminal heart failure.
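
Converting those survival figures into absolute terms makes the trade-off concrete. This is straightforward arithmetic on the mortality rates quoted above, not a re-analysis:

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT)
# from the REMATCH mortality figures quoted above.

def nnt(mortality_control: float, mortality_treatment: float) -> float:
    return 1 / (mortality_control - mortality_treatment)

# 1 year: 75% control mortality (25% alive) vs 48% LVAD mortality
print(f"1-year ARR: {0.75 - 0.48:.0%}, NNT ~{nnt(0.75, 0.48):.0f}")
# 2 years: 92% control mortality vs 77% LVAD mortality
print(f"2-year ARR: {0.92 - 0.77:.0%}, NNT ~{nnt(0.92, 0.77):.0f}")
```

An NNT of roughly 4 at one year is a dramatic effect, which is why these devices won approval despite the grim absolute numbers in both arms.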

These devices showed promise, but were not without their own complications. It is very important to understand that the devices’ success was due in part to the high acuity of the patients selected. The great majority of these patients were categorized as NYHA class IV, the average ejection fraction was 17%, and close to 75% of them were on inotropic support. In addition, an open trial of 129 patients leaves a lot to be desired methodologically, especially considering the extra attention almost certainly given to the VAD group. For those of you who work in a center that installs and maintains these devices and have witnessed the quantity of attention they garner upon arrival in the Emergency Department, it is not hard to imagine the disparity of care that may have occurred between the VAD patients and their controls. The FDA clearly had similar concerns when they approved the use of these devices and stipulated that post-marketing research must be performed demonstrating similar outcomes could be obtained outside the arena of clinical trials. Thus the Interagency Registry for Mechanically Assisted Circulatory Support (INTERMACS) was born (6). INTERMACS is a prospective registry that collects clinical data, including follow-up, essentially as it happens. Post-implant follow-up data are collected at 1 week, 1 month, 3 months, 6 months and every 6 months thereafter. Major outcomes after implant, e.g. death, explant, rehospitalization and adverse events, are entered as they occur and also at the defined scheduled follow-up intervals.

Since the REMATCH trial, LVAD technology has vastly improved. In 2007, the NEJM published the initial findings of a continuous-flow LVAD, the HeartMate II, in patients awaiting transplant (7). This was followed in 2009 by an RCT comparing the original pulsatile devices to their new continuous-flow counterparts as destination therapy in heart failure patients not appropriate for transplant (8). These trials not only established that continuous-flow VADs could provide a bridge to transplant, but also that they performed far better than their pulsatile counterparts. In the 2009 NEJM article, Slaughter et al found continuous-flow devices had better outcomes, with 46% of patients surviving complication-free at 2 years compared to only 11% of the pulsatile group. Interestingly, the pulsatile group in 2009 performed far better than its historical counterpart from the original REMATCH trial.

It is important to remember that INTERMACS is an observational registry, and suffers from all the relevant biases of such data sets. Though survival rates have increased, without randomized controls we are unable to determine if this is due to enhancements in VAD technology or simply to improvements in heart failure management. Even with the improved outcomes and decreased adverse event rates reported in the INTERMACS database, VAD implantation and management is not a benign procedure, and the risks of mortality and complications remain high. The 30-day, 1-year and 2-year mortality rates are 5%, 20% and 30% respectively (9). 41% of patients will have some form of pump-related event in the first 30 days after implantation. In 2001, when VADs were first approved for use in patients with severe heart failure, they were implanted in the sickest of the sick. Before 2011, 64% were implanted into patients in cardiovascular shock or severe decompensated heart failure. In 2012, this number dropped to just under 54% (10). Though these patients are still well within the entry criteria first proposed by the REMATCH trial, there is certainly a trend toward implantation of VAD devices in a healthier population.

The exact cause of the increased rate of pump thrombosis is still unclear. Several explanations have been hypothesized: the change from pulsatile to continuous-flow devices, which in direct comparison had slightly higher rates of pump thrombosis (8); the recent change in anticoagulation recommendations from an INR of 2-3 to 1.5-2.5, made in response to the increased bleeding risk found with continuous-flow devices compared to their pulsatile counterparts (10); the increase in VAD use for destination therapy (9); an overall slightly sicker population; or a yet-to-be-identified mechanical defect in the HeartMate II itself. What is clear is that this increased risk of thrombosis should be considered before installing one of these devices.

For better or worse, this increased risk of pump thrombosis does very little to change our management in the Emergency Department. If a patient presents with the clinical and pump characteristics of thrombosis, they should be managed in the appropriate fashion whether their risk is 0.4% or 1.4%. What this article does provide is some guidance in the diagnosis and management of pump thrombosis; for example, the authors demonstrate a clear association between increased levels of lactate dehydrogenase (LDH) and clinically obvious pump thrombosis (3). Whether this association is strong enough to differentiate thrombosis from the baseline hemolysis that occurs with LVADs is not explored in this paper. Two small case-control studies demonstrate excellent test characteristics for LDH in the diagnosis of pump thrombosis, but these data were retrospectively fitted to the cutoffs of optimal performance and require prospective validation before their clinical utility can be assessed (11,12).
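
To make concrete why retrospective cutoff fitting flatters a test, here is a minimal sketch with entirely invented LDH values (none of these numbers come from the cited studies) of the kind of post-hoc threshold selection that produces such impressive figures:

```python
# Hypothetical illustration of why a retrospectively "optimized" LDH cutoff
# overstates test performance. All values are invented for this sketch; they
# are NOT data from the cited case-control studies.
ldh_thrombosis = [1450, 1620, 700, 2100, 1310, 1750, 980, 1990]      # U/L
ldh_no_thrombosis = [420, 610, 530, 1200, 450, 700, 640, 560, 910, 480]

def test_characteristics(cutoff):
    """Sensitivity and specificity for 'LDH >= cutoff predicts thrombosis'."""
    tp = sum(v >= cutoff for v in ldh_thrombosis)      # true positives
    fn = len(ldh_thrombosis) - tp                      # false negatives
    tn = sum(v < cutoff for v in ldh_no_thrombosis)    # true negatives
    fp = len(ldh_no_thrombosis) - tn                   # false positives
    return tp / (tp + fn), tn / (tn + fp)

# Scan every candidate cutoff and keep the one maximizing Youden's J
# (sensitivity + specificity - 1): exactly the post-hoc optimization that
# guarantees the most favorable numbers this particular sample can offer.
best = max(range(400, 2200, 50), key=lambda c: sum(test_characteristics(c)) - 1)
sens, spec = test_characteristics(best)
print(f"Optimal cutoff {best} U/L: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

Because the threshold is chosen to maximize performance on the very sample used to evaluate it, the reported sensitivity and specificity are best-case figures; a pre-specified cutoff applied prospectively will almost always perform worse.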

Very little high-quality data exists to guide us in the management of pump thrombosis, but what Starling's paper demonstrates is that we have little to offer these patients in the Emergency Department. In this cohort, 19 of the 38 patients with confirmed pump thrombosis who were managed medically died (48%). In contrast, those with confirmed pump thrombosis who were managed surgically, with either transplant or pump replacement, had mortality rates similar to those without thrombosis (16%). Ideally these patients are already optimized on warfarin therapy, so other than starting those with sub-therapeutic INRs on heparin, there is little more to offer. Some recommendations suggest giving these patients tPA, but there is no evidence of its benefit in this situation, and given their already elevated bleeding risk, the downside seems too high to justify its use. Once we have stabilized these patients and ruled out other, more imminent causes of their distress (sepsis, hemorrhage, etc.), we must provide them with the resources they need to correct the problem: specifically, someone with the capability to either replace the pump or, if appropriate, progress the patient to heart transplant.

Although the cyborgs of today are far less virile than our beloved science fiction stories promised, they are a population we will inevitably encounter. The third generation of ventricular assist devices is currently undergoing clinical testing and is in the early stages of use, with some reports boasting 1-year survival rates as high as 90% (13). Becoming comfortable with the diagnostic and resuscitative intricacies of these devices will only grow more important as they become the standard treatment for end-stage heart failure. It is a brave new world indeed…

Sources Cited:

1. Clynes ME, Kline NS. "Cyborgs and Space." Astronautics, September 1960.

2. Cyborg Foundation: http://eyeborg.wix.com/cyborg

3. Starling et al. Unexpected Abrupt Increase in Left Ventricular Assist Device Thrombosis. NEJM, November 27, 2013

4. Initial analyses of suspected pump thrombosis. 2013 (http://www.uab.edu/medicine/intermacs/images/INTERMACS_PI_and_Website_Notice_9-6-2013_2.pdf).

5. Rose EA, Gelijns AC, Moskowitz AJ, et al. Long-term use of a left ventricular assist device for end-stage heart failure. NEJM 2001;345:1435-43.

6. Kirklin et al. INTERMACS database for durable devices for circulatory support: first annual report. J Heart Lung Transplant 2008;27:1065-72.

7. Miller LW, Pagani FD, Russell SD, et al. Use of a continuous-flow device in patients awaiting heart transplantation. NEJM 2007;357:885-96.

8. Slaughter MS, Rogers JG, Milano CA, et al. Advanced heart failure treated with continuous-flow left ventricular assist device. NEJM 2009;361:2241-51.

9. Kirklin JK, Naftel DC, Kormos RL, et al. Fifth INTERMACS annual report: risk factor analysis from more than 6,000 mechanical circulatory support patients. J Heart Lung Transplant.

10. Crow S, et al. Gastrointestinal bleeding rates in recipients of nonpulsatile and pulsatile left ventricular assist devices. J Thorac Cardiovasc Surg 2009;137:208-15.

11. Shah P, et al. Diagnosis of hemolysis and device thrombosis with lactate dehydrogenase during left ventricular assist device support. J Heart Lung Transplant, article in press.

12. Uriel et al. Development of a novel echocardiography ramp test for speed optimization and diagnosis of device thrombosis in continuous-flow left ventricular assist devices: the Columbia Ramp Study. J Am Coll Cardiol 2012;60(18):1764-75.

13. Yamazaki et al. Japanese clinical trial results of an implantable centrifugal blood pump "EVAHEART". J Heart Lung Transplant 2008;27:S246.