On 13 April 2014 Mo Farah ran his first full marathon in London. He finished in 2:08:21, an average pace of around 5 min per mile. These were probably the slowest miles he had run for some time: in a 10 km race he would sustain a significantly higher pace, constantly pushing to get closer to 4 min per mile. So was his pace in the marathon good or bad? It probably depends on whether one measures it against the speeds of short-, middle- or long-distance runners, or whether one makes winning the race the prime outcome measure. Even with these caveats, quantifying performance is comparatively easy in the world of athletics and significantly more complex in the world of Medical Emergency Teams (METs) or Rapid Response Teams (RRTs).
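As an aside, the pace quoted above is easy to verify with a few lines of arithmetic; the helper below is purely illustrative.

```python
def pace_per_mile(hours, minutes, seconds, distance_miles):
    """Return average pace per mile as a (minutes, seconds) tuple."""
    total_seconds = hours * 3600 + minutes * 60 + seconds
    sec_per_mile = total_seconds / distance_miles
    return int(sec_per_mile // 60), round(sec_per_mile % 60)

# 2:08:21 over the 26.22-mile marathon distance works out to roughly
# 4 min 54 s per mile, i.e. "around 5 min per mile" as stated above.
print(pace_per_mile(2, 8, 21, 26.22))
```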
The current edition of Resuscitation contains a review of the performance of what can only be described as ‘the original MET team’. Ken Hillman’s unit at Liverpool Hospital in Sydney, Australia, has influenced the way we see patient safety in hospitals more than almost any other team in the world. The Australian MET reviewed more than 19,000 patients between 2000 and 2012, enough to populate a small town. Given that the team had already been at work for nearly 10 years at the start of the study period, it is perhaps unsurprising that there was no further major reduction in the already low rates of cardio-pulmonary arrest or unplanned Intensive Care admission. Hospital mortality decreased by 20% between 2005 and 2012: overall impressive results, and a hospital where most readers would feel safe.
Let us assume for a moment that Australia hits a deep recession and administrators at Liverpool Hospital want to cut costs. Do the data show that the MET kept cardiac arrests at a low rate and pushed down mortality? Is it worth keeping the investment in the service? The authors of the paper imply that the MET’s performance is linked to the good results of the hospital, but in a fierce discussion this would be a more complicated argument. Did mortality in other hospitals in Australia also improve? Probably, but most of them would have implemented MET teams around 2005, so the comparison is tricky. Were there other initiatives that might have had an impact on mortality? Surgical checklists? The Surviving Sepsis Campaign? Better training of doctors and nurses?
The Institute for Healthcare Improvement’s (IHI) 100,000 Lives Campaign concluded in 2006 that RRTs were a critical ingredient in the package of measures that helped participating units to reduce standardised mortality rates. In the UK, the Safer Patients Initiative led to the implementation of Rapid Response Teams and other patient safety interventions in a number of hospitals. Despite improvements in mortality in the participating units, a wider evaluation in 2009 showed similar results in non-participating units. Did the METs not make a difference? Or was the profile of the interventions so high that everybody implemented them at around the same time? Or did they ‘just’ influence organisational culture?
How can we use data to work out which of these answers is correct and to show that METs are the reason for improvements in the outcomes of hospitalised patients? The data collection that the literature recommends for METs is almost certainly unworkable without automated data capture. In the world of the IHI, three types of data would be required to show the effectiveness of an intervention. Outcome measures represent the voice of the patient and usually reflect whether the intervention is an improvement: what are we trying to achieve? Most RRTs would aim for reductions in hospital mortality, avoidable death and preventable cardiac arrests. But how about measuring length of hospital stay for patients with physiological abnormalities? There is a reasonable body of evidence that delayed treatment drives up days in hospital. Process measures evaluate whether systems are working as planned and whether there is uptake of the intervention: how many patients have potential activation criteria for a MET, and how many receive the intervention? Do staff record vital signs, recognise abnormality and report to the MET, and does the MET respond appropriately? Last are balancing measures: does our intervention have unintended consequences in other parts of the system? Is the ICU swamped by patients who have been identified by the MET and would not have needed ICU before? Is hospital mortality actually improved? Are patients with other needs showing signs of neglect? Is overzealous use of fluids and antibiotics linked to iatrogenic pulmonary oedema or the development of multi-resistant bacterial strains?
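To make the three measure types concrete, one could imagine a minimal per-call record along the lines of the sketch below. All field names are hypothetical and not drawn from any published MET data set; the point is only that process, outcome and balancing measures can live in one simple structure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetCallRecord:
    # Process measures: was the system used as planned?
    vital_signs_recorded: bool
    activation_criteria_met: bool
    met_activated: bool
    minutes_to_response: Optional[float]
    # Outcome measures: what happened to the patient?
    survived_to_discharge: Optional[bool]
    length_of_stay_days: Optional[float]
    # Balancing measure: unintended load elsewhere in the system?
    unplanned_icu_admission: bool

def activation_reliability(records):
    """Proportion of patients meeting criteria who actually got a MET call."""
    eligible = [r for r in records if r.activation_criteria_met]
    if not eligible:
        return 0.0
    return sum(r.met_activated for r in eligible) / len(eligible)
```

A metric such as `activation_reliability` is one way of expressing the process question above (how many patients with potential activation criteria receive the intervention?) as a single number per ward or per week.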
Given the limited and non-standardised data sets that most METs collect, it is often difficult to quantify the part that METs and Rapid Response Systems play in reducing mortality. What could a sensible minimum data set look like? As a working hypothesis we would have to assume that the effect of a MET lies in faster treatment of deteriorating patients with reversible pathology, or that by raising awareness of acute physiological deterioration fewer patients get anywhere near a state of physiological instability. Additionally, we would need to show that the majority of patients with abnormal vital signs or other features of physiological deterioration receive appropriate and timely care from the MET, with subsequent clinical improvement.
How about failure to rescue? The assumed cause of many adverse events is failure to activate the MET. This number of missed opportunities is even more difficult to quantify, but the progress that the implementation of electronic patient records is making in the US, following the Affordable Care Act, represents a major opportunity to quantify the number of potential events and the proportion in which team activation occurred.
Very few teams currently hold enough data to document both reliable activation and the improvement of deteriorating patients. For METHOD (Medical Emergency Teams Hospital Outcomes in a Day) we recently challenged teams to collect this type of pragmatic data for a week; 51 teams from three continents took up the challenge in February 2014. First results will be shown at the international Rapid Response meeting in Miami in May. But it is already clear that there is a need for a pragmatic data collection format that captures the outcomes of MET calls. Only then will we be able to say whether the system’s performance was good, adequate or just too slow, and which team is getting close to the MET equivalent of the 5-min mile.
Chris Subbe is a Principal Investigator of a study sponsored by Philips Healthcare.