There Are No Outliers in Healthcare Benchmarking … Only Real People
July 15, 2014 | Delia Caldwell, MBA
For emergency departments engaged in healthcare benchmarking, it might seem natural to focus on tracking median statistics for patient care — such as Time to Provider and Length of Stay for Discharged and Admitted patients.
But relying solely on median numbers, in my experience, tells only part of the story — and a slightly skewed one at that.
Unless hospitals also gather statistics on the averages for each of these measures, they’re probably seeing a rosier picture of their operations. More to the point, they run the risk of losing sight of the patients whose experience was far worse than the norm — that is, what some people call the “outliers.” It may seem reasonable to ignore or discount the statistics concerning the outliers. But each of these statistics is actually a person … a person whom the ED may have failed to serve properly.
It’s no surprise that healthcare benchmarking efforts might tend to focus more on medians than averages. For one thing, CMS itself requires regular ED reporting in medians, not averages. In addition, many directors and managers, given the choice between two analyses of their ED’s performance, would naturally prefer the numbers that look better.
Another factor to consider is the all-too-common misunderstanding about the differences between median and average. In fact, many people use the terms almost interchangeably, and the numbers can indeed be very close; but other times, they can be quite different.
Tracking the differences between median and average
If you already understand the distinction between median and average, please skip to the next section. If not, here’s a quick illustration.
Imagine you’re coaching a track team and you want to get a sense of how fast your six runners are, overall, in the mile event. You start with their best individual times in the event:
3:58 | 4:02 | 4:05 | 4:47 | 6:10 | 7:15
You could calculate the runners’ median time (that is, the point where half the team is faster and the other half slower) as 4:26 — a pretty respectable time. But the average time is a good bit slower: 5:03. Both calculations are based on accurate math, but one suggests the team is significantly faster than the other does. More to the point, as the coach, if you look only at the median time, you might be tempted not to worry as much about your two slowest runners. After all, they’re just outliers, right?
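The arithmetic above can be checked in a few lines of Python using only the standard library. The helper functions below are my own, written just for this illustration; note how the two slowest runners pull the average well above the median.

```python
from statistics import mean, median

def parse_time(t):
    """Convert an 'M:SS' time string to total seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + int(seconds)

def format_time(total_seconds):
    """Convert seconds back to an 'M:SS' string, rounded to the nearest second."""
    total_seconds = round(total_seconds)
    return f"{total_seconds // 60}:{total_seconds % 60:02d}"

# The six runners' best mile times from the example above.
times = [parse_time(t) for t in ["3:58", "4:02", "4:05", "4:47", "6:10", "7:15"]]

print(format_time(median(times)))  # 4:26 -- half the team faster, half slower
print(format_time(mean(times)))    # 5:03 -- pulled upward by the two slowest runners
```

Because there is an even number of runners, the median is the midpoint between the third- and fourth-fastest times; the average, by contrast, weighs every time equally, so the 6:10 and 7:15 performances move it substantially.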
Finding and understanding variations
In my consulting work with EDs, I’m generally focused on streamlining processes and reducing process variation. I need to find the data that falls outside the “normal” range, and then try to determine what caused the deviations. Perhaps a patient waited four hours to see a provider, or spent a total of three days in the ED. My job is to find out what happened with each of these patients.
In some cases, there are legitimate reasons to regard certain patients truly as outliers, and that may justify omitting them from my analysis. In other cases, I need to find the causes of the sub-optimal results, so that our team can develop ways to prevent them from happening again.
This is why I typically calculate both the median and the averages for each key statistic. I also strive to gather a broader set of data: preferably at least a year’s worth, or even more if we’re projecting further out than a few years. The bigger the data set, the more likely I’ll find outliers — and these are the examples that often hold the key to process improvements that can benefit all patients. In my experience, a lot of firms in our industry don’t do as deep a data dive in the course of their healthcare benchmarking. Instead, they place more emphasis on interviewing staff and examining aggregate data.
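To make this concrete, here is a small sketch with invented discharge Length of Stay figures (in minutes). The numbers and the 90th-percentile cutoff are my own, chosen only to show how a long tail moves the average well away from the median, and how flagging the tail surfaces the individual cases worth investigating.

```python
from statistics import mean, median, quantiles

# Hypothetical lengths of stay (minutes) for discharged ED patients.
# These values are invented for illustration only.
los = [95, 110, 120, 130, 140, 150, 160, 170, 185, 200,
       210, 225, 240, 260, 290, 330, 420, 610, 980, 1440]

print(f"median LOS:  {median(los):.0f} min")   # 205 min
print(f"average LOS: {mean(los):.0f} min")     # 323 min -- the tail pulls it up

# Flag the "long tail": patients beyond the 90th percentile.
# Each flagged value is a real patient whose visit deserves a closer look.
cutoff = quantiles(los, n=10)[-1]              # 90th-percentile cut point
outliers = [x for x in los if x > cutoff]
print(f"cases beyond the 90th percentile: {outliers}")
```

Here the median (205 minutes) looks reassuring, while the average (323 minutes) hints that something is wrong; only by listing the tail cases individually can you start asking what actually happened to each of those patients.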
Insights from the “long tail”
Often organizations are surprised at the results of analyses of statistical averages — and especially the “long tails” of data about patients whose experiences deviate greatly from the norm. Our analyses help us to not only identify and quantify the problems, but also determine the underlying causes and recommend changes to address them.
My point is that before an ED starts making decisions about staffing changes, or about expanding or reconfiguring the department’s footprint, it’s worth examining a broad dataset of healthcare performance. Only then can the department truly “take ownership” of every patient.
Just as the track coach in the example above can’t simply ignore his two slowest runners, an ED needs to figure out ways to bring the most serious causes of delays or inefficiencies into line. It’s not enough to just track the required data and focus on the most positive numbers. Rather, real improvement in efficiency and patient care comes from digging down to find the stories and details of the “outliers” … and then making the process improvements necessary to prevent them from happening again.
ABOUT THE AUTHOR: Delia Caldwell, MBA
Delia Caldwell works with clinical staff to put the systems and processes in place they need to improve care and save lives. Through simulation modeling, process mapping, and dashboard tools she helps departments reduce LOS, improve patient outcomes, and streamline operations. A skilled facilitator, Delia guides organizations through change, using data to demonstrate that her recommendations will improve productivity and efficiency. With more than 85 operational studies completed, her efforts have redefined the way that providers deliver care. Read more from Delia.