As we wrap up the series on measurement, here are techniques for overcoming the inevitable resistance to measuring complex human services, like patient care. Most care happens in real time, in physical space, with massive quantities of emotional and narrative information that go uncaptured. The oft-maligned documentation is insufficient to capture “what’s really going on” because the map is not the territory. Observing performance is required to ensure your organization makes its promises, then keeps them. With this in mind, let’s dive in.
We’ve talked some about qualitative vs. quantitative metrics, emphasizing the power of quantitative metrics in running a company. When dealing with complex services (like healthcare), do not be so biased toward quantitative measures that you jam what should be qualitative data into numbers, resulting in painfully asinine surveys. Think of quantitative measures as the smoke: they indicate when and where a problem is occurring, but not exactly why. Through qualitative measures you find the fire: the what and why of a problem, and usually a strong clue on how to improve. When facing problems with your service, avoid the temptation to waste weeks poring over spreadsheets and whiteboarding in meetings, cooking up a solution to a problem you do not understand. Rather, spend the hours required to leave your comfort zone, go to the gemba, and figure out quickly what to do next to improve the work. Always be willing to waste hours to avoid wasting weeks.
The perennial complaint that “measuring is too much work” goes double for qualitative measurements because direct observation requires (your) effort, discomfort, and often boredom. There will always be pressure for automatic, effortless collection of measures, and this is indeed a worthy end goal, especially as you progress through product/market fit into growth and scale. In the meantime, measure systematically rather than automatically, with intention and discipline, so you can start separating signal from noise.
Let’s use a very concrete example measure of “We want clinicians to always wash their hands” to illustrate a hierarchy of systematic qualitative measurement methods, each of which will garner violent opposition from someone because it is more work for them:
Self-report/survey: Let’s ask clinicians how often they wash their hands.
This is the approach managers most commonly settle on because it puts measurement on the workers themselves (so managers don’t have to DO anything). If you are going to subject your employees to surveys and questionnaires, given all the bias risks of poorly constructed surveys, please only use that output as qualitative data, also known as reading the comments. The comments will clue you in to the solution. Attempting to jam qualitative data into quantitative data is generally a massive waste of time, designed to convince someone in the Drucker camp who does not want to be convinced of something. Management consultants, I feel your pain.
Direct observation: Let’s watch clinicians enter/exit rooms and count hand washing.
Most managers HATE direct observation because they have to do it, in real, scheduled time, and it can be boring. Those under observation may be similarly averse because they fear reprisal for failure, which is a leadership/cultural issue. Plus, while observing you cannot help but meddle with people, directly or indirectly. One mitigating approach is to make use of recordings (within the bounds of the law) and then review those recordings together with multiple team members. These “watching parties” provide multiple perspectives, accountability for actually watching the recording, and a chance for you to inject cultural norms by praising excellent service. Complex behavior requires complex observation. You cannot (easily?) reduce great service to numbers. Go watch your people. Listen, think, ask questions, learn, and then remove whatever is in the way.
Correlating measurement: Let’s track how much hand sanitizer is ordered monthly.
This is where measurement starts to get cool and can actually be quantified and automated. When you have enough observation data to build a model of the system you’d like to study, you can start to find correlations between things that are easy to measure (hand sanitizer ordering) and that which you’d like to measure (hand washing). The leap from direct observation to correlation is perilous, but worthy if done properly.
Everyone loves to make the computers do the work, except the people who make the computers do the work. Your engineering and data people are pushing back on your desire to automate your half-baked measures not because they are lazy, but because they understand that implementing poorly defined metrics is nearly impossible and potentially catastrophic to your organization. So don’t bother with automatic collection until you are certain a metric is tested, well-defined, and useful.
One more thing about metric types, specifically quantitative metric types: avoid percent thresholds. They are appealing because they are easy to capture and do not require knowing your current performance (which you never will, because measurement is hard). For example, suppose we want to measure rapid patient access, so we choose a metric of “80% of our patients should have an appointment within 7 days of calling.” Okay, sounds great, right? Let’s forget the pathetic state of healthcare in which that is a bold metric. According to your metric, if 79% of your patients get appointments within 1 day, but 21% get them on day 8, you are failing, despite actually doing a great job. In contrast, if 81% of your patients get appointments in 6 days, and 19% get them in 6 weeks, you can declare victory, despite doing a terrible job of ensuring patient access. While this may seem like an extreme or trivial example, percent-threshold measures are proposed so often and obscure reality so subtly that they warrant a blanket warning. You would be better off watching a trailing average of time between call and appointment to see when it starts to tick up, indicating a problem. Extra points if you understand the distracting outliers that often plague all of healthcare service operations.
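To make the arithmetic concrete, here is a small sketch using two hypothetical 100-patient cohorts mirroring the example above, comparing the percent-threshold metric against the average wait it obscures:

```python
def pct_within(waits, days=7):
    """Percent-threshold metric: share of patients seen within `days` days."""
    return sum(w <= days for w in waits) / len(waits)

def mean_wait(waits):
    """What a trailing average tracks: average days from call to appointment."""
    return sum(waits) / len(waits)

# Hypothetical cohorts of 100 patients each, days from call to appointment.
failing_but_great = [1] * 79 + [8] * 21    # 79% seen in 1 day, 21% on day 8
passing_but_awful = [6] * 81 + [42] * 19   # 81% seen in 6 days, 19% in 6 weeks

print(pct_within(failing_but_great))  # below the 80% threshold -> "failing"
print(mean_wait(failing_but_great))   # yet the average wait is under 3 days

print(pct_within(passing_but_awful))  # above the 80% threshold -> "victory"
print(mean_wait(passing_but_awful))   # yet the average wait is nearly 2 weeks
```

In production you would compute `mean_wait` over a trailing window (say, the last 30 days of appointments) so that a rising average flags a problem as it emerges, rather than after a quarter's threshold report.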
After four posts on measurement, the takeaway is this: decide on a few clear measures and hold them fast. Quickly implement initial metrics, defer target-setting to those closer to the work, and change the metrics as needed. Learn from the process of measurement and from changes in the results, then iterate rapidly until the data output starts to become useful. Now that we are through measurement, it is time to delve into goal setting!