A Painless Look at Using Statistical Techniques to Find the Root Cause of a Problem
Editor's Note
There are three basic data analysis concepts that must be understood by all managers and executives, namely: (1) homogeneity; (2) detecting significant variation in performance metrics; and (3) disaggregating non-homogeneous data into meaningful subgroups.
Sound complicated? Probably. But this article hopes to demystify these highfalutin statistical notions and illustrate how they interrelate when attempting to find the root cause of an identified problem.
Stay with it. It's really painless. And if you understand what is said, you're going to be well on your way to achieving real success in continuous productivity improvement programs.
The business press is not aware of it. Neither is corporate America. National magazines deny its existence. And public service organizations think it’s a process by which to extend the shelf life of milk.
It is called homogeneity. It is seldom mentioned in data analytics courses and similar learning programs, even those claiming to be "practical, down-to-earth and hard-hitting."
Yet without an understanding of this concept called "homogeneity," data analytics programs of all kinds will fail to alert conscientious learners to one of the major "watch-outs" of data analysis.
Before defining the homogeneity concept, let’s define the purpose of statistical methods. Here are some of the best definitions:
- Making sense out of variation
- Dealing with variation
- The study of variation
So far, so good. You now know "statistical methods make sense out of variation." Do us a favor. Memorize it. Don't keep reading until you can say it three times without looking at this page. "Statistical methods make sense out of variation." That's great!
Preface to Homogeneity
Everyone talks about performance measurements. We will discuss what to measure and how to measure in a future article. But for now, it is important to understand the difference between a descriptive statistic that is a measure of performance, and a performance measurement methodology.
Descriptive measures do not lead to improvement in productivity. Take, for example, statistics on accidents: They tell you about the number of accidents in the workplace, in the home and on the road.
Indeed, one can even determine if there is a trend in the frequency of accidents using data collected on accidents over time. But this statistic cannot, nor does it pretend to, tell you how to reduce the number of accidents.
A statistics-based performance measurement methodology (prescriptive analytics), if properly used, can locate the root cause of, say, accidents and point the way toward corrective action that reduces their frequency.
This "descriptive statistic" has its uses. But you would be terribly irresponsible and totally misguided if you thought a descriptive statistic provides a prescriptive answer.
In short, what can be done to lower the accident rate? There is a world of difference between "knowing" a problem exists and "doing" something about it.
Is Your Data Overly Aggregated?
Assume it is! You can never go wrong with this mindset. Although most people are not aware of it, the key to finding the root cause of many problems is to "disaggregate" an aggregate number.
Find relevant sub-aggregates or sub-groups. This requires thinking and knowledge of a particular situation.
The average SAT scores of a given university can be subdivided by different schools within the university—namely, the engineering school, the business school, the nursing school and the like.
Instinctively, you know there will be variation in SAT scores among the different schools within the university. Those specialty schools are the subgroups in this situation.
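To make the idea concrete, here is a minimal sketch in Python using pandas. The school names and scores are entirely hypothetical; the point is only how one aggregate number hides subgroup variation.

```python
# A minimal sketch of disaggregation, using hypothetical SAT data.
import pandas as pd

scores = pd.DataFrame({
    "school": ["Engineering", "Engineering", "Business",
               "Business", "Nursing", "Nursing"],
    "sat":    [1450, 1410, 1280, 1320, 1210, 1250],
})

# The aggregate average is one number that conceals the spread across schools.
print("Aggregate average:", scores["sat"].mean())

# Disaggregating by school (the subgroup) reveals the variation.
print(scores.groupby("school")["sat"].mean())
```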
Most people, when given this illustration, usually respond with: "I knew that." And we do not doubt them. Sounds so easy.
Yet in many organizational situations, the simplicity of finding relevant subgroups is elusive. People jump to the wrong conclusions because they fail to think through the relevant subgroups.
A Quick Digression
Before the advent of high-speed computing, the "aggregation" problem was an impediment to management analysis, planning and control. Computers eliminated the aggregation problem (or so it seemed).
Data could now be obtained at the correct level of aggregation. That it could be obtained didn't mean it was obtained.
For 50 years, information technology has centered on data—their collection, storage, transmission and presentation.
The focus was on the "T" in IT. Today's new emphasis is on the "I." Data analytics forces organizations to ask, "What is the meaning and purpose of information?"
Back to Our Discussion...Subdivide, Subdivide, Subdivide
The percentage of delayed flights for a given airline is an "aggregate number." If delayed flights are subdivided into groups representing delays at five airports, the resulting set of numbers represents "sub-aggregates" or "partitions."
The degree of aggregation in data refers to the level of detail or refinement in data. A high level of aggregation conceals differences between and among subgroup categories.
Most people realize "analysis of disaggregated data" may reveal important problems.
For example, even if the total number of delayed flights increased, delays may have decreased at four of the five airports.
One airport may account for the overall increase because of, say, extremely bad weather or other assignable causes.
When that one airport's increase is offset by decreases at the others, information presented in highly aggregated form conceals what is happening and prevents remedial action.
(Without going into detail in this article, this is the core idea underlying Pareto analysis—one of the most powerful tools in the manager/executive toolbox.)
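Both ideas fit in a few lines of Python. The airports and counts below are hypothetical; the sketch shows how the aggregate change and the subgroup changes tell different stories, and how ranking subgroups by contribution is the heart of Pareto analysis.

```python
# A minimal sketch: an aggregate increase can conceal offsetting subgroup
# movements. Airports and delay counts are hypothetical.
delays_last_year = {"ATL": 120, "ORD": 110, "DFW": 100, "DEN": 90, "LAX": 80}   # 500 total
delays_this_year = {"ATL": 100, "ORD": 95,  "DFW": 90,  "DEN": 85, "LAX": 180}  # 550 total

print("Aggregate change:",
      sum(delays_this_year.values()) - sum(delays_last_year.values()))  # +50

# Disaggregated view: four airports improved; one drives the entire increase.
for airport in delays_last_year:
    change = delays_this_year[airport] - delays_last_year[airport]
    print(f"{airport}: {change:+d}")

# Pareto-style ranking: which subgroup contributes most to this year's delays?
total = sum(delays_this_year.values())
for airport, n in sorted(delays_this_year.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{airport}: {n / total:.0%} of all delays")
```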
Looking for Significant Differences Between and Among Subgroup Categories
Let's take a very simple example. Someone tells you, "The invoice error rate equals 10 percent." You shake your head and mumble something to the effect of, "That seems rather high."
The basic question is whether the aggregate number or percentage (the result for the total group) conceals differences among subgroups. For example, can the aggregate statistic (10 percent incorrect invoices) be subdivided into subgroups?
Definitely. One basis for subdivision is the two shifts doing the invoicing. For example:
Table One

| Shift | Invoice Error Rate | Number of Invoices Processed |
| --- | --- | --- |
| 8 a.m. - 4 p.m. | 0 percent | 1,000 |
| 4 p.m. - 12 a.m. | 20 percent | 1,000 |
| Total | 10 percent | 2,000 |
This subgrouping reveals a significant difference in invoice error rates. All incorrect invoices occurred during the second shift, which had a 20 percent error rate.
Since the number of invoices processed is equal, it is unnecessary to use a weighted average to arrive at the average invoice processing error rate.
In this case, we add the invoice error rates and divide by the number of rates ((0 percent + 20 percent) / 2 = 10 percent).
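As a quick check in code, here is a minimal sketch of the unweighted versus weighted calculation. The second set of counts is hypothetical, added only to show why weighting matters once shift volumes differ.

```python
# Unweighted vs. weighted average error rate for the two shifts.
rates  = [0.00, 0.20]   # error rates: first shift, second shift
counts = [1000, 1000]   # invoices processed per shift

unweighted = sum(rates) / len(rates)
weighted   = sum(r * n for r, n in zip(rates, counts)) / sum(counts)
print(unweighted, weighted)   # both 0.10 here, because the volumes are equal

# With unequal (hypothetical) volumes, the two answers diverge,
# so the average must be weighted by invoice counts:
counts = [1500, 500]
weighted = sum(r * n for r, n in zip(rates, counts)) / sum(counts)
print(weighted)               # 0.05, not 0.10
```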
This Process Can Be Extended to Several Additional Subdivisions
By introducing additional subdivisions, and finding significant differences or variations among subgroups, we can move closer to determining the "root cause" of the incorrect invoices.
For example, the experience level of the employees working the two shifts supplies a possible explanation for the excessive number of incorrect invoices. In this organization it was discovered that all new workers were routinely assigned to the second shift.
Table Two

| Experience Level | Invoice Error Rate | Number of Invoices |
| --- | --- | --- |
| Experienced People | 0 percent | 500 |
| Inexperienced People | 40 percent | 500 |
| Total | 20 percent | 1,000 |
If the subdivision process stopped at this point, Table Two would indicate that new workers were responsible for all the incorrect invoices, and it would appear that more intensive training of new billing staff could dramatically reduce the error rate.
However, if we introduce one additional basis for subdivision, we learn more about the "root cause" of incorrect invoices. What is the composition of the new invoice people on the second shift? That is, can the subgroup be subdivided on the basis of some other characteristic not yet under investigation?
Table Three indicates that inexperienced billing staff are of two kinds: 1) full-time employees and 2) part-time temporary staff.
Table Three

| Inexperienced People | Invoice Error Rate | Number of Invoices |
| --- | --- | --- |
| Full-time Employees | 0 percent | 250 |
| Part-time Employees | 80 percent | 250 |
| Total | 40 percent | 500 |
Table Three reveals that part-time temporaries are the cause of the incorrect invoices. Why? The full-time staff received thorough training in the invoicing policies and procedures.
But—and this is a very big "but"—the part-timers are brought in during peak shipping periods and are often thrown at the problem, receiving no training.
That this happens shouldn't surprise you. However, it often gets overlooked when managers confront only aggregated numbers.
People see what is presented to them; what is not presented tends to be disregarded. And what is presented, more often than not, are problems couched in aggregate data, especially in areas where performance falls below expectations. As a result, managers tend not to see the potential cause of the problem.
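The entire drill-down from Table One through Table Three can be sketched with grouped summaries. This is a minimal sketch with hypothetical row-level detail; in particular, it assumes the first shift is staffed entirely by experienced full-time employees, which the tables above do not specify.

```python
# Successive subdivision: one row per subgroup, with error and volume counts
# that mirror Tables One through Three (first-shift composition is assumed).
import pandas as pd

invoices = pd.DataFrame([
    ("first",  "experienced",   "full-time",   0, 1000),
    ("second", "experienced",   "full-time",   0,  500),
    ("second", "inexperienced", "full-time",   0,  250),
    ("second", "inexperienced", "part-time", 200,  250),
], columns=["shift", "experience", "employment", "errors", "invoices"])

def error_rate(grouped):
    # Works on the whole frame (scalar) or on a groupby (one rate per subgroup).
    return grouped["errors"].sum() / grouped["invoices"].sum()

print("Aggregate:", error_rate(invoices))                      # 0.10
print(error_rate(invoices.groupby("shift")))                   # second shift: 0.20
print(error_rate(invoices.groupby(["shift", "experience"])))   # inexperienced: 0.40
print(error_rate(invoices.groupby(
    ["shift", "experience", "employment"])))                   # part-timers: 0.80
```

Each additional grouping key narrows the search until the 10 percent aggregate resolves into a single subgroup with an 80 percent error rate.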
The Root Cause, Once Identified, Can Be Eliminated via Management Action
How can the number of incorrect invoices be eliminated? Less dependence on part-timers in critical areas, and more effective training of the temporaries who are used. This particular organization stopped hiring temporaries for billing, and invoice costs dropped 80 percent.
Management must change its policies. In this case, the hiring of office temporaries was eliminated. Management took action to remedy the problem.
There was a corresponding savings in the customer support area. Customers no longer had to deal with chronically incorrect bills.
(This example is a bit oversimplified, but it is based on an actual case. Jim Harrington, a Deming missionary who worked at IBM for 35 years, estimates that 50 percent of the costs of every billing system are attributable to "screw-ups." Why continue to pay that money?)
Now We Can Define Homogeneity
The critical question: Can the data be subdivided on the basis of some characteristic other than the one under investigation, so as to reveal differences among subgroups?
If no basis for subdividing your data reveals differences among subgroups, the data can be classified as "homogeneous." If, however, one discovers a basis for subdividing that reveals differences, the data set is "heterogeneous" or "non-homogeneous."
Aggregation has the effect of ignoring relevant subgroups. Conclusions that seem obvious when we look only at aggregated data become debatable when we examine "lurking variables" or relevant subgroups.
Homogeneity, Variability, and Statistical Procedures
Let’s review:
- An aggregated performance measurement is of limited diagnostic value.
- Through the process of isolating and analyzing variation among relevant subgroups, you can locate the "root cause" of the problem.
- Management action is required to deal with the "root cause" of the problem.
- Faulty conclusions and policies flow from treating a data set as homogeneous when it is not homogeneous with respect to the performance measurement under investigation.
Conclusion
Statistical procedures detect significant variation among subgroups. If thoughtful subdivision of a data set reveals significant differences in a performance characteristic, the reasons for the variation must be investigated.
After the "causes" of the variation are discovered and eliminated, the performance measurement improves.
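For readers who want one concrete procedure, here is a minimal sketch of a chi-square test of homogeneity using SciPy, applied to the shift counts from Table One. This is an illustration of one common test, not the only way to check for significant subgroup variation.

```python
# A chi-square test of homogeneity on the Table One counts:
# does the error rate really differ between the two shifts?
from scipy.stats import chi2_contingency

#           errors  correct
table = [[     0,    1000],   # first shift:  0 percent of 1,000
         [   200,     800]]   # second shift: 20 percent of 1,000

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.3g}")
# A tiny p-value says the shifts differ by far more than chance alone
# would produce: the data are not homogeneous, and the subgroup
# (the second shift) deserves the investigation described above.
```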