It's a well-worn axiom that you can't manage what you don't measure.
Sounds simple enough.
But there's a catch and a corollary. The catch is, what exactly should you measure? Warehouse and distribution center managers can and often do gather a vast amount of performance data on their operations, but the ability to use that information to make management decisions is finite. In order to prevent all the data from becoming so much noise, managers have to determine which measures are, first, reliable, and second, most useful, and then concentrate on those. That brings us to the corollary: Only capture and report metrics that you know are good.
In order to understand which metrics are most useful to warehouse and distribution center managers, DC VELOCITY and the Warehousing Education and Research Council (WERC) sponsor an annual warehouse benchmarking study. The study is made possible through the support of Ryder System and the Staubach Company. Now in its fifth year, the survey asks DC VELOCITY readers and WERC members what metrics they use and how they use them.
The survey, conducted by study partners Georgia Southern University and the consultancy Supply Chain Visions, aims not only to determine which metrics are used most often, but also to provide clear definitions of those metrics and to suggest ways that managers can use them. In addition, the study results offer benchmarks against which managers can compare their own companies' performance. And now that we have five years' worth of data, we can begin to see some trends in just how well warehouses and DCs are performing against the measures managers consider most important.
Getting better all the time
What we've learned is encouraging. Managers are getting better at using metrics, and it's evident in improved performance over the course of five years. We've seen a marked improvement in companies' ability to deliver what's known as the Perfect Order—one that arrives complete, on time, damage free, and accompanied by the correct documentation and invoice. Senior executives are supporting—even demanding—the use of performance measures at the companies they run. And forward-thinking companies in some industries—grocery retailers, in particular—are beginning to introduce measures across organizations, quite literally thinking outside the warehouse or DC box.
Furthermore, we've seen performance improve across companies of all sizes. As technology for capturing performance data becomes more accessible, the amount of information available to improve warehouse operations continues to expand.
This year's survey was launched via an e-mail invitation to WERC members and DC VELOCITY readers in early January 2008. Survey participants were asked to report their actual performance in 2007 against 50 key operational metrics. We analyzed the results by industry, type of operation (pallet picking, partial pallet picking, full case picking, or broken case picking), business strategy, type of customer served, and company size.
False confidence or measured confidence?
One thing we have seen remain relatively consistent from year to year is the list of most widely used metrics—the measures that are common to businesses large and small and across disparate industries. Exhibit 1 shows the 10 most popular metrics, along with the percentage of respondents who are using each one and its rank in last year's survey. New to the top 10 list this year: distribution costs as a percentage of sales. Dropping out of the top 10 this year: order cycle time.
What has also remained consistent is that respondents by and large are confident that they are doing a good job of serving their customers. For example, we asked survey participants how their customers would rate their performance against several service-oriented metrics, including on-time delivery, order cycle times, and accuracy of invoicing. In the case of on-time delivery, fully 84 percent responded that their customers would rate them as average or above average. (We'll come back to this issue of on-time delivery in a moment, as it has proved to be a thorny one.) It was a similar story with cycle times and invoice accuracy; roughly 70 percent indicated that their customers would rate their performance in these areas as average or above average. Now, basic math tells us that when we benchmark performance, half the respondents will be above the median score and half below. By definition, a majority cannot be above average. What's the story?
One possibility is that however statistically improbable it may seem, the respondents' assessment is essentially correct. By joining an educational organization like WERC and reading professional journals like DC VELOCITY, the survey participants have demonstrated an interest in staying abreast of industry trends and improving their operations, and they may in fact be above average. However, it's also possible that when it comes to assessing their own performance, these managers are going more by their gut than by hard data.
Take that issue of on-time delivery. What makes the respondents' apparent confidence in their on-time delivery performance surprising is that a sizeable percentage of them don't even measure their actual performance—"on-time delivery" does not appear on the list of the 10 most widely used metrics. What DCs are far more likely to measure is on-time shipment, which is an altogether different matter. A lot can happen between the time a shipment leaves the dock and the time it's delivered.
Further complicating the issue is the apparent lack of consensus regarding what constitutes an "on time" delivery. Roughly 55 percent of the respondents indicated that their customers defined "on time" as simply a delivery on the requested or agreed-upon day, but 28 percent said that "on time" meant delivery on or before an appointed time. The rest defined "on time" as being within a 15-minute to 1-hour window of that appointed time. Add to that the likelihood that each customer has a different definition, and it becomes clear why measuring on-time delivery can be at best problematic. We believe that this lack of agreed-upon standards and definitions goes a long way toward explaining why some suppliers have difficulty measuring "on time" when it comes to delivery.
Those respondents who did measure on-time delivery reported that, by and large, their delivery performance is comparable to their shipping performance. Sixty percent said that their shipments arrived on time 95.8 percent of the time or better. But our concern here is related to where the delivery performance rating is coming from. Is it a verifiable measure based on the actual time of delivery to the customer's facility, or are the respondents simply assuming that if the phone doesn't ring, all is well?
Just perfect
Though "on time" delivery may remain an elusive measure, the survey results still provide plenty of hard data on DC performance—data that companies can use to determine how they're performing relative to others. Exhibits 2 through 6 present the latest performance data—both median and best-in-class performance numbers across a range of metrics. (We chose median rather than the mean as it is not easily swayed by outliers. "Best in class" is defined here as the responses from the top 20 percent—that is, the companies that are performing the best against each metric.) Because of the large number of metrics included in the survey, we have divided them into five groups based on type of measurement: customer service, operations, financial, capacity/quality, and employee.
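To make the two summary statistics concrete, here is a small sketch using invented scores (the data and variable names are ours, not the survey's), assuming a metric where higher is better:

```python
import statistics

# Hypothetical on-time shipment scores (percent) from 10 facilities.
scores = [88.0, 91.5, 93.0, 94.2, 95.0, 95.8, 96.5, 97.1, 98.0, 99.3]

# The median is the middle value, so it is not easily swayed by outliers.
median_score = statistics.median(scores)

# "Best in class": the top 20 percent of responses for this metric.
k = max(1, len(scores) // 5)
best_in_class = sorted(scores, reverse=True)[:k]
best_in_class_median = statistics.median(best_in_class)
```

For a metric where lower is better (cost per unit shipped, for example), the best-in-class group would be the bottom 20 percent instead.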
Of course, comparing any DC's performance against the benchmarks provides only a partial measure of its service. The true measure is how its customers perceive its service. Or to put it another way, all is for naught if the customer is not happy.
That's why it is worth looking beyond the most commonly used metrics to those that we associate with the Perfect Order and which are used to compute the Perfect Order Index (POI). (See Exhibit 7.)
We have historically defined the Perfect Order as one that is delivered complete, on time, damage free, and accompanied by the correct documentation and invoice. To calculate a company's score on the index, you simply take those four metrics (expressed as percentages) and multiply them together. For instance, if your company were performing at a 95 percent level in all four measures, your Perfect Order Index score would be 81.5 percent (95 x 95 x 95 x 95).
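The arithmetic can be sketched in a few lines (the function name and figures here are illustrative, not part of the study):

```python
def perfect_order_index(complete, on_time, damage_free, correct_docs):
    """Multiply the four component rates (as fractions) to get the POI."""
    return complete * on_time * damage_free * correct_docs

# A company performing at 95 percent on all four components:
poi = perfect_order_index(0.95, 0.95, 0.95, 0.95)
print(round(poi * 100, 1))  # about 81.5 percent
```

Note how quickly the multiplication compounds: four components that each look strong on their own still yield a Perfect Order score well below any one of them.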
This year, we asked respondents to calculate their score for us. We then analyzed their responses to determine the median performance score (we decided to use the median—rather than the mean, or average—because it's less likely to be skewed by very high or low numbers). When we used this method of calculation, we found that the median performance against the Perfect Order Index was 97.5 percent. However, when calculated from the median response for each individual component of the Perfect Order, the index was a not-so-perfect 94.1 percent.
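The gap between the two figures is a natural consequence of how the index is built: the median of the companies' own multiplied-out scores need not equal the product of the component medians. A small sketch with invented data (ours, not the survey's) illustrates the difference:

```python
import statistics

# Hypothetical responses: each row is one company's
# (complete, on-time, damage-free, correct-invoice) rates.
responses = [
    (0.99, 0.98, 0.99, 0.97),
    (0.95, 0.90, 0.98, 0.96),
    (0.97, 0.99, 0.96, 0.99),
]

# Method 1: median of each company's self-calculated index.
per_company = [a * b * c * d for a, b, c, d in responses]
median_of_indexes = statistics.median(per_company)

# Method 2: index built from the median of each component.
component_medians = [statistics.median(col) for col in zip(*responses)]
index_of_medians = 1.0
for m in component_medians:
    index_of_medians *= m

print(median_of_indexes, index_of_medians)  # the two methods generally disagree
```

Because the median company on one component is rarely the median company on all four, the two calculations will usually diverge, just as the survey's 97.5 and 94.1 percent figures do.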
We should note here that although we used the traditional definition of the Perfect Order, there are other definitions in use today. For example, the Grocery Manufacturers Association and the Food Marketing Institute have suggested a POI that includes seven elements—including cross-organizational elements such as service at the shelf (that is, the percentage of time the product is on the store shelf) and days of supply in the supply chain. Respondents to our survey have already begun gathering this type of information (which can be extremely tough to obtain) to some small extent. For example, 3.7 percent said they were now collecting data on service at the shelf.
Performance is performance
But however you calculate the Perfect Order Index, the fact remains that both customer-facing measures and internal metrics are crucial to managing operations. And this survey is intended to provide readers with benchmarking information they can use to assess their own operational performance.
One of the lessons we've learned over the five years of the study is that while benchmark numbers for companies of similar size or similar circumstances are very useful, so, too, are broad cross-industry measures. We've often heard companies say they cannot compare their performance to that of companies in other industries. Our response—and we have visited hundreds of facilities—is that performance is performance.
To test our contention, we analyzed the responses to determine if the differences found in the performance data by industry were true statistical differences or merely the result of random variation. The answer is that statistical differences were only found in the case of three metrics: percentage of supplier orders received damage free, inventory shrinkage as a percentage of total inventory, and back orders as a percentage of total lines.
In other words, with those exceptions, the differences found among industries could all be attributed to normal variation. And that means that useful benchmarking can take place across industries. Knowing that there are very few real differences in performance outcomes between industries may open up new possibilities for benchmarking for many companies. That, we hope, includes yours.
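The study does not specify which statistical test was used, but a one-way analysis of variance (ANOVA) across industry groups is one standard way to separate real differences from random variation. The sketch below, with invented figures, computes the ANOVA F statistic by hand:

```python
import statistics

def one_way_f(groups):
    """F statistic for a one-way ANOVA across the given samples."""
    all_vals = [x for g in groups for x in g]
    grand_mean = statistics.fmean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical shrinkage figures (percent of total inventory) by industry.
retail = [0.4, 0.6, 0.5, 0.7, 0.5]
manufacturing = [0.3, 0.5, 0.4, 0.4, 0.6]
third_party = [0.5, 0.8, 0.6, 0.7, 0.9]

f_stat = one_way_f([retail, manufacturing, third_party])
# A large F (relative to the critical value of the F distribution)
# suggests a real industry difference rather than normal variation.
```

When F falls below the critical value, as it apparently did for all but three metrics in the study, the observed differences are consistent with normal variation.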
The annual benchmarking study began as a collaboration between DC VELOCITY and Georgia Southern University in 2004. The initial study focused on what metrics DCs were using rather than on how they performed against whatever measures they used. It found that while there was no single set of universally accepted metrics, most respondents were using metrics from at least one of three broad categories: time-based measures, financial measures, and service quality measures. The top metrics from each category were on-time shipments (time-based), cost per unit shipped or processed (financial), and inventory count accuracy (service quality).
In 2005, WERC and Supply Chain Visions joined the research effort. That year, the survey shifted to a formal benchmarking study—one designed to provide the industry not just with data on what metrics were in widespread use, but also with data on performance against those warehouse-related metrics.
In 2006, the research team added definitions for each metric to make it easier for companies to conduct apples-to-apples comparisons with regard to each metric. In addition, the team added quintile reports that divided benchmark data into five groups, which helped show the distribution of responses.
For 2008, we've added a new question on technology implementation and revised the definitions of a few of the metrics to eliminate possible confusion. For example, we previously measured "back orders as a percentage of total." But after finding that companies applied this metric variously to the total number of orders, total number of order lines, and total dollars, we decided to break it into three separate back-order metrics. We have also added a measure for internal order cycle time to differentiate it from end-to-end cycle time, and a metric to capture orders that are on time, ready to ship.
They work for mid-sized companies as well as the titans of industry, and they come from sectors as diverse as biotech and utilities. But what they all have in common is that they responded to our 2008 survey on warehousing metrics and benchmarks. In all, nearly 700 warehouse and DC professionals from across the country participated in our annual research study, which was conducted online in January.
As for their job titles, the respondents hold a variety of management positions. Just about half of the respondents are managers or supervisors, while nearly 28 percent are directors. Senior vice presidents and CEOs made up 22 percent of the respondents, up about 4 percentage points from last year.
Fifty-two percent of the responses came from professionals who identified themselves as working in the manufacturing/distribution segment. Another 11 percent work for third-party warehouses, 9 percent for retail companies, and 10 percent for life sciences companies, with the remainder scattered among pharmaceuticals, utilities, and other industries.
We also asked respondents who their direct customers were. Thirty-three percent ship to end customers, just over 27 percent to retailers, just under 27 percent to distributors or wholesalers, and about 13 percent to manufacturing customers.
Companies of all sizes were well represented in the study. Thirty-eight percent of the respondents indicated that they worked for organizations with sales between $100 million and $1 billion, 29 percent came from corporations with more than $1 billion in revenue, and 33 percent from those with revenues of under $100 million.
Rare is the warehouse or DC without some technology in place to improve productivity, accuracy, or speed. This year, we asked survey participants to tell us what technology they use. We listed 15 popular forms of technology and asked respondents to indicate whether each was currently in use, being implemented, planned for implementation, or not currently planned.
Not surprisingly, given the emphasis in most facilities on controlling labor costs and improving data accuracy and timeliness, RF scanners and bar-code systems topped the list of technologies in use, with nearly 70 percent saying they have implemented these types of systems. Next came advance shipment notices at 52.8 percent, and warehouse management systems (WMS) at 50 percent. Technologies with the lowest rate of implementation were voice-directed picking (6.0 percent), slotting independent of WMS (7.6 percent), and automated material handling systems (10.3 percent).
As for what's on the horizon, software-based technologies came out way ahead of equipment-based solutions like conveyors and automated storage/retrieval (AS/RS) systems. When respondents were asked what technologies they were currently implementing or planned to implement, the most frequent responses were warehouse management systems at 40.1 percent, labor management systems (LMS) at 35.9 percent, and transportation management systems (TMS) at 34.3 percent. By contrast, less than a quarter of the respondents said they had plans to acquire AS/RS systems, carousels, or pick-to-light systems. Why? Most likely it's because software is quite portable compared to physical assets. We believe that our findings reflect today's management emphasis on flexibility and creating a distribution environment that can be moved geographically to accommodate changing customer profiles.
Authors' note: We invite readers' comments, suggestions, and insights into the research and their own use of measures. We can be reached by e-mail: Karl Manrodt at , and Kate Vitasek at .