strategic insight

taking the measure of metrics

Our latest research into DC and warehouse metrics shows that managers are getting better at using those measures to boost performance. But is their DC performance as good as they think it is?


It's a well-worn axiom that you can't manage what you don't measure.

Sounds simple enough.


But there's a catch and a corollary. The catch is, what exactly should you measure? Warehouse and distribution center managers can and often do gather a vast amount of performance data on their operations, but the ability to use that information to make management decisions is finite. In order to prevent all the data from becoming so much noise, managers have to determine which measures are, first, reliable, and second, most useful, and then concentrate on those. That brings us to the corollary: Only capture and report metrics that you know are good.

In order to understand which metrics are most useful to warehouse and distribution center managers, DC VELOCITY and the Warehousing Education and Research Council (WERC) sponsor an annual warehouse benchmarking study. The study is made possible through the support of Ryder System and the Staubach Company. Now in its fifth year, the survey asks DC VELOCITY readers and WERC members what metrics they use and how they use them.

The survey, conducted by study partners Georgia Southern University and the consultancy Supply Chain Visions, aims not only to determine which metrics are used most often, but also to provide clear definitions of those metrics and to suggest ways that managers can use them. In addition, the study results offer benchmarks against which managers can compare their own companies' performance. And now that we have five years' worth of data, we can begin to see some trends in just how well warehouses and DCs are performing against the measures managers consider most important.

Getting better all the time
What we've learned is encouraging. Managers are getting better at using metrics, and it's evident in improved performance over the course of five years. We've seen a marked improvement in companies' ability to deliver what's known as the Perfect Order—one that arrives complete, on time, damage free, and accompanied by the correct documentation and invoice. Senior executives are supporting—even demanding—the use of performance measures at the companies they run. And forward-thinking companies in some industries—grocery retailers, in particular—are beginning to introduce measures across organizations, quite literally thinking outside the warehouse or DC box.

Furthermore, we've seen performance improve across companies of all sizes. As technology for capturing performance data becomes more accessible, the amount of information available to improve warehouse operations continues to expand.

This year's survey was launched via an e-mail invitation to WERC members and DC VELOCITY readers in early January 2008. Survey participants were asked to report their actual performance in 2007 against 50 key operational metrics. We analyzed the results by industry, type of operation (pallet picking, partial pallet picking, full case picking, or broken case picking), business strategy, type of customer served, and company size.

False confidence or measured confidence?
One thing we have seen remain relatively consistent from year to year is the list of most widely used metrics—the measures that are common to businesses large and small and across disparate industries. Exhibit 1 shows the 10 most popular metrics, along with the percentage of respondents who are using each one and its rank in last year's survey. New to the top 10 list this year: distribution costs as a percentage of sales. Dropping out of the top 10 this year: order cycle time.

What has also remained consistent is that respondents by and large are confident that they are doing a good job of serving their customers. For example, we asked survey participants how their customers would rate their performance against several service-oriented metrics, including on-time delivery, order cycle times, and accuracy of invoicing. In the case of on-time delivery, fully 84 percent responded that their customers would rate them as average or above average. (We'll come back to this issue of on-time delivery in a moment, as it has proved to be a thorny one.) It was a similar story with cycle times and invoice accuracy; roughly 70 percent indicated that their customers would rate their performance in these areas as average or above average. Now, basic math tells us that when we benchmark performance, half the respondents will be above the median score and half below. By definition, a majority cannot be above average. What's the story?

One possibility is that however statistically improbable it may seem, the respondents' assessment is essentially correct. By joining an educational organization like WERC and reading professional journals like DC VELOCITY, the survey participants have demonstrated an interest in staying abreast of industry trends and improving their operations, and they may in fact be above average. However, it's also possible that when it comes to assessing their own performance, these managers are going more by their gut than by hard data.

Take that issue of on-time delivery. What makes the respondents' apparent confidence in their on-time delivery performance surprising is that a sizeable percentage of them don't even measure their actual performance—"on-time delivery" does not appear on the list of the 10 most widely used metrics. What DCs are far more likely to measure is on-time shipment, which is an altogether different matter. A lot can happen between the time a shipment leaves the dock and the time it's delivered.

Further complicating the issue is the apparent lack of consensus regarding what constitutes an "on time" delivery. Roughly 55 percent of the respondents indicated that their customers defined "on time" as simply a delivery on the requested or agreed-upon day, but 28 percent said that "on time" meant delivery on or before an appointed time. The rest defined "on time" as being within a 15-minute to 1-hour window of that appointed time. Add to that the likelihood that each customer has a different definition, and it becomes clear why measuring on-time delivery can be at best problematic. We believe that this lack of agreed-upon standards and definitions goes a long way toward explaining why some suppliers have difficulty measuring "on time" when it comes to delivery.

Those who did say they measured on-time delivery reported that, by and large, their on-time delivery performance is comparable to their on-time shipping performance. Sixty percent said that their shipments arrived on time 95.8 percent of the time or better. But our concern here is where the delivery performance rating comes from. Is it a verifiable measure based on the actual time of delivery to the customer's facility, or are the respondents simply assuming that if the phone doesn't ring, all is well?

Just perfect
Though "on time" delivery may remain an elusive measure, the survey results still provide plenty of hard data on DC performance—data that companies can use to determine how they're performing relative to others. Exhibits 2 through 6 present the latest performance data—both median and best-in-class performance numbers across a range of metrics. (We chose median rather than the mean as it is not easily swayed by outliers. "Best in class" is defined here as the responses from the top 20 percent—that is, the companies that are performing the best against each metric.) Because of the large number of metrics included in the survey, we have divided them into five groups based on type of measurement: customer service, operations, financial, capacity/quality, and employee.

Of course, comparing any DC's performance against the benchmarks provides only a partial measure of its service. The true measure is how its customers perceive its service. Or to put it another way, all is for naught if the customer is not happy.

That's why it is worth looking beyond the most commonly used metrics to those that we associate with the Perfect Order and which are used to compute the Perfect Order Index (POI). (See Exhibit 7.)

We have historically defined the Perfect Order as one that is delivered complete, on time, damage free, and accompanied by the correct documentation and invoice. To calculate a company's score on the index, you simply take those four metrics (expressed as percentages) and multiply them together. For instance, if your company were performing at a 95 percent level in all four measures, your Perfect Order Index score would be 81.5 percent (0.95 x 0.95 x 0.95 x 0.95 = 0.8145).
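To make the arithmetic concrete, here is a minimal sketch of that calculation; the component scores are hypothetical examples, not survey figures.

```python
# Minimal sketch of the Perfect Order Index (POI) calculation described above.
# The four component rates passed in below are hypothetical, not survey data.

def perfect_order_index(complete, on_time, damage_free, correct_docs):
    """Multiply the four component rates (expressed as fractions) together."""
    return complete * on_time * damage_free * correct_docs

# A company scoring 95 percent on every component:
poi = perfect_order_index(0.95, 0.95, 0.95, 0.95)
print(f"POI: {poi:.1%}")  # POI: 81.5%
```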

This year, we asked respondents to calculate their score for us. We then analyzed their responses to determine the median performance score (we decided to use the median—rather than the mean, or average—because it's less likely to be skewed by very high or low numbers). When we used this method of calculation, we found that the median performance against the Perfect Order Index was 97.5 percent. However, when calculated by the median response for each individual component of the Perfect Order, the index was a not-so-perfect 94.1 percent.
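The gap between those two figures comes from how the median interacts with the multiplication: the respondent with the median overall score need not be the median performer on every component. A short sketch with invented respondent data shows how the two methods can diverge.

```python
# Illustration of the two ways of arriving at a Perfect Order Index (POI)
# described above. The respondent figures are invented for demonstration only.
from statistics import median

# Each row: (complete, on time, damage free, correct documentation/invoice)
respondents = [
    (0.99, 0.90, 0.99, 0.95),
    (0.95, 0.99, 0.90, 0.99),
    (0.90, 0.95, 0.95, 0.90),
]

# Method 1: each respondent computes its own POI; take the median of those scores.
median_of_scores = median(a * b * c * d for a, b, c, d in respondents)

# Method 2: take the median of each component, then multiply the four medians.
c1, c2, c3, c4 = (median(col) for col in zip(*respondents))
poi_from_medians = c1 * c2 * c3 * c4

print(f"Median of respondent POIs:  {median_of_scores:.1%}")   # 83.8%
print(f"POI from median components: {poi_from_medians:.1%}")   # 81.5%
```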

We should note here that although we used the traditional definition of the Perfect Order, there are other definitions in use today. For example, the Grocery Manufacturers Association and the Food Marketing Institute have suggested a POI that includes seven elements—including cross-organizational elements such as service at the shelf (that is, the percentage of time the product is on the store shelf) and days of supply in the supply chain. To a small extent, respondents to our survey have already begun gathering this type of information, which can be extremely tough to obtain. For example, 3.7 percent said they were now collecting data on service at the shelf.

Performance is performance
But however you calculate the Perfect Order Index, the fact remains that both customer-facing measures and internal metrics are crucial to managing operations. And this survey is intended to provide readers with benchmarking information they can use to assess their own operational performance.

One of the lessons we've learned over the five years of the study is that while benchmark numbers for companies of similar size or similar circumstances are very useful, so, too, are broad cross-industry measures. We've often heard companies say they cannot compare their performance to that of companies in other industries. Our response—and we have visited hundreds of facilities—is that performance is performance.

To test our contention, we analyzed the responses to determine whether the differences found in the performance data by industry were true statistical differences or merely the result of random variation. The answer is that statistically significant differences turned up for just three metrics: percentage of supplier orders received damage free, inventory shrinkage as a percentage of total inventory, and back orders as a percentage of total lines.
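The study doesn't specify which test was used; one common way to run this kind of check is a one-way analysis of variance on a single metric grouped by industry, sketched below with invented data and hypothetical industry groupings.

```python
# Hedged sketch of testing whether one metric differs across industries.
# The study does not name its method; a one-way ANOVA is one common choice.
# The industry groups and values below are invented for illustration.
from scipy import stats

retail        = [0.96, 0.97, 0.95, 0.98, 0.96]
manufacturing = [0.94, 0.95, 0.96, 0.97, 0.95]
wholesale     = [0.95, 0.96, 0.97, 0.94, 0.96]

f_stat, p_value = stats.f_oneway(retail, manufacturing, wholesale)
if p_value < 0.05:
    print(f"Statistically significant difference across industries (p = {p_value:.3f})")
else:
    print(f"Differences look like normal variation (p = {p_value:.3f})")
```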

In other words, with those exceptions, the differences found among industries could all be attributed to normal variation. And that means that useful benchmarking can take place across industries. Knowing that there are very few real differences in performance outcomes between industries may open up new possibilities for benchmarking for many companies. That, we hope, includes yours.

Authors' note: We invite readers' comments, suggestions, and insights into the research and their own use of measures. We can be reached by e-mail: Karl Manrodt at , and Kate Vitasek at .
