
strategic insight

taking the measure of metrics

Our latest research into DC and warehouse metrics shows that managers are getting better at using those measures to boost performance. But is their DC performance as good as they think it is?

It's a well-worn axiom that you can't manage what you don't measure.

Sounds simple enough.


But there's a catch and a corollary. The catch is, what exactly should you measure? Warehouse and distribution center managers can and often do gather a vast amount of performance data on their operations, but the ability to use that information to make management decisions is finite. In order to prevent all the data from becoming so much noise, managers have to determine which measures are, first, reliable, and second, most useful, and then concentrate on those. That brings us to the corollary: Only capture and report metrics that you know are good.

In order to understand which metrics are most useful to warehouse and distribution center managers, DC VELOCITY and the Warehousing Education and Research Council (WERC) sponsor an annual warehouse benchmarking study. The study is made possible through the support of Ryder System and the Staubach Company. Now in its fifth year, the survey asks DC VELOCITY readers and WERC members what metrics they use and how they use them.

The survey, conducted by study partners Georgia Southern University and the consultancy Supply Chain Visions, aims not only to determine which metrics are used most often, but also to provide clear definitions of those metrics and to suggest ways that managers can use them. In addition, the study results offer benchmarks against which managers can compare their own companies' performance. And now that we have five years' worth of data, we can begin to see some trends in just how well warehouses and DCs are performing against the measures managers consider most important.

Getting better all the time
What we've learned is encouraging. Managers are getting better at using metrics, and it's evident in improved performance over the course of five years. We've seen a marked improvement in companies' ability to deliver what's known as the Perfect Order—one that arrives complete, on time, damage free, and accompanied by the correct documentation and invoice. Senior executives are supporting—even demanding—the use of performance measures at the companies they run. And forward-thinking companies in some industries—grocery retailers, in particular—are beginning to introduce measures across organizations, quite literally thinking outside the warehouse or DC box.

Furthermore, we've seen performance improve across companies of all sizes. As technology for capturing performance data becomes more accessible, the amount of information available to improve warehouse operations continues to expand.

This year's survey was launched via an e-mail invitation to WERC members and DC VELOCITY readers in early January 2008. Survey participants were asked to report their actual performance in 2007 against 50 key operational metrics. We analyzed the results by industry, type of operation (pallet picking, partial pallet picking, full case picking, or broken case picking), business strategy, type of customer served, and company size.

False confidence or measured confidence?
One thing we have seen remain relatively consistent from year to year is the list of most widely used metrics—the measures that are common to businesses large and small and across disparate industries. Exhibit 1 shows the 10 most popular metrics, along with the percentage of respondents who are using each one and its rank in last year's survey. New to the top 10 list this year: distribution costs as a percentage of sales. Dropping out of the top 10 this year: order cycle time.

What has also remained consistent is that respondents by and large are confident that they are doing a good job of serving their customers. For example, we asked survey participants how their customers would rate their performance against several service-oriented metrics, including on-time delivery, order cycle times, and accuracy of invoicing. In the case of on-time delivery, fully 84 percent responded that their customers would rate them as average or above average. (We'll come back to this issue of on-time delivery in a moment, as it has proved to be a thorny one.) It was a similar story with cycle times and invoice accuracy; roughly 70 percent indicated that their customers would rate their performance in these areas as average or above average. Now, basic math tells us that when we benchmark performance, half the respondents will be above the median score and half below. By definition, a majority cannot be above average. What's the story?

One possibility is that however statistically improbable it may seem, the respondents' assessment is essentially correct. By joining an educational organization like WERC and reading professional journals like DC VELOCITY, the survey participants have demonstrated an interest in staying abreast of industry trends and improving their operations, and they may in fact be above average. However, it's also possible that when it comes to assessing their own performance, these managers are going more by their gut than by hard data.

Take that issue of on-time delivery. What makes the respondents' apparent confidence in their on-time delivery performance surprising is that a sizeable percentage of them don't even measure their actual performance—"on-time delivery" does not appear on the list of the 10 most widely used metrics. What DCs are far more likely to measure is on-time shipment, which is an altogether different matter. A lot can happen between the time a shipment leaves the dock and the time it's delivered.

Further complicating the issue is the apparent lack of consensus regarding what constitutes an "on time" delivery. Roughly 55 percent of the respondents indicated that their customers defined "on time" as simply a delivery on the requested or agreed-upon day, but 28 percent said that "on time" meant delivery on or before an appointed time. The rest defined "on time" as being within a 15-minute to 1-hour window of that appointed time. Add to that the likelihood that each customer has a different definition, and it becomes clear why measuring on-time delivery can be at best problematic. We believe that this lack of agreed-upon standards and definitions goes a long way toward explaining why some suppliers have difficulty measuring "on time" when it comes to delivery.

Those respondents who do measure on-time delivery reported that, by and large, their delivery performance is comparable to their on-time shipping performance: 60 percent said their shipments arrived on time 95.8 percent of the time or better. Our concern is where that delivery performance figure comes from. Is it a verifiable measure based on the actual time of delivery to the customer's facility, or are the respondents simply assuming that if the phone doesn't ring, all is well?

Just perfect
Though "on time" delivery may remain an elusive measure, the survey results still provide plenty of hard data on DC performance—data that companies can use to determine how they're performing relative to others. Exhibits 2 through 6 present the latest performance data—both median and best-in-class performance numbers across a range of metrics. (We chose median rather than the mean as it is not easily swayed by outliers. "Best in class" is defined here as the responses from the top 20 percent—that is, the companies that are performing the best against each metric.) Because of the large number of metrics included in the survey, we have divided them into five groups based on type of measurement: customer service, operations, financial, capacity/quality, and employee.

Of course, comparing any DC's performance against the benchmarks provides only a partial measure of its service. The true measure is how its customers perceive its service. Or to put it another way, all is for naught if the customer is not happy.

That's why it is worth looking beyond the most commonly used metrics to those that we associate with the Perfect Order and which are used to compute the Perfect Order Index (POI). (See Exhibit 7.)

We have historically defined the Perfect Order as one that is delivered complete, on time, damage free, and accompanied by the correct documentation and invoice. To calculate a company's score on the index, you simply take those four metrics (expressed as percentages) and multiply them together. For instance, if your company were performing at a 95 percent level in all four measures, your Perfect Order Index score would be roughly 81.5 percent (0.95 x 0.95 x 0.95 x 0.95 ≈ 0.815).
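For illustration, here is a minimal sketch in Python of the Perfect Order Index calculation just described; the function name is ours, but the arithmetic follows the definition above.

```python
# Sketch of the Perfect Order Index (POI) calculation described above.
# Each argument is the percentage of orders meeting that criterion (0-100).
def perfect_order_index(complete, on_time, damage_free, correct_docs):
    """Multiply the four component percentages into a single POI score."""
    score = 1.0
    for pct in (complete, on_time, damage_free, correct_docs):
        score *= pct / 100.0
    return score * 100.0

# Performing at 95 percent on all four measures yields roughly 81.5 percent.
print(round(perfect_order_index(95, 95, 95, 95), 1))  # 81.5
```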

This year, we asked respondents to calculate their score for us. We then analyzed their responses to determine the median performance score (we decided to use the median—rather than the mean, or average—because it's less likely to be skewed by very high or low numbers). When we used this method of calculation, we found that the median performance against the Perfect Order Index was 97.5 percent. However, when calculated by the median response for each individual component of the Perfect Order, the index was a not-so-perfect 94.1 percent.
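To see why the two approaches can produce different numbers, consider this small sketch with entirely hypothetical data: the median of respondents' own POI scores need not equal the POI computed from the median of each component.

```python
# Hypothetical illustration of why "median of reported POI scores" differs
# from "POI computed from the median of each component."
import statistics

# Each row: (complete, on_time, damage_free, correct_docs) for one respondent.
respondents = [
    (99.0, 98.0, 99.5, 99.0),
    (97.0, 99.0, 98.0, 96.0),
    (99.5, 95.0, 99.0, 99.5),
]

def poi(components):
    score = 1.0
    for pct in components:
        score *= pct / 100.0
    return score * 100.0

median_of_scores = statistics.median(poi(r) for r in respondents)
component_medians = [statistics.median(col) for col in zip(*respondents)]
poi_of_medians = poi(component_medians)
print(round(median_of_scores, 1), round(poi_of_medians, 1))
# -> 93.1 95.1 for this hypothetical data: the two methods disagree.
```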

We should note here that although we used the traditional definition of the Perfect Order, there are other definitions in use today. For example, the Grocery Manufacturers Association and the Food Marketing Institute have suggested a POI that includes seven elements—including cross-organizational elements such as service at the shelf (that is, the percentage of time the product is on the store shelf) and days of supply in the supply chain. To a small extent, respondents to our survey have already begun gathering this type of information, which can be extremely tough to obtain: 3.7 percent said they were now collecting data on service at the shelf.

Performance is performance
But however you calculate the Perfect Order Index, the fact remains that both customer-facing measures and internal metrics are crucial to managing operations. And this survey is intended to provide readers with benchmarking information they can use to assess their own operational performance.

One of the lessons we've learned over the five years of the study is that while benchmark numbers for companies of similar size or similar circumstances are very useful, so, too, are broad cross-industry measures. We've often heard companies say they cannot compare their performance to that of companies in other industries. Our response—and we have visited hundreds of facilities—is that performance is performance.

To test our contention, we analyzed the responses to determine if the differences found in the performance data by industry were true statistical differences or merely the result of random variation. The answer is that statistical differences were only found in the case of three metrics: percentage of supplier orders received damage free, inventory shrinkage as a percentage of total inventory, and back orders as a percentage of total lines.

In other words, with those exceptions, the differences found among industries could all be attributed to normal variation. And that means that useful benchmarking can take place across industries. Knowing that there are very few real differences in performance outcomes between industries may open up new possibilities for benchmarking for many companies. That, we hope, includes yours.
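The study does not say which statistical test was applied, but a conventional choice for this kind of comparison is a one-way analysis of variance across industry groups. The sketch below, with entirely hypothetical data and group names, shows roughly what such a check looks like for a single metric.

```python
# Assumed approach (not confirmed by the study): one-way ANOVA comparing a
# single metric, such as inventory shrinkage as a percentage of total
# inventory, across industry groups. All values here are hypothetical.
from scipy.stats import f_oneway

retail        = [0.4, 0.6, 0.5, 0.8, 0.7]
manufacturing = [0.3, 0.5, 0.4, 0.6, 0.5]
wholesale     = [0.9, 1.1, 0.8, 1.0, 1.2]

stat, p_value = f_oneway(retail, manufacturing, wholesale)
# A p-value below the chosen significance level (commonly 0.05) suggests the
# differences between industries are more than random variation.
print(f"F = {stat:.2f}, p = {p_value:.4f}")
```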

Authors' note: We invite readers' comments, suggestions, and insights into the research and their own use of measures. We can be reached by e-mail: Karl Manrodt at , and Kate Vitasek at .
