Measuring Performance in Services

Services are more difficult to measure and monitor than manufacturing processes, but executives can rein in variance and boost productivity if they implement rigorous metrics.

Services are different. To make meaningful comparisons, companies have to identify the sources of difference in their businesses and devise metrics that account for them. The considerations that show up most frequently are the obvious differences among jobs and groups: regional variations in labor costs, local geographies and the difficulty of reaching accounts, the workload mix (for example, repairs versus installation), and differences in the use of capital (whether equipment is owned or leased by the company or owned by the customer). Several other major issues come into play as well.

• Service-level agreements. The more types of services a business offers, the more variability it can expect in its agreements. The metrics for a help desk that provides customer support for 5,000 users in a 9-to-5 office are very different from those for a help desk that supports logistics in a round-the-clock industrial environment. Even when offerings are similar, variance can be introduced locally through the way contracts are interpreted. In one IT-outsourcing company, two desktop support accounts with service-level agreements that specified an eight-hour response time had very different cost metrics. When asked why, the manager of the poorly performing account said that, despite the contract’s limits, “If we don’t answer within the hour, our client goes ballistic.” The written service-level agreement had been trumped by an unwritten one that was costing real money.

• Environment, equipment, and infrastructure. Each customer’s environment has unique aspects that are difficult to measure. A logistics provider will see huge differences between managing a big, automated warehouse and a small, simple one. Field services that support industrial systems must contend with many generations of equipment and upgrades at customer sites. Some clients have their own on-site staff to support service, while others may be difficult even to reach. Given the range of possibilities, it’s usually not very helpful simply to measure the average cost of a service call.

• Work volume. Size is a major reason for the wide variance among accounts and business units. Interestingly, managers of both small and large accounts claim that size makes their particular metrics worse. Both have a point: large accounts should benefit from scale, but in general they are also more complex, and that drives costs back up. Volume needs to be considered, but only in tandem with other patterns (including scale benefits and the breadth of work) that help explain costs; the sketch after this list illustrates one way to separate the two.
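One rough way to disentangle scale from other cost drivers is to fit the scale relationship across a set of accounts and then look at which accounts run above or below the fitted line. The sketch below uses invented figures and a simple log-log fit; both the numbers and the functional form are illustrative assumptions, not a prescription.

```python
# A minimal sketch, with invented numbers, of separating scale effects from
# other cost drivers: fit the scale relationship across accounts, then look
# at which accounts sit above or below the fitted line.

import numpy as np

volumes = np.array([200, 500, 1_000, 5_000, 12_000])   # work units per month
costs   = np.array([30e3, 60e3, 100e3, 420e3, 1.1e6])  # monthly cost, dollars

# fit log(cost) = a + b*log(volume); b < 1 would indicate economies of scale
b, a = np.polyfit(np.log(volumes), np.log(costs), 1)
expected = np.exp(a) * volumes ** b

for v, c, e in zip(volumes, costs, expected):
    # ratios well above 1 point to complexity or breadth of work, not size
    print(f"volume {v:>6}: actual ${c:,.0f}, "
          f"scale-adjusted expectation ${e:,.0f}, ratio {c / e:.2f}")
```

An account whose costs sit far above its scale-adjusted expectation is a candidate for the complexity effects described above, rather than a simple victim of its size.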

The data problem. Underlying all of these problems is an inability to identify what must be measured and how to normalize data across different environments. Even when companies know what to measure, they struggle to achieve accuracy. Data are rarely defined or collected uniformly across an organization’s environments. A service call involving the installation of two elevators, for example, could be measured as a single installation in one part of a company and as two in another.
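The elevator example suggests one remedy: translate raw records into a common unit of work before any comparison is made. The sketch below makes the point with a hypothetical record format; the field names and normalization rule are assumptions for illustration.

```python
# A minimal sketch of normalizing raw service records to a common unit of
# work before comparing groups: "one installation" must mean the same thing
# everywhere, regardless of how calls happened to be logged.

raw_records = [
    {"region": "east", "type": "installation", "units_installed": 2},  # one call, two elevators
    {"region": "west", "type": "installation", "units_installed": 1},
    {"region": "west", "type": "installation", "units_installed": 1},
]

def normalized_installations(records):
    """Count installations by units installed, not by how calls were logged."""
    counts = {}
    for r in records:
        if r["type"] == "installation":
            counts[r["region"]] = counts.get(r["region"], 0) + r["units_installed"]
    return counts

print(normalized_installations(raw_records))  # {'east': 2, 'west': 2} -- now comparable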

Contributing to this ambiguity is the fact that data collection is usually driven by the requirements of financial cost reporting, which often fails to shed light on ways of boosting performance. Accountants for an IT services company may need to know the cost of each server, for instance, but an executive looking to reduce variance would also need to know the number of service incidents by server type and the time spent on each incident. Variance in demand drivers is also important: did the number of calls to a help desk rise because more users bought a product, for example, or because the product itself changed? Financial metrics might fail to detect this important distinction.
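The arithmetic behind that distinction is straightforward once call volume is expressed as users times calls per user. The sketch below, with invented figures, splits an increase in help-desk calls into the portion explained by growth in the user base and the portion explained by a change in the per-user call rate.

```python
# A minimal sketch, with invented figures, of separating two demand drivers
# behind a rise in help-desk calls: growth in the user base versus a change
# in calls per user (for instance, after a product change).

users_before, users_after = 4_000, 5_000
calls_before, calls_after = 2_000, 3_000

rate_before = calls_before / users_before   # 0.50 calls per user
rate_after  = calls_after / users_after     # 0.60 calls per user

# at the old per-user rate, the larger user base alone would explain:
from_growth = (users_after - users_before) * rate_before   # 500 calls
# the remainder comes from the change in the per-user rate:
from_rate   = (rate_after - rate_before) * users_after     # 500 calls

print(f"total increase: {calls_after - calls_before} calls")
print(f"  from user growth:     {from_growth:.0f}")
print(f"  from per-user change: {from_rate:.0f}")
```

Here half the increase reflects a bigger installed base and half reflects users calling more often, two situations that call for very different management responses even though the financial totals look identical.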
