
From Case Studies in Maintenance and Reliability: A Wealth of Best Practices


Benchmarking

Benchmarking is about being humble enough to admit that someone else is better at something than you, and wise enough to try to learn how to match and even surpass them at it.

American Productivity and Quality Center.

Author: Jim Wardhaugh

Location: 2.3.3 Corporate Technical Headquarters

Background

Our little group was providing a benchmarking and consultancy service to our own facilities and to a few others with whom we had technical support agreements. These sites were scattered around the world and operated in different geographical areas under different government regulatory regimes. They were of different ages and sizes; they used different feedstocks to make different portfolios of products. Our task was to scrutinize data from these locations, identify those whose performance could be improved, and arrange to help those who needed it.

Company Performance Analysis Methodology

We had a systematic methodology for capturing performance data from the sites. There were structured questionnaires asking for relevant data. These were backed up by copious notes explaining in detail the methodology, terminology, and definitions. Some returns were required every quarter while the rest were required annually. Each client location would then send the requested data, which was checked rigorously for any apparent errors. The data was used by a number of different groups in the head office, each looking at different aspects of performance. Our group looked at aspects of maintenance performance.

We did not want to ask a site for data that it was already sending to the head office in any report. So we took great pains to extract data from a variety of sources. In this way, the input effort by the sites was minimized and little additional information was needed from them.

When satisfied that all the data looked sensible we massaged the data to identify the performance of each site (or a facility on that site) in a number of ways. The main performance features published for each site were:

For each of the major plants on site [e.g., Crude Distillation Unit (CDU), Catalytic Cracker (CCU), Hydro-cracker (HCU), Reformer (PFU), Thermal Cracker (TCU/VBU)]:

• Downtime averaged over the turnaround cycle (whether 3, 4, or 5 years). This smoothed out the effect of major turnarounds (also called shutdowns).

For the whole site:

• Maintenance cost, averaged over the turnaround cycle, as a percentage of replacement value.
• Maintenance cost, averaged over the turnaround cycle, in US$/bbl.
• Maintenance man-hours per unit of complexity.
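The averaging and normalization behind the first two site-wide metrics can be sketched in a few lines. This is purely illustrative: the function name, field names, and all figures below are invented, and the real methodology was defined in the structured questionnaires, not in code.

```python
# Hypothetical sketch of the site-wide cost metrics described above.
# All names and numbers are invented for illustration.

def site_metrics(costs_per_year, replacement_value, intake_bbl_per_year, cycle_years):
    """Average maintenance cost over the turnaround cycle, then normalize
    by replacement value (as a percentage) and by intake barrels."""
    avg_cost = sum(costs_per_year[-cycle_years:]) / cycle_years
    return {
        "cost_pct_of_replacement_value": 100.0 * avg_cost / replacement_value,
        "cost_usd_per_bbl": avg_cost / intake_bbl_per_year,
    }

# Example: a 4-year turnaround cycle; the turnaround year inflates the last figure,
# which averaging over the whole cycle smooths out.
metrics = site_metrics(
    costs_per_year=[12e6, 11e6, 13e6, 30e6],  # US$ per year, last year is a turnaround
    replacement_value=900e6,                   # asset replacement value, US$
    intake_bbl_per_year=40e6,                  # feedstock intake, bbl/year
    cycle_years=4,
)
print(metrics)
```

Averaging over the full cycle is what stops a single expensive turnaround year from distorting the comparison.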

This information was published annually and provided in a number of forms; the two most common compared each site with its peers:

• A straightforward bar chart showing a ranking from best to worst (see an example in Figure 8.1).
• A radar diagram, which sites found useful because it could show a number of aspects at a glance (see the idealized version in Figure 8.2). Comparisons could then be made against the performance of the best (see Figure 8.3).

On each spoke of the diagram, the length of the spoke represents the actual value for each facility. The shaded polygon shows the data points for the best performers; these are the values of the item in the ranked order, one-third of the way from the best to the worst performer.

Comparisons were made against two yardsticks:

• The average performance of the group of plants or refineries.
• The performance of the plant or refinery one-third of the way down the ranking order.
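A minimal sketch of how those two yardsticks could be computed from a set of site values. The exact rounding rule for the "one-third of the way down" position is an assumption, as are the sample figures.

```python
# Illustrative computation of the two yardsticks: group average and the
# value one-third of the way down the ranked order. The index rule used
# here ((n - 1) // 3) is an assumption, not the book's definition.

def yardsticks(values, lower_is_better=True):
    ranked = sorted(values, reverse=not lower_is_better)  # best first
    average = sum(values) / len(values)
    third_index = (len(ranked) - 1) // 3  # one-third from best toward worst
    return average, ranked[third_index]

# Example: downtime percentages for nine refineries (lower is better).
downtimes = [2.1, 3.4, 5.0, 2.8, 4.2, 6.1, 3.0, 4.8, 3.9]
avg, top_third = yardsticks(downtimes)
print(avg, top_third)
```

Comparing against the one-third point rather than the single best performer gives sites a demanding but attainable target.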

Because the facilities were of different sizes and complexities, we had to normalize the data. We used a number of normalizing factors to achieve this. For example, when measuring maintenance costs, we used factors such as asset replacement value and intake barrels of feedstock as the divisors.

Figure 8.1 Example ranking on a bar chart

Figure 8.2 Idealized radar chart

Figure 8.3 Realistic radar chart showing plant performance

These divisors gave different answers and thus somewhat different rankings. Not surprisingly, those deemed to be top performers liked the divisor we used; those deemed poor were highly vexed. Although there were exceptions, whatever the divisor used, those in the top-performing bunch stayed at the top and those in the bottom bunch stayed at the bottom; only minor changes in position or performance were identified. Those in the middle of the performance band, however, could show significant movement. Normalizing methods are discussed in Appendix 8-B.
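The effect described above can be demonstrated with a toy example: ranking the same sites under two different divisors. The site names and figures below are invented; the point is only that the extremes stay put while the middle shuffles.

```python
# Hypothetical illustration of ranking the same sites with two different
# normalizing divisors. All names and figures are invented.

sites = {
    # name: (annual maintenance cost US$M, replacement value US$M, intake Mbbl/yr)
    "A": (10, 400, 30),
    "B": (25, 1500, 90),
    "C": (18, 700, 60),
    "D": (40, 1100, 60),
}

def rank(metric):
    """Rank site names best (lowest normalized cost) first."""
    return sorted(sites, key=metric)

by_rv = rank(lambda s: sites[s][0] / sites[s][1])   # cost / replacement value
by_bbl = rank(lambda s: sites[s][0] / sites[s][2])  # cost / intake barrels

print(by_rv, by_bbl)
```

Here the best site (B) and the worst (D) hold their positions under both divisors, while the two mid-table sites swap places, mirroring the pattern the text describes.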
