Case Studies in Maintenance and Reliability: A Wealth of Best Practices
Benchmarking is about being humble enough to admit that someone else is better at something than you, and wise enough to try to learn how to match and even surpass them at it.
American Productivity and Quality Center.
Author: Jim Wardhaugh
Location: 2.3.3 Corporate Technical Headquarters
Our little group was providing a benchmarking and consultancy service to our own facilities and to a few others with whom we had technical support agreements. These sites were scattered around the world. They operated in different geographical areas, under different government regulatory regimes. They were of different ages and sizes; they used different feedstocks to make different portfolios of products. Our task was to scrutinize data from these locations, identify those whose performance could be improved, and arrange to help those who needed it.
Company Performance Analysis Methodology
We had a systematic methodology for capturing performance data from the sites. There were structured questionnaires asking for relevant data. These were backed up by copious notes explaining in detail the methodology, terminology, and definitions. Some returns were required every quarter while the rest were required annually. Each client location would then send the requested data, which was checked rigorously for any apparent errors. The data was used by a number of different groups in the head office, each looking at different aspects of performance. Our group looked at aspects of maintenance performance.
We did not want to ask a site for data that it was already sending to the head office in any report. So we took great pains to extract data from a variety of sources. In this way, the input effort by the sites was minimized and little additional information was needed from them.
When satisfied that all the data looked sensible we massaged the data to identify the performance of each site (or a facility on that site) in a number of ways. The main performance features published for each site were:
For each of the major plants on site [e.g., Crude Distillation Unit (CDU), Catalytic Cracker (CCU), Hydro-cracker (HCU), Reformer (PFU), Thermal Cracker (TCU/VBU)]:
• Downtime averaged over the turnaround cycle (whether 3, 4, or 5 years). This smoothed out the effect of major turnarounds (also called shutdowns)
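The effect of averaging over the turnaround cycle can be made concrete with a small calculation; the downtime figures below are invented for illustration:

```python
# Annualized downtime averaged over a 4-year turnaround cycle (invented figures).
# A 40-day major turnaround in year 1 dominates; averaging over the whole
# cycle smooths out that spike, as the text describes.
downtime_days = [40.0, 2.5, 3.0, 2.0]  # days lost in each year of the cycle

cycle_avg = sum(downtime_days) / len(downtime_days)
print(f"Average downtime over cycle: {cycle_avg:.2f} days/year")
```

With these figures the average works out to just under 12 days/year, even though the turnaround year alone lost 40 days.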
For the whole site:
This information was published annually and presented in a number of forms; the two most common, both providing comparisons with peers, were:
On each spoke of the diagram, the length of the spoke represents the actual value for each facility. The shaded polygon shows the data points for the best performers; these are the values of the item in the ranked order, one-third of the way from the best to the worst performer.
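One plausible reading of that "one-third of the way" rule is sketched below. The site values, the metric, and the choice of index along the ranked list are all illustrative assumptions, not taken from the text:

```python
# Sketch of the "best performer" reference value for one spoke of the
# radar chart: rank the sites from best to worst on that metric and take
# the value one-third of the way along the ranked list.

def top_third_benchmark(values, lower_is_better=True):
    """Return the value one-third of the way from best to worst performer."""
    ranked = sorted(values, reverse=not lower_is_better)  # best first
    idx = (len(ranked) - 1) // 3  # position one-third along the ranking
    return ranked[idx]

# Invented normalized maintenance-cost figures for six sites (lower is better).
costs = [4.2, 6.8, 3.1, 5.5, 7.9, 4.9]
print(top_third_benchmark(costs))
```

Repeating this for every spoke and joining the points gives the shaded "best performers" polygon described above.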
Comparisons were made against two yardsticks:
Because the facilities were of different sizes and complexities, we had to normalize the data. We used a number of normalizing factors to achieve this. For example, when measuring maintenance costs, we used factors such as asset replacement value and intake barrels of feedstock as the divisors.
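A minimal sketch of this normalization follows; all site names and figures are invented, and the two divisors (asset replacement value and intake barrels) are the ones named above:

```python
# Normalizing annual maintenance cost by two different divisors.
# All site names and figures below are invented for illustration.
sites = {
    # site: (maintenance cost $M/yr, asset replacement value $M,
    #        feedstock intake, million barrels/yr)
    "Site A": (45.0, 2000.0, 55.0),
    "Site B": (30.0, 1100.0, 40.0),
    "Site C": (80.0, 4200.0, 95.0),
}

def rank_by(divisor_index):
    """Rank sites best-first (lowest normalized cost) using the chosen divisor."""
    normalized = {name: v[0] / v[divisor_index] for name, v in sites.items()}
    return sorted(normalized, key=normalized.get)

print("Cost / replacement value:", rank_by(1))
print("Cost / intake barrels:   ", rank_by(2))
```

With these invented figures the two divisors produce different orderings (Site C, A, B versus Site B, A, C), mirroring the point made in the next paragraph that the choice of divisor shifts the rankings.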
These divisors gave different answers and thus somewhat different rankings. Not surprisingly, those deemed to be top performers liked the divisor we used; those deemed poor were highly vexed. Although there were exceptions, whatever the divisor used, those in the top-performing bunch stayed at the top and those in the bottom bunch stayed at the bottom; only minor changes in position or performance were identified. Those in the middle of the performance band, however, could show significant movement. Normalizing methods are discussed in Appendix 8-B.

Figure 8.1 Example ranking on a bar chart
Figure 8.2 Idealized radar chart
Figure 8.3 Realistic radar chart showing plant performance