Ultimately we conducted 18 unit workshops, one for our quant group, and a corporate one. At the end of the process we reviewed all of the output. We recognized the need to categorize the differences between the votes in order to report risk using a color key for risk profiles (see Exhibit 3.1). In reviewing the voting scores, it appeared that five groupings existed. We had some actuaries review the data as well, and they came up with the same results.

Exhibit 3.1 Color Key for the Risk Profile Score
Companies frequently like to use three colors in their corporate dashboards; however, most experts seem to agree that risk is not so cut-and-dried, and they recommend four or five risk categories. As a workshop facilitator, one can generally detect why a score was blue and not green. In discussions challenging such a vote, facilitators frequently heard general managers or other participants explain very clearly why an initiative was blue and not green.
Following the addition of risk categories, the ERM team developed a summary report, in priority order, consisting of each initiative, its definition, and each initiative's risk profile (see Exhibit 3.2). These were compiled by region and submitted to the Mars management team and the regional management teams, along with the complete workshop reports.
Although senior managers reviewed these reports, it was too early in the process for them to understand fully the potential of ERM. This was highlighted in January 2007 during my annual review with Oscar. David, Mars' president, entered the "fishbowl" room quite perturbed at one of the largest units. The unit had reported a significant surprise at year-end, which had an impact on the overall business's year-end results. David looked at me and asked whether this issue had arisen during my new process. I advised him that the unit had raised this as a potential issue, one that could adversely impact them entering the new year. They asked me to get them a copy of the complete report, and I took this to mean they had read but not kept the original.

Exhibit 3.2 Summary Report
The unit's ERM workshop output had the issue as a "red" in their submission. While both David and Oscar agreed that they expected some units to have initiatives with a red risk profile, they would not accept a unit having a red issue without addressing it or communicating the potential impact as appropriate. This became a basic tenet of the ERM process. This incident also proved a major win for ERM, as David became extremely interested in the quarterly updates, which began shortly thereafter.
To ensure that units used ERM throughout the year and communicated their views on risk to senior management, we developed an ERM dashboard template. This included the initiatives in priority order, the risk profile of each initiative for each quarter (beginning with the workshop in Q3), the risk profile trend – stable, improving, or declining – and a comment column for providing a view for year-end (see Exhibit 3.3). This became an excellent communication tool for several reasons. First, units that did not already do so had to review their risks and risk treatments quarterly. This helped them develop a risk mind-set, which David had given us as a goal at the beginning. Second, senior managers could quickly identify units that were struggling with issues. For the first couple of years of the program, David would meet with the corporate controller to review the quarterly reports. Finally, it provided units with a tool to communicate to management that things were on track, even when first- or second-quarter sales may not have appeared that way.

Exhibit 3.3 Quarterly Update
An excellent example of the latter point occurred the first year we used the reporting template. In a large market where the company had a strong number three position, the unit's reported sales appeared to fall below its plan at the end of the first, second, and third quarters of 2006.
I had facilitated the unit's workshop. Because its two main competitors, which held a significant share of the market, planned to front-end load their activities (e.g., advertising, consumer promotions, trade discounts) into the first and second quarters, the unit decided to concentrate the vast majority of its activities in the second half, especially the fourth quarter. Each quarter the unit reported its key brands as having green risk profiles. Each quarter, Oscar had me contact and challenge the unit CFO on this point. Each quarter the unit CFO responded that the unit had back-end loaded its activity set into Q3 and Q4, and I confirmed to Oscar that this had been the case in the workshop as well. In the end, the unit delivered about 105 percent of its planned sales, and the ERM Quarterly Report gained a great deal of credibility.
One thing that we noted from both the pilot year and the launch year was that participants did not always seem to vote on the same thing for an initiative. For example, an objective may read, "Maintain market leadership while achieving growth and profitability targets." A unit might have 35 percent market share, and it could hold market leadership at 25 percent. One participant may vote low because she believes market share will fall to 32 percent, while another participant votes high because this will still represent market leadership. Similarly, divergent votes on achieving growth and profitability may result as different participants vote on gross sales versus net sales, and earnings versus margins.

Exhibit 3.4 Targets
To resolve this problem, we changed the process for the 2007 Operating Plan workshops, conducted in Q3 of 2006, and for all future workshops. We required units to specify measurable targets within each objective (see Exhibit 3.4).
Units could do this for all initiatives, including intangible ones. For instance, associate engagement targets would include specific numerical scores for the units and follow-up percentage targets for management. Similarly, "Have the right people for the right jobs" would become "Have one person for each critical job in the unit's succession plan." These objectives would have measurable targets by which the unit could report progress throughout the year.