Analytical Hierarchy Process
The analytical hierarchy process (AHP) is a multi-objective decision analysis tool first proposed by Thomas Saaty [Saaty1980]. The technique has become very popular—a Google Scholar search for “analytical hierarchy process” returns over 1.5 million items in under a tenth of a second.
Description and Uses
The process is designed to use both objective and subjective measures to evaluate a set of alternatives based upon multiple criteria, organized in a hierarchical structure as shown in Figure 8.2. The goal or objective is at the
FIGURE 8.2: A Generic Three-Layer AHP Hierarchy
top level. The next layer holds the criteria that are evaluated or weighted. The bottom level has the alternatives which are measured against each criterion. The decision maker makes pairwise comparisons of the criteria in which every pair is subjectively or objectively compared. Subjective assessments use Saaty’s nine-point scale that we introduced with SAW. (See Table 8.8.)
The AHP process can be described as a method to decompose a problem into sub-problems. In most cases, the decision maker has a choice among many alternatives. Each alternative has a set of attributes or characteristics, which AHP calls criteria, that can be measured either subjectively or objectively. The attribute elements of the hierarchical process can relate to any aspect of the decision problem, either tangible or intangible, carefully measured or roughly estimated, well- or poorly understood—essentially anything that applies to the decision at hand.
To perform an AHP, we need a goal or an objective and a set of alternatives, each with criteria (attributes) to compare. Once the hierarchy is built, the decision makers systematically pairwise-evaluate the various elements (comparing them to one another two at a time), with respect to their impact on an element above them in the hierarchy. The decision makers can use concrete data about the elements or subjective judgments concerning the elements’ relative meaning and importance for making the comparisons. Since subjective judgments are imperfect, sensitivity analysis will be very important.
The process converts subjective evaluations to numerical values that can be processed and compared over the entire range of the problem. A priority or numerical weight is derived for each element of the hierarchy, allowing diverse and often incommensurable elements to be rationally and consistently compared to one another. The final step of the process calculates a numerical priority score for each decision alternative. These scores represent the alternatives’ relative ability to achieve the decision’s goal; they allow a straightforward consideration of the various courses of action.
AHP can be used by individuals for simple decisions or by teams working on large, complex problems. The method has unique advantages when important elements of the decision are difficult to quantify or compare, or where communication among team members is impeded by their different specializations, lexicons, or perspectives.
Methodology of the Analytic Hierarchy Process
The procedure for AHP can be summarized as:
Step 1. Build the hierarchy for the decision following Figure 8.2.
Goal: Select the best alternative
Criteria: List c1, c2, c3, ..., cm
Alternatives: List a1, a2, a3, ..., an
Step 2. Judgments and Comparison.
Build comparison matrices using the 9-point scale of pairwise comparisons shown in Table 8.10 for the criteria (attributes) and for the alternatives relative to each criterion. A problem with m criteria and n alternatives will require m + 1 matrices. Find the dominant eigenvector of each matrix. The power method is often used to calculate eigenvectors; see, e.g., [BurdenFaires2005]. The goal, in AHP, is to obtain a set of eigenvectors of the system that measure the importance of the alternatives with respect to the criteria.
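The book's computations are done in Maple; as a language-neutral sketch, the power method on a hypothetical 3 × 3 Saaty comparison matrix can be written in Python with NumPy (the matrix values here are illustrative, not from the text):

```python
import numpy as np

def dominant_eigenvector(A, tol=1e-12, max_iter=1000):
    """Power iteration on a positive pairwise-comparison matrix.
    Returns (lambda_max estimate, weights normalized to sum to 1)."""
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(max_iter):
        w = A @ v
        w = w / w.sum()              # renormalize each iterate
        if np.abs(w - v).max() < tol:
            v = w
            break
        v = w
    lam = float((A @ v / v).mean())  # Rayleigh-like estimate of lambda_max
    return lam, v

# Hypothetical 3x3 criteria matrix on Saaty's scale (reciprocal by construction)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
lam, w = dominant_eigenvector(A)
```

For a perfectly consistent matrix the principal eigenvalue equals n; here it is slightly above 3, and the normalized eigenvector w gives the criteria weights.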
TABLE 8.10: Saaty’s Nine-Point Scale
TABLE Notes: Even numbers represent intermediate importance levels which should only be used as compromises. If the importance level of A to B is 3, then that of B to A is 1/3, the reciprocal.
Saaty’s consistency ratio CR measures the consistency of the pairwise assessments of relative importance. The value of CR must be less than or equal to 0.1 to be considered valid. To compute CR, start by approximating the largest eigenvalue λmax of the n × n comparison matrix. Then calculate the consistency index CI with

CI = (λmax − n) / (n − 1).

Finally, CR = CI/RI, where RI is the random index (from [Saaty1980]) found from the table

n:   1     2     3     4     5     6     7     8     9     10
RI:  0.00  0.00  0.58  0.90  1.12  1.24  1.32  1.41  1.45  1.49
If CR > 0.1, we must go back to our pairwise comparisons and repair the inconsistencies. In general, consistency ensures that if A > B and B > C, then A > C for all A, B, and C.
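The consistency check is a few lines of code. Here is a sketch in Python with NumPy (the book uses Maple), applied to a hypothetical comparison matrix; the RI values are Saaty's standard random indices:

```python
import numpy as np

# Saaty's random indices RI for matrix sizes n = 1..10 [Saaty1980]
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lam = np.max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam - n) / (n - 1)
    return ci / RI[n]

# Hypothetical comparison matrix; its CR comes out well under 0.1
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
cr = consistency_ratio(A)
```

If cr exceeded 0.1, the decision maker would revisit the pairwise judgments before continuing.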
Step 3. Combine all the alternative comparison eigenvectors into a matrix, then multiply by the criteria matrix’s eigenvector to obtain an overall comparative ranking.
Step 4. After the m criterion weights are combined into an n × m normalized matrix (for n alternatives by m criteria), multiply by the criteria ranking vector to obtain the final rankings.
Step 5. Interpret the order presented by the final ranking.
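Steps 3 through 5 reduce to a single matrix-vector product once the eigenvectors are in hand. A minimal Python sketch, using hypothetical stand-in values for the eigenvectors (the book's own code is in Maple):

```python
import numpy as np

# Hypothetical stand-ins: each column of alt_priorities is the normalized
# eigenvector of one alternatives-by-criterion comparison matrix (Step 2),
# and criteria_weights is the eigenvector of the criteria matrix.
alt_priorities = np.array([[0.60, 0.20, 0.25],    # alternative 1
                           [0.30, 0.50, 0.25],    # alternative 2
                           [0.10, 0.30, 0.50]])   # alternative 3
criteria_weights = np.array([0.5, 0.3, 0.2])

scores = alt_priorities @ criteria_weights        # Steps 3-4: overall priorities
ranking = np.argsort(scores)[::-1]                # Step 5: best alternative first
```

Because every column and the weight vector are normalized to sum to 1, the final scores also sum to 1.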
Strengths and Limitations
AHP is very widely used in business, industry, and government. The technique is quite useful for discriminating between competing options for a range of objectives needing to be met. Even though AHP relies on what might be seen as obscure mathematics—eigenvectors and eigenvalues of positive reciprocal matrices—the calculations are not complex, and can be carried out with a spreadsheet. A decision maker doesn’t need to understand linear algebra theory to use the technique, but must be aware of its strengths and limitations.
AHP’s main strength is producing a ranking of alternatives ordered by their effectiveness relative to the criteria’s weighting to meet the project goal. The calculations of AHP logically lead to the alternatives’ ranking as a consequence of preference judgments on the relative importance of the criteria and on how the alternatives satisfy each of the criteria. Making accurate and good-faith relative importance assessments is critical to the method. Manually adjusting the pairwise judgments to obtain a predetermined result is quite hard, but not impossible.
A further strength is that AHP provides a heuristic for detecting inconsistent judgments in pairwise comparisons. When there are a large number of criteria and/or alternatives, inconsistencies can easily be hidden by the problem’s size; AHP highlights hidden inconsistencies.
A main limitation of AHP comes from being based on eigenvectors of positive reciprocal matrices. This basis requires that symmetric judgments must be reciprocal: if A is 3 times more important than B, then B is 1/3 as important as A. This restriction can lead to problems such as “rank reversal,” a change in the ordering of alternatives when criteria or alternatives are added to or deleted from the initial set compared. Several modifications to AHP have been proposed to ameliorate this and other related issues. Many of the enhancements involve new ways of computing and synthesizing pairwise comparisons and/or normalizing the priority and weighting vectors. In the next section, we’ll see TOPSIS, a method that corrects rank reversal.
Another limitation is implied scaling in the results. The final ranking indicates that one alternative is relatively better than another, not by how much. For example, suppose that rankings for alternatives (A, B, C) are (0.392, 0.406, 0.204). The values only imply that alternatives A and B are about equally good (≈ 0.4), and C is worse (≈ 0.2). The ranking does not mean that A and B are twice as good as C.
Hartwich [Hartwich1999] criticized AHP for not providing sufficient guidance about structuring the problem to be solved; that is, how to form the levels of the hierarchy for criteria and alternatives. When project team members rate items individually or as a group, guidance on aggregating the separate criteria assessments is necessary. As the number of levels in the hierarchy increases, the complexity of AHP increases faster; n criteria alone require O(n²) pairwise comparisons.
Nevertheless, AHP is a very powerful and useful decision tool when used intelligently.
Using subjective pairwise comparisons in AHP makes sensitivity analysis extremely important. How often do we change our minds about the relative importance of objects, places, or things? Often enough that we should test the pairwise comparison values to determine the robustness of AHP’s rankings. Test the decision maker weights to find the “break point” values, if they exist, that change the alternatives’ rankings. At a minimum, perform trial-and-error sensitivity analysis using a numerical incremental analysis of the weights. Numerical incremental analysis works by incrementally changing one parameter at a time (OAT), finding the new solution, and graphically showing how the rankings change. Several variations of this method are given in [BarkerZabinsky2011] and [Hurley2001].
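An OAT loop is straightforward to automate. A sketch in Python with NumPy, using hypothetical priorities and weights purely for illustration:

```python
import numpy as np

def oat_rankings(alt_priorities, weights, idx, deltas):
    """One-at-a-time (OAT) analysis: perturb weights[idx] by each delta,
    renormalize the weight vector, and record the resulting ranking."""
    results = []
    for d in deltas:
        w = weights.copy()
        w[idx] = max(w[idx] + d, 0.0)        # keep weights nonnegative
        w = w / w.sum()                      # weights must sum to 1
        scores = alt_priorities @ w
        results.append(tuple(np.argsort(scores)[::-1]))  # best first
    return results

# Hypothetical priorities (rows: alternatives, cols: criteria) and weights
P = np.array([[0.60, 0.20, 0.25],
              [0.30, 0.50, 0.25],
              [0.10, 0.30, 0.50]])
w = np.array([0.5, 0.3, 0.2])
rankings = oat_rankings(P, w, 0, [0.0, -0.5])   # unchanged, then criterion 1 zeroed
```

With these stand-in numbers, zeroing out the first criterion's weight flips the top-ranked alternative, exactly the kind of break point the analysis is hunting for.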
Chen [ChenKocaoglu2008] divided sensitivity analysis into three main groups: numerical incremental analysis, probabilistic simulations, and mathematical models. Probabilistic simulation employs Monte Carlo simulation ([ButlerJiaDyer1997]) to make random changes in the weights and simultaneously explore the effect on the rankings. Modeling may be used when it is possible to express the relationship between the input data and the solution results as a mathematical model. Leonelli’s [Leonelli2012] master’s thesis outlines these three procedures.
We prefer numerical incremental analysis, adjusting the weights with the new weight w′j given by

w′j = ((1 − w′p) / (1 − wp)) · wj,  for j ≠ p,

where wp is the original weight of the criterion to be adjusted, and w′p is its value after adjustment; the remaining weights are rescaled so the full set still sums to 1 [AlinezhadAmini2011].
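This rescaling is a one-liner in practice. A small Python sketch with a hypothetical weight vector:

```python
import numpy as np

def adjust_weights(w, p, new_wp):
    """Set criterion p's weight to new_wp and rescale the other weights
    by (1 - new_wp) / (1 - w[p]) so the vector still sums to 1."""
    w2 = w * (1 - new_wp) / (1 - w[p])
    w2[p] = new_wp
    return w2

w = np.array([0.5, 0.3, 0.2])      # hypothetical criteria weights
w2 = adjust_weights(w, 0, 0.4)     # lower the first weight to 0.4
```

Lowering the first weight from 0.5 to 0.4 scales the other two by 0.6/0.5 = 1.2, giving (0.4, 0.36, 0.24), which still sums to 1.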
Whichever method is chosen, sensitivity analysis is always an important part of an AHP solution.
Examples Using AHP
Let’s look at a selection of examples using AHP, starting with a quite simple case of 3 criteria and 3 alternatives.
Example 8.5. Selecting a VHF Transceiver Antenna.
An amateur radio operator wants to build and install a new VHF antenna, choosing from designs for a vertical 1/4-wave antenna, a 3-element Yagi antenna, and a J-pole antenna. See Figure 8.3. The three main criteria will be size, antenna gain (how much the antenna focuses the signal), and ease of assembly.
FIGURE 8.3: VHF Antenna Types
We’ll use the AHP and AnalyzeComparisonMat programs from the book’s Maple package PSMv2. After loading the package via with, use the Describe command to see a brief description and a list of arguments.
Step 1. Build the 3-level hierarchy listing the criteria and alternatives.
Goal: Select the best antenna Criteria: Size, Gain, Assembly Alternatives: Vertical, Yagi, J-Pole
Step 2. Perform the pairwise comparisons using Saaty’s 9-point scale.
Use the row and column orders specified by the lists in Step 1. The ham operator’s choices are as follows.
Criteria comparison matrix:
The consistency ratio 0.03 is very good.
Alternatives by Size comparison matrix:
The consistency ratio 0.05 is very good.
Alternatives by Gain comparison matrix:
The consistency ratio 0.05 is very good.
Alternatives by Assembly comparison matrix:
The consistency ratio 0.03 is very good.
Step 3. The AHP program will combine the eigenvectors from the three alternatives’ priority matrices to form the overall alternative priority matrix.
Step 4. Obtain the AHP rankings.
Interpretation of Results. The Yagi is the best choice antenna, with the vertical and J-pole at essentially the same rating. Since the Yagi was a clear preference for gain, and gain was the highest rated criterion, the rankings pass an initial “common-sense test.”
The necessary sensitivity analysis will be left to the reader. The first step will be to find the break-even points for the criteria ratings.
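One way to start the break-even search is to scan the dominant criterion's weight downward until the top-ranked antenna changes. Here is a Python sketch; the priority matrix and weights below are hypothetical stand-ins, not the values from the comparison matrices above:

```python
import numpy as np

# Hypothetical stand-ins for the antenna example: rows are Vertical, Yagi,
# J-Pole; columns are Size, Gain, Assembly.
alt = np.array([[0.45, 0.10, 0.45],
                [0.10, 0.70, 0.10],
                [0.45, 0.20, 0.45]])
w0 = np.array([0.2, 0.6, 0.2])

best0 = int(np.argmax(alt @ w0))             # Yagi wins with these weights
# Scan the Gain weight downward, rescaling the others proportionally,
# until the top-ranked antenna changes: that is a break-even point.
for g in np.arange(w0[1], 0.0, -0.01):
    w = w0 * (1 - g) / (1 - w0[1])
    w[1] = g
    if np.argmax(alt @ w) != best0:
        break
```

With these stand-in numbers the Yagi loses the top spot once the Gain weight drops to about 0.41; with the book's actual matrices the break point will differ.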
Example 8.6. Selecting a Car Redux.
Revisit Example 8.3 “Selecting a Car” with the data presented in Table 8.7 (pg. 353), but now use AHP to rank the models.
Step 1. Build the 3-level hierarchy and list the criteria from the highest to lowest priority (mainly for convenience).
Goal: Select the best car
Criteria: Cost, MPG-City, MPG-Hwy, Safety, Reliab., Perform., Style Alternatives: Prius, Fusion, Volt, Camry, Sonata, Leaf
Step 2. Perform the pairwise comparisons using Saaty’s 9-point scale.
We chose the priority order as: Cost, Safety, Reliability, Performance, MPG-City, MPG-Hwy, and Interior & Style. Putting the criteria in priority order allows for easier pairwise comparisons. A spreadsheet similar to Figure 8.4 organizes the pairwise comparisons nicely. Enter the comparisons in Maple as the matrix PCM.
The consistency ratio 0.02 is very good.
Step 3. We enter the AltM matrix with columns listed in the priority order we chose in Step 1; the order must match the PCM matrix. The cost data does not follow the rubric “larger is better”; therefore, the reciprocal of cost
FIGURE 8.4: Pairwise Comparison of Criteria
is used for the first column.
Standard methods for dealing with a variable like cost include: (1) replace cost with 1/cost, (2) use a pairwise comparison with the nine-point scale, (3) use a pairwise comparison with ratios costi/costj, or (4) remove cost as a criterion and a variable, perform the analysis, and then use a benefit/cost ratio to re-rank the results. Many analysts prefer (4) when cost figures are large and dominate the procedure.
For the alternatives, we either use the raw data, or we can use pairwise comparisons by criteria for how each alternative fares versus its competitors. Here, we take the raw data replacing cost by 1/cost, then normalize the columns.
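The reciprocal-and-normalize preparation is easy to sketch in Python with NumPy (the book does this in Maple); the raw data below are hypothetical stand-ins for a few cars on a few criteria:

```python
import numpy as np

# Hypothetical raw data for three cars (rows) on four criteria (columns):
# cost ($), MPG, reliability score, safety score.
raw = np.array([[25000., 50., 9., 8.],
                [27000., 40., 9., 9.],
                [35000., 45., 8., 7.]])

X = raw.copy()
X[:, 0] = 1.0 / X[:, 0]     # cost is "smaller is better": replace with 1/cost
X = X / X.sum(axis=0)       # normalize each criterion column to sum to 1
```

After the transformation, every column sums to 1 and the cheapest car gets the largest entry in the cost column, so "larger is better" holds throughout the matrix.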
Step 4. Execute the procedure and obtain the output to interpret.
Note: Our AHP program will accept either the alternatives’ comparison matrices or the eigenvectors merged into a single alternative comparison matrix.
Once again, use MatrixSort to make the results easier to parse.
We see the resulting rank order for the best car is
Sensitivity Analysis. We alter our pairwise comparison values to obtain a new set of weights and obtain new results: Camry, Fusion, Sonata, Prius, Leaf, and Volt. First we adjust the weights and place the weights into a new comparison matrix.
We altered cost, the criterion with the largest decision weight, by lowering its value incrementally. Then we created a matrix of the new decision weights that included the original set of weights as a reference. We then multiplied it by the transpose of the normalized matrix of alternatives still in criteria order. Using Statistics:-Rank finds the ordering. With the changes in the decision weights, the cars were ranked the same. That is, the resulting values have changed, but not the relative rankings of the cars. The Maple work follows.
Again, we recommend using sensitivity analysis to try to find break points, if any exist.
In the next sensitivity analysis, take the smallest criterion, Interior and Style, and increase its value by 0.1, 0.2, and 0.25, adjusting the other weights proportionally. The new results show a change in rank ordering between the 3rd and 4th increments. Thus the break point lies between adding 0.2 and 0.25 to the criterion weight for Interior and Style. Verify these computations with Maple!
Example 8.7. The Kite Network Redux.
Revisit Krackhardt’s Kite social network; search for the key influencer nodes. According to Newman there are four metrics that contribute to identifying the key nodes of a network. In our priority order, the key criteria are:
Total Centrality, Betweenness, Eigenvector Centrality, Closeness Centrality
Assume we have the outputs from the network analysis program ORA-Pro (which are not shown here due to the volume of output). Take the metrics from ORA-Pro and normalize each column. The columns for each criterion are placed in the matrix X = [xij]. Define wj to be the weight for each criterion. For this example, limit the size of the X matrix to 8 alternatives with the four criteria.
Next, assume we have obtained the criteria pairwise comparison matrix from the decision maker. Using the output from ORA-Pro and normalizing the results, we are ready for AHP to rate the alternatives within each criterion. We provide a sample pairwise comparison matrix with Saaty’s nine-point scale for weighting the Kite network criteria. The consistency ratio is CR = 0.01148, which is much less than 0.1, so our pairwise comparison matrix is consistent. We continue with Maple.
As before, sort the results to make them easier to read.
AHP gives Susan as the key node. However, the bias of the decision maker is important in the analysis of the weights of the criteria. The Betweenness criterion is rated 2 to 3 times more important than the others.
Sensitivity Analysis. Changes in the pairwise comparisons for the criteria cause fluctuations in the key nodes. The reader should change the pairwise comparisons given above so that Total Centrality is not so dominant, and rerun the AHP as we did with the previous example.