The basic SAW method uses the following equations:

si = Σj wj dij   (1)

Σj wj = 1   (2)

where wj is the dimensionless weight of criterion j, dij is the dimensionless score in the interval [0, 1] of alternative i with respect to criterion j, and si is the aggregate score in the interval [0, 1] of alternative i. Depending on the normalization convention applied, the optimal solution is the alternative with either the highest or lowest aggregate score, si. The other solutions can likewise be ranked in order of decreasing preference. From here on, it is assumed that all scores are normalized such that higher values are more desirable.
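As a concrete illustration, the aggregation in equations (1) and (2) can be sketched in a few lines of Python; the criterion weights, alternative names, and scores below are hypothetical placeholders, not values from this chapter.

```python
# Minimal SAW aggregation sketch (illustrative data, not from the chapter).

def saw_score(weights, scores):
    """Aggregate normalized scores: s_i = sum_j w_j * d_ij (eq. 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1 (eq. 2)"
    return sum(w * d for w, d in zip(weights, scores))

weights = [0.5, 0.3, 0.2]            # w_j, summing to 1
alternatives = {
    "A1": [0.8, 0.4, 0.9],           # normalized scores d_ij in [0, 1]
    "A2": [0.6, 0.9, 0.5],
}
scores = {name: saw_score(weights, d) for name, d in alternatives.items()}
best = max(scores, key=scores.get)   # higher scores assumed more desirable
```

With these made-up numbers, A1 aggregates to 0.70 and A2 to 0.67, so A1 would be ranked first.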
In SAW, the choice of weighting factors has a strong influence on the solution (Maliene et al., 2018). Since the weights reflect the decision-maker's preferences, they are inherently subjective. To manage the biases that may exist in any MADM application, systematic procedures (e.g., eigenvector or geometric mean methods) for determining weights based on pairwise comparisons can be used, as in the Analytic Hierarchy Process (AHP) (Saaty, 1980). Sensitivity analysis can also be used to gauge the robustness of the rankings (Maliene et al., 2018). Cruz and Almario (2018) showed that, for any MADM problem solved via SAW, there exists a polytope in the hyperspace of the criteria weights within which the best solution retains its optimal rank; they referred to this polytope as the invariance region. Subsequently, Tan et al. (2019a) developed a procedure for tracing the invariance region.
Interval numbers offer an efficient means of representing epistemic uncertainty in mathematical models. Such uncertainty arises not from random variation (i.e., stochasticity) but from a lack of knowledge, and it is invariably present when new and emerging technologies are characterized (Sikdar, 2019). This phenomenon is evident in the range of values reported in the literature for techno-economic characteristics of various NETs (e.g., McLaren, 2012; Smith et al., 2016; Alcalde et al., 2018). An interval number is a range of values characterized by a lower and an upper bound, which correspond to the pessimistic (i.e., conservative or risk-averse) and optimistic (i.e., aggressive or risk-tolerant) subjective estimates of the true (but unknown) value. In a sense, the true value is "spread out" uniformly across the interval; the width of the interval signifies the level of uncertainty of the quantity.
Arithmetic operations on interval numbers are based on operations on the limits. For example, given two interval numbers A = [aL, aU] and B = [bL, bU], we have:

A + B = [aL + bL, aU + bU]   (3)

A - B = [aL - bU, aU - bL]   (4)

A × B = [min(aLbL, aLbU, aUbL, aUbU), max(aLbL, aLbU, aUbL, aUbU)]   (5)

A ÷ B = A × [1/bU, 1/bL]   (6)
In the case of division, i.e., equation (6), the result is undefined if 0 ∈ [bL, bU]. Crisp or ordinary numbers can be regarded as a special case of intervals with zero spread. These arithmetic operations can be used both in data pre-processing (e.g., normalization) and in the generic SAW procedure defined by equations (1) and (2). The resulting aggregate scores will thus have interval values.
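A minimal Python sketch of these interval operations might look as follows; the function names are illustrative and do not correspond to the chapter's LINGO code.

```python
# Interval arithmetic on (lower, upper) pairs; a hypothetical helper module.

def iadd(a, b):
    """Interval addition: add the corresponding bounds."""
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    """Interval subtraction: subtract the opposite bounds."""
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    """Interval multiplication: min and max of the four bound products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def idiv(a, b):
    """Interval division; undefined when 0 lies in the divisor interval."""
    if b[0] <= 0 <= b[1]:
        raise ZeroDivisionError("0 is contained in the divisor interval")
    return imul(a, (1 / b[1], 1 / b[0]))

A, B = (1.0, 2.0), (3.0, 4.0)
total = iadd(A, B)        # (4.0, 6.0)
ratio = idiv(A, B)        # (0.25, 0.666...)
```

Note that crisp numbers can be passed through the same functions as degenerate intervals such as (2.0, 2.0).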
Interval numbers can be ranked by pairwise comparison of their values. The trivial cases, where (a) two interval numbers are exactly the same, or (b) the worst value of one number exceeds the best value of the other, need not be discussed here in detail. Complications arise if the intervals overlap, which suggests that one number is not clearly larger than the other. In such cases, the procedure described by Sayadi et al. (2009) can be applied, wherein the intervals being compared are "collapsed" into point estimates that are weighted averages of the lower and upper bounds:

c(A) = α aL + (1 - α) aU   (7)
where the parameter α, in the interval [0, 1], quantifies the degree of risk aversion of the decision-maker. A completely pessimistic decision-maker will use α = 1 and will compare interval numbers based on their lower bounds (note that higher values are assumed to be more desirable here), while a completely optimistic decision-maker will use α = 0 and compare based on the upper bounds. This parametric control of the interval SAW procedure is a powerful feature that is particularly useful in the case of NETs, whose techno-economic parameters show wide uncertainty margins due to the absence of an extended history of large-scale use.
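This collapsing rule reduces to a one-line convex combination of the bounds; the following sketch uses a hypothetical interval score to show how α shifts the ranking basis from the optimistic to the pessimistic end.

```python
# Collapse an interval score to a point estimate (Sayadi et al., 2009);
# the interval (0.4, 0.9) is an invented example.

def collapse(interval, alpha):
    """Point estimate c = alpha * lower + (1 - alpha) * upper."""
    lo, hi = interval
    return alpha * lo + (1 - alpha) * hi

pessimist = collapse((0.4, 0.9), 1.0)   # ranks by the worst case: 0.4
optimist = collapse((0.4, 0.9), 0.0)    # ranks by the best case: 0.9
balanced = collapse((0.4, 0.9), 0.5)    # midpoint: 0.65
```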
Comparison of NET Options
This case study is based on data reported by Alcalde et al. (2018) on the potential of NETs in Scotland. Their results are interesting, since they conclude that the NET potential is sufficient to offset Scotland's GHG emissions and achieve net zero emissions; the country thus serves as a microcosm for the kind of NET scale-up that needs to be done throughout the world. Raw data on the six NET options based on relevant criteria are shown in Table 2; with the exception of TRL, which is drawn from the paper of McLaren (2012), these data are based on the work of Alcalde et al. (2018). In this case, TRL provides a measure of the dynamic scalability of a given technology; more mature options are capable of being scaled up within the time horizon needed to achieve deep emissions cuts. Note that many of the scores show wide margins of uncertainty, in some cases spanning multiple orders of magnitude. The necessary calculations can be done in a spreadsheet, or using optimization and equation-solving software such as LINGO (Schrage, 2001). Appendices containing the LINGO code of this case study (which can be copied and pasted directly into a LINGO model file) are given at the end of the chapter in order to facilitate replication by the reader. A demonstration version of the software can be downloaded from the company website (www.lindo.com).
These raw data can then be normalized into the required dimensionless form by linear interpolation between the threshold (i.e., best and worst) values given in Table 3. This is a trivial step and, for brevity, is not shown here. The criteria weights, also given in Table 3, are assumed to be known a priori; in practice, they can be systematically determined using AHP (Saaty, 1980). The resulting dimensionless scores are given in Table 4.
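Assuming simple min-max linear interpolation between the threshold values, the normalization step might be sketched as follows; the threshold numbers below are placeholders rather than the entries of Table 3.

```python
# Normalize a raw score by linear interpolation between thresholds;
# threshold values here are invented for illustration.

def normalize(x, worst, best):
    """Map worst -> 0 and best -> 1, clipping results to [0, 1].

    Works for both benefit criteria (best > worst) and cost criteria
    (best < worst), since the sign of (best - worst) flips the slope.
    """
    d = (x - worst) / (best - worst)
    return max(0.0, min(1.0, d))

half = normalize(50.0, worst=100.0, best=0.0)   # cost criterion: lower is better
# Interval scores are normalized bound by bound (benefit criterion here):
lo, hi = normalize(2.0, 0.0, 10.0), normalize(6.0, 0.0, 10.0)
```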
Table 2. Scores of NET options with respect to techno-economic criteria (Alcalde et al., 2018; McLaren, 2012).
Table 3. Threshold values and weights of criteria.
Table 4. Normalized scores of NET options with respect to techno-economic criteria.
It is then possible to use SAW to determine the optimal solution based on the weights also given in Table 3. Initially, we assume a highly conservative decision-maker (α = 1) who judges the NET alternatives based on their worst scores. In this case, the optimal solution is AR, with a dimensionless aggregate score of 0.561. Implementation of these steps in LINGO can be done using the code given in Appendix B. The code can also be modified by the reader to handle different data sets. Figure 1 shows the sensitivity analysis of aggregate scores with respect to α. For a decision-maker with moderate to low levels of risk aversion (0 < α < 0.76), the optimal solution is BECCS. Conservative decision-makers (0.76 < α < 1), on the other hand, will favor AR; this is a well-understood and time-tested NET and is, thus, the clear choice for the risk-averse.
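A sensitivity scan over α of the kind shown in Figure 1 can be sketched as follows; the interval aggregate scores are invented for illustration and only qualitatively mimic the BECCS/AR crossover described above.

```python
# Scan the risk-aversion parameter alpha and record the best option;
# the interval aggregate scores below are hypothetical, not the chapter's.

def collapse(interval, alpha):
    """Point estimate c = alpha * lower + (1 - alpha) * upper."""
    lo, hi = interval
    return alpha * lo + (1 - alpha) * hi

agg = {"BECCS": (0.30, 0.95), "AR": (0.55, 0.65)}  # made-up interval scores

best_at = {}
for alpha in [i / 10 for i in range(11)]:
    best_at[alpha] = max(agg, key=lambda k: collapse(agg[k], alpha))
# BECCS is preferred at low alpha (optimistic); AR takes over as alpha -> 1
```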
The decision-maker may also be interested in the stability of the optimal solution with respect to different weights. Given the baseline weights, and with α = 1, AR was found to be the optimal alternative. The decision-maker can then determine the range of variations of the weights (i.e., the rank invariance region) for which AR remains optimal. This can be done using the procedure described by Tan et al. (2019a) with the working LINGO code in Appendix C. Table 5 gives the lower and upper limits of the criteria weights; any weight value that falls outside these bounds will definitely result in an optimal solution other than the AR option. A narrower range of weight values indicates higher sensitivity of the optimum choice to a given criterion. For example, when the weight given to sequestration potential approaches 1, the optimal NET alternative becomes EW. On the other hand, it can also be seen that the choice of AR is completely robust to any changes in specific cost or TRL.

Figure 1. Sensitivity analysis with respect to α.

Table 5. Upper and lower limits of the weights of criteria.
Implications for the Role of CDR/NETs in Large-Scale Carbon Management
It is clear that CDR or NETs will be needed to offset existing positive GHG emissions in order to achieve net zero emissions by mid-century, as recommended by the IPCC (2018). The different technology options being discussed in the literature need to be evaluated based on criteria such as potential, use of resources (e.g., water and energy), cost, and technological maturity. The MADM methodology described in this chapter provides a workable approach to ranking alternatives even if precise data are not available for evaluation. Nevertheless, this technique only provides a piece of the puzzle, and other aspects need to be accounted for if CDR and NETs are to become a significant carbon management strategy. First, it is notable that only AR is sufficiently mature for immediate large-scale deployment (Bastin et al., 2019). Land-based techniques (SCS, BA and EW) have been field-tested only at limited scales, well below the levels that will deliver significant benefits. BECCS and DAC have been demonstrated at the pilot plant scale, and their scale-up will depend on the presence of CO2 transportation infrastructure and suitable geological storage sites.
Given the current status of technology, some key points can be identified. First, bridging the gap between research and commercial deployment of most CDR/NET options will still require significant economic investment (Nemet et al., 2018). Optimization models will be needed in order to maximize the technology maturity gains given limited financial resources (Tan et al., 2019b). There will also still be major challenges in ramping up even relatively mature CDR/NETs, such as BA, from field tests to gigaton-scale carbon management strategies; such issues include planning supply chains and developing reliable monitoring systems to verify emissions cuts (Tan, 2019). Targets for CDR/NET deployment at the level of countries (Smith et al., 2016b) or regions (Geden et al., 2019) should take into account local resource limitations, as well as integration with other carbon management strategies being used concurrently. Governance aspects (e.g., economic incentives) need to be studied for proper calibration and alignment with societal priorities (Bellamy, 2018). Finally, as with any technological solution, sociocultural aspects need to be kept in mind, since these factors can act as barriers to successful scale-up (Buck, 2016).