# Model Structures

This section gives an overview of the various model structures used in this chapter. We first discuss the generalised RRM model, which can simplify to both standard RUM and RRM structures, before discussing mixture models.

## The Generalised RRM Model

This section draws heavily on the recently published paper by Chorus (2014), which puts forward the Generalised RRM (or G-RRM) model. For reasons of space and to avoid repetition, we do not present and discuss the conventional RRM model and its properties; for further information on the RRM model, the interested reader is referred to Chorus (2012), with applications for example in Boeri, Longo, Doherty, and Hynes (2012), Boeri, Longo, Grisolia, Hutchinson, and Kee (2013), Kaplan and Prato (2012) and Prato (2014). The same applies for the standard RUM model, which is well covered elsewhere. Instead, we directly introduce the G-RRM model, which assumes that discrete choice behaviour is driven by minimisation of the following objective function:

$$RR_i = R_i + \varepsilon_i = \sum_{j \neq i} \sum_{m} \ln\left(\gamma_m + \exp\left[\beta_m \left(x_{jm} - x_{im}\right)\right]\right) + \varepsilon_i \qquad (2.1)$$

where $RR_i$ denotes the random (or total) regret associated with a considered alternative *i*, $R_i$ denotes the 'deterministic' regret associated with *i*, $\varepsilon_i$ denotes the 'unobserved' regret associated with *i*, its negative being distributed i.i.d. Extreme Value Type I with variance $\pi^2/6$, $\beta_m$ denotes the estimable taste parameter associated with attribute $x_m$, $x_{im}$ and $x_{jm}$ denote the values associated with attribute $m$ for, respectively, the considered alternative *i* and another alternative *j*, and $\gamma_m$ denotes the regret-weight for attribute $m$.

The probability of choosing alternative *i* out of *J* alternatives is then given by:

$$P(i) = \frac{\exp(-R_i)}{\sum_{j=1}^{J} \exp(-R_j)} \qquad (2.2)$$

that is, choice probabilities reflect the minimisation of the deterministic regret component.
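As a concrete sketch of Eqs. (2.1)–(2.2), the deterministic regret and the resulting logit probabilities can be computed as follows. The function names and the list-of-lists data layout are illustrative choices, not from the source:

```python
import math

def grrm_regret(X, beta, gamma):
    """Deterministic G-RRM regret R_i for each alternative.

    X[i][m]: value of attribute m for alternative i;
    beta[m]: taste parameter; gamma[m]: regret-weight.
    """
    J, M = len(X), len(beta)
    R = []
    for i in range(J):
        r = 0.0
        for j in range(J):
            if j == i:
                continue
            for m in range(M):
                # attribute regret: ln(gamma_m + exp[beta_m * (x_jm - x_im)])
                r += math.log(gamma[m] + math.exp(beta[m] * (X[j][m] - X[i][m])))
        R.append(r)
    return R

def choice_probs(R):
    """Logit choice probabilities based on minimisation of deterministic regret."""
    exp_neg_R = [math.exp(-r) for r in R]
    total = sum(exp_neg_R)
    return [e / total for e in exp_neg_R]
```

With all regret-weights set to one, this reduces to the conventional RRM model; with all set to zero, it mimics a linear-in-parameters RUM model (up to the scale factor discussed below).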

To illustrate the role of regret-weight $\gamma_m$, consider the so-called attribute regret function $\ln(\gamma_m + \exp[\beta_m(x_{jm} - x_{im})])$, which gives the regret that is associated with comparing considered alternative *i* with another alternative *j*, in terms of attribute $m$. Note that when $\gamma_m$ equals one, the conventional attribute regret function as put forward in Chorus (2010) is obtained. By varying $\gamma_m$ from 0 to 1, and plotting the resulting attribute regret function for $x_{jm} - x_{im}$ ranging from -5 to 5 (keeping $\beta_m$ fixed at unity), the role of the regret-weight becomes immediately clear; the left hand panel of Figure 2.1 shows the effect on the attribute regret function of a step-wise variation in $\gamma_m$, and the right hand panel shows the effect of a continuous change in $\gamma_m$. Both graphs show that when the regret-weight becomes smaller and starts to approach zero, the convexity of the regret function and the resulting reference-dependent asymmetry (or nonlinearity) in preferences vanishes. When $\gamma_m = 0$, there is no asymmetry anymore, implying that the impact on regret of a change in an alternative's attribute is no longer dependent on the alternative's initial performance on that attribute, relative to its competition. Intuitively, the resulting regret function with symmetric preferences (i.e. with $\gamma_m = 0$) looks like a function generated by a linear-in-parameters RUM model, and indeed it can be shown (Chorus, 2014) that if $\gamma_m = 0$ for a particular attribute, the G-RRM model generates the same choice behaviour as does a linear-in-parameters RUM model, for that attribute.
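The two limiting cases of the attribute regret function can be verified directly. Below is a minimal sketch (the function name is an illustrative choice): with $\gamma_m = 0$ the function collapses to the linear form $\beta_m(x_{jm} - x_{im})$, while $\gamma_m = 1$ gives the conventional RRM form $\ln(1 + \exp[\cdot])$:

```python
import math

def attribute_regret(gamma_m, beta_m, x_jm, x_im):
    """Attribute regret: ln(gamma_m + exp[beta_m * (x_jm - x_im)])."""
    return math.log(gamma_m + math.exp(beta_m * (x_jm - x_im)))
```

For example, `attribute_regret(0.0, 1.0, d, 0.0)` returns exactly `d` for any attribute difference `d`, confirming the symmetric, linear-in-parameters limit, whereas `attribute_regret(1.0, ...)` retains the convex, reference-dependent shape plotted in Figure 2.1.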

Figure 2.1: Impact of variation in regret-weight ($\gamma$) on attribute regret ($\beta_m = 1$).

As noted by Chorus (2014), the coefficient values obtained when $\gamma_m = 0$ for a particular attribute are scaled down by a factor of *J* compared to those obtained when estimating a RUM model, which is immediately obvious from Eq. (2.1), which shows that the coefficient is used *J* times per G-RRM function, instead of just once as in a RUM function.
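This scaling property can be checked numerically. In the sketch below (attribute values and coefficient are arbitrary illustrative choices), a single-attribute G-RRM model with regret-weight zero yields exactly the same choice probabilities as a RUM logit whose coefficient is *J* times larger:

```python
import math

def softmax(v):
    """Logit probabilities from a vector of (negative-regret or utility) scores."""
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

# One attribute, three alternatives; values chosen for illustration only.
X = [1.0, 2.5, 4.0]
beta, J = 0.7, len(X)

# G-RRM regret with gamma = 0 collapses to sum_{j != i} beta * (x_j - x_i).
R = [sum(beta * (X[j] - X[i]) for j in range(J) if j != i) for i in range(J)]
p_grrm = softmax([-r for r in R])

# A RUM logit with the coefficient scaled up by J gives identical probabilities.
p_rum = softmax([J * beta * x for x in X])
```

The equivalence follows because $R_i = \beta(T - J x_i)$ with $T = \sum_j x_{jm}$, and the constant $\beta T$ cancels in the logit formula.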

Clearly, depending on the values of $\gamma_m$ for different attributes, different choice models arise; as such, the G-RRM model can be seen as a generic formulation which nests various types of choice models: the conventional RRM model is a special case which is obtained when $\gamma_m = 1$ for all attributes, and the conventional linear-in-parameters RUM model is a special case which is obtained when $\gamma_m = 0$ for all attributes. Furthermore, hybrid RUM-RRM models of the type proposed in Chorus et al. (2013) are obtained when $\gamma_m = 0$ for some attributes and $\gamma_m = 1$ for others.

A more subtle model structure arises when $0 < \gamma_m < 1$ for one or more attributes: as can be seen when inspecting Figure 2.1, for these in-between values of $\gamma$, the regret/utility function is still convex, but the degree of nonlinearity is not as high as for a conventional RRM model (or a G-RRM model with $\gamma_m = 1$). Nonetheless, given the presence of some asymmetry and reference-dependency in the regret function for values of $0 < \gamma_m < 1$, the resulting or implied behaviour should be considered as regret minimisation as opposed to linear-in-parameters utility maximisation behaviour. As a consequence, for the G-RRM model with $0 < \gamma_m < 1$, previously derived properties of the conventional RRM model hold, but these properties are less pronounced than for the conventional RRM model.

As an illustration of how values of $\gamma_m$ influence the salience of key properties of the RRM model, Figure 2.2 presents a numerical simulation in the context of the G-RRM model (with two different regret-weights). The numerical example refers to the existence of a compromise effect, which states that consumers tend to prefer alternatives with a reasonable performance on each attribute, as opposed to alternatives with a strong performance on some, and a poor performance on other attributes. The fact that the RRM model predicts such a compromise effect has been reported in various theoretical (Chorus, 2010) and empirical (Chorus & Bierlaire, 2013) studies. As these papers show, it is the convexity of the regret function in the RRM model that generates the compromise effect: deterioration of an attribute with an already poor performance generates a lot of additional regret, and this cannot be compensated by the relatively small decreases in regret associated with further improvement of an already strong attribute – this is easily verified when inspecting the regret curve for γ = 1 in Figure 2.1's left panel. As a consequence of this semi-compensatory behaviour implied by the convexity of the regret function, alternatives with a reasonable performance on all attributes gain a market share bonus in the context of RRM models. Note that, given the symmetric and reference-independent treatment of attributes in linear-in-parameters RUM models, the latter do not feature the compromise effect (as, for example, highlighted and empirically verified in Chorus & Bierlaire, 2013).

Figure 2.2: Compromise effect for the conventional RRM model and for G-RRM with regret-weights 1 and 0.1 (the latter in dotted lines).

Since RRM's accommodation of the compromise effect is a consequence of the convexity of its regret function, the expectation is that for a G-RRM model with 0 < γ < 1 this compromise effect, while still present, will be less pronounced than in a conventional RRM model (i.e. a G-RRM model with γ = 1). The following numerical example – which has been used in previous publications to illustrate how RRM generates compromise effects, and as such forms a good benchmark – serves to illustrate and verify this expectation. Assume a choice situation between three alternatives, each defined in terms of two quality attributes (*x*1 and *x*2) that are equally important to the decision-maker (higher values are preferred over lower ones; *β* = 1 for each of the two attributes): *A* = {1, 3}, *B* = {2 + Δ, 2 − Δ}, *C* = {3, 1}. In words, where *A* and *C* take on relatively extreme positions on the two attributes, *B* is a compromise alternative to the extent that Δ is close to 0. Figure 2.2 plots *P*(*A*), *P*(*B*) and *P*(*C*) as a function of Δ, for G-RRM models with *γ* = 1 for both attributes, and for G-RRM models with *γ* = 0.1 for both attributes, respectively (the former in solid lines, the latter in dotted lines).

Figure 2.2 clearly shows the presence of a compromise effect for alternative *B:* as long as its attributes remain close to 2 (i.e. Δ close to 0), it receives a choice probability bonus at the cost of the two extreme alternatives. Secondly, and this is more relevant in the context of this chapter, the compromise effect is still present for the case where *γ =* 0.1, but less pronounced than for the case where *γ* = 1 (i.e. the conventional RRM model). This decreasing salience of the compromise effect for the G-RRM model with the lower regret-weight follows directly from the decreasing difference between the sensitivity to attribute changes in the domain of poor performance versus in the domain of strong performance. This decreasing difference in turn follows directly from the decreased level of nonlinearity of the regret curve, which is a direct consequence of the lower regret-weight. In sum, regret-weights with values between 0 (linear-in-parameters RUM) and 1 (conventional RRM model) still generate regret minimisation behaviour and as such still result in key properties of the conventional RRM model, but with a reduced salience.
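The numerical example behind Figure 2.2 can be reproduced with a few lines of code. The sketch below (function names are illustrative choices) computes G-RRM choice probabilities for the three alternatives at a given Δ and regret-weight:

```python
import math

def grrm_probs(X, beta, gamma):
    """Logit choice probabilities from deterministic G-RRM regret."""
    J, M = len(X), len(beta)
    R = [sum(math.log(gamma[m] + math.exp(beta[m] * (X[j][m] - X[i][m])))
             for j in range(J) if j != i for m in range(M))
         for i in range(J)]
    e = [math.exp(-r) for r in R]
    s = sum(e)
    return [x / s for x in e]

def compromise_example(delta, gamma_value):
    """A = {1, 3}, B = {2+d, 2-d}, C = {3, 1}; beta = 1 for both attributes."""
    X = [[1.0, 3.0], [2.0 + delta, 2.0 - delta], [3.0, 1.0]]
    return grrm_probs(X, [1.0, 1.0], [gamma_value, gamma_value])
```

At Δ = 0, *P*(*B*) exceeds 1/3 under both regret-weights, but the compromise bonus is larger under γ = 1 than under γ = 0.1, consistent with the pattern shown in Figure 2.2.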

For estimation purposes, it is pragmatic to parameterise the regret-weight in terms of a binary logit function: $\gamma_m = \exp(\delta_m)/(1 + \exp(\delta_m))$. For (large) negative values of $\delta_m$, a RUM specification is approached for the attribute, and for (large) positive values, a RRM specification is approached. When $\delta_m$ is estimated to lie in between these two extremes, for example when it is estimated to be insignificantly different from 0, implying $\gamma_m = 0.5$, regret minimisation behaviour is obtained but with less emphasis on regret than is the case for the conventional RRM model.
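This parameterisation keeps the regret-weight in the unit interval for any unconstrained estimate of $\delta_m$. A minimal sketch (the function name is an illustrative choice):

```python
import math

def regret_weight(delta_m):
    """Binary-logit parameterisation: gamma_m = exp(delta_m) / (1 + exp(delta_m))."""
    return math.exp(delta_m) / (1.0 + math.exp(delta_m))
```

A value of `delta_m = 0` implies a regret-weight of 0.5; strongly negative values push the attribute towards a RUM treatment, strongly positive values towards a RRM treatment.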

## Mixture Model

The mixture model introduced by Hess et al. (2012) is a simple generalisation of a latent class structure in which the classes differ not just in the parameters of the same underlying model structure, but also in the model structure used in each class.

1. Note that choice probabilities generated by a linear-in-parameters RUM logit model would of course be insensitive to changes in Δ, as the model would assign equal choice probabilities of 0.33 to all three alternatives, irrespective of the value of Δ; these probabilities are hence not plotted, for clarity of communication.

A general specification of a model allowing for different decision rules within a latent class framework is given by:

$$LC_n = \sum_{s=1}^{S} \pi_s \, LC_{n,s} \qquad (2.3)$$

where $LC_n$ is the contribution to the likelihood function of the observed choices for respondent *n*. This probability of the observed choices is given by a weighted average over *S* different types of models, where $LC_{n,s}$ is the probability of the observed sequence of choices for person *n* if model $s$ is used, and $\pi_s$ is the weight attached to model $s$ (representing a specific decision process), where $0 \leq \pi_s \leq 1$ and $\sum_{s=1}^{S} \pi_s = 1$. The mixing of models is performed at the level of individual respondents rather than individual choices.

In the existing work, the above specification has been used to combine models such as RUM, RRM and elimination by aspects (EBA). In the present chapter, we extend this by making the individual classes use G-RRM models, thus allowing not just for mixtures between pure RUM and RRM classes, but also mixed RUM-RRM classes and classes with intermediate specifications for individual parameters.
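The respondent-level mixing in Eq. (2.3) amounts to a weighted average of per-class sequence probabilities. A minimal sketch, assuming precomputed class weights and per-class likelihood contributions (both hypothetical inputs):

```python
def mixture_likelihood(pi, L_ns):
    """LC_n = sum_s pi_s * LC_{n|s}: weighted average over S model classes.

    pi:   class weights (non-negative, summing to one);
    L_ns: probability of respondent n's observed choice sequence under each
          class-specific model (e.g. a RUM, RRM or G-RRM specification).
    """
    assert abs(sum(pi) - 1.0) < 1e-9 and all(p >= 0.0 for p in pi)
    return sum(p * L for p, L in zip(pi, L_ns))
```

Note that the weighting is applied to the probability of the whole choice sequence of a respondent, not to each choice separately, reflecting the assumption that a given respondent uses one decision process throughout.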