Conjoint design for new products
A conjoint design begins with a list of features or attributes that may define or characterize the new product. These are typically physical attributes such as size, color, weight, and so forth. Price is sometimes included, but this is not necessary since the objective is to optimize the construction of the product itself. This construction or specification of the attributes is what customers will first evaluate. The list of attributes may be developed by the design or engineering team based on their knowledge of what must work together for the functionality they believe the product should have. In addition, the attributes could, and frequently do, reflect the team's internal judgement and opinion, sometimes unfounded, as to what they should be, with the customers merely providing evidence that their choices are correct. SMEs and KOLs could also be called on to provide input into the attribute list. Occasionally, the list is quite long, so it may have to be narrowed to a subset that consists of the most important attributes from the designers' perspective. This subset, however, may still not have the attributes in the right importance order from the customers' perspective, so the designers' effort may be misplaced; in short, there is a design resource misallocation. The attributes could also be the features identified by the methodology I describe in Section 7.2.1, which relies on text analysis of product reviews. That method does not necessarily identify the levels of each attribute, just the attributes themselves. Nonetheless, compiling an attribute list is the important first step. The design team, working in conjunction with SMEs, KOLs, and the marketing team, has to specify the levels for the attributes.

The attribute list consists of a set of distinct items that are mutually exclusive but not completely exhaustive. They are mutually exclusive because they are distinct features by definition. They are not completely exhaustive because no product is so simple that it has only a few attributes. Most products are defined by many attributes, but it is not practical to test all of them. There are probably many attributes that designers can correctly incorporate in a design without concern about negative customer feedback, or that customers would even notice. For example, car buyers are more concerned about legroom, the number of cup holders, a moon roof, and an on-board navigation system, while they are less concerned about an on-board computer.

Creating the attribute list is not the end of this phase of a conjoint study. The levels for each attribute must also be defined. These could simply be "Yes/No" or "Present/Absent" or "Small/Medium/Large". They are discrete, and context often determines which to use. Continuous attributes are possible, but usually these are discretized. Weight, for instance, might be 6 oz./8 oz./12 oz. for cans of tomato paste, where this is obviously a continuous measure but discretized into three levels.

The levels that define each attribute are frequently mutually exclusive and completely exhaustive. There is no third possibility "Maybe" for the binary levels "Present/Absent." In some situations, however, the levels of an attribute may not be mutually exclusive. For instance, a home security system could have an attribute "emergency calls" so that the system will call a public help agency in the event of a home emergency. The levels could be "Fire Department", "Police Department", and "EMS". These do not have to be mutually exclusive since all three could be included as security help features.
In this case, the levels could be redefined as all combinations of "Fire Department", "Police Department", and "EMS." With three levels for this example, there are 2³ - 1 = 7 combinations consisting of only one level, two levels, or all three levels; the null combination, of course, does not make sense, so it is omitted. These seven are mutually exclusive.

It could also be that the levels across two (or more) attributes are not mutually exclusive. For example, suppose a soup manufacturer defines a new soup with two attributes: content and whether or not the soup is vegetarian. The content may be vegetables or sausage, while the vegetarian attribute would be "Yes/No." But the content cannot be sausage and the soup be vegetarian at the same time. This combination is a disallowed combination, or impossible combination, across attributes.

Once the attributes and levels are defined and any disallowed combinations are identified and excluded, unique combinations of attribute levels that will define potential products have to be created. The identification of attributes and their levels is an art part of product design, but this formation of combinations is a science part (although there is also an art element to it). A combination is simply called a design, and the full list of combinations is reflected in a design matrix. The design matrix is the list of potential products, and a numeric coding of this matrix is used in the statistical estimation of the part-worth utilities. The easiest method for creating a design matrix is to create a full factorial design that consists of all combinations of the attribute levels, but as I mentioned above, a fraction is typically used to lessen the burden on survey respondents.

A new product design example

Musicians have always been hampered as they play an instrument by the need to turn a physical page of sheet music. Sometimes, a human page turner assists them, but this is a distraction, to say the least, and a potential handicap to the musician if the page turner loses the spot on a page and turns the page at the wrong moment. A manufacturer of automatic page turners developed a new product to solve this physical page turning problem. The solution is a pedal device that communicates with a tablet, smartphone, or computer where a virtual page appears on a screen. The page is turned by stepping on the pedal located on the floor next to the musician. Some practice is needed to get the pedal pressing correct, but the effort is worthwhile since the page turning response rate is high. The device consists of four attributes: Device Pairing, Power Source, Weight, and Page Reversibility.
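The combination logic just described is easy to automate. Below is a minimal Python sketch, using the emergency-call and soup examples from the preceding paragraphs, that enumerates the seven mutually exclusive service bundles and builds a small full factorial with the disallowed combination screened out (the helper function is purely illustrative):

```python
from itertools import combinations, product

# Emergency-call attribute: redefine the non-exclusive services as bundles.
services = ["Fire Department", "Police Department", "EMS"]
bundles = [combo for r in range(1, len(services) + 1)
           for combo in combinations(services, r)]
print(len(bundles))  # 2**3 - 1 = 7 mutually exclusive bundles (null bundle omitted)

# Soup example: full factorial of two attributes, then drop the disallowed
# combination (a vegetarian soup whose content is sausage).
attributes = {
    "Content": ["Vegetables", "Sausage"],
    "Vegetarian": ["Yes", "No"],
}
full_factorial = [dict(zip(attributes, run)) for run in product(*attributes.values())]

def allowed(run):
    # Screen out the impossible combination across attributes.
    return not (run["Content"] == "Sausage" and run["Vegetarian"] == "Yes")

design_matrix = [run for run in full_factorial if allowed(run)]
print(len(full_factorial), len(design_matrix))  # 4 runs before screening, 3 after
```

In practice, the design matrix for a real study would be generated with an experimental design tool rather than by brute-force enumeration, as discussed in the Software subsection below.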
Discussions with individual musicians, focus groups, and reviews of online blogs, chatrooms, and discussion groups using the methods described in Chapter 2 resulted in these four attributes as the primary ones for prototype development. These are the physical attributes. The levels are listed in Table 3.1. Price is not included at this stage of analysis, although it could be included.

TABLE 3.1 Four attributes for an electronic music page turner device. There are six levels for Device Pairing, three for Power Source, four for Weight, and two for Page Reversibility.

Conjoint design

A full factorial design consists of all combinations of the attribute levels. For this example, this is 6 x 3 x 4 x 2 = 144 combinations, or runs, or possible products. In terms of the notation for (3.1), each of these 144 products corresponds to one value of the product index. In a conjoint study, each customer is presented with each product one at a time and is asked to rate each
one separately on a 10-point preference or liking scale, but 144 products is clearly far too many for any one respondent to handle, so a smaller number is needed. This smaller number is a fraction of the full set of combinations, and the smaller design is, hence, called a fractional factorial design. The minimum size for this problem is 12 combinations, a one-twelfth fraction, but a better size is 24, or a one-sixth fraction, since 12 does not allow for any error estimation whereas 24 does. So a fractional factorial design consisting of 24 runs or products was created. This is shown in Figure 3.2.

FIGURE 3.2 The design matrix in 24 runs for the music conjoint example. The full factorial is 144 runs. The 24 runs are a one-sixth fraction of the total number of runs.

Each row in the design matrix is a potential product. Referring to the first row of the design matrix in Figure 3.2, one possible music sheet turner is compatible with an iPad, uses 2 AA batteries, weighs 8.2 oz., and allows page reversing.

Conjoint model estimation

The statistical objective for conjoint analysis is the estimation of the part-worth utilities, one part-worth for each level of each attribute. These show the value to customers of each level so that, by appropriately aggregating the part-worths to form products, you can determine which attributes and which levels are most important or critical for the product. The part-worths are, therefore, the basis for the optimal product design parameters.

The part-worths are typically estimated using ordinary least squares (OLS) regression, although this may not be the best estimation method. As noted in Paczkowski [2018], other estimation methods are available, but OLS is commonly used because of its ubiquity (it is implemented in most statistical software and spreadsheet packages); many analysts are trained in OLS estimation to the exclusion of other methods; and the calculations are simple to do and understand. The design matrix is the explanatory data input while the preference ratings are the dependent data. The preference ratings can be used as-is, but the explanatory data must be coded since they are qualitative factors; only numerics can be used in statistical estimation.

There are two ways to code the qualitative factors, dummy coding and effects coding, although they give the same results after interpretation adjustments. The former is popular in econometric studies while the latter is popular in market research studies. Dummy coding involves assigning 0 and 1 values (representing "absent" or "turned off" and "present" or "turned on", respectively) to the levels of a qualitative factor. Effects coding uses -1 and +1 values for the comparable settings. In either case, new variables are created for each level of a qualitative factor except one level, which is chosen as a base level. For example, if a factor has three levels, then two variables are created; if it has four levels, then three are created. The reason one variable is omitted is to avoid what is commonly called the dummy variable trap, which leads to a situation of perfect multicollinearity. See Gujarati [2003] for a discussion. Dummy variables measure shifts from a base level, which is the one that is dropped, while effects variables measure differences from the mean of the dependent variable. The effects codes can be easily derived from the dummy codes, so intuitively you should expect the estimation results with either coding to be the same; just the interpretations differ.
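To see the two coding schemes side by side, here is a small Python sketch for a hypothetical three-level factor (the factor name and levels are invented for illustration):

```python
import pandas as pd

# Hypothetical three-level factor; "Small" is treated as the base (omitted) level.
levels = pd.Series(["Small", "Medium", "Large"], name="Size")

# Dummy (0/1) coding: one column per non-base level.
dummy = pd.get_dummies(levels, prefix="Size", dtype=int).drop(columns="Size_Small")

# Effects coding: same columns, except the base-level rows are coded -1 throughout.
effects = dummy.copy()
effects.loc[levels == "Small", :] = -1

print(pd.concat([levels, dummy, effects.add_suffix("_eff")], axis=1))
```

For the base level, every column is coded -1; this is what allows the base level's part-worth to be recovered, as described next.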
Regardless of the coding method, the created variables are often simply called "dummies." An advantage of effects coding is that the coefficient for the base level can be retrieved. As shown by Paczkowski [2018], the coefficient for the omitted base equals -1 × Σ(estimated coefficients for the attribute's other levels), so the part-worths for all the levels of an attribute sum to 0.0. Figure 3.3 shows the estimated coefficients or part-worths for each attribute for the page turner problem in the report labeled Parameter Estimates. The Expanded Estimates section shows the retrieved part-worths. The sum of all the coefficients for an attribute in this section is 0.0.

FIGURE 3.3 Part-worth utilities. The top panel shows the OLS estimates based on effects coding of the qualitative factors. One level, the last in alphanumeric order, is dropped to avoid perfect multicollinearity. The second panel shows the estimates expanded to include the omitted level from each attribute. Notice that the utilities in the Expanded Estimates section sum to 0.0 for each attribute.

Most statistical software provides methods for doing the coding. In some instances, the coding is automatic. The selection of the base to be omitted is also sometimes automatic; a common rule is to use the last level in alphanumeric order. See Paczkowski [2018] for a thorough discussion and comparison of these two coding schemes and the bases.

This coding explains the minimum size of 12 combinations for the page turner problem mentioned above. For each attribute, a series of variables, either dummy or effects coded, is defined for estimation. But there is always one less dummy variable than there are levels. The numbers of required dummy variables for each attribute are shown in Table 3.2.

TABLE 3.2 The number of required dummy or effects coded variables for each attribute for conjoint estimation. The first attribute, Device Pairing for example, has six levels as seen in Table 3.1. To avoid the dummy variable trap, five dummies are required. The sum of the dummies is 11, so there are 11 variables, each requiring a coefficient in the model. A constant is also included in the model, which has its own coefficient. A total of 12 coefficients have to be estimated. The minimum number of observations, or runs in this case, is therefore 12.

An effects coding was used for estimating the part-worth utilities for the page turner problem. The estimates are shown in Figure 3.3. Notice that there is a separate estimated coefficient for each level of each attribute in the Expanded Estimates report. For Device Pairing, for example, which has six levels, there are six estimated coefficients.
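As a concrete illustration of the estimation step, the following Python sketch fits an effects-coded OLS model with statsmodels and then recovers the omitted level. The two attributes, their levels, and the ratings are invented for illustration; the real study uses the four page-turner attributes and the respondents' 10-point ratings.

```python
from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-attribute design replicated to 24 runs, with made-up ratings.
runs = list(product(["iPad", "iPhone", "Laptop"], ["Yes", "No"])) * 4  # 24 runs
design = pd.DataFrame(runs, columns=["Pairing", "Reversible"])
rng = np.random.default_rng(1)
design["Rating"] = rng.integers(1, 11, size=len(design)).astype(float)

# Effects (Sum) coding drops one level per attribute to avoid the dummy variable trap.
fit = smf.ols("Rating ~ C(Pairing, Sum) + C(Reversible, Sum)", data=design).fit()
print(fit.params)

# Expanded estimates: the omitted level's part-worth equals minus the sum of the
# attribute's estimated coefficients, so an attribute's part-worths sum to 0.0.
pairing = fit.params.filter(like="C(Pairing, Sum)")
print("omitted Pairing level part-worth:", -pairing.sum())
```

The level not shown in the fitted output is the omitted base; its part-worth is the negated sum printed on the last line.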
These estimates can be used to predict the total utility or total worth of each possible product. As I noted above, there are 144 possible products. The best product configuration is the one identified by the maximum part-worth for each attribute. From Figure 3.3, these part-worths are 3.34, 0.13, 1.54, and 0.38. The intercept is 5.13. The sum is 10.52.

Although the main output of the OLS estimation is the part-worth utility estimates, and these are valuable unto themselves, it is the determination of the attribute importances that most analysts look at for determining what should be emphasized in the new product. The attribute importances are the scaled ranges of the part-worth utilities of each attribute, with the scale being 0-1 (or 0%-100%). The range for an attribute is the maximum part-worth less the minimum part-worth. Each attribute's range is divided by the sum of the ranges to get the scaled range for that attribute. A typical chart of these importances shows the attributes, not their levels, rank ordered by the importances. The ranges for the part-worth utilities were calculated and scaled to sum to 100% and are shown in Figure 3.4. It is clear that Device Pairing is the most important feature for a new product. See Paczkowski [2018] for a discussion of the importances calculations and interpretation.

FIGURE 3.4 Attribute importances calculated by scaling the ranges of the part-worth utilities in Figure 3.3 to sum to 100%.

Some problems with conjoint analysis

Although conjoint analysis has been a mainstay of market research analysis for a long time, is simple to implement, and is easy to interpret, it has shortcomings that make its continued use for product optimization questionable. One issue, of course, is the assumed setting for the conjoint exercise. Unlike discrete choice analysis, mentioned above, which mimics shopping behavior, conjoint fails to mimic this behavior, or any behavior for that matter. Survey respondents are merely asked to rate a single product in isolation from other products. Occasionally, they are asked to rank products, which means they have the full set of products in front of them when they do the ranking. If there are a large number of products, however, then the ranking can become onerous, which could make the results questionable. Experimental design procedures are available to mitigate this issue, but since most analysts are unfamiliar with them they are infrequently used. The fractional factorial is the easiest and is the one most analysts use. See Paczkowski [2018] for a discussion of other designs.

Optimal attribute levels

The purpose of a conjoint analysis for new product development is to identify the optimal level for each attribute or feature of the product. This is done with the estimated part-worths as I illustrated above. The sum of the part-worths tells you the overall worth to customers of the product configuration. It is important to know this total worth because it can help you decide between two configurations that are reasonably close. Although the conjoint estimates may indicate one clear "winner", marketing and business intuition and intelligence (a.k.a. experience) might indicate that the second-best configuration is better.
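To make these two calculations concrete, here is a small Python sketch with a purely hypothetical part-worth table (the levels and values below are invented for illustration and are not the Figure 3.3 estimates; under effects coding each attribute's part-worths sum to zero). It computes the total worth of the best configuration and the scaled-range importances:

```python
# Hypothetical part-worths for illustration only (not the Figure 3.3 estimates).
intercept = 5.0
part_worths = {
    "Device Pairing": {"iPad": 3.3, "Laptop": 1.0, "Android tablet": -4.3},
    "Power Source":   {"2 AA Batteries": 0.1, "Rechargeable": 0.2, "USB": -0.3},
    "Weight":         {"8.2 Oz.": 1.5, "12 Oz.": -1.5},
    "Reversible":     {"Yes": 0.4, "No": -0.4},
}

def total_worth(configuration):
    """Total worth of a configuration: intercept plus the chosen levels' part-worths."""
    return intercept + sum(part_worths[attr][level] for attr, level in configuration.items())

# Best configuration: the level with the maximum part-worth for each attribute.
best = {attr: max(levels, key=levels.get) for attr, levels in part_worths.items()}
print(best, round(total_worth(best), 2))

# Attribute importances: each attribute's part-worth range scaled by the sum of ranges.
ranges = {attr: max(vals.values()) - min(vals.values()) for attr, vals in part_worths.items()}
importances = {attr: 100 * r / sum(ranges.values()) for attr, r in ranges.items()}
print({attr: round(v, 1) for attr, v in importances.items()})
```

A profiler or simulator of the kind discussed next is essentially this calculation behind an interactive interface.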
FIGURE 3.5 The effect of the optimal or best level for each attribute is shown along with the worst level. The best configuration has the highest total worth, which is the sum of the individual part-worths. This is 10.51. The worst configuration has a total worth that is 8.2 utility units lower, at 2.29.

In addition, since the estimated total worth is based on the sum of part-worth estimates, the total really has a confidence band around it. It may be that several configurations have overlapping confidence bands so that there is no clear winner. A tool such as a profiler or simulator could easily be constructed to calculate the total worth from different combinations of part-worths. Figure 3.5 shows one such profiler. The optimal total worth has a value of 10.5 while the worst configuration is 2.29. There are three configurations that are close contenders for being the best, all of which involve the maximum part-worth estimate for Device Pairing, Weight, and Reversible; only the Power Source differs for each. The Power Source attribute has almost no contribution to the total worth, which is consistent with the attribute importances shown in Figure 3.4. The three top configurations are shown in Figure 3.6.

FIGURE 3.6 The top three configurations are all close in value. The Power Source attribute makes little to no contribution to the total worth of the product.

Software

Software is always an issue. For conjoint analysis, two procedures are required: one to construct the experimental design and one to estimate the part-worth utilities.
All statistical software packages implement OLS to varying degrees of sophistication, so any package can be used. Options include R and Python, which are free, and JMP and SAS, which are more comprehensive and commercial, which means they are expensive. Stata is excellent but mostly targeted to econometric applications, which makes it more specialized than the others. The experimental design is an issue. My preference is JMP because it has a comprehensive design of experiments (DOE) platform which can easily handle conjoint designs, although there is no conjoint design component per se. See Paczkowski [2016] for an example of using JMP's DOE platform for conjoint design. SAS can certainly be used, but this requires an advanced understanding of SAS. R has packages that will also create fractional factorial designs. JMP was used for the Case Study.