


Sentiment analysis and opinion mining
Section 2.3 focused on methods for extracting key words, phrases, and topics from text data for the purpose of identifying ideas for new products or improvements to existing ones. Another form of text analysis goes a step further and uncovers people's sentiments or opinions about a concept, which is a product in our case. A sentiment is a negative, neutral, or positive view. In many instances, just a polarity (negative or positive) is used. Examples are:
Some sentiment analysis methodologies also consider emotions rather than sentiments, but the concept is the same. See Sarkar [2016, p. 342] for some discussion. Also see Zhang and Liu [2017] for a more detailed discussion of sentiment analysis. Market researchers have, of course, always captured consumers' sentiments about a product through customer satisfaction surveys. They typically asked three questions:
These questions are obviously looking for sentiments or opinions. A numeric scale for them is usually a five-point Likert scale, which the market researchers transform to top-two box, middle box, and bottom-two box.^{13} These are positive, neutral, and negative sentiments, respectively. One form of analysis of these three questions is based on a Venn diagram such as the one in Figure 7.25. The scores for the three satisfaction questions are converted to top-two box ratings (T2B), which are 0/1 values, and then simple proportions are calculated as the average of these 0/1 values for each of the eight possible intersections of the questions corresponding to the three Venn circles. A tabular report based on these proportions, similar to the one shown in Table 7.9, is also possible. The difference between this analysis and the focus of this section is that text data are used in our current discussion, not a numeric scale.

Sentiment methodology overview

Sentiment analysis is based on a lexicon, a special dictionary that classifies words and phrases as negative, neutral, or positive sentiments. There is a wide array of lexicons; see Sarkar [2016] for a discussion of some of them. Sentiment analysis is done by tokenizing the words of a document (after preprocessing the text to delete stopwords, correct spelling, remove punctuation, change case, and so forth) and then passing the tokens through a lexicon. The document as a whole is classified for its sentiment following some procedure; see Sarkar [2016] for Python code that accomplishes this. Values representing the sentiment of each document are returned and used in statistical analysis. For example, simple frequency counts could be calculated. If the proportion unfavorable is larger than the favorable, then the product has a problem.

FIGURE 7.25 This Venn diagram shows one possible summarization of the three customer satisfaction questions in a typical customer satisfaction study. In terms of sentiment analysis, the intersection of the three circles shows a positive sentiment; the customers in this intersection are loyal to the product. Those outside all three circles have a strong negative sentiment. Customers at other intersections have varying degrees of neutral sentiment.

FIGURE 7.26 This is the sequence of steps for feature opinion analysis.

A problem with sentiment analysis as usually applied is that it does not explicitly say what is wrong with the product. It does not say which aspect (feature or attribute) is an issue, only that something is wrong. In order to pinpoint a specific problem, one that can be addressed by the R&D staff, you still have to read through the texts. Several researchers have proposed a methodology for extracting the features or attributes referenced in product reviews. The opinions about these features are then classified. Quantitative measures are created that can be used in statistical and machine learning applications to further understand opinions of product features, but the opinions are pinpointed to specific features. See de Albornoz et al. [2011], Morales et al. [2008], and Plaza et al. [2010] for discussions.

TABLE 7.9 This is a tabular form of the Venn diagram in Figure 7.25. The counts (and corresponding percentages) should add to the total sample size (100%). For this example, n = 1000. There are 683 in the T2B for Satisfaction; 877 for Recommend; and 548 for Stay. All others were in the bottom boxes, with 66 in the bottom box for all three questions. These 66 are the strongly negative sentiment. This example is based on fictitious data.
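The T2B conversion and the eight-way intersection proportions behind Figure 7.25 and Table 7.9 can be sketched as follows. The five respondents and their ratings are fictitious illustrations; T2B is assumed to mean a rating of 4 or 5 on a five-point scale.

```python
# Sketch of the top-two-box (T2B) Venn summary: convert five-point
# ratings to 0/1 T2B indicators and tabulate the eight intersections.
from itertools import product
from collections import Counter

ratings = [  # (satisfaction, recommend, stay) for each respondent
    (5, 5, 4), (4, 5, 2), (2, 4, 5), (1, 2, 1), (5, 4, 5),
]

def t2b(score):
    """1 if the rating is in the top two boxes (4 or 5), else 0."""
    return int(score >= 4)

# Count respondents in each of the eight T2B intersections.
cells = Counter(tuple(t2b(s) for s in row) for row in ratings)
n = len(ratings)
for pattern in product((1, 0), repeat=3):
    share = cells.get(pattern, 0) / n
    print(pattern, f"{share:.0%}")
```

The (1, 1, 1) cell is the loyal, positive-sentiment group at the center of the Venn diagram; (0, 0, 0) is the strongly negative group outside all three circles.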
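The tokenize-then-score procedure described above can be sketched in a few lines. This is a minimal sketch: the tiny lexicon, stopword list, and sum-of-scores rule are illustrative assumptions, not a published lexicon such as those surveyed by Sarkar [2016].

```python
# Minimal lexicon-based sentiment scorer: tokenize, drop stopwords,
# sum per-word lexicon scores, classify the document by the sign.
import re

LEXICON = {"great": 1, "love": 1, "easy": 1,
           "poor": -1, "broken": -1, "hate": -1}  # toy lexicon (assumption)
STOPWORDS = {"the", "a", "an", "is", "it", "this", "i", "and", "was", "to"}

def score_document(text):
    """Return 'positive', 'negative', or 'neutral' for one document."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = sum(LEXICON.get(t, 0) for t in tokens if t not in STOPWORDS)
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"

reviews = ["I love this vacuum, it is easy to use",
           "The blinds arrived broken and the packaging was poor"]
sentiments = [score_document(r) for r in reviews]
# sentiments holds one label per document, usable in frequency counts
```

The resulting labels can then feed the simple frequency counts mentioned above, such as the proportion of unfavorable documents.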
There are several steps in the process, which are illustrated in Figure 7.26. The first step, of course, is to compile a corpus of product reviews from internal logs, social media, or product review websites, as I discussed in Chapter 2. Each review is a document in the corpus. The sentences in each document are preprocessed and then sentence-tokenized. The words in each sentence are then tokenized. The reason for this second tokenization is to allow you to refer back to the original sentence from which the word came. Each tokenized word is assigned a "sense" or meaning using WordNet. See Miller [1995] and Fellbaum [1998] for background on WordNet. WordNet is a lexical database that classifies words into a hierarchy of more abstract words and concepts. It maps words to their synonyms, thus allowing a grouping of words. In this regard, some have equated WordNet to a combination dictionary and thesaurus. The important use for this application is the hierarchy of concepts that is formed. This hierarchy can be visualized as an inverted tree with the root, representing the most general abstract concept, at the top of a graph with branches flowing down from the root. Each branch can be split into new branches, the split points, called nodes, being the next lower (and less abstract) concept or word. This splitting continues until the original word is reached. At this point, the branches terminate at a final node. There will be as many final branches and terminal nodes as there are words originally submitted to the WordNet program. In our case, the tokenized words from a single sentence are used, so there are as many terminal nodes as there are words in that sentence. A hierarchy of general concepts contained in WordNet is shown in Figure 7.27. The root, an "entity", is at the top and an "organism" is at the bottom. Using WordNet, the tokens or words can be classified as nouns, verbs, adjectives, and adverbs. de Albornoz et al.
[2011] note that only nouns, but not proper nouns, are useful for product features.

FIGURE 7.27 This illustrates the lexical hierarchy in WordNet. Notice that the highest element in the hierarchy is an Entity, which could be a physical entity or an abstract entity.

These nouns are processed through the WordNet database and WordNet concepts are returned. The concepts are extended with their hypernyms. Hypernyms are words that have a more general, broader meaning than the word under consideration. For example, canine is a hypernym for dog since a canine could be a wolf, jackal, fox, or coyote as well as a domesticated dog.^{14} There are also hyponyms, which are the opposite of hypernyms; they have a more specific meaning. Dog is a hyponym for canine. If your focus is on hypernyms, then the flow of a tree is upward to more abstract concepts. If your focus is on hyponyms, then the flow is downward to more specific concepts. In Figure 7.27 and Figure 7.28, I chose to move up the trees to more abstract concepts. For "blinds", the hypernym is "protective covering." For a robotic vacuum cleaner, the hypernym for "robot" is "automaton", a more general concept involving any mechanism, including a person, that is self-operating.^{15} At an even more general level, a blind and an automaton are each a physical "entity", something that exists, whether it be living or not, as opposed to an abstract entity such as an idea.^{16} A graph for the words "dog", "blind", "robot", and "vacuum" is shown in Figure 7.28. Following de Albornoz et al. [2011], a graph can be created for each sentence in a product review and then all the graphs are merged. The edges of the graph are weighted, the weights being a function of how far a concept is from the root. Those concepts that are farther away have more specificity and so are weighted more than those that are close to the root. These weights are used to calculate the salience, or importance, of each concept.
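The depth-based edge weighting and salience score just described can be sketched as follows. The toy hypernym tree, the specific weighting rule (an edge weighs as much as the depth of its lower concept), and the frequencies are illustrative assumptions, not the exact scheme of de Albornoz et al. [2011].

```python
# Sketch: weight graph edges by distance from the root ("entity"),
# then score each concept by summing incident edge weights times
# its corpus frequency. Deep, frequent concepts rank highest.
from collections import defaultdict

# child -> parent (hypernym) edges; root is "entity" (toy tree, assumption)
TREE = {"physical_entity": "entity", "object": "physical_entity",
        "artifact": "object", "covering": "artifact",
        "protective_covering": "covering", "blind": "protective_covering",
        "instrumentality": "artifact", "device": "instrumentality",
        "automaton": "device", "robot": "automaton"}

def depth(node):
    """Number of edges between node and the root 'entity'."""
    d = 0
    while node != "entity":
        node = TREE[node]
        d += 1
    return d

def salience(freq):
    """freq: concept -> frequency in the corpus. Returns concept scores."""
    scores = defaultdict(float)
    for child, parent in TREE.items():
        w = depth(child)           # deeper edges weigh more
        scores[child] += w         # edge touches both its endpoints
        scores[parent] += w
    return {c: scores[c] * freq.get(c, 1) for c in scores}

s = salience({"blind": 5, "robot": 3})
# "blind" ranks highest: it is deep in the tree and frequent
```

Sorting these scores and clustering the top concepts would then yield the product features described in the text.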
The salience of a concept is "the sum of the weights of the edges that have as source or target the given vertex."^{17} The weights are multiplied by the frequency of occurrence of the concept in the corpus. The concepts are sorted by their saliences and grouped by a clustering algorithm such as hierarchical or k-means clustering. Each cluster is a product feature. A word cloud can be used to emphasize the importance of each feature.

FIGURE 7.28 This illustrates the lexical hierarchy from WordNet for four words mentioned in this book.

These features can be used to specify and design a new product. In one sense, they come directly from the customers via their written comments, but in another sense they are derived from the comments, since few customers can clearly and succinctly indicate what they want. Customers can only complain about their problems; they cannot articulate their needs. This approach holds promise for uncovering those needs. de Albornoz et al. [2011] outline several heuristics for associating each sentence with a product feature. This results in a mapping of each sentence to at least one product feature. Then all the sentences associated with a feature can be analyzed for the sentiment or opinion expressed for that feature.

Software

Most general statistical software can handle data visualization. R and Python have excellent data visualization capabilities, but producing a graph, even a simple one, requires some program coding. Stata has a good graphics library, but program coding is a bit awkward. The same holds for SAS. JMP is particularly good at data visualization because of its dynamic linking of a data table and a graph.

Summary

I presented a detailed case study of a drill-down of issues associated with a new product post-launch. The problems were identified through a dashboard but then investigated for root causes.
The purpose of this detailed case study was to emphasize that Deep Data Analysis (DDA), the key message of this book, is not solely for new product development, but is also applicable for post-launch examination of that product. Just because a new product is launched does not mean that you should stop studying the data generated from it. Once launched, the product will succeed or fail, and as you know from Chapter 1, most fail. The root-cause analysis will help you identify why it failed so that the next new product will have a better chance of success.

Appendix

Demonstration of linearization using a log transformation

Let f(x) be some function we want to approximate at a point x = a. The Taylor Series Expansion (TSE) of f(x) at x = a is
f(x) = \sum_{i=0}^{\infty} \frac{f^{(i)}(a)}{i!} (x - a)^i

where f^{(i)}(a) is the i-th derivative of the function evaluated at a; f^{(0)}(a) is the original function evaluated at a; and i! is the factorial function with 0! = 1. If you set f(x) = \ln x, then the first-order TSE of the natural log is

\ln x = \ln a + \frac{1}{a}(x - a) + R
where R is a remainder term that is usually ignored. Let x = y_t and a = y_{t-1}. Then

\ln y_t = \ln y_{t-1} + \frac{y_t - y_{t-1}}{y_{t-1}}

The last term is the percent change in y. Denote this by g. Then

\ln y_t = \ln y_{t-1} + g \qquad (7.A.1)
Notice that the first term on the right-hand side in (7.A.1) is like the term on the left, just one step back. So you can write it as

\ln y_{t-1} = \ln y_{t-2} + g \qquad (7.A.2)
and substitute (7.A.2) into (7.A.1) to get

\ln y_t = \ln y_{t-2} + 2g \qquad (7.A.3)

Repeat this backward substitution until you get

\ln y_t = \ln y_0 + g \times t \qquad (7.A.4)
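The backward substitution above can be checked numerically: for a series growing at a constant rate g, the log of the series is approximately linear in t with intercept ln y_0 and slope g. The starting value, growth rate, and horizon below are arbitrary illustrations; the small gap between the two sides comes from ignoring the remainder R (ln(1 + g) ≈ g for small g).

```python
# Numeric check of the linearization ln y_t ≈ ln y_0 + g*t for a
# constant-growth series y_t = y_0 * (1 + g)^t.
import math

y0, g, T = 100.0, 0.02, 10
series = [y0 * (1 + g) ** t for t in range(T + 1)]
for t, y in enumerate(series):
    approx = math.log(y0) + g * t     # the linear form
    exact = math.log(y)               # actual log of the series
    assert abs(exact - approx) < 0.01  # close because g is small
```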
where y_0 is the first or starting point for the y data series. Clearly, (7.A.4) is a linear equation with intercept \ln y_0 and slope g. So the natural log transformation linearized the original data.

Demonstration of variance stabilization using a log transformation

Time series are often characterized by the property that their variance increases over time. Such a series is said to be nonstationary. See Wei [2006] for a discussion of nonstationarity. This property is not desirable from a statistical point of view because changing variances are difficult to work with; constant variances are more desirable. A time series that has a constant variance is called stationary. We require a function that, when applied to a nonstationary time series, converts it to a stationary one. That is, we require a function that converts the variance of a time series to a constant. This section closely follows the demonstration by Wei [2006] for finding the transformation function. Assume a time series Y_t, t = 1, 2, \ldots, T. Then this series is nonstationary if we can write V[Y_t] = c \times f(a_t), where c is a constant and f(\cdot) is a function of the data. For example, we could assume that V[Y_t] = c \times a_t^2. We want a function T(\cdot) that transforms Y_t so that V[T(Y_t)] = c. Following Wei [2006], we could use the Taylor Series Expansion (TSE) to determine the transformation function. We want a transformation function T(Y_t) evaluated at Y_t = a_t such that

T(Y_t) \approx T(a_t) + T^{(1)}(a_t)(Y_t - a_t) \qquad (7.A.5)
The variance of T(Y_t) is

V[T(Y_t)] \approx [T^{(1)}(a_t)]^2 \times V[Y_t] \qquad (7.A.6)
where T^{(1)}(a_t) is the first derivative of T evaluated at a_t. We need

[T^{(1)}(a_t)]^2 \times c \times a_t^2 = \text{constant} \qquad (7.A.7)

which holds if

T^{(1)}(a_t) = \frac{1}{a_t} \qquad (7.A.8)
Integrating (7.A.8) and using V[Y_t] = c \times a_t^2, then T(a_t) = \ln a_t. So the natural log is the transformation that stabilizes the variance; i.e., makes it a constant. Wei [2006] shows that this approach can be extended to produce the Box-Cox Power Transformations, which have the natural log as one option. This class of transformations is given by

T(Y_t) = \frac{Y_t^{\lambda} - 1}{\lambda}

with T(Y_t) = \ln Y_t as the limiting case when \lambda = 0.
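This class of transformations is easy to sketch in code; the function below is a minimal Box-Cox implementation with the natural log handled explicitly as the λ = 0 case, and the check confirms that small λ approaches the log.

```python
# A minimal Box-Cox power transformation, with the natural log as the
# limiting case when the power parameter lam (λ) is zero.
import math

def box_cox(y, lam):
    """Box-Cox transform of a positive value y for parameter lam."""
    if lam == 0:
        return math.log(y)           # limiting case λ = 0
    return (y ** lam - 1) / lam

# As λ shrinks toward 0, the transform approaches ln(y):
assert abs(box_cox(5.0, 1e-8) - math.log(5.0)) < 1e-6
```

In practice a library routine (for example, `scipy.stats.boxcox`) would also estimate λ from the data; the sketch only applies the transformation for a given λ.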
Transformations for different values of \lambda are shown in Table 7.10.

TABLE 7.10
Source: Wei [2006]

TABLE 7.11 Elasticity values and interpretations.
Constant elasticity models

A useful concept is the elasticity. An elasticity is a unitless measure of the percent change in the dependent variable for a percent change in an independent variable:

\eta = \frac{\%\Delta Y}{\%\Delta X} = \frac{\partial Y / Y}{\partial X / X}
Since an elasticity is a ratio of percent changes, it is natural to interpret values of \eta with respect to 1: the percent change in Y equals the percent change in X. Terminology is shown in Table 7.11. A price elasticity of demand is

\eta_P = \frac{\partial Q / Q}{\partial P / P}
For an inherently linear model, Y_i = e^{\beta_0} \times X_i^{\beta_1}, linearization using natural logs yields \ln Y_i = \beta_0 + \beta_1 \ln X_i. This is a log-log model. Note that

\frac{\partial \ln Y}{\partial \ln X} = \beta_1

or

\eta = \beta_1
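The constant-elasticity property of the log-log model can be checked numerically: a 1% change in X produces roughly a β₁% change in Y at any point on the curve. The parameter values below are arbitrary illustrations.

```python
# Numeric check that Y = exp(b0) * X**b1 has (approximately) constant
# elasticity b1 everywhere on the curve.
import math

b0, b1 = 0.5, -2.0  # illustrative parameters (assumption)

def y(x):
    return math.exp(b0) * x ** b1

for x in (1.0, 10.0, 250.0):
    pct_dy = (y(x * 1.01) - y(x)) / y(x)   # % change in Y for a 1% change in X
    elasticity = pct_dy / 0.01
    assert abs(elasticity - b1) < 0.05     # ≈ b1 at every x (isoelastic)
```

The small gap from exactly β₁ reflects the discrete 1% step; the derivative-based elasticity is exactly β₁.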
The elasticity is the parameter \beta_1 and is a constant. The curve for Y is called an isoelastic curve. See Paczkowski [2018] for a discussion.

Total revenue elasticity

Total revenue is defined as TR = P \times Q(P). Taking the first derivative with respect to price yields

\frac{\partial TR}{\partial P} = Q + P \times \frac{\partial Q}{\partial P}
To convert this to an elasticity, multiply the left-hand side by P and divide by TR, which amounts to multiplying the left-hand side by 1/Q based on the definition of TR. The right-hand side must also be multiplied by 1/Q, yielding

\eta_P^{TR} = 1 + \frac{P}{Q} \times \frac{\partial Q}{\partial P}

The second term is the definition of a price elasticity, so \eta_P^{TR} = 1 + \eta_P.

Effects tests F-ratios

The effects tests for the OLS models are based on the sum of squares for the respective effects. The sum of squares is from the Model component of the ANOVA table with that effect included and excluded. Basically, an ANOVA is created with the effect and then without it. The sum of squares for the effect is the difference in the Model sum of squares from the two ANOVA tables. For example, the Region effect sum of squares is the difference between an ANOVA table's Model sum of squares with the Region variable and an ANOVA table's Model sum of squares without the Region variable. The corresponding F-ratio is the effect's mean square divided by the residual mean square from the ANOVA table.

Notes

