
Surveys

The purpose of the survey, including the information required for action, must first be established and agreed upon by stakeholders. In other words, why is the survey being done? This leads to the specific combination of survey elements needed for success, as shown in Figure 3.1. Second, there must be sponsorship to ensure resource commitments, the team, and the schedule. Once the team is formed, the survey is carefully designed to identify the target audience (i.e., segments and respondents), with segments defined by product, distribution channel, industry, revenue, geographic region, and organizational level (e.g., account, business unit, etc.).

Successful surveys are usually personal and brief, with advance notification for participants and a standard process for conducting and managing them. They have the right frequency, use the right language, include incentives (if useful), and provide closed-loop feedback to respondents. It is important that respondents see demonstrable continuous improvement based on the information they provide in a survey. This latter attribute is often neglected because of technological barriers and cost. That limitation has recently been removed through active data monitoring enabled by the digitization of emerging listening posts. These include text mining of social media, complaint logs and transcripts, telephone calls, industry forums, publicly available customer news, customer-facing stakeholders, blogs, etc., as well as mobile app development, virtualization, shared calendars and applications such as Microsoft Outlook and SalesForce.com, and predictive analytics. Predictive analytics are used to show which variables (e.g., demographic information, cost, or perceived quality) most strongly impact customer experience, represented by output variables such as loyalty metrics and customer-retention rates. Statistical models are created to explain the relationships between these data points.
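As a rough sketch of this kind of predictive model, the Python fragment below fits a logistic regression of customer retention on a few survey inputs. The data and the column names (perceived_quality, price_paid, age_band) are synthetic and purely illustrative; they are not drawn from any survey discussed here.

    # Minimal sketch: which survey inputs most strongly move a retention KPOV?
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    survey = pd.DataFrame({
        "perceived_quality": rng.integers(1, 11, n),   # 1-10 rating
        "price_paid": rng.normal(50, 10, n),           # dollars
        "age_band": rng.integers(1, 6, n),             # coded demographic
    })
    # Synthetic outcome: higher perceived quality -> more likely to stay
    p = 1 / (1 + np.exp(-(0.5 * survey["perceived_quality"] - 0.05 * survey["price_paid"])))
    survey["retained"] = rng.binomial(1, p)

    inputs = ["perceived_quality", "price_paid", "age_band"]
    model = LogisticRegression(max_iter=1000).fit(survey[inputs], survey["retained"])
    # Coefficients indicate which inputs most strongly affect retention
    print(dict(zip(inputs, model.coef_[0])))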

Organizations use surveys for various reasons. Examples include obtaining customer, employee, or supplier feedback to enhance the total customer experience, benchmarking competitive products and services, reducing complaints, and enhancing revenue. Regardless of their intent, surveys must be aligned to an organization’s goals. Identifying a survey’s goals is like building a house: the foundation must be built correctly. So, we start a survey by ensuring the questions to be answered are aligned with stakeholder expectations, are unambiguous, and will provide enough information for analysis and subsequent operational improvement projects. This requires that stakeholders be consulted when designing and deploying a survey. A “voice of” team works to integrate stakeholder needs into a survey by framing questions of increasing specificity, i.e., by starting with higher-level questions and working down to specific ones. Survey questions should be designed in a way that provides information for operational improvements. Some questions and formats are better than others.

Let us use a generic use case focused on automotive replacement parts. Consider the questions that might be relevant to this organization’s goals. How do we increase sales to automotive parts customers? Which customers purchase which products, and why? What are the customers’ common demographics? Which customers leave us? What is similar between them? Based on a customer’s previous purchases, which type of promotion would appeal to them? What are the credit risk profiles for customers who default on loans? How can we determine when a vehicle needs maintenance? The team also needs to visualize how the responses to these questions will be used by the organization; this will inform how the responses will be reported.

How will each question be analyzed? What are the follow-up questions? Which customers? Which products? Where and when did customers make purchases? How were last year’s sales stratified by various demographics? This approach naturally leads to identifying participants and the methodology needed for obtaining useful information. Methodology is important because there are several different types of data obtained in surveys, and the analytics used to understand these data differ as well. Data conditioning is also required to prepare the data for analysis because surveys are collected into databases that may contain thousands or millions of transaction records. Depending on the type and size of the databases, advanced tools may be needed to bring data together for analysis, such as analytical sandboxes and the advanced database management and analysis software associated with Big Data applications. Establishing goals and answering questions requires thinking through the types of data needed for analysis (e.g., numbers, text, pictures, sound, or other types).
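The sketch below illustrates the kind of data conditioning described here, assuming hypothetical files and column names (responses.csv, transactions.csv, customer_id, overall_satisfaction, survey_date); an actual survey database would require its own rules and tools.

    # Minimal data-conditioning sketch: join survey answers to purchase history
    import pandas as pd

    responses = pd.read_csv("responses.csv")        # survey answers (assumed file)
    transactions = pd.read_csv("transactions.csv")  # purchase records (assumed file)

    conditioned = (
        responses
        .drop_duplicates(subset="customer_id")              # one row per respondent
        .merge(transactions, on="customer_id", how="left")  # attach purchase history
        .dropna(subset=["overall_satisfaction"])            # drop blank key answers
    )
    # Normalize the assumed date column so responses can be trended over time
    conditioned["survey_date"] = pd.to_datetime(conditioned["survey_date"])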

There are different formats for survey questions. These must be carefully considered when creating a survey. The first format is contingency questions. These questions are answered if a respondent provides a particular response to a previous question. This avoids asking participants questions that do not apply to them. Matrix questions are a second type, in which identical response categories are assigned to multiple questions. These questions are placed one under the other, forming a matrix with response categories along the top and a list of questions down the side. This is an efficient use of page space and respondents’ time. Closed-ended questions constrain respondents’ answers to a fixed set of responses. Most scales are closed-ended. Other types of closed-ended questions include yes/no questions, multiple-choice questions where a respondent has several options from which to choose, and scaled questions where responses are graded on a continuum. An example would be rating a product on a scale from 1 to 10, with 10 being the most preferred. Methods include the Likert scale, semantic differential scale, and rank-order scale.
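One way such formats might be represented in survey software is sketched below; the field names and the skip-logic convention are illustrative assumptions, not a standard.

    # Sketch of contingency (skip-logic) and scaled question definitions
    LIKERT_5 = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

    questions = [
        {"id": "Q1", "text": "Did you use our mobile app?", "type": "yes_no"},
        {"id": "Q2", "text": "How easy was the app to use?", "type": "likert",
         "choices": LIKERT_5, "ask_only_if": ("Q1", "Yes")},   # contingency question
        {"id": "Q3", "text": "Rate the product from 1 to 10.", "type": "scale",
         "range": (1, 10)},                                    # scaled, closed-ended
    ]

    def applicable(question, answers):
        """Return True if the question should be asked, given earlier answers."""
        condition = question.get("ask_only_if")
        return condition is None or answers.get(condition[0]) == condition[1]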

There are other types of questions that are less structured. Open-ended questions enable respondents to provide answers without predefined options or categories. The respondent supplies an answer without being constrained by a fixed set of possible responses. Examples include completely unstructured responses (e.g., “What is your opinion on questionnaires?”); word association, where words are presented and the respondent mentions the first word that comes to mind; and sentence completion, where respondents complete an incomplete sentence (e.g., “The most important consideration in my decision to buy a new house is...”).

Other questions require respondents to do work. In story completion, respondents add information to continue a story based on a given prompt. For picture completion, respondents fill in an empty conversation balloon. And for a thematic apperception test, respondents review a picture and create a story for what they think is happening in it. Some survey questions yield more information than others, but regardless of the type of question being asked, a chronic problem with surveys is that the questions do not provide enough quantification or specificity for effective action. Understanding how the information will eventually be used helps in formatting useful questions. Asking questions in different ways will provide information about market segments, competitors, and current performance.

Customer interviews are another important method to actively obtain VOC information. These can be done automatically using e-mails and mailings, or they can be conducted in person. In either situation, questions should be relevant to the VOC information that must be collected for analysis and structured to prevent biased information. It is important to plan interviews carefully prior to collecting customer information to ensure team members understand common definitions and the interviewing methodology. This is particularly important when framing questions for written or e-mail surveys. If e-mails and mailings are used to obtain information through a survey, then they should be tested using a small sample to validate questions for clarity and relevance.

E-mails and written surveys typically have a very low response rate, but they are relatively inexpensive to conduct and analyze. In contrast, personal interviews provide more information but are more expensive. The general format for effective personal interviewing is to probe the customer with relevant and very clearly phrased questions, followed by clarifying statements. At the end of an interview, validation questions should be asked to confirm the customer’s responses to prior questions. In-person interviews can be conducted one-on-one or with a focus group of several individuals. Focus group interviews have an advantage over one-on-one interviews in that group dynamics may increase the number of new ideas. Focus group interviews, however, must be facilitated properly to be effective. If on-site interviews are used, it is useful to gather information on how a product or service is used, including who uses it, where, why, when, and how, along with other relevant details, to identify opportunities to add performance and excitement features and functions.

Each interviewing strategy has advantages and disadvantages with respect to the types of information gathered and cost. As a rule, the greater the interpersonal interaction between the interviewer and interviewee, the more relevant information will be obtained. In fact, this is the major advantage of actively obtaining VOC information. However, for building quantitative models requiring large amounts of data, surveys may be a better choice because larger samples can be statistically analyzed.

A second consideration focuses on sample representation and size, response rates, and business rules regarding nonresponse, do-not-contact requests, survey fatigue, and data cleanup (i.e., contact lists) before and after the survey and before or during reporting. The planning goals are to prevent biased sampling, poor sample representation, nonresponse, and variation between interviewers or in responses to a given question.

Nonresponse bias occurs when respondents differ in meaningful ways from non-respondents. In the 1936 American presidential election, in which Alfred Landon ran against Franklin D. Roosevelt, the Literary Digest voter survey was biased by the survey method used to estimate which candidate was preferred. Respondents tended to be Landon supporters, and non-respondents tended to be Roosevelt supporters. Only a low percentage of the sampled voters completed the mail-in survey, which overestimated support for Landon and led the Literary Digest to predict that Landon would beat Roosevelt. The survey also suffered from undercoverage of low-income voters, who tended to be Democrats; undercoverage occurs when some members of the population to be surveyed are not fully represented in the sample. Nonresponse bias must be controlled when using surveys. Another form of bias is voluntary response bias, which occurs when survey respondents are self-selected volunteers. An example is a radio show that asks for call-in participation in surveys on controversial topics (e.g., abortion, affirmative action, gun control, etc.). The resulting sample tends to overrepresent individuals who have strong opinions on these issues or whose opinions align with the source of the survey (e.g., a conservative radio show has conservative listeners, so the call-in responses are likely to mirror the views presented by the show).
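A toy simulation, loosely patterned on the 1936 example, shows how differential response rates alone can distort an estimate; the population shares and response rates below are invented for illustration.

    # Simulate nonresponse bias: supporters of one candidate respond more often
    import numpy as np

    rng = np.random.default_rng(1)
    population = rng.choice(["Roosevelt", "Landon"], size=100_000, p=[0.60, 0.40])

    # Assume Landon supporters were far more likely to return the mail-in survey
    response_rate = np.where(population == "Landon", 0.30, 0.10)
    responded = rng.random(population.size) < response_rate
    sample = population[responded]

    print("True Landon share:    ", (population == "Landon").mean())  # about 0.40
    print("Surveyed Landon share:", (sample == "Landon").mean())      # inflated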

Cognitive issues may also cause survey errors in the interviewer or respondent. The survey planning process needs to consider these potential situations as well. These include forgetfulness from not concentrating, misunderstanding that leads to flawed assumptions, sensory errors that cause misidentification, inadvertent errors caused by distraction and fatigue, delay in task execution due to slow information processing, an inability to adapt to changing environments, and intentional errors for various reasons.

Surveys must also conform to laws and restrictions on using personal information and non-contact requests. This requires informed consent by opting into a survey. There are two ways to opt into a survey. The first is explicit consent, in which a person must actively select the survey. The second is implicit consent, in which the organization conducting the survey simply posts a notice to the respondent. Many countries have enacted privacy laws that protect personal information. In Chapter 10, we will discuss these privacy requirements in the section titled Data Security.

Once the survey is planned and reviewed, the team will conduct the survey. It is crucial that the people doing the survey are well trained. Some teams practice by asking each other questions and doing mock surveys. It is important that the people doing the survey remain neutral, understand the survey questions, and follow the agreed-upon process. If the questions are asked in person, each question should be presented to the respondents in order and verbatim. The answers should be recorded accurately and verbatim with no summarization or adjustment by the interviewer. Inconsistencies in answers should be managed. If questions are not fully answered, then follow-up questions may be needed. These should already be part of the survey plan. All survey information should be kept confidential.

Once the survey is completed, the responses need to be verified for accuracy and consistency, and corrections made by the interviewer. Business rules are needed to address nonresponse, do-not-contact requests, survey fatigue, and data cleanup (i.e., contact lists) before and after a survey. These rules should be integrated with the survey methods, whether e-mail, phone, site visits, or others. All corrections need to be available for audit. Problems may arise from an incorrect process (e.g., the flow of questioning was interrupted) or from mathematical errors such as incorrect or transposed numbers. There could also be typographical errors or illegible writing, and information may be missing. After the corrections are made, the responses are conditioned for either manual analysis or machine summarization. The team should have procedures for handling incorrect data, sampling issues, and other problems.
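A minimal sketch of such cleanup rules is shown below, with assumed column names (do_not_contact, respondent_id) standing in for whatever an actual survey plan defines; the audit counts illustrate the traceability requirement.

    # Sketch of post-survey cleanup: honor opt-outs, deduplicate, flag gaps
    import pandas as pd

    def clean_responses(raw: pd.DataFrame, required: list[str]) -> pd.DataFrame:
        cleaned = raw.copy()
        cleaned = cleaned[~cleaned["do_not_contact"]]                 # honor opt-outs
        cleaned = cleaned.drop_duplicates(subset="respondent_id")     # one record each
        cleaned["incomplete"] = cleaned[required].isna().any(axis=1)  # flag follow-ups
        # Keep an audit trail of every row removed or flagged
        audit = {"removed": len(raw) - len(cleaned),
                 "flagged_incomplete": int(cleaned["incomplete"].sum())}
        print(audit)
        return cleaned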

Analytical methods have been developed to extract useful “voice of” information from diverse listening posts. Examples include visualization of data patterns, text mining of unstructured data, descriptive statistics, cross-tabulation, correlation, and predictive models. Models show relationships between input (predictor) variables (e.g., demographic and other information) and one or more output variables. Output variables (i.e., key process output variables, or KPOVs) measure customer experience. Examples include the percentage of respondents who are satisfied with a product or service, how much they spent, or their intent to repurchase in the future. Accurate models help increase service performance by showing which inputs need to be adjusted and to what level. In other words, this analysis can indicate how improvements in customer satisfaction achieved through operational improvements will increase revenue, profitability, and customer retention.
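Two of these methods, cross-tabulation and correlation, are sketched below on a tiny invented response table; the column names and values are illustrative only.

    # Cross-tabulation and correlation on a small, invented response table
    import pandas as pd

    responses = pd.DataFrame({
        "region":      ["North", "North", "South", "South", "South"],
        "satisfied":   ["Yes", "No", "Yes", "Yes", "No"],
        "spend":       [120, 45, 210, 180, 60],
        "will_return": [9, 3, 10, 8, 4],   # stated intent to repurchase, 1-10
    })

    # Cross-tabulate a demographic input against a customer-experience output
    print(pd.crosstab(responses["region"], responses["satisfied"], normalize="index"))

    # Correlate spend with intent to repurchase (a simple input-KPOV relationship)
    print(responses["spend"].corr(responses["will_return"]))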

TABLE 3.1

Identify Survey Types and Respondents

Customer Interaction | How | Who
Transactional | At time of service | Anyone
Loyalty | Removed from transaction | Key stakeholders
Alerts | Unusual events and significant complaints | Key stakeholders
Advocacy | Face-to-face meetings with major customers and stakeholders | Key stakeholders
Major interactions on the customer experience map | Purchase, deliver, setup and install, use, and service (moments of truth) | Key stakeholders
Industry panels | Industry meetings and action groups | Industry experts, consultants, competitors, and customers through open forums
Benchmarking | See best practices list | Key stakeholders

Internal Interaction | How | Who
Partner and supplier surveys | Transactional and loyalty surveys, meetings, etc. | Partner and supplier stakeholders
Field sales | Transactional and loyalty surveys, meetings, etc. | Sales management
Other employee surveys | Online surveys of employee satisfaction or opinions, meetings, etc. | Partner with employees

An important goal is to increase the actionable information from “voice of” surveys and related data collection activities so that the analysis can be quantitative. As an example, we are often asked by organizations to take a survey when purchasing a product. A typical question focusing on the time to check out is, “Were you satisfied with the time for checkout?” The response is either yes or no, which yields only a percentage-satisfied statistic. A more informative way of asking this and similar questions would be, “How long did it take to check out?” “How long did you expect checkout to take?” and “Were you satisfied with the time to check out?” This series of questions supports central location (mean, median) and dispersion (variance) statistics over many transactions. Table 3.2 shows how quantitative information enables more effective questioning.
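The brief sketch below shows why the quantitative wording is richer: the invented checkout times support mean, median, and variance statistics, plus a gap to expectations, that a yes/no answer cannot.

    # Location and dispersion statistics from quantitative checkout questions
    import statistics

    satisfied_yes_no = ["yes", "no", "yes", "yes", "no", "yes"]   # yes/no wording
    checkout_seconds = [95, 240, 80, 130, 310, 70]                # actual wait
    expected_seconds = [60, 120, 90, 120, 120, 60]                # expected wait

    percent_satisfied = satisfied_yes_no.count("yes") / len(satisfied_yes_no)
    print("Percent satisfied:", percent_satisfied)                 # all yes/no gives
    print("Mean wait:        ", statistics.mean(checkout_seconds), "s")
    print("Median wait:      ", statistics.median(checkout_seconds), "s")
    print("Wait variance:    ", statistics.variance(checkout_seconds))
    # Gap between actual and expected wait, transaction by transaction
    gaps = [a - e for a, e in zip(checkout_seconds, expected_seconds)]
    print("Mean gap:         ", statistics.mean(gaps), "s")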

Recently, I visited a major retailer and took the post-purchase survey. Table 3.3 lists the types of questions I was asked. Some are inputs (independent variables) and others are outputs (dependent variables). Comments have been added to the questions from a process improvement perspective. The variables are inputs or outputs and represent different types of data, each with different informational content: nominal (e.g., a label) or ordinal (e.g., the sequence itself contains information). Note that none of the questions is continuous, the data type with the maximum information content for a given sample size. From an analytical perspective, these questions are weak. It will be difficult to build a robust predictor model for

TABLE 3.2

Maximize Information Content

Question | Answer | Data Type | Comment
Were you satisfied with the wait time at checkout? | Yes | Discrete | Limited information; large sample sizes are needed.
How long did you wait at checkout? (in seconds) | 2 minutes | Continuous | This provides the actual time waited.
How long did you expect to wait at checkout? (in seconds) | 1 minute | Continuous | This is a gap to be closed (but it is associated with a certain customer demographic).
How long do you wait at our competitors? | 1 minute | Continuous | This is a competitive gap that may need to be closed.
Gather demographic data | | Discrete or continuous | These are the independent variables (also named input variables).

TABLE 3.3

Retail Survey Example

Question | Variable
How likely are you to recommend this store to others? (on a scale of 1-10, with 10 being most likely) | Dependent, ordinal; model using analysis of variance with transformations, or ordinal logistic regression.
Please copy your Store#, ID#, date of visit, time of visit, and products purchased exactly as they appear on your receipt. Please note the letters are case sensitive. | Independent, nominal; day and time of day are continuous. (A magnifying glass was needed to type in the Store#, ID#, date, time, and products purchased.)
What is your gender? (Male or Female) | Independent, nominal
Which of the following best captures your total household income last year before taxes? Please include income from all sources. (various amounts from $7,500 to $20,000, or prefer not to answer) | Independent, ordinal if in categories, or continuous if an amount is entered
(Getting what you needed quickly) Specifically, how satisfied were you with the following areas? (list of store sections; 1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | Dependent, ordinal
(This store’s employees) Specifically, how satisfied were you with the following areas? (1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | Dependent, ordinal
(Availability of the products you were looking for) Specifically, how satisfied were you with the following areas? (1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | Dependent, ordinal
(Quality of products) Specifically, how satisfied were you with the following areas? (1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | Dependent, ordinal
(Appearance of the store) Specifically, how satisfied were you with the following areas? (1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | Dependent, ordinal
(Ability to save money) Specifically, how satisfied were you with the following areas? (1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | Dependent, ordinal

them. If the variables had been continuous, the analytics would be more useful, the sample sizes would be smaller, and the ability to predict future outputs would be more accurate and precise. Table 3.4 shows how these variables could be modified to provide more information and how the predictor models could be made more efficient.

There is a correlation between the NPS and repurchase, and hence sales. Respondents answer the recommendation question on a scale of 0 to 10. A zero implies no recommendation and low satisfaction, whereas a 10 implies a high recommendation and high satisfaction. The scale is categorized as detractors between 0 and 6, neutral between 7 and 8, and promoters between 9 and 10. The net promoter score is calculated as the percentage of promoters minus the percentage of detractors. As an example, if 100 people were surveyed with 30 detractors, 20 neutral, and 50 promoters, the net promoter score would be 50% - 30% = 20%, or 20. Benchmarks are available for different industries. Organizations need an NPS that is higher than their competitors’. Because customer satisfaction depends on the overall customer experience, improvement projects should be carefully selected and aligned to solve several related customer issues. As an example, improvements in product availability, pricing accuracy, location, and other areas may be needed to improve an NPS, rather than addressing a single issue.
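The arithmetic can be written as a small function; the 30/20/50 split below reproduces the 20-point score from the example.

    # Net promoter score: percentage promoters minus percentage detractors
    def net_promoter_score(ratings):
        """ratings: iterable of 0-10 answers to the recommendation question."""
        total = len(ratings)
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100 * (promoters - detractors) / total

    ratings = [3] * 30 + [7] * 20 + [10] * 50   # 30 detractors, 20 neutral, 50 promoters
    print(net_promoter_score(ratings))          # 20.0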

Competition occurs within increasingly narrow market segments, where organizational size often becomes irrelevant. Narrow market segments enable smaller organizations to compete successfully against larger ones by arriving to market earlier with competitive products or services that exceed customer expectations. Translating the VOC enables an organization to develop exciting new solutions to old problems or to completely redefine older problems in terms of new paradigms and solutions. As an example, understanding key customer value elements such as time, price, utility, and function facilitates process improvements by eliminating nonessential or inefficient operations using value flow mapping. Understanding customer needs and value perceptions also drives an organization to identify and align its resources behind core competencies, focusing attention on necessary improvements to process design. Failure to effectively translate the VOC into products and processes results in higher costs caused by breakdowns at the customer interface. These appear as high warranty expenses, returned goods, customer credits, poor customer retention, and other issues; in the most severe situations, customers are lost.

TABLE 3.4

Analytical Options

Current Survey Format | Current Models | Improved (Actionable) Format | Additional Models (More Efficient)
Dependent, ordinal: How likely are you to recommend this store to others? (1 = Not likely at all... 10 = Extremely likely) | ANOVA (with transformation) or ordinal logistic regression; could also be used as an independent variable to predict another dependent variable, such as sales or net promoter score. | Transformation of Y | Multiple linear or logistic regression
Independent, ordinal: Which of the following best captures your total household income last year before taxes? Please include income from all sources. (various amounts from $7,500 to $20,000, or prefer not to answer) | ANOVA (with transformations) if all independent variables are discrete; otherwise ordinal logistic regression | Ask actual income | Multiple linear or logistic regression
Dependent, ordinal (Getting what you needed quickly): Specifically, how satisfied were you with the following areas? (list of store sections; 1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | ANOVA (with transformations) | Ask satisfaction on a scale of 0% to 100% | Multiple linear or logistic regression
Dependent, ordinal (This store’s employees): Specifically, how satisfied were you with the following areas? (1 = Extremely dissatisfied... 10 = Extremely satisfied, and NA) | ANOVA (with transformations) | Ask satisfaction on a scale of 0% to 100% | Multiple linear or logistic regression

Efficient: produces a smaller variance of the parameter estimate, so a smaller sample size is required to reject the null hypothesis that the parameter equals zero. Consistent: the estimated value of the parameter converges to the true value as the sample size increases.

ANOVA = analysis of variance
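The efficiency note can be illustrated with a small simulation: the same respondents are analyzed once on a continuous 0-100 satisfaction score and once after collapsing the score to a yes/no satisfied flag. The effect size, cutoff, and sample sizes are invented for illustration.

    # Power comparison: continuous satisfaction score vs. dichotomized yes/no flag
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    detect_continuous = detect_binary = 0
    trials, n = 1000, 60                       # 60 respondents per group per trial

    for _ in range(trials):
        before = rng.normal(70, 15, n)         # satisfaction before an improvement
        after = rng.normal(76, 15, n)          # true 6-point improvement
        # Continuous analysis: two-sample t-test on the raw scores
        t_stat, p_t = stats.ttest_ind(before, after)
        if p_t < 0.05:
            detect_continuous += 1
        # Discrete analysis: same data collapsed to satisfied (>= 75) yes/no
        table = [[(before >= 75).sum(), (before < 75).sum()],
                 [(after >= 75).sum(), (after < 75).sum()]]
        chi2, p_c, dof, _ = stats.chi2_contingency(table)
        if p_c < 0.05:
            detect_binary += 1

    print("Detection rate, continuous score:", detect_continuous / trials)
    print("Detection rate, yes/no flag:     ", detect_binary / trials)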

How does an organization know it is meeting the VOC? Several metrics are used to measure and improve VOC performance. These include market share percentage, revenue growth, margin percentage, customer retention percentage, customer returns as a percentage of sales, warranty expenses as a percentage of revenue, customer acquisition costs, and customer satisfaction as measured by NPS. There are other metrics, used by specific industries, to ensure effective VOC measurement. These metrics can be summarized as follows: if an organization is increasing its market share profitably and customer satisfaction is high, then the organization is performing well in its market. But there should also be strategic plans to meet competitive threats and to increase market share and margins from year to year.
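A minimal sketch of such a VOC scorecard is shown below; the period totals are placeholders, not benchmarks, and the metric names simply mirror the list above.

    # Sketch of a VOC scorecard computed from assumed period totals
    period = {
        "revenue": 1_200_000, "prior_revenue": 1_050_000,
        "returns_value": 36_000, "warranty_expense": 24_000,
        "customers_start": 800, "customers_kept": 720,
        "total_market_revenue": 9_600_000,
    }

    metrics = {
        "market_share_pct": 100 * period["revenue"] / period["total_market_revenue"],
        "revenue_growth_pct": 100 * (period["revenue"] / period["prior_revenue"] - 1),
        "customer_retention_pct": 100 * period["customers_kept"] / period["customers_start"],
        "returns_pct_of_sales": 100 * period["returns_value"] / period["revenue"],
        "warranty_pct_of_revenue": 100 * period["warranty_expense"] / period["revenue"],
    }
    for name, value in metrics.items():
        print(f"{name}: {value:.1f}")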

 