Resources: Making it work

In this book, I focused on doing analytics for new product development, from ideation to tracking, and discussed the issues and some of the details of the analyses. This is not, however, a managerial book. Management played a role in some sections, but mostly for guidance and critical input on goals and objectives. It has a role above and beyond the analytical efforts: executive managers provide the resources the organization needs and set the tone and agenda for successfully implementing a data-analytical approach to new product development. The resources are analytical talent and software; the agenda is the pursuit of, and focus on, data-driven decisions. Executive managers also establish a collaborative framework so that analytical talent can share what they learn from the five stages of new product development rather than hide what they know and have learned.

This chapter has three sections. The first covers the role and importance of collaboration, both internal and external, for making more efficient use of data in the development of new products. Collaboration is important because it reduces costs and breaks down barriers that frequently block the development of ideas. In the second section, I discuss the analytical talent needed for all phases of a data-driven approach to new product development, including the technical skill sets required and the training needed to maintain those skills. The third section covers software. Even though software was mentioned at the end of each chapter, it is such an important topic that it deserves to be mentioned one last time.

The role and importance of organizational collaboration

The purpose of collaboration is to lower the overall cost of new product development, allowing more products to be produced and brought to market sooner than your competition's. Given the analytical perspective of this book, it may seem that there is zero cost to doing analysis. In reality, of course, there is always an analytical cost.

FIGURE 8.1 These are illustrative cost curves that show the effect of Deep Data Analysis (DDA) and then DDA with collaboration.

In particular, the more Rich Information managers need for making their decisions, the more analysis is required. In economic terms, there is an upward-sloping cost curve for analysis: the richer the information required, the more analysis that must be done, and so the higher the cost of acquiring that information.

Management controls where it is on this cost curve through the requirements it establishes. Whatever the requirements, however, the placement of the curve is determined by the level and extent of collaboration among the staff involved in new product development. The less the collaboration (when departments and key personnel hoard data, information, and findings), the higher the cost of providing any given level of information management needs. The greater the level and extent of the collaboration, the lower the cost of doing analysis because information is shared. This is depicted in Figure 8.1.
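
To make the shape of Figure 8.1 concrete, the following minimal Python sketch draws two illustrative cost curves. The linear form and all coefficients are hypothetical, chosen only to show an upward-sloping analysis cost curve that collaboration shifts downward.

```python
import numpy as np
import matplotlib.pyplot as plt

# Richness of the information management requires (arbitrary index)
richness = np.linspace(0, 10, 100)

# Hypothetical upward-sloping cost curves: cost rises with the richness
# of the information demanded. Collaboration shifts the curve down
# because data, information, and findings are shared rather than hoarded.
cost_dda = 10 + 4.0 * richness          # Deep Data Analysis alone
cost_dda_collab = 10 + 2.5 * richness   # DDA with collaboration

plt.plot(richness, cost_dda, label="DDA")
plt.plot(richness, cost_dda_collab, label="DDA with collaboration")
plt.xlabel("Richness of information required")
plt.ylabel("Cost of analysis")
plt.legend()
plt.show()
```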

The cost reduction I just described is based on collaboration internal to a business. External collaboration is just as important and is, in fact, the norm and a necessity in key industries. Consider the automotive industry, where there has been a convergence of consumer electronics and automotive technology as automotive experts view a car not simply as a car but as a “mobile experience.” Modern consumers, who are highly interconnected in all aspects of their lives, expect their transportation modes to be equally connected and to provide the same experiences they receive at home. Riding in a car (not necessarily driving one) should offer an experience comparable to their other experiences, not just mobility. As automotive experts realized and accepted this different view of the reason for buying a car, they changed its nature, composition, and function. More vehicles are now equipped with consumer electronics such as mobile phones for hands-free driving, video players for entertainment (of children), and enhanced stereo systems delivered not through the traditional radio but through satellite, Bluetooth, and wireless connections. See Forum [2016] for an analysis of the changing automotive landscape.

This change in perspective, from a car to a mobility experience, has altered the industry’s supply chain. The traditional automotive establishment no longer develops all the complex electronics consumers demand for their “experience,” so automotive manufacturers must look elsewhere for these developments. In-house development is too costly; collaboration outside the industry is needed. Collaboration with outside organizations (universities, non-profit establishments, government agencies, and companies in other industries such as those in Silicon Valley) helps to develop not only the needed technology, but also new processes, problem solutions, analysis methods, and general intellectual property. See Pertuze et al. [2010] for an analysis along these lines for industry-university collaboration.

A 2018 survey of the auto industry found that “nearly a quarter of the participants affirmed their belief that tech vendors with experience in consumer electronics and user experience will drive innovation [in the auto industry], compared to only 7 percent a year ago.”1 It is further noted that “Automakers are building research and development centers in Silicon Valley and partnering with technology giants from consumer markets. Many automotive OEMs are also considering the design and assembly resources of contract manufacturing partners ... that specialize in combining the rapid innovation and product introductions associated with consumer electronics with the rigorous engineering, testing, manufacturing and reliability demands of the automotive industry.”2 This external collaboration is just as important and vital as internal collaboration.

Collaboration, however, does not always lead to successful new products. Pertuze et al. [2010] note that 50% of the industry-university collaboration projects they examined led to an outcome such as an idea or a new process, but only 40% of these led to something with impact. They argue that something new must have an impact; basically, it must lead to a new product. One reason for a low or even nonexistent impact is how a collaboration is managed. The effect of a collaboration is to establish a knowledge flow from one organization to another. A point-of-contact (POC), who could be an executive at the lead collaborator company (i.e., the company actually needing the results of the collaboration, such as an automotive company that collaborates with a tech company), oversees the collaboration and thus the knowledge flows. This POC must ensure the dissemination to internal stakeholders of any and all knowledge gained. The POC also must oversee the sharing with the collaborating company, which may include strategic plans, internal knowledge about industry trends, and customer requirements perhaps based on market research. The knowledge flow is not one-way but two-way. This is illustrated in Figure 8.2. The internal and external collaborations are not two separate activities but one synergistic whole. If there is any impediment to this flow, the collaboration will be less than perfect or will fail, and the full cost-reduction benefit of collaboration will not be realized.

FIGURE 8.2 There is a two-way flow of knowledge during collaboration with an outside partner. The POC manages this flow and ensures that it is two-way and that any knowledge gained is disseminated internally to all the correct stakeholders.

Analytical talent

Analysts are needed for new product development, but not just any analysts: those with special skill sets are needed. Not only are special skill sets needed, but those skills must be maintained and updated as technology, software, and analytical tools change and new developments are introduced since, after all, developments in these areas are themselves new products. It is easy to fall behind in any one of these three areas, and falling behind could be a burden for the business. I discuss these three skill requirements in the following subsections.

Technology skill sets

Big Data has become a big issue and a big opportunity for businesses in the first two decades of this millennium. As I noted in Chapter 1, it is usually defined in terms of the Three Vs: Volume, Variety, and Velocity. Regardless of which component you focus on, technology is behind that component in one form or another. Technology determines the data that are captured, how those data are stored, and how those data are analyzed.

As an example of how data are captured, consider sensors, which are now ubiquitous. They are in medical facilities monitoring patients’ bodily functions; at street intersections monitoring traffic flows; in vehicles monitoring vehicle functions, speed, distances, and driver alertness; in agriculture monitoring crops (for irrigation, for example) and livestock; in homes, office buildings, and public places; and so on. They record a plethora of data almost in real-time, which means that a tremendous amount of data is collected. This is the Volume part of Big Data. And those data must go somewhere: they go into data stores, data warehouses, data lakes, and data marts. Technology, in the form of sensors, generates and collects those data, while other technology, in the form of more efficient computer storage, houses and maintains those data. Sensors are making available, and will continue to make available, volumes of data at real-time rates that are unprecedented. See Shi et al. [2018], Evans [2011], and Kim et al. [2019] for discussions of the development, challenges, and impacts of sensors.

In addition to data captured by sensors, different types of data are now collected in the form of text messages, audio, videos, and images, as I discussed in Chapters 2 and 6. Technology again plays a part in the generation and collection of these types of data as well as their storage. It drives the generation through social media, for example, a newer form of communication technology that exploded onto the market in the past two decades. This adds to the Variety component of Big Data.

Also, sensors and social media have made data available in real-time, as I just noted. Analysts who access these data must know how to access them all, how to process and query them, how to wrangle them into the form they need, how to make sense of them, and how to report findings from them. This is true at the early ideation stage as well as at the tracking stage. Consider a new sensor device for monitoring home appliances, refrigerators in particular. It could, for example, monitor the compressor and alert the owner as well as a service provider when it begins to fail. Monitoring will be real-time since a refrigerator runs constantly. Analysts must know how to interpret the signals sent from the sensor to distinguish between random noise from the compressor and a deviation from trend that indicates a pending fault. This would not be for one refrigerator, but for millions. Software, perhaps powered by an artificial intelligence system, would, of course, read and interpret the data and feed results to a human analyst. The human analyst still has to make the final determination of what action to take, if any. This is a special skill set because the human analyst has to interact with the software “analyst” to make a decision.
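
To make the refrigerator example concrete, here is a minimal Python sketch of one simple way to separate random noise from a sustained deviation: compare each reading against a baseline estimated from known-healthy operation and alert only after several consecutive out-of-range readings. The simulated signal, thresholds, and window sizes are all hypothetical; a production system would use calibrated and far more sophisticated methods.

```python
import numpy as np

def pending_fault(signal, healthy_n=200, k=3.0, run=10):
    """Return the index at which a pending fault is flagged: readings
    deviate from the healthy-operation baseline by more than k standard
    deviations for `run` consecutive samples (a sustained shift, not
    random noise). Returns None if no fault is detected."""
    signal = np.asarray(signal, dtype=float)
    mu = signal[:healthy_n].mean()     # baseline from known-healthy data
    sigma = signal[:healthy_n].std()
    consecutive = 0
    for t in range(healthy_n, len(signal)):
        if abs(signal[t] - mu) > k * sigma:
            consecutive += 1
            if consecutive >= run:
                return t
        else:
            consecutive = 0
    return None

# Simulated compressor current: stable operation, then a slow upward drift
rng = np.random.default_rng(0)
readings = np.concatenate([
    5 + 0.1 * rng.standard_normal(500),                           # normal
    5 + 0.1 * rng.standard_normal(200) + np.linspace(0, 2, 200),  # drift
])
print(pending_fault(readings))  # fires partway into the drift segment
```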

A data analyst cannot rely on having only statistical or econometric skills and knowledge. He or she must also be aware of, and knowledgeable about, rapid technology changes because they have direct implications for the data to be analyzed. The basic tools for analyzing “big” data are the same as those required for analyzing “small” data, but the volume, variety, and velocity of the data mean that some of these tools are less important and new ones are needed. For instance, data visualization became a hot topic in the mid- to late 2010s. The idea of creating graphs to discover patterns in data has been around for a long time, so it is not new. See, for example, the classic book by Tukey [1977], who developed many of the now widely used methods of data visualization and, in fact, started a whole research line in this area. Yet visualization is getting a lot of attention because how it is approached has changed. New displays have been developed (e.g., hex bin plots, contour plots) that help the analysis of Large-N data. At the same time, sampling methods, traditionally a mainstay of statistics, have been studied for their applicability to Large-N situations. Furthermore, software packages (part of the technological changes) have been developed to automate and enable the analysis of large quantities of data, not to mention the different types of data. As another example, new statistical and econometric methods have been developed to deal with the Large-N problem.

See Varian [2014] for a discussion. Also see Kapetanios et al. [2018] for an extensive survey of econometric methods for Big Data.
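
As an illustration of the new displays, a hex bin plot summarizes millions of points as a density surface where an ordinary scatter plot would be an unreadable blob. A minimal matplotlib sketch, with simulated data standing in for a Large-N data set:

```python
import numpy as np
import matplotlib.pyplot as plt

# One million simulated observations
rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
y = 0.5 * x + rng.standard_normal(1_000_000)

# Each hexagon is shaded by the number of observations falling in it,
# revealing the density structure that overplotting would hide.
plt.hexbin(x, y, gridsize=50, cmap="viridis")
plt.colorbar(label="Observations per hexagon")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```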

Finally, as part of the Variety component of Big Data, an analyst must be conversant in text analysis as I developed it in Chapters 2 and 6. This is important on the front-end of new product development (i.e., the ideation stage) but also on the back-end (the tracking stage) because customers write reviews on social media and review web sites, as well as on the company’s own web site. On the back-end, these reviews are important for understanding how your new product is being received in the market. Sales, of course, will certainly tell you how well it is received (high sales volume equates to being well received), but sales will not tell you why. Traditionally, market research methods (i.e., surveys and focus groups) were used to answer a “why” question. But because of social media outlets and online reviews, text analysis can also be used, and probably at lower cost.
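
A trivial sketch of the idea: even simple term counts over review text can hint at the “why” behind sales. The reviews and the stopword list below are hypothetical; Chapters 2 and 6 cover real text-analysis methods in depth.

```python
import re
from collections import Counter

# Hypothetical reviews standing in for scraped social media posts or
# review-site comments.
reviews = [
    "Love the design but the battery dies far too quickly.",
    "Battery life is terrible; returned it after a week.",
    "Great screen, easy setup, battery could be better.",
]

stopwords = {"the", "but", "is", "it", "a", "too", "far", "and",
             "could", "be", "after", "i"}

tokens = []
for review in reviews:
    tokens += [w for w in re.findall(r"[a-z']+", review.lower())
               if w not in stopwords]

# The most frequent terms hint at why the product is (or is not)
# selling: here "battery" dominates the complaints.
print(Counter(tokens).most_common(5))
```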

The implication for analytical talent is clear. Analysts must be conversant in a wide array of technologies:

  • data collection methods;
  • sampling from Large-N data sets (see the sketch following this list);
  • data visualization beyond simple pie and bar charts;
  • programming tools to use with Large-N data (e.g., SQL);
  • sophisticated software (e.g., SAS, R, Python);
  • new statistical and econometric methods for Large-N data; and
  • text analytic methods.
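
As one concrete example for the sampling bullet, reservoir sampling draws a uniform random sample from a stream far too large to hold in memory. A minimal Python sketch, with an illustrative stream and sample size:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Return k items drawn uniformly at random from a stream of unknown
    (possibly enormous) length, holding only k items in memory."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randint(0, i)  # replacement probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

# Example: sample 5 records from a simulated stream of 10 million rows
print(reservoir_sample(range(10_000_000), 5))
```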

In addition, analysts must have domain-specific knowledge for determining the application and applicability of these technologies for their specific business. In summary, analysts must be conversant in Computer Science, statistics/econometrics, and domain-specific concepts, an area called Data Science.

Data scientists, statisticians, and machine learning experts

The analytical requirements outlined above cut across multiple disciplines. Figure 8.3 shows the intersection of three disciplines - computer science, statistics/econometrics, and application domain expertise - to produce one discipline: Data Science.

Computer Science contributes computer technology and programming languages, not only for handling statistical and econometric problems but also for handling data visualization and all that is associated with Big Data. For Big Data, the Three Vs have to be managed in a way that allows end-users to efficiently access and process the right data for their problems, and to do so in a timely manner to solve or address a business problem.

Statistics and econometrics provide the estimation and analytical methods for extracting information from data. Some have been around for a long time; OLS regression is a good example. This technique has been known since 1805 and is now well developed and understood.3
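
Because OLS serves here as the canonical example, a minimal Python sketch of fitting it may be useful. The closed form b_hat = (X'X)^(-1) X'y has been known since the early 1800s; the simulated data and true coefficients below are arbitrary.

```python
import numpy as np

# Simulate data from y = 2.0 + 0.5 * x + noise
rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.standard_normal(n)

# Design matrix with an intercept column; lstsq solves the least
# squares problem without forming (X'X)^(-1) explicitly.
X = np.column_stack([np.ones(n), x])
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b_hat)  # approximately [2.0, 0.5]
```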

FIGURE 8.3 This Venn diagram shows the intersection of three main disciplines needed for new product development, not to mention for all business analytics whether for new products or not. This chart is based on a similar diagram in Duchesnay [2019].

Each discipline is separate yet tied to the others. They are separate in that they are concerned with different problem sets: econometrics is concerned with economic (and business) problems, while statistics is concerned with a wider array of problems that includes economics and business. Statistics also covers a wider field of methods such as experimental design, ANOVA, data visualization, multivariate analysis, and sampling methodologies, to mention a few. The two are intimately connected in that both are concerned with estimation methods. Together, they are a driving force in data analysis. Their combination in conjunction with Computer Science is Machine Learning.

Domain Application is the specific set of problems that have to be addressed and the domain knowledge needed to frame solutions for those problems. In a business context, the Domain Application is the business context (e.g., what the business does, its strategic goals, its organization, and so forth) as well as the industry the business operates in (e.g., the competitive structure and the regulatory environment). The overlap of Computer Science and Domain Applications is the province of Artificial Intelligence (AI). AI is the development and application of intelligent software to mimic thought and learning. As noted in Wikipedia:4

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.

This is a broad definition. I am taking a narrower perspective with a focus on business problems. In this context, AI will assess, i.e., “learn” from, large and complex volumes of data, looking for patterns and trends that are not only interesting but meaningful for the business. These patterns and trends could involve operations as well as market conditions. For new product development, the volume of data generated by sensors could be processed through AI algorithms that learn how consumers use existing products and make recommendations for new ones. For example, Schnurrer et al. [2018] note how automobiles are equipped with an array of sensors that provide data on every aspect of a driver’s driving behavior, not to mention the performance of the vehicle under different driving conditions. These data can be used not only to design new vehicles that better match consumers’ driving preferences, but also to identify mechanical and electronic issues that might otherwise go undetected and that need to be resolved before a serious problem develops. The sensors in modern vehicles can transmit these data through the “cloud” to central data repositories (i.e., data stores) that then process the data into data warehouses accessible by engineering and design teams. In essence, the sensor data flow through a Wireless Sensor Network (WSN). See Kim et al. [2019] on WSNs. This has the effect of reducing product development costs and speeding the development of newer vehicles.
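
A minimal sketch of the “learning usage patterns” idea: unsupervised clustering of simulated driving-behavior features separates driver segments that design teams could study. The features, values, and two-segment structure are entirely hypothetical, standing in for what an AI pipeline over WSN data would do at far larger scale.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features per driver: [mean speed (mph), hard-braking
# events per 100 miles], simulated for two kinds of drivers.
rng = np.random.default_rng(7)
city = np.column_stack([rng.normal(25, 4, 300), rng.normal(12, 3, 300)])
highway = np.column_stack([rng.normal(65, 5, 300), rng.normal(2, 1, 300)])
drivers = np.vstack([city, highway])

# K-means recovers the two usage segments without labels; the cluster
# centers summarize how each segment drives.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(drivers)
print(kmeans.cluster_centers_)
```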

The implication is that future new product development teams must consist of people who are not only skilled in quantitative methods, the focus of this book, but who also possess the technological knowledge and ability to use the new AI systems and the plethora of data generated by the ever-expanding array of sensors in our modern economy. Because of the technology changes, moreover, newer approaches to data analytics are constantly being developed in academic and industrial research laboratories. Data visualization, for example, which grew out of Exploratory Data Analysis (EDA), has become an important component of statistical analysis when large quantities of data are available.

Constant training

Since technology is rapidly changing, those in the new product development area must constantly work to maintain their skill level and enhance it to keep pace with the technology changes. This includes methods for data access and data management, programming languages, data mining and text mining methodologies, and developing AI technologies, to mention a few. In short, future new product development personnel have to be multiversed.
