Shewhart and Deming
If done well, our processes will provide us with tangible information regarding our ability to execute and deliver to meet the customer's expectations. If we are attentive and realize its importance, the work will produce data or metrics that we can subsequently compile into information. For example, we may be interested in how long it takes to perform a specific process, or to achieve its output, so we can better predict this in our project schedules; that prediction will shape our project plan. We may not like what we find regarding the process output or performance. If our organization is not satisfied with the performance of a specific process, we will need to understand what we have and why it produces the results it does in order to determine what we must change to perform as we desire. Of course, to know whether the process is performing as we wish, we need some expectation for the output or outcome of that process. All of this informs us as to the data and information we will need from the process. If we are interested in time and throughput, we will measure things associated with time. If it is waste, we will look at the repeatability of the outcome, for example, how much rework occurs or how much material is disposed of. If the variation of the output is too wide, we will look for ways to improve the process by reducing the variation of its output. There are many approaches we can take for the planning and the physical work. We can use the simple Shewhart or Deming cycle reviewed earlier, or a project can be set up to take on these changes, executed via either a more formal approach or a lean approach such as Kanban.
There are four steps in the Deming or Shewhart cycle, and they are fairly easy to understand. The first step is to Plan the improvement. In the course of doing the work, we may have been collecting data and interviewing those doing the work to understand the limits of the system. What matters in the work? How can we make this better? How much improvement would make the system acceptable? Knowing this allows us to formulate an approach: specific steps to be employed in what amounts to a mini experiment, one that is well understood by the team, not just by management.
Step two is the Do phase, in which we carry out the steps in the plan, collecting data along the way. The Do phase is essentially the controlled experiment we devised in the Plan phase; it is not an across-the-board change but an incremental change that allows exploration. What happens if we take this action? The results of this experimentation will inform the next steps.
Step three is the Check step. In Check, we compare what happened in the experiment with what we believed would happen. If the prediction matches the outcome, we can have some confidence in that outcome and its correlation to the actions taken. If the outcome differs from what we expected, we should revisit the preparation, the experiment performed in the Do phase, and the post-experiment analysis; we need to understand why things turned out the way they did. Whether our prediction was too optimistic or too pessimistic, in either case we were incorrect and need to know why.
Step four is the Act step, which really means enact, as in institute the change across the board. We know the results of the experiment and are now confident that this change is an improvement. The change is then articulated to other teams and other projects.
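The quantitative heart of the Check step is comparing the Do-phase data against the Plan-phase prediction. A minimal sketch, with illustrative data and an assumed tolerance band (the function name, numbers, and threshold are hypothetical, not from any standard):

```python
from statistics import mean

def check_step(predicted_hours, observed_hours, tolerance=0.5):
    """Compare the Plan-phase prediction with the Do-phase measurements.

    Returns the observed mean and whether it landed within the
    (assumed) tolerance band around the prediction.
    """
    observed_mean = mean(observed_hours)
    within = abs(observed_mean - predicted_hours) <= tolerance
    return observed_mean, within

# Do-phase measurements from the mini experiment (illustrative data)
obs = [4.2, 3.9, 4.4, 4.1, 4.0]
observed_mean, matched = check_step(4.0, obs)
```

If `matched` is true, we gain some confidence in the correlation between actions and outcome; if not, we revisit the preparation and analysis as described above.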
No matter the results of the experiment, we should record both the experiment details (what was done and how) and the results. Positive results are typically captured via updated process documentation. We question whether updating the process documents alone is enough, as we then have no record of the experiments that got us to this new version of the process. Those not involved in the process will not see how we arrived at this
Figure 3.8 There is much to Total Quality Management.
state. Thus, the learning should be recorded beyond the subsequent process updates. In our experience, however, unfavorable experiment results are often not recorded. Capturing the failed experiments is an additional method of learning, and not capturing them will likely result in our conducting the same experiment again. Repeating a past experiment is not necessarily a bad thing; it is another opportunity to explore, and ideally the results of any subsequent experiment, assuming the same set of experiment methods, will replicate the previous results.
Total Quality Management
One of the first things for our team and the organization at large to recognize is that data is dispersed. There is variation in all things. This is also true for our process work, and it is one of the reasons for using tools to understand this dispersion. Within this dispersion, there are two types of variation.
Chance cause variation, also called unavoidable causes or chance causes, is the randomness that is expressed as a consequence of the system in its present incarnation. These causes are not yet under technical control but are present in
Figure 3.9 Ishikawa or fishbone diagrams are useful to generate ideas.
theoretically almost infinite numbers. They are unavoidable or chance causes, and the variation they produce is called "controlled variability." As managers, we cannot blame our workers for them, given the existing work standards and drawings.
In contrast, the other type of variation, known as special cause, produces some abnormality in the process and results in a particularly large variation, e.g., when something not covered by the work standards happens or the work standards are disobeyed. Such causes can be eliminated through team understanding of the reasons for the variation and cooperative effort; they are called avoidable or "assignable" causes, and the variation due to them is called uncontrolled variability.
Altering the system can alter the range of variation in a way that we desire. Additionally, the exercise of taking measurements and reviewing them to understand the process capability (and to get clues as to where improvements may reside) is an opportunity for team learning.
We will use the tools associated with the Total Quality Management (TQM) approach; these tools can help us understand the situation and then take prudent action. In fact, we would want our team to be experienced with these tools and techniques, using them both in the context of the team and on their own, learning and propagating that learning throughout the organization.
Cause and Effect Diagram
The cause and effect diagram, also called the fishbone diagram or Ishikawa diagram, is a tool for brainstorming or generating ideas associated with some phenomenon, symptom, or improvement area. The large bones on the diagram are the hierarchical attributes we believe could cause the issue shown at the head of the fish; in a manufacturing setting, for example, these bones could be labeled measurements, materials, methods, machines, personnel, and environment. We then brainstorm ideas that fit those major-bone categories and that could produce the symptom or performance we are seeing in the system.
To get the most out of this tool, those closest to the issue at hand, whether a failure or a specific area of desired improvement, must be part of the exploration. Experience with this technique indicates that the open discussion it requires generates many ideas as to why the system performance or outcome is the way it is, and these exchanges build further ideas or thoughts for improvement. In classes we have conducted, using this tool to explore the project management knowledge areas has been productive: the students arrive at the conclusion that these knowledge areas represent a collection of systems, and that predicting which system fails and produces the observed failure or phenomenon is much more complicated than originally thought.
Check sheets are used to identify physical manifestations of a process or interaction that is not understood. For example, a smudge may show up on a quarter panel of a vehicle. We would make a graphic of that quarter panel, and every time we find a mark on the panel, we replicate that mark on the paper drawing. This shows the team where the anomaly manifests, which can be used to trace the source of the anomaly and thus the root cause and its resolution.
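A check sheet is at bottom a tally by location or category. A minimal sketch of that tally, with hypothetical panel locations and counts:

```python
from collections import Counter

# Each inspection records the region of the quarter panel where a
# smudge was found (region names and data are illustrative)
marks = ["upper-left", "upper-left", "lower-right",
         "upper-left", "center", "upper-left"]

tally = Counter(marks)
# The region with the most marks is the first place to look for the source
hottest, count = tally.most_common(1)[0]
```

Here the cluster in one region (`upper-left` in this made-up data) is the pattern the paper check sheet would reveal visually.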
Variation, or variation that is not understood, is the bane of effective and efficient organizations. Everything from the processes of the organization, to schedules, and
Figure 3.10 Check sheets can help us discover patterns.
costs is subject to variation. Not understanding variation in general often results in unrealistic plans, so the things that matter to the organization especially require this understanding. For example, consider a test department that performs a set of steps in preparation for testing. If this preparation is important, then understanding the variation of the preparation work, that is, the distribution of the time in hours or days required to prepare adequately, is of interest for planning purposes, as well as for tracing the source of that variation so as to influence it. Specifically, understanding the variation is the starting point for identifying its sources and determining specific actions that can reduce it.
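For the test-preparation example, the first quantitative step is simply computing the center and spread of the observed times. A sketch with illustrative data and an assumed two-sigma planning band (not a prescribed rule):

```python
from statistics import mean, stdev

# Hours of test preparation observed over recent cycles (illustrative)
prep_hours = [10, 12, 9, 14, 11, 10, 13]

avg = mean(prep_hours)
spread = stdev(prep_hours)  # sample standard deviation

# A naive planning range (assumed +/- 2 sigma) for the schedule
plan_range = (avg - 2 * spread, avg + 2 * spread)
```

A schedule that plans on `avg` alone ignores `spread`; quoting the range instead is what turns the measurement into a realistic plan.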
Control charts plot the performance of a process against upper and lower control limits. This graphical representation of a specific parameter's performance makes it possible to see when the system or process moves from performance that is within control to performance that is out of control. The upper and lower control limits are calculated from measurements of a key attribute or parameter. In this way we see exactly how the process performs, and likewise how changes make the process better or worse.
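The calculation behind the chart can be sketched as follows. This simplified version derives the limits as the mean plus or minus three sample standard deviations (real control charts typically use range-based constants; the data and function names here are illustrative):

```python
from statistics import mean, stdev

def control_limits(samples):
    """Approximate lower/upper control limits as mean -/+ 3 sigma
    (a simplified stand-in for the usual range-based constants)."""
    center = mean(samples)
    sigma = stdev(samples)
    return center - 3 * sigma, center + 3 * sigma

def out_of_control(samples, new_points):
    """Flag any new measurement falling outside the limits."""
    lcl, ucl = control_limits(samples)
    return [x for x in new_points if x < lcl or x > ucl]

# Baseline measurements of a key parameter (illustrative data)
baseline = [5.0, 5.1, 4.9, 5.2, 4.8, 5.0, 5.1, 4.9]
flagged = out_of_control(baseline, [5.05, 6.2, 4.95])
```

A flagged point is a candidate special cause; points inside the limits are treated as common cause variation.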
Distributions tell us something about the measure of control we can exert, as well as the range of possible outcomes from the effort. Gathering data allows us to perform statistical analysis, giving us the range of possible outcomes for the item under consideration. We can calculate the upper and lower control limits; within them lie the normal operating conditions, or common cause variation. Common cause variations are those things within the system that produce a consistent and random variation; common cause is what we see when the control chart data stays within the upper and lower control limits and the associated rules. Common cause variation is what remains after we have
Figure 3.11 Understanding the variations in the system help us with prediction and future improvement.
Figure 3.12 Histograms are also a good way to present distribution.
removed the special cause variation. Special cause variation, by contrast, is not produced by the natural variation within the system but by some exogenous change. For example, processed material comes into the manufacturing facility with properties beyond the expected variation of the process that produced it.
Figure 3.13 Pareto helps in prioritization of the things that should be addressed.
Histograms are used to understand the distribution of a parameter; for example, we can review the time it takes to perform a certain function, or the distribution of some other variable. The histogram allows us to see the variation in that variable. Knowing the distribution makes our planning more accurate: instead of guessing, we have the range of possibilities. If we do not like the distribution, we can conjure up ideas to explore that would make it more as we need it to be rather than what is presently demonstrated.
Histograms can also help us see whether a cascade of variables may be impacting the final curve. The graphic above is slanted, or skewed, left and is not a normal distribution. Histograms can be skewed left or right, or can have multiple peaks, referred to as bimodal. The appearance of the distribution gives clues to the sorts of things that may be shaping it; for example, a bimodal distribution indicates that more than one variable is impacting the distribution.
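Building a histogram is just counting values into fixed-width bins. A minimal sketch with made-up cycle times (the bin width and data are illustrative):

```python
from collections import Counter

def histogram(values, bin_width):
    """Count values into fixed-width bins keyed by each bin's lower edge."""
    return Counter((v // bin_width) * bin_width for v in values)

# Cycle times in minutes, clustered toward the low end (illustrative)
times = [8, 9, 9, 10, 10, 10, 11, 11, 14, 15]
bins = histogram(times, bin_width=2)
```

Printed as bars, the resulting counts would show the skew described above: a tall peak at the low bins with a tail toward the higher times.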
When we want to prioritize what we undertake to improve, we will likely use a Pareto chart. This chart is a histogram sorted from largest to smallest, along with a line graph that shows the cumulative percentage contributed by each bar. This gives us the 80/20 rule, wherein 80% of the problems come from 20% of the causes, and it provides a prioritization scheme when we have many issues to address. We want to apply our efforts to the biggest problems and work down to the smaller sources; there is no sense in solving problems that have only a minimal impact on the project and organization. Pareto charts can rank either the volume of problems or the cost of each of the parameters on the x-axis.
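The sorting and cumulative-percentage bookkeeping behind a Pareto chart can be sketched as follows (defect categories and counts are invented for illustration):

```python
def pareto(counts):
    """Sort categories largest-first and attach the cumulative
    percentage each category contributes to the total."""
    total = sum(counts.values())
    running = 0
    rows = []
    for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        running += n
        rows.append((cause, n, round(100 * running / total, 1)))
    return rows

# Hypothetical defect tallies from a check sheet
defects = {"scratch": 50, "dent": 25, "smudge": 15, "misfit": 10}
rows = pareto(defects)
```

Reading down the cumulative column shows how quickly the top one or two causes account for most of the problems, which is exactly the prioritization the chart provides.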
The scatter plot (or scatter diagram) helps to visualize correlation, not causation, between two variables. It can be used to understand the connection of one variable with another, which may aid in making the system more predictable. It is not causation: some third element may be making both variables behave the way they do (or there may even be cause and effect, but this method does not prove it). The tighter the collection of dots, the stronger the correlation; the more dispersed, the weaker. There are also positive and negative correlations: as one variable increases, so does the other (positive correlation), or as one variable increases, the other decreases (negative correlation). In the graphic we see that the positively sloped (positive correlation) pattern is tight and therefore has a high degree of correlation. The
Figure 3.14 Scatter plots help to uncover any correlation (not causation) between variables.
negative correlation has a wider pattern, and therefore the correlation is said to be weak.
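The "tightness" of a scatter pattern is quantified by the Pearson correlation coefficient, which runs from +1 (tight positive) through 0 (no linear relationship) to -1 (tight negative). A self-contained sketch with fabricated data:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Illustrative data: a perfectly tight positive pattern
hours = [1, 2, 3, 4, 5]
output = [2, 4, 6, 8, 10]
r = pearson_r(hours, output)
```

As the caveat in the text stresses, a strong `r` establishes correlation only; it says nothing about which variable (if either) is the cause.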
There are times when these tools usually associated with total quality management cannot help the team and the organization ascertain the source of the difficulties. This is especially true when it comes to creative thinking. Some tools help us evoke connections between concepts, ideas, or material associations; others help us select the best of several possible decisions. With each of these tools, a team approach can be employed, evoking connections that may not be readily seen by any one team member.
Mind maps are ways to explore ideas, building the structure as we build associations. The result is a graphical representation of ideas without a rigid structure at the start, facilitating brainstorming and the discovery of ideas we have not previously considered. Mind mapping is a somewhat more structured way of brainstorming and can be done alone or with a team on a whiteboard.
We can use a decision matrix to help us decide among alternatives, and these techniques lend themselves to team discussion and therefore facilitate learning; in fact, they are not of much use for single-person exploration. There are unweighted and weighted versions of the decision matrix. The unweighted decision matrix is a list of attributes against which the alternatives under consideration are compared, with each attribute counting equally. The Pugh matrix is an example of a weighted decision matrix, in that we prioritize the most prized attributes by assigning them a number. Either can be used to evaluate a variety of possible alternatives against a common set of desired attributes.
First, it is important to note the role of the team members in making any assessment or comparison of ideas. As noted throughout this material, more than one perspective is needed; this differing of perspective opens the ideas to true scrutiny, as well as to the possibility of generating more ideas or some amalgam of the ideas under consideration. This variety of perspectives is helpful when it comes to optimization, that is, discovering the trade-offs for subsequent decisions. The perspective is generated from a cross-functional group of people who can articulate their views on things like evaluation criteria as well as decision possibilities. This requires an environment that encourages engagement and is safe for the team members to speak their minds. We are not looking at the attributes from a single perspective, such as that of the development group; the expectation is that there are discussions among the team members as the evaluation is underway, with each articulating why they rated each attribute as they did. Decision matrix tools help us evoke these tangible exchanges.
Next, weighted means that the attributes we are considering are not of equal value when it comes to the decision. We will compare the strategies or ideas we generate against these attributes, some of which we desire more than others. For example, perhaps we value cost attributes over some of the feature content, throughput, or maintainability. If so, the emphasis on the cost variable will be addressed in the matrix, and the final number presented by the matrix will reflect this ranking.
This weighting (priority) becomes a multiplier for the rating and produces the end value for the attribute. For example, we would expect a priority value of 5 to produce a weighted value of 20-25, as that would mean we have found a good
Figure 3.15 The Pugh matrix is an example of a weighted decision matrix.
solution for the highest-priority attribute (our solution then holds a rating of 4-5, the top end of the scale). If our priority level 5 produces only a 10 (a rating of 2), then we have not matched the design to this objective very well.
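The priority-times-rating arithmetic can be sketched in a few lines. The attribute names, priorities, and ratings below are invented for illustration; the structure mirrors a weighted (Pugh-style) matrix:

```python
def weighted_score(priorities, ratings):
    """Sum of priority (1-5) times rating (1-5) over all attributes."""
    return sum(priorities[a] * ratings[a] for a in priorities)

# Hypothetical attribute priorities: cost matters most
priorities = {"cost": 5, "throughput": 3, "maintainability": 2}

# Hypothetical ratings of two design alternatives against each attribute
design_a = {"cost": 4, "throughput": 3, "maintainability": 5}
design_b = {"cost": 2, "throughput": 5, "maintainability": 4}

score_a = weighted_score(priorities, design_a)  # cost alone contributes 5 * 4 = 20
score_b = weighted_score(priorities, design_b)
```

Design A wins here largely because it scores well on the priority-5 cost attribute, which is exactly the behavior the weighting is meant to produce.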
Brainstorming is a technique developed by Alex Osborn in 1941, and it has been successfully applied over many decades by many organizations. With brainstorming, we are tapping into the hive mind, the genius of the group. There are a few rules for successfully conducting a brainstorming event.
■ Quantity: The more ideas the better; the more ideas, the more probable that the list of ideas will include some very valuable gems from which we can take advantage. The key is to generate ideas, not to develop any of the ideas.
■ No criticism: There will be no criticism of the ideas generated; critiquing the ideas can dampen the enthusiasm and the pace of ideas coming from the team, and we seek to generate many ideas.
■ No constraints: To ensure the maximum number of ideas, we likewise will not consider constraints upon the ideas, for the same reason there is no criticism. The intent is to bring forth a mass of ideas.
■ Combining ideas: During the generation of ideas, the team is encouraged to build upon the ideas generated by the other team members. That is one of the benefits of this type of idea generation: hearing an idea brings related ideas or extensions of it from the other group members.
Figure 3.16 Data can be found in many places to help us understand the situation.
Experience suggests this exercise with the team can generate new ideas for doing the work, new sequences for doing the work, the elimination of unproductive work, and even new products. Additionally, we have a collection of individuals contributing and exchanging ideas, which exposes them to different perspectives and mental models upon which the work can build.