
Design for Six Sigma

Design for Six Sigma (DFSS) and similar methodologies focus on translating the VOC in a meaningful way from concept to commercialization and back to the customer, thereby closing the feedback loop on the customer's original requests for improvement to a current design or for a new design. This helps avoid a simple internal focus on the voice of the business rather than the VOC. In a traditional design approach, products and services are built component by component, which sometimes causes the higher-level system to be sub-optimized from cost, performance, and lead time perspectives. In contrast, the DFSS approach focuses on optimization of features and functions mapped from customer requirements into design solutions. The mapping starts with the big Ys (customer requirements), which are decomposed into smaller ys (specifications) and then into the xs (design variables) that impact these outputs (ys). The DFSS optimization methods quantify the Y = f(X) relationships. The quantification relates changes in the average level and variation of the outputs to changes in the inputs using statistical models.
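To make the Y = f(X) idea concrete, the following minimal Python sketch propagates input variation to an output by Monte Carlo sampling; the transfer function, input means, and standard deviations are hypothetical values chosen only for illustration.

```python
# Minimal sketch of the Y = f(X) idea: a hypothetical transfer function
# relating two design variables (x1, x2) to an output Y, with input
# variation propagated to the output by Monte Carlo sampling.
import numpy as np

rng = np.random.default_rng(seed=1)

def transfer_function(x1, x2):
    # Hypothetical relationship; a real one would come from designed
    # experiments or engineering models.
    return 5.0 + 2.0 * x1 - 0.8 * x2 + 0.3 * x1 * x2

# Inputs with assumed means and standard deviations (illustrative only).
x1 = rng.normal(loc=10.0, scale=0.5, size=100_000)
x2 = rng.normal(loc=4.0, scale=0.2, size=100_000)

y = transfer_function(x1, x2)
print(f"Y mean = {y.mean():.2f}, Y std dev = {y.std(ddof=1):.2f}")

# Tightening the variation of x1 reduces the variation of Y.
x1_tight = rng.normal(loc=10.0, scale=0.1, size=100_000)
y_tight = transfer_function(x1_tight, x2)
print(f"Y std dev with tighter x1 = {y_tight.std(ddof=1):.2f}")
```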

The DFSS methodology is characterized by five sequential phases with deliverables. These are shown in Table 4.10. In addition to these deliverables, Table 4.11 lists the key tools and methods used to complete each deliverable. The first phase is identification of the customer requirements using the VOC. The second phase requires translating the VOC requirements into internal specifications and developing design alternatives. In the third phase, the alternative design concepts are coalesced into a final design that is optimized using statistical models. Optimization ensures that reliability, serviceability, and other performance specifications will be met in practice. In the fourth phase, performance is validated against VOC requirements under expected environmental conditions and customer usage, using specialized capability analyses to demonstrate low failure rates. The fifth phase is incorporation of all lessons learned by the CE team for future development projects.

In the identification phase, customer needs and requirements are gathered using market research tools and methods including quality function deployment (QFD). This is a critical step of the DFSS method because a competitive advantage is created by effective translation of the VOC into cost-effective and high-performance products and services. These customer requirements, or critical-to-customer characteristics (CTCs), are identified and quantified using market research and QFD methods. This information is used to calculate internal design

TABLE 4.10

Design for Six Sigma Deliverables

Identify
  a. Identify customer needs, expectations, and requirements (CTCs) using marketing research, QFD, and related tools.
  b. Establish metrics for CTCs.
  c. Establish acceptable performance levels and operating windows for each output.

Design
  a. Evaluate and translate CTCs into functional specifications.
  b. Evaluate and select concept designs with respect to design specifications and targets using DFMEA, alternative ranking methods, focus groups, and QFD.

Optimize
  Select important design concepts for optimization using experimental design methods, reliability analysis, simulation, tolerance design, and related optimization tools.

Validate
  a. Pilot/prototype according to design specifications.
  b. Verify that pilots/prototypes match predictions; mistake-proof the process and establish the process control plan for CTCs using capability analysis, mistake-proofing, control plans, and statistical process control.

Incorporate
  Verify manufacturability and CTCs are met over time using design reviews and metrics scorecards.

CTC = critical-to-customer characteristic; QFD = quality function deployment; DFMEA = design failure mode and effects analysis.

specifications and performance targets for the specifications. Scorecards are also created to report CT characteristics and specification current performance (if it exists) and their targets. The DFSS scorecard will be discussed in the validation phase with an example. At the end of this phase, the VOC requirements should be translated into CT characteristics or higher-level requirements and internal specifications. The team also finalizes performance levels for the design’s features and functions (i.e., CT characteristics).

In the design phase, solutions are mapped to the CT characteristics and specifications using QFD methods. Several alternative design concepts and prototypes are often created in this phase to evaluate solutions from different perspectives. These design alternatives are evaluated using a variety of analytical tools and methods, including a Pugh matrix. To the extent that solutions already exist and satisfy the required features and functionality, they should be incorporated into the new design, unless better ones have become available. This will enable the team to focus on performance gaps

TABLE 4.11

Design for Six Sigma Tools and Methods

Identify
  • Market/customer research
  • QFD
  • CTC flow down

Design
  • Brainstorming, etc.
  • QFD
  • Robust design
  • Monte Carlo simulation
  • DFMEA
  • Reliability modeling
  • Design for manufacturing

Optimize
  • DOE
  • Transfer function Y = f(X)
  • Design/process simulation tools
  • Tolerance design

Validate
  • Process capability modeling
  • DOE
  • Reliability testing
  • Mistake-proofing
  • Statistical analysis
  • Preliminary quality control plan
  • Updated DFSS scorecard

QFD = quality function deployment; DFMEA = design failure mode and effects analysis; CTC = critical-to-customer characteristic; DOE = design of experiments; DFSS = design for Six Sigma.

without current solutions. Eventually, the strongest elements of the design alternatives may be combined into the final design using a Pugh matrix.

Tools and methods used in this phase include brainstorming, QFD, robust experimental design evaluations using statistical models, Monte Carlo simulation to determine optimum tolerances for variables, DFMEA to analyze how a design might fail and the causes of failure, reliability analysis to predict the useful life of the product or service in the field under actual usage, and design for manufacturing (DFM). DFM tools and methods integrate and focus design activities.

The DFMEA is used to analyze the ways (i.e., modes) in which a design could fail to meet customer requirements. Countermeasures are developed to prevent or manage the potential failures identified. Figure 4.9 shows a generic DFMEA form, and Tables 4.12 and 4.13 list important attributes of this DFMEA form. The failure mode is a description of a nonconformance

FIGURE 4.9

Design failure mode and effects analysis (DFMEA). SEV = severity; OCC = occurrence probability; DET = detection probability; RPN = risk priority number.

or failure of a sub-system. A failure effect is the impact on the customer from the failure mode if it occurs. Severity is an assessment of the seriousness of the failure mode for the customer. Severity is measured using a scale from 1 to 10, with 1 signifying a minor impact on the external customer and 10 representing a very severe impact on the external customer. The failure cause describes how the failure mode could have occurred.

TABLE 4.12

DFMEA Definitions

Failure mode
  Description of a nonconformance or failure of a system.

Failure effect
  Effect of a failure mode on the customer.

Severity
  Assessment of the seriousness of the failure mode on the customer using a scale of 1 to 10.

Failure cause
  Describes how the failure mode could have occurred.

Occurrence probability
  An assessment of the frequency with which the failure cause occurs using a scale of 1 to 10.

Detection probability
  An assessment of the likelihood (or probability) that current controls will detect the failure mode using a scale of 1 to 10.

Risk priority number (RPN)
  RPN = (severity) x (occurrence) x (detection). It is used to prioritize recommended actions. Special consideration should be given to high severity ratings even if occurrence and detection ratings are low.

TABLE 4.13

Twenty Steps to Create a DFMEA

Step

1. Assign a DFMEA number.

2. Assign a title to your DFMEA.

3. List department and person responsible for the DFMEA.

4. List customer and product name.

5. Assign a DFMEA start date.

6. Assign current date.

7. List core team members.

8. List design systems based on hierarchy.

9. List potential failure modes.

10. List potential failure effects.

11. Assign severity to each effect.

12. List potential failure causes.

13. Assign occurrence probability to each cause.

14. List current controls for causes.

15. Assign detection probability to causes.

16. Calculate the RPN.

17. List preventive or corrective actions.

18. Assign responsibility for preventive or corrective actions.

19. Record preventive and corrective actions by date.

20. Recalculate RPNs and reprioritize RPNs.

DFMEA = design failure mode and effects analysis; RPN = risk priority number.

The occurrence probability is an assessment of the frequency with which the failure cause occurs, using a scale from 1 to 10, with 1 representing a remote likelihood of occurrence and 10 signifying that occurrence is almost inevitable. The current controls relate to the systems in place to prevent the failure cause from occurring or reaching an external customer. The detection probability is an assessment of the probability that current controls will detect the failure cause, using a scale of 1 to 10. In this inverse scale, 10 means the current controls are not effective, whereas 1 implies that the current controls are very effective in detecting a failure mode. The risk priority number (RPN) is calculated as RPN = (severity) x (occurrence) x (detection) for each failure cause. The RPN ranges from 1 (minor) to 1,000 (major) and is used to prioritize recommended countermeasures for each of the failure causes associated with a failure mode. Special consideration should be given to high severity ratings, even if occurrence and detection ratings are low.
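As an illustration of the RPN calculation, the Python sketch below scores a few failure causes, sorts them by RPN, and flags high-severity items even when their RPN is low; the failure modes and ratings are invented for the example.

```python
# Illustrative sketch of RPN scoring for DFMEA line items, using the
# RPN = severity x occurrence x detection rule described above.
# All failure modes and ratings below are hypothetical.
from dataclasses import dataclass

@dataclass
class FailureCause:
    failure_mode: str
    cause: str
    severity: int      # 1 (minor) to 10 (very severe)
    occurrence: int    # 1 (rare) to 10 (almost inevitable)
    detection: int     # 1 (controls very effective) to 10 (not effective)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

causes = [
    FailureCause("Seal leaks", "Improper material specification", 8, 3, 4),
    FailureCause("Housing cracks", "Wall thickness too thin", 9, 2, 2),
    FailureCause("Connector loosens", "No locking feature", 5, 6, 7),
]

# Prioritize by RPN, but flag high-severity items even when their RPN is low.
for c in sorted(causes, key=lambda c: c.rpn, reverse=True):
    flag = " (review: high severity)" if c.severity >= 8 else ""
    print(f"RPN {c.rpn:4d}  {c.failure_mode} / {c.cause}{flag}")
```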

TABLE 4.14

Ten Methods to Increase Product Reliability

Method

1. Develop sample size plans to determine the number of test units required to calculate reliability percentages for units under test with statistical confidence.

2. Develop specification demonstration plans to estimate the maximum number of failures that will occur in a predetermined time duration.

3. Determine accelerated life test plans to calculate the number of test units to be allocated to each experimental condition of the experimental design.

4. Use parametric analysis of repairable systems to estimate the mean number of repairs over time for units under test, assuming a specific distribution.

5. Use nonparametric analysis of a repairable system to estimate the mean number of repairs over time for units under test, without assuming a specific distribution.

6. Use accelerated life testing to build models of failure time versus several independent variables.

7. Use regression-based testing to build models to predict time to failure versus several independent variables, including covariates, nested terms, and interactions.

8. Use probit analysis to estimate survival probabilities of test units exposed to an experimental stress condition.

9. Use distribution analysis to determine the time to failure probabilities for a design characteristic exposed to an experimental condition.

10. Integrate information gained from reliability analyses into the DFMEA.

Reliability analysis uses a variety of statistical methods to evaluate the likelihood of a design meeting performance targets and the occurrence of failure modes under a variety of expected use conditions. These analyses often use accelerated methods. Table 4.14 lists ten methods to increase the reliability of a product or service. The first is the development of sampling plans to estimate the number of test units required to calculate reliability percentages for units under test with statistical confidence. The second is the development of demonstration plans to verify a maximum number of allowed failures within a predetermined time. This information is used to create accelerated life test plans to calculate the number of test units to be allocated to each experimental condition of an experiment. Accelerated testing enables a design to be stressed for short periods of time to predict its performance at lower stress levels for longer periods of time. An example would be to heat a component at 100°C for twenty-four hours to develop correlated failure rates at a lower temperature for twelve months. A service example would be to use models to analyze high demand on a service system and its impact on customer service levels and system cost.
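The temperature example above can be quantified with an Arrhenius-type acceleration factor, a common model for thermal stress in accelerated life testing. The sketch below is not the book's worked example: the activation energy (0.7 eV) and use temperature (40°C) are assumed values chosen purely for illustration.

```python
# Hedged sketch of an Arrhenius acceleration factor for temperature stress.
# The activation energy is an assumed value for illustration only.
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann's constant, eV per Kelvin

def arrhenius_acceleration_factor(t_use_c, t_stress_c, activation_energy_ev):
    """Ratio of time-to-failure at use temperature to that at stress temperature."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((activation_energy_ev / BOLTZMANN_EV)
                    * (1.0 / t_use_k - 1.0 / t_stress_k))

# Stress at 100 C, predict equivalent exposure at an assumed 40 C use condition.
af = arrhenius_acceleration_factor(t_use_c=40.0, t_stress_c=100.0,
                                   activation_energy_ev=0.7)
hours_at_stress = 24.0
print(f"Acceleration factor: {af:.1f}")
print(f"{hours_at_stress} h at 100 C is roughly {hours_at_stress * af:.0f} h at 40 C")
```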

Parametric analysis of repairable systems estimates the mean number of expected repairs to a system over time for units under test by assuming a specific probability distribution. In contrast, a nonparametric analysis of a repairable system is used to estimate the mean number of repairs to a system over time for units under test, but without assuming a specific probability distribution. Distribution assumptions are important when building accelerated testing models of time to failure versus several independent variables. Accelerated testing models assume linear or exponential relationships between time to failure and the independent or accelerating variables. However, regression-based testing can also be used to build reliability models to predict time to failure versus several independent variables, including covariates, nested variables, and variable interactions. Probit analysis is a method used to estimate survival probabilities of test units exposed to an experimental stress condition. Distribution analysis is used to determine the time-to-failure probabilities for design characteristics exposed to an experimental stress condition. Finally, all information gathered during reliability testing and analyses is incorporated into the DFMEA. After several design alternatives have been evaluated using reliability testing, the project moves into the optimize phase.
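As a small illustration of distribution analysis, the Python sketch below fits a Weibull distribution to synthetic time-to-failure data and estimates a survival probability. The data are simulated for the example; a real study would also handle censored (unfailed) units, which this sketch omits.

```python
# Illustrative distribution analysis: fit a Weibull to time-to-failure data
# and estimate the probability of surviving past a stated time.
# The failure times are synthetic and censoring is ignored for brevity.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
failure_hours = stats.weibull_min.rvs(c=1.8, scale=1200.0, size=60,
                                      random_state=rng)

# Fit the shape and scale parameters, holding the location at zero.
shape, loc, scale = stats.weibull_min.fit(failure_hours, floc=0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.0f} hours")

# Estimated probability that a unit survives past 1,000 hours.
survival_1000 = stats.weibull_min.sf(1000.0, shape, loc=loc, scale=scale)
print(f"P(survive 1,000 h) = {survival_1000:.2%}")
```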

In the optimize phase, important characteristics of one or several alternative designs are incorporated into the final optimized design. Relevant tools and methods, including Monte Carlo simulation, tolerance design, computer-aided design, and finite element analysis, are used to develop tolerances for the key process input variables (KPIVs) that impact the key process output variables (KPOVs, or Ys). Figure 4.10 shows how the house of quality (HOQ) is used to translate CT characteristics into specifications to build and analyze transfer functions, i.e., Y = f(X). The levels of the KPIVs are varied according to an experimental design or model to evaluate their combined impact on the KPOVs. The underlying statistical models are regression-based, hence the relationship Y = f(X).
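A minimal sketch of estimating a transfer function from a small designed experiment follows; the factor levels and response values are hypothetical, and ordinary least squares stands in here for the regression-based modeling tools named above.

```python
# Estimating a transfer function Y = f(X) from a two-factor designed
# experiment (2^2 factorial plus a center point) with ordinary least squares.
# The response values are hypothetical run results.
import numpy as np

x1 = np.array([-1.0, -1.0,  1.0,  1.0, 0.0])   # coded level of factor 1
x2 = np.array([-1.0,  1.0, -1.0,  1.0, 0.0])   # coded level of factor 2
y  = np.array([43.1, 47.9, 51.2, 60.3, 50.4])  # measured KPOV

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = coeffs
print(f"Y = {b0:.1f} + {b1:.1f}*x1 + {b2:.1f}*x2 + {b12:.1f}*x1*x2")
```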

Once the transfer functions (Y = f(X)) have been calculated, the KPIVs are set to levels that ensure the KPOVs (or Ys) are at their optimum levels. This concept is shown in Figure 4.11. Statistical tolerancing refers to a methodology for specifying the range over which the KPIVs can vary while the associated Y remains optimized and on target with minimum variation. A KPOV should exhibit a level of variation small enough that, when its level changes because of variations of the Xs, it remains within specification. Complicating a tolerance analysis is the fact that measurement error adds to the variation of the measured KPOVs or KPIVs. Capability analysis will be discussed in Chapter 9.
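The hedged sketch below illustrates a statistical tolerance check by Monte Carlo: inputs are sampled within assumed tolerances, gauge (measurement) error is added, and the fraction of the output falling inside hypothetical specification limits is estimated. The transfer function, tolerances, and limits are all assumptions for illustration.

```python
# Statistical tolerance check by Monte Carlo: sample KPIVs within assumed
# tolerances, add measurement error, and estimate the in-specification
# fraction of the KPOV. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=3)
N = 200_000

def transfer_function(x1, x2):
    return 20.0 + 1.5 * x1 - 2.0 * x2   # assumed fitted model

# Inputs centered on nominal, std dev set so +/-3 sigma spans the tolerance.
x1 = rng.normal(loc=8.0, scale=0.30 / 3, size=N)   # tolerance +/- 0.30
x2 = rng.normal(loc=2.5, scale=0.15 / 3, size=N)   # tolerance +/- 0.15

y_true = transfer_function(x1, x2)
y_measured = y_true + rng.normal(scale=0.05, size=N)  # gauge error adds spread

LSL, USL = 26.4, 27.6  # assumed specification limits on Y
in_spec = np.mean((y_measured >= LSL) & (y_measured <= USL))
print(f"Estimated fraction of Y within specification: {in_spec:.4%}")
```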

FIGURE 4.10

Building a transfer function: Y = f(X). CTC = critical-to-customer characteristic.

In the validation phase, prototypes are carefully evaluated under controlled production conditions. These evaluations are called pilots. Pilots are needed to evaluate the prototypes against the predicted performance targets. Based on the pilot results, specifications of the KPOVs (Ys) and KPIVs (Xs) are finalized by the CE team. Measurement methodologies and testing requirements are also finalized. The DFMEA is updated with pilot evaluations and is used to implement mistake-proofing strategies and countermeasures to control variations of the KPIVs. This information is incorporated into the preliminary quality control plan, shown in Figure 4.12. Quality assurance will also incorporate a process failure mode and

FIGURE 4.11

Using design and process simulation tools to tolerance the design. CTC = critical-to-customer characteristic.

FIGURE 4.12

Required documentation for the preliminary quality control plan. LSL = lower specification limit; USL = upper specification limit.

FIGURE 4.13

Developing the product’s design for Six Sigma (DFSS) scorecard.

effects analysis into the quality control plan and finalize it with the customer and design team.

An integral part of the quality control plan is a scorecard of all variables and their actual performance against targets. The DFSS scorecard is a more recent example, and it is shown in Figure 4.13. This scorecard represents a summarization of several levels of the final product design. These include performance by customer requirement, process performance by operation within each process workflow, raw material performance, and purchased part performance, which are rated using quality metrics. Quality metrics and how to calculate them will be discussed in Chapter 9. These include parts per million (PPM), opportunity counts, defects per million opportunities (DPMO), normalized yield, and the Six Sigma score (Zst). These capability metrics are directly correlated to defect percentages. DFSS scorecards also can be modified to measure performance across a supply chain.
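For reference, the sketch below shows one common way to compute DPMO, yield, and a short-term sigma score (Zst) from defect counts, using the conventional 1.5-sigma shift. The counts are hypothetical, and Chapter 9 covers these metrics in detail.

```python
# Illustrative DPMO and short-term sigma score (Zst) calculation from
# hypothetical defect counts, using the common 1.5-sigma shift convention.
from scipy.stats import norm

defects = 38
units = 5_000
opportunities_per_unit = 12

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
yield_fraction = 1 - dpmo / 1_000_000
z_long_term = norm.ppf(yield_fraction)
z_short_term = z_long_term + 1.5  # conventional 1.5-sigma shift

print(f"DPMO = {dpmo:.0f}")
print(f"Yield = {yield_fraction:.4%}")
print(f"Zst (sigma score) = {z_short_term:.2f}")
```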

In summary, in the validate phase, after working prototypes are built and tested under controlled production conditions, evaluations are used to demonstrate that a new design can be produced under production conditions with high quality. A preliminary quality control plan is created in the validate phase to communicate controls for the design and its supporting process that tie back to the customer requirements. At the end of this phase, the team integrates the lessons learned for use in future design projects.

In the incorporate phase, the team gathers lessons learned from its recent design work to create documentation. This includes detailed drawings of the product or process workflows, specifications and tolerances of each KPOV (or Y), a list of important design features and functions, as well as testing requirements. Integral to documentation and transitioning activities is the transfer of design knowledge to the process owner and local work team. At this point in the project, final verification of the design is made by operations and quality to ensure it achieved all cost and performance targets. The DFSS scorecards are also updated to reflect any new information.

 