With Statistical Engineering you achieve performance leaps, for instance in processes; conventional approaches yield only incremental performance gains. Why? Because Statistical Engineering analyzes the large variation within a process and between processes, whereas conventional approaches analyze only the mean values of actuating variables defined by segmentation.
As an example, suppose application projects take an average lead time of 10 months. Summing the average lead times of prototyping, testing, and other activities suggests a reduction to 8 months. Actual projects, however, vary between 8 and 12 months, each time for different reasons. With Statistical Engineering the root causes of this variation are identified. Knowing them, a lead time of 5 months can be targeted and achieved; less than 5 months is achievable by adding selected actions from agile and lean development.
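To make the contrast concrete, here is a minimal Python sketch (all lead times and cause tags are invented for illustration) comparing the conventional mean-only view with a look at the variation between projects:

```python
# Hypothetical lead times (months) for eight application projects,
# each tagged with a suspected cause of delay. Data are illustrative.
projects = [
    ("supplier-delay", 12), ("supplier-delay", 11),
    ("spec-change",    10), ("spec-change",    11),
    ("none",            8), ("none",            8),
    ("test-rework",     9), ("none",            8),
]

# Conventional view: a single mean value.
mean = sum(t for _, t in projects) / len(projects)
print(f"mean lead time: {mean:.1f} months")

# Variation view: group lead times by suspected root cause.
by_cause = {}
for cause, t in projects:
    by_cause.setdefault(cause, []).append(t)

for cause, times in sorted(by_cause.items()):
    print(f"{cause:>14}: {times} -> mean {sum(times) / len(times):.1f}")
```

In this toy data set, projects unaffected by any of the suspected causes already run at 8 months, suggesting where to attack the variation rather than the average.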
Variation analysis identifies root causes and quantifies their effect on the output, including interactions between root causes. Once the root causes are known, a few actions suffice. Conventional approaches usually propose many actions, often not well founded on data; of these, only a selection is implemented so as not to overload the organization, and often only hindsight shows which actions really worked.
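How main effects and an interaction can be estimated from very few observations can be sketched as follows. This is a generic 2×2 full factorial calculation, not the book's specific procedure, and the response values are invented:

```python
# Estimate two main effects and their interaction from a 2x2 full
# factorial with coded levels -1/+1. Runs and outputs are illustrative.
runs = [  # (factor A, factor B, measured output)
    (-1, -1, 50.0),
    (+1, -1, 62.0),
    (-1, +1, 54.0),
    (+1, +1, 80.0),
]

n = len(runs)
effect_A  = sum(a * y for a, _, y in runs) / (n / 2)      # main effect of A
effect_B  = sum(b * y for _, b, y in runs) / (n / 2)      # main effect of B
effect_AB = sum(a * b * y for a, b, y in runs) / (n / 2)  # interaction A*B

print(f"A: {effect_A}, B: {effect_B}, AB: {effect_AB}")
```

A nonzero AB term shows that the effect of one variable depends on the level of the other, which mean-value comparisons cannot reveal.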
Statistical Engineering achieves both optimization and prevention targets. For prevention, real problems are identified and measured early; collecting potential problems and their potential root causes is unnecessary. Hence the associated tools such as FMEA, FTA, or Ishikawa diagrams become dispensable.
Thinking the problem through, classifying it, and choosing among pre-defined solution options allow target-oriented and often schedulable solving of even the trickiest technical problems.
Problems are solved by iteratively and empirically excluding input variables with little effect on output variation, much as Sherlock Holmes eliminated suspects. This makes up to 24 theoretical tools dispensable.
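The exclusion idea can be sketched in a few lines of Python. This is a simplified stand-in using correlation as the exclusion criterion, with invented observational data, not the book's actual procedure:

```python
import statistics

# Invented observational data: three candidate input variables, one output.
data = {
    "temperature": [20, 25, 30, 35, 40, 45],
    "pressure":    [ 5,  4,  6,  5,  4,  6],
    "tool_age":    [ 1,  2,  2,  4,  5,  6],
}
output = [10.1, 12.0, 14.2, 15.9, 18.1, 20.0]

def corr(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Iteratively exclude the variable least associated with the output
# variation until a single prime suspect remains.
suspects = dict(data)
while len(suspects) > 1:
    weakest = min(suspects, key=lambda k: abs(corr(suspects[k], output)))
    print(f"excluding {weakest} (|r| = {abs(corr(suspects[weakest], output)):.2f})")
    del suspects[weakest]

print("remaining suspect:", list(suspects))
```

With this toy data, pressure is excluded first and temperature survives as the prime suspect; in practice each exclusion would be confirmed empirically before moving on.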
In addition, Statistical Engineering increases efficiency by analyzing multiple variables at once in each iteration. If data are already available, or can be obtained by observation, efficiency can be increased further by doing without prototyping, testing, or simulation.
Engineers are familiar with trial and error: iteratively and empirically changing one actuating variable after another. Statistical Engineering separates the effects of several actuating variables from residual variation using a very small number of data points or samples, which is impossible with trial and error.
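How replication lets effects be separated from residual variation with only eight runs can be sketched as follows. This is a generic replicated 2×2 design with invented measurements, offered as an illustration rather than the book's method:

```python
import statistics

# A replicated 2x2 design: two actuating variables A and B at coded
# levels -1/+1, two replicates per setting. All outputs are invented.
runs = [  # (A, B, output)
    (-1, -1, 49.2), (-1, -1, 50.8),
    (+1, -1, 61.1), (+1, -1, 62.9),
    (-1, +1, 53.6), (-1, +1, 54.4),
    (+1, +1, 79.0), (+1, +1, 81.0),
]

n = len(runs)
effect_A  = sum(a * y for a, _, y in runs) / (n / 2)
effect_B  = sum(b * y for _, b, y in runs) / (n / 2)
effect_AB = sum(a * b * y for a, b, y in runs) / (n / 2)

# Residual variation: spread of the replicates around each cell mean.
residuals = []
for a0, b0 in {(a, b) for a, b, _ in runs}:
    cell = [y for a, b, y in runs if (a, b) == (a0, b0)]
    m = sum(cell) / len(cell)
    residuals += [y - m for y in cell]

print(f"effects: A={effect_A:.1f}, B={effect_B:.1f}, AB={effect_AB:.1f}")
print(f"residual sd = {statistics.stdev(residuals):.2f}")
```

The effect estimates stand out clearly against the residual standard deviation, whereas changing one variable at a time would confound each observed change with that same noise.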
If many actuating variables have to be considered, Statistical Engineering is faster and more efficient than its empirical alternatives, trial and error or complex design of experiments (DoE).
Complex design of experiments can be handled only by specialists with specialized tools if too many failures are to be avoided. For Statistical Engineering, MS Excel is a sufficient tool for planning, data collection, and data analysis. Hence every engineer and controller can apply Statistical Engineering with much benefit after some practice.
You will find the detailed, easy-to-use methodology with all building blocks for successful optimization projects in our reference book Statistical Engineering. It is affordable and available worldwide on Amazon.