INTRODUCTION
LEARNING OBJECTIVES
Walter A. Shewhart
Later work
From the late 1930s onwards, Shewhart's interests expanded from industrial quality to wider concerns in science and statistical inference. The title of his second book, Statistical Method from the Viewpoint of Quality Control (1939), poses the audacious question: What can statistical practice, and science in general, learn from the experience of industrial quality control?
His more conventional work led him to formulate the statistical idea of tolerance intervals and to propose his data presentation rules, which are listed below:

Rule 1. Original data should be presented in a way that will preserve the evidence in the original data for all the predictions assumed to be useful.

Rule 2. Any summary of a distribution of numbers in terms of symmetric functions should not give an objective degree of belief in any one of the inferences or predictions to be made therefrom that would cause human action significantly different from what this action would be if the original distributions had been taken as evidence.

Influence

As a man, he was gentle, genteel, never ruffled, never off his dignity. He knew disappointment and frustration, through the failure of many writers in mathematical statistics to understand his point of view.

In other words, the fact that the criterion we happen to use has a fine ancestry of highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works.
FIGURE 2.1
The model can be used for the ongoing improvement of almost anything, and it contains the following four continuous steps: Plan, Do, Study, and Act. Students, facilitated by their teacher, should be able to complete the following steps using information from a classroom data center. Building staffs, facilitated by their principal or district quality facilitators, should be able to complete the steps of PDSA using information from a building data center. Similarly, district-level strategic planning committees can use the same process.

In the first step (PLAN), the team studies the current situation and plans a change or test aimed at improvement.
In the second step (DO), the plan is carried out, preferably on a small
scale. Identify the process owners who are on the team. Within the action plan
and on the storyboard, explain why new steps are necessary. Collect and chart
the baseline data. Form a hypothesis of possible causes that are related to the
current performance results. A quality tool such as a fish-bone diagram or an
affinity diagram would be useful at this stage. Implement the strategy to bring
about the change.
In the third step (STUDY), the effects of the plan are observed. Monitor
the data. On the baseline data chart, continue with the next data points. Explain
when and how data analysis with appropriate people takes place. Explain what
is being learned through the improvement process. Identify trends, if any can be
discerned. Conduct a gap analysis. Use comparisons, and benchmark [best
practice] data. If negative data result, undergo another cycle of PDSA.
In the last step (ACT), the results are studied to determine what was learned and what can be predicted. If positive data result, standardize the process/strategy and keep the new process going. Repeat the cycle starting with PLAN to define a new change you want. See the diagram of the process below.
After World War II (1947), Deming was involved in early planning for the
1951 Japanese Census. He was asked by the Department of the Army to assist in
this census. While he was there, his expertise in quality control techniques,
combined with his involvement in Japanese society, led to his receiving an
invitation from the Japanese Union of Scientists and Engineers (JUSE).
The first section of the meritorious service record describes his work in
Japan:
The second half of the record lists his service to private enterprise through
the introduction of epochal ideas, such as quality control and market survey
techniques.
Contributions of Deming
The recovery of Japan after World War II has many explanations: Japan was forbidden to be involved in military industries, so the Japanese concentrated on consumer products; powerful conglomerates of industry and banks (zaibatsus) poured money into selected companies; and the Japanese people consented to great sacrifices in order to support the recovery.
American industry flourished in the postwar boom in the US, but found
itself getting hints and finally clear indications of Japanese competition in the
1970s. Hal Sperlich, a Ford executive, visited a Japanese auto factory in the early
seventies and was amazed to find that the factory had no area dedicated to repairing shoddy work; in fact, the plant had no inspectors. Sperlich left that
football fields. William Ouchi wrote that when he began to study Japanese
practices in 1973, there was little interest in the US in his findings. When his
book, Theory Z, was published in 1981, interest had grown tremendously and
the book was a best seller. However, even in 1981, a top officer in Motorola
warned American manufacturers of computer chips that they were complacent
and not paying enough attention to Japanese quality. In 1981, Ford engineers
compared automatic transmissions, some built by Mazda for the Ford Escorts,
and some built by Ford. The ones made in Japan were well liked by our customers; many of those from Ohio were not. Ours were more erratic; many shifted poorly through the gears, and customers said they didn't like the way they performed. The difference was due to the tighter tolerances in the Japanese-made transmissions.
In 1980, NBC aired the documentary "If Japan Can... Why Can't We?" The documentary
explained what Japan was doing and especially stressed the contributions of
Deming. Donald Petersen, then president of Ford, was one of many CEOs
motivated to call Deming. Deming said his phone rang off the hook.
In a return to using the brain, not just the brawn, of the worker, the
Japanese methods, building on Deming, actually deTaylorize work. The
classical Taylor model of scientific management, which favored the separation of
mental from physical labor and the retention of all decision making in the hands
of management, is abandoned in favor of a cooperative team approach
designed to harness the full mental capabilities and work experience of everyone
involved in the process ....
While the US had much to learn from Japanese methods, careful observers
realized that the differences between Japanese and American societies were so
great that not all ideas could be imported (some of the cooperation among
Japanese companies would violate US antitrust laws), that the Japanese methods
were not always what they seemed (for example, lifetime employment was limited to a minority), and that American companies, unheralded, were already using many of the new Japanese methods. Ouchi, in Theory Z, examined Japanese
practices in their treatment of workers, distilled them to the central ideas which
he called Theory Z, and discovered that the best examples of Theory Z
management were American companies.
FIGURE 2.2
Joseph Juran
Juran expressed his approach to quality in the form of the Quality Trilogy. Managing for quality involves three basic processes:

Quality Planning: This involves identifying the customers (both internal and external), determining their needs, and designing goods and services to meet those needs at the established quality and cost goals. The process is then designed and transferred to the operators.

Quality Control: This involves evaluating actual quality performance during operations, comparing it with the quality goals, and acting on the difference.

Quality Improvement: This involves raising quality to unprecedented levels, project by project, by diagnosing the causes of chronic waste and providing remedies.
The relationship among the three processes is shown in the Quality Trilogy
figure below:
FIGURE 2.3
The errors made during the initial planning result in a higher cost, which Juran labeled Chronic Waste. At the beginning, the process stays within control limits; a quality improvement project is then initiated and succeeds in reducing the chronic waste.
Juran also created the concept of the Cost of Quality. Four elements comprise the cost of quality:

Prevention Costs: Initial design quality and actions during product creation (e.g. marketing research, establishing product specifications, determining consumer needs, training workers, vendor evaluation, quality audits, preventive maintenance, etc.)

Appraisal Costs: The costs of measuring and inspecting products and processes to verify conformance (e.g. incoming inspection, in-process and final testing, calibration of equipment).

Internal Failure Costs: The costs of defects caught before delivery (e.g. scrap, rework, downtime, re-inspection).

External Failure Costs: The costs of defects that reach the customer (e.g. warranty claims, returns, complaint handling, loss of goodwill).
The graph in the following exhibit shows that the costs of conformance (appraisal and prevention) increase as the defect rate declines, while the costs of nonconformance (internal and external failures) decrease. The trade-off leads to an optimal conformance level.
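The shape of this trade-off can be illustrated numerically. All figures in the sketch below are invented for illustration; only the qualitative behavior (rising conformance costs, falling failure costs, an interior minimum) reflects the text.

```python
# Hypothetical cost curves (all numbers invented) showing the trade-off:
# conformance costs rise steeply as the defect rate is driven down, while
# nonconformance (failure) costs fall, so total cost has an interior minimum.
for q in (0.90, 0.95, 0.99, 0.999):           # conformance level
    conformance = 10 * q / (1 - q)            # appraisal + prevention (hypothetical)
    nonconformance = 5000 * (1 - q)           # internal + external failure (hypothetical)
    print(q, round(conformance + nonconformance))
# Total cost is lowest near q = 0.95 here: the "optimal conformance level".
```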
FIGURE 2.4
Philosophy
Step 8: Prove that the process can produce the product under operating conditions.
Philip Crosby
Crosby described quality as free and argued that zero defects was a desirable and achievable goal. He articulated his view of quality as the four absolutes of quality management:

1. Quality is defined as conformance to requirements, not as goodness or elegance.
2. The system for causing quality is prevention, not appraisal.
3. The performance standard must be zero defects, not "that's close enough."
4. The measurement of quality is the price of nonconformance, not indexes.

Supporting these absolutes, he stressed:

Integrity
Dedication to communication and customer satisfaction
Company-wide policies and operations which support the quality thrust
FIGURE 2.5
Fourteen Step Quality Programme: Philip B. Crosby
Step 4: Evaluate the cost of quality. This evaluation must highlight, using the measures established in the previous step, where quality improvement will be profitable.
Step 9: Hold a Zero Defects day to establish the attitude and expectation within the company. Crosby sees this as being achieved in a celebratory atmosphere accompanied by badges, buttons, and balloons.
True Total Quality, according to Masaaki Imai, requires recognising the importance of the commonsense approach of gemba (shop floor) kaizen to quality improvement, as against the technology-only approach to quality practised in the West.
The production system (batch production) employed by over 90% of all the
companies in the world is one of the biggest obstacles to quality improvement. A
conversion from a batch to a JIT (just-in-time)/lean production system should be
the most urgent task for
any manufacturing company today in order to survive in the next millennium.
ISHIKAWA DIAGRAM
FIGURE 2.6
A generic Ishikawa diagram showing general and more refined causes for
an event.
Definition: A graphic tool used to explore and display opinion about sources of
variation in a process. (Also called a Cause-and-Effect or Fishbone Diagram.)
Purpose: To arrive at a few key sources that contribute most significantly to the
problem being examined. These sources are then targeted for improvement. The
diagram also illustrates the relationships among the wide variety of possible
contributors to the effect.
The figure below shows a simple Ishikawa diagram. Note that this tool is
referred to by several different names: Ishikawa diagram, Cause-and-Effect
diagram, Fishbone diagram, and Root Cause Analysis. The first name is after the
inventor of the tool, Kaoru Ishikawa (1969), who first used the technique in the
1960s.
FIGURE 2.7
Only one tool has been created that adds computer analysis to the
fishbone. Bourne et al. (1991) reported using Dempster-Shafer theory (Shafer and
Logan, 1987) to systematically organize the beliefs about the various causes that
contribute to the main problem. Based on the idea that the main problem has a
total belief of one, each remaining bone has a belief assigned to it based on
several factors; these include the history of problems of a given bone, events and
their causal relationship to the bone, and the belief of the user of the tool about
the likelihood that any particular bone is the cause of the problem.
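As an illustrative sketch of that belief bookkeeping (this is not Bourne et al.'s actual tool; the cause names and mass values are invented), Dempster's rule of combination for two belief assignments over mutually exclusive fishbone causes looks like this:

```python
# Illustrative sketch: combining two basic belief assignments over mutually
# exclusive fishbone causes with Dempster's rule (singleton hypotheses only).
# Cause names and numbers are invented for illustration.

def combine(m1, m2):
    """m(a) is proportional to m1(a) * m2(a); conflicting mass is discarded
    and the remainder renormalized so the beliefs again sum to one."""
    agreement = {a: m1[a] * m2[a] for a in m1}     # mass where both sources agree
    conflict = 1 - sum(agreement.values())         # mass assigned to incompatible pairs
    return {a: v / (1 - conflict) for a, v in agreement.items()}

history = {"Machine": 0.5, "Method": 0.3, "Material": 0.2}   # belief from problem history
analyst = {"Machine": 0.6, "Method": 0.2, "Material": 0.2}   # belief of the tool's user
print(combine(history, analyst))   # Machine ~0.75, Method ~0.15, Material ~0.10
```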
How to Construct:
Tip:
FIGURE 2.8
FIGURE 2.9
Taguchi methods
Taguchi's work includes three principal contributions to quality statistics:
1. The Taguchi loss function;
2. The philosophy of off-line quality control; and
3. Innovations in the design of experiments.
Loss functions
Such losses are, of course, very small when an item is near to nominal. Donald J. Wheeler characterised the region within specification limits as that where we deny that losses exist. As we diverge from nominal, losses grow until the point where they are too great to deny and the specification limit is drawn. All these losses are, as W. Edwards Deming would describe them, unknown and unknowable, but Taguchi wanted to find a useful way of representing them within statistics.
The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function, on the grounds that it is the simplest useful approximation to any reasonable loss function near the target. His philosophy of off-line quality control divides design into three stages:
1. System design;
2. Parameter design; and
3. Tolerance design.
System design
Parameter design
Tolerance design
Design of experiments
Outer arrays
In his later work, R. A. Fisher started to consider the prospect of using design of
experiments to understand variation in a wider inductive basis. Taguchi sought to
understand the influence that parameters had on variation, not just on the mean.
He contended, as had W. Edwards Deming in his discussion of analytic studies,
that conventional sampling is inadequate here as there is no way of obtaining a
random sample of future conditions. In conventional design of experiments,
variation between experimental replications is a nuisance that the experimenter
would like to eliminate whereas, in Taguchi's thinking, it is a central object of
investigation.
Management of interactions
Many of the orthogonal arrays that Taguchi has advocated are saturated, allowing no scope for estimation of interactions between control factors, or inner-array factors. This is a continuing topic of controversy. However, by combining orthogonal arrays with an outer array consisting of noise factors, Taguchi's method provides complete information on interactions between control factors and noise factors. The argument is that these are the interactions of most interest in creating a system that is least sensitive to noise-factor variation.
Followers of Taguchi argue that the designs offer rapid results and that control-factor interactions can be eliminated by proper choice of quality characteristic (the ideal function) and by transforming the data; that notwithstanding, a confirmation experiment offers protection against any residual interactions. In his later teachings, Taguchi emphasized the need to use an ideal function related to the energy transformation in the system, which is an effective way to minimize control-factor interactions. Western statisticians counter that interactions are part of the real world and that Taguchi's arrays have complicated alias structures that leave interactions difficult to disentangle. George Box, and others, have argued that a more effective and efficient approach is to use sequential assembly.
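To make the inner/outer-array idea concrete, here is a small illustrative sketch. The L4 inner array is the standard two-level, three-factor orthogonal array; the response values are invented for illustration.

```python
# Illustrative crossed-array sketch: an L4 (two-level, three-factor) inner
# array of control-factor settings, each run repeated under two noise
# conditions (the outer array). Response values are invented.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]   # inner-array runs

# response measured at noise levels N1 and N2 for each inner run (hypothetical)
responses = [[12.1, 11.7], [13.0, 9.8], [11.9, 12.2], [10.5, 10.4]]

for settings, ys in zip(L4, responses):
    mean = sum(ys) / len(ys)
    spread = max(ys) - min(ys)        # how strongly the noise factors bite
    print(settings, round(mean, 2), round(spread, 2))
# Run (1, 2, 2) swings from 13.0 to 9.8 across the noise conditions, so that
# control-factor combination is the most sensitive to noise.
```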
Analysis of experiments
Assessment
Cost of Quality
But quality comes with a cost. The definition of the Cost of Quality is contentious. Some authors define it as the cost of non-conformance, i.e. how much producing nonconforming products would cost a company. This is a one-sided approach, since it does not consider the cost incurred to prevent non-conformance and, above all in a competitive market, the cost of improving the quality targets.
Cost of conformance
Prevention Costs
Appraisal Costs
Cost of non-conformance
Internal Failure
o Cost of reworking products that failed audit
o Cost of bad marketing
o Scrap
External Failure
o Cost of customer support
o Cost of shipping returned products
o Cost of reworking products returned from customers
o Cost of refunds
o Loss of customer goodwill
o Cost of discounts to recapture customers
On the other hand, not improving the quality level of the LCDs will lead
to an increase in the probability of having returned products from customers and
internal rework, therefore increasing the cost of nonconformance.
FIGURE 2.10
The Total Cost of Quality would be the sum of the cost of conformance and the cost of nonconformance; that cost would be C3 for a quality level of Q2.
C3 = C1 + C2.
FIGURE 2.11
Should the manufacturer decide that the quality level would be at Q1, the
cost of conformance (C2) would be higher than the cost of nonconformance (C1)
and the Total cost of Quality would be at C3.
FIGURE 2.12
But according to Taguchi, products that do not match the target, even if they are within the specified limits, do not operate as intended; any deviation from the target, be it within the specified limits or not, will generate a financial loss to the customers, the company, and society, and the loss is proportional to the deviation from the target.

The loss function quantifies the deviation from the target and assigns a financial value to the deviation.
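In symbols, the standard quadratic form usually associated with Taguchi's loss function is shown below, with y the measured value, m the target, and the coefficient k fixed by the loss A incurred at the functional limit Δ:

```latex
L(y) = k\,(y - m)^2, \qquad k = \frac{A}{\Delta^2}
```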
FIGURE 2.13
The graph that depicts the financial loss to society resulting from a deviation from the target resembles the Total Cost of Quality U-graph that we built earlier, but the premises that helped build them are not the same. While the Total Cost curve was built based on the costs of conformance and nonconformance, Taguchi's Loss Function is primarily based on the deviation from the target and measures the loss from the customer's expectation perspective.
Example:
Solution:
A = $95, the loss incurred when the functional limits (±0.02) are exceeded
k = A / Δ² = 95 / (0.02)² = 95 / 0.0004 = 237,500
For a bolt that deviates from the target by 0.01:
L = k(y − m)² = 237,500 × (0.01)² = $23.75
Therefore, not producing a bolt that matches the target would have resulted in a financial loss to society that amounted to $23.75.
Since the deviation from the target is the source of financial loss to society,
what needs to be done in order to prevent any deviation from the set target?
The first thought might be to reduce the specification range and improve online quality control: bring the specified limits closer to the target and inspect more samples during the production process in order to find the defective products before they reach the customers. But this would not be a good option, since it would only address the symptoms and not the root causes of the problem. It would also be an expensive alternative, because it would require more inspection, which would at best help detect nonconforming parts early enough to prevent them from reaching the customers.
The root of the problem is in fact the variation within the production
process, i.e. the value of sigma, the standard deviation from the mean.
Let's illustrate this assertion with an example. Suppose that the length of a screw is a Critical-To-Quality (CTQ) characteristic and the target is determined to be 15, with an LCL of 14.96 and a UCL of 15.04. The following sample was taken for testing:
15.02, 14.99, 14.96, 15.03, 14.98, 14.99, 15.03, 15.01, 14.99
All the observed items in this sample fall within the control limits, even though not all of them match the target. The mean is 15 and the standard
deviation is 0.023979. Should the manufacturer decide to improve the quality of
the output by reducing the range of the control limits to 14.98 and 15.02, three of
the items in the sample would have failed audit and would have to be reworked
or discarded.
Suppose instead that the variability of the process is reduced and the following sample is then observed:

15.01, 15.00, 14.99, 15.01, 14.99, 14.99, 15.00, 15.01, 15.00
The mean is still 15, but the standard deviation has been reduced to
0.00866 and all the observed items are closer to the target. Reducing the
variability around the target has resulted in improving quality in the production
process at a lower cost.
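The two sets of statistics can be checked directly; the short sketch below reproduces them (only the data come from the text, and the rounding is illustrative):

```python
# Reproduces the statistics quoted for the two screw samples.
from statistics import mean, stdev

before = [15.02, 14.99, 14.96, 15.03, 14.98, 14.99, 15.03, 15.01, 14.99]
after  = [15.01, 15.00, 14.99, 15.01, 14.99, 14.99, 15.00, 15.01, 15.00]

print(round(mean(before), 3), round(stdev(before), 6))  # 15.0 0.023979
print(round(mean(after), 3),  round(stdev(after), 5))   # 15.0 0.00866

# Tightening the limits to 14.98-15.02 fails three items in the first sample:
print([y for y in before if not 14.98 <= y <= 15.02])   # [14.96, 15.03, 15.03]
```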
This is not to suggest that the tolerance around the target should never be reduced; addressing the tolerance limits should be done under specific conditions and only after the variability around the target has been reduced. Since variability is a source of financial loss to producers, customers, and society at large, it is necessary to determine what the sources of variation are so that actions can be taken to reduce them. According to Taguchi, these sources of variation, which he calls noise factors, can be reduced to three:

External noise factors: variation in the environment and conditions of use (temperature, humidity, dust, operator handling), which the producer cannot control;
Internal (deterioration) noise factors: wear, fatigue, and aging of the product itself;
Unit-to-unit noise factors: piece-to-piece variation that arises from the manufacturing process.
But if online quality control is not the appropriate way to reduce production variations, what needs to be done to prevent deviations from the target?
Concept Design
Parameter Design
The next step in the production process is parameter design. After the design architecture has been selected, the producer will need to set the parameter design. Parameter design consists of selecting the best combination of control factors to optimize the quality level of the product by reducing the product's sensitivity to noise factors. Control factors are parameters over which the designer has control. When an engineer designs a computer, he has control over factors such as the CPU, system board, LCD, memory, and LCD cables. He determines what CPU best fits a motherboard, what memory stick and what wireless network card to use, and how to design the system board so that the parts fit together easily. The way he combines those factors will impact the quality level of the computer.
The producer wants to design products at the lowest possible cost and at the same time obtain the best quality result under current technology. To do so, the combination of the control factors must be optimal, while the effect of the noise factors must be so minimal that they have no negative impact on the functionality of the products. So the experiment that leads to the optimal results will require the identification of the noise factors, because they are part of the process and their effects need to be controlled.
One of the first steps the designer will take is to determine what the
optimal quality level is. He will need to determine what the functional
requirements are, assess the Critical-To-Quality characteristics of the product
and specify their targets. The determination of the CTQs and their targets
depends, among other criteria, on the customer requirements, the cost of production, and current technology. The engineer is seeking to produce the optimal design: a product that is insensitive to noise factors. The quality level of the CTQ characteristics of the product under optimal conditions depends on whether the response experiment is static or dynamic.
FIGURE 2.14
The response experiment is said to be static when the quality level of the CTQ characteristic is fixed. In that case, the optimization process will seek to determine the optimal combination of factors that enables the targeted value to be reached. This happens in the absence of a signal factor: the only input factors are the control factors and the noise factors. When we build a table, we determine all the CTQ targets and we want to produce a balanced table with all the parts matching the targets.
The Bigger-The-Better

If the number of minutes per dollar customers get from their cellular phone service provider is critical to quality, the customers will want to get the maximum number of minutes they can for every dollar they spend on their phone bills. The Signal-to-Noise ratio for the Bigger-The-Better is:

S/N = −10 log₁₀ [(1/n) Σ (1/yᵢ²)]

The Smaller-The-Better

Vibrations are critical to quality for a car: the less vibration the customers feel while driving their cars, the better and the more attractive the cars are. The Signal-to-Noise ratio for the Smaller-The-Better is:

S/N = −10 log₁₀ [(1/n) Σ yᵢ²]

The Nominal-The-Best

When a customer buys ceramic tiles to decorate his bathroom, the size of the tiles is critical to quality; tiles that do not match the predetermined target will not line up correctly against the bathroom walls. The Signal-to-Noise ratio for the Nominal-The-Best is:

S/N = 10 log₁₀ (ȳ² / s²)
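These three ratios are straightforward to compute; a minimal sketch follows (the formulas are the standard textbook forms given above; the sample data are invented):

```python
# Standard Taguchi signal-to-noise ratios; the tile data are invented.
import math

def sn_bigger_the_better(ys):
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    return -10 * math.log10(sum(y**2 for y in ys) / len(ys))

def sn_nominal_the_best(ys):
    m = sum(ys) / len(ys)                              # sample mean
    s2 = sum((y - m)**2 for y in ys) / (len(ys) - 1)   # sample variance
    return 10 * math.log10(m**2 / s2)

tiles = [15.02, 14.99, 14.96, 15.03, 14.98]   # tile sizes near a target of 15
print(round(sn_nominal_the_best(tiles), 1))   # ~54.3; higher means less spread
```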
Tolerance Design
Parameter design may not completely eliminate variations from the target. That's why tolerance design must be used for all parts of a product to limit the possibility of producing defective products. The tolerance around the target is usually set by the design engineers; it is defined as the range within which variation may take place. The tolerance limits are set after testing and experimentation. The setting of the tolerance must be determined by criteria such as the set target, the safety factors, the functional limits, the expected quality level, and the financial cost of any deviation from the target.
The safety factor measures the loss incurred when products that are outside the specified limits are produced. It is given by:

φ = √(A₀ / A)

with A₀ being the loss incurred when the functional limits are exceeded and A being the loss when the tolerance limits are exceeded. The tolerance specification for the response factor will then be:

Δ = Δ₀ / φ

where Δ₀ is the functional limit.
Example:
The functional limits of a conveyor motor are ±0.05 of the response RPM. The adjustments made at the audit station before a motor leaves the company cost $2.50, and the cost associated with a defective motor once it has been sold is on average $180.
Solution:
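Using the safety-factor formulas above with A₀ = $180 (the field-failure loss), A = $2.50 (the factory adjustment cost), and Δ₀ = 0.05, the computation runs as follows (a reconstruction under those readings of the example):

```latex
\varphi = \sqrt{\frac{A_0}{A}} = \sqrt{\frac{180}{2.5}} = \sqrt{72} \approx 8.49,
\qquad
\Delta = \frac{\Delta_0}{\varphi} = \frac{0.05}{8.49} \approx 0.0059
```

So the tolerance specification would be roughly ±0.006 RPM around the target response.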
Shigeo Shingo
Shigeo Shingo's life-long work has contributed to the well being of everyone in
the world. Shigeo Shingo along with Taiichi Ohno, Kaoru Ishikawa and others
has helped to revolutionise the way we manufacture goods. His improvement
principles vastly reduce the cost of manufacturing, which means more products for more people. They make the manufacturing process more responsive while opening the way to new and innovative products with fewer defects and better quality.
He was one of the first to create strategies for the continuous and total involvement of all employees. Shingo's never-ending spirit of inquiry challenged the status quo at every level; he proposed that everything could be improved.
Shingo believed that inventory is not just a necessary evil but absolute evil. He is one of the pioneers of change management. He brought about many new concepts, such as ZD (Zero Defects); shifting the use of statistics from acceptance or rejection in SQC (Statistical Quality Control) to SPC (Statistical Process Control); SMED (Single Minute Exchange of Dies); POKA-YOKE (mistake proofing); and defining processes and operations in the two dimensions of VA (Value Addition) and non-VA.
Quality circle
Quality Circles were started in Japan in 1962 (Kaoru Ishikawa has been credited with creating Quality Circles) as another method of improving quality. The
movement in Japan was coordinated by the Japanese Union of Scientists and
Engineers (JUSE). Prof. Ishikawa, who believed in tapping the creative potential
of workers, innovated the Quality Circle movement to give Japanese industry
that extra creative edge. A Quality Circle is a small group of employees from the
same work area who voluntarily meet at regular intervals to identify, analyse,
and resolve work related problems. This can not only improve the performance
of any organisation, but also motivate and enrich the work life of employees.
Background
In the early 1990s, the U.S. National Labor Relations Board (NLRB) made
several important rulings regarding the legality of certain forms of quality
circles. These rulings were based on the 1935 Wagner Act, which prohibited
company unions and management-dominated labor organizations. One NLRB
ruling found unlawful those quality programs that were established by the firm, featured agendas dominated by the firm, and addressed the conditions of employment within the firm. Another ruling held that a company's labor-
management committees were in effect labor organizations used to bypass
negotiations with a labor union. As a result of these rulings, a number of
employer representatives expressed their concern that quality circles, as well as
other kinds of labor-management cooperation programs, would be hindered.
However, the NLRB stated that these rulings were not general indictments
against quality circles and labor-management cooperation programs, but were
aimed specifically at the practices of the companies in question.
Successful quality circles, however, offer a wide variety of benefits for small
businesses. For example, they serve to increase management's awareness of
employee ideas, as well as employee awareness of the need for innovation within
the company. Quality circles also serve to facilitate communication and increase
commitment among both labor and management. In enhancing employee
satisfaction through participation in decision-making, such initiatives may also
improve a small business's ability to recruit and retain qualified employees. In
addition, many companies find that quality circles further teamwork and reduce
employee resistance to change. Finally, quality circles can improve a small
business's overall competitiveness by reducing costs, improving quality, and
promoting innovation.
5S
The five Ss referred to in Lean are:
Sort
Straighten
Shine
Standardize
Sustain
Seiri - Put things in order (remove what is not needed and keep
what is needed)
Seiton - Proper Arrangement (Place things in such a way that they
can be easily reached whenever they are needed)
Seiso - Clean (keep things clean and polished; no trash or dirt in the workplace)
Seiketsu - Purity (Maintain cleanliness after cleaning - perpetual
cleaning)
Shitsuke - Commitment (a typical teaching and attitude towards
any undertaking to inspire pride and adherence to standards
established for the four components)
A place for everything (the first three Ss) and everything in its place (the last two Ss)
2.12 8D METHODOLOGY
8 Disciplines
SUMMARY
REVIEW QUESTIONS