QUALITY PARADIGM:
Values, Goals, Controls, Information, and Consciousness
We all have needs, requirements, wants, expectations and desires. Needs are what is essential for life, for maintaining certain standards, or for products and services to fulfill the purpose for which they have been acquired. Requirements are what we request of others and may encompass our needs, but often we don't fully realize what we need until after we have made our request. For example, having bought a mobile phone we later discover we need hands-free operation while driving and didn't think to ask for it at the time of purchase. Hence our requirements at the moment of sale may or may not express all our needs. Our requirements may include wants - what we would like to have but which is not practical. For example, we want a cheap computer but need a top-of-the-range model for what we intend to use it for. Expectations are implied needs or requirements. They have not been requested because we take them for granted - we regard them as understood within our particular society as the accepted norm. They may be things to which we are accustomed, based on fashion, style, trends or previous experience. Hence one expects sales staff to be polite and courteous, electronic products to be safe and reliable, food to be fresh and uncontaminated, tap water to be potable, policemen to be honest, and people or organizations to do what they promise to do. In particular we expect goods and services to comply with the relevant laws of the country of sale and expect the supplier to know which laws apply. Desires are our innermost feelings about ourselves and our surroundings - what we would like most.
Everyone nowadays is laying claim to quality. At Ford, quality is job one. GM is putting quality on the road. Chrysler makes the best-built American car, and Lee Iacocca can't figure out why, when two cars come off the same American assembly line, people prefer the one with the foreign label. Quality has turned vanquished economies into mighty post-war powers. The American posture is uneasy, looking at its competitors' progress for a fix on quality.
The narrow interpretation of quality control is producing a quality product. The Japanese expand this definition in a broader sense to include: quality of work, quality of service, quality of information, quality of process, quality of division, quality of people (including workers, engineers, managers, and executives), quality of system, quality of company, and quality of objectives.
higher learning today revolve around the basic principles developed by these individuals. These predecessors based their ideas mainly on improving the production processes in firms and did not expand these ideas to other functional departments in companies.
The Japanese, and specifically Dr. Ishikawa, decided to expand on the American ideas and relate them to the operation of every department. He expanded on these ideas with the same goal as his American colleagues: to provide a quality product so as to maintain a high service level and a good working relationship with customers. Dr. Ishikawa expanded the idea to develop new principles for quality control: always use quality control as the basis for decisions; integrate the control of cost, price, and profit; and control the quantities of stock, production, and sales, and the dates of delivery.
2. QUALITY PERSPECTIVES
In supplying products or services there are three fundamental parameters which determine
their saleability or usability. They are price, quality and delivery. Customers require products
and services of a given quality to be delivered by or be available by a given time and to be of
a price which reflects value for money. These are the requirements of customers. An
organization will survive only if it creates and retains satisfied customers and this will only be
achieved if it offers for sale products or services which respond to customer needs,
expectations, requirements and desires. Whilst price is a function of cost, profit margin and
market forces and delivery is a function of the organization’s efficiency and effectiveness,
quality is determined by the extent to which a product or service successfully meets the
expectations, needs and requirements of the user during usage (not just at the point of sale).
ISO 9001 addresses quality goals through the use of the term ‘quality objectives’ but goes no
further. The purpose of a quality system is to enable you to achieve, sustain and improve
quality economically. It is unlikely that you will be able to produce and sustain the required
quality unless you organize yourselves to do so. Quality does not happen by chance - it has to
be managed. No human endeavour has ever been successful without having been planned,
organized and controlled in some way.
The quality system is a tool and, like any tool, can be a valuable asset (or it can be abused, neglected or misused!). Depending on your strategy, quality systems enable you to achieve all the quality goals. Quality systems have a similar purpose to financial control systems, information technology systems, inventory control systems and personnel management systems. They organize resources so as to achieve certain objectives by laying down rules and an infrastructure which, if followed and maintained, will yield the desired results. Whether it is the management of costs, inventory, personnel or quality, systems are needed to focus the thought and effort of people towards prescribed objectives. Quality systems focus on the quality of what the organization produces, the factors which will cause the organization to achieve its goals, the factors which might prevent it from satisfying customers and the factors which might prevent it from being productive, innovative and profitable. Quality systems should therefore cause conforming product and prevent nonconforming product.
Quality systems can address one of the quality goals or all of them, and they can be as small or as large as you want them to be. They can be project specific, or they can be limited to quality control, that is, maintaining standards rather than improving them. They can include Quality Improvement Programmes (QIPs) or encompass what is called Total Quality Management (TQM). This book, however, only addresses one type of quality system: that which is intended to meet ISO 9000, which currently focuses on the quality of the outgoing product alone.
We have only to look at the introductory clauses of ISO 9001 to find that the aim of the
requirements is to achieve customer satisfaction by prevention of nonconformities. Hence
quality management is a means for planning, organizing and controlling the prevention of
failure. All the tools and techniques that are used in quality management serve to improve our
ability to succeed in our pursuit of excellence.
Quality does not appear by chance, or if it does it may not be repeated. One has to design quality into products and services. It has often been said that one cannot inspect quality into a product. A product remains the same after inspection as it did before, so no amount of inspection will change the quality of the product. However, what inspection does is measure quality in a way that allows us to make decisions on whether to release a piece of work. Work that passes inspection should be quality work, but inspection unfortunately is not 100% reliable. Most inspection relies on the human judgment of the inspector, and human judgment
can be affected by many factors, some of which are outside our control, such as the private life, health or mood of the inspector. We may fail to predict the effect that our decisions have on others. Sometimes we go to great lengths in preparing organizational changes and find to our surprise that we neglected something or underestimated the effect of something. So we need means other than inspection to deliver quality products. It is costly anyway to rely only on inspection to detect failures - we have to adopt practices that enable us to prevent failures from occurring. This is what quality management is all about.
the sale of products and services to customers. The difference is that ISO 9000 and other
standards used in a regulatory manner are not concerned with an organization's efficiency or
effectiveness in delivering profit. However, they are concerned indirectly with nurturing the
values that determine the behavior of the people who make decisions that affect product or
service quality.
What is quality? You will come across several terms that all seem to relate to the concept of quality. It can be quite confusing working out what the difference is between them. We've defined the key terms that you need to know below:
Term: Quality
Quality is first and foremost about meeting the needs and expectations of customers. It is important to understand that quality is about more than a product simply "working properly". Think about your needs and expectations as a customer when you buy a product or service. These may include performance, appearance, availability, delivery, reliability, maintainability, cost effectiveness and price. Think of quality as representing all the features of a product or service that affect its ability to meet customer needs. If the product or service meets all those needs - then it passes the quality test. If it doesn't, then it is sub-standard.

Term: Quality Management
Producing products of the required quality does not happen by accident. There has to be a production process which is properly managed. Ensuring satisfactory quality is a vital part of the production process. Quality management is concerned with controlling activities with the aim of ensuring that products and services are fit for their purpose and meet the specifications. There are two main parts to quality management:
(1) Quality assurance
(2) Quality control

Term: Quality Assurance
Quality assurance is about how a business can design the way a product or service is produced or delivered to minimize the chances that output will be sub-standard. The focus of quality assurance is, therefore, on the product design/development stage. Why focus on these stages? The idea is that, if the processes and procedures used to produce a product or service are tightly controlled, then quality will be "built in". This will make the production process much more reliable, so there will be less need to inspect production output (quality control). Quality assurance involves developing close relationships with customers and suppliers. A business will want to make sure that the suppliers to its production process understand exactly what is required - and deliver!

Term: Quality Control
Quality control is the traditional way of managing quality. A further revision note deals with this in more detail. Quality control is concerned with checking and reviewing work that has been done. For example, this would include lots of inspection, testing and sampling. Quality control is mainly about "detecting" defective output, rather than preventing it. Quality control can also be a very expensive process. Hence, in recent years, businesses have focused on quality management and quality assurance.
Under traditional quality control, inspection of products and services (checking to make sure
that what's being produced is meeting the required standard) takes place during and at the end
of the operations process.
There are three main points during the production process when inspection is performed:
1 When raw materials are received, prior to entering production
2 Whilst products are going through the production process
3 When products are finished - inspection or testing takes place before products are dispatched to customers
The problem with this sort of inspection is that it doesn't work very well!
There are several problems with inspection under traditional quality control:
1 The inspection process does not add any "value". If there were any guarantees that no
defective output would be produced, then there would be no need for an inspection process
in the first place!
2 Inspection is costly, in terms of both tangible and intangible costs: for example, materials, labor, time, employee morale, customer goodwill, lost sales
3 It is sometimes done too late in the production process. This often results in defective or non-acceptable goods actually being received by the customer
4 It is usually done by the wrong people - e.g. by a separate "quality control inspection team"
rather than by the workers themselves
5 Inspection is often not compatible with more modern production techniques (e.g. "Just in
Time Manufacturing") which do not allow time for much (if any) inspection.
6 Working capital is tied up in stocks which cannot be sold
7 There is often disagreement as to what constitutes a "quality product". For example, to meet
quotas, inspectors may approve goods that don't meet 100% conformance, giving the
message to workers that it doesn't matter if their work is a bit sloppy. Or one quality control
inspector may follow different procedures from another, or use different measurements.
As a result of the above problems, many businesses have focused their efforts on improving
quality by implementing quality management techniques - which emphasize the role of
quality assurance. As Deming (a "quality guru") wrote:
"Inspection with the aim of finding the bad ones and throwing them out is too late, ineffective,
costly. Quality comes not from inspection but from improvement of the process."
The ISO definition states that quality control is the operational techniques and activities that
are used to fulfill requirements for quality. This definition could imply that any activity
whether serving the improvement, control, management or assurance of quality could be a
quality control activity. What the definition fails to tell us is that controls regulate
performance. They prevent change and, when applied to quality, regulate quality performance and prevent undesirable changes in the quality standards. Quality control is a process for maintaining standards, not for creating them. Standards are maintained through a process of selection, measurement and correction of work, so that only those products or services which emerge from the process meet the standards. In simple terms, quality control prevents undesirable changes being present in the quality of the product or service being supplied. The simplest form of quality control is illustrated in the Figure below. Quality control can be applied to particular products, to processes which produce the products, or to the output of the whole organization by measuring the overall quality performance of the organization.
Quality control is often regarded as a post-event activity, i.e. a means of detecting whether quality has been achieved and taking action to correct any deficiencies. However, one can control results by installing sensors before, during or after the results are created. It all depends on where you install the sensor, what you measure and the consequences of failure. Some failures cannot be allowed to occur and so must be prevented from happening through rigorous planning and design. Other failures are not so critical but must be corrected immediately using automatic controls or fool-proofing. Where the consequences are less severe, or where other types of sensor are not practical or possible, human inspection and test can be used as a means of detecting failure. Where failure cannot be measured without observing trends over longer periods, one can use information controls. They do not stop immediate operations but may well be used to stop further operations when limits are exceeded. If you have no controls then quality products are produced by chance and not by design. The more controls you install, the more certain you are of producing products of consistent quality, but there is a balance to be achieved. Beware of the law of diminishing returns.
It is often deemed that quality assurance serves prevention and quality control detection, but a control installed to detect failure before it occurs serves prevention, such as one which reduces the tolerance band to well within the specification limits. So quality control can prevent failure. Assurance is the result of an examination, whereas control produces the result. Quality Assurance does not change the product; Quality Control does.
Quality Control is also a term used as a name of a department. In most cases Quality Control
Departments perform inspection and test activities and the name derives from the authority
that such departments have been given. They sort good products from bad products and
authorize the release of the good products. It is also common to find that Quality Control
Departments perform supplier control activities which are called Supplier Quality Assurance
or Vendor Control. In this respect they are authorized to release products from suppliers into
the organization either from the supplier's premises or on receipt in the organization.
Since to control anything requires the ability to effect change, the title Quality Control Department is in fact a misuse of the term, since such departments do not in fact control quality. They do act as a regulator if given the authority to stop release of product, but this is control of supply and not of quality. Authority to change product usually remains in the hands of the producing departments. It is interesting to note that similar activities within a Design Department are not called quality control but Design Assurance or some similar term. Quality Control has for decades been a term applied primarily in the manufacturing areas of an organization, and hence it is difficult to change people's perceptions after so many years of the term's incorrect use.
In recent times the inspection and test activities have been transferred into the production
departments of organizations, sometimes retaining the labels and sometimes reverting to the
inspection and test labels.
Control of quality, or anything else for that matter, can be accomplished by the following
steps:
- Determine what parameter is to be controlled.
- Establish its criticality and whether you need to control before, during or after results are produced.
- Establish a specification for the parameter to be controlled which provides limits of acceptability and units of measure.
- Produce plans for control which specify the means by which the characteristics will be achieved and variation detected and removed.
- Organize resources to implement the plans for quality control.
- Install a sensor at an appropriate point in the process to sense variance from specification.
- Collect and transmit data to a place for analysis.
- Verify the results and diagnose the cause of variance.
- Propose remedies and decide on the action needed to restore the status quo.
- Take the agreed action and check that the variance has been corrected.
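To make these steps concrete, here is a minimal sketch in Python. It is an illustration only: the parameter, its specification limits and the sample readings below are hypothetical, not taken from the text.

    # Minimal sketch of the quality control cycle described above (hypothetical values).

    def within_spec(measurement, lower, upper):
        """Compare the sensed value against the specification limits."""
        return lower <= measurement <= upper

    def control_cycle(sensor_readings, lower=9.8, upper=10.2):
        """Collect data, detect variance from specification and report what needs correction."""
        variances = []
        for i, value in enumerate(sensor_readings):
            if not within_spec(value, lower, upper):
                # Diagnose and record the variance so a remedy can be agreed.
                variances.append((i, value))
        return variances

    if __name__ == "__main__":
        # Hypothetical sensor data for the controlled parameter.
        readings = [10.0, 10.1, 10.4, 9.9, 9.6]
        for index, value in control_cycle(readings):
            print(f"Reading {index}: {value} is outside specification; corrective action needed")

The sensor, the specification and the corrective-action loop correspond to the steps listed above; in practice the sensor may sit before, during or after the results are produced, as discussed earlier.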
- Knowledge that the declared intentions are actually being followed (this may be gained by personal assessment or reliance on independent audits)
- Knowledge that the products and services meet your requirements (this may be gained by personal assessment or reliance on independent audits)
You can gain an assurance of quality by testing the product/service against prescribed
standards to establish its capability to meet them. However, this only gives confidence in the
specific product or service purchased and not in its continuity or consistency during
subsequent supply. Another way is to assess the organization which supplies the
products/services against prescribed standards to establish its capability to produce products
of a certain standard. This approach may provide assurance of continuity and consistency of
supply.
Quality assurance activities do not control quality; they establish the extent to which quality will be, is being or has been controlled. This is borne out by ISO 8402:1994, where it is stated that quality control concerns the operational means to fulfill quality requirements and quality assurance aims at providing confidence in this fulfillment, both within the organization and externally to customers and authorities. All quality assurance activities are post-event activities and off-line, and serve to build confidence in results, in claims, in predictions etc. If a person tells you they will do a certain job for a certain price in a certain time, can you trust them, or will they be late, overspent and under spec? The only way to find out is to gain confidence in their operations, and that is what quality assurance activities are designed to do. Quite often, the means to provide the assurance need to be built into the process, such as creating records, documenting plans, specifications, reviews etc. Such documents and activities also serve to control quality as well as assure it (see also ISO 8402). ISO 9001 provides a means for obtaining an assurance of quality, if you are the customer, and a means for controlling quality, if you are the supplier.
Quality assurance is often perceived as the means to prevent problems but this is not
consistent with the definition in ISO 8402. In one case the misconception arises due to people
limiting their perception of quality control to control during the event and not appreciating
that you can control an outcome before the event by installing mechanisms to prevent failure
such as automation, mistake-proofing, failure prediction etc. Juran provides a very lucid
analysis of control before, during and after the event in Managerial Breakthrough.
In another case, the misconception arises due to the label attached to the ISO 9000 series of
standards. They are sometimes known as the Quality Assurance standards when in fact, as a
family of standards, they are Quality System standards. The requirements within the
standards do aim to prevent problems and hence the association with the term Quality
Assurance. Only ISO 9001, ISO 9002 and ISO 9003 are strictly Quality Assurance Standards.
It is true that by installing a quality system, you will gain an assurance of quality, but
assurance comes about through knowledge of what will be, is being or has been done, rather
than by doing it. Assurance is not an action but a result. It results from obtaining reliable
information that testifies the accuracy or validity of some event or product. Labelling the
prevention activities as Quality Assurance activities may have a negative effect, particularly if you have a Quality Assurance Department. It could send out signals that the aim of the Quality Assurance Department is to prevent things from happening! Such a label could unintentionally give the department a law enforcement role.
Quality Assurance Departments are often formed to provide both customer and management
with confidence that quality will be, is being and has been achieved. However, another way
of looking upon Quality Assurance departments is as Corporate Quality Control. Instead of
measuring the quality of products they are measuring the quality of the business and by doing
so are able to assure management and customers of the quality of products and services.
Assurance of quality can be gained by the following steps illustrated diagrammatically in the
Figure below:
- Acquire the documents which declare the organization’s plans for achieving quality.
- Produce a plan which defines how an assurance of quality will be obtained, i.e. a quality assurance plan.
- Organize the resources to implement the plans for quality assurance.
- Establish whether the organization’s proposed product or service possesses characteristics which will satisfy customer needs.
- Assess operations, products and services of the organization and determine where and what the quality risks are.
- Establish whether the organization’s plans make adequate provision for the control, elimination or reduction of the identified risks.
- Determine the extent to which the organization’s plans are being implemented and risks contained.
- Establish whether the product or service being supplied has the prescribed characteristics.
In judging the adequacy of provisions you will need to apply the relevant standards,
legislation, codes of practices and other agreed measures for the type of operation, application
and business. These activities are quality assurance activities and may be subdivided into
design assurance, procurement assurance, manufacturing assurance etc. Auditing, planning,
analysis, inspection and test are some of the techniques which may be used.
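By way of illustration (nothing below is prescribed by the text; the field names and example entries are hypothetical), the assessment steps above can be thought of as maintaining a simple risk register that records each identified quality risk, the provision the plans make for it, and whether the assessment found it contained.

    # Illustrative sketch of a quality assurance risk register (hypothetical fields and data).
    from dataclasses import dataclass

    @dataclass
    class QualityRisk:
        area: str           # e.g. design, procurement, manufacturing
        description: str    # what could prevent the prescribed characteristics being achieved
        provision: str      # how the plans control, eliminate or reduce the risk
        contained: bool     # whether the assessment found the risk adequately contained

    def assurance_summary(risks):
        """Report the extent to which identified risks are contained by the plans."""
        open_risks = [r for r in risks if not r.contained]
        return f"{len(risks) - len(open_risks)} of {len(risks)} identified risks contained"

    if __name__ == "__main__":
        register = [
            QualityRisk("procurement", "supplier material below grade", "incoming inspection", True),
            QualityRisk("design", "tolerance stack-up not analysed", "design review planned", False),
        ]
        print(assurance_summary(register))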
ISO-9000 is a quality assurance standard, designed for use in assuring customers that
suppliers have the capability of meeting their requirements.
Quality improvement (for better control) is a process for changing standards. It is not a
process for maintaining or creating new standards. Standards are changed through a process
of selection, analysis, corrective action on the standard or process, education and training.
The standards which emerge from this process are an improvement from those used
The transition between where quality improvement stops and quality control begins is where
the level has been set and the mechanisms are in place to keep quality on or above the set
level. In simple terms if quality improvement reduces quality costs from 25% of turnover to
10% of turnover, the objective of quality control is to prevent the quality costs rising above
10% of turnover. This is illustrated below.
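As a small illustration of that handoff (the 25% and 10% figures come from the example above; the period-by-period data below are invented), quality improvement drives the quality-cost ratio down to the set level, and quality control then flags any period in which costs rise back above it.

    # Illustrative only: quality costs as a fraction of turnover, period by period (hypothetical data).
    history = [0.25, 0.20, 0.14, 0.10, 0.09, 0.11, 0.10]
    control_level = 0.10  # the level set once the improvement project is complete

    improvement_done = False
    for period, ratio in enumerate(history):
        if ratio <= control_level:
            improvement_done = True  # improvement has brought costs down to the new level
        if improvement_done and ratio > control_level:
            # From here on it is quality control's job to stop costs rising above the level.
            print(f"Period {period}: quality costs {ratio:.0%} exceed the control level of {control_level:.0%}")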
Improvement by better control is achieved through the corrective action mechanisms; improvement by raising standards requires a different process, one which results in new standards. Improving quality by raising standards can be accomplished by the following steps, illustrated diagrammatically in the figure below:
- Determine the objective to be achieved, e.g. new markets, products or technologies, or new levels of organizational efficiency or managerial effectiveness, new national standards or government legislation. These provide the reasons for needing change.
- Determine the policies needed for improvement, i.e. the broad guidelines to enable management to cause or stimulate the improvement.
- Conduct a feasibility study. This should discover whether accomplishment of the objective is feasible and propose several strategies or conceptual solutions for consideration. If feasible, approval to proceed should be secured.
- Produce plans for the improvement which specify the means by which the objective will be achieved.
- Organize the resources to implement the plan.
- Carry out research, analysis and design to define a possible solution and credible alternatives.
- Model and develop the best solution and carry out tests to prove it fulfils the objective.
- Identify and overcome any resistance to the change in standards.
- Implement the change, i.e. put new products into production and new services into operation.
- Put in place the controls to hold the new level of performance.
This improvement process will require controls to keep improvement projects on course
towards their objectives. The controls applied should be designed in the manner described
previously.
3. Interpret quality in the broad sense to take into account all departments of the
company versus concentrating solely on the production department.
4. Maintain a competitive price regardless of how high the level of quality is. Despite the quality, customers will not purchase excessively expensive products. Companies cannot implement quality in a way that ignores price, profit, and cost control. These companies should aim to produce a product with just quality, just price, and just amount.
Those six steps follow the plan-do-check-action approach modified by Dr. Ishikawa. The first
two steps involve the planning aspect, steps three and four follow the doing aspect, step five
is the checking step, and step six is the action portion of the method. Each of these steps
involves a great deal of work.
Unless top management determines standardized policies, the company will not reach its
goals. These policies need mutual understanding and adherence by all subordinates at every
level in the hierarchical tree. Direct subordinates need to fully understand these policies in
order to implement and enforce them in their individual departments. The mutual
understanding by all employees puts everyone on the same page. This allows the company to
proceed towards accomplishing its goals. Companies should set goals based on their
problems at a company level and set specific limits on the goals. This makes the goals more
explicit to everyone striving to attain them. With that said, top management should take into
account the feelings and opinions of their subordinates when setting these guidelines.
Determining the methods of reaching the goals set forth by top management involves the
standardization of work. An idea can work for an individual, but if those around him do not
adopt it, confusion will arise, hence defeating the purpose of standardized work. The managers should again take the subordinates' input into account for more effective implementation of the standardized procedures. This approach looks at potential problems, and the standardization solves and eliminates these problems before they ever surface.
Engaging in education and training eliminates many human errors before they occur. Good
goals and planning are useless unless the workers understand what they need to do and
exactly how to do it. This does not mean classroom work exclusively. This step involves
everything from one-on-one meetings between management and subordinates to classroom
work to hands-on training with equipment and machinery. This step also involves the
personal nurturing of every employee. Building a mutual trust between management and
subordinates is vital to a cooperative team effort.
If management follows the previous three steps, implementation should not pose any
significant, unfixable problems. Problems will arise at every step in the process, but
management should solve these problems immediately as they surface.
The last two steps involve observing any negative effects that implementation presents and taking appropriate action to correct them. These steps involve finding the root cause of any problem and then systematically solving it in a standardized manner. Solutions presented must not only solve the problem, but must also prevent its recurrence.
Companies all have different goals for TQC and reasons for implementing it; however, they
all have similar purposes summarized by Dr. Ishikawa:
1. Improve the corporate health and character of the company
2. Combine the efforts of every employee, achieving participation by all, and
establishing a cooperative system
3. Establish a quality assurance system and obtain the confidence of customers and
consumers
4. Aspire to achieve the highest quality in the world and develop new products for that
purpose alone
5. Show respect for humanity, nurture human resources, consider employee happiness,
provide cheerful workplaces, and pass the torch to the next generation
6. Establish a management system that can secure profit in times of slow growth and can
meet various challenges
7. Utilize quality control techniques (97)
There also exist several societal and work related differences in attitude and practice that
provide the Japanese with a significant advantage over their Western counterparts.
The vertical integration of Japanese companies provides management with a better working relationship with their subordinates. Subordinates are less apprehensive about approaching management with suggestions and ideas. On the other hand, most Western companies have distinct gaps between each level of employee, which allows a communication gap to persist. This poses many problems for Western companies when trying to get all team members on the same page, aiming towards a common goal.
Labor unions influence many company decisions in the West. The unions organize
themselves along functional lines. Therefore, if the welders or the machinists decide to strike,
a whole company will shut down. Japanese labor unions organize themselves across an entire
enterprise. These unions cross-train workers to be multi-functional. The Japanese companies
nurture this type of worker, which provides greater cohesion in the team effort.
Western cultures rely too heavily on the Taylor method. This method is one of management
by specialists. These specialists develop standards and specification requirements. In contrast,
as previously mentioned, Japanese cross-train their workers to educate them in many facets of
company operation. They also work more as a team to develop these standards and
requirements.
In Japan, most workers stay with a single company for their whole working careers. This
makes the workers familiar with the company and develops more of a family atmosphere.
This enhances the cohesion between workers previously mentioned. In Western cultures, the
job turnover rate is much higher. Workers change companies looking for pay raises and
personal career development. This brings a “me” attitude to many companies, which can take
away from company productivity. There are many other cultural differences between Japan
and the West that also facilitate the gap that sets Japanese companies apart from their global
competition.
5. QUALITY AS A VALUE
The ad men are on it daily; government agencies are asking for quality control manuals; some
seek no less than total quality management; yes, the chances are you have some form of
quality strategy working or planned.
Whose Shoes?
Give popularity a look askance; it runs on a sidetrack. What is constructive about a thing is
seldom known by its popularity. Take running shoes. They are good shoes and mostly
trustworthy. The people who wore the first running shoes probably were runners, but the
shoes got popular. Now, mostly, the people who wear them don't run in them. On the
question of who runs, the wearing of running shoes is untrustworthy information. If everyone
showed up tomorrow in running shoes, don't bet on the winner by the wardrobe.
Some mighty impressive quality control manuals can be found, and they say things finer than
the rest of us even know. Still, the people wrapped in them sometimes must stand naked for
their appointments in reality.
The quality paradigm explained and applied here is not a manual for following. It is a focal
point for reflecting on yourself in many mirrors. Quality comes from there.
We pay too dear a price for experience that we should ever be permitted to waste it. A
number of the payments we are making to that account are considered here. They were drawn
from the public record that we may each recognize our own time and place in them.
IS QUALITY CONTROLLED?
I'm not at ease with this "control" business when it comes to quality. Quality Control sounds
entirely too self-congratulatory. Quality assurance and quality management unsettle me, as
well. They are close-ended terms, and, when I listen to people speaking them, I hear a finality
in their voices that says they have it under control. But quality is an elusive prize. I can't get
easy with terms that hint we can directly control it. Think that, and you could lose the
richness of the quest for quality in the same way moon shot loses the sustained exhilaration of
a lunar rendezvous.
The seekers of rendezvous move to choreography with complex steps, breathtaking leaps, and
music; they have harmony and are open to secret possibilities. Astronauts ventured to the
moon in the embrace of that. And you will be closer to quality in a rendezvous and more
likely to touch and be touched by it.
The quality paradigm sounds a chord of notes to help you find the steps in quality's
choreography and hear the music it plays on the strings of your consciousness.
THE PARADIGM
The connection with quality is made by decoding the human behavior of conscious beings
and preparation of the individual consciousness for quality results. The paradigm is rooted in
the conviction that quality is the result of conscious judgments made to achieve it, and our
failures are traceable to dysfunctions in our consciousness.
Within our consciousness, values are gatekeepers - opening for some choices, closing for others, and providing "just squeeze by" space for some. They are standards for choices.
Controls are the tools we design to manage and inform us of our progress along settled goals.
We understand and act on their messages in the context of all the information in our
consciousness, which is open to possibilities broadly or narrowly depending on our
construction and conservation of information and our individual selves.
Goals yield no common sense, and wisdom cannot be gained from their achievement or
failure without knowledge and conservation of the value premises. Absent wisdom ordered
by values, our marches make the picture of pachinko balls dropped into reality to score points
or not in a noisy descent. The racket we make does not accumulate to our wisdom.
Value premises are distinct from goals by their formation. Goals are settlements. Both
individual and organizational goals settle competition over the questions: What shall we do?
What will we forego? And, what resources shall be committed?
People speak easily of established organizational values and goals. You should not glide
easily over that. As settlements of the choices open to a group, organizations can be said to
have common goals. But a common value premise does not arise by any such settlement.
Certainly, a number of people can agree that they hold similar value premises, but that is
close correlation of what value is held rather than settlement among values open for choice.
You could settle with me that to achieve the goal of a 10% productivity increase, our
partnership will allocate 15% of its cash resources to computer-aided drafting equipment, and
we will be bound by our goal settlement.
But could I settle with you that I am bound by your value premise: Business growth is good?
If we agreed already, that is a correlation. In fact, however, I would spend the increased
productivity not on building our business, but on time to kayak the river every Friday. My
value premise is: Confrontation with the object of man's desiring is good. If you insisted, and
I agreed with your value premise as a condition to buying the equipment, I would conclude
that, by making the value "settlement," the value premise nearest that is: Agreement with you
is good. The river is still in the lead, and you would find me there every Friday.
Does that help answer why organizations do not have values per se and suggest where to look
for values?
Value premises are important to the pursuit of quality, because what is steadily valued (if
experienced as wisdom) is the rail that keeps an individual locked onto the prize. Otherwise, a
professional does not pore as thoughtfully over the one hundred and first shop drawing at
seven thirty tonight as over the first at eight o'clock this morning, or does not faithfully
complete a job awash in budget overruns. Of course, an assumption of the value held was
made for those statements, and, if the rails are set on a side track, they run a different course
to another destination.
How high or low you place the value note determines the tone of the paradigm chord. Some
chords are played on the high register; others play lower.
The settlement of goals for quality is a management task; however, it is not a solitary task.
Professional service is advice for the guidance of others. Clients purchase advice, and, in the
price paid, they largely settle the quality goals whether spoken or not.
I call it a settlement to draw specific attention to the fact that the level of quality is an issue
between the design professional and client. It must be specifically settled, else the financial
resources to deliver the quality expected will not be provided. The first chance, and perhaps
the only chance to settle quality, is the negotiation of the contract for services.
Left unsettled, the client's expectations for quality will likely be quite high (or will be so
stated when goals conflict), while the professional's quality goal will tend to center on the fee
available to pay for it. In other cases, the prospective client's quality target may be so low that
the legal minimum standard of care should preclude settlement.
If you aspire to design quality projects, seek quality in both your clients and yourself. Settle
the goals specifically, and determine that they are matched. When they are mismatched, test
what values they derive from and learn what tone the project will sound. You cannot play on
the high register if the client is intent on other notes.
The Controls
Controls serve three primary purposes: First, a control communicates what management
expects the firm's professionals will do to achieve a goal; second, a control provides
information to management and the working professionals about progress toward the goal;
the third purpose is to raise the stock of people who obtain quality results, which is proof
manifested in people of the value held for quality.
How the notes of the paradigm play a chord is important to staying in step with quality.
The goal is management's settlement of its aspirations to the value stated by its commitment
of resources to it.
The control is information about how to achieve the goal and the progress made along the
path to it.
If values are not firmly held, goals will be confused, inconsistent, and ill defined. Then,
controls on those goals will be pointed in the wrong direction, or whatever information they
offer will go unheeded. Light will not cleanly separate the quality achievers from those in the
shadows. You will miss the complex steps; your leaps will be into empty space; and your
music will sound discordant notes. Finally, the value premises will be exposed by behavior
tested in reality.
Quality strategies are typically chock full of procedures, forms, standard design details, and
checklists. Particular quality management procedures, for example, a design peer review, a
mock-up and test of a structural connection, the common checklist, quality performance
evaluation, and the budget to do any of them are examples of controls. Typically, much stock
is put in them because they are demonstrations of headway toward quality, but listing them
here is not a recommendation for them. I have a different point.
Any strategy for quality is destined to fail utterly, unless controls give the opportunity and the
time to act. The initiation of effective controls requires the foresight waiting for you a few
pages ahead in "The Freedom to Act" and beyond. Controls are tools for anticipating hazards-
that we shall have the opportunity to act in time. Do not get snagged on gadgets here; the
most effective controls are those which provide information needed at critical times. You
might get that just by listening to what the client is telling you (or what is left unsaid).
Controls are deliberate potentials for disquieting news. They are not the lights on the right
seat vanity mirror. Controls are your lasers piercing the night you go into uneasily, sending
back the information you need. We move directly, then, to information.
INFORMATION
Information is everywhere in the paradigm. Without it, we are conscious beings on perpetual
standby waiting for data in grave doubt about our next move. So vital is information that
theorists include it with matter and energy as an essential concept to interpret nature.
a. Probability
Probability is the linking theory between the sender and the receiver of information. We have
need of probability, because the transfer of useful information always involves uncertainty.
The measure of that uncertainty figures in the difference between the sender's and receiver's
knowledge and the skill of the information encoding.
You can readily see why this is the case. If your knowledge and mine are the same, a transfer
of information between us has little uncertainty, but what we exchange adds little to our
knowledge. The exchange would become completely predictable, and we would end it. Ratso
Rizzo put it most bluntly in Midnight Cowboy by saying, "There's no use talking to you. Talking
to you is just like talking to me."
The prime purpose of communication is to exchange messages that are not predictable, which
is to say we want to send missing information. The outcome is always uncertain. The greater
the missing information, the greater is the inherent uncertainty of the outcome, but the
potential exchange is all the richer for it.
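In information-theoretic terms (a standard formulation added here for reference, not taken from the text), the information carried by a message received with probability p can be written as

    I(p) = -\log_2 p \quad \text{bits},

so the less predictable a message is, the more missing information it supplies, which is exactly the sense in which the richest exchanges are also the most uncertain.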
The rich potential will not be achieved if inept encoding decreases the probability of the
intended outcome. Probability will be low if the message cannot be understood. The
information will not empower the receiver to act, and actions that follow may be based on
information gathered from other sources. You will not have influenced the probability of the
outcome (affected, yes; influenced, no). Probability will also be low if the message can be
understood in so many ways that the receiver is empowered for numerous possible outcomes.
Point one is your knowledge that professional advice (drawings, specifications, letters, and
conversation) supplies missing information, whose probability of intended consequences is
made acceptable, if at all, by skilled encoding. Seeking an acceptable probability focuses the
sender's consciousness on the receiver's missing knowledge so that the obligation to fill that
gap will be assumed. Success is not measured by what is said. "I made my point!" misses the
relevant investment. Dividends are paid on the action taken in response to the message
received.
I hear a few readers barely able to suppress a rejoinder. "Doesn't the receiver have an
obligation to comprehend and tell me when there is a confusion about my information?"
There is, and, when you are a receiver, you should honor the attendant obligations. The
decision will be yours then, but the decision is another's when receiving from you. Do you
see? You should aim to influence probability. You cannot pitch poorly on the premise that
you are owed good catching.
b. Redundancy
Redundancy makes complexity more predictable. The possibility of error in complex tasks is
great, because there are many possible outcomes from their attempt. You can test this by
operating a transcontinental car ferry for a few minutes. Traveling by road from New York to
San Francisco is complex to begin with, because there are so many possible roads. When you
instruct one hundred drivers in New York to drive to San Francisco, unpredictable arrival
times are likely. The system of instructions needs redundancy to enhance its predictability,
which can be achieved by internal rules. More stability is achieved if only paved roads are
allowed. That rule is, strictly speaking, unnecessary information for getting to San Francisco.
It is redundant, and it increases order by reducing the number of possible routes. Greater
order is achieved by progressively greater redundancy: Use only U.S. interstate highways;
use only U.S. Interstate 80. We could go on by adding internal rules on speed, check points,
and place and length of rest stops until the reduced number of possible outcomes constrained
by redundancy yields acceptable predictability.
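Shannon's standard measure of this (given here for reference; it does not appear in the text) defines redundancy as the fraction of the maximum possible information that the constraints have removed:

    R = 1 - \frac{H}{H_{\max}},

where H is the actual entropy of the set of possible outcomes and H_max is the entropy the system would have if every outcome were equally likely. Each internal rule above, from "paved roads only" down to "U.S. Interstate 80 only", raises R by shrinking the set of possible routes.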
One of the reasons both drawings and specifications are issued is to surround a thing with
redundancy in a way that makes the result intended more probable. Detail drawings serve a
similar function. Redundancy makes the complex system of construction more predictable.
Information theory instructs you to test the probability of your information by anticipating the
number of possible outcomes from its use. Too many possible outcomes indicate
unpredictability, which calls for more skillful encoding, including, perhaps, greater
redundancy.
Entropy has highly predictable applications in natural systems. For example, in a corollary to
the second law of thermodynamics, energy does not alter its total quantity, but it may lose
quality. A fully fired steam engine left unattended ceases to function not because total energy
is less, but because its energy is dissipated from a highly regulated form in the boiler to a
randomly organized form in the atmosphere. When cold, the engine's energy system is at
maximum entropy.
Claude Shannon's work on information theory at Bell Telephone Laboratories, published in the
July and October 1948 issues of the Bell System Technical Journal, is the prototype scientific
application of entropy to information systems. Shannon theorized that information in a
system tends to lose quality over time by progressive transformation from an organized form
to an eventual random order. At maximum entropy, the information has so many possible
arrangements that nothing useful or predictable can be made of it. Missing information is then
at its maximum; outcomes are highly uncertain, and predictability is very low.
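For reference (a standard result of information theory rather than a quotation from Shannon's papers), the entropy of a source whose outcomes occur with probabilities p_i is

    H = -\sum_i p_i \log_2 p_i,

and it reaches its maximum value, H_max = \log_2 N, when all N outcomes are equally likely - the state of maximum missing information described above.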
For example, a beaker of seawater in one hand and a beaker of distilled water in the other
represent a highly ordered system of information about their contents. The information is
significantly disordered when I mix them together in a single beaker. When the mixed beaker
is drained into the ocean, complete entropy of the information is reached. The order of matter
and information is completely random, and no use can be made of it.
Entropy is ubiquitous. You likely discovered its consequences when you first remarked,
"When did we start doing things that way? Whatever happened to our practice on doing
things this way? Who gave them the idea they could do that?" The when, whatever, and who
is entropy.
Entropy is systemic. A firm with many branch offices is headed for disorder in numerous
systems, which will account for some of the confusion among them. The decision to set up a
new department is a decision to risk entropy in a new system. A decision to locate one firm in
two buildings is just asking for trouble. Entropy, you see, is also a concern of effective
architecture.
Let's check in on our car ferry business. Shannon's corollary predicts that over time our
highly ordered way of getting reliable arrival times in San Francisco will fall apart: Our
drivers will seek scenic diversion; they will ignore the route rules; they will overstay at
Jesse's Highway Diner; Jesse may add new diversionary information; some maverick drivers
will break the rules and chance acceptable arrival times; replacement drivers will ignore our
rules and try the maverick routes. Left unattended, missing information will tend to grow, the
possible outcomes will increase, predictability will decrease, and, at some point, total entropy
will be reached when all arrival times are equally likely. There will be no information in the
system we can do anything useful with.
A closed information system resists exchanges. It busies itself with cybernetic enforcement of
the system extant. Closed to disruptive outside influence, it may be thought stable, but the
apparent stability carries the seeds of self destruction. The closed system will not change, but
the surroundings do not stand still. New possibilities are unwelcome and do not spin off as
innovative systems of greater complexity. Information missing in the closed system will
approach maximum. The entropy alarm will sound; no one will hear. The Soviet Union is a
deliberate, real-life laboratory of that - with consequences. It is not by information gathered
from coincidence alone that glasnost sounds at least a quarter note in the trumpeted Soviet
economic reform.
An open information system is very nearly the opposite. It thrives on exchanges. While
entropy is running the clock down, an open system busily spins out new subsystems of
greater complexity. The clock will run on them, but they are replaced by higher orders of
complexity. Open system characteristics predominate in the system that encircles the design
professions and construction industry.
But you might be tempted to believe the entropy clock has run quite far if you compared sets
of drawings for yesteryear's grand old buildings with a set for an ordinary recent building.
The grand and old may have been built several hundred years ago from a dozen drawing
sheets, while even the quick and dirty model today takes several times that. Is there that much
more missing information today? Is the difference in those drawing sets the measure of
entropy in the system? It probably is not.
If the design and construction information system was a closed system, then I would answer,
yes, that entropy was greatly advanced. In a closed system, we would be constructing
basically the same building today as a century earlier in substantially the same way, and, if it
took triple the number of drawings, that would be clear indication of enormous missing
information. You would expect a highly disorganized industry.
None of that is the case. The construction industry today is more highly organized than a
century earlier, and its projects are increasingly complex. In its open system, load bearing
masonry got transformed into structural steel or reinforced concrete, both with curtain walls,
on to geodesic domes, onward to space frames, and out, we might predict, to delicately
webbed cities sailing in orbit.
Did I answer why there are more drawings? There are more because greater complexity has
multiplied the potential outcomes, which design professionals endeavor to regulate by greater
redundancy.
Your chore is to battle against entropy (and other sinister opponents) within a worldwide
construction information system busily spinning out ever greater complexities. And it all
happens not in glacial time, but so quickly that the greatest complexity has developed in your
lifetime. It must be very exciting for you.
This is something completely new, something that yields a new scientific intuition about the
nature of our universe. It is totally against the classical thermodynamic view that information
must always degrade. It is, if you will, something profoundly optimistic.
We will want to find a personal entrance for you into the profoundly optimistic behavior of
complex open systems.
f. Points of Information
Information is everywhere in the paradigm, and, strategically, it is just before our time in the
Consciousness. Information theory is the probability of your supplying missing information.
Your consciousness embraces the capacity to see the measure of that and the many other
uncertainties you face. The approach to quality is made when you act on your measurements
of all the uncertainties you see to affect reality with careful, harmonious, conscious
choreography.
CONSCIOUSNESS
a. The Place of It
The music in the quality paradigm is all single notes until they are written in a chord. The
place of consciousness is to order the notes, record them, and sing the chord.
If you would hear music from stone, steel, and glass, stand before a building you admire and
ponder: Why does it stand? Why does it endure? What in it stirs my admiration? An
explanation for its standing might be all laws of physics, its endurance all properties of
materials, and the stir in you the emotive power of elegance or grace.
In Departures, a journal ended by his death from cancer, Paul Zweig wrote that we live in a
"double sphere of consciousness." The near shell is occupied with our immediate needs, "full
of urgency, heavy with the flesh of our lives," but the outer sphere takes us, "into the future
where we pretend there is time." Time in the outer sphere is our "experiment with
immortality without which books would not be written and buildings would not be erected to
last centuries."
Pause here and listen. The human music is playing all around you. The building you admire
stands and endures because it stood first and endured first in the consciousness of its
designers and builders. The building was an experiment with immortality that has lasted long
enough to capture you, and it was made into art within the private estate of your own
consciousness. Between you and some remarkable people, perhaps now gone, a great
building has been created. It is exquisite work.
It might not have been great at all. A different consciousness by any one of you, and the
building could have been ugly, dilapidated, or collapsed, but I thought you should be fortified
first by beautiful music.
The leaves of the Silphium laciniatum plant line up in a north-south direction to catch the
morning and late afternoon sunshine, while avoiding the damaging mid-day sun. It is known
as the compass plant for its reliability at reading sunlight and answering its genetic code.
The human nervous system has evolved more complex capacities. We are conscious not only
of a perception of light, but also of the sensation of its intensity and color, the fact of the
speed of light, the memory of a nap we once had in the light of a winter's day, anger at being
awakened prematurely, the intention to install shades, the value of eyesight, the desire to
write an ode to light. All these (perception, sensation, fact, memory, anger, intention, value,
desire) are additional bits of information available to the conscious mind.
Consciousness exists when specific mental phenomena are occurring and you have powers to
direct the intended courses. Intentions are bits of information which order other information
in the consciousness. Intentions profoundly affect other information. Their effect is to reject
information, rank it, interpret, synthesize, and order our bodies to action along particular
paths. As a result, our consciousness is filled and continues to grow with intentionally
ordered information.
Each human being carries an ordered set of information termed the self. It is the ordered set
that tells you who you are. If a self set were read aloud in a crowd, someone would likely
respond, "That's me, my identity! Where did you get all my private stuff?" That is how we
would know whose set it was.
The self is a private estate and by far the most luxurious you can construct. It offers you
command over billions of potential mental phenomena, and, when opened to connections
with others, you have immense potential.
The self is a potent set. It is loaded with a lifetime of experiences: passions and pains, goals,
values, convictions, and intentions. Information in that set is potent because it was ordered for
the survival of the self ("That's you!"). If you do not remember ordering your potent set, it is
because you had a great deal of help (some useful, some inhibiting) from family, school,
society, and a very long, recorded genetic code of homo sapiens' experience on this planet.
Information that threatens the self will set off a quake in the consciousness. Adaptive
strategies are quickly implemented; consequences can be ruinous and range far. The
threatened self has but a few possibilities and they are urgent, heavy with the flesh of life.
Information that suppresses the self will cause a range of adaptations from passivity,
resentment, to rebellion. We restrict our own opportunity when we initiate limitations in
others, or when we fail to petition their potential. A thwarted self may find its final revenge in
conformity. The point/counter-point between the proletariat and the withering state is: They
pretend to pay us; we pretend to work. We are more at risk from another point and counter:
They pretend to hear us; we pretend to think.
General Electric caught itself in a door of its own making when it decided in 1983 to mass-
produce rotary refrigerator compressors. The story is told on the front page of the May 7,
1990, Wall Street Journal. The reporter found that the product development phase was under
severe time pressure to gain advantages over foreign manufacturers. Based on the lab test of
about 600 compressors in 1984, mass production was commenced in March, 1986. Field
testing was cut from a planned twenty-four months to nine. User failure reports commenced
in December, 1987, about 21 months after production. By March, 1989, defective
compressors had compelled GE to take a $450 million pretax charge for about 1.3 million
compressor replacements.
The lab testing had consisted of running prototypes under harsh conditions for two months. It
was intended to simulate five years of normal operation. One GE testing technician of 30
years' experience had repeatedly told his direct supervisors that he doubted the test
conclusions. Although the units had not failed, there were visible indications of heat related
distress: discolored motor windings, bearing wear, and black, crusted oil. None of the
warnings to his supervisors (three supervisors in four years) were passed on to higher
management. An independent engineer urged more severe testing, because there had been
only one failure in two years. That was suspiciously low; the news was too good, but he was
overruled.
Examination of failed field units disclosed a design error responsible for the early wearing of
two metal parts that, you guessed it, caused excessive heat buildup leading to failure.
GE had declined to walk in the field of many flowers. It wanted an early success with every
head turned the same direction, and it got that. But key cognitive connections were lost: GE
had declined an offer of help from the engineer, who in the 1950s had designed the GE rotary
air conditioner compressor; the testing technicians and independent engineer were not heard;
no one dared insist on the longer field tests desired, because, "It would have taken a lot of
courage to tell (the GE Chairman) that we had slipped off schedule." The senior executives,
walking in the field of silent flowers, petitioned only good news.
Lessons at GE were paid in money and prestige. The replacement GE Chief of Technology
and Manufacturing was quoted on the lesson he took from his predecessor's experience: "I'd
have gone and found the lowest damn level people we had...and just sat down in their little
cubbyholes and asked them, 'How are things today?"'
Well, everybody has a story to tell. Don't you wonder what is low and little about that place?
Do you wonder who is damned there? After all that passed, do you suppose "please" is too
lavish an offering for the door to profound optimism?
I will allow that drawings and such can be checked. They could get better by checking;
however, I hold to my point on the consciousness, because checking drawings is not like
checking any other "thing." If you made ball bearings, all the information about their quality
could be quickly known. Metallurgy would give the properties of the steel, laser measurement
would test thousands of them as fast as they rolled by, maybe test a sample to destruction, and
a bucket of bearings would be right to specification.
But the quality of designs is not tested that way. Obviously, it is impractical to rely on a
check of the final construction for proof that the design was good. A twenty-four month field
test would have helped GE, but is an untimely control for you. Instead, a drawing checker
looks for indications apparent on the drawing and in key calculations, perhaps, that the
preparer met the standards for design. One consciousness checks what another has revealed
about itself on paper. It helps. Two heads can be better than one here, but it remains a
fundamentally different check and pass than you get on ball bearings. A laser sees every ball
bearing the same way. One consciousness will not see the same drawing another
consciousness sees. There is both benefit and mischief in that.
Judgments in design develop one on another until what appears on paper may mask deficient
work many steps back. The indications become less clear. Designs are not dismembered step
by step when they are checked. Construction documents are not checked by duplicating the
work any more than the knots in a Persian rug are tested by untying all of them.
The point is, all that you do is tied in your private estate on strings of consciousness. You
have the first and best chance at quality there. No one after you will have a slate as clean and
clear. We will visit inside your private estate later in Putting the Paradigm in Motion:
Learning to See, where we can explore what you might mark on your slate.
Much is owed to the brain, yet our use of it is a paradox. Only a part of it is applied to outrun
the surrounding Chaos. With that, we bravely pick our way through indifferent forces and
cosmic coincidence. Another part we give over to constructing comforting race reports about
the absolute security of our pack-leading position in the food chain. Entire firms, indeed
whole cultures, issue themselves favorable race reports. We believe ours. We believe each
other's when quite convenient.
Smug satisfaction with our own race reports is unwarranted self-congratulation. By listening
to them, we lose the freedom to act creatively, decisively, and effectively. We surrender to
entropy in our information system by disabling our consciousness. We stun exactly the
faculty needed for agility and proficiency. We are overtaken, outmaneuvered, and the prize is
lost.
a. X On the Reef
And we can lose woefully. Those who are captains of firms take a lesson! Before there was
the agony wrought by the Exxon Valdez, there was an opportunity to determine its rules of
operation. The reasoning against allowing passage of super tankers from Valdez, Alaska,
through Prince William Sound was to many minds convincing, but it was finally silenced
under a thick blanket of government and oil industry assurances of extreme safety, profound
expertise, and good intentions. There was to be virtual "air traffic" quality control over the
sea lanes beginning in 1977. Why that was a comfort at the time remains a mystery.
Years without fatal incident were sweet succor. Every player, from ship's captain, Exxon
president, Alyeska, and the State of Alaska to the Coast Guard, readily accepted obliging race
reports. Bligh Reef wasn't one meter further out of harm's way, but the consciousness had
new information: The shoals near Bligh Reef had been passed hundreds of times without
incident; no oil tanker had been holed in Prince William Sound; the oil fields drained into
Valdez, Alaska, had peaked, and the campaign was on the down slope side of economic
maturity.
At Exxon and Alyeska, that information was ordered by intentions to improve profits.
Equipment and manpower were cut, safety and cleanup systems were disabled, and the
surplus was siphoned to a thickening, black bottom line.
The Exxon Valdez had clear sailing as far behind as the eye could see. Bligh Reef waited. If
the cosmos played at tag, its mechanics must have watched rapt, while, on March 24, 1989,
Bligh's prize was delivered by current, wind, and human fallibility heavily laden and hard
aground. The Exxon Valdez, gorged with obliging race reports, belched sickening defeat.
Are we unfair to Exxon and the Exxon Valdez? Couldn't they, after passing Bligh Reef and
clearing Prince William Sound safely time after time, prudently lower their concerns and
preparations? They could not, because each passage was an independent event. The
conditions on one passage were not the same as the next. Variables of sea, weather, and crew
preparedness were different for each, and they were different on March 24th. Only Bligh
Reef was unchanged. Belief in Exxon's race reports was no more prudent than your using the
same soils report for all projects on Main Street.
Repeated success with similar independent tasks enhances the qualifications for the next
attempt, but it does not suspend the hazards encountered. Architects and engineers, who have
designed this or that project type dozens of times, may have paramount qualifications for the
work; however, they have not suspended the hazards. Success talks, but it can speak praise
we should not honor.
Aboard the Exxon Valdez, it was a control that the crew shall: "Call the Captain's quarters for
instructions abaft Bligh Reef." As a control, it was designed to give the Captain notice of the
approaching reef and an opportunity to give his instructions. The goal was apparently to pass
Bligh Reef without bloody running hard up the back of it.
How do you rate it? Some use a similar control: "Check the drawings before release to
construction." That is a point on your chart just abaft Release Reef.
The control increased the Captain's time and opportunity to act in one way. It gave him a
single point in time to check where the Valdez was in relation to Bligh Reef and the same
single opportunity to correct his course. Next stop: Bligh Reef.
It was not a wise choice among so many better ones. For example, "The captain shall remain
on the bridge until Bligh Reef is cleared." The time and opportunity to act is increased, and
there are bonuses here: The price for a captain on the bridge is the same as a captain in his
quarters; a captain at the helm more likely has a consciousness alive to the hazards than a
captain on the bunk.
Would you add a ship's pilot to Exxon's quality program? An expense authorization from
Exxon is needed for that. The time for it was in 1977, when the goals were settled. It is too
late now, of course. Quality is built into a project at its beginning, if at all.
We race against the sands of time. The freedom to act is power to our legs and mind. Powers
absorbed in issuing and receiving obliging race reports collect sand. Silica petrifies us; we
slow, then we die.
Design professionals are judgment workers. They typify the workers fast dominating the kind
of labor done in America. Specific qualities recommend them.
Judgment workers are distinctively goal directed, highly motivated individuals, who have
arrived where they are by selecting ever more demanding and specialized careers. Their paths
were deliberately chosen, and the personal stake in them is high. They are the guardians of
their own investment in values. Paths of action supporting personal goals and values are
beneficiaries of their personal stake. Contrary paths clash strongly with the potent set, which
can initiate conflict and ultimately threaten the self. Unclear or conflicting goals can create
conditions of siege.
Care was justifiably taken to say that judgment workers are more likely committed to a project than a "boss," and that they take care of the project, not of themselves. Please note: That statement is not
in the least equivalent to, "They look out for Number One." More often, the people I describe
will inflict substantial penalties on Number One for the benefit of the project. Whether
management receives the benefit it seeks from their work depends on the care management
takes in its tasks.
A firm of consistently and skillfully managed judgment workers can achieve wonders. A firm
managed with discrepant values, goals at cross-purpose, or with conflicting information can
set loose its own Golem.
The commission appointed to investigate the Challenger disaster concluded that both the launch decision process and
critical rocket hardware were flawed.
The facts showed that, while Challenger was fueled, manned, and ready on the launch pad,
Morton Thiokol booster rocket engineers raised a launch safety issue with Thiokol
management. They had observed on previous launches that the synthetic rubber O-rings
sealing the joint between two lower sections of the solid rocket booster had been damaged by
hot gases escaping from gaps around the O-rings. The damage was most pronounced on cold
weather launches. If launched as scheduled, Challenger would make the coldest test yet; there
was potential for disaster.
The position of the dissenting Thiokol engineers can be well appreciated. NASA launch
safety procedures required a "Go" for launch from Thiokol. The issue raised by the engineers
had consequences, not only for the immediate launch but also for the safety assessment of all
prior launches, future launches, and Thiokol's reputation.
The commission disclosed that there was debate. There was consideration of information.
Ultimately, the engineers would report experiencing a shift that put them on the defensive.
They would say it was as if the rules had changed. Previously, the control on "Go" for launch
was, "Prove it is safe, or do not launch." The Thiokol engineers in the Challenger launch
circumstances perceived a new control: "Prove it is unsafe to launch, or we launch."
The consciousness of Thiokol's management was not swayed by the debate or by data that
could be marshaled while Challenger waited. Finally, the engineers were asked by
management to, "Take off your engineering hats, and put on your management hats." In only
a short time, the engineers withdrew their safety issue, and the Thiokol "Go" for launch sent
Challenger into space history.
The strong intentions expected from goal driven specialists were evident in the Thiokol
engineers. The space shuttle program is an ultimate specialty. There is one and one only. The
information they had (previous cold temperature, O-ring damage, and it's even colder now!)
got channeled by strong intentions for safety right to the top of Thiokol. That could not have
been pleasant. Playing the rock in the middle of the road resists other strong information in
the self: intention to be loyal to the firm, intention to support coworkers, intention to advance
one's career, and intention to support prior launch decisions (when no warning was given).
After twenty-four successful launches, the potent self risked a considerable setback by
speaking out.
The warning was spoken, and, importantly, it was given in response to a perceived control:
"Prove it is safe to launch, or do not launch." That was the discussion the engineers expected
and were prepared to have. But the discussion and the control were reversed. The engineers
would not likely win the case, "Prove it is unsafe," because the consciousness plainly did not
prepare and order the information that way. You can hear the consciousness scream, "It's a
little bloody late to change the rules here!"
Management shook the high wire. Now, there were key judgment workers off balance. The
Thiokol team had reduced effectiveness.
Changing the rules on the high wire has unsettling effects. There is new information in the
consciousness: Management is inconsistent, management is unfair, management doesn't trust
me, and I just might fall from here. The potent self gets punched hard when the rules are
abruptly changed. When the O-rings were not proved unsafe, Thiokol's management asked
the engineers to think it over, but, this time: Take off your engineering hat. Put on your
management hat.
We cannot pry open the engineers' consciousness to find the twisted metal memory of that
moment, but we can analyze the conflict provoked by such an instruction.
We honor a prohibition in arguing cases to the jury. The rule prohibits any attorney from
arguing the Golden Rule. Counsel for plaintiff may not urge the jury to put itself in the
plaintiff's shoes and decide for the plaintiff what the jury members would want decided for
themselves. Defense counsel can't argue the Golden Rule in the defendant's shoes, either.
How, from this position, shall a juror satisfy the duty to hear all the evidence and fairly
decide the case bearing no prejudice to either side? I know I want the money, lots of it, and so
do you! There may be saints (or are they brutes?) unaffected by the argument, but the system
of justice does not chance it.
These words, of course, were never spoken, but, under the circumstances (Challenger waiting
at the "Go" line for a Thiokol release, the potentially embarrassing safety debate unresolved,
the immediate need for a decision), prudent systems of management (like prudent systems of
justice) do not chance the consequences of shaking the high wire in that way.
Remember what is included in the potent self set. Here are the values, goals, hopes, loyalties,
ambitions, and desires. Shaking a person there sets the high wire into wildly accelerating
waves. The person either holds on against wave after wave, or he lets go of the position.
And if the position is abandoned under pressure, what has management tested? Has it tested
the merits of cold weather O-ring integrity? Does the question somehow clarify the
understanding of any properties of O-rings and the behavior of combustion gases? Could it
ever? If it couldn't, is it sensible to put the conflict into another's consciousness?
And if the position is abandoned, is a new set of values installed that, because of a narrow
reference as an engineer, could not be seen except by playing at management? Could it ever?
The instruction does one thing: It abruptly tests whether or not a person for that moment
valued agreement with management, the tribe, the hunting lodge, to such an extent that it
would override confidence in a contested engineering judgment and the intentions that
ordered it.
That is galling in the extreme; hold on to it for the lesson taught in architecture and
engineering. The control, "Prove it is safe to launch, or we do not launch," supports the goal,
"Safety First," and the value premise in the lead of that is: Human life is good.
Try that brief metaethical exercise yourself, beginning with the control, "Prove it is unsafe to
launch, or we launch." Play the notes back from control to goal and search out the value
premise in the lead of it.
The test result in this instance was that Thiokol engineering did value more its agreement
with Thiokol management, and it took momentary solace in that agreement from the 24 prior
launches.
A key engineer remarked later, "We may have gotten too comfortable with the design." You
can hear silica replace key carbon cells in the registration of obliging race reports. Entropy
unchecked had progressively disordered the boundary between a safe and an unsafe O-ring
design. Framed in ghostly margins, a man pitching on the high wire could just make out a
"safe enough" design in a thoroughly bad one.
By all of that, Thiokol had progressively reduced its freedom to act. The opportunity to save
Challenger was lost.
Tests and hearings done, the commission concluded that NASA must rework its launch
procedures to encourage a flow of additional information from more people directly into the
launch decision.
NASA broke up some of the little boxes. That helps. The DC-3 aeroplane was designed prior
to 1936 in a hangar without walls between the engineers, wing, tail, instrument designers, or
anyone else. When you wanted to know something, you walked over to the person doing it,
and you displayed what you were doing.
But there is a question bigger than little boxes: Whose stock was raised and whose fell by this
quality disaster? The key engineer, who rose to answer the launch control, fell; he lost his job.
The one and one only space shuttle program is over for him. Thiokol, however, is busy today
making more NASA space shuttle rockets.
Yes, everybody has a story to tell; it turns on a question of economics, they say. It turns there,
but it twists and it contorts before us on the values exposed by behavior. Children can figure
the lesson, and they do.
Challenger's circumstances are dramatic to the point of exhaustion, but the high wire sways
for people in more ordinary circumstances: The project architect or engineer trying to meet a
release date or a budget, leading the largest project ever, leading a project gone sour,
managing the new department or profit center, opening a new market for the firm.
All involve a critical test of the self. Although tests do build strength, time on the wire is
taxing. An individual without options may implement self-protection plans harmful to
quality: Ignore unfavorable facts that would draw criticism, leave problems unreported to
avoid early judgment, end or abbreviate the professional service when the schedule or budget
is exhausted just because it is too great a hassle not to. Problems may not surface until the
freedom to act is largely lost.
How to send a call for help with immunity from a self-damaging counterstrike is key to
keeping people in balance on the wire.
Ready to start? Action! Cars are rolling, and parts are fitting quickly, tightly. It's going well,
and we are making good cars today, but what happens when the worker sees that a carcass
might skid across the goal line before the assigned work is done? Are we going to lose
another one off the wire? Will I get the leaky windshield, will you get the magic self-opening
door? Not this time. The worker will hit a button that sounds a horn, and the horn will tell the
floor section foreman to get on the line and start slinging parts. Bravo!
What I like most about this is not that both management and workers contribute to goal
settlement (one good idea); it is not that management pitches in (another good idea); what I
like is that the horns go off routinely. People use them, and that says the people have trust.
That's what I like!
You are on a high wire. You will put other people on high wires. None are afraid of heights,
and the view up here is exhilarating. You can see buildings from here that will last for
centuries. And they will, too, if every high wire you construct has a safety valve people trust
enough to use regularly.
In the 1930s, researcher Jacob Von Uexkull applied a technique in environmental studies for
framing what the animal sees in the environment. Whereas we, looking on from outside, see the animal in the environment, the individual animal perceives the Umwelt, or the self-world.
As planetary leaders, we are accustomed to seeing the environment, and we easily assume
that is our world. I think we do that because we claim a victory that allows us to survey from
the top of the hill. In fact, we have the greatest need to see ourselves within our Umwelten.
All the information an individual perceives about what is happening at a particular time is, for
that moment, the individual's Umwelt. It is the world perceived, or the self-world. That you
are in your Umwelt necessarily means that you do not see the environment.
The self-world for one person differs from any other's. From place to place and culture to
culture, the differences cause people to see separate worlds in the same space.
The Inuit peoples hunt vast Arctic grounds without maps pasted to their sledges or kayaks,
and they continue the hunt through the long winter even without a reliable sun fix to guide
them. Take a long trek with a hunter, and figure how to return. While you and I are looking
for North, East, or for Silphium laciniatum compass plants, the Inuit hunter has taken account
of the direction of the wind, monitored by its motion against his fur hood. He has noticed the
ocean currents, memorized the color of the ice and ocean, the texture of the snow. You are
lost in an Umwelt ended at the tip of your nose. The Inuit hunter wonders how you lived so
long in your land and surmises everyone there is starving.
Back home, your Umwelt is larger, but it is still a source of troublesome flux. You
concentrate on one thing, and you miss a message on another. There are ten things on your
mind, and your consciousness is busy swapping intentions to let some information in,
ignoring other information, and misunderstanding the next message. Your focus filter is
making efficient use of your attention, or so you think, if you are aware at all.
Does your Umwelt for the critical instant encompass the clues you desperately need? You
can't feel very confident. Just a minute ago you were as lost as a baby in the Arctic, and now
there is something strange about your office.
Briefly, because it is everywhere reported, the design for suspension of the two stacked
pedestrian bridges called for a hanger rod passing from the building roof trusses through the
floor beams of the upper bridge, where the rod was connected, and continuing through to the
floor beams of the lower bridge for connection there.
Notice that the load on the upper bridge connection is the weight of the upper bridge. The
load of the lower bridge is on the rod. Hold on to this: One rod means one load on the upper
bridge connection, which we will call Connection "A". The lower bridge hangs on the rod
and not on the upper bridge.
A steel fabricator's shop drawing was generated, which changed the design by calling for two
rods. The first rod passed through the upper bridge floor beams, where it was connected, and
that is Connection "A". So far this is just like the original design, and the same upper bridge Connection "A" carries exactly the load put on it by the original design: one load.
But, a second rod was started next to the rod just connected, and it was passed through the
lower bridge deck beams where it was connected. Trace the load again. Now the lower bridge
is connected to the upper bridge. The load that stayed in the rod on the original design was
now hooked onto the upper bridge. As a consequence, there were two loads on the upper
bridge Connection "A" and the bridges failed when the rod pulled through the upper bridge
deck beams. That is the "Double Load" reason the bridges failed.
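To make the arithmetic concrete, here is a small sketch in Python with hypothetical loads (not the actual bridge figures), tracing the demand on Connection "A" under each design.

# Hypothetical loads, for illustration only.
upper_bridge_load = 100.0  # assumed weight carried by the upper bridge (kN)
lower_bridge_load = 100.0  # assumed weight carried by the lower bridge (kN)

# Original design: one continuous rod. The lower bridge hangs on the rod,
# not on the upper bridge, so Connection "A" carries only the upper bridge.
connection_a_one_rod = upper_bridge_load

# As-built change: two rods. The lower bridge now hangs from the upper
# bridge, so its load passes through Connection "A" as well.
connection_a_two_rods = upper_bridge_load + lower_bridge_load

print(connection_a_one_rod)   # 100.0 -> one load, as the engineer intended
print(connection_a_two_rods)  # 200.0 -> two loads, the "Double Load"

However the numbers are chosen, the change doubles the demand on a connection that was sized for one load.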
It is the practice that steel fabricators detail connections. I think that is because people believe
it saves money. It is the practice that architects and engineers check fabricators' shop
drawings for general compliance with the design concept, and that check is not always done
by the original project structural engineer. I think that is because people believe it saves
money.
I also think it means that, in the rush and complexity of a design and construction project,
there is a somebody who may not focus on the exact information needing attention in the
precise way that will help. Our Umwelten all have different coverages. There are gaps in
between, and the Golem hides in them. The size of our self-world denies our consciousness
access to the signs that need reading, we miss the color of the ice, the wind confuses us, we
lose our way, and the world comes down all around us. It would kill an Inuit hunter, and it
kills us.
Engineers and their advisors know that design changes are a major source of uncertainty. It is
a place where one party's intentions acting on a design ordered by another's intentions may
well block the objectives of both. Two heads are not better than one here. The engineer,
fabricator, and contractor are not in the same DC-3 design hangar, so none has ready access
to the others' consciousness. Their intentions order information differently, as they are not
agreed on settled goals.
The engineer, here, intended a hanger with simple, but specific load bearing characteristics,
hence a one rod/one load system.
The contractor may have perceived that stringing a long rod through two bridges was
cumbersome. The fabricator and contractor may both have wondered how to thread a nut at
the upper bridge connection, as you can't push a nut over the unthreaded part of the rod to its
middle. The shop drawing intended to answer those issues by a two rod system. Right there in
the gap was a two load Connection "A", which entered no party's consciousness.
Had the information been ordered by consistent intentions acting on settled goals,
construction could have been accomplished simply. Safe suspensions would have sustained
life. All the two rod objectives could have been answered if the two rods had been joined
together by a commonly available sleeve nut rod connector at or near Connection "A" to
preserve the specific one rod/one load result intended by the engineer.
All of our empathy begs for a second chance: (1) to recognize at the outset that design changes order information by different intentions along goals that are not uniformly settled, (2) to expect that changes will mask neglected risks, and (3) therefore, to install a control that all proposed changes
shall be personally reviewed, rejected, or approved by the original engineer in whose
consciousness we expect to find the intentions that will order information in time to give us
the freedom to act.
We can learn to read ice, and, if we expand our self-worlds to include reading ice when only
the clues there will save us, we stand a chance at getting back home with some reliability.
But you are not content with that. No reason for architecture or engineering exists, unless
reality is changed by it. You are a traveler in reality bent on changing it. You are not alone.
Before you, travelers bent on discovery of new lands made the legends we studied and
admired in school. Who did not aspire to an explorer's life?
Geographers know that exploration begins before the ship weighs anchor, and explorers never
quite escape the exotic territory of the human mind. Exploration, teaches geographer J.
Wreford Watson, is a process beginning in the imagination with preconceived ideas about the
character and content of the lands to be found and explored. Explorers place imaginary
markers on the land before one foot is set on it, and, once there, interpret what is seen to
verify the markers sent conveniently ahead.
What we observe in new lands, Watson teaches, is neither new nor correct. Rather, what we
see is a distortion "compounded of what men hope to find, what they look to find, how they
set about finding, how findings are fitted into their existing framework of thought, and how
those findings are then expressed."
The bodies of more than a few explorers are buried in a reality more strange and harsh than
the one imagined or reported by the first to arrive.
b. Claiming by Markers
We carry our reservoirs of experience forward by placing markers ahead of our actions.
Design professionals look ahead to the next project, and they imprint the expected reality of it
with markers from their experience. I think explorers place markers for the want of courage
to meet the unfamiliar with their lives. I think it is merely expedient for us at home. It is our
way of fitting new realities into the garrison of the familiar. Whether the familiar will
imprison our creativity or marshal a new and finer order depends on the preparation of our
consciousness in that exact instance.
If we send our imaginary markers, as Watson would warn us, compounded by what we hope
to find, what we have decided to find, what within our existing framework of thought we are
prepared to find, then, we will not claim new territory.
There is no cause for trepidation in that. The essential and cheerful point is that we can
distinguish reality from our imaginary markers. The garrison of our experience will not acquit
our comfortable conjecture if the consciousness is prepared to discover and reject all
counterfeit reality.
We build and maintain our self-worlds. To our great dismay, we do not always build wisely, and our maintenance can be shabby at best.
The Self Estate
If I have occasionally made the self sound troublesome, dispatch that notion. Thomas Edison
said he grew tired of hearing people run down the future, as he intended to spend the rest of
his life in it. Your time in the future is in the self and the consciousness you construct. You
have need of a strong and orderly estate.
The self has been much theorized, amplified, anesthetized, and penalized in philosophy,
science, and society. We humans seem at times astonished at our capabilities and,
alternatively, fearful of our potential. You may study the books yourself and draw your own
conclusions about which ideology's sound you favor.
We bark for no dogma here. We look instead for what will help us increase our freedom to
act. For it is the freedom to act that powers the quality paradigm.
To command that freedom, we need to know what labor our consciousness performs, collect
the sweat of it on our faces, and feel its exertions as experience. We need to conserve our
values and tie goals and controls in harmony with them. We need to chart the limits of our
Umwelt, feel the weight of information and give good measure in it, and know the
dysfunctions that inhibit our performance. Plainly, we will never complete the task of
mastering all of that, but we can open our consciousness to secret possibilities and make at
least a few breathtaking leaps through reality.
a. Splendorous Estates
Ours is first among all species for the time and energy spent in preparation, and our
aspirations have propelled a pace of change that may predestine us all to a constant state of
refitting. That would be a fine destiny, indeed, one well suited for planetary leaders. It is
entirely suitable for any person bent on changing the planetary reality, even by adjusting a
corner of it on Main Street.
The point here acknowledges but moves across your constant technical preparation: The point
lands in your private estate. There, you are made guardian of your values and goals, and you
determine the paths you will follow; there, you install controls on your progress along settled
paths; there, you experiment with immortality, and you plan to change reality, perhaps
forever.
Do you marvel at the splendorous potential of your private estate? Look at the rooms, the
library shelves, secret passages, doors to cryptic codes unbroken, and the plans, dreams, the
flying machines! No one else has an estate quite like it! It is no wonder Strauss has appeared
suddenly to conduct Also Sprach Zarathustra just for you!
Tell us Guardian: What will you build from all that wonderment in our shared reality?
Concentrate! Strauss is on tiptoes; poised, baton waved high; the music soars here! This is
your moment: Yours is the next breathtaking leap into reality leaving changes in stone, steel,
and glass, perhaps forever! The doors to your estate fly open; the light from a billion
possibilities is brilliant, but the music fails, it fades, and a critic looks back on you from your
own shoulder. You can't stop it. Reality is sobering; there are doubts in the approach to it, and
there ought rightfully to be. Reality is, finally, a splendorous place only for the people who
are first prepared to leave their private estates and earn a place in it.
The classical Greeks postulated a self that held but the promise of attaining credentials fit to
affect reality. The promise would be unfulfilled unless the individual undertook the obligation to prepare and care for the self, so that it would be made and would remain qualified to venture into reality. The guiding, early Greek principle was, "Take care of yourself." Scholars teach that
meant that you were to undertake the care of yourself as a central obligation. The philosophy
obligated the individual to take guardianship over the self estate.
To accomplish your care, you were necessarily required to see yourself as a figure of some
central importance. Over time, that Greek principle got into the same kind of trouble the Sun
had at the center of the solar system, and both lost ground in the fierce competition for the
heart of mankind's proper concern.
"Take care of yourself" and its corollary, "The concern of self," gave ground to the more
accommodating, gnothi sauton, or "Know yourself." Scholars teach that "Know yourself" had
a different point: It was ecclesiastical advice of the time meaning, "Do not presume yourself
to be a god."
Good advice for this time of day, as well. But, as the Sun was restored to a working position
in the solar system, perhaps we may find useful employment in our time for the Greek
concern with the self.
Do not draw the conclusion that the concern with self can be called self-centered as we use
the term today. Askesis sought to train the ethical self. Its training led to the assimilation of
truth, and its practice tested one's preparation for doing what should be done when confronted
with reality. The first note of the paradigm, the value note, is everywhere evident in the
askesis. Greece was to get a healthy Republic out of the healthful self.
The meditatio is a "what if-then this" inquest of the self. Accomplished alone, it is a
meditation; with another, it is Socratic exchange:
And on it goes.
The meditatio is especially commended to judgment workers, whose thoughts (unlike ball
bearings) cannot be readily measured and whose flaws, if not detected by themselves, may be
unperceived by others. Judgment workers are the guardians of their own values and goals, but
they are fiduciaries of precious parts of the reality we all share. Before our shared reality is
changed forever by what is put there, people who bend it by their intentions should test the
consequences first on themselves. You do not need more examples of that.
Those who pursue the classic, premeditatio malorum, pursue training of the ethical self. It is
preparation that increases the freedom to act by forcing upon us images of the future
misfortunes we will wish to avoid. It teaches us to see the Golem.
FIRST: Imagine your future, not as you believe it will be, but imagine the worst that can
happen in your future. Do not temper the images with what "might be" or "how it happened
once before," but make the worst case imaginable for yourself.
SECOND: Imagine not that this will happen in the future, but that it is in the process of
happening now. Imagine not that the shop drawing will change the design, but that the shop
drawing is being drawn, that it has been drawn, that the steel is being fabricated, that the steel
is being installed, that the building is occupied, that the connection has failed.
THIRD: Engage in a dialogue to prepare your responses and actions, until you have reduced
the cinegraph to an ethical resolution. Do not conclude the dialogue until you are prepared to
do what should be done when faced with the reality of what you have imagined.
Herein are tested the responses: I am not responsible for changes; someone else is
responsible; they are supposed to tell me about any changes first and send me calculations;
they are engineers, too; I wasn't hired for steel detailing; my review is only for general, not
specific compliance; I look at every steel shop drawing they send me; there is only so much
time and money for shop drawing reviews; the building department approved the building; I
did what others would have done.
Common precautions and plans may not allow an eidetic reduction of your future misfortune
that can conserve your values and meet your goals. Perhaps, what you are about is not
common, or, if common, it holds peril for many lives. Then, try repeated reductions with new
precautions until a satisfactory resolution is achieved. This can be unsettling work, but
harness the agitation you feel. It is an antagonist you call voluntarily to your cause. Feel the
weight of that critic on your shoulder.
People unwilling to deliberately disquiet themselves will refuse the proportions of a future
misfortune or will dismiss it for an uncertainty of occurrence, but that is not an eidetic
reduction. That is denial, and denial is not preparation.
When you are prepared to affect reality, you will see the Golem, and you will be prepared to
do what must be done. Your preparation will reveal the Golem wrapped in the twisted metal the moment you cease conservation of your values.
What you are permitted to access by the discipline is an unacceptable reality at no loss to life
or property. It is an opportunity to check your values, settle your goals, install controls, and
prepare yourself (including the potent set) with the intentions that will order and encode
information and instruct actions thereafter to reduce future misfortune.
As the result of living the premeditatio malorum, a structural engineer might decide to
review a steel shop drawing for each and every load-bearing connection in a suspension
bridge, which he will check off on the structural sheets one by one.
A design professional could discover that stacked, suspended, pedestrian bridge designs carry
significant missing information and make the decision to increase the probability of the
information furnished achieving the intended outcome. Drawings and specifications might
encode copious redundant information to limit the possible outcomes. That might be done
even for the absurdly simple design, because the consequences of a failure are too great to
risk. Perhaps, the eidetic reduction of future misfortune can be handled in no other way.
With this preparation, ice can be read and understood, and hunters will return to their
families.
Training, as I believe you know, is not what is done in school. Students go to college to study
for the same reason people go to banks for robbery. Money is kept in banks, and college is
where theory is kept.
If done at all, training is begun and continued in practicing firms actively affecting reality.
New practitioners, if ready to work, are not prepared to affect reality. Gymnasia contemplates
that training will be in real situations, even if they do not directly affect reality. That is one
function of reviews by seniors and making experienced minds available to curious beginners.
It is the business of finding and sustaining curiosity. Gymnasia teaches the premeditatio malorum to give the novice a view, even if pessimistic, of the reality as it might be for a slow
or careless apprentice.
Take note that people who do not train at the premeditatio "if this-then what?" are frequently
seen playing at the less effective self-care exercise, "If only we had-well then."
The ideology of the submerged individual slogging along to forced songs fades now. The
Internationale does not muffle the cries of people who will not bear repression. The fierce
focus on the competition between ideologies is fading. You can read Winter and worry on the
faces of the remaining ideologues. The immediate season is entrepreneurial Spring. And you
can foresee the doors to profound optimism opening there on more hinges of consciousness
free to act than ever before. The good of that is the better work our species does under those
conditions.
Abraham Lincoln knew that when he mused in a diary in 1861 on the success of the
American experiment with the individual's freedom:
Without the Constitution and the Union, we could not have attained the result; but even these,
are not the primary cause of our great prosperity. There is something back of these, entwining
itself more closely about the human heart. That something, is the principle of "Liberty to all"-
the principle that clears the path for all-gives hope to all-and, by consequence, enterprise, and
industry to all.
Psychologists and researchers in this relatively new field have offered observations about the
highest achievers:
1. They are able to transcend previous comfort zones, thereby opening themselves to
new heights of achievement.
2. They are guided by compelling goals they set themselves.
3. They solve problems rather than place blame.
4. They confidently take risks after preparing their psyche for the worst consequences,
and they are able to mentally rehearse anticipated actions or events beforehand.
5. They routinely issue "Catastrophic Expectations Reports" to themselves, against
which they refine and rehearse countermeasures.
6. They are able to imagine themselves exceeding previous achievements, and they issue
themselves "Blue Sky Reports."
7. They do what they do for the art and beauty of accomplishment.
There are other attributes of high achievers, I am sure, but this list is a fair mirror to reflect in.
As you reflect, return to the askesis to test the durability of its traditions, and consider (with a
nod to Mr. Lincoln's own reading list) whether the Greeks of antiquity were not correct after
all to care for the self that the Republic might enjoy great health.
Carry What You Think Will Last
I hope you read more philosophy and art in the paradigm than procedure, for, unless you do,
no good may come of it.
Managers, who, by this time, may wonder how to get their judgment workers to rendezvous
with quality, are about to walk away from the philosophy of it altogether. The paradigm has
space for all, but it has no separate rooms. Everyone must look to the care of the self that the
firm may prosper. No one has permission to go blindly and unprepared into the reality of the
next project. Not from the leading principal to the neophyte is anyone exempt from
preparation to figure the probability of missing information and give good weight in it, to test
false markers, or to lay first eye on the Golem.
The philosophy of the paradigm is in the pursuit of quality as an ethical odyssey. Those who
achieve and sustain high quality are in that chase.
The art of it is in the happiness men and women attain from the work of architecture and
engineering when they are prepared to affect reality in a way that has timeless beauty.
Paper Tigers
What place is there for the procedures, the checklists, and the standards that are the common
mainstay of quality management?
Yes, paper and recordings on it are required. But the purpose of all that paper is not to drive
workers to quality goals. Rather, its purpose is to record the experience the firm has had in
successfully affecting reality, and to communicate the wisdom gained. It is history; therefore,
it can be guidance, and I like that. It is yesterday's news, and that worries me.
That paper tools are made is not assurance that any user is alive to the hazards. Airlines
believe in doing things by the book; pilots believe in doing things by the book; both mostly
do. Yet wings are still ice coated and flaps set in wrong positions, and both occasionally stay
that way right through the crash.
I said earlier that controls are deliberate potentials for disquieting information, but what I
know about checklists is that people frequently bounce through them seeking confirmation
(or more likely, just documentation) that they have done right. None of that helps us return
home from the hunt safely.
The Inuit hunter learns the clues in ice for the love of life in him. Clues that have immense
potential are not missed; they are immediate, intimate, and they solve the mystery of survival.
You and I lose nearly every bit of the good in the paper tools we read. We are not disquieted,
and we do not allow controls to make us deliberately so.
The clues in paper tools will have immense potential only for people intent on their
preparation for changing reality. People in that chase will compose their histories in
comprehensive and precise checklists, procedures, and standards, and they will send them
ahead armed with their tested insights of the perils in new projects. They will offer them for
wisdom that is immediate, intimate and for their potential to solve mysteries in stone, steel,
and glass. But they will not cast confident eyes over their paper tools. The quality is not in
them.
LAST WORDS
We went looking for the roots of our failures and for any help there was to comprehend
passages back from deep disappointment. I think we hoped to emerge with clues to our
happiness.
6. BIBLIOGRAPHICAL NOTES
1. To take your study into information and language theory, read much more about it in
Jeremy Campbell's book, Grammatical Man: Information, Entropy, Language, and Life,
Simon & Schuster, Inc., 1982. Read, as well, in Chaos by James Gleick, Viking
Penguin, Inc., 1987.
2. The structure of the consciousness is developed by Mihaly Csikszentmihalyi in Flow:
The Psychology of Optimal Experience, Harper & Row, 1990. Flow is written for a
wide audience by a serious researcher and scholar. It cannot be confused with a ten-
minute book on anything.
3. For a discourse on evolution of species, you cannot do better than Richard Dawkins'
book, The Blind Watchmaker, W.W. Norton & Company, 1986. Mr. Dawkins offers
for ten dollars to sell you his Macintosh "Biomorph" program, which will have you
evolving on-screen life forms in minutes. See the coupon in the back of his book. As
far as I know, it is the only way to get the program.
4. Peter F. Drucker is frequently your best source and often the only one you will need
on management theory, including goals, controls, and the behavior of workers in an
information era. Management: Tasks, Responsibilities, Practices, Harper & Row,
1973, is comprehensive.
5. You can participate in a varied seminar on the self through the collection of papers in
Technologies of the Self: A Seminar With Michel Foucault, University of
Massachusetts Press, 1988. The editors are Luther H. Martin, Huck Gutman, and
Patrick H. Hutton.
6. Two books adding to the study here through unique perspectives on life and the varied
experience of it are: Departures by Paul Zweig, Harper & Row, 1986, and Arctic
Dreams: Imagination and Desire in a Northern Landscape, Charles Scribner's Sons,
1986, by Barry Lopez, who introduced us to the Inuit hunter, Jacob Von Uexkull, and
J. Wreford Watson.
Statistical process control was pioneered by Walter A. Shewhart and taken up by W. Edwards Deming, who applied it with significant effect in American industry during World War II to improve production. Deming was also instrumental in introducing SPC methods to Japanese industry after that war. Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes never produce a "normal distribution curve" (a Gaussian
distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in
manufacturing data did not always behave the same way as data in nature (Brownian motion of
particles). Dr. Shewhart concluded that while every process displays variation, some processes display
controlled variation that is natural to the process, while others display uncontrolled variation that is not
present in the process causal system at all times.[1]
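A hedged illustration of the idea, not Shewhart's own worked example: in Python, control limits are commonly set at three standard deviations about the mean of data believed to show only controlled variation, and later points falling outside those limits are taken as a signal of uncontrolled (special-cause) variation. Estimating sigma directly from the individual values, as below, is a textbook simplification; practitioners often estimate it from moving ranges instead.

from statistics import mean, pstdev

# Phase I: baseline measurements believed to show only controlled variation.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
center = mean(baseline)
sigma = pstdev(baseline)
ucl = center + 3 * sigma   # upper control limit
lcl = center - 3 * sigma   # lower control limit

# Phase II: new measurements are judged against the baseline limits.
new_points = [10.0, 10.2, 9.9, 11.5, 10.1]
for i, x in enumerate(new_points, start=1):
    status = "in control" if lcl <= x <= ucl else "OUT OF CONTROL"
    print(f"point {i}: {x:5.2f}  {status}")   # point 4 (11.5) is flagged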
Scope
The crucial difference between Shewhart's work and the inappropriately perceived purpose of SPC that later emerged, which typically involved mathematical distortion and tampering, is that his developments were made in the context, and with the purpose, of process improvement, as opposed to mere process monitoring. That is, they could be described as helping to get the process into that "satisfactory state" which one might then be content to monitor. Note, however, that a true adherent to Deming's principles would probably never reach that situation, following instead the philosophy and aim of continuous improvement.
Statistical process control (SPC) is mostly technical in nature; it is actually just the technical arm of the author's SPC quality management system. It concentrates on finding process variation; correcting the variation depends on the creativity and ingenuity of the people involved.
SPC (statistical process control) began as a means of controlling production in a manufacturing plant. It soon became obvious that SPC principles could be extended into other areas: non-manufacturing functions within a manufacturing firm, all areas of the service industries, management, health care, education, politics, the family, and life itself.
The system that has evolved from SPC is in two parts: technical and humanistic. SPC (in terms of TQM) is a system of management: one that concentrates on quality rather than on productivity or accounting (as used to be the case in the past, and still is in many firms). Actually, quality is included in a production and/or an accounting management system, and production and accounting are included in a quality management system. The problem is in the way we think about it and the way we approach it.
In production or accounting systems, quality tends to become only an appended function; it tends to be somewhat forgotten. The problems associated with getting a product out of the door tend to become more important than the problems associated with getting a "good" product out (making "quality" products).
In summary, the SPC (in terms of TQM) quality management system is a system where people work to accomplish certain goals. The goals are:
- Quality (making a "good" product)
- Productivity (getting the product "out of the door")
- Accounting (making the product at least cost)
- People working together harmoniously
DEFINITION
Statistic
a) The science that deals with the collection, classification, analysis, and making of inferences from data or information. Statistics is subdivided into two categories:
- Descriptive statistics (describe the characteristics of a product or process using information collected on it).
- Inferential statistics (draw conclusions about unknown process parameters based on the information contained in a sample).
b) A measure of a characteristic of a sample of the universe. Ideally, statistics should resemble and closely approximate the universe parameters they represent. In statistics, symbols are used to represent desirable characteristics, mostly for ease of calculation.
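As an illustration of the two categories above, the following is a minimal Python sketch (not part of the original text; the sample values are hypothetical) that computes descriptive statistics for a small sample and notes how they would be used inferentially.

    import statistics

    sample = [9.9, 10.1, 10.0, 9.8, 10.2, 10.0, 9.95, 10.05]  # hypothetical measurements

    mean = statistics.mean(sample)    # describes the central tendency of the sample
    stdev = statistics.stdev(sample)  # describes the dispersion of the sample (n-1 formula)

    print(f"sample mean  (X-bar) = {mean:.3f}")
    print(f"sample stdev (s)     = {stdev:.3f}")
    # Inferential use: X-bar and s serve as estimates of the unknown
    # universe (population) parameters mu and sigma.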
Every measurement is a process. The process of measurement is the fundamental concept in physics,
and, in practice, in every field of science and engineering.
Identification of a process is also a subjective task, because the whole universe demonstrates one continuous universal "process", and every arbitrarily selected human behaviour can be conceptualized as a process. This aspect of process recognition is closely dependent on human cognitive factors. According to the observations included in the systemic TOGA meta-theory, the concepts of system, process, function and goal are closely and formally connected; in parallel, every process has the system property, i.e., it can be seen as an abstract dynamic system/object and arbitrarily divided into a network of processes. This division depends on the character of the changes and on socio-cognitive factors, such as the perception, tools and goal of the observer.
Fig. 1 Process
SPC
a) Statistical process control (SPC) is a method for achieving quality control in manufacturing
processes. It is a set of methods using statistical tools such as mean, variance and others, to detect
whether the process observed is under control.
b) Statistical Process Control (SPC) is a method of monitoring, controlling and, ideally, improving a
process through statistical analysis. Its four basic steps include measuring the process, eliminating
variances in the process to make it consistent, monitoring the process, and improving the process to
its best target value
c) Statistical process control is the application of statistical methods to identify and control the special causes of variation in a process. Statistical Process Control (SPC) is the equivalent of a histogram plotted on its side over time. Every new point is statistically compared with previous points as well as with the distribution as a whole in order to assess the likely state of process control (i.e. control, shifts, and trends). Forms with zones and rules are created and used to simplify plotting.
d) SPC is about control, capability and improvement, but only if used correctly and in a working environment which is conducive to the pursuit of continuous quality improvement, with the full involvement of every company employee.
SPC is generally accepted to mean control (management) of the process through the use of statistics or statistical methods.
Taking the guesswork out of quality control, Statistical Process Control (SPC) is a scientific, data-driven
methodology for quality analysis and improvement. Statistical Process Control (SPC) is an industry-
standard methodology for measuring and controlling quality during the manufacturing process. Quality data (measurements) are collected from products as they are being produced. These data are then plotted on a graph with pre-determined control limits. Control limits are determined by the capability of the process, whereas specification limits are determined by the customer's needs.
Data that falls within the control limits indicates that everything is operating as expected. Any variation
within the control limits is likely due to a common cause—the natural variation that is expected as part
of the process. If data falls outside of the control limits, this indicates that an assignable cause is likely
the source of the product variation, and something within the process should be changed to fix the issue
before defects occur.
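The logic of the last two paragraphs can be sketched in a few lines of Python. The data and the way the limits are derived here are hypothetical; in practice the control limits would be calculated from data gathered while the process was known to be stable.

    import statistics

    measurements = [499.8, 500.3, 500.1, 499.7, 500.0, 500.4, 499.9, 503.2, 500.2]  # hypothetical

    # For a real chart the centre line and limits would come from a stable reference period.
    center = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    ucl = center + 3 * sigma   # upper control limit
    lcl = center - 3 * sigma   # lower control limit

    for i, x in enumerate(measurements, start=1):
        if x > ucl or x < lcl:
            print(f"sample {i}: {x} -> outside control limits (assignable/special cause likely)")
        else:
            print(f"sample {i}: {x} -> within control limits (common-cause variation)")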
To quantify the return on your SPC investment, start by identifying the main areas of waste and
inefficiency at your facility. Common areas of waste include scrap, rework, over inspection, inefficient
data collection, incapable machines and/or processes, paper-based quality systems and inefficient lines.
You can start to quantify the value of an SPC solution by asking the following questions:
- Are your quality costs really known?
- Can current data be used to improve your processes, or is it just data for the sake of data?
- Are the right kinds of data being collected in the right areas?
- Are decisions being made based on true data?
- Can you easily determine the cause of quality issues?
- Do you know when to perform preventive maintenance on machines?
- Can you accurately predict yields and output results?
There is a lot of confusion about SPC. Walter Shewhart, Don Wheeler, and the BDA booklets "Why
SPC?" and "How SPC?" say one thing. Almost all other books on SPC say something else. The reason
for the disagreement is not that the other books are completely wrong in everything they say (they are wrong in some things). The main difference is that the aim of SPC as intended by Walter Shewhart and the aim described by most other writers are completely different.
Adjustment is all most books cover. It has some use, because it prevents things getting worse. The
point of making the adjustment is to try to keep the product "within specification". This makes the
statistical methods quite complicated.
To be sure that most of the individual results are within specification, when we have only measured a few samples, we have to know that the process is stable and that the individual measurements follow a known distribution.
Improvement is what Dr Shewhart and Dr Deming are talking about, but hardly any books mention it; or if
they do, they are more concerned with the first problem, and so suggest rather ineffective ways of
finding improvements. Adjustment is needed too, but it takes second place, instead of being the sole
aim.
The reason why the approach has to be different is that some "signals" that you can pick up from a
control chart tell you little about the nature of the underlying reason for going out of control. For example,
a slow drift in the mean can rarely point straight to a cause of change, whereas an isolated point quite
unlike the points on either side of it usually tells you all you need to know. At least, it does if the process
operators themselves are keeping the chart: they will usually know just what happened at that point. By
comparison, a slow drift may result from something that started to go wrong long before.
Naturally, if all you are going to do is to alter the controls to bring the mean back into line, you want to
detect a slow drift or change as soon as possible, and put it right. This is why many books suggest such
a wide range of "out of control" signals, such as runs above the mean, or runs in the same direction.
On the other hand, if you want to trace underlying causes, and do something permanent about them,
these signals are usually nothing but a nuisance. The process gets readjusted before you can trace a
cause. So in the Deming-Shewhart approach, the only signal worth much is the simple 3SD above or
below the mean. The distribution, normal or otherwise rarely matters. And of course, we do NOT have to
start with a process that is "under control".
The aim is first of all to find out if the process is stable. If it is not stable, that is, not "under statistical control", the aim is to set priorities for investigation. The investigations are to find ways of changing the process by permanently removing "special causes" of variation.
If, on the other hand the process is already stable, we can use the control chart to demonstrate the
effect of experimental changes. Any change is likely to make the process unstable for a short time, so
the control chart is needed to tell us when the new equilibrium has been reached.
Instead of emphasizing complicated rules for detecting drift or a change of mean, what is needed is
great care to see that the information about factors which might affect the process, and knowledge of
changes, is immediately available to someone who can see the connections, and can get things done. In
this approach, control charts on INPUTS to the process, such as raw materials, temperatures, pressures, and so on, are as important as, or more important than, control charts on the final product. For adjustment, only the final product matters.
Obviously improvement is better in the long run, from all points of view. If simply adjusting the process is
enough to meet specifications, improvement will meet them many times over. And the general effects on
the system which result from improvement will have good effects that spread far and wide.
The statistical methods used in improvement are also much easier to understand and use. The
drawback, in many companies, is that short-term thinking rules, and no-one has the power to change the
system.
A series of line graphs or histograms can be drawn to represent the data as a statistical distribution. It is a picture of the behaviour of the variation in the measurement that is being recorded. If a process is stable, the shape of that distribution will be predictable from one sampling period to the next.
What inspired Shewhart’s development of the statistical control of processes was his observation that
the variability which he saw in manufacturing processes often differed in behaviour from that which he
saw in so-called “natural” processes – by which he seems to have meant such phenomena as molecular
motions.
Wheeler and Chambers combine and summarise these two important aspects as follows:
"While every process displays variation, some processes display controlled variation, while others
display uncontrolled variation."
In particular, Shewhart often found controlled (stable) variation in natural processes and uncontrolled (unstable) variation in manufacturing processes. The difference is clear. In the former case, we know
what to expect in terms of variability; in the latter we do not. We may predict the future, with some
chance of success, in the former case; we cannot do so in the latter.
What is important is the understanding of why correct identification of the two types of variation is so
vital. There are at least three prime reasons.
First, when there are irregular large deviations in output because of unexplained special causes, it is
impossible to evaluate the effects of changes in design, training, purchasing policy etc. which might be
made to the system by management. The capability of a process is unknown, whilst the process is out of
statistical control.
Second, when special causes have been eliminated, so that only common causes remain, improvement
then has to depend upon management action. For such variation is due to the way that the processes
and systems have been designed and built – and only management has authority and responsibility to
work on systems and processes. As Myron Tribus, Director of the American Quality and Productivity
Institute, has often said:
"The people work in a system.
The job of the manager is
to work on the system,
to improve it, continuously,
with their help."
Finally, something of great importance, but which has to be unknown to managers who do not have this
understanding of variation, is that by (in effect) misinterpreting either type of cause as the other, and
acting accordingly, they not only fail to improve matters – they literally make things worse.
These implications, and consequently the whole concept of the statistical control of processes, had a
profound and lasting impact on Dr Deming. Many aspects of his management philosophy emanate from
considerations based on just these notions.
So why SPC?
The plain fact is that when a process is within statistical control, its output is indiscernible from random
variation: the kind of variation which one gets from tossing coins, throwing dice, or shuffling cards.
Whether or not the process is in control, the numbers will go up, the numbers will go down; indeed,
occasionally we shall get a number that is the highest or the lowest for some time. Of course we shall:
how could it be otherwise? The question is - do these individual occurrences mean anything important?
So the main response to the question Why SPC? is therefore this: It guides us to the type of action that
is appropriate for trying to improve the functioning of a process. Should we react to individual results
from the process (which is only sensible, if such a result is signalled by a control chart as being due to a
special cause) or should we instead be going for change to the process itself, guided by cumulated
evidence from its output (which is only sensible if the process is in control)?
Control charts have an important part to play in each of these three Phases. Points beyond control limits
(plus other agreed signals) indicate when special causes should be searched for. The control chart is
therefore the prime diagnostic tool in Phase 1. All sorts of statistical tools can aid Phase 2, including
Pareto Analysis, Ishikawa Diagrams, flow-charts of various kinds, etc., and recalculated control limits will
indicate what kind of success (particularly in terms of reduced variation) has been achieved. The control
chart will also, as always, show when any further special causes should be attended to. Advocates of
the British/European approach will consider themselves familiar with the use of the control chart in
Phase 3. However, it is strongly recommended that they consider the use of a Japanese Control Chart (q.v.) in order to see how much more can be done even in this Phase than is normal practice in this part of the world.
Classical quality control was achieved by observing important properties of the finished product and accepting or rejecting the finished product. As opposed to this technique, statistical process control uses
statistical tools to observe the performance of the production line to predict significant deviations that
may result in rejected products.
The underlying assumption in the SPC method is that any production process will produce products
whose properties vary slightly from their designed values, even when the production line is running
normally, and these variances can be analyzed statistically to control the process. For example, a
breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but
some boxes will have slightly more than 500 grams, and some will have slightly less, producing a
distribution of net weights. If the production process itself changes (for example, the machines doing the
manufacture begin to wear) this distribution can shift or spread out. For example, as its cams and
pulleys wear out, the cereal filling machine may start putting more cereal into each box than it was
designed to. If this change is allowed to continue unchecked, products may be produced that fall outside the tolerances of the manufacturer or consumer, causing them to be rejected.
By using statistical tools, the operator of the production line can discover that a significant change has
been made to the production line, by wear and tear or other means, and correct the problem - or even
stop production - before producing product outside specifications. An example of such a statistical tool
would be the Shewhart control chart, and the operator in the aforementioned example plotting the net
weight in the Shewhart chart.
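A minimal simulation of the cereal-filling example, assuming hypothetical values for the fill target, the natural variation and the wear-induced drift (none of which come from the original text), shows how an operator plotting net weights against Shewhart limits would detect the change:

    import random
    import statistics

    random.seed(1)
    TARGET = 500.0  # grams, the designed fill weight

    # Phase 1: estimate the control limits while the process is running normally.
    baseline = [random.gauss(TARGET, 2.0) for _ in range(100)]
    center = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    # Phase 2: simulate wear that slowly increases the fill weight.
    for hour in range(1, 13):
        drift = 0.8 * hour                       # hypothetical wear effect (grams per hour)
        weight = random.gauss(TARGET + drift, 2.0)
        flag = "INVESTIGATE" if (weight > ucl or weight < lcl) else "ok"
        print(f"hour {hour:2d}: {weight:6.1f} g  {flag}")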
o Measures of central tendency
- Average (mean)
For ungrouped data:
$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n} = \frac{X_1 + X_2 + \dots + X_n}{n}$
where n = number of observed values, X1, X2, ..., Xn = observed values identified by the subscript 1, 2, ..., n (or the general subscript i), and Σ is the symbol meaning "sum of".
For grouped data:
$\bar{X} = \frac{\sum_{i=1}^{h} f_i X_i}{\sum_{i=1}^{h} f_i} = \frac{f_1 X_1 + f_2 X_2 + \dots + f_h X_h}{f_1 + f_2 + \dots + f_h}$
where fi = frequency in a cell (or frequency of an observed value), Xi = cell midpoint (or observed value), h = number of cells (or number of observed values), and n = sum of the frequencies.
- Mode
The mode is the most frequently occurring number in a group of values.
- Median
The median is the middle value of a series of ordered observations; half of the values lie below it and half above it.
Central tendency is the tendency to be the same. It is a central or midway value from which other values deviate in some set pattern. The tendency may be:
o STRONG (the values or measurements will group closely about the central value)
o WEAK (the values or measurements will not group closely about the central value)
The median for grouped (continuous) data:
$Md = L_m + \left(\frac{\tfrac{n}{2} - cf_m}{f_m}\right) i$
where Md = median, Lm = lower boundary of the cell containing the median, n = total number of observations, cfm = cumulative frequency of all cells below Lm, fm = frequency of the median cell, and i = cell interval.
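The three formulas above can be computed directly in Python. This is only a sketch; the observations, cell midpoints, frequencies and cell interval below are hypothetical and are not taken from the text.

    # Ungrouped data: X-bar = (sum of X_i) / n
    ungrouped = [30, 30, 27, 33, 30, 30, 32, 28]              # hypothetical observations
    x_bar = sum(ungrouped) / len(ungrouped)

    # Grouped data: X-bar = (sum of f_i * X_i) / (sum of f_i)
    midpoints = [3.30, 3.35, 3.40, 3.45, 3.50]                 # hypothetical cell midpoints X_i
    freqs     = [3, 3, 9, 32, 38]                              # hypothetical cell frequencies f_i
    n = sum(freqs)
    x_bar_grouped = sum(f * x for f, x in zip(freqs, midpoints)) / n

    # Grouped median: Md = L_m + ((n/2 - cf_m) / f_m) * i
    i = 0.05                                                   # hypothetical cell interval
    boundaries = [3.275 + k * i for k in range(len(freqs) + 1)]
    cum = 0
    for k, f in enumerate(freqs):
        if cum + f >= n / 2:                                   # this cell contains the median
            md = boundaries[k] + ((n / 2 - cum) / f) * i
            break
        cum += f

    print(x_bar, x_bar_grouped, md)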
o Measures of dispersion
- Range
The range is the difference between the highest value in a series of values or sample and the lowest value in that same series:
R = Xh − Xl
where R = range, Xh = highest observation in the series, and Xl = lowest observation in the series.
- Standard deviation
The standard deviation shows the dispersion of the data within the distribution:
$\sigma = \sqrt{\dfrac{n \sum_{i=1}^{n} X_i^2 - \left(\sum_{i=1}^{n} X_i\right)^2}{n(n-1)}}$
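The range and the standard deviation formula above can be evaluated directly; the observations used here are hypothetical.

    import math

    data = [9.9, 10.1, 10.0, 9.8, 10.2, 10.0]  # hypothetical observations

    # Range: R = X_h - X_l
    r = max(data) - min(data)

    # Standard deviation: sigma = sqrt((n*sum(X_i^2) - (sum(X_i))^2) / (n*(n-1)))
    n = len(data)
    sigma = math.sqrt((n * sum(x * x for x in data) - sum(data) ** 2) / (n * (n - 1)))

    print(r, sigma)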
o Control Charts
- VARIABLE charts
Trial control limits for the charts are established at ±3 standard deviations from the central line, as shown by the usual equations (a computational sketch is given at the end of this subsection).
- ATTRIBUTE charts
In real life, X̄ seldom exactly equals µ, due to the differences in the number of measurements used to calculate the two values. The reason for this is to offset the normal bias of a small sample size; when the sample size becomes large, there is little difference between the formulas.
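The limit equations themselves are not reproduced in this text, so the following is only a sketch of how trial limits for X-bar and R charts are commonly computed from subgroup data, assuming the usual tabulated constants A2, D3 and D4 for subgroups of size 5; the subgroup averages and ranges are hypothetical.

    # Subgroup averages and ranges (hypothetical data, subgroup size n = 5)
    xbars  = [10.02, 9.98, 10.01, 10.05, 9.97, 10.00]
    ranges = [0.30, 0.25, 0.35, 0.28, 0.32, 0.27]

    A2, D3, D4 = 0.577, 0.0, 2.114   # tabulated SPC constants for n = 5

    xbar_bar = sum(xbars) / len(xbars)   # grand average: central line of the X-bar chart
    r_bar = sum(ranges) / len(ranges)    # average range: central line of the R chart

    ucl_x = xbar_bar + A2 * r_bar        # trial limits for the X-bar chart
    lcl_x = xbar_bar - A2 * r_bar
    ucl_r = D4 * r_bar                   # trial limits for the R chart
    lcl_r = D3 * r_bar

    print(ucl_x, lcl_x, ucl_r, lcl_r)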
1. DISCRETE DISTRIBUTIONS
Discrete distributions deal with random variables that can take on a finite or countably infinite number of values.
1.1 HYPERGEOMETRIC DISTRIBUTION
The hypergeometric probability distribution occurs when the population is finite and the random sample is taken without replacement. It is used to evaluate hypotheses about sampling without replacement from small populations. The formula for the hypergeometric distribution is constructed from three combinations (total combinations, nonconforming combinations, and conforming combinations) and is given by
$P(X = k) = \frac{\binom{D}{k}\binom{N - D}{n - k}}{\binom{N}{n}}$
where $\binom{N}{n}$ is a binomial coefficient, N is the population size, D is the number of nonconforming units in the population, and n is the sample size.
1.2 BINOMIAL DISTRIBUTION
The probability that a random variable X with binomial distribution B(n, p) is equal to the value k, where k = 0, 1, ..., n, is given by
$P(X = k) = \binom{n}{k} p^{k} (1 - p)^{n - k}$
1.3 POISSON DISTRIBUTION
$P(X = k) = \frac{\lambda^{k} e^{-\lambda}}{k!}$
where λ is the average number of occurrences in the specified interval. For the Poisson distribution, the mean and the variance are both equal to λ.
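The three probability functions above can be evaluated with nothing more than the Python standard library; the parameter values in the example calls are hypothetical.

    from math import comb, exp, factorial

    # Hypergeometric: population of N units with D nonconforming, sample of n without replacement
    def hypergeometric_pmf(k, N, D, n):
        return comb(D, k) * comb(N - D, n - k) / comb(N, n)

    # Binomial B(n, p)
    def binomial_pmf(k, n, p):
        return comb(n, k) * p ** k * (1 - p) ** (n - k)

    # Poisson with mean lam (average occurrences in the specified interval)
    def poisson_pmf(k, lam):
        return lam ** k * exp(-lam) / factorial(k)

    print(hypergeometric_pmf(1, N=20, D=4, n=5))
    print(binomial_pmf(2, n=10, p=0.1))
    print(poisson_pmf(0, lam=1.5))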
2. CONTINUOUS DISTRIBUTIONS
A continuous random variable may assume an infinite number of values over a finite or infinite range. The probability distribution of a continuous random variable x is often called the probability density function f(x).
2.1 NORMAL DISTRIBUTION
The normal curve is a continuous probability distribution. Probability problems that involve continuous data can be solved using the normal probability distribution. The so-called "standard normal distribution" is obtained by taking µ = 0 and σ = 1 in the general normal distribution. An arbitrary normal value x can be converted to a standard normal value by the transformation z = (x − µ)/σ.
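A small sketch of the standardizing transformation and of evaluating a normal probability via the error function; the process mean and standard deviation used are hypothetical.

    from math import erf, sqrt

    def standard_normal_cdf(z):
        # P(Z <= z) for the standard normal distribution
        return 0.5 * (1 + erf(z / sqrt(2)))

    mu, sigma = 500.0, 2.0          # hypothetical process mean and standard deviation
    x = 505.0
    z = (x - mu) / sigma            # convert x to a standard normal value
    print(f"P(X <= {x}) = {standard_normal_cdf(z):.4f}")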
FLOWCHART
INTRODUCTION
In the systematic planning or examination of any process, whether it is a clerical, manufacturing, or
managerial activity, it is necessary to record the series of events and activities, stages and decisions in
a form which can be easily understood and communicated to all.
If improvements are to be made, the facts relating to the existing method must be recorded first. The statements defining the process should lead to its understanding and will provide the basis of any critical examination necessary for the development of improvements. It is essential, therefore, that the descriptions of processes are accurate, clear and concise.
Process mapping is a communication tool that helps an individual or an improvement team understand a system or process and identify opportunities for improvement. Mapping by flowcharts is therefore frequently used to communicate the components of a system or process to others whose skills and knowledge are needed in the improvement effort.
Purpose:
- The purpose of the flowchart analysis is to learn why the current system/process operates in the manner it does, and to prepare a method for objective analysis.
- The flowchart technique can also be used to study a simple system and how it would look if there were no problems.
- A flowchart is a pictorial representation of the steps in a given process. The steps are presented graphically in sequence so that team members can examine the order presented and come to a common understanding of how the process operates.
Flowcharts are maps or graphical representations of a process. Steps in a process are shown with
symbolic shapes, and the flow of the process is indicated with arrows connecting the symbols.
Computer programmers popularized flowcharts in the 1960's, using them to map the logic of programs.
In quality improvement work, flowcharts are particularly useful for displaying how a process currently
functions or could ideally function. Flowcharts can help you see whether the steps of a process are
logical, uncover problems or miscommunications, define the boundaries of a process, and develop a
common base of knowledge about a process. Flowcharting a process often brings to light
redundancies, delays, dead ends, and indirect paths that would otherwise remain unnoticed or ignored.
But flowcharts don't work if they aren't accurate, if team members are afraid to describe what actually
happens, or if the team is too far removed from the actual workings of the process.
Flowcharts can be used to describe an existing process or to present a proposed change in the flow of
a process. Flowcharts are the easiest way to "picture" a process, especially if it is very complex.
Flowcharts should include every activity in the process. A flowchart should be the first step in identifying
problems and targeting areas for improvement.
THE FLOWCHART
1. Definition
(1) A flowchart is a graphical representation of the specific steps, or activities, of a process.
(2) A flowchart is a pictorial (graphical) representation of the process flow showing the process inputs,
activities, and outputs in order in which they occur.
Flowchart Categories:
- Theoretical flowchart: a flowchart that is prescribed by some overarching policy, procedure, or operations manual. Theoretical flowcharts describe the way the process "should operate".
- Actual flowchart: depicts how the process is actually working.
- Best flowchart: the flowchart produced when the people involved in the process develop the optimum (or close to optimum) flow based on the process expertise of the group.
Types of Flowchart:
a. System Flowchart
A system flowchart is a pictorial representation of the sequence of operations and decisions that make up a process. It shows what is being done in a process.
b. Layout Flowchart
A layout flowchart depicts the floor plan of an area, usually including the flow of paperwork or goods and the location of equipment, file cabinets, storage areas, and so on. These flowcharts are especially helpful in improving the layout to utilize a space more efficiently.
There are many varieties of flowcharts and scores of symbols that you can use. Experience has shown that there are three main types that work for almost all situations:
- High-level flowcharts map only the major steps in a process for a good overview.
- Detailed flowcharts show a step-by-step mapping of all events and decisions in a process.
- Deployment flowcharts organize the flowchart by columns, with each column representing a person or department involved in a process.
The trouble spots in a process usually begin to appear as a team constructs a detailed flowchart. Although there are many symbols that can be used in flowcharts to represent different kinds of steps, accurate flowcharts can be created using very few (e.g. oval, rectangle, diamond, delay, cloud). Flowcharts had been used for years in industrial engineering departments since the 1930s, but they became very popular in the 1960s when computer programmers used them extensively to map the logic of their programs.
Because a flowchart is a graphical representation, it is logical that there are symbols to represent the different types of activities. A flowchart is employed to provide a diagrammatic picture, by means of a set of symbols, showing all the steps or stages in a process, project or sequence of events, and it is of considerable assistance in documenting and describing a process as an aid to examination and improvement.
Although there are many different types of flowchart symbols, some of the more common symbols are shown in the figure below.
2. Flowcharting a Process
Steps in Flowcharting a Process:
1. Decide on the process to flowchart.
2. Define the boundaries of the process: the beginning and the end.
3. Describe the beginning step of the process in an oval.
4. Ask yourself "what happens next?" and add the step to the flowchart as a rectangle. Continue mapping out the steps as rectangles connected by one-way arrows.
5. When a decision point is reached, write the decision in the form of a question in a diamond and develop the "yes" and "no" paths. Each yes/no path must reenter the process or exit somewhere.
6. Repeat steps 4 and 5 until the last step in the process is reached.
7. Describe the ending boundary/step in an oval.

To Construct an Effective Flowchart:
1. Define the process boundaries with starting and ending points.
2. Complete the big picture before filling in the details.
3. Clearly define each step in the process. Be accurate and honest.
4. Identify time lags and non-value-adding steps.
5. Circulate the flowchart to other people involved in the process to get their comments.
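As a purely illustrative sketch (the process, step names and shapes below are hypothetical and not taken from this text), the mapping of flowchart symbols onto a drawing can be automated by emitting Graphviz DOT source, which any DOT renderer can turn into a diagram:

    # Each step becomes a node; shapes follow the usual flowchart symbols
    # (oval = start/stop, box = activity, diamond = decision).
    steps = [
        ('start',   'oval',    'Receive order'),
        ('check',   'diamond', 'Order complete?'),
        ('fill',    'box',     'Fill order'),
        ('contact', 'box',     'Contact customer'),
        ('end',     'oval',    'Ship order'),
    ]
    shape_map = {'oval': 'ellipse', 'box': 'box', 'diamond': 'diamond'}

    lines = ['digraph process {']
    for name, shape, label in steps:
        lines.append(f'  {name} [shape={shape_map[shape]}, label="{label}"];')
    lines += [
        '  start -> check;',
        '  check -> fill    [label="yes"];',
        '  check -> contact [label="no"];',
        '  contact -> check;',
        '  fill -> end;',
        '}',
    ]
    print("\n".join(lines))   # paste the output into any Graphviz/DOT renderer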
When drawing a flowchart, constantly ask "what happens next?", "is there a decision made at this
point?", "does this reflect reality?", "who else knows this process?", etc. When possible, do a
walk-through of the process to see if any steps have been left out or extras added that shouldn't be
there. The key is not to draw a flowchart representing how the process is supposed to operate, but to
determine how it actually does operate. A good flowchart of a bad process will show how illogical or
wasteful some of the steps or branches are.
3. Symbols in a Flowchart:
Defines the boundaries of a process; shows the start or stop of a process. The
ellipse represents the beginning or ending (process boundaries) of a process
Designates a single step in a process. Briefly describe the step inside the box. The
rectangle is used to denote a single step, or activity, in the process.
The general symbol used to depict a processing operation
A diamond signifies a decision point in the process. Write the type of decision
made inside the diamond in the form of a question. The question is answered by
two arrows-- "yes" and "no" --which lead to two different branches. A diamond is
used to denote a decision point in the process. The answer to the question is
usually of the yes or no variety. It also includes variable type decisions such as
which of several categories a process measurement falls into
A small circle that surrounds a letter signifies where you pick up a process on the
same page; represents a connection. A small circle with either a number or letter
inside denotes the point where the process is picked up again
The general form used to represent input or output media, operations, or processes is a parallelogram.
ADVANTAGES OF FLOWCHARTS
- A flowchart functions as a communications tool. It provides an easy way to convey ideas between engineers, managers, hourly personnel, vendors, and others in the extended process. It is a concrete, visual way of representing complex systems.
- A flowchart functions as a planning tool. Designers of processes are greatly aided by flowcharts. They enable a visualization of the elements of new or modified processes and their interactions while still in the planning stages.
- A flowchart provides an overview of the system. The critical elements and steps of the process are easily viewed in the context of a flowchart.
- A flowchart removes unnecessary details and breaks down the system so designers and others get a clear, unencumbered look at what they're creating.
- A flowchart defines roles. It demonstrates the functions of personnel, workstations and sub-processes in a system. It also shows the personnel, operations, and locations involved in the process.
- A flowchart demonstrates interrelationships. It shows how the elements of a process relate to each other.
- A flowchart promotes logical accuracy. It enables the viewer to spot errors in logic. Planning is facilitated because designers have to clearly break down all of the elements and operations of the process.
- A flowchart facilitates troubleshooting. It is an excellent diagnostic tool. Problems in the process, failures in the system, and barriers to communication can be detected by using a flowchart.
- A flowchart documents a system. This record of a system enables anyone to easily examine and understand the system. Flowcharts facilitate changing a system because the documentation of what exists is available.
Getting a picture of how a clinic functions begins with a high-level flowchart, usually depicted horizontally and called the core process. It is the "backbone" of the clinic operations to which other sub-processes can be attached. Once the core process flowchart is done (this should take about an hour), it should be given to staff and physicians for their input. It is common for the flowcharting process to have many iterations, as revisions and changes almost always need to be made before the flowchart accurately reflects the actual process. The trick is to keep the initial charting "high level" and not get stuck in details just yet, and to segment the core process into components, such as access, intake, assessment/evaluation, treatment, and discharge/follow-up, for manageability and to help identify indicators.
Once the core process is agreed upon by those who work in the clinic, the next step is to decide which processes need attention (the bottlenecks for patient flow).
The hardest part of flowcharting is keeping it simple enough to be workable, but with enough detail to show the trouble spots. It is therefore suggested that the flowcharts for sub-processes be drawn vertically (as opposed to the core process, which goes horizontally), keeping the smoothest flow to the left of the page and the complexity to the right. This does not always work out perfectly, but when it does, it allows us to see the non-value-added steps very quickly. Remember also to leave lots of room for comments, since revisions are the rule: the more buy-in, the more revisions you have. Another trick is to capture the "issues" that surface when you are trying to create a flowchart. These issues may not be boxes in the process, but they will eventually be clues as to what needs to be addressed. For example, patients forget to bring their x-rays. So that such issues do not slow the team down in moving ahead with the process steps, they should be written down at the bottom of the flowchart.
Once a flowchart has been created and agreed upon, the next step is to study it to see where improvements can be made. It is helpful to create a flowchart of the actual process first so you can truly see where the improvement opportunities are. If you are creating a new process, then the ideal flowchart can be created first. Having the processes clearly outlined will allow everyone to see what is supposed to happen and will also allow you to identify performance indicators/measures that will tell you how your clinic and processes are functioning.
Performance measures are best identified by dividing your flowchart into segments. These segments can be gleaned from the process itself. For example, in the clinic, there is a segment that deals with patients calling in for appointments, a second segment that involves the registration of the patient when he/she arrives, a third segment that is the actual office visit, then a discharge segment. The question you want to ask is: how is each of these segments working? How would I know? To answer that question, you need to look at where performance measures would fit.
CHECK SHEET
Definition: A simple data collection form consisting of multiple categories with definitions. Data are
entered on the form with a simple tally mark each time one of the categories occurs. A check sheet is a
structured, prepared form for collecting and analyzing data. This is a generic tool that can be adapted
for a wide variety of purposes.
A check sheet is a data-recording device. A check sheet is essentially a list of categories. As events in these categories occur, a check or mark is placed on the check sheet in the appropriate category.
When to Use
- When data can be observed and collected repeatedly by the same person or at the same location.
- When collecting data on the frequency or patterns of events, problems, defects, defect location, defect causes, etc.
- When collecting data from a production process.
[Figure: example check sheet with tally marks recorded against categories]
Procedure
1. Decide what event or problem will be observed. Develop operational definitions.
2. Decide when data will be collected and for how long.
3. Design the form. Set it up so that data can be recorded simply by making check marks or Xs or
similar symbols and so that data do not have to be recopied for analysis.
4. Label all spaces on the form.
5. Test the check sheet for a short trial period to be sure it collects the appropriate data and is
easy to use.
6. Each time the targeted event or problem occurs, record data on the check sheet.
How to Construct:
1. Clearly define the objective of the data collection.
2. Determine other information about the source of the data that should be recorded, such as
shift, date, or machine.
3. Determine and define all categories of data to be collected.
4. Determine the time period for data collection and who will collect the data.
5. Determine how instructions will be given to those involved in data collection.
6. Design a check sheet by listing categories to be counted.
7. Pilot the check sheet to determine ease of use and reliability of results.
8. Modify the check sheet based on results of the pilot.
Tips:
- Use Ishikawa diagrams or brainstorming to determine the categories to be used on the check sheet.
The figure below shows a check sheet used to collect data on telephone interruptions. The tick marks were added as data were collected over several weeks.
Below is a sample of a check sheet form used by an inspector/operator in a manufacturing area.
EXAMPLE : 1
In a new plant, time cards need to be submitted to the payroll office within 30 minutes after the close of the last shift on Friday. After twenty weeks of operation, the payroll office complained to the plant manager that this requirement was not being adhered to; as a result the payroll checks were not being processed on time and workers were starting to complain. The requirement is 30 minutes.
30 30 27 33
30 30 32 28
28 30 32 29
29 30 31 29
31 30 31 30
27 |         = 1
28 ||        = 2
29 |||       = 3
30 ||||||||  = 8
31 |||       = 3
32 ||        = 2
33 |         = 1
TOTAL = 20
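The tally in Example 1 can be reproduced mechanically; this sketch uses Python's collections.Counter on the twenty submission times listed above.

    from collections import Counter

    # Time-card submission times (minutes) from Example 1
    times = [30, 30, 27, 33, 30, 30, 32, 28, 28, 30,
             32, 29, 29, 30, 31, 29, 31, 30, 31, 30]

    tally = Counter(times)
    for value in sorted(tally):
        print(f"{value:>3} {'|' * tally[value]:<10} = {tally[value]}")
    print("TOTAL =", sum(tally.values()))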
EXAMPLE : 2
Below are data taken from the pressing machine that produces the ceramic part for an IC ROM before it is sent to the funnel burner.
9.85 10.00 9.95 10.00 10.00 10.00 9.95 10.15 9.85 10.05
9.90 10.05 9.85 10.10 10.00 10.15 9.90 10.00 9.80 10.00
9.90 10.05 9.90 10.10 9.95 10.00 9.90 9.90 9.90 10.00
9.90 10.10 9.90 10.20 9.95 10.00 9.85 9.95 9.95 10.10
9.95 10.15 9.95 10.00 9.90 10.00 10.00 10.10 9.95 10.15
9.80  |                = 1
9.85  ||||             = 4
9.90  ||||| |||||      = 10
9.95  ||||| ||||       = 9
10.00 ||||| ||||| |||  = 13
10.05 |||              = 3
10.10 |||||            = 5
10.15 ||||             = 4
10.20 |                = 1
TOTAL = 50
EXAMPLE : 3
Below are 100 metal thickness measurements taken on an optic component to determine the data distribution.
3.56 3.46 3.48 3.50 3.42 3.43 3.52 3.49 3.44 3.50
3.48 3.56 3.50 3.52 3.47 3.48 3.46 3.50 3.56 3.38
3.41 3.37 3.47 3.49 3.45 3.44 3.50 3.49 3.46 3.46
3.55 3.52 3.44 3.50 3.45 3.44 3.48 3.46 3.52 3.46
3.48 3.48 3.32 3.40 3.52 3.34 3.46 3.43 3.30 3.46
3.59 3.63 3.59 3.47 3.38 3.52 3.45 3.48 3.31 3.46
3.40 3.54 3.46 3.51 3.48 3.50 3.68 3.60 3.46 3.52
3.48 3.50 3.56 3.50 3.52 3.46 3.48 3.46 3.52 3.56
3.52 3.48 3.46 3.45 3.46 3.54 3.54 3.48 3.49 3.41
3.41 3.45 3.34 3.44 3.47 3.47 3.41 3.48 3.54 3.47
No.  Range (Lowest ~ Highest)  Cell Midpoint  Tally
1    3.275 ~ 3.325             3.30   |||                                             = 3
2    3.325 ~ 3.375             3.35   |||                                             = 3
3    3.375 ~ 3.425             3.40   ||||| ||||                                      = 9
4    3.425 ~ 3.475             3.45   ||||| ||||| ||||| ||||| ||||| ||||| ||          = 32
5    3.475 ~ 3.525             3.50   ||||| ||||| ||||| ||||| ||||| ||||| ||||| |||   = 38
6    3.525 ~ 3.575             3.55   ||||| |||||                                     = 10
7    3.575 ~ 3.625             3.60   |||                                             = 3
8    3.625 ~ 3.675             3.65   |                                               = 1
9    3.675 ~ 3.725             3.70   |                                               = 1
TOTAL = 100
EXERCISE : 1
In his last 70 games a professional basketball player made the following scores:
10 17 9 17 18 20 16
7 17 19 13 15 14 13
12 13 15 14 13 10 14
11 15 14 11 15 15 16
9 18 15 12 14 13 14
13 14 16 15 16 15 15
14 15 15 16 13 12 16
10 16 14 13 16 14 15
6 15 13 16 15 16 16
12 14 16 15 16 13 15
EXERCISE : 2
The ABC Company is planning to analyze the average weekly wage distribution of its 58 employees
during fiscal year 2003. The 58 weekly wages are available as raw data corresponding to the alphabetic
order of the employees’ names as below:
EXERCISE : 3
Thickness measurements on pieces of silicon (mmX0.001)
790 1170 970 940 1050 1020 1070 790
1340 710 1010 770 1020 1260 870 1400
1530 1180 1440 1190 1250 940 1380 1320
1190 750 1280 1140 850 600 1020 1230
1010 1040 1050 1240 1040 840 1120 1320
1160 1100 1190 820 1050 1060 880 1100
1260 1450 930 1040 1260 1210 1190 1350
1240 1490 1490 1310 1100 1080 1200 880
820 980 1620 1260 760 1050 1370 950
1220 1300 1330 1590 1310 830 1270 1290
1000 1100 1160 1180 1010 1410 1070 1250
1040 1290 1010 1440 1240 1150 1360 1120
980 1490 1080 1090 1350 1360 1100 1470
1290 990 790 720 1010 1150 1160 850
1360 1560 980 970 1270 510 960 1390
1070 840 870 1380 1320 1510 1550 1030
1170 920 1290 1120 1050 1250 960 1550
1050 1060 970 1520 940 800 1000 1110
1430 1390 1310 1000 1030 1530 1380 1130
1110 950 1220 1160 970 940 880 1270
750 1010 1070 1210 1150 1230 1380 1620
1760 1400 1400 1200 1190 970 1320 1200
1460 1060 1140 1080 1210 1290 1130 1050
1230 1450 1150 1490 980 1160 1520 1160
1160 1700 1520 1220 1680 900 1030 850
PARETO ANALYSIS (The 80:20 Rule)
INTRODUCTION.
The Pareto effect is named after Vilfredo Pareto, an economist and sociologist who lived from 1848 to
1923. Originally trained as an engineer he was a one time managing director of a group of coalmines.
Later he took the chair of economics at Lausanne University, ultimately becoming a recluse. Mussolini
made him a senator in 1922 but by his death in 1923 he was already at odds with the regime. Pareto was an elitist who believed that the concept of the vital few and the trivial many extended to human beings. This effect, known as the 80:20 rule, can be observed in action so often that it seems to be almost a universal truth. As several economists have pointed out, at the turn of the century the bulk of the country's wealth was in the hands of a small number of people.
Much of his writing is now out of favour and some people would like to re-name the effect after Mosca,
or even Lorenz. However it is too late now – the Pareto principle has earned its place in the manager’s
kit of productivity improvement tools.
This fact gave rise to the Pareto effect or Pareto’s law: a small proportion of causes produce a large
proportion of results. Thus frequently a vital few causes may need special attention while the trivial many may warrant very little. It is this phrase that is most commonly used in talking about the Pareto effect –
‘the vital few and the trivial many’. A vital few customers may account for a very large percentage of
total sales. A vital few taxes produce the bulk of total revenue. A vital few improvements can produce
the bulk of the results.
This method stems in the first place from Pareto’s suggestion of a curve of the distribution of wealth in a
book of 1896. Whatever the source, the phrase of ‘the vital few and the trivial many’ deserves a place in
every manager’s thinking. It is itself one of the most vital concepts in modern management. The results
of thinking along Pareto lines are immense.
In practically every industrial country a small proportion of all the factories employ a disproportionate
number of factory operatives. In some countries 15 percent of the firms employ 70 percent of the
people. This same state of affairs is repeated time after time. In retailing for example, one usually finds
that up to 80 percent of the turnover is accounted for by 20 percent of the lines.
For example, we may have a large number of customer complaints, a lot of shop floor accidents, a high
percentage of rejects, and a sudden increase in costs etc. The first stage is to carry out a Pareto
analysis. This is nothing more than a list of causes in descending order of their frequency or
occurrence. This list automatically reveals the vital few at the top of the list, gradually tailing off into the
trivial many at the bottom of the list. Management’s task is now clear and unavoidable: effort must be
expended on those vital few at the head of the list first. This is because nothing of importance can take
place unless it affects the vital few. Thus management’s attention is unavoidably focussed where it will
do most good.
Another example is stock control. You frequently find an elaborate procedure for stock control with
considerable paperwork flow. This is usually because the systems and procedures are geared to the
most costly or fast-moving items. As a result trivial parts may cost a firm more in paperwork than they
cost to purchase or to produce. An answer is to split the stock into three types, usually called A, B and
C. Grade A items are the top 10 percent or so in money terms while grade C are the bottom 50-75
percent. Grade B are the items in between. It is often well worthwhile treating these three types of stock
in a different way leading to considerable savings in money tied up in stock.
Production control can use the same principle by identifying these vital few processes, which control the
manufacture, and then building the planning around these key processes. In quality control
concentrating in particular on the most troublesome causes follows the principle. In management
control, the principle is used by top management looking continually at certain key figures.
Thus it is clear that the Pareto concept – ‘the vital few and the trivial many’ – is of utmost importance to
management.
Pareto charts show where effort can be focused for maximum benefit. It may take two or more Pareto
charts to focus the problem to a level that can be successfully analyzed.
The Pareto chart combines a bar graph with a cumulative line graph. The bars are placed from left to
right in descending order. The cumulative line graph shows the percent contribution of all preceding
bars. However, the graph has the advantage of providing a visual impact of those vital few
characteristics that need attention.
How to Construct:
1. Determine the method of classifying the data: by problem, cause, type, nonconformity, and so forth. Decide the problem which is to be analyzed.
2. Decide the period over which data are to be collected.
3. Identify the main causes or categories of the problem. Collect data for an appropriate time interval.
4. Tabulate the frequency of each category and list in descending order of frequency. (If there are too many categories it is permissible to group some into a miscellaneous category for the purpose of analysis and presentation.)
5. Arrange the data as in a bar chart. Summarize the data and rank order categories from largest to smallest.
6. Construct the diagram with the columns arranged in order of descending frequency.
7. Determine cumulative totals and percentages and construct the cumulative percentage curve. (A minimal computational sketch follows this list.)
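The sketch below covers steps 4 to 7 computationally, using the rewrite-cause counts from Example 1 later in this section; plotting is left out, only the ranking and cumulative percentages are produced.

    # Causes and observed frequencies (from Example 1 below)
    causes = {
        'Misspelled words': 33,
        'Incorrect punctuation': 21,
        'Incorrect spacing': 8,
        'Incorrect signature block': 6,
        'Incorrect heading': 3,
    }

    total = sum(causes.values())
    ranked = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)  # descending order

    cumulative = 0
    for cause, count in ranked:
        cumulative += count
        pct = 100 * count / total
        cum_pct = 100 * cumulative / total
        print(f"{cause:<26} {count:>3} {pct:6.1f}% {cum_pct:6.1f}%")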
For example, one client created a Pareto chart of insurance claim errors by error code and found no
one code that accounted for a significant amount of errors. They then created a Pareto chart of errors
by insurance company and found that one company was the source of most of the errors. Once they
approached the company with compelling data it was easy to convince them their process needed
fixing.
CONCLUSION
Even in circumstances which do not strictly conform to the 80 : 20 rule the method is an extremely
useful way to identify the most critical aspects on which to concentrate. When used correctly Pareto
Analysis is a powerful and effective tool in continuous improvement and problem solving to separate the
‘vital few’ from the ‘many other’ causes in terms of cost and/or frequency of occurrence.
It is the discipline of organising the data that is central to the success of using Pareto Analysis. Once
calculated and displayed graphically, it becomes a selling tool to the improvement team and
management, raising the question why the team is focusing its energies on certain aspects of the
problem.
EXAMPLE :1
Checklist/tally of causes of rewrites of letters:
CAUSE OF REWRITE            NUMBER OF OCCURRENCES      %        % CUM
Misspelled words            33                         46.5%    46.5%
Incorrect punctuation       21                         29.6%    76.1%
Incorrect spacing            8                         11.3%    87.3%
Incorrect signature block    6                          8.5%    95.8%
Incorrect heading            3                          4.2%   100.0%
TOTAL                       71                        100.0%
[Pareto chart: number of occurrences by cause of rewrite (bars in descending order) with cumulative-percentage line]
EXAMPLE :2
COATING MACHINE NONCONFORMITY
Machine   NC    %         % CUM
M/C-1     35    14.77%    14.77%
M/C-2     51    21.52%    36.29%
M/C-3     44    18.57%    54.85%
M/C-4     47    19.83%    74.68%
M/C-5     29    12.24%    86.92%
M/C-6     31    13.08%   100.00%
TOTAL    237
[Pareto chart: nonconformities by machine (bars in descending order: M/C-2, M/C-4, M/C-3, M/C-1, M/C-6, M/C-5) with cumulative-percentage line]
EXAMPLE :3
Summary sheet for Shock Absorber Line
Category        No. Defective    %        % Cum
Spot Weld       41               48.81%   48.81%
Leakers         23               27.38%   76.19%
Orifice          8                9.52%   85.71%
Steel (crimp)    6                7.14%   92.86%
Oil (Drift)      5                5.95%   98.81%
Rod (Chrome)     1                1.19%  100.00%
TOTAL           84
[Pareto chart: number of defects by category (bars in descending order) with cumulative-percentage line]
EXAMPLE :4
Listed below are the defect types for a corded phone and the repair cost per defect.
Defect Type   No. Defects   Cost/Defect ($)
Impurities    278           4
Dented        145           0.5
Crack          30           40
Scratch        25           1
Broken         20           10
Bend            2           10
Defect Type   Defects   % Defects   Cost/Defect($)   % C/D     % CUM Defects   % CUM C/D
Impurities    278       55.60%       5                7.35%     55.60%           7.35%
Dented        145       29.00%       2                2.94%     84.60%          10.29%
Crack          30        6.00%      40               58.82%     90.60%          69.12%
Scratch        25        5.00%       1                1.47%     95.60%          70.59%
Broken         20        4.00%      10               14.71%     99.60%          85.29%
Bend            2        0.40%      10               14.71%    100.00%         100.00%
TOTAL         500      100.00%      68              100.00%
[Pareto chart: number of defects by defect type (bars in descending order) with cumulative-percentage line]
[Pareto chart: defect types ranked by cost per defect with cumulative-percentage line]
[Pareto chart: incremental cost (% defects x cost/defect) by defect type with cumulative-percentage line]
EXERCISE :1
Two partners in an upholstery business are interested in decreasing the number of complaints from customers who have had furniture reupholstered by their staff. For the past six months, they have been keeping detailed records of the complaints and what had to be done to correct the situations. To help their analysis of which problems to attack first, they decide to create several Pareto charts. Use the following table to create Pareto charts for the number of complaints, the percentage of complaints, and the dollar loss associated with complaints.
CATEGORY                   No. of Complaints   % of Complaints   Dollar Loss
Loose threads              14                  28                294
Incorrect hemming           8                  16                216
Material flaws              2                   4                120
Stitching flaws             6                  12                126
Pattern alignment errors    4                   8                240
Color mismatch              2                   4                180
Trim errors                 6                  12                144
Button problems             3                   6                 36
Miscellaneous               5                  10                 60
a. Make the Pareto diagram and discuss it.
EXERCISE :2
During the past month, a customer-satisfaction survey was given to 200 customers at a local fast-food restaurant. The following complaints were lodged:
COMPLAINT             No. of Complaints
Cold food             105
Flimsy utensils        20
Food tastes bad        10
Salad not fresh        94
Poor service           15
Food greasy             9
Lack of courtesy        5
Lack of cleanliness    25
a. Make the Pareto diagram.
EXERCISE :3
Create a Pareto diagram using the table listed below and the following information about the individual costs associated with correcting each type of nonconformity. Based on your Pareto diagram showing the total cost associated with each type of nonconformity, where should PT. Tool concentrate its improvement efforts?
HISTOGRAM
INTRODUCTION
In industry, business, and government the mass of data that has been collected is voluminous. Some means of summarizing the data are needed to show what value the data tend to cluster about and how the data are dispersed or spread out. Two techniques are needed to accomplish this summarization of data:
- Graphical
The graphical technique is a plot or picture of a frequency distribution, which is a summarization of how the data points (observations) occur within each subdivision of observed values.
- Analytical
The analytical technique summarizes data by computing a measure of central tendency (average, median, and mode) and a measure of dispersion (range and standard deviation).
Frequency distributions of variables data are commonly presented as frequency polygons (histograms). In a histogram, vertical bars are drawn, each with a width corresponding to the width of the class interval and a height corresponding to the frequency of that interval. The bars share common sides with no space between them.
Frequency distributions of attribute data are commonly presented in a bar chart. For attribute data, a bar chart is used to graphically display the data. It is constructed in exactly the same way as a histogram, except that instead of using a vertical bar spanning the entire class interval, we use a line or bar centered on each attribute category.
A frequency distribution shows how often each different value in a set of data occurs. A histogram is the
most commonly used graph to show frequency distributions. It looks very much like a bar chart, but
there are important differences between them.
A histogram is the graphical version of a table which shows what proportion of cases fall into each of several or many specified categories. The categories are usually specified as non-overlapping intervals of some variable. A histogram divides the range of data into intervals and shows the number, or percentage, of observations that fall into each interval. The categories (bars) must be adjacent.
Histograms describe the variation in the process. The histogram graphically estimates the process capability and, if required, the relationship to the specifications and the nominal (target). A histogram consists of a set of rectangles that represent the frequency of observed values in each category.
Histograms have certain identifiable characteristics. One characteristic of the distribution concerns the symmetry, or lack of symmetry, of the data. A final characteristic concerns the number of modes, or peaks, in the data. There can be one mode, two modes (bi-modal), or multiple modes. A histogram is like a snapshot of the process showing the variation. Histograms can determine the process capability, compare with specifications, suggest the shape of the population, and indicate discrepancies in the data, such as gaps.
Histograms show the spread, or dispersion, of data. A histogram of variable data (for example, one produced in Excel) can be used to determine process capability; the customer's upper specification limit (USL) and lower specification limit (LSL) determine whether the process is capable. With attribute data (i.e., defects), capability assumes that the process must deliver zero defects. The histogram also suggests the shape of the population and indicates if there are any gaps in the data.
WHEN TO USE
- When the data are numerical.
- When you want to see the shape of the data's distribution, especially when determining whether the output of a process is distributed approximately normally.
- When analyzing whether a process can meet the customer's requirements.
- When analyzing what the output from a supplier's process looks like.
- When seeing whether a process change has occurred from one time period to another.
- When determining whether the outputs of two or more processes are different.
- When you wish to communicate the distribution of data quickly and easily to others.
OBJECTIVE
The main purpose of histograms is to provide clues and information for the reduction of variation. These come mainly from the identification and interpretation of patterns of variation. There are two types of variation pattern:
- Random (from chance or common causes)
Because they are repeated frequently in real life, random patterns have been found to be quite useful and have therefore been documented and quantified.
- Non-random (from assignable or special causes)
Non-random patterns are patterns of error and are used in SPC, especially control charting, to assist in the reduction of variation.
The purpose of a histogram is to graphically summarize the distribution of a univariate data set. The histogram graphically shows the following:
1. center (i.e., the location) of the data;
2. spread (i.e., the scale) of the data;
3. skewness of the data;
4. presence of outliers; and
5. presence of multiple modes in the data.
These features provide strong indications of the proper distributional model for the data. The probability
plot or a goodness-of-fit test can be used to verify the distributional model.
The most common form of the histogram is obtained by splitting the range of the data into equal-sized
bins (called classes). Then for each bin, the number of points from the data set that fall into each bin
are counted. That is:
- Vertical axis: frequency (i.e., counts for each bin)
- Horizontal axis: response variable
The classes can either be defined arbitrarily by the user or via some systematic rule. A number of
theoretically derived rules have been proposed by Scott (Scott 1992).
The cumulative histogram is a variation of the histogram in which the vertical axis gives not just the
counts for a single bin, but rather gives the counts for that bin plus all bins for smaller values of the
response variable.
Both the histogram and cumulative histogram have an additional variant whereby the counts are
replaced by the normalized counts. The names for these variants are the relative histogram and the
relative cumulative histogram.
MATHEMATICAL DEFINITION
In a more general mathematical sense, a histogram is simply a mapping that counts the number of
observations that fall into various disjoint categories (known as bins), whereas the graph of a histogram
is merely one way to represent a histogram. Thus, if we let N be the total number of observations and n
be the total number of bins, the histogram hk meets the following condition: the bin counts sum to the
total number of observations, N = h1 + h2 + ... + hn.
Cumulative Histogram
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the
bins up to the specified bin. That is, the cumulative histogram Hk of a histogram hk is defined as
Hk = h1 + h2 + ... + hk.
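A minimal sketch of these definitions, assuming NumPy is available: it bins a small data set (the pellet penetration depths from Example 1B below) into equal-width classes and then forms the cumulative, relative, and relative cumulative variants described above.

```python
import numpy as np

data = np.array([2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6])   # pellet penetration depths (inches)
n_bins = 5

h, edges = np.histogram(data, bins=n_bins)   # h[k] = count in bin k; edges = class boundaries

H = np.cumsum(h)          # cumulative histogram: counts in bin k plus all smaller-valued bins
rel_h = h / h.sum()       # relative histogram (normalized counts)
rel_H = H / h.sum()       # relative cumulative histogram

assert H[-1] == len(data)  # N equals the sum of all bin counts, as in the definition above
for k in range(n_bins):
    print(f"bin {k}: [{edges[k]:.2f}, {edges[k+1]:.2f})  h={h[k]}  H={H[k]}  "
          f"rel_h={rel_h[k]:.2f}  rel_H={rel_H[k]:.2f}")
```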
Rules of Thumb
Most people use 7 to 10 classes for histograms. However, from time to time the following rules of thumb
have been used to chunk the data, where n is the number of observations in the sample:
Nclass = 10 log n
...with varying degrees of success. The final technique does not perform well with n < 30.
The bin width Δ (or the number of bins) of a histogram can also be selected by using the formula
(2k − v) / Δ², where k and v are the mean and variance of the number of data points in the bins. The
optimal bin width is the one that minimizes this formula.
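The sketch below illustrates both ideas on a made-up sample: it prints the 10 log n rule of thumb and then searches a grid of candidate bin counts for the width that minimizes the cost (2k − v) / Δ² defined above. The grid of 5 to 30 candidate bins is an assumption made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(50, 10, 200)       # made-up sample of n = 200 observations

n = len(data)
print("10*log10(n) rule:", round(10 * np.log10(n)), "classes")

# Bin-width selection: minimize C(delta) = (2*k_mean - k_var) / delta**2
best = None
for bins in range(5, 31):                       # candidate numbers of bins (assumed grid)
    counts, edges = np.histogram(data, bins=bins)
    delta = edges[1] - edges[0]                 # bin width for this candidate
    k_mean, k_var = counts.mean(), counts.var()
    cost = (2 * k_mean - k_var) / delta ** 2
    if best is None or cost < best[0]:
        best = (cost, bins, delta)

print(f"minimum-cost choice: {best[1]} bins, width {best[2]:.2f}")
```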
QUESTIONS
The histogram can be used to answer the following questions:
1. What kind of population distribution do the data come from?
2. Where are the data located?
3. How spread out are the data?
4. Are the data symmetric or skewed?
5. Are there outliers in the data?
INTERPRETATION
When combined with the concept of the normal curve and the knowledge of a particular process, the
histogram becomes an effective, practical working tool in the early stages of data analysis. A histogram
may be interpreted by asking three questions:
1. Is the process performing within specification limits?
2. Does the process seem to exhibit wide variation?
3. If action needs to be taken on the process, what action is appropriate?
The answer to these three questions lies in analyzing three characteristics of the histogram.
1. How well is the histogram centered? The centering of the data provides information on the process aim
about some mean or nominal value.
2. How wide is the histogram? Looking at histogram width defines the variability of the process about the
aim.
3. What is the shape of the histogram? Remember that the data is expected to form a normal or bell-shaped
curve. Any significant change or anomaly usually indicates that there is something going on in the process
which is causing the quality problem.
[Figure: typical histogram shapes: normal, bi-modal, cliff-like, saw-toothed, and skewed]
It is worth mentioning again that this or any other phase of histogram analysis must be married to
knowledge of the process being studied to have any real value. Knowledge of the data analysis itself
does not provide sufficient insight into the quality problem.
CREATING HISTOGRAM
1. Collect consecutive data points from the process. For the histogram to be representative, use the
histogram worksheet to set up the histogram; it will help you determine the number of bars, the range
of numbers that go into each bar, and the labels for the bar edges.
2. Choose the cell interval and determine the number of cells.
The cell interval is the distance between cells, precisely defined as the difference between
successive lower boundaries, successive upper boundaries, or successive midpoints.
a. Determine the range: R = XH − XL, where XH = the highest number, XL = the lowest number,
and R = the range.
The cell interval must be an odd number and have the same number of decimal places as the
original data. Suggestion: an odd interval is recommended so that the midpoint values will be to
the same number of decimal places as the data values.
or
Choose the cell interval that gives a number of cells closest to the square root of the sample
size, √n, where n = the number of data points. If two intervals are equally close, choose the one
that gives the lower number of cells.
Determine the midpoint of the lowest cell. This is the assumed average of the cell (all
measurements within the cell are assumed to have this value). The midpoint is calculated by
adding the upper and lower boundaries and dividing by 2.
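A short Python sketch of this cell setup, assuming the simple square-root rule for the number of cells; the data values are illustrative only and the rounding of the interval to an odd number of measurement units is omitted for brevity.

```python
import math

data = [33, 28, 26.5, 28, 26, 35, 30.5, 28, 26, 25.5]   # illustrative measurements

x_h, x_l = max(data), min(data)
r = x_h - x_l                             # range R = XH - XL
cells = round(math.sqrt(len(data)))       # number of cells close to the square root of n
interval = r / cells                      # cell interval (not yet rounded to an odd value)

# Midpoint of each cell = (lower boundary + upper boundary) / 2
midpoints = [x_l + interval / 2 + k * interval for k in range(cells)]

print(f"R = {r}, cells = {cells}, interval = {interval:.2f}")
print("cell midpoints:", [round(m, 2) for m in midpoints])
```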
If any unusual events affected the process during the time period of the histogram, your analysis of the
histogram shape probably cannot be generalized to all time periods.
x Analyze the meaning of your histogram’s shape.
Histograms are limited in their use due to the random order in which samples are taken and
lack of information about the state of control of the process. Because samples are gathered
without regard to order, the time-dependent or time-related trends in the process are not
captured. So, what may appear to be the central tendency of the data may be deceiving.
With respect to process statistical control, the histogram gives no indication whether the
process was operating at its best when the data was collected. This lack of information on
process control may lead to incorrect conclusions being drawn and, hence, inappropriate
decisions being made. Still, with these considerations in mind, the histogram's simplicity of
construction and ease of use make it an invaluable tool in the elementary stages of data
analysis.
¾ EXAMPLE: 1 A.
1. Determine the range of the data by subtracting the smallest observed measurement from the
largest and designate it as R.
Example:
Largest observed measurement = 1.1185 inches
Smallest observed measurement = 1.1030 inches
R = 1.1185 inches - 1.1030 inches =.0155 inch
2. Record the measurement unit (MU) used. This is usually controlled by the measuring instrument
least count.
Example: MU = .0001 inch
3. Determine the number of classes and the class width. The number of classes, k, should be no
lower than six and no higher than fifteen for practical purposes. Trial and error may be done to achieve
the best distribution for analysis.
Example: k=8
4. Determine the class width (H) by dividing the range, R, by the preferred number of classes, k.
Example: R/k = .0155/8 = .0019375 inch
The class width selected should be an odd-numbered multiple of the measurement unit, MU. This
value should be close to the H value:
MU = .0001 inch
Class width = .0019 inch or .0021 inch
5. Establish the class midpoints and class limits. The first class midpoint should be located near the
largest observed measurement. If possible, it should also be a convenient increment. Always make the
class widths equal in size, and express the class limits in terms which are one-half unit beyond the
accuracy of the original measurement unit. This avoids plotting an observed measurement on a class
limit.
Example: First class midpoint = 1.1185 inches,
and the class width is .0019 inch.
Therefore, limits would be 1.1185 + or - .0019/2.
6. Determine the axes for the graph. The frequency scale on the vertical axis should slightly exceed
the largest class frequency, and the measurement scale along the horizontal axis should be at regular
intervals which are independent of the class width. (See example below steps.)
7. Draw the graph. Mark off the classes, and draw rectangles with heights corresponding to the
measurement frequencies in that class.
8. Title the histogram. Give an overall title and identify each axis.
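The arithmetic of steps 1 to 5 can be scripted directly; the sketch below reproduces the numbers quoted in Example 1A (range, trial class width, and a class width rounded to an odd multiple of the measurement unit).

```python
largest, smallest = 1.1185, 1.1030   # observed measurements (inches), from Example 1A
mu = 0.0001                          # measurement unit (instrument least count)
k = 8                                # chosen number of classes

r = largest - smallest               # step 1: R = .0155 inch
h_trial = r / k                      # step 4: trial class width = .0019375 inch

# Step 4 (continued): round to an odd-numbered multiple of MU close to the trial value
odd_multiple = round(h_trial / mu)
if odd_multiple % 2 == 0:
    odd_multiple += 1
h = odd_multiple * mu                # gives .0019 inch here (.0021 inch is the other odd choice)

# Step 5: first class midpoint near the largest measurement; limits lie H/2 either side
first_mid = largest
print(f"R = {r:.4f}  trial width = {h_trial:.7f}  class width H = {h:.4f}")
print(f"first class limits: {first_mid - h / 2:.5f} to {first_mid + h / 2:.5f}")
```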
¾ EXAMPLE: 1B.
The following example shows data collected from an experiment measuring pellet penetration depth
from a pellet gun in inches and the corresponding histogram:
Penetration depth (inches): 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6
[Figure: histogram of pellet penetration depth]
Note the classical bell-shaped, symmetric histogram with most of the frequency counts bunched in
the middle and with the counts dying off out in the tails. From a physical science/engineering point
of view, the normal distribution is that distribution which occurs most often in nature (due in part to
the central limit theorem).
Description of Bimodal
The mode of a distribution is that value which is most frequently occurring or has the largest probability
of occurrence. The sample mode occurs at the peak of the histogram.
For many phenomena, it is quite common for the distribution of the response values to cluster around a
single mode (unimodal) and then distribute themselves with lesser frequency out into the tails. The
normal distribution is the classic example of a unimodal distribution.
The histogram shown above illustrates data from a bimodal (2 peak) distribution. The histogram serves
as a tool for diagnosing problems such as bimodality. Questioning the underlying reason for
distributional non-unimodality frequently leads to greater insight and improved deterministic modeling of
the phenomenon under study. For example, for the data presented above, the bimodal histogram is
caused by sinusoidality in the data.
A symmetric, bimodal data set can be modeled as a mixture of two normal distributions,
f(x) = p·f1(x) + (1 − p)·f2(x), where p is the mixing proportion (between 0 and 1) and f1 and f2 are normal
probability density functions with location and scale parameters μ1, σ1, μ2, and σ2, respectively. That is,
there are 5 parameters to estimate in the fit.
Whether maximum likelihood or least squares is used, the quality of the fit is sensitive to good starting
values. For the mixture of two normals, the histogram can be used to provide initial estimates for the
location and scale parameters of the two normal distributions.
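As an illustration of such a fit, the sketch below uses scikit-learn's GaussianMixture (a maximum-likelihood fitter) on made-up bimodal data; the initial location values correspond to the two peaks one would read off the histogram, as suggested above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Made-up bimodal sample: two normal components mixed roughly 40/60
data = np.concatenate([rng.normal(5.0, 0.5, 40), rng.normal(8.0, 0.7, 60)]).reshape(-1, 1)

# Initial location estimates read off the two histogram peaks
gm = GaussianMixture(n_components=2, means_init=[[5.0], [8.0]]).fit(data)

p = gm.weights_[0]                              # estimated mixing proportion
mu1, mu2 = gm.means_.ravel()                    # estimated locations
s1, s2 = np.sqrt(gm.covariances_.ravel())       # estimated scales
print(f"p={p:.2f}  mu1={mu1:.2f} sigma1={s1:.2f}  mu2={mu2:.2f} sigma2={s2:.2f}")
```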
Discussion of Skewness
A symmetric distribution is one in which the 2 "halves" of the histogram appear as mirror-images of one
another. A skewed (non-symmetric) distribution is a distribution in which there is no such mirror-
imaging.
For skewed distributions, it is quite common to have one tail of the distribution considerably longer or
drawn out relative to the other tail. A "skewed right" distribution is one in which the tail is on the right
side. A "skewed left" distribution is one in which the tail is on the left side. The above histogram is for a
distribution that is skewed right.
Skewed distributions bring a certain philosophical complexity to the very process of estimating a "typical
value" for the distribution. To be specific, suppose that the analyst has a collection of 100 values
randomly drawn from a distribution, and wishes to summarize these 100 observations by a "typical
value". What does typical value mean? If the distribution is symmetric, the typical value is
unambiguous-- it is a well-defined center of the distribution. For example, for a bell-shaped symmetric
distribution, a center point is identical to that value at the peak of the distribution.
For a skewed distribution, however, there is no "center" in the usual sense of the word. Be that as it
may, several "typical value" metrics are often used for skewed distributions. The first metric is the mode
of the distribution. Unfortunately, for severely-skewed distributions, the mode may be at or near the left
or right tail of the data and so it seems not to be a good representative of the center of the distribution.
As a second choice, one could conceptually argue that the mean (the point on the horizontal axis where
the distribution would balance) would serve well as the typical value. As a third choice, others may
argue that the median (that value on the horizontal axis which has exactly 50% of the data to its left and
50% to its right) would serve as a good typical value.
For symmetric distributions, the conceptual problem disappears because at the population level the
mode, mean, and median are identical. For skewed distributions, however, these 3 metrics are
markedly different. In practice, for skewed distributions the most commonly reported typical value is the
mean; the next most common is the median; the least common is the mode. Because each of these 3
metrics reflects a different aspect of "centerness", it is recommended that the analyst report at least 2
(mean and median), and preferably all 3 (mean, median, and mode) in summarizing and characterizing
a data set.
The issues for skewed left data are similar to those for skewed right data.
EXAMPLE : 2
In a new plant, time cards need to be submitted to the payroll office within 30 minutes after the close of
the last shift on Friday. After twenty weeks of operation, the payroll office complained to the plant
manager that this process is not being adhered to; as a result, the payroll checks are not being
processed on time and workers are starting to complain. The requirement is 30 minutes or less.
30 30 * 27 * 33
30 30 32 28
28 30 32 29
29 30 31 29
31 30 31 30
R = XH – XL = 33 – 27 = 6
27 Ň =1
28 ŇŇ =2
29 ŇŇŇ =3
30 ŇŇŇŇŇŇŇŇ =8
31 ŇŇŇ =3
32 ŇŇ =2
33 Ň =1
TOTAL = 20
[Figure: frequency histogram of the time-card submission times, values 27 to 33 minutes, frequencies 0 to 9]
EXAMPLE : 3
Below are data taken from the pressing machine that produces the ceramic part for an IC ROM before
it is sent to the funnel burner.
9.85 10.00 9.95 10.00 10.00 10.00 9.95 10.15 9.85 10.05
9.90 10.05 9.85 10.10 10.00 10.15 9.90 10.00 *9.80 10.00
9.90 10.05 9.90 10.10 9.95 10.00 9.90 9.90 9.90 10.00
9.90 10.10 9.90 *10.20 9.95 10.00 9.85 9.95 9.95 10.10
9.95 10.15 9.95 10.00 9.90 10.00 10.00 10.10 9.95 10.15
9.80 Ň =1
9.85 ŇŇŇŇ =4
9.90 ŇŇŇŇŇ ŇŇŇŇŇ = 10
9.95 ŇŇŇŇŇ ŇŇŇŇ =9
10.00 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇ = 13
10.05 ŇŇŇ =3
10.10 ŇŇŇŇŇ =5
10.15 ŇŇŇŇ =4
10.20 Ň =1
TOTAL = 50
H = (R / i) + 1, where i is the cell interval and H is the resulting number of cells
Range              Center
9.785 – 9.835      9.81     Ň = 1
9.835 – 9.885      9.86     ŇŇŇŇ = 4
9.885 – 9.935      9.91     ŇŇŇŇŇ ŇŇŇŇŇ = 10
9.935 – 9.985      9.96     ŇŇŇŇŇ ŇŇŇŇ = 9
9.985 – 10.035     10.01    ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇ = 13
10.035 – 10.085    10.06    ŇŇŇ = 3
10.085 – 10.135    10.11    ŇŇŇŇŇ = 5
10.135 – 10.185    10.16    ŇŇŇŇ = 4
10.185 – 10.235    10.21    Ň = 1
TOTAL = 50
[Fig. A: histogram of the raw readings, values 9.80 to 10.20; Fig. B: histogram of the grouped cells, midpoints 9.81 to 10.21; frequencies 0 to 14]
EXAMPLE : 4
A quality improvement team at an emergency fire-fighting squad was interested in studying the response
time to emergency service calls (in particular, the time interval between the customer call-in and the
arrival of a service crew at the scene). The table below shows 40 actual response-time measurements
(in minutes) collected by the team. Although the table contains a great deal of information about the
variation in response times, it is difficult to extract that information from a simple list of numbers.
61 48 62 62 44 52 53 * 84 53 71
* 39 62 68 50 58 54 66 53 53 77
60 59 71 51 76 50 57 59 55 59
59 74 67 62 64 68 55 46 63 64
XH = 84; XL = 39 → R = XH − XL = 84 − 39 = 45
Trial cell intervals i and the resulting number of cells x = R / i:
i = 3 → x = 15
i = 5 → x = 9
i = 7 → x = 7
i = 9 → x = 5
The interval i = 5 is chosen; the lowest cell then runs from 37 to 41 with midpoint 39.
Range Center
37 ~ 41 39 Ň =1
42 ~ 46 44 ŇŇ =2
47 ~ 51 49 ŇŇŇŇ =4
52 ~ 56 54 ŇŇŇŇŇ ŇŇŇ =8
57 ~ 61 59 ŇŇŇŇŇ ŇŇŇ =8
62 ~ 66 64 ŇŇŇŇŇ ŇŇŇ =8
67 ~ 71 69 ŇŇŇŇŇ =5
72 ~ 76 74 ŇŇ =2
77 ~ 81 79 Ň =1
82 ~ 86 84 Ň =1
TOTAL = 40
[Figure: frequency histogram of the response times, cell midpoints 39 to 84 minutes, frequencies 0 to 9]
For simplicity, since the data are times, it is convenient to round the cells so that their centers fall on
familiar values along the time scale 10, 20, 30, 40, 50, 60, 70, 80, 90 minutes (60 minutes = 1 hour,
90 minutes = 1½ hours). The regrouped cells are:
Range Center
38 ~ 42 40 Ň =1
43 ~ 47 45 ŇŇ =2
48 ~ 52 50 ŇŇŇŇŇ =5
53 ~ 57 55 ŇŇŇŇŇ ŇŇŇ =8
58 ~ 62 60 ŇŇŇŇŇ ŇŇŇŇŇ Ň = 11
63 ~ 67 65 ŇŇŇŇŇ =5
68 ~ 72 70 ŇŇŇŇ =4
73 ~ 77 75 ŇŇŇ =3
78 ~ 82 80 =0
83 ~ 87 85 Ň =1
TOTAL = 40
[Figure: frequency histogram of the response times using the rounded cells, centers 40 to 85 minutes, frequencies 0 to 12]
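The regrouping above is easy to reproduce with the standard library; the sketch below tallies the 40 response times into the rounded cells 38–42, 43–47, and so on, and prints the same frequencies as the table.

```python
from collections import Counter

times = [61, 48, 62, 62, 44, 52, 53, 84, 53, 71,
         39, 62, 68, 50, 58, 54, 66, 53, 53, 77,
         60, 59, 71, 51, 76, 50, 57, 59, 55, 59,
         59, 74, 67, 62, 64, 68, 55, 46, 63, 64]

interval, start = 5, 38                 # rounded cells 38-42, 43-47, ... (centers 40, 45, ...)
counts = Counter((t - start) // interval for t in times)

for k in range(max(counts) + 1):
    lo = start + k * interval
    hi = lo + interval - 1
    center = lo + 2
    print(f"{lo} ~ {hi}  center {center}  {'|' * counts[k]}  = {counts[k]}")
print("TOTAL =", sum(counts.values()))
```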
EXAMPLE : 5
A manufacturing engineering department started a process check on the plating/chroming of a sheet-metal
part attached to the locking system of a safe-deposit-box product, since a new plate design had been
introduced to avoid corrosion.
In response to a customer issue, the quality engineer studied the thickness of the chrome plating, as
shown in the table below:
0.625 0.625 0.626 0.626 0.624 0.625 0.625 0.628 0.627 0.627
0.624 0.624 0.623 0.626 0.624 0.624 0.626 0.625 0.625 0.627
0.622 0.628 0.625 0.627 0.623 0.628 0.625 0.626 0.626 0.630
0.624 0.627 0.623 0.626 * 0.620 0.628 0.623 0.625 0.624 0.627
0.621 0.626 0.621 0.625 0.622 0.626 0.625 0.625 0.624 0.627
0.628 0.627 0.626 0.626 0.625 0.628 0.626 0.625 0.627 0.627
0.624 0.625 0.627 0.626 0.625 0.628 0.624 0.625 0.626 0.627
0.624 0.628 0.625 0.626 0.625 0.627 0.626 0.630 0.626 0.627
0.627 0.625 0.628 0.631 0.626 0.630 0.625 0.628 0.627 0.627
0.625 0.627 0.626 0.630 0.628 0.631 0.626 0.628 0.627 0.627
0.625 0.630 0.624 0.628 0.626 0.629 0.626 0.628 0.626 0.627
0.630 0.630 0.628 0.628 0.627 0.631 0.625 0.628 0.627 0.627
0.627 * 0.632 0.626 * 0.632 0.628 0.628 0.627 0.631 0.626 0.630
0.626 0.630 0.626 0.628 0.625 0.631 0.626 * 0.632 0.627 0.631
0.628 * 0.632 0.627 0.631 0.626 0.630 0.625 0.628 0.626 0.628
[Figure: histogram of plating thickness grouped into cells with midpoints 0.620, 0.623, 0.626, 0.629, 0.632; frequencies 0 to 80]
No.   Value
1 0.620 Ň =1
2 0.621 ŇŇ =2
3 0.622 ŇŇ =2
4 0.623 ŇŇŇŇ =4
5 0.624 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇ = 12
6 0.625 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ Ň = 26
7 0.626 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ = 30
8 0.627 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇ = 28
9 0.628 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇ = 23
10 0.629 Ň =1
11 0.630 ŇŇŇŇŇ ŇŇŇŇŇ Ň = 11
12 0.631 ŇŇŇŇŇ ŇŇ =7
13 0.632 ŇŇŇŇ =4
TOTAL = 150
[Figure: histogram of plating thickness by individual value, 0.620 to 0.632; frequencies 0 to 35]
EXAMPLE : 6
Below are 100 metal-thickness measurements taken on an optic component, used to determine the
shape of the data distribution.
3.56 3.46 3.48 3.50 3.42 3.43 3.52 3.49 3.44 3.50
3.48 3.56 3.50 3.52 3.47 3.48 3.46 3.50 3.56 3.38
3.41 3.37 3.47 3.49 3.45 3.44 3.50 3.49 3.46 3.46
3.55 3.52 3.44 3.50 3.45 3.44 3.48 3.46 3.52 3.46
3.48 3.48 3.32 3.40 3.52 3.34 3.46 3.43 * 3.30 3.46
3.59 3.63 3.59 3.47 3.38 3.52 3.45 3.48 3.31 3.46
3.40 3.54 3.46 3.51 3.48 3.50 * 3.68 3.60 3.46 3.52
3.48 3.50 3.56 3.50 3.52 3.46 3.48 3.46 3.52 3.56
3.52 3.48 3.46 3.45 3.46 3.54 3.54 3.48 3.49 3.41
3.41 3.45 3.34 3.44 3.47 3.47 3.41 3.48 3.54 3.47
XH = 3.68; XL = 3.30 → R = XH − XL = 0.38
Trial cell intervals i and the resulting number of cells x = R / i:
i = 0.03 → x = 13
i = 0.05 → x = 7
i = 0.07 → x = 5
i = 0.09 → x = 4
The interval i = 0.05 is chosen.
Range Center
No Lowest~Highest MidPoint
1 3.275 ~ 3.325 3.30 ŇŇŇ =3
2 3.325 ~ 3.375 3.35 ŇŇŇ =3
3 3.375 ~ 3.425 3.40 ŇŇŇŇŇ ŇŇŇŇ =9
4 3.425 ~ 3.475 3.45 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇ = 32
5 3.475 ~ 3.525 3.50 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇ = 38
6 3.525 ~ 3.575 3.55 ŇŇŇŇŇ ŇŇŇŇŇ = 10
7 3.575 ~ 3.625 3.60 ŇŇŇ =3
8 3.625 ~ 3.675 3.65 Ň =1
9 3.675 ~ 3.725 3.70 Ň =1
TOTAL = 100
[Figure: histogram of the metal-thickness data, cell midpoints 3.30 to 3.70; frequencies 0 to 40]
EXAMPLE : 7
Automated data acquisition systems generate timely data about the product produced and the process
producing the silicon wafer thickness. For this reason, the company uses an integrated system of
automated statistical process control programming, data collection devices, and programmable logic
controllers (PLCs) to collect statistical information about silicon wafer production. Utilizing the system
relieves the process engineers from the burden of number crunching, freeing time for critical analysis of
the data. The tallied data are below:
N = 60 → number of cells = √N = √60 ≈ 8
No MidPoint
1 0.2470 ŇŇŇŇŇ =5
2 0.2480 ŇŇŇŇŇ =5
3 0.2490 ŇŇŇŇŇ ŇŇŇŇŇ = 10
4 0.2500 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇ = 18
5 0.2510 ŇŇŇŇŇ ŇŇŇŇŇ ŇŇŇ = 13
6 0.2520 ŇŇŇŇŇ Ň =6
7 0.2530 ŇŇ =2
8 0.2540 Ň =1
TOTAL = 60
[Figure: histogram of silicon wafer thickness, cell midpoints 0.247 to 0.254; frequencies 0 to 20]
EXERCISE:
1. The following table lists the number of failures in 120 samples of a certain electronic component
tested for 100 hours each.
10 13 13 11 13 8 16 18 13 15
12 13 14 12 13 15 14 13 11 9
13 15 11 13 14 17 10 12 5 15
11 14 12 10 13 11 13 8 14 18
14 13 14 11 14 12 13 11 13 16
13 14 13 14 13 12 14 12 11 15
13 14 11 14 13 10 9 12 11 15
14 11 12 13 14 13 12 13 17 7
12 13 14 13 12 17 13 11 15 16
10 4 8 12 11 7 9 10 6 9
11 15 14 16 17 12 13 16 16 15
15 16 16 13 14 16 6 13 14 16
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
1.12 0.81 1.30 1.22 1.00 1.46 1.19 0.72 1.32 1.36
1.81 1.81 1.44 1.69 1.90 1.52 1.66 1.76 1.61 1.55
1.02 2.03 1.67 1.55 1.62 1.79 1.62 1.54 1.48 1.73
1.88 1.43 1.54 1.29 1.37 1.65 1.77 1.43 1.71 1.66
1.94 1.67 1.43 1.61 1.55 1.73 1.42 1.66 1.79 1.77
2.17 1.55 1.12 1.63 1.43 1.63 1.36 1.58 1.79 1.41
1.77 1.44 1.61 1.43 1.61 1.74 1.67 1.72 1.81 1.65
0.82 1.52 1.33 1.57 1.98 1.85 1.63 1.44 1.87 1.69
1.25 1.43 1.55 1.68 1.77 1.68 1.54 1.76 1.64 1.57
1.65 2.15 1.90 1.18 1.60 1.45 1.63 1.85 1.67 1.36
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
3. The tensile strength of 100 castings was tested, and the results are listed in the table below.
15.1 11.0 5.3 13.3 13.0 10.9 9.4 4.2 12.3 13.0
10.6 6.0 10.6 7.0 10.8 8.0 10.8 8.0 10.9 6.7
9.4 4.8 8.9 9.0 10.1 11.0 10.5 5.7 7.6 16.6
6.0 10.7 7.8 8.9 9.2 12.6 16.7 7.9 9.5 15.8
8.6 6.6 16.7 7.1 11.5 15.9 9.5 15.8 8.4 14.2
12.3 13.9 9.3 13.7 7.0 10.8 8.7 7.6 6.5 15.4
14.9 9.3 13.0 10.0 10.1 11.2 12.0 10.9 9.2 12.6
16.1 11.5 15.2 12.6 16.4 14.0 10.1 11.2 12.6 16.5
9.6 6.3 16.8 6.1 10.5 15.5 11.5 9.8 11.4 14.2
12.1 13.1 11.3 10.7 8.2 10.8 8.7 9.6 7.1 13.7
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
4. Roger and Bill are trying to settle an argument. Roger says that averages from a deck of cards will
form a normal curve when plotted. Bill says they won't. They decide to try the following exercise
involving averages. (An ace is worth 1 point, a jack 11, a queen 12, and a king 13.)
No   Cards drawn   Average      No   Cards drawn   Average
1 11 13 10 12 7 10.6 26 8 10 1 9 10 7.6
2 1 7 12 9 3 6.4 27 9 13 2 2 2 5.6
3 9 5 12 1 11 7.6 28 12 4 12 3 13 8.8
4 11 5 7 9 12 8.8 29 12 4 7 6 9 7.6
5 7 12 13 7 4 8.6 30 1 12 3 12 11 7.8
6 11 9 5 1 13 7.8 31 12 3 10 11 6 8.4
7 1 4 13 12 13 8.6 32 3 5 10 2 7 5.4
8 13 3 2 6 12 7.2 33 9 1 2 3 11 5.2
9 2 4 1 10 13 6.0 34 6 8 6 13 9 8.4
10 4 5 12 1 9 6.2 35 2 12 5 10 4 6.6
11 2 5 7 7 11 6.4 36 6 4 8 9 12 7.8
12 6 9 8 2 12 7.4 37 9 13 3 10 1 7.2
13 2 3 6 11 11 6.6 38 2 1 13 7 5 5.6
14 2 6 9 11 13 8.2 39 10 11 5 12 13 10.2
15 6 8 8 9 1 6.4 40 13 2 8 2 11 7.2
16 3 4 12 1 6 5.2 41 2 10 5 4 11 6.4
17 8 1 8 6 10 6.6 42 10 4 12 7 11 8.8
18 5 7 6 8 8 6.8 43 13 13 7 1 10 8.8
19 2 5 4 10 1 4.4 44 9 10 7 11 11 9.6
20 5 7 12 7 8 7.8 45 6 7 8 7 4 6.4
21 9 1 3 6 12 6.2 46 1 4 12 11 13 8.2
22 1 13 9 3 6 6.4 47 9 11 8 1 11 8.0
23 4 5 13 5 7 6.8 48 8 13 10 13 4 9.6
24 3 7 9 8 10 7.4 49 12 11 11 2 3 7.8
25 1 7 6 6 1 4.2 50 2 12 5 11 9 7.8
5. The ABC Company is planning to analyze the average weekly wage distribution of its 58 employees
during fiscal year 2003. The 58 weekly wages are available as raw data, listed in the alphabetic order of
the employees' names, as below:
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
6. Given the ordered arrays in the accompanying table dealing with the lengths of life (in hours) of a
sample of forty 100-watt light bulbs produced by manufacturer A and a sample of forty 100-watt light
bulbs produced by manufacturer B:
MANUFACTURER A MANUFACTURER B
684 697 720 773 821 819 836 888 897 903
831 835 848 852 852 907 912 918 942 943
859 860 868 870 876 952 959 962 986 992
893 899 905 909 911 994 1004 1005 1007 1015
922 924 926 926 938 1016 1018 1020 1022 1034
939 943 946 954 971 1038 1072 1077 1077 1082
972 977 984 1005 1041 1096 1100 1113 1113 1116
1016 1041 1052 1080 1093 1153 1154 1174 1188 1230
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
7. The following data represent the amount of soft drink filled in a sample of 50 consecutive 2-liter
bottles. The results are listed horizontally in the order of filling:
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
2.109 2.086 2.066 2.075 2.065 2.057 2.052 2.044 2.036 2.038
2.031 2.029 2.025 2.029 2.023 2.020 2.015 2.014 2.013 2.014
2.012 2.012 2.012 2.010 2.005 2.003 1.999 1.996 1.997 1.992
1.994 1.986 1.984 1.981 1.973 1.975 1.971 1.969 1.966 1.967
1.963 1.957 1.951 1.951 1.947 1.941 1.941 1.938 1.908 1.894
8. In his last 70 games a professional basketball player made the following scores:
10 17 9 17 18 20 16
7 17 19 13 15 14 13
12 13 15 14 13 10 14
11 15 14 11 15 15 16
9 18 15 12 14 13 14
13 14 16 15 16 15 15
14 15 15 16 13 12 16
10 16 14 13 16 14 15
6 15 13 16 15 16 16
12 14 16 15 16 13 15
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
9. A company that fills bottles of shampoo tries to maintain a specific weight of the product. The table
gives the weights of 110 bottles that were checked at random intervals. Make a tally of these weights
and construct a frequency histogram (weight in kg).
6.00 5.98 6.01 6.01 5.97 5.99 5.98 6.01 5.99 5.98 5.96
5.98 5.99 5.99 6.03 5.99 6.01 5.98 5.99 5.97 6.01 5.98
5.97 6.01 6.00 5.96 6.00 5.97 5.95 5.99 5.99 6.01 6.00
6.01 6.03 6.01 5.99 5.99 6.02 6.00 5.98 6.01 5.98 5.99
6.00 5.98 6.05 6.00 6.00 5.98 5.99 6.00 5.97 6.00 6.00
6.00 5.98 6.00 5.94 5.99 6.02 6.00 5.98 6.02 6.01 6.00
5.97 6.01 6.04 6.02 6.01 5.97 5.99 6.02 5.99 6.02 5.99
6.02 5.99 6.01 5.98 5.99 6.00 6.02 5.99 6.02 5.95 6.02
5.96 5.99 6.00 6.00 6.01 5.99 5.96 6.01 6.00 6.01 5.98
6.00 5.99 5.98 5.99 6.03 5.99 6.02 5.98 6.02 6.02 5.97
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
10. Listed next are 125 readings obtained in a hospital by a motion-and-time study analyst who took 5
readings each day for 25 days.
DAY    DURATION OF OPERATION TIME (MIN)
1 1.90 1.93 1.95 2.05 2.20
2 1.76 1.81 1.81 1.83 2.01
3 1.80 1.87 1.95 1.97 2.07
4 1.77 1.83 1.87 1.90 1.93
5 1.93 1.95 2.03 2.05 2.14
6 1.76 1.88 1.95 1.97 2.00
7 1.87 2.00 2.00 2.03 2.10
8 1.91 1.92 1.94 1.97 2.05
9 1.90 1.91 1.95 2.01 2.05
10 1.79 1.91 1.93 1.94 2.10
11 1.90 1.97 2.00 2.06 2.28
12 1.80 1.82 1.89 1.91 1.99
13 1.75 1.83 1.92 1.95 2.04
14 1.87 1.90 1.98 2.00 2.08
15 1.90 1.95 1.95 1.97 2.03
16 1.82 1.99 2.01 2.06 2.06
17 1.90 1.95 1.95 2.00 2.10
18 1.81 1.90 1.94 1.97 1.99
19 1.87 1.89 1.98 2.01 2.15
20 1.72 1.78 1.96 2.00 2.05
21 1.87 1.89 1.91 1.91 2.00
22 1.76 1.80 1.91 2.06 2.12
23 1.95 1.96 1.97 2.00 2.00
24 1.82 1.94 1.97 1.99 2.00
25 1.85 1.90 1.90 1.92 1.92
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
11. The relative strengths of 150 silver-solder welds were tested, and the results are given in the following
table. Tally these figures and arrange them in a frequency distribution.
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
12. The Get-Well Hospital has completed a quality improvement project on the time to admit a patient,
using a histogram. The data are listed below:
10 17 9 17 18 20 16
7 17 19 13 15 14 13
12 13 15 14 13 10 14
11 15 14 11 15 15 16
9 18 15 12 14 13 14
13 14 16 15 16 15 15
14 15 15 16 13 12 16
10 16 14 13 16 14 15
6 15 13 16 15 16 16
12 14 16 15 16 13 15
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
0.52 0.55 0.49 0.51 0.52 0.50 0.52 0.48 0.53 0.51
0.53 0.50 0.51 0.51 0.50 0.45 0.51 0.50 0.52 0.44
0.52 0.51 0.55 0.50 0.52 0.52 0.49 0.55 0.52 0.53
0.42 0.45 0.43 0.42 0.46 0.56 0.51 0.54 0.56 0.52
0.48 0.50 0.47 0.51 0.51 0.49 0.53 0.51 0.52 0.51
0.49 0.52 0.51 0.48 0.50 0.52 0.48 0.47 0.50 0.49
0.54 0.48 0.51 0.48 0.49 0.55 0.46 0.48 0.53 0.50
0.43 0.46 0.53 0.48 0.49 0.51 0.50 0.53 0.50 0.49
0.51 0.52 0.48 0.53 0.54 0.50 0.47 0.49 0.52 0.51
0.53 0.48 0.49 0.51 0.48 0.48 0.52 0.47 0.46 0.47
0.51 0.46 0.51 0.55 0.50 0.52 0.49 0.50 0.48 0.50
0.50 0.54 0.52 0.51 0.52 0.46 0.52 0.48 0.51 0.52
0.46 0.48 0.53 0.51 0.51 0.49 0.55 0.52 0.50 0.49
0.52 0.49 0.52 0.47 0.48 0.52 0.51 0.53 0.47 0.48
0.49 0.51 0.50 0.51 0.52 0.50 0.48 0.52 0.49 0.48
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
ADDITIONAL EXERCISE
1. Below is a listing of the curing-time test results.
26 38.83289 2 76 40.05451 3
27 33.0886 0 77 43.13634 4
28 31.6349 0 78 44.31927 5
29 34.55143 0 79 39.84285 2
30 33.8633 0 80 39.12542 2
31 35.18869 0 81 39.00292 2
32 42.31515 3 82 34.9124 0
33 43.43549 4 83 33.9059 0
34 37.36371 1 84 28.2279 0
35 38.85718 2 85 32.4671 0
36 39.25132 2 86 28.8737 1
37 37.05298 1 87 34.3862 0
38 42.47056 4 88 33.9296 0
39 35.90282 0 89 33.0424 0
40 38.21905 2 90 28.4006 1
41 38.57292 2 91 32.5994 0
42 39.06772 2 92 30.7381 0
43 32.2209 0 93 31.7863 0
44 33.202 0 94 34.0398 0
45 27.0305 1 95 35.7598 0
46 33.6397 0 96 42.37100 3
47 26.6306 2 97 30.206 0
48 42.79176 4 98 34.5604 0
49 38.38454 2 99 27.93 1
50 37.89885 1 100 30.8174 0
Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints, and
frequencies. Compare your graph to the graph above.
2. The histogram is demonstrated in the heat flow meter data case study
Heat Flow Meter Calibration and Stability
Generation
This data set was collected by Bob Zarr of NIST in January, 1990 from a heat flow meter calibration and
stability analysis. The response variable is a calibration factor.
The motivation for studying this data set is to illustrate a well-behaved process where the underlying
assumptions hold and the process is in statistical control.
Resulting Data
The following are the data used for this case study.
a. Construct a frequency distribution showing the number of cells, cell boundaries, cell midpoints,
and frequencies.
b. Construct a frequency histogram.
c. Calculate the mean and standard deviation of the distribution.
SCATTER PLOT
Also called: scatter diagram, X–Y graph
Scatter plot diagrams are used to evaluate the correlation or cause-effect relationship (if any) between
two variables (e.g., speed and gas consumption in a vehicle). The scatter diagram graphs pairs of
numerical data, with one variable on each axis, to look for a relationship between them. If the variables
are correlated, the points will fall along a line or curve. The better the correlation, the tighter the points
will hug the line.
The simplest form of scatter diagram consists of plotting bivariate data to depict the relationship
between two variables. When analyzing processes, the relationship between a controllable variable
and a desired quality characteristic is frequently of importance. Knowing this relationship may help us
decide how to set a controllable variable to achieve a desired level for the output characteristic.
When you think there's a cause-effect link between two indicators (e.g., calories consumed and weight
gain) then you can use the scatter plot diagram to prove or disprove it. If the points are tightly clustered
along the trend line, then there's probably a strong correlation. If it looks more like a shotgun blast,
there is no correlation.
If the R² coefficient of determination (the square of the correlation coefficient) is greater than 0.8, then
more than 80% of the variability in the data is accounted for by the equation. Most statistics books imply
that this means you have a strong correlation.
[Figure: example scatter plot with the correlation coefficient and coefficient of determination displayed]
When to Use
x When you have paired numerical data.
x When your dependent variable may have multiple values for each value of your independent
variable.
x When trying to determine whether the two variables are related, such as when trying to identify
potential root causes of problems.
x After brainstorming causes and effects using a fishbone diagram, to determine objectively
whether a particular cause and effect are related.
x When determining whether two effects that appear to be related both occur with the same
cause.
x When testing for autocorrelation before constructing a control chart.
Procedure
1. Collect pairs of data where a relationship is suspected.
2. Draw a graph with the independent variable on the horizontal axis and the dependent variable
on the vertical axis. For each pair of data, put a dot or a symbol where the x-axis value
intersects the y-axis value. (If two dots fall together, put them side by side, touching, so that you
can see both.)
3. Look at the pattern of points to see if a relationship is obvious. If the data clearly form a line or a
curve, you may stop. The variables are correlated. You may wish to use regression or
correlation analysis now. Otherwise, complete steps 4 through 7.
4. Divide points on the graph into four quadrants. If there are X points on the graph,
5. Count X/2 points from top to bottom and draw a horizontal line.
6. Count X/2 points from left to right and draw a vertical line.
7. If number of points is odd, draw the line through the middle point.
8. Count the points in each quadrant. Do not count points on a line.
9. Add the diagonally opposite quadrants. Find the smaller sum and the total of points in all
quadrants.
10. A = points in upper left + points in lower right
11. B = points in upper right + points in lower left
12. Q = the smaller of A and B
13. N = A + B
14. Look up the limit for N on the trend test table.
15. If Q is less than the limit, the two variables are related.
16. If Q is greater than or equal to the limit, the pattern could have occurred from random chance.
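A minimal Python sketch of the counting test in steps 4 to 16, assuming paired data in two lists; the limit for the resulting N must still be looked up in the trend test table mentioned in step 14. The data values here are made up for illustration.

```python
import statistics

# Illustrative paired observations (not the ZZ-400 data)
x = [1.2, 2.1, 0.8, 3.4, 2.9, 1.7, 2.5, 3.1, 0.9, 2.2, 1.5, 2.8]
y = [10,  14,   9,  18,  16,  12,  15,  17,   8,  13,  11,  16]

mx, my = statistics.median(x), statistics.median(y)   # median lines (steps 5 to 7)

# Steps 8 to 13: count points per quadrant, skipping points that fall on a median line
a = sum(1 for xi, yi in zip(x, y)
        if (xi < mx and yi > my) or (xi > mx and yi < my))   # upper left + lower right
b = sum(1 for xi, yi in zip(x, y)
        if (xi > mx and yi > my) or (xi < mx and yi < my))   # upper right + lower left

q, n = min(a, b), a + b
print(f"A={a}  B={b}  Q={q}  N={n}")
# Steps 14 to 16: compare Q with the limit for N from the trend test table;
# Q below the limit suggests the two variables are related.
```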
Example: 1
The ZZ-400 manufacturing team suspects a relationship between product purity (percent purity) and the
amount of iron (measured in parts per million or ppm). Purity and iron are plotted against each other as
a scatter diagram, as shown in the figure below.
There are 24 data points. Median lines are drawn so that 12 points fall on each side for both percent
purity and ppm iron.
To test for a relationship, they calculate:
Then they look up the limit for N on the trend test table. For N = 24, the limit is 6.
Q is equal to the limit. Therefore, the pattern could have occurred from random chance, and no
relationship is demonstrated.
Considerations
Here are some examples of situations in which you might use a scatter diagram:
x Variable A is the temperature of a reaction after 15 minutes. Variable B measures the color of
the product. You suspect higher temperature makes the product darker. Plot temperature and
color on a scatter diagram.
x Variable A is the number of employees trained on new software, and variable B is the number
of calls to the computer help line. You suspect that more training reduces the number of calls.
Plot number of people trained versus number of calls.
x To test for autocorrelation of a measurement being monitored on a control chart, plot this pair of
variables: Variable A is the measurement at a given time. Variable B is the same measurement,
but at the previous time. If the scatter diagram shows correlation, do another diagram where
variable B is the measurement two times previously. Keep increasing the separation between
the two times until the scatter diagram shows no correlation.
x Even if the scatter diagram shows a relationship, do not assume that one variable caused the
other. Both may be influenced by a third variable.
x When the data are plotted, the more the diagram resembles a straight line, the stronger the
relationship.
x If a line is not clear, statistics (N and Q) determine whether there is reasonable certainty that a
relationship exists. If the statistics say that no relationship exists, the pattern could have
occurred by random chance.
x If the scatter diagram shows no relationship between the variables, consider whether the data
might be stratified.
x If the diagram shows no relationship, consider whether the independent (x-axis) variable has
been varied widely. Sometimes a relationship is not apparent because the data don’t cover a
wide enough range.
x Think creatively about how to use scatter diagrams to discover a root cause.
x Drawing a scatter diagram is the first step in looking for a relationship between variables.
Example: 2
Determine the relationship (graphically) between the depth of cut in a milling operation and the amount
of tool wear. Forty observations were taken on the process, with the depth of cut (X) varied over a
range of values and the corresponding amount of tool wear (Y) recorded, as listed below.
X Y X Y X Y X Y
1 2.1 0.035 11 2.6 0.039 21 5.6 0.073 31 3.0 0.032
2 4.2 0.041 12 5.2 0.056 22 4.7 0.064 32 3.6 0.038
3 1.5 0.031 13 4.1 0.048 23 1.9 0.030 33 1.9 0.032
4 1.8 0.027 14 3.0 0.037 24 2.4 0.029 34 5.1 0.052
5 2.3 0.033 15 2.2 0.028 25 3.2 0.039 35 4.7 0.050
6 3.8 0.045 16 4.6 0.057 26 3.4 0.038 36 5.2 0.058
7 2.6 0.038 17 4.8 0.060 27 2.8 0.040 37 4.1 0.048
8 4.3 0.047 18 5.3 0.068 28 2.2 0.031 38 4.3 0.049
9 3.4 0.040 19 3.9 0.048 29 2.0 0.033 39 3.8 0.042
10 4.5 0.058 20 3.5 0.036 30 2.9 0.035 40 3.6 0.045
[Figure: scatter plot of tool wear (in mm), 0 to 0.08, against depth of cut, 0 to 6: POSITIVE CORRELATION]
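To put a number on that positive correlation, the sketch below computes the correlation coefficient r and the coefficient of determination R² for the 40 depth-of-cut / tool-wear pairs listed above, assuming NumPy is available.

```python
import numpy as np

depth = [2.1, 4.2, 1.5, 1.8, 2.3, 3.8, 2.6, 4.3, 3.4, 4.5,
         2.6, 5.2, 4.1, 3.0, 2.2, 4.6, 4.8, 5.3, 3.9, 3.5,
         5.6, 4.7, 1.9, 2.4, 3.2, 3.4, 2.8, 2.2, 2.0, 2.9,
         3.0, 3.6, 1.9, 5.1, 4.7, 5.2, 4.1, 4.3, 3.8, 3.6]
wear = [0.035, 0.041, 0.031, 0.027, 0.033, 0.045, 0.038, 0.047, 0.040, 0.058,
        0.039, 0.056, 0.048, 0.037, 0.028, 0.057, 0.060, 0.068, 0.048, 0.036,
        0.073, 0.064, 0.030, 0.029, 0.039, 0.038, 0.040, 0.031, 0.033, 0.035,
        0.032, 0.038, 0.032, 0.052, 0.050, 0.058, 0.048, 0.049, 0.042, 0.045]

r = np.corrcoef(depth, wear)[0, 1]          # Pearson correlation coefficient
print(f"r = {r:.3f}, R^2 = {r**2:.3f}")     # r close to +1 confirms the positive correlation
```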
Example: 3
Determine the relationship (graphically) between the temperature and pressure values listed below.
[Figure: scatter plot of pressure, 0 to 100, against temperature, 0 to 300]
Example: 4
Determine the relationship (graphically) between the speed and length values listed below.
No Speed Length No Speed Length No Speed Length No Speed Length No Speed Length
1 5.1 46 11 5.0 26 21 5.5 30 31 3.7 20 41 5.0 31
2 4.7 30 12 5.0 41 22 4.4 14 32 5.1 35 42 3.9 30
3 4.4 39 13 4.2 29 23 4.2 30 33 6.0 52 43 4.6 34
4 2.8 27 14 3.0 10 24 3.6 16 34 4.1 21 44 3.5 34
5 4.6 28 15 3.3 20 25 3.3 20 35 4.6 24 45 2.5 20
6 3.8 25 16 3.7 24 26 5.0 40 36 5.5 29 46 3.0 25
7 4.9 35 17 5.2 34 27 2.5 13 37 4.5 15 47 2.5 23
8 3.3 15 18 5.1 36 28 3.9 25 38 5.0 30 48 4.6 28
9 4.0 38 19 3.3 23 29 4.0 20 39 2.2 10 49 5.6 20
10 5.0 36 20 3.5 11 30 4.5 22 40 3.5 25 50 3.3 26
[Figure: scatter plot of length, 10 to 50, against speed, 0 to 7]
CAUSE-AND-EFFECT (ISHIKAWA) DIAGRAM
A cause-and-effect diagram is a picture composed of lines and symbols designed to represent a
meaningful relationship between an effect and its causes. It was developed by Dr. Kaoru Ishikawa in
1943 and is sometimes referred to as an Ishikawa diagram.
The cause-and-effect diagram has nearly unlimited application in research, manufacturing, marketing,
office operations, and so forth. Use the Ishikawa fishbone diagram or cause-and-effect diagram to
identify the special root causes of delay, waste, rework or cost.
One of its strongest assets is the participation and contribution of everyone involved in the
brainstorming process.
When to Use:
x When identifying possible causes for a problem.
x Especially when a team’s thinking tends to fall into ruts.
Purpose: To arrive at a few key sources that contribute most significantly to the problem being
examined. These sources are then targeted for improvement. The diagram also illustrates the
relationships among the wide variety of possible contributors to the effect. Use the Ishikawa fishbone
diagram or cause and effect diagram to identify the special root causes of delay, waste, rework or
cost.
The figure below shows a simple Ishikawa diagram. Note that this tool is referred to by several
different names: Ishikawa diagram, Cause-and-Effect diagram, Fishbone diagram, and Root Cause
Analysis. The first name is after the inventor of the tool, Kaoru Ishikawa (1969) who first used the
technique in the 1960s.
The basic concept in the Cause-and-Effect diagram is that the name of a basic problem of interest is
entered at the right of the diagram at the end of the main "bone". The main possible causes of the
problem (the effect) are drawn as bones off of the main backbone. The "Four-M" categories are
typically used as a starting point: "Materials", "Machines", "Manpower", and "Methods". Different
names can be chosen to suit the problem at hand, or these general categories can be revised. The
key is to have three to six main categories that encompass all possible influences. Brainstorming is
typically done to add possible causes to the main "bones" and more specific causes to the "bones" on
the main "bones". This subdivision into ever increasing specificity continues as long as the problem
areas can be further subdivided. The practical maximum depth of this tree is usually about four or five
levels. When the fishbone is complete, one has a rather complete picture of all the possibilities about
what could be the root cause for the designated problem.
The Cause-and-Effect diagram can be used by individuals or teams; probably most effectively by a
group. A typical utilization is the drawing of a diagram on a blackboard by a team leader who first
presents the main problem and asks for assistance from the group to determine the main causes
which are subsequently drawn on the board as the main bones of the diagram. The team assists by
making suggestions and, eventually, the entire cause and effect diagram is filled out. Once the entire
fishbone is complete, team discussion takes place to decide what are the most likely root causes of
the problem. These causes are circled to indicate items that should be acted upon, and the use of the
tool is complete.
The Ishikawa diagram, like most quality tools, is a visualization and knowledge organization tool.
Simply collecting the ideas of a group in a systematic way facilitates the understanding and ultimate
diagnosis of the problem. Several computer tools have been created for assisting in creating Ishikawa
diagrams. A tool created by the Japanese Union of Scientists and Engineers (JUSE) provides a
rather rigid tool with a limited number of bones. Other similar tools can be created using various
commercial tools.
Only one tool has been created that adds computer analysis to the fishbone. Bourne et al. (1991)
reported using Dempster-Shafer theory (Shafer and Logan, 1987) to systematically organize the
beliefs about the various causes that contribute to the main problem. Based on the idea that the main
problem has a total belief of one, each remaining bone has a belief assigned to it based on several
factors; these include the history of problems of a given bone, events and their causal relationship to
the bone, and the belief of the user of the tool about the likelihood that any particular bone is the
cause of the problem.
Procedure
Materials needed: flipchart or whiteboard, marking pens.
1. Agree on a problem statement (effect). Write it at the center right of the flipchart or
whiteboard. Draw a box around it and draw a horizontal arrow running to it.
2. Brainstorm the major categories of causes of the problem. If this is difficult use generic
headings:
o Methods
o Machines (equipment)
o People (manpower)
o Materials
o Measurement
o Environment
3. Write the categories of causes as branches from the main arrow.
4. Brainstorm all the possible causes of the problem. Ask: “Why does this happen?” As each
idea is given, the facilitator writes it as a branch from the appropriate category. Causes can
be written in several places if they relate to several categories.
5. Again ask “why does this happen?” about each cause. Write sub-causes branching off the
causes. Continue to ask “Why?” and generate deeper levels of causes. Layers of branches
indicate causal relationships.
6. When the group runs out of ideas, focus attention to places on the chart where ideas are few.
How to construct:
1. Place the main problem under investigation in a box on the right.
2. Have the team generate and clarify all the potential sources of variation.
3. Use an affinity diagram to sort the process variables into naturally related groups. The labels of
these groups are the names for the major bones on the Ishikawa diagram.
4. Place the process variables on the appropriate bones of the Ishikawa diagram.
5. Combine each bone in turn, ensuring that the process variables are specific, measurable, and
controllable. If they are not, branch or "explode" the process variables until the ends of the
branches are specific, measurable, and controllable.
Example
This fishbone diagram was drawn by a manufacturing team to try to understand the source of periodic
iron contamination. The team used the six generic headings to prompt ideas. Layers of branches
show thorough thinking about the causes of the problem.
For example, under the heading “Machines,” the idea “materials of construction” shows four kinds of
equipment and then several specific machine numbers.
Note that some ideas appear in two different places. “Calibration” shows up under “Methods” as a
factor in the analytical procedure, and also under “Measurement” as a cause of lab error. “Iron tools”
can be considered a “Methods” problem when taking samples or a “Manpower” problem with
maintenance personnel.
RUN CHART
PURPOSE
An in-depth view into run charts, a quality improvement technique: how run charts are used to monitor
processes, and how using run charts can lead to improved process quality.
The purpose of using control charts is
x to help prevent the process from going out of control
The control charts help detect the assignable causes of variation on time so that appropriate actions
can be taken to bring the process back in control.
x To keep from making adjustments when they are not needed.
Most production processes allow operators a certain level of leeway to make adjustments on the
machines that they are using when it is necessary. Yet over adjusting machines can have negative
impacts on the output. Control charts can indicate when the adjustments are necessary and when they
are not.
x To determine the natural range (control limits) of a process and to compare it to its specified
limits.
If the range of the control limits is wider than the one of the specified limits, the production process will
need to be adjusted.
x to inform about the process capabilities and stability
The process capability refers to the ability of the process to consistently deliver products that are
within the specified limits, and stability refers to the quality auditor's ability to predict the process
trends based on past experience. A long-term analysis of the control charts can help monitor a
machine's long-term capabilities; machine wear-out will be reflected in the production output.
x to fulfill the need of a constant process monitoring
Samples need to be taken on a regular basis and tested to make sure that the quality of the products
sent to the customers meets their expectations.
USAGE
Run charts are used to analyze processes according to time or order. Run charts are useful in
discovering patterns that occur over time.
KEY TERMS
h Trends:
Trends are patterns or shifts according to time. An upward trend, for instance, would contain a
section of data points that increased as time passed.
h Population:
A population is the entire data set of the process. If a process produces one thousand parts a
day, the population would be the one thousand items.
h Sample:
A sample is a subgroup or small portion of the population that is examined when the entire
population can not be evaluated. For instance, if the process does produce one thousand items a
day, the sample size could be perhaps three hundred.
HISTORY
Run charts originated from control charts, which were initially designed by Walter Shewhart. Walter
Shewhart was a statistician at Bell Telephone Laboratories in New York. Shewhart developed a system
for bringing processes into statistical control by developing ideas which would allow for a system to be
controlled using control charts. Run charts evolved from the development of these control charts, but
run charts focus more on time patterns while a control chart focuses more on acceptable limits of the
process. Shewhart's discoveries are the basis of what is known as SQC, or Statistical Quality Control.
EXAMPLE : 1
Problem Scenario
You have just moved into a new area that you are not familiar with. Your desire is to arrive at work on
time, but you have noticed over your first couple of weeks on the job that it doesn't take the same
amount of time each day of the week. You decide to monitor the amount of time it takes to get to work
over the next four weeks and construct a run chart.
Step 1: Gathering Data
Collect measurements each day over the next four weeks. Organize and record the data in
chronological or sequential form.
M T W TH F
WEEK 1 33 28 26.5 28 26
WEEK 2 35 30.5 28 26 25.5
WEEK 3 34.5 29 28 26 25
WEEK 4 34 29.5 27 27 25.5
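Step 2 of the example (plotting the run chart) can be sketched with matplotlib as below; drawing the centre line at the overall median is a common convention for run charts and is an assumption here, not something specified in the text.

```python
import statistics
import matplotlib.pyplot as plt

# Commute times (minutes), Monday to Friday for four weeks, in chronological order
times = [33, 28, 26.5, 28, 26,
         35, 30.5, 28, 26, 25.5,
         34.5, 29, 28, 26, 25,
         34, 29.5, 27, 27, 25.5]

plt.plot(range(1, len(times) + 1), times, marker="o")
plt.axhline(statistics.median(times), linestyle="--", label="median")   # centre line
plt.xlabel("Working day (chronological order)")
plt.ylabel("Time to work (minutes)")
plt.title("Run chart of commute times")
plt.legend()
plt.show()
```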
CONTROL CHART
What is it?
h A Control Chart is a tool you can use to monitor a process. It graphically depicts the average value and the upper and lower
control limits (the highest and lowest acceptable values) of a process.
h A graphical tool for monitoring changes that occur within a process, by distinguishing variation that is inherent in the
process (common cause) from variation that signals a change to the process (special cause). This change may be a single point
or a series of points in time; each is a signal that something is different from what was previously observed and measured.
A control chart (also called process chart or quality control chart) is a graph that shows whether a
sample of data falls within the common or normal range of variation. A control chart has upper and
lower control limits that separate common from assignable causes of variation. The common range of
variation is defined by the use of control limits. We say that a process is out of control when a plot of
data reveals that one or more samples fall outside the control limits.
The control chart, also known as the 'Shewhart chart' or 'process-behaviour chart' is a statistical tool
intended to assess the nature of variation in a process and to facilitate forecasting and management. A
control chart is a more specific kind of a run chart.
The control chart was invented by Walter A. Shewhart while working for Bell Labs in the 1920s. The
company's engineers had been seeking to improve the reliability of their telephony transmission
systems. Because amplifiers and other equipment had to be buried underground, there was a business
need to reduce the frequency of failures and repairs. By 1920 they had already realised the importance
of reducing variation in a manufacturing process. Moreover, they had realised that continual process-
adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart
framed the problem in terms of Common- and special-causes of variation and, on May 16, 1924, wrote
an internal memo introducing the control chart as a tool for distinguishing between the two. Dr.
Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a
page in length. About a third of that page was given over to a simple diagram which we would all
recognize today as a schematic control chart. That diagram, and the short text which preceded and
followed it, set forth all of the essential principles and considerations which are involved in what we
know today as process quality control." [1] Shewhart stressed that bringing a production process into a
state of statistical control, where there is only common-cause variation, and keeping it in control, is
necessary to predict future output and to manage a process economically.
Dr. Shewhart created the basis for the control chart and the concept of a state of statistical control by
carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories,
he understood data from physical processes never produce a "normal distribution curve" (a Gaussian
distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in
manufacturing data did not always behave the same way as data in nature (Brownian motion of
particles). Dr. Shewhart concluded that while every process displays variation, some processes display
controlled variation that is natural to the process, while others display uncontrolled variation that is not
present in the process causal system at all times.
A control chart is a run chart of a sequence of quantitative data with five horizontal lines drawn on the
chart:
x A centre line, drawn at the process mean;
x An upper warning limit, drawn two standard deviations above the centre line;
x An upper control limit (also called an upper natural process limit), drawn three standard
deviations above the centre line;
x A lower warning limit, drawn two standard deviations below the centre line;
x A lower control limit (also called a lower natural process limit), drawn three standard deviations
below the centre line.
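These five lines are straightforward to compute from a sequence of measurements; the sketch below uses the overall mean and sample standard deviation as rough stand-ins for the process parameters (in practice the dispersion for an individuals chart is usually estimated from the moving range). The data values are made up for illustration.

```python
import statistics

values = [50.1, 49.8, 50.3, 50.0, 49.7, 50.2, 50.4, 49.9, 50.1, 50.0]   # illustrative data

centre = statistics.mean(values)
sigma = statistics.stdev(values)

lines = {
    "UCL (+3 sigma)": centre + 3 * sigma,
    "upper warning (+2 sigma)": centre + 2 * sigma,
    "centre line": centre,
    "lower warning (-2 sigma)": centre - 2 * sigma,
    "LCL (-3 sigma)": centre - 3 * sigma,
}
for name, level in lines.items():
    print(f"{name:>25}: {level:.3f}")

# Observations outside the control limits (or unusual patterns within them) signal a special cause
signals = [v for v in values if not lines["LCL (-3 sigma)"] <= v <= lines["UCL (+3 sigma)"]]
print("special-cause signals:", signals or "none")
```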
Common cause variation plots as an irregular pattern, mostly within the control limits. Any observations
outside the limits, or patterns within, suggest (signal) a special-cause (see Rules below). The run chart
provides a context in which to interpret signals and can be beneficially annotated with events in the
business.
Different types of control charts can be used, depending upon the type of data. The two broadest
groupings are for variable data and attribute data.
x Variable data are measured on a continuous scale. For example: time, weight, distance or
temperature can be measured in fractions or decimals. The possibility of measuring to greater
precision defines variable data.
x Attribute data are counted and cannot have fractions or decimals. Attribute data arise when
you are determining only the presence or absence of something: success or failure, accept or
reject, correct or not correct. For example, a report can have four errors or five errors, but it
cannot have four and a half errors.
Variables charts:
o X-bar and R chart (also called averages and range chart)
o X-bar and s chart
o chart of individuals (also called X chart, X-R chart, IX-MR chart, XmR chart, moving range chart)
o moving average–moving range chart (also called MA–MR chart)
o target charts (also called difference charts, deviation charts and nominal charts)
o CUSUM (also called cumulative sum chart)
o EWMA (also called exponentially weighted moving average chart)
Control charts are used in situations such as the following:
x When controlling ongoing processes by finding and correcting problems as they occur.
x When predicting the expected range of outcomes from a process.
x When determining whether a process is stable (in statistical control).
x When analyzing patterns of process variation from special causes (non-routine events) or common causes (built into the process).
x When determining whether your quality improvement project should aim to prevent specific problems or to make fundamental changes to the process.
The control chart is a graph used to study how a process changes over time. Data are plotted in time
order. A control chart always has a central line for the average, an upper line for the upper control limit
and a lower line for the lower control limit. These lines are determined from historical data. By
comparing current data to these lines, you can draw conclusions about whether the process variation is
consistent (in control) or is unpredictable (out of control, affected by special causes of variation).
Control charts for variable data are used in pairs. The top chart monitors the average, or the centering
of the distribution of data from the process. The bottom chart monitors the range, or the width of the
distribution. If your data were shots in target practice, the average is where the shots are clustering, and
the range is how tightly they are clustered. Control charts for attribute data are used singly.
For construction:
1. Select the process to be charted and decide on the type of control chart to use.
o Use a Percent Nonconforming Chart (more information available from Health Tactics P
Chart) if you have data measured using two outcomes (for example, the billing can be
correct or incorrect).
o Use an Average and Range Control Chart (more information available from Health
Tactics X-R Chart) if you have data measured using a continuous scale (for example,
waiting time in the health center).
2. Determine your sampling method and plan:
o Choose the sample size (how many samples will you obtain?).
o Choose the frequency of sampling, depending on the process to be evaluated (months,
days, years?).
o Make sure you get samples at random (don't always get data from the same person, on
the same day of the week, etc.).
3. Start data collection:
o Gather the sampled data.
o Record data on the appropriate control graph.
4. Calculate the appropriate statistics (the control limits) depending on the type of graph.
5. Observation:
The control graph is divided into zones bounded by the Upper Control Limit (UCL), the center line, and the Lower Control Limit (LCL), where k is the distance between the center line and the control limits.
EXAMPLE: 2
Consider the length as being the critical characteristic of manufactured bolts. The mean length of the bolts is 17 inches with a known standard deviation of 0.01. A sample of 5 bolts is taken every half an hour for testing, and the mean of the sample is computed and plotted on the chart. That control chart is called an X-bar chart because it plots the means of the samples.
Based on the Central Limit Theorem, the standard deviation of the sample means is sigma/sqrt(n) = 0.01/sqrt(5), or roughly 0.0045. The mean of the sample means will still be the same as the population's mean, 17.
For three-sigma control limits, we will have:
UCL = 17 + 3(0.01/sqrt(5)) ≈ 17.013 and LCL = 17 - 3(0.01/sqrt(5)) ≈ 16.987.
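Restating the numbers above as a small Python sketch (no new data; only the values already given in the example are used):

import math

mu = 17.0        # process mean (inches)
sigma = 0.01     # known process standard deviation
n = 5            # bolts per sample

sigma_xbar = sigma / math.sqrt(n)   # standard deviation of the sample means
ucl = mu + 3 * sigma_xbar
lcl = mu - 3 * sigma_xbar

print(round(ucl, 4), round(lcl, 4))  # approximately 17.0134 and 16.9866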
Control limits on a control chart should be readjusted every time a significant shift in the process
occurs.
A typical control chart is made up of at least four elements: a vertical axis that measures the level of the samples' means; the two outermost horizontal lines, which represent the UCL and the LCL; and the Center Line, which represents the process mean. If all the points plot between the UCL and the LCL in a random manner, the process is considered to be in control.
An in-control process does not mean a total absence of variation. Rather, the variation that is present exhibits a random pattern, stays within the control limits and, based on past experience, can be predicted; it is strictly due to common causes. Control charts are an effective tool for detecting the special causes of variation.
The following chart depicts a process in control and within the specified limits. The Normal curve on the
left side shows the specified (desired) limits of the production process while the right chart is the control
chart. The specification limits determine whether the products meet the customers' expectations while
the control limits determine whether the process is under statistical control. These two charts are
completely separate entities. There is no statistical relationship between the specification limits and the
control limits.
If some points are outside the control limits, this will indicate that the process is out of control and
corrective actions need to be taken.
Note that a process with all the points between the control limits is not necessarily an acceptable process. A process can be in control and still have high variability, or have too many of the plotted points close to one control limit and away from the target.
The following chart is a good example of an out of control process with all the points plotted within the
control limits.
In this example, points A, B, C, D, E and F are all well within the limits but they do not behave
randomly, they exhibit a run up pattern, in other words they follow a steady (increasing) trend. The
causes of this run up pattern need to be investigated because it might be the result of a problem with
the process.
The interpretation of the control charts patterns is not easy and requires experience and know-how.
Out of Control signals (a computational sketch follows this list):
o A single point outside the control limits. In Figure 1, point sixteen is above the UCL (upper control limit).
o Two out of three successive points are on the same side of the centerline and farther than 2 sigma from it. In Figure 1, point 4 sends that signal.
o Four out of five successive points are on the same side of the centerline and farther than 1 sigma from it. In Figure 1, point 11 sends that signal.
o A run of eight in a row on the same side of the centerline, or 10 out of 11, 12 out of 14 or 16 out of 20. In Figure 1, point 21 is eighth in a row above the centerline.
o Obvious consistent or persistent patterns that suggest something unusual about your
data and your process.
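A minimal Python sketch of how the first four rules above might be checked against a series of plotted points; the function name, the sample data and the sigma value are hypothetical, and a real implementation would also de-duplicate overlapping signals and handle the pattern-based rules.

def out_of_control_signals(points, centre, sigma):
    signals = []
    for i, x in enumerate(points):
        # Rule 1: a single point outside the 3-sigma control limits.
        if abs(x - centre) > 3 * sigma:
            signals.append((i, "beyond 3 sigma"))
        # Rule 2: two out of three successive points beyond 2 sigma, same side.
        if i >= 2:
            last3 = points[i - 2:i + 1]
            for side in (+1, -1):
                if sum(1 for v in last3 if side * (v - centre) > 2 * sigma) >= 2:
                    signals.append((i, "2 of 3 beyond 2 sigma"))
        # Rule 3: four out of five successive points beyond 1 sigma, same side.
        if i >= 4:
            last5 = points[i - 4:i + 1]
            for side in (+1, -1):
                if sum(1 for v in last5 if side * (v - centre) > 1 * sigma) >= 4:
                    signals.append((i, "4 of 5 beyond 1 sigma"))
        # Rule 4: a run of eight in a row on the same side of the centre line.
        if i >= 7:
            last8 = points[i - 7:i + 1]
            if all(v > centre for v in last8) or all(v < centre for v in last8):
                signals.append((i, "run of 8 on one side"))
    return signals

print(out_of_control_signals([5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8], 5.0, 0.2))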
When the samples plotted in the control chart are not of equal size, then the control limits around the
center line (target specification) cannot be represented by a straight line. For example, to return to the
formula Sigma/Square Root(n) presented earlier for computing control limits for the X-bar chart, it is
obvious that unequal n's will lead to different control limits for different sample sizes. There are three
ways of dealing with this situation.
Average sample size. If one wants to maintain the straight-line control limits (e.g., to make the chart
easier to read and easier to use in presentations), then one can compute the average n per sample
across all samples, and establish the control limits based on the average sample size. This procedure
is not "exact," however, as long as the sample sizes are reasonably similar to each other, this
procedure is quite adequate.
Variable control limits. Alternatively, one may compute different control limits for each sample, based
on the respective sample sizes. This procedure will lead to variable control limits, and result in step-
chart like control lines in the plot. This procedure ensures that the correct control limits are computed
for each sample. However, one loses the simplicity of straight-line control limits.
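As an illustration of the variable-control-limit option, the short Python sketch below (with hypothetical values for the grand mean, the process sigma and the subgroup sizes) computes a separate pair of limits for each subgroup from its own sample size:

import math

grand_mean = 50.0     # estimated process mean (centre line)
process_sigma = 2.0   # estimated process standard deviation
subgroup_sizes = [4, 5, 3, 6, 5]

for j, n in enumerate(subgroup_sizes, start=1):
    half_width = 3 * process_sigma / math.sqrt(n)
    print(f"subgroup {j}: UCL={grand_mean + half_width:.2f}, "
          f"LCL={grand_mean - half_width:.2f}")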
Stabilized (normalized) chart. The best of two worlds (straight line control limits that are accurate) can
be accomplished by standardizing the quantity to be controlled (mean, proportion, etc.) according to
units of sigma. The control limits can then be expressed in straight lines, while the location of the
sample points in the plot depend not only on the characteristic to be controlled, but also on the
respective sample n's. The disadvantage of this procedure is that the values on the vertical (Y) axis in
the control chart are in terms of sigma rather than the original units of measurement, and therefore,
those numbers cannot be taken at face value (e.g., a sample with a value of 3 is 3 times sigma away
from specifications; in order to express the value of this sample in terms of the original units of
measurement, we need to perform some computations to convert this number back)
Attribute Data:
This category of control chart displays data that result from counting the number of occurrences
or items in a single category of similar items or occurrences. These “count” data may be
expressed as pass/fail, yes/no, or presence/absence of a defect.
Variable Data:
This category of control chart displays values resulting from the measurement of a continuous
variable. Examples of variables data are elapsed time, temperature, and radiation dose.
Nominal chart, target chart. There are several different types of short run charts. The most basic are
the nominal short run chart, and the target short run chart. In these charts, the measurements for each
part are transformed by subtracting a part-specific constant. These constants can either be the nominal
values for the respective parts (nominal short run chart), or they can be target values computed from
the (historical) means for each part (Target X-bar and R chart). For example, the diameters of piston
bores for different engine blocks produced in a factory can only be meaningfully compared (for
determining the consistency of bore sizes) if the mean differences between bore diameters for different
sized engines are first removed. The nominal or target short run chart makes such comparisons
possible. Note that for the nominal or target chart it is assumed that the variability across parts is
identical, so that control limits based on a common estimate of the process sigma are applicable.
Standardized short run chart. If the variability of the process for different parts cannot be assumed to
be identical, then a further transformation is necessary before the sample means for different parts can
be plotted in the same chart. Specifically, in the standardized short run chart the plot points are further
transformed by dividing the deviations of sample means from part means (or nominal or target values
for parts) by part-specific constants that are proportional to the variability for the respective parts. For
example, for the short run X-bar and R chart, the plot points (that are shown in the X-bar chart) are
computed by first subtracting from each sample mean a part specific constant (e.g., the respective part
mean, or nominal value for the respective part), and then dividing the difference by another constant,
for example, by the average range for the respective chart. These transformations will result in
comparable scales for the sample means for different parts.
For controlling quality characteristics that represent attributes of the product, the following charts are commonly constructed (a short limit-calculation sketch follows the list):
x C chart. In this chart (see example below), we plot the number of defectives (per batch, per
day, per machine, per 100 feet of pipe, etc.). This chart assumes that defects of the quality
attribute are rare, and the control limits in this chart are computed based on the Poisson
distribution (distribution of rare events).
x U chart. In this chart we plot the rate of defectives, that is, the number of defectives divided by
the number of units inspected (the n; e.g., feet of pipe, number of batches). Unlike the C chart,
this chart does not require a constant number of units, and it can be used, for example, when
the batches (samples) are of different sizes.
x Np chart. In this chart, we plot the number of defectives (per batch, per day, per machine) as in
the C chart. However, the control limits in this chart are not based on the distribution of rare
events, but rather on the binomial distribution. Therefore, this chart should be used if the
occurrence of defectives is not rare (e.g., they occur in more than 5% of the units inspected).
For example, we may use this chart to control the number of units produced with minor flaws.
x P chart. In this chart, we plot the percent of defectives (per batch, per day, per machine, etc.)
as in the U chart. However, the control limits in this chart are not based on the distribution of
rare events but rather on the binomial distribution (of proportions). Therefore, this chart is most
applicable to situations where the occurrence of defectives is not rare (e.g., we expect the
percent of defectives to be more than 5% of the total number of units produced).
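The limit formulas for these attribute charts are not written out at this point in the text; as an illustration, the Python sketch below applies the standard three-sigma formulas for the C chart (Poisson-based, limits at c-bar +/- 3*sqrt(c-bar)) and the U chart (limits at u-bar +/- 3*sqrt(u-bar/n)). The counts and sample sizes are hypothetical.

import math

defect_counts = [4, 7, 3, 5, 6, 2, 8, 4]      # defects per inspection unit (C chart)
c_bar = sum(defect_counts) / len(defect_counts)
c_ucl = c_bar + 3 * math.sqrt(c_bar)
c_lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))

units_inspected = [50, 40, 60, 50]             # varying sample sizes (U chart)
defects = [6, 5, 9, 7]
u_bar = sum(defects) / sum(units_inspected)
u_limits = [(u_bar + 3 * math.sqrt(u_bar / n),
             max(0.0, u_bar - 3 * math.sqrt(u_bar / n))) for n in units_inspected]

print(c_bar, c_ucl, c_lcl)
print(u_bar, u_limits)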
Like the sigma control limits discussed earlier, the runs rules are based on "statistical" reasoning. For
example, the probability of any sample mean in an X-bar control chart falling above the center line is
equal to 0.5, provided (1) that the process is in control (i.e., that the center line value is equal to the
population mean), (2) that consecutive sample means are independent (i.e., not auto-correlated), and
(3) that the distribution of means follows the normal distribution. Simply stated, under those conditions
there is a 50-50 chance that a mean will fall above or below the center line. Thus, the probability that
two consecutive means will fall above the center line is equal to 0.5 times 0.5 = 0.25.
Accordingly, the probability that 9 consecutive samples (or a run of 9 samples) will fall on the same side
of the center line is equal to 0.5**9 = .00195. Note that this is approximately the probability with which a
sample mean can be expected to fall outside the 3- times sigma limits (given the normal distribution,
and a process in control). Therefore, one could look for 9 consecutive sample means on the same side
of the center line as another indication of an out-of-control condition. Refer to Duncan (1974) for details
concerning the "statistical" interpretation of the other (more complex) tests.
Zone A, B, C. Customarily, to define the runs tests, the area above and below the chart center line is
divided into three "zones."
By default, Zone A is defined as the area between 2 and 3 times sigma above and below the center
line; Zone B is defined as the area between 1 and 2 times sigma, and Zone C is defined as the area
between the center line and 1 times sigma.
* 4 out of 5 points in a row in Zone B or beyond. Like the previous test, this test may be considered an "early warning indicator" of a potential process shift. The false-positive error rate for this test is also about 2%.
* 15 points in a row in Zone C (above and below the center line). This test indicates a smaller
variability than is expected (based on the current control limits).
* 8 points in a row in Zone B, A, or beyond, on either side of the center line (without points in Zone C). This test indicates that different samples are affected by different factors, resulting in a bimodal distribution of means. This may happen, for example, if different samples in an X-bar chart were produced by one of two different machines, where one produces above-average parts and the other below-average parts.
Control charts provide the operational definition of the term special cause. A special cause is simply
anything which leads to an observation beyond a control limit. However, this simplistic use of control
charts does not do justice to their power. Control charts are running records of the performance of the
process and, as such, they contain a vast store of information on potential improvements. While some
guidelines are presented here, control chart interpretation is an art that can only be developed by
looking at many control charts and probing the patterns to identify the underlying system of causes at
work.
Freak patterns are the classical special cause situation. Freaks result from causes that have a large
effect but that occur infrequently. When investigating freak values look at the cause-and-effect diagram
for items that meet these criteria. The key to identifying freak causes is timeliness in collecting and
recording the data. If you have difficulty, try sampling more frequently.
Drift is generally seen in processes where the current process value is partly determined by the
previous process state. For example, if the process is a plating bath, the content of the tank cannot
change instantaneously, instead it will change gradually. Another common example is tool wear: the
size of the tool is related to its previous size. Once the cause of the drift has been determined, the
appropriate action can be taken. Whenever economically feasible, the drift should be eliminated, e.g.,
install an automatic chemical dispenser for the plating bath, or make automatic compensating
adjustments to correct for tool wear. Note that the total process variability increases when drift is
allowed, which adds cost. When this is not possible, the control chart can be modified in one of two
ways:
1. Make the slope of the center line and control limits match the natural process drift. The control
chart will then detect departures from the natural drift.
Cycles often occur due to the nature of the process. Common cycles include hour of the day, day of the
week, month of the year, quarter of the year, week of the accounting cycle, etc. Cycles are caused by
modifying the process inputs or methods according to a regular schedule. The existence of this
schedule and its effect on the process may or may not be known in advance. Once the cycle has been
discovered, action can be taken. The action might be to adjust the control chart by plotting the control
measure against a variable base. For example, if a day-of-the-week cycle exists for shipping errors
because of the workload, you might plot shipping errors per 100 orders shipped instead of shipping
errors per day. Alternatively, it may be worthwhile to change the system to smooth out the cycle. Most
processes operate more efficiently when the inputs are relatively stable and when methods are
changed as little as possible.
A controlled process will exhibit only "random looking" variation. A pattern where every nth item is
different is, obviously, non-random. These patterns are sometimes quite subtle and difficult to identify. It
is sometimes helpful to see if the average fraction defective is close to some multiple of a known
number of process streams. For example, if the machine is a filler with 40 stations, look for problems
that occur 1/40, 2/40, 3/40, etc., of the time.
When plotting measurement data the assumption is that the numbers exist on a continuum, i.e., there
will be many different values in the data set. In the real world, the data are never completely
continuous. It usually doesn’t matter much if there are, say, 10 or more different numbers. However,
when there are only a few numbers that appear over-and-over it can cause problems with the analysis.
A common problem is that the R chart will underestimate the average range, causing the control limits
on both the average and range charts to be too close together. The result will be too many "false
alarms" and a general loss of confidence in SPC.
The usual cause of this situation is inadequate gage resolution. The ideal solution is to obtain a gage
with greater resolution. Sometimes the problem occurs because operators, inspectors, or computers
are rounding the numbers. The solution here is to record additional digits.
The reason SPC is done is to accelerate the learning process and to eventually produce an
improvement. Control charts serve as historical records of the learning process and they can be used
by others to improve other processes. When an improvement is realized the change should be written
on the old control chart; its effect will show up as a less variable process. These charts are also useful
in communicating the results to leaders, suppliers, customers, and others interested in quality
improvement.
Seemingly random patterns on a control chart are evidence of unknown causes of variation, which is
not the same as uncaused variation. There should be an ongoing effort to reduce the variation from
these so-called common causes. Doing so requires that the unknown causes of variation be identified.
One way of doing this is a retrospective evaluation of control charts. This involves brainstorming and
preparing cause and effect diagrams, then relating the control chart patterns to the causes listed on the
diagram. For example, if "operator" is a suspected cause of variation, place a label on the control chart
points produced by each operator. If the labels exhibit a pattern, there is evidence to suggest a
problem. Conduct an investigation into the reasons and set up controlled experiments (prospective
studies) to test any theories proposed. If the experiments indicate a true cause and effect relationship,
make the appropriate process improvements. Keep in mind that a statistical association is not the same
thing as a causal correlation. The observed association must be backed up with solid subject-matter
expertise and experimental data.
Mixture exists when data from two different cause-systems are plotted on a single control chart. It
indicates a failure in creating rational subgroups. The underlying differences should be identified and
corrective action taken. The nature of the corrective action will determine how the control chart should
be modified.
Mixture example #1
The mixture represents two different operators who can be made more consistent. A single control
chart can be used to monitor the new, consistent process.
Mixture example #2
The mixture is in the number of emergency room cases received on Saturday evening, versus the
number received during a normal week. Separate control charts should be used to monitor patient-load
during the two different time periods.
What are the WECO (Western Electric Company) rules for signaling "Out of Control"?
The WECO rules are based on probability. We know that, for a normal distribution, the probability of encountering a point outside ±3 sigma is 0.3%. This is a rare event. Therefore, if we observe a point outside the control limits, we conclude the process has shifted and is unstable. Similarly, we can identify other events that are equally rare and use them as flags for instability. The probability of observing two points out of three in a row between 2 and 3 sigma, and the probability of observing four points out of five in a row between 1 and 2 sigma, are also about 0.3%.
X-bar & Range Charts are a set of control charts for variables data (data that is both quantitative and
continuous in measurement, such as a measured dimension or time). The X-bar chart monitors the
process location over time, based on the average of a series of observations, called a subgroup. The
Range chart monitors the variation between observations in the subgroup over time.
Our X-bar & Range Chart accurately accounts for subgroups of varying sizes using Burr weighting
factors to determine process sigma (Burr,1969). For a general discussion of X-bar & R Charts, refer to
any Statistical Quality Control textbook, such as (Pyzdek, 1990, 1992a) or (Montgomery, 1991), or The
Memory Jogger (GOAL/QPC).
Control charts are generally used in a production or manufacturing environment and are used to
control, monitor and IMPROVE a process. Common causes are always present and generally attributed
to machines, material and time vs. temperature. This normally takes a minor adjustment to the
process to make the correction and return the process to a normal output. HOWEVER, when making a
change to the process, it should always be a MINOR change. If a plot is observed that shows a slight
deviation trend upward or downward, the "tweaking" adjustment should be a slight change, and then
another observation should be made. Too often people will over-correct by making too big of an
adjustment which then causes the process to dramatically shift in the other direction. For that reason,
all changes to the process should be SLIGHT and GRADUAL!
A control chart is a graph or chart with limit lines, called control lines. There are basically three kinds of
control lines:
x the upper control limit (UCL),
x the central line (actual nominal size of product),
x the lower control limit (LCL).
The purpose of drawing a control chart is to detect any changes in the process that would be evident by
any abnormal points listed on the graph from the data collected. If these points are plotted in "real time",
the operator will immediately see that the point is exceeding one of the control limits, or is heading in
that direction, and can make an immediate adjustment. The operator should also record on the chart
the cause of the drift, and what was done to correct the problem bringing the process back into a "state
of control".
The method in which data is collected to be charted is as follows: A sampling plan is devised to
measure parts and then to chart that measurement at a specified interval. The time interval and method
of collection will vary. For our example, we will say that we collect data five times a day at specified time
intervals. In making the control chart, the daily data is averaged out in order to obtain an average value
for that day. Each of these values then becomes a point on the control chart that then represents the
characteristics of that given day. To explain further, the five measurements made in one day constitute
one sub group, or one plot point. In some manufacturing firms, measurements are taken every 15
minutes, and the four plots (a subgroup) are totaled and then an average value is calculated. This value
then equals one plot for the hour, and that plot is placed on the chart; thus, one plot point on the chart
every hour of the working day.
When these plot points fall outside the UCL or LCL, some form of change must occur on the assembly or manufacturing line. Further, the cause needs to be investigated and proper action taken to prevent it from happening again; in the Quality world this is called preventive action and continuous improvement. The use of control charts is called "process control." In reality, however, a trend will
develop that indicates the process is leading away from the center line, and corrective action is usually
taken prior to a point exceeding one of the control limits.
There are two main types of Control Charts. Certain data are based upon measurements, such as the
measurement of unit parts. These are known as "indiscrete values" or "continuous data". Other types of
data are based on counting, such as the number of defective articles or the number of defects. These
are known as "discrete values" or "enumerated data".
For subgroup sizes greater than ten, use X-bar / Sigma charts, since the range statistic is a poor
estimator of process sigma for large subgroups. In fact, the subgroup sigma is ALWAYS a better
estimate of subgroup variation than subgroup range. The popularity of the Range chart is only due to its
ease of calculation, dating to its use before the advent of computers. For subgroup sizes equal to one, an Individual-X / Moving Range chart can be used, as well as EWMA or CUSUM charts.
X-bar Charts are efficient at detecting relatively large shifts in the process average, typically shifts of +-
1.5 sigma or larger. The larger the subgroup, the more sensitive the chart will be to shifts, providing a
Rational Subgroup can be formed. For more sensitivity to smaller process shifts, use an EWMA or CUSUM chart.
Subgroup Average
1. Average / Center Line
The Average, sometimes called X-bar, is calculated for a set of n data values as:
x-bar = (x1 + x2 + ... + xn) / n
An example of its use is as the plotted statistic in an X-bar Chart. Here, n is the subgroup size, and x-bar indicates the average of the observations in the subgroup. The building of the X-bar chart follows the same principle as that of the attribute control charts, with the difference that quantitative measurements of the Critical-To-Quality characteristics are considered instead of qualitative attributes. Samples are taken, the mean (x-bar) of each sample is derived, and the means are plotted on the chart.
The center line (CL) is determined by averaging the x-bars.
When dealing with subgrouped data, you can also calculate the overall average of the subgroups. It is the average of the subgroups' averages, so it is sometimes called X-double-bar:
X-double-bar = (x-bar1 + x-bar2 + ... + x-barm) / m
where n is the subgroup size and m is the total number of subgroups included in the analysis.
The next step is to determine the Upper Control Limit (UCL) and the Lower Control Limit (LCL):
UCL = X-double-bar + k * sigma-xbar, LCL = X-double-bar - k * sigma-xbar
We have determined k to be equal to 3, so the only remaining variable in this equation is sigma-xbar, the standard deviation of the subgroup means, which can be determined in several ways. One way would be through the use of the standard error estimate, and another would be the use of the mean range.
There is a special relationship between the mean range and the standard deviation for normally distributed data: the Process Sigma can be estimated from the mean range as sigma-x = R-bar / d2. The control limits then become
UCL = X-double-bar + 3 * sigma-x / sqrt(n), LCL = X-double-bar - 3 * sigma-x / sqrt(n)
(equivalently, UCL = X-double-bar + A2 * R-bar and LCL = X-double-bar - A2 * R-bar, with A2 = 3 / (d2 * sqrt(n)))
where X-double-bar is the Grand Average and sigma-x is the Process Sigma, which is calculated using the Subgroup Range or Subgroup Sigma statistic.
The Average Range is
R-bar = (R1 + R2 + ... + Rm) / m (n = constant), or a weighted average of the subgroup ranges (n not constant)
where:
x Rj is the Subgroup Range of subgroup j, Rj = (largest observation in subgroup j) - (smallest observation in subgroup j)
x R-bar is the Average Range
When the Subgroup Sigma statistic is used instead, the subgroup standard deviation is
sj = sqrt( (sum over i of (xi - x-barj)^2) / (n - 1) )
where xi are the observations in subgroup j, x-barj is the subgroup average for subgroup j, and n is the subgroup size.
2. Average Sigma
The average of the subgroup standard deviations, S-bar = (s1 + s2 + ... + sm) / m.
UCL, LCL (Upper and Lower Control Limit) for the Sigma chart:
UCL = S-bar + 3 * (S-bar / c4) * sqrt(1 - c4^2), LCL = S-bar - 3 * (S-bar / c4) * sqrt(1 - c4^2)
where S-bar is the Average Sigma and c4 is a function of n (available in any statistical quality control textbook).
Notes: Some authors prefer to write this as UCL = B4 * S-bar and LCL = B3 * S-bar, where B3 and B4 are a function of n (available in any statistical quality control textbook).
3. Average Range
The average of the subgroup ranges (R-bar):
R-bar = (R1 + R2 + ... + Rm) / m (n = constant), or a weighted average of the subgroup ranges (n not constant)
where:
x Rj is the Subgroup Range of subgroup j
x m is the total number of subgroups included in the analysis
UCL, LCL (Upper and Lower Control Limit) for the Range chart:
UCL = R-bar + 3 * d3 * sigma-x, LCL = R-bar - 3 * d3 * sigma-x
where R-bar is the Average Range, d3 is a function of n (available in any statistical quality control textbook), and sigma-x is the Process Sigma, which is calculated using the Subgroup Range as sigma-x = R-bar / d2, where d2 is also a function of n.
Notes: Some authors prefer to write this as UCL = D4 * R-bar and LCL = D3 * R-bar, where D3 and D4 are a function of n (available in any statistical quality control textbook).
Note: When control limits for the X-Bar chart are defined as fixed values (such as when historical data are used to define control limits), the Average Range (R-bar) must be back-calculated from these pre-defined control limits. This ensures that the control limits on the Range chart are at the same sensitivity as those on the X-Bar chart. In this case:
R-bar = d2 * sqrt(n) * (UCL - CL) / 3
where d2 (available in any statistical quality control textbook) is based on the subgroup size n.
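A compact Python sketch of the X-bar and R chart limit calculations described above, using the A2, D3 and D4 factors for a subgroup size of five (the same factor values that appear in the worked examples later in this section); the subgroup data themselves are hypothetical.

subgroups = [
    [5.02, 5.01, 4.99, 5.00, 5.03],
    [4.98, 5.00, 5.01, 5.02, 4.99],
    [5.01, 5.03, 5.00, 4.98, 5.02],
    [5.00, 4.99, 5.01, 5.00, 5.02],
]
A2, D3, D4 = 0.577, 0.0, 2.114   # standard factors for n = 5

xbars = [sum(s) / len(s) for s in subgroups]      # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]     # subgroup ranges
x_double_bar = sum(xbars) / len(xbars)            # grand average (centre line)
r_bar = sum(ranges) / len(ranges)                 # average range

ucl_x = x_double_bar + A2 * r_bar                 # X-bar chart limits
lcl_x = x_double_bar - A2 * r_bar
ucl_r = D4 * r_bar                                # R chart limits
lcl_r = D3 * r_bar

print(round(x_double_bar, 4), round(ucl_x, 4), round(lcl_x, 4))
print(round(r_bar, 4), round(ucl_r, 4), round(lcl_r, 4))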
4. Moving Range
When individual (samples composed of a single item) CTQ characteristics are collected, moving range control charts can be used to monitor process quality. The variability of the process is measured in terms of the distribution of the absolute values of the differences of every two successive observations.
Let xi be the ith observation; the moving range will be MRi = | xi - x(i-1) |.
The standard deviation S is obtained by dividing the average moving range, MR-bar, by the constant d2. Since the moving range only involves two observations, n is equal to two and therefore, for this case, d2 will always be 1.128.
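A small Python sketch of the individuals and moving-range calculation just described: the moving range is the absolute difference between successive observations, and sigma is estimated as MR-bar / 1.128. The D4 value of 3.267 used for the moving-range chart limit is the standard table value for n = 2 and is an addition here, not stated in the text above; the observations are hypothetical.

observations = [10.2, 10.5, 9.9, 10.1, 10.4, 10.0, 10.3]

moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)   # average moving range
x_bar = sum(observations) / len(observations)      # centre line
sigma_hat = mr_bar / 1.128                         # d2 = 1.128 for n = 2

ucl_x = x_bar + 3 * sigma_hat                      # individuals chart limits
lcl_x = x_bar - 3 * sigma_hat
ucl_mr = 3.267 * mr_bar                            # moving-range chart limit (D4, n = 2)
lcl_mr = 0.0                                       # D3 = 0 for n = 2

print(x_bar, ucl_x, lcl_x, mr_bar, ucl_mr)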
x STEP #5 - Find the range, R, for each subgroup: R = (largest value in the subgroup) - (smallest value in the subgroup).
EXAMPLE: 3.
It is now time for you to practice some of your learning. I have completed many of the Xbar and R
values for you, however, you really should perform a few calculations to gain the experience. Using the
attached Exercise Sheet, calculate the remaining Xbar and R values.
x STEP #7 - Compute the average value of the range (R). Total R for all the groups and divide by
the number of subgroups (k).
x STEP #8 - Compute the Control Limit Lines. Use the following formulas for Xbar and R Control
Charts. The coefficients for calculating the control lines are A2, D4, and D3 are located on the
bottom of the Work Sheet you are presently using, and presented here:
x STEP #9 - Construct the Control Chart. Using graph paper or Control Chart paper, set the index
so that the upper and lower control limits will be separated by 20 to 30 mm (units). Draw in the
Control lines CL, UCL and LCL, and label them with their appropriate numerical values. It is
recommended that you use a blue or black line for the CL, and a red line for the UCL and LCL.
The central line is a solid line. The Upper and Lower control limits are usually drawn as broken
lines.
x STEP #10 - Plot the Xbar and R values as computed for each subgroup. For the Xbar values,
use a dot (.), and for the R values, use an (x). Circle any points that lie outside the control limit
lines so that you can distinguish them from the others. The plotted points should be about 2 to
5 mm apart. Below is what our Xbar chart looks like when plotted.
x STEP #11 - Write in the necessary information. On the top center of the control charts, label the Xbar chart and the R chart so that you (and others) will know which chart is which. On the upper left hand corner of the Xbar control chart, write the n value to indicate the subgroup size; in this case n = 5.
resolution of your measurements, which will adversely affect your control limit calculations. In this case,
you'll have to look at how you measure the variable, and try to measure it more precisely.
Once you've removed the effect of the out of control points from the Range chart, look at the X-bar
Chart.
Interpreting the X-bar Chart
After reviewing the Range chart, look for out of control points on the X-bar Chart. If there are any, then
the special causes must be eliminated. Brainstorm and conduct Designed Experiments to find those
process elements that contribute to sporadic changes in process location. To use the data you have,
turn Auto Drop ON, which will remove the statistical bias of the out of control points by dropping them
from the calculations of the average X-bar and X-bar control limits.
Look for obviously non-random behavior. Turn on the Run Tests, which apply statistical tests for trends
to the plotted points.
If the process shows control relative to the statistical limits and Run Tests for a sufficient period of time
(long enough to see all potential special causes), then we can analyze its capability relative to
requirements. Capability is only meaningful when the process is stable, since we cannot predict the
outcome of an unstable process.
Now that we know how to make a control chart, it is even more important to understand how to interpret
them and realize when there is a problem. All processes have some kind of variation, and this process
variation can be partitioned into two main components. First, there is natural process variation,
frequently called "common cause" or system variation. These are common variations caused by
machines, material and the natural flow of the process. Secondly is special cause variation, generally
caused by some problem or extraordinary occurrence in the system. It is our job to work at trying to
eliminate or minimize both of these types of variation. Below is an example of a few different process
variations, and how to recognize a potential problem.
Types of Errors:
Control limits on a control chart are commonly drawn at 3 sigma from the center line because 3-sigma limits are a good balance point between two types of errors:
x Type I or alpha errors occur when a point falls outside the control limits even though no special
cause is operating. The result is a witch-hunt for special causes and adjustment of things here
and there. The tampering usually distorts a stable process as well as wasting time and energy.
x Type II or beta errors occur when you miss a special cause because the chart isn't sensitive
enough to detect it. In this case, you will go along unaware that the problem exists and thus
unable to root it out.
All process control is vulnerable to these two types of errors. The reason that 3-sigma control limits
balance the risk of error is that, for normally distributed data, data points will fall inside 3-sigma limits
99.7% of the time when a process is in control. This makes the witch hunts infrequent but still makes it
likely that unusual causes of variation will be detected.
In the above chart, there are three divided sections. The first section is termed "out of statistical control"
for several reasons. Notice the inconsistent plot points, and that one point is outside of the control
limits. This means that a source of special cause variation is present, it needs to be analyzed and
resolved. Having a point outside the control limits is usually the most easily detectable condition. There
is almost always an associated cause that can be easily traced to some malfunction in the process.
In the second section, even though the process is now in control, it is not really a smooth-flowing process. All the points lie within the control limits and thus exhibit only common cause variation.
In the third section, you will notice that the trending is more predictable and smoother flowing. It is in
this section that there is evidence of process improvement and the variation has been reduced.
Therefore, to summarize, eliminating special cause variation keeps the process in control; process
improvement reduces the process variation, and moves the control limits in toward the centerline of the
process. At the beginning of this process run, it was in need of adjustment as the product output was
sporadic. An adjustment was made, and while the plotted points were now within the boundaries, it is
still not centered around the process specification. Finally, the process was tweaked a little more and in
the third section, the process seems to center around the CL.
There are a few more terms listed below that you need to become familiar with when analyzing a Xbar
Chart and the process:
x RUN - When several plotted points line up consecutively on one side of a Central Line (CL),
whether it is located above or below the CL, it is called a "run". If there are 7 points in a row on
one side of the CL, there is an abnormality in the process and it requires an adjustment.
x TREND - If there is a continued rise or fall in a series of points (like an upward or downward slant),
it is considered a "trend" and usually indicates a process is drifting out of control. This usually
requires a machine adjustment.
x PERIODICITY - If the plotted points show the same pattern of change over equal intervals, it is
called "periodicity". It looks much like a uniform roller coaster of the same size ups and downs
around the centerline. This process should be watched closely as something is causing a defined
uniform drift to both sides of the centerline.
x HUGGING - When the points on the control chart seem to stick close to the center line or to a
control limit line, it is called "hugging of the control line". This usually indicates that a different type
of data, or data from different factors (or lines) have been mixed into the sub groupings. To
determine if you are experiencing "hugging" of the control line, perform the following exercise.
Draw a line equal distance between the centerline and the upper control limit. Then draw another
line equal distance between the center line and the lower control limit. If the points remain inside
of these new lines, there is an abnormality, and the process needs closer analysis.
Steps (a short computational sketch of this method follows the worked answer below):
a) List the data in its time series order.
b) Calculate the average. This becomes the center line of the control chart.
c) Calculate the absolute value differences (ranges) between each set of points.
There will be one less range than there are number of data points.
d) Determine the median range. List the ranges from highest to lowest and find
the middle of the list.
e) Multiply the median range by 3.14. This determines the distance of the control limits from the center line.
f) Calculate the control limits: add the result of step (e) to the average from step (b) to get the Upper Control Limit (UCL); subtract the step (e) result from the step (b) average to get the Lower Control Limit (LCL).
g) Plot the data in time series order and draw a solid center line at X, the
average.
h) Draw dashed lines to indicate the control limits.
Answer:
1. The data have been ordered and the ranges calculated, as given in the table to the left.
2. Calculate the average, X, which will be the center line on the control chart. X = 233 / 25 = 9.32
3. Determine the median range. This step requires ordering the ranges from smallest to largest
and finding the value(s) in the middle. Ordered ranges: 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 5, 5,
5, 6, 6, 7, 8, 8, 14, 16. R = 3
4. Multiply the range by 3.14. 3 x 3.14 = 9.42
5. Calculate the upper control limit (UCL) by adding the distance obtained in step 4 to the average calculated in step 2. UCL = 9.32 + 9.42 = 18.74
6. Calculate the lower control limit (LCL) by subtracting the distance obtained in step 4 from the average calculated in step 2. LCL = 9.32 - 9.42 = -0.10. If the data collected cannot take on
values less than zero, the lower control limit is adjusted to this minimum value, as in this case.
LCL = 0
7. The XmR control chart can now be plotted.
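A short Python sketch of the median-range XmR method worked through above; the data series below is hypothetical, not the 25-point series of the example, so the numbers will differ.

import statistics

data = [9, 12, 8, 10, 11, 9, 13, 10, 8, 12]

ranges = [abs(b - a) for a, b in zip(data, data[1:])]   # moving ranges
median_range = statistics.median(ranges)                # median range
centre = sum(data) / len(data)                          # centre line (average)

distance = 3.14 * median_range                          # distance to the control limits
ucl = centre + distance
lcl = max(0.0, centre - distance)   # floor at zero if the data cannot be negative

print(centre, median_range, ucl, lcl)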
Analysis:
Given that the upper control limit is 18.74, any point larger than 18.74 would be an outlier and, therefore, signal a possible special cause of variation. Data point #10 has a value of 22 and should be
investigated to discover why the system was out of statistical control at that point.
EXAMPLE: 5.
Consider that we have a precision-made piece coming off an assembly line. We wish to see if the process resulting in the object's diameter (say, for example, the molding process) is in control. Let's say
you are charting waiting times in an urgent care center. The chart may indicate a system in control,
hovering very tightly and consistently around a mean wait of forty minutes. This is not acceptable! The
proper way to use a control chart in this instance is to verify that your system remains in control as you
make operational changes to reduce the waiting time. You may, for instance, hire two more people.
Let's say you do this, and waiting time clearly drops, but now your system is out of control. What do you
do then?
We are not going to pursue this analysis to a conclusion here. The point is that a manager must be
asking two questions- first, is my system in control? Second, is the level of performance acceptable or
better? You use the chart to record, monitor and evaluate performance as you change or maintain a system. Your
operational goal will be whatever standard management has set, but your process goal will always be a system in
control.
Note: The sample size is traditionally 5, as in the example above. Control charts can and do use sample sizes
other than 5, but the control limit factors change (here, .577 for the X chart). These factors, by the way, were
derived by statisticians and will not be explained further here.
a) Calculate the average of the five. This is one data point for the X chart, called "Xbar". We will
represent X Bar as X' (X prime)
b) Calculate the range (largest minus smallest) of the five. This is one data point for the R Chart,
called simply "R".
c) Repeat steps 1 and 2 twenty (20) times. You will have 20 "X bar" points and 20 "R" points.
d) Calculate the average of the 20 X bar points- yes, the average of the averages. This value is "X
Double Bar", we will show it as X" (X double prime) It is the centerline of the X Chart.
e) Calculate the average of the twenty R points. This is called "R Bar", R prime for us (R'). It is the
centerline of the R chart, and also is used in calculating the control limits for both the X chart
and the R chart.
f) Calculate the upper and lower control limits for the X Chart, using the following equations: UCL = X" + (0.577)R' and LCL = X" - (0.577)R'.
Plot the original X Bar and R points, twenty of each, on the respective X and R control charts. Identify
points which fall outside of the control limits. It is these points which are due to unanticipated or
unacceptable causes. These are the points that require management attention. In general, if no points
are outside of the control limits, the system is in control and should not be interfered with. Because the
results are in control, however, does not mean they are acceptable.
EXAMPLE: 6
A quality control inspector at the Cocoa Fizz soft drink company has taken twenty-five samples with
four observations each of the volume of bottles filled. The data and the computed means are shown in
the table. If the standard deviation of the bottling operation is 0.14 ounces, use this information to
develop control limits of three standard deviations of the bottling operation.
The solution is as follows:
a. The center line of the control data is the average of the samples.
A quality control inspector at Cocoa Fizz is using the data to develop control limits. If the average range for twenty-five samples is 0.29 ounces (computed as 7.17/25) and the average mean of the observations is 15.95 ounces, develop three-sigma control limits for the bottling operation.
We can see that mean and range charts are used to monitor different variables. The mean or X-bar chart measures the central tendency of the process, while the range chart measures its dispersion. Since both variables are important, it makes sense to monitor a process using both mean and range charts. It is possible to have a shift in the mean of the
product but not a change in the dispersion. For example, at the Cocoa Fizz bottling plant the machine
setting can shift so that the average bottle filled contains not 16.0 ounces, but 15.9 ounces of liquid.
The dispersion could be the same, and this shift would be detected by an x-bar chart but not by a range
chart. This is shown in part (a) of figure below. On the other hand, there could be a shift in the
dispersion of the product without a change in the mean. Cocoa Fizz may still be producing bottles with
an average fill of 16.0 ounces. However, the dispersion of the product may have increased, as shown in
part (b) of figure as below. This condition would be detected by a range chart but not by an x-bar chart.
Because a shift in either the mean or the range means that the process is out of control, it is important to
use both charts to monitor the process.
EXAMPLE: 7
A quality control inspector at Chileupeung Sdn. Bhd. monitors the production line of a pressing machine by measuring the length of a sim part, with the data given in the table below. Calculate the control limits and graph the process.
EXAMPLE: 8
Oon Pisan, an inspector at a manufacturer of circuit boards for personal computers, notes that various components are mounted on each board and the boards are eventually slipped into slots in a chassis. The boards' overall length is crucial to assure a proper fit, and this dimension has been targeted as an important item to be stabilized.
R = 0.569/25 = 0.02276
UCL (R) = (2.114) (0.02276) = 0.048115 and LCL (R) = (0) (0.02276) = 0.00000
R Zone Boundaries :
a. Between lower zones A and B = R -2 d3 ( R /d2) = R (1- 2d3/d2) = 0.02276 (1-2(0.864)/2.326) = 0.005851
b. Between lower zones B and C = R - d3 ( R /d2) = R (1- d3/d2) = 0.02276 (1-(0.864)/2.326) = 0.014306
a. Between upper zones A and B = R + 2 d3 ( R /d2) = R (1+ 2d3/d2) = 0.02276 (1+2(0.864)/2.326) = 0.039669
b. Between upper zones B and C = R + d3 ( R /d2) = R (1+ d3/d2) = 0.02276 (1+(0.864)/2.326) = 0.031214
X = 125.61/ 25 = 5.00244
X Zone Boundaries :
a. Between lower zones A and B = X - (2/3) A2 R = 5.015573 - ((2/3) (0.577) (0.02276)) = 5.006818
b. Between lower zones B and C = X - (1/3) A2 R = 5.015573 - ((1/3) (0.577) (0.02276)) = 5.011195
c. Between upper zones A and B = X + (2/3) A2 R = 5.015573 + ((2/3) (0.577) (0.02276)) = 5.024328
d. Between upper zones B and C = X + (1/3) A2 R = 5.015573 + ((1/3) (0.577) (0.02276)) = 5.019951
EXAMPLE: 9
Twelve additional samples of cure-time data from the molding process were collected from an actual production run. The data from these new samples are shown in Table 2. Save the raw data for this table and try to draw the control charts; compare the results with those given here.
The X-bar and the R charts are drawn with the new data, using the same control limits established before. They are shown below.
There are two techniques used to discard data: if either the X-bar or the R value of a subgroup is out of control and has an assignable cause, both are discarded; or, only the out-of-control value of a subgroup is discarded.
Formula:
X-bar-new = (sum of X-bar values - X-bar-d) / (g - g-d)
R-bar-new = (sum of R values - R-d) / (g - g-d)
where:
x X-bar-d = discarded subgroup averages
x g-d = number of discarded subgroups
x R-d = discarded subgroup ranges
x g = total number of subgroups
EXAMPLE: 10
Calculations for a new X-bar are based on discarding the X-bar values of 6.65 and 6.51 for subgroups 4 and 20, respectively. Calculations for a new R-bar are based on discarding the R value of 0.30 for subgroup 18. These new values of X-bar and R-bar are used to establish the standard values X0, R0, and sigma0. Thus X0 = X-bar-new, R0 = R-bar-new, and sigma0 = R0 / d2, where d2 is a factor from the Table for estimating sigma0 from R0. The standard or reference values can be considered to be the best estimate with the data available. As more data become available, better estimates or more confidence in the existing standard values are obtained.
Using the standard values, the central lines and the 3-sigma control limits for actual operations are obtained using the formulas:
UCL(X) = X0 + A * sigma0, LCL(X) = X0 - A * sigma0
UCL(R) = D2 * sigma0, LCL(R) = D1 * sigma0
where A, D1 and D2 are the factors from the Table for obtaining the 3-sigma control limits from X0 and sigma0.
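A small Python sketch of this standard-value procedure: discard the flagged subgroups, recompute the averages, set X0, R0 and sigma0 = R0/d2, and derive the operating limits with the A, D1 and D2 factors. The subgroup statistics below are hypothetical; the factors are the standard table values for a subgroup size of 4.

xbars = [6.40, 6.42, 6.38, 6.65, 6.41, 6.39, 6.51]   # hypothetical subgroup averages
ranges = [0.10, 0.12, 0.08, 0.11, 0.30, 0.09, 0.10]  # hypothetical subgroup ranges
discard_x = {3, 6}     # indices of X-bar values discarded (assignable causes)
discard_r = {4}        # indices of R values discarded (assignable causes)

x0 = sum(x for i, x in enumerate(xbars) if i not in discard_x) / (len(xbars) - len(discard_x))
r0 = sum(r for i, r in enumerate(ranges) if i not in discard_r) / (len(ranges) - len(discard_r))

d2, A, D1, D2 = 2.059, 1.500, 0.0, 4.698   # standard factors for n = 4
sigma0 = r0 / d2

ucl_x, lcl_x = x0 + A * sigma0, x0 - A * sigma0   # X chart operating limits
ucl_r, lcl_r = D2 * sigma0, D1 * sigma0           # R chart operating limits
print(x0, r0, sigma0, ucl_x, lcl_x, ucl_r, lcl_r)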
EXAMPLE: 11
From Table B, for a subgroup size of 4, the factors are A = 1.500, d2 = 2.059, D1 = 0, and D2 = 4.698. Calculations to determine X0 and sigma0 using the data previously given are:
EXERCISE: 1
A new machine that is used to fill bottles of shampoo with a specific weight of product has been in trial production. The table gives the data collected by the operator every 20 minutes:
EXERCISE: 2
The Get-Well Hospital has completed a quality improvement project on the time to admit a patient using
X and R Charts. They now wish to monitor the activity using median and range charts. Determine the
central line and control limits with the latest data in minutes as given below:
EXERCISE: 3
The manufacturing engineer takes samples from a trial production run of a pressing machine to find the performance of the mold that presses the ceramic powder for a ceramic IC ROM, with the data as shown below. Determine the central line and control limits of the pressing machine performance.
EXERCISE: 4
The data taken from a machine that produces washers are given below.
Determine the central line and control limits of the machine performance and draw the chart.
ATTRIBUTE CHARTS
Attribute Charts are a set of control charts specifically designed for Attributes data. Attribute charts
monitor the process location and variation over time in a single chart.
Attribute charts are fairly simple to interpret: merely look for out of control points. If there are any, then
the special causes must be eliminated. Brainstorm and conduct Designed Experiments to find those
process elements that contribute to sporadic changes in process location. To use the data you have,
turn Auto Drop ON, which will remove the statistical bias of the out of control points by dropping them
from the calculations of the average and control limits.
Remember that the variation within control limits is due to the inherent variation in sampling from the
process. (Think of Deming's Red Bead experiment: the proportion of red beads never changed in the
bucket, yet each sample had a varying count of red beads). The bottom line is: React first to special
cause variation. Once the process is in statistical control, then work to reduce variation and improve the
location of the process through fundamental changes to the system.
P-CHART Calculations
P-charts are used to measure the proportion that is defective in a sample. The computation of the center line as well as the upper and lower control limits is similar to the computation for the other kinds of control charts. The center line is computed as the average proportion defective in the population, p-bar. This is obtained by taking a number of samples of observations at random and computing the average value of p across all samples.
The p -chart is used when dealing with ratios, proportions or percentages of conforming or non
conforming parts in a given sample. A good example for a p -chart is the inspection of products on a
production line. They are either conforming or nonconforming. The probability distribution used in this
context is the Binomial distribution, with p representing the non-conforming proportion and q (which is equal to 1 - p) representing the proportion of conforming items. Since the products are only inspected once, the experiments are independent from one another.
The first step when creating a p-chart is to calculate the proportion of nonconformity for each sample:
p = m / b
where m represents the number of nonconforming items, b is the number of items in the sample, and p is the proportion of nonconformity.
Plotted statistic: the percent (proportion) of items in the sample meeting the criteria of interest,
p_j = (number of nonconforming units in group j) / n_j
where n_j is the sample size (number of units) of group j. The mean proportion is p-bar, k is the number of samples audited, and p_k is the kth proportion obtained.
1. Center Line
p-bar = (n_1 p_1 + n_2 p_2 + ... + n_m p_m) / (n_1 + n_2 + ... + n_m)
where n_j is the sample size (number of units) of group j, and m is the number of groups included in the analysis.
2. Control Limits
UCL = p-bar + 3 * sqrt( p-bar (1 - p-bar) / n_j ), LCL = p-bar - 3 * sqrt( p-bar (1 - p-bar) / n_j )
where n_j is the sample size (number of units) of group j, and p-bar is the average percent, which represents the center line.
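A minimal Python sketch of the p-chart calculations above: the center line is the pooled proportion, and each sample gets its own three-sigma limits when the sample sizes vary; the counts below are hypothetical.

import math

sample_sizes = [450, 500, 480, 520]       # units inspected per sample
nonconforming = [5, 9, 4, 7]              # nonconforming units per sample

p_bar = sum(nonconforming) / sum(sample_sizes)     # center line (pooled proportion)

for n, x in zip(sample_sizes, nonconforming):
    p = x / n                                      # plotted statistic for this sample
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + half_width
    lcl = max(0.0, p_bar - half_width)             # floor at zero
    print(f"p={p:.4f}  UCL={ucl:.4f}  LCL={lcl:.4f}")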
EXAMPLE : 1
During the first shift, 450 inspections are made of book-of-the-month shipments and 5 nonconforming units are found. Production during the shift was 15,000 units. What is the fraction nonconforming?
p = 5 / 450 = 0.011
EXAMPLE : 2
Belangbetong Pte. Ltd. has been printing milk cans for a number of days. They use p charts to keep track of the number of nonconforming cans that are created each time a batch of cans is run. The data are as below:
EXAMPLE : 3
Inspection results for washer part nonconformance taken from production in June are as follows:
EXAMPLE : 4
Below are the inspection data taken by QA Inspection on the Hair Dryer Blower Motor production line:
Out of control
From the figure above, subgroup 19 is above the upper control limit. If we discard subgroup 19, then the calculation for the new chart is as follows:
p-new = (sum of np - np_d) / (sum of n - n_d)
where np_d = number nonconforming in the discarded subgroups and n_d = number inspected in the discarded subgroups.
po = p-new
So,
=0
EXAMPLE : 5
Below are the lightbulb nonconforming inspection data taken from the production line of PT. Terang Gemilang:
EXAMPLE : 6
Consider the case of a small manufacturer of low-tension electrical insulators. Each day during a one-month period the manufacturer inspects the production of a given shift; the number inspected varies somewhat. Based on carefully laid out operational definitions, some of the production is deemed nonconforming and is downgraded.
The process indicates many instances of a lack of control. Fully 25% of the subgroup proportions are out of control, and the data seem to behave in an extremely erratic pattern. Days 19, 18, 13, 9, and 7 are all beyond the control limits. Day 5 also indicates a lack of control because it is the second of three consecutive points falling in zone C or beyond on the same side of the centerline.
[Control chart with the zone boundaries marked (upper and lower zones A and B, B and C)]
EXAMPLE : 8
Preliminary data of computer modem final test and control limits for each group
Since all these out-of-control points have assignable causes, they are discarded. A new p̄ is obtained as follows:
Since this value represents the best estimate of the standard or reference value of the fraction nonconforming, p_o = 0.019.
The fraction nonconforming, p_o, is used to calculate the upper and lower control limits for the next period, which is subgroup number 26 and so on. However, the limits cannot be calculated until the end of each period, when the subgroup size, n, is known. This means that the control limits are never known ahead of time.
p_26 = np / n = 31 / 1535 = 0.020

UCL_26 = p_o + 3 √( p_o (1 − p_o) / n_26 )
       = 0.019 + 3 √( 0.019 (1 − 0.019) / 1535 )
       = 0.029

LCL_26 = p_o − 3 √( p_o (1 − p_o) / n_26 )
       = 0.019 − 3 √( 0.019 (1 − 0.019) / 1535 )
       = 0.009
NOTE:
p    is the proportion (fraction) nonconforming in a single subgroup. It is plotted on the chart but is not used to calculate the control limits.
p̄    is the average proportion (fraction) nonconforming of many subgroups. It is the sum of the number nonconforming divided by the sum of the number inspected and is used to calculate the trial control limits.
p_o  is the standard or reference value of the proportion (fraction) nonconforming based on the best estimate of p̄. It is used to calculate the revised control limits. It can be specified as a desired value.
φ    is the population proportion (fraction) nonconforming. When this value is known, it can be used to calculate the limits, since p_o = φ.
EXERCISE : 1
Number of defectives in 30 subgroups of size 50, as below:
EXERCISE : 2
Steel pails are manufactured at a high rate. Periodic samples of 50 pails are selected from the process. Results of that sampling are:
EXERCISE : 3
Determine the trial central line and control limits of a p chart
using the following data, which are for the payment of dental
insurance claims. Plot the values on graph paper and determine
if the process is stable. If there are any out-of-control points,
assume an assignable cause and determine the revised central
line and control limits.
EXERCISE : 4
Np-CHART Calculations
The np chart is one of the easiest to build. While the p -chart tracks the proportion of non-conformities
per sample, the np chart plots the number of non-conforming items per sample.
The audit process of the samples follows a binomial distribution; in other words, each outcome is either "good" or "bad", and therefore the mean number of nonconforming items per sample is np.
The control limits for an np chart are as follows:
1. Center Line
CL = n p̄
where n is the sample size, np̄ is the average count, and p̄ is calculated as follows:
p̄ = (total number nonconforming) / (total number inspected)
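The three-sigma limits then follow the usual binomial form:
UCL = n p̄ + 3 √( n p̄ (1 − p̄) )
LCL = n p̄ − 3 √( n p̄ (1 − p̄) )   (set to 0 if the result is negative)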
EXAMPLE : 1
Bricks are manufactured for use in housing construction. The bricks are produced and then put on
pallets for shipment. Samples of fifty are taken each hour and visually inspected for cracks, breaking,
and other flaws that would result in construction problems. The data for one day of three-shift operation are shown below:
EXAMPLE : 2
Pcc INC. receives shipments of circuit boards from its suppliers by the truckload. They keep track of
This information helps them make decisions about which suppliers to use in the future. As shown in
table below:
p = 73/1000 = 0.073
EXAMPLE : 3
The centerline is the overall average number of non conforming (or conforming) items found in each
subgroup of the data. For the ceramic tile importer, there are a total of 183 cracked or broken tiles in the
30 subgroups examined.
a. Centerline: n p̄ = 100 ( 183 / 3000 ) = 6.100,  or  Centerline: n p̄ = 183 / 30 = 6.100
p̄ = 0.061
U CHART Calculations
One of the premises for a c-chart was that the sample sizes had to be the same. The sample sizes can vary when the u-chart is being used to monitor the quality of the production process, and the u-chart does not require any limit to the number of potential defects. Furthermore, for a p-chart or an np-chart the number of non-conformances cannot exceed the number of items in a sample, but for a u-chart it can, since what is being addressed is not the number of defective items but the number of defects in the sample.
The first step in creating a u-chart is to calculate the number of defects per unit for each sample:
u = c / n
where u represents the average number of defects per unit in the sample, c is the total number of defects and n is the sample size.
Once all the averages are determined, a distribution of the means is created and the next step will be to find the mean of that distribution, in other words, the grand mean.
The control limits are determined based on ū and the sample sizes n_j.
Plotted statistic: the average count of occurrences of a criterion of interest per unit in a sample of items.
1. Center Line
ū = ( Σ c_j ) / ( Σ n_j )
where n_j is the sample size (number of units) of group j, c_j is the number of defects found in group j, and m is the number of groups included in the analysis.
2. Control Limits
UCL_j = ū + 3 √( ū / n_j ),   LCL_j = ū − 3 √( ū / n_j )
where n_j is the sample size (number of units) of group j and u-bar (ū) is the average number of defects per unit.
EXAMPLE : 1
A certain grade of plastic is manufactured in rolls, with samples taken 5 times daily. Because of the nature of the process, the square footage of each sample varies from inspection lot to inspection lot.
Centerline: ū = average number of defects per 100 sq. ft = 120 / 47.90 = 2.51
Upper control limit: UCL(u) = ū + 3 √( ū / n_i ) = 2.51 + 3 √( 2.51 / n_i )
Lower control limit: LCL(u) = ū − 3 √( ū / n_i ) = 2.51 − 3 √( 2.51 / n_i )
[u-chart of the 30 inspection units showing the plotted values, the centerline, and the upper and lower control limits]
EXAMPLE : 2
The number of nonconformities in carpets is determined for 20 samples (areas measured in mm²), but the amount of carpet inspected for each sample varies. Results of the inspection are shown in the table below:
Centerline: ū = average number of defects per 100 sq. ft = 192 / 41 = 4.683
Upper control limit: UCL(u) = ū + 3 √( ū / n_i ) = 4.683 + 3 √( 4.683 / n_i )
Lower control limit: LCL(u) = ū − 3 √( ū / n_i ) = 4.683 − 3 √( 4.683 / n_i )
[u-chart of the nonconformities for the 20 samples showing the centerline and the upper and lower control limits]
EXERCISE :1
The number of typographical errors is counted over a certain number of pages for each sample. The data for 25 samples are shown below. The number of pages used for each sample is not fixed. Construct a control chart for the number of typographical errors per page. Revise the limits, assuming special causes for the points that are out of control.
EXERCISE :2
The number of imperfections in bond paper produced by a paper mill is observed over a period of several days. The table below shows the area inspected and the number of imperfections for 25 samples.
C CHART Calculations
The c -chart monitors the process variations due to the fluctuations of defects per item or group of
items. The c -chart is useful for the process engineer to know not just how many items are not
conforming but how many defects there are per item. Knowing how many defects there are on a given
part produced on a line might in some cases be as important as knowing how many parts are defective.
Here, non-conformance must be distinguished from defective items since there can be several non-
conformances on a single defective item.
The probability for a nonconformance to be found on an item, in this case follows a Poisson distribution.
If the sample size does not change and the defects on the items are fairly easy to count, the c -chart
becomes an effective tool to monitor the quality of the production process.
If c̄ is the average number of nonconformities per sample, the UCL and LCL will be given as follows for a k-sigma control chart:
Plotted statistic: the count of occurrences of a criterion of interest in a sample of items.
1. Center Line
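In standard notation, with m samples and c̄ the average count:
c̄ = ( Σ c_j ) / m
UCL = c̄ + k √c̄,   LCL = c̄ − k √c̄   (for the usual chart k = 3; a negative LCL is set to 0)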
EXAMPLE : 1
Samples of fabric from a textile mill, each 100 m², are selected, and the number of occurrences of foreign matter is recorded. Data for 25 samples are shown in the table below. Construct a c-chart for the number of nonconformities.
Centerline: c̄ = 189 / 25 = 7.560
EXAMPLE : 2
The number of paint blemishes on automobile bodies is observed for 30 samples. Each sample consists of 5 randomly selected automobiles of a certain make and style. Construct a chart for the number of paint blemishes. Assuming special causes for the out-of-control points, revise these limits accordingly.
Centerline: c̄ = 182 / 30 = 6.07
Upper control limit: UCL(c) = c̄ + 3 √c̄ = 6.07 + 3 √6.07 = 13.46
Lower control limit: LCL(c) = c̄ − 3 √c̄ = 6.07 − 3 √6.07 = −1.32 → 0
Revised centerline (samples no. 17 and 20 struck out): c̄ = 153 / 28 = 5.46
EXERCISE : 1
The number of scratch marks for a particular piece of furniture is recorded for samples of size 10. The results are shown in the table below.
1. Construct a chart for the number of scratch marks. Revise the control limits, assuming special causes for the out-of-control points.
2. Suppose that management sets a goal of 4 scratch marks on average per 10 pieces. Set up an appropriate control chart and determine whether the process is capable of meeting this standard.
Summary:
The normal probability distribution, or the normal curve, is given by a bell-shaped (symmetric) curve. Such a curve is shown in Figure 4-3. It has a mean of µ and a standard deviation of σ. A continuous random variable X that has a normal distribution is called a normal random variable. Note that not all bell-shaped curves represent a normal distribution curve. Only a specific kind of bell-shaped curve represents a normal curve.
1. The total area under a normal distribution curve is 1.0 or 100%, as shown in Figure 1
Figure 1 Total area under a normal curve. The shaded area is 1.0 or 100%
2. Consequently, 1/2 of the total area under a normal distribution curve lies on the left side of the mean and 1/2 lies on the right side of the mean.
3. The tails of a normal distribution curve extend indefinitely in both directions without touching or crossing the horizontal axis. Although a normal distribution curve never meets the horizontal axis, beyond the points represented by µ − 3σ and µ + 3σ it becomes so close to this axis that the area under the curve beyond these points in both directions can be taken as virtually zero. These areas are shown in Figure 3.
The mean, µ, and the standard deviation, σ, are the parameters of the normal distribution. Given the values of these two parameters, we can find the area under a normal distribution curve for any interval. Remember, there is not just one normal distribution curve but rather a family of normal distribution curves. Each different set of values of µ and σ gives a different normal distribution. The value of µ determines the center of a normal distribution on the horizontal axis and the value of σ gives the spread of the normal distribution curve. The two normal distribution curves drawn in Figure 4 have the same mean but different standard deviations. By contrast, the two normal distribution curves in Figure 5 have different means but the same standard deviation.
Figure 4 Two normal distribution curves with the same mean but different standard deviations.
Figure 5 Two normal distribution curves with the same standard deviation but different means.
Figure 6 displays the standard normal distribution curve. The random variable that possesses the
standard normal distribution is denoted by z. In other words, the units for the standard normal
distribution curve are denoted by z and are called the z values or z scores. They are also called
standard units or standard scores.
There are four areas on a standard normal curve that all introductory statistics students should
know. The first is that the total area below 0.0 is .50, as the standard normal curve is symmetrical
like all normal curves. This result generalizes to all normal curves in that the total area below the
value of µ is .50 on any member of the family of normal curves. See figure 7.
The second area that should be memorized is between z scores of -1.00 and +1.00. It is .68 or
68%. See Figure 7.
The third area is between z scores of -2.00 and +2.00 and is .95 or 95%. See figure 8.
The fourth area is between z scores of −3.00 and +3.00 and is .997 or 99.7%. See Figure 9, but ignore the typo for µ + 3σ = 99.97.
z Values or z Scores
The units marked on the horizontal axis of the standard normal curve are denoted by z and are
called the z values or z scores. A specific value of z gives the distance between the mean and the
point represented by z in terms of the standard deviation.
In Figure 8, the horizontal axis is labeled z. The z values on the right side of the mean are positive
and those on the left side are negative. The z values for a point on the horizontal axis gives the
distance between the mean and that point in terms of the standard deviation. For example, a point
with a value of z = 2 is two standard deviations to the right of the mean. Similarly, a point with a
value of z = -2 is two standard deviations to the left of the mean.
The standard normal distribution table, Area Under the Curve, lists the areas under the standard
normal curve between z = 0 and the values of z from 0.00 to 3.50. To read the standard normal
distribution table, we always start at z = 0, which represents the mean of the standard normal
distribution. We learned earlier that the total area under a normal distribution curve is 1.0. We also
learned that, because of symmetry, the area on either side of the mean is .5. This is also shown in
Figure 6.
Remember: Although the values of z on the left side of the mean are negative, the area under the
curve is always positive.
The area under the standard normal curve between any two points can he interpreted as the
probability that z assumes a value within that interval.
Example 1: Find the area under the standard normal curve between z = 0 and z = 1.95.
Solution: To find the required area under the standard normal curve, we locate 1.95 in the standard
normal distribution table, Area Under the Curve. The entry gives the area under the standard
normal curve for z = 1.95 as 0.9744. Next, we find the area under the standard curve for z = 0 as
0.50. We knew this without referring to the table since by definition 50% of the area under the curve lies on either side of the mean. Consequently, the area under the standard normal curve between z = 0 and z = 1.95 is 0.9744 − 0.50 = 0.4744. This area is shown in Figure 10. (It is
always helpful to sketch the curve and mark the area we are determining.)
Example 2 Find the area from z = -1.56 to z = 2.31 under the standardized curve
Solution: First find the area below the curve for a z value of 2.31, which from the table, Area Under
the Curve is 0.9896. Next, find the area below –1.56, which is 0.0594. Therefore, the area from
z = -1.56 to z = 2.31 is 0.9896 - 0.0594 = 0.9302. The area can also be found by finding the
distance from the mean of 0 for both z values. z = -1.56 is 0.4406 from 0, and z = 2.31 is 0.4896
from 0. See Figure 11. Either method produces the same result, so the method you choose is up to
you and the table you have to work from.
For a normal random variable X, a particular value of X can be converted to a z value by using the
formula:
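z = ( X − µ ) / σ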
Thus, to find the z value for an X value, we calculate the difference between the given value and the mean µ and divide this difference by the standard deviation σ. If the value X is equal to µ, then its z value is equal to zero. The z value for the mean of a normal distribution is always zero. Note that we will always round z values to two decimal places.
Example 3 Let X be a continuous random variable that has a normal distribution with a mean of 50
and a standard deviation of 10. Convert X = 55 to a z value.
Solution: For the given normal distribution, µ = 50 and σ = 10. The z value for X = 55 is computed as follows:
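z = ( X − µ ) / σ = ( 55 − 50 ) / 10 = 0.50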
Thus, the z value for X = 55 is .50. The z values for µ = 50 and X = 55 are shown in figure 4-14. Note that the z value for µ = 50 is zero. The value z = .50 for X = 55 indicates that the distance between the mean µ = 50 of the given normal distribution and the point given by X = 55 is 1/2 of the standard deviation σ = 10. Consequently, we can state that the z value represents the distance between µ and X in terms of the standard deviation. Because X = 55 is greater than µ = 50, its z value is positive.
Example 4: Let X be a continuous random variable that has a normal distribution with a mean of 50 and a standard deviation of 10. Convert X = 35 to a z value.
Solution: For the given normal distribution, µ = 50 and σ = 10. The z value for X = 35 is computed as follows:
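z = ( X − µ ) / σ = ( 35 − 50 ) / 10 = −1.50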
Because X = 35 is on the left side of the mean (i.e., 35 is less than µ = 50), its z value is negative.
As a general rule, whenever an X value is less than the value of µ, its z value is negative.
To find the area between two values of X for a normal distribution, we first convert both values of X
to their respective z values. Then we find the area under the standard normal curve between those
two z values. The area between the two z values gives the area between the corresponding X
values.
Example 5: Let X be a continuous random variable that is normally distributed with a mean of 25
and a standard deviation of 4. Find the area between X = 25 and X = 32.
The z value for X = 25 is zero because it is the mean of the normal distribution. The z value for X =
32 is
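z = ( 32 − 25 ) / 4 = 1.75. From the table, the area between z = 0 and z = 1.75 is approximately 0.4599, so the area between X = 25 and X = 32 is about 0.46.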
Figure 14: Area between X = 25 and X = 32
One such alternative, called T scores, is defined as a set of scores with a mean of 50 and a
standard deviation of 10. The T scores are obtained from the following formula:
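T = 10z + 50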
Each raw score is converted to a z score; each z score is multiplied by 10, and 50 is added to each resulting score. For example, using the data in Example 5, the raw score of 32 is converted to a z score of +1.75 by the usual formula. Then T is equal to (10)(+1.75) + 50, or 67.50.
Since the mean of T scores is 50, you can still tell at a glance whether a score is above average (it
will be greater than 50) or below average (it will be less than 50). Also, you can tell how many
standard deviations above or below average a score is. For example, a score of 40 is exactly one
standard deviation below average (equivalent to a z score of - 1.00) since the standard deviation of
T scores is 10. A negative T score is mathematically possible but virtually never occurs; it would
require that a person be over five standard deviations below average, and scores more than three
standard deviations above or below the mean almost never occur with real data.
Scores on some nationally administered examinations, such as the Scholastic Aptitude Test (SAT),
the College Entrance Examination Boards, and the Graduate Record Examination, are transformed
to a scale with a mean of 500 and a standard deviation of 100. These scores, which we will call SAT
scores for want of a better term, are obtained as follows:
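SAT = 100z + 500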
Figure 15 Relationship among area under the normal curve, standard deviation, percentiles, z, T, and SAT
scores.
The raw scores are first converted to z scores; each z score is multiplied by 100, and 500 is added
to each resulting score. The proof that this formula does yield a mean of 500 and a standard
deviation of 100 is similar to that involving T scores. (In fact, an SAT score is just ten times a T
score.) This explains the apparent mystery of how you can obtain a score of 642 on a test with only
several hundred items. And you may well be pleased if you obtain a score of 642, since it is 142
points or 1.42 standard deviations above the mean (and therefore corresponds to a z score of
+ 1.42 and a T score of 64.2).
PROCESS CAPABILITY
By convention, when a process has a Cp value less than 1.0, it is considered potentially incapable
of meeting specification requirements. Conversely, when a process Cp is greater than or equal to
1.0, the process has the potential of being capable.
Ideally, the Cp should be as high as possible. The higher the Cp, the lower the variability with
respect to the specification limits. In a process qualified as a Six Sigma process (i.e., one that
allows plus or minus six standard deviations within the specifications limits), the Cp is greater than
or equal to 2.0.
However, a high Cp value doesn't guarantee a production process falls within specification limits
because the Cp value doesn't imply that the actual spread coincides with the allowable spread (i.e.,
the specification limits). This is why the Cp is called the process potential.
The process capability index, or Cpk, measures a process's ability to create product within
specification limits. Cpk represents the difference between the actual process average and the
closest specification limit over the standard deviation, times three.
By convention, when the Cpk is less than one, the process is referred to as incapable. When the
Cpk is greater than or equal to one, the process is considered capable of producing a product within
specification limits. In a Six Sigma process, the Cpk equals 2.0.
The Cpk is inversely proportional to the standard deviation, or variability, of a process. The higher
the Cpk, the narrower the process distribution as compared with the specification limits, and the
more uniform the product. As the standard deviation increases, the Cpk index decreases. At the
same time, the potential to create product outside the specification limits increases.
Cpk can only have positive values. It will equal zero when the actual process average matches or
falls outside one of the specification limits. The Cpk index can never be greater than the Cp, only
equal to it. This happens when the actual process average falls in the middle of the specification
limits.
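As a minimal sketch of how these indices are computed from a sample of measurements, the Python fragment below uses hypothetical data and specification limits; for simplicity it uses the overall sample standard deviation rather than a within-subgroup estimate.

# Cp / Cpk sketch from a single sample of measurements (hypothetical data and limits)
import statistics

data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
usl, lsl = 10.6, 9.4          # hypothetical specification limits

mean = statistics.mean(data)
s = statistics.stdev(data)    # overall sample standard deviation

cp  = (usl - lsl) / (6 * s)   # process potential
cpu = (usl - mean) / (3 * s)
cpl = (mean - lsl) / (3 * s)
cpk = min(cpu, cpl)           # process capability index
print(f"Cp = {cp:.2f}   Cpk = {cpk:.2f}")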
For Pp and Ppk calculations, the standard deviation used in the denominator is based on all of the data evaluated as one sample, without regard to any subgrouping. This is sometimes referred to as the overall standard deviation, σ_total.
For Cp and Cpk calculations, the standard deviation is based on subgroups of the data, using subgroup ranges, standard deviations or moving ranges. This “within-subgroup” process variation
can be considerably smaller than the overall standard deviation estimate, especially when there are
long-term trends in the data.
When there are slow fluctuations or trends in the data, the estimate of the process variability based
on the subgroups can be smaller than the estimate using all of the process data as one sample.
This often occurs when the differences among observations within the subgroup are small, but the
range of the entire dataset is significantly larger. Since the within-subgroup variation measures
tend to ignore the range of the entire group, they can underestimate the overall process variation.
All of the observations and their variability as a group are what is important when characterizing the
capability of a process to stay within the specification limits over time. Underestimating the
variability will increase the process capability estimate represented by Cp or Cpk. However, these
estimates may not be truly representative of the process.
The following box plot shows data where the within group variability is small, but there are both
upward and downward trends in the data. There are a significant number of observations beyond
the specification limits.
When the Process Capability procedure in Statit is performed based on this data, there are
significant differences between the estimates of Pp and Cp (and, analogously, Ppk and Cpk).
For example, the calculated Cpk, which uses the within-subgroup estimate of the process variability, is 1.077. This would typically be considered to represent a marginally capable process, one with only about 0.12% of the output beyond the specifications (roughly 12 out of every 10,000 parts). However, the calculated Ppk value, which uses the variability estimate of the total sample, is only 0.672. This would indicate a process that is not capable and probably produces a high percentage of output beyond the specifications. Note that the actual amount of production beyond the specifications is 5%, or roughly 1 out of every 20 parts.
Which of these values is correct? Both are calculated correctly according to their equations, but here the Ppk value is probably the most representative of the ability of the process to produce parts within the specifications.
Note: One way to determine that the variability estimate is not truly representative of the process is
to compare the Estimated and Actual values for the Product beyond Specifications in the Statistical
output. If the estimated percentage of samples beyond specification is significantly different than
the actual percentage reported, then more investigation and analysis of the data would be
warranted to achieve the best Process Capability estimates possible based on the data.
Preamble :
This article is devoted to the topic of process capability, with the objective of making people aware of this subject
and its significance to business success.
The author believes that personal awareness is a prerequisite to personal action, and personal action is what we
need for success.
• It can be a source material for you to use in discussing this topic with your organization.
• It will address issues like what is process capability, how to measure it, and how to calculate the process
capability indices (Cp, Cpk).
• It will also attempt to explain the differences between process capability and process performance;
relationship between Cpk and non-conforming (defect) rate; and illustrate the four outcomes of comparing
natural process variability with customer specifications.
• Lastly a commentary is provided on precautions we should take while conducting process capability
studies.
1. Process capability is the long-term performance level of the process after it has been brought under
statistical control. In other words, process capability is the range over which the natural variation of the
process occurs as determined by the system of common causes.
2. Process capability is also the ability of the combination of people, machine, methods, material, and
measurements to produce a product that will consistently meet the design requirements or customer
expectation.
Process capability study is a scientific and a systematic procedure that uses control charts to detect and eliminate
the unnatural causes of variation until a state of statistical control is reached. When the study is completed, you
will identify the natural variability of the process.
1. To set realistic cost effective part specifications based upon the customer's needs and the costs
associated by the supplier at meeting those needs.
2. To understand hidden supplier costs. Suppliers may not know or hide their natural capability limits in an
effort to keep business. This could mean that unnecessary costs could occur such as sorting to actually
meet customer needs.
3. To be pro-active. For example, a Cpk estimation made using injection molding pressure measurements during a molding cycle may help reveal a faulty piston pressure valve that is about to malfunction before the actual molded part measurements go out of specification, thus saving time and money.
Cp, Cpl, Cpu, and Cpk are the four most common and time-tested measures of process capability.
• Process capability indices measure the degree to which your process produces output that meets the
customer's specification.
• Process capability indices can be used effectively to summarize process capability information in a
convenient unitless system.
• Cp and Cpk are quantitative expressions that personify the variability of your process (its natural limits)
relative to its specification limits (customer requirements).
Following are the graphical details and equations quantifying process capability:
Where:
USL = Upper Specification Limit
LSL = Lower Specification Limit
s = Standard Deviation of the Process
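In standard notation, the indices referenced above are:
Cp = ( USL − LSL ) / ( 6s )
Cpu = ( USL − X̄ ) / ( 3s )
Cpl = ( X̄ − LSL ) / ( 3s )
Cpk = min ( Cpu, Cpl )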
In 1991, ASQ / AIAG task force published the "Statistical Process Control" reference manual, which presented the
calculations for capability indices ( Cp, Cpk ) as well as process performance indices ( Pp, Ppk ).
The difference between the two indices is the way the process standard deviation ( s ) is calculated.
Ppk uses the calculated standard deviation from individual data where s is calculated by the formula :
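In the usual notation, for n individual readings x_i with mean x̄:
s = √( Σ ( x_i − x̄ )² / ( n − 1 ) )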
• Ppk attempts to answer the question "does my current production sample meet specification ?" Process
performance indices should only be used when statistical control cannot be evaluated.
• On the other hand, Cpk attempts to answer the question "does my process in the long run meet
specification?" Process capability evaluation can only be done after the process is brought into statistical
control. The reason is simple: Cpk is a prediction, and one can only predict something that is stable.
The readers should note that Ppk and Cpk indices would likely be similar when the process is in a state of
statistical control.
Notes :
1. As a rule of thumb, a minimum of 50 randomly selected samples must be chosen for process performance studies and a minimum of 20 subgroups (of sample size preferably at least 4 or 5) must be chosen for process capability studies.
2. Cpk for all critical product measurements considered important by the customer should be calculated at
the beginning of initial production to determine the general ability of the process to meet customer
specifications. Then from time to time, over the life of the product, Cpks must be generated. A control
chart must always be maintained to check statistical stability of the process before capability is computed.
Using process capability indices, it is easy to forget how much of the product is falling beyond specification. The
conversion curve presented here can be a useful tool for interpreting Cpk with its corresponding defect levels. The
defect levels or parts per million non-conforming were computed for different Cpk values using the Z scores and
the percentage area under the standard normal curve using normal deviate tables.
The table below presents the non-conforming parts per million ( ppm ) for a process corresponding to Cpk values
if the process mean were at target.
Cpk Value Sigma Value Area under Normal Curve Non Conforming ppm
0.1 0.3 0.235822715 764177.2851
0.2 0.6 0.451493870 548506.1299
0.3 0.9 0.631879817 368120.1835
0.4 1.2 0.769860537 230139.4634
0.5 1.5 0.866385542 133614.4576
0.6 1.8 0.928139469 71860.531
0.7 2.1 0.964271285 35728.7148
0.8 2.4 0.983604942 16395.0577
0.9 2.7 0.993065954 6934.0461
1.0 3.0 0.997300066 2699.9344
1.1 3.3 0.999033035 966.9651
1.2 3.6 0.999681709 318.2914
1.3 3.9 0.999903769 96.231
1.333 3.999 0.999936360 63.6403
1.4 4.2 0.999973292 26.7082
1.5 4.5 0.999993198 6.8016
1.6 4.8 0.999998411 1.5887
1.666 4.998 0.999999420 0.5802
1.7 5.1 0.999999660 0.3402
1.8 5.4 0.999999933 0.0668
1.9 5.7 0.999999988 0.012
2.0 6.0 0.999999998 0.002
The Cpk conversion curve for process with mean at target is shown next.
Explanation :
A process with Cpk of 2.0 ( +/- 6 sigma capability), i.e., the process mean is 6 sigma away from the nearest
specification can be expected to have no more than 0.002 nonconforming parts per million.
This process is so good that even if the process mean shifts by as much as +/- 1.5 sigma the process will produce
no more than 3.4 non-conforming parts per million.
The next section provides the reader with some practical clarifications on Process Capability (Voice of the
process ) and Specification ( Expectations of the customer ).
As seen from the earlier discussions, there are three components of process capability:
1. Design specification or customer expectation ( Upper Specification Limit, Lower Specification Limit )
2. The centering of the natural process variation ( X-Bar )
3. Spread of the process variation ( s )
A minimum of four possible outcomes can arise when the natural process variability is compared with the design
specifications or customer expectations:
Case 1: Cpk > 1.33 ( A Highly Capable Process )
A Highly Capable Process : Voice of the Process < Specification ( or Customer Expectations ).
This process will produce conforming products as long as it remains in statistical control. The process owner can
claim that the customer should experience least difficulty and greater reliability with this product. This should
translate into higher profits.
Note : Cpk values of 1.33 or greater are considered to be industry benchmarks. This means that the process is
contained within four standard deviations of the process specifications.
This process will produce greater than 64 ppm but less than 2700 non-conforming ppm.
A Barely Capable Process : Voice of the Process = Customer Expectations.
This process has a spread just about equal to specification width. It should be noted that if the process mean
moves to the left or the right, a significant portion of product will start falling outside one of the specification limits.
This process must be closely monitored.
Note : This process is contained within three to four standard deviations of the process specifications.
This process will also produce more than 2700 non-conforming ppm.
The variability ( s ) and specification width are assumed to be the same as in case 2, but the process average is off-
center. In such cases, adjustment is required to move the process mean back to target. If no action is taken, a
substantial portion of the output will fall outside the specification limit even though the process might be in
statistical control.
Capability indices described here strive to represent with a single number the capability of a process. Much has
been written in the literature about the pitfalls of these estimates.
Following are some of the precautions the readers should exercise while calculating and interpreting process
capability:
1. The indices for process capability discussed are based on the assumption that the underlying process distribution is approximately bell shaped or normal. Yet in some situations the underlying process distribution may not be normal. For example, flatness, pull strength, waiting time, etc., might naturally follow a skewed distribution. For these cases, calculating Cpk the usual way might be misleading. Many researchers have contributed to this problem. Readers are requested to refer to John Clements' article titled "Process Capability Calculations for Non-Normal Distributions" for details.
2. The process / parameter in question must be in statistical control. It is this author's experience that there is a tendency to want to know the capability of the process before statistical control is established. The presence of special causes of variation makes the prediction of process capability difficult and the meaning of Cpk unclear.
3. The data chosen for process capability study should attempt to encompass all natural variations.
For example, one supplier might report a very good process capability value using only ten samples
produced on one day, while another supplier of the same commodity might report a somewhat lesser
process capability number using data from longer period of time that more closely represent the process.
If one were to compare these process index numbers when choosing a supplier, the best supplier might
not be chosen.
4. The number of samples used has a significant influence on the accuracy of the Cpk estimate.
For example, for a random sample of size n = 100 drawn from a known normal population with Cpk = 1, the Cpk estimate can vary from 0.85 to 1.15 (with 95% confidence). Smaller samples will therefore result in even larger variation of the Cpk statistic. In other words, the practitioner must take into consideration the sampling variation's influence on the computed Cpk number. Please refer to Bissell and to Chou, Owen, and Borrego for more on this subject.
Concluding Thoughts :
In the real world, very few processes completely satisfy all the conditions and assumptions required for estimating
Cpk. Also, statistical debates in research communities are still raging on the strengths and weaknesses of various
capability and performance indices. Many new complicated capability indices have also been invented and cited
in literature. However, the key to effectual use of process capability measures continues to be the level of user
understanding of what these measures really represent. Finally, in order to achieve continuous improvement, one
must always attempt to refine the "Voice of the Process" to match and then to surpass the "Expectations of the
Customer".
References :
1. Victor Kane, "Process Capability Indices", Journal of Quality Technology, Jan 1986.
2. ASQ / AIAG, "Statistical Process Control", Reference Manual, 1995.
3. John Clements, "Process Capability Calculations for Non-Normal Distributions", Quality Progress, Sept
1989.
4. Forrest Breyfogle, "Measurement of Process Capability", Smarter Solutions, 1996.
5. Bissell, "How Reliable is Your Capability Index", Royal Statistical Society, 1990.
6. Chou, Owen, and Borrego, "Lower Confidence Limits of Process Capability Indices", Journal of Quality
Technology, Vol 22, No. 3, July 1990.
Consider a company that produces a hundred thousand tires a day. If the company is open 16 hours a day (two shifts) and it takes an employee 10 minutes to test a tire, it would need at least 2,084 employees in the quality control department to test every single tire that comes out of production (100,000 tires × 10 minutes ÷ 480 minutes per 8-hour shift ≈ 2,084), plus a tremendous amount of space for the QA department and the inventory.
For a normally distributed production output, taking a sample of the output and testing it can help determine the quality level of the whole production. Sampling consists of testing a subset of the population in order to derive a conclusion about the whole population.
The sample statistics may not always be exactly the same as their corresponding population
parameters. The difference is known as the sampling error.
Acceptance sampling is “the middle of the road” approach between no inspection and 100% inspection.
There are two major classifications of acceptance plans: by attributes (“go, no-go”) and by variables.
Acceptance sampling plans always lie somewhere between no inspection and 100% inspection. Sometimes there is no choice. If you must get product out the door and all you have in stock is rejected or defective material, then you must sort through it and try to find the good ones. (I would also look for another supplier or two.) This is very costly. (I would be trying to get the supplier to pick up your cost of doing their job.) But what choice is there? You MUST meet the customer's requirements.
But first a little more on 100% inspection. I am a firm believer that YOU CAN NOT INSPECT QUALITY INTO A PRODUCT. 100% inspection has an effectiveness of 40-65%. And that doesn't count the 5-10% breakage. It is a waste of people power (formerly known as man-power). I remember reading, I think it was in one of Juran's books, that 100% inspection is only 60% effective. But I have found it to be a little less than that.
A point to remember is that the main purpose of acceptance sampling is to decide whether or not the lot
is likely to be acceptable, not to estimate the quality of the lot.
It was pointed out by Harold Dodge in 1969 that Acceptance Quality Control is not the same as
Acceptance Sampling. The latter depends on specific sampling plans, which when implemented
indicate the conditions for acceptance or rejection of the immediate lot that is being inspected. The
former may be implemented in the form of an Acceptance Control Chart. The control limits for the
Acceptance Control Chart are computed using the specification limits and the standard deviation of
what is being monitored .
According to the ISO standard on acceptance control charts (ISO 7966, 1993), an acceptance control
chart combines consideration of control implications with elements of acceptance sampling. It is an
appropriate tool for helping to make decisions with respect to process acceptance. The difference
between acceptance sampling approaches and acceptance control charts is the emphasis on process
acceptability rather than on product disposition decisions.
radios for years and never found the problem. We were looking in the wrong place. So if you find yourself needing to 100% inspect, at
least collect data and do the control charts, even just X Bar, Range and a Distribution charts can tell you a lot. You never know what it
may reveal until you do.
Shortly after that I was promoted to Component Engineering where I first heard of Six Sigma. I wrote software that allowed Motorola
and their suppliers to use a standard SPC package to aid in starting their Six Sigma program. This allowed suppliers to submit sample
data and control charts with every shipment coming into VQA. Without this software, it would have been a mess trying to train the
inspectors what to look for with the different SPC charts and without training the inspectors, the Quality Engineers would have to look at
each shipment and that would have just made them overpriced inspectors.
The concepts of Six Sigma are much larger than just measurements of course. It’s also about getting a Return On Net Investments,
(RONI), and Return On Net Assets, (RONA).
There are two major classifications of acceptance plans, (or AQL, Acceptable Quality Level), by
attributes or discrete data (“go, no-go”) and by variable data, (real numbers). The attribute case is the
most common for an acceptance sampling.
Sample
A subset of the population used instead of the entire population, for improved accuracy and reduced cost. Sampling is more accurate in statistical process control for several reasons, including that it is less susceptible to 'inspection fatigue' in a process, so there is a lesser chance for errors. Products that are 100% inspected are sometimes found to have an astounding number of defects.
a) Systematic sampling
Methodology for sampling in which units are selected from the population at a regular interval (e.g., once an
hour, every other lot, etc.). This is used for many SPC, (statistical process control), control charts like X Bar
and R Bar charts. One reason for this type of sampling is to see if the process has any variations due to
different times of the day, shift workers changing, temperature changes during day vs. night times, etc.
b) Stratified sampling
The act of dividing a larger population into subgroups, using systematic sampling, then taking a random sample from each subgroup. Random sampling can frequently minimize the sampling error in the population. This in turn increases the precision of any estimation methods used. You MUST be sure to select relevant variables for the grouping process. The information, or data, must be as accurate as possible. Results of this method of sampling for any given population can vary significantly depending on the grouping and how uniform the grouping is.
c) Sampling bias
When data are influenced in one way or another so that the data no longer represent the entire population.
d) Random sampling
Technique that ensures each item in a sample for inspection is selected completely by chance to be measured. You should never do 100% inspection in any Six Sigma quality plan.
By using SPC and other control charts with a control plan, you can measure fewer production piece parts (typically using a systematic sampling plan in unison with stratified sampling, ensuring that sampling bias does not become an issue), thus saving time and labor, and in theory still have the same confidence level in the quality of your process. Famous errors and lessons learned in random sampling can be attributed to Literary Digest magazine in 1936. They took a random sample from the telephone listings for a pre-election poll. The 10 million samples taken during the Depression years were not accurate because in 1936 the people who could afford a phone and a magazine subscription did not constitute a random sample. The company was soon out of business.
If a produced unit can have a number of different defects, then demerits can be assigned to each type of
defect and product quality measured in terms of demerits. As an AQL is an acceptable level, the probability of
acceptance for an AQL lot should be high. (Typical values between 92.74% to 99.999996% for six sigma, see
Cpk compares to PPM for value reasons)
Some sources characterize an acceptable quality level as the highest percent defective that should be
considered reasonable as the process average. Usually monitored using SPC, (Statistical Process Control), at
the production levels by Quality inspection.
Standard military sampling procedures, (MIL-STD), have been used for over 50 years to achieve these goals.
The MIL-STD defines AQL as…
“the maximum percent defective (or the maximum number of defects per hundred units) that, for purposes of
sampling inspection, can be considered satisfactory as a process average.”
Suppose a population of 10 bolts has diameter measures of 9, 11, 12, 12, 14, 10, 9, 8, 7, 9. The mean for that population would be 10.1. If a sample of the following three measures, 9, 14, 10, is taken from the population, the mean of the sample would be (9 + 14 + 10)/3 = 11 and the sampling error would be 11 − 10.1 = 0.9.
Let's take another sample of three measures: 7, 12 and 11. This time the mean will be 10 and the sampling error will be 10 − 10.1 = −0.1.
If another sample is taken and estimated, its sampling error might be different. These differences are said to be due to chance. So if it is possible to make mistakes while estimating the population's parameters from a sample, how can we be sure that sampling can help get a good estimate? Why use sampling as a means of estimating the population parameters?
where µ_x̄ = µ is the mean of the sample means and σ_x̄ = σ/√n is the standard deviation of the sample means.
The implication of this theorem is that for sufficiently large samples, the normal distribution can be used to analyze samples drawn from populations that are not normally distributed or whose shapes are unknown.
When means are used as estimators to make inferences about a population's parameters µ and σ, the estimator will be approximately normally distributed in repeated sampling.
In that example, we had 10 bolts, if all possible samples of 3 were computed, there would have been
120 samples and means.
The mean and standard deviation of that sampling distribution are given as:
Example 1:
Gajaga-electronics is a company that manufactures circuit boards; the average imperfection on a board is with a standard deviation when the production process is under control.
A random sample of circuit boards has been taken for inspection and a mean of defects
per board was found. What is the probability of getting a value of if the process is under control?
Solution:
Since the sample size is greater than 30, the central limit theorem can be used in this case even though
the number of defects per board in this case follows a Poisson distribution. Therefore, the distribution of
the sample mean is approximately normal with the standard deviation
The previous example is valid for an extremely large population. Sampling from a finite population
will require some adjustment called the finite correction factor:
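In the usual form, with population size N and sample size n, the standard error of the mean becomes:
σ_x̄ = ( σ / √n ) · √( ( N − n ) / ( N − 1 ) )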
Example 2:
A city's 450 restaurant employees average $35 tips a day with a standard deviation of 9. a sample of 50
employees is taken, what is the probability that the sample will have an average of less than $37 tips a
day.
Solution:
On the Z-score table, 1.77 corresponds to .4616 therefore, the probability of getting an average daily tip
of less than $37 will be .4616 + .5= .9616.
If the Finite correction factor was not taken into account, z would have been 1.57 which corresponds to
.4418 on the z score table and therefore the probability of having a daily tip of less than $37 would have
been .9418.
The sample proportion applies to situations that would have required a binomial distribution, where p is the probability of a success and q the probability of a failure, with q = 1 − p.
When a random sample of n trials is selected from a binomial population (an experiment with n identical trials, each trial having only two possible outcomes, considered as success or failure) with parameter p, the sampling distribution of the sample proportion will be p̂ = x / n, where x is the number of successes.
Example 3:
In a sample of 100 workers, 25 might be coming in late once a week. In that example, the sample proportion of latecomers will be p̂ = 25/100 = 0.25.
Where:
p̂ = sample proportion
p = population proportion
n = sample size
q = 1 − p
Example 4:
40% of the parts that come off a production line are defective. What is the probability of taking a random sample of size 75 from the line and finding that .7 or less are defective?
Solution:
Example 5:
40% of all the employees have signed up for the stock option plan. An HR specialist believes that this
ratio is too high. She takes a sample of 450 employees and finds that 200 have signed up. What is the
probability of getting a sample proportion larger than this if the population proportion is really 0.4?
Solution:
The probability of getting a sample proportion larger than this will be 0.5 − 0.4582 = 0.0418.
Using the central limit theorem, we have determined that the z value for sample means can be used for
large samples.
Since Z can be positive or negative, the next formula would be more accurate:
x̄ − z_{α/2} ( σ / √n ) ≤ µ ≤ x̄ + z_{α/2} ( σ / √n )
where x̄ − z_{α/2} ( σ / √n ) is the lower confidence limit (LCL) and x̄ + z_{α/2} ( σ / √n ) is the upper confidence limit (UCL).
But a confidence interval presented as such does not take into account the area under the normal
curve that is outside the confidence interval.
We estimate with some confidence that the mean is within the interval, but we cannot be absolutely certain that it is unless the confidence interval is 100%.
For a two-tailed normal curve, if we want to be 95% sure that µ is within that interval, then the confidence coefficient will be equal to .95 (1 − α, with α = .05) and the area under each tail will be α/2 = .025, or:
The table below shows the most commonly used confidence coefficients and their z-score values.
Confidence level (1 − α)    α       z_{α/2}
0.90                        0.10    1.645
0.95                        0.05    1.96
0.99                        0.01    2.58
Example 6:
A survey of companies that use solar panels as a primary source of electricity was conducted. The question that was asked was this: How much of the electricity used in your company comes from the solar panels? A random sample of 55 responses produced a mean of 45 megawatts. Suppose the population standard deviation for this question is 15.5 megawatts. Find the 95% confidence interval for the mean.
Solution:
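The interval is x̄ ± z_{α/2} ( σ / √n ) = 45 ± 1.96 ( 15.5 / √55 ) ≈ 45 ± 4.1.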
We can be 95% sure that the mean will be between 40.9 and 49.1 megawatts; in other words, the probability for the mean to be between 40.9 and 49.1 will be 0.95.
When the sample size is large (n > 30), the sample's standard deviation can be used as an estimate of the population standard deviation.
Example 7:
A sample of 200 circuit boards was taken from a production line; it showed the average number of defects per board to be 7, with a standard deviation of 2. What is the 95% confidence interval for the population average?
Solution:
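Since n = 200 is large, s = 2 can be used in place of σ: 7 ± 1.96 ( 2 / √200 ) ≈ 7 ± 0.28, i.e., approximately 6.72 to 7.28 defects per board.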
In repeated sampling, 95% of the confidence intervals will enclose the average defects per circuit board
for the whole population .
Example 8:
What would the interval be like if the confidence interval were 90%?
Solution:
In repeated sampling, 90% of the confidence intervals will enclose the average defects per circuit board for the whole population.
Estimating the population mean with small sample sizes and unknown σ: the t-Distribution
We have seen that when the population is normally distributed and the standard deviation is known, µ can be estimated to be within the interval x̄ ± z_{α/2} ( σ / √n ). But as in the case of the above example, σ is not known. In these cases it can be replaced by S, the sample's standard deviation, and µ is found within the interval x̄ ± z_{α/2} ( S / √n ). Replacing σ with S can only be a good approximation if the sample sizes are large, i.e. n > 30.
In fact, the Z formula has been determined not to always generate normal distributions for small sample sizes even if the population is normally distributed.
So in the case of small samples and when σ is not known, the t-distribution is used instead.
The right side of this equation is identical to that of the Z formula, but the tables used to determine the values are different from the ones used for the z values.
Just as in the case of the z formula, the t formula can also be manipulated to estimate µ, but since the sample sizes are small, in order not to produce a biased result we work in terms of the degrees of freedom, df = n − 1.
Example 9:
A manager of a car rental company wants to know the number of times luxury cars are rented in a month. She takes a random sample of 19 cars, which produces the following results:
3 7 12 5 9 13 2 8 6 14 6 1 2 3 2 5 11 13 5
She wants to use these data to construct a 95% confidence interval to estimate the average.
Solution:
3 + 7 + 12 + 5 + 9 +13 + 2 + 8 + 6 + 14 + 6 + 1 + 2 + 3 + 2 + 5 +11 +13 + 5 = 127
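A sketch of the remaining arithmetic (values rounded): x̄ = 127 / 19 ≈ 6.68, the sample standard deviation is s ≈ 4.23, and t_{0.025,18} = 2.101, so the 95% interval is approximately 6.68 ± 2.101 ( 4.23 / √19 ) ≈ 6.68 ± 2.04, or roughly 4.6 to 8.7 rentals per month.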
χ² Distribution
In most cases, in quality control, the objective of the auditor is not to find the mean of a population but
rather to determine the level of variation of the output. He would for instance want to know how much
variation the production process exhibits about the target in order to see what adjustments are needed
to reach a defect free process.
The shape of the χ² distribution resembles the normal curve, but it is not symmetrical and its shape depends on the degrees of freedom.
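For a sample of size n with variance s², the confidence interval for the population variance σ² takes the usual chi-square form:
( n − 1 ) s² / χ²_{α/2} ≤ σ² ≤ ( n − 1 ) s² / χ²_{1−α/2}
where both chi-square values are read at n − 1 degrees of freedom.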
Example 10:
A sample of 9 screws was taken out of a production line and the values are as follows:
13.00mm
13.00mm
12.00mm
12.55mm
12.99mm
12.89mm
12.88mm
12.97mm
12.99mm
We are trying to estimate the population variance with 95% confidence.
Solution:
We need to determine the point estimate, which is the sample's variance.
With degrees of freedom df = n − 1 = 8 and a confidence level of 95%, the table gives χ²_{0.025} = 17.5346 and χ²_{0.975} = 2.17973 for 8 degrees of freedom.
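A sketch of the resulting interval (values rounded): the sample variance of the nine measurements is s² ≈ 0.112, so 8(0.112)/17.5346 ≈ 0.051 and 8(0.112)/2.17973 ≈ 0.412, giving 0.051 ≤ σ² ≤ 0.412 (mm²) with 95% confidence.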
We can clearly see that the numerator is nothing but the sampling error E. We can therefore replace x̄ − µ by E in the Z formula and come up with:
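n = ( z_{α/2} σ / E )²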
Example 11:
A production manager at a call center wants to know how much time an employee should spend on the phone with a customer, on average. She wants to be within 2 minutes of the actual length of time and
the standard deviation of the average time spent is known to be 3 minutes. What sample size of calls
should she consider if she wants to be 95% confident of her result?
Solution
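n = ( z_{α/2} σ / E )² = ( 1.96 × 3 / 2 )² = 8.6436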
Since we cannot have 8.6436 calls, we can round up the result to 9 calls.
The manager can be 95% confident that with a sample of 9 calls she can determine the average length
of time an employee needs to spend on the phone with a customer.
We have already seen that the Z formula for the sample proportion is given as:
Example 12:
A study is being conducted to determine the extent to which companies promote Open Book
Management. The question asked to employees is: Do your managers provide you with enough
information about the company? It was previously estimated that only 30% of the companies did
actually provide the information needed to their employees. If the researcher wants to be 95% confident
in the results and be within 0.05 of the true population proportion, what size of sample should she take?
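The example stops at the question. Under the sample-size formula n = (Zα/2)² p(1 − p)/E² given above, the answer works out to 323 companies; the sketch below (Python, with scipy.stats assumed only for the critical value) shows the computation.

import math
from scipy import stats

z = stats.norm.ppf(0.975)           # about 1.96
p = 0.30                            # previously estimated proportion
E = 0.05                            # desired margin of error

n = z ** 2 * p * (1 - p) / E ** 2   # about 322.7
print(math.ceil(n), "companies")    # 323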
Associated with the LTPD is a confidence statement one can make. If a lot passes the sampling plan, one can state with 90% confidence that its quality level is equal to or better than the LTPD, i.e. that the defective rate of the lot is at or below the LTPD (passing the sampling plan demonstrates that the LTPD requirement has been met).
The LTPD helps describe the protection provided by a sampling plan, but it only provides half the answer: it describes what the sampling plan will reject. It is equally important to know what the sampling plan will accept, and that information is provided by the AQL of the sampling plan.
Lot Tolerance Percent Defective (LTPD), expressed in percent defective, is the poorest quality in an individual lot that should be accepted. The LTPD has a low probability of acceptance; in many sampling plans, the LTPD is the percent defective having a 10% probability of acceptance under the agreed sampling plan. With this plan, the producer agrees to produce just enough nonconforming product such that the consumer will still accept the lot using the agreed-to sampling plan and AQL level.
RQL and Beta together specify the fraction defective (RQL) of a lot that the plan will have only a small probability (Beta) of accepting. Together they define the "consumer's point" of the operating characteristic (OC) curve of the plan. It is called the "consumer's point" because it satisfies the consumer's intention of usually rejecting lots that are truly RQL fraction defective.
Some of the standards manuals neglect to mention RQL/LTPD/LQ. They bury the consumer’s point in
the structure of the sampling plan tables so that you do not have to think about it. Using such
standards, you cannot be sure that the sampling plan that you choose will classify a lot as acceptable or
rejectable the way that you want it to. The advantage of such standards is that by using them you can
say truthfully that you are following the standard. This might make you politically safe but not technically
safe from a quality standpoint.
The above description is for attribute sampling plans. If you have variables data you must additionally
use the within-lot standard deviation. If your variables sampling plan is for the mean of a measured
variable, AQL-mean and RQL-mean are used.
Acceptable Quality Level (AQL) is the maximum percent defective that is considered satisfactory as a
process average by the producer and consumer. In other words, if, on average, 4% (AQL=4.0)
nonconforming product is acceptable to BOTH the producer and consumer, then the producer agrees to
produce, on average, 4% nonconforming product.
OC Curve - Operating Characteristic Curve shows how the probability of acceptance (y-axis) depends
on the quality level (bottom axis).
Acceptance sampling is an important aspect of statistical quality control. It originated back in World
War II when the military had to determine which batches of ammunition to accept and which ones to
reject. They knew that they couldn't test every bullet to determine if it will do its job in the field. On the
other hand, they had to be confident that the bullets they're getting will not fail when their lives are
already on the line. Acceptance sampling was the answer - testing a few representative bullets from
the lot so they'll know how the rest of the bullets will perform.
Acceptance sampling is a compromise between not doing any inspection at all and 100% inspection.
The scheme by which representative samples will be selected from a population and tested to
determine whether the lot is acceptable or not is known as an acceptance plan or sampling plan. There
are two major classifications of acceptance plans: based on attributes ("go, no-go") and based on
variables.
Sampling plans can be single, double or multiple. A single sampling plan for attributes consists of a
sample of size n and an acceptance number c. The procedure operates as follows: select n items at random from the lot. If the number of defectives in the sample is less than or equal to c, the lot is accepted; otherwise, the lot is rejected.
In order to measure the performance of an acceptance or sampling plan, the Operating Characteristic
(OC) curve is used. This curve plots the probability of accepting the lot (Y-axis) versus the lot fraction
or percent defectives.
A single sampling plan, as previously defined, is specified by the pair of numbers (n,c). The sample size
is n, and the lot is rejected if there are more than c defectives in the sample; otherwise the lot is
accepted.
There are several distinct approaches that could be used to construct such plans. Two widely used ways of picking (n,c) are:
a. Use tables (such as MIL STD 105D) that focus on either the AQL or the LTPD desired.
b. Specify two desired points on the OC curve and solve for the (n,c) that uniquely determines an OC curve going through these points.
(Related methods include the Two-Point Method, the Reverse of the Two-Point Method, and the OC-Curve method, to name just a few.)
III. a. Use tables (such as MIL STD 105D) that focus on either the AQL or the LTPD
desired.
1 Choosing a Sampling Plan: MIL Standard 105D
Sampling plans are typically set up with reference to an acceptable quality level, or AQL . The AQL is the base
line requirement for the quality of the producer's product. The producer would like to design a sampling plan
such that the OC curve yields a high probability of acceptance at the AQL. On the other side of the OC curve,
the consumer wishes to be protected from accepting poor quality from the producer. So the consumer
establishes a criterion, the lot tolerance percent defective or LTPD . Here the idea is to only accept poor quality
product with a very low probability. Mil. Std. plans have been used for over 50 years to achieve these goals.
These three streams combined in 1950 into a standard called Mil. Std. 105A. It has since been modified from time to time and issued as 105B, 105C and 105D. Mil. Std. 105D was issued by the U.S. government in 1963.
It was adopted in 1971 by the American National Standards Institute as ANSI Standard Z1.4 and in 1974 it
was adopted (with minor changes) by the International Organization for Standardization as ISO Std. 2859. The
latest revision is Mil. Std 105E and was issued in 1989.
These three similar standards are continuously being updated and revised, but the basic tables remain the
same. Thus the discussion that follows of the germane aspects of Mil. Std. 105E also applies to the other two
standards.
The foundation of the Standard is the acceptable quality level or AQL. In the following scenario, a certain
military agency, called the Consumer from here on, wants to purchase a particular product from a supplier,
called the Producer from here on.
In applying the Mil. Std. 105D it is expected that there is perfect agreement between Producer and Consumer
regarding what the AQL is for a given product characteristic. It is understood by both parties that the Producer
will be submitting for inspection a number of lots whose quality level is typically as good as specified by the
Consumer. Continued quality is assured by the acceptance or rejection of lots following a particular sampling
plan and also by providing for a shift to another, tighter sampling plan, when there is evidence that the
Producer's product does not meet the agreed-upon AQL.
Mil. Std. 105E offers three types of sampling plans: single, double and multiple plans. The choice is, in general,
up to the inspectors.
Because of the three possible selections, the standard does not give a sample size, but rather a sample code
letter. This, together with the decision of the type of plan yields the specific sampling plan to be used.
In addition to an initial decision on an AQL it is also necessary to decide on an "inspection level". This
determines the relationship between the lot size and the sample size. The standard offers three general and
four special levels.
A lot acceptance sampling plan (LASP) is a sampling scheme and a set of rules for making decisions. The
decision, based on counting the number of defectives in a sample, can be to accept the lot, reject the lot, or
even, for multiple or sequential sampling schemes, to take another sample and then repeat the decision
process.
x Single sampling plans:
One sample of items is selected at random from the lot and the disposition of the lot is determined from the resulting information. These plans are usually denoted as (n,c) plans for a sample size n, where the lot is rejected if there are more than c defectives. These are the most common (and easiest) plans to use although not the most efficient in terms of average number of samples needed.
x Double sampling plans:
After the first sample is tested, there are three possibilities:
1. Accept the lot
2. Reject the lot
3. No decision
If the outcome is (3), and a second sample is taken, the procedure is to combine the results of both
samples and make a final decision based on that information.
x Multiple sampling plans:
This is an extension of the double sampling plans where more than two samples are needed to reach a
conclusion. The advantage of multiple sampling is smaller sample sizes.
x Sequential sampling plans:
This is the ultimate extension of multiple sampling where items are selected from a lot one at a time and
after inspection of each item a decision is made to accept or reject the lot or select another unit.
x Skip lot sampling plans:
Skip lot sampling means that only a fraction of the submitted lots are inspected.
Deriving a plan, within one of the categories listed above, is discussed in the pages that follow. All derivations
depend on the properties you want the plan to have. These are described using the following terms:
Æ Acceptable Quality Level (AQL): The AQL is a percent defective that is the base line requirement for
the quality of the producer's product. The producer would like to design a sampling plan such that there is
a high probability of accepting a lot that has a defect level less than or equal to the AQL..
Æ Lot Tolerance Percent Defective (LTPD): The LTPD is a designated high defect level that would be
unacceptable to the consumer. The consumer would like the sampling plan to have a low probability of
accepting a lot with a defect level as high as the LTPD.
Æ Type I Error (Producer's Risk): This is the probability, for a given (n,c) sampling plan, of rejecting a lot that has a defect level equal to the AQL. The producer suffers when this occurs, because a lot with acceptable quality was rejected. The symbol α is commonly used for the Type I error, and typical values for α range from 0.2 to 0.01.
Æ Type II Error (Consumer's Risk): This is the probability, for a given (n,c) sampling plan, of accepting a lot with a defect level equal to the LTPD. The consumer suffers when this occurs, because a lot with unacceptable quality was accepted. The symbol β is commonly used for the Type II error, and typical values for β range from 0.2 to 0.01.
Æ Operating Characteristic (OC) Curve: This curve plots the probability of accepting the lot (Y-axis)
versus the lot fraction or percent defectives (X-axis). The OC curve is the primary tool for displaying and
investigating the properties of a LASP.
Æ Average Outgoing Quality (AOQ): A common procedure, when sampling and testing is non-
destructive, is to 100% inspect rejected lots and replace all defectives with good units. In this case, all
rejected lots are made perfect and the only defects left are those in lots that were accepted. AOQ's refer
to the long term defect level for this combined LASP and 100% inspection of rejected lots process. If all
lots come in with a defect level of exactly p, and the OC curve for the chosen (n,c) LASP indicates a
probability pa of accepting such a lot, over the long run the AOQ can easily be shown to be:
AOQ = (pa)(p)(N − n) / N
Æ Average Total Inspection (ATI): When rejected lots are 100% inspected, the long-run average amount of inspection per lot is ATI = n + (1 − pa)(N − n).
IIIb. Specify 2 desired points on the OC curve and solve for the (n,c) that uniquely
determines an OC curve going through these points.
We start by looking at a typical OC curve. The OC curve for a (52 ,3) sampling plan is shown below.
It is instructive to show how the points on this curve are obtained, once we have a sampling plan (n,c) - later
we will demonstrate how a sampling plan (n,c) is obtained.
We assume that the lot size N is very large, as compared to the sample size n, so that removing the sample
doesn't significantly change the remainder of the lot, no matter how many defects are in the sample. Then the
distribution of the number of defectives, d, in a random sample of n items is approximately binomial with
parameters n and p, where p is the fraction of defectives per lot.
The probability of acceptance is the probability that d, the number of defectives, is less than or equal to c, the accept number. This means that
Pa = P(d ≤ c) = Σd=0..c [n! / (d!(n − d)!)] p^d (1 − p)^(n−d)
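As a sketch of how the points on the OC curve of the (52, 3) plan can be obtained, the following Python fragment evaluates Pa with the binomial distribution (scipy.stats.binom is used here purely for convenience).

from scipy.stats import binom

def prob_accept(p, n=52, c=3):
    # Pa = P(d <= c) for a single sampling plan (n, c)
    return binom.cdf(c, n, p)

for p in (0.01, 0.02, 0.03, 0.05, 0.07, 0.10):
    print(f"p = {p:.2f}   Pa = {prob_accept(p):.3f}")   # at p = 0.03 this gives about 0.930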
Solving for (n,c): Equations for calculating a sampling plan with a given OC curve
In order to design a sampling plan with a specified OC curve one needs two designated points. Let us design a sampling plan such that the probability of acceptance is 1 − α for lots with fraction defective p1 and the probability of acceptance is β for lots with fraction defective p2. Typical choices for these points are: p1 is the AQL, p2 is the LTPD, and α, β are the Producer's Risk (Type I error) and Consumer's Risk (Type II error), respectively.
If we are willing to assume that binomial sampling is valid, then the sample size n and the acceptance number c are the solution to
1 − α = Σd=0..c [n! / (d!(n − d)!)] p1^d (1 − p1)^(n−d)
β = Σd=0..c [n! / (d!(n − d)!)] p2^d (1 − p2)^(n−d)
These two simultaneous equations are nonlinear so there is no simple, direct solution. There are however a
number of iterative techniques available that give approximate solutions so that composition of a computer
program poses few problems.
Assume all lots come in with exactly a p0 proportion of defectives. After screening a rejected lot, the final fraction defective will be zero for that lot. However, accepted lots have fraction defective p0. Therefore, the outgoing lots from the inspection stations are a mixture of lots with fraction defective p0 and lots with fraction defective 0. Assuming the lot size is N, we have
AOQ = (pa)(p0)(N − n) / N
For example, let N = 10000, n = 52, c = 3, and p, the quality of incoming lots, = 0.03. Now at p = 0.03 we find from the OC curve that pa = 0.930, so AOQ = (0.930)(0.03)(10000 − 52)/10000 ≈ 0.0278. Setting p = .01, .02, ..., .12, we can generate a table of AOQ values; a plot of the AOQ versus p is given below.
From examining this curve we observe that when the incoming quality is very good (very small fraction of
defectives coming in), then the outgoing quality is also very good (very small fraction of defectives going out).
When the incoming lot quality is very bad, most of the lots are rejected and then inspected. The "duds" are
eliminated or replaced by good ones, so that the quality of the outgoing lots, the AOQ, becomes very good. In
between these extremes, the AOQ rises, reaches a maximum, and then drops.
The maximum ordinate on the AOQ curve represents the worst possible quality that results from the rectifying
inspection program. It is called the average outgoing quality limit, (AOQL ).
From the table we see that the AOQL = 0.0372 at p = .06 for the above example.
One final remark: if N >> n, then the AOQ ~ pa p
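A short Python sketch (again using scipy.stats.binom, and assuming the AOQ formula given above) scans incoming quality levels and locates the AOQL numerically; it reproduces a worst-case outgoing quality of roughly 0.037 near p ≈ 0.06, in line with the table referred to above.

import numpy as np
from scipy.stats import binom

N, n, c = 10000, 52, 3

def aoq(p):
    pa = binom.cdf(c, n, p)          # probability of accepting the lot
    return pa * p * (N - n) / N      # rejected lots are screened, so they leave no defectives

ps = np.arange(0.005, 0.1205, 0.005)
aoqs = np.array([aoq(p) for p in ps])
i = int(aoqs.argmax())
print(f"AOQL is about {aoqs[i]:.4f}, reached near p = {ps[i]:.3f}")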
Calculating the Average Total Inspection: The Average Total Inspection (ATI)
What is the total amount of inspection when rejected lots are screened?
If all lots contain zero defectives, no lot will be rejected.
If all items are defective, all lots will be inspected, and the amount to be inspected is N.
Finally, if the lot quality is 0 < p < 1, the average amount of inspection per lot will vary between the sample
size n, and the lot size N.
Let the quality of the lot be p and the probability of lot acceptance be pa, then the ATI per lot is
ATI = n + (1 - pa) (N - n)
For example, let N = 10000, n = 52, c = 3, and p = .03 We know from the OC table that pa = 0.930. Then
ATI = 52 + (1-.930) (10000 - 52) = 753. (Note that while 0.930 was rounded to three decimal places, 753
was obtained using more decimal places.)
Setting p = .01, .02, ..., .14 generates a table of ATI values; a plot of ATI versus p, the Incoming Lot Quality (ILQ), is given below.
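The ATI calculation can be checked the same way; the sketch below (Python, scipy.stats.binom assumed) reproduces the value of about 753 for p = 0.03.

from scipy.stats import binom

N, n, c = 10000, 52, 3

def ati(p):
    pa = binom.cdf(c, n, p)             # accepted lots: only the n sampled items are inspected
    return n + (1 - pa) * (N - n)       # rejected lots are 100% inspected

print(round(ati(0.03)))                 # about 753, matching the worked example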
Double and multiple sampling plans were invented to give a questionable lot another chance. For example, if
in double sampling the results of the first sample are not conclusive with regard to accepting or rejecting, a
second sample is taken. Application of double sampling requires that a first sample of size n1 is taken at
random from the (large) lot. The number of defectives is then counted and compared to the first sample's
acceptance number a1 and rejection number r1. Denote the number of defectives in sample 1 by d1 and in sample 2 by d2, then:
If d1 ≤ a1, the lot is accepted.
If d1 ≥ r1, the lot is rejected.
If a1 < d1 < r1, a second sample is taken.
If a second sample of size n2 is taken, the number of defectives, d2, is counted. The total number of defectives
is D2 = d1 + d2. Now this is compared to the acceptance number a2 and the rejection number r2 of sample 2.
In double sampling, r2 = a2 + 1 to ensure a decision on the sample.
There exist a variety of tables that assist the user in constructing double and multiple sampling plans. The
index to these tables is the p2/p1 ratio, where p2 > p1. One set of tables, taken from the Army Chemical Corps Engineering Agency for α = .05 and β = .10, is given below:
Example
We wish to construct a double sampling plan according to
p1 = 0.01, α = 0.05, p2 = 0.05, β = 0.10, and n1 = n2
We find the row whose R is closest to 5. This is the 5th row (R = 4.65). This gives c1 = 2 and c2 = 4. The value of n1 is determined from either of the two columns labeled pn1.
The left column holds α constant at 0.05 (P = 0.95 = 1 − α) and the right column holds β constant at 0.10 (P = 0.10). Then holding α constant we find pn1 = 1.16, so n1 = 1.16/p1 = 116. And, holding β constant we find pn1 = 5.39, so n1 = 5.39/p2 = 108. Thus the desired sampling plan is
n1 = 108 c1 = 2 n2 = 108 c2 = 4
If we opt for n2 = 2n1, and follow the same procedure using the appropriate table, the plan is:
n1 = 77 c1 = 1 n2 = 154 c2 = 4
The first plan needs fewer samples if the number of defectives in sample 1 is greater than 2, while the second plan needs fewer samples if the number of defectives in sample 1 is less than 2.
We will illustrate how to calculate the ASN curve with an example. Consider a double-sampling plan n1 = 50,
c1= 2, n2 = 100, c2 = 6, where n1 is the sample size for plan 1, with accept number c1, and n2, c2, are the
sample size and accept number, respectively, for plan 2.
Let p' = .06. Then the probability of acceptance on the first sample, which is the chance of getting two or fewer defectives, is .416 (using binomial tables). The probability of rejection on the first sample, which is the chance of getting more than six defectives, is (1 − .971) = .029. The probability of making a decision on the first sample is therefore .445, the sum of .416 and .029. With complete inspection of the second sample, the average sample size is equal to the size of the first sample times the probability that there will be only one sample, plus the size of the combined samples times the probability that a second sample will be necessary. For the sampling plan under consideration, the ASN with complete inspection of the second sample for a p' of .06 is
ASN = 50(0.445) + 150(1 − 0.445) = 105.5
The general formula for the average sample number curve of a double-sampling plan with complete inspection of the second sample is
ASN = n1 P1 + (n1 + n2)(1 − P1)
where P1 is the probability of a decision on the first sample. The graph below shows a plot of the ASN versus
p'.
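A minimal Python sketch of the ASN formula for this double-sampling plan, following the decision rule described above (accept on d1 ≤ c1, reject on d1 > c2, otherwise take the second sample):

from scipy.stats import binom

n1, c1, n2, c2 = 50, 2, 100, 6

def asn(p):
    accept_1 = binom.cdf(c1, n1, p)            # decision: accept on the first sample
    reject_1 = 1 - binom.cdf(c2, n1, p)        # decision: reject on the first sample
    P1 = accept_1 + reject_1                   # probability of any decision on sample 1
    return n1 * P1 + (n1 + n2) * (1 - P1)

print(round(asn(0.06), 1))                     # about 105.5 for p' = 0.06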
Skip Lot sampling means that only a fraction of the submitted lots are inspected. This mode of sampling is of
the cost-saving variety in terms of time and effort. However skip-lot sampling should only be used when it has
been demonstrated that the quality of the submitted product is very good.
The parameters f and i are essential to calculating the probability of acceptance for a skip-lot sampling plan. In
this scheme, i, called the clearance number, is a positive integer and the sampling fraction f is such that 0 < f
< 1. Hence, when f = 1 there is no longer skip-lot sampling. The calculation of the acceptance probability for
the skip-lot sampling plan is performed via the following formula:
Pa(f, i) = [f P + (1 − f) P^i] / [f + (1 − f) P^i]
where P is the probability of accepting a lot with a given proportion of incoming defectives p, from the OC
curve of the single sampling plan.
An important property of skip-lot sampling plans is the average sample number (ASN ). The ASN of a skip-lot
sampling plan is
ASNskip-lot = (F)(ASNreference)
where F is defined by
F = f / [(1 − f) P^i + f]
Therefore, since 0 < F < 1, it follows that the ASN of skip-lot sampling is smaller than the ASN of the
reference sampling plan.
In summary, skip-lot sampling is preferred when the quality of the submitted lots is excellent and the supplier
can demonstrate a proven track record.
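Assuming the skip-lot formulas given above, the acceptance probability and the ASN reduction factor F can be evaluated directly; the numbers used in the sketch below (reference-plan acceptance probability, f and i) are hypothetical and chosen only for illustration.

def skip_lot(P, f, i):
    # P: acceptance probability of the reference plan at a given incoming quality
    # f: sampling fraction (0 < f < 1), i: clearance number
    Pi = P ** i
    Pa = (f * P + (1 - f) * Pi) / (f + (1 - f) * Pi)
    F = f / ((1 - f) * Pi + f)                 # ASN(skip-lot) = F * ASN(reference)
    return Pa, F

# Hypothetical numbers: reference plan accepts 95% of lots, inspect 1 lot in 4,
# clearance number of 10 consecutively accepted lots.
Pa, F = skip_lot(P=0.95, f=0.25, i=10)
print(f"Pa = {Pa:.3f}, ASN factor F = {F:.3f}")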
The tables give inspection plans for sampling by attributes for a given batch size and acceptable quality
level (AQL). An inspection plan includes: the sample size/s (n), the acceptance number/s (c), and the
rejection number/s (r). The single sampling procedure with these parameters is as follows: Draw a
random sample of n items from the batch. Count the number of nonconforming items within the sample
(or the number of nonconformities, if more than one nonconformity is possible on a single item). If the
number of nonconforming items is c or less, accept the entire batch. If it is r or more then reject it. In
most cases r =c+1 (for double and multiple plans, there are several values for the sample sizes,
acceptance, and rejection numbers).
The standard includes three types of inspection (normal, tightened, and reduced inspection). The type
of inspection that should be applied depends on the quality of the last batches inspected. At the
beginning of inspection, normal inspection is used. The types of inspection differ as follows:
x Tightened inspection (for a history of low quality) requires a larger sample size than under normal inspection.
x Reduced inspection (for a history of high quality) has a higher acceptance number relative to normal inspection (so it is easier to accept the batch).
There are special switching rules between the three types of inspection, as well as a rule for
discontinuation of inspection. These rules are empirically based.
Nonconforming items
The nonconformity of a batch is expressed as the percent of nonconforming items. When each item can contain more than one defect, it is expressed as the number of nonconformities (defects) per 100 items.
Percent/Proportion Non-Conforming (p)
The percent or proportion of non-conforming items in a batch or in a process. In many cases
this is unknown, but it is used to learn about scenarios for different values of p.
Rejection Limit (r)
The smallest number of non-conforming items in a sample that would lead to the rejection of the entire lot. In most cases (besides reduced sampling) this value is equal to the acceptance limit + 1.
Run Length
The run length is the number of samples taken until an alarm is signaled by the control
chart.
Sample size (n)
The number of items that should be randomly chosen from a batch.
Sampling Fraction f
The proportion of items (or batches, in Skip lot sampling) that are inspected during some
phase, when applying continuous sampling. f is between 0 and 1. There are three ways to
sample with a fraction of f:
1. Probability Sampling: Each item/batch is sampled with probability f.
2. Systematic Sampling: Every 1/f-th item/batch is sampled. 1/f must then be a natural number (e.g., every 3rd item is inspected, when f = 1/3).
3. Block-Random Sampling: From each 1/f consecutive items/batches, one is chosen at
random. 1/f must then be a natural number (e.g., in each block of 3 items one is chosen,
when f=1/3).
Shift size
The purpose of using a control chart is to detect a shift in the process mean, of a specific
size. To detect a shift of two standard-deviations-of-the mean, enter the value 2.
Type of Inspection
There are three types of inspection:
x Normal inspection is used at the start of the inspection activity.
x Tightened inspection is used when the vendor's recent quality history has deteriorated (acceptance
criteria are more stringent than under normal inspection).
x Reduced inspection is used when the vendor's recent quality history has been exceptionally good
(sample sizes are usually smaller than under normal inspection).
Several other sampling-related issues currently facing the medical device industry will also be
discussed. In February of this year, the U.S. Department of Defense canceled Mil-Std-105E, which
contained a widely used table of sampling plans. What are the alternatives and how should they be
used? As the industry focuses increasingly on the prevention of defects and statistical process control
(SPC), will the need for acceptance sampling disappear? And finally, how can the cost of acceptance
sampling be reduced to help manufacturers remain competitive in today's marketplace?
Ideally, when a sampling plan is used, all bad lots will be rejected and all good lots accepted. However,
because accept/reject decisions are based on a sample of the lot, there is always a chance of making
an incorrect decision. So what protection does a sampling plan offer? The behavior of a sampling plan
can be described by its operating characteristic (OC) curve, which plots percent defectives versus the
corresponding probabilities of acceptance. Figure 1 shows the OC curve of the attribute single
sampling plan described above. With that plan, if a lot is 3% defective the corresponding probability of
acceptance is 0.56. Similarly, the probability of accepting lots that are 1% defective is 0.91 and the
probability of accepting lots that are 7% defective is 0.13.
An OC curve is generally summarized by two points on the curve: the acceptable quality level (AQL) and the lot tolerance percent defective (LTPD). The AQL describes what the sampling plan generally accepts; formally, it is that percent defective with a 95% chance of acceptance. The LTPD,
which describes what the sampling plan generally rejects, is that percent defective with a 10% chance
of acceptance. As shown in Figure 2, the single sampling plan n=50 and a=1 has an AQL of 0.72%
defective and an LTPD of 7.6%. The sampling plan routinely accepts lots that are 0.72% or better and
rejects lots that are 7.6% defective or worse. Lots that are between 0.72% and 7.6% defective are
sometimes accepted and sometimes rejected.
Figure 2: AQL and LTPD of Single Sampling Plan n=50 and a=1
Manufacturers must know and document the AQLs and LTPDs of the sampling plans used for their
products. The AQLs and LTPDs of individual sampling plans can be found in Table X of MIL-STD-
105E, and Chart XV of ANSI Z1.4 gives the AQLs and LTPDs of entire switching systems (described
below).1,2 Software also can be used to obtain the AQLs and LTPDs of a variety of sampling plans.3
Spec-AQLs are commonly interpreted as the maximum percent defective for which acceptance is
desired. Lots below the Spec-AQL are best accepted; Lots above the Spec-AQL are best rejected.
The Spec-AQL, therefore, represents the break-even quality between acceptance and rejection. For
lots with percent defectives below the Spec-AQL, the cost of performing a 100% inspection will exceed
the benefits of doing so in terms of fewer defects released. Since this cost is ultimately passed on to
the customer, it is not in the customer's best interest for the manufacturer to spend $1000 to 100%
inspect a lot if only one defect is found that otherwise would have cost the customer $100. Spec-AQLs
should not be interpreted as permission to produce defects; however, once lots have been produced,
the Spec-AQLs provide guidance on making product disposition decisions.
Example 1: If a process is known to consistently produce lots with percent defectives above the
Spec-AQL, all lots should be 100% inspected, but if some lots are below the Spec-AQL, the company
could use a sampling plan to screen out lots not requiring 100% inspection. To ensure that lots worse
than the Spec-AQL are rejected, a sampling plan with an LTPD equal to the Spec-AQL can be used,
but at the risk of rejecting some acceptable lots. For a Spec-AQL of 1.0%, the single sampling plan
n=230 and a=0, which has an LTPD of 1.0%, would be appropriate. There is a simple formula for
determining the sample size for such studies. Assuming an accept number of zero and a desired
confidence level of 90%, the required sample size is
n = 230/Spec-AQL
For 95% confidence, the formula is
n = 300/Spec-AQL
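The 230 and 300 factors follow from the accept-on-zero condition (1 − p)^n ≤ 1 − confidence at the LTPD. A small Python check (an illustration, not part of the original article) confirms the rule of thumb.

import math

def zero_accept_sample_size(ltpd_percent, confidence=0.90):
    # Smallest n with accept number a = 0 such that a lot at the LTPD has at most
    # (1 - confidence) chance of acceptance: (1 - LTPD)^n <= 1 - confidence
    p = ltpd_percent / 100.0
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

print(zero_accept_sample_size(1.0, 0.90))   # 230, matching n = 230 / Spec-AQL
print(zero_accept_sample_size(1.0, 0.95))   # 299, close to n = 300 / Spec-AQL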
Example 2: The same sampling plan might also be used to validate a process for which there is no
prior history. Before reduced levels of inspection are implemented, it should be demonstrated that the
process regularly produces lots below the Spec-AQL. If the first three lots pass inspections using a
sampling plan with an LTPD equal to the Spec-AQL of 1.0%, the manufacturer can state with 90%
confidence that each of these lots is <1% defective.
However, other sampling plans might be better choices. Suppose the process is expected to yield lots
that are around 0.2% defective. The sampling plan n=230 and a=0 has an AQL of 0.022% and
therefore runs a sizeable risk of failing the validation procedure. A sampling plan with an AQL of 0.2%
and an LTPD of 1% would be a better choice. Using the software cited earlier, the resulting plan is
n=667 and a=3.3
Example 3: Once it has been established that the process consistently produces lots with percent
defectives below the Spec-AQL, the objective of future inspections might be to ensure that lots with
>=4% defective are not released. This requires a sampling plan with an LTPD of 4%. Because the sampling plan should also ensure that lots below the Spec-AQL are released, the plan's AQL should be
equal to the Spec-AQL. According to Table I, which gives a variety of sampling plans indexed by their
AQLs and LTPDs, the single sampling plan n=200 and a=4 is the closest match.3 It has an LTPD of
3.96% and an AQL of 0.990%, and thus is statistically valid for this purpose.
Example 4: Now suppose that the process has run for 6 months with an average yield of 0.1%
defectives and no major problems. Although the process has a good history, there is still some concern
that something could go wrong; as a result, the manufacturer should continue to inspect a small number
of samples from each lot. For example, a sampling plan might be selected that ensures that a major
process failure resulting in >=20% defective will be detected on the first lot. The sampling data can
then be trended to detect smaller changes over extended periods of time.
When selecting a sampling plan to detect a major process failure, the nature of the potential failure
modes should be considered. If the primary failure mode of concern is a clogged filter and past failures
have resulted in >=20% defectives, the single sampling plan n=13 and a=0, which has an LTPD of
16.2% and an AQL of 0.4%, is statistically valid.3 If the potential failure mode of concern is a failure to
add detergent to the wash cycle, with a resulting failure rate of 100%, the single sampling plan n=1 and
a=0 is valid.
Example 5: Finally, one might have a proven process for which procedures are in place that minimize
the likelihood of a process failure going undetected. At that point, acceptance sampling might be limited to the routine collection of data sufficient to plot process-average trends. There is nothing wrong
with simply stating in the written procedures that acceptance sampling is not needed and that the
inspections being performed should be considered as process audits.
In summary, selecting a statistically valid sampling plan is a two-part process. First, the purpose of the
inspection should be clearly stated and the appropriate AQL and LTPD selected; then, a sampling plan
should be selected based on the chosen AQL and LTPD. Because different sampling plans may be
statistically valid at different times, all plans should be periodically reviewed. If a medical device
manufacturer doesn't know the protection provided by its sampling plan or is unclear as to the purposes of its inspections, it is at risk.
Nevertheless, because manufacturers may need to update many specifications as a result of this
change, now is an especially appropriate time to reexamine MIL-STD-105E and Z1.4 and how to use
them to select valid sampling plans. Although the term Z1.4 will be used in the following discussion, all
of the ensuing comments apply equally to MIL-STD-105E.
Let us start with what Z1.4 is not. Used by many industries, Z1.4 is not a table of statistically valid
sampling plans. Instead, it contains a broad array of sampling plans that might be of interest to
anyone. For example, one plan requires two samples and has an accept number of 30. Such a plan
would never be appropriate for a medical device, but is applicable in other industries.
Furthermore, Z1.4 is, in fact, a sampling system. A user references the sampling plans in Z1.4 by
specifying an AQL and a level of inspection, and then following a set of procedures for determining
what sampling plan to use based both on lot size and the quality of past lots. The Z1.4 system includes
tightened, normal, and reduced sampling plans and a set of rules for switching between them.
Although these switching rules are frequently ignored, they are an integral part of the standard. As Z1.4
states:
This standard is intended to be used as a system employing tightened, normal, and
reduced inspection on a continuing series of lots .... Occasionally specific individual
plans are selected from the standard and used without the switching rules. This is not
the intended application of the ANSI Z1.4 system and its use in this way should not be
referred to as inspection under ANSI Z1.4.2
Several companies have received Form 483s from FDA for not using the switching rules, a problem that
could have been avoided by having written procedures specifying that the switching rules are not used.
When is the use of switching rules appropriate and when should individual sampling plans be selected
instead? Z1.4 was developed specifically to induce suppliers "to maintain a process average at least as
good as the specified AQL while at the same time providing an upper limit on the consideration of the
consumer's risk of accepting occasional poor lots."2 Thus, the Z1.4 switching system should not be
used to inspect isolated lots, nor should it be used to specify the level of protection for individual lots.
In those cases individual plans should be selected instead.
One situation warrants special mention. Acceptance sampling is frequently used for processes that
generally produce good product but might on occasion break down and produce high levels of defects.
If protection against isolated bad lots or the first bad lot following a series of good lots is the key
concern, the Z1.4 switching rules should not be used or, if they are, the reduced inspection should be
omitted. Because the Z1.4 switching rules are designed to react to gradual shifts in the process
average, they frequently fail to detect isolated bad lots and do not react quickly to sudden shifts in the
lot quality. Even when appropriate, the Z1.4 switching rules are complicated to apply. However, quick
switching systems have been developed that are both simpler to use and provide better protection
during periods of changing quality.3
Finally, there are two common misconceptions about Z1.4. Many people believe that the required
sample sizes increase for larger lots because more samples are required from such lots to maintain the
desired level of protection. The truth is that the standard specifies larger sample sizes to increase the
protection provided for larger lots. The reason for this increase is based on economics: It is more
expensive to make errors classifying large lots; as a result, Z1.4 requires more samples from larger lots
to reduce the risk of such errors. To maintain the same level of protection, one can simply select a
sampling plan based on its OC curve and then use this plan for all lots regardless of size. The single
sampling plan n=13 and a=0 provides the same protection for a lot of 200 units as for a lot of 200,000
units.3,4
The second misconception is that use of Z1.4 ensures that lots worse than the AQL are rejected.
According to this misconception, if the AQL is 1%, lots with >1% defectives are routinely rejected. The
truth is that there is a sizable risk of releasing such lots: one sampling plan with an AQL of 1% still accepts lots that are up to 16% defective. The protection provided by a sampling plan is determined by its LTPD, not its AQL; the AQL reveals nothing about what a sampling plan will reject. As a result of this misconception,
many manufacturers believe that their sampling plans provide greater protection than they do. This
illusion can lead to the use of inappropriate sampling plans and can provide a false sense of security.
Repeating the advice given earlier, manufacturers should determine and document the actual
AQLs and LTPDs of all their sampling plans.
While Z1.4 and equivalent standards are widely used by the device industry, rarely are they used in the
manner intended. Most commonly, individual sampling plans are selected from them. Other tables are
better suited for this purpose, and companies should not be afraid to switch to using those tables. In
addition, using Z1.4 does not ensure valid sampling plans and, in fact, can complicate the selection
process.
[Figure: Control Chart vs. Sampling Plan]
Much of the reaction against acceptance sampling is attributed to quality guru W. Edwards Deming,
who many believe advocated its elimination. However, what Deming really called for was ceasing
reliance on acceptance sampling. If more time and resources are spent on acceptance sampling than
on process improvement and control, or if a company believes that, no matter what else happens, its
sampling plan ensures shipment of only good product, then that company is overly reliant on
acceptance sampling. Instead, its focus should be on defect prevention and continuous process
improvement.
The real issue is not SPC versus acceptance sampling; it is how to combine the two. Both techniques
require routine product inspections; the trick is to use the same data for both purposes. When variables
sampling is used--that is, when the data consist of actual measurements such as seal strength, fill
volume, or flow rate--data can be combined on a single acceptance control chart. Figure 3 provides an
example of such a chart containing fill-volume data. The inside pair of limits, UCL and LCL, are the
control limits. A point falling outside these limits signals that the process is off target and that corrective
action is required. The outside pair of limits, UAL and LAL, are the acceptance limits. A lot whose
sample falls outside these limits is rejected. In the figure, lot 13 is outside the control limits but inside
the acceptance limits, which indicates that the process has shifted. Corrective action on the process is
required to maximize the chance that future products will be good; however, no action is required on the
product lot. Rejecting whenever a point exceeds the control limits can result in the rejection of perfectly
good lots. Similarly, it is wasteful to wait until the acceptance limits are exceeded before taking
corrective action on the process. Therefore, separate limits for process and product actions are
required. Such limits are also frequently called action limits, warning limits, and alert limits. No matter
what the name, however, if the result of exceeding a limit is to act on the process, the limit is serving
the purpose of a control limit; if action is instead taken on the product, the limit is serving as an
acceptance limit.
If attributes sampling is performed, the data must be handled much differently, and care must be taken
in implementing SPC so that the resulting change is not illusionary. Consider, for example, a packing
operation that inspects for missing parts using the single sampling plan n=13 and a=0. Whenever a lot
is rejected, an attempt is made to fix the process. Historically, the process has averaged around 0.2%
defective. When management decides to implement SPC, a p-chart of the inspection data is
constructed as shown in Figure 4. The upper control limit is 3.92%, and samples with one or more
defectives exceed this control limit, triggering attempts to fix the process and rejection of recent
product. The company can now state truthfully that SPC is used, but in reality nothing has changed--
the same data are collected and the same actions taken. A better approach is to continue acceptance
sampling as before and, because this does not protect against a gradual increase in the process
average, to analyze the resulting data for trends. Figure 5 shows a p-chart of the same data, but with
the data from each day combined. This chart indicates that a change occurred between days 5 and 6;
this change is not so apparent in Figure 4.
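As a check on the 3.92% control limit quoted for this example, a few lines of Python evaluate the usual p-chart upper control limit, p̄ + 3√(p̄(1 − p̄)/n), for p̄ = 0.2% and n = 13.

import math

p_bar = 0.002      # historical process average, 0.2% defective
n = 13             # sample size per lot

ucl = p_bar + 3 * math.sqrt(p_bar * (1 - p_bar) / n)
print(f"UCL = {ucl:.4f}")    # about 0.0392, i.e. the 3.92% quoted in the text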
Neither SPC nor acceptance sampling can detect a problem before defectives are produced. However, by accumulating data over time, attribute control charts can indicate small changes in the process average that acceptance sampling will not reveal. Used in combination, sampling plans provide immediate protection against major failures while control charts protect against minor sustained problems.
Consider an example. The ANSI Guideline for Gamma Sterilization provides procedures for establishing
and monitoring radiation dosages. One procedure is a quarterly audit of the dosage that requires the
sterilization of test units at a lower dosage than is actually used for the product. The test dose is
selected to give an expected positive rate of 1%. (A positive is a unit that tests nonsterile.) For each
audit, an initial sample of 100 units is tested. If two or fewer positives are found, the process has
passed the audit; in the event of three or four positives, one retest can be performed. This quarterly
audit procedure has an AQL of 1.50% and an LTPD of 5.55%. An alternative to this procedure is to
test 50 samples, passing on zero positives and failing on four or more positives. In the event of 1 to 3
positives, a second sample of 100 units is tested. The audit is considered passed only if the cumulative
number of positives in the 150 units is four or less. This double sampling plan has an AQL of 1.36% and
LTPD of 5.73%.
Figure 6: OC Curves of the ANSI Quarterly Audit Sampling Plan and an Alternative Plan for Monitoring
Sterilization Dosage
The OC curves of both procedures are nearly identical, as shown in Figure 6. Indeed, these two
sampling plans are substantially equivalent procedures, except for the number of units tested. Figure 7
shows average sample number (ASN) curves for the two plans. If the positive rate is 0.5%, the
alternative procedure requires an average of 70 units compared to an average of 102 for the ANSI
quarterly audit procedure. If the alternative plan is used to destructively test expensive medical devices,
this difference can mean a sizable savings.
For any sampling plan, its AQL and LTPD can be found and then other plans providing equivalent
protection can be identified. ANSI Z1.4 provides tables of double and multiple sampling plans that
match its single sampling plans, and tables of matching double sampling plans, quick switching
systems, and variables sampling plans are available for the single sampling plans given in Table I of
this article.3 Single sampling plans are the simplest to use, but require the largest number of samples.
Although they are more complicated, the other types of sampling plans can reduce the number of
samples tested. For destructive tests of expensive products, the number of units tested is the prime
consideration, and the many alternatives to single sampling plans should be investigated.
CONCLUSION
Acceptance sampling is one of the oldest techniques used for quality control, yet it remains poorly
understood and misconceptions regarding its procedures and terminology are widely held. Acceptance
sampling does not have to be complicated. Your company can optimize its procedures by
remembering this list of principles:
x The protection level provided by a sampling plan is described by what it accepts -- its AQL--and
what it rejects-- its LTPD.
x Selecting a statistically valid sampling plan requires stating the objective of the inspection,
selecting the appropriate AQL and LTPD, and then choosing a sampling plan that provides the
desired protection.
x Companies must know the AQLs and LTPDs of all their sampling plans. It doesn't matter
whether a sampling plan comes from MIL-STD-105E or some other source, and the protection
provided by a plan does not depend on the lot size; it's the AQL and LTPD that reveal what
protection the sampling plan provides.
x SPC cannot serve as a replacement for acceptance sampling. Instead, these two techniques
should be combined by using the same data to control the process and to make product
disposition decisions.
x Sampling plans with the same AQL and LTPD are substantially equivalent procedures, so costs
can sometimes be reduced by using equivalent double, multiple, or variables sampling plans as
alternatives to single sampling plans.
REFERENCES
1. Sampling Procedures and Tables for Inspection by Attributes, MIL-STD-105E, Washington, DC: U.S. Government Printing Office, 1989.
2. Sampling Procedures and Tables for Inspection by Attributes, ANSI/ASQC Z1.4, Milwaukee, WI: American Society for Quality Control, 1981.
3. Taylor, W.A., Guide to Acceptance Sampling, Libertyville, IL: Taylor Enterprises, 1992. (Software is supplied with this book.)
4. Schilling, E.G., Acceptance Sampling in Quality Control, New York: Marcel Dekker, 1982.
5. Guideline for Gamma Radiation Sterilization, ANSI/AAMI ST32-1991, Arlington, VA: Association for the Advancement of Medical Instrumentation, 1992.
INTRODUCTION
Quality must be designed into the product, not inspected into it. Quality can be defined as meeting customer needs and
providing superior value. This focus on satisfying the customer's needs places an emphasis on techniques such as
Quality Function Deployment to help understand those needs and plan a product to provide superior value.
Quality Function Deployment (QFD) is a structured approach to defining customer needs or requirements and translating
them into specific plans to produce products to meet those needs. The "voice of the customer" is the term to describe
these stated and unstated customer needs or requirements. The voice of the customer is captured in a variety of ways:
direct discussion or interviews, surveys, focus groups, customer specifications, observation, warranty data, field reports,
etc. This understanding of the customer needs is then summarized in a product planning matrix or "house of quality".
These matrices are used to translate higher level "what's" or needs into lower level "how's" - product requirements or
technical characteristics to satisfy these needs.
While the Quality Function Deployment matrices are a good communication tool at each step in the process, the
matrices are the means and not the end. The real value is in the process of communicating and decision-making with
QFD. QFD is oriented toward involving a team of people representing the various functional departments that have
involvement in product development: Marketing, Design Engineering, Quality Assurance, Manufacturing/ Manufacturing
Engineering, Test Engineering, Finance, Product Support, etc.
The active involvement of these departments can lead to balanced consideration of the requirements or "what's" at each
stage of this translation process and provide a mechanism to communicate hidden knowledge - knowledge that is known
by one individual or department but may not otherwise be communicated through the organization. The structure of this
methodology helps development personnel understand essential requirements, internal capabilities, and constraints and
design the product so that everything is in place to achieve the desired outcome - a satisfied customer. Quality Function
Deployment helps development personnel maintain a correct focus on true requirements and minimizes misinterpreting
customer needs. As a result, QFD is an effective communication and quality planning tool.
Quality Function Deployment requires that the basic customer needs are identified. Frequently, customers will try to
express their needs in terms of "how" the need can be satisfied and not in terms of "what" the need is. This limits
consideration of development alternatives. Development and marketing personnel should ask "why" until they truly understand what the root need is. Break down general requirements into more specific requirements by probing what is needed.
Once customer needs are gathered, they then have to be organized. The mass of interview notes, requirements
documents, market research, and customer data needs to be distilled into a handful of statements that express key
customer needs. Affinity diagramming is a useful tool to assist with this effort. Brief statements which capture key
customer requirements are transcribed onto cards. A data dictionary which describes these statements of need is prepared to avoid any misinterpretation. The cards are then organized into logical groupings of related needs. This will
make it easier to identify any redundancy and serves as a basis for organizing the customer needs for the first QFD
matrix.
In addition to "stated" or "spoken" customer needs, "unstated" or "unspoken" needs or opportunities should be identified.
Needs that are assumed by customers and, therefore not verbalized, can be identified through preparation of a function
tree. These needs normally are not included in the QFD matrix, unless it is important to maintain focus on one or more of
these needs. Excitement opportunities (new capabilities or unspoken needs that will cause customer excitement) are
identified through the voice of the engineer, marketing, or customer support representative. These can also be identified
by observing customers use or maintain products and recognizing opportunities for improvement.
Focus on negative interactions: consider product concepts or technology to overcome these potential tradeoffs, or consider the tradeoffs in establishing target values.
8. Calculate importance ratings. Assign a weighting factor to relationship symbols (9-3-1, 4-2-1, or 5-3-1). Multiply
the customer importance rating by the weighting factor in each box of the matrix and add the resulting products
in each column.
9. Develop a difficulty rating (1 to 5 point scale, five being very difficult and risky) for each product requirement or
technical characteristic. Consider technology maturity, personnel technical qualifications, business risk,
manufacturing capability, supplier/subcontractor capability, cost, and schedule. Avoid too many difficult/high
risk items as this will likely delay development and exceed budgets. Assess whether the difficult items can be
accomplished within the project budget and schedule.
10. Analyze the matrix and finalize the product development strategy and product plans. Determine required
actions and areas of focus. Finalize target values. Are target values properly set to reflect appropriate tradeoffs? Do target values need to be adjusted considering the difficulty rating? Are they realistic with respect to the price points, available technology, and the difficulty rating? Are they reasonable with respect to the
importance ratings? Determine items for further QFD deployment. To maintain focus on "the critical few", less
significant items may be ignored with the subsequent QFD matrices. Maintain the product planning matrix as
customer requirements or conditions change.
One of the guidelines for successful QFD matrices is to keep the amount of information in each matrix at a manageable
level. With a more complex product, if one hundred potential needs or requirements were identified, and these were
translated into an equal or even greater number of product requirements or technical characteristics, there would be
more than 10,000 potential relationships to plan and manage. This becomes an impossible number to comprehend and
manage. It is suggested that an individual matrix not address more than twenty or thirty items on each dimension of the
matrix. Therefore, a larger, more complex product should have its customer needs decomposed into hierarchical levels.
To summarize the initial process, a product plan is developed based on initial market research or requirements
definition. If necessary, feasibility studies or research and development are undertaken to determine the feasibility of the
product concept. Product requirements or technical characteristics are defined through the matrix, a business
justification is prepared and approved, and product design then commences.
The concept selection matrix shown below lists the product requirements or technical characteristics down the left side
of the matrix.
These serve as evaluation criteria. The importance rating and target values (not shown) are also carried forward and
normalized from the product planning matrix. Product concepts are listed across the top. The various product concepts
are evaluated on how well they satisfy each criterion in the left column using the QFD symbols for strong, moderate or weak. If the product concept does not satisfy a criterion, that cell is left blank. The symbol weights (5-3-1) are multiplied by the importance rating for each criterion. These weighted factors are then added for each column. The
preferred concept will have the highest total. This concept selection technique is also a design synthesis technique. For
each blank or weak symbol in the preferred concept's column, other concept approaches with strong or moderate
symbols for that criteria are reviewed to see if a new approach can be synthesized by borrowing part of another concept
approach to improve on the preferred approach.
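The weighted-total logic described above can be sketched in a few lines of Python; the criteria, concepts, and scores below are entirely hypothetical and serve only to illustrate how the preferred concept is identified.

# Hypothetical concept-selection scoring. Rows are criteria carried over from the
# product planning matrix (with importance ratings); columns are candidate concepts.
# Symbol weights follow the 5-3-1 convention (strong / moderate / weak), 0 = blank.

importance = {"seal strength": 5, "fill accuracy": 4, "assembly time": 2}

concepts = {
    "Concept A": {"seal strength": 5, "fill accuracy": 3, "assembly time": 1},
    "Concept B": {"seal strength": 3, "fill accuracy": 5, "assembly time": 3},
    "Concept C": {"seal strength": 1, "fill accuracy": 3, "assembly time": 5},
}

for name, scores in concepts.items():
    total = sum(importance[c] * scores.get(c, 0) for c in importance)
    print(name, total)
# The preferred concept is the column with the highest weighted total.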
Based on this and other evaluation steps, a product concept is selected. The product concept is represented with block
diagrams or a design layout. Critical subsystems, modules or parts are identified from the layout. Criticality is determined
in terms of effect on performance, reliability, and quality. Techniques such as fault tree analysis or failure modes and
effects analysis (FMEA) can be used to determine criticality from a reliability or quality perspective.
The subsystem, assembly, or part deployment matrix is then prepared. The process leading up to the preparation of the
deployment matrix is depicted below.
The product requirements or technical characteristics defined in the product planning matrix become the "what's" that
are listed down the left side of the deployment matrix along with priorities (based on the product planning matrix
importance ratings) and target values. The deployment matrix is prepared in a manner very similar to the product
planning matrix. These product requirements or technical characteristics are translated into critical subsystem, assembly
or part characteristics. This translation considers criticality of the subsystem, assembly or parts as well as their
characteristics from a performance perspective to complement consideration of criticality from a quality and reliability
perspective. Relationships are established between product requirements or technical characteristics and the critical
subsystem, assembly or part characteristics. Importance ratings are calculated and target values for each critical
subsystem, assembly or part characteristic are established. An example of a part/assembly deployment matrix is shown:
PROCESS DESIGN
Quality Function Deployment continues this translation and planning into the process design phase. A concept selection
matrix can be used to evaluate different manufacturing process approaches and select the preferred approach. Based
on this, the process planning matrix shown below is prepared.
Again, the "how's" from the higher level matrix (in this case the critical subsystem, assembly or part characteristics)
become the "what's" which are used to plan the process for fabricating and assembling the product. Important processes
and tooling requirements can be identified to focus efforts to control, improve and upgrade processes and equipment. At
this stage, communication between Engineering and Manufacturing is emphasized and trade-offs can be made as
appropriate to achieve mutual goals based on the customer needs.
In addition to planning manufacturing processes, more detailed planning related to process control, quality control, set-
up, equipment maintenance and testing can be supported by additional matrices. The following provides an example of
a process/quality control matrix.
The process steps developed in the process planning matrix are used as the basis for planning and defining specific
process and quality control steps in this matrix.
The result of this planning and decision-making is that Manufacturing focuses on the critical processes, dimensions and
characteristics that will have a significant effect on producing a product that meets customers' needs. There is a clear
trail from customer needs to the design and manufacturing decisions to satisfy those customer needs. Disagreements
over what is important at each stage of the development process should be minimized, and there will be greater focus
on "the critical few" items that affect the success of the product.
QFD PROCESS
Quality Function Deployment begins with product planning; continues with product design and process design; and
finishes with process control, quality control, testing, equipment maintenance, and training. As a result, this process
requires multiple functional disciplines to adequately address this range of activities. QFD is synergistic with multi-
function product development teams. It can provide a structured process for these teams to begin communicating,
making decisions and planning the product. It is a useful methodology, along with product development teams, to
support a concurrent engineering or integrated product development approach.
Quality Function Deployment, by its very structure and planning approach, requires that more time be spent up-front in
the development process making sure that the team determines, understands and agrees with what needs to be done
before plunging into design activities. As a result, less time will be spent downstream because of differences of opinion
over design issues or redesign because the product was not on target. It leads to consensus decisions, greater
commitment to the development effort, better coordination, and reduced time over the course of the development effort.
QFD requires discipline. It is not necessarily easy to get started with. The following is a list of recommendations to
facilitate initially using QFD.
- Obtain management commitment to use QFD.
- Establish clear objectives and scope of QFD use. Avoid first using it on a large, complex project if possible. Will it be used for the overall product or applied to a subsystem, module, assembly or critical part? Will the complete QFD methodology be used or will only the product planning matrix be completed?
- Establish a multi-functional team. Get an adequate time commitment from team members.
- Obtain QFD training with practical hands-on exercises to learn the methodology and use a facilitator to guide the initial efforts.
- Schedule regular meetings to maintain focus and avoid the crush of the development schedule overshadowing effective planning and decision-making.
- Avoid insisting on perfect data. Many times significant customer insights and data exist within the organization, but they are in the form of hidden knowledge - not communicated to people with the need for this information. On the other hand, it may be necessary to spend additional time gathering the voice of the customer before beginning QFD. Avoid technical arrogance and the belief that company personnel know more than the customer.
Quality Function Deployment is an extremely useful methodology to facilitate communication, planning, and decision-
making within a product development team. It is not a paperwork exercise or additional documentation that must be
completed in order to proceed to the next development milestone. It not only brings the new product closer to the
intended target, but reduces development cycle time and cost in the process.
In 1984 Ford USA was introduced to the QFD process. One year later a project was set up with Ford Body and Assembly and its suppliers. In 1987 The Budd Company and Kelsey-Hayes, both Ford suppliers, developed the first case study on QFD outside Japan. In parallel with this, Bob King published his book (1987) entitled 'Better Designs in Half the Time: Implementing QFD in America'. Soon after, John Hauser and Don Clausing published their article "The House of
Quality" in the Harvard Business Review. This article was the catalyst that sparked the real interest in QFD - and
because QFD is not a proprietary process, it soon became used in ever widening circles with many followers across the
globe.
Quality Function Deployment (QFD) is a method that promotes structured product planning and development - enabling the product development team to clearly specify and evaluate the customer's needs and wants against how they could be measured (CTQs) and then achieved in the form of a solution.
The methodology takes the design team through the concept, creation and realisation phases of a new product with
absolute focus. QFD also helps define what the end user is really looking for in the way of market driven features and
benefits - it lists customer requirements, in the language of the Customer and helps you translate these requirements
into appropriate new product characteristics.
Summary
The use of QFD can help you identify design objectives that reflect the needs of real customers. Identifying design objectives from a customer's point of view ensures that customers' interests and values are reflected in the phases of the product innovation process. It can also promote an evolutionary approach to product innovation by carefully evaluating, from both market and customer perspectives, the performance of preceding products.
Reduced Cost...
QFD supports the identification of the product characteristics that customers rate as less important. Where the product performs better on these characteristics than competing products do, there are opportunities for cost reduction.
To thrive in business, designing products and services that excite the customer and creating new markets is a critical
strategy. And while growth can be achieved in many different ways--selling through different channels, selling more to
existing customers, acquisitions, geographic expansion--nothing energizes a company more than creating new products
or upgrading existing products to create customer delight.
Quality Function Deployment (QFD) is a methodology for building the "Voice of the Customer" into product and service
design. It is a team tool which captures customer requirements and translates those needs into characteristics about a
product or service.
The origins of QFD come from Japan. In 1966, the Japanese began to formalize the teachings of Yoji Akao on QFD.
Since its introduction to America, QFD has helped to transform the way businesses:
- plan new products
- design product requirements
- determine process characteristics
- control the manufacturing process
- document already existing product specifications
QFD uses some principles from Concurrent Engineering in that cross functional teams are involved in all phases of
product development. Each of the four phases in a QFD process uses a matrix to translate customer requirements from
initial planning stages through production control.
Each phase, or matrix, represents a more specific aspect of the product's requirements. Binary relationships between
elements are evaluated for each phase. Only the most important aspects from each phase are deployed into the next
matrix.
Phase 1-Led by the marketing department, Phase 1, or product planning, is also called The House of Quality. Many
organizations only get through this phase of a QFD process. Phase 1 documents customer requirements, warranty data,
competitive opportunities, product measurements, competing product measures, and the technical ability of the
organization to meet each customer requirement. Getting good data from the customer in Phase 1 is critical to the
success of the entire QFD process.
Phase 2- Phase 2 is led by the engineering department. Product design requires creativity and innovative team ideas.
Product concepts are created during this phase and part specifications are documented. Parts that are determined to be
most important to meeting customer needs are then deployed into process planning, or Phase 3.
Phase 3-Process planning comes next and is led by manufacturing engineering. During process planning,
manufacturing processes are flowcharted and process parameters (or target values) are documented.
Phase 4-Finally, in production planning, performance indicators are created to monitor the production process,
maintenance schedules, and skills training for operators. Also, in this phase decisions are made as to which process
poses the most risk and controls are put in place to prevent failures. The quality assurance department in concert with
manufacturing leads Phase 4.
QFD is a systematic means of ensuring that customer requirements are accurately translated into relevant technical
descriptors throughout each stage of product development. Meeting or exceeding customer demands means more than
just maintaining or improving product performance. It means building products that delight customers and fulfill their
unarticulated desires. Companies growing into the 21st century will be enterprises that foster the needed innovation to
create new markets.
Summary
QFD is originally from the manufacturing industry and was developed in 1966 in Japan. It is a quality oriented process
which attempts to prioritize customer needs and wants and translate these needs and wants into technical requirements
and specifications in order to deliver a product or service by focusing on customer satisfaction. The main task is to
transform the voice of the customer into a prioritized set of actionable targets. Although QFD is normally used for
manufacturing purposes, major software organizations are adopting QFD and applying it to the software development
environment. This is termed SQFD (Software QFD). Some benefits of QFD include reduced time to market, fewer design changes, decreased design and manufacturing cost, improved quality and, possibly most important, increased customer satisfaction.
The QFD process consists of four phases: Product Planning, Parts Deployment, Process Planning and Production Planning, called the Clausing Four-Phase Model. Three QFD matrices and a production planning table translate customer requirements into production equipment settings. The first phase is the House of Quality, and its HOWs (technical requirements) become the WHATs of the second phase, Parts Deployment. The HOWs of this stage (parts characteristics) become the WHATs of the third stage, Process Planning. Finally, the HOWs of this stage (key process operations) become the WHATs of the Production Planning stage (the last stage).
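The cascade can be pictured as a simple data flow in which each matrix hands its HOWs to the next matrix as WHATs. The Python sketch below is illustrative only; the phase names follow the text, while the example entries are hypothetical.

# Illustrative sketch of the HOW -> WHAT cascade across the four QFD phases.
# Phase names follow the text; the example entries are hypothetical.

cascade = {
    "Product Planning (House of Quality)": {
        "whats": ["easy to hold", "long battery life"],          # customer requirements
        "hows":  ["grip diameter (mm)", "standby current (mA)"], # technical requirements
    },
    "Parts Deployment": {
        "hows": ["handle moulding wall thickness", "battery cell capacity (mAh)"],
    },
    "Process Planning": {
        "hows": ["injection moulding cycle time", "cell matching tolerance"],
    },
    "Production Planning": {
        "hows": ["mould temperature control chart", "incoming cell inspection plan"],
    },
}

# The HOWs of each phase become the WHATs of the next phase.
phases = list(cascade)
for earlier, later in zip(phases, phases[1:]):
    cascade[later]["whats"] = cascade[earlier]["hows"]

for phase, matrix in cascade.items():
    print(f"{phase}\n  WHATs: {matrix['whats']}\n  HOWs:  {matrix['hows']}")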
The House of Quality is the tool used through the process to prioritize customer requirements into product features. The
House of Quality consists of six steps: Customer Requirements, Planning Matrix, Technical Requirements, Inter-
relationships, Roof and Targets. The HOQ takes in structured requirements as input and outputs design targets for
prioritized requirements.
When QFD is applied to Software Engineering, the Product Planning phase is expanded slightly. This model applies
QFD to focus on the needs of building software, specifically focusing on requirements engineering. The most important
aspect of SQFD is customer requirements. These requirements are mapped into technical requirements in Phase 1 and
prioritized based on customer input.
Correlation matrix
In any product there are bound to be interactions between different design requirements. These correlations are shown in the triangular matrix at the top of the QFD 1 chart (the "roof" on the "house of quality"). Relationships are identified as positive or negative, strong or weak. Negative relationships are particularly important, as they represent areas where trade-offs are needed. If these are not identified and resolved early in the process, there is a danger that they will lead to unfulfilled requirements. Some of these trade-offs may cross departmental or even company boundaries. This should not present problems in a proper team environment, but if designers are working in isolation, unresolved requirement conflicts can lead to repeated and unproductive iterations.
Improvement direction
In some cases the target value for a design requirement is the optimum measurement for that requirement. In other cases it is an acceptable value, but a higher or lower value would be even better. For example, you would not normally want to expend additional design effort to reduce the weight of an aircraft component if the initial design came out below the target weight.
This row on the chart is used to identify the improvement direction for the design requirement, especially if this is not immediately obvious.
Target value
Each design requirement will have a target value. These are entered into TeamSET using the same form as the requirement, and are displayed in this part of the chart.
Design requirements
Design requirements are descriptions, in measurable terms, of the features the product or service needs to exhibit in
order to satisfy the customer requirements.
Requirement groupings
Once the list of design requirements is complete, it must be rearranged and the requirements amalgamated to bring
them all to the same level of detail and make the matrix more manageable. TeamSET allows you to structure the
requirements with as many levels of headings and sub-headings as you need. Notes may be added to hold more
detailed requirements that have been grouped into a single design requirement.
Customer requirements
Who are the customers?
The first step in any QFD exercise is to identify the "customers" or "influencers". Any individual who will buy, use or work
with the product should be considered a customer. Thus the purchaser, the user (if different from the purchaser), the
retailer, the service engineer, and the manufacturing plant where the product is made are all customers for the design. If
you supply an international market place remember that customers from different geographic locations or from different
ethnic or cultural backgrounds may have differing requirements for your product.
Data collection
Relationship matrix
The relationship between each customer requirement and every design requirement is assessed and entered in the matrix. The symbols shown on the left are more intuitive and are preferred by many. If you would rather use the ones on the right, it is a simple matter to switch between them.
For each cell in the matrix, the question should be asked: "Does this design requirement have an influence on our ability
to satisfy the customer requirement?" If there is a relationship, its strength should be assessed in terms of "strong",
"medium", or "weak". It should be noted that the design requirement may have a negative influence on the customer
requirement. This is still a valid relationship, and its strength should be evaluated in the normal way.
Customer's competitor rating
The customer's opinion of the competitor products is collected at the same time as the customer requirements. Competitor products are assessed for their ability to satisfy each customer requirement. The data is represented on a scale of one to five, where five indicates that the requirement is totally satisfied. The data may be input numerically or graphically. Results are displayed graphically.
Weak relationship = 1
Medium relationship = 3
Strong relationship = 9
The numerical values of the relationships are multiplied by the importance weighting for the relevant customer
requirement, and the results added for the matrix column associated with each design requirement. The total is
displayed in the 'Absolute importance' cell.
The design requirements with the highest totals are those which are most closely involved in satisfying the customer's
wants and needs. These are the ones that we must get right. The design requirements with low scores are those which
are not very important to the customer. These are the ones where we can possibly afford to compromise. The ranking of
the design requirements is displayed in the Ranking row on the chart. It can also be displayed and printed graphically on
an interactive bar graph.
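A small Python sketch of this calculation may make the arithmetic clearer; the customer requirements, importance weightings and relationship strengths below are hypothetical.

# Sketch of the 'Absolute importance' calculation described above.
# Requirements, importance weightings and relationship strengths are hypothetical.

RELATIONSHIP_VALUES = {"strong": 9, "medium": 3, "weak": 1, None: 0}

customer_requirements = {
    # customer requirement -> importance weighting
    "easy to carry": 5,
    "does not break when dropped": 4,
}

# relationship matrix: customer requirement -> {design requirement: strength}
relationships = {
    "easy to carry": {"weight (g)": "strong", "case impact strength": None},
    "does not break when dropped": {"weight (g)": "weak", "case impact strength": "strong"},
}

design_requirements = ["weight (g)", "case impact strength"]

absolute_importance = {
    dr: sum(
        RELATIONSHIP_VALUES[relationships[cr].get(dr)] * weight
        for cr, weight in customer_requirements.items()
    )
    for dr in design_requirements
}

ranking = sorted(absolute_importance, key=absolute_importance.get, reverse=True)
print(absolute_importance)   # {'weight (g)': 49, 'case impact strength': 36}
print("ranking:", ranking)

The column totals correspond to the 'Absolute importance' row, and sorting them gives the ranking discussed above.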
Difficult or new
Design requirements that need new features or are difficult to satisfy should be noted here. This will highlight the fact
that development work may be required.
Carry forward
When QFD charts are cascaded, design requirements become visible on other charts. The "carry forward" row shows how many other charts each design requirement is used on.
EXAMPLE:
Affinity Analysis
A major supplier of digital devices needs to upgrade one of its products, a palm pilot. They found they were losing business because their competitors, to some extent, were already one generation ahead of their current product.
The requirements are based on the voice of the customer, along with their importance ratings for this handheld palm pilot.
Our group discussed and sorted the requirements and ended up with the following groups, with each requirement assigned to one of them: Hardware, Ease of Use, Speed, and Cost.
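The outcome of such an affinity analysis is simply a mapping from each group to the requirements placed in it. The sketch below is hypothetical; the requirement wordings are illustrative and not taken from the case study.

# Hypothetical sketch of affinity groups for the handheld example.
affinity_groups = {
    "Hardware":    ["colour screen", "longer battery life"],
    "Ease of Use": ["fits in a shirt pocket", "one-hand navigation"],
    "Speed":       ["instant power-on", "fast application switching"],
    "Cost":        ["purchase price below the competing model"],
}
for group, requirements in affinity_groups.items():
    print(group, "->", requirements)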
House of Quality
EXAMPLE Summary
Using QFD we were asked to group requirements as a customer would, using Affinity Analysis, and then take some of those requirements and build a House of Quality with them. The first exercise was very easy to complete. We did not write each requirement out on cards and perform the affinity analysis in the traditional way. Instead we came up with our own groupings and as a group assigned each requirement to the group we thought it belonged in. The list of requirements pertained to a hardware product, so we guessed on a majority of the requirements' groups.
The second exercise was more difficult to do because of the lack of time and understanding of how to fill out the House of Quality. There is a step-by-step process that you work through to fill in a House of Quality, and even the simple one that we were trying to do took twice as long as the time given for the exercise. Several aspects of the HoQ require customer input, so without it we once again guessed at what we felt were realistic values.
Additional
The Importance Ratings and Customer Competitive Assessment Rooms: Marketing and/or the market researcher
designs the market research so that the team can use the results as inputs to successfully complete the Importance
Ratings and Customer Competitive Assessment rooms. These rooms are located on the matrix where benefit rankings
and ratings are assembled for analysis. The Importance Rankings provide the team with a prioritization of customer
requirements while the Customer Competitive Assessment allows us to spot strengths and weaknesses in both our
product and the competition's products.
The "Hows" Room: The next step is the completion of the "Hows" room. In this activity the entire team asks for each
"What", "How would we measure product performance which would provide us an indication of customer satisfaction for
this specific 'What'?" The team needs to come up with at least one product performance measure, but sometimes the
team recognizes that it takes several measures to adequately characterize product performance.
The Relationships Matrix Room: After the "Hows" room has been completed, the team begins to explore
the relationships between all "Whats" and all "Hows" as they complete the Relationships Matrix room. During
this task the team systematically asks, "What is the relationship between this specific 'how' and this specific
'what'?" "Is there cause and effect between the two?" This is a consensus decision within the group. Based
on the group decision, the team assigns a strong, medium, weak or no relationship value to this specific
"what/how" pairing. Then the team goes on to the next "what/how" pairing. This process continues until all
"what/how" pairings have been reviewed. The technical community begins to assume team leadership in
these areas.
The Absolute Score and Relative Score Rooms: Once the Relationships Matrix room has been
completed, the team can then move on to the Absolute Score and Relative Score rooms. This is where the
team creates a model or hypothesis as to how product performance contributes to customer satisfaction.
Based on the Importance Ratings and the Relationship Matrix values, the team calculates the Absolute and
Relative Scores. These calculations are the team's best estimate as to which product performance
measures ("hows") exert the greatest impact on overall customer satisfaction. Engineering now begins to
know where the product has got to measure up strongly in order to beat the competition.
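The text does not fix a single normalization convention; one common way to derive a Relative Score is to express each Absolute Score as a share of the grand total, as in this small hypothetical sketch.

# Hedged sketch: Relative Scores as each Absolute Score's share of the grand total.
# The score values are hypothetical.
absolute_scores = {"weight (g)": 54, "case impact strength": 36, "battery life (h)": 18}
total = sum(absolute_scores.values())
relative_scores = {how: round(100 * score / total) for how, score in absolute_scores.items()}
print(relative_scores)  # {'weight (g)': 50, 'case impact strength': 33, 'battery life (h)': 17}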
The last three rooms receive the most input from the technical side of the team, but total team involvement
is still vital.
The Correlation Matrix Room: There are times in many products where customer requirements translate
into physical design elements which conflict with one another; these conflicts are usually reflected in the
product "hows". The Correlation Matrix room is used to help resolve these conflicts by highlighting those
"hows" which have are share the greatest conflict.
For example, let's say that the "how" called "weight" should be minimized for greatest customer satisfaction.
At the same time there might be two other "hows" titled "strength" and "power capacity". The customer has
expressed preferences that these be maximized. Based on what we know about physics, there may be a
conflict in minimizing "weight" and maximizing "strength" and "power capacity". The analysis that takes place
in the Correlation Matrix room systematically forces a technical review for all likely conflicts and then alerts
the team to either optimize or eliminate these conflicts or consider design alternatives.
The mechanics of the analysis are to review each and every "how" for possible conflict (or symbiosis) against every other "how". As mentioned above, symbiotic relationships between "hows" do sometimes surface in this analysis. The analysis also allows the team to capitalize on those symbiotic
situations.
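A pairwise review of this kind is easy to picture as a loop over every pair of "hows". The sketch below is illustrative; the "hows" and the correlation entries are hypothetical team judgements, not output of any calculation.

# Illustrative sketch of a correlation ("roof") review: every pair of "hows"
# is examined for conflict or symbiosis. Entries are hypothetical.
from itertools import combinations

hows = ["weight", "strength", "power capacity"]

# team judgement for each pair: "+" (symbiotic), "-" (conflicting) or None (no interaction)
correlations = {
    ("weight", "strength"): "-",
    ("weight", "power capacity"): "-",
    ("strength", "power capacity"): "+",
}

for pair in combinations(hows, 2):
    entry = correlations.get(pair)
    if entry == "-":
        print(f"trade-off needed between {pair[0]!r} and {pair[1]!r}")
    elif entry == "+":
        print(f"symbiosis: improving {pair[0]!r} also helps {pair[1]!r}")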
The Technical Competitive Assessment Room: This is the room where engineering applies the
measurements identified during the construction of the "Hows" room. "Does our product perform better than
the competitive product according to the specific measure that we have identified?" Here is where the team
tests the hypothesis created in the Relative Score room. It helps the team to confirm that it has created
"hows" that make sense, that really do accurately measure characteristics leading to customer satisfaction.
Analysis in the Technical Competitive Assessment and Customer Competitive Assessment rooms can also
help uncover problems in perception. For example, perhaps the customer wants a car that is fast, so your
team comes up with the "how" of "elapsed time in the quarter mile". After comparing performance between
your car and the competitor's vehicle, you realize that "you blew the doors off the competitor's old crate".
However when you look in the Customer Competitive Assessment room, you see that most of the
marketplace perceives the competitor's car as being faster. While you might have chosen one of the correct
"hows" to measure performance, it is clear that your single "how" does not completely reflect performance
needed to make your car appear faster.
The Target Values Room: The last room of Target Values contains the recommended specifications for the product.
These specifications will have been well thought out, reflecting customer needs, competitive offerings and any technical
trade-off required because of either design or manufacturing constraints.
The House of Quality matrix is often called the phase one matrix. In the QFD process there is also a phase two matrix to
translate finished product specifications into attributes of design (architecture, features, materials, geometry,
subassemblies and/or component parts) and their appropriate specifications. Sometimes a phase three matrix is used to translate attributes-of-design specifications into manufacturing process specifications (temperature, pressure, viscosity, rpm, etc.).
The huge success enjoyed by firms using QFD is balanced by the numerous firms failing to effectively
implement it. We have listed several success keys that should enhance the chances of successful
implementation:
1. Management must make it clear that QFD is a priority.
2. Set clear priorities for QFD activities. Specifically, management needs to allocate resources
for and insist on execution of market research and Technical Competitive Assessment.
3. Make QFD training available, preferably "just-in-time" to use QFD.
4. Insist that decisions be based upon customer requirements.
5. Understand the terms used in QFD.
6. Insist on cross-functional commitment and participation.
7. Become leaders of QFD rather than managers.
Process Planning is the third of four phases in the QFD process. In this phase, relationships between the prioritized
design attributes from the previous phase and process steps identified during this phase are documented. This is
accomplished by completing the Phase III Process Planning matrix.
The approach used to complete the Planning matrix is similar to the approach taken when completing the Design matrix
during the Design Planning phase.
In this phase, the objective is to identify key process steps which will be further analyzed in Phase IV, Production
Planning.
Matrix Relationships
The relationships between design attributes and process steps are shown in the body of the Planning matrix.
The key design attributes from the columns in the Design matrix become the rows of the Planning matrix.
Processes applicable to the design attributes are placed in the columns of the Planning matrix. Relationships
between column and row entries are shown as follows:
Each type of relationship is also assigned a numeric value (e.g., 9,3,1 for strong, moderate, and weak
relationships respectively) for further analysis that occurs while prioritizing the process steps.
EXAMPLE 1:
EXAMPLE 2:
FMEA
Introduction
Customers are placing increased demands on companies for high quality, reliable products. The
increasing capabilities and functionality of many products are making it more difficult for manufacturers
to maintain quality and reliability. Traditionally, reliability has been achieved through extensive testing and use of techniques such as probabilistic reliability modeling. These techniques are applied in the late stages of development. The challenge is to design in quality and reliability early in the
development cycle.
Failure Modes and Effects Analysis (FMEA) is a methodology for analyzing potential reliability problems
early in the development cycle where it is easier to take actions to overcome these issues, thereby
enhancing reliability through design. FMEA is used to identify potential failure modes, determine their
effect on the operation of the product, and identify actions to mitigate the failures. A crucial step is
anticipating what might go wrong with a product. While anticipating every failure mode is not possible,
the development team should formulate as extensive a list of potential failure modes as possible.
The early and consistent use of FMEAs in the design process allows the engineer to design out failures
and produce reliable, safe, and customer pleasing products. FMEAs also capture historical information
for use in future product improvement.
Types of FMEAs
There are several types of FMEAs, some of which are used much more often than others. FMEAs should always be done whenever failures would mean potential harm or injury to the user of the end item being designed. The types of FMEA are:
FMEA Usage
Historically, engineers have done a good job of evaluating the functions and the form of products and
processes in the design phase. They have not always done so well at designing in reliability and
quality. Often the engineer uses safety factors as a way of making sure that the design will work and protect the user against product or process failure. As described in a recent article:
"A large safety factor does not necessarily translate into a reliable product. Instead, it often leads to an
overdesigned product with reliability problems."
Failure Analysis Beats Murphy's Law
Mechanical Engineering , September 1993
FMEA's provide the engineer with a tool that can assist in providing reliable, safe, and customer
pleasing products and processes. Since FMEAs help the engineer identify potential product or process failures, they can be used to:
- Develop product or process requirements that minimize the likelihood of those failures.
- Evaluate the requirements obtained from the customer or other participants in the design process to ensure that those requirements do not introduce potential failures.
- Identify design characteristics that contribute to failures and design them out of the system or at least minimize the resulting effects.
- Develop methods and procedures to develop and test the product/process to ensure that the failures have been successfully eliminated.
- Track and manage potential risks in the design. Tracking the risks contributes to the development of corporate memory and the success of future products as well.
- Ensure that any failures that could occur will not injure or seriously impact the customer of the product/process.
Benefits of FMEA
FMEA is designed to assist the engineer in improving the quality and reliability of a design. Properly used, the FMEA provides the engineer with several benefits. Among others, these benefits include:
- Improved product/process reliability and quality
- Increased customer satisfaction
- Early identification and elimination of potential product/process failure modes
- Prioritization of product/process deficiencies
- Capture of engineering/organization knowledge
- Emphasis on problem prevention
- Documentation of risk and actions taken to reduce risk
- Focus for improved testing and development
- Minimization of late changes and associated cost
- A catalyst for teamwork and idea exchange between functions
FMEA Timing
The FMEA is a living document. Throughout the product development cycle, changes and updates are made to the product and process. These changes can and often do introduce new failure modes. It is therefore important to review and/or update the FMEA when:
- A new product or process is being initiated (at the beginning of the cycle).
- Changes are made to the operating conditions the product or process is expected to function in.
- A change is made to either the product or process design. The product and process are inter-related; when the product design is changed the process is impacted, and vice versa.
- New regulations are instituted.
- Customer feedback indicates problems in the product or process.
FMEA Procedure
The process for conducting an FMEA is straightforward. The basic steps are outlined below.
1. Describe the product/process and its function. It is important to have a clearly articulated understanding of the product or process under consideration. This understanding simplifies the
process of analysis by helping the engineer identify those product/process uses that fall within
the intended function and which ones fall outside. It is important to consider both intentional
and unintentional uses since product failure often ends in litigation, which can be costly and
time consuming.
2. Create a Block Diagram of the product or process. A block diagram of the product/process
should be developed. This diagram shows major components or process steps as blocks
connected together by lines that indicate how the components or steps are related. The
diagram shows the logical relationships of components and establishes a structure around
which the FMEA can be developed. Establish a Coding System to identify system elements.
The block diagram should always be included with the FMEA form.
3. Complete the header on the FMEA Form worksheet: Product/System, Subsys./Assy.,
Component, Design Lead, Prepared By, Date, Revision (letter or number), and Revision Date.
Modify these headings as needed.
4. Use the diagram prepared above to begin listing items or functions. If items are components,
list them in a logical manner under their subsystem/assembly based on the block diagram.
5. Identify Failure Modes. A failure mode is defined as the manner in which a component,
subsystem, system, process, etc. could potentially fail to meet the design intent. Examples of
potential failure modes include:
Corrosion
Hydrogen embrittlement
Electrical Short or Open
Torque Fatigue
Deformation
Cracking
6. A failure mode in one component can serve as the cause of a failure mode in another
component. Each failure should be listed in technical terms. Failure modes should be listed for each function of each component or process step. At this point the failure mode should be identified
whether or not the failure is likely to occur. Looking at similar products or processes and the
failures that have been documented for them is an excellent starting point.
7. Describe the effects of those failure modes. For each failure mode identified the engineer
should determine what the ultimate effect will be. A failure effect is defined as the result of a
failure mode on the function of the product/process as perceived by the customer. They should
be described in terms of what the customer might see or experience should the identified
failure mode occur. Keep in mind the internal as well as the external customer. Examples of
failure effects include:
Injury to the user
Inoperability of the product or process
Improper appearance of the product or process
Odors
Degraded performance
Noise
Establish a numerical ranking for the severity of the effect. A common industry standard scale
uses 1 to represent no effect and 10 to indicate very severe with failure affecting system
operation and safety without warning. The intent of the ranking is to help the analyst determine
whether a failure would be a minor nuisance or a catastrophic occurrence to the customer.
This enables the engineer to prioritize the failures and address the real big issues first.
8. Identify the causes for each failure mode. A failure cause is defined as a design weakness that
may result in a failure. The potential causes for each failure mode should be identified and
documented. The causes should be listed in technical terms and not in terms of symptoms.
Examples of potential causes include:
Improper torque applied
Improper operating conditions
Contamination
Erroneous algorithms
Improper alignment
Excessive loading
Excessive voltage
9. Enter the Probability factor. A numerical weight should be assigned to each cause that
indicates how likely that cause is (probability of the cause occurring). A common industry
standard scale uses 1 to represent not likely and 10 to indicate inevitable.
10. Identify Current Controls (design or process). Current Controls (design or process) are the
mechanisms that prevent the cause of the failure mode from occurring or which detect the
failure before it reaches the Customer. The engineer should now identify testing, analysis,
monitoring, and other techniques that can or have been used on the same or similar
products/processes to detect failures. Each of these controls should be assessed to determine
how well it is expected to identify or detect failure modes. After a new product or process has
been in use previously undetected or unidentified failure modes may appear. The FMEA
should then be updated and plans made to address those failures to eliminate them from the
product/process.
11. Determine the likelihood of Detection. Detection is an assessment of the likelihood that the
Current Controls (design and process) will detect the Cause of the Failure Mode or the Failure
Mode itself, thus preventing it from reaching the Customer. Based on the Current Controls,
consider the likelihood of Detection using the following table for guidance.
12. Review Risk Priority Numbers (RPN). The Risk Priority Number is a mathematical product of
the numerical Severity, Probability, and Detection ratings:
RPN = (Severity) x (Probability) x (Detection)
The RPN is used to prioritize items that require additional quality planning or action (a small calculation sketch follows this list).
13. Determine Recommended Action(s) to address potential failures that have a high RPN. These
actions could include specific inspection, testing or quality procedures; selection of different
components or materials; de-rating; limiting environmental stresses or operating range;
redesign of the item to avoid the failure mode; monitoring mechanisms; performing
preventative maintenance; and inclusion of back-up systems or redundancy.
14. Assign Responsibility and a Target Completion Date for these actions. This makes
responsibility clear-cut and facilitates tracking.
15. Indicate Actions Taken. After these actions have been taken, re-assess the severity,
probability and detection and review the revised RPN's. Are any further actions required?
16. Update the FMEA as the design or process changes, the assessment changes or new
information becomes known.
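The sketch below, referenced from step 12, illustrates the RPN arithmetic and the resulting prioritization. The failure modes and ratings are hypothetical and use the 1-10 scales described above.

# Minimal sketch of the RPN calculation and prioritization from steps 7-12.
# Failure modes and ratings are hypothetical; each rating is on a 1-10 scale.

failure_modes = [
    # (failure mode, severity, probability/occurrence, detection)
    ("connector corrosion",     7, 4, 6),
    ("fastener torque fatigue", 8, 3, 3),
    ("seal deformation",        5, 6, 7),
]

def rpn(severity, probability, detection):
    """RPN = Severity x Probability x Detection."""
    return severity * probability * detection

prioritized = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for mode, s, p, d in prioritized:
    print(f"{mode:<28} RPN = {rpn(s, p, d)}")
# The highest RPNs (here 'seal deformation' at 210) receive recommended actions first.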
Severity: None = No effect = ranking 1 (the lowest point on the severity scale)
Probability (likelihood of occurrence):
1 in 3 = 9
1 in 20 = 7
1 in 400 = 5
1 in 2,000 = 4
1 in 150,000 = 2
Detectability
Basic Concepts of FMEA and FMECA
Failure Mode and Effects Analysis (FMEA) and Failure Modes, Effects and Criticality Analysis (FMECA) are methodologies designed to identify potential failure
modes for a product or process, to assess the risk associated with those failure modes, to rank the issues in terms of importance and to identify and carry out
corrective actions to address the most serious concerns.
Although the purpose, terminology and other details can vary according to type (e.g. Process FMEA, Design FMEA, etc.), the basic methodology is similar for
all. This article presents a brief general overview of FMEA / FMECA analysis techniques and requirements.
A typical analysis records, for the product or process under consideration:
- Item(s)
- Function(s)
- Failure(s)
- Effect(s) of Failure
- Cause(s) of Failure
- Current Control(s)
- Recommended Action(s)
- Plus other relevant details
Most analyses of this type also include some method to assess the risk associated with the issues identified during the analysis and to prioritize corrective actions. Two common methods are Risk Priority Numbers (RPNs) and Criticality Analysis. The later steps of the basic analysis procedure are to:
- Evaluate the risk associated with the issues identified by the analysis.
- Prioritize and assign corrective actions.
- Perform corrective actions and re-evaluate risk.
- Distribute, review and update the analysis, as appropriate.
The RPN can then be used to compare issues within the analysis and to prioritize problems for corrective action.
Criticality Analysis
The MIL-STD-1629A document describes two types of criticality analysis: quantitative and qualitative. To use the quantitative criticality analysis method, the
analysis team must:
To use the qualitative criticality analysis method to evaluate risk and prioritize corrective actions, the analysis team must: