
LECTURE NOTE

ON
STATISTICAL QUALITY CONTROL (STS 371)

K.S. ADEKEYE, Ph.D

1 Statistical Quality Control


Statistical methods are analytical tools used to evaluate men, materials, machines, or processes.
Evaluations obtained by these methods assist in maintaining desired results by using past history to predict
capabilities or trends. Such analytical methods are management tools which furnish data to all levels of
supervision for appropriate action.

Some advantages of statistical techniques in interpreting engineering data and controlling manufactured
products are:
More uniform quality at a higher level.
Less waste by reduction of rework and scrap
Improved inspection by better planning and better execution.
Higher production of good parts per man per machine hour.
Improved design tolerance.
Better plant relations through coordinated effort.

Control through statistical methods differs from the procedure of manufacturing a product according to
schedule and then sorting the products into acceptable and non-acceptable lots. Eventually these control
methods help to decide:
When the process is operating at a satisfactory level.
When the process level is not satisfactory and corrective action is required to prevent the
manufacture of unacceptable products.
In order to ensure high levels of quality, elementary statistical techniques have been developed to
control or monitor the quality of a product. These techniques and the actions of implementing them
are referred to as Statistical Quality Control (S.Q.C.). Statistical quality control has traditionally been
divided into two categories, namely: acceptance sampling and statistical process control.

Acceptance sampling is an attempt to judge the quality of lots from samples drawn from those lots, while
process control is the use of techniques to monitor the process as the product is being made, to ensure
that defectives are not made in the first place.

2 QUALITY
The word quality is often used to signify excellence of a product or service: we hear talk about
'Rolls-Royce quality' and 'top quality'. In some manufacturing companies, quality may be used to
indicate that a product conforms to certain physical characteristics set down with a particularly tight
specification. But if we are to manage quality, it must be defined in a way which recognizes the true
requirements of the customer.

The ability to meet the customers' requirements is vital, not only between two separate organizations, but
within the same organization. There exists in every factory, every department, every office, a series of
suppliers and customers. The typist is a supplier to the boss: is the typist meeting the requirements?
Does the boss receive error-free typing set out as he wants it, when he wants it? If so, then we have a
quality typing service. Does the factory receive from its supplier defect-free parts which conform to the
requirements of the assembly process? If so, then we have a quality supplier.
For industrial and commercial organizations, which are viable only if they provide satisfaction to the
consumer, competitiveness in quality is not only central to profitability, but crucial to business survival.
The consumer should not be required to make a choice between price and quality, and for manufacturing
or service organizations to continue to exist, they must learn how to manage quality. In today's tough and
challenging business environment, the development and implementation of a comprehensive quality
policy is not merely desirable, it is essential.
Every day, people in certain factories scrutinize together the results of the examination of the previous
day's production, and commence the ritual battle over whether the material is suitable for despatch to the
customer. One may be called the Production Manager and another the Quality Control Manager. They
may argue and debate the evidence before them, the rights and wrongs of the specification, and each tries
to convince the other of the validity of their argument. Sometimes they nearly come to blows.
This ritual is associated with trying to answer the question: 'Have we done the job correctly?',
'correctly' being a flexible word depending on the interpretation given to the specification on that
particular day. This is not quality control; it is post-production detection, wasteful detection of bad
product before it hits the customer. There is a belief in some quarters that to achieve quality we must
check, test, inspect or measure, the ritual pouring-on of quality at the end of the process, and that
quality, therefore, is expensive. This is nonsense, but it is frequently encountered. In the office we find
staff checking other people's work before it goes out, validating computer input data, checking invoices,
typing, etc. There is also quite a lot of looking for things, chasing things that are late, apologizing to
customers for non-delivery, and so on: waste, waste and more waste.

Quality is now beyond conformance to specification as a measure of excellence: it is now the
evaluation of variability about a target value, and the continuous striving to decrease that variability
so as to make a more uniform product.

Quality is defined simply as meeting the requirements of the customer, and this has been expressed in
many ways by other authors:
Fitness for purpose or use (Juran).
The totality of features and characteristics of a product or service that bear on its ability to satisfy
stated or implied needs (BS 4778: Part 1: 1987 (ISO 8402: 1986)).

The total composite product and service characteristics of marketing, engineering, manufacture,
and maintenance through which the product and service in use will meet the expectations of the
customer (Feigenbaum).

In 1924, W. A. Shewhart introduced the concept of using statistical methods to study the quality of
manufactured products. Deming places responsibility for quality improvement squarely on the shoulders
of top management; in fact, the Deming philosophy has been embodied in a set of principles known
collectively as Deming's 14 points for management (Deming, 1982). Philip Crosby based his quality
philosophy on 'zero defects' as a performance standard. Kaoru Ishikawa (Japanese) named his program
'Total Quality Control', a term first coined by another American quality guru, Armand Feigenbaum.
Ishikawa is the inventor of the cause-and-effect diagram, also called the Ishikawa diagram.

To reduce the total costs of quality, control must be at the point of manufacture or operation; quality
cannot be inspected into an item or service after it has been produced. It is essential for cost-effective
control to ensure that articles are manufactured, documents are typed, or services are generated
correctly the first time. The aim of process control is the prevention of the manufacture of defective
products and of the generation of errors and waste in non-manufacturing areas.
To get away from the natural tendency to rush into the detection mode, it is necessary to ask different
questions in the first place. We should not ask whether the job has been done correctly; we should ask
first: 'Can we do the job correctly?' This has wide implications, and these notes aim to provide some of the
tools which must be used to ensure that the answer is 'Yes'. However, we should realize straight away
that such an answer will only be obtained using satisfactory methods, materials, equipment, skills and
instruction, and a satisfactory or 'capable' process.

2.1 The Quality Control Function


The quality control function exists primarily as a service organization, interpreting specifications
established by product engineering and assisting the manufacturing department in production to meet
these specifications. As such, the function is responsible for collecting and analyzing vast quantities of
data and then presenting the findings to various other departments for appropriate corrective action.

To properly understand such a function requires an awareness of the quality concept. Product quality is a
somewhat intangible characteristic in many aspects. Essentially, quality is established by the customers,
and the product designed and manufactured for sale is intended to meet these customer requirements.
Inferior quality, as indicated by appearance or performance, is ultimately reflected in a declining sales
picture and, if not corrected, the particular business may be forced to terminate its activity. These
customer quality requirements are interpreted by the product engineer who establishes the specifications
and sets tolerances.
Process engineering is responsible for specifying the operations as well as designing and/or procuring
equipment which will meet the product specifications. Manufacturing utilizes this equipment to produce,
and the quality control function ensures that the manufactured product conforms to the specifications.

It is necessary to re-emphasize the need for conformance to specification and why the product, including
components, must be produced within the allowable tolerances. Product uniformity is attained by
adhering to specifications. Uniformity permits the interchangeability of parts, which is a basic principle
in modern manufacturing and without which mass production would be impossible.

Uniformity does not imply absolute identity. It is a well known fact that no two objects are ever identical
in all respects. In view of this, the product designer must decide what variation can be tolerated in each
characteristic and then set up specifications indicating this. The total permissible variation depends on the
following factors:
(i) The variation permissible without destroying the product's usefulness or the customer's
acceptance.
(ii) The variation generated by the unavoidable deviations in the performance of the
machines which will do the work and the persons who will operate the machines.
(iii) The variation generated by the characteristics of the material which enters the plant, either
for processing or to be used in processing.

Thus quality is a relative measure of product goodness, and quality standards may fluctuate depending on
customers' requirements and on product availability. If the product is better than the standards,
manufacturing costs may be prohibitive. If the product is below the standards, then performance may be
impaired, and customer acceptance may decline.

In order to avoid any misconceptions concerning total quality control as advocated by Feigenbaum (1991),
the following statements regarding organization are given:
Total-quality-control programs thus require, as an initial step, top management's re-emphasis of the
respective quality responsibilities and accountabilities of all company employees in new-design control,
in incoming-material control, in product control, and in special process studies.

The second principle of total-quality-control organization is a corollary to the first one: because quality is
everybody's job in a business, it may become nobody's job. Thus the second step required in total-
quality programs becomes clear: top management must recognize that the many individual
responsibilities for quality will be exercised most effectively when they are buttressed and serviced by a
well-organized, genuinely modern management function whose only area of operation is in the quality-
control jobs, and whose responsibility is to be sure that the products shipped are right, and at the
right quality cost.

The need for top management support is evident and is emphasized by Feigenbaum (1991). This is
obviously nothing new, since management support is required for any activity to be successful, but it is
too frequently used as an excuse for inaction. Management is never quoted as opposing quality efforts
(in fact, quite the contrary), but if items such as production, costs, and quality are ranked by importance,
they often occur precisely in that order.

2.2 Variation in Quality


Variation has been defined as 'the difference between things which might otherwise be thought to be
alike because they were produced under conditions which were as nearly alike as it is possible to make
them'. The underlying cause for the differences in product reliability and quality is variation. Thus
variation is the reason for using statistical methods.

2.2.1 Principles Underlying Variation


The principles underlying variation can be categorized into the following:
(i) No two things are exactly alike: Experience has proven that things are never identical. Each
individual, for example, has characteristics which differ from all others. Even such things as
two peas in a pod have slight differences if examined closely enough. Likewise, no two parts
are precisely identical. Blueprint tolerances result from engineers recognizing this fact.
Although it is impossible to make interchangeable parts precisely identical, it is desirable to keep
variations as small as possible.

(ii) Variation in a product or process can be measured: Two pins, two castings, or two hinges
when manufactured may look just alike, but are they? With a suitable comparator or micrometer
whose measurement unit is small enough, each part can be shown to be different from the next. This
difference or variation is important when it has some effect on the functioning of the parts being
produced.
(iii) Individual outcomes are not predictable: Can it be reliably predicted whether the next flip of a
coin will turn up heads or tails? With what assurance can one predict, to the tenth of a millimeter,
the outside diameter of the next part coming from a machine? Even the most learned fingerprint
expert cannot describe the identifying characteristics of the index finger of the next person he
meets. Exactly predicting such things would be contrary to common sense.
(iv) Groups of things form patterns with definite characteristics: If identical parts from a process
are carefully measured for a given dimension and are counted and arranged in order of size, a
definite pattern will be revealed. This general characterizing pattern will also repeat with another
group from the same productive process.
(v) Sources of variation: Variation is attributed to two different sources, one called chance and the
other called assignable causes of variation. Chance (random) variation results from inherent
changes in the process, such as variation in raw material, changing atmospheric conditions, room
vibration, and backlash in equipment. Variations due to chance, being beyond control, give rise to
a characteristic bell-shaped pattern. Assignable variation consists of correctable errors; these
may be such details as basic changes in materials, incorrect processing temperatures or tool
speeds, operator errors, or damage to equipment. Variations due to assignable sources tend to
distort this pattern. There are infinite sources of variation, both chance and assignable, in a
manufacturing process. These can all be classified into the following: Man, Material, Method,
Machine, Measurement and Environment.

If the process is functioning in a stable manner, these sources contribute only to chance variation and are
indistinguishable from one another. If they can be identified and their magnitude reduced or eliminated,
the amount of product variation will be reduced. Since all variation cannot be eliminated, an attempt is
made to reduce the amount contributed by each source: the operator doing the job, the machine or
process performing the work, the substance from which the piece is made, and the devices gauging the
work. For example, the substitution of an automatic machine for one that is manually operated tends to
reduce the first source. However, variation resulting from operating disturbances, dirt, deflection, wear,
work-piece variation, and work-piece mutilation is still involved in the last three sources. Phrasing it
differently, even though the process appears to be invariable, some variation still exists. Variation from
piece to piece is accepted as inevitable, but control of variation is based on studying and evaluating
process variations, determining whether chance or assignable sources are present, and predicting the
future.
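To make the distinction between chance and assignable variation concrete, the following sketch (in Python, purely illustrative: the target of 50.0, the standard deviation of 0.5, and the 1.5-unit shift are invented values) simulates a stable process subject only to chance variation, and the same process after an assignable cause shifts its mean:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Chance (random) variation only: a stable process around a target of 50.0
stable = [random.gauss(50.0, 0.5) for _ in range(200)]

# An assignable cause (e.g. a tool-setting error) shifts the process mean
shifted = [random.gauss(51.5, 0.5) for _ in range(200)]

print(round(statistics.mean(stable), 1))   # close to the 50.0 target
print(round(statistics.mean(shifted), 1))  # close to the shifted 51.5 mean
```

Both samples show the same bell-shaped chance variation; only the assignable cause moves the center of the pattern.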

3. STATISTICAL PROCESS CONTROL

What is a process?
A process is the transformation of a set of inputs, which can include materials, actions, methods and
operations, into desired outputs, in the form of products, information, services or generally results. In
each area or function of an organization there will be many processes taking place. Each process may be
analysed by an examination of the inputs and outputs. This will determine the action necessary to improve
quality.
The output from a process is that which is transferred to somewhere or to someone: the customers.
Clearly, to produce an output which meets the requirements of the customers, it is necessary to define,
monitor and control the inputs to the process, which in turn may have been supplied as output from an
earlier process. At every supplier-customer interface there resides a transformation process and every
single task throughout an organization must be viewed as a process in this way.
To begin to monitor and analyse any process, it is necessary first of all to identify what the process is, and
what the inputs and outputs are. Many processes are easily understood and relate to known procedures,
e.g. drilling a hole, compressing tablets, filling cans with paint, polymerizing a chemical. Others are less
easily identified, e.g. servicing a customer, delivering a lecture, storing a product, inputting to a computer.
In some situations it can be difficult to define the process. For example, if the process is making a sales
call, it is vital to know if the scope of the process includes obtaining access to the potential customer or
client. Defining the scope of a process is vital, since it will determine both the required inputs and the
resultant outputs. A simple static model of a process is shown in Figure 1.1. This describes the
boundaries of the process.

What is control?
All processes can be monitored and brought under control by gathering and using data. This refers to
measurements of the performance of the process and the feedback required for corrective action, where
necessary. Once we have established that our process is in control and capable of meeting the
requirements, we can address the next question, 'Are we doing the job correctly?', which brings a
requirement to monitor the process and the controls on it. Managers are in control only when they have
created a system and climate in which their subordinates can exercise control over their own processes
in other words, the operator of the process has been given the tools to control it.
If we now re-examine the first question, 'Have we done it correctly?', we can see that, if we have been
able to answer both of the questions 'Can we do it correctly?' (capability) and 'Are we doing it
correctly?' (control) with a yes, we must have done the job correctly; any other outcome would be
illogical. By asking the questions in the right order, we have removed the need to ask the inspection
question and replaced a strategy of detection with one of prevention. This concentrates attention on the
front end of any process, the inputs, and changes the emphasis to making sure the inputs are capable of
meeting the requirements of the process. This is a managerial responsibility, and these ideas apply to every
transformation process, which must be subjected to the same scrutiny of the methods, the people, the
skills, the equipment and so on, to make sure they are correct for the job.
The control of quality clearly can take place only at the point of transformation of the inputs into the
outputs, the point of operation or production, where the letter is typed or the artefact made. The act of
inspection is not quality control. When the answer to 'Have we done it correctly?' is given indirectly by
answering the questions on capability and control, then we have assured quality, and the activity of
checking becomes one of quality assurance: making sure that the product or service represents the
output from an effective system which ensures capability and control.

Statistical Process Control (SPC) uses simple but powerful graphical tools to monitor the process to
improve quality by decreasing product variability. This narrowing of the limits of variability does not stop
when the product meets specifications, but continues thereafter for continuous improvement to make
more uniform product. The objective in the use of SPC fundamentals is process improvement, to find and
solve problems and to reduce variation.
When the management of a company is about to start the use of SPC, it is best for them to start small,
broadening their base as improvement is attained. A company should pick a quality problem related to
their customers' biggest complaint, for that should be the company's greatest concern. If there are no
major complaints, a problem related to one of the largest amounts of scrap and/or re-work should be
chosen. Simple but powerful graphical tools should then be used to solve problems and to improve the
process.

There are many SPC fundamentals. These are brainstorming, flow-charting, cause-and-effect diagrams,
Pareto analysis, histograms, scatter diagrams, run charts and control charts.

3.1 Run Chart Analysis


Run charts are plots of data arranged in time sequence. Analysis of run charts is performed to determine
if the patterns can be attributed to common causes of variation, or if special causes of variation were
present. A run is a time-ordered sequence of points of the same class.

Run charts should be used for preliminary analysis of any data measured on a continuous scale that can be
organized in time sequence. Run chart candidates include such things as fuel consumption, production
throughput, weight, size, etc. Run charts answer the question: 'Was this process in statistical control for the
time period observed?' If the answer is NO, then the process was influenced by one or more special
causes of variation. If the answer is YES, then the long-term performance of the process can be estimated.

3.1.1 Run Chart Procedure


(i) Plot a line chart of the data in time sequence.
(ii) Find the median of the data.
(iii) Draw a horizontal line across the chart at that point and label the line as Median.

It should be noted that run charts are evaluated by examining the runs on the chart. There are several
different statistical tests that can be applied to analyze runs: run length, number of runs, and trend
of runs.

(a) Run length


A run about the median is a series of consecutive points on the same side of the median. Unless the process
is being influenced by special causes, it is unlikely that a long series of consecutive points will all fall on
the same side of the median. Thus, checking run length is one way of checking for special causes of
variation.
The length of a run is found by simply counting the number of consecutive points on the same side of
the median. However, it may be that some values are exactly equal to the median. If only one value falls
exactly on the median line, ignore it. If more than one value is on the line, assign them to one side or the
other in a way that results in 50% being on one side and 50% on the other. On the run chart, mark the
points that will be counted above the median with an 'a' and those that will be counted below the median
with a 'b'.

After finding the longest run, compare the length of the longest run to a standard value. If the longest run
is longer than the maximum allowed, then the process was probably (approximately a 95% probability)
influenced by a special cause of variation.
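The run-length count described above can be sketched as follows (a minimal Python illustration; it applies the single-value rule of ignoring a point that falls exactly on the median, and does not implement the 50/50 assignment for multiple ties):

```python
import statistics

def longest_run_about_median(data):
    """Length of the longest run of consecutive points on one side of the median.

    A point falling exactly on the median is ignored (the single-value rule).
    """
    med = statistics.median(data)
    longest = current = 0
    prev_side = None
    for x in data:
        if x == med:              # ignore a value exactly on the median line
            continue
        side = x > med            # True = above ('a'), False = below ('b')
        current = current + 1 if side == prev_side else 1
        prev_side = side
        longest = max(longest, current)
    return longest

# Hypothetical measurements; the median is 5.1, so the runs are aaa / bbb / a / b
print(longest_run_about_median([5.1, 6, 7, 8, 3, 2, 1, 9, 4]))  # → 3
```

The returned length would then be compared to the tabulated maximum for the sample size.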

(b) Number of runs


The number of runs is found by simple counting. The number of runs expected from a controlled process
can also be mathematically determined. A process that is not being influenced by special causes will not
have either too many runs or too few runs. If there are fewer runs than the smallest allowed or more runs
than the largest allowed then there is a high probability that a special cause is present.
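One common way to formalize 'too many or too few runs' is the Wald-Wolfowitz runs test, sketched below under a normal approximation (the choice of this test and the z = 1.96 cutoff are this sketch's assumptions, not prescriptions from the text):

```python
import math

def count_runs(sides):
    """Count runs in a sequence of 'a'/'b' labels (above/below the median)."""
    runs = 1 if sides else 0
    for prev, cur in zip(sides, sides[1:]):
        if cur != prev:
            runs += 1
    return runs

def runs_bounds(n_a, n_b, z=1.96):
    """Approximate 95% bounds on the number of runs for an in-control process."""
    n = n_a + n_b
    mu = 2.0 * n_a * n_b / n + 1.0
    var = (2.0 * n_a * n_b * (2.0 * n_a * n_b - n)) / (n * n * (n - 1))
    sd = math.sqrt(var)
    return mu - z * sd, mu + z * sd

sides = list("aaabbbaabbab")          # hypothetical above/below labels
r = count_runs(sides)                 # 6 runs
lo, hi = runs_bounds(sides.count('a'), sides.count('b'))
print(r, lo <= r <= hi)               # → 6 True
```

A count outside the bounds would suggest a special cause (too few runs: a shift or clustering; too many: systematic alternation).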

(c) Trends
A trend is a consecutive increase or decrease in the plotted points. It should be noted that when counting
increases or decreases, 'no change' values should be ignored. The run chart should not have any
unusually long series of consecutive increases or decreases. If it does, then a trend is indicated, and it is
probably due to a special cause of variation. Compare the longest count of consecutive increases or
decreases to the longest allowed. If the observed maximum trend exceeds the table value, then there is a
strong probability that a special cause of variation caused the process to drift.
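Counting the longest trend while ignoring 'no change' values can be sketched as (a minimal illustration; the example data are invented):

```python
def longest_trend(data):
    """Longest count of consecutive increases or decreases, ignoring 'no change'."""
    # Collapse repeated values first, so ties neither break nor extend a trend
    filtered = [data[0]] if data else []
    for x in data[1:]:
        if x != filtered[-1]:
            filtered.append(x)
    longest = current = 0
    direction = 0
    for prev, cur in zip(filtered, filtered[1:]):
        d = 1 if cur > prev else -1
        current = current + 1 if d == direction else 1
        direction = d
        longest = max(longest, current)
    return longest

# Hypothetical data: 3 -> 4 -> (4 ignored) -> 5 -> 6 is three consecutive increases
print(longest_trend([3, 4, 4, 5, 6, 2, 1]))  # → 3
```

The result would be compared against the tabulated maximum for the number of points plotted.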

It should be noted that run charts are variables charts. Run charts have two advantages over all other
variables charts: (i) they are easier to construct, and (ii) they are non-parametric. A non-
parametric statistical method is one that doesn't require any assumptions about the distribution of the
population from which the sample is drawn. Most of the other SPC variables charts require that the
population be normally distributed, at least approximately.

3.2 Control Chart Concepts


The control chart is a tool used primarily for analyzing data, either discrete or continuous, which are
generated over a time period. The concept was developed by Walter A. Shewhart, of Bell Telephone
Laboratories, in 1924. At the time he suggested the chart could fulfill three basic functions:

To define a goal for an operation.
To aid in the attainment of that goal.
To judge whether the goal had been reached.

The term control chart has been universally accepted and used. However, it must be understood at the
outset that the chart does not actually control anything. It simply provides a basis for action and is
effective only if those responsible for making decisions act upon the information which the chart reveals.
Adequate control requires a means for continuously monitoring repetitive operations.
Control charts in general can be classified as variable or attribute charts. The chart for variables is used
when continuous data are collected, such as a dimension, a weight, a resistance, a sensitivity, a gain, a
temperature or a hardness; for this control, the average and range charts are used. The chart for attributes
applies when either discrete data are obtained or when it is desired to classify continuous
measurements as acceptable or unacceptable.

3.2.1 Why should teams use Control Charts?


A stable process is one that is consistent over time with respect to the center and the spread of the data.
Control Charts help you monitor the behavior of your process to determine whether it is stable. Like Run
Charts, they display data in the time sequence in which they occurred. However, Control Charts are
more efficient than Run Charts in assessing and achieving process stability. Your team will benefit from
using a Control Chart when you want to:
Monitor process variation over time.
Differentiate between special cause and common cause of variation.
Assess the effectiveness of changes to improve a process.
Communicate how a process performed during a specific period.

Knowledge of the behaviour of random variation is the foundation on which control charts rest. If a
group of data conforms to a statistical pattern that might reasonably be produced by chance causes, then it
is assumed that no assignable causes are present; the conditions which produce this variation are
accordingly said to be under control. If the data do not conform to a pattern that might reasonably be
produced by chance causes, then it is concluded that one or more assignable causes are at work. In this
case the conditions producing the variation are said to be out of control.

3.2.2 Out of control conditions
An out of control condition is defined to include
(i) Points outside: Points beyond either control limit indicate that an external influence exists, that
is, an assignable cause is present. Investigation should be made to incorporate desirable causes
and eliminate those which are undesirable.
(ii) A run: At times a change occurs even though no point falls outside the control limits. This
change can be observed when successive plotted points are on one side with respect to the
center line but are still within the limits. The run indicates a shift in average or a reduction in
variation. A run is indicated when:
2 out of 3 successive points fall outside the 2-sigma limits;
4 out of 5 successive points fall outside the 1-sigma limits;
8 successive points fall on the same side, or 11 of 12 successive points fall on the
same side, or 13 of 15 successive points fall on the same side.
(iii) A trend: In some operations there is a steady progressive change in the plotted points. This
is called a trend and may be caused by tool wear or machine deterioration. These factors may
cause a trend in the average and also in the variability. When the variability is much less than
the specifications permit, a trend can be used to control tool replacement by starting the operation
near one limit and permitting a drift towards the other. Care must be taken to correct the
trend before it goes too far. Special control limits are useful in this case.
(iv) A cycle: At times the operation is affected by cycles caused by psychological or chemical reasons,
or by daily, weekly, or seasonal effects. These make their appearance on the chart as a
definite up-and-down pattern, with points possibly out of control at both limits, and can be
construed as assignable variation.
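Two of the run rules above can be sketched as simple sliding-window checks (a minimal illustration; the function names and the example data are invented):

```python
def rule_eight_same_side(points, center):
    """Flag eight successive points on the same side of the center line."""
    for i in range(len(points) - 7):
        window = points[i:i + 8]
        if all(p > center for p in window) or all(p < center for p in window):
            return True
    return False

def rule_two_of_three_beyond_2sigma(points, center, sigma):
    """Flag 2 out of 3 successive points beyond a 2-sigma limit, same side."""
    for i in range(len(points) - 2):
        window = points[i:i + 3]
        if sum(p > center + 2 * sigma for p in window) >= 2:
            return True
        if sum(p < center - 2 * sigma for p in window) >= 2:
            return True
    return False

# Hypothetical subgroup means; center line 10.0, sigma 1.0
pts = [10.2, 10.4, 10.1, 10.3, 10.2, 10.5, 10.1, 10.6, 9.8]
print(rule_eight_same_side(pts, 10.0))                  # → True (first 8 above)
print(rule_two_of_three_beyond_2sigma(pts, 10.0, 1.0))  # → False (none beyond 12.0)
```

The remaining rules (4 of 5 beyond 1 sigma, 11 of 12 on the same side, and so on) follow the same window-counting pattern.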

3.2.3 What do we need to know to interpret Control Charts?


Process stability is reflected in the relatively constant variation exhibited in Control Charts. Basically, the
data fall within a band bounded by the control limits. If a process is stable, the likelihood of a point
falling outside this band is so small that such an occurrence is taken as a signal of a special cause of
variation. In other words, something abnormal is occurring within your process. However, even though
all the points fall inside the control limits, special cause variation may be at work. The presence of
unusual patterns can be evidence that your process is not in statistical control. Such patterns are more
likely to occur when one or more special causes are present.
Control Charts are based on control limits which are 3 standard deviations (3 sigma) away from the
centerline. You should resist the urge to narrow these limits in the hope of identifying special causes
earlier. Experience has shown that limits based on less than plus and minus 3 sigma may lead to false
assumptions about special causes operating in a process. In other words, using control limits which are
less than 3 sigma from the centerline may trigger a hunt for special causes when the process is already
stable.
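The case for 3-sigma limits can be checked numerically: under an assumed normal distribution, a stable process produces a point outside the k-sigma limits with the probabilities below, which is why 2-sigma limits alone would trigger frequent false alarms:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal model of a stable process

# Probability that a single point falls outside the k-sigma limits by chance
for k in (1, 2, 3):
    outside = 2 * (1 - nd.cdf(k))
    print(k, round(outside, 4))
# → 1 0.3173
#   2 0.0455
#   3 0.0027
```

At 3 sigma, a stable process gives a false signal only about 3 times in 1000 points; at 2 sigma, about 1 point in 22 would trigger a needless hunt for special causes.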
The three standard deviations are sometimes identified by zones. Each zone's dividing line is exactly one-
third the distance from the centerline to either the upper control limit or the lower control limit.
Zone A is defined as the area between 2 and 3 standard deviations from the centerline on
both the plus and minus sides of the centerline. Zone B is defined as the area between 1 and 2 standard
deviations from the centerline on both sides of the centerline. Zone C is defined as the area between the
centerline and 1 standard deviation from the centerline, on both sides of the centerline.
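The zone boundaries can be computed directly from the centerline and the UCL (a minimal sketch; the numeric values in the example are invented):

```python
def zone_boundaries(center, ucl):
    """Upper edges of zones C, B, A on the plus side; the minus side mirrors them.

    One step is one standard deviation, since UCL = centerline + 3 sigma.
    """
    step = (ucl - center) / 3.0
    return {'C': center + step, 'B': center + 2 * step, 'A': center + 3 * step}

print(zone_boundaries(50.0, 53.0))  # → {'C': 51.0, 'B': 52.0, 'A': 53.0}
```

The zone edges are what the run rules of the previous section count against (points beyond the zone C edge are beyond 1 sigma, beyond the zone B edge are beyond 2 sigma).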

4 Control Charts for Variables

Control charts based on measurements of quality characteristics are often found to be a more economical
means of controlling quality than charts based on attributes. Significant changes in either the mean or the
standard deviation are an indication of significant changes in the process. When control is undertaken
using variables instead of attributes, it usually takes the form of employing:
(i) the X̄ chart and the R chart;
(ii) the X̄ chart and the S chart;
(iii) the control chart for individuals (X and MR charts).

The X̄ chart is used to control the average of the process, while the R or S chart is used to control the
general variability of the process.

(i) X̄ and R charts


The X̄ and R charts are the most sensitive control charts for tracing and identifying assignable causes of variation in a process. The pattern on the range chart is read first, and from this it is possible to identify many causes directly. The X̄ pattern is then read in the light of the R chart, which makes it possible to identify other causes. Finally, the X̄ and R patterns are read jointly, which gives still further information.

X̄ Control Chart
The UCL and LCL are usually set at 3-sigma limits away from the center line. Using the 3-sigma limits for analyzing past data, we have
UCL = X̄ + 3σ_X̄
LCL = X̄ − 3σ_X̄

Also, the UCL and LCL for attaining current control when standards are given are
UCL = X̄' + 3σ'/√n
LCL = X̄' − 3σ'/√n

It should be noted that R̄/d2 is an estimate of σ, where d2 is a constant tabulated in the Statistical Quality Control tables, and X̄' and σ' are the standard values given by management.
For analyzing past data, using this estimate of σ, the control limits for the X̄ chart reduce to
UCL = X̄ + A2 R̄
CL = X̄
LCL = X̄ − A2 R̄
where A2 = 3/(d2 √n) is a constant tabulated in the appropriate SQC table.

Range (R) Chart


The range (R) chart shows the variation in the ranges of samples from a process. Let R1, R2, ..., Rm be the ranges of the m samples. If the chart is being used to analyze past data, then the control limits are
UCL = R̄ + 3σ_R
CL = R̄
LCL = R̄ − 3σ_R
where
R̄ = Σ_{i=1}^{m} Ri / m
An estimate of σ_R is given by σ̂_R = d3 R̄ / d2, and hence the parameters of the R chart with 3-sigma control limits become
UCL = R̄ + 3 d3 R̄ / d2
CL = R̄
LCL = R̄ − 3 d3 R̄ / d2

Let D3 = 1 − 3 d3/d2 and D4 = 1 + 3 d3/d2; then the R chart control limits reduce to
UCL = D4 R̄
CL = R̄
LCL = D3 R̄
The constants D3 and D4 are available in the appropriate SQC table. When the R chart is being used to control current output, the range of a sample of n items is computed and plotted on the R chart. If the point does not fall outside the control limits and there is no indication of non-randomness within the control limits, the process is deemed to be in control; otherwise the process is deemed to be out of control with respect to its variability, and an investigation is undertaken to locate the assignable cause.

If management provides standard values at which its process is expected to be in control, the construction of the X̄ and R charts is simplified.
If the standard values are denoted by X̄' and σ', the control limits for the R chart are given by:
UCL = D2 σ'
CL = d2 σ'
LCL = D1 σ'
Similarly, the control limits for the X̄ chart when standards are given are:
UCL = X̄' + A σ'
CL = X̄'
LCL = X̄' − A σ'
The values of A, D1 and D2 for various values of n are available in the SQC table.
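As a sketch, the past-data X̄ and R chart limits above can be computed in Python. This is an illustration, not from the text: the function name is mine, and the constants A2, D3 and D4 are the standard tabulated values for subgroups of size n = 5 (they must be changed for other subgroup sizes).

```python
# Constants from standard SQC tables for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Return (LCL, CL, UCL) tuples for the X-bar chart and the R chart."""
    means = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    x_bar = sum(means) / len(means)      # grand average (center line of X-bar chart)
    r_bar = sum(ranges) / len(ranges)    # average range (center line of R chart)
    x_limits = (x_bar - A2 * r_bar, x_bar, x_bar + A2 * r_bar)
    r_limits = (D3 * r_bar, r_bar, D4 * r_bar)
    return x_limits, r_limits
```

A point plotting outside either set of limits would then prompt a search for an assignable cause, as described above.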

(ii) X̄ and S Control Charts

There are many reasons for using the S chart instead of the R chart to analyze dispersion. Among them are:
i. Ease of computation.
ii. If the sample size is greater than or equal to 6, the range chart loses efficiency.
iii. If subgroup sizes are unequal, the S chart handles them directly.

If σ² is unknown, then an unbiased estimate of σ² is the sample variance S², computed by the familiar equation
S² = (1/(n − 1)) Σ_{i=1}^{n} (Xi − X̄)²,  where X̄ = (1/n) Σ_{i=1}^{n} Xi
The 3-sigma control limits for the S chart, when a standard value σ is given, are
UCL = C4 σ + 3σ √(1 − C4²)
CL = C4 σ
LCL = C4 σ − 3σ √(1 − C4²)

For simplification of the above, let

B5 = C4 − 3√(1 − C4²)  and  B6 = C4 + 3√(1 − C4²).

Then the parameters of the S chart with a standard value given for σ become
UCL = B6 σ
CL = C4 σ
LCL = B5 σ

If σ is unknown, then we must analyze past data. If m samples are taken, then
S̄ = Σ_{i=1}^{m} Si / m.
The statistic S̄/C4 is unbiased for σ. Therefore, the control limits for the S chart for analyzing past data become
UCL = B4 S̄
CL = S̄
LCL = B3 S̄

When S̄/C4 is used to estimate σ, the control limits on the corresponding X̄ chart are determined using
UCL = X̄ + A3 S̄
CL = X̄
LCL = X̄ − A3 S̄

where X̄ = (1/m) Σ_{i=1}^{m} X̄i is the grand average, and the constants B3, B4 and A3 are obtainable from the appropriate SQC table for various values of n.
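The constants need not be read from a table; they can be computed directly from C4, for which the closed form C4 = √(2/(n−1)) · Γ(n/2) / Γ((n−1)/2) is standard. A minimal sketch (function names are illustrative):

```python
import math

def c4(n):
    """Unbiasing constant: E[S] = c4 * sigma for samples of size n."""
    return math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def s_chart_constants(n):
    """Return (B3, B4, A3), matching the tabulated SQC values.
    B3 is truncated at zero, since a standard deviation cannot be negative."""
    c = c4(n)
    b = 3 * math.sqrt(1 - c * c) / c
    return max(0.0, 1 - b), 1 + b, 3 / (c * math.sqrt(n))
```

For n = 5 this reproduces the tabulated values C4 = 0.9400, B3 = 0, B4 = 2.089 and A3 = 1.427.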
(iii) Control Charts For Individuals (X and MR Charts)
Individuals control charts are statistical tools used to evaluate the central tendency of a process over time. They are sometimes also called moving range charts because of the way in which the control limits are calculated. The most common control charts for individuals are the X (individuals) and MR (moving range) charts. Individuals control charts are used when it is not feasible to use averages for process control, for example when:
(i) Observations are expensive to obtain (e.g. destructive testing).
(ii) Output is too homogeneous over short time intervals (e.g. the pH of a solution).
(iii) The production rate is slow and the interval between successive observations is long.
Individuals control charts take the run chart one step further by providing control limits on the variability of the individuals.

The procedure for the computation of the X and MR charts is presented stepwise as follows:
Step 1: Collect the individual readings. Ideally 30 or more readings should be collected.
Step 2: Calculate the MR values. A moving range (MR) value is the difference between the current reading and the previous reading; the high value minus the low value is taken, so the MR values are always greater than or equal to zero.
Step 3: Find the center lines for the two charts:
CL_X = X̄ = (1/n) Σ_{i=1}^{n} Xi for the X chart
and
CL_MR = MR̄ = Σ_{i=1}^{n−1} MRi / (n − 1) for the MR chart.
Step 4: Compute the control limits for the charts.
The 3-sigma control limits for the X chart are:
UCL = X̄ + (3/d2) MR̄
LCL = X̄ − (3/d2) MR̄
Since each moving range is computed from n = 2 consecutive readings, 3/d2 = 3/1.128 = 2.66. Therefore, the control limits for the chart become
UCL = X̄ + 2.66 MR̄
LCL = X̄ − 2.66 MR̄
The control limits for the MR chart are:
UCL = 3.27 MR̄
LCL = 0
It should be noted that the value 3.27 is the value of D4 in the SQC table when n = 2.
Step 5: Plot the center lines and control limits on two separate charts: one for the X values and the other for the MR values.
Step 6: Plot the points: plot the X and MR values from each observation. (Note that the first reading does not have a moving range value.)
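Steps 2 through 4 can be sketched in Python as follows (an illustration, with the n = 2 constants 2.66 and 3.27 quoted above; the function name is mine):

```python
def x_mr_limits(readings):
    """Individuals (X) and moving-range (MR) chart limits.
    2.66 = 3/d2 and 3.27 = D4, both for n = 2."""
    mr = [abs(b - a) for a, b in zip(readings, readings[1:])]  # moving ranges
    x_bar = sum(readings) / len(readings)    # center line of the X chart
    mr_bar = sum(mr) / len(mr)               # center line of the MR chart
    x_limits = (x_bar - 2.66 * mr_bar, x_bar, x_bar + 2.66 * mr_bar)
    mr_limits = (0.0, mr_bar, 3.27 * mr_bar)
    return x_limits, mr_limits
```

In practice 30 or more readings would be used, as Step 1 recommends; a short list is enough to exercise the arithmetic.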

5 CONTROL CHARTS FOR ATTRIBUTES


There are four basic attribute control charts:
Number of defectives chart (np chart)
Fraction defective chart (p chart)
Number of defects chart (c chart)
Number of defects per unit chart (u chart)
When using attribute data, the subgroup size should be large, for instance n = 50, 100 or even larger. The subgroup size may also vary from subgroup to subgroup.

5.1 The Number of Defectives (np) Chart


The number of defectives (np) chart is used to monitor or control the number of defectives produced by a process. It is applied when the subgroup size remains constant from subgroup to subgroup and the appropriate performance measure is a unit count.
Let di represent the number of defectives in subgroup i, n the number inspected per subgroup, and m the number of subgroups. If p is unknown, then:
i. Calculate p̄ and np̄ using
np̄ = Σ_{i=1}^{m} di / m  and  p̄ = Σ_{i=1}^{m} di / (mn)
ii. Compute the 3-sigma limit using
3σ_np = 3√(np̄(1 − p̄)) if p is unknown, or
3σ_np = 3√(np(1 − p)) if p is known.
iii. Compute the control limits using
UCL = np̄ + 3σ_np  and  LCL = np̄ − 3σ_np
iv. Plot the number of defectives (np) values from the subgroups and connect them.
v. Plot the center line and control limits.
vi. Evaluate the control chart, looking for evidence of lack of control.
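Steps i–iii can be sketched as follows (illustrative Python; p is taken as unknown and estimated from the data, and the LCL is truncated at zero since a count cannot be negative):

```python
def np_chart_limits(defectives, n):
    """np chart limits from m subgroups of constant size n.
    defectives: list of defective counts d_i, one per subgroup."""
    m = len(defectives)
    np_bar = sum(defectives) / m         # average number defective (center line)
    p_bar = sum(defectives) / (m * n)    # average fraction defective
    sigma3 = 3 * (np_bar * (1 - p_bar)) ** 0.5
    return max(0.0, np_bar - sigma3), np_bar, np_bar + sigma3
```

For example, subgroup counts of 4, 6 and 5 defectives out of n = 100 give a center line of 5 and an LCL truncated to 0.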

5.2 The Proportion Defective (p) Control Chart


The proportion defective chart is used to monitor or control the proportion or fraction defective produced by a process. The p chart can be used both when the subgroup size varies and when it is constant. The p chart is the most widely used attribute control chart.

The p chart can be analyzed using three different methods: varying control limits, control limits based on the average sample size, and the standardized approach.

5.2.1 Varying Control Limits Approach


(i) Calculate p for each subgroup. Let di represent the number of defective items and ni the sample size of subgroup i; then the proportion defective is pi = di/ni.
(ii) Calculate the center line
CL = p̄ = Σ_{i=1}^{m} di / Σ_{i=1}^{m} ni  (or, for constant subgroup size, p̄ = Σ_{i=1}^{m} pi / m).
(iii) Compute the control limits for each subgroup
UCL = p̄ + 3σ_p
LCL = p̄ − 3σ_p
where
3σ_p = 3√( p(1 − p) / ni ) if p is known, or
3σ_p = 3√( p̄(1 − p̄) / ni ) if p is unknown.
(iv) Plot the p values and connect them.
(v) Plot the center line and the control limits of each subgroup.
(vi) Evaluate the chart for evidence of lack of control.
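The varying-limits approach can be sketched as follows (illustrative Python; p is estimated from the data and the LCL is truncated at zero, since a proportion cannot be negative):

```python
def p_chart_varying(d, n):
    """p chart with varying control limits.
    d[i]: defectives in subgroup i; n[i]: its sample size."""
    p_bar = sum(d) / sum(n)              # center line
    limits = []
    for ni in n:
        s3 = 3 * (p_bar * (1 - p_bar) / ni) ** 0.5
        limits.append((max(0.0, p_bar - s3), p_bar + s3))  # (LCL_i, UCL_i)
    return p_bar, limits
```

Each subgroup gets its own pair of limits, which is why the limit lines on the plotted chart appear stepped rather than straight.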

5.3 Number of Defects (c) Chart


The number of defects (c) chart is a statistical tool used to evaluate the number of occurrences produced by a process, where an occurrence is usually a defect. The c chart differs from the np chart in that all the defects are used to determine the control limits: if a unit has more than one defect, each defect is counted separately.

The c chart can be applied whenever the sample size is constant and the appropriate performance measure is a count of some event or class of events; for example, the number of accidents per month in a large organization can be counted. The number of nonconformities follows the Poisson distribution. The number of defects found in each subgroup is denoted by C.
The steps for setting up the c chart are similar to those of the np chart, with
3σ_c = 3√C if a standard value C is given, and
3σ_c = 3√C̄ when no standard is given.
Therefore the control limits will be
UCL = C + 3√C
CL = C                (standard value given)
LCL = C − 3√C

and
UCL = C̄ + 3√C̄
CL = C̄                (no standard value given)
LCL = C̄ − 3√C̄
where C̄ = Σ_{i=1}^{m} Ci / m
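A minimal sketch of the c chart limits when no standard is given (illustrative Python; the LCL is truncated at zero, a common convention not spelled out in the text):

```python
def c_chart_limits(counts):
    """c chart limits from defect counts, one per constant-size subgroup."""
    c_bar = sum(counts) / len(counts)    # center line C-bar
    s3 = 3 * c_bar ** 0.5                # 3 * sqrt(C-bar), the Poisson 3-sigma term
    return max(0.0, c_bar - s3), c_bar, c_bar + s3
```

With counts of 4, 5 and 6 defects, for instance, the center line is 5 and the LCL truncates to 0.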

5.4 Number of Defects Per Unit Chart (u Chart)


Inspection of a unit of product may continue until the first defect (nonconformity) is detected, and inspection then stops with the product declared defective (nonconforming). If instead the inspection continues until all defects are found, it is appropriate to chart the number of defects per unit. The u chart is used when the sample size varies from sample to sample; in this respect it is similar to the p chart.

Let C be the number of defects found and n the number of units inspected in each subgroup; then
U = C/n
It should be noted that a unit is any arbitrary unit of production. The setup of the u chart (similar to that of the p chart) is highlighted below:
Compute U for each subgroup.
Compute the center line
Ū = Σ_{i=1}^{m} Ui / m
Compute 3σ_u = 3√(Ū/ni) (with Ū replaced by the standard value of U when one is given).
To compute the control limits, three methods are applicable:
(i) Use varying control limits:
UCL = Ū + 3√(Ū/ni),  LCL = Ū − 3√(Ū/ni)
(ii) Use control limits based on the average sample size:
UCL = Ū + 3√(Ū/n̄),  LCL = Ū − 3√(Ū/n̄),  where n̄ = Σ_{i=1}^{m} ni / m
(iii) Use n̄ to compute the control limits, but use ni to examine values for points near these limits.
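The varying-limits method (i) can be sketched as follows (illustrative Python; the LCL is truncated at zero):

```python
def u_chart_varying(c, n):
    """u chart with varying control limits.
    c[i]: defects found in subgroup i; n[i]: units inspected in it."""
    u = [ci / ni for ci, ni in zip(c, n)]
    u_bar = sum(u) / len(u)              # center line U-bar
    limits = []
    for ni in n:
        s3 = 3 * (u_bar / ni) ** 0.5
        limits.append((max(0.0, u_bar - s3), u_bar + s3))  # (LCL_i, UCL_i)
    return u_bar, limits
```

As with the p chart, larger subgroups get tighter limits, so the limit lines step in and out as ni changes.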

The table below presents a summary of the attribute control charts along with their corresponding control limits (center lines):

Chart      Statistic plotted       Center line     3-sigma control limits
np chart   d (number defective)    np̄              np̄ ± 3√(np̄(1 − p̄))
p chart    p = d/n                 p̄               p̄ ± 3√(p̄(1 − p̄)/ni)
c chart    C (defect count)        C̄               C̄ ± 3√C̄
u chart    U = C/n                 Ū               Ū ± 3√(Ū/ni)

6.0 ACCEPTANCE SAMPLING PLAN

The purpose of an acceptance sampling plan is to determine a course of action (i.e. to accept or reject) and to prescribe a procedure that, if applied to a series of lots, will give a specified risk of accepting lots of a given quality level.

Generally, when the decision to accept or reject is made on the basis of a sample from the lot, the methodology is known as acceptance sampling.
Acceptance sampling is used in the following situations:
i. When testing is destructive.
ii. When the cost of 100% inspection is very high in comparison to the cost of passing a non-conforming item.
iii. When there are many similar items to be inspected.
iv. When information concerning the producer's quality is not available.
v. When automated inspection is not used.

vi. When 100% inspection is not feasible. For example, 100% inspection may require so much time that production scheduling would be seriously compromised.
vii. When the supplier has an excellent history of quality and some reduction in inspection is desired.

6.1 The Operating Characteristic (OC) Curve For Acceptance Sampling Plans

The OC curve is an evaluation tool that shows the probability of accepting a lot submitted for inspection over a range of fraction non-conforming (p) values; it displays the discriminatory power of a sampling plan. The figure below is a typical OC curve.

The OC curve starts at 100 percent acceptance for 0 percent defective and ends at 0 percent acceptance for 100 percent defective. Thus the acceptance rate decreases as the percent defective increases. The curve's relative steepness measures the plan's power to discriminate between various quality levels and is thus related to the cost and amount of inspection.

The shape of an OC curve is determined by the parameters of the sampling plan: N, n and c. The effect of the parameters on the shape of the OC curve can be generalized as:
(i) The larger the sample size, the steeper the slope of the OC curve. This means that the larger the sample size, the higher the quality protection of the sampling plan.
(ii) The smaller the acceptance number, the steeper the slope of the OC curve. This means that the smaller the acceptance number, the higher the level of quality protection.

6.2 Computation of the OC Curve


If the sample is taken from an isolated lot of finite size, the probability of acceptance is computed using the hypergeometric distribution:
Pa = Pr(d ≤ c)
If D is the number of non-conforming units in the lot, c the acceptance number, N the lot size, and n the sample size, then the probability of acceptance is
Pa = Σ_{d=0}^{c} C(D, d) C(N − D, n − d) / C(N, n)

If the lot size N is large (at least 10 times the sample size), or the samples are taken from a stream of lots selected at random from a process, the probability of acceptance is computed using the binomial distribution:
Pa = Σ_{d=0}^{c} C(n, d) P^d (1 − P)^(n−d)

If the lot size is large and the probability of a non-conforming unit is small (N → ∞, P → 0), the probability of acceptance is computed using the Poisson distribution:
Pa = Σ_{d=0}^{c} e^(−λ) λ^d / d!,  where λ = nP.
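The binomial and Poisson forms of Pa can be sketched directly (illustrative Python using only the standard library; function names are mine):

```python
from math import comb, exp, factorial

def pa_binomial(n, c, p):
    """Pa = P(d <= c) with d ~ Binomial(n, p): exact for sampling from a process."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def pa_poisson(n, c, p):
    """Poisson approximation with lam = n * p, valid for large N and small p."""
    lam = n * p
    return sum(exp(-lam) * lam**d / factorial(d) for d in range(c + 1))
```

Evaluating either function over a grid of p values traces out the OC curve for the plan (n, c); for moderate n and small p the two agree closely.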
Acceptance sampling plans are divided into attributes plans and variables plans. In attributes sampling, a unit is classified as conforming or non-conforming on the basis of whether or not it meets the established specification. In variables sampling, a quality characteristic of each unit is measured.

In attributes sampling, a predetermined number of units from each lot is inspected. There are several types of attributes sampling plans; four of these are single-sampling, double-sampling, multiple-sampling and sequential-sampling plans.

7 Single Sampling Plan (SSP)


A single sampling plan is a procedure in which a sample is drawn from a lot and inspected. If the number of non-conforming units (d) is less than or equal to the acceptance number (c), the lot is accepted; otherwise the lot is rejected.

Three parameters are needed to define a single sampling plan: N (the lot size), n (the sample size) and c (the acceptance number). For example, if a sampling plan is specified as N = 1000, n = 25 and c = 2, a random sample of 25 units is drawn from a lot of 1000; if the number of non-conforming units in the sample is less than or equal to 2, the lot is accepted, otherwise it is rejected.

7.1 Designing a Single Sampling Plan


To design an SSP using the OC curve, four values must be specified: α, β, P0 and P1. The single sampling plan should have a probability of acceptance of (1 − α) for lots with fraction non-conforming P0, and a probability of acceptance of β for lots with fraction non-conforming P1. The OC curve for the plan must pass through the two points specified by these four parameters. Using the binomial distribution, we have the following equations:

Pa = Σ_{d=0}^{c} C(n, d) P0^d (1 − P0)^(n−d) = 1 − α

Pa = Σ_{d=0}^{c} C(n, d) P1^d (1 − P1)^(n−d) = β

The unknown parameters in the above two equations are n and c. Solving the two equations simultaneously for n and c by trial and error, c is the smallest value satisfying
R(c − 1) > P1/P0 > R(c)
where
R(c) = χ²_{2(c+1)}(1 − β) / χ²_{2(c+1)}(α)
and n is obtained from the interval

χ²_{2(c+1)}(1 − β) / (2 P1)  ≤  n  ≤  χ²_{2(c+1)}(α) / (2 P0)

The degrees of freedom of the chi-square values are 2(c + 1).
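As an alternative to the chi-square trial-and-error procedure above, the same two-point design can be found by a direct search over (n, c) using the binomial equations; this sketch is my own method, not the one in the text, and the names are illustrative:

```python
from math import comb

def accept_prob(n, c, p):
    """Pa = P(d <= c), d ~ Binomial(n, p)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def design_ssp(p0, p1, alpha, beta, n_max=500):
    """Smallest n (with matching c) such that Pa(p0) >= 1 - alpha and
    Pa(p1) <= beta; returns None if no plan exists up to n_max."""
    for n in range(1, n_max + 1):
        # smallest acceptance number keeping the producer's risk <= alpha
        c = 0
        while accept_prob(n, c, p0) < 1 - alpha:
            c += 1
        # check that this plan also meets the consumer's risk
        if accept_prob(n, c, p1) <= beta:
            return n, c
    return None
```

Because n and c are integers, the OC curve cannot in general pass exactly through both specified points; the search returns the cheapest plan that meets or exceeds both protections.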

8 Double Sampling Plan (DSP)


A double sampling plan is an extension of the SSP. It is defined by four parameters:
n1 = first sample size
n2 = second sample size
c1 = acceptance number for the first sample
c2 = acceptance number for both samples combined

The procedure for a DSP is to draw and inspect a random sample of n1 units from the lot. If d1 ≤ c1, accept the lot; if d1 > c2, reject the lot. If c1 < d1 ≤ c2, take a second sample of n2 units from the lot. If d1 + d2 ≤ c2, accept the lot; otherwise reject the lot.

8.1 Designing a Double Sampling Plan Using Grubbs' Tables

Four values need to be specified in the design of a DSP when Grubbs' tables are employed: α, β, P0 and P1. These values specify the two points of interest on the OC curve, i.e., (P0, 1 − α) and (P1, β). It should be noted that these two points alone will not produce a unique OC curve for a double sampling plan; in order to produce a unique DSP, an additional constraint needs to be imposed. The commonly used constraints are n2 = 2n1 and n1 = n2. A DSP with one of these constraints is found using the Grubbs table with the heading n2 = 2n1 or n1 = n2. Both tables are for α = 0.05 and β = 0.10. The following are the steps for using Grubbs' tables:

Step 1: Specify the values of α, β, P0 and P1.

Step 2: Specify the size of the second sample relative to that of the first sample.

Step 3: Compute the ratio R = P1/P0.

Step 4: Choose the plan number in the Grubbs table that is closest to the value of R.

Step 5: From the plan number selected in Step 4, read off the values of c1 and c2.

Step 6: The size of the first sample, n1, is determined from the two columns with the heading pn1. If α is held constant, then
n1 = (pn1) / P0
Similarly, if β is held constant, then
n1 = (pn1) / P1

8.2 Computation of the OC Curve For a Double Sampling Plan


The OC curve for a double sampling plan consists of a principal OC curve and supplementary curves. The principal curve shows the probability of acceptance on the combined first and second samples plotted against the fraction non-conforming. If we denote the probability of acceptance on the combined samples by Pa, then
Pa = Pa' + Pa''
where
Pa' = probability of acceptance on the first sample
Pa'' = probability of acceptance on the second sample.
For example, if c1 = 1 and c2 = 3, then the second sample will be drawn only if d1 is 2 or 3. The lot will be accepted if d1 + d2 ≤ 3, so when d1 = 2, d2 may be 0 or 1, and when d1 = 3, d2 must be 0.
This means that for the lot to be accepted at the second stage, the following combinations are possible:
d1 = 2, d2 = 0
d1 = 2, d2 = 1
d1 = 3, d2 = 0

It should be noted that the second sample is taken only when c1 < d1 ≤ c2. Therefore

Pa'' = Pr(d1 = 2)·Pr(d2 = 0) + Pr(d1 = 2)·Pr(d2 = 1) + Pr(d1 = 3)·Pr(d2 = 0)

8.3 Average Sample Number For a Double Sampling Plan

The ASN for a double sampling plan is defined by
ASN = n1 PI + (n1 + n2)(1 − PI) = n1 + n2 (1 − PI)
where PI = Pr(d1 ≤ c1) + Pr(d1 > c2) is the probability that a decision is reached on the first sample.
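The decomposition Pa = Pa' + Pa'' and the ASN formula can be sketched together (illustrative Python; binomial sampling is assumed, and the function names are mine):

```python
from math import comb

def binom_pmf(n, d, p):
    return comb(n, d) * p**d * (1 - p)**(n - d)

def dsp_pa_asn(n1, n2, c1, c2, p):
    """Probability of acceptance Pa = Pa' + Pa'' and the ASN for a
    double sampling plan, at incoming fraction non-conforming p."""
    pa1 = sum(binom_pmf(n1, d, p) for d in range(c1 + 1))           # accept on first sample
    rej1 = sum(binom_pmf(n1, d, p) for d in range(c2 + 1, n1 + 1))  # reject on first sample
    pa2 = 0.0
    for d1 in range(c1 + 1, c2 + 1):                                # second sample needed
        pa2 += binom_pmf(n1, d1, p) * sum(
            binom_pmf(n2, d2, p) for d2 in range(c2 - d1 + 1))
    p_first = pa1 + rej1    # PI: probability of a decision on the first sample
    asn = n1 + n2 * (1 - p_first)
    return pa1 + pa2, asn
```

At p = 0 or p = 1 a decision is always reached on the first sample, so the ASN equals n1; in between it rises toward n1 + n2 as second samples become more frequent.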

STS 371: STATISTICAL QUALITY CONTROL

OUTLINE
(i) Concept of SQC
(ii) Quality and Variation in Quality
(iii) Run Chart Analysis
(iv) Control Chart Concepts
(v) Control Charts for Means, Standard Deviations and Ranges
(vi) Control Charts for Numbers of Defectives
(vii) Acceptance Sampling Procedures
(viii) Single Sampling Procedure and Design
(ix) Double Sampling Procedure and Design
(x) Multiple Sampling Procedure
(xi) Lot-by-Lot Sampling Inspection by Attributes and Variables Sampling Plans
(xii) Continuous Sampling Plans

TEXTBOOKS
(i) Adekeye, K.S. (2009). An Introduction to Industrial Statistics (Concepts and Practice).
(ii) Grant, E.L. and Leavenworth, R.S. (1980). Statistical Quality Control. McGraw-Hill, New York.
(iii) John, J.A., Whitaker, D. and Johnson, D.G. (2006). Statistical Process Control: Statistical Thinking in Business (2nd ed.). Chapman & Hall.
(iv) Merton, R.H. (2003). Statistical Quality Control for the Food Industry (3rd Edition). New York.
(v) Montgomery, D.C. (2005). Introduction to Statistical Quality Control. John Wiley and Sons.
(vi) Oakland, J. (2002). Statistical Process Control (Sixth Edition). Jordan Hill, Oxford.
(vii) Stapenhurst, T. (2005). Mastering Statistical Process Control. Jordan Hill, Oxford.
