The Standard
Vol. 17, Issue 1 The Newsletter of the Measurement Quality Division, American Society for Quality Winter 2003
Chair’s Column
My goal is to formulate a long-term strategy and vision for the Measurement Quality Division that will allow its membership to grow. We still need to provide value to our core membership. However, there are other professions besides metrology that use measurements in their daily work and that can benefit from the MQD. Viewed that way, we can include everyone in some way or another. In a Division leadership role, my goal shall be to explore ways and means to attract new members by publicizing our efforts and the activities that can provide value to other individuals. I look forward to this new role and to cultivating new relationships in the measurement community.

I have a lot to learn in this new role. I shall start out by listening to others and interacting with them. Please feel free to contact me.
As this edition goes to press, the Japanese are rapidly redefining their organizations; and European firms are moving, perhaps less rapidly but with increasing momentum, to become more competitive by cutting costs through restructuring, reengineering, and knowledge management interventions. Additionally, there are the burgeoning competitors from Pacific Rim nations, Eastern Europe, and Latin America.

The key question is: How does a firm obtain a global competitive advantage in the face of such stiff competition? Quality management alone no longer serves this purpose, as a successful continuous improvement program merely keeps you in the race. It doesn't help you leapfrog the competition, because savvy competitors have programs of their own and are not just standing still. Furthermore, these savvy competitors are using speed strategies, reengineering, and flexible manufacturing; and many have begun using their own metrology programs. So what's the next source of competitive advantage going to be? Innovation!

Innovation is the only sustainable competitive advantage in any situation. It is what enables an organization to create its products and services and to differentiate them from those of its competitors. Innovation is also the source of the ideas that enable an organization to cut its costs. Innovation is the focus of this edition, especially in terms of how organizations use innovation to achieve competitive advantage in the global marketplace of tomorrow.

However, global competition is not the only challenge facing businesses and their staff as they enter the Year 2003. Change is occurring at an accelerating rate, and new technology is being introduced at breakneck speed. The workforce is becoming more and more diverse. There is a growing scarcity of highly skilled workers, and we are smack dab in the middle of a transformation from an industrial- to a knowledge-based society. Constituencies are becoming more demanding, and the entire business environment is becoming more complex by the moment.

To meet these strategic challenges and take advantage of the opportunities they create, businesses need to embrace creative problem solving and innovation in a new way. In other words, to achieve effective levels of innovative practices, an organization must improve its creativity in its work groups and individuals as well as in its measurement labs. And it must foster the right kind of organizational culture that will encourage creativity and turn it into innovation.

The articles in this edition describe the characteristics that an organization's culture needs to possess to achieve strategic competitive advantage through innovation. As Peter Drucker has said over and over again: Every organization--not just business--needs one core competence: innovation. And every organization needs a way to record and appraise its innovative performance.
The fruits of these efforts yield the exam questions that will be used by the Exam Review workshop scheduled for April 4-5, 2003, at ASQ Headquarters.

...and encourages ASQ in this endeavor. We will assist wherever we can.' Our ASQ hats are off to NCSLI for their vote of confidence and support for the CCT program.
by Bill McCullough
be much the same as the current licensing for engineers, brain surgeons, undertakers, barbers, physiologists, cosmetologists, chiropractors, etc. People in general, and the typical free spirits that characterize the metrology technician populace, rebel at the thought of governmental control. And to make such a system truly fair and unbiased, “grandfathering,” so common in the licensing procedures of certain other trades and professions, would have to be totally eliminated. All candidates would have to be individually examined as to their own qualifications, with no consideration of any duration of time that they may have been working at a function, or of the questionable opinions of superiors who are desirous of retaining faithful drones irrespective of relative comparisons to norms established for the certification level.

Second is the complexity of the technical aspects involved. There are literally hundreds of technologies involved in the science of metrology, each with a myriad of permutations of technical variation, of fine specialization, and of level. Add to that complexity the astronomical number of measurement devices and their variations of make, model, generation, and so forth, and the magnitude of possible practical permutations of specific certificates becomes mind-boggling as well as impossible to administer. Without specific certifications, the entire program would be meaningless. Technicians are doers. They perform very specific tasks, generally with designated items of hardware. Without specific certification for the specific function on the specific hardware, any such program would be meaningless.”

While I wrote the above monologue five and one-half years ago, my opinions have not changed much. Of the six panelists, five--although each looked at the problem from a different perspective--came to virtually the same conclusion in their position papers. Over 200 people crowded into a room posted for a maximum of 90 for the hour-and-one-half-plus session. By a show of hands it was determined that well over half of the attendees were employers, metrology laboratory managers, or other supervisors of metrology technicians. During the session, several managers (some of them executives in large commercial metrology organizations) got up and stated that they would not under any circumstances give any credibility to any certifications issued by any entity outside of their own organizations. When the session was concluded, another show of hands determined that greater than 90% of the attendees believed that certification of metrology technicians would be an exercise in futility; among those who claimed to be employers of metrology technicians, the agreement was unanimous.

In 1963, the California State Legislature created a “blue ribbon” advisory committee, the California Professional Metrology Committee (CPMC), put the full resources of the state behind it, and chartered it to draft legislation to “Certificate” (the words “License” and “Certify” were specifically not used) all persons within the State that are in any way engaged, at any level, in the design, manufacture, or service of any devices that measure or in any way quantify any goods or services manufactured or sold within the State of California, or that in any way have an influence on the health and/or welfare of any person in the State of California.

That group labored with the problem for over seven years. They, too, foresaw the impediments I mentioned above. They did devise a complex scheme that would have circumvented most of them and which included the prohibition of “grandfathering.” They also had in their draft a clause which would prohibit any person who served on the CPMC from ever becoming “certificated.” Their draft legislation included three categories of individual: Metrologist (Scientist), Metrology Engineer (Designer), and Technician/Inspector (Repair/Calibrator). Each of these had three levels of competence, and as starters 33 specific disciplines were named (it was planned to add more as required). The draft never reached implementation due to external factors beyond the committee’s control, and the project had extended for such a duration that the legislators who were sponsoring it had moved on. Those who replaced them knew little of, or cared little about, the causes which instigated the project.

A by-product of their efforts, which remains to this day, is the Measurement Science Conference (MSC). Ten years ago I was asked to write the early history of the MSC. It was impossible to separate the early history of the MSC from that of the CPMC. As a result, my early history of the MSC contains a rather detailed explanation of the early work of the CPMC. If anyone is interested, I have a few copies.

That should be enough for this issue. Don’t forget or procrastinate; get your applications in to Dean Gordon or Dr. Watson for enrollment in the only baccalaureate-level program in measurement science in this nation. Remember, you can get your degree on campus, on-line, or, if you can get enough students together, they will come to you and present the program at your facility.

You can still reach me at:

painchaud4@cs.com

or, if that one does not work, try

olepappy@juno.com
Measurements and measurement science (metrology) have a long and rich history of a strong relationship with
the statistical sciences. This has been exemplified at the U.S. National Institute of Standards and Technology
(NIST), formerly the National Bureau of Standards (NBS), where this nation keeps its standards and supports
metrology and metrology research. At the risk of annoying many other outstanding NBS and NIST statisticians
by their omission, I note that for many years the Bureau employed John Mandel, Jack Youden, Churchill
Eisenhart, Brian Joiner, Robert Paule, Lynne Hare, Raghu Kacker, Mary Natrella, and Harry Ku.
There’s a reason for this. Metrology is essentially a statistical pursuit. It is the study of measurement error (we’re
now supposed to call it measurement uncertainty). Were there no uncertainty, there would be very little science
in measurement science; only a great deal of good engineering would be required to build and use measuring
equipment.
Statistics, therefore, are necessary to understand measurements. In fact, 30 years ago at NBS, a philosophy
of measurement assurance was proposed by Eisenhart, Cameron, and Pontius. They reasoned that measure-
ment is, in fact, a process. As such, it may be addressed by the entire armamentarium of process improvement
tools developed and made familiar by the quality profession. (Envision, if you will, that measurement is a
manufacturing process whose output is numbers). We can, and do, use Pareto charts, scatterplots, histograms,
fishbone diagrams, control charts, designed experiments – the entire collection of quality tools, for the design,
control, and improvement of measurements. More about this later.
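As a minimal sketch of treating measurement as a process, here is how Shewhart individuals-chart control limits could be computed from a run of check-standard readings. The data, units, and variable names are hypothetical; the moving-range estimate of sigma is standard individuals (X/mR) chart practice.

```python
# Sketch: control limits for a check-standard measurement process,
# assuming daily deviations of a stable check standard (hypothetical data).
import statistics

readings = [-0.8, -0.3, 0.1, -0.5, 0.4, -0.2, 0.0, -0.6, 0.3, -0.1]

mean = statistics.mean(readings)
# Estimate sigma from the average moving range, as on an X/mR chart.
moving_ranges = [abs(b - a) for a, b in zip(readings, readings[1:])]
sigma = statistics.mean(moving_ranges) / 1.128  # d2 constant for n = 2

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit
out_of_control = [x for x in readings if not lcl <= x <= ucl]
```

With these particular numbers the process plots in control; a real application would keep charting new check-standard points against these limits day by day.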
"When you can measure what you are speaking about, and express it in numbers, you know something about
it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and
unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to
the stage of science."
But, in fact, expressing something in numbers is only part of the equation. In order to ‘advance to the stage
of science,’ expression of a quantity requires two other statements: a unit of measure and some description of
the quality of the state of knowledge of the quantity. This information and more are sometimes called ‘metadata’
– data about the data.
For our purposes here, we’re most interested in a statement of quality for a number. Most often, we would like
this to be an expression of spread or possible variation. The VIM (International Vocabulary of Measurement,
1993), a world-wide standard, defines uncertainty as 'Parameter, associated with the result of a measurement,
that characterizes the dispersion of the values that could reasonably be attributed to the measurand.' A
measurand is a particular quantity subject to measurement.
We make the further distinction that a single measurand is expected to have a unique value when measured
in a single measurement act and that the ‘dispersion of values’ observed is due to sources of variation within the
measurement process. In many cases, repeated measurements of a unit will experience variation because the
item being measured also varies. It’s important to separate these variations both in our minds and where possible
in our calculations. We try to focus on, understand, and quantify the uncertainty of the measurement process; and
once this has been accomplished, we can separately characterize the variation of the object or process being
measured.
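The separation described above can be sketched numerically. Assuming the measurement process and the item variation are independent, their variances (not their standard deviations) add, so the product's own spread can be recovered by subtraction. The numbers below are hypothetical.

```python
# Sketch: separating measurement variation from product variation,
# assuming independent sources so that variances add. Hypothetical values.
import math

s_total = 1.30  # std dev of single measurements across production items
s_meas = 0.50   # std dev of the measurement process (from a check standard)

# Variances add; standard deviations do not.
s_product = math.sqrt(s_total**2 - s_meas**2)
```

Here the apparent spread of 1.30 shrinks to a true product spread of 1.20 once the measurement process's contribution is removed.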
Following this logic, the best specifications do not use rectangular tolerances at all but rather specify the
‘dispersion of values that could be reasonably attributed to the measurand.’ This could be done simply, even for
the general public, with specifications such as “95% of all trains will arrive within ten minutes of their scheduled
time,” or “at least 50% of the pills in this bottle will have the stated amount of active ingredient.”
Educating the public in these concepts has been almost fruitless. While quality engineers and other quality
professionals continue to stress the understanding of variation, published specifications and internal manufac-
turing documents still speak almost universally in tolerances.
Figure 1. ISO 14253-1 uncertainty ranges.
How Quality Science and Statistical Science are Used in Measurement Science
We started this paper by explaining how statistics and metrology are inextricably linked by the need to describe
measurements in a way that respects the distributional nature of the results. In addition, quality tools and
statistical tools play an integral part in the daily practice of metrology. Herewith we discuss some examples.
Figure 2. Product distribution, measurement distribution (as reflected by the check standard), and product specification limit.
The measurement distribution as shown by the check standard distribution can be compared to the product
distribution to yield a capability statistic.
Alternatively, you could compare the check standard spread to the specification of the product. After all, your
purpose is to be able to detect the mean and spread of the product and to determine whether it meets the
specifications. If the spread of the measurement process (as shown by the check standard) is large, of the order
of the width of the specification, your measurements won’t be fit for their intended purpose.
A good rule of thumb is that the measurement distribution should be no wider than 20% of the width of the
product distribution. Narrower is better.
1. For further information on how this can be done, see my Measure for Measure column in Quality Progress, January 2002.

2. Roughly speaking, Normal is indicated when the measurement error is a constant in the measurement units, and Lognormal is indicated when the error is a constant percentage.
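The 20% rule of thumb above reduces to a simple ratio check. The sketch below uses hypothetical standard deviations; comparing like-for-like widths (e.g. 6-sigma spans), the factor of six cancels in the ratio.

```python
# Sketch of the rule of thumb: the measurement distribution should be no
# wider than 20% of the product distribution. Values are hypothetical.
s_check = 0.4    # std dev of check-standard measurements
s_product = 2.5  # std dev of the product distribution

# Compare 6-sigma widths; the 6s cancel, leaving a ratio of std devs.
ratio = s_check / s_product
adequate = ratio <= 0.20
```

Here the ratio is 0.16, so the measurement process would be judged adequate for this product, with some margin.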
[Control chart: check-standard deviations plotted against day sequence, with UCLx, mean, and LCLx lines; a single point falls out of control, marking a special cause.]

This check standard shows good control except for a single point, where something wrong was discovered as a result of interpreting the chart.
[Control chart: deviations of a 42-inch check standard plotted against day sequence, with UCLx, mean, and LCLx lines; at the end of the record, more than 15 of 16 points fall below the mean.]

This 42-inch check standard was tracked for almost a decade. Towards the end of that period, the control chart showed a downward drift, indicated by an out-of-control signal of 15 out of 16 points below the center line (mean). Measurements made with an alternative method confirmed that the standard (not the measurement method) had in fact shrunk over that time period.
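The run rule that flagged this drift can be automated. The sketch below checks whether at least 15 of the last 16 points sit below the center line; the data, threshold parameters, and function name are hypothetical illustrations, not a standard API.

```python
# Sketch: detecting the "15 of the last 16 points below the mean" run rule
# on a check-standard chart. Data and thresholds are hypothetical.
def run_below_center(points, center, window=16, threshold=15):
    """True if at least `threshold` of the last `window` points
    fall below the center line."""
    tail = points[-window:]
    return len(tail) == window and sum(p < center for p in tail) >= threshold

# Hypothetical deviation histories, center line at 0.0:
drifting = [0.2, 0.1] + [-0.4] * 7 + [0.3] + [-0.5] * 8  # 15 of last 16 low
stable = [0.3, -0.2, 0.4, -0.1] * 4                      # balanced history
```

In practice one would apply several such run rules (single point beyond the limits, long runs on one side, trends) each time a new check-standard point is charted.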
If the capability was not acceptable when the chart was first installed, but the chart was under control, this
probably indicates a common-cause source of variation in the measurement process. As with any other
improvement effort, measurement assurance tools follow familiar paths such as using designed experiments to
find ways of understanding and reducing measurement variation.
An unexpected benefit of control charting your check standard measurements occurs in the determination of
calibration intervals. Interval setting is an important and persistent topic in metrology, since periodic calibration of
instruments is often the largest single expense for a laboratory. Make the interval too short and you could go
broke. Make it too long and you might make a mistake in a measurement result. Worse yet, if you discover that
an instrument went out of calibration while it was being used, most calibration quality system standards require
you to track back through all previous work done by that instrument (back to the last time it was known to be ‘good’)
to make sure that no harm was done. If you have been control charting your check standards, the last time the
measurement process – including the instrument – was known to be good was the most recent point on the chart,
and that was probably only a day or two ago. This is much better than tracing back to the last annual calibration.
Instruments and their processes exhibit two common failure modes – catastrophic and drift. Sudden changes
in a control chart are a strong indicator that something broke. Long-term changes will often track drift of
measurement standards.
If it’s time for a calibration and a control chart of check standards shows continuous, ongoing evidence of
statistical control of the measurement process, why is calibration needed? It probably isn’t. A formal process of
evaluating measurement assurance data can be an enormous money-saver by preventing unnecessary
calibrations. There is a possibility that the reference standard and the check standard are both drifting in the same
direction by about the same amount. When this happens, the control chart won’t show a need for calibration when
the need actually exists. To prevent this unlikely occurrence, calibration is still performed but at a greatly reduced
frequency. For even greater confidence, multiple check standards may be used – the chance of several drifting
together is small indeed.
At this point there are three fundamental approaches to analysis of the closure you have achieved.
First is the engineering approach. Part of the statement of the original task is a specification of the tolerance
of the process – a limit to the allowed closure error. For example, we could specify that closure anywhere within
a two-meter circle is acceptable. Because you will not have (can not have) perfectly returned to ‘go,’ the square
you have marked will contain a ‘surveyor’s gore’ – an area that belongs in your square according to some
measurements and belongs outside according to other measurements (see Figure 3). If closure has been
achieved to within the stated tolerance, you’ve finished the task. If not, some process rules have to be laid down
that state what to do next.
Figure 3. The "surveyor's gore" between the start and finish points, with the circle of permissible error ("closure").
Second is the statistical approach. Typically, a second set of measurements would be made as replicates of
the first set. The measurement lingo for this process of collecting more than one datum per cell is ‘overspecifying’
or ‘overdetermining’ the problem. In many cases a more sparse design can be used, where only a fraction of the
cells contain replicates; but because of the sequential nature of the measurement process here, this wouldn’t
make sense.
In the engineering case above, the cumulative error of all angles and lengths winds up being assigned to the
position of the last point, and all closure error is visualized as arising from the last point being in the wrong place.
Once replicate data are available, another model such as a least-squares fit can be used to fairly distribute the
errors among all individual (angle and distance) measurements. This results in a more accurate picture of the
actual locations of all boundaries of the area, and any standard measure of goodness-of-fit for the model can be
used in place of the closure circle. Although a linear model (polynomial fit of degree one) is often adequate in these
applications, other physical situations have been modeled with polynomials of degree as high as eight. It’s difficult
to imagine a physical phenomenon that can’t be very accurately represented by an equation with nine adjustable
coefficients.
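A minimal version of the least-squares idea can be sketched with a degree-one polynomial fit and a residual check. The data (position along a boundary versus measured offset) are hypothetical; a real survey adjustment would fit all angle and distance measurements jointly.

```python
# Sketch: distribute error by least squares rather than assigning all of it
# to the last point. Degree-one fit with a goodness-of-fit residual check.
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

xs = [0, 1, 2, 3, 4]                 # hypothetical positions
ys = [0.1, 1.9, 4.1, 5.9, 8.1]       # hypothetical measured offsets

a, b = fit_line(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
sse = sum(r * r for r in residuals)  # goodness-of-fit measure
```

The residual sum of squares (or any standard goodness-of-fit statistic computed from it) then plays the role of the closure circle in the engineering approach.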
Third is the metrological approach. A professional metrologist has a background and experience in the physics,
chemistry, or other science appropriate to the measurement at hand and may also have a degree of mathematical
sophistication. A metrologist looking at this problem will proclaim “but the earth isn’t flat” and will suggest replacing
a general-purpose polynomial equation with a spherical model that better (but still not perfectly) reflects the
physical realities. This will still result in a fitting procedure, testing of the residuals, and an evaluation of the
accuracy of the results; but the model equation is likely to be much simpler for any given accuracy.
It is not my intent to denigrate either the engineering or the statistical approach. All three methods have
advantages and require different skills from the practitioner, and the choice of an approach represents a tradeoff
between simplicity of application and accuracy of the result. A closure error of two meters might be far better than
is needed for the practical application of this area measurement, at which point the extra data taking and model
making called for by the other methods would be a waste of time and money.
Suppose we’re measuring the shape of a curved part such as the faceplate (panel) of a color television picture
tube. The panel is oriented face up in a measuring jig; and an X-Y grid array of mechanical probes is applied, each
looking for the position of the glass in the Z direction. A conventional definition of part failure would be to reject
if any point is out of tolerance.
True statistical independence of all of the points, which would support the conventional definition, does not
exist. There is a lot of autocorrelation due to the manufacturing process because the curvature of the panel is
not free to change independently at every point; but, rather, if there is a deviation it will be some sort of gradual
slope, rise, slough, or lump extending over several X-Y points. The measurement process often contributes
autocorrelation as well: for example, if the panel does not sit down perfectly in its jig but perhaps tilts a bit. All
of the points along one side or near one corner will be distorted in a similar way.
Now if the problem is large enough to send several points out of tolerance, rejection is proper, and it's not obvious why the conventional definition should be a problem. Suppose, though, that a perfect measurement would show all points within
tolerance even though, due to one of the aforementioned distortions, some points are not precisely at the values
they should occupy. To this, add a small, acceptable amount of random variability due to ordinary measurement
noise. Now, the probability of failure of one point is large and is magnified by the fact that many points were brought
close to their limits by the correlated variation; and so the chance that one point will randomly be pushed over
the line is unfairly increased.
What’s the ‘right’ way to approach the analysis of these measurement data? It depends on the voice of the
customer. Some degree of distortion is inevitable in a picture tube faceplate, and different forms of distortion are
more – or less – visible and annoying to the viewer. The point-by-point inspection wrongly increases producer’s
risk because it controls distortions that don’t correlate well with viewer dissatisfaction.
One approach to elimination of the increased risk is to adopt a disposition strategy that does not depend on
every single point being correct but rather evaluates the geometry as a whole. The same grid of points is
measured in the same way, but the answers are compared to the nominal curvature in a two-dimensional fitting
process. The residuals from this process are tested using standard goodness-of-fit criteria, and a rejection
threshold is placed on the overall difference between the ideal and actual curvature.
Point-by-point tolerances are still computed, and rejection due to them is still possible in order to catch single
large-value outliers that ruin the picture but don’t have enough leverage to ruin the fit. The tolerance for those
outliers can be much greater than the tolerance was when the points were used for the detailed inspection. Other
analytical methods of outlier detection can also be used at that point.
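A stripped-down version of this whole-geometry disposition can be sketched as follows. The nominal profile, measured values, and both thresholds are hypothetical; a real faceplate evaluation would use a full two-dimensional fit to the grid, not a single row.

```python
# Sketch: accept/reject on the geometry as a whole, with a separate loose
# per-point screen for gross outliers. All numbers are hypothetical.
import math

nominal = [0.0, 0.5, 0.8, 0.5, 0.0]        # ideal curvature along one row
measured = [0.02, 0.52, 0.79, 0.48, 0.01]  # probe readings (Z direction)

residuals = [m - n for m, n in zip(measured, nominal)]
rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))

overall_ok = rms <= 0.05                              # whole-shape criterion
no_outlier = all(abs(r) <= 0.20 for r in residuals)   # loose per-point limit
accept = overall_ok and no_outlier
```

Note the two limits play different roles: the RMS threshold controls overall shape distortion, while the much looser per-point limit only catches single gross defects that would ruin the picture without ruining the fit.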
This same way of thinking has been successfully applied to flatness of surface plates used for dimensional
inspection and to frequency response of bandpass amplifiers (in the latter case gain measurements are made
at many frequencies across the passband and the result compared to an ideal shape. The individual values are
highly correlated so they shouldn’t be allowed independent votes).
A good general approach when the errors are those of geometry, e.g. the picture tube or surface plate, is to
express the errors in terms of a distorted coordinate system – one that can include rotation, translation, and
improper scaling.
Influence Quantity                  Magnitude     Type   Distribution   Divisor   Quotient   Square    Comments
                                    (microinch)
Uncertainty of Reference Standard   1.5           B      Expanded       2         0.75       0.57      From NIST Certificate
Variation of Measurement Process    0.78          A      Normal         1         0.78       0.61      Standard Deviation from control chart
Resolution of Measurement Process   0.1           B      Rectangle      1.73      0.057      0.0033    Mahr Federal Gauge
Reproducibility                     0.55          A      Normal         1         0.55       0.3       From experimental data
1. The uncertainty of the reference standard was determined by a calibration traceable to National Standards.
2. The total variation of the gage block measurement process is determined using Measurement Assurance
techniques (check standards and control charts, ref ISO 10012-2), as calculated by commercial software.
3. The check standard for this process is a three-inch gage block.
4. The control chart for this process was in control at the time the unit under test was measured. This provides
evidence that the variation is Normally distributed, thus the standard deviation is the correct statistic to use.
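The budget above combines by root-sum-of-squares in the usual GUM fashion: each magnitude is divided by its distribution's divisor to get a standard uncertainty (the Quotient column), the quotients are squared and summed, and the square root is multiplied by a coverage factor. A sketch using the table's own numbers:

```python
# Sketch: combining the uncertainty budget above by root-sum-of-squares.
# Quotients = magnitude / divisor, in microinches, as in the table.
import math

quotients = {
    "reference standard": 1.5 / 2,    # expanded (k = 2) -> divide by 2
    "measurement process": 0.78 / 1,  # standard deviation from control chart
    "resolution": 0.1 / 1.73,         # rectangular -> divide by sqrt(3)
    "reproducibility": 0.55 / 1,      # standard deviation from experiment
}

combined = math.sqrt(sum(q * q for q in quotients.values()))
expanded = 2 * combined  # coverage factor k = 2, approx. 95% confidence
```

This yields a combined standard uncertainty of about 1.22 microinch and an expanded uncertainty of about 2.4 microinch at k = 2.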
Measurement Model
The following influence quantities are included in the variation measured by the control chart:
1. Temperature difference between the reference standard and the UUT at a nominal 20C (including operator body heat, block soak time).
2. Variation in temperature measurement equipment.
3. Variation in the digital readout due to vibration, electrical noise.
4. Random positioning errors and other variation due to operator technique.
The following influence quantities may not have been captured by the control chart but were considered of
negligible magnitude:
1. Difference in thermal expansion coefficient between the reference standard and the UUT, combined with
difference in temperature of the standard and UUT from 20C.
2. Difference in elastic deformation of block surfaces under pressure of the measuring probe.
3. Deformation of blocks under their own weight.
4. Error in wringing film thickness (only single blocks are measured here).
The following influence quantities have not been evaluated; but, since the control chart is okay, there is
evidence that they do not have a significant effect:
1. Software bugs.
2. Differences in surface finish of the reference and unknown blocks.
3. Long-term drift of the check standard hidden by periodic resetting of the control limits (was not done here).
It is clearly understood by the authors of the standard and by most practitioners that the results are estimates
and are usually pretty rough estimates at that. The assumption of normality for the separate variations is probably
okay but certainly not perfect. In addition, some influence factors are estimated by engineering methods rather than by measurement – the expansion coefficients used, for example, are those of the nominal material, not of the actual composition of the device, etc.
In addition, some data about the magnitude of the variation may be available only in some alternate form such
as a specification with a tolerance. The uncertainty of a standard or instrument used as part of a measurement
setup is often obtained by reading its calibration certificate. If the certificate conforms to GUM requirements, the
expanded uncertainty and coverage factor (or confidence interval) will be stated and can be used directly. Other
information, such as manufacturer’s manual specs, nonconformant certificates, and engineering estimates must
be included with an assumption as to their statistical distribution and, if not Gaussian, transformed so that a
meaningful standard deviation can be calculated. While any transformation is allowed, several are specified in
the GUM and have become part of the routine practice of making budgets.
In other situations concerning environmental conditions, other factors prevail. Suppose I wish to calculate the
influence of temperature on a single dimensional measurement, such as when using an end (rod) standard. The
standard is being used in a controlled temperature room, and at first blush you could assume that the rod is the
same as the time-average room temperature. This might be a good approximation, but it likely is not. All but the
most sophisticated environmental controls simply turn heaters or coolers on and off according to the demands
of a thermostat. Some systems don’t even run the air circulation continuously but cycle that as well.
A graph of the room conditions versus time will reveal, in these cases, two common conditions: cooling on, with the room air temperature close to the minimum of its range, and cooling off, with the room air temperature close to the maximum. The temperature actually spends very little time near the center of the range. This is best represented by the U-shaped distribution. If the overall temperature range is 2a (a above and a below the elusive middle), then, again transforming to a standard deviation, the uncertainty contribution = a/√2.
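These distribution-to-standard-uncertainty transformations are routine enough to tabulate. The sketch below collects the common GUM divisors (the function name and the triangular entry are illustrative additions, not from the text above):

```python
# Sketch: converting a half-width `a` (or an expanded uncertainty) to a
# standard uncertainty via the usual GUM divisors.
import math

def standard_uncertainty(a, distribution):
    divisors = {
        "normal_expanded_k2": 2.0,    # expanded uncertainty stated at k = 2
        "rectangular": math.sqrt(3),  # equally likely anywhere in +/- a
        "u_shaped": math.sqrt(2),     # dwells near the extremes (cycling)
        "triangular": math.sqrt(6),   # clusters near the middle
    }
    return a / divisors[distribution]

# Room temperature cycling +/- 0.5 degC about the set point:
u_temp = standard_uncertainty(0.5, "u_shaped")  # = 0.5 / sqrt(2)
```

The side calculation converting this temperature uncertainty into a length uncertainty (via the expansion coefficient) would then be entered in the budget in length units, as described under Simplified Calculations below.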
Simplified Calculations
Because uncertainty is only an estimate, most practitioners will either ignore correlations or, where correlation obviously exists, assume perfect correlation and calculate accordingly (add the standard deviations of that portion rather than including them in the RSS).
Other ways are also used to simplify GUM calculations. It is commonplace to reduce all statements of variation
to the same units before entering them in the budget, therefore setting the sensitivity weights equal to one. If we
are budgeting for a dimensional measurement, we calculate the dimensional effects of temperature as a side
calculation and put the result into the budget as a length variation. This makes the budget itself simpler and easier
to understand.
Finally, many practitioners simply skip trying to figure out the effective degrees of freedom for the overall budget
and just choose a coverage factor (multiplier) of two – stating that this represents approximately a 95% confidence
interval around the stated value.
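Putting the shortcuts together, the expanded uncertainty is just the RSS of the (same-unit) contributions multiplied by k = 2. The component values below are assumptions for illustration.

```python
import math

# Standard uncertainty components, all reduced to the same units beforehand.
components = [0.29, 0.35, 0.40]  # illustrative values

# Combined standard uncertainty by root-sum-square.
u_c = math.sqrt(sum(u**2 for u in components))

# Coverage factor of 2: roughly a 95% interval, assuming the combined
# distribution is approximately normal with ample degrees of freedom.
k = 2.0
U = k * u_c

print(f"u_c = {u_c:.3f}, expanded U = {U:.3f} (k = 2)")
```

This skips the Welch-Satterthwaite effective-degrees-of-freedom calculation entirely, which is exactly the simplification most practitioners make.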
Taken together, along with software available, these simplified approaches have made uncertainty budgeting
a tractable, if still annoying, chore.
Final Thoughts
It’s an understatement to say here that statistics and metrology are inextricably intertwined. We have given
just a few examples where taking an approach that respects the statistical nature of measurement variation results
in powerful technical improvement and often in money saved as well.
Yet, statistical thinking about measurement results and measurement data is far too rare. As a laboratory
assessor, I have visited more than 80 calibration labs – some of them more than once. Only two of them were
using control charts, and only one of those was doing it correctly (and 'correctly' is not some esoteric concept here;
any quality engineer could do it using only the tools from the CQE body of knowledge).
Instrument users still set calibration intervals based more on guesswork and tradition than on data, despite the
abundance of statistical tools to help do it better.
Despite some concerted efforts, tolerance-based thinking and specifying still rule our engineering attitudes, with
the predictable outcome of increased risk for both producer and consumer.
Yet, despite these depressing facts, inroads are being made. Wider acceptance and use of the GUM and
formal requirements of quality system standards are gradually forcing the world to change for the better. If we
do our part by educating ourselves and those around us, it can only speed the change.
References:
• Eisenhart, C., "Realistic Evaluation of the Precision and Accuracy of Instrument Calibration Systems," J. Res. Nat'l Bur. Stand. 67C, 161-187 (1963)
• Belanger, B., and Croarkin, C., Measurement Assurance Programs, NBS Special Publication 676 (1984)
• Guide to the Expression of Uncertainty in Measurement (1995). Published by a joint committee of ISO, IEC, and others (known as the GUM)
• International Vocabulary of Basic and General Terms in Metrology (VIM) (1993). Published by a joint committee of ISO, IEC, and others
• Taylor, B.N., and Kuyatt, C.E., Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, NIST Technical Note 1297 (1994)
• General Requirements for the Competence of Testing and Calibration Laboratories, International Standard ISO/IEC 17025 (1999)
• Geometrical Product Specification: Inspection by Measurement of Workpieces and Measuring Instruments, International Standard ISO 14253-1
• Stein, P.G., "Measure for Measure," bimonthly column in Quality Progress, ASQ, beginning September 1999 (see also lead articles September 1999 and February 2001)
Note on permission to use cartoons: these are public-domain freeware and permission is granted. See http://www.strange-matter.com/faq.html if you need details or more cartoons.
Membership: Open
Education: Open
Publications: Open
Newsletter Editor: Frank Voehl (see Treasurer)
Historian: Open

REGIONAL COUNCILORS
Region 1: Joseph Califano, Hemagen Diagnostics, Inc., 40 Bear Hill Road, Waltham, MA 02154 • (417) 890-3766, FAX (617) 890-3748
Region 2: Karl F. Speitel, 14 Kalleston Drive, Pittsford, NY 14534 • (716) 385-1838
Region 3: Eduardo M. Heidelberg, Carter Wallace, 61 Kendall Dr., Parlin, NJ 08859 • (609) 655-6521, FAX (609) 655-6736
Region 4: Alex Lau, Imperial Oil, 111 St. Clair Ave W, Toronto, Ont, Canada M5W-1K3 • (416) 968-4654, FAX (416) 968-5560, E-mail: alex.lau@esso.com
Region 5: Open
Region 6: Open
Region 7: Rolf B.F. Schumacher, Coast Quality Metrology Systems, Inc., 35 Vista Del Ponto, San Clemente, CA 92672-3122 • (949) 492-6321, FAX (949) 492-6321
Region 8: Open
Region 9: Dr. Henrik S. Nielson, HN Metrology Consulting, Inc., 5230 Nob Lane, Indianapolis, IN 46226 • (317) 849-9577, E-mail: hsnielson@worldnet.att.net
Region 10: Mark Schoenlein, Owens Illinois Plastics Group, One SeaGate 29L-PP, Toledo, OH 43666 • (419) 247-7285, FAX (419) 247-8770, E-mail: mark.schoenlein@owens-ill.com
Region 11: Raymond Perham, Michelin Tire Corp., Rt 4 Antioch Church, P.O. Box 2846, Greenville, SC 29605 • (864) 458-1425, FAX (864) 458-1807, E-mail: ray.perham@us.michelin.com, or home E-mail: r.perham007@aol.com
Region 12: Donald Ermer, University of Wisconsin Madison, 240 Mechanical Engineering Bldg., 1513 University Avenue, Madison, WI 53706-1572 • (608) 262-2557
Region 13: Thomas A. Myers, PMP, CQM, Bellevue University, 1000 Galvin Rd. S., Bellevue, NE 68123 • 1-800-756-7920 ext. 3714, FAX (402) 293-2035, E-mail: tmyers@scholars.bell
Region 14: Chuck Carter, C.L. Carter, Jr. & Associates, Inc., 1211 Glen Cove Drive, Richardson, TX 75080 • (972) 234-3296, FAX (972) 234-3296, E-mail: asqccarter@aol.com
Region 15: Bryan Miller, Champion International, Inc., P.O. Box 189, Courtland, AL 35816 • (205) 637-6735, FAX (205) 637-5202
Region 25: Open

Please notify the editor of any errors or changes so that this list can be updated.