
Calibration

From Wikipedia, the free encyclopedia


"Zeroing" redirects here. For the U.S. government antidumping duties, see Zeroing (trade). For
other uses, see Zeroing (disambiguation).


In measurement technology and metrology, calibration is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy. Such a standard could be another measurement device of known accuracy, a device generating the quantity to be measured such as a voltage, a sound tone, or a physical artefact, such as a metre ruler.
The outcome of the comparison can result in one of the following:

• no significant error being noted on the device under test
• a significant error being noted but no adjustment made
• an adjustment made to correct the error to an acceptable level
Strictly speaking, the term "calibration" means just the act of comparison, and does not include
any subsequent adjustment.
The calibration standard is normally traceable to a national standard held by a national
metrological body.

Contents

• 1 BIPM Definition
• 2 Modern calibration processes
o 2.1 Quality
• 3 Instrument calibration prompts
• 4 Basic calibration process
o 4.1 Purpose and scope
o 4.2 Frequency
o 4.3 Standards required and accuracy
o 4.4 Manual and automatic calibrations
o 4.5 Process description and documentation
• 5 Historical development
o 5.1 Origins
o 5.2 Calibration of weights and distances (c. 1100 CE)
o 5.3 The early calibration of pressure instruments
• 6 See also
• 7 References
• 8 External links

BIPM Definition
The formal definition of calibration by the International Bureau of Weights and Measures (BIPM)
is the following: "Operation that, under specified conditions, in a first step, establishes a relation
between the quantity values with measurement uncertainties provided by measurement
standards and corresponding indications with associated measurement uncertainties (of the
calibrated instrument or secondary standard) and, in a second step, uses this information to
establish a relation for obtaining a measurement result from an indication."[1]
This definition states that the calibration process is purely a comparison, but introduces the
concept of measurement uncertainty in relating the accuracies of the device under test and the
standard.
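The following is a minimal sketch, in Python, of the two-step idea in the definition above: a relation between standard values and instrument indications is first established from comparison data, and that relation is then used to turn a later indication into a measurement result. The data points and the linear model are illustrative assumptions, not part of the BIPM text.

```python
# Illustrative sketch of the two-step BIPM notion of calibration.
# Step 1: establish a relation between standard values and indications.
# Step 2: use that relation to turn a later indication into a result.
# The data points and the linear model are assumptions for illustration.
import numpy as np

# Comparison data: values provided by the standard vs. indications of the
# device under test (hypothetical numbers).
standard_values = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # e.g. degC
indications     = np.array([0.4, 25.2, 50.1, 74.7, 99.5])   # device readout

# Step 1: fit a simple linear relation from indication to measured value.
slope, intercept = np.polyfit(indications, standard_values, 1)

# Step 2: apply the relation to a new indication.
def measurement_result(indication):
    """Convert a raw indication into a calibrated measurement result."""
    return slope * indication + intercept

print(measurement_result(42.0))
```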

Modern calibration processes


The increasing need for known accuracy and uncertainty and the need to have consistent and
comparable standards internationally has led to the establishment of national laboratories. In
many countries a National Metrology Institute (NMI) exists, maintaining primary standards of measurement (the main SI units plus a number of derived units) that are used to provide traceability to customers' instruments by calibration.
The NMI supports the metrological infrastructure in that country (and often others) by
establishing an unbroken chain, from the top level of standards to an instrument used for
measurement. Examples of National Metrology Institutes are NPL in the UK, NIST in the United
States, PTB in Germany and many others. Since the Mutual Recognition Agreement was signed, it is now straightforward to take traceability from any participating NMI, and it is no longer necessary for a company to obtain traceability for measurements from the NMI of the country in which it is situated, such as the National Physical Laboratory in the UK.
Quality
To improve the quality of the calibration and have the results accepted by outside organizations it
is desirable for the calibration and subsequent measurements to be "traceable" to the
internationally defined measurement units. Establishing traceability is accomplished by a formal
comparison to a standard which is directly or indirectly related to national standards (such
as NIST in the USA), international standards, or certified reference materials. This may be done
by national standards laboratories operated by the government or by private firms offering
metrology services.
Quality management systems call for an effective metrology system which includes formal,
periodic, and documented calibration of all measuring instruments. The ISO 9000[2] and ISO 17025[3] standards require that these traceable actions be carried out to a high level and set out how they can be quantified.
To communicate the quality of a calibration, the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty analysis. Sometimes a DFS (Departure From Spec) is required to operate machinery in a degraded state. Whenever this does happen, it must be in writing and authorized by a manager with the technical assistance of a calibration technician.
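As a rough illustration of how such a traceable uncertainty statement can be produced, the sketch below combines standard uncertainty components in quadrature and applies a coverage factor k = 2, in the style of a GUM uncertainty budget. The component names and values are illustrative assumptions, not data from a real budget.

```python
# Sketch of a GUM-style uncertainty budget: standard uncertainty components
# are combined in quadrature and expanded with a coverage factor k = 2,
# which corresponds to roughly 95% confidence for a normal distribution.
# The component values are illustrative assumptions, not real budget data.
import math

components = {
    "reference standard": 0.010,  # standard uncertainty, in measurement units
    "resolution":         0.003,
    "repeatability":      0.006,
}

combined = math.sqrt(sum(u ** 2 for u in components.values()))
k = 2.0  # coverage factor for approximately 95% confidence
expanded = k * combined

print(f"U = {expanded:.3f} (k = {k}, approx. 95% confidence)")
```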
Measuring devices and instruments are categorized according to the physical quantities they are
designed to measure. These vary internationally, e.g., NIST 150-2G in the U.S.[4] and NABL-141
in India.[5] Together, these standards cover instruments that measure various physical quantities
such as electromagnetic radiation (RF probes), sound (sound level meter or noise dosimeter),
time and frequency (intervalometer), ionizing radiation (Geiger counter), light (light meter),
mechanical quantities (limit switch, pressure gauge, pressure switch), and thermodynamic or thermal properties (thermometer, temperature controller). The standard instrument for each test
device varies accordingly, e.g., a dead weight tester for pressure gauge calibration and a dry
block temperature tester for temperature gauge calibration.

Instrument calibration prompts


Calibration may be required for the following reasons:

• a new instrument
• after an instrument has been repaired or modified
• when a specified time period has elapsed
• when a specified usage (operating hours) has elapsed
• before and/or after a critical measurement
• after an event, for example
o after an instrument has been exposed to a shock, vibration, or physical damage, which might potentially have compromised the integrity of its calibration
o sudden changes in weather
• whenever observations appear questionable or instrument indications do not match the output of surrogate instruments
• as specified by a requirement, e.g., customer specification, instrument manufacturer recommendation.
In general use, calibration is often regarded as including the process of adjusting the output or
indication on a measurement instrument to agree with the value of the applied standard, within a
specified accuracy. For example, a thermometer could be calibrated so the error of indication or
the correction is determined, and adjusted (e.g. via calibration constants) so that it shows the
true temperature in Celsius at specific points on the scale. This is the perception of the
instrument's end-user. However, very few instruments can be adjusted to exactly match the
standards they are compared to. For the vast majority of calibrations, the calibration process is
actually the comparison of an unknown to a known and recording the results.
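As a concrete reading of the thermometer example, the sketch below derives calibration constants (a gain and an offset) from two comparison points and applies them to raw readings. The fixed points and raw readings are illustrative assumptions, not a prescribed procedure.

```python
# Minimal sketch of "calibration constants" for a thermometer: a gain and an
# offset determined at two known points and then applied to raw readings.
# The fixed points and raw readings below are illustrative assumptions.

RAW_AT_0C   = 0.6    # reading at the ice point (0 degC)
RAW_AT_100C = 99.2   # reading at the boiling point (100 degC)

# Calibration constants derived from the two comparison points.
GAIN   = 100.0 / (RAW_AT_100C - RAW_AT_0C)
OFFSET = -GAIN * RAW_AT_0C

def corrected_celsius(raw_reading):
    """Apply the stored calibration constants to a raw indication."""
    return GAIN * raw_reading + OFFSET

print(corrected_celsius(37.4))
```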

Basic calibration process


Purpose and scope
The calibration process begins with the design of the measuring instrument that needs to be
calibrated. The design has to be able to "hold a calibration" through its calibration interval. In
other words, the design has to be capable of measurements that are "within engineering
tolerance" when used within the stated environmental conditions over some reasonable period of
time.[6] Having a design with these characteristics increases the likelihood of the actual
measuring instruments performing as expected. Basically, the purpose of calibration is to maintain the quality of measurement and to ensure the proper working of a particular instrument.
Frequency
The exact mechanism for assigning tolerance values varies by country and by industry type. The measuring equipment manufacturer generally assigns the measurement tolerance, suggests a calibration interval (CI), and specifies the environmental range of use and storage. The using organization generally assigns the actual calibration interval, which is
dependent on this specific measuring equipment's likely usage level. The assignment of
calibration intervals can be a formal process based on the results of previous calibrations. The
standards themselves are not clear on recommended CI values:[7]
ISO 17025[3]
"A calibration certificate (or calibration label) shall not contain any recommendation on
the calibration interval except where this has been agreed with the customer. This
requirement may be superseded by legal regulations.”
ANSI/NCSL Z540[8]
"...shall be calibrated or verified at periodic intervals established and maintained to
assure acceptable reliability..."
ISO-9001[2]
"Where necessary to ensure valid results, measuring equipment shall...be calibrated or
verified at specified intervals, or prior to use...”
MIL-STD-45662A[9]
"... shall be calibrated at periodic intervals established and maintained to assure
acceptable accuracy and reliability...Intervals shall be shortened or may be lengthened,
by the contractor, when the results of previous calibrations indicate that such action is
appropriate to maintain acceptable reliability."
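None of the standards quoted above prescribes a formula for setting intervals. The sketch below shows one simple, informal rule of the kind MIL-STD-45662A alludes to, shortening the interval after an out-of-tolerance result and lengthening it after a run of in-tolerance results. The thresholds and adjustment factors are assumptions for illustration only.

```python
# Informal interval-adjustment rule: shorten the calibration interval after an
# out-of-tolerance result, lengthen it after several in-tolerance results in a
# row. The factors and thresholds are illustrative, not from any standard.

def next_interval_months(current_months, recent_results):
    """recent_results: list of booleans, True = found in tolerance."""
    if not recent_results:
        return current_months
    if not recent_results[-1]:                 # last calibration out of tolerance
        return max(1, round(current_months * 0.5))
    if len(recent_results) >= 3 and all(recent_results[-3:]):
        return round(current_months * 1.25)    # three in-tolerance results in a row
    return current_months

print(next_interval_months(12, [True, True, True]))  # -> 15
print(next_interval_months(12, [True, False]))       # -> 6
```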
Standards required and accuracy
The next step is defining the calibration process. The selection of a standard or
standards is the most visible part of the calibration process. Ideally, the standard
has less than 1/4 of the measurement uncertainty of the device being calibrated.
When this goal is met, the accumulated measurement uncertainty of all of the
standards involved is considered to be insignificant when the final measurement
is also made with the 4:1 ratio.[10] This ratio was probably first formalized in
Handbook 52 that accompanied MIL-STD-45662A, an early US Department of
Defense metrology program specification. It was 10:1 from its inception in the
1950s until the 1970s, when advancing technology made 10:1 impossible for
most electronic measurements.[11]
Maintaining a 4:1 accuracy ratio with modern equipment is difficult. The test
equipment being calibrated can be just as accurate as the working standard.[10] If
the accuracy ratio is less than 4:1, then the calibration tolerance can be reduced
to compensate. When 1:1 is reached, only an exact match between the standard
and the device being calibrated is a completely correct calibration. Another
common method for dealing with this capability mismatch is to reduce the
accuracy of the device being calibrated.
For example, a gauge with 3% manufacturer-stated accuracy can be changed to
4% so that a 1% accuracy standard can be used at 4:1. If the gauge is used in
an application requiring 16% accuracy, having the gauge accuracy reduced to
4% will not affect the accuracy of the final measurements. This is called a limited
calibration. But if the final measurement requires 10% accuracy, then the 3%
gauge never can be better than 3.3:1. Then perhaps adjusting the calibration
tolerance for the gauge would be a better solution. If the calibration is performed
at 100 units, the 1% standard would actually be anywhere between 99 and 101
units. The acceptable values of calibrations where the test equipment is at the
4:1 ratio would be 96 to 104 units, inclusive. Changing the acceptable range to
97 to 103 units would remove the potential contribution of all of the standards
and preserve a 3.3:1 ratio. Continuing, a further change to the acceptable range
to 98 to 102 restores more than a 4:1 final ratio.
This is a simplified example. The mathematics of the example can be
challenged. It is important that whatever thinking guided this process in an
actual calibration be recorded and accessible. Informality contributes to tolerance stacks and other difficult-to-diagnose post-calibration problems.
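The guard-band arithmetic in the example above can be written out as follows; the function reproduces the 96 to 104, 97 to 103, and 98 to 102 acceptance ranges. It is one reading of the simplified example, not a formula taken from any standard.

```python
# Sketch of the guard-band arithmetic from the example above: the acceptance
# range for the device under test is tightened by the possible error of the
# standard, so the standard's contribution cannot pass a bad unit.
# This is one reading of the simplified example, not a formula from a standard.

def acceptance_limits(nominal, uut_tolerance_pct, guard_band_pct=0.0):
    """Return (low, high) acceptance limits around the nominal value."""
    half_width = nominal * (uut_tolerance_pct - guard_band_pct) / 100.0
    return nominal - half_width, nominal + half_width

print(acceptance_limits(100, 4.0))       # (96.0, 104.0): raw 4% tolerance
print(acceptance_limits(100, 4.0, 1.0))  # (97.0, 103.0): 1% standard removed
print(acceptance_limits(100, 4.0, 2.0))  # (98.0, 102.0): restores more than 4:1
```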
Also in the example above, ideally the calibration value of 100 units would be
the best point in the gauge's range to perform a single-point calibration. It may
be the manufacturer's recommendation or it may be the way similar devices are
already being calibrated. Multiple point calibrations are also used. Depending on
the device, a zero unit state, the absence of the phenomenon being measured,
may also be a calibration point. Or zero may be resettable by the user; there are several variations possible. Again, the points to use during calibration should be recorded.
There may be specific connection techniques between the standard and the
device being calibrated that may influence the calibration. For example, in
electronic calibrations involving analog phenomena, the impedance of the cable
connections can directly influence the result.
Manual and automatic calibrations
Calibration methods for modern devices can be manual or automatic.
Manual calibration - US serviceman calibrating a pressure gauge. The device under
test is on his left and the test standard on his right.

As an example, a manual process may be used for calibration of a pressure gauge. The procedure requires multiple steps,[12][citation needed] to connect the gauge
under test to a reference master gauge and an adjustable pressure source, to
apply fluid pressure to both reference and test gauges at definite points over the
span of the gauge, and to compare the readings of the two. The gauge under
test may be adjusted to ensure its zero point and response to pressure comply
as closely as possible to the intended accuracy. Each step of the process
requires manual record keeping.

Automatic calibration - A U.S. serviceman using a 3666C auto pressure calibrator

An automatic pressure calibrator[13] is a device that combines an electronic control unit, a pressure intensifier used to compress a gas such as nitrogen,
a pressure transducer used to detect desired levels in a hydraulic accumulator,
and accessories such as liquid traps and gauge fittings. An automatic system
may also include data collection facilities to automate the gathering of data for
record keeping.
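The sketch below gives a rough picture of what the data-collection side of such an automatic system might look like. The set_pressure, read_reference, and read_unit_under_test functions stand in for a real instrument driver and are hypothetical placeholders, as are the calibration points and the settling time.

```python
# Rough sketch of automated data collection during a pressure calibration run.
# set_pressure(), read_reference() and read_unit_under_test() stand in for a
# real instrument driver; they are hypothetical placeholders, as are the
# calibration points and the settling time.
import csv
import time

CAL_POINTS_KPA = [0, 250, 500, 750, 1000]  # illustrative pressure points

def run_calibration(set_pressure, read_reference, read_unit_under_test,
                    outfile="calibration_record.csv"):
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["applied_kPa", "reference_kPa", "uut_kPa", "error_kPa"])
        for point in CAL_POINTS_KPA:
            set_pressure(point)   # command the pressure controller
            time.sleep(5)         # allow the readings to settle
            ref = read_reference()
            uut = read_unit_under_test()
            writer.writerow([point, ref, uut, uut - ref])
```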
