
Measurement Errors

An error of measurement is the discrepancy between the result of a
measurement and the true value of the quantity measured.
A set of similar measurements, each carrying an error, can provide
valuable information about the accuracy and repeatability of the
measurement process.
Different types of errors are defined as follows:
Absolute error: It is the difference between the true value applied to
a measurement system and the indicated value of the system.
Absolute Error = ε = True Value - Indicated Value
Accuracy of a measurement system refers to its ability to indicate a
true value exactly. Accuracy is related to absolute error.
%Accuracy = [1 - (|ε| / True Value)] × 100
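As a short illustration, the sketch below applies these two relations in
Python (the function names, units, and sample readings are hypothetical,
not from the source):

def absolute_error(true_value, indicated_value):
    # Absolute Error = True Value - Indicated Value
    return true_value - indicated_value

def percent_accuracy(true_value, indicated_value):
    # %Accuracy = [1 - (|error| / True Value)] x 100
    eps = absolute_error(true_value, indicated_value)
    return (1.0 - abs(eps) / true_value) * 100.0

# Example: a 100.0 kPa reference indicated as 98.5 kPa
print(absolute_error(100.0, 98.5))    # 1.5 kPa
print(percent_accuracy(100.0, 98.5))  # 98.5 (% accuracy)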
Precision error: It is a measure of the random variation found during
repeated measurements.
Bias error: It is the difference between the average value and the
true value.
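A minimal sketch of both quantities, assuming Python and taking the
sample standard deviation as the precision measure (the readings below
are invented for illustration):

import statistics

# Repeated readings of a known 50.0 V reference (hypothetical)
readings = [50.2, 49.9, 50.1, 50.3, 49.8]
true_value = 50.0

bias_error = statistics.mean(readings) - true_value  # average - true value
precision_error = statistics.stdev(readings)         # scatter of readings

print(f"bias = {bias_error:+.3f} V, precision = {precision_error:.3f} V")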
Hysteresis error: It occurs when the output of a measurement
system is dependent on the previous value indicated by the system.
(A sequential test applies a sequential variation in the input value
over the desired input range.) Hysteresis error refers to differences
in the values found between going upscale and downscale in a
sequential test.
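The sketch below, assuming Python, quotes the hysteresis error as the
largest upscale/downscale difference found in a sequential test (all
readings are invented):

# Indicated values at the same input points, going upscale then downscale
upscale   = [0.1, 24.6, 49.5, 74.3, 99.8]
downscale = [0.4, 25.3, 50.4, 75.1, 99.8]

# Hysteresis error at each point is the downscale/upscale difference
hysteresis = [down - up for up, down in zip(upscale, downscale)]
print(f"max hysteresis error = {max(abs(h) for h in hysteresis):.2f}")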
Linearity error: It is a measure of the non-linear behavior of a
measurement system.
Zero error: The shift of the zero intercept of the calibration curve is
known as zero error of the measurement system.
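Both quantities can be read off a least-squares straight-line fit to
calibration data. A sketch, assuming Python 3.10+ (for
statistics.linear_regression) and hypothetical calibration data:

import statistics

x = [0.0, 1.0, 2.0, 3.0, 4.0]        # known inputs
y = [0.12, 1.05, 2.20, 2.95, 4.10]   # indicated outputs

fit = statistics.linear_regression(x, y)   # fits y = slope*x + intercept

zero_error = fit.intercept   # shift of the zero intercept of the curve
linearity_error = max(abs(yi - (fit.slope * xi + fit.intercept))
                      for xi, yi in zip(x, y))   # worst deviation from line

print(f"zero error = {zero_error:+.3f}, "
      f"linearity error = {linearity_error:.3f}")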
Overall instrument error: All known errors are put together to
develop an estimate of overall instrument error. For N known errors,
the overall instrument error e_c is given as:
e_c = [e_1^2 + e_2^2 + ... + e_N^2]^(1/2)
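In code, this root-sum-square combination is a one-liner; a sketch
assuming Python (the elemental error values below are illustrative):

import math

def overall_instrument_error(errors):
    # e_c = [e_1^2 + e_2^2 + ... + e_N^2]^(1/2)
    return math.sqrt(sum(e * e for e in errors))

elemental = [0.10, 0.15, 0.05]   # e.g. hysteresis, linearity, zero errors
print(f"e_c = {overall_instrument_error(elemental):.3f}")   # 0.187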
Uncertainty of Measurements
In any measurement other than a calibration, the error cannot be
exactly known, since the true value is not known. However, from the
results of a calibration, the operator may be confident that the error
lies within certain bounds: a plus-or-minus range about the indicated
reading.
Since the magnitude of the error in any measurement can only be
estimated, one refers to an estimate of the error in the measurement
as the uncertainty present in the measured value.
The uncertainty of an instrument originates in the errors present in
the measurement system, its calibration, and the measurement technique,
and it is manifested as measurement-system bias and precision errors.
Uncertainty can be formally defined as: the range within which the
true value of the quantity measured is likely to lie at a given level of
probability.
Because it is derived from a number of measurements, each of which
carries an error, uncertainty expresses the variability that always
occurs when more than one measurement is made.
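A minimal sketch of such an interval, assuming Python and a Student's-t
coverage factor (the readings are invented; t = 2.776 is the tabulated
value for 95 % probability with n - 1 = 4 degrees of freedom):

import math
import statistics

readings = [50.2, 49.9, 50.1, 50.3, 49.8]   # hypothetical repeated readings
n = len(readings)
mean = statistics.mean(readings)
s = statistics.stdev(readings)

t_95 = 2.776   # Student's t, 95 % probability, 4 degrees of freedom
uncertainty = t_95 * s / math.sqrt(n)   # range about the mean

print(f"measured value = {mean:.2f} +/- {uncertainty:.2f} (95 %)")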
Hierarchy of Standards
A primary standard defines the value of a unit. Once agreed upon,
the primary standard forms the exact definition of the unit until it is
changed by some later agreement.
The hierarchy of standards can be described as follows:
1. Primary standard
2. Inter-laboratory transfer standard
3. Local standard
4. Working instrument