
Instrumentation and Measurement

ME 341 (3+0)
Objectives of today's lecture
Explain error and its types for different
measurements.
Explain Calibration.

Calibration
Calibration is a comparison between measurements: one of known
magnitude or correctness, made or set with one device, and another
measurement made in as similar a way as possible with a second
device. The device with the known or assigned correctness is called
the standard. The second device is simply the test instrument.
In general use, calibration is often regarded as including the
process of adjusting the output or indication of a measurement
instrument to agree with the value of the applied standard, within a
specified accuracy.
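
As a minimal sketch (not from the lecture), the comparison-and-adjustment step can be illustrated in Python. The reference values, test-instrument readings, and the simple linear (gain/offset) correction are assumptions made for the example.

    def fit_linear_correction(readings, standards):
        # Least-squares fit of: standard value = gain * instrument reading + offset
        n = len(readings)
        mean_x = sum(readings) / n
        mean_y = sum(standards) / n
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(readings, standards))
        sxx = sum((x - mean_x) ** 2 for x in readings)
        gain = sxy / sxx
        offset = mean_y - gain * mean_x
        return gain, offset

    # Hypothetical comparison points: known standard values vs. test-instrument readings
    standard_values     = [0.0, 25.0, 50.0, 75.0, 100.0]
    instrument_readings = [0.4, 25.9, 51.1, 76.3, 101.2]

    gain, offset = fit_linear_correction(instrument_readings, standard_values)

    # Adjust a new raw reading so it agrees with the standard
    raw = 60.2
    corrected = gain * raw + offset
    print(f"gain = {gain:.4f}, offset = {offset:.4f}, corrected reading = {corrected:.2f}")

In practice the residual differences after adjustment would be checked against the specified accuracy to decide whether the calibration passes.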

Why instruments need calibration


Measurement is the act, or the result, of a quantitative
comparison between a predetermined standard and an
unknown magnitude. It is essential that the procedure
and apparatus employed for obtaining the comparison be
provable, i.e. they must be shown to measure reliably.
The procedure for this is called calibration. In order that the
result of a measurement be meaningful to all, the standard
used for comparison must be accurately known and
commonly accepted. Furthermore, the apparatus and
method used for the comparison must be provable.

When instruments need calibration


Calibration is required:

with a new instrument


when a specified time period has elapsed
when a specified usage (operating hours) has elapsed
(see the sketch after this list)
when an instrument has had an accident, or has been
mishandled, in a way that may have put it out of
calibration
after sudden changes in weather
whenever observations appear questionable
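
The time-period and operating-hours conditions above lend themselves to a simple record check. Below is a minimal sketch (the record fields and limits are assumed, not part of the lecture) of deciding when recalibration is due.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class CalibrationRecord:
        last_calibrated: date      # date of the most recent calibration
        interval_days: int         # specified recalibration period
        operating_hours: float     # usage accumulated since the last calibration
        usage_limit_hours: float   # specified usage limit

    def calibration_due(rec, today=None):
        # Due when either the specified time period or the usage limit has elapsed
        today = today or date.today()
        period_elapsed = (today - rec.last_calibrated) >= timedelta(days=rec.interval_days)
        usage_elapsed = rec.operating_hours >= rec.usage_limit_hours
        return period_elapsed or usage_elapsed

    rec = CalibrationRecord(date(2023, 1, 15), 365, 480.0, 500.0)
    print(calibration_due(rec, today=date(2024, 2, 1)))   # True: the 365-day period has elapsed

Accidents, mishandling, weather changes, and questionable observations still require judgement and cannot be checked automatically in this way.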

Calibration methods
Calibration procedures involve a comparison of the
particular instrument with either:
1. A primary standard
2. A secondary standard with a higher accuracy
than the instrument to be calibrated
3. A known input source
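
A sketch of the third method, comparison against a known input source, is given below; the input values, readings, and tolerance are invented for illustration. The deviation computed here (measured value minus true value) is exactly the error defined on the next slide.

    def check_against_standard(known_inputs, instrument_readings, tolerance):
        # Deviation of each reading from the known input, flagged against a tolerance
        results = []
        for true_value, measured in zip(known_inputs, instrument_readings):
            deviation = measured - true_value      # error = measured value - true value
            within_spec = abs(deviation) <= tolerance
            results.append((true_value, measured, deviation, within_spec))
        return results

    known_inputs        = [100.0, 200.0, 300.0]   # e.g. outputs of a known input source
    instrument_readings = [100.8, 201.5, 303.2]   # hypothetical readings of the instrument under test
    for row in check_against_standard(known_inputs, instrument_readings, tolerance=2.0):
        print(row)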

Error
Error in Measurement: The difference between the
measured value and the true value of the measured
quantity is termed the error in measurement.
Error = measured value - true value

Uncertainty (or Uncertainty Interval):
Uncertainty is an estimate of the limits of error in a
measurement. It is always stated with some level
of confidence.

Error in measurement is never known (except in the case of calibration, where
the input measurements are known). It can only be estimated within an
uncertainty interval with some level of confidence. For example, a
tachometer measures engine speed as 4000 rpm. The range of the tachometer
is 10 000 rpm and the accuracy is specified as 5% of full scale with a 95%
confidence level. On average, for a set of 100 readings, 5 readings will
either be greater than 4500 rpm or less than 3500 rpm (95% will fall
between 4000 - 0.05 x 10 000 = 3500 rpm and 4000 + 0.05 x 10 000 =
4500 rpm). The uncertainty in this case is +/- 500 rpm with a 95% confidence
level.
The specification of error as +/- 500 rpm is termed the absolute error, and +/- 5%
of full scale is termed the relative error.
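
The arithmetic of the tachometer example can be reproduced directly (a small sketch using the numbers from the slide):

    full_scale = 10_000      # tachometer range, rpm
    accuracy   = 0.05        # 5% of full scale, at 95% confidence
    reading    = 4000        # indicated engine speed, rpm

    absolute_uncertainty = accuracy * full_scale              # +/- 500 rpm (absolute error)
    lower = reading - absolute_uncertainty                    # 3500 rpm
    upper = reading + absolute_uncertainty                    # 4500 rpm
    relative_uncertainty = absolute_uncertainty / full_scale  # 0.05, i.e. 5% of full scale (relative error)

    print(f"95% of readings expected between {lower:.0f} and {upper:.0f} rpm")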

Types of error
Gross errors: largely human errors, among them
misreading of instruments, incorrect adjustment and
improper application of instruments, and computational
mistakes.
Random Error (Precision Error or Noise): Error caused
by any factors that randomly affect measurement of the
variable across the sample. Random errors are
characterized by a lack of repeatability in the output of the
measuring system. Random error adds to the variability of the
data but does not affect the average of the group.
Random error = reading - average of readings

Systematic Error (Fixed Error or Bias): Error
caused by any factors that systematically
affect measurement of the variable across
the sample. Systematic errors are characterized
by repeatability in the output of the measuring
system. Systematic errors tend to be
consistently positive or negative.
Systematic error = average of readings - true value
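
Both formulas are easy to apply to a set of repeated readings of a known input (a calibration scenario, since the true value must be known). The readings and true value below are hypothetical.

    readings   = [4010, 3990, 4025, 3985, 4005]  # repeated readings of the same input
    true_value = 3950                            # known input, available only during calibration

    mean_reading = sum(readings) / len(readings)

    systematic_error = mean_reading - true_value             # average of readings - true value
    random_errors = [r - mean_reading for r in readings]     # reading - average of readings

    print(f"systematic (bias) error: {systematic_error:.1f}")
    print("random errors:", [round(e, 1) for e in random_errors])

The random errors scatter around zero (they add variability but do not shift the average), while the systematic error is the consistent offset of the average from the true value.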
