


Objectives of engineering measurement - basic measuring system: block diagram and
description - performance characteristics of instruments: static and dynamic - errors in
measurement, error analysis - units, dimensions, standards - instrument calibration.

References:

1. Electronic Measurements & Instrumentation - R. K. Rajput
2. Measurement and Instrumentation Principles - Alan S. Morris
3. Electronic Instrumentation - H. S. Kalsi

Introduction - Objectives of Engineering Measurement

Measurement of a given parameter is the quantitative comparison between a predefined

standard and an unknown quantity.

Requirements for a Basic Measuring System

1. The comparison standard must be accurately defined and commonly accepted.

2. The procedure and the instrument used for the comparison must be provable.

Advantages of Electronic Measurement

1. Most of the quantities can be converted by transducers into electrical or electronic signals.
2. Electrical or electronic signals can be amplified, filtered, multiplexed, sampled and measured.
3. The measurement can easily be converted into digital form.
4. The measured signals can be transmitted over long distances.
5. Higher sensitivity, lower power consumption and a higher degree of reliability.

ECE DEPT, MBITS, 2017 Page 1


Block Diagram of Basic Measuring System - U.Q. [12 marks]

The various elements can be grouped as:

1. Primary sensing element
2. Data conditioning element
3. Data presentation element

1. Primary sensing element

The element of an instrument which makes first contact with the quantity to be measured is
called the primary sensing element. For example, in an ammeter the coil carrying the current to
be measured is the primary sensing element. In most cases the primary sensing element is
followed by a transducer, which converts the measurand into a corresponding electrical signal.


Mechanical: contacting spindle, elastic devices

Electrical: resistances, capacitances, Inductances

Optical: photoelectric diodes and transistors

2. Variable conversion element

The output of the primary sensing element is an electrical signal such as a voltage or a
frequency. This may not be suitable for the rest of the measurement system. For example, if the
measurement system is digital, the analog signal obtained from the primary sensing element is
not directly usable, so the variable conversion element is an analog-to-digital converter. Some
instruments do not need a variable conversion element.

Example: ADC
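As an illustration of what a variable conversion element does, the sketch below quantizes an analog voltage into an n-bit code; the 5 V reference and 8-bit resolution are assumed values for illustration, not taken from any particular converter:

```python
def adc_code(v_in, v_ref=5.0, n_bits=8):
    """Quantize an analog voltage in [0, v_ref) to an n-bit digital code."""
    levels = 2 ** n_bits
    lsb = v_ref / levels                    # voltage step of one code
    code = int(v_in / lsb)                  # truncate to the lower level
    return min(max(code, 0), levels - 1)    # clamp to the valid code range

# A 2.5 V input on a 5 V, 8-bit converter sits at mid-scale:
print(adc_code(2.5))   # 128
```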



3. Variable manipulation element

The variable manipulation element changes the numerical value of the signal while preserving
its original nature. This is needed because the output from the previous stages is often not
sufficient to drive the next stage.

For example, an amplifier increases the magnitude of the input at its output while keeping the
original nature of the signal. If the output of the previous stage must instead be reduced,
attenuators are used as the variable manipulation element.

Sometimes the signals must be processed by operations such as modulation, clipping and
clamping to obtain the signal in a pure and acceptable form from a highly distorted form. Such
processing is called signal conditioning, and it is also done in the second stage. Hence the
second stage is called the data conditioning or signal conditioning element.


Mechanical: gearing, cranks

Electrical: amplifying or attenuating systems, filters, bridges

Optical: Mirrors, lenses, optical filters

4. Data transmission element

When the elements of the system are physically separated, it is necessary to transmit data
from one stage to another. This is achieved by the data transmission element. The signal
conditioning and data transmission elements together are called the intermediate stage of an
instrument.

5. Data presentation element

The transmitted data is finally used by the system for monitoring, controlling or analyzing
purposes. The user thus gets the information in a form appropriate to the purpose for which it
is intended. This function is performed by the data presentation element.

If the data is to be monitored then visual display devices are used as data presentation element.

If the signal is to be recorded for analysis purpose then magnetic tapes, recorders, high speed
cameras are used as data presentation elements.

For control and analysis purpose microprocessors, computers and microcontrollers are used as
data presentation elements.



This stage is called terminating stage of an instrument.


Indicator types: moving pointer and scale, liquid column

Digital types: direct alphanumeric readouts

Recorder: digital printing, inked pen and chart

Processors and computers


As an example, consider a moving-coil meter. The moving coil is the primary sensor. The
magnet and coil together act as the data conditioning stage, converting the current in the coil
into a force. This force is transmitted to the pointer through mechanical linkages, which act as
the data transmission element. The pointer and scale act as the data presentation element.

Performance Characteristics

Divided into 2 categories

1. Static characteristics 2. Dynamic characteristics

Static characteristics

Static characteristics are considered for instruments which are used to measure an unvarying
process condition.
1. Accuracy, Error and Correction
Accuracy is defined as the measure of closeness of the output reading of an instrument to the
accepted standard (true) value. Accuracy depends on the following factors:

Variation of signal being measured

Accuracy of instrument itself

Accuracy of observer



Static Error: The difference between best measured value and the true value of the quantity is
called as static error.
Es = Vm - Vt
Where Vm = measured value of quantity
Vt = true value of quantity
Es = static error
Relative Static Error: The relative static error is defined as the ratio of absolute static error to
the true value of quantity under measurement.
Er = Es/Vt = (Vm - Vt)/Vt
The difference between the true value and the measured value of a quantity is called the
correction (Cs):
Cs = Vt - Vm = -Es
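These three relations can be checked with a short computed example; the voltmeter readings below are made-up values for illustration:

```python
def static_error(v_measured, v_true):
    """Return absolute static error Es, relative error Er and correction Cs."""
    es = v_measured - v_true   # Es = Vm - Vt
    er = es / v_true           # Er = Es / Vt
    cs = -es                   # Cs = Vt - Vm = -Es
    return es, er, cs

# A voltmeter reads 102 V when the true value is 100 V:
es, er, cs = static_error(102.0, 100.0)
print(es, er, cs)   # 2.0 0.02 -2.0
```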
2. Precision/Reproducibility/repeatability

Precision: It is the measure of consistency of instrument output for a given input. i.e. successive
readings do not differ.
Repeatability: describes the closeness of output readings when the same input is applied
repetitively over a short period of time, with the same measurement conditions, same instrument
and observer, same location and same conditions of use maintained throughout.

Reproducibility: describes the closeness of output readings for the same input when there are
changes in the method of measurement, observer, measuring instrument, location, conditions of
use and time of measurement.



3. Tolerance: Tolerance is a term that is closely related to accuracy and defines the maximum
error that is to be expected in some value.
4. Range and Span:
The range of an instrument defines the minimum and maximum values of a quantity that
the instrument is designed to measure.
Span represents the algebraic difference between the upper and lower range values of the
instrument.
Eg: Range: 2 kg to 50 kg; Span: 50 - 2 = 48 kg

5. Drift:
It is an undesired gradual departure of instrument output over a period of time that is
unrelated to changes in input, operating conditions or load. Drift may be caused by following

1. High mechanical stresses developed in some parts of the instrument

2. Wear and tear
3. Mechanical vibrations
4. Temperature changes
5. Stray electric and magnetic fields

The different types of drifts are:

Zero drift: the drift is called zero drift if the whole of the instrument calibration gradually shifts
over by the same amount.

Span drift: if the calibration from zero upwards changes proportionally then it is called span or
sensitivity drift.

Zonal drift: when the drift occurs only over a portion of the span of an instrument, it is called
zonal drift.
6. Linearity
It is normally desirable that the output reading of an instrument is linearly proportional to
the quantity being measured. The ability to reproduce the input characteristics symmetrically is
called linearity, and it can be expressed by a straight-line equation. In other words, linearity is
a measure of the maximum deviation of any of the calibration points from the ideal straight line.
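As a sketch of how this figure can be computed, the snippet below fits a least-squares straight line through a set of invented calibration points and reports the maximum deviation of any point from that line:

```python
# Invented calibration data: input applied vs. output indicated.
inputs  = [0.0, 1.0, 2.0, 3.0, 4.0]
outputs = [0.1, 1.0, 2.2, 2.9, 4.1]

n = len(inputs)
mean_x = sum(inputs) / n
mean_y = sum(outputs) / n

# Least-squares straight line y = slope * x + intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs))
         / sum((x - mean_x) ** 2 for x in inputs))
intercept = mean_y - slope * mean_x

# Non-linearity: largest deviation of any calibration point from the line.
max_dev = max(abs(y - (slope * x + intercept))
              for x, y in zip(inputs, outputs))
print(round(max_dev, 2))   # maximum deviation from the best-fit line
```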



7. Threshold
It defines the minimum value of input which is necessary to cause a detectable change
from zero output.

8. Resolution
When the input is slowly increased from some non-zero value, it is observed that the output
does not change until a certain increment of the input is exceeded. This increment is called the
resolution.
9. Hysteresis
For a given value of input, the output may be different depending on whether input is
increasing or decreasing. Hysteresis is the difference between these two values of output.



If the input to the instrument is increased steadily from a negative value, the output also
increases, as shown by curve 1. If the input is now decreased steadily, the output does not
retrace the same curve but lags behind by a certain value, tracing curve 2 as shown in the figure.
The difference between the two curves is called hysteresis.

10. Dead zone or dead space and dead time

Dead zone is defined as the range of input values over which there is no change in the
output value.

The time required by a measurement system to begin to respond to a change in the

measurand is termed dead time.
11. Sensitivity: the ratio of the magnitude of the output signal to that of the input signal, i.e. the
response of the measuring system to the quantity being measured.

When elements are arranged in series, the overall sensitivity is the product of the individual
sensitivities.
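A minimal numerical check of the series rule, using a hypothetical chain of a transducer (0.2 mV/°C), an amplifier (gain 100) and a recorder (5 mm/mV) — the three sensitivity values are invented for illustration:

```python
from functools import reduce

def overall_sensitivity(sensitivities):
    """Series-connected elements: overall sensitivity is the product of
    the individual sensitivities (the units multiply stage by stage)."""
    return reduce(lambda a, b: a * b, sensitivities, 1.0)

# Hypothetical chain: transducer 0.2 mV/degC -> amplifier x100 -> recorder 5 mm/mV
print(overall_sensitivity([0.2, 100.0, 5.0]))   # 100.0 (mm/degC)
```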

Dynamic Characteristics

The dynamic characteristics of a measuring instrument describe its behavior between the
time a measured quantity changes value and the time when the instrument output attains a steady
value in response.

The dynamic characteristics of an instrument are

(i) Speed of response : It is the rapidity with which an instrument responds to changes in
measured quantity

(ii) Fidelity: It is defined as the degree to which a measurement system indicates

changes in the measured quantity without any dynamic error.

(iii) Lag: It refers to retardation or delay in the response of measurement system to changes in
measured quantity.

Retardation type lag: the response of the measurement system begins immediately after a
change in the measured quantity has occurred.
Time delay type lag: the response begins only after a dead time following the application of
the input.

(iv) Dynamic Error: it is the difference between the true value of quantity changing with time
and the value indicated by instrument.




Errors in Measurement

Errors are inherent both in the process of making measurements and in the instruments used
for measurement.
Limiting or Guarantee Error

The manufacturers specify the accuracy of the instruments within a certain percentage of full
scale reading. The components like the resistor, inductor, and capacitor are guaranteed to be
within a certain percentage of rated value. This percentage indicates the deviations from the
nominal or specified value of the particular quantity. These deviations from the specified value
are called Limiting Errors. These are also called Guarantee Errors.

Relative limiting error Er = δA/A, where δA is the magnitude of the limiting error and A is the
specified (nominal) value; % limiting error = Er × 100%.

Eg: 100 ± 4%: the 4% represents the limiting error
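For instance, the guaranteed bounds implied by a limiting error can be computed directly (the 100-unit nominal value is the example above):

```python
def guaranteed_bounds(nominal, limiting_error_pct):
    """Bounds within which the true value is guaranteed to lie."""
    delta = nominal * limiting_error_pct / 100.0   # absolute limiting error
    return nominal - delta, nominal + delta

# A component rated 100 (in its unit) with a 4 % limiting error:
low, high = guaranteed_bounds(100.0, 4.0)
print(low, high)   # 96.0 104.0
```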




Static Error: The difference between the best measured value and the true value of the quantity
is called the static error.
Es = Vm - Vt
Where Vm = measured value of quantity
Vt = true value of quantity
Es = static error

Static Error Classification



1. Gross Error

Due to human mistakes in reading instruments, recording and calculating results

Also due to improper use of instruments

Impossible to eliminate, but can be anticipated and corrected.

The best example of these errors is an operator reading a pressure gauge indication of
1.01 N/m² as 1.10 N/m². It may be due to the person's bad habit of not properly noting down the
data at the time of taking the reading, then writing, calculating and presenting wrong data at a
later time. Such errors may carry through to the calculation of the final results, thus distorting
them.

Can be avoided by adopting two means:

Great care should be taken while taking the reading and recording the data.
Multiple readings should be taken for the quantity being measured.

2. Systematic Error

Occur due to shortcomings of instruments, such as defective or worn parts, ageing, or the
effects of the environment on the instrument.

Located by taking repeated measurements under different conditions or with different
instruments.

Also called bias; they influence all measurements of a quantity alike.



a) Instrumental Error

Inherent in measuring instrument because of their mechanical structure

Also due to misuse of instruments and loading effects
Loading effects: incapability of system to faithfully measure inputs
Can be avoided by

Selecting a suitable instrument for the particular measurement

Applying correction factors after determining the amount of instrumental error
Calibrating the instrument against a standard

b) Environmental Error
Due to conditions external to measuring device
Eg: temperature, humidity, pressure, vibrations etc
Can be avoided by

Using instruments under the controlled conditions of temperature, pressure and humidity in
which they were assembled and calibrated.
Measuring deviations of local conditions from the calibrated conditions and then applying
suitable corrections.
Applying automatic compensation for departures from calibrated conditions.
Making a new calibration under local conditions.
Providing magnetic and electrostatic shields to protect instruments from these effects.

c) Observational Error

Due to carelessness of operator

Parallax: error arising because the pointer and scale are not in the same plane
Wrong scale reading and recording
Inaccurate estimate of the average reading
Incorrect conversion of units

Avoided by using instruments having a digital display of the output.



3. Random Errors

Accidental, small and independent

Vary in unpredictable manner

Random errors are caused by sudden changes in experimental conditions, by noise, and by
fatigue of the working persons.
These errors can be either positive or negative.
Examples of random errors are changes in humidity, unexpected changes in temperature, and
fluctuations in voltage.
These errors may be reduced by taking the average of a large number of readings.
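The benefit of averaging can be demonstrated with a small simulation; the noise range and number of readings below are arbitrary assumptions:

```python
import random

random.seed(1)               # fixed seed so the run is repeatable
true_value = 10.0

# Simulated noisy readings: the true value plus a random disturbance.
readings = [true_value + random.uniform(-0.5, 0.5) for _ in range(1000)]

single_error = abs(readings[0] - true_value)
mean_error = abs(sum(readings) / len(readings) - true_value)

# The average of many readings lies far closer to the true value
# than a typical single reading does.
print(mean_error < single_error)   # True
```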

Sources of Errors
Sources of systematic errors
1. System disturbance during measurement.
2. Effect of environmental changes
E.g. Humidity, Temperature Changes, Stray electric and magnetic fields
3. Bent meter needles
4. Use of un-calibrated instruments
5. Drift in instrument characteristics
6. Poor cabling practices
Sources of random errors
1. Parallax errors, which arise when measurements are taken by human observation of an
analog meter.
2. Response time: the time taken by an instrument to show 63.2% of the change in reading in
response to a step input. This factor contributes to the uncertainty of measurements.
3. Noise: any signal that does not convey useful information. It is reduced by:



1. Filtering
2. Careful selection of components
3. Shielding and isolation of measuring system
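The 63.2% figure in the response-time definition above comes from first-order system behaviour: after one time constant, a first-order instrument has indicated 1 - e⁻¹ ≈ 63.2% of a step change. A quick check (the 2 s time constant is an arbitrary assumption):

```python
import math

def first_order_response(t, tau):
    """Fraction of a step input indicated by a first-order system at time t."""
    return 1.0 - math.exp(-t / tau)

tau = 2.0   # hypothetical time constant, in seconds
# After one time constant the indicated change is about 63.2 %:
print(round(first_order_response(tau, tau) * 100, 1))   # 63.2
```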

Sources of Instrumental Errors

1. Mechanical structure.
2. Misuse of instruments.
3. Loading effects: improper way of using the instrument.
4. Design limitations: in the design of an instrument, factors such as friction and resolving
power lead to uncertainty of measurement.
5. Energy exchanged by interaction: when the energy required to operate the measuring
system is extracted from the measurand, its value is altered. This alteration depends on
the capacity of the system.
General Sources of Errors

1. Transmission: during transmission of information from the primary sensor to the indicator,
the signal may be degraded for the following reasons:

It may suffer loss through leakage

It may be absorbed in the communication channel
It may be distorted by resonance, attenuation or delay

2. Deterioration of measuring system :

Eg: change in the resistance of a circuit due to strain
3. Maintenance: if maintenance is not done properly, errors occur.
4. Insufficient knowledge of process parameters and design conditions.


Units: A unit of measurement is a definite magnitude of a physical quantity, defined and adopted
by convention and/or by law, that is used as a standard for measurement of the same physical
quantity. For example, length is a physical quantity. The metre is a unit of length that represents
a definite predetermined length. When we say 10 metres (or 10 m), we actually mean 10 times
the definite predetermined length called "metre".

Units are classified into two types: fundamental units and derived units.

Fundamental units are the basic standard units. Eg: kg, m, s

Derived units are those derived from the fundamental units. Eg: m/s



Dimensions

Every quantity has a quality that distinguishes it from all other quantities.

This unique quality is called its dimension.

Eg: Velocity = displacement/time = [L][T⁻¹]


Standards

Measurement is a process of comparison, and the basis of comparison is standardized units.

They must be precisely defined.

The standards of measurements are classified as :

1. Primary

2. Secondary

3. Working



1. Primary Standards: the highest standard of either a base unit or a derived unit is called a
primary standard.

These standards are copies of the international prototypes and are kept throughout the world in
national standards laboratories.

They constitute the ultimate basis of reference and are used for the verification and calibration
of secondary standards.

They are quite invariable, have the highest possible accuracy, and are not available for use
outside the national laboratories.

2. Secondary standards: reference calibrated standards designed and calibrated from the primary
standards.
They are periodically sent to the national laboratories for calibration.
They are kept by measurement laboratories and industrial organizations to check and calibrate
general tools for accuracy and precision.

3. Working standards:

Have an accuracy one order lower than that of the secondary standards.

These are the normal standards used by workers and technicians who actually carry out the
measurements.



Instrument calibration

Calibration consists of comparing the output of the instrument or sensor under test against the
output of an instrument of known accuracy when the same input (the measured quantity) is applied
to both instruments. This procedure is carried out for a range of inputs covering the whole
measurement range of the instrument or sensor.
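A sketch of this procedure in code, with hypothetical readings taken at five points across the range; the correction at each point is the true value minus the measured value, as defined earlier in the notes:

```python
def calibrate(test_readings, reference_readings):
    """Compare an instrument under test against a reference of known
    accuracy at each point of the range; return (true value, correction)."""
    return [(ref, ref - test)            # correction Cs = Vt - Vm
            for test, ref in zip(test_readings, reference_readings)]

# Hypothetical readings at five points covering the whole range:
reference  = [0.0, 25.0, 50.0, 75.0, 100.0]   # instrument of known accuracy
under_test = [0.2, 25.4, 50.1, 74.6, 99.5]    # instrument being calibrated

for ref, correction in calibrate(under_test, reference):
    print(f"at {ref}: correction {correction:+.2f}")
```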

Need for Instrument Calibration

1. Instrument calibration has to be repeated at prescribed intervals because the characteristics of
any instrument change over a period of time.

2. Changes in instrument characteristics are brought about by factors such as mechanical wear, and
the effects of dirt, dust, fumes, chemicals and temperature changes in the operating environment.

3. To a great extent, the magnitude of the drift in characteristics depends on the amount of use an
instrument receives and hence on the amount of wear and the length of time that it is subjected to
the operating environment.

4. Some drift also occurs even in storage, as a result of ageing effects in components within the
instrument.
Calibration chain and traceability



Working Standards [Company Instrument Laboratory]

The calibration facilities provided within the instrumentation department of a company provide
the first link in the calibration chain. Instruments used for calibration at this level are known as
working standards. However, over the longer term, the characteristics of even such standard
instruments will drift, mainly due to ageing effects in their components. Therefore, over this
longer term, a programme must be instituted for calibrating working standard instruments at
appropriate intervals of time against instruments of yet higher accuracy.

Secondary Reference Standard [Standards Laboratory]

The instrument used for calibrating working standard instruments is known as a secondary
reference standard. This must obviously be a very well-engineered instrument that gives high
accuracy and is stabilized against drift in its performance with time.

This implies that it will be an expensive instrument to buy. It also requires that the
environmental conditions in which it is used be carefully controlled in respect of ambient
temperature, humidity, etc.

Primary Reference Standard [National Standard Organization]

This describes the highest level of accuracy that is achievable in the measurement of any
particular physical quantity. When a working standard instrument has been calibrated by an
authorized standards laboratory, a calibration certificate is issued, which will contain at least
the following information:
1. The identification of the equipment calibrated

2. The calibration results obtained
3. The measurement uncertainty
4. Any use limitations on the equipment calibrated
5. The date of calibration
6. The authority under which the certificate is issued.

For dynamic characteristics, error analysis and problems, refer to the notebook.
