
MECHANICAL MEASUREMENTS AND METROLOGY

Course code: 15ME1142

Unit 1 : Introduction to Measurements

Class: VII Semester - Sec 1

Faculty: Dr. Subash Gokanakonda


Definitions and Basic principles of Measurement Systems

• The word measurement is used to tell us the length, weight, temperature, color, or a change in one of these physical properties of a material.

• Measurement provides us with means for describing various physical and chemical parameters of materials in quantitative
terms.

• It is a process of comparing the input signal (unknown magnitude) with a pre-defined standard and giving out the result.

• The operation of all machines has to be controlled (manually or automatically), and measuring the variables concerned is the first
requirement for control.

• Two basic requirements must be met to obtain good results from a measurement:
✓ The standard used for comparison must be accurately known and internationally accepted.
✓ The apparatus and the experimental procedure adopted for the comparison must be proven (validated).

Example of Measurement (Automobiles)


1. Speed of travel: measured for the safety of the occupants (Speedometer)
2. Distance travelled: measured to know the distance (km) covered by the vehicle (Odometer)
3. Fuel level: This is measured to know the quantity of fuel available in the fuel tank of the vehicle
4. Engine temperature: This is measured to know the temperature of the engine for safety.
INSTRUMENTATION (Measurement system)

• Definition
➢ The technology of using instruments (measurement systems) to measure and control physical and
chemical properties of materials
or
➢ It is a branch of engineering science that deals with measurement techniques, the measuring devices used,
and the problems associated with those techniques.

Uses of Instrumentation

• To study the functioning of different components and determine the causes of malfunctioning of the system
• To test a product or material for quality control
• To detect defective components
• To develop new theories
• To monitor data in the interest of health and safety
Methods of Measurement
1. Direct and Indirect measurement
2. Primary, Secondary and tertiary measurement
3. Contact and non-contact type of measurement

1. Direct Comparison Method:


• The measurand (unknown quantity) is directly compared with the standard.
• The result of measurement is a number and a unit.
• Used for measuring physical quantities such as time, mass, length etc.
• When the measurand is very small, a human observer cannot apply this method with accuracy and precision; this is its
major constraint.

2. Indirect Comparison Method:


• These measurement systems (instrumentation) contain a transducer element.
• The transducer converts the measurand from one form to another (an analogous signal) without changing the information
content.
• The analogous signal is then processed and sent to the end devices, which present the result of the measurement.
• Thus the input signal is converted to some other form and then compared with the standard.
Input, Output Configuration of a Measuring Instrument

An instrument performs an operation on an input quantity to provide an output called measurement.

Input – i Output – o Operational transfer function - G

Types of inputs

1. Desired input: a quantity that the instrument is specifically intended to measure. (o_D = G_D × i_D)

2. Interfering input: a quantity to which the instrument is unintentionally sensitive. (o_I = G_I × i_I)

3. Modifying input: a quantity that modifies the input-output relationship for both the desired and interfering inputs.
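The combined effect of these inputs can be visualised with a minimal Python sketch; all gains and input values below are assumed purely for illustration and are not taken from the course material. The output is the superposition of the desired and interfering contributions, while the modifying input shifts both transfer functions.

```python
# Minimal sketch (assumed linear model) of the input-output configuration.
def instrument_output(i_D, i_I, i_M, G_D=2.0, G_I=0.1, k_M=0.01):
    """Return o = G_D' * i_D + G_I' * i_I, where both gains are shifted
    by the modifying input i_M (all values illustrative only)."""
    G_D_mod = G_D * (1 + k_M * i_M)   # modifying input alters the desired-input gain
    G_I_mod = G_I * (1 + k_M * i_M)   # ...and the interfering-input gain
    return G_D_mod * i_D + G_I_mod * i_I

# Example: a gauge reading (desired input) disturbed by vibration (interfering
# input) while ambient temperature (modifying input) shifts the gains.
print(instrument_output(i_D=50.0, i_I=2.0, i_M=10.0))
```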
Classification of Instruments
1. Automatic and Manual Instruments:
Example: Hg-in-glass thermometer – Automatic instrument
Resistance thermometer – Manual instrument, since it incorporates a manually balanced Wheatstone bridge in its circuit

2. Self-generating (Active) and Power-operated instruments (Passive)


Active: The output power is derived entirely from the input signal (the measurand).
Examples: Hg-in-glass thermometer, dial indicator, Bourdon gauge, Pitot tube, etc.
Passive: These instruments require an auxiliary source of power such as compressed air, electricity or a hydraulic
supply for their operation, and are hence externally powered (passive) instruments. The input signal supplies only
an insignificant portion of the output power.
Examples: LVDT – displacement, force, pressure
Photoconductive transducer – converts light intensity into a change in resistance
Strain-gauge load cell using a Wheatstone bridge
Resistance thermometers and thermistors
3. Self-contained and Remote indicating instruments
4. Deflection and Null-output instruments
Deflection type: spring balance, dial indicators
Null type: pan balance, Wheatstone bridge circuit
Classification of Instruments
5. Analog and Digital instruments:
Analog: The signals of an analog unit vary in a continuous fashion and can take on an infinite number of values in a
given range.
Examples: Fuel gauge, ammeter, voltmeter, wristwatch, speedometer
Digital: Signals that vary in discrete steps and can take only a finite number of different values in a given range. A
digital instrument converts the measured analog value into a digital quantity which is displayed numerically; the
output changes by one digit (pulse or step) for every successive increment of the input.
Examples: Digital multimeter, digital speedometer.
6. Absolute and Secondary instruments:
Absolute instruments measure the process variable directly from the process without any conversion; they do not
require comparison with any standard.
Secondary instruments have to be calibrated against a standard (absolute) instrument.
7. Mechanical, Electrical and Electronic instruments:
Mechanical: Simple in construction and design; no external power source required. Remote indication is not
possible; the moving parts cause noise and the readings are not very accurate.
Electrical: Give a more rapid indication; involve conversion of an electrical quantity into mechanical movement of a pointer on a scale.
Electronic: Respond quickly to dynamic and transient conditions. Light in weight, compact, consume less power and are
highly reliable. They have high sensitivity and flexibility, remote indication is possible, and non-contacting
measurement is also possible.
8. Contacting and Non-contacting type instruments:
Performance Characteristics of a Measuring Instrument

• Characteristics: The quantitative representation of the performance of an instrument.

➢ Static Characteristics

➢ Dynamic Characteristics

Static Characteristics
• These pertain to the measurement of quantities that are constant or that vary only slowly with time.
• Example: Temperature of a furnace
• In contrast, the measurand may be subjected to sudden changes or steps (a thermometer thrust into a hot liquid), or the
signal may fluctuate rapidly; such cases call for the dynamic characteristics described below.

Dynamic Characteristics
• These pertain to a system where the quantities to be measured are time-varying process variables
• They quantify the dynamic (time-dependent) relation between the input and output of an instrument
Static Characteristics
• Range and Span
• Accuracy, Error and Correction
• Calibration
• Repeatability
• Reproducibility
• Precision
• Sensitivity
• Threshold
• Resolution
• Drift
• Hysteresis and Dead Zone

1. Range and Span


• Range: the region between the limits within which an instrument is designed to operate for measuring, indicating or recording a
physical quantity
• Span: the algebraic difference between the upper and lower range limits of the instrument
Example:
• Range: -10 °C to 80 °C ; Span = 90 °C
• Range: 5 bar to 100 bar ; Span = 95 bar
• Range: 0 volts to 75 volts ; Span = 75 volts
Static Characteristics
2. Accuracy, Error and Correction
• Accuracy of an indicated (measured) value is defined as its closeness to the accepted standard value
(true value)

• The difference between measured value (Vm) and true value (Vt) of the quantity is expressed as
instrument error
(ES = Vm – Vt) : ‘Static Error or Absolute Error of Measurement’

• Static correction is defined as the difference between the true value and the measured value of the
quantity. CS = Vt – Vm

• The correction of the instrument reading is of the same magnitude as error but opposite sign
(CS = - ES)
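As a small worked example (the readings below are assumed purely for illustration), the static error and correction follow directly from these definitions:

```python
# Worked example of static error and correction (readings assumed for illustration).
V_t = 100.0   # true value of the quantity
V_m = 101.5   # value indicated by the instrument

E_s = V_m - V_t   # static (absolute) error
C_s = V_t - V_m   # static correction, equal in magnitude and opposite in sign

print(f"Static error  E_s = {E_s:+.2f}")   # +1.50 -> instrument reads high (positive error)
print(f"Correction    C_s = {C_s:+.2f}")   # -1.50 -> to be added to the reading
```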
Static Characteristics
Error Specification

Point Accuracy: The accuracy of an instrument is stated for one or more points in its range
Example: A thermometer may be specified to read within ±0.5 °C between 100 °C and 200 °C

Percentage of true value or the Relative Error: The absolute error of measurement is expressed as a % of true value of the
unknown quantity
Error = (Measured value − True value) / True value × 100 (%)

The percentage specified in this way is the maximum error for any point in the range of the instrument.

For a fixed percentage of true value, the permissible absolute error diminishes as the true value drops.

Percentage of full scale deflection: The error is calculated on the basis of the maximum value of the scale
Error = (Measured value − True value) / Maximum scale value × 100 (%)
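A short sketch contrasting the two specifications; the gauge range and readings are assumed for illustration:

```python
# Relative error vs. error as a percentage of full-scale deflection.
def percent_of_true_value(measured, true):
    return (measured - true) / true * 100.0

def percent_of_full_scale(measured, true, max_scale):
    return (measured - true) / max_scale * 100.0

# A 0-150 bar gauge indicating 52 bar when the true pressure is 50 bar:
print(percent_of_true_value(52.0, 50.0))          # 4.0 % of true value
print(percent_of_full_scale(52.0, 50.0, 150.0))   # ~1.33 % of full-scale deflection
```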
Static Characteristics

Calibration
• The magnitude of the error, and the corresponding correction to be made, is determined by periodic comparison of the
instrument with standards that are known to remain constant.
• The procedure of making, adjusting or checking a scale so that the readings of the instrument conform to an accepted
standard is called Calibration.
• The graphical representation of the calibration record is called the Calibration Curve.
• This curve relates standard values of measurand to actual values of output throughout the operating range of the
instrument.
• The comparison may be made with
✓ A primary standard
✓ Secondary standard of accuracy greater than the instrument to be calibrated
✓ A known input source
• Requirements for better calibration of an instrument:
✓ It is to be carried out with the instrument in the same position and subjected to same temperature and other
ambient conditions as those under regular operation
✓ It is calibrated with values of measurand impressed both in the increasing and in the decreasing order. The
results are expressed graphically (output – ordinate and measurand – abscissa)
✓ Output readings for series of impressed values going up the scale may not agree with the output readings for the
same input values when going down.
✓ The curves plotted for increasing and decreasing inputs may therefore not coincide, and together they may form a closed loop (see Hysteresis below); a small numerical sketch of constructing a calibration line follows.
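A sketch of fitting a calibration line to ascending and descending readings; the data are assumed for illustration, and numpy's least-squares polyfit is used as the best-fit straight line:

```python
import numpy as np

# Assumed calibration data: standard (impressed) values of the measurand and the
# outputs recorded while going up the scale and then down the scale.
standard  = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
output_up = np.array([0.1, 24.6, 49.5, 74.8, 100.2])
output_dn = np.array([0.4, 25.3, 50.4, 75.5, 100.2])

# Best-fit straight line through all calibration points (the calibration curve).
inputs   = np.concatenate([standard, standard])
readings = np.concatenate([output_up, output_dn])
slope, intercept = np.polyfit(inputs, readings, 1)
print(f"calibration line: output = {slope:.4f} * input + {intercept:.4f}")
```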
Static Characteristics
Hysteresis:
• The magnitude of output for a given input depends upon the direction of the change in input (ascending or descending).
• This dependence of output upon previous inputs is called Hysteresis.
• It represents non-coincidence of loading and un-loading curves.
• It is the maximum difference, for the same measured quantity, between the up-scale and down-scale readings during a full
traversal of the range in each direction (a small numerical sketch follows this list).
• Hysteresis results from the presence of irreversible phenomena such as
➢ mechanical friction,
➢ slack motion in bearings and gears,
➢ elastic deformation,
➢ magnetic and thermal effects
• It can occur in electronic systems due to heating and cooling effects which occur differentially under conditions of rising
and falling input.
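A minimal sketch of this definition, using assumed up-scale and down-scale readings rather than actual instrument data:

```python
import numpy as np

# Assumed up-scale and down-scale readings taken at the same standard inputs.
output_up  = np.array([0.1, 24.6, 49.5, 74.8, 100.2])
output_dn  = np.array([0.4, 25.3, 50.4, 75.5, 100.2])
full_scale = 100.0

# Hysteresis: maximum difference between up-scale and down-scale readings,
# often quoted as a percentage of full-scale deflection.
hysteresis = np.max(np.abs(output_dn - output_up))
print(f"hysteresis = {hysteresis:.2f} units ({hysteresis / full_scale * 100:.2f} % FSD)")
```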

Dead Zone
• It is the largest range through which an input signal can be
varied without initiating any response from the indicating
instrument
• Friction or play is the main cause
Static Characteristics

Drift
• It is an undesirable gradual departure of the instrument output over a period of time that is unrelated to changes in the
input, operating conditions or load.
• If an instrument reproduces the same readings at different times for the same value of the measured variable, it shows no drift.
• Zero drift is said to set in when there is a gradual shift of the entire calibration, due to slippage, permanent set or
undue warming up of electronic tube circuits.
• Span drift or sensitivity drift – a proportional change in the indication all along the upward scale.
• Zonal drift – drift that occurs only over a portion of the span of the instrument (a numerical sketch of zero and span drift follows the list of causes below).
• Causes of Drift
➢ Wear and tear at the mating parts
➢ Mechanical vibrations
➢ Contamination of the primary sensing elements
➢ Development of high mechanical stresses in some parts
➢ Temperature changes, stray electric and magnetic fields
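A small sketch of how zero drift and span (sensitivity) drift could be quantified from two calibrations of the same instrument taken some time apart; all values below are assumed for illustration:

```python
import numpy as np

# Assumed calibration outputs at the same inputs, recorded on two occasions.
inputs      = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
initial_cal = np.array([0.0, 25.1, 50.0, 74.9, 100.0])
later_cal   = np.array([1.0, 26.3, 51.5, 76.8, 102.0])

zero_drift = later_cal[0] - initial_cal[0]                  # shift of the whole curve at zero input
slope_old, _ = np.polyfit(inputs, initial_cal, 1)
slope_new, _ = np.polyfit(inputs, later_cal, 1)
span_drift = (slope_new - slope_old) / slope_old * 100.0    # proportional change all along the scale

print(f"zero drift = {zero_drift:+.2f} units, span drift = {span_drift:+.2f} %")
```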
Static Characteristics

Sensitivity
• It is the ratio of the magnitude of the response (output signal) to the magnitude of the quantity being measured (input
signal)
Static sensitivity, k = Change of output signal / Change of input signal
• It is represented by the slope of the calibration curve if the ordinates (Y axis values) are expressed in their actual values.
• For linear calibration curve – sensitivity is constant
• For a non-linear calibration curve – sensitivity is not constant and must be specified in terms of input value
• Its magnitude and units depend on the instrument and on the input and output quantities involved; a worked slope calculation is sketched below.
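A minimal sketch of obtaining the static sensitivity as the slope of a (nearly linear) calibration curve; the thermocouple-style data below are assumed for illustration:

```python
import numpy as np

# Assumed calibration points: input in deg C, output in mV.
temp_C = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
emf_mV = np.array([0.0, 2.0,  4.1,   6.0,   8.1])

# Static sensitivity = slope of the calibration curve (least-squares straight line).
k, _ = np.polyfit(temp_C, emf_mV, 1)
print(f"static sensitivity k = {k:.4f} mV/degC")
```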

Threshold and Resolution


• The smallest increment of quantity being measured which can be detected with certainty by an instrument represents
the threshold and resolution of the instrument
• When the input is gradually increased from zero, there will be some minimum value input before which the instrument
will not detect any output change.
• Threshold defines the minimum value of input which is necessary to cause detectable change from zero output.
• When the input signal is increased from a non-zero value, the instrument output does not change until a certain input
increment is exceeded – Resolution / discrimination
• Resolution defines the smallest change of input for which there will be a change of output.
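A toy model illustrating the distinction; the threshold and resolution values are assumed and do not describe any particular instrument:

```python
# Toy model of a digital indicator: it shows nothing until the input exceeds a
# threshold, and then changes only in steps equal to its resolution.
THRESHOLD  = 0.05   # minimum input needed to produce any output from zero
RESOLUTION = 0.10   # smallest change of input that changes the displayed output

def displayed_value(x):
    if abs(x) < THRESHOLD:
        return 0.0                                 # below threshold: no detectable output
    return round(x / RESOLUTION) * RESOLUTION      # output changes in resolution-sized steps

for x in (0.02, 0.06, 0.14, 0.16, 0.26):
    print(f"input {x:.2f} -> displayed {displayed_value(x):.2f}")
```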
Static Characteristics

Precision and Repeatability


• Repeatability refers to the closeness of agreement among several measurements of the same true value, made with the same
instrument, by the same operator, over a short time span.
• Precision refers to the degree of agreement within a group of measurements i.e., it prescribes the ability of the instrument
to reproduce its readings over and over again for a constant input signal.
• Stability refers to the reproducibility of the mean reading of an instrument, repeated on different occasions separated by
intervals of time which are long compared with the time of taking a reading.
• Reproducibility is the closeness between measurements of the same quantity when the individual measurements are
made under different conditions
• Constancy refers to the reproducibility of the mean reading of an instrument when a constant input is presented
continuously and the conditions of the test are allowed to vary within specified limits

Linearity
• Over their working range, most instruments are designed to provide a linear relationship between output and input.
• Linearity is the ability of an instrument to reproduce the input characteristic symmetrically, i.e., to follow a straight-line relationship.
• The closeness of the calibration curve to a specified straight line is the linearity of the instrument.
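One common way to express this numerically is the maximum deviation of the calibration curve from a best-fit straight line, stated as a percentage of full scale; the readings below are assumed for illustration:

```python
import numpy as np

# Assumed calibration readings over a 0-100 unit full-scale range.
inputs     = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
outputs    = np.array([0.2, 24.5, 50.8, 75.6, 99.8])
full_scale = 100.0

slope, intercept = np.polyfit(inputs, outputs, 1)          # best-fit straight line
deviation = outputs - (slope * inputs + intercept)         # departure from that line
non_linearity = np.max(np.abs(deviation)) / full_scale * 100.0
print(f"non-linearity = {non_linearity:.2f} % of full scale")
```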
Static Characteristics

Tolerance: The range of inaccuracy which can be tolerated in a measurement; it is the maximum permissible error.

Readability and Least count


• Readability indicates the closeness with which the scale of the instrument may be read.
• Least count represents the smallest difference that can be detected on the instrument scale.

Backlash: The maximum distance or angle through which any part of a mechanical system may be moved in one direction
without applying appreciable force or motion to the next part of the system.

Zero stability: A measure of the ability of the instrument to return to a zero reading after the measurand has returned to
zero and other variations such as temperature, pressure, humidity, vibration etc. have been removed.

Stiction (static friction): The force or torque that is necessary just to initiate motion from rest.
Static Characteristics

Consider three instruments X, Y and Z, measuring the true value of 10mm.

Instrument   Measured values (mm)                                           Result
X            9.91, 9.92, 9.91, 9.94, 9.93, 9.91, 9.95, 9.92, 9.92, 9.93     Accurate, high precision
Y            9.11, 9.12, 9.11, 9.13, 9.14, 9.12, 9.11, 9.13, 9.14, 9.15     Low accuracy, high precision
Z            9.3, 9.2, 9.1, 9.7, 9.05, 9.5, 9.4, 9.35, 9.6, 9.55            Low accuracy, low precision

[Plot: readings of instruments X, Y and Z against reading number (1 to 10), with the true value of 10 mm shown as a reference line; vertical scale 9 to 11 mm.]
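A quick numerical check of the table, treating the mean error as a measure of accuracy and the standard deviation of the readings as a measure of precision (a simplified sketch, not a formal uncertainty analysis):

```python
import statistics

# Readings of the three instruments from the table above; true value = 10 mm.
true_value = 10.0
readings = {
    "X": [9.91, 9.92, 9.91, 9.94, 9.93, 9.91, 9.95, 9.92, 9.92, 9.93],
    "Y": [9.11, 9.12, 9.11, 9.13, 9.14, 9.12, 9.11, 9.13, 9.14, 9.15],
    "Z": [9.3, 9.2, 9.1, 9.7, 9.05, 9.5, 9.4, 9.35, 9.6, 9.55],
}

for name, vals in readings.items():
    bias   = statistics.mean(vals) - true_value   # closeness to the true value -> accuracy
    spread = statistics.stdev(vals)               # scatter of the readings -> precision
    print(f"Instrument {name}: mean error = {bias:+.3f} mm, std dev = {spread:.3f} mm")
```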
Dynamic Characteristics

1. Speed of response and measuring lag


• The speed of response or responsiveness is defined as the rapidity with which an instrument responds to a change in the
value of the quantity being measured.
• Measuring lag refers to retardation or delay in the response of an instrument to a change in the input signal.
2. Fidelity and Dynamic error
• Fidelity of an instrumentation system is defined as the degree of closeness with which the system indicates or records
the signal which is impressed upon it.
• It refers to the ability of the system to reproduce the output in the same form as the input.
• The difference between the indicated quantity and the true value of the time varying quantity is the dynamic error.

3. Overshoot
• Because of mass and inertia, a moving part of the instrument does not immediately come to rest in the final deflected
position. The pointer goes beyond the steady state i.e., it overshoots
• The overshoot is defined as the maximum amount by which the pointer moves beyond the steady state.
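For example (assumed pointer readings), the overshoot and percentage overshoot follow directly from this definition:

```python
# Overshoot from an assumed pointer response: peak deflection vs. steady-state deflection.
steady_state = 80.0   # final resting position of the pointer (scale units)
peak_reading = 86.0   # maximum position reached before settling

overshoot = peak_reading - steady_state
percent_overshoot = overshoot / steady_state * 100.0
print(f"overshoot = {overshoot:.1f} units ({percent_overshoot:.1f} % of steady state)")  # 6.0 units, 7.5 %
```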

4. Dead time and Dead zone


• Dead time is defined as the time required for an instrument to begin to respond to a change in the measured quantity.
• It represents the time before the instrument begins to respond after the measured quantity has been altered.
• Dead zone is the result of friction, backlash or hysteresis in the instrument.
Errors

• Error: Difference between the measured value (Vm) and the true value (Vt) of a physical quantity.
• The accuracy of an instrument is measured in terms of error
Static Error, Es = Vm – Vt
• If the instrument reads higher than true value – Positive Error
If the instrument reads lower than the true value – Negative Error

Sources of Errors

• Calibration of instrument
• Instrument reproducibility
• Measuring arrangement
• Work piece
• Environmental conditions
• Observer’s skill
Errors - Classification

1. Systematic or Fixed Errors


a) Calibration Errors
b) Human Errors (Observational and Operational)
c) Loading Errors (Systematic interaction errors)
d) Errors of technique
e) Experimental errors
2. Random or Accidental Errors
a) Error of judgement
b) Variation of condition
3. Illegitimate Errors
a) Blunder or mistakes
b) Computational errors
c) Chaotic errors
Errors - Classification

1. Systematic Errors: They occur due to the use of improper procedures or conditions. They are consistent in action, i.e., they
have the same magnitude and sign every time.
a) Calibration Errors: Calibration is the process of giving a known input to the instrument and taking the necessary actions
to see that the output matches the input. Calibration errors are fixed errors caused by improper calibration of the instrument.
b) Human Errors: Observational and operational errors. Observational errors are due to improper observations made by the
user of the instrument; operational errors occur due to improper use of the instrument.
c) Loading Errors: The instrument always takes some energy from the measurand, and because of this the signal source is altered
by the very act of measurement. This effect is called loading. Since the measurand loses energy, such errors are
introduced.
Examples: Thermometer – when a thermometer is introduced, it alters the thermal capacity of the system and
heat leakage takes place.
Reading of a hand tachometer will vary depending on the pressure with which it is pressed on the shaft
d) Errors of Technique: Arise when the correct technique for executing an operation is not used.
e) Experimental Errors: They are due to faults in the equipment; accuracy is affected by limitations in its
design and construction. Examples: improper assembly of components, selection of improper materials.
Errors - Classification

2. Random Errors: Errors whose magnitude and sign vary from reading to reading, resulting in a lack of consistency in the measured value for the
same input. They are caused by defects in the elements of the instrument.

a) Judgement Errors: They are made while making judgement of the results.

b) Variation of condition: Due to differences between the conditions prevailing at the place of manufacture and calibration of
the instrument and those at the place of use – pressure, temperature, humidity etc. may differ.

➢ If a Hg-in-glass thermometer is located at a place where the air pressure is high, the air pressure acts on the walls
of the thermometer causing the mercury level to rise for no change in temperature

➢ A bourdon-tube pressure gauge has a link-sector-pinion arrangement. The link may expand if the ambient
temperature increases, causing an error.

✓ The instrument has to be calibrated at the place of use.

✓ Atmospheric temperatures are to be monitored

✓ The instrument is to be used in conditions as prescribed by the manufacturer


Errors - Classification

3. Illegitimate Errors: These errors are due to blunders on the part of the person using the instrument. They may also be
due to a faulty instrument, faulty adjustment or improper use of the instrument.

a) Blunders or Mistakes: Human mistakes while adopting the right procedure.

b) Computational Errors: Due to mistakes in computation of results by the human user.

c) Chaotic Errors:

• Errors induced by random disturbances such as vibration, noise, shocks etc. of sufficient magnitude to affect the measurand.

• These may also result from loss of the input signal during transmission.
