
CONCEPTS OF MEASURING SYSTEMS

Units

The result of a measurement of a physical quantity must be defined both in kind and magnitude. The standard measure of each kind of physical quantity is called a unit.

Magnitude of a physical quantity = (Numerical ratio) × (Unit)

Absolute Units

An absolute system of units is defined as a system in which the various units are all
expressed in terms of a small number of fundamental units.

Fundamental and Derived Units

In science and technology, two kinds of units are used:

(i) Fundamental units (ii) Derived units

The fundamental units in mechanics are measures of length, mass and time. The sizes of the fundamental units, whether centimeter or meter or foot, gram or kilogram or pound, second or hour, are quite arbitrary and can be selected to fit a certain set of circumstances. Since length, mass and time are fundamental to most other physical quantities besides those in mechanics, they are called the Primary Fundamental Units.

Measures of certain physical quantities in the thermal, electrical and illumination fields are also represented by fundamental units. These units are used only where these particular disciplines are involved, and therefore they are called Auxiliary Fundamental Units.

All other units which can be expressed in terms of fundamental units with the help of physical equations are called Derived Units. Every derived unit originates from some physical law or equation which defines that unit.

The volume V of a room is equal to the product of its length (l), width (b) and height (h); therefore

V = l × b × h

If the meter is chosen as the unit of length, then the volume of a room of 6 m × 4 m × 5 m is 120 m3. The numerical measures (6 × 4 × 5 = 120) as well as the units (m × m × m = m3) are multiplied. The derived unit for volume is thus m3.
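As a quick illustration, the same arithmetic can be written as a minimal Python sketch; the function name room_volume is illustrative only and not part of the text.

def room_volume(length_m: float, width_m: float, height_m: float) -> float:
    """Volume in cubic meters: V = l * b * h."""
    return length_m * width_m * height_m

print(room_volume(6, 4, 5), "m3")  # 6 x 4 x 5 = 120, unit m x m x m = m3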

Some fundamental units

S. No   Name                 Unit       Symbol

1       Length               Metre      m
2       Mass                 Kilogram   kg
3       Time                 Second     s
4       Electric Current     Ampere     A
5       Temperature          Kelvin     K
6       Luminous Intensity   Candela    cd

Supplementary Units

S. No Name Unit Symbol


1 Plane angle radian rad
2 Solid angle steradian sr
Derived Units

S. No   Name                      Unit
1       Area                      m2
2       Volume                    m3
3       Density                   kg/m3
4       Angular velocity          rad/sec
5       Angular acceleration      rad/sec2
6       Pressure, Stress          N/m2
7       Energy                    Joule (N-m)
8       Charge                    Coulomb (A-sec)
9       Electric Field Strength   V/m
10      Capacitance               Farad (A-sec/V)
11      Frequency                 Hz
12      Velocity                  m/sec
13      Acceleration              m/sec2
14      Force                     Newton (kg-m/sec2)
15      Power                     Watt (J/sec)
16      EMF                       Volt (W/A)

Dimensions

Disregarding the problem of measurement and the concept of magnitude, it is evident that every quantity has a quality which distinguishes it from all other quantities. This unique quality is called the Dimension. A dimension is written in a characteristic notation, e.g. [L] for length, [T] for time, and so on.

A derived unit is always recognized by its Dimensions, which can be defined as the complete algebraic formula for the derived unit. Thus, when a quantity such as the area A of a rectangle is measured in terms of other quantities, i.e. length l and width b, the relationship is expressed mathematically as:

Area A = l x b

Since l and b each have the dimension of length, [L], the dimensions of area are

[A] = [L] × [L] = [L2]

If the meter (m) is the unit of length, then the square meter (m2) is the unit of area.

In mechanics, the three fundamental units are length, mass and time. Their dimensional symbols are: length [L], mass [M] and time [T].

Conversions
A Few Standard Conversions

1 ft = 30.48 cm = 12 inches
1 m = 3.28 ft
1 kg = 2.2 pounds
1 hp = 746 W
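These factors can be captured directly in code; a minimal Python sketch with illustrative constant and function names:

FT_TO_CM = 30.48   # 1 ft = 30.48 cm
FT_TO_IN = 12      # 1 ft = 12 inches
M_TO_FT = 3.28     # 1 m = 3.28 ft (approximate)
KG_TO_LB = 2.2     # 1 kg = 2.2 pounds (approximate)
HP_TO_W = 746      # 1 hp = 746 W

def meters_to_feet(meters: float) -> float:
    return meters * M_TO_FT

def hp_to_watts(hp: float) -> float:
    return hp * HP_TO_W

print(f"{meters_to_feet(10):.2f} ft")   # 32.80 ft
print(f"{hp_to_watts(0.5):.0f} W")      # 373 W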

Standards and their Classification

A standard is a physical representation of a unit of measurement. The term standard is applied to a piece of equipment having a known measure of a physical quantity. Standards are used to obtain the values of the physical properties of other equipment by comparison methods.

The classification of standards is based on the function and the application of the
standards.

(a) International Standards


International standards are defined on the basis of international agreement. They represent the units of measurement realized to the closest accuracy attainable with present-day technological and scientific methods. International standards are checked and evaluated regularly against absolute measurements in terms of the fundamental units. The international standards are maintained at the International Bureau of Weights and Measures and are not available to the ordinary user of measuring instruments for purposes of calibration or comparison.

(b) Primary Standards.

Primary standards are absolute standards of such high accuracy that they
can be used as the ultimate reference standards. These standards are maintained
by national standards laboratories in different parts of the world. The primary
standards, which represent the fundamental units and some of the derived
electrical and mechanical units, are independently calibrated by absolute
measurements at each of the national laboratories.

(c) Secondary Standards

The secondary standards are the basic reference standards used in industrial
measurement laboratories. The responsibility of maintenance and calibration of
these standards lies with the particular industry involved. These standards are
checked locally against reference standards available in the area. Secondary
standards are normally sent periodically to the national standards laboratories for
calibration and comparison against primary standards.

(d) Working Standards

The working standards are the major tools of a measurement laboratory. These standards are used to check and calibrate general laboratory instruments for their accuracy and performance. For example, a manufacturer of precision resistances may use a standard resistance (which may be a working standard) in the quality control department for checking the values of the resistors being manufactured. In this way, the manufacturer verifies that the measurement setup performs within the specified limits of accuracy.

Measurements and Measuring Systems

Measurements

The measurement of a given quantity is essentially an act or the result of comparison between the quantity and a predefined standard. Since two quantities are compared, the result is expressed in numerical values.

Methods of Measurement

The methods of measurement may be broadly classified into two categories:

(i) Direct methods (ii) Indirect methods

(i) Direct Method

In these methods, the unknown quantity (also called the measurand) is directly compared against a standard. The result is expressed as a number and a unit. Direct methods are quite common for the measurement of physical quantities like length, mass and time.

(ii) Indirect methods

Measurements by direct methods are not always possible, feasible or practicable. In most cases these methods are inaccurate because they involve human factors. They are also less sensitive. Hence direct methods are not preferred and are rarely used.

In engineering applications, measurement systems are used. These measurement systems employ indirect methods for measurement purposes. A measurement system consists of a transducing element which converts the quantity to be measured into an analogous form. The analogous signal is then processed by some intermediate means and fed to the end devices, which present the result of the measurement.

Instrument

An instrument may be defined as a device for determining the value or magnitude of a quantity or variable.

Basic Types of Measuring instruments are

(i) Mechanical measuring instruments


(ii) Electrical measuring instruments
(iii) Electronic measuring instruments

Classification of Instruments

There are many ways in which instruments can be classified. Broadly, instruments are classified into two categories:

(i) Absolute Instruments


(ii) Secondary Instruments

(i) Absolute Instruments

These instruments give the magnitude of the quantity under measurement in terms of the physical constants of the instrument. Examples of this class of instruments are the tangent galvanometer and Rayleigh's current balance.

(ii) Secondary Instruments

These instruments are so constructed that the quantity being measured can
only be measured by observing the output indicated by the instrument. These
instruments are calibrated by comparison with an absolute instrument or another
secondary instrument which has already been calibrated against an absolute
instrument. A voltmeter, a glass thermometer, and a pressure gauge are typical
examples of secondary instruments.

Functions of Instruments and Measurement Systems

Instruments may be classified based on their function. The three main functions are: (i) Indicating Function, (ii) Recording Function, and (iii) Controlling Function.

(i) Indicating Function


These instruments provide information regarding the variable quantity under measurement, and most of the time this information is provided by the deflection of a pointer. This kind of function is known as the indicating function of the instrument.

(ii) Recording Function


These instruments usually record the output continuously on paper. This type of function is known as the recording function of the instrument.

(iii) Controlling Function


This function is widely used in the industrial world. Here the instruments are used to control processes.

Characteristics of Instruments and Measurement Systems

The performance characteristics of an instrument are mainly divided into two categories:
1. Static characteristics
2. Dynamic characteristics

 The set of criteria defined for instruments which are used to measure quantities that vary slowly with time or are almost constant, i.e., do not vary with time, is called the Static Characteristics.
 When the quantity under measurement changes rapidly with time, the relations between input and output are generally expressed with the help of differential equations; these are called the Dynamic Characteristics.

Static Characteristics

Desirable: Accuracy, Precision, Resolution, Threshold, Sensitivity, Repeatability
Undesirable: Drift, Hysteresis, Dead Zone, Static Error

Dynamic Characteristics

Desirable: Speed of Response, Fidelity
Undesirable: Lag, Dynamic Error

Calibration
 The various performance characteristics are obtained in one form or another by a process called Calibration.
 Calibration is the process of making an adjustment or marking a scale so that the readings of an instrument agree with the accepted and certified standard.

Some important definitions are

1. Static Error: It is the difference between the measured value and the true value of the quantity. Mathematically,

δA = Am − At ----------- eq (1.1)

where δA = absolute error or static error, Am = measured value of the quantity, and At = true value of the quantity.

2. Static Correction: It is the difference between the true value and the measured value of the quantity. Mathematically,

δC = −δA = At − Am

Relative error (also called limiting error):

εr = δA/At = (Am − At)/At

Percentage relative error:

% εr = (δA/At) × 100

From the relative error, accuracy is expressed as

A = 1 − |εr|

where A is the relative accuracy, and a = A × 100% is the percentage accuracy.
 Error can also be expressed as a percentage of the full scale deflection (F.S.D.) as

% error = [(Am − At)/F.S.D.] × 100

Example: The expected value of the voltage to be measured is 150 V. However, the measurement gives a value of 149 V. Calculate (i) the absolute error, (ii) the percentage error, (iii) the relative accuracy, (iv) the percentage accuracy, and (v) the error expressed as a percentage of full scale reading if the scale range is 0 – 200 V.

Solution: The expected value implies the true value.

At = 150 V
Am = 149 V
(i) Absolute error δA = Am − At = −1 V
(ii) % εr = [(Am − At)/At] × 100 = (−1/150) × 100 = −0.67%
(iii) A = 1 − |εr| = 1 − |−1/150| = 0.9933
(iv) % a = A × 100 = 99.33%
(v) Error as % of F.S.D. = [(Am − At)/F.S.D.] × 100 = (−1/200) × 100 = −0.5%
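The same calculations can be checked with a minimal Python sketch of the relations above; the function name error_summary is illustrative only.

def error_summary(measured: float, true: float, fsd: float) -> dict:
    abs_err = measured - true                      # dA = Am - At
    rel_err = abs_err / true                       # er = dA / At
    return {
        "absolute error (V)": abs_err,
        "% relative error": rel_err * 100,
        "relative accuracy": 1 - abs(rel_err),       # A = 1 - |er|
        "% accuracy": (1 - abs(rel_err)) * 100,      # a = A x 100
        "% of F.S.D.": abs_err / fsd * 100,          # (Am - At)/FSD x 100
    }

for name, value in error_summary(149, 150, 200).items():
    print(f"{name}: {value:+.4f}")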


Example:
A voltage has a true value of 1.50 V. An analog indicating instrument with a scale range of 0 – 2.50 V indicates a voltage of 1.46 V. What are the values of the absolute error and the correction? Express the error as a fraction of the true value and of the full scale deflection (f.s.d.).

Solution:
Absolute error δA = Am − At = 1.46 − 1.50 = −0.04 V
Absolute correction δC = −δA = +0.04 V
Relative error (as a percentage of true value) εr = (δA/At) × 100 = (−0.04/1.50) × 100 = −2.67%
Error expressed as a percentage of F.S.D. = (−0.04/2.5) × 100 = −1.60%
where F.S.D. is the full scale deflection.

Example: A meter reads 127.50 V and the true value of the voltage is 127.43 V. Determine (a) the static error and (b) the static correction for this instrument.

Solution:
From Eqn. 1.1, the error is
δA = Am − At= 127.50 – 127.43 = + 0.07 V
Static Correction δC = − δA = − 0.07 V

Example: A thermometer reads 95.45°C and the static correction given in the
correction curve is –0.08°C. Determine the true value of the temperature.

Solution:
True value of the temperature At= Am + δC = 95.45 – 0.08 = 95.37°C
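A one-line Python helper makes the relation At = Am + δC used here explicit; the name apply_correction is illustrative only.

def apply_correction(measured: float, correction: float) -> float:
    """True value At = measured value Am + static correction dC."""
    return measured + correction

print(f"{apply_correction(95.45, -0.08):.2f}")  # 95.37 (degrees C), as above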
3. Accuracy: It is the degree of closeness with which the instrument reading
approaches the true value of the quantity.

 Accuracy is expressed in the following ways:

Accuracy as “Percentage of Full Scale Reading”
 In the case of instruments having a uniform scale, the accuracy can be expressed as a percentage of the full scale reading.

Example: The accuracy of an instrument having a full scale reading of 50 units is expressed as ±0.1% of full scale reading.

Note: This form of notation indicates that the accuracy is expressed in terms of limits of error.

 So for the accuracy limits specified above, there will be ±0.05 units of error in any measurement.

 For a reading of 50 units, there will be an error of ±0.05 units, i.e., ±0.1%, while for a reading of 25 units there will still be an error of ±0.05 units, i.e., ±0.2%.

 Thus as the reading decreases, the error in measurement remains ±0.05 units but the net percentage error increases. Hence specification of accuracy in this manner is highly misleading.
 Accuracy as “Percentage of True Value”
 This is the best method of specifying accuracy. Here it is specified in terms of the true value of the quantity being measured.

Example: The accuracy can be specified as ±0.1% of the true value. This indicates that as the readings get smaller, the error also gets smaller.
 Accuracy as “Percentage of Scale Span”: For an instrument with amax and amin representing the full-scale and the lowest reading on the scale, (amax − amin) is called the span of the instrument, or scale span.
 The accuracy of an instrument can be specified as a percentage of this scale span.

Example:
 For an instrument having a scale span from 25 to 225 units, the accuracy can be specified as ±0.2% of scale span, i.e., ±[(225 − 25) × 0.2/100], which is ±0.4 units of error in every measurement.

Point Accuracy: Here the accuracy is specified at only one particular point of the scale.
 It does not give any information about the accuracy at any other point on the scale.

Example: A wattmeter having a range of 1000 W has an error of ±1% of full scale deflection. If the true power is 100 W, what would be the range of readings? Suppose the error is specified as a percentage of the true value; what would then be the range of the readings?

Solution:
When the error is specified as a percentage of full scale deflection, the magnitude of the limiting error at full scale = ±(1/100) × 1000 = ±10 W.

Thus the wattmeter reading when the true power is 100 W may be 100 ± 10 W, i.e., between 90 W and 110 W.
Relative error = ±(10/100) × 100 = ±10%

Now suppose the error is specified as a percentage of the true value.
The magnitude of the error = ±(1/100) × 100 = ±1 W.
Therefore the meter may read 100 ± 1 W, or between 99 W and 101 W.
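A minimal Python sketch contrasting the three ways of specifying the limiting error discussed above (percentage of F.S.D., of true value, and of scale span); the function names are illustrative only.

def limit_of_fsd(percent: float, fsd: float) -> float:
    return percent / 100 * fsd                # e.g. 1% of 1000 W = 10 W

def limit_of_true_value(percent: float, true_value: float) -> float:
    return percent / 100 * true_value         # e.g. 1% of 100 W = 1 W

def limit_of_span(percent: float, a_min: float, a_max: float) -> float:
    return percent / 100 * (a_max - a_min)    # e.g. 0.2% of (225 - 25) = 0.4

true_power = 100.0
e_fsd = limit_of_fsd(1, 1000)
e_true = limit_of_true_value(1, true_power)
print(f"FSD spec       : {true_power - e_fsd:.0f} W to {true_power + e_fsd:.0f} W")    # 90 W to 110 W
print(f"True-value spec: {true_power - e_true:.0f} W to {true_power + e_true:.0f} W")  # 99 W to 101 W
print(f"Scale-span spec: +/- {limit_of_span(0.2, 25, 225):.1f} units")                 # +/- 0.4 units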
 Accuracy can also be defined in terms of static error.

4. Precision: It is the measure of the degree of agreement within a group of measurements.

 High degree of precision does not guarantee accuracy.

Precision is composed of two characteristics:

(i). Conformity
(ii). Number of significant figures

(i). Conformity

 Consider a resistor having a true value of 2,385,692 Ω which is consistently measured by an ohmmeter as 2.4 MΩ, due to the non-availability of a proper scale.

 The error created due to this limitation of the scale is called the precision error.

(ii). Significant Figures

 The precision of a measurement is indicated by the number of significant figures in which the reading is expressed.

 Significant figures convey the actual information about the magnitude and the measurement precision of the quantity.

Example: If a resistance is specified as 110 Ω, its value may be anywhere between 109 Ω and 111 Ω; there are 3 significant figures. If it is specified as 110.0 Ω, its value may be between 109.9 Ω and 110.1 Ω; there are now 4 significant figures.

 Thus, the more the significant figures, the greater is the precision of the measurement.

 Normally, large numbers with zeros are expressed in terms of powers of ten.
Example: The approximate population of a city is reported as 490,000, which is actually to be read as "the population lies between 480,000 and 500,000", but it could be misread as implying that the population lies between 489,999 and 490,001.

 So it is expressed as 49 × 10^4 or 4.9 × 10^5, which shows 2 significant figures (see the sketch below).
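Python's scientific-notation formatting makes this significant-figure convention explicit; a tiny sketch:

population = 490_000
print(f"{population:.1e}")  # '4.9e+05', i.e. 4.9 x 10^5 with 2 significant figures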

5. Sensitivity: The sensitivity denotes the smallest change in the measured variable to which the instrument responds.

It is defined as the ratio of the change in the output of an instrument to the change in the value of the quantity being measured.
Mathematically it is expressed as

Sensitivity = (infinitesimal change in output)/(infinitesimal change in input) = Δqo/Δqi

[Figure: output qo plotted against input qi; the sensitivity is the slope Δqo/Δqi of the curve.]
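For a linear instrument, the static sensitivity is simply the slope of this output-input calibration curve. A minimal Python sketch estimating it with a least-squares straight line; the function name and the calibration points are made up for illustration.

def static_sensitivity(inputs, outputs):
    n = len(inputs)
    mean_i = sum(inputs) / n
    mean_o = sum(outputs) / n
    num = sum((i - mean_i) * (o - mean_o) for i, o in zip(inputs, outputs))
    den = sum((i - mean_i) ** 2 for i in inputs)
    return num / den  # slope = delta qo / delta qi

qi = [0.0, 1.0, 2.0, 3.0, 4.0]   # input, e.g. volts
qo = [0.0, 2.1, 3.9, 6.0, 8.1]   # output, e.g. scale divisions
print(f"Sensitivity = {static_sensitivity(qi, qo):.2f} divisions per volt")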
6. Hysteresis
Many times, for increasing values of the input an instrument may indicate one set of output values, and for decreasing values of the input the same instrument may indicate a different set of output values. When these output values are plotted against the input, the curves for increasing and decreasing input do not coincide. The output shows the maximum variation at about half of full scale; for this reason the hysteresis error is specified at 50% of full scale.

7. Threshold
The smallest change in the input that gives a perceivable change in the output of an instrument is called the resolution.
In most instruments, when the input is increased from zero there is a small dead band or dead zone for which no perceivable output is indicated by the instrument. The smallest input that gives some perceivable output is the threshold of the instrument. Thus the resolution is the smallest change in input that can be measured, and the threshold is the smallest input that can be measured. Needless to say, the resolution has meaning only after the threshold input has been exceeded.
8. Repeatability
Repeatability is a measure of closeness with which a given input may be
measured over and over again.

Measurement of Errors
The measurement error is defined as the difference between the true or actual value and the measured value.
 The true value is the average of an infinite number of measured values.
 The measured value is the value obtained from a single measurement with the instrument.

Types of Errors in Measurement


Errors may arise from different sources and are usually classified into the following types:
1. Gross Error
2. Systematic Error
3. Random Error

1. Gross Errors
Gross errors occur because of human mistakes. For example, the person using the instrument may take a wrong reading or record incorrect data. Such errors come under gross errors. Gross errors can only be avoided by taking readings carefully.
Two methods can reduce gross errors:
 The reading should be taken carefully.
 Two or more readings of the measured quantity should be taken, preferably by different experimenters and at different points on the scale, to remove such errors.
These types of errors also include the loading effect and the misuse of instruments.
2. Systematic Error
The systematic errors are mainly classified into three categories
(i) Instrumental Errors (ii) Environmental Errors (iii) Observational Errors
(i) Instrumental Errors
 These errors may be due to wrong construction or wrong calibration of the measuring instruments.
 They may also arise due to friction or hysteresis.
(ii) Environmental Errors
 These errors are due to the external conditions surrounding the measuring device.
 External conditions include temperature, pressure, humidity and external magnetic fields.
(iii) Observational Errors
Such errors are due to wrong observation of the reading, for example parallax error while reading a pointer against a scale.

3. Random Errors
Errors which are caused by sudden changes in the conditions of measurement, such as changes in the atmospheric conditions, are called random errors.
