
Measurement & Calibration

By
Ramesh Dham
AM-Instrumentation
Venue: Training Hall
Date: 25-Sep-2008
Time 4:30 PM
Measurement
Measurement is the first step that leads to
control and eventually to improvement. If you
can't measure something, you can't understand
it. If you can't understand it, you can't control it.
If you can't control it, you can't improve it.

What gets measured gets done.

One accurate measurement is worth a thousand
expert opinions.
UNITS OF MEASUREMENT
A quantity is measured by comparing it with some
standard called a "unit". A unit, therefore, is any
division of a quantity that is accepted as one unit of
that quantity. A quantity (Q) is expressed as the
product of a number n (the number of times the
standard fits into the quantity) and the name given
to the unit or standard:
• Q = n × (name of unit)
• Q = nu
System of units

• Our earlier units were human based (drawn from what we
use in daily life) and therefore varied from country to
country and even from society to society. We had, for
example, a measure of length (the foot) that took the
length of a footstep as its unit.

• Relating units to the immediate physical world is not wrong;
rather, it is desirable. What is wrong is having many units
for the same quantity with no relative merit over one
another. We are led to a situation where different parts of
the world have different units for the same quantity, based
on local experience, and these units bear no logical relation
to one another. We therefore need a uniform unit system
across the world.
System of units
Classify quantities in two groups:

• Basic or fundamental quantities
• Derived quantities

Basic or fundamental units are a set of units for physical
quantities from which other units can be derived. We study
only a few basic units; the other (derived) units are
derived from them.
Features of fundamental units

• They are not deducible from each other.
• They are invariant in time and place (in the classical context).
• They can be accurately reproduced.
• They describe the human physical world.
International System of Units
• The International System of Units (abbreviated SI, from Système
International d'Unités, the French version of the name) is a scientific method of
expressing the magnitudes or quantities of important natural phenomena.
There are seven base units in the system, from which other units are
derived. This system was formerly called the meter-kilogram-second
(MKS) system.
• All SI units can be expressed in terms of standard multiple or fractional
quantities, as well as directly. Multiple and fractional SI units are defined
by prefix multipliers according to powers of 10, ranging from 10^-24 to 10^24.
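As an illustration, the prefix-multiplier idea can be sketched in Python. The table below is only a small subset of the full 10^-24 to 10^24 range, and the keys are common ASCII stand-ins for the prefix symbols (e.g. "u" for micro):

```python
# Sketch: convert a prefixed SI value to its base-unit value.
# Only a subset of the full 10^-24 .. 10^24 prefix range is shown.
SI_PREFIX = {
    "p": 1e-12, "n": 1e-9, "u": 1e-6, "m": 1e-3,
    "": 1.0, "k": 1e3, "M": 1e6, "G": 1e9,
}

def to_base(value, prefix):
    """Express a prefixed value in the unprefixed (base) unit."""
    return value * SI_PREFIX[prefix]

print(to_base(25, "m"))  # 25 mV expressed in volts
print(to_base(10, "k"))  # 10 km expressed in meters
```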

There are seven base quantities in the SI system of measurement:
• Length
• Mass
• Time
• Current
• Temperature
• Amount of substance (mole)
• Luminous intensity
Definitions of the SI base units
• Unit of length (meter): The meter is the length of the path travelled by light in vacuum
during a time interval of 1/299 792 458 of a second.
• Unit of mass (kilogram): The kilogram is the unit of mass; it is equal to the mass of the
international prototype of the kilogram.
• Unit of time (second): The second is the duration of 9 192 631 770 periods of the
radiation corresponding to the transition between the two hyperfine levels of the
ground state of the cesium 133 atom.
• Unit of electric current (ampere): The ampere is that constant current which, if
maintained in two straight parallel conductors of infinite length, of negligible circular
cross-section, and placed 1 meter apart in vacuum, would produce between these
conductors a force equal to 2 × 10^-7 newton per meter of length.
• Unit of thermodynamic temperature (kelvin): The kelvin, unit of thermodynamic
temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple
point of water.
• Unit of amount of substance (mole): 1. The mole is the amount of substance of a
system which contains as many elementary entities as there are atoms in 0.012
kilogram of carbon 12; its symbol is "mol." 2. When the mole is used, the elementary
entities must be specified and may be atoms, molecules, ions, electrons, other
particles, or specified groups of such particles.
• Unit of luminous intensity (candela): The candela is the luminous intensity, in a given
direction, of a source that emits monochromatic radiation of frequency 540 × 10^12
hertz and that has a radiant intensity in that direction of 1/683 watt per steradian.
Conversion of units
Despite worldwide efforts to adopt the SI unit system, there is, as a
matter of fact, wide variation in the choice of unit system. The
engineering world is full of inconsistencies in the use of unit
systems, so we often need the skill to convert one unit into
another. We take a simple example here to illustrate how this is
done for a basic quantity like mass.

Let us consider a mass of 10 kg that is to be converted into
grams, the mass unit in the cgs (Gaussian) system. Let the
measurements in the two systems be "n1u1" and "n2u2" respectively.
The quantity Q is "10 kg" and is the same irrespective of the
system of units employed. As such,

Q = n1u1 = n2u2
n2 = n1 × (u1/u2) = 10 × 1000 = 10^4
Q = n2u2 = 10^4 g

The process of conversion for basic quantities is straightforward.
The conversion of derived quantities, however, involves the
dimensions of the derived quantities. We shall discuss conversion of
derived quantities in a separate module.
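The rule above can be checked with a short Python sketch. The factor u1/u2 is supplied by hand; the only conversion assumed is 1 kg = 1000 g:

```python
# Sketch of the n1*u1 = n2*u2 rule for a basic quantity.
def convert_number(n1, u1_per_u2):
    """n2 = n1 * (u1/u2): same quantity Q, new numerical value."""
    return n1 * u1_per_u2

n2 = convert_number(10, 1000)  # 10 kg in grams; 1 kg = 1000 g
print(n2)  # 10000, i.e. n2 = 10^4
```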
Imperial Measuring System
Imperial System
There is an older set of units which some people still use or refer to, and some
materials are still named in these units.
It is called the imperial measuring system and uses inches, feet, yards and miles.

Distance
• Inch (") approximately equal to 2.54 cm or 25.4 mm
• Foot (') 12 inches = 1 foot, approx. equal to 30 cm or 300 mm
• Yard (yd) 36 inches = 3 feet, approx. equal to 91 cm or 910 mm
• Chain (ch) 66 feet, approx. equal to 2012 cm or 20120 mm
• Mile (mi) 5280 feet or 1760 yards, approx. equal to 160934 cm or 1609344 mm

Weight
• Ounce (oz) approx. 28 grams
• Pound (lb) approx. 0.45 kg or 454 grams

Volume
• Pint (pt) approx. 568 ml or 0.568 l
• Gallon (gal) approx. 4.55 litres
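A minimal length converter based on the table above. Exact factors are used here (the table rounds, e.g. 1 ft ≈ 30 cm); everything derives from 1 inch = 2.54 cm:

```python
# Exact imperial-to-metric length factors, in cm per unit,
# all derived from 1 inch = 2.54 cm.
CM_PER = {
    "inch": 2.54,
    "foot": 12 * 2.54,        # 30.48 cm
    "yard": 36 * 2.54,        # 91.44 cm
    "chain": 66 * 12 * 2.54,
    "mile": 5280 * 12 * 2.54,
}

def to_cm(value, unit):
    return value * CM_PER[unit]

print(to_cm(1, "mile"))  # about 160934 cm, i.e. ~1.609 km
```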
Calibration
The set of operations which establishes,
under specified conditions, the relationship
between values indicated by a measuring
instrument or measuring system, or values
represented by a material measure or a
reference material, and the corresponding
values of a quantity realized by a reference
standard.
Calibration Labs
National level: NPL (India), NPL (UK), NIST (USA)
Regional level: ERTL-E, ERTL-W, ERTL-N, ERTL-S, and other labs
State level: ETDC centres in the states, and other labs

NPL – National Physical Laboratory
ERTL – Electronics Regional Test Laboratory
ETDC – Electronics Test and Development Center
Need of Calibration
The main objectives of calibration services are:

 To maintain quality control and quality assurance in production.
 To comply with requirements of global trade.
 To meet the requirements of ISO guides.
 To promote international recognition.
 To trace measurement results back to national standards.


Benefits of Calibration
 It fulfils the requirements of traceability to national /
international standards like ISO 9000, ISO 14000, etc.
 Proof that the instrument is working correctly.
 Confidence in using the instruments.
 Traceability to national measurement standards.
 Interchangeability.
 Reduced rejection and failure rates, and thus higher returns.
 Improved product and service quality, leading to satisfied
customers.
 Power saving.
 Cost saving.
 Safety.
The Basic Requirements for Calibration

 Reference / calibration standards and other
instruments / equipment.
 Controlled environmental conditions.
 Competence of calibration lab personnel.
 Traceability of reference / calibration standards.
 Documentation.
Reference / Calibration Standard
Generally calibration standards are categorised as Passive and Active
standards.
Passive Standards:
A passive standard is one that either (a) does not require or rely on
operating power from an external source or (b) does not contain elements
within its circuitry that are responsible for amplification of the signal.
e.g. decade resistance, current shunt, RTD, thermocouple, etc.
Active Standards:
An active standard, on the other hand, relies on powered circuitry which
processes the applied signal in some manner.
e.g. a digital multimeter, a multifunction calibrator, etc.

Standards are further classified as:
 Primary level standards
 Secondary level standards
 Tertiary or working level standards
Control On Environmental Conditions
In order to obtain the best accuracy and meaningful
calibration results, it is important to control the environmental
conditions in which measurements are made. The following
factors should be controlled and monitored, as recommended
by the manufacturer of the standard / instrument.

 Temperature (e.g. 25 +/- 4.0 deg C)
 Relative humidity (e.g. </= 70% RH)
 Illumination level (e.g. minimum 450 lx)
 Acoustic level (e.g. max. 60 dB)
 Shock and vibration (should be adequately controlled)
 Power supply regulation (e.g. +/- 1%)
 Temperature gradient (e.g. 1.5 deg C / hour)
 Proper earthing, etc.
Documentation
In any quality system, documentation has a very important role and,
therefore, proper care shall be taken in documenting:
 Calibration procedures / methods
 Calibration results (recording of data)
 Calibration report and
 Calibration certificate

A full explanation of documentation may be found in the guidelines of
the ISO 9000 (ISO 10012-1) documents.

However, a calibration report must address:
 Who? (Name / identification of cal lab technician / approving authority)
 What? (Calibration result / data)
 When? (Calibration date)
 Where? (Location / address where calibration was done)
 How? (Calibration procedure / method)
 Limitations / decisions shall also be reflected.
UNDERSTANDING
INSTRUMENT’S SPECIFICATION
The term "accuracy" in measurement refers to the
closeness of a measurement result to the true value.
The term is frequently used to denote a small
difference from the true value, really the "inaccuracy"
of the measurement. When we say in common language
that the accuracy is +/- 1.0% of full scale, what is
intended is that the "limits of inaccuracy are +/- 1.0%
of full scale".

In order to select an appropriate test / measuring
instrument, it is imperative to have a full
understanding of the instrument's specifications.
Analog and Digital Meter Accuracy
 The accuracy of analog meters is usually given as a percentage of full
scale, such as +/- 1.0% of FSD. Analog instruments may also be classified
according to their accuracy class, such as 0.2, 0.5, etc., which means
limits of error as a percentage of full scale, i.e. +/- 0.2% of FS, +/- 0.5%
of FS, and so on.
The major consideration here is that the accuracy of the reading
(absolute accuracy) decreases as the reading becomes less than full
scale.

 Digital instrument manufacturers specify performance in many ways.
The specification normally includes an input specification, a range
specification and sometimes a floor value. In the majority of cases,
however, accuracy is specified as:
+/- % of reading +/- % of range +/- counts.
Here the % error is given as a percentage of the reading or input. The
count error depends on the resolution of the display.
Determination of Absolute Accuracy
Typical specification:

X% of input +/- Y% of range +/- Z counts

e.g.
If the accuracy of a meter is given as
+/- (0.002% of input + 0.001% of range + 10 uV),
find the absolute accuracy on the 0.1 V meter range for a voltage level of 25 mV.

Solution:
Step 1: +/- 0.002% of 25 mV = (0.002 / 100) x 25 mV = 0.0005 mV = +/- 0.5 uV
Step 2: +/- 0.001% of 0.1 V = (0.001 / 100) x 0.1 V = 0.000001 V = +/- 1.0 uV
Step 3: floor term = +/- 10 uV
Step 4: add Steps 1, 2 & 3 = +/- 11.5 uV = absolute accuracy
Step 5: divide 11.5 uV by 25000 uV (25 mV) and multiply by 100
        = +/- 0.046% of reading (i.e. absolute accuracy in % of reading)
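The five steps above can be reproduced with a short Python sketch. All values are in volts; the floor term is the 10 uV from the example:

```python
# Sketch of the "% of reading + % of range + floor" accuracy formula.
def absolute_accuracy(reading, rng, pct_reading, pct_range, floor):
    """Absolute accuracy in the same unit as the reading (here volts)."""
    return (pct_reading / 100) * reading + (pct_range / 100) * rng + floor

acc = absolute_accuracy(25e-3, 0.1, 0.002, 0.001, 10e-6)
print(acc * 1e6)          # absolute accuracy in uV (about 11.5)
print(100 * acc / 25e-3)  # as a % of reading (about 0.046)
```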
Accuracy Ratio
(Reference Standard : Unit Under Calibration)

Preferred => 1 : 10
As per ANSI / NCSL Z540-1-1994 Standard => 1 : 4
As per ISO 10012-Part-1-1992 Standard => 1 : 3

Calibration Standard
(e.g. accuracy: +/- 0.33 deg C)

Process Instrument
(e.g. accuracy: +/- 1.0 deg C)

Process Tolerance
(e.g. temperature required: 250 +/- 10 deg C)
Why Accuracy Ratio
Should Be 1 : 3
If S = UUC specification and U = reference standard specification,
the resultant specification R is given by
R = sqrt(S^2 + U^2)
With U : S = 1 : 3, R = sqrt(3^2 + 1^2) = 3.162
The resultant specification expands by [(R - S) / S] x 100,
i.e. by 5.4%.
If U : S = 1 : 10, the specification expands by 0.5%
If U : S = 1 : 4, the specification expands by 3.1%
If U : S = 1 : 3, the specification expands by 5.4%
If U : S = 1 : 2, the specification expands by 11.8%
If U : S = 1 : 1, the specification expands by 41.4%
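The expansion figures above follow directly from the root-sum-square formula; a small Python sketch reproduces the whole table:

```python
import math

# Sketch: expansion of the UUC spec S when combined (root-sum-square)
# with a reference-standard spec U at a ratio of U:S = 1:ratio.
def spec_expansion_pct(ratio):
    s, u = float(ratio), 1.0
    r = math.sqrt(s**2 + u**2)
    return (r - s) / s * 100

for ratio in (10, 4, 3, 2, 1):
    print(f"1:{ratio} -> {spec_expansion_pct(ratio):.1f}%")
```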
ISO 9001:2000 – Calibration Requirement

Clause 7.6 of ISO 9001:2000: Control of Monitoring &
Measuring Devices.

Calibration may affect the following other clauses of ISO
9001:2000:
7.3.5: Design & Development Verification
7.3.6: Design & Development Validation
7.4.3: Verification of Purchased Product
7.5.1: Control of Production & Service Provision
7.5.2: Validation of Processes for Production &
Service Provision
6.3: Infrastructure
8.2.4: Monitoring and Measurement of Product
A Close Look at ISO 9001:2000
The clauses give rise to points like:
 Do you have a procedure for inspection on receipt at the
factory to check conformity of purchased products with
specifications?
 Does the procedure cover the verification of test
certificates and other data received from the vendor
along with the purchased product?
 Is product status identified with respect to measuring and
monitoring requirements?
 Are monitoring, measuring, analysis and improvement
processes planned and implemented as needed to
demonstrate conformity of the product?
 Can it be demonstrated that the measurement and
monitoring of the product is carried out at various stages
of the product realization process in accordance with
planned arrangements?
Study of the above points ultimately leads to the requirement of establishing a
monitoring and measurement facility, with the sole purpose that there exists a
system to demonstrate one's capability to provide reliable measurement results.
Establishing an acceptable system requires introducing the elements of
ISO 10012 Part 1, the content of which can be briefly outlined as follows:

 Selection and identification of measuring equipment
 Procuring measuring equipment
 Calibrating measuring equipment
 Frequency of calibration
 Calibration status
 Calibration records
 Safeguarding and care of measuring equipment
 Handling and controlling equipment
 Action when equipment is out of calibration
 Establishing traceability
 Cumulative effects of uncertainty
 Controlling environmental conditions
Interval / Frequency of Calibration
The frequency of calibration is one of the most argued points in a calibration
control program and is considered a gray area; however, we need to
assign a frequency for each piece of equipment. Any prudent exercise for
fixing the interval or frequency of calibration shall include:

 Type of equipment
 Frequency of use
 Manufacturer's recommendations
 Environmental conditions of use
 Maintenance and service
 Accuracy of measurement sought
 Frequency of cross-checks
 Loss due to incorrect data being accepted because measuring
equipment has become faulty

While setting the guideline for frequency of calibration, the cost of
calibration can't be ignored; hence one must balance the cost of calibration
against the cost of incorrect data being accepted because measuring
equipment is faulty.
Method of Finding intervals
For the initial choice of interval, the factors to be taken into account are:
a. The equipment manufacturer's recommendations
b. The extent and severity of use
c. The influence of the environment
d. The accuracy of measurement sought

A range of methods is available for reviewing the confirmation intervals. These differ
according to whether:

 Items of equipment are treated individually or as groups (e.g. by make or by type)
 Items fail to comply with their specifications due to drift with the lapse of time, or through use
 Data are available and importance is attached to the calibration history of the equipment

No single method is ideally suited to the full range of equipment encountered.

There are five different methods:
a. Automatic or "staircase" adjustment method
b. Control chart method
c. Calendar time method
d. "In-use" time method
e. In-service or "black-box testing" method
Method of Finding intervals
a. Automatic or "staircase" adjustment method:
Each time an item of equipment is confirmed on a
routine basis, the subsequent interval is extended if it
is found to be within tolerance.
b. Control chart method:
The same calibration points are chosen for every
confirmation and the results are plotted against time.
From these plots, scatter and drift are calculated,
the drift being either the mean drift over one
confirmation interval or, in the case of very stable
equipment, the drift over several intervals. From these
figures the effective drift may be calculated.
Method of Finding intervals
c. Calendar time method:
Items of measuring equipment are initially arranged into groups on
the basis of their similarity of construction and of their expected
similar reliability and stability. A confirmation interval is assigned to
each group, initially on the basis of engineering intuition. In each group,
the quantity of items which are returned at their assigned confirmation
interval and are found to have excessive error or to be otherwise
nonconforming is determined and expressed as a proportion of the
total quantity of items in that group which are confirmed during a given
period. If the proportion of nonconforming items of equipment is
excessively high, the confirmation interval should be reduced. If it
appears that a particular sub-group of items (such as a particular
make or type) does not behave like the other members of the group,
this sub-group should be moved to a different group with a different
confirmation interval. If the proportion of nonconforming items of
equipment in a given group proves to be very low, it may be
economically justifiable to increase the confirmation interval.
Method of Finding intervals
d. "In-use" time method:
This is a variation on the foregoing methods. The basic method
remains unchanged, but the confirmation interval is expressed in
hours of use rather than in calendar months of elapsed time.

e. In-service or "black-box testing" method:
This method is complementary to full confirmation. It can provide
useful interim information on the characteristics of measuring
equipment between full confirmations and can give guidance on the
appropriateness of the confirmation programme. This method is
suitable for complex instruments. Critical parameters are checked
frequently (once per day or even more often) by portable
calibration gear or, preferably, by a "black box" made up specifically
to check the selected parameters. If the equipment is found to be
nonconforming using the "black box", it is returned for full
confirmation.

Question-Answers
THANK YOU
