Submitted by:
Ganesh Kumar (16M211)
Gaushik.C (16M212)
Gokulanand (16M213)
Gopikrishnan (16M214)
Harish (16M215)
Kannan.W (16M216)
Krishnaraju (16M217)
Karthikeyan (16M218)
Linkesh (16M219)
CALIBRATION
A set of operations that establish, under specified conditions, the
relationship between values of quantities indicated by a measuring
instrument or measuring system, or values represented by a material
measure or a reference material, and the corresponding values realized by
standards.
Calibration is the comparison of a measurement device (an unknown)
against an equal or better standard.
A standard in a measurement is considered the reference; it is the one in
the comparison taken to be the more correct of the two.
Calibration finds out how far the unknown is from the standard.
Instrument error can occur due to a variety of factors:
Drift, environment, electrical supply, addition of components to the
output loop, process changes, etc.
Because a calibration is performed by comparing the instrument under
test against a known signal or standard, performing a calibration
reveals these errors.
An error is the algebraic difference between the indication and the
actual value of the measured variable.
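As a small worked illustration of these definitions (the numeric values below are invented, not taken from any real calibration), the error and its percent-of-reading form can be computed as:

```python
# Illustrative sketch: calibration error and percent-of-reading accuracy.
# The indicated/actual values are invented example numbers.

def calibration_error(indicated, actual):
    """Algebraic difference between the indication and the actual value."""
    return indicated - actual

def percent_of_reading(indicated, actual):
    """Error expressed as a percentage of the reading."""
    return 100.0 * (indicated - actual) / actual

# A device indicates 25.43 mm against a 25.40 mm reference:
err = calibration_error(25.43, 25.40)    # about +0.03 mm
pct = percent_of_reading(25.43, 25.40)   # about +0.118 % of reading
```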
CHARACTERISTICS OF A CALIBRATION
Calibration Tolerance: Every calibration should be performed to a specified
tolerance. The terms tolerance and accuracy are often used incorrectly as if
they were interchangeable.
Accuracy Ratio: This term was used in the past to describe the relationship
between the accuracy of the test standard and the accuracy of the instrument
under test.
Calibration tolerances should be determined from a combination of factors.
These factors include:
• Requirements of the process
• Capability of available test equipment
• Consistency with similar instruments at your facility
• Manufacturer’s specified tolerance
Accuracy: The ratio of the error to the output, expressed in percent of
reading.
Traceability: The property of a result of a measurement whereby it can be
related to appropriate standards, generally national or international standards,
through an unbroken chain of comparisons.
E.g., the National Bureau of Standards (NBS) maintains the nationally
recognized standards.
IMPORTANT CONDITIONS FOR CALIBRATION
• Calibration should be done using a measurement system with the proper
accuracy, stability and range.
• Measuring equipment should have the desired accuracy and precision.
• All test and measuring instruments should be securely and durably
labelled.
• Records should be maintained for all test and measuring equipment
included in the calibration system.
• It should be ensured that the environmental conditions (temperature,
relative humidity, cleanliness, vibration, electromagnetic interference,
power) are suitable for calibration and are maintained.
CALIBRATION OF VERNIER CALIPER:
•Clean the caliper jaws and make sure they are free of dirt.
•Make sure that the gear moves properly without any hindrance.
•Bring the jaws into contact and check the dial for zero error; if it is not
zero, manually set it to zero.
•Insert a 0.5 inch (12.7 mm) standard gauge block between the jaws to measure
the outer dimension. Make sure that the jaws are in contact with the block and
record the readings accurate to three decimal places (take three readings to
reduce inconsistency).
•The same procedure is repeated for 1 inch (25.4 mm) and 4 inch (101.6 mm)
standard gauge blocks. Note and compare the readings; with the help of these
results the Vernier caliper is calibrated.
•To calibrate the internal jaws, set them to 0.5 inch (12.7 mm) and use the
locking screw to lock their position. Then use another calibrated Vernier
caliper to measure the distance between the jaws. Record the readings and
compare.
•The above step is repeated for the 25.4 mm and 101.6 mm sizes.
•For calibrating height measurement, set a 12.7 mm gauge block on a flat
surface, then place the caliper vertically so that its bottom flat surface
rests on top of the gauge block. Now extend the depth measuring stick to touch
the ground and note the reading.
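The repeat readings taken in the steps above can be reduced to a single deviation figure, as in this sketch (the readings are made-up illustrative values, and the helper names are not from any standard):

```python
# Sketch: averaging repeat caliper readings against a gauge-block nominal.
# The three readings below are invented example values.

def average(readings):
    return sum(readings) / len(readings)

def deviation(readings, nominal_mm):
    """Mean indicated value minus the gauge-block nominal."""
    return average(readings) - nominal_mm

# Three readings taken on the 12.7 mm (0.5 in) gauge block:
readings_12_7 = [12.701, 12.699, 12.702]
dev = deviation(readings_12_7, 12.7)   # about +0.0007 mm
```

Repeating this for each block size gives the deviations used to judge whether the caliper is within its calibration tolerance.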
MICROMETER:
A micrometer is a precision measuring instrument, used to obtain very
fine measurements and available in metric and imperial versions.
CALIBRATION OF MICROMETER:
In order to calibrate a micrometer, the following procedure should be followed:
•Keep the micrometer at a stable room temperature before starting the
calibration process.
•Move the spindle towards the anvil until they make contact to check that the
primary pointer on the thimble scale is in line with the reference line of the barrel
scale. If not so, use the adjusting wrench to achieve this zero point alignment.
•Spin the ratchet anticlockwise until enough space is present between the anvil
and the spindle to accommodate the gauge block.
•Place the gauge block between the anvil and the spindle.
•Check that both anvil and spindle are touching the gauge block evenly.
•Set the lock nut while the micrometer is still on the gauge block.
•Read off the value from the barrel scale to obtain a reading to the nearest half
millimeter.
•Read off the value from the thimble scale that is parallel with the reference line
of the barrel scale.
•Add the barrel scale value and the thimble scale value to obtain the total
measurement reading.
•Compare the reading obtained with the actual value of the gauge block to
determine whether a lead error exists.
•Compensate any readings obtained with the micrometer by the relevant lead
error if the micrometer cannot be immediately replaced or repaired.
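The arithmetic in the steps above can be sketched as follows (the barrel and thimble values are invented, and a 0.01 mm thimble graduation is assumed):

```python
# Sketch: combining micrometer scale readings and checking lead error.
# Illustrative values for a metric micrometer with 0.01 mm thimble divisions.

def total_reading(barrel_mm, thimble_divisions, graduation=0.01):
    """Barrel value (read to the nearest 0.5 mm) plus the thimble value."""
    return barrel_mm + thimble_divisions * graduation

def lead_error(reading_mm, gauge_block_mm):
    """Difference between the indicated reading and the gauge-block value."""
    return reading_mm - gauge_block_mm

reading = total_reading(25.0, 42)    # 25.0 mm + 42 * 0.01 mm = 25.42 mm
err = lead_error(reading, 25.40)     # against a 25.40 mm gauge block
corrected = reading - err            # compensation while awaiting repair
```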
FEELER GAUGE
Calibration of dial indicators
Dial gauges can be calibrated with the help of a quick jack with various types
of dials.
Put the dial gauge in the dial stand, set it up to a certain height, and bring
it into contact with the quick jack; mark zero on the dial. Then move the
stand horizontally to read a value on the quick jack; the dial will register a
value due to the change in height of the quick jack.
Reverse the direction of the horizontal movement back to the starting
position to obtain the zero value again; any zero error present can now be
noticed and the zero correction applied.
Hence the dial indicators are calibrated in this manner.
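The zero correction described above amounts to subtracting the observed zero error from each subsequent reading, as in this sketch (the values are illustrative):

```python
# Sketch: applying a zero correction to dial indicator readings.
# The readings below are invented example values.

def corrected_reading(indicated_mm, zero_error_mm):
    """Subtract the zero error observed on returning to the start position."""
    return indicated_mm - zero_error_mm

# If the dial reads 0.02 mm on returning to the starting position,
# that 0.02 mm is subtracted from every subsequent reading:
value = corrected_reading(1.48, 0.02)   # about 1.46 mm
```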
DIAL INDICATOR
Slip Gauges
Slip gauges are the universally accepted ‘standard of length’ in industries. These
are the simplest possible means of measuring linear dimensions very accurately.
For tool-room and other precision work, the ordinary methods of measurement
are not always accurate. Micrometers and Vernier calipers can be used to check
tolerances within 0.002 to 0.02 mm, but for finer tolerances they are not
effective. Thus there is a need for an instrument which can measure to fine
tolerance limits. The means to do so are 'slip gauges'. They can be used to
measure tolerances in the range of 0.001 to 0.0005 mm very accurately.
The sets are available in 'Metric' and 'English' units. The letter 'E' is used
for inch units (English units) and the letter 'M' is used for mm units (Metric
units). The number of pieces in a set is given by the number following the
letter E or M.
For Example, E 81 refers to a set whose blocks are in inch unit and 81 in
number. Similarly M 45 refers to a set whose blocks are in mm units and are 45
in number.
Protective Slips:
Apart from the above, two extra gauges of 2.5 mm each are also supplied as
protective slips. The purpose of protective slips is to prolong the life of
the slip gauges. These are often made of the same material as the rest of the
set, or sometimes they may be made from tungsten carbide, which is a
wear-resistant material. Protective slips are identified by the letter 'P'
marked on one face. They are placed at each end of the assembled blocks to
ensure that any wear or damage is confined to these two blocks.
Wringing Process:
If two blocks are pressed and twisted together, it will be found that due to
molecular attraction and atmospheric pressure they adhere to each other quite
firmly. This process is known as wringing. It is very useful for producing a
required size by assembling several gauge blocks.
Before wringing blocks, wipe them clean using a cloth, chamois leather, or a
cleansing tissue. Vaseline, grease or dust should be removed with petroleum.
Start wringing with the largest sizes first. Place two faces together at right
angles as shown in figure, and with pressure, twist through 90°. This action
should be smooth and with constant pressure.
When the largest gauges have been assembled, follow the same process with the
others in order of decreasing block size.
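Choosing which blocks to wring together can be sketched as a largest-first selection (the block sizes below are a small hypothetical subset for illustration, not the contents of any particular E- or M-series set):

```python
# Sketch: greedy largest-first selection of slip gauges to build a size.
# The available sizes are a hypothetical subset of a metric set.

def select_blocks(target_mm, available):
    """Pick blocks largest-first until the target is reached exactly."""
    stack, remaining = [], round(target_mm, 3)
    for size in sorted(available, reverse=True):
        if size <= remaining + 1e-9:
            stack.append(size)
            remaining = round(remaining - size, 3)
        if remaining == 0:
            break
    if remaining != 0:
        raise ValueError("target cannot be built from the available blocks")
    return stack

# Building a 32.5 mm height from a hypothetical set of blocks:
stack = select_blocks(32.5, [1.0, 1.5, 2.0, 2.5, 5.0, 10.0, 20.0, 30.0])
# stack -> [30.0, 2.5], which is then wrung together largest first
```

Real block selection also works digit by digit from the smallest decimal place, but the exact choices depend on the set in hand.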
What does it mean if a supplier is certified to ISO 9001?
This means that your supplier has established a systematic approach to quality
management. A statement of conformity to ISO 9001 should not, however, be
considered a substitute for a declaration or statement of product or service
conformity.
Can suppliers claim that their products or services meet ISO 9001 ?
No. The reference to ISO 9001 indicates that the supplier has a quality
management system that meets the requirements of ISO 9001.
The ISO 9000 series is based on seven quality management principles (QMP).
The seven quality management principles are:
• Customer focus
• Leadership
• Engagement of people
• Process approach
• Improvement
• Evidence-based decision making
• Relationship management
Optical Comparators
A comparator is a precision instrument used to compare the dimensions of a
given working component with a working standard.
An optical comparator is a type of comparator that works by optical means.
In an optical comparator, a light source and a reflecting surface (mirror)
are used as the optical means.
An incident ray hits the mirror and is reflected; the reflected ray is
projected onto the scale.
In the picture below, an incident ray OA strikes the mirror at an angle θ and
is reflected at the same angle θ.
The mirror can be tilted, and the tilting of the mirror is controlled by the
measuring plunger. This movement is projected onto the graduated scale.
Thus, if the mirror tilts through an angle α, the reflected ray moves through
an angle 2α. This is the working principle of the optical comparator.
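Under the usual small-angle assumptions (not stated explicitly above), a plunger movement x tilts the mirror by about x/l for a lever of length l, and because the reflected ray turns through twice the tilt, a scale at distance L sees a magnification of 2L/l. A sketch with assumed dimensions:

```python
# Sketch: small-angle magnification of a simple optical comparator.
# The lever length and scale distance below are assumed example values.

def magnification(scale_distance_L, lever_length_l):
    """Magnification = 2L / l under the small-angle approximation."""
    return 2.0 * scale_distance_L / lever_length_l

def spot_movement(plunger_movement, scale_distance_L, lever_length_l):
    tilt = plunger_movement / lever_length_l   # mirror tilt in radians
    return 2.0 * tilt * scale_distance_L       # reflected ray turns 2 * tilt

m = magnification(500.0, 25.0)        # 40x for L = 500 mm, l = 25 mm
s = spot_movement(0.1, 500.0, 25.0)   # 0.1 mm of plunger travel -> 4 mm on scale
```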
Mechanical-Optical Comparators
Mechanical-optical comparators are the same as optical comparators except that
the plunger is replaced with pivoted levers.
1. High accuracy is achieved since there are very few moving parts.
2. Parallax error is avoided.
3. Less weight compared to other comparators due to fewer parts.
4. Very suitable for precision measurements due to the high magnification
that can be achieved.
Never leave your gages in contact with dirt or oil for long periods of time.
Oils can corrode the polished surface of your gage. This includes skin oils.
Wash your hands before handling gages. The natural acids and alkalinity
on our skin can cause rusting or corrosion. Hold the gages by the ends only,
to minimize contact with skin.
Use a soft, non-abrasive and clean cloth to wipe your gages clean, before
and after use. A dirty, chip-filled cloth can mar the highly finished surface.
After calibration, use a good cleaning solvent to remove dirt, oils and
fingerprints, then wipe dry with another soft, clean cloth.
Electrical Comparators
An electrical comparator consists of a base, a stand, a power unit, a
measuring unit, an indicating unit and an amplification unit. In this
comparator, the movement of the measuring contact is converted into an
electrical signal, which is then recorded by a device that can be calibrated
in terms of plunger movement. For this, an AC Wheatstone bridge circuit
including a galvanometer is used.
The comparator comprises a sturdy stylus and an iron armature suspended by a
spring between two coils W and W1. If the armature lies midway between the
coils W and W1, the inductances of these coils are equal; the Wheatstone
bridge is balanced, and this position forms the datum line.
When a workpiece is placed under the stylus for measurement, any difference
of the component size from the datum raises or lowers the armature. This
upsets the balance of the Wheatstone bridge, causing an unbalanced current to
flow. This current is directly related to the difference in size of the
component; it is magnified by an amplifier and indicated by the galvanometer.
These comparators have a precision of 0.001 mm. Their main advantages are
minimal moving parts, and sensitivity and accuracy maintained over long
periods.
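The balance condition can be sketched with a simple linear coil model (the inductance figures and the linearity itself are assumptions for illustration, not the behaviour of any specific instrument):

```python
# Sketch: inductance imbalance of coils W and W1 as the armature moves
# off the datum. L0 and K are invented illustrative constants.

L0 = 10.0   # mH, inductance of each coil with the armature centred (assumed)
K = 2.0     # mH per mm of armature displacement (assumed)

def coil_inductances(displacement_mm):
    """Moving toward W increases its inductance and decreases W1's."""
    return L0 + K * displacement_mm, L0 - K * displacement_mm

def bridge_balanced(displacement_mm):
    """The bridge is balanced only when both inductances are equal."""
    w, w1 = coil_inductances(displacement_mm)
    return w == w1

on_datum = bridge_balanced(0.0)     # True: equal inductances, datum line
off_datum = bridge_balanced(0.05)   # False: unbalanced current flows
```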
Linear Variable Differential Transformer:
An LVDT (linear variable differential transformer) is an electromechanical
sensor used to convert mechanical motion or vibration, specifically
rectilinear motion, into a variable electrical current, voltage or electrical
signal, and the reverse. Such transducers are used primarily in automatic
control systems or as mechanical motion sensors in measurement technologies.
In short, a linear transducer provides a voltage output related to the
parameter being measured, for example force, allowing simple signal
conditioning.
LVDT sensors are sensitive to electromagnetic interference. Significant
errors can be reduced by using shorter connection cables, which lower the
electrical resistance. A linear displacement transducer requires three to
four connection wires for power supply and signal delivery.
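Near the null position an LVDT is commonly modelled as linear in core displacement, with the sign of the output indicating the direction of motion. The sketch below uses a made-up sensitivity figure, not any particular device's datasheet value:

```python
# Sketch: idealized LVDT transfer near the null position.
# The sensitivity value is an invented illustrative figure.

def lvdt_output(displacement_mm, sensitivity_v_per_mm=0.2):
    """Output grows linearly with core displacement; the sign gives
    the direction of motion away from the null position."""
    return sensitivity_v_per_mm * displacement_mm

def displacement_from_output(voltage_v, sensitivity_v_per_mm=0.2):
    """Invert the calibrated sensitivity to recover displacement."""
    return voltage_v / sensitivity_v_per_mm

v = lvdt_output(1.5)                   # about 0.3 V for 1.5 mm of travel
x = displacement_from_output(v)        # recovers about 1.5 mm
```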
Procedure:
Let us assume that the required height of the component is 32.5 mm. Initially
this height is built up with slip gauges. The slip gauge blocks are placed
under the stem of the dial gauge, the pointer of the dial gauge is adjusted
to zero, and the slip gauges are then removed.
Now the component to be checked is introduced under the stem of the dial gauge.
If there is any deviation in the height of the component, it will be indicated by the
pointer.
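Because the dial gauge was zeroed on the 32.5 mm slip-gauge stack, its reading on the component is the height deviation directly. A minimal sketch with an invented reading:

```python
# Sketch: checking component height against a 32.5 mm slip-gauge setting.
# The dial reading below is an invented example value.

NOMINAL_MM = 32.5   # height built up with slip gauges

def component_height(dial_reading_mm):
    """Dial reads + for oversize, - for undersize after zeroing on the stack."""
    return NOMINAL_MM + dial_reading_mm

h = component_height(-0.03)   # about 32.47 mm: component is 0.03 mm undersize
```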