
ERRORS IN MEASUREMENT

TRUE VALUE:
Practically, true value is the name given to the exact value of the quantity under observation. Technically, it is the average
value of an infinite number of measurements, in which the average deviation due to the various contributing factors tends to zero. Such
an ideal situation is impossible to realize in practice, and hence it is not possible to determine the true value of a quantity by
experimental means. The main reason for this is that the positive deviations from the true value do not equal the negative deviations,
and hence they do not cancel each other. Thus an experimenter would normally never know whether the value of the quantity obtained
by experimental means is the true value or not. In practice, therefore, the term true value refers to the value that would be obtained if
the quantity under consideration were measured by an exemplary method, i.e. a method agreed upon by experts as being sufficiently
accurate for the purposes to which the data will ultimately be put.

ACCURACY AND PRECISION


In the fields of science, engineering, industry and statistics, the accuracy[1] of a measurement system is the degree of closeness
of measurements of a quantity to its actual (true) value. The precision[1] of a measurement system, also called reproducibility or
repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.[2] Although the
two words can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.
A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an
experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve
accuracy. Eliminating the systematic error improves accuracy but does not change precision.
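As a rough numerical illustration (a minimal sketch in Python, not from the original text; all values are assumed), the following snippet simulates readings that carry both a random error and a constant systematic error. Averaging more readings tightens the scatter (precision) but leaves the bias (accuracy) untouched:

    import random
    import statistics

    TRUE_VALUE = 100.0
    BIAS = 2.0    # assumed constant systematic error
    SPREAD = 0.5  # assumed standard deviation of the random error

    def take_readings(n):
        # each reading = true value + fixed bias + random fluctuation
        return [TRUE_VALUE + BIAS + random.gauss(0, SPREAD) for _ in range(n)]

    for n in (5, 50, 5000):
        readings = take_readings(n)
        # the spread shrinks as n grows, but the mean stays near 102, not 100
        print(n, round(statistics.mean(readings), 3),
              round(statistics.stdev(readings), 3))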
A measurement system is called valid if it is both accurate and precise. Related terms are bias (non-random or directed effects
caused by a factor or factors unrelated to the independent variable) and error (random variability), respectively. The terminology is
also applied to indirect measurements, that is, values obtained by a computational procedure from observed data.
In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in
the underlying physical quantity that produces a response in the measurement. In the case of full reproducibility, such as in the case
of rounding a number to a representable floating point number, the word precision has a meaning not related to reproducibility. For
example, in the IEEE 754-2008 standard it means the number of bits in the significand, so it is used as a measure for the relative
accuracy with which an arbitrary number can be represented.
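As a side note on that floating-point sense of the word, Python's standard library exposes the binary64 significand size directly; this small sketch (not part of the original text) prints it:

    import sys

    # IEEE 754 binary64: "precision" = number of significand bits (53),
    # which bounds the relative accuracy of representation, not repeatability
    print(sys.float_info.mant_dig)  # 53
    print(sys.float_info.epsilon)   # 2.220446049250313e-16, i.e. 2**-52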

SYSTEMATIC ERRORS
Systematic errors are biases in measurement which lead to the situation where the mean of many separate measurements
differs significantly from the actual value of the measured attribute. All measurements are prone to systematic errors, often of
several different types. Sources of systematic error include imperfect calibration of measurement instruments (zero error), changes
in the environment which interfere with the measurement process, and imperfect methods of observation; the resulting error may be either a
zero error or a percentage error. For example, consider an experimenter taking a reading of the time period of a pendulum swinging
past a fiducial mark: If his stop-watch or timer starts with 1 second on the clock then all of his results will be off by 1 second (zero
error). If the experimenter repeats this experiment twenty times (starting at 1 second each time), then there will be a percentage
error in the calculated average of his results; the final result will be slightly larger than the true period. Distance measured by radar
will be systematically overestimated if the slight slowing down of the waves in air is not accounted for. Incorrect zeroing of an
instrument leading to a zero error is an example of systematic error in instrumentation.
Systematic errors may also be present in the result of an estimate based on a mathematical model or physical law. For instance,
the estimated oscillation frequency of a pendulum will be systematically in error if slight movement of the support is not accounted
for.
Systematic errors can be either constant, or be related (e.g. proportional or a percentage) to the actual value of the measured
quantity, or even to the value of a different quantity (the reading of a ruler can be affected by environment temperature). When they
are constant, they are simply due to incorrect zeroing of the instrument. When they are not constant, they can change sign. For
instance, if a thermometer is affected by a proportional systematic error equal to 2% of the actual temperature, and the actual
temperature is 200°, 0°, or −100°, the measured temperature will be 204° (systematic error = +4°), 0° (null systematic error) or
−102° (systematic error = −2°), respectively. Thus, the temperature will be overestimated when it is above zero, and
underestimated when it is below zero.
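The thermometer figures above can be checked directly; a tiny sketch of the arithmetic (the 2% proportional error is taken from the example):

    # proportional systematic error of +2% of the actual temperature
    for actual in (200.0, 0.0, -100.0):
        measured = actual * 1.02
        print(actual, measured, measured - actual)
    # 200 -> 204 (+4), 0 -> 0 (0), -100 -> -102 (-2)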
Constant systematic errors are very difficult to deal with, because their effects are only observable if they can be removed. Such
errors cannot be removed by repeating measurements or averaging large numbers of results. A common method to remove
systematic error is through calibration of the measurement instrument.
In a statistical context, the term systematic error usually arises where the sizes and directions of possible errors are unknown.
Systematic errors are divided into: (A) Instrumental (B) Environmental (C) Observational
SYSTEMATIC ERROR
Systematic error is a type of error that deviates by a fixed amount from the true value of measurement. As opposed to random
errors, systematic errors are easier to correct. There are many types of systematic errors and a researcher needs to be aware of
these in order to offset their influence.
Systematic error in physical sciences commonly occurs with the measuring instrument having a zero error. A zero error is when
the initial value shown by the measuring instrument is a non-zero value when it should be zero.
For example, a voltmeter might show a reading of 1 volt even when it is disconnected from any electromagnetic influence. This
means the systematic error is 1 volt and all measurements shown by this voltmeter will be a volt higher than the true value.
This type of error can be offset by simply deducting the value of the zero error. In this case, if the voltmeter shows a reading of 53
volts, then the actual value would be 52 volts; here the systematic error is a constant value.
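A minimal sketch of this zero-error correction, using the figures from the example (the function name is mine):

    ZERO_ERROR = 1.0  # volts shown with nothing connected

    def corrected(reading_volts):
        # subtract the constant zero error from every reading
        return reading_volts - ZERO_ERROR

    print(corrected(53.0))  # 52.0 -- the actual value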
Sometimes the measuring instrument itself is faulty, which leads to a systematic error. For example, if your stopwatch shows 100
seconds for an actual time of 99 seconds, everything you measure with this stopwatch will be dilated, and a systematic error is
induced in your measurements. In this case, the systematic error is proportional to the measurement.
In many experiments, there are inherent systematic errors in the experiment itself, which means even if all the instruments were
100% perfect there would still be an error.
For example, in an experiment to calculate acceleration due to gravity using the length and time period of a simple pendulum,
the size of the pendulum bob, the air friction, the slight movement of support, etc. all affect the calculated value. These systematic
errors are inherent to the experiment and need to be accounted for in an approximate manner.
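To make "accounted for in an approximate manner" concrete, here is a sketch (length and error values assumed, not from the text) of how a small systematic error in the measured period propagates into g = 4π²L/T²:

    import math

    L = 1.000  # assumed pendulum length, metres
    T_true = 2 * math.pi * math.sqrt(L / 9.81)  # ideal period for g = 9.81

    def g_from(period, length=L):
        return 4 * math.pi ** 2 * length / period ** 2

    print(g_from(T_true))         # ~9.81 with a perfect period
    print(g_from(T_true * 1.01))  # ~2% low: g varies as 1/T**2, so a 1%
                                  # period error roughly doubles in g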
Many systematic errors cannot be removed by simply taking a large number of readings and averaging them out.
For example, in the case of our faulty voltmeter, even if a hundred readings are taken, they will all be near 53 volts instead of the
actual 52 volts.
Therefore in such cases, calibration of the measuring instrument prior to starting the experiment is required, which will reveal if
there is any systematic error or zero error in the measuring instrument.
Systematic errors can also be produced by faulty human observations or changes in environment during the experiment, which
are difficult to get rid of.

RANDOM ERRORS
Random errors are errors in measurement that lead to measurable values being inconsistent when repeated measures of a
constant attribute or quantity are taken. The word random indicates that they are inherently unpredictable, and have null expected
value, namely, they are scattered about the true value, and tend to have null arithmetic mean when a measurement is repeated
several times with the same instrument. All measurements are prone to random error.
Random error is caused by unpredictable fluctuations in the readings of a measurement apparatus, or in the experimenter's
interpretation of the instrumental reading; these fluctuations may be in part due to interference of the environment with the
measurement process.
The concept of random error is closely related to the concept of precision. The higher the precision of a measurement
instrument, the smaller the variability (standard deviation) of the fluctuations in its readings.
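A sketch of how that is quantified in practice (the readings are assumed): the sample standard deviation of repeated readings is the usual numerical measure of precision.

    import statistics

    readings = [9.78, 9.83, 9.80, 9.79, 9.82, 9.81]  # assumed repeat readings
    print(statistics.mean(readings))   # best estimate of the value
    print(statistics.stdev(readings))  # smaller spread => higher precision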

SYSTEMATIC ERROR V/S RANDOM ERROR


Measurement errors can be split into two components: random error and systematic error. Random error is always present in a
measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement apparatus or in the
experimenter's interpretation of the instrumental reading.
Systematic errors, on the other hand, are predictable, and typically constant or proportional to the true value. If the cause of the systematic
error can be identified, then it can usually be eliminated. Systematic errors are caused by imperfect calibration of measurement
instruments or imperfect methods of observation, or interference of the environment with the measurement process, and always
affect the results of an experiment in a predictable direction. Distance measured by radar will be systematically overestimated if the
slight slowing down of the waves in air is not accounted for. Incorrect zeroing of an instrument leading to a zero error is an example
of systematic error in instrumentation.

GROSS ERROR: Errors that occur when a measurement process is occasionally subject to large inaccuracies.
This class of error mainly covers human mistakes in reading instruments and in recording and calculating results.
The responsibility for such mistakes normally lies with the experimenter, who may grossly misread the scale. For example, he may,
due to oversight, read 31.5 degrees C in place of 35.1 degrees C (the actual reading), or make an error in recording. As long as human
beings are involved, some gross errors will inevitably be committed. Although complete elimination of gross errors is impossible, one
should try to anticipate and correct them.
They can be avoided by two means:
-> Great care should be taken in reading and recording the data.
-> Two, three or more readings should be taken of the quantity under measurement, as sketched below.
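A sketch of why repeated readings help (values assumed, echoing the 31.5 vs 35.1 misreading above): a gross error stands out as an outlier against the other readings and can be rejected rather than averaged in.

    import statistics

    readings = [35.1, 35.0, 31.5, 35.2]  # one gross misreading among repeats
    median = statistics.median(readings)
    # crude threshold; flag anything far from the median for re-measurement
    suspect = [r for r in readings if abs(r - median) > 1.0]
    print(suspect)  # [31.5]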

LIMITING ERROR FORMULA FOR INDICATING INSTRUMENT


Magnitude of limiting error of the instrument: δA = (a/100) × A_fs, where a is the percentage accuracy and A_fs is the full-scale reading.

If A is any reading within the full-scale range, then the percentage limiting error for A is (δA/A) × 100.
Example: A 0 to 200 V voltmeter has a guaranteed accuracy of 1% of full-scale reading. The voltage measured by this instrument is 50 V.
What is the limiting error?
Solution: Magnitude of limiting error of the instrument: δA = 0.01 × 200 = 2 V

Hence, the limiting error for 50 V is δA/A = 2/50 = 0.04 = 4%
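The same calculation as a small sketch (the function and parameter names are mine):

    def limiting_error(percent_accuracy, full_scale, reading):
        delta = (percent_accuracy / 100.0) * full_scale  # absolute error, V
        return delta, 100.0 * delta / reading            # and percent of reading

    delta, pct = limiting_error(1.0, 200.0, 50.0)
    print(delta, pct)  # 2.0 V and 4.0 %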


GAUGE FACTOR OF STRAIN GAUGE:
Gauge factor (GF) or strain factor of a strain gauge is the ratio of the relative change in electrical resistance to the mechanical strain ε, which is
the relative change in length:

    GF = (ΔR/R) / ε = (ΔR/R) / (ΔL/L) = 1 + 2ν + (Δρ/ρ)/ε

In practice, the resistance is also dependent on temperature. The total effect is

    ΔR/R = GF·ε + α·θ

Where
ε = strain = ΔL / L; ΔL = absolute change in length; L = original length; ν = Poisson's ratio;
ρ = resistivity; Δρ = change in resistivity;
ΔR = change in strain gauge resistance; R = unstrained resistance of strain gauge;
α = temperature coefficient of resistance;
θ = temperature change
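A sketch of the total-effect relation above with illustrative numbers (GF, α and R are typical assumed values, not from the text):

    GF = 2.0       # assumed gauge factor, typical for metallic foil gauges
    ALPHA = 0.004  # assumed temperature coefficient, per degree C
    R = 120.0      # assumed unstrained resistance, ohms

    def delta_r(strain, temp_change):
        # delta_R = R * (GF * epsilon + alpha * theta)
        return R * (GF * strain + ALPHA * temp_change)

    print(delta_r(500e-6, 0.0))  # 0.12 ohm from 500 microstrain alone
    print(delta_r(500e-6, 1.0))  # 0.60 ohm: a 1 degree drift swamps the strain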

Precision
The precision of a measuring instrument is determined by the smallest unit to which it can measure. The precision is said to be the
same as the smallest fractional or decimal division on the scale of the measuring instrument.

Accuracy
Accuracy is a measure of how close the result of the measurement comes to the "true", "actual", or "accepted" value. (How close is
your answer to the accepted value?)

Tolerance
Tolerance is the greatest range of variation that can be allowed. (How much error in the answer is occurring or is acceptable?)
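A one-line check of that idea as a sketch (the accepted value and tolerance are assumed):

    ACCEPTED = 9.81   # assumed accepted value
    TOLERANCE = 0.05  # assumed greatest allowed variation

    def within_tolerance(measured):
        return abs(measured - ACCEPTED) <= TOLERANCE

    print(within_tolerance(9.79))  # True
    print(within_tolerance(9.90))  # False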
