
Lab Manual I

Wagih Ghobriel



Preface

Excerpts from the Code of Behavior on Academic Matters

1. Experimentation

1.1 The scientific method and the laboratory

1.2 Experiments and theories
1.3 Main features of an experiment

2. Health and Safety in the Laboratory

2.1 Introduction
2.2 Lasers
2.3 Electricity
2.4 Ionizing radiation
2.5 Pressurized helium
2.6 Mercury lamps
2.7 Lead objects
2.8 Mercury spills
2.9 Equipment safety and care

3. Record Keeping and Report Writing

3.1 Laboratory record book

3.2 Formal laboratory report
3.3 Report format

4. Types of Error

4.1 Introduction
4.2 Personal errors
4.3 Random errors
4.4 Systematic errors
4.5 Accuracy and precision
4.6 Uncertainty and significant figures
4.7 Uncertainty in directly measured quantities

5. Random Errors

5.1 Basic definitions

5.2 The standard deviation
5.3 The standard error
5.4 Sample problems
5.5 The normal distribution
5.6 Poisson distribution
5.7 Rejection of measurements

6. Propagation of Errors

6.1 What is propagated error?

6.2 Propagation of probable errors
6.3 Special cases and corollaries
6.4 Solved problems

7. Graphical Analysis and Linear Regression

7.1 Introduction
7.2 Graphical representation of data
7.3 Error bars
7.4 Histograms
7.5 Semi-log and log-log plots
7.6 Modeling
7.7 Regression
7.8 Line of best fit
7.9 The method of least squares


A typical undergraduate physics course is presented in three parts:

• Lectures, which give an introduction to physics in its purest form as one studies the
behavior of idealized models of reality.

• Tutorials, where the physical models are further investigated by assigning numbers to the
parameters and solving problems based on simplified versions of reality.

• Laboratory sessions, where one performs experiments to investigate the predictions of

some of the physical models.

In fact, the extensive work in the lecture and tutorials may leave the experimenter
quite unprepared for what he/she encounters in the laboratory! In our everyday lives, we are
often faced with the difference between “theory” and “reality”: we all make plans based on
what we expect to happen, and we all know that not everything goes as planned.
Unfortunately, the physical sciences are not immune to this experience. Even the most
logically sound or mathematically elegant theory may fall in the face of a single
experimental fact that contradicts its predictions.

Therefore, one performs experiments, in general, to obtain realistic information about

a certain phenomenon, the final goal being to describe and understand it. This capability
stems from doing the experiment and not just hearing, seeing, or reading about it.

Specifically, the objectives of the introductory physics laboratory are to help you

• become familiar with the laboratory apparatus and equipment,

• become acquainted with different experimental procedures and analysis,
• gain the necessary experimental skills,
• experience the scientific method and its powerful capabilities and outcomes.

These introductory notes present the basic elements of experimental physics.


Excerpts From the Code of Behavior

On Academic Matters

B. Offenses
The university and its members have a responsibility to ensure that a climate
which might encourage, or conditions which might enable, cheating, misrepresentation
or unfairness not be tolerated. To this end all must acknowledge that seeking credit or
other advantages by fraud or misrepresentation or seeking to disadvantage others by
disruptive behavior is unacceptable…

Whenever in this Code an offense is described as depending on “knowing”, the offense shall
likewise be deemed to have been committed if the person ought reasonably to have known…

B.I It shall be an offense for a student knowingly:

(d) to represent as one’s own any idea or expression of an idea or work of
another on any academic examination or term test or in connection with
any other form of academic work i.e. to commit plagiarism…

(e) to submit without the knowledge and approval of the instructor to whom
it is submitted, any academic work for which credit has previously been
obtained or is being sought in another course or program of study in the
University or elsewhere;

(f) to submit any academic work containing a purported statement of fact or

reference to a source which has been concocted.

B.II Parties to offenses

(a) Every member is a party to an offense under this Code who knowingly:

(i) actually commits it;

(ii) does or omits to do anything for the purpose of aiding or

assisting another member to commit the offense;

(iii) abets, counsels, procures, or conspires with another member to

commit or be a party to an offense.


1.1 The scientific method and the laboratory

Over the ages, the development and success of science have led to a well-established
“method”. This method has proved effective in obtaining, analyzing and exploiting
knowledge: the “scientific method”. Its power emanates from its characteristic sequence of
logical steps: recognize a phenomenon, search for the proper answer, study the inputs and
outputs of the given proposals and models, perform experiments to test and investigate the
various possibilities, and establish the appropriate theories.

Any idea, therefore, must be tested in the laboratory before it can be regarded as correct
or incorrect. On the other hand, claims that cannot be tested are classified as
“unscientific”. The scientific method simply states that the results and predictions of any
theory, to be rationally defensible and acceptable, must be tested against, and agree with,
the results of planned experiments. On this solid basis, one can differentiate between
information, events or occurrences that are objectively real and other erroneous perceptions
or concepts of reality. In a few words, through the golden series of links comprising the
scientific methodology, one can define “facts”.

The laboratory is the place where the scientific ideas are investigated as a basis for
evaluation or judgment. Since physics is the most fundamental of all the sciences, most of the
principles discussed in the following sections can provide a useful basis and support to
experimenting in many other areas of science. Further, the physics laboratory turns out,
indeed, to be the ideal and proper place to perform experimental measurements and,
consequently, to adopt the “scientific method”:

Purpose → Hypothesis → Materials & Apparatus → Procedure & Method → Observations → Conclusion

1.2 Experiments and theories

Fortunately, there are many variables in the physical world around us that can be
explored through experiments. An experiment is a set of instruments and/or procedures used to
discover, investigate or test some fact or phenomenon. Experimentation is the act and
procedures of experimenting. The experimental method (or approach) is just a part of the
greater “scenario” of the “scientific method”.

Often, in the early stages of one’s “scientific” career, the theory behind the
experiment is well established and the experimenter is required to investigate the theoretical

results through a systematic experimental procedure. In the advanced stages, one may very
well carry out experiments which initiate modeling and theoretical development.

Experiments may be used, to a certain extent, to test theories, and theories may in turn be
created to explain related experiments. A theory may be replaced by another that proves to
agree quantitatively with experiment over a wider range of variables. In fact, the relation
between the observation of a certain complex physical
phenomenon, the theories trying to order and simplify this observation, and the experiments
conducted to create or test these theories may be illustrated as shown in Fig. (1.1).




Fig. (1.1) The elements of a scientific original work based on the “scientific method”

There is no single procedure for scientific development and no single “scientific
method”. Sometimes the “idea” takes priority over the “observation”; typical examples are
the wave model of matter and the “invention” of the concept of the neutrino. However,
experiments remain invariably the indispensable cornerstone through which one can compare
the models with real phenomena and observations.

Although the scientific method is a way to check ideas and techniques, it is not by itself a
recipe for inventing new theories, hypotheses and methods. In fact, some discoveries have
been based on a haphazard trial-and-error approach, on accidents, or on experimentation
without systematic procedures. However, it is beyond argument that most of the breakthroughs and
great discoveries need devoted great minds with inventive skills. Further, the role played by
imagination, inspiration, ingenuity and luck cannot be denied in directing those gifted
scientists towards creating original achievements.

1.3 Main features of an experiment

In the physics laboratory, you will consider three aspects of an experiment:
measurements, uncertainty, and modeling. Measurement is the quantification of
observation. Measurements are always made relative to, and in comparison with, other quantities.
Practically, all measurements are subject to varying degrees of uncertainty so that an
experimental value is usually reported with the accompanying error or degree of uncertainty.
Modeling is an attempt to describe the relationship between different measured quantities. A
model can then be used to calculate physical constants, or make predictions about quantities
not yet measured.

Health and Safety in The Laboratory

2.1 Introduction
In any work situation, health and safety should be of prime concern to everyone. Your
safety and the safety of your co-workers are paramount.

Although all the experiments in the lab are designed with safety in mind, there are
situations in which special care must be taken. Most health and safety precautions involve a
lot of common sense based on an understanding of the materials and equipment that you are
working with. The problem is that lasers, radioactive sources, pressurized lamps, etc. may
be outside of the realm of your everyday experience. The following comments and
precautions may be useful with respect to the standard hazards encountered in the
undergraduate physics laboratories.

2.2 Lasers

• Never stare directly into the beam.

• Never direct a beam towards another person.
• Avoid the beam’s reflections from a mirror or any other polished surface.
• You can safely view the laser beam incident on an object such as a normal unpolished
wall or screen.

2.3 Electricity
Most of the lab equipment is set up so that the exposed wires have harmlessly low
voltages. However, if you suspect that any terminals carry dangerously high voltages, be
careful not to touch them. Call your lab technician for assistance.

• One useful precaution in handling potentially hazardous electrical equipment is to work
with one hand in your pocket, so as not to give the electricity a path through your body.
• When one is connecting an electrical meter to a circuit, attention should be given to the
correct polarity: connect positive to positive and negative to negative.
• Check the circuit connections before closing the switch (if any) or connecting to the
power supply. With every modification to the circuit, disconnect the electrical source
first.
• Avoid unnecessary heating in electrical circuits.
• Use the meters initially adjusted to the largest scale, because excessive voltage or current
may cause damage or “pegging” of the instrument. For greater sensitivity, one can move
the scale knob to a lower setting after finding the general magnitude of a measurement.
• Avoid loose connections. Check the connections of the different parts of the circuit
every now and then.

2.4 Ionizing Radiation

The total radiation dose from a gamma ray source is proportional to the source
intensity, the time over which you are exposed, and the inverse square of the distance of the
source from your body. For alpha- and beta-ray sources, the radiation dose drops off with
distance faster than the inverse square. The following suggestions will help you keep your
exposure to a minimum.

• Place the sources you are using as far from yourself and others as is reasonably
achievable and consistent with doing the experiment.
• Minimize the time that the source is near you and others.
• Some radioactive sources are solids and encapsulated to prevent contact. Others may be
in liquid form, so the danger of spillage may arise; certain care in handling these
sources is required. Use gloves to prevent contact with your skin. Mouth pipetting is
strictly prohibited.
• Avoid unnecessary handling or contact with the skin.
• Eating, drinking, and smoking are not permitted in any area where radioactive materials
are being used.
• Protective gloves or forceps should be used in handling the radioactive materials.
• Wash your hands thoroughly after working with the radioactive materials.
• When not in use, keep the radioactive materials in a shielded, labeled container in a place
of limited access.
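The inverse-square dependence of the gamma-ray dose described above can be illustrated with a short numerical sketch. All quantities below (intensity, times, distances) are hypothetical and in arbitrary units, chosen only to show the scaling:

```python
# Relative gamma-ray dose: proportional to the source intensity and the
# exposure time, and to the inverse square of the distance from the source.
# All values are hypothetical and in arbitrary units.
def relative_dose(intensity, time_s, distance_m):
    return intensity * time_s / distance_m ** 2

# Quadrupling the distance (0.5 m -> 2.0 m) for the same exposure time
# cuts the relative dose by a factor of 16.
near = relative_dose(intensity=1.0, time_s=600.0, distance_m=0.5)
far = relative_dose(intensity=1.0, time_s=600.0, distance_m=2.0)
print(near / far)  # 16.0
```

This is why the two practical controls in the list above are distance and time: both enter the dose linearly or better, and distance enters quadratically.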

2.5 Pressurized helium

One of the lab experiments uses a pressurized helium tube. Although your lab
demonstrator will handle the dispensing of the helium, you may find these tips useful in
other situations:
• Always note that the helium gas is under pressure and inhalation can cause rapid
suffocation. Avoid breathing the gas.
• Keep oil and grease away. The container should be kept far from heat sources.
• Do not drop the container. Store and use it with adequate ventilation.
• Open and close the regulating valve slowly.
• Use equipment rated for the cylinder pressure.
• Close the valve after each use and when the cylinder is empty.
• Avoid contact with eyes, skin or clothing.
• If the gas is inhaled, remove the victim to fresh air and get medical attention immediately.

2.6 Mercury Lamps

Sometimes, these lamps are used in the physics laboratory, especially in optics
experiments. They may emit ultraviolet radiation, which can be harmful to the eyes. Some
lamps, therefore, may need shielding.

2.7 Lead Objects

Lead is commonly used as shielding for radiation sources. It can be absorbed into
your body through your skin or your mouth and is unhealthy if ingested in large amounts. It is
worthwhile minimizing your exposure.
• Wear gloves when handling lead objects.
• Wash your hands after completion of the experiment.

2.8 Mercury Spills

Mercury vapor, when inhaled, can be harmful because it accumulates in the brain. In the
event of a spill, you should try to minimize your exposure.
• Agree that one laboratory partner remains near the spill to prevent the spread of the
mercury by an unknowing passer-by.
• Immediately notify the lab technician, who will clean up the spilled mercury.

2.9 Equipment Safety and Care

Some of the equipment in the physics laboratory is expensive and/or delicate.
Improper use may lead to damage of certain parts. Diligently follow any instructions
provided and do not handle equipment carelessly, because you may hurt yourself and/or
damage the equipment.

Record Keeping and Report Writing

3.1 Laboratory Record Book

Perhaps the most important skill necessary to the experimentalist is the ability to keep
a clear and accurate record of work done in the lab. Every parameter pertinent to the
experiment should be recorded as well as every measurement made, whether good or bad. In
fact, everything you do with respect to each lab should appear in a lab notebook.

You must record your lab data in a bound notebook

Loose sheets of data are a recipe for disaster! A suitable notebook (preferably containing
graph paper) can usually be found in the bookstore.

The following steps will ensure that your lab record is complete:

• Read the experiment write-up several times and make sure that you understand the procedure.
• Start a fresh page in your lab book. Be sure to title and date the experiment.
• If necessary, look up any fundamental constants or the values of any parameters you will
require for your calculations.
• Perform the experiment, recording every measurement you make and any relevant
observations. You may also want to record any identifying characteristics of the
equipment that you are using for the measurement (serial number, defining marks, etc.) in
case you need to repeat or review a measurement at a later date.
• Summarize your data into charts, tables, and graphs, and perform all required calculations.

Your lab book will be marked for completeness at the conclusion of each experiment.
Note that it is your responsibility to make sure that your demonstrator sees your lab book
before the end of the period.

3.2 Formal Laboratory Report

During the course of the year you will be expected to submit formal laboratory
reports. The purpose of this requirement is to train you to write a scientific report.

It must be emphasized that a formal report is a concise summary of an experiment

that has been performed. The details of the experiment, calculations, analysis and comments
will of course be in your laboratory record book.

In a formal report the single most important thing is to write from the point of view of
the reader. Assume that your reader(s) will have at least the same level of scientific
competence and experience that you do. The reader should understand from your report what
you have done, why you have done it, and what you have concluded. On the other hand, the
reader is not interested in going through the details of how you multiplied, divided etc. You
will have to use your judgment to determine what to include and what to exclude. For
example, most common measuring devices do not have to be described; a stopwatch is a
familiar tool. However, if you use some ingenious or novel method or tool in your
experiment, this should be explained in sufficient detail so that the reader can understand
what you did.

It is customary to write a report in the third person and in the past tense (third
person, past tense, passive voice). Avoid verbosity; contrary to popular student belief, reports are
not marked on the basis of length. In fact, an unnecessarily long report demonstrates an
inability to be scientifically discriminating and may therefore earn a lower mark than a
shorter report containing the same material.

A formal report should be computer-generated using word-processing and spreadsheet
software such as Word and Excel.

Your report should be clear, concise, and complete

3.3 Report Format

Over the years, a general format has been developed for scientific reports and you will be
expected to adhere quite closely to it.

Title Page

…is the first page of your report. It should include the title of the experiment, the lab
manual designation of the experiment, the date the experiment is performed, the date the
report is submitted, your name and your partner’s name, your student number and your lab
section number.


Abstract

…is the second page of your report. It should include the purpose of the experiment, and
the results with uncertainty. The concept of an abstract is one which most students have
difficulty grasping at first. Perhaps the best way to understand what a scientific abstract
should contain is to look first at a nonscientific abstract. The following then is taken from the
“Inside Today’s Star” section on the front page of The Toronto Star (October 27, 1980),
under the title, “They Loved Elfi’s Win”:

“Darling of 10,000 spectators, Toronto’s Elfi Schlegel, 16, won Canada its first medal in the
vault event at the World Cup ’80 Gymnastics meet at Maple Leaf Gardens last night.”

As you can see, a wealth of information has been included in this one simple statement.
This is the ultimate aim of the scientific abstract: to state, in at most one paragraph, what
was accomplished during the experiment.


Introduction

…is a brief description of the physical principle being tested and how the testing is being
done. It will likely include any equations (numbered) that will be used in the analysis.
(Detailed derivations should not appear in the introduction. If a derivation is necessary, it can
be shown in an appendix. ) It may include historical information and/or a summary of
previous work. Because you have limited time to write your report (and the demonstrator has
limited time to read it!) we suggest that your introduction not exceed one page.


Procedure

There is no point in rewriting the procedure from the manual. For these reports, “refer
to manual” is acceptable. However, if the procedure you followed in the performance of the
experiment differs in any special way from that outlined in the manual, make note of it here.

Data and Analysis

This section should include all of your data in condensed form, normally a graph
(or graphs). Calculated results should be given in their final form and should include their
respective uncertainties. The most significant criticism of this section of a student report is
that it is unreadable – there is a lot of data, but no narrative. Lead the reader, in words,
through the data and the accompanying arguments and calculations. Extensive tables of data,
uncertainty calculations, derivations, etc. will interrupt the flow of the narrative for the
reader. You will need to decide what is essential for the reader to see “up front”, and what
may be relegated to an appendix.

Discussion and Conclusions

Discuss your results fully here. Draw whatever conclusions seem appropriate in light
of your results and make comparisons to relevant theory if available.


Appendices

Use appendices to present material which is relevant to the report but which would
interrupt the flow of the report for a reader. You will need at least one appendix in which you
present your raw data, tabulated with its uncertainty (possibly your computer printout if you
have used a spreadsheet). You should also provide one example of each type of uncertainty
calculation (so that your demonstrator can evaluate your method), and you may need to
derive equations or answer questions asked in the lab manual which do not fit neatly into the
body of the report.

The data sheet from your lab book or notes

Make sure that you include the page initialed by the demonstrator.

Types of Error

4.1 Introduction
The term “error” is used in all sciences to indicate any divergence between a
computed, observed, or measured quantity and the corresponding true, specified, or
theoretical correct value. Unfortunately, the “true” value is often unknown and there is no
simple way to identify or even draw attention to the errors in an experimental procedure.

The process of taking any measurement must always involve some “uncertainty” or
“experimental error”. Since laboratory investigations necessitate taking measurements of
various physical variables, these errors are unavoidable. Note that the term “error” has come
to be synonymous with the term “uncertainty” in modern experimental science. It does not
mean that a mistake has been made in the measurement; it merely indicates a range over
which we are uncertain about the actual value.
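The idea that a measurement names a range rather than a single exact number can be shown in a minimal sketch; the length and uncertainty values below are hypothetical:

```python
# A measured value is reported together with its uncertainty: the result
# indicates a range in which the actual value is believed to lie, not a
# single exact number. The values below are hypothetical.
length = 12.3        # cm, measured value
uncertainty = 0.1    # cm, estimated uncertainty

low, high = length - uncertainty, length + uncertainty
print(f"L = ({length} +/- {uncertainty}) cm, i.e. between {low:.1f} cm and {high:.1f} cm")
```

Quoting the result as L = (12.3 ± 0.1) cm therefore says nothing about a mistake; it states honestly how well the value is known.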

Detecting and determining error in experimental laboratory work requires a

careful and thorough investigation of the entire experimental procedure

Experimental errors can generally be classified into three types: personal, systematic,
and random errors.

4.2 Personal Errors

Personal errors may arise from personal preference, or carelessness in the different
steps of the experimental procedure. Typical examples …

► Personal inclination in favoring some observations over others:

Sometimes, the experimenter may become biased in favor of one observation over the
others. The reason behind this preference may be that this value is published in one of the
references, or, according to the observer’s impression, it is “correct”. In general, when the
observer makes this type of “personal error”, automatically and concurrently a chain of
mistakes ensues:

• The preferred reading is considered to be exact, or at least more significant than the
others, which is scientifically unjustified. As a result, the other observations are not
given the same chance and validity.
• To justify and support the preference of a certain observation, one may make further
mistakes, such as rejecting other important readings, recording the smallest fractional
scale divisions inaccurately, or forcing other readings to comply with the preferred one.

• The observer fails to follow the systematic scientific procedure, which requires that all
observations monitored and recorded under the same conditions be given the same
significance and consideration.

Differentiate between what you get and what you wish to get

► Lack of thoroughness

Taking insufficient care may lead to experimental errors. Although the following
cases involve terms and situations that will be treated in detail later, typical
circumstances where lack of care may occur are:
• Lining up a meter stick in measuring the length of an object.
• Waiting until the balance stabilizes in measuring weights.
• Considering the estimated part (least significant digit) of the smallest division of a non-
digital scale.
• Rounding off significant figures through the process of data analysis.
• Applying the proper mathematical formula in the analysis.

Attention to detail is rewarded with good results

► Reading errors

In reading a scale, one may make some errors, which may be termed as “reading
errors”. They may arise in one of the following situations:

• In measuring the length of an object by a meter stick.

• Positioning the eye with respect to the reading location. For example, the reading taken
with one eye (the other closed) differs from one eye to the other, and the position of the
head also affects the reading when it is inclined left or right for horizontal scales, or
up or down for vertical scales.
• The thickness of a meter stick may affect the correctness of reading the scale.

The apparent change in the relative position of an object when viewed from different
positions is called “ parallax ”. Since reading an analog scale usually involves judging the
position of a pointer or reference mark against a background scale, parallax is often an issue.
A typical example is the change in reading a scale due to change in the position of the eye. It
follows that the “parallax error” is the error associated with reading an instrument which uses
a scale or pointer because the observer’s eye and the point of reading are not in a line
perpendicular to the plane of the scale as shown in Fig.(4.1) and (4.2). A mirror placed
behind the object so that the reflection can be used to gauge the position of the eye is a
common way of reducing parallax. The following advice is given in terms of a measurement
with a meter stick but the principles are generally applicable.

• Line the meter stick against the edge of the object or as near to it as possible.
• Apply the meter stick edge-to-edge against the object. Keep the scale always
parallel to the line to be measured.
• The line of sight should be perpendicular to the scale.
• Do not incline your head to right or left when measuring horizontal scales, or
up or down when measuring vertical scales.

Fig.(4.1) Some sources of reading errors in vertical length measurements using a meter
stick: (a) the reading of a vertical scale may appear different if viewed from above or
below the perpendicular to the scale, so the line of sight should be perpendicular to the
scale; (b) the meter stick should be parallel (not inclined) with respect to the line of
length measurement.

Fig.(4.2) Five viewing positions in reading horizontal scales; all but the one in the
middle give rise to reading errors. The distance from the scale edge to the length being
measured should be as small as possible, the (exaggerated) thickness of the meter stick
needs special consideration in reading the scale, and shifting the observer to the left or
right of the perpendicular to the scale, or inclining the head, causes reading errors.

4.3 Random errors

Random errors are those resulting from unidentified sources that give rise to
uncontrollable and unpredictable perturbations in the experimental setup. The same observer,
performing repeated measurements on the same system, may well find that different results
are obtained each time, even though he/she is confident of each measurement. The random
error is an error that can be predicted “only” on a statistical basis. Some circumstances, which
give rise to random error, are:

• Fluctuating experimental or environmental conditions such as perturbations in electrical

current or voltage, irregularly timed mechanical vibrations caused by other equipment, air
currents, etc.
• Small variations in the initial conditions of each measurement: is the air-track glider
starting at exactly the same position each time? Is the orientation of the free-fall bob
exactly the same each time it is released?
• Making a judgment based on a visual output. e.g. As a screen is moved back and forth,
where does the image formed by a lens seem to be “best” focused? As an output is
observed on an oscilloscope, where does the “maximum” signal occur?
• Sometimes, a personal error turns out to be a random error. For example, if the observer
makes unbiased repeated measurements then the resulting readings may involve random
errors. A typical example is the parallax error in taking a scale reading.

It is sometimes possible to reduce random errors by improving the observer’s skills,

by using specialized equipment, or by cleverly redesigning the experiment. Furthermore,
since random errors are statistical in nature, repeated measurements of the same variable
produce better results because the random fluctuations tend to average out, leaving the
mean much closer to the true value. (This will be discussed thoroughly in Section 5.)
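This statistical averaging can be demonstrated with a short simulation. The "true" value and the noise level below are invented purely for illustration; they do not correspond to any particular lab measurement:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

true_value = 9.81   # hypothetical quantity being measured
noise = 0.05        # standard deviation of the random error

# Each reading scatters randomly around the true value; the mean of many
# repeated readings lands much closer to the true value than a single
# reading is likely to.
readings = [true_value + random.gauss(0.0, noise) for _ in range(1000)]
mean = sum(readings) / len(readings)

print(round(mean, 3))  # close to 9.81
```

A single reading here is typically off by about 0.05, while the mean of 1000 readings is off by only a couple of thousandths; Section 5 quantifies this improvement through the standard error.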

4.4 Systematic errors

A systematic error occurs when a measuring device or an experimental procedure
biases a measurement to produce an inaccurate result. Typical situations…

• An instrument is incorrectly “zeroed”. For example, an ammeter with no current passing

through it, should read zero. But what if it doesn’t? Sometimes, the instrument has a
special screw adjustment, which can be used to reset the zero. Failing that, a correction
must be made to each reading which accounts for the difference between the location of
the true zero and the apparent zero.

• An instrument is incorrectly calibrated. A “calibration” is a determination of the correct

value corresponding to each scale mark reading on a measuring device by comparison
with a standard. A standard is guaranteed by its manufacturer to have a certain value to a
very high precision. (For example, to calibrate a balance one uses a set of standard
masses.) Notice that an instrument may be correctly zeroed but improperly calibrated.

• An experimenter uses a stopwatch or a clock to measure a time interval. There is

inevitably a delay between when the event occurs and when the observer starts/stops the

clock. This delay is known as the “reaction time” of the observer. (It can be measured
and so corrected for if necessary.)

• The experimental setup changes because of an unexpected change in the environment

(temperature, humidity, etc.)

• Sometimes, a personal error turns out to be a systematic error. For example, an

experimenter may consistently consider the lower (or higher) reading of a scale division.
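The zero-offset correction described in the first situation above can be sketched numerically; the offset and the raw readings below are hypothetical:

```python
# An ammeter reads 0.03 A with no current flowing: its apparent zero is
# offset from the true zero. If the instrument cannot be re-zeroed with
# its adjustment screw, subtract the offset from every raw reading.
zero_offset = 0.03                 # A, read with the circuit open (hypothetical)
raw_readings = [0.53, 1.18, 2.07]  # A, hypothetical measurements

corrected = [round(r - zero_offset, 2) for r in raw_readings]
print(corrected)  # [0.5, 1.15, 2.04]
```

Note that the same offset is subtracted from every reading; this is precisely why repeating the measurement does nothing to reduce a systematic error.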

Each experiment is subject to its own set of systematic errors, and there is no easy
way to identify them. In fact, it may only become clear that there was a problem well after the
experiment was performed. This is where careful note taking during an experiment may help.
Suppose that analysis of a particular set of measurements casts suspicion on the readings from
a certain meter. If the instrument can be identified, it is a simple matter to return to the lab
and check its calibration. Get into the habit of recording at least the place and time of the
experiment, anything about the equipment which might identify it, and anything about the
environment which might affect the outcome.

Dealing with systematic errors requires several skills:

Detect · Prevent · Eliminate · Quantify · Reduce

⇒ Note that systematic errors do not change when the experiment is performed repeatedly.
Repetition does not improve the situation at all.
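This contrast can be seen numerically. The sketch below simulates repeated readings from a biased instrument; the “true” value, bias, and noise level are invented for illustration. Averaging many readings suppresses the random scatter, but the mean settles on the biased value, not the true one.

```python
import random

random.seed(1)

true_value = 10.0   # hypothetical "true" value (arbitrary units)
bias = 0.5          # systematic error: the instrument always reads 0.5 high
noise = 0.2         # scale of the random error on each reading

def reading():
    # each measurement = truth + fixed bias + fresh random scatter
    return true_value + bias + random.gauss(0.0, noise)

for n in (10, 10_000):
    mean = sum(reading() for _ in range(n)) / n
    print(f"n = {n:>6}: mean = {mean:.3f}")
# the mean settles near 10.5, not 10.0: averaging removes the random
# scatter but leaves the systematic bias untouched
```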

4.5 Accuracy and precision

In our daily speech, and even in most dictionaries, the two words “accuracy” and
“precision” have very similar connotations and, in most cases, the same meaning.
In experimental analysis, however, there is an important and critical distinction between
them: the accuracy of a measured value is concerned with how close that value is to the
“true” value, while the precision deals with the spread of the results.

For example, suppose that you have made very careful daily measurements of the
temperature of the water in your aquarium over a period of a week, and found an average
value of 20.25 °C with a spread in the daily measurements of about
0.05 °C. You then check your thermometer against a standard and discover that it always
reads 0.5 °C lower than the standard! Because your thermometer is poorly calibrated, your
measurement, although precise, is not accurate. Random errors affect the precision of a
measurement. A systematic error affects the accuracy.

Most measuring equipment comes with a manufacturer’s estimate of its accuracy. It is
important to realize that you cannot reduce this type of uncertainty by repeated measurement:
a given thermometer will always read high or always read low under the same conditions.

4.6 Uncertainty and significant figures

The term uncertainty is used in general to indicate the estimated amount by which an
observed or calculated value departs from the true value. The final outcome of any
measurement should be an interval of confidence inside which the true value lies. For
example, in measuring the length of a book by using a ruler or meter stick, one may be able
to decide with confidence that the value lies between 21.6 cm and 21.8 cm. It is
conventional to report this result in the form 21.7 ± 0.1 cm.

Indicating the uncertainty explicitly is an essential and
indispensable part of representing any experimental result.

The quantity 0.1 in the above example is called the absolute uncertainty. The
relative uncertainty is defined as the absolute uncertainty divided by the measured value. In
the example above, the relative uncertainty is ± 0.1 / 21.7 = ± 0.005. The relative uncertainty
is often quoted as a percentage ( ± 0.5 % in this example ). This percentage indicates the
quality of the reading and is often called the precision of the measurement.
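For the book measurement just given, the arithmetic is a short Python sketch:

```python
# the example from the text: a book length of 21.7 ± 0.1 cm
value, abs_unc = 21.7, 0.1

rel_unc = abs_unc / value     # relative (fractional) uncertainty
pct_unc = 100 * rel_unc       # the same thing quoted as a percentage

print(f"{value} ± {abs_unc} cm  ->  relative: ±{rel_unc:.3f}  (±{pct_unc:.1f} %)")
# -> 21.7 ± 0.1 cm  ->  relative: ±0.005  (±0.5 %)
```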

It is inappropriate to quote a measurement to a higher (or lower) degree of precision
than is implied by its uncertainty. Typical mistakes are:

• 7.1824 ± 0.1 should be reported as 7.2 ± 0.1 (or, at most, 7.18 ± 0.1), because
the uncertainty is in the first decimal place. The following digits are not significant,
so keep the first decimal place (and at most the second).

• 3.2 ± 0.05 should be reported as 3.20 ± 0.05, because the uncertainty is in the
second decimal place. There should be an estimate of that digit.

• 4628.10 ± 18.56 should be reported as 4630 ± 20, or (4.63 ± 0.02)×10³, because
the uncertainty is itself an estimate. It should have one (at most two) significant
figures.
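These rounding rules can be automated. The helper below is an illustrative sketch (not a standard library routine): it rounds the uncertainty to one significant figure and the value to the matching decimal place.

```python
import math

def report(value, unc):
    """Round the uncertainty to one significant figure and the value
    to the same decimal place, per the rules listed above."""
    exp = math.floor(math.log10(abs(unc)))   # decimal place of the leading figure
    decimals = max(0, -exp)
    return f"{round(value, -exp):.{decimals}f} ± {round(unc, -exp):.{decimals}f}"

print(report(7.1824, 0.1))      # -> 7.2 ± 0.1
print(report(3.2, 0.05))        # -> 3.20 ± 0.05
print(report(4628.10, 18.56))   # -> 4630 ± 20
```

(When the rounded uncertainty happens to gain a digit, e.g. 0.096 → 0.10, the sketch quotes two significant figures, which the rule above also allows.)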

4.7 Uncertainty in directly measured quantities

The most basic type of error associated with a directly measured quantity is the
reading error. Suppose that one intends to determine the mass of a cylinder using a balance.
By examining the scale of the balance, one can see that it has a finite number of divisions,
perhaps 0.1 g apart. The pointer however, usually falls somewhere in between these
divisions. This gives rise to the doubtful or estimated figure of the measurement.

In this respect, two situations may arise:


(1) Analog devices:

For illustration, consider the typical scale and devices shown below in Fig.(4.3).

Fig.(4.3) Typical analog scale and devices

The scale shows a reading that is certainly greater than 2.40 units and certainly less
than 2.50 units and the pointer is approximately half way between the markings. A
conservative estimate of the reading would be 2.45 ± 0.05 units. Notice that the stated reading
error (0.05 units) is half the smallest scale division (0.1 units) and is the maximum reading
uncertainty on this piece of equipment. In practice, a skilled observer may be able to reduce
the uncertainty to as low as 1/5 the smallest scale division.

(2) Digital devices:

For illustration, consider the typical digital devices and display shown below in Fig.(4.4).

Fig.(4.4) Typical digital display and devices

A digital clock only indicates which division on the clock’s time scale has most
recently passed. For example, in Fig.(4.4), the display reads 1.23 s. One can only be sure that
the elapsed time is greater than 1.23 s and less than 1.24 s. Thus, to encompass all
possibilities, the measurement should be recorded as 1.235 ± 0.005 s. A similar argument
can be made for any instrument with a digital readout.

5. Random Errors

5.1 Basic definitions

The term universe or population is used to represent an infinite set of measurements
that characterizes the phenomenon under consideration. Therefore, any set of individuals or
objects having some common observable characteristic constitutes a population. A
sample is a subset of data selected from the population. Since the universe is inaccessible to
us (it would take a long time to do an infinite number of measurements!), the information in
the sample is used to make decisions, estimates, predictions, or generalization about the
population as a whole. The uncertainty can then be viewed as a measure of reliability of any
inference drawn from the sample. It is worth mentioning that the terms “population” and
“sample” may refer either to the measured individuals or to the measurements themselves.

In dealing with a population or a sample there are several useful variables which can
be defined. Consider a set of n repeated measurements { x1 , x 2 , ... , x n } .

• The average ( or mean ) value x̄ of a set of n measurements is:

x̄ = ( x1 + x2 + ... + xn ) / n = (1/n) ∑ xi

The mean value gives the “true” or “most probable” value of x . This calculation
assumes that all of the experimental measurements are equally probable with random errors.

Often, one wishes to indicate how large the spread in the observations is around the
mean value. This quantity can be considered a measure of the “dispersion”, spread, or
variability and reflects on the precision of the measurement. The dispersion of experimental
measurements may be expressed through the following definitions.

• The “deviation”, di , from the mean:

di = xi − x̄

The deviation can be positive or negative. The average of the deviations of a set of
measurements is always zero.

• Mean deviation, d̄ :

d̄ = ( |d1| + |d2| + ... + |dn| ) / n = (1/n) ∑ |di|

(The absolute values are needed because, as noted above, the deviations themselves
average to zero.) d̄ is a measure of the dispersion of experimental data about the mean
value and so is a measure of the precision of the measurement. Sometimes, the
experimental results are reported in the form x = x̄ ± d̄ , although the more common
measure of the spread is the standard deviation discussed in the next section.

5.2 The standard deviation

The variance of a population, σ² , is defined as the average value of the square of
the deviations:

σ² = ( d1² + d2² + ... + dn² ) / n = (1/n) ∑ di²

(In this case, squaring eliminates the sign of the deviation.) The positive square root of the
variance, σ , is called the standard deviation ( or sometimes, the root-mean-square
deviation ) of a population:

σ = √[ (1/n) ∑ di² ]

The standard deviation is always positive and has the units of the measured values
xi . The standard deviation is interpreted as a measure of the random uncertainty on each
measurement, xi , based on the entire population. For a finite sample, it can be statistically
shown that a better estimate of the uncertainty is given by

s = √[ ∑ di² / ( n − 1 ) ]

where s is known as the sample standard deviation. Notice that the difference between σ
and s is significant only for small values of n . The symbol s will be used in what follows
for the standard deviation associated with finite sets of actual observations, and σ when
referring to defined distributions or to a universe of observations.

The experimental value, x , of a measurement is commonly reported in the form:

x = x̄ ± s

5.3 The standard error

The standard deviation s and the reading errors are uncertainties associated with each
measurement xi of a certain variable x . The uncertainty on the mean value x̄ is called the
“standard error” or “standard deviation of the mean”, s_x̄ , and is given by:

s_x̄ = s / √n

The experimental value of the measured variable x is usually reported as: x = x̄ ± s_x̄ .

Notice that as the number of measurements increases, the standard error decreases. A
good guide as to how many measurements n of a particular quantity x one should make is

the following: at least enough times to reduce the standard error to the level of the reading
error of the equipment. This statement will become clearer from the examples that follow.

5.4 Sample problems

Example 5.1

Consider the following experiment. You are to determine the diameter of a small
cylinder using a micrometer caliper, which has a reading uncertainty of at most 0.005 mm.
The results of 10 measurements are shown in the following table:

Reading 1 2 3 4 5 6 7 8 9 10
Diameter,mm 16.520 16.545 16.500 16.510 16.505 16.540 16.515 16.490 16.525 16.490

Average = 16.514 using the function AVERAGE

population standard deviation = 0.018 using the function STDEVP
sample standard deviation = 0.019 using the function STDEV
standard error = 0.006

Now, look carefully at the numbers in the table. Is the reading uncertainty, 0.005 mm, a
reasonable estimate of the uncertainty on the measurement ? Put another way, do most of the
measurements fall within ± 0.005 mm of each other ? The answer is no. The measurements
have a spread, which is significantly larger than can be accounted for by the reading error. In
other words, the measurements indicate that there is some random uncertainty as well. Enter
the standard deviation…

The average ( mean ) diameter x:

x = ( 1/10 ) ( 16.520 + 16.545 + … + 16.490 ) = 16.514 mm

The sample standard deviation s is:

s = √{ [ (16.520 − 16.514)² + (16.545 − 16.514)² + … + (16.490 − 16.514)² ] / ( 10 − 1 ) }
  ≈ 0.019

Notice that we have rounded s to the leading digits only, since it is just an estimate.
The standard deviation and the reading error are both uncertainties on each diameter
measurement, but the reading error is 0.005 mm and the standard deviation is ± 0.019 mm.
Which one is “the” error ? In general, it is the larger value, since that is the one that limits
your precision and best represents the spread in the data.

The uncertainty on the average value of the 10 measurements, the standard error, is

s_x̄ = s / √n = 0.019 / √10 ≈ 0.006 mm

This is of the order of the reading error of the micrometer (0.005 mm). Therefore, the final
result of the measurement is:

x = 16.514 ± 0.006 mm
Almost any modern scientific calculator will perform the statistical calculations for
you: you simply enter the data. ( If your calculator has symbols such as SD, DATA, s, or σ,
then it is almost certainly capable of it. Either check your manual for instructions or bring
your calculator along to the lab and ask one of your demonstrators.) If you intend to use such
a calculator, you should perform a calculation longhand at least once to ensure that you know
how it arrives at the answer. In the Microsoft EXCEL program, the functions STDEVP and
STDEV calculate the population and sample standard deviations, respectively.
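The same numbers can be cross-checked with Python’s standard statistics module, which plays the role of the spreadsheet functions named above:

```python
import statistics as st

# the ten micrometer readings of Example 5.1 (mm)
d = [16.520, 16.545, 16.500, 16.510, 16.505,
     16.540, 16.515, 16.490, 16.525, 16.490]

mean    = st.mean(d)               # AVERAGE
pop_sd  = st.pstdev(d)             # STDEVP  (divide by n)
smp_sd  = st.stdev(d)              # STDEV   (divide by n - 1)
std_err = smp_sd / len(d) ** 0.5   # standard error of the mean, s / sqrt(n)

print(f"mean = {mean:.3f}  pop. s.d. = {pop_sd:.3f}  "
      f"sample s.d. = {smp_sd:.3f}  std. error = {std_err:.3f}")
# mean = 16.514  pop. s.d. = 0.018  sample s.d. = 0.019  std. error = 0.006
```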

Example 5.2

Now let us consider the extreme opposite case. Imagine that you are weighing an
object on a balance. You find that

m = 26.10 ± 0.05 g

The 0.05 g error is the reading error of the balance and is about as good as you can do on
that particular piece of equipment. So, you take the mass off the balance, put it back on,
weigh it again and get exactly the same answer. Your partner gets the same result and your
demonstrator gets the same result too. So you have 4 independent measurements of the mass
of the body, each with the same result. Clearly the standard deviation is zero and the error on
each measurement is simply the reading error.

The real question in this case is how best to handle the uncertainty on the average. We
argued in the section on random error that if a measurement is repeated many times the
uncertainty on the average is better than the uncertainty on each measurement and is
calculated as the standard error. But is that true here ?

s_m = 0.05 / √4 ≈ 0.03   ∴ m = 26.10 ± 0.03 g
Not sure ? Then carry the argument further. Suppose that you did the same
measurement over 10,000 times (!), each time getting the same result. The standard error on
the average is then

s_m = 0.05 / √10000 = 0.0005   ∴ m = 26.1000 ± 0.0005 g
Is it possible to measure a mass to 0.0005 g on a balance, which can only be read to 0.05 g ?

The rules of statistics are only a rough guide. In a situation like this, where it would
seem ludicrous to apply them, don’t be afraid to ignore them and use your “common sense”.
In this example, presenting your results as m = 26.10 ± 0.05 g is probably the reasonable
thing to do, because the same reading error has likely occurred in all cases.

5.5 The normal distribution

In the nineteenth century, Gauss’ graduate students were making astronomical
observations under his direction. When reviewing the results it became apparent to him that
no two measurements of the same quantity were quite the same. Gauss decided that his
students must be mistaken and proceeded to perform the same measurements by himself. To
his dismay (and embarrassment) he found that no two of his results repeated themselves
exactly either! After sufficiently regaining his composure, he decided to perform a large
number of measurements of the same quantity to try to discover some pattern. He then made
a “histogram” of the results and discovered the now famous normal or Gaussian
distribution shown in Fig.(5.1). The central importance of the normal distribution in

statistics and experimental analysis stems from the fact that many actual populations
approximate the normal form closely. Mathematically, the frequency function, f (x) , which
describes the normal distribution is given by :

f(x) = f0 exp[ −(1/2) ( ( x − x̄ ) / σ )² ]
The distribution is symmetric with a mean value x and standard deviation σ .


Fig.(5.1) Three Gaussian distributions with the same mean value
and different standard deviations ( s = 0.5, 1.0, and 1.5 )

From the mathematical form of the normal distribution, it can be shown that 68% of
the area under the Gaussian curve lies between the limits x̄ − σ and x̄ + σ . Thus any
result xi chosen at random has a 68% chance of being within one standard deviation of the
average value x̄ . Further, the chance increases to 95% within ± 2σ and 99.7% within ± 3σ,
as shown in Fig.(5.2). The same argument can be made for the relation between the standard
error s_x̄ and the true or expected value of the sample average.
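These percentages need not be taken on faith: the fraction of a normal population within ± kσ of the mean is erf( k/√2 ), a standard result, and the error function is available in Python’s math module:

```python
import math

# fraction of a normal population lying within k standard deviations of the mean
for k in (1, 2, 3):
    frac = math.erf(k / math.sqrt(2))
    print(f"within ±{k}σ : {100 * frac:.1f} %")
# within ±1σ : 68.3 %
# within ±2σ : 95.4 %
# within ±3σ : 99.7 %
```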

Fig.(5.2) Normal or Gaussian distribution, showing the intervals
x̄ ± σ , x̄ ± 2σ , and x̄ ± 3σ

5.6 Poisson distribution

Random variables that can assume only the integer values 0, 1, 2, … are called
“discrete”. (For example in radioactive decay, the random variable is the number of nuclear
decay events. Since a nucleus has either decayed or it has not, there can either be 0 events or
1 event, but not 0.5 event. ) A probability distribution, which describes such a “counting”
experiment, is the Poisson distribution. The frequency function has the form:
P( x, x̄ ) = ( x̄^x / x! ) e^( −x̄ )
where x is the mean number of events during a given unit of measurement (e.g. time). The
distribution is derived assuming that the probability that an event occurs in a unit of time is
the same for all the units, and that the number of events that occur in one unit of
measurement is independent of the number that occur in other units. Table (5.1) gives the
value of the function for some particular values of x and x .

A typical example of a Poisson distribution is shown in Fig.(5.3). The most probable
value of P( x, x̄ ) occurs at the greatest integer less than x̄ . ( If x̄ is an integer, then there
will be two equal maxima, at x = x̄ and x = x̄ − 1. ) The variance of the distribution is
σ² = x̄ ; the standard deviation is then σ = √x̄ .

Fig.(5.3) Typical example of a Poisson distribution, with x ≡ n and n̄ = 5.
The mean value is n̄ , the standard deviation is √n̄ , and there are two equal
maxima, at n = n̄ and n = n̄ − 1.

Table(5.1) Typical values of P( n, n̄ ) for particular values of n and n̄

n \ n̄   1      2      3      4      5      6      7      8      9      10
0 0.3679 0.1353 0.0498 0.0183 0.0067 0.0025 0.0009 0.0003 0.0001
1 0.3679 0.2707 0.1494 0.0733 0.0337 0.0149 0.0064 0.0027 0.0011 0.0005
2 0.1839 0.2707 0.2240 0.1465 0.0842 0.0446 0.0223 0.0107 0.0050 0.0023
3 0.0613 0.1804 0.2240 0.1954 0.1404 0.0892 0.0521 0.0286 0.0150 0.0076
4 0.0153 0.0902 0.1680 0.1954 0.1755 0.1339 0.0912 0.0573 0.0337 0.0189

5 0.0031 0.0361 0.1008 0.1563 0.1755 0.1606 0.1277 0.0916 0.0607 0.0378
6 0.0005 0.0120 0.0504 0.1042 0.1462 0.1606 0.1490 0.1221 0.0911 0.0631
7 0.0001 0.0034 0.0216 0.0595 0.1044 0.1377 0.1490 0.1396 0.1171 0.0901
8 0.0009 0.0081 0.0298 0.0653 0.1033 0.1304 0.1396 0.1318 0.1126
9 0.0002 0.0027 0.0132 0.0363 0.0688 0.1014 0.1241 0.1318 0.1251

10 0.0008 0.0053 0.0181 0.0413 0.0710 0.0993 0.1186 0.1251

11 0.0002 0.0019 0.0082 0.0225 0.0452 0.0722 0.0970 0.1137
12 0.0001 0.0006 0.0034 0.0113 0.0263 0.0481 0.0728 0.0928
13 0.0002 0.0013 0.0052 0.0142 0.0296 0.0504 0.0729
14 0.0001 0.0005 0.0022 0.0071 0.0169 0.0324 0.0521

15 0.0002 0.0009 0.0033 0.0090 0.0194 0.0347

16 0.0003 0.0014 0.0045 0.0109 0.0217
17 0.0001 0.0006 0.0021 0.0058 0.0128
18 0.0002 0.0009 0.0029 0.0071
19 0.0001 0.0004 0.0014 0.0030

20 0.0002 0.0006 0.0019

21 0.0001 0.0003 0.0009
22 0.0001 0.0004
23 0.0002
24 0.0001

5.7 Rejection of measurements

Often when repeating measurements, or taking a series of measurements, of a
particular quantity, one value appears spurious and we would like to throw it out. For
example, if the error in a particular quantity is taken to be the standard deviation σ , only 68%
of the measurements are expected to be within x ± σ , 95% within x ± 2σ , 99% within
x ± 3σ , and so on. We never expect 100% of the measurements to overlap within any finite
sized error for a truly Gaussian distribution. Of course, in most experiments, even the
assumption of a Gaussian distribution is only an approximation.

Thus, it is always dangerous to throw out a measurement. Maybe the statistics
“conspired” and the measurement is perfectly valid. Even more dangerous, perhaps the
suspect value is indicative of a physical process affecting the whole experiment. Therefore,
one must depend on pure observation, giving every experimental outcome the same chance of
consideration and analysis. Very little new physics would be discovered in the lab if the
experimenter always threw out measurements that did not match his/her preconceived
notions.
In general, there are two different types of experimental data taken in the laboratory
and the question of rejection of measurements is handled in slightly different ways for each:

(1) Series of measurements

This is a set of measurements in which one or more observables are changed for each
data point. A measurement of this type should only be rejected if all of the following criteria
are satisfied:
• The suspect value is more than two or three standard deviations away from the average or
expected value.
• No other value appears spurious.
• A definite physical explanation of why the suspect value is spurious is found.

(2) Repeated measurements

This is a set of measurements performed with all of the observable variables held
fixed. In this case you may be justified in rejecting a measurement, based on the first two
criteria, even if no physical explanation of the spurious result is at hand. (It may be that,
unknown to you, some large and isolated fluctuation in some environmental parameter has
occurred upsetting the single measurement). The prudent course would be to note this value
along with the rest of the data and then take several more measurements. If you are then
convinced that this is an isolated event, you may be justified in omitting it from your analysis.

Example (5.3)

The following is a set of data t i taken for the time interval in ms on the free fall apparatus.
The reading error is 0.05 ms:

465.85 465.85 465.15 465.65 465.95

465.35 465.45 465.85 465.65 465.75
465.85 466.05 465.65 465.75 465.55
466.05 465.75 465.75 465.75 465.75

(1) Calculate the mean value t .

(2) Calculate the best estimate of the universe standard deviation st .
(3) Calculate the standard deviation of the mean s t .
(4) Within which limits does a single reading have a 68 % chance of falling? Which limits
give a 95% chance ?
(5) Within which limits does the mean have a 68 % chance of falling? Which limits give a
95 % chance ?
(6) If a single reading of 467.00 ms is found in the above set, do you decide to accept or
reject it ?
(7) Is the number of measurements adequate ? or more than adequate ?


(1) t̄ = (1/n) ∑ ti = 465.72 ms

(2) st = √[ ∑ ( ti − t̄ )² / ( n − 1 ) ] = 0.22028… ≈ 0.2203 ms

(3) s_t̄ = st / √n = 0.22028… / √20 = 0.04926… ≈ 0.0493 ms

(4) (a) t ±s = ( 465.72 ± 0.2203 ) i.e. from 465.500 to 465.940 ms

(b) t ± 2s = ( 465.72 ± 0.4406 ) i.e. from 465.279 to 466.161 ms

(5) (a) t ± st = ( 465.72 ± 0.0493 ) i.e. from 465.671 to 465.769 ms

(b) t ± 2 st = ( 465.72 ± 0.0986 ) i.e. from 465.621 to 465.819 ms

(6) ⏐467.00 − t̄ ⏐ / st ≈ 5.8   ⇒   Rejection

Because :
• The reading 467.00 ms is more than 5 st away from t̄ .
• No other reading in the table is this far from t̄ .

(7) The number of measurements is adequate because s_t̄ is of the order of the
reading error of the timer (0.05 ms).
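As with Example 5.1, the whole calculation can be reproduced in a few lines of Python using the standard statistics module:

```python
import statistics as st

# the twenty timing readings of Example (5.3), in ms
t = [465.85, 465.85, 465.15, 465.65, 465.95,
     465.35, 465.45, 465.85, 465.65, 465.75,
     465.85, 466.05, 465.65, 465.75, 465.55,
     466.05, 465.75, 465.75, 465.75, 465.75]

mean   = st.mean(t)              # (1) mean value
s_t    = st.stdev(t)             # (2) sample standard deviation
s_mean = s_t / len(t) ** 0.5     # (3) standard error of the mean

print(f"mean = {mean:.2f} ms, s_t = {s_t:.4f} ms, s_mean = {s_mean:.4f} ms")
# mean = 465.72 ms, s_t = 0.2203 ms, s_mean = 0.0493 ms

# (6) how many standard deviations away is a reading of 467.00 ms?
print(round(abs(467.00 - mean) / s_t, 1))   # 5.8  ->  reject
```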

6. Propagation of Errors

6.1 What is propagated error ?

In most cases, one cannot make direct measurements of the variables of interest.
Usually, other quantities are to be measured and then the required one calculated from them.
For example, suppose that the volume of a rectangular block is to be determined. Its length,
width, and height are first measured, and their product calculated to find the volume. Since
there will be a measurement uncertainty on each of the three dimensions, there must also be
an uncertainty in the value calculated for the volume. But what is that uncertainty ? In other
words, how do the uncertainties on the directly measured quantities combine, or propagate,
to contribute to the uncertainty on a calculated quantity?

The propagated error is then the error which

takes place in one operation or stage and spreads
through the succeeding operations or stages .

Before considering the technical details of error propagation, let us reconsider just
what is meant when an experimenter writes down a measurement of a quantity such as
x ± Δx in his/her lab book. There are, in fact, two generally used meanings.

(1) The limit-of-error

In this case, the experimenter guarantees that the actual value of the measured
quantity is greater than x − Δx , and is less than x + Δx . (Of course, this guarantee, like any
other guarantee, is only as good as the person who makes it.) In this usage, Δx is the
maximum possible error.

(2) Probable error

In this case, the experimenter provides the most probable interval in which the true
value lies. What he/she means is that the actual value of the quantity is probably between
x − Δx and x + Δx in a statistical sense. In this usage, Δ x is a standard error.

Consider a simple example. You are measuring the length of a lab table with the only
device you can find, a meter stick. The table is unfortunately longer than one meter, so you
mark off one meter on the table and then measure the difference, 50.0 cm, say. Considering
the definition of the edge of the table, the difficulty in marking the correct spot on the table
before moving the meter stick, etc., you feel that each measurement is accurate to about
0.2 cm. The total length of the table is then:

L = ( 100.0 ± 0.2 cm ) + ( 50.0 ± 0.2 cm ) = 150.0 ± ?


If these errors in the length are the limits of the error, then the experimenter is
guaranteeing that the first length is between 99.8 cm and 100.2 cm, and the second is between
49.8 cm and 50.2 cm. Thus, he can guarantee that the least the total length could be is
99.8 + 49.8 = 149.6 cm, and the most it could be is 100.2 + 50.2 = 150.4 cm. Thus, this
experimenter would report that L = 150.0 ± 0.4 cm.

This type of error, while safe, is conservative, sometimes overly so. Since the
uncertainties can be positive or negative, it is likely that when they combine, the answer is
closer to the actual value than is indicated by the limit of the uncertainty. For example, if one
of the measurements is 0.2 cm too small and the other 0.2 cm too large, the final answer is
“right on”. In the most probable error approach, the simple addition of the uncertainties will
certainly be an overestimate.

The interpretation of the uncertainties determines the rules for propagation. In the lab,
we usually use the interpretation of the uncertainties as probabilistic instead of
deterministic. The rules for the propagation of probable errors are discussed in the next
section.

6.2 Propagation of probable errors

Suppose that one is interested in determining the value of the dependent variable z
by measuring a set of physical quantities x1 , x 2 ,..., x k . Then z can be written as a function of
the independent variables x1 , x 2 ,..., x k :

z = f ( x 1 , x 2 , ... , x i , ... , x k )

Assuming that the measurement of each of the independent variables has resulted in
statistically significant probable errors of sx1 , sx2 ,...,sxk , the uncertainty on the dependent
variable z can be calculated as:

s_z = ± √[ ( ∂f/∂x1 )² s_x1² + ( ∂f/∂x2 )² s_x2² + … + ( ∂f/∂xk )² s_xk² ]

The equation is based on the assumption that the standard errors are independent and
random. The proof of this formula is beyond the scope of these introductory notes.

For example, suppose that the measurable quantities are the length, width, and height
of a rectangular box and the volume is to be calculated. Then V = l w h , and the uncertainty
on V is given by:

s_V = ± √[ ( ∂V/∂l )² s_l² + ( ∂V/∂w )² s_w² + ( ∂V/∂h )² s_h² ]

Because the errors are combined in quadrature (the square root of the sum of squares),
the largest error will tend to dominate. For example,

√( 2² + 1² ) = 2.24 ≈ 2
where the final error is rounded off to the appropriate significant figure. In many cases, one
need only consider the largest of the terms in the sum, which reduces the amount of
computation considerably.
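Under the same independence assumption, the general rule can be applied numerically: the sketch below estimates the partial derivatives by central finite differences instead of differentiating by hand. The box dimensions and uncertainties are invented for illustration.

```python
import math

def propagate(f, xs, ss, step=1e-6):
    """Numerically apply s_z = sqrt( sum_i (df/dx_i)^2 * s_i^2 ),
    estimating each partial derivative by a central finite difference."""
    total = 0.0
    for i, (x, s) in enumerate(zip(xs, ss)):
        hi = list(xs); hi[i] = x + step
        lo = list(xs); lo[i] = x - step
        dfdx = (f(*hi) - f(*lo)) / (2 * step)
        total += (dfdx * s) ** 2
    return math.sqrt(total)

# volume of a rectangular box, V = l·w·h (illustrative numbers, not from the text)
volume = lambda l, w, h: l * w * h
sV = propagate(volume, [10.0, 5.0, 2.0], [0.1, 0.1, 0.1])
print(round(sV, 2))   # 5.48, matching sqrt((wh·0.1)² + (lh·0.1)² + (lw·0.1)²)
```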

6.3 Special cases and corollaries

Let us review some typical applications of the general equation of the probable-error
approach for special mathematical forms. ( You should be able to derive any of these results
from the general form.) In the following expressions, x, y, and z are variables,
s x , s y , and s z , are the associated random errors, and a, b, and c are constants. Any constant
value will have zero fractional error and zero uncertainty.

6.3.1 Sum or difference of two variables

If    z = x ± y

then  s_z = √( s_x² + s_y² )

6.3.2 Product and quotient of two variables
If    z = x · y   or   z = x / y

then  s_z / z = √[ ( s_x / x )² + ( s_y / y )² ]

This rule can be extended to any number of variables. For example, suppose that
z is a product of three variables, z = x · y · w (the same fractional rule holds for any
product or quotient); then

s_z / z = √[ ( s_x / x )² + ( s_y / y )² + ( s_w / w )² ]

Notice that what is calculated here is the fractional uncertainty on z.


Corollary (i)

For a constant multiple :

If    z = a x
then  s_z = a s_x

i.e. if a measured quantity is multiplied by a constant, a , then its uncertainty is also
multiplied by that constant. The fractional error stays the same, i.e. s_z / z = s_x / x .
Corollary (ii)

Any linear combination :

If    z = a x + b y

then  s_z = √( a² s_x² + b² s_y² )

6.3.3 Variables raised to powers

If    z = x^a
then  s_z = a x^(a−1) s_x    or    s_z / z = a ( s_x / x )
Sum of powers :
If    z = x^a + y^b
then  s_z = √[ ( a x^(a−1) )² s_x² + ( b y^(b−1) )² s_y² ]

6.3.4 Powers and products

If    z = c x^a y^b

then  s_z / z = √[ ( a s_x / x )² + ( b s_y / y )² ]

6.3.5 Logarithmic function

If    z = a ln x
then  s_z = a ( s_x / x )

6.3.6 Exponential function

If    z = c e^(a x)
then  s_z / z = a s_x
Any of the above results can be derived from the general expression. It would be
worth your while to convince yourself that this is so, since you may need to use the
more general form to derive uncertainties for functions that you come across in the lab
which are not listed here.
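One such consistency check can be done numerically. The sketch below evaluates the rule of Sec. 6.3.4 and the general rule of Sec. 6.2 (with hand-computed partial derivatives) for one made-up set of numbers; the two agree, as they must.

```python
import math

# z = c · x^a · y^b  with illustrative numbers (not from the text):
c, a, b = 2.0, 3.0, -1.0
x, sx = 2.0, 0.1
y, sy = 4.0, 0.2

z = c * x ** a * y ** b

# special-case rule of Sec. 6.3.4
sz_rule = abs(z) * math.sqrt((a * sx / x) ** 2 + (b * sy / y) ** 2)

# general rule of Sec. 6.2, with analytic partial derivatives
dzdx = c * a * x ** (a - 1) * y ** b
dzdy = c * x ** a * b * y ** (b - 1)
sz_general = math.sqrt((dzdx * sx) ** 2 + (dzdy * sy) ** 2)

print(round(sz_rule, 4), round(sz_general, 4))   # both ≈ 0.6325
```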

6.4 Solved problems

Example (6.1)

The measured diameter, d , of a sphere is d = 4.0 ± 0.4 cm. What is the value of the
radius, r , of the sphere ?

r = d / 2 = 4.0 / 2 = 2.0 cm

s_r = s_d / 2 = 0.4 / 2 = 0.2 cm    by corollary (i) of Sec.(6.3.2)

∴ r = 2.0 ± 0.2 cm

Notice that the fractional uncertainties on the diameter and on the radius are the same.
Similarly, the percent error is the same:

( 0.4 / 4.0 ) × 100 = ( 0.2 / 2.0 ) × 100 = 10 %

Example (6.2)

You are performing an air track experiment. You determine that the track cart travels
a distance of x = 50.0 ± 0.3 cm in a time t = 110.4 ± 0.5 s . What is its average velocity v ?

The average velocity is defined as the distance traveled over the time taken.

v = x / t = 50.0 / 110.4 = 0.4528... cm/s ≈ 0.453 cm/s

s_v / v = √[ ( s_x / x )² + ( s_t / t )² ]    using the formula of Sec.(6.3.2)

        = √[ ( 0.3 / 50.0 )² + ( 0.5 / 110.4 )² ] ≈ 0.008

Notice that this is not the uncertainty on v , but rather the fractional uncertainty on v .
Rearranging, one gets:

s_v = 0.008 × v ≈ ± 0.004 cm/s

∴ v = 0.453 ± 0.004 cm/s

Notice that in the final result v has the same number of decimal places as s v . It is s v which
determines the number of significant figures in v .
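The arithmetic of this example is easy to reproduce in Python; the sketch mirrors the text’s step of rounding the fractional uncertainty to 0.008 before multiplying back by v:

```python
import math

x, sx = 50.0, 0.3     # distance, cm
t, st = 110.4, 0.5    # time, s

v = x / t                                         # average velocity
frac = math.sqrt((sx / x) ** 2 + (st / t) ** 2)   # fractional uncertainty
sv = round(frac, 3) * v                           # the text's rounding step

print(f"v = {v:.3f} cm/s, fractional unc. = {frac:.3f}, s_v = {sv:.3f} cm/s")
# v = 0.453 cm/s, fractional unc. = 0.008, s_v = 0.004 cm/s
```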

Example ( 6.3)

A ball bearing has a measured diameter d of 2.425 ± 0.005 mm. What is its
volume V ?

Solution :

The volume of a sphere is:

V = (4/3) π r³ = (4/3) π ( d/2 )³ = (π/6) d³

d³ = 2.425³ = 14.2605... mm³

s_d³ = 3 d² s_d = 3 × 2.425² × 0.005 ≈ 0.09 mm³

∴ d³ = 14.26 ± 0.09 mm³

∴ V = (π/6) × ( 14.26 ± 0.09 ) = 7.47 ± 0.05 mm³
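A short Python check of the arithmetic (nothing here beyond the numbers given above):

```python
import math

d, sd = 2.425, 0.005      # diameter and its uncertainty, mm

d3  = d ** 3              # 14.2605... mm³
sd3 = 3 * d ** 2 * sd     # power rule of Sec. 6.3.3

V  = math.pi / 6 * d3     # V = (π/6) d³
sV = math.pi / 6 * sd3    # constant multiple, corollary (i)

print(f"d³ = {d3:.2f} ± {sd3:.2f} mm³,  V = {V:.2f} ± {sV:.2f} mm³")
# d³ = 14.26 ± 0.09 mm³,  V = 7.47 ± 0.05 mm³
```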

Example (6.4)

The acceleration due to gravity, g , is measured in an air track experiment, by

measuring the acceleration of a cart, a ± s a , when the track is tilted at some angle θ . The
function relating the variables is:

g = a / sin θ

Derive an expression for the error in g .

Solution :

g is a function of the variables a and θ , i.e.

g = f ( a ,θ )

s_g = √[ ( ∂f/∂a )² s_a² + ( ∂f/∂θ )² s_θ² ]

    = √[ ( 1 / sin θ )² s_a² + ( a cos θ / sin² θ )² s_θ² ]

s_g / g = √[ ( s_a / a )² + ( cos θ / sin θ )² s_θ² ]

where sθ must be in radians.


7. Graphical Analysis and Linear Regression

7.1 Introduction

Section 5 focused on the treatment of measurements performed repeatedly under
identical conditions in order to obtain frequency distributions. In the lab, it often happens that
experiments are performed in which one outcome of the experiment is measured as a function
of an independent variable. In other words, the experimental conditions are changed in a
controlled way during the measurement. The purpose of such a measurement is usually to
determine some kind of functional relationship between the variable and the outcome.

7.2 Graphical representation of data

Graphs are a wonderful means of visualizing experimental results. In many labs, a “plot-as-you-go” approach is recommended so that any obvious trends or anomalies in a data set can be seen immediately. For formal analysis, a number of excellent computer programs are available which provide many alternatives for representing the data graphically. Pressing a single button or clicking on a single choice in any spreadsheet program (e.g. the CHART choice in MICROSOFT EXCEL) makes it possible to browse among many graphical representations of a single data set. It is a powerful and often (but not necessarily!) time-saving facility in data analysis.

The following are some practical suggestions for producing useful graphs:

• The size of the graph is important because it reflects the accuracy of the data. Therefore,
choose convenient scales that are easy to plot and read. You can often make use of more
of the available graph paper by choosing the appropriate orientation (usually described as
“portrait” or “landscape”).

• The graphs must be clear, neat and legible. Both axes should be fully and clearly labeled.
The graph should have a significant and simple title. Any symbol should be identified in,
or beside, the graph. Do not show calculations on a graph.

• The uncertainty at each point on a graph is usually shown by a bar or a cross (as discussed below). If you have many closely spaced data points, or data points with very similar uncertainties, you may want to put representative error bars on just a few points for clarity of presentation. An explanatory caption or note may be given in the report.

• Graphs may be hand-drawn, computer generated, or a combination of both. The limitations of a spreadsheet program should not define how your graph is presented: if the spreadsheet will not produce the graph in the form you want, or it is excessively time-consuming to produce the graph through the spreadsheet, do it by hand!

7.3 Error bars

Measured quantities are always associated with uncertainties. It is useful to be able to
read the uncertainty on a data point directly from a graph. This is the function of an error
bar. Usually, error bars indicate the standard deviation of the measurement represented by
the point on the graph.


Suppose that the force F applied to a spring is 0.25 ± 0.02 N. Assuming that the force is plotted on the ordinate axis as in Fig.(7.1), the actual data point, 0.25, is represented by a dot, and a vertical line through the dot extending from 0.23 to 0.27 represents the uncertainty on the data point.

Fig.(7.1) Error bar

In many experiments, the variables plotted on the ordinate and abscissa both have
uncertainties. Then one may draw horizontal and vertical error bars on the same plot.
Fig.(7.2) shows the possible error bars on a graph.

(a) Horizontal error bars on x uncertainty. (b) Vertical error bars on y uncertainty. (c) Horizontal and vertical error bars on both x and y uncertainties.

Fig.(7.2) Representation of uncertainties with error bars


7.4 Histograms
When a measurement is subject to a random error, and is repeated many times, it is
likely that a range of outcomes will occur. It is often possible to build up a frequency
distribution of the measured outcome over the range of the measurement. A plot of such a
distribution is called a histogram and is a widely used method of reporting data.

Examples of histograms are shown in Fig.(7.3). In each case the histogram is

constructed by breaking the interval of the measurements (in this case zero to xo) into a set of
subintervals on the abscissa (the horizontal axis), and then counting the number of
measurements which occur in each subinterval. The frequency of the measurement (or
sometimes, the relative frequency) is then plotted on the ordinate. Fig.(7.3) shows the effect
of the data set and subinterval size on the shape of the histogram.


(a) Small data set. (b) Larger data set. (c) Very large data set.

Fig.(7.3) The histogram is a frequency distribution of a variable
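The binning procedure just described translates directly into code. The helper below is a sketch (the function name and interface are mine, not the manual's) that counts measurements falling into equal subintervals of [0, x_o]:

```python
def histogram(data, x_max, nbins):
    """Count measurements into nbins equal subintervals of [0, x_max]."""
    width = x_max / nbins
    counts = [0] * nbins
    for x in data:
        i = min(int(x // width), nbins - 1)   # put x = x_max in the last bin
        counts[i] += 1
    return counts

print(histogram([0.1, 0.2, 0.6, 0.9, 1.0], 1.0, 2))   # [2, 3]
```

Plotting `counts` against the subinterval midpoints reproduces the frequency distributions of Fig.(7.3); a finer `nbins` needs a larger data set to give a smooth shape.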

7.5 Semi-log and log-log plots

These types of graphs are most useful in dealing with data which vary exponentially.
A typical example is radioactive decay where the activity of a sample (i.e. the number of
nuclei which disintegrate per unit time) as a function of time can be described by:
A = Ao e^(−λt)
where Ao and λ are constants. It is very difficult to obtain λ from a direct plot of A vs. t
on a linear scale (Fig.(7.4a)). However, taking the (natural) logarithm of both sides produces
a linear graph since
ln A = −λ t + ln Ao
Thus, a plot of ln A vs. t is linear with a slope of −λ, as shown in Fig.(7.4b). Such a plot is known as a semi-log plot (since only one of the axes has a log scale).

There are several ways to produce a semi-log plot. You can evaluate the logarithm of the values (in this case the activity measurements) and directly plot these on the ordinate axis. Alternatively, you can use a special type of graph paper, called (not surprisingly) semi-log paper, whose ordinate axis is scaled logarithmically as in Fig.(7.4b). This has the advantage that the activity can be plotted directly onto the graph with no mathematical manipulation. Most of the available spreadsheet programs also offer a semi-log plot option.

(a) A plot of A vs. t on a linear scale. Obtaining the constant λ is tedious. (b) Semi-log plot: t on a linear scale and A on a logarithmic scale. Slope = −λ.

Fig.(7.4) Linear and semi-log plots of the exponential function
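The linearization behind the semi-log plot is easy to demonstrate with synthetic data. In this sketch (an illustration with assumed values of Ao and λ, not measured data) the slope of ln A vs. t recovers −λ exactly because the data are noise-free:

```python
import math

A0, lam = 100.0, 0.05            # assumed constants for synthetic decay data
ts = [0.0, 5.0, 10.0, 15.0, 20.0]
As = [A0 * math.exp(-lam * t) for t in ts]

# ln A = -lam*t + ln A0, so the slope of ln A vs. t is -lam:
lnA = [math.log(A) for A in As]
slope = (lnA[-1] - lnA[0]) / (ts[-1] - ts[0])

print(round(-slope, 6))   # 0.05
```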

It is common to find that one variable in an experiment has a power law dependence
on another, i.e.
y = a xⁿ
where a and n are constants. Again, it is inconvenient to analyze ( x, y ) data of this type
plotted on a standard linear scale. In this case taking the log of both sides yields:
log y = n log x + log a
which shows a linear dependence between log y and log x . In this case, a log-log plot, in
which both axes are scaled logarithmically, is most useful. The exponent, n, can be extracted
from the slope of the line and the coefficient, a, from the intercept. For hand-plotting, log-log
graph paper is available.
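Similarly, a power law becomes linear on log-log axes. This sketch (again with assumed constants a and n) extracts n from the slope of log y vs. log x and a from the intercept:

```python
import math

a_true, n_true = 2.0, 1.5        # assumed constants for synthetic data
xs = [1.0, 2.0, 4.0, 8.0]
ys = [a_true * x**n_true for x in xs]

# log y = n log x + log a: the slope gives n, the intercept gives log a.
n = ((math.log10(ys[-1]) - math.log10(ys[0]))
     / (math.log10(xs[-1]) - math.log10(xs[0])))
log_a = math.log10(ys[0]) - n * math.log10(xs[0])
a = 10**log_a
```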

7.6 Modeling
Once the experiment is performed, attention turns naturally to modeling. In the
physical sciences a model is usually a mathematical construction which relates the variables
under consideration. Models are important because they are usually more accessible to study
and analysis than the original system or phenomenon. Since it is easy to change any variable
in a mathematical model and study the effect on other variables, the model can be used to
predict or get new information about the original system or related systems that have not yet
been studied.

Several situations may arise when one starts to compare a model with a real system.
(See Fig.(7.5).) The agreement between the original system, represented by the experimental
data, and the adopted model, as represented for example by the straight line form, may vary
over a wide range. In a situation such as that encountered in the experiment of Fig.(7.5f), a prudent experimenter would recheck everything (the experimental setup, errors, calculations, analysis) before judging the validity of the model.

(a) Complete agreement. (b), (c) Agreement over part of the range. (d) Different intercepts but the same slope. (e) The observations are scattered uniformly. (f) Complete disagreement between the model and system. (In each panel, the line represents the straight-line model and the points the system observations.)

Fig.(7.5) Different situations that may arise for the agreement between the original system and the adopted model of a straight line

7.7 Regression
One method of modeling the relationship between variables is regression analysis or
regression modeling. A regression analysis is useful for precisely the kind of experiment
outlined in section (7.1) in which the frequency distribution of one variable is measured as a
function of another. The regression analysis may describe the relationship between two or
more variables. Usually, the value of the dependent variable is studied versus one or more

independent variables. A plot of the expected value of the dependent variable over the range
of the independent variable is called a regression curve (the independent variable is plotted
as the abscissa and the dependent variable as the ordinate). Because many functions can be
reduced to a linear form,
y = m x + b
through either a change of variables or the taking of logarithms, we will consider
specifically the problem of linear regression in the next sections.

7.8 The Line of Best Fit

Consider an experiment in which a dependent variable y is measured as a function
of an independent variable x. It would be reasonable to expect that even if the underlying
physics of the system demanded a linear relationship between the two variables, a plot of the
measured data would not show a perfect straight line: after all, if the quantity y were
measured over and over again for a single value of x, there would likely be a distribution of
values centered around a mean with an associated standard deviation. This would likely give
rise to some scatter in the points around their mean values on a scale characteristic of the
standard deviation of the measurement. It will then be necessary to estimate the slope and
intercept of the straight line through the data which best represents the “true” relationship
between the variables. This procedure is called “fitting”.

Fig.(7.6) The best fitting straight line and two other lines consistent with slightly smaller and slightly larger slopes.

Fig.(7.7) Experimental points and the calculated values showing the corresponding deviations.

The simplest way to fit a line through the data is “by eye”. Using a transparent ruler
(or straight edge), move the defining edge of the ruler vertically until there are an equal
number of points above and below it. Rotate the ruler around the center of these points so that
the deviations on either side are roughly equal. This will be your “best-fit” line. In addition,
you will need to draw two other lines, which have larger and smaller slopes respectively, but

which still reasonably represent the data (Fig.(7.6)). These you will use to estimate the
uncertainties on the slope and intercept.

Since what constitutes the best straight line is clearly a matter of judgment, it is entirely conceivable that, given the same set of data, two experimenters would draw different “best” fit lines through it. It would obviously be better to have some analytic criterion to decide on the best fit. This is provided by the least squares test.

7.9 The Method of Least Squares

Suppose that you have a set of data points (x_i, y_i) and a straight line through the data points described by the slope and intercept parameters m and b, respectively. Then at x_i, the calculated y-value (i.e. the y-value predicted by the straight line) is

y_ci = m x_i + b

A measure of how near the calculated value is to the experimental value is the deviation

d_i = y_i − y_ci = y_i − (m x_i + b)

(See Fig.(7.7)). Clearly, in choosing a best fit line, your goal is to, in some sense, minimize
the deviations, since the smaller d_i is, the closer the line passes to the experimental point.
(If d_i = 0, the line actually goes through the point.) Of course, the individual deviations
cannot all be minimized (unless all of the data points fall exactly on a straight line). What
can be minimized is the sum of the squares of the deviations,

M = ∑ d_i²

(Squaring eliminates the sign of d_i, i.e. it doesn't matter whether the point is above or below the line). In the least squares or linear regression method, the best-fit line is the one which minimizes the sum of the squares of the deviations from the line.

The problem of finding the best-fit line is now reduced to the mathematical problem
of minimizing the function M in the two variables m and b . By means of the differential
calculus, this condition for the minimum can be expressed as

∂M ∂M
=0 and =0
∂m ∂b
In a few steps, one can obtain analytical expressions for the slope and intercept of the
best-fit line:

Slope:

    m = ∑(x_i − x̄)(y_i − ȳ) / (N σ²)

Intercept:

    b = ȳ − m x̄

N is the number of data points, x̄ and ȳ are the average values of x_i and y_i, respectively, and σ is the standard deviation of the x values as defined in Section 5. In addition,
statistical analysis leads to estimates of the uncertainties of the slope and intercept:

Standard deviation of the distribution of deviations on the y-variable:

    s_d = √[ ∑ d_i² / (N − 2) ]

Standard deviation of the slope:

    s_m = s_d √[ N / ( N ∑ x_i² − (∑ x_i)² ) ]

Standard deviation of the intercept:

    s_b = s_d √[ ∑ x_i² / ( N ∑ x_i² − (∑ x_i)² ) ]

The standard deviations s_m and s_b are calculated on the assumption that the uncertainties on the points are all the same. They give intervals of uncertainty with the usual Gaussian implications (e.g. one standard deviation gives a 68% probability of enclosing the “universe value”, and so on).
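The formulas of this section translate directly into code. The following Python sketch (the function name is mine; the formulas are those given above) computes m, b and the three standard deviations:

```python
import math

def least_squares(xs, ys):
    """Best-fit slope/intercept and their uncertainties (formulas of Sec. 7.9)."""
    N = len(xs)
    xbar = sum(xs) / N
    ybar = sum(ys) / N
    Sxx = sum((x - xbar)**2 for x in xs)                  # = N * sigma^2
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / Sxx
    b = ybar - m * xbar
    d2 = sum((y - (m * x + b))**2 for x, y in zip(xs, ys))
    s_d = math.sqrt(d2 / (N - 2))                         # deviation on y
    D = N * sum(x * x for x in xs) - sum(xs)**2           # common denominator
    s_m = s_d * math.sqrt(N / D)                          # uncertainty on slope
    s_b = s_d * math.sqrt(sum(x * x for x in xs) / D)     # uncertainty on intercept
    return m, b, s_d, s_m, s_b

# Points lying exactly on y = 2x + 1 give the slope and intercept with zero scatter:
m, b, s_d, s_m, s_b = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
print(m, b)   # 2.0 1.0
```

With real (scattered) data, s_d, s_m and s_b become nonzero and carry the Gaussian interpretation described above.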