
TERM PAPER

COURSE NAME: NUMERICAL ANALYSIS

COURSE CODE:- MTH 204

TOPIC:-ERROR IN NUMERICAL ANALYSIS

SUBMITTED TO:                          SUBMITTED BY:

Reetika Salaria Mam                    Navdeep Dhillon

MTH Department                         ROLL NO: D1802B56

                                       REG. NO: 10801454

                                       B.Tech (CSE)
ACKNOWLEDGEMENT
I would like to express my gratitude to all those who made it possible for me to
complete this term paper. I want to thank Lovely Professional University for
giving me permission to commence this term paper, and to thank Reetika Salaria
Mam, who granted and confirmed this permission and encouraged me to go ahead
with my term paper.

I am deeply indebted to my supervisor Reetika Salaria Mam, whose help,
stimulating suggestions and encouragement helped me to accomplish this term
paper.

I am equally thankful to my parents and friends, who have provided me
financial and moral support for completing this assignment. Last but not least, I
am thankful to the Almighty, who stood by me during this toiling time.

Navdeep Dhillon
TABLE OF CONTENTS

 Error analysis


• Systematic Errors
• Random Errors
 Error analysis in numerical modeling
 Error analysis in language teaching
 Error analysis in molecular dynamics simulation
 Types of Errors in Numerical Analysis

• The Round Off error


• Truncation and discretization error
• The Discretization error
• Numerical Stability errors
• Absolute and relative errors

 General Techniques of Numerical Analysis


 Numerical analysis
 The generation and propagation of errors

• Error propagation
• Error generation

 Numerical stability and well-posed problems


 Conclusion
 Bibliography
Error analysis

Error analysis is the study of the kind and quantity of error that occurs, particularly in the fields
of applied mathematics (especially numerical analysis), applied linguistics and statistics.

Systematic Errors

Systematic errors arise from a flaw in the measurement scheme which is repeated
each time a measurement is made. If you do the same thing wrong each time you make
the measurement, your measurement will differ systematically (that is, in the same
direction each time) from the correct result. Some sources of systematic error are:
 Errors in the calibration of the measuring instruments.
 Incorrect measuring technique: For example, one might make an incorrect scale
reading because of parallax error.
 Bias of the experimenter. The experimenter might consistently read an instrument
incorrectly, or might let knowledge of the expected value of a result influence the
measurements.
It is clear that systematic errors do not average to zero if you average many measurements. If a
systematic error is discovered, a correction can be made to the data for this error. If you measure
a voltage with a meter that later turns out to have a 0.2 V offset, you can correct the originally
determined voltages by this amount and eliminate the error. Although random errors can be
handled more or less routinely, there is no prescribed way to find systematic errors. One must
simply sit down and think about all of the possible sources of error in a given measurement, and
then do small experiments to see if these sources are active. The goal of a good experiment is to
reduce the systematic errors to a value smaller than the random errors. For example, a meter
stick should have been manufactured such that its millimeter markings are positioned to much
better than one millimeter.
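Because a systematic error shifts every measurement in the same direction, a discovered offset can simply be subtracted out. A minimal sketch, using made-up voltage readings and the 0.2 V offset from the example above:

```python
# Hypothetical voltage readings from a meter later found to have a +0.2 V offset.
readings = [1.52, 1.48, 1.51, 1.49, 1.50]

offset = 0.2  # systematic error discovered by calibration, in volts
corrected = [v - offset for v in readings]  # remove the offset from every reading

mean_corrected = sum(corrected) / len(corrected)
print(round(mean_corrected, 3))  # → 1.3
```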

Random Errors

Random errors arise from the fluctuations that are most easily observed by making multiple trials
of a given measurement. For example, if you were to measure the period of a pendulum many
times with a stop watch, you would find that your measurements were not always the same. The
main source of these fluctuations would probably be the difficulty of judging exactly when the
pendulum came to a given point in its motion, and in starting and stopping the stop watch at the
time that you judge. Since you would not get the same value of the period each time that you try
to measure it, your result is obviously uncertain. There are several common sources of such
random uncertainties in the type of experiments that you are likely to perform:
 Uncontrollable fluctuations in initial conditions in the measurements. Such fluctuations
are the main reason why, no matter how skilled the player, no individual can toss a
basketball from the free throw line through the hoop each and every time, guaranteed.
Small variations in launch conditions or air motion cause the trajectory to vary and the
ball misses the hoop.
 Limitations imposed by the precision of your measuring apparatus, and the uncertainty in
interpolating between the smallest divisions. The precision simply means the smallest
amount that can be measured directly. A typical meter stick is subdivided into millimeters
and its precision is thus one millimeter.
 Lack of precise definition of the quantity being measured. The length of a table in the
laboratory is not well defined after it has suffered years of use. You would find different
lengths if you measured at different points on the table. Another possibility is that the
quantity being measured also depends on an uncontrolled variable (the temperature of
the object, for example).
 Sometimes the quantity you measure is well defined but is subject to inherent random
fluctuations. Such fluctuations may be of a quantum nature or arise from the fact that the
values of the quantity being measured are determined by the statistical behavior of a large
number of particles.
No matter what the source of the uncertainty, to be labeled "random" an uncertainty
must have the property that the fluctuations from some "true" value are equally likely to
be positive or negative. This fact gives us a key for understanding what to do about
random errors. You could make a large number of measurements, and average the result.
If the uncertainties are really equally likely to be positive or negative, you would expect
that the average of a large number of measurements would be very near to the correct
value of the quantity measured, since positive and negative fluctuations would tend to
cancel each other.
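The claim that random fluctuations average out can be checked with a short simulation; the "true" period and the ±0.05 s error band below are invented purely for illustration:

```python
import random

random.seed(0)                 # fixed seed so the run is reproducible
true_value = 2.0               # hypothetical "true" pendulum period, in seconds

# Each trial is the true value plus a symmetric random error, equally
# likely to be positive or negative -- the defining property of a
# random uncertainty.
measurements = [true_value + random.uniform(-0.05, 0.05) for _ in range(10000)]

average = sum(measurements) / len(measurements)
print(round(average, 2))       # → 2.0: the fluctuations largely cancel
```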

Error analysis in numerical modeling

In numerical simulation or modeling of real systems, error analysis is concerned with the
changes in the output of the model as the parameters to the model vary about a mean.

For instance, in a system modeled as a function of two variables, z = f(x, y), error analysis deals
with the propagation of the numerical errors in x and y (around mean values x̄ and ȳ) to the error
in z (around a mean z̄).

In numerical analysis, error analysis comprises both forward error analysis and backward error
analysis. Forward error analysis involves the analysis of a function z' = f'(a0, a1, ..., an) which is
an approximation (usually a finite polynomial) to a function z = f(a0, a1, ..., an), to determine the
bounds on the error in the approximation; i.e., to find ε > 0 such that |z − z'| ≤ ε. Backward error
analysis involves the analysis of the approximation function z' = f'(a0, a1, ..., an), to determine
the bounds on the parameters ai* = ai ± εi such that the result z' = f(a0*, a1*, ..., an*).
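The distinction can be sketched numerically. Below, √x is approximated by a single Newton step from a crude starting guess (a deliberately poor approximation, invented only for illustration): the forward error compares the approximate answer with the true one, while the backward error finds the perturbation Δx for which the approximate answer is the exact square root of x + Δx.

```python
import math

def approx_sqrt(x, x0=1.5):
    # One Newton step toward sqrt(x) from the guess x0 -- a crude
    # stand-in for an approximating function f*(x).
    return 0.5 * (x0 + x / x0)

x = 2.0
fx_star = approx_sqrt(x)      # f*(x), the approximate result
fx = math.sqrt(x)             # f(x), the exact result

forward_error = abs(fx_star - fx)     # |f*(x) - f(x)|
backward_error = abs(fx_star**2 - x)  # Δx such that sqrt(x + Δx) = f*(x) exactly

print(forward_error, backward_error)  # both small, but measured differently
```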

Error analysis in language teaching

In language teaching, error analysis studies the types and causes of language errors. Errors are
classified according to:

 modality (i.e., level of proficiency in speaking, writing, reading, listening)


 linguistic levels (i.e., pronunciation, grammar, vocabulary, style)
 form (e.g., omission, insertion, substitution)
 type (systematic errors/errors in competence vs. occasional errors/errors in performance)
 cause (e.g., interference, interlanguage)
 norm vs. system

Error analysis in molecular dynamics simulation

In molecular dynamics (MD) simulations, there are errors due to inadequate sampling of the
phase space or infrequently occurring events; these lead to a statistical error due to random
fluctuations in the measurements.

For a series of M measurements of a fluctuating property A, the mean value is:

⟨A⟩ = (1/M) Σμ=1..M Aμ

When these M measurements are independent, the variance of the mean ⟨A⟩ is:

σ²(⟨A⟩) = (1/M) σ²(A)

but in most MD simulations there is correlation between the quantity A at different times, so the
variance of the mean ⟨A⟩ will be underestimated, as the effective number of independent
measurements is actually less than M. In such situations we rewrite the variance as:

σ²(⟨A⟩) = (σ²(A)/M) [1 + 2 Σμ φμ (1 − μ/M)]

where φμ is the autocorrelation function defined by

φμ = (⟨Aμ A0⟩ − ⟨A⟩²) / (⟨A²⟩ − ⟨A⟩²)

We can then use the autocorrelation function to estimate the error bar. Luckily, we have a much
simpler method based on block averaging.
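Block averaging can be sketched on synthetic data. The series below is an AR(1) process standing in for a correlated MD observable (all parameters invented); the point is only that the naive error bar, which pretends all M samples are independent, comes out smaller than the block-averaged one:

```python
import random

random.seed(1)

# Synthetic correlated series A_t (AR(1) process) as a stand-in
# for an MD observable sampled along a trajectory.
M = 20000
a, noise = 0.9, 0.5
A = [0.0]
for _ in range(M - 1):
    A.append(a * A[-1] + noise * random.gauss(0, 1))

def error_of_mean(data, n_blocks):
    # Split the series into blocks; correlated samples inside a block
    # collapse into one roughly independent block mean.
    size = len(data) // n_blocks
    means = [sum(data[i*size:(i+1)*size]) / size for i in range(n_blocks)]
    grand = sum(means) / n_blocks
    var = sum((m - grand) ** 2 for m in means) / (n_blocks - 1)
    return (var / n_blocks) ** 0.5

naive = error_of_mean(A, len(A))   # every sample treated as independent
blocked = error_of_mean(A, 20)     # 20 long, roughly independent blocks

print(naive < blocked)  # → True: the naive estimate is too optimistic
```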

Types of Errors in Numerical Analysis


The abacus: an early math calculator

In the world of math, the practice of numerical analysis is well known for focusing on algorithms
as they are used to solve issues in continuous math. The practice is familiar territory for
engineers and those who work with physical science, but it is beginning to expand further into
liberal arts areas as well. This can be seen in astronomy, stock portfolio analysis, data analysis and
medicine. Part of the application of numerical analysis involves the use of errors. Specific errors
are sought out and applied to arrive at mathematical conclusions.

The Round Off Error

Round-off errors arise because it is impossible to represent all real numbers exactly on a machine
with finite memory (which is what all practical digital computers are), so rounding is introduced
to adjust for this situation. A round-off error represents the numerical difference between what a
figure actually is and its closest representable value, depending on how the rounding is applied.
For instance, rounding to the nearest whole number means you round up or down to the closest
whole figure; so if your result is 3.31, you would round to 3. Rounding up would be a bit
different: in that approach, if your figure is 3.31, you would round to 4. In terms of numerical
analysis, the round-off error is an attempt to identify what the rounding distance is when it
comes up in algorithms. It is also known as a quantization error.
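A two-line check in any language with binary floating point shows round-off at work; the rounding of 3.31 from the text is included for comparison (Python shown here as one possible sketch):

```python
import math

# 0.1 and 0.2 have no exact binary representation, so even one
# addition carries a round-off error.
a = 0.1 + 0.2
print(a == 0.3)      # → False
print(abs(a - 0.3))  # tiny but nonzero round-off distance

# Rounding to the nearest whole number vs. always rounding up:
x = 3.31
print(round(x), math.ceil(x))  # → 3 4
```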

Truncation and discretization error

A truncation error occurs when approximation is involved in numerical analysis. The error is the
amount by which the approximate value differs from the exact value in a formula or
mathematical result. For example, take the equation 3x³ + 4 = 28, whose exact root is x = 2. If an
iterative method gives a root close to 1.99, the truncation error is 0.01.

Truncation errors are committed when an iterative method is terminated or a mathematical
procedure is approximated, and the approximate solution differs from the exact solution.
Similarly, discretization induces a discretization error because the solution of the discrete
problem does not coincide with the solution of the continuous problem. For instance, in an
iteration to compute the solution of 3x³ + 4 = 28, after 10 or so iterations we might conclude that
the root is roughly 1.99. We would therefore have a truncation error of 0.01.
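The 3x³ + 4 = 28 example can be reproduced with any root-finding iteration; the sketch below uses bisection (one possible choice of method) and stops after 10 steps, and stopping early is precisely what creates the truncation error:

```python
def f(x):
    # Root of 3x^3 + 4 = 28, i.e. 3x^3 - 24 = 0; the exact root is x = 2.
    return 3 * x**3 + 4 - 28

# Bisection on [1, 3], deliberately terminated after 10 iterations.
lo, hi = 1.0, 3.0
for _ in range(10):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

approx = (lo + hi) / 2
truncation_error = abs(approx - 2.0)
print(approx, truncation_error)  # roughly 1.999, error on the order of 0.001
```

Running more iterations shrinks the truncation error, but any finite stopping point leaves some of it behind.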

Once an error is generated, it will generally propagate through the calculation. For instance, we
have already noted that the operation + on a calculator (or a computer) is inexact. It follows that
a calculation of the type a+b+c+d+e is even more inexact.

What does it mean when we say that a truncation error is created when we approximate a
mathematical procedure? We know that to integrate a function exactly requires one to find the
sum of infinitely many trapezoids. But numerically one can find the sum of only finitely many
trapezoids, and hence the approximation of the mathematical procedure. Similarly, to
differentiate a function, the differential element approaches zero, but numerically we can only
choose a finite value for the differential element.
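Both statements can be made concrete in a few lines, integrating sin(x) on [0, π] (exact value 2) with a finite number of trapezoids, and differentiating sin at x = 1 with a finite step instead of a vanishing one:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoid rule with n finite trapezoids.
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

integral = trapezoid(math.sin, 0.0, math.pi, 100)
print(abs(integral - 2.0))   # truncation error; shrinks as n grows

# Finite difference with step h instead of the limit h -> 0.
h = 1e-5
derivative = (math.sin(1.0 + h) - math.sin(1.0)) / h
print(abs(derivative - math.cos(1.0)))  # truncation error of the finite step
```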

The Discretization Error

As a type of truncation error, the discretization error focuses on how much a discrete math
problem is not consistent with a continuous math problem.

Numerical Stability Errors

If an error stays small at one point in an algorithm and does not grow as the calculation
continues, then it is considered a numerically stable error. This happens when the error causes
only a very small variation in the formula result. If the opposite occurs, and the error grows as
the calculation continues, then it is considered numerically unstable.

Absolute And Relative Errors

The absolute error in a measured quantity is the uncertainty in the quantity and has
the same units as the quantity itself. For example if you know a length is 0.428 m ± 0.002
m, the 0.002 m is an absolute error. The relative error (also called the fractional error) is
obtained by dividing the absolute error in the quantity by the quantity itself. The relative
error is usually more significant than the absolute error. For example a 1 mm error in the
diameter of a skate wheel is probably more serious than a 1 mm error in a truck tire. Note
that relative errors are dimensionless. When reporting relative errors it is usual to
multiply the fractional error by 100 and report it as a percentage.
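The relative-error arithmetic above can be sketched directly, using the 0.428 m ± 0.002 m length from the example:

```python
length = 0.428      # measured length, in metres
abs_error = 0.002   # absolute error: same units as the quantity itself

rel_error = abs_error / length   # dimensionless fractional error
percent_error = rel_error * 100  # usual reporting convention

print(round(percent_error, 2))   # → 0.47
```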

General Techniques of Numerical Analysis

The overall goal of the field of numerical analysis is the design and analysis of techniques to
give approximate but accurate solutions to hard problems, the variety of which is suggested by
the following.
 Advanced numerical methods are essential in making numerical weather prediction
feasible.
 Computing the trajectory of a spacecraft requires the accurate numerical solution of a
system of ordinary differential equations.
 Car companies can improve the crash safety of their vehicles by using computer
simulations of car crashes. Such simulations essentially consist of solving partial
differential equations numerically.
 Hedge funds (private investment funds) use tools from all fields of numerical analysis to
calculate the value of stocks and derivatives more precisely than other market
participants.
 Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and
crew assignments and fuel needs. This field is also called operations research.
 Insurance companies use numerical programs for actuarial analysis.

Numerical analysis

Babylonian clay tablet YBC 7289 (c. 1800–1600 BCE) with annotations. The approximation of
the square root of 2 is four sexagesimal figures, which is about six decimal figures:
1 + 24/60 + 51/60² + 10/60³ = 1.41421296... (Image by Bill Casselman.)

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to
general symbolic manipulations) for the problems of continuous mathematics (as distinguished
from discrete mathematics).

One of the earliest mathematical writings is the Babylonian tablet YBC 7289, which gives a
sexagesimal numerical approximation of √2, the length of the diagonal in a unit square. Being
able to compute the sides of a triangle (and hence, being able to compute square roots) is
extremely important, for instance, in carpentry and construction.

Numerical analysis continues this long tradition of practical mathematical calculations. Much
like the Babylonian approximation of √2, modern numerical analysis does not seek exact
answers, because exact answers are often impossible to obtain in practice. Instead, much of
numerical analysis is concerned with obtaining approximate solutions while maintaining
reasonable bounds on errors.

Numerical analysis naturally finds applications in all fields of engineering and the physical
sciences, but in the 21st century, the life sciences and even the arts have adopted elements of
scientific computations. Ordinary differential equations appear in the movement of heavenly
bodies (planets, stars and galaxies); optimization occurs in portfolio management; numerical
linear algebra is important for data analysis; stochastic differential equations and Markov chains
are essential in simulating living cells for medicine and biology.

Before the advent of modern computers, numerical methods often depended on hand interpolation
in large printed tables. Since the mid-20th century, computers calculate the required functions
instead. The interpolation algorithms nevertheless may be used as part of the software for solving
differential equations.

The generation and propagation of errors

Error propagation:- Consider two numbers x* = x + ex and y* = y + ey, where the starred
quantities are approximations with absolute errors ex and ey.

Under the operations of addition or subtraction,

x* ± y* = (x ± y) + (ex ± ey),

so the magnitude of the propagated error is not more than the sum of the initial absolute
errors; of course, it may be zero.

Under the operation of multiplication,

x* y* = xy(1 + rx)(1 + ry) ≈ xy(1 + rx + ry),

where rx and ry denote the relative errors; the maximum relative error propagated is
approximately the sum of the initial relative errors. The same result is obtained when the
operation is division.

Error generation

Often (for example, in a computer) an operation × is approximated by an operation ×*, say.
Consequently, x × y is represented by x* ×* y*. Indeed, one has

x* ×* y* − x × y = (x* ×* y* − x* × y*) + (x* × y* − x × y),

that is, a generated part plus a propagated part, so that the accumulated error does not exceed
the sum of the propagated and generated errors.

Error propagation and generation

It has been noted that a number is to be represented by a finite number of digits, and hence often
by an approximation. It is to be expected that the result of any arithmetic procedure (any
algorithm) involving a set of numbers will have an implicit error relating to the error of the
original numbers. One says that the initial errors propagate through the computation. In addition,
errors may be generated at each step in an algorithm, and one speaks of the total cumulative
error at any step as the accumulated error.
Since one aims to produce results within some chosen limit of error, it is useful to consider error
propagation. Roughly speaking, based on experience, the propagated error depends on the
mathematical algorithm chosen, whereas the generated error is more sensitive to the actual
ordering of the computational steps. It is possible to be more precise, as described below.
Propagation of errors

Once you have some experimental measurements, you usually combine them according to some
formula to arrive at a desired quantity. To find the estimated error (uncertainty) for a calculated
result one must know how to combine the errors in the input quantities. The simplest procedure
would be to add the errors. This would be a conservative assumption, but it overestimates the
uncertainty in the result. Clearly, if the errors in the inputs are random, they will cancel each
other at least some of the time. For example, if two or more numbers are to be added, then the
absolute error in the result is the square root of the sum of the squares of the absolute errors of
the inputs; i.e., if z = x + y then Δz = [(Δx)² + (Δy)²]^(1/2). In this and the following expressions,
Δx and Δy are the absolute random errors in x and y, and Δz is the propagated uncertainty in z.
The formulas do not apply to systematic errors.
This is discussed in detail in many texts on the theory of errors and the analysis of experimental
data. The study of errors forms an important part of numerical analysis. There are several ways
in which error can be introduced in the solution of a problem.
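The quadrature rule just stated is easy to sketch (the Δx and Δy values here are made up for illustration):

```python
import math

def add_in_quadrature(*errors):
    # For z = x + y + ... with independent random errors,
    # Δz = sqrt(Δx² + Δy² + ...): random errors partially cancel,
    # so this is smaller than the plain sum of the errors.
    return math.sqrt(sum(e * e for e in errors))

dx, dy = 0.3, 0.4                 # hypothetical absolute errors in x and y
dz = add_in_quadrature(dx, dy)
print(dz)                         # 0.5 (up to round-off), vs. the naive sum 0.7
```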

Numerical stability and well-posed problems

Numerical stability is an important notion in numerical analysis. An algorithm is called
numerically stable if an error, whatever its cause, does not grow to be much larger during the
calculation. This happens if the problem is well-conditioned, meaning that the solution changes
by only a small amount if the problem data are changed by a small amount. To the contrary, if a
problem is ill-conditioned, then any small error in the data will grow to be a large error.

Both the original problem and the algorithm used to solve that problem can be well-conditioned
and/or ill-conditioned, and any combination is possible.

So an algorithm that solves a well-conditioned problem may be either numerically stable or
numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a
well-posed mathematical problem. For instance, computing the square root of 2 (which is roughly
1.41421) is a well-posed problem. Many algorithms solve this problem by starting with an initial
approximation x1 to √2, for instance x1 = 1.4, and then computing improved guesses x2, x3, etc.
One such method is the famous Babylonian method, which is given by xk+1 = xk/2 + 1/xk. Another
iteration, which we will call Method X, is given by xk+1 = (xk² − 2)² + xk. We have calculated a
few iterations of each scheme in table form below, with initial guesses x1 = 1.4 and x1 = 1.42.

Babylonian (x1 = 1.4)     Babylonian (x1 = 1.42)       Method X (x1 = 1.4)       Method X (x1 = 1.42)

x2 = 1.4142857...         x2 = 1.41422535...           x2 = 1.4016               x2 = 1.42026896

x3 = 1.414213564...       x3 = 1.41421356242...        x3 = 1.4028614...         x3 = 1.42056...

...                       ...                          ...                       ...

                                                       x1000000 = 1.41421...     x28 = 7280.2284...

Observe that the Babylonian method converges quickly regardless of the initial guess, whereas
Method X converges extremely slowly with initial guess 1.4 and diverges for initial guess 1.42.
Hence, the Babylonian method is numerically stable, while Method X is numerically unstable.
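The two iterations from the table are trivial to reproduce. The sketch below runs the Babylonian method for five steps and Method X for thirty; as the table's x28 already hints, Method X blows up from the innocuous-looking guess 1.42:

```python
import math

def babylonian(x, n):
    # x_{k+1} = x_k/2 + 1/x_k: numerically stable, converges to sqrt(2).
    for _ in range(n):
        x = x / 2 + 1 / x
    return x

def method_x(x, n):
    # x_{k+1} = (x_k^2 - 2)^2 + x_k: sqrt(2) is a fixed point,
    # but the iteration is numerically unstable.
    for _ in range(n):
        d = x * x - 2
        x = d * d + x
    return x

print(abs(babylonian(1.4, 5) - math.sqrt(2)))   # essentially zero
print(abs(babylonian(1.42, 5) - math.sqrt(2)))  # essentially zero
print(method_x(1.42, 30))                       # astronomically large: diverged
```

With the starting guess 1.4 instead, Method X creeps toward √2 so slowly that on the order of a million iterations are needed, matching the table.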

Numerical stability is also affected by the number of significant digits the machine keeps. If we
use a machine that keeps only the first four floating-point digits, a good example of loss of
significance is given by the two equivalent functions

f(x) = x(√(x+1) − √x)  and  g(x) = x / (√(x+1) + √x).

If we compare the results of

f(500) = 500(√501 − √500) = 500(22.38 − 22.36) = 500(0.02) = 10

and

g(500) = 500 / (√501 + √500) = 500 / (22.38 + 22.36) = 500 / 44.74 = 11.18,

we realize that loss of significance, which is also called subtractive cancellation, has a huge
effect on the results, even though both functions are equivalent; to show that they are equivalent,
we simply start with f(x), multiply and divide by (√(x+1) + √x), and arrive at g(x).
The true value of the result is 11.174755..., which is 11.1748 after rounding to 4 decimal digits;
g(500) comes very close to this, while f(500) has lost almost all of its significant digits.

Now imagine that you use tens of terms like these in your program; your error will grow as you
proceed, unless you use the suitable form of the two functions each time you evaluate either f(x)
or g(x), the choice depending on the value of x.

The example is taken from Mathews and Fink, Numerical Methods Using MATLAB, 3rd ed.
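The cancellation is easy to reproduce by simulating a machine that keeps only four significant digits. The helper round_sig below is an invented stand-in for such a machine; f and g are the two equivalent forms from the example above:

```python
import math

def round_sig(v, digits=4):
    # Simulate a machine that keeps only `digits` significant figures.
    if v == 0:
        return 0.0
    return round(v, digits - 1 - int(math.floor(math.log10(abs(v)))))

x = 500.0
true_value = 11.174755

# f(x) = x*(sqrt(x+1) - sqrt(x)): subtracts two nearly equal numbers.
f = round_sig(x * (round_sig(math.sqrt(x + 1)) - round_sig(math.sqrt(x))))
# g(x) = x/(sqrt(x+1) + sqrt(x)): algebraically identical, no subtraction.
g = round_sig(x / (round_sig(math.sqrt(x + 1)) + round_sig(math.sqrt(x))))

print(f, g)  # → 10.0 11.18: f has lost almost all significance, g has not
```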

Conclusion

Math errors, despite what their name suggests, come in useful in statistics, computer
programming, advanced mathematics and much more. The evaluation of errors provides
significantly useful information, especially when probability is required.

Once you have some experimental measurements, you usually combine them
according to some formula to arrive at a desired quantity. To find the estimated error
(uncertainty) for a calculated result, one must know how to combine the errors in the input
quantities. The simplest procedure would be to add the errors, but this conservative
assumption overestimates the uncertainty in the result: if the errors in the measured
quantities are random and independent, they will cancel each other at least some of the
time, and the combined uncertainty is smaller than the plain sum of the individual errors.
BIBLIOGRAPHY:-

 http://en.wikipedia.org/wiki/List_of_numerical_analysis_topics#Error
 http://www.ehow.com/list_6022901_types-errors-numerical-analysis.html
 http://kr.cs.ait.ac.th/~radok/math/mat7/step3.htm
 www.google.com
 www.wikepedia.com
 www.soople.com
