
Taylor's Theorem:
f(x) = Pn(x) + Rn(x), where
Pn(x) = f(x0) + f'(x0)(x - x0) + f''(x0)(x - x0)^2/2! + ... + f^(n)(x0)(x - x0)^n/n!

Truncation Error (remainder):
Rn(x) = f^(n+1)(ξ)(x - x0)^(n+1)/(n+1)!, where ξ lies between x0 and x.

Determine the second-order (n = 2) Taylor polynomial approximation for f(x) = x^(1/4) expanded about x0 = 1. Include the remainder term.
Solution:
f(x0) = 1
f'(x) = (1/4)x^(-3/4), so f'(x0) = 1/4
f''(x) = (-3/16)x^(-7/4), so f''(x0) = -3/16
f'''(x) = (21/64)x^(-11/4)
P2(x) = 1 + (1/4)(x - 1) - (3/32)(x - 1)^2
R2(x) = f'''(ξ)(x - 1)^3/3! = (7/128)ξ^(-11/4)(x - 1)^3, where ξ lies between 1 and x.

Determine a good upper bound for the truncation error of the Taylor polynomial approximation above when 0.95 <= x <= 1.06 by bounding the remainder term. Give 4 significant digits.
Solution:
|f(x) - P2(x)| = (7/128) ξ^(-11/4) |x - 1|^3 <= (7/128)(0.95)^(-11/4)(0.06)^3 ≈ 1.360 × 10^-5
(ξ^(-11/4) is maximized at ξ = 0.95, and |x - 1|^3 at x = 1.06.)

Maclaurin series:
sin(x) = x - x^3/3! + x^5/5! - ...
cos(x) = 1 - x^2/2! + x^4/4! - ...
e^x = 1 + x + x^2/2! + ... + x^n/n! + ...
Newton's Method properties:
Always converges if the initial approximation p0 is sufficiently close to the root p.
Converges quadratically (order α = 2) if p is a simple zero (multiplicity m = 1).
Consider g(h) = [sin(1 + h) - sin(1)]/h, h ≠ 0, where the arguments for sin are in radians. When h is close to 0, evaluation of g(h) is inaccurate in floating-point arithmetic.
In the parts below, use 4-digit, idealized, rounding floating-point arithmetic. If x is a floating-point number, assume that fl(sin(x)) is determined by rounding the exact value of sin(x) to 4 significant digits.

f(x) = sqrt(x) - sqrt(x - 1), where x > 1. The formula is inaccurate for large positive x, where sqrt(x) ≈ sqrt(x - 1); there is no problem when x ≈ 1. How should f(x) be evaluated in floating-point arithmetic to avoid the subtractive cancellation?
Solution (x large and positive: use rationalization):
f(x) = [sqrt(x) - sqrt(x - 1)][sqrt(x) + sqrt(x - 1)]/[sqrt(x) + sqrt(x - 1)] = 1/(sqrt(x) + sqrt(x - 1))


Show that the problem
0.96x + 1.23y = 0.27
4.91x + 6.29y = 1.38
is ill conditioned.
Solution: the perturbed system
0.961x + 1.23y = 0.27
4.89x + 6.29y = 1.38
has the exact solution x = 0.03, y = 0.196, whereas the given system has the exact solution x = -1, y = 1. A tiny perturbation of the data changes the solution completely, so the problem is ill conditioned; almost any perturbation of the 6 constants in the data will do.
If fl(f(x)) is to be evaluated for each of the following ranges of values of x, specify whether the computed floating-point result will be accurate or inaccurate.
x is large and positive (e.g., x > 4). Answer: Inaccurate
x is close to 0 (e.g., x = 0.001). Answer: Accurate
x is large and negative (e.g., x < -4). Answer: Accurate

Evaluate fl(g(h)) for h = 0.00351:
fl(1 + h) = fl(1.00351) = 1.004
fl(sin(1 + h)) = fl(sin(1.004)) = fl(0.843625...) = 0.8436
fl(sin(1)) = fl(0.841470...) = 0.8415
fl(sin(1 + h) - sin(1)) = fl(0.8436 - 0.8415) = 0.002100
fl((sin(1 + h) - sin(1))/h) = fl(0.002100/0.00351) = fl(0.598290...) = 0.5983
Note: the exact value is 0.538824...
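The 4-digit model can be simulated in MATLAB (sketch; the handle fl is our stand-in for the idealized rounding):
fl = @(v) round(v, 4, 'significant');
h = 0.00351;
g4 = fl(fl(fl(sin(fl(1 + h))) - fl(sin(1)))/h)   % 0.5983
gexact = (sin(1 + h) - sin(1))/h                 % 0.538824...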


Notation: f ∈ C^3[a,b] means f(x), f'(x), f''(x), f'''(x) all exist and are continuous on [a,b].
p ∈ [a,b] is a root of f(x) = 0 if f(p) = 0.


Newton's Method: if p is a zero of f(x), and p0 is an approximation of p, then:
pn = pn-1 - f(pn-1)/f'(pn-1), n = 1, 2, 3, ...




Algorithm Stability: an algorithm is stable if it determines a computed solution (using floating-point arithmetic) that is close to the exact solution of some small perturbation of the problem. Stable if ANY one small data perturbation works; unstable only if NO small data perturbation works.

Trig Substitution:
sin(2θ) = 2sin(θ)cos(θ)
sin(a - b) = sin(a)cos(b) - cos(a)sin(b)
cos(a - b) = cos(a)cos(b) + sin(a)sin(b)
cos(2θ) = cos^2(θ) - sin^2(θ) = 2cos^2(θ) - 1 = 1 - 2sin^2(θ)

Consider the system of 2 linear equations in the 2 unknowns x, y:
ax + by = e
cx + dy = f
If a, b, c, d, e, f have known values and ad - bc ≠ 0, then the solution is:
x = (ed - bf)/(ad - bc), y = (af - ec)/(ad - bc)

Floating point operations: relative error |1 - p*/p| is very small: < b^(1-k) (chopping), < 0.5b^(1-k) (rounding), where b = base and k = # of digits, but it increases with more fl() operations.
Use the following to eliminate subtractive cancellation:
Rationalization
Taylor approximations
Trig substitution


Use the quadratic Taylor polynomial approximation to sin(1 + h) (derived below) to obtain a polynomial approximation, say p(h), to g(h).
Solution: p(h) = [sin(1) + h cos(1) - (h^2/2)sin(1) - sin(1)]/h = cos(1) - (h/2)sin(1)


Taylor's Theorem can be expressed in two equivalent forms: the way it is defined above, or by using a change of variable (replacing x by x0 + h, so that h = x - x0 is the independent variable):
f(x0 + h) = f(x0) + h f'(x0) + h^2 f''(x0)/2! + h^3 f'''(x0)/3! + ...
Using the latter form of Taylor's Theorem (without the remainder term), determine the quadratic (in h) Taylor polynomial approximation to sin(1 + h). Note: leave your answer in terms of cos(1) and sin(1).
Solution: sin(1 + h) ≈ sin(1) + h cos(1) - (h^2/2)sin(1)

Problem Conditioning: ill conditioned if the exact solution changes greatly with small changes in the data. Well conditioned only if, for ALL small perturbations δi of the data, the perturbed exact solution stays close to r.


Let f(x) = [(sin(x) - e^x) + 1]/x^2, x ≠ 0, x in radians.
To 4 significant digits, the exact value of f(0.123) is -0.5416, and the floating-point computation is inaccurate. In order to obtain a better formula for approximating f(x) when x is close to 0, use the Taylor polynomial approximations for e^x and sin(x) (both expanded about x0 = 0) to obtain a quadratic polynomial approximation for f(x). Show that this polynomial is unstable for fl(f(0.123)).
Solution: sin(x) ≈ x - x^3/6, e^x ≈ 1 + x + x^2/2 + x^3/6 + x^4/24
f(x) ≈ [x - x^3/6 - (1 + x + x^2/2 + x^3/6 + x^4/24) + 1]/x^2 = -0.5 - x/3 - x^2/24

With x = 0.123 + ε, where ε/0.123 is small:
f(0.123 + ε) ≈ -0.5 - (0.123 + ε)/3 - (0.123 + ε)^2/24 = -0.54163 - 0.34358ε + O(ε^2) ≈ -0.5416 for all small ε.
Since this is not close to the computed value -0.4629 for any small ε, the computation is unstable.
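A double-precision MATLAB sanity check (sketch): both forms agree with the exact value -0.5416; the instability only appears in the idealized 4-digit arithmetic.
f  = @(x) (sin(x) - exp(x) + 1)./x.^2;
p2 = @(x) -0.5 - x/3 - x.^2/24;
[f(0.123), p2(0.123)]   % both ≈ -0.54163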

Evaluate fl(fl(w*x) - fl(y*z)) for w = 16.00, x = 43.61, y = 12.31, z = 56.68.
Using idealized, rounding, floating-point arithmetic (base 10, k = 4), the above expression evaluates to 0.1. The exact value is 0.0292. Use the definition of stability to show (by using only a perturbation in y) that the computation is stable.
Solution:
To show that it is stable, find a value δ for which the exact value of (16.00)(43.61) - (12.31 + δ)(56.68) is approximately equal to 0.1.
Solving 16*43.61 - (12.31 + δ)*56.68 = 0.1 gives δ = -0.0012491. Thus, for example, if y = 12.31 is perturbed to y + δ = 12.31 - 0.00125, then the exact value of w*x - y*z is 0.10005, which is close to 0.1, and since |δ|/12.31 is small, the computation is stable.

Apply Newton-Raphson to f(x) = x^2 - 1/c to determine an iterative formula for computing 1/sqrt(c).
Solution: f(x) = x^2 - 1/c; f'(x) = 2x
pn = pn-1 - (pn-1^2 - 1/c)/(2pn-1) = (pn-1^2 + 1/c)/(2pn-1) = 0.5(pn-1 + 1/(c*pn-1))
For arbitrary c > 0, let p0 be the initial approximation to 1/sqrt(c), let {p1, p2, p3, ...} be the sequence of computed approximations to 1/sqrt(c) using the iterative formula above, and let en = pn - 1/sqrt(c) for n = 0, 1, 2, 3, ... Show using algebra that en = en-1^2/(2pn-1). It then follows that lim(n→∞) |en|/|en-1|^2 = lim(n→∞) 1/(2pn-1) = sqrt(c)/2.
Solution: en = pn - 1/sqrt(c) = (pn-1^2 + 1/c)/(2pn-1) - 1/sqrt(c) = [pn-1^2 - 2pn-1/sqrt(c) + 1/c]/(2pn-1) = [pn-1 - 1/sqrt(c)]^2/(2pn-1) = en-1^2/(2pn-1)
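A short MATLAB sketch of the 1/sqrt(c) iteration (c = 3 and p0 = 1 are arbitrary test choices):
c = 3; p = 1;
for n = 1:5
    p = 0.5*(p + 1/(c*p));   % pn = (pn-1 + 1/(c*pn-1))/2
end
[p, 1/sqrt(c)]               % both ≈ 0.57735 after a few iterations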
Let R denote any positive number. Apply the Newton-Raphson method to f(x) = x^2 - R/x in order to determine an iterative formula for computing R^(1/3). Simplify the formula so that it is in the form xn*[g(xn)/h(xn)], where g(xn) and h(xn) are simple polynomials in xn.
Solution: xn+1 = xn - f(xn)/f'(xn) = xn - (xn^2 - R/xn)/(2xn + R/xn^2)
= xn[1 - (xn^3 - R)/(2xn^3 + R)] = xn(2xn^3 + R - xn^3 + R)/(2xn^3 + R) = xn(xn^3 + 2R)/(2xn^3 + R)
Consider the case R = 2. Given some initial value x0, if the iterative formula above converges to 2^(1/3), what will be the order of convergence?
Solution: f'(x) = 2x + R/x^2 = 2x + 2/x^2, so at the root x = 2^(1/3), f'(2^(1/3)) = 2*2^(1/3) + 2/2^(2/3) ≠ 0, so 2^(1/3) is a simple zero and convergence is quadratic.
Convergence:
Suppose that the absolute errors in three consecutive approximations with some iterative method are en = 0.07, en+1 = 0.013, en+2 = 0.00085. Use the definition of convergence to estimate the order of convergence α of the iterative method.
Solution:
Definition of the order of convergence: lim(n→∞) |en+1|/|en|^α = k. If n is sufficiently large then en+1/en^α ≈ k ≈ en+2/en+1^α, so en+2/en+1 = (en+1/en)^α. Solving for α: α = ln(en+2/en+1)/ln(en+1/en) ≈ 1.62.
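In MATLAB (sketch):
e = [0.07 0.013 0.00085];
alpha = log(e(3)/e(2))/log(e(2)/e(1))   % ≈ 1.62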


Bisection Method: used to compute a zero of any function f(x) that is continuous on an interval [a,b] for which f(a)*f(b) < 0; has linear convergence (α = 1).
a and b are two initial approximations; each new approximation is the midpoint of [a,b]: c = (a + b)/2.
If f(c) = 0, stop. Otherwise, a new interval that is half the length of the previous interval is determined:
o if f(a)*f(c) < 0, set b = c
o if f(b)*f(c) < 0, set a = c
Repeat until [a,b] is sufficiently small; c will be close to a zero of f(x). If tolerance ε is specified, stop when b - a < 2ε.
Max absolute error of the N-th approximation: (b - a)/2^N, so the number of iterations N needed to achieve absolute error < ε satisfies N > log2((b - a)/ε).
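A minimal bisection sketch in MATLAB (f, [a,b] and tol are illustrative; any f with f(a)*f(b) < 0 works):
f = @(x) x.^2 - 2; a = 0; b = 2; tol = 1e-8;
while (b - a)/2 > tol
    c = (a + b)/2;
    if f(a)*f(c) <= 0, b = c; else, a = c; end
end
c = (a + b)/2   % ≈ sqrt(2)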

Multiplicity:
If a zero exists at p for f(x), keep taking derivatives until f^(m)(p) ≠ 0; m = multiplicity.
If m = 1, then p is a simple zero of f(x).

Secant Method:
pn = pn-1 - f(pn-1)(pn-1 - pn-2)/[f(pn-1) - f(pn-2)]
Error in k-th approx.: ek = pk - p
Order of convergence: |en+1| ≈ C|en|^α with α = 1.618 for a simple zero.

Horner's Algorithm: Given a polynomial P(x) = an*x^n + an-1*x^(n-1) + ... + a1*x + a0 and a value x0, it evaluates P(x0) and P'(x0).
Given a0, a1, ..., an and x0, compute:
bn = an
bn-1 = an-1 + bn*x0
...
b1 = a1 + b2*x0
b0 = a0 + b1*x0 = P(x0)
cn = bn
cn-1 = bn-1 + cn*x0
...
c1 = b1 + c2*x0 = P'(x0)
P(x) = (x - x0)Q(x) + b0
Original polynomial: P(x) = an*x^n + an-1*x^(n-1) + ... + a1*x + a0
Deflated polynomial: Q(x) = b1 + b2*x + b3*x^2 + ... + bn*x^(n-1)
An approximation to one of the zeros of P(x) = x^4 + x^3 - 6x^2 - 7x - 7 is x0 = 2.64. If x0 is used as an approximation to a zero of P(x), use synthetic division (Horner's algorithm) to determine the associated deflated polynomial. Note: do not do any computations with the Newton-Raphson method.
Solution: a0 = -7, a1 = -7, a2 = -6, a3 = 1, a4 = b4 = 1
b3 = a3 + b4*x0 = 1 + 1(2.64) = 3.64
b2 = a2 + b3*x0 = -6 + 3.64(2.64) = 3.6096
b1 = 2.529344, b0 = -0.322532
Deflated polynomial: x^3 + 3.64x^2 + 3.6096x + 2.529344
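The deflation can be checked in MATLAB with deconv (synthetic division); a sketch:
p = [1 1 -6 -7 -7];
[q, r] = deconv(p, [1 -2.64])   % q ≈ [1 3.64 3.6096 2.529344], remainder r(end) ≈ -0.322532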
Newton's Method w/ Horner's Algorithm and Polynomial Deflation: to approximate a zero of P(x):
Choose an initial approx. p0.
For i = 1, ..., N: use Horner's Algorithm to evaluate {bn, ..., b0} and {cn, ..., c1}, so that b0 = P(pi-1) and c1 = P'(pi-1); compute pi = pi-1 - b0/c1.
pN is the approx. to a zero of P(x); the deflated polynomial is Q(x) = b1 + b2*x + b3*x^2 + ... + bn*x^(n-1).
Müller's Method:
Fit P(x) = a(x - x2)^2 + b(x - x2) + c so that P(x) and f(x) are equal at x0, x1, x2. With h1 = x1 - x0, h2 = x2 - x1, δ1 = [f(x1) - f(x0)]/h1, δ2 = [f(x2) - f(x1)]/h2:
a = (δ2 - δ1)/(h2 + h1), b = a*h2 + δ2, c = f(x2)
To determine the next approx, x3:
x3 = x2 - 2c/[b + sign(b)*sqrt(b^2 - 4ac)]
then reinitialize x0, x1, x2 to be x1, x2, x3 and repeat.
Order of convergence ≈ 1.84 for a simple zero.
Let P(x) = x^4 + x^2 - 3. If Müller's method is used to approx. a zero of P(x) using initial approximations 0, 1, 2, give the Lagrange form of the interpolating polynomial.
Solution:
x0 = 0, x1 = 1, x2 = 2; f(x0) = -3, f(x1) = -1, f(x2) = 17
L0(x) = (x - 1)(x - 2)/[(0 - 1)(0 - 2)] = (x - 1)(x - 2)/2
L1(x) = (x - 0)(x - 2)/[(1 - 0)(1 - 2)] = -x(x - 2)
L2(x) = (x - 0)(x - 1)/[(2 - 0)(2 - 1)] = x(x - 1)/2
P(x) = -3L0(x) - L1(x) + 17L2(x)

Order of Convergence:
lim(n→∞) |pn+1 - p|/|pn - p|^α = λ
If the limit λ is positive and finite, then α is the order of convergence.

Determine a0, b0, d0, a1, b1, c1 and d1 so that
S(x) = { a0 + b0x - 3x^2 + d0x^3 on -1 <= x <= 0;  a1 + b1x + c1x^2 + d1x^3 on 0 <= x <= 1 }
is the natural cubic spline function such that S(-1) = 1, S(0) = 2 and S(1) = -1. Clearly identify the 8 conditions that the unknowns must satisfy, and then solve for the 7 unknowns.
Solution: S0'(x) = b0 - 6x + 3d0x^2, S1'(x) = b1 + 2c1x + 3d1x^2, S0''(x) = -6 + 6d0x, S1''(x) = 2c1 + 6d1x
The 8 conditions are:
S0(-1) = 1: a0 - b0 - 3 - d0 = 1, so a0 - b0 - d0 = 4
S1(0) = 2: a1 = 2
S1(1) = -1: a1 + b1 + c1 + d1 = -1, so b1 + c1 + d1 = -3
S1(0) = S0(0): a1 = a0, so a0 = 2
S1'(0) = S0'(0): b1 = b0
S1''(0) = S0''(0): 2c1 = -6, so c1 = -3
S0''(-1) = 0: -6 - 6d0 = 0, so d0 = -1
S1''(1) = 0: 2c1 + 6d1 = 0, so -6 + 6d1 = 0 and d1 = 1
From the first condition, b0 = a0 - d0 - 4 = -1; from the fifth condition, b1 = b0 = -1.
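A quick MATLAB verification of the interpolation conditions (sketch):
S0 = @(x) 2 - x - 3*x.^2 - x.^3;   % a0=2, b0=-1, d0=-1
S1 = @(x) 2 - x - 3*x.^2 + x.^3;   % a1=2, b1=-1, c1=-3, d1=1
[S0(-1), S1(0), S1(1)]             % 1, 2, -1 as required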

Differentiating the interpolating polynomial:
f'(x) = Σ_{k=0..n} f(xk) Lk'(x) + f^(n+1)(ξ(x))/(n+1)! * d/dx[(x - x0)(x - x1)···(x - xn)]
Simplified error term (evaluating at a node x = xj):
f'(xj) = Σ_{k=0..n} f(xk) Lk'(xj) + f^(n+1)(ξ(xj))/(n+1)! * Π_{k=0..n, k≠j} (xj - xk)

Let P(x) denote the (linear) polynomial of degree 1 that interpolates f(x) = cos(x) at the points x0 = -0.1 and x1 = 0.1 (where x is in radians). Use the error term of polynomial interpolation to determine an upper bound for |P(x) - f(x)|, where x ∈ [-0.1, 0.1]. Do not construct P(x).
Solution:
|P(x) - f(x)| = |f''(ξ)(x - 0.1)(x + 0.1)/2!| with -0.1 < ξ < 0.1
= |-cos(ξ)(x^2 - 0.01)/2|
<= [cos(0)/2] * max(-0.1 <= x <= 0.1) |x^2 - 0.01|
= 0.5 * |0 - 0.01| = 0.005
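Numeric check in MATLAB (sketch); by symmetry the interpolant is the horizontal line y = cos(0.1), and the worst error sits at x = 0:
x = linspace(-0.1, 0.1, 1001);
P = polyfit([-0.1 0.1], cos([-0.1 0.1]), 1);
max(abs(polyval(P, x) - cos(x)))   % ≈ 0.004996, below the bound 0.005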

For n = 1, j = 0 (two-point forward difference):
f'(x0) = [f(x0 + h) - f(x0)]/h - (h/2) f''(ξ)
Error Term of Polynomial Interpolation:
f(x) = P(x) + f^(n+1)(ξ(x))/(n+1)! * (x - x0)(x - x1)···(x - xn), assuming f ∈ C^(n+1)[a,b],
where P(x) is the interpolation polynomial and x0, x1, ..., xn are distinct points; therefore:
|f(x) - P(x)| <= [1/(n+1)!] * max|f^(n+1)(ξ)| * max|(x - x0)(x - x1)···(x - xn)|
To determine each max, either use inspection or take the derivative and set it equal to 0.
Runge Phenomenon:
As n → ∞, Pn(x) diverges from f(x) for all values of x such that 0.726 <= |x| <= 1 (except at the points of interpolation xi).
Consider the error term for polynomial interpolation as an example of this.
Lagrange Interpolation Polynomials:
P(xk) = f(xk) for k = 0, 1, 2, ..., n
P(x) = f(x0)L0(x) + f(x1)L1(x) + ... + f(xn)Ln(x) = Σ_{k=0..n} f(xk) Ln,k(x)
with error term:
f(x) - P(x) = f^(n+1)(ξ)/(n+1)! * (x - x0)(x - x1)···(x - xn)

For n = 2 (middle of 3 data points), j = 1:
f'(x1) = f(x0)(x1 - x2)/[(x0 - x1)(x0 - x2)] + f(x1)(2x1 - x0 - x2)/[(x1 - x0)(x1 - x2)] + f(x2)(x1 - x0)/[(x2 - x0)(x2 - x1)] + f'''(ξ1)(x1 - x0)(x1 - x2)/6
where ξ1 lies between x0 and x2. For equally spaced data, x1 - x0 = x2 - x1 = h (three-point midpoint formula):
f'(x1) = [f(x2) - f(x0)]/(2h) - (h^2/6) f'''(ξ1)
For n = 2, j = 0 or 2 (three-point endpoint formula, equally spaced data):
f'(x0) = [-3f(x0) + 4f(x1) - f(x2)]/(2h) + (h^2/3) f'''(ξ0)

Extrapolation table column formula: when M = N1(h) + K1 h^2 + K2 h^4 + K3 h^6 + ..., the entries
Nj(h) = Nj-1(h/2) + [Nj-1(h/2) - Nj-1(h)]/(4^(j-1) - 1), for j = 2, 3, ...
have truncation error of O(h^(2j)).

Newton-Cotes Closed Quadrature Formulas:
Trapezoidal rule (n = 1), degree of precision = 1:
∫[x0,x1] f(x)dx = (h/2)[f(x0) + f(x1)] - (h^3/12) f''(ξ)
Simpson's Rule (n = 2), degree of precision = 3:
∫[x0,x2] f(x)dx = (h/3)[f(x0) + 4f(x1) + f(x2)] - (h^5/90) f''''(ξ)
Simpson's Rule Precision Table (degree of precision is 3). Compare ∫[a,b] f(x)dx with [(b - a)/6][f(a) + 4f((a + b)/2) + f(b)]:
f(x) = 1:   exact b - a;          Simpson b - a         (equal)
f(x) = x:   exact (b^2 - a^2)/2;  Simpson (b^2 - a^2)/2 (equal)
f(x) = x^2: exact (b^3 - a^3)/3;  Simpson (b^3 - a^3)/3 (equal)
f(x) = x^3: exact (b^4 - a^4)/4;  Simpson (b^4 - a^4)/4 (equal)
f(x) = x^4: exact (b^5 - a^5)/5;  Simpson [(b - a)/6][a^4 + 4((a + b)/2)^4 + b^4] (not equal in general)

Newton-Cotes Open Quadrature:
The open Newton-Cotes quadrature formula (for n = 1) that approximates ∫[a,b] g(x)dx is (3h/2)[g(x0) + g(x1)], where h = (b - a)/3 and xi = a + (i + 1)h for i = 0, 1. Recall that Euler's method can be derived by integrating the differential equation y'(t) = f(t, y(t)) over [ti, ti+1] and using a simple "rectangular" rule to approximate ∫ f(t, y(t))dt. Derive a formula for approximating the solution to y'(t) = f(t, y(t)) by integrating this differential equation over [ti, ti+3] and approximating ∫ f(t, y(t))dt by the above open Newton-Cotes quadrature formula for n = 1.
Solution:
y'(t) = f(t, y(t)) gives y(ti+3) = y(ti) + ∫[ti,ti+3] f(t, y(t))dt ≈ y(ti) + (3h/2)[f(ti+1, y(ti+1)) + f(ti+2, y(ti+2))], which suggests the method wi+3 = wi + (3h/2)[f(ti+1, wi+1) + f(ti+2, wi+2)].
Quadrature Precision:
Determine the degree of precision of the quadrature formula (3h/4)[3f(h) + f(3h)], which is an approximation to ∫[0,3h] f(x)dx.
Solution:
f(x) = 1:   ∫[0,3h] dx = 3h;           (3h/4)[3 + 1] = 3h            (equal)
f(x) = x:   ∫[0,3h] x dx = 9h^2/2;     (3h/4)[3h + 3h] = 9h^2/2      (equal)
f(x) = x^2: ∫[0,3h] x^2 dx = 9h^3;     (3h/4)[3h^2 + 9h^2] = 9h^3    (equal)
f(x) = x^3: ∫[0,3h] x^3 dx = 81h^4/4;  (3h/4)[3h^3 + 27h^3] = 45h^4/2 (not equal)
Thus the degree of precision is 2.
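The precision test can be automated in MATLAB (sketch; h = 0.37 is an arbitrary test width):
h = 0.37;
Q = @(f) (3*h/4)*(3*f(h) + f(3*h));
for p = 0:3
    err = abs(integral(@(x) x.^p, 0, 3*h) - Q(@(x) x.^p));
    fprintf('p = %d, error = %.2e\n', p, err)   % ~0 for p <= 2, nonzero for p = 3
end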
Composite Simpson's Rule:
m applications of Simpson's rule on [a,b] requires [a,b] to be subdivided into an even number of subintervals, n = 2m, each of length h = (b - a)/(2m):
∫[a,b] f(x)dx = (h/3)[f(x0) + 4f(x1) + 2f(x2) + 4f(x3) + 2f(x4) + ... + 4f(x_{2m-1}) + f(x_{2m})] - (b - a)(h^4/180) f''''(ξ)
Case m = 2: requires 5 quadrature points and 4 subintervals.
Case m = 4: requires 9 quadrature points and 8 subintervals.

Numerical Differentiation Using Taylor's Theorem:
Write the Taylor expansion for each point involved (for f(x - h) replace h by -h, for f(x + 2h) replace h by 2h, etc.), then take a linear combination that cancels as many derivative terms as possible.

Cubic Spline Interpolation:
S(x) is a cubic spline interpolant for f(x) if:
(a) S(x) is a cubic polynomial, denoted by Sj(x), on each subinterval [xj, xj+1], 0 <= j <= n-1
(b) Sj(xj) = f(xj), for 0 <= j <= n-1, and Sn-1(xn) = f(xn)
(c) Sj+1(xj+1) = Sj(xj+1), for 0 <= j <= n-2
(d) Sj+1'(xj+1) = Sj'(xj+1), for 0 <= j <= n-2
(e) Sj+1''(xj+1) = Sj''(xj+1), for 0 <= j <= n-2
(f) and either one of:
(i) S''(x0) = S''(xn) = 0 (free/natural boundary condition)
(ii) S'(x0) = f'(x0) and S'(xn) = f'(xn) (clamped boundary condition)
Conditions (a)-(e) give only 4n-2 conditions on 4n unknowns, so there are infinitely many solutions. If (f) is added, there are 4n conditions in 4n unknowns and a unique S(x).

For linear interpolation (n = 1):
P(x) = y0 (x - x1)/(x0 - x1) + y1 (x - x0)/(x1 - x0)

M = exact value, N1(h) = computed approx. using stepsize h, K1h + K2h^2 + ... = truncation error of O(h).
Use the same formula to approx. M, but replace h by h/2 (or h/some other value). Determine a linear combination of the two formulas for M so that the largest term in the truncation error (O(h)) cancels out. N calculation for the extrapolation table:
N2(h) = 2N1(h/2) - N1(h)

Numerical Differentiation Formulas:
Given f(x) and the nodes xi, write a Lagrange interpolating polynomial, differentiate it and its error term, and evaluate at the desired point.


Polynomial Interpolation:
Let yk = f(xk) for given distinct values xk; then P(x) interpolates f(x) if P(xk) = yk for each k.
General form:
Ln,k(x) = [(x - x0)(x - x1)···(x - xk-1)(x - xk+1)···(x - xn)] / [(xk - x0)(xk - x1)···(xk - xk-1)(xk - xk+1)···(xk - xn)]

Consider the initial-value problem y'(t) = (1/t)(y^2 + y), y(1) = -2. Use the Taylor method of order n = 2 with h = 0.1 to approximate y(1.1). Show all of your work and the iterative formula.
Solution: f(t, y(t)) = (1/t)[(y(t))^2 + y(t)], so
y''(t) = (-1/t^2)(y^2 + y) + (1/t)(2y + 1)(y^2 + y)/t = 2y^2(y + 1)/t^2
Taylor method of order 2 is wi+1 = wi + h f(ti, wi) + (h^2/2) f'(ti, wi)
= wi + h(wi^2 + wi)/ti + (h^2 wi^2/ti^2)(wi + 1)
So, w1 = w0 + h w0(w0 + 1)/t0 + h^2 w0^2(w0 + 1)/t0^2
= -2 + 0.1(-2)(-2 + 1)/1 + 0.01(4)(-2 + 1)/1^2 = -1.84
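One-step MATLAB check (sketch):
f   = @(t, w) (w.^2 + w)./t;
yp2 = @(t, w) 2*w.^2.*(w + 1)./t.^2;   % y'' from the solution above
h = 0.1;
w1 = -2 + h*f(1, -2) + h^2/2*yp2(1, -2)   % -1.84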

Compute two approximations to ∫[0,0.2] x/(1 + x^2) dx, using the non-composite Simpson's rule (h = 0.1) and the composite Simpson's rule (h = 0.05) on [0, 0.2].
Solution:
Simpson's rule: I1 = (h/3)[f(0) + 4f(0.1) + f(0.2)] = 0.01961158
Composite Simpson's rule: I2 = (h/3)[f(0) + 4f(0.05) + 2f(0.1) + 4f(0.15) + f(0.2)] = 0.0196104
Find the truncation error term for the two approximations to obtain an O(h^6) approximation to the value of the given definite integral.
Solution: with h = 0.1:
(1) I = I1 + k h^4 + O(h^6)
(2) I = I2 + k (h/2)^4 + O(h^6)
16*(2) - (1), solve for I: I = I2 + (I2 - I1)/15 + O(h^6) = 0.01961035
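MATLAB check of all three values (sketch):
f  = @(x) x./(1 + x.^2);
I1 = 0.1/3*(f(0) + 4*f(0.1) + f(0.2));
I2 = 0.05/3*(f(0) + 4*f(0.05) + 2*f(0.1) + 4*f(0.15) + f(0.2));
[I1, I2, I2 + (I2 - I1)/15, 0.5*log(1.04)]   % last entry is the exact integral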

Richardson's Extrapolation:
Case 1: M = N1(h) + K1 h + K2 h^2 + K3 h^3 + ... (truncation error O(h)).
Case 2: M = N1(h) + K1 h^2 + K2 h^4 + ... (truncation error O(h^2)); replace h by h/2 (or h/some value) and follow the same steps as in Case 1.
Romberg Integration: is the application of Richardson's Extrapolation to the composite trapezoidal rule approximations. Rk,1 is the composite trapezoidal rule approx. to ∫[a,b] f(x)dx using 2^(k-1) subintervals. The table:
R1,1
R2,1  R2,2
R3,1  R3,2  R3,3
...

For columns > 1: Rk,j = Rk,j-1 + [Rk,j-1 - Rk-1,j-1]/(4^(j-1) - 1)
Disadvantages of Euler's: not sufficiently accurate. Global truncation error is O(h), which means that h must be small for a high-accuracy approx.
Taylor's Method of order n:
w0 = α; wi+1 = wi + h*T(ti, wi), where T(ti, wi) = f(ti, wi) + (h/2)f'(ti, wi) + ... + (h^(n-1)/n!) f^(n-1)(ti, wi)

Consider approximating ∫[a,b] f(x)dx using Romberg integration, where Rk,1 = the trapezoidal rule approximation to ∫[a,b] f(x)dx using 2^(k-1) subintervals on [a,b]. Use R1,1 and R2,1 and Richardson extrapolation to show that R2,2 is equal to the Simpson's Rule approximation to ∫[a,b] f(x)dx.
Solution: R2,2 = [4R2,1 - R1,1]/3 = R2,1 + [R2,1 - R1,1]/3
= [4(b - a)(f0 + 2f1 + f2)/4 - (b - a)(f0 + f2)/2]/3
= (b - a)[2(f0 + 2f1 + f2) - (f0 + f2)]/6 = (b - a)[f0 + 4f1 + f2]/6
= (h/3)(f0 + 4f1 + f2) with h = (b - a)/2
Adaptive Quadrature:
S1 = (non-composite) Simpson's rule approx. to ∫[a,b] f(x)dx
S2 = composite Simpson's rule approx. using 2 applications of Simpson's rule
With h = (b - a)/2: I - S1 = -(h^5/90) f''''(μ) and I - S2 = -(h^5/1440) f''''(μ), so
|I - S2| ≈ (1/15)|S2 - S1|
If |S2 - S1| < 15ε, then |I - S2| < ε, and S2 + (S2 - S1)/15 is an improved estimate.

Approximate ∫[-1,1] cos(x) dx using Gauss-Legendre with n = 3. (An integral ∫[a,b] g(t)dt over another interval is first transformed to ∫[-1,1] c·g(t(x))dx; see the Gauss-Legendre table below.)
Solution:
∫[-1,1] f(x)dx ≈ (5/9)f(-0.7745967) + (8/9)f(0.0) + (5/9)f(0.7745967)
= (1/9)[5cos(-0.7745967) + 8cos(0.0) + 5cos(0.7745967)] = 1.68300
(exact value: 2 sin(1) = 1.68294)
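In MATLAB (sketch):
x = [-0.7745967 0 0.7745967]; c = [5 8 5]/9;
GL = c*cos(x')      % ≈ 1.68300
exact = 2*sin(1)    % 1.68294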

Euler's Method: is obtained from the truncated Taylor polynomial approx. y(ti+1) ≈ y(ti) + h f(ti, y(ti)) by replacing y(ti+1) by its numerical approx. wi+1, and similarly replacing y(ti) by wi:
w0 = α (initial condition)
wi+1 = wi + h f(ti, wi)

Elimination row operations: (E2 - m21*E1) → E2, (E3 - m31*E1) → E3, (E3 - m32*E2) → E3, where mji = aji/aii.

Make triangular and back substitute.
Algorithms:
Forward elimination:
for i = 1 to n-1
  for j = i+1 to n
    mult ← aji/aii
    for k = i+1 to n
      ajk ← ajk - mult*aik
    bj ← bj - mult*bi
Back substitution:
xn ← bn/ann
for i = n-1 down to 1
  xi ← [bi - sum(aij*xj, j = i+1..n)]/aii
Fails with divide by zero (aii = 0). Use partial pivoting to reduce error:
Find largest pivot:
p ← i
for k = i+1 to n
  if |aki| > |api| then p ← k
Row swap if necessary:
if p ≠ i
  for k = i to n
    temp ← aik; aik ← apk; apk ← temp
  temp ← bi; bi ← bp; bp ← temp
Continue with Gaussian elimination.

Geometric interpretation of Euler's Method: each step moves along the tangent line through (ti, wi) with slope f(ti, wi).

Direct Methods for Solving Linear Systems:
Matrix algebra: Ax = b gives x = A^(-1)b
Ax = b, n equations in n unknowns:
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
an1x1 + an2x2 + ... + annxn = bn

Gauss-Legendre: To approximate an integral using this method it must be modified using a change of variable to an integral on [-1,1]. The integral is then approximated as a linear combination of f(xi) using values from the following table:
i    Coefficients (ci)    Roots (xi)
1    5/9                  0.7745967
2    8/9                  0.0
3    5/9                  -0.7745967
∫[-1,1] f(x)dx ≈ c1 f(x1) + c2 f(x2) + c3 f(x3)

Consider the initial-value problem y'(t) = (1/t)(y^2 + y), y(1) = -2.
Approximate y(1.1) using h = 0.1 and the following second-order Runge-Kutta method: wi+1 = wi + (h/2)[f(ti, wi) + f(ti+1, wi + h f(ti, wi))]
Solution: w1 = w0 + (h/2)[f(t0, w0) + f(t1, w0 + h f(t0, w0))]
= -2 + (0.1/2)[f(1, -2) + f(1.1, -2 + 0.1 f(1, -2))]
= -2 + 0.05[(4 - 2) + f(1.1, -2 + 0.1(2))]
= -2 + 0.05[2 + ((-1.8)^2 - 1.8)/1.1]
= -2 + 0.05(2 + 1.309091) = -1.8345
The order of the local truncation error of this Runge-Kutta method is O(h^3).


Runge-Kutta Methods: higher order (> 1) formulas that only require evaluations of f(t, y(t)), and not any of its derivatives. Methods of order 4 and 5 are most commonly used.


Local truncation error of Taylor's method of order n is O(h^(n+1)); it is just the Taylor remainder term, (h^(n+1)/(n+1)!) y^(n+1)(ξ). Global truncation error is O(h^n), so the order is n; Euler's method is just the special case n = 1.
Romberg table truncation error: 1st column = O(h^2), 2nd column = O(h^4), 3rd column = O(h^6), ...

Global Truncation Error at ti is |yi - wi|. If the global truncation error is O(h^k), the numerical method for computing wi is said to be of order k; the larger k is, the faster the rate of convergence. A difference method is said to be convergent (w.r.t. the differential equation it approximates) if lim(h→0) max_i |yi - wi| = 0.

Matlab: for Ax = b type A\b for Gaussian elimination with partial pivoting.
det(A) = (-1)^m * a11*a22*...*ann (product of the pivots), where m = # of row swaps. To find the inverse solve [A|I]. Better method: if A^(-1)b is needed, solve Ax = b for x. If A^(-1)B is needed, it is more efficient to solve AX = B for X.

Euler global error bound: |y(ti) - wi| <= (hM/2L)[e^(L(ti - a)) - 1] (M bounds |y''|, L is the Lipschitz constant), so Euler's method is of order 1.
Local Truncation Error: the error incurred in one step of a numerical method assuming that the value at the previous mesh point is exact. For Euler's Method:
|y(ti+1) - y(ti) - h f(ti, y(ti))| = (h^2/2)|y''(ξ)| <= (h^2/2)M
so the local truncation error is O(h^2). If the local truncation error is O(h^(k+1)) then the global is O(h^k) and the method is of order k.
Stability of computing x in Ax = b: stable if there exist small perturbations E, e such that x is close to the exact solution of (A+E)y = b + e; "small perturbation" means ||E||/||A|| and ||e||/||b|| are small.
Gaussian elimination with partial pivoting is almost always stable.
Condition of the problem Ax = b: ill conditioned if there exists one small perturbation of the data, (A+E)y = b + e, for which the exact solution y is far from x. Can be measured by the condition number of A, K(A) = ||A||*||A^(-1)|| (MATLAB: cond(A)).
Interpolation with a non-polynomial basis:
For the following data (xi, f(xi)): (-1, 0), (1, 1), (2, 3), suppose a function g(x) of the form g(x) = c0 + c1 e^(-x) + c2 e^x is to be determined so that g(x) interpolates the above data at the specified points xi. Write a system of linear equations in matrix/vector form Ac = b whose solution will give the values of the unknowns c0, c1 and c2 that solve this interpolation. Leave the answer in terms of e; don't solve the system.
Solution: [1 e e^(-1); 1 e^(-1) e; 1 e^(-2) e^2] * [c0; c1; c2] = [0; 1; 3]
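MATLAB check (sketch):
A = [1 exp(1) exp(-1); 1 exp(-1) exp(1); 1 exp(-2) exp(2)];
c = A\[0; 1; 3];
g = @(x) c(1) + c(2)*exp(-x) + c(3)*exp(x);
[g(-1), g(1), g(2)]   % returns 0, 1, 3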

Matlab Code:

ezplot('0.5*exp(x/3)-sin(x)', [-4, 5])
fplot('3*x^3-x-1', [-1, 2], '-')
fzero to compute the zeros of f(x):
fzero('0.5*exp(x/3)-sin(x)', [-5, -3])
Change bounds for each interval in which a zero is located.
Polynomial Operations:
The coefficients of a polynomial are stored in a vector p (starting with the coefficient of the highest power of x).
p(x) = x^3 - 2x - 5
p = [1 0 -2 -5];
r = roots(p)
Will compute and store all of the roots of the polynomial in r.
Alternate form: r = roots([1 0 -2 -5])
If r is any vector, p = poly(r) will result in p being a vector whose entries are the coefficients of a polynomial p(x) that has as its roots the entries of r.
Example:
r = [1 2 3 4];
p = poly(r)
Results in p = [1 -10 35 -50 24], i.e.
p(x) = x^4 - 10x^3 + 35x^2 - 50x + 24
The function polyval evaluates a polynomial at a specified value. If p = [1 -10 35 -50 24], then polyval(p, -1) gives the value of p(x) = x^4 - 10x^3 + 35x^2 - 50x + 24 at x = -1, namely 120. MATLAB uses Horner's algorithm for polyval.
polynewton computes one zero of a polynomial. Input variables: n, the polynomial degree; a, a vector of the coefficients of the polynomial P(x); pzero, the initial approximation to a zero; Nzero, the maximum number of iterations allowed; tol, the relative error tolerance used to test for convergence. The output variables are p, the final computed approximation to a zero of P(x), and b, the final vector of values b computed by Horner's algorithm, from which the deflated polynomial can be obtained.
polynewton(4, [-3.3 -1 0 2 5], 1.3, 20, 1e-8);
Will output to the screen each computed approximation to
the zero.
[x, y] = polynewton(4, [-3.3 -1 0 2 5], 1.3, 20, 1e-8);
Will output to the screen each computed approximation to a
zero, will store the final computed approximation in the
variable x, and will store the final computed vector of values
b from Horners algorithm in the vector y.
The MATLAB function INTERP1 can be used to do
linear and cubic polynomial interpolation. If X denotes a
vector of values, Y = F(X) a set of corresponding function
values, then z = interp1(X, Y, x) or z = interp1(X, Y, x,
'linear').
z = interp1(X, Y, x, 'cubic') determines z based on a cubic
polynomial approximation.
For the cubic spline with clamped boundary conditions, the
data to be interpolated should be stored in vectors, X and Y,
where Y has 2 more entries than X and the first and last
entries of Y are the two boundary conditions. If S(x)
denotes the cubic spline interpolant, and z is a given
number, then the value of S(z) can be computed by
entering spline(X, Y, z). To determine the coefficients of
the spline, first determine the pp (piecewise polynomial)
form of the spline by entering pp = spline(X, Y). Then enter
[breaks, coefs] = unmkpp(pp). breaks: a vector of the
knots (or nodes) of the spline, coefs: an array, the i-th row
of which contains the coefficients of the i-th spline

Gaussian Elimination Algorithm
(forward elimination)
for i = 1, 2, ..., n-1 do
  for j = i+1, i+2, ..., n do
    mult ← aji/aii
    for k = i+1, i+2, ..., n do
      ajk ← ajk - mult*aik
    bj ← bj - mult*bi
(back substitution)
xn ← bn/ann
for i = n-1, n-2, ..., 1 do
  xi ← [bi - sum(aij*xj, from j = i+1 to n)]/aii
Gaussian Elimination with partial pivoting
(forward elimination)
for i = 1, 2, ..., n-1 do
  (find the largest pivot)
  p ← i
  for k = i+1, i+2, ..., n do
    if |ak,i| > |ap,i| then p ← k
  if p != i then
    for k = i, i+1, ..., n do
      temp ← ai,k; ai,k ← ap,k; ap,k ← temp
    temp ← bi; bi ← bp; bp ← temp
  for j = i+1, i+2, ..., n do
    mult ← aj,i/ai,i
    for k = i+1, i+2, ..., n do
      aj,k ← aj,k - mult*ai,k
    bj ← bj - mult*bi
(back-substitution)
xn ← bn/an,n
for i = n-1, n-2, ..., 1 do
  xi ← [bi - sum(aij*xj, from j = i+1 to n)]/aii

Composite Trapezoidal Rule: Input upper and lower limits of the


integral a and b, max # of iterations maxiter, and tolerance tol.
function trap(a, b, maxiter, tol)
m = 1;
x = linspace(a, b, m+1);
y = f(x);
approx = trapz(x, y);
disp(' m integral approximation');
fprintf(' %5.0f %16.10f \n ', m, approx);
for i = 1 : maxiter
m = m*2 ;
oldapprox = approx ;
x = linspace ( a , b , m+1 ) ;
y = f(x);
approx = trapz(x, y);
fprintf(' %5.0f %16.10f \n ', m, approx);
if abs( 1-approx/oldapprox) < tol
return
end
end
fprintf('The iteration did not converge in %g', maxiter), fprintf('
iterations.')
Multiplicity of the roots.
function test_mult(x)
root = newton (pi/2, 40, 1e-8);
fprintf('\nroot used = %18.10f\n',root);
for i = 0:x
fprintf('m = %g',i),fprintf(' p = %18.10f\n',f_diff(root,i));
end
Use polyval to evaluate q(x) at the first zero of q(x) computed by
MATLAB in (a). This should give a result very close to 0.
r = [1 1 1 1 1 1 1 1];
p = poly(r)
roots_of_p = roots(p)
q = p + [0 0 0 0.001 0 0 0 0 0]
qr = roots(q)
polyval(q,qr(1))
Use INTERP1 to approximate z = P(1.3), where P(x) is the piecewise linear interpolating polynomial to y = sin x at the 11 equally-spaced points x(i) = (i-1)*pi/20, i = 1, ..., 11:
for i = 1:11
x(i)=(i-1)*pi/20;
y(i)=sin(x(i));
end

% Use back-substitution to solve Ax = y for x.


function x = solve(n, A, y)
x(n) = y(n) / A(n, n);
for i = n-1 : -1 : 1
sum = 0;
for j = i+1 : n
sum = sum + A(i, j) * x(j);
end
x(i) = (y(i) - sum)/A(i, i);
end
Secant: To find the zero of the function. Input
initial approx. p0, p1, tolerance tol, and max # of
iterations N.
function p = secant(p0, p1, N, tol)
i=2;
q0=f(p0);
q1=f(p1);
while i<=N
p=p1-q1*(p1-p0)/(q1-q0);
if abs(p-p1)<tol
return;
end
i=i+1;
p0=p1;
q0=q1;
p1=p;
q1=f(p);
end
fprintf('failed to converge in %g', N), fprintf(' iterations\n')

z = interp1(x, y, 1.3, 'linear');


fprintf(' z = %18.10f\n',z);
fprintf(' sin(1.3) = %18.10f\n',sin(1.3));
fprintf(' abs(sin(1.3)-z) = %18.10f\n',abs(sin(1.3)-z) );
Assignment #3
plot a graph of the function
>> ezplot('0.5*exp(x/3)-sin(x)',[-4,5])
fzero to compute the 3 zeros of
>> fzero('0.5*exp(x/3)-sin(x)',[-5,-3])
ans = -3.3083
h = fzero('10*(0.5*pi*1^2-1^2*asin(x/1)-x*sqrt(1^2-x^2))-12.4', [0, 1])
h = 0.1662; >> r = 1; >> x = r - h; x = 0.8338

newton.m: to find a zero of the function. Input: initial approx. pzero, maximum iterations Nzero, tolerance tol.
function p = newton(pzero, Nzero, tol)
i=1;
while i <= Nzero
p = pzero - f(pzero)/fp(pzero);
fprintf('i = %g',i), fprintf(' approximation = %18.10f\n',p)
if abs(1-pzero/p) < tol
return
end
i = i+1;
pzero = p;
end
fprintf('failed to converge in %g',Nzero), fprintf(' iterations\n')

f.m
function y = f(x)
y = 32*(x-sin(x)) - 35;

fp.m
function y = fp(x)
y = 32*(1-cos(x));

quad: numerically evaluate an integral by adaptive Simpson quadrature; use array operators .*, ./ and .^. To approximate the integral of sin(1/x) from 0.1 to 2 using tol = 10^-5:
[Q, fnc] = quad(@f, 0.1, 2, 10^-5)
function y = f(x)
y = sin(1./x);

ode45: a variable-stepsize Runge-Kutta method that uses two Runge-Kutta formulas of orders 4 and 5; solves non-stiff differential equations (medium order method). For y'(t) = -y(t) - 5e^(-t) sin(5t), y(0) = 1 on [0,3]:
[t, y] = ode45('f', [0 3], 1)
function z = f(t, y)
z = -y - 5*exp(-t)*sin(5*t);

horner: to evaluate the polynomial and its derivative. Input: location x0, degree n, coefficients a. Outputs: b(1) = P(x0), c(2) = P'(x0).
function [b, c] = horner(x0, n, a)
b(n+1) = a(n+1);
c(n+1) = a(n+1);
for j = n: -1: 2
b(j) = a(j) + b(j+1)*x0;
c(j) = b(j) + c(j+1)*x0;
end
b(1) = a(1) + b(2)*x0;

polynewton:
function [p, b] = polynewton(n, a, pzero, Nzero, tol)
i=1;
while i <= Nzero
[b, c] = horner(pzero, n, a);
p = pzero - b(1)/c(2);
fprintf('i = %g',i), fprintf(' approximation = %18.10f\n',p)
if abs(1-pzero/p) < tol
return
end
i = i+1;
pzero = p;
end
fprintf('failed to converge in %g',Nzero), fprintf(' iterations\n')

Newton's method to find a reciprocal: MATLAB version of Newton's method for computing 1/R. With f(x) = 1/x - R and f'(x) = -1/x^2:
p = p0 - f(p0)/f'(p0) = p0 - (1/p0 - R)/(-1/p0^2) = p0 + p0 - R*p0^2 = 2*p0 - R*p0^2
function p = reciprocal(R, pzero, Nzero, tol)
i=1;
while i <= Nzero
p = 2*pzero - R*pzero*pzero;
fprintf('i = %g',i), fprintf(' approximation = %18.10f\n',p)
if abs(1-pzero/p) < tol
return
end
i = i+1;
pzero = p;
end
fprintf('failed to converge in %g',Nzero), fprintf(' iterations\n')
