Iterative solution methods for inverse problems:
IV Newton type methods

Barbara Kaltenbacher, University of Graz

22 June 2010


Overview

Newton's method

Levenberg-Marquardt
  - Monotonicity of the errors
  - Convergence
  - Convergence rates

Iteratively regularized Gauss-Newton method (IRGNM)
  - Convergence and convergence rates
Newton's method

$$F'(x_k^\delta)(x_{k+1}^\delta - x_k^\delta) = y^\delta - F(x_k^\delta) \,. \qquad (1)$$

Formulation as a least squares problem:

$$\min_{x \in D(F)} \|y^\delta - F(x_k^\delta) - F'(x_k^\delta)(x - x_k^\delta)\|^2$$

Ill-posedness: apply Tikhonov regularization.

Levenberg-Marquardt method:

$$\min_{x \in D(F)} \|y^\delta - F(x_k^\delta) - F'(x_k^\delta)(x - x_k^\delta)\|^2 + \alpha_k \|x - x_k^\delta\|^2 \,, \qquad (2)$$

Iteratively regularized Gauss-Newton method (IRGNM):

$$\min_{x \in D(F)} \|y^\delta - F(x_k^\delta) - F'(x_k^\delta)(x - x_k^\delta)\|^2 + \alpha_k \|x - x_0\|^2 \qquad (3)$$

The choice of the sequence $\alpha_k$ and the convergence analysis differ between the two methods.
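Both (2) and (3) are linear least squares problems in $x$ and admit a closed-form solution. The following sketch (my own illustration, not code from the slides; the function name is arbitrary) computes one such regularized step, covering both variants through the reference point `x_ref`: `x_ref = x_k` gives the Levenberg-Marquardt update, `x_ref = x_0` the IRGNM update.

```python
# A minimal sketch of one Tikhonov-regularized Newton step (an illustration,
# not code from the slides).  Both (2) and (3) have the form
#     min_x ||r - J (x - x_k)||^2 + alpha * ||x - x_ref||^2 ,
# whose minimizer satisfies the normal equations solved below.
import numpy as np

def regularized_newton_step(J, r, x_k, x_ref, alpha):
    """J = F'(x_k), r = y_delta - F(x_k); x_ref = x_k (LM) or x_0 (IRGNM)."""
    n = J.shape[1]
    # Setting the gradient to zero gives
    #   (J^T J + alpha I)(x - x_k) = J^T r + alpha (x_ref - x_k).
    h = np.linalg.solve(J.T @ J + alpha * np.eye(n),
                        J.T @ r + alpha * (x_ref - x_k))
    return x_k + h
```

With `x_ref = x_k` the pull-back term vanishes and the step reduces to the Levenberg-Marquardt update (4); with `x_ref = x_0` it reproduces the IRGNM update (14).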
Levenberg-Marquardt

$$x_{k+1}^\delta = x_k^\delta + (F'(x_k^\delta)^* F'(x_k^\delta) + \alpha_k I)^{-1} F'(x_k^\delta)^* (y^\delta - F(x_k^\delta)) \,, \qquad (4)$$

Choice of $\alpha_k$:

$$\|y^\delta - F(x_k^\delta) - F'(x_k^\delta)(x_{k+1}^\delta(\alpha_k) - x_k^\delta)\| = q \, \|y^\delta - F(x_k^\delta)\| \qquad (5)$$

for some $q \in (0,1)$: an inexact Newton method.

(5) has a unique solution $\alpha_k$ provided that for some $\gamma > 1$

$$\|y^\delta - F(x_k^\delta) - F'(x_k^\delta)(x^\dagger - x_k^\delta)\| \le \frac{q}{\gamma} \, \|y^\delta - F(x_k^\delta)\| \,, \qquad (6)$$

which can be guaranteed by a condition on $F$: for all $x, \tilde{x} \in B_{2\rho}(x_0) \subseteq D(F)$,

$$\|F(x) - F(\tilde{x}) - F'(x)(x - \tilde{x})\| \le c \, \|x - \tilde{x}\| \, \|F(x) - F(\tilde{x})\| \,. \qquad (7)$$

Choice of the stopping index $k_*$: discrepancy principle:

$$\|y^\delta - F(x_{k_*}^\delta)\| \le \tau\delta < \|y^\delta - F(x_k^\delta)\| \,, \quad 0 \le k < k_* \,, \qquad (8)$$

[Hanke 1996]
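A numerical sketch of the full scheme (my illustration: `F` below is a toy, well-conditioned forward map chosen only to exercise the mechanics, not an ill-posed operator; all names and tolerances are assumptions). It determines $\alpha_k$ from (5) by bisection, exploiting that the linearized residual norm is monotonically increasing in $\alpha$, and stops by the discrepancy principle (8).

```python
# Sketch of the Levenberg-Marquardt method (4) with alpha_k from (5) and
# stopping rule (8); toy problem, not from the slides.
import numpy as np

def lm_alpha(J, r, q, lo=1e-12, hi=1e12, iters=100):
    """Solve (5) for alpha by bisection in log scale: the linearized residual
    ||r - J h(alpha)||, h(alpha) = (J^T J + alpha I)^{-1} J^T r, increases
    monotonically from ~0 (alpha -> 0) to ||r|| (alpha -> infinity)."""
    n = J.shape[1]
    target = q * np.linalg.norm(r)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        h = np.linalg.solve(J.T @ J + mid * np.eye(n), J.T @ r)
        if np.linalg.norm(r - J @ h) < target:
            lo = mid          # residual too small -> alpha must grow
        else:
            hi = mid
    return np.sqrt(lo * hi)

def levenberg_marquardt(F, Fp, x0, y_delta, delta, q=0.7, tau=2.0, kmax=50):
    """Iteration (4) with alpha_k from (5), stopped by (8)."""
    x = x0.copy()
    for _ in range(kmax):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * delta:
            break             # discrepancy principle (8)
        J = Fp(x)
        alpha = lm_alpha(J, r, q)
        x = x + np.linalg.solve(J.T @ J + alpha * np.eye(len(x)), J.T @ r)
    return x
```

On a genuinely ill-posed problem the bisection bracket and the regularized solves would of course need more care; this sketch only mirrors the structure of (4), (5), (8).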
Levenberg-Marquardt: Monotonicity of the errors

Theorem
Let $0 < q < 1 < \gamma$ and assume that $F(x) = y$ has a solution and that (6) holds, so that $\alpha_k$ can be defined via (5). Then the following estimates hold:

$$\|x_k^\delta - x^\dagger\|^2 - \|x_{k+1}^\delta - x^\dagger\|^2 \ge \|x_{k+1}^\delta - x_k^\delta\|^2 \,, \qquad (9)$$

$$\|x_k^\delta - x^\dagger\|^2 - \|x_{k+1}^\delta - x^\dagger\|^2 \ge \frac{2(\gamma - 1)}{\gamma \alpha_k} \, \|y^\delta - F(x_k^\delta) - F'(x_k^\delta)(x_{k+1}^\delta - x_k^\delta)\|^2 \qquad (10)$$

$$\ge \frac{2(\gamma - 1)(1 - q)q}{\gamma \, \|F'(x_k^\delta)\|^2} \, \|y^\delta - F(x_k^\delta)\|^2 \,. \qquad (11)$$
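The monotonicity estimate (9) is easy to observe numerically. In the sketch below (my own illustration) the forward map is linear, $F(x) = Kx$, with exact data, so that (6) holds trivially; the error decrease (9) then in fact holds for any $\alpha > 0$, not only for the $\alpha_k$ from (5).

```python
# Numerical check of the monotonicity estimate (9) for a linear model with
# exact data (an illustration, not from the slides).
import numpy as np

def lm_step(K, y, x_k, alpha):
    """One Levenberg-Marquardt step (4) for the linear model F(x) = K x."""
    r = y - K @ x_k
    return x_k + np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ r)

rng = np.random.default_rng(1)
K = rng.standard_normal((6, 4))          # full column rank, so x_true is the
x_true = rng.standard_normal(4)          # unique solution of K x = y
y = K @ x_true
x_k = x_true + rng.standard_normal(4)    # some current iterate

for alpha in [1e-3, 1e-1, 1.0, 10.0]:
    x_next = lm_step(K, y, x_k, alpha)
    gain = np.linalg.norm(x_k - x_true)**2 - np.linalg.norm(x_next - x_true)**2
    # estimate (9): the squared error decreases at least by the squared step
    assert gain >= np.linalg.norm(x_next - x_k)**2 - 1e-10
```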
Levenberg-Marquardt: Monotonicity proof

Abbreviating $K_k := F'(x_k^\delta)$, the update (4) gives

$$\alpha_k (K_k K_k^* + \alpha_k I)^{-1} (y^\delta - F(x_k^\delta)) = y^\delta - F(x_k^\delta) - K_k (x_{k+1}^\delta - x_k^\delta) \,,$$

hence

$$\begin{aligned}
\|x_{k+1}^\delta - x^\dagger\|^2 - \|x_k^\delta - x^\dagger\|^2
&= 2\langle x_{k+1}^\delta - x_k^\delta, \, x_k^\delta - x^\dagger \rangle + \|x_{k+1}^\delta - x_k^\delta\|^2 \\
&= \langle (K_k K_k^* + \alpha_k I)^{-1} (y^\delta - F(x_k^\delta)), \,
   2 K_k (x_k^\delta - x^\dagger) + (K_k K_k^* + \alpha_k I)^{-1} K_k K_k^* (y^\delta - F(x_k^\delta)) \rangle \\
&= - 2\alpha_k \|(K_k K_k^* + \alpha_k I)^{-1} (y^\delta - F(x_k^\delta))\|^2
   - \|(K_k^* K_k + \alpha_k I)^{-1} K_k^* (y^\delta - F(x_k^\delta))\|^2 \\
&\quad + 2\langle (K_k K_k^* + \alpha_k I)^{-1} (y^\delta - F(x_k^\delta)), \,
   y^\delta - F(x_k^\delta) - K_k (x^\dagger - x_k^\delta) \rangle \\
&\le - \|x_{k+1}^\delta - x_k^\delta\|^2
   - 2\alpha_k^{-1} \|y^\delta - F(x_k^\delta) - K_k (x_{k+1}^\delta - x_k^\delta)\| \cdot \\
&\qquad \big( \|y^\delta - F(x_k^\delta) - K_k (x_{k+1}^\delta - x_k^\delta)\|
   - \|y^\delta - F(x_k^\delta) - K_k (x^\dagger - x_k^\delta)\| \big) \,.
\end{aligned}$$

By (5) and (6),

$$\|y^\delta - F(x_k^\delta) - K_k (x^\dagger - x_k^\delta)\| \le \gamma^{-1} \, \|y^\delta - F(x_k^\delta) - K_k (x_{k+1}^\delta - x_k^\delta)\| \,.$$
Levenberg-Marquardt method: Convergence

Theorem
Let $0 < q < 1$ and assume that $F(x) = y$ is solvable in $B_\rho(x_0)$, that $F'$ is uniformly bounded in $B_\rho(x^\dagger)$, and that the Taylor remainder of $F$ satisfies (7) for some $c > 0$. Then the Levenberg-Marquardt method with exact data $y^\delta = y$, $\|x_0 - x^\dagger\| < q/c$, and $\alpha_k$ determined from (5) converges to a solution of $F(x) = y$ as $k \to \infty$.

Theorem
Let the assumptions of the preceding theorem hold. Additionally, let $k_* = k_*(\delta, y^\delta)$ be chosen according to the stopping rule (8) with $\tau > 1/q$, and let $\|x_0 - x^\dagger\|$ be sufficiently small. Then, for some solution $x_*$ of $F(x) = y$,

$$k_*(\delta, y^\delta) = O(1 + |\ln \delta|) \quad \text{and} \quad \|x_{k_*}^\delta - x_*\| \to 0 \ \text{ as } \ \delta \to 0 \,.$$
Levenberg-Marquardt method: Convergence rates

Theorem
Let a solution $x^\dagger$ of $F(x) = y$ exist and let

$$F'(x) = R_x F'(x^\dagger) \quad \text{and} \quad \|I - R_x\| \le c_R \|x - x^\dagger\| \,, \quad x \in B_\rho(x_0) \subseteq D(F) \,, \qquad (12)$$

$$x^\dagger - x_0 = (F'(x^\dagger)^* F'(x^\dagger))^\mu v \,, \quad v \in N(F'(x^\dagger))^\perp \qquad (13)$$

hold with some $0 < \mu \le 1/2$ and $\|v\|$ sufficiently small. Moreover, let $\alpha_k$ and $k_*$ be chosen according to (5) and (8), respectively, with $\tau > 2$ and $1 > q > 1/\tau$. Then the Levenberg-Marquardt iterates defined by (4) remain in $B_\rho(x_0)$ and converge with the rate

$$\|x_{k_*}^\delta - x^\dagger\| = O\big(\delta^{\frac{2\mu}{2\mu+1}}\big) \,.$$

[Hanke 2009]
Remarks

- Rates with a priori choices of $\alpha_k$ and $k_*$:

  $$\alpha_k = \alpha_0 q^k \,, \quad \text{for some } \alpha_0 > 0 \,, \ q \in (0,1) \,,$$

  $$c(k_*+1)^{-(1+\varepsilon)} \alpha_{k_*}^{\mu+\frac12} \le \delta < c(k+1)^{-(1+\varepsilon)} \alpha_k^{\mu+\frac12} \,, \quad 0 \le k < k_* \,,$$

  $$k_* = O(1 + |\ln \delta|) \,, \quad \|x_{k_*}^\delta - x^\dagger\| = O\big((\delta (1 + |\ln \delta|)^{(1+\varepsilon)})^{\frac{2\mu}{2\mu+1}}\big) \,.$$

  [BK&Neubauer&Scherzer 2008]

- Generalization to other regularization methods (e.g., CG) in place of Tikhonov: [Hanke 1997], [Rieder 1999, 2001, 2005]
Iteratively regularized Gauss-Newton method (IRGNM)

$$x_{k+1}^\delta = x_k^\delta + (F'(x_k^\delta)^* F'(x_k^\delta) + \alpha_k I)^{-1} \big( F'(x_k^\delta)^* (y^\delta - F(x_k^\delta)) + \alpha_k (x_0 - x_k^\delta) \big) \,. \qquad (14)$$

A priori choice of $\alpha_k$:

$$\alpha_k > 0 \,, \quad 1 \le \frac{\alpha_k}{\alpha_{k+1}} \le r \,, \quad \lim_{k \to \infty} \alpha_k = 0 \,, \qquad (15)$$

for some $r > 1$.

A priori or a posteriori choice of $k_*$:

$$\|y^\delta - F(x_{k_*}^\delta)\| \le \tau\delta < \|y^\delta - F(x_k^\delta)\| \,, \quad 0 \le k < k_* \,, \qquad (16)$$

[Bakushinski 1992], see also the book [Bakushinski&Kokurin 2004]; [BK&Neubauer&Scherzer 1997], see also the book [BK&Neubauer&Scherzer 2008].
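A minimal numerical sketch of (14) (my own illustration; the toy forward map and all parameter values in the usage below are assumptions): $\alpha_k = \alpha_0 q^k$ satisfies (15) with $r = 1/q$, and the iteration is stopped by the discrepancy principle (16).

```python
# Sketch of the IRGNM iteration (14) with geometric alpha_k and stopping
# rule (16); toy problem, not from the slides.
import numpy as np

def irgnm(F, Fp, x0, y_delta, delta, alpha0=1.0, q=0.5, tau=1.5, kmax=50):
    """IRGNM iteration (14) with the a priori choice alpha_k = alpha0 * q^k,
    which satisfies (15) with r = 1/q, stopped by (16)."""
    x = x0.copy()
    for k in range(kmax):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * delta:
            break             # discrepancy principle (16)
        J = Fp(x)
        alpha = alpha0 * q**k
        # unlike Levenberg-Marquardt, the extra alpha_k (x0 - x_k) term
        # pulls the iterate back towards the initial guess x0
        x = x + np.linalg.solve(J.T @ J + alpha * np.eye(len(x)),
                                J.T @ r + alpha * (x0 - x))
    return x
```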
IRGNM: Convergence and convergence rates: idea of proof I

Key idea:

$$\|x_{k+1}^\delta - x^\dagger\| \approx \alpha_k^\mu \, w_k(\mu) \quad \text{with } w_k(s) \text{ as in the following lemma.}$$

Lemma
Let $K \in L(X, Y)$, $s \in [0,1]$, and let $\{\alpha_k\}$ be a sequence satisfying $\alpha_k > 0$ and $\alpha_k \to 0$ as $k \to \infty$. Then it holds that

$$w_k(s) := \alpha_k^{1-s} \, \|(K^*K + \alpha_k I)^{-1} (K^*K)^s v\| \le s^s (1-s)^{1-s} \|v\| \le \|v\| \qquad (17)$$

and that

$$\lim_{k \to \infty} w_k(s) = \begin{cases} 0 \,, & 0 \le s < 1 \,, \\ \|v\| \,, & s = 1 \,, \end{cases}$$

for any $v \in N(K)^\perp$.
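The bound in (17) comes from maximizing $\alpha^{1-s}\lambda^s/(\lambda+\alpha)$ over the spectrum, and it is easy to check numerically. In the sketch below (my own illustration: a random full-rank matrix $K$, with the fractional power $(K^*K)^s$ formed via the SVD) the bound holds for all tested $s$ and $\alpha$, and $w_k(s) \to 0$ as $\alpha_k \to 0$ for $s < 1$.

```python
# Numerical check of the spectral bound (17) (an illustration, not from the
# slides).
import numpy as np

def w(K, v, alpha, s):
    """alpha^{1-s} * ||(K^T K + alpha I)^{-1} (K^T K)^s v||, with the
    fractional power (K^T K)^s formed from the SVD of K."""
    U, sig, Vt = np.linalg.svd(K, full_matrices=False)
    KtK_s_v = Vt.T @ (sig**(2 * s) * (Vt @ v))   # (K^T K)^s v spectrally
    z = np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), KtK_s_v)
    return alpha**(1 - s) * np.linalg.norm(z)

rng = np.random.default_rng(0)
K = rng.standard_normal((6, 4))   # full column rank: N(K) = {0}
v = rng.standard_normal(4)

for s in [0.25, 0.5, 0.75]:
    bound = s**s * (1 - s)**(1 - s) * np.linalg.norm(v)
    for alpha in [1e-4, 1e-2, 1.0, 100.0]:
        # bound (17): w <= s^s (1-s)^{1-s} ||v||
        assert w(K, v, alpha, s) <= bound + 1e-12
```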


IRGNM: Convergence and convergence rates: idea of proof II

Indeed, in the linear and noiseless case ($F(x) = Kx$, $\delta = 0$) we get from (14), using $Kx^\dagger = y$ and (13),

$$\begin{aligned}
x_{k+1} - x^\dagger
&= x_k - x^\dagger + (K^*K + \alpha_k I)^{-1} \big( K^*K (x^\dagger - x_k) + \alpha_k (x_0 - x^\dagger + x^\dagger - x_k) \big) \\
&= - \alpha_k (K^*K + \alpha_k I)^{-1} (K^*K)^\mu v \,.
\end{aligned}$$

To take into account noisy data and nonlinearity, we rewrite (14) as

$$\begin{aligned}
x_{k+1}^\delta - x^\dagger
&= - \alpha_k (K^*K + \alpha_k I)^{-1} (K^*K)^\mu v \\
&\quad - \alpha_k (K_k^* K_k + \alpha_k I)^{-1} \big( K^*K - K_k^* K_k \big) (K^*K + \alpha_k I)^{-1} (K^*K)^\mu v \qquad (18) \\
&\quad + (K_k^* K_k + \alpha_k I)^{-1} K_k^* \big( y^\delta - F(x_k^\delta) + K_k (x_k^\delta - x^\dagger) \big) \,,
\end{aligned}$$

where we set $K_k := F'(x_k^\delta)$ and $K := F'(x^\dagger)$.
IRGNM: Convergence and convergence rates

Theorem
Let $B_{2\rho}(x_0) \subseteq D(F)$ for some $\rho > 0$, let (15) hold, and let

$$F'(\tilde{x}) = R(\tilde{x}, x) F'(x) + Q(\tilde{x}, x) \,,$$

$$\|I - R(\tilde{x}, x)\| \le c_R \,, \quad \|Q(\tilde{x}, x)\| \le c_Q \, \|F'(x^\dagger)(\tilde{x} - x)\| \,,$$

and

$$x^\dagger - x_0 = (F'(x^\dagger)^* F'(x^\dagger))^\mu v \,, \quad v \in N(F'(x^\dagger))^\perp$$

for some $0 \le \mu \le 1/2$, and let $k_* = k_*(\delta)$ be chosen according to the discrepancy principle (16) with $\tau > 1$. Moreover, we assume that $\|x_0 - x^\dagger\|$, $\|v\|$, $1/\tau$, $\rho$, and $c_R$ are sufficiently small. Then we obtain the rates

$$\|x_{k_*}^\delta - x^\dagger\| = \begin{cases} o\big(\delta^{\frac{2\mu}{2\mu+1}}\big) \,, & 0 \le \mu < \frac12 \,, \\ O(\sqrt{\delta}) \,, & \mu = \frac12 \,. \end{cases}$$
Remarks

- The same convergence rates result can be shown with the a priori stopping rule

  $$k_* \to \infty \quad \text{and} \quad \eta \ge \delta \alpha_{k_*}^{-\frac12} \to 0 \ \text{ as } \ \delta \to 0 \qquad (19)$$

  for $\mu = 0$, and

  $$\eta \, \alpha_{k_*}^{\mu+\frac12} \le \delta < \eta \, \alpha_k^{\mu+\frac12} \,, \quad 0 \le k < k_* \,, \qquad (20)$$

  even for $0 < \mu \le 1$.

- The a priori result remains valid under the alternative weak nonlinearity condition

  $$F'(\tilde{x}) = F'(x) R(\tilde{x}, x) \quad \text{and} \quad \|I - R(\tilde{x}, x)\| \le c_R \, \|\tilde{x} - x\| \qquad (21)$$

  for $x, \tilde{x} \in B_{2\rho}(x_0)$ and some positive constant $c_R$.
Further remarks

- Logarithmic rates: [Hohage 1997]
- Generalization to regularization methods $R_\alpha(F'(x)) \approx F'(x)^\dagger$ in place of Tikhonov [BK 1997]:

  $$x_{k+1}^\delta = x_0 + R_{\alpha_k}(F'(x_k^\delta)) \big( y^\delta - F(x_k^\delta) - F'(x_k^\delta)(x_0 - x_k^\delta) \big) \,. \qquad (22)$$

- Continuous version [BK&Neubauer&Ramm 2002]
- Projected version for constrained problems [BK&Neubauer 2006]
- Analysis with stochastic noise [Bauer&Hohage&Munk 2009]
- Analysis in Banach space [Bakushinski&Kokurin 2004], [BK&Schöpfer&Schuster 2009], [BK&Hofmann 2010]
- Preconditioning [Egger 2007], [Langer 2007]
- Quasi-Newton methods [BK 1998]
