Part I
Modelling Money in General Equilibrium: a Primer
Lecture 4
The Basic MIU model: Value function solution
Outline: Motivation · Value function approach · Example: Value function vs. Lagrange approach · MIU model: value function solution
I Motivation
This lecture goes back to the observation that Walsh (chapter 2) solves
the basic MIU model with the value function approach, while we used in
Lecture 2, alternatively, the Lagrange multiplier approach
ω_{t+1} = g(ω_t, c_t),    (2)
c_t = h(ω_t)
ω_{t+1} = g(ω_t, c_t),
max_c {u(c) + βV(ω̃)}  s.t.  ω̃ = g(ω, c), and ω given,    (3)
The task is to solve jointly for V (ω ) and h (ω ) which are linked by the
Bellman equation
There exist various methods for solving the Bellman equation, depending
on the precise nature of the functions u and g
Under certain assumptions (such as the concavity of u(c) and the convexity
and compactness of the set {(ω_{t+1}, ω_t) : ω_{t+1} ≤ g(ω_t, c_t), c_t ∈ R}),
it turns out that the solution exhibits the following elements:
V_{j+1}(ω) = max_c {u(c) + βV_j(ω̃)}  s.t.  ω̃ = g(ω, c), and ω given,

starting from an initial functional guess V_0(ω̃).
This convergence result leads to a solution procedure which is called value
function iteration
3) There exists a unique and time-invariant optimal policy of the form
c_t = h(ω_t). The derivation of the policy function uses the optimality condition

u′(c) + β [∂g(ω, c)/∂c] V′[g(ω, c)] = 0,    (6)
resulting from the maximization of the RHS of (4) w.r.t. c
Using (6), this simplifies via the envelope theorem to the expression

V′(ω) = β [∂g(ω, h(ω))/∂ω] V′[g(ω, h(ω))]    (7)
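To see the envelope step explicitly, substitute the policy into the Bellman equation and differentiate with respect to ω; the bracketed term vanishes by the first-order condition (6):

```latex
V(\omega) = u(h(\omega)) + \beta V[g(\omega, h(\omega))]
\;\Rightarrow\;
V'(\omega) =
\underbrace{\left[u'(h(\omega)) + \beta\,\frac{\partial g}{\partial c}\,
V'[g(\omega,h(\omega))]\right]}_{=\,0\ \text{by (6)}}\, h'(\omega)
\;+\; \beta\,\frac{\partial g}{\partial \omega}\,V'[g(\omega,h(\omega))]
```

which leaves exactly (7).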
These concepts may seem rather abstract, but they become concrete when used in
practice
For the following example we will verify that the two solution approaches (value
function approach and Lagrange technique) lead to the same results
Let us use
k̃ = g(k, c) = Ak^α − c
The Bellman equation (5) for the example at hand becomes:

V(k) = max_c {ln(c) + βV(Ak^α − c)}    (8)

The task is to solve jointly for the value function V(k) and the policy
function c = h(k) which satisfy

V(k) = u(h(k)) + βV[g(k, h(k))]
     = ln(h(k)) + βV[Ak^α − h(k)]    (9)
→ for a given initial guess about the value function, called V_0(k), we optimize
the RHS of (8) over c, thereby establishing a policy function h_1(k), and insert it
into the RHS of (8) to obtain a new value function V_1(k)
Assume
V_0(k) = 0

This guess holds for all feasible values of k, including k̃
Max of RHS of (8) over c yields (trivially!)

c = h_1(k) = Ak^α  and  k̃ = g(k, h_1(k)) = 0
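This iteration scheme is easy to run numerically. Below is a minimal sketch of value function iteration for the log-utility, Cobb-Douglas example on a discrete capital grid; the parameter values A, α, β and the grid bounds are illustrative assumptions, not from the lecture. The grid policy converges toward the closed-form policy (1 − αβ)Ak^α derived below.

```python
import numpy as np

# Value function iteration for V(k) = max_c { ln(c) + beta * V(A*k**alpha - c) },
# solved on a discrete grid; A, alpha, beta are illustrative values.
A, alpha, beta = 1.0, 0.3, 0.95

k_grid = np.linspace(0.05, 0.5, 200)      # capital grid (covers the steady state)
y = A * k_grid**alpha                     # output at each grid point
# choosing k' from the grid implies consumption c = A*k**alpha - k'
c = y[:, None] - k_grid[None, :]
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

V = np.zeros_like(k_grid)                 # initial guess V_0(k) = 0
for _ in range(1000):
    V_new = np.max(util + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# policy implied by the (near-)converged value function
k_next = k_grid[np.argmax(util + beta * V[None, :], axis=1)]
c_policy = y - k_next
c_closed = (1 - alpha * beta) * A * k_grid**alpha   # eqn (10)
max_err = np.max(np.abs(c_policy - c_closed))       # small; limited by grid spacing
```

Refining the grid shrinks `max_err` further; the residual is pure discretization error, not a failure of convergence.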
Policy function: consider what we got from the iterations done so far:

h_1(k) = Ak^α
h_2(k) = [1/(1 + αβ)] Ak^α
h_3(k) = [1/(1 + αβ + (αβ)^2)] Ak^α

There is a pattern behind this, ie after j = T steps we will get

h_T(k) = [1/(1 + αβ + (αβ)^2 + (αβ)^3 + ... + (αβ)^{T−1})] Ak^α

Recall: α ∈ (0, 1), β ∈ (0, 1), implying αβ ∈ (0, 1)
Thus, the iteration process ensures that the policy function converges:

c = h_∞(k) = h(k) = (1 − αβ)Ak^α    (10)

Similarly

k̃ = g(k, h_∞(k)) = g(k, h(k)) = αβAk^α    (11)
Value function: consider what we got from the iterations done so far:

V_0(k) = 0
V_1(k) = a_1 + α ln(k)
V_2(k) = a_2 + (1 + αβ)α ln(k)
V_3(k) = a_3 + [1 + αβ + (αβ)^2]α ln(k)

As concerns the terms including α ln(k), there is a pattern behind this, ie
after j = T steps we will get

V_T(k) = a_T + [1 + αβ + (αβ)^2 + (αβ)^3 + ... + (αβ)^{T−1}]α ln(k)

Since αβ ∈ (0, 1), for j → ∞ there will be convergence, ie

V_∞(k) = V(k) = a_∞ + [α/(1 − αβ)] ln(k)

What is still missing before we can fully characterize V(k)?
→ The limit term a = a_∞
Within this eqn we can determine a by combining all those terms which
are not linked to α ln(k )...
...ie from

V(k) = a + [α/(1 − αβ)] ln(k) = ln[(1 − αβ)Ak^α] + β{a + [α/(1 − αβ)] ln(αβAk^α)},

where (1 − αβ)Ak^α = h(k) and αβAk^α = g(k, h(k))
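The constant can be pinned down explicitly. Collecting in this equation the terms that do not involve ln(k) (the ln(k) terms cancel by construction of the coefficient α/(1 − αβ)):

```latex
a = \ln[(1-\alpha\beta)A] + \beta a
    + \frac{\alpha\beta}{1-\alpha\beta}\,\ln(\alpha\beta A)
\quad\Longrightarrow\quad
a = \frac{1}{1-\beta}\left\{\ln[(1-\alpha\beta)A]
    + \frac{\alpha\beta}{1-\alpha\beta}\,\ln(\alpha\beta A)\right\}
```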
The value function approach is one solution technique among many others
Alternatively, the problem at hand can be solved with the Lagrange
approach
The value function approach is often used to implicitly characterize
optimal solutions of problems for which no explicit solution exists
Moreover, it is a convenient tool to obtain numerical solutions
Consider
L_t = Σ_{t=0}^∞ β^t {ln(c_t) + λ_t [Ak_t^α − k_{t+1} − c_t]}

Optimization of L_t over the choice variables {c_t, k_{t+1}, λ_t; ∀t ≥ 0} leads
to a two-dimensional, non-linear system of first-order difference equations
in c and k, ie
the consumption Euler equation

1/c_t = β αAk_{t+1}^{α−1} · (1/c_{t+1}),    (13)

where αAk_{t+1}^{α−1} = f′(k_{t+1})
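The Euler equation (13) can be checked directly against the closed-form policy obtained from the value function iteration. A short sketch; the parameter values and the k_t points are illustrative assumptions, not from the lecture:

```python
# Check that the closed-form policy from the value function iteration satisfies
# the consumption Euler equation (13) at several arbitrary k_t values.
# A, alpha, beta are illustrative values.
A, alpha, beta = 1.0, 0.3, 0.95

def euler_gap(k_t):
    c_t  = (1 - alpha * beta) * A * k_t**alpha      # eqn (10)
    k_t1 = alpha * beta * A * k_t**alpha            # eqn (11)
    c_t1 = (1 - alpha * beta) * A * k_t1**alpha
    # LHS minus RHS of 1/c_t = beta * alpha * A * k_{t+1}**(alpha-1) / c_{t+1}
    return 1.0 / c_t - beta * alpha * A * k_t1**(alpha - 1) / c_t1

gaps = [euler_gap(k) for k in (0.05, 0.2, 0.8)]     # all ~0 up to rounding
```

The gap is zero analytically: substituting (10) and (11) into (13) reduces both sides to 1/[(1 − αβ)Ak_t^α].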
In sum, the transitional dynamics of the value function and the Lagrange
solutions are characterized by the same difference equations
c* = A(k*)^α − k* = A(k*)^α [1 − (k*)^{1−α}/A] = (1 − αβ)A(k*)^α,

where the last step uses the steady state of the Euler equation,
(k*)^{1−α} = αβA, ie k* = (αβA)^{1/(1−α)}
These steady state values are consistent with eqns (10) and (11) obtained
from the value function iteration, ie

c = h(k) = (1 − αβ)Ak^α
k̃ = g(k, h(k)) = αβAk^α,

where by concavity of k^α the values k̃ and c converge to k* and c*
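The fixed point of the law of motion (11) and the steady-state resource constraint can be verified numerically; the parameter values below are illustrative assumptions, not from the lecture:

```python
# Steady state of the capital law of motion k~ = alpha*beta*A*k**alpha (eqn (11)).
# A, alpha, beta are illustrative values.
A, alpha, beta = 1.0, 0.3, 0.95

k_star = (alpha * beta * A) ** (1 / (1 - alpha))   # solves k = alpha*beta*A*k**alpha
c_star = (1 - alpha * beta) * A * k_star**alpha    # eqn (10) evaluated at k*

# iterating the law of motion from an arbitrary positive start converges to k*
k = 0.01
for _ in range(200):
    k = alpha * beta * A * k**alpha

# resource constraint in steady state: c* = A*k*^alpha - k*
resource_gap = c_star - (A * k_star**alpha - k_star)
```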
amounting to

V(k*) = [1/(1 − β)] ln(c*)
      = [1/(1 − β)] {ln[(1 − αβ)A] + α ln(k*)}
      = [1/(1 − β)] {ln[(1 − αβ)A] + [α/(1 − α)] ln(αβA)}    (17)
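A quick numerical cross-check of (17), again with illustrative parameter values (assumptions, not from the lecture):

```python
import math

# Check that V(k*) = ln(c*)/(1 - beta) agrees with the closed form in eqn (17).
A, alpha, beta = 1.0, 0.3, 0.95

k_star = (alpha * beta * A) ** (1 / (1 - alpha))
c_star = (1 - alpha * beta) * A * k_star**alpha

v_direct = math.log(c_star) / (1 - beta)
v_eqn17 = (math.log((1 - alpha * beta) * A)
           + alpha / (1 - alpha) * math.log(alpha * beta * A)) / (1 - beta)
gap = v_direct - v_eqn17                     # ~0 up to rounding
```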
Recall from Lecture 2 that we used the Lagrange approach to derive the
intertemporal optimality conditions which characterize the basic MIU
model
By contrast, Walsh (Chapter 2) derives these conditions using the value
function approach
In view of the techniques introduced in this lecture it should be no
surprise that the two approaches generate identical results
Σ_{t=0}^∞ β^t u(c_t, m_t)

f(k_{t−1}/(1 + n)) + τ_t + (1 − δ)k_{t−1}/(1 + n) + [(1 + i_{t−1})b_{t−1} + m_{t−1}]/[(1 + n)(1 + π_t)] = c_t + k_t + b_t + m_t
ω_t ≡ f(k_{t−1}/(1 + n)) + τ_t + (1 − δ)k_{t−1}/(1 + n) + [(1 + i_{t−1})b_{t−1} + m_{t−1}]/[(1 + n)(1 + π_t)] = c_t + k_t + b_t + m_t

Using this definition of ω, we can substitute out for the (old) state
variable k, ie

k_t = ω_t − c_t − b_t − m_t

If one combines the last two eqns, the law of motion of the state
variable ω can be arranged to satisfy the structure
ω_{t+1} = g(ω_t, c_t, b_t, m_t), ie

ω_{t+1} = f((ω_t − c_t − b_t − m_t)/(1 + n)) + τ_{t+1} + (1 − δ)(ω_t − c_t − b_t − m_t)/(1 + n) + [(1 + i_t)b_t + m_t]/[(1 + n)(1 + π_{t+1})]    (19)
V(ω) = max_{c,b,m} {u(c, m) + βV(ω̃)}    (20)

subject to

ω̃ = g(ω, c, b, m)
  = f((ω − c − b − m)/(1 + n)) + τ̃ + (1 − δ)(ω − c − b − m)/(1 + n) + [(1 + i)b + m]/[(1 + n)(1 + π̃)]

and with ω given
V(ω) = max_{c,b,m} {u(c, m) + βV(ω̃)}  s.t.

ω̃ = f((ω − c − b − m)/(1 + n)) + τ̃ + (1 − δ)(ω − c − b − m)/(1 + n) + [(1 + i)b + m]/[(1 + n)(1 + π̃)]

Optimal choice of c:

u_c(c, m) + β (∂ω̃/∂c) V′(ω̃) = 0
⇒  u_c(c, m) = β [1/(1 + n)] [f′(k′) + 1 − δ] V′(ω̃)    (21)

Optimal choice of b:

β (∂ω̃/∂b) V′(ω̃) = 0
⇒  −β [1/(1 + n)] [f′(k′) + 1 − δ] V′(ω̃) + β [(1 + i)/((1 + n)(1 + π̃))] V′(ω̃) = 0    (22)
ω̃ = f((ω − c − b − m)/(1 + n)) + τ̃ + (1 − δ)(ω − c − b − m)/(1 + n) + [(1 + i)b + m]/[(1 + n)(1 + π̃)]

Optimal choice of m:

u_m(c, m) + β (∂ω̃/∂m) V′(ω̃) = 0
⇒  u_m(c, m) = β [1/(1 + n)] [f′(k′) + 1 − δ] V′(ω̃) − β [1/((1 + n)(1 + π̃))] V′(ω̃)    (23)

Optimal choice of ω:

V′(ω) = β (∂ω̃/∂ω) V′(ω̃)
⇒  V′(ω) = β [1/(1 + n)] [f′(k′) + 1 − δ] V′(ω̃)    (24)
Defining 1 + r ≡ f′(k′) + 1 − δ, combining

u_c(c, m) = β [(1 + r)/(1 + n)] V′(ω̃)   and   V′(ω) = β [(1 + r)/(1 + n)] V′(ω̃)

gives

u_c(c, m) = V′(ω)

and, accordingly,

u_c(c, m) = β [(1 + r)/(1 + n)] u_c(c̃, m̃)    (25)

Eqn (22) implies

1 + r = (1 + i)/(1 + π̃)    (26)
Combining

u_c(c, m) = β [1/(1 + n)] [f′(k′) + 1 − δ] V′(ω̃)
u_m(c, m) = β [1/(1 + n)] [f′(k′) + 1 − δ] V′(ω̃) − β [1/((1 + n)(1 + π̃))] V′(ω̃)

leads to:

u_m(c, m) = β [(1 + r)/(1 + n)] V′(ω̃) − β [1/((1 + n)(1 + π̃))] V′(ω̃)
          = [1 − 1/((1 + r)(1 + π̃))] u_c(c, m)
          = [i/(1 + i)] u_c(c, m)    (27)
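As a quick sanity check, a few lines of Python confirm that, once the Fisher relation 1 + i = (1 + r)(1 + π̃) from eqn (26) is imposed, the wedge 1 − 1/((1 + r)(1 + π̃)) in the second line indeed equals i/(1 + i); the r and π values below are arbitrary illustrations:

```python
# With 1 + i = (1 + r)(1 + pi) from eqn (26), the money-demand wedge in
# eqn (27) equals i/(1 + i). r and pi are illustrative values.
gaps = []
for r, pi in ((0.03, 0.02), (0.05, 0.0), (0.01, 0.10)):
    i = (1 + r) * (1 + pi) - 1              # Fisher relation, eqn (26)
    wedge = 1 - 1 / ((1 + r) * (1 + pi))    # coefficient on u_c in eqn (27)
    gaps.append(wedge - i / (1 + i))        # ~0 up to rounding
```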