
EC744 Lecture Notes: Discrete State Space Method

Prof. Jianjun Miao

Discretization of an AR(1) Shock

Method 1. Tauchen (1986)


Consider the AR(1) process
$$y_t = \rho y_{t-1} + u_t,$$
where $u_t$ is iid $N(0, \sigma^2)$. The standard deviation of $y_t$ is $\sigma_y = \sigma/\sqrt{1-\rho^2}$.

Step 1. Choose an $N$-point discrete state space with boundary points $y_{\max} = m\sigma_y$ and $y_{\min} = -m\sigma_y$, spaced evenly:
$$\left\{ y_{\min},\; y_{\min} + \frac{y_{\max}-y_{\min}}{N-1},\; \ldots,\; y_{\min} + \frac{(N-2)(y_{\max}-y_{\min})}{N-1},\; y_{\max} \right\} = \{y^1, \ldots, y^N\}.$$

Step 2. Choose $N$ intervals whose boundaries are the midpoints between adjacent grid points. Let $d = (y_{\max}-y_{\min})/(N-1)$ denote the grid step. Then
$$I_1 = \left(-\infty,\; y_{\min} + \frac{d}{2}\right], \qquad I_i = \left(y_{\min} + \frac{(2i-3)d}{2},\; y_{\min} + \frac{(2i-1)d}{2}\right], \quad i = 2, \ldots, N-1,$$
$$I_N = \left(y_{\min} + \frac{(2N-3)d}{2},\; +\infty\right).$$

Step 3. Determine the transition probabilities
$$\pi_{ij} = \Pr\left(y_{t+1} \in I_j \mid y_t = y^i\right).$$
Since $y_{t+1} \mid (y_t = y^i)$ is distributed $N(\rho y^i, \sigma^2)$, these are differences of normal cdfs: for interior $j$,
$$\pi_{ij} = \Phi\!\left(\frac{y^j + d/2 - \rho y^i}{\sigma}\right) - \Phi\!\left(\frac{y^j - d/2 - \rho y^i}{\sigma}\right),$$
with one-sided tail probabilities for $j = 1$ and $j = N$.


Matlab Code: markovappr.m

[z,Tran,probst,arho,asigma] = markovappr(rho,sigma,m,N)

Input:
rho  : persistence
sigma: volatility
m    : determines the width of the discretized state space; Tauchen uses m = 3, so that ymax = m*vary and ymin = -m*vary (with vary the unconditional standard deviation of y_t) are the two boundary points
N    : number of states used to approximate the y_t process; usually N = 9 should be fine

Output:
Tran  : transition matrix of the Markov chain
z     : N by 1 vector of discretized state space of y_t
probst: invariant distribution
arho  : first-order autoregression coefficient of the Markov chain
asigma: standard deviation of the Markov chain y_t
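
For concreteness, here is a minimal sketch of the procedure in Matlab. It is not the course's markovappr.m; the function name tauchen_sketch and the erfc-based normal cdf are illustrative assumptions, and the sketch returns only the grid and the transition matrix.

function [z, Tran] = tauchen_sketch(rho, sigma, m, N)
    % Illustrative sketch, not markovappr.m. Normal cdf via erfc (no toolbox).
    Phi = @(x) 0.5 * erfc(-x / sqrt(2));
    sigma_y = sigma / sqrt(1 - rho^2);    % unconditional std of y_t
    ymax = m * sigma_y;  ymin = -ymax;    % boundary points (Step 1)
    z = linspace(ymin, ymax, N)';         % N equally spaced grid points
    d = z(2) - z(1);                      % grid step
    Tran = zeros(N, N);
    for i = 1:N
        mu = rho * z(i);                  % conditional mean given y_t = z(i)
        Tran(i, 1) = Phi((z(1) + d/2 - mu) / sigma);        % left tail
        for j = 2:N-1                                       % interior bins
            Tran(i, j) = Phi((z(j) + d/2 - mu) / sigma) ...
                       - Phi((z(j) - d/2 - mu) / sigma);
        end
        Tran(i, N) = 1 - Phi((z(N) - d/2 - mu) / sigma);    % right tail
    end
end

Each row of Tran sums to one by construction, so it is a valid Markov transition matrix.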

Method 2. Tauchen and Hussey (1991)


Consider the AR(1) process
$$x_t = \rho x_{t-1} + (1-\rho)\bar{x} + u_t,$$
where $u_t$ is iid $N(0, \sigma^2)$.

Step 1. Choose $N$ intervals defined by the cutoffs $\bar{x}_1 = -\infty, \bar{x}_2, \ldots, \bar{x}_{N+1} = +\infty$. The cutoffs are chosen so that each interval carries equal probability under the unconditional distribution of $x_t$:
$$\Phi\!\left(\frac{\bar{x}_{i+1} - \bar{x}}{\sigma_x}\right) - \Phi\!\left(\frac{\bar{x}_i - \bar{x}}{\sigma_x}\right) = \frac{1}{N}, \qquad \sigma_x = \frac{\sigma}{\sqrt{1-\rho^2}}.$$
Thus,
$$\bar{x}_i = \sigma_x \, \Phi^{-1}\!\left(\frac{i-1}{N}\right) + \bar{x}.$$

Step 2. Construct the discrete state space $\{z^1, \ldots, z^N\}$ by
$$z^i = E\left[x_t \mid x_t \in [\bar{x}_i, \bar{x}_{i+1}]\right].$$

Step 3. Compute the transition probabilities
$$\pi_{ij} = \Pr\left(x_{t+1} \in [\bar{x}_j, \bar{x}_{j+1}] \mid x_t \in [\bar{x}_i, \bar{x}_{i+1}]\right).$$
The discretized Markov process is given by $z_t$.

Matlab Code: tauch_hussey.m

[s,p,probst,arho,asigma] = tauch_hussey(xbar,rho,sigma,n)

Input:
xbar : mean of the process x_t
rho  : persistence
sigma: volatility
n    : number of nodes

Output:
s     : n by 1 vector of discretized state space of x_t
p     : transition probability matrix
probst: invariant distribution
arho  : first-order autoregression coefficient of the Markov chain
asigma: standard deviation of the Markov chain x_t
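
Below is a minimal sketch of the equiprobable-bins construction above, again with illustrative names rather than the course's tauch_hussey.m. One caveat: the exact $\pi_{ij}$ requires integrating $x_t$ over bin $i$; the sketch uses the common shortcut of conditioning on the bin's representative point $z^i$.

function [s, p] = equiprob_sketch(xbar, rho, sigma, n)
    % Illustrative sketch, not tauch_hussey.m.
    Phi    = @(x) 0.5 * erfc(-x / sqrt(2));      % standard normal cdf
    PhiInv = @(u) -sqrt(2) * erfcinv(2 * u);     % inverse standard normal cdf
    phi    = @(x) exp(-x.^2 / 2) / sqrt(2*pi);   % standard normal pdf
    sx  = sigma / sqrt(1 - rho^2);               % unconditional std of x_t
    cut = xbar + sx * PhiInv((0:n)' / n);        % cutoffs; cut(1) = -Inf
    a = (cut(1:n)   - xbar) / sx;                % standardized lower edges
    b = (cut(2:n+1) - xbar) / sx;                % standardized upper edges
    s = xbar + sx * n * (phi(a) - phi(b));       % E[x | bin]; each bin has mass 1/n
    p = zeros(n, n);
    for i = 1:n
        cm = rho * s(i) + (1 - rho) * xbar;      % conditional mean given z^i
        p(i, :) = (Phi((cut(2:n+1) - cm) / sigma) ...
                 - Phi((cut(1:n)   - cm) / sigma))';
    end
end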

Simulating a Markov Chain

Step 1. Compute the cumulative transition matrix $c$ of the Markov chain, with $c_{ij} = \sum_{k=1}^{j} \pi_{ik}$. That is, $c_{ij}$ is the probability that the economy moves to a state with index lower than or equal to $j$, given that it was in state $i$ in period $t-1$.

Step 2. Set the initial state and simulate $T$ random numbers from a uniform distribution over $[0,1]$: $\{p_t\}_{t=1}^{T}$.

Step 3. Assuming the economy was in state $i$ in period $t-1$, find the index $j$ such that
$$c_{i(j-1)} < p_t \le c_{ij};$$
then $z^j$ is the state in period $t$.

Matlab Code: markovsimul.m

[chain,state] = markovsimul(PI,s,n,s0,seed)

Input:
PI  : transition matrix
s   : state vector
n   : length of simulation
s0  : initial state (index)
seed: optional seed

Output:
chain: values of the simulated Markov chain
state: index of the state
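
A minimal sketch matching the markovsimul interface described above; the function name and the internal details are assumptions, not the course code.

function [chain, state] = simulate_chain_sketch(PI, s, n, s0, seed)
    % Illustrative sketch, not markovsimul.m.
    if nargin > 4, rng(seed); end        % optional reproducibility
    c = cumsum(PI, 2);                   % c(i,j) = Pr(next index <= j | i)  (Step 1)
    p = rand(n, 1);                      % uniform draws on [0,1]            (Step 2)
    state = zeros(n, 1);  state(1) = s0;
    for t = 2:n
        % smallest j with p(t) <= c(state(t-1), j)                           (Step 3)
        state(t) = find(p(t) <= c(state(t-1), :), 1, 'first');
    end
    chain = s(state);                    % map indices to state values
end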

Value Function Iteration

The DP problem is
$$v(x, z) = \max_{y} \; F(x, y, z) + \beta \int v(y, z') \, Q(z, dz').$$

Step 1. Choose a grid for the endogenous state, $X = \{x_1, \ldots, x_n\}$, and a grid for the shock, $Z = \{z_1, \ldots, z_m\}$. Discretize the AR(1) shock as a Markov chain with transition matrix $\Pi = (\pi_{ij})$.

Step 2. Start with an initial guess for the value function $V(x, z)$ for $(x, z) \in X \times Z$, and choose a stopping criterion $\varepsilon > 0$.

Step 3. For each $x_i \in X$, $i = 1, \ldots, n$, and each $z_h \in Z$, $h = 1, \ldots, m$, compute
$$TV(x_i, z_h) = \max_{x' \in X} \; F(x_i, x', z_h) + \beta \sum_{j=1}^{m} \pi_{hj} V(x', z_j).$$

Step 4. If $\|TV(x, z) - V(x, z)\| < \varepsilon$, then stop. Otherwise, use $TV$ to update $V$ and go back to Step 3.
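
A minimal sketch of Steps 2 through 4, assuming the one-period return has been precomputed as an n x n x m array with F(i,k,h) = F(x_i, x_k, z_h); this data layout, the function name, and passing beta and tol as arguments are my own assumptions.

function V = vfi_sketch(F, PI, beta, tol)
    % Illustrative value function iteration on precomputed returns.
    [n, ~, m] = size(F);
    V = zeros(n, m);                      % initial guess (Step 2)
    err = inf;
    while err > tol
        EV = V * PI';                     % EV(k,h) = sum_j PI(h,j) V(k,j)
        TV = zeros(n, m);
        for h = 1:m                       % Bellman update (Step 3)
            TV(:, h) = max(F(:, :, h) + beta * repmat(EV(:, h)', n, 1), [], 2);
        end
        err = max(abs(TV(:) - V(:)));     % sup-norm stopping rule (Step 4)
        V = TV;
    end
end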

Policy Function Iteration

Step 1. Set a stopping criterion $\varepsilon > 0$. Start with an initial guess of the policy function $g(x, z)$.

Step 2. Compute the value function $V(x, z)$ associated with $g$, assuming that this policy is operative forever.

Step 3. Find a new policy function $\tilde{g}(x, z)$ such that $\tilde{g}(x, z_h)$ solves
$$\max_{y \in X} \; F(x, y, z_h) + \beta \sum_{j=1}^{m} \pi_{hj} V(y, z_j).$$

Step 4. If $\|\tilde{g}(x, z) - g(x, z)\| < \varepsilon$, then stop. Otherwise, use $\tilde{g}$ to update $g$ and go back to Step 2.

In Step 2, we solve the linear system
$$V(x_i, z_h) = F(x_i, g(x_i, z_h), z_h) + \beta \sum_{j=1}^{m} \pi_{hj} V(g(x_i, z_h), z_j)$$
for $V(x_i, z_h)$. In matrix form, stacking $V$ and the returns as $(nm) \times 1$ vectors,
$$TV(x, z) = (I - \beta Q)^{-1} F(x, g(x, z), z),$$
where $Q$ is the $(nm) \times (nm)$ transition matrix induced by the policy and the shock, with entries
$$Q_{(i,h),(k,j)} = \begin{cases} \pi_{hj} & \text{if } g(x_i, z_h) = x_k, \\ 0 & \text{otherwise.} \end{cases}$$
$TV(x, z)$ is an $(nm) \times 1$ vector. We have to reshape it as an $n \times m$ matrix:
$$V(x, z) = \mathrm{reshape}(TV(x, z), n, m).$$
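
A minimal sketch of this evaluation step. It assumes the policy is stored as an n x m index array gidx with g(x_i, z_h) = x(gidx(i,h)), and that Fg is the n x m array of returns under the policy; both names are illustrative.

function V = evaluate_policy_sketch(Fg, gidx, PI, beta)
    % Build the (nm x nm) policy-induced transition matrix Q, then solve
    % (I - beta*Q) V = F for the stacked value vector.
    [n, m] = size(gidx);
    M = n * m;
    Q = zeros(M, M);
    for h = 1:m
        for i = 1:n
            row = (h - 1) * n + i;              % index of state (x_i, z_h)
            for j = 1:m
                col = (j - 1) * n + gidx(i, h); % index of (g(x_i,z_h), z_j)
                Q(row, col) = PI(h, j);
            end
        end
    end
    V = (eye(M) - beta * Q) \ Fg(:);            % stacked linear solve
    V = reshape(V, n, m);                       % back to an n x m array
end

Note that Fg(:) stacks columns, so entry (i,h) lands at position (h-1)*n + i, consistent with the indexing of Q.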

Using Interpolation

Use fewer grid points for the state variable but more grid points for the choice (control) variable. This is faster and more accurate.

Step 1. Choose a grid $X$ for the state variable $x$ and a grid $Y$ for the choice variable $y$:
$$X = \{x_1, x_2, \ldots, x_n\}, \qquad Y = \{y_1, y_2, \ldots, y_N\}, \qquad N \gg n.$$

Step 2. Start with a guess $V(x, z)$ and choose a stopping criterion $\varepsilon > 0$.

Step 3. Compute an interpolated value function $\hat{V}(x, z)$ from $V(x, z)$ and solve
$$TV(x_i, z_h) = \max_{y \in Y} \; F(x_i, y, z_h) + \beta \sum_{j=1}^{m} \pi_{hj} \hat{V}(y, z_j)$$
at each point $(x_i, z_h)$.

Step 4. If $\|TV - V\| < \varepsilon$, then stop. Otherwise, use $TV$ to update $V$ and go back to Step 3.
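
A minimal sketch of one interpolated Bellman update at a point $(x_i, z_h)$, using Matlab's built-in interp1 for linear interpolation; the helper name and the precomputed return vector Fih(k) = F(x_i, y_k, z_h) are assumptions. The points in Y should lie inside [x_1, x_n] so that interp1 does not return NaN.

function [TVih, ystar] = bellman_interp_sketch(X, Y, V, Fih, PI, beta, h)
    % One maximization of the interpolated Bellman operator (Step 3).
    EV  = V * PI(h, :)';                  % EV(k) = sum_j PI(h,j) V(x_k, z_j)
    EVy = interp1(X, EV, Y, 'linear');    % interpolate EV onto the fine grid Y
    [TVih, idx] = max(Fih + beta * EVy);  % maximize over the N choice points
    ystar = Y(idx);                       % maximizing choice
end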

Computing the Stationary Distribution

The optimal policy function $x' = g(x, z)$ and the shock $z$ induce a Markov chain $s_t = (x_t, z_t)$. Let the $j$th element of $s$ be the pair $(x_i, z_h)$, where $j = (h-1)n + i$. Thus,
$$s = \left[(x_1, z_1), \ldots, (x_n, z_1), (x_1, z_2), \ldots, (x_n, z_2), \ldots, (x_1, z_m), \ldots, (x_n, z_m)\right]'.$$
The transition matrix is given by
$$\Pr\left((x_{t+1}, z_{t+1}) = (x', z') \mid (x_t, z_t) = (x, z)\right) = I(x, x', z)\, \pi_{zz'},$$
where $I(x, x', z) = 1$ if $x' = g(x, z)$ and $0$ otherwise. This defines an $M \times M$ matrix $P$, where $M = nm$. Suppose $P$ has a stationary distribution $\pi^*$. Then the stationary distribution over $(x, z)$ is $\lambda(x_i, z_h) = \pi^*(j)$, where $j = (h-1)n + i$.
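
A minimal sketch that computes $\pi^*$ by iterating the chain forward from the uniform distribution; the function name and the tolerance argument are assumptions.

function pstar = stationary_sketch(P, tol)
    % Iterate pi <- pi*P until convergence to the stationary distribution.
    M = size(P, 1);
    pstar = ones(1, M) / M;              % start from the uniform distribution
    err = inf;
    while err > tol
        pnew = pstar * P;                % one step of the chain
        err  = max(abs(pnew - pstar));
        pstar = pnew;
    end
end

Reshaping the result, reshape(pstar, n, m), recovers $\lambda(x_i, z_h)$ as an n x m array.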

Simulating the Model

Use the deterministic growth model as an example. One can use Chebyshev polynomials to approximate the policy function. Alternatively, use interpolation, which is easy to program.
