
1 Ordinary Kriging

1.1 Introduction

Kriging is a geostatistical interpolation technique for estimating unknown values from data observed at known locations. The estimate is a weighted linear combination of the known sample values around the point to be estimated. The basic technique is Ordinary Kriging, and it relies on variogram models to express the autocorrelation relationships between the sample values in the data set.

In Ordinary Kriging, the mean is assumed to be constant but unknown within the area of interest (search radius). This method is highly reliable and is recommended for most data sets. The ordinary kriging (OK) predictor is a weighted linear combination of the available samples:
\hat{Z}(x_0) = \sum_{i=1}^{n} w_i Z(x_i)    (1)

where Z is assumed to be stationary, i.e. it has a constant unknown mean, μ, and a known variogram, γ(h). The random variables are Z = [Z(x_1), ..., Z(x_n)], and the unknown value at the point under estimation is Z(x_0). The weights are determined from the data using the variogram and two statistical optimality criteria: unbiasedness and minimum mean-squared prediction error (MSPE).
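As a concrete illustration of such a variogram model, the sketch below implements the classical spherical model; the function name and the parameters sill, a (range) and nugget are hypothetical choices for this example, and any other valid variogram model could be used in its place.

import numpy as np

def spherical_variogram(h, sill=1.0, a=10.0, nugget=0.0):
    """Classical spherical variogram model gamma(h).

    gamma(h) = nugget + sill * (1.5*h/a - 0.5*(h/a)**3)   for 0 < h <= a
    gamma(h) = nugget + sill                              for h > a
    gamma(0) = 0
    """
    h = np.asarray(h, dtype=float)
    ratio = np.minimum(h / a, 1.0)
    gamma = nugget + sill * (1.5 * ratio - 0.5 * ratio ** 3)
    return np.where(h > 0, gamma, 0.0)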

1.2 Point kriging

The estimation of a parameter at an unobserved location x_0 is one of the most common interpolation problems. In order to predict the unknown value at a point, a linear combination of the values at known locations is to be found. The procedure used is called point kriging, and its estimator is of the form:
\hat{Z}(x_0) = \sum_{i=1}^{n} w_i Z(x_i)    (2)

where w_i, i = 1, ..., n, are the unknown weights. It is appropriate to select the weights so as to obtain an unbiased estimator with the smallest possible estimation variance. Since Z is assumed to be stationary, E[Z(x_0)] = μ for all x_0 ∈ D. This implies that
E\big[\hat{Z}(x_0)\big] = \sum_{i=1}^{n} w_i E\big[Z(x_i)\big] = \mu    (3)

Since E[Z(x_i)] = μ, the right-hand side equals μ Σ_i w_i, so the unbiasedness condition requires the weights to sum to 1:


\sum_{i=1}^{n} w_i = 1

Now, using the stationarity hypothesis, the estimation variance is calculated with the help of the covariance function. For the random function model, the error variance is denoted σ_R² and the covariances are written C_ij.

\sigma_R^2(x_0) = \mathrm{Var}\big[Z(x_0) - \hat{Z}(x_0)\big] = E\Big[\Big(Z(x_0) - \sum_{i=1}^{n} w_i Z(x_i)\Big)^2\Big]
    = E\big[Z(x_0)^2\big] + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j\, E\big[Z(x_i) Z(x_j)\big] - 2 \sum_{i=1}^{n} w_i\, E\big[Z(x_i) Z(x_0)\big]
    = \sigma^2 + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j C_{ij} - 2 \sum_{i=1}^{n} w_i C_{i0}    (4)

Here the estimation variance is a quadratic function of the weights w_i. One method for solving constrained optimization problems (i.e. with the unbiasedness constraint) is the Lagrange multiplier m.
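As a minimal sketch of Equation (4), the function below evaluates the error variance for a given weight vector; the argument names, and the assumption that the covariances C_ij, C_i0 and the variance σ² have already been computed from a covariance model, are hypothetical choices for this example.

import numpy as np

def error_variance(weights, C, c0, sigma2):
    """Equation (4): sigma_R^2 = sigma^2 + w' C w - 2 w' c0.

    weights : (n,)   candidate kriging weights
    C       : (n, n) covariances C_ij between the sample locations
    c0      : (n,)   covariances C_i0 between the samples and x0
    sigma2  : variance of Z (the covariance at lag 0)
    """
    w = np.asarray(weights, dtype=float)
    return sigma2 + w @ C @ w - 2.0 * w @ c0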

1.3 The Lagrange multiplier

The method of Lagrange multipliers is a technique for converting a constrained minimization problem into an unconstrained one. To solve the minimization of σ_R², we set its n partial derivatives to 0; this results in n equations with n unknowns. The unbiasedness condition adds another equation (n+1 in total) but no further unknowns. Since the solution of this system is a bit complicated, a variable m, the Lagrange parameter, is introduced into Equation (4) as shown below:
\sigma_R^2 = \sigma^2 + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j C_{ij} - 2 \sum_{i=1}^{n} w_i C_{i0} + 2m\Big(\sum_{i=1}^{n} w_i - 1\Big)    (5)

The term we have added in Equation (5) does not change its value, since it is 0 due to the unbiasedness condition:
\sum_{i=1}^{n} w_i = 1
\sum_{i=1}^{n} w_i - 1 = 0
2m\Big(\sum_{i=1}^{n} w_i - 1\Big) = 0

By adding the new term in Equation (5), the error variance becomes a function of n+1 variables: the n weights and the Lagrange parameter. Setting the n+1 first partial derivatives to 0, one for each of these variables, yields a system of n+1 equations in n+1 unknowns. Moreover, the partial first derivative with respect to m involves only the added term:
\frac{\partial \sigma_R^2}{\partial m} = \frac{\partial}{\partial m}\, 2m\Big(\sum_{i=1}^{n} w_i - 1\Big) = 2\Big(\sum_{i=1}^{n} w_i - 1\Big)

Setting the above quantity to 0 gives the unbiasedness condition:


\sum_{i=1}^{n} w_i = 1

This equation system, known as the kriging system in terms of covariances, yields the weights w_i for the linear estimator. The solution of the n+1 equations gives the set of weights that minimizes σ_R² under the constraint that the weights sum to 1, and it also provides a value for m.
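A minimal sketch of solving this (n+1)-equation system in covariance form is given below; it assumes the covariance matrix C (entries C_ij) and the vector c0 (entries C_i0) have already been computed from a covariance model, and it uses numpy's linear solver rather than any dedicated kriging library.

import numpy as np

def ordinary_kriging_weights(C, c0):
    """Solve the (n+1) x (n+1) ordinary kriging system in covariance form.

    Returns the n kriging weights w_i and the Lagrange parameter m.
    """
    n = C.shape[0]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = C        # sample-to-sample covariances C_ij
    A[n, n] = 0.0        # corner element of the constraint row/column
    b = np.ones(n + 1)
    b[:n] = c0           # sample-to-target covariances C_i0
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]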

1.4 Ordinary Kriging using variogram

The estimation variance can be expressed with the variogram function as:
\sigma^2(x_0) = \mathrm{Var}\big[Z(x_0) - \hat{Z}(x_0)\big] = -\sum_{j=1}^{n}\sum_{i=1}^{n} w_j w_i \gamma_{ij} + 2 \sum_{i=1}^{n} w_i \gamma_{i0}

The aim is to minimise σ² under the unbiasedness condition. The problem can be solved as a linear equation system by introducing the Lagrange parameter m; the weights w_i that minimize σ² satisfy:
\sum_{j=1}^{n} w_j \gamma_{ij} + m = \gamma_{i0}, \qquad i = 1, \dots, n
\sum_{j=1}^{n} w_j = 1

The above equations form the kriging system, the w_i are the kriging weights, and σ² is called the kriging variance. The kriging error variance can be simplified to:
\sigma_R^2 = \sum_{i=1}^{n} w_i \gamma_{i0} + m
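The variogram form can be sketched in the same way; Gamma (entries γ_ij) and g0 (entries γ_i0) are assumed to be precomputed from a variogram model such as the spherical model above, and the kriging variance is returned using the simplified expression σ_R² = Σ w_i γ_i0 + m.

import numpy as np

def ordinary_kriging_variogram(Gamma, g0):
    """Solve the OK system in variogram form; return the weights, the
    Lagrange parameter m, and the kriging variance sum_i w_i*gamma_i0 + m."""
    n = Gamma.shape[0]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = Gamma    # gamma_ij between sample locations
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = g0           # gamma_i0 between the samples and x0
    sol = np.linalg.solve(A, b)
    w, m = sol[:n], sol[n]
    kriging_variance = w @ g0 + m
    return w, m, kriging_variance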

1.5 Block Kriging

A simple way to estimate the average expected value over a given region is block kriging, which is closely analogous to point kriging. The quantity estimated by the block kriging method is:

Z(V) = \frac{1}{|V|} \int_V Z(x_0)\, dx_0

where Z(V) is a random variable corresponding to the mean value over a volume V (the block) in the domain D. Here again the linear estimator is of the form:

\hat{Z}(V) = \sum_{i=1}^{n} w_i Z(x_i)

and the weights have to be found. The unbiasedness condition again results in:


\sum_{i=1}^{n} w_i = 1

Now the estimation variance in this case is:

\sigma^2(V) = \mathrm{Var}\big[Z(V) - \hat{Z}(V)\big] = -\bar{\gamma}(V, V) - \sum_{j=1}^{n}\sum_{i=1}^{n} w_j w_i \gamma_{ij} + 2 \sum_{i=1}^{n} w_i \bar{\gamma}(x_i, V)
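In practice the point-to-block term γ̄(x_i, V) and the block-to-block term γ̄(V, V) are approximated by discretizing the block V into a regular grid of points; a sketch under that assumption follows, where gamma is any variogram function of distance (for instance the spherical model above) and block_points is a hypothetical (m, d) array of discretization points.

import numpy as np

def gamma_bar_point_block(x, block_points, gamma):
    """Approximate gamma_bar(x, V): average variogram value between the
    point x and the points discretizing the block V."""
    d = np.linalg.norm(block_points - np.asarray(x, dtype=float), axis=1)
    return gamma(d).mean()

def gamma_bar_block_block(block_points, gamma):
    """Approximate gamma_bar(V, V): average variogram value over all
    pairs of discretization points of the block V."""
    diff = block_points[:, None, :] - block_points[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    return gamma(d).mean()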

Ordinary kriging equations: Matrix form

The ordinary kriging system can also be written in matrix form:

\begin{pmatrix} C_{11} & \cdots & C_{1n} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ C_{n1} & \cdots & C_{nn} & 1 \\ 1 & \cdots & 1 & 0 \end{pmatrix}
\begin{pmatrix} w_1 \\ \vdots \\ w_n \\ m \end{pmatrix}
=
\begin{pmatrix} C_{10} \\ \vdots \\ C_{n0} \\ 1 \end{pmatrix}    (6)

To obtain the weights, we multiply both sides of Equation (6) by C^{-1}, where C denotes the left-hand matrix of Equation (6), w the vector (w_1, ..., w_n, m)^T and D the right-hand side:

C \cdot w = D
C^{-1} \cdot C \cdot w = C^{-1} \cdot D
I \cdot w = C^{-1} \cdot D
w = C^{-1} \cdot D    (7)
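Numerically one would solve Equation (6) directly rather than form the inverse explicitly; the tiny two-sample sketch below uses purely illustrative covariance values.

import numpy as np

# Left-hand matrix and right-hand side of Equation (6) for n = 2 samples;
# the covariance values are illustrative only.
C = np.array([[1.0, 0.6, 1.0],
              [0.6, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
D = np.array([0.8, 0.5, 1.0])

sol = np.linalg.solve(C, D)       # preferable to computing C^{-1} explicitly
weights, m = sol[:-1], sol[-1]
print(weights, weights.sum(), m)  # the weights sum to 1 by construction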
