
3

Modeling and Calibration

Background

Whenever scientific or engineering measurements are made, the result is
normally not in a form that immediately expresses the knowledge that the
measurements represent. The user wants two things: measurements expressed
in quantities with appropriate units attached, and some form of mathematical
model that encapsulates the entire set of measurements so that they can be
described by a function or set of equations. In both tasks, mathematical
techniques are used to give the best match possible between the measured
data and the mathematical relationships representing the data.
As indicated in Chapter 2, data acquisition instrumentation must be
calibrated against a standard. The first task is to convert the raw
measurement to a quantity with appropriate units that represents the
measurement. To do this, one must develop a mathematical mapping that
relates the quantity measured to the data collected. As shown in Figure 3.1,
this calibration should include the sensor that actually responds to the
quantity measured and the signal processing devices used prior to the
sampling of the voltage or current produced in response to the quantity
being measured. The second task is to represent the entire data set with an
analytical function, developing a mapping between the acquired data and a
theoretical model for the data. This mapping may also have to smooth over
inaccuracies and noise in the measured data.
Both tasks can use the same mathematical tools to achieve the desired
goals. This chapter explores the basic concepts used to develop this mathe-
matical mapping and discusses the method of least squares for modeling
the results. These tools allow us to model data and obtain a best represen-
tation. The initial discussion deals with calibration, but we keep the other
uses for the tools in mind as well.

1094_frame_C03 Page 47 Thursday, April 11, 2002 10:01 AM
2002 by Taylor & Francis Group, LLC


Introduction to PCM Telemetering Systems, 2nd edition

Objectives

This chapter examines topics related to modeling data for calibration and
analysis purposes. At the end of this discussion, the student will be able to:
1. Understand the need for and general strategy of instrument
calibration
2. Perform a least squares fit of a polynomial to a data set
3. Justify the order of the model chosen to fit the data
4. Estimate the statistical confidence of a data point
We will use standard statistical tools such as those found in Microsoft
Excel and other packages. The student is encouraged to use these packages
whenever possible rather than making the computations by hand.

Basics

Before beginning the mathematical techniques associated with calibration
and data modeling, we first look at individual issues related to each.
The two uses have different approaches and needs even if the mathematical
tools are basically the same.

Calibration

Calibration has several official definitions. Calibration is (1) the
adjustment of a device so that the output is within a specific range for a
particular value of the input; (2) a comparison of the indication of an
instrument under test, or registration of the meter under test, with an
appropriate standard; or (3) the process of determining the numerical
relationship, within an overall stated uncertainty, between the observed
output of a measurement system and the value, based on standard sources,
of the physical quantity being measured.[4]

From this, we see that two approaches can be followed: certifying that
the measurement is within some preapproved standard range for the
measurement, and determining the amount of deviation from the desired
standard and expressing that quantity numerically. In the first case, the
instrument can be adjusted to correct improper readings. In the second
case, adjustment may be possible, but it may also be adequate to leave the
instrument alone and apply numerical corrections to the result to
achieve the correct result. Both techniques require that known standard


inputs are applied to the instrument and the resulting output observed. In
this chapter, we will concentrate on the analysis-based method.
There are two general strategies for developing the calibration equation:
(1) analytically model the transfer functions for each stage in the data
collection process (T_S, T_C, and T_M in Figure 3.1) and develop a process
transfer function, T_process, such as:

T_process = T_S T_C T_M    (3.1)

and fit any free parameters such as gain, sensitivity, and bandwidth to the
data set; or (2) develop an overall equation describing the mapping using
polynomial equations of the form:

T_process = Σ_i a_i V^i    (3.2)

and fit the coefficients without trying to model the underlying
characteristics of these devices.
The first strategy results in a predefined equation with a certain, and
hopefully small, number of unknown parameters that must be empirically
determined. They may have to be determined individually for each device
to account for device-to-device variations. The second strategy results in

FIGURE 3.1
Measurement devices included in the calibration process. (An applied
stimulus passes through the sensor, T_S, a signal conditioner, T_C, and a
measurement device, T_M, where the voltage or current is converted to a
number; the calibration region spans all three stages.)



a polynomial or power law model that the engineer develops to produce a
reasonable mapping to the data. The mapping may not have physical
significance as in the first strategy. For expediency, most engineers prefer
the second technique. We do not try to fully understand and account for
all the potential effects at the individual component level; rather, we try
to model the overall trend, thereby treating the measurement devices as
a system.

Sensor Example

Let us begin the calibration discussion by considering a capacitive rain
gauge (CRG) as a typical sensor that needs calibration. The CRG works by
using a captured volume of water to make a capacitor. The output voltage
across the CRG is proportional to the stored water volume. As rain falls
during a storm, the gauge fills and the output voltage gives an integrated
rainfall measurement. An example of a reading from a CRG during a
simulated rainstorm is shown in Figure 3.2. The actual volume, the mea-
sured volume, and the output voltage level are given. We can see a slight
difference between the actual volume and the measured volume of water
in the gauge.
We have found from experience that this difference is also a function of
external air temperature. To understand how the CRG is behaving, we peri-
odically add known quantities of water to the gauge and measure the output.
One such calibration measurement set is given in Table 3.1 and the input


output relationship is illustrated in Figure 3.3. The relationship is basically
linear and two related questions arise: is this truly linear and, if so, what are
the slope and intercept values? We will use mathematical and statistical

FIGURE 3.2
Simulated output of a capacitive rain gauge during a rainstorm. (True gauge
volume and measured output plotted against time in seconds.)


techniques to answer these questions so that we can apply the results to
make accurate measurements.

Calibration Range

The sensor has a natural range of valid input and output levels, as
illustrated in Figure 3.4. First, it has a minimum input signal level
necessary to excite the sensor. It also has a maximum input signal level
above which the sensor fails. Between them is a normal sensor response
region where the user will want measurements made. This is the region over
which the calibration function is defined to match the sensor. On either
side of that boundary, signal conditions may exceed the sensor's
specifications.

TABLE 3.1

Rain Gauge Data

Exact input (mm) Measured output (mm)

0 0.07
5 4.37
10 10.31
15 16.38
20 22.19
25 27.70
30 33.07
35 38.27
40 43.56
45 48.76
50 54.04
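The least squares machinery developed later in this chapter answers exactly these questions. As a preview, a minimal sketch of a straight-line fit to the Table 3.1 data in plain Python (no external libraries assumed; the variable names are illustrative):

```python
# Least squares straight-line fit (slope and intercept) to the rain gauge
# calibration data of Table 3.1.
exact = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]          # input (mm)
measured = [0.07, 4.37, 10.31, 16.38, 22.19, 27.70,
            33.07, 38.27, 43.56, 48.76, 54.04]              # output (mm)

n = len(exact)
mean_x = sum(exact) / n
mean_y = sum(measured) / n
# slope = S_xy / S_xx; the intercept places the line through the means
sxx = sum((x - mean_x) ** 2 for x in exact)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(exact, measured))
slope = sxy / sxx
intercept = mean_y - slope * mean_x
print(slope, intercept)  # slope ≈ 1.094, intercept ≈ -0.20
```

The fitted slope near 1 and intercept near 0 confirm the visual impression from Figure 3.3 that the gauge is close to, but not exactly, ideal.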

FIGURE 3.3
Actual versus measured output values for the CRG. (Output volume in mm
plotted against input volume in mm.)



These specifications define the usable signal levels and environmental
variables. For example, a sensor may have temperature, humidity, and
vibration specifications that cannot be exceeded if the sensor is to operate
properly. If the sensor exceeds these specifications, varying results will
occur. A device that is slightly out of specification will usually give
inaccurate readings, but the device is not permanently damaged. It will
continue to operate when it is returned to normal specifications, although
it may need to be recalibrated to achieve accurate readings again. If the
device is subjected to extreme conditions, the sensor may be permanently
damaged and will have to be replaced.

Measurement Calibration Process

Calibration is the process whereby specific inputs are applied to a
measurement system and the corresponding outputs are measured. The input is
the known quantity, or independent variable, while the measurement output
is the unknown quantity, or dependent variable. The input values must relate
to a known and reliable standard. This standard may come from a local
laboratory and provide a reasonably close approximation to the exact
definition of the quantity. Alternatively, the laboratory standard may be
calibrated against another standard, which is in turn calibrated against a
primary standard as defined by a national standards laboratory. In all
cases, we wish to "truth" the measurement.
Calibration should be done regularly. Most instruments have a calibration
interval, which is defined as the maximum length of time between

FIGURE 3.4
Sensor calibration and sensitivity regions. (Sensor output versus input: a
valid calibration interval for normal sensor conditions is flanked by a
region where calibration is invalid but the sensor is unchanged and a region
where calibration is invalid and the sensor characteristics may change,
ending in total sensor failure. The calibration curve applies only within
the valid interval.)


calibration services during which each standard and test and measuring
equipment is [sic] expected to remain within specific performance levels
under normal conditions of handling and use.[4] Calibrations are also
applied if the measurement system is subject to large stresses. In some
applications, it is standard procedure to calibrate the measurements at the
start of each experiment or major use to prevent subsequent problems. This
can be especially important for uses involving human health and safety.
Calibration is not a random process. A preferred method such as the
following is usually adopted:
1. Start the standard input at the middle of the input-output range.
2. Record input and output values at the desired step interval until
the maximum input level is reached.
3. Start to decrease the input level and record input and output values
at the desired step interval until the minimum input level is reached.
4. Start to increase the input level and record input and output values
at the desired step interval until the midpoint input level is reached
again.
In all cases, the maximum and minimum input levels are not to be passed.[5]
This will result in a calibration curve from which hysteresis error and
other sensitivity changes can be determined.
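The four-step sweep above can be sketched as a small routine that generates the ordered list of standard input levels. This is a Python sketch; the function name and the particular endpoints and step are illustrative, not from the text:

```python
def calibration_sweep(lo, hi, step):
    """Ordered standard-input levels: mid -> max -> min -> mid,
    never passing the lo/hi limits (the preferred calibration order)."""
    mid = (lo + hi) / 2.0
    up_to_max = [mid + i * step for i in range(int((hi - mid) / step) + 1)]
    down_to_min = [hi - i * step for i in range(1, int((hi - lo) / step) + 1)]
    back_to_mid = [lo + i * step for i in range(1, int((mid - lo) / step) + 1)]
    return up_to_max + down_to_min + back_to_mid

# e.g. a 0-50 mm rain gauge calibrated in 5 mm steps
seq = calibration_sweep(0.0, 50.0, 5.0)
```

Because the sweep visits every level once going up and once going down, the recorded pairs can later be split into up-scale-going and down-scale-going curves for the hysteresis check.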

Calibration Variables

The purpose of calibration and modeling is to relate the measured quantity
to the measurand via some form of mathematical relationship. Mathematical
mapping will relate the measurand to a physical output variable such as
voltage or current. As in the CRG example, this mapping is also a function
of temperature and, to a certain extent, age. In the most general form, this
mapping will look like:

measurand = f(output variable; environmental variables)    (3.3)

The exact form of the functional mapping is up to the analyst to develop.
The mapping can be linear or nonlinear, depending on the type of system
used. We will examine a technique to perform the analysis regardless of the
variables chosen. For example, let the output variable be the voltage, V,
and the environmental variables be the temperature, T, and the humidity, H.
Suppose we know from physical analysis that a second-order relationship
exists between the measurand and the system output voltage. The mapping
function could then be of the form:

measurand = a_0 + a_1 V + a_2 V² + a_3 T + a_4 H    (3.4)



This will allow us to fit the expected functional variation plus the
environmental influence. Naturally, to fully calibrate the instrumentation
over this parameter space, a great number of measurements over all
combinations of the variables must be made. This effort is justified if the
instrumentation must make very precise measurements or must be used over a
wide range of conditions. If the instrumentation is to be operated in
well-controlled conditions, environmental effects may be ignored.
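A mapping of the form in Equation (3.4) can be fitted by least squares through the normal equations. The sketch below is illustrative only: the data are synthetic, generated from assumed "true" coefficients so that the fit can be checked, and no external libraries are assumed:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mapping(rows):
    """rows: list of (V, T, H, measurand). Returns [a0, a1, a2, a3, a4]
    from the normal equations (X^T X) a = X^T y."""
    X = [[1.0, V, V * V, T, H] for V, T, H, _ in rows]
    y = [m for _, _, _, m in rows]
    XtX = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(5)]
           for i in range(5)]
    Xty = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(5)]
    return solve(XtX, Xty)

# Synthetic calibration sweep over combinations of V, T, and H
true_a = [0.5, 2.0, -0.1, 0.03, 0.01]   # assumed coefficients, illustration only
data = []
for V in (0.0, 1.0, 2.0, 3.0, 4.0):
    for T in (10.0, 25.0, 40.0):
        for H in (20.0, 60.0):
            m = (true_a[0] + true_a[1] * V + true_a[2] * V * V
                 + true_a[3] * T + true_a[4] * H)
            data.append((V, T, H, m))

coeffs = fit_mapping(data)   # recovers true_a on noiseless data
```

Note how the calibration data must span all combinations of V, T, and H; otherwise the normal-equation matrix becomes singular and the environmental coefficients cannot be separated from the voltage terms.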

Difference between Calibration and Usage

During calibration, the measurand is replaced by a known standard and it
is treated as the independent variable. The measurement output is the
unknown and is treated as the dependent variable. In actual usage, the roles
are reversed. In usage mode, the measurement process output is the known
quantity and the measurand is the unknown. At this point the calibration
process is reversed to deduce the correction required to make a correct
measurement. Depending upon how the modeling is performed, this differ-
ence implies that the modeling equation may need to be inverted for use
with the measurement data.

Data Modeling

In many aspects, data modeling and developing calibration mapping are
similar processes. We will now discuss the differences and extensions of the
basic concept.

Difference between Calibration and Data Modeling

In the calibration process, the mathematical equations used to model the
process work on the specifics of the input-output process of the
measurement system. The next step is to perform mathematical modeling of the entire
data set. There may be an underlying physical theory that unites the data.
The least squares techniques discussed later in this chapter can also be
applied to the entire data set to fit the data to the model. The goal is the
same: develop a numerical technique that can be used to represent the
entirety of the data set and allow the user to work with the equation and
not with individual data points. Generally, the calibrated measurand values
are used in the modeling process instead of raw uncalibrated measurements.
While the two steps could, in principle, be combined, usually they are not.
Keeping them separate allows easier manipulation of different model classes.

Modeling as Filtering

The least squares technique discussed later in this chapter can also be
applied in the filtering and signal processing methods to be developed in
Chapter 5. In a sense, the least squares technique can act as a filter to smooth the


data through noise that may be present in the measurement process. This is
important because real-world measurements will always include a certain
amount of uncertainty and noise. These techniques therefore do more than
simply model the underlying data process.

Error Types

Various types of errors can affect the accuracy and precision of the measure-
ment process. This section covers the types of errors that can be found in
measurements. The next section explores statistical ways of describing ran-
dom errors. Other types of errors can be quantied and compensated for by
proper calibration.

Systematic Errors

If there is bias in the measurement equipment or in the observer making the
measurement, an offset can develop between the true value and the measured
value. This is illustrated in Figure 3.5. The line on the right side of the
figure represents the true value that the measurement process is attempting
to discover. The systematic error acts to displace the measurements away
from the true value. One can think of the systematic error as a DC voltage
applied to the measurement process. In this case, the systematic error
displaces the measurements toward the value in the middle of the figure. By
proper calibration, this displacement can be discovered and the appropriate
compensation applied.

FIGURE 3.5
Relationship between random and systematic errors with respect to the true
value. (The distribution of actual measurements is centered on the mean
value of the measurements, which is offset from the true value by the
systematic error; the spread of the distribution is the measurement
uncertainty.)



Random Errors

The second effect shown in Figure 3.5 is caused by random errors in the
measurement process. Random errors can come from a number of sources
such as electrical noise and they tend to obscure the central value, even when
a large number of measurements are made. The best we can do is estimate
the underlying value. From a voltage point of view, random errors do not
have DC values such as systematic errors have. Random errors provide a
displacement to the measurement. However, a random error will not be
apparent based on only one value. Many samples of a measurement are
needed to see the extent of the random error. The mathematical fitting
procedure tends to smooth the measurements and represent estimates of the
underlying values.

Interference

Sometimes a signal that is not directly part of the measurement process will
seep into the electronics and cause interference. This stray signal can bias
the result, but it is difficult to remove via calibration alone because the
interfering signal is not part of the system. Additionally, it is not always
like a random error, because the underlying interference may have a
deterministic structure. The best solution for interference is to shield the
measurement process from it so that the interference does not corrupt the
measurement.

Hysteresis Error

Starting from the middle of the measurand range and scanning from low to
high and then back to low across the measurand range may be the best way
to perform calibration to determine whether a hysteresis error occurred in
the measurement system, as illustrated in Figure 3.6. Because the
measurement system may react differently as the measurand increases from a
low to a high value than it does as the measurand changes in the opposite
sense, we define hysteresis error as the maximum separation due to
hysteresis between up-scale-going and down-scale-going indications of the
measured value after transients have decayed[4] (see also Fraden[2] and
Lipták[5]). The hysteresis error, e_H, is defined in terms of the maximum
voltage separation, H_H, and the scale factor, SF, using:

e_H = H_H / SF    (3.5)
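A numeric sketch of Equation (3.5): given matched up-scale and down-scale output readings from a calibration sweep, the maximum separation H_H divided by the scale factor SF gives e_H. The readings and scale factor below are invented purely for illustration:

```python
# Hysteresis error e_H = H_H / SF.
# Output readings (volts) at the same measurand points, going up and down.
up_scale   = [0.00, 1.02, 2.05, 3.08, 4.04, 5.00]   # invented data
down_scale = [0.10, 1.20, 2.30, 3.30, 4.18, 5.00]   # invented data
SF = 0.1   # assumed scale factor: volts of output per unit of measurand

# H_H: maximum separation between the up- and down-going curves (volts)
H_H = max(abs(d - u) for u, d in zip(up_scale, down_scale))
e_H = H_H / SF   # hysteresis error expressed in measurand units
```

Dividing by the scale factor converts the worst-case voltage gap back into measurand units, so e_H can be compared directly against the instrument's accuracy specification.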

Dead Band Error

In certain regions, the measurement system is insensitive to changes in the
measurand. These regions of insensitivity produce a dead band error,

defined as the range through which an analog quantity can vary without
initiating response[4] (see also Fraden[2] and Lipták[5]). This concept is
illustrated in Figure 3.7. A small flat spot in the middle of the graph
represents the dead band that can cause measurement inaccuracy.

FIGURE 3.6
Hysteresis error in measurement process. (Output measurement versus
measurand: the up-scale-going and down-scale-going measurement curves
separate; the scale factor relates the output to the measurand.)

FIGURE 3.7
Dead band error in measurement process. (Output measurement versus
measurand: the actual response contains a flat dead band region where the
desired response continues to change.)



Statistical Concepts

To properly characterize the effects of noise and better quantify the
results of data modeling, we must use certain basic statistical measures
that deal with both single measurements and whole sets of measurements. The
Gaussian statistical assumption will be used because it is the usual
assumption and describes most of the effects with which we are concerned.

Basic Measurement Model

The normal assumption made in measurement is that the actual measurement
is the linear sum of a true value plus noise (superposition model). If
we make a number of measurements of the same point, X_0, the resulting
vector of measurements, X⃗, can be written as:

X⃗ = X_0 + N⃗    (3.6)

A noise vector, N⃗, associated with the measurement process modifies the
measurement of the true value, X_0. In principle, if we knew the values of
the noise vector elements, we could invert the equation and uniquely recover
the true value. In practice, we can estimate the noise and then arrive at
our best estimate of the true value. This process is illustrated in
Figure 3.8, in which a series of measurements is made at each point along
the x-axis. One of these points is enlarged to show the distribution of the
measurements contained in that point. In the next section, we will describe
the probability concepts used to describe the noise process.
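The superposition model of Equation (3.6) is easy to simulate: each measurement is the true value plus a noise sample, and averaging many measurements estimates the true value. A Python sketch, with the true value, noise level, and sample count chosen purely for illustration:

```python
import random

# Superposition model: measurement = X0 + noise sample.
random.seed(42)                  # fixed seed so the sketch is repeatable
X0 = 7.25                        # true value (chosen for illustration)
sigma = 0.3                      # assumed noise standard deviation
measurements = [X0 + random.gauss(0.0, sigma) for _ in range(1000)]

# Averaging estimates X0; the estimate improves as more samples are taken.
estimate = sum(measurements) / len(measurements)
```

With 1000 samples the standard error of the mean is roughly sigma/√1000 ≈ 0.01, so the average sits very close to the true value even though individual measurements scatter by ±0.3.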

Probability Concepts

The noise process is described by probability measures. The important
functions that describe the probability that the noise will take on certain
values are the probability density and the probability distribution. We
will concentrate on Gaussian probability functions since they are the most
widely used. This will be a brief discussion; for more information, consult
standard references.[1,3,7,8]

Relative Frequency

One of the most intuitive methods for representing the concept of the
probability that an event will occur is the relative frequency definition
of probability. If A is an event, for example, winning at 21, losing a
baseball game, or having a meteor hit your car, the probability, p(A), out
of a total sample of N events is related to the number of times the event
A occurs, n(A), by the equation:


p(A) = lim_{N→∞} n(A)/N    (3.7)
We can see that we really need a large sample for the concept of probability
to be meaningful. A single measurement is not enough. We will need to make
reasonable approximations, since we do not have the time to make an
infinite number of measurements.
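The limit in Equation (3.7) can be illustrated by simulation. Here the event A is rolling a 6 on a fair die, so p(A) = 1/6; the relative frequency n(A)/N approaches that value as N grows. The seed and sample size are arbitrary choices for the sketch:

```python
import random

# Relative frequency estimate of p(A): event A is rolling a 6 on a fair die.
random.seed(1)
N = 100_000
n_A = sum(1 for _ in range(N) if random.randint(1, 6) == 6)
p_A = n_A / N   # should approach 1/6 ≈ 0.1667 as N grows
```

Rerunning with larger N shrinks the fluctuation around 1/6 roughly as 1/√N, which is why a single measurement, or even a handful, cannot pin down a probability.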

Probability Density

Generally, we wish to know the probability of events at many locations. This
gives rise to the concept of a probability density function (PDF). This
function can be continuous or discrete. Noise tends to be a continuous
quantity, so we will concentrate on the continuous form.
Let us start by extending the relative frequency concept and make a
histogram of the relative frequency of a number of measurements. For
example, suppose we design an experiment where the true measurement value
is known to be 12.5 in magnitude and each measurement has a little noise
added. Let us make 100 measurements and plot the relative frequency of each
value obtained, as shown in Figure 3.9. Since we know the correct value
should be 12.5, the plot gives a relative frequency of the noise values. The
histogram intervals are chosen by computing the bin width, Δ, knowing the
total number of points, N, and the range of the points, by using the
relationship:[6]

FIGURE 3.8
Series of measurements with one measurement selected to show the
distribution of measurements comprising one of the data points.
1094_frame_C03 Page 59 Thursday, April 11, 2002 10:01 AM
2002 by Taylor & Francis Group, LLC
D
o
w
n
l
o
a
d
e
d

b
y

[
U
n
i
v
e
r
s
i
d
a
d

I
n
d
u
s
t
r
i
a
l

D
e

S
a
n
t
a
n
d
e
r
]

a
t

0
7
:
2
0

2
7

N
o
v
e
m
b
e
r

2
0
1
3



Δ = (x_max − x_min)/√N    (3.8)
If we extend this notion to a case where an infinite number of noise
measurements are made and then plot the histogram, we will arrive at the
probability density function, p(x), for the noise. This function describes
how the random variable that we are measuring is spread. The stronger the
function, the higher the probability that the quantity we are measuring will
be found in that region. The PDF for a random variable, x, will have an
average value or mean value, μ, given by:
μ = ∫_{-∞}^{∞} x p(x) dx    (3.9)
This is also sometimes called the first moment of the variable x. The PDF
can also be used to compute the variance of the random variable, σ², by
using the equation:
σ² = ∫_{-∞}^{∞} (x − μ)² p(x) dx    (3.10)
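Equations (3.9) and (3.10) can be checked numerically for any PDF whose moments are known in closed form. The sketch below uses a triangular PDF, p(x) = x/2 on [0, 2], chosen only because its mean (4/3) and variance (2/9) are easy to verify by hand; the integration is a simple midpoint rule:

```python
# Numerical check of the mean and variance integrals for the triangular
# PDF p(x) = x/2 on [0, 2] (zero elsewhere): mean = 4/3, variance = 2/9.
def p(x):
    return x / 2.0 if 0.0 <= x <= 2.0 else 0.0

# Midpoint-rule integration over the support [0, 2]
n = 100_000
dx = 2.0 / n
xs = [(i + 0.5) * dx for i in range(n)]
area = sum(p(x) for x in xs) * dx                 # should integrate to 1
mu = sum(x * p(x) for x in xs) * dx               # Equation (3.9)
var = sum((x - mu) ** 2 * p(x) for x in xs) * dx  # Equation (3.10)
```

The unit area confirms that p(x) is a valid PDF, and the computed moments match the closed-form values to numerical precision.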
The PDF has two properties that distinguish its function from an arbitrary
function. The PDF is nonnegative; that is, a probability is always a positive

FIGURE 3.9
Relative frequency of measurements with noise added. (Counts per bin for
measured values between 11 and 14.5.)

number. The integral of the PDF is unity. We can use the PDF to compute
probabilities of events once we know the functional form. How do we
determine the correct PDF for a set of measured data, since we do not have
an infinite number of measurements? Some commercial software packages
allow the user to enter the measured data and will rank-order the best
estimates of the mathematical PDF to represent the data.
Once we have the PDF, we can compute the probability that the continuous
variable x can be found between the limits a and b by using the relationship:
p(a < x ≤ b) = ∫_a^b p(x) dx    (3.11)
As the limits a and b approach each other, the probability of finding x
between them becomes zero; that is, the probability that a continuous
variable will be found at exactly one point is zero. Also, as the limits a
and b approach infinity, the probability approaches unity; that is, if one
covers the entire number line, the probability is 1 that x will be found
somewhere.
Can the function h(x) = (1/2)[u(x) − u(x − 2)] be a valid PDF for a random
variable x? The function u(z) is the unit step function. The area of h(x)
computes to 1 and it is nonnegative, so it has the correct mathematical
properties. We could interpret it as a uniform probability that x is between
0 and 2. Using this PDF, what is the probability that 0.125 < x ≤ 1?
Applying Equation (3.11), we obtain p(0.125 < x ≤ 1) = 0.438.
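The uniform-PDF result can be reproduced by integrating Equation (3.11) numerically. A Python sketch (midpoint rule; the step count is arbitrary):

```python
# Uniform PDF h(x) = 1/2 on [0, 2], zero elsewhere.
def h(x):
    return 0.5 if 0.0 <= x <= 2.0 else 0.0

# Equation (3.11): integrate h over (0.125, 1] by the midpoint rule.
a, b = 0.125, 1.0
n = 100_000
dx = (b - a) / n
prob = sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx
# prob = 0.4375 exactly, which the text rounds to 0.438
```

For this flat PDF the integral is just height times width, (1/2)(1 − 0.125) = 0.4375, so the numerical answer agrees with the quoted 0.438.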
Cumulative Distribution Function
Another way to represent the probability information is with the cumulative
distribution function (CDF). It does not contain any information different
from the PDF; it merely represents it in a different way. The PDF and CDF
are related by the definition of the CDF. The CDF, F(a), for a random
variable with a probability density p(x) is:
F(a) = p(x ≤ a) = ∫_{-∞}^{a} p(x) dx    (3.12)
That is, the CDF measures the probability that x is less than the point a.
We can extend this to computing the probability that x is between two limits,
a and b, as:
p(a < x ≤ b) = F(b) − F(a)    (3.13)
We can see the relationship between the PDF and the CDF for the above
example in Figure 3.10. For any CDF, the minimum value is 0 and the max-
imum value is 1. The CDF is also a nondecreasing function. Using the
previous example, the CDF values for the two end points are F(0.125) = 0.063
and F(1) = 0.5. The probability that x is between a and b computes to 0.438
as it did above.
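The CDF route of Equations (3.12) and (3.13) can be written out for the uniform example, since its CDF has a simple closed form. A sketch, with the piecewise CDF spelled out:

```python
# CDF for the uniform PDF h(x) = 1/2 on [0, 2]:
# F(a) = 0 for a < 0, a/2 for 0 <= a <= 2, and 1 for a > 2.
def F(a):
    if a < 0.0:
        return 0.0
    if a > 2.0:
        return 1.0
    return a / 2.0

# Equation (3.13): probability from the difference of CDF values.
p_interval = F(1.0) - F(0.125)
# F(0.125) = 0.0625 (the text rounds to 0.063), F(1) = 0.5,
# so p_interval = 0.4375, matching the PDF integration.
```

Working from the CDF replaces an integral with a subtraction, which is why tabulated CDFs (and the erf tables below) are the practical route for Gaussian probabilities.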
Gaussian PDF and Noise Model
The Gaussian PDF is commonly used to model noise processes. Many
noise processes are actually accumulations of multitudes of individual
FIGURE 3.10
PDF and CDF for a random variable.
interactions. Under the central limit theorem, these accumulated interac-
tions give the resulting noise a Gaussian PDF. The Gaussian PDF is given
by the equation:
p(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²))    (3.14)
We can use Equations (3.11), (3.12), and (3.13) to make computations.
However, the integrals are generally not computable in closed form. Some
well-known related functions can make the computations for Gaussian PDFs
easier.
The erf Function
Because the Gaussian PDF is difficult to compute, there are tables of
related functions. The first is the error function, usually just called
the erf function. There are generally two similar definitions of the erf
integral.[1,8] We will use the following integral equation, which is more
commonly used today:
erf(x) = (2/√π) ∫_0^x e^(−u²) du    (3.15)
This is the integral of a Gaussian PDF from 0 to the value x. It is not the
same Gaussian as in Equation (3.14) but can be obtained by a transformation
of variables. This is shown schematically in Figure 3.11. The error function
has the following properties:
erf(x) = erf(x)
erf() = 1
The erf function is commonly available in computing packages such as
Mathcad.* It is also tabulated in Table A.1 in the appendices.
The error function has a related function known as the complementary
error function or erfc. This function is computed from the erf using:
\mathrm{erfc}(x) = 1 - \mathrm{erf}(x)    (3.16)
For a Gaussian variable x between the limits a and b, we can compute the
probability that x is between the limits by using the erf and erfc functions as
follows:
* Mathcad is a copyright of MathSoft Engineering & Education, Inc., 2001.
p(a \le x < b) = \tfrac{1}{2}\,\mathrm{erf}\!\left(\frac{b-\mu}{\sigma\sqrt{2}}\right) - \tfrac{1}{2}\,\mathrm{erf}\!\left(\frac{a-\mu}{\sigma\sqrt{2}}\right) = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{a-\mu}{\sigma\sqrt{2}}\right) - \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{b-\mu}{\sigma\sqrt{2}}\right)    (3.17)
For example, let μ = 2.5 and σ² = 0.5. What is the probability that the
measurement x is between 1.5 and 3.5? Using the erf function:
p(1.5 \le x < 3.5) = \tfrac{1}{2}\,\mathrm{erf}\!\left(\frac{3.5-2.5}{\sqrt{2 \cdot 0.5}}\right) - \tfrac{1}{2}\,\mathrm{erf}\!\left(\frac{1.5-2.5}{\sqrt{2 \cdot 0.5}}\right) = 0.8427    (3.18)
What is the probability that the measurement is greater than 10? Using
the erf function, we compute:
p(x > 10) = 1 - p(x < 10) = \tfrac{1}{2} - \tfrac{1}{2}\,\mathrm{erf}\!\left(\frac{10-2.5}{\sqrt{2 \cdot 0.5}}\right) \approx 0    (3.19)
The answer is not exactly zero but the computation is close enough.
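The computations in Equations (3.18) and (3.19) are easy to reproduce with the erf function built into most numeric environments. A minimal sketch in Python (the helper name and variables are ours, not the book's):

```python
import math

def gaussian_prob(a, b, mu, var):
    """P(a <= x < b) for a Gaussian with mean mu and variance var,
    per Equation (3.17): 0.5*erf((b-mu)/(sigma*sqrt(2))) - 0.5*erf((a-mu)/(sigma*sqrt(2)))."""
    scale = math.sqrt(2.0 * var)  # sigma*sqrt(2) = sqrt(2*variance)
    return 0.5 * math.erf((b - mu) / scale) - 0.5 * math.erf((a - mu) / scale)

# Equation (3.18): mu = 2.5, variance = 0.5
p_18 = gaussian_prob(1.5, 3.5, 2.5, 0.5)   # ~0.8427

# Equation (3.19): P(x > 10) = 1/2 - 1/2*erf((10-mu)/sqrt(2*var))
p_19 = 0.5 - 0.5 * math.erf((10 - 2.5) / math.sqrt(2 * 0.5))  # vanishingly small
```

As the text notes, p_19 is not exactly zero, but it is far below any practical threshold.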
FIGURE 3.11
Gaussian PDF with erf and Q functions indicated.
The Q Function
Another function related to erf(x) and erfc(x) is the Q function, which is
defined by the integral equation:
Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-y^2/2}\,dy    (3.20)
If the error function is the integral of the Gaussian PDF from 0 to the point
x, then the Q function is related to the integral of the PDF from x to infinity,
as shown in Figure 3.11. The Q function is related to the erf and erfc by the
following equations:
Q(x) = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{x}{\sqrt{2}}\right)    (3.21)
Q(x) = \tfrac{1}{2}\left[1 - \mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right]    (3.22)
Similar to Equation (3.17), we can compute probabilities with the Q function:
p(a \le x < b) = Q\!\left(\frac{a-\mu}{\sigma}\right) - Q\!\left(\frac{b-\mu}{\sigma}\right)    (3.23)
The Q function is also tabulated in Table A.1 in the appendices. Using
Equations (3.21) and (3.22), we can also compute the Q function using the
erf and erfc functions available in analysis packages such as Mathcad.
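Since most libraries expose erfc but not Q directly, Equation (3.21) gives an easy way to build one. A short sketch (the Q helper is our own, not a library function):

```python
import math

def Q(x):
    """Q function via Equation (3.21): Q(x) = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Q(0) = 0.5: half the mass of a zero-mean Gaussian lies above zero.
# Equation (3.23) applied to the earlier example (mu = 2.5, variance = 0.5):
mu, sigma = 2.5, math.sqrt(0.5)
p = Q((1.5 - mu) / sigma) - Q((3.5 - mu) / sigma)  # same 0.8427 as Equation (3.18)
```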
Electronic Noise
Electronic noise is a typical example. Normally, engineers assume that the noise can be
modeled as a Gaussian process. The system has a natural bandwidth, B,
measured in Hertz. Normally, the noise is a zero-mean process; that is, the
noise does not have a DC offset. We parameterize the noise by an equivalent
system temperature, T_sys. This is not a physical temperature; it is a convenient
measure of the total noise produced. For example, a device that produces as
much noise as a black body source at 290 K is said to have a noise temperature
of that same 290 K regardless of the physical temperature of the device. The
system temperature is used to compute the noise spectral density, N_0, which
describes the noise process in the frequency domain. The spectral density in
watts/Hertz is computed from the system temperature and Boltzmann's
constant, k, using:
N_0 = k\,T_{sys}    (3.24)
Boltzmann's constant is −228.6 dBW/K-Hz. From this, we can compute
the variance of the Gaussian process using the spectral density and the
bandwidth:
\sigma^2 = N_0 B    (3.25)
These relations will be used again when we examine system noise related
to data transmission.
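As a quick numerical sketch of Equations (3.24) and (3.25), assume a hypothetical receiver with T_sys = 290 K and B = 1 MHz (illustrative values, not from the text):

```python
import math

k = 1.380649e-23   # Boltzmann's constant, J/K (equivalently W/(Hz*K))

T_sys = 290.0      # assumed equivalent system temperature, K
B = 1.0e6          # assumed system bandwidth, Hz

N0 = k * T_sys     # Equation (3.24): noise spectral density, W/Hz
var = N0 * B       # Equation (3.25): noise variance (power), W

k_dB = 10 * math.log10(k)   # ~ -228.6 dBW/K-Hz, the value quoted in the text
```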
Mean, Variance, and Standard Deviation Estimates
In Equations (3.9) and (3.10), we saw the definitions of the mean, μ, and the
variance, σ². Related to the variance is the standard deviation, σ. These
definitions are based on having the full probability density function and,
essentially, an infinite number of measurements. Of course, this is impossible
to attain in practice. This section covers practical estimates for these parameters.
In the next section, we will see how well we know these results.
Parameter Estimation
In real systems, we need to estimate the mean and the variance based upon
a finite number of measurements. We may not even know the underlying
PDF. A typical estimate for the mean, ⟨μ⟩, for a set of N measurements,
{x_i}, assuming that each measurement is equally probable, is to use the
customary equation for computing an average:
\langle\mu\rangle = \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i    (3.26)
In a similar manner, we can compute the estimated variance, ⟨σ²⟩, of the
data set by using:
\langle\sigma^2\rangle = \frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2    (3.27)
We can use the data in Table 3.2 to compute the estimated mean and
estimated variance. Applying Equations (3.26) and (3.27), respectively, we
arrive at an estimated mean of 12.4024 and an estimated variance of 0.3999. The
question is, how good are the estimates? The error and uncertainty in the
mean and confidence intervals are used to answer this question.
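The estimates from Equations (3.26) and (3.27) applied to the Table 3.2 data can be checked with a few lines of Python:

```python
# The 50 measurements from Table 3.2, row by row.
data = [
    12.7112, 12.2444, 12.9988, 13.0394, 11.6551,
    11.8167, 12.4733, 12.6544, 13.2575, 12.4193,
    12.9993, 12.5057, 12.2144, 13.0580, 12.0337,
    13.5600, 12.8323, 11.9057, 12.9222, 11.8778,
    12.2843, 11.2862, 12.6215, 13.5281, 11.5461,
    12.2451, 13.0092, 12.8680, 11.3763, 12.0059,
    11.9353, 12.6319, 12.2218, 12.8240, 11.7719,
    13.3503, 12.7698, 12.6444, 12.9008, 11.5540,
    11.2474, 11.5000, 13.2573, 13.2929, 12.2232,
    12.2079, 11.1374, 11.9733, 12.1102, 12.6152,
]

N = len(data)
mean_est = sum(data) / N                                    # Equation (3.26)
var_est = sum((x - mean_est) ** 2 for x in data) / (N - 1)  # Equation (3.27)
# mean_est ~ 12.4024 and var_est ~ 0.3999, matching the text
```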
Error in the Mean
The previous example involved a nite number of samples and a single
estimate for the mean and variance. Suppose we ran the experiment again.
It is not hard to believe that we would obtain slightly different measures for
the mean and variance. If we repeat the experiment multiple times, we would build
up a distribution of mean values, as illustrated in Figure 3.12. They cluster
around the mean of the means, which is a better estimate of the true value
than any individual sample mean would be. The variance in the distribution
of the mean values is called the error in the mean. It is computed in terms
of the estimated variance, ⟨σ²⟩, and the number of measurements, N, in the
data set using:[3]

\sigma_\mu^2 = \frac{\langle\sigma^2\rangle}{N}    (3.28)
The error in the mean for the data set in Table 3.2 is 0.0080. As we can see
from Equation (3.28), the number of data points in the measurement set
inuences the error in the mean. The greater the number of points in the
data set, the smaller the error in the mean will be.
TABLE 3.2
Sample Data for Computing Mean and Variance
12.7112 12.2444 12.9988 13.0394 11.6551
11.8167 12.4733 12.6544 13.2575 12.4193
12.9993 12.5057 12.2144 13.0580 12.0337
13.5600 12.8323 11.9057 12.9222 11.8778
12.2843 11.2862 12.6215 13.5281 11.5461
12.2451 13.0092 12.8680 11.3763 12.0059
11.9353 12.6319 12.2218 12.8240 11.7719
13.3503 12.7698 12.6444 12.9008 11.5540
11.2474 11.5000 13.2573 13.2929 12.2232
12.2079 11.1374 11.9733 12.1102 12.6152
FIGURE 3.12
Distribution of mean values about the mean of the means.
Uncertainty in the Mean
Just as the standard deviation is the square root of the variance, the uncertainty
in the mean is the square root of the error in the mean, or:[3]

\sigma_\mu = \sqrt{\frac{\langle\sigma^2\rangle}{N}}    (3.29)
For the example data set in Table 3.2, the uncertainty in the mean is 0.0894.
Confidence Intervals
While the error and the uncertainty in the mean are easy to compute, they
do not really tell the user how close they might be to the true value, except
in the most general way. A more intuitive measure is the confidence interval
for the estimate, as illustrated in Figure 3.13. The true value for the mean, μ,
and the estimated mean, ⟨μ⟩, are shown, and their difference is an estimation
error. Based on the N samples and the uncertainty in the mean, we can define
a confidence interval around the estimated mean.[3] This confidence interval
is intended to represent the region in which we expect to find the true mean
value if we could make an infinite number of measurements.
The width of the confidence interval is a function of the uncertainty in the
mean and the level of confidence we want in the result. If we wish to be very
confident of the result, then the interval must be broader than it would be if
we were willing to accept a lower level of confidence. If we define α as the
percent level of uncertainty in the estimate, then we will have a 100 − α
percent confidence in the estimate. For example, a 5% uncertainty corresponds
to a 95% confidence level. Typical values that indicate we are confident of the
result are 95% or 99%. If ⟨μ⟩ is our estimate for the mean of the measurements
and σ_μ is the uncertainty in the mean, we estimate that the true value, μ, will
be found with 100 − α percent confidence in a region bounded by

\langle\mu\rangle - z_{\alpha/2}\,\sigma_\mu \le \mu \le \langle\mu\rangle + z_{\alpha/2}\,\sigma_\mu    (3.30)
where z_{α/2} is a parameter based on the confidence level chosen. Basically, z_{α/2} is
a measure of the number of standard deviations about the mean that we will
FIGURE 3.13
Confidence interval relative to the true mean and the estimate of the mean.
search to find the true value. The use of z_{α/2} is based on the assumption that
more than 30 measurements are in the data set and that we have a Gaussian
process. Values for z_{α/2} are given in Table A.2 in the appendices. In the table, P
is the Gaussian probability or the confidence level desired. The z_{α/2} is read from
the column with the desired confidence level. In an example from Table 3.2, the
95% confidence interval is ±0.1753 around the estimated mean of 12.4024. That is,
with 95% confidence, we believe the true value lies between 12.227 and 12.578.
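Continuing the Table 3.2 example, the 95% interval follows directly from the uncertainty in the mean and z_{0.025} = 1.96. A sketch using the values from the worked example:

```python
import math

mean_est = 12.4024   # estimated mean from Equation (3.26)
var_est = 0.3999     # estimated variance from Equation (3.27)
N = 50

sigma_mu = math.sqrt(var_est / N)   # uncertainty in the mean, Equation (3.29): ~0.0894
z = 1.96                            # z_{alpha/2} for a 95% confidence level (Table A.2)

half_width = z * sigma_mu           # ~0.1753
lo, hi = mean_est - half_width, mean_est + half_width   # ~(12.227, 12.578)
```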
Suppose only a smaller number of measurements is available, or we do
not know the variance in the data, or we are not sure Gaussian statistics still
hold. Do we have a way to compute the confidence interval? If the number
of measurements is below 30 or the Gaussian assumption does not hold, a
t distribution can be used. In that case, the values from the t distribution
shown in Table A.3 are used. The value for z_{α/2} is replaced with t_{α/2}, which is
a function both of the confidence level and the number of degrees of freedom,
ν, in the measurement set. The number of degrees of freedom is given by
ν = N − 1. The confidence interval bound equation is then:

\langle\mu\rangle - t_{\alpha/2}\,\sigma_\mu \le \mu \le \langle\mu\rangle + t_{\alpha/2}\,\sigma_\mu    (3.31)
Number of Measurements Required
The number of measurements, N, is a critical parameter, and the confidence
of the results directly scales with N. The main question is, how many measurements
are sufficient? Certainly, we need more than one. Are 10, 100, or
1000 measurements sufficient, and how do we know? One approach is to use
the confidence interval. If we know the confidence interval we want and the
approximate variance in the data, we can solve for the number of measurements
needed.
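Solving Equation (3.30) for N gives N ≥ (z_{α/2} σ / E)², where E is the desired half-width of the confidence interval. A hedged sketch, with σ and E values we picked purely for illustration:

```python
import math

def measurements_needed(sigma, half_width, z=1.96):
    """Smallest N such that z*sigma/sqrt(N) <= half_width (95% level by default)."""
    return math.ceil((z * sigma / half_width) ** 2)

# Example: noise with sigma ~ sqrt(0.3999) (the Table 3.2 variance estimate),
# and a requested 95% interval of +/-0.1.
N_req = measurements_needed(math.sqrt(0.3999), 0.1)
```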
Least Squares Fitting
The least squares method is the usual procedure for fitting a model to a data
set, especially when measurement noise makes the exact nature of the equation
a bit uncertain or when we do not have an a priori model for the system.
This method can be used to model both the calibration of the instrumentation
system and the underlying process producing the data. This section concentrates
on fitting polynomials to the data. Once this procedure is understood,
other types of functions (trigonometric, orthogonal polynomials, etc.) can be
used and the fitting procedure determined. Since the procedure can be used
both in calibration and data analysis, we will not make a distinction between
them during the development of the method. As a practical matter, the
necessary equations are developed here, but they are standard functions in
analysis packages such as Excel and similar programs.
Least Squares Definition
The least squares process provides the best estimate for the parameters
to specify a model. The method does not tell the user which model to
use. A number of methods can determine which of several models is
relatively better, but no method can indicate which model is the correct
one in an absolute sense. The least squares method is based on minimizing
the mean square error between the data and the chosen model.[8,9] The
computations involve a set of data points consisting of an independent
variable {x_i} and a dependent variable {y_i}. The model produces a set of
estimates of the dependent variable, ŷ_i, based on the independent variable.
The mean square error, mse, between the data and its estimate is
defined as:

mse = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2    (3.32)
The mse is a function of the quality of the data and the model selected. If
an inappropriate model is chosen, the mse can be large, even if the data are
relatively noise free. To fit the model, a sufficient number of points is
needed. This number must be greater than the number of parameters determined
in the fitting procedure. We can also weight the fit in Equation (3.32)
by dividing by the variance in each point. Unless the variances of the points
are significantly different, this step will not greatly affect the final results, so
we will not use it.
Linear Least Squares Mean Square Error Base
To see how least squares equations are developed, let us first consider a
linear fit to data. In this case, the model becomes:

\hat{y}_i = a_0 + a_1 x_i    (3.33)
We next apply the model to the mse Equation (3.32). This gives a model-specific
mse equation of:

mse = \frac{1}{N}\sum_{i=1}^{N}\left(a_0 + a_1 x_i - y_i\right)^2    (3.34)
To minimize the mse, we take the partial derivative of the mse in Equation
(3.34) with respect to each of the fit parameters, a_0 and a_1, giving two equations
of the form:
\frac{\partial}{\partial a_j}\sum_{i=1}^{N}\left(a_0 + a_1 x_i - y_i\right)^2 = 0    (3.35)
We will then solve the system of equations for the t parameters. The
system to be solved is:
\sum_{i=1}^{N}\left(a_0 + a_1 x_i - y_i\right) = 0, \qquad \sum_{i=1}^{N}\left(a_0 + a_1 x_i - y_i\right)x_i = 0    (3.36)
We can reorganize the equations in Equation (3.36) as follows:
N a_0 + a_1\sum_{i=1}^{N} x_i = \sum_{i=1}^{N} y_i, \qquad a_0\sum_{i=1}^{N} x_i + a_1\sum_{i=1}^{N} x_i^2 = \sum_{i=1}^{N} x_i y_i    (3.37)
It is often easier to manipulate equations if they are written in matrix form,
as follows:
\begin{pmatrix} N & \sum_{i=1}^{N} x_i \\ \sum_{i=1}^{N} x_i & \sum_{i=1}^{N} x_i^2 \end{pmatrix}\begin{pmatrix} a_0 \\ a_1 \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{N} y_i \\ \sum_{i=1}^{N} x_i y_i \end{pmatrix}    (3.38)
For example, we will next apply this to the data in Table 3.1. Table 3.3
shows the sums needed for using Equation (3.38). Solving the matrix equation
for the coefficients gives a_0 = −0.1991 and a_1 = 1.094. The sample data
points and the fit to the data are shown in Figure 3.14.
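The 2×2 normal equations of Equation (3.38) are simple enough to solve directly in code. A sketch that reproduces the Table 3.3 computation (the x, y values are the Table 3.1 data):

```python
x = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
y = [0.07, 4.37, 10.31, 16.38, 22.19, 27.7, 33.07, 38.27, 43.56, 48.76, 54.04]

N = len(x)
Sx = sum(x)                                  # 275
Sy = sum(y)                                  # 298.72
Sxx = sum(xi * xi for xi in x)               # 9625
Sxy = sum(xi * yi for xi, yi in zip(x, y))   # 10477.1

# Solve the 2x2 system of Equation (3.38) by Cramer's rule.
det = N * Sxx - Sx * Sx
a0 = (Sy * Sxx - Sx * Sxy) / det   # intercept, ~ -0.1991
a1 = (N * Sxy - Sx * Sy) / det     # slope, ~ 1.094
```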
Linear Least Squares Statistical Base
The matrix formulation of the least squares fit is not the only way to structure
the solution. The summations listed below can be used to determine coefficients
for the linear fit.[3] They can also help assess the quality of the fit, so
the summations are used for more than solving for coefficients. The
required summations are:
1. Average value of the independent variables:
\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i    (3.39)
2. Average value of the dependent variables:
\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i    (3.40)
TABLE 3.3
Example Least Squares Computation
x  y  x²  xy
0 0.07 0 0
5 4.37 25 21.85
10 10.31 100 103.1
15 16.38 225 245.7
20 22.19 400 443.8
25 27.7 625 692.5
30 33.07 900 992.1
35 38.27 1225 1339.45
40 43.56 1600 1742.4
45 48.76 2025 2194.2
50 54.04 2500 2702
Totals 275 298.72 9625 10477.1
FIGURE 3.14
Sample data from Table 3.1 and least squares fit to the data.
3. Number of degrees of freedom for N data points:
\nu = N - 2    (3.41)
4. Spread of the x_i around their mean:

SS_{XX} = \sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2    (3.42)
5. Spread of the y_i around their mean:

SS_{YY} = \sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2    (3.43)
6. Cross-product of the {x_i} with the {y_i}:

SS_{XY} = \sum_{i=1}^{N}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)    (3.44)
With these summations, the linear fit coefficients are:

a_1 = \frac{SS_{XY}}{SS_{XX}}, \qquad a_0 = \bar{y} - a_1\bar{x}    (3.45)
Table 3.4 shows the application of these summations to the sample data
set from Table 3.1. Using Equation (3.45), we obtain the coefficients a_0 =
−0.1991 and a_1 = 1.094, which are the same values as those found by the
matrix method.
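The same coefficients fall out of Equations (3.39) through (3.45). A sketch using the Table 3.1 data again:

```python
x = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
y = [0.07, 4.37, 10.31, 16.38, 22.19, 27.7, 33.07, 38.27, 43.56, 48.76, 54.04]

N = len(x)
x_bar = sum(x) / N   # Equation (3.39): 25
y_bar = sum(y) / N   # Equation (3.40): ~27.156

SS_XX = sum((xi - x_bar) ** 2 for xi in x)                        # Equation (3.42): 2750
SS_XY = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))  # Equation (3.44): ~3009.1

a1 = SS_XY / SS_XX        # Equation (3.45): ~1.094
a0 = y_bar - a1 * x_bar   # Equation (3.45): ~-0.1991
```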
Quality of the Fit
The fit for the coefficients is only one factor. We need to answer two questions:
are the coefficients well determined, and is this model correct? This
section will examine both issues. One of the first tests to try in determining
whether a fit is reasonable is to plot the residuals across the data set, as is
done in Figure 3.15. The residuals, ε_i, are the differences between the measured
dependent variable and the model output for each point:

\varepsilon_i = y_i - \hat{y}_i    (3.46)
Ideally, the plot of the residuals should look like a plot of random points.
There should be about as many points with positive residuals as there are
with negative residuals. Also, there should not be any obvious structure in
the residuals, as illustrated in Figure 3.16, which would indicate that the model
might not be appropriate. Figure 3.15 shows a few more positive residuals
than negative residuals. With only 11 points, it is difficult to definitively
determine whether a systematic problem exists.
TABLE 3.4
Summations for Least Squares Fit
x  y  (x − x̄)²  (x − x̄)(y − ȳ)
0 0.07 625 677.1591
5 4.37 400 455.7273
10 10.31 225 252.6955
15 16.38 100 107.7636
20 22.19 25 24.8318
25 27.7 0 0.0000
30 33.07 25 29.5682
35 38.27 100 111.1364
40 43.56 225 246.0545
45 48.76 400 432.0727
50 54.04 625 672.0909
Mean 25 27.156
SSXX 2750
SSXY 3009.1
FIGURE 3.15
Residual plot showing error between data and model as a function of data point number.
In addition to residual plots, numeric indicators can be used as well to
indicate the quality of the modeling process. The correlation coefficient is
used to determine the quality of the fit, and the f statistic is used to indicate
the appropriateness of the model.
Correlation Coefficients
To determine the quality of the fit, we can use the analysis of variance
(ANOVA) technique to provide indicators of how well the coefficients are
determined. This analysis is found in many standard statistical computer
software packages and in spreadsheet programs such as Excel. To start the
analysis, we define several more summations that will be needed in addition
to the earlier equations. The summations this time require the model for the
dependent variable to produce the estimated points, ŷ_i, given by Equation
(3.33). We can then compute the regression variability:

SSR = \sum_{i=1}^{N}\left(\hat{y}_i - \bar{y}\right)^2    (3.47)
The error in the fit is given by the difference between the measured
dependent variable and the model output. With this, we compute the
error variability:

SSE = \sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2    (3.48)
Equation (3.43) can be interpreted as the total variability in the measurement
set. The quantities SS_YY, SSR, and SSE are related by SS_YY = SSR + SSE.
The ratio SSE/SS_YY can be interpreted as the proportion of the fit error to
the total spread in the data. Table 3.5 gives the computations using Equations
(3.33), (3.47), and (3.48) for the data set in Table 3.1.
FIGURE 3.16
Nonrandom error residuals: incorrect model assumed (left) and heteroscedastic error (right); horizontal axes show position.
The squared correlation, R², measures the proportion of the total variability
in the dependent variable that is accounted for by the fit. R² is always less
than 1, but the closer the value is to 1, the better the linear fit. The correlation
is computed using:

R^2 = \frac{SSR}{SS_{YY}} = 1 - \frac{SSE}{SS_{YY}}    (3.49)

A related parameter is the correlation coefficient, r, which measures the
tightness of the fit and whether it is a positive correlation or an anticorrelation.
The correlation coefficient is computed using:

r = \mathrm{sgn}(a_1)\sqrt{R^2}    (3.50)
The sgn() function takes the sign of the argument. Using the
data in Table 3.5, R² = 0.9993 and r = 0.9997. This implies a nearly perfect
fit of the model to the data. Once the coefficients of the model have been
determined, the confidence intervals on each coefficient can also be determined.
To do this, we first determine the standard error in the overall
model using:

s = \sqrt{\frac{SS_{YY} - a_1 SS_{XY}}{N - 2}}    (3.51)
and the standard error in each coefficient from:
TABLE 3.5
Data for Determining Quality of Fit
ŷ  (ŷ − ȳ)²  (y − ŷ)²  (y − ȳ)²
−0.1991 748.3209 0.0724 733.6711
5.2720 478.9254 0.8136 519.2184
10.7431 269.3955 0.1876 283.8000
16.2142 119.7313 0.0275 116.1300
21.6853 29.9328 0.2547 24.6648
27.1564 0.0000 0.2955 0.2955
32.6275 29.9328 0.1958 34.9711
38.0985 119.7313 0.0294 123.5129
43.5696 269.3955 0.0001 269.0793
49.0407 478.9254 0.0788 466.7171
54.5118 748.3209 0.2226 722.7299
Totals 3292.6119 2.1781 3294.7901
SE(a_1) = \frac{s}{\sqrt{SS_{XX}}}, \qquad SE(a_0) = s\sqrt{\frac{1}{N} + \frac{\bar{x}^2}{SS_{XX}}}    (3.52)
To compute the 95% confidence interval, we use a t distribution with ν =
N − 2 degrees of freedom. The two intervals are given by:

i_1 = a_1 \pm t_{0.025}\,SE(a_1), \qquad i_0 = a_0 \pm t_{0.025}\,SE(a_0)    (3.53)
In the example we have been examining, s = 0.4919, SE(a_1) = 0.009381, and
SE(a_0) = 0.2775. For a 95% confidence interval, we use t_{0.025} with ν = 9, or
2.262 from Table A.3. The confidence intervals are i_1 = 1.094 ± 0.021 and i_0 =
−0.199 ± 0.628.
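The ANOVA quantities above chain together naturally. A sketch that reproduces the numbers in the worked example (same Table 3.1 data, coefficients from Equation (3.45)):

```python
import math

x = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
y = [0.07, 4.37, 10.31, 16.38, 22.19, 27.7, 33.07, 38.27, 43.56, 48.76, 54.04]
N = len(x)

x_bar, y_bar = sum(x) / N, sum(y) / N
SS_XX = sum((xi - x_bar) ** 2 for xi in x)
SS_XY = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
SS_YY = sum((yi - y_bar) ** 2 for yi in y)

a1 = SS_XY / SS_XX
a0 = y_bar - a1 * x_bar
y_hat = [a0 + a1 * xi for xi in x]

SSR = sum((yh - y_bar) ** 2 for yh in y_hat)           # Equation (3.47)
SSE = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # Equation (3.48)

R2 = 1 - SSE / SS_YY                  # Equation (3.49): ~0.9993
r = math.copysign(math.sqrt(R2), a1)  # Equation (3.50): ~0.9997

s = math.sqrt((SS_YY - a1 * SS_XY) / (N - 2))      # Equation (3.51): ~0.4919
SE_a1 = s / math.sqrt(SS_XX)                       # Equation (3.52): ~0.0094
SE_a0 = s * math.sqrt(1 / N + x_bar ** 2 / SS_XX)  # Equation (3.52): ~0.2775

t_025 = 2.262           # from Table A.3 with nu = N - 2 = 9
ci_a1 = t_025 * SE_a1   # ~0.021
ci_a0 = t_025 * SE_a0   # ~0.628
```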
f Statistic
Another parameter that is frequently computed with the model fitting procedure
is the f statistic. This computation will not tell the user whether a
model is correct in an absolute sense. It can be used to indicate whether a
model is adequate to explain the data, and it can be used to determine whether
one model fits the data better, worse, or about the same as another model.
To perform the computation, we need the results of Equations (3.47) and
(3.48). We also define k as the number of parameters in the fitting procedure.
For a linear fit, k = 2. The f statistic is computed from those results and the
number of measurements, N, using:

f = \frac{SSR/k}{SSE/(N - k + 1)}    (3.54)
In our example, f = 7558. Is this a good result? To make a decision, we
need an F-distribution table such as Tables A.4A through A.4C in the appendices.
These tables list the critical values f_α(m,n), where α gives the desired
uncertainty level for the result. The decision rule is that if:

f > f_\alpha(k, N - k + 1)    (3.55)

the model adequately describes the data at the 1 − α confidence level. In our
example, if we wanted to have 95% confidence that the model adequately fits
the data, then α = 0.05. In the example, k = 2 and N − k + 1 = 10. The critical
value table lists f_α(2,10) = 4.1, so this model does a very good job of explaining
the data at a confidence level even better than 99%.
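The decision in Equations (3.54) and (3.55) is mechanical once SSR and SSE are known. A sketch using the totals from Table 3.5:

```python
# Totals from Table 3.5 for the linear fit (k = 2 parameters, N = 11 points).
SSR, SSE = 3292.6119, 2.1781
N, k = 11, 2

f = (SSR / k) / (SSE / (N - k + 1))   # Equation (3.54): ~7558

f_crit = 4.10          # f_0.05(2, 10) from the F-distribution tables
adequate = f > f_crit  # Equation (3.55): model adequate at the 95% level
```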
Nonlinear Fits
So far in this chapter, we have examined linear fits. In this section, we will
extend the fitting procedure to nonlinear models. First, we will look at
parametric models and then nonlinear polynomial models. The f statistic
from the previous section can be used to determine the quality of the model
in both cases.
Parametric Models
The least squares method can be applied to more than linear equations.
Suppose we had a model for a data set in the form:

\hat{y} = a_0 + a_1 x + a_2 T + a_3 H + a_4 P    (3.56)
The x is the position variable, while T is for temperature, H is for humidity,
and P is for pressure. The mean square error equation is then:

mse = \frac{1}{N}\sum_{i=1}^{N}\left(a_0 + a_1 x_i + a_2 T_i + a_3 H_i + a_4 P_i - y_i\right)^2    (3.57)
The partial derivative of Equation (3.57) is then taken with respect to each
variable as was done earlier. This leads to a system of equations that can be
solved for the coefficients a_j, as was done in the linear equation case, but this
time with more coefficients.
Power Series Models
We can apply the derivation used for minimizing the mean square error to
higher order polynomial models as well. The general form for the model for
each output value, ŷ_i, at each input point, x_i, will be:

\hat{y}_i = \sum_{j=0}^{M} a_j x_i^j    (3.58)
As with the linear model, we take the partial derivative of the mse with
respect to each of the fit parameters, a_j. This will yield a system of linear
equations that can be solved for the coefficients. The general form for the
matrix that must be solved is:
\begin{pmatrix} N & \sum x_i & \sum x_i^2 & \sum x_i^3 & \cdots \\ \sum x_i & \sum x_i^2 & \sum x_i^3 & \sum x_i^4 & \cdots \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 & \sum x_i^5 & \cdots \\ \sum x_i^3 & \sum x_i^4 & \sum x_i^5 & \sum x_i^6 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ \vdots \end{pmatrix} = \begin{pmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \\ \sum x_i^3 y_i \\ \vdots \end{pmatrix}    (3.59)
For example, let us fit the data in Table 3.6. Figure 3.17 shows both the
data and the fit. The data appear to follow a second-order equation. The
necessary set of equations would appear in matrix form:

\begin{pmatrix} N & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{pmatrix}\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{pmatrix}    (3.60)
The coefficients can be found by solving the matrix equation:

\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 11 & 55 & 385 \\ 55 & 385 & 3025 \\ 385 & 3025 & 25333 \end{pmatrix}^{-1}\begin{pmatrix} -50.2327 \\ -587.9025 \\ -5584.0176 \end{pmatrix}    (3.61)
TABLE 3.6
Data for Polynomial Fit
x  y
0  3.033
1  3.6805
2  5.5885
3  4.6192
4  3.6122
5  0.8997
6  −2.9238
7  −7.3846
8  −13.3494
9  −20.5442
10  −27.4637
This yields the following model for the data set, which is also used to
generate the line in Figure 3.17:
\hat{y} = 2.7243 + 2.2824x - 0.5344x^2    (3.62)
Using this model and Equations (3.47), (3.43), and (3.49), the value for SSR
is 1275.8 and SS_YY is 1277.4, while R² is 0.999. Using Equation (3.48), we find
the SSE is 1.540, and using Equation (3.54), we find that f is 2486. This value
greatly exceeds the values in the table, so this is a good model to fit the data.
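Equation (3.61) can be solved without special tooling. A sketch that builds the sums from Table 3.6 and solves the 3×3 system by Gaussian elimination (reproducing the coefficients of Equation (3.62)):

```python
x = list(range(11))
y = [3.033, 3.6805, 5.5885, 4.6192, 3.6122, 0.8997,
     -2.9238, -7.3846, -13.3494, -20.5442, -27.4637]

# Normal-equation matrix and right-hand side for a second-order fit, Equation (3.60).
N = len(x)
def S(p): return sum(xi ** p for xi in x)                  # sum of x^p
def T(p): return sum(xi ** p * yi for xi, yi in zip(x, y)) # sum of x^p * y

A = [[N,    S(1), S(2)],
     [S(1), S(2), S(3)],
     [S(2), S(3), S(4)]]
b = [T(0), T(1), T(2)]

# Solve A a = b: forward elimination, then back-substitution.
for i in range(3):
    piv = A[i][i]
    for j in range(i + 1, 3):
        m = A[j][i] / piv
        A[j] = [ajk - m * aik for ajk, aik in zip(A[j], A[i])]
        b[j] -= m * b[i]
a = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, 3))) / A[i][i]
# a ~ [2.7243, 2.2824, -0.5344], matching Equation (3.62)
```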
Cautions with Least Squares
Like any other mathematical technique, the least squares method cannot be
applied blindly with the expectation that good results will follow. This section
discusses basic cautions to take with this technique.
Model Selection
The least squares method and its associated statistical analysis will not tell
the user whether one model is correct to the exclusion of all others. The
best that the user can hope for is a statistical indicator of whether, at
some level of statistical confidence, a model is consistent with the
data. If the user picks several incorrect models to investigate, it is possible
that none will really fit the data. The method and the statistics will not
fix this problem.
FIGURE 3.17
Data (markers) and second-order least squares fit (line) to the data.
The range of the model also needs to be correct. The model is not valid
outside the data range. Therefore, greatly extending the range of the model
could lead to a divergence between the model and where the underlying
process is headed. This concept is illustrated in Figure 3.18.
Outlying Points
Suppose that the data set has a single point that is corrupted by a large
amount of noise. The effect of the bad point will differ depending on
where the point is located in the data set. Figure 3.19 illustrates the effect
of a bad point at the upper edge and at the middle of the data set. The
bad point at the upper edge pulls the fit noticeably toward itself. However,
when the bad point is located in the middle, the line is pulled a much
smaller distance. This indicates that least squares is sensitive to outlying
FIGURE 3.18
Attempting to extend the least squares fit beyond the data range.
FIGURE 3.19
Effects of outlying points on least squares fit when an outlier occurs at the edge and the middle.
data points and that outliers at the edges of the data set can have the greatest effect on the fit.
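The difference between an edge outlier and a middle outlier can be demonstrated numerically. In this sketch (the clean line y = x and the +5 corruption are invented for illustration), the same offset is applied first to the last point and then to the center point of a linear data set, and the resulting displacement of the fitted line is compared:

```python
import numpy as np

# Clean linear data: y = x at the settings 0..10.
x = np.arange(11, dtype=float)
y = x.copy()

slope0, intercept0 = np.polyfit(x, y, 1)  # baseline fit: slope 1, intercept 0

# Corrupt the edge point (x = 10) by +5 and refit.
y_edge = y.copy()
y_edge[-1] += 5.0
fit_edge = np.poly1d(np.polyfit(x, y_edge, 1))

# Corrupt the middle point (x = 5) by the same +5 and refit.
y_mid = y.copy()
y_mid[5] += 5.0
fit_mid = np.poly1d(np.polyfit(x, y_mid, 1))

true_line = np.poly1d([1.0, 0.0])
# Maximum displacement of each fitted line from the true line over the data range.
shift_edge = max(abs(fit_edge(t) - true_line(t)) for t in x)
shift_mid = max(abs(fit_mid(t) - true_line(t)) for t in x)

print(shift_edge > shift_mid)  # the edge outlier pulls the fit farther
```

The edge outlier has greater leverage because it sits far from the mean of the x values, so it tilts the slope as well as shifting the intercept; the middle outlier, sitting at the mean, only shifts the intercept slightly.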
Overfitting the Model
Mathematically, N data points can be fitted exactly by a polynomial of degree N - 1. However, if the user does not know of an underlying physical model, he or she may be tempted to fit the data with a higher-order model than necessary. We can get a sense of this by looking at the SSE computation. Typically, the SSE decreases with increasing model order until it reaches the point where it begins to flatten out, as illustrated in Figure 3.20. Using the f statistic to compare the SSEs between models will tell the user when increasing the order of the fit does not substantially improve the quality of the fit. Once the statistics tell the user that the model accounts for the data, it is probably best to stop increasing the order of the model unless there is an underlying physical reason for including more terms.
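The SSE behavior described above can be sketched numerically. In this example (the underlying quadratic process, noise level, and seed are invented for illustration), noisy data generated from a second-order process are fitted with polynomials of increasing order; the SSE drops sharply up to order 2 and then flattens, since higher-order terms only chase the noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying second-order process.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 1.5 * x - 0.3 * x**2 + rng.normal(0.0, 0.5, x.size)

def sse(order):
    """Sum of squared errors for a least squares polynomial fit of the given order."""
    residuals = y - np.poly1d(np.polyfit(x, y, order))(x)
    return float(np.sum(residuals**2))

sse_by_order = {n: sse(n) for n in range(5)}

# The big drop happens going from order 1 to order 2; after that the SSE flattens.
print(sse_by_order[1] / sse_by_order[2] > 10.0)
print(sse_by_order[2] / sse_by_order[4] < 2.0)
```

Because the models are nested, the SSE can never increase with order; the point is that past the true order the decrease becomes negligible, which is exactly what the f statistic comparison detects.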
FIGURE 3.20
Change in SSE with fit order. [SSE plotted against order of fit, from 0 to 4.]
Problems
1. For each of the following error sources, comment on whether
they would primarily affect precision, accuracy, or both about
equally: systematic error, random error, hysteresis error, and
interference.
2. Make a histogram of data with noise by using a standard analysis
package such as Excel, Mathcad, or MATLAB. Select a true value
such as 10 and add random noise to it. Do this 100 times and plot
the results in a histogram. If Gaussian noise was added, does the
result look like a Gaussian? Experiment with different noise types
that are available with your computer package.
3. Given a set of measurements corrupted with Gaussian noise where
the measurements have a mean value of 10 and a variance of 2,
estimate the following probabilities:
A. The measurement is between 9 and 11
B. The measurement is between 8 and 12
C. The measurement is between 5 and 15
D. The measurement is greater than 16 or less than 4.
4. Consider measuring a constant 5-V power supply with a noisy voltmeter. The meter adds zero-mean Gaussian noise with a variance of 0.1 volt^2 to each measurement. What are the upper and the lower measurement limits such that the probability of finding the measurement outside of these limits is no more than 0.01?
5. For the following set of measurements, estimate the mean, the variance, the estimated error in the mean, and the estimated uncertainty in the mean. Also find the 95% and 99% confidence intervals on the mean.
6. If a system has a noise temperature of 290 K and a bandwidth of
100 kHz, determine the noise spectral density and the variance of
the process.
7. Determine the necessary equations to perform a least squares fit to the model:
y(t) = a cos(2πf0t) + b sin(2πf0t)
Assume that the frequency f0 is known and the constants a and b are to be found by the fit. How would you modify the procedure if the frequency were also unknown?
8. Determine a model to fit the graph given in Figure P3.1. Assume that the noise in the measurements is effectively zero at each point.
Measurements for Problem 5:
7.894826 5.883171 7.659018 6.045662
6.240702 8.185223 6.77516 7.191845
7.293361 5.947835 8.317885 7.295075
7.283844 7.902557 7.343442 6.675203
5.455724 5.811363 7.297515 7.52899
FIGURE P3.1
Data plot for Problem 8. [Data plotted over x from 0 to 2, y from 0 to 4.]
9. In an experimental data set with zero-mean Gaussian noise added to each measurement, we have two strange points that appear to lie far from the rest of the data. The noise has a variance of 0.1. For the first of the two points, the distance between the mean of the data and the point is 0.3, while for the second point the distance is 0.6. Would you discard either, both, or neither of these points, and why did you make that determination?
10. In the text, we developed a case to adequately explain the data in Table 3.1 as a linear fit. Suppose we wished to use a third-order fit to the data just to see what happens. Compute the necessary coefficients a0, a1, a2, and a3. Compute the f value as well. Does this model adequately explain the data? How does it compare with the linear fit?
11. A second-order fit was presented for the data in Table 3.6. Try the fit with a linear polynomial. Is the linear fit a better fit than the second-order fit? Explain your reasoning.
12. Try fitting the data in Table 3.6 as a third-order polynomial. Is the third-order fit a better fit than the second-order fit? Explain your reasoning.
13. A measurement set includes the following data. The data were collected at 10 settings and 10 points were collected at each setting. Each measurement has noise added, but the noise variance can be assumed to be the same at each of the settings. With this data set:
A. Find the mean, variance, error in the mean, uncertainty in the mean, and 95% confidence interval for the mean value at each of the ten settings.
B. Using the mean values, perform first-, second-, third-, and fourth-order least squares fits to the data. Determine which fit is preferred using the f statistic.
C. Plot your choice for the best fit to the data. Plot the residuals between the mean values and the model you have chosen.
Use a computer-based analysis package to make the computations easier.
A. Setting X1 = 0.5; voltage values: 0.626, 0.083, 0.139, 0.506, -0.580, 0.071, 0.033, 0.124, 0.257, 0.309
B. Setting X2 = 1.025; voltage values: 0.831, 0.168, 0.007, 0.249, 0.929, 0.822, 0.588, 0.611, 0.444, 0.173
C. Setting X3 = 1.875; voltage values: 0.911, 0.971, 0.820, 1.417, 0.413, 1.332, 1.045, 1.253, -0.037, 0.309
D. Setting X4 = 3.425; voltage values: 2.003, 1.823, 1.272, 1.431, 2.152, 1.751, 1.259, 1.186, 1.487, 1.492
E. Setting X5 = 6.050; voltage values: 2.221, 2.048, 1.330, 1.422, 2.338, 2.110, 2.508, 1.595, 1.843, 2.163
F. Setting X6 = 10.125; voltage values: 2.577, 2.591, 2.885, 3.078, 2.993, 1.990, 2.151, 3.550, 2.320, 2.042
G. Setting X7 = 16.025; voltage values: 2.867, 3.368, 2.911, 3.421, 3.958, 3.204, 3.478, 2.892, 3.063, 2.559
H. Setting X8 = 24.125; voltage values: 4.264, 3.892, 3.525, 3.316, 3.539, 2.869, 3.257, 3.183, 3.643, 3.270
I. Setting X9 = 34.800; voltage values: 3.655, 3.287, 4.684, 4.686, 4.749, 4.724, 3.703, 4.253, 4.016, 3.170
J. Setting X10 = 48.425; voltage values: 4.779, 4.189, 4.110, 3.834, 5.024, 4.723, 4.444, 3.997, 4.475, 4.133