BIOELECTRIC PHENOMENA
The forward problem, namely the determination of the potential field at the body surface from underlying bioelectric activity, can then be formulated.
The cardiac forward problem starts with a quantitative description of the sources in the heart; the resulting body surface potentials are known as the electrocardiogram. In a similar way sources associated with the activation of skeletal
muscle lead to the electromyogram. We will also consider the
electroencephalogram and electrogastrogram, where we will
discover bases for sources other than propagating action potentials. We consider these applications of basic theory only
in an introductory way, because there are separate articles for
each. It is the goal of this article to elucidate the underlying
principles that apply to each of the aforementioned and
other applications.
MEMBRANE ELECTROPHYSIOLOGY
Excitable Cells: Macroscopic Structure
The main mammalian tissues that are electrically excitable
are nerves and muscles. Although such cells vary greatly in
size, shape, and electrical properties, there are nevertheless
certain fundamental similarities. In order to illustrate their
different cellular structures, we introduce excitable cell histology in this section, although it is somewhat ancillary to the
general goals of this article, and it is very brief; the interested
reader may consult one of the references for more detailed
information. Some additional material will also be found later
in this article in the section on Applications.
The application of engineering principles and technology to
medicine and biology has had an increasing influence on the
practice of medicine. The most visible of these contributions
is in the form of medical devices. This article, however, describes the engineering introduction of quantitative methods
in the field of bioelectricity. When such contributions first became evident in the early 1950s, many physiology researchers
were already employing modern quantitative methods to develop and utilize governing equations and suitable models of
bioelectric phenomena. Today it appears that systems physiology lives on as biomedical engineering, while physiology has
become more concerned with cell and molecular biology. On
the other hand, biomedical engineering is also currently involved in efforts to develop and apply quantitative approaches
at cellular and molecular levels.
This article, which is concerned with the electric behavior
of tissues, reviews what is known about the biophysics of excitable membranes and the volume conductor in which they
are embedded. Our approach emphasizes the quantitative nature of physical models. We formulate an engineering description of sources associated with the propagating action potential and other excitable cellular phenomena. With such sources and a mathematical description of fields generated in a volume conductor, the forward problem can be formulated.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
Figure 1. (a) A motor neuron, showing the cell body (soma) with nucleus, the dendrites, the axon hillock and initial segment of axon, and the terminal buttons. (b) Skeletal muscle, showing the tendons, fibers, and fibrils.
Figure 2. A channel protein spanning the lipid bilayer, showing the aqueous pore, the narrow selectivity filter toward the extracellular side, the gate and voltage sensor toward the cytoplasmic side, sugar residues, a phosphorylation site (P), and an anchor protein (dimensions in nm).
Selectivity is the mechanism by which a channel may allow the passage of only one particular ion species; selectivity may depend on the channel diameter, the charges that line the channel, or other details.
An important tool in the study of membranes is molecular
genetics. These techniques have been used to determine the
primary structure of most channels of interest. Unfortunately, it has not been possible to deduce the secondary and tertiary structure. However, educated guesses lead to a determination of which portions of the primary amino acid sequence are intramembrane, cytoplasmic, and extracellular. As noted in Fig. 2
the channel protein extends into the cytoplasm as well as the
extracellular space.
The Squid Axon
Hodgkin and Huxley (2) pioneered a quantitative study of excitable membranes in the 1950s. For their preparation, they
used the giant axon of the squid. This axon was chosen because of its large diameter (approximately 500 μm), which
allowed the insertion of an axial electrode. Until this time all
measurements of the electric behavior of excitable cells utilized only external electrodes, which left much information
inaccessible. In the absence of intracellular potentials the
conventional wisdom was that the resting membrane was depolarized, meaning that it was at zero transmembrane potential (the term depolarization continues to be used, although
it now simply implies activation of an excitable membrane).
Hodgkin and Huxley measured resting potentials on the order of −70 mV (inside minus outside).
The squid axon, like any nerve, can be activated by passing
an adequate (transthreshold) pulse of current between two
electrodes in the external bath. A propagating action potential of the shape described in Fig. 3 is initiated at the activating end and travels to the opposite end. Except for end effects
propagation is characterized by an unchanging waveshape
and uniform velocity (assuming an axially uniform preparation).
The squid axon exemplifies an unmyelinated nerve fiber.
Although this is not typical of nerve fibers in the human body,
it presents a very simple model for analysis. One may consider that the intracellular space is simply a uniform electrolyte, whereas the extracellular space (sea water) constitutes
an independent electrolyte. Both intracellular and extracellular regions are electrically passive, and consequently whatever mechanism is responsible for the action potential must
involve the membrane.
From a chemical analysis of intracellular fluid and sea water (which constitutes the extracellular environment for the
squid), Hodgkin and Huxley determined that the major ions
available for current flow are K⁺, Na⁺, and Cl⁻. They also
noted that the ionic composition of the extracellular fluid differs markedly from the intracellular. The intracellular and
extracellular concentrations of the aforementioned ions associated with the squid giant axon are shown in Table 1.
The squid axon contains a very high intracellular potassium concentration. If we assume the membrane to be permeable only to potassium, then from Table 1 we would expect
potassium ions to flow out of the intracellular space to the
lower concentration extracellular space. This single ion movement can only be transient, because positive charge will accumulate at the outside of the membrane, leaving unneutralized negative charge on the inside; the resulting membrane potential opposes further potassium efflux.
Table 1. Intracellular and Extracellular Concentrations of Ions Associated with the Squid Giant Axon

Ion    Intracellular (mM)    Extracellular (mM)
K⁺            345                   10
Na⁺            72                  455
Cl⁻            61                  540
IK + ICl + INa = 0   (2)
Vm = (RT/F) ln([Ko]/[Ki]) = 25.2 ln([Ko]/[Ki]) in mV   (1)
where Vm is the transmembrane potential defined as the intracellular minus extracellular potential across the membrane, R is the gas constant, F Faraday's constant, and T the absolute temperature. The coefficient RT/F evaluates to 25.2 for Vm in mV, assuming T at room temperature (20°C). Note that for anions the ratio in Eq. (1) must be inverted, giving (for chloride) Vm = 25.2 ln([Cli]/[Clo]). For the numerical values in Table 1 we evaluate the potassium Nernst potential as −89.2 mV, the chloride as −55.0 mV, and the sodium as +46.5 mV.
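These values can be checked directly from Table 1; a minimal sketch in Python (the helper name nernst_mV is ours, the 25.2 mV coefficient is that of Eq. (1), and small differences from the quoted figures reflect rounding):

```python
import math

def nernst_mV(c_out, c_in, z=1, rt_over_f=25.2):
    # Eq. (1): Vm = (RT/zF) ln([C]o/[C]i), with RT/F = 25.2 mV at 20 C
    return (rt_over_f / z) * math.log(c_out / c_in)

# Table 1 concentrations (mM) for the squid giant axon
E_K  = nernst_mV(10.0, 345.0)        # potassium
E_Na = nernst_mV(455.0, 72.0)        # sodium
E_Cl = nernst_mV(540.0, 61.0, z=-1)  # chloride: an anion, so the ratio inverts

print(E_K, E_Na, E_Cl)
```

Note how the z = −1 argument implements the ratio inversion the text describes for anions.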
In the steady state each ionic flux is described by the Nernst–Planck electrodiffusion equation,

ji = −Di dci/dx − (zi/|zi|) ui ci dΦ/dx   (3)

with an associated current density

Ii = zi F ji   (4)
where ui is the ion's mobility, zi its valence, ci its concentration, and dΦ/dx the potential gradient. Goldman assumed a constant electric field in the membrane (of thickness d) and hence set

dΦ/dx = −Vm/d   (5)

Integrating the ionic current across the membrane under this assumption yields the constant-field expression for each ionic current [Eq. (6)]. The diffusion constant Di is related to the mobility through Einstein's equation,

Di = ui RT/(|zi|F)   (7)
Evaluating the potassium, sodium, and chloride electric currents arising from Eqs. (3), (4), and (7), and inserting each into the steady-state constraint of Eq. (2), leads to the following equation for the steady-state transmembrane potential Vm, namely
Figure 3. The transmembrane potential measured with an intracellular microelectrode registers the stimulus artifact and an elicited action potential on a nerve axon (potential in mV; time scale, 1 msec). The electrode configuration is shown in the inset. The transmembrane potential does not return to baseline smoothly but shows a positive after-potential.
Vm = 25.2 ln{(PK [Ko] + PNa [Nao] + PCl [Cli])/(PK [Ki] + PNa [Nai] + PCl [Clo])}   (8)

where PK, PNa, and PCl are the respective membrane permeabilities.
Figure 4. Patch-clamp configurations. Suction applied to a pipette pressed against the cell membrane produces first a low-resistance seal (about 50 MΩ) and then, in the cell-attached configuration, a gigaohm seal; a pulse of suction or voltage, pulling away from the cell (in low-Ca²⁺ or KCl/Ca²⁺-free medium), air exposure, or use of a small cell then yields the whole-cell recording, outside-out patch, and inside-out patch configurations.
Patch Clamp
The Nobel prize-winning work of Neher and Sakmann (6) was for the development of the patch electrode. This is a glass micropipette with a tip diameter of 1 μm or less. It is carefully fire-polished so that, when it is placed against a cell membrane and gentle suction is applied, a very high resistance (gigaohm) seal may be achieved. (Special cleaning of the cell membrane may also be required.) Once this very high resistance seal is achieved, then, as described in Fig. 4, four configurations can subsequently be obtained. In the cell-attached configuration the patch electrode measures transmembrane currents over the small membrane area contacted; the cell itself remains intact. Other configurations include only the membrane patch itself or the entire cell membrane (patch removed).
The results of an experiment with the cell-attached electrode are shown in Fig. 5, which gives the transmembrane current response to a transmembrane voltage step; nine successive trials are described. In each case it is seen that in the small accessed area only a single channel (identified as potassium) contributes to the measured current; the current is either zero or 1.5 pA depending on whether the channel is closed or open. The need for a gigaohm seal can now be understood as necessary to prevent currents from entering the patch electrode via an extracellular pathway (leakage currents); without the gigaseal even small amounts of such extraneous currents could easily obscure the extremely small desired transmembrane current.
A number of important characteristics of the single channel can be deduced from the experiments shown in Fig. 5. We note that the channel has only two states: either open or closed. From the successive trials we also see that the response is different each time and is hence stochastic. On the other hand, when a large number of successive trials are summed, a definite pattern emerges, so that the probability, n(t), that a channel is open can be described as a function of time; this is the same function as the averaged channel conductance as a function of time, as seen in the ensemble average curve [Fig. 5(b)]. Although this curve can be found by averaging 40 successive trials of the same single channel, we would also expect this result were we to conduct a single trial in which the simultaneous current from 40 channels was measured. In fact, in the whole-cell recording configuration (Fig. 4), because the intracellular and extracellular regions are each essentially isopotential, the entire cell membrane is at the same transmembrane voltage and all channels are in parallel; the measured transmembrane current for a single ion will then be proportional to the probability that a single channel is open.
We also note from Fig. 5 that although the on/off intervals change as a function of time (these are the aforementioned random variables), the magnitude of the channel current is fixed. Scaling the curve in Fig. 5 we deduce that the channel conductance is roughly 20 pS. Thus, when the channel is open, we can describe its current as
iK = 20 (Vm − EK) pA   (9)

(for Vm and EK in volts).
The ensemble behavior of many such channels is governed by the open-channel probability n(t), which satisfies

dn/dt = αn (1 − n) − βn n   (10)
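The connection between the stochastic single channel and the ensemble relation of Eq. (10) can be illustrated with a short simulation; the rate constants, channel current, trial count, and time step below are illustrative assumptions, not measured values:

```python
import random

random.seed(1)
alpha, beta = 0.3, 0.1               # opening/closing rates per msec (assumed)
dt, steps, trials = 0.01, 2000, 400  # 20 msec of simulated time per trial
i_open = 1.5                         # single-channel current when open, pA

avg = [0.0] * steps                  # ensemble-average current, as in Fig. 5(b)
for _ in range(trials):
    state = 0                        # each trial starts with the channel closed
    for k in range(steps):
        r = random.random()
        if state == 0 and r < alpha * dt:
            state = 1                # closed -> open
        elif state == 1 and r < beta * dt:
            state = 0                # open -> closed
        avg[k] += state * i_open / trials

# Eq. (10) predicts a steady-state open probability n = alpha/(alpha + beta)
n_inf = alpha / (alpha + beta)
print(avg[-1] / i_open, n_inf)
```

The ensemble average approaches n(t)·i_open, mirroring the text's point that averaging many trials of one channel is equivalent to one trial of many parallel channels.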
Figure 5. Single-channel potassium currents elicited by a 100 mV step in the membrane potential Em: successive trials (current scale on the order of 1–2 pA) and their ensemble average.

Working instead with macroscopic (ensemble) quantities, Hodgkin and Huxley expressed the ionic current densities as

Potassium:

IK = gK (Vm − EK)   (11a)

Sodium:

INa = gNa (Vm − ENa)   (11b)
where the conductances gK and gNa were expected to be functions of time and transmembrane potential. In Eq. (11), EK and ENa are the potassium and sodium equilibrium (Nernst) potentials, the difference from Vm being the net ion driving force. IK and INa are ensemble current densities per unit area of membrane. To account for a small additional leakage current they added
Il = gl (Vm − El)   (11c)
The potassium conductance was expressed in terms of a gating probability n as

gK = ḡK n⁴   (12)

where ḡK is the maximum
possible value of potassium conductance (all potassium channels open). The temporal behavior of the probability n was
assumed to follow Eq. (10), where the rate constants αn and βn
depend solely on Vm.
From their measurements Hodgkin and Huxley learned
that sodium showed second-order kinetics. To deal with this
they let
gNa = ḡNa m³ h   (13)
where the gating variables m and h obey

dm/dt = αm (1 − m) − βm m   (14)

dh/dt = αh (1 − h) − βh h   (15)
To separate the sodium and potassium contributions to the measured voltage-clamp current, Hodgkin and Huxley repeated each clamp step with most of the external sodium replaced, shifting the sodium Nernst potential to E′Na and scaling INa by the ratio of driving forces. The currents in the normal and reduced-sodium trials are then

I(t) = INa (t) + IK (t)   (17a)

I′(t) = [(Vm − E′Na)/(Vm − ENa)] INa (t) + IK (t)   (17b)
Note that IK(t) and INa(t) appearing in Eq. (17b) are assumed
the same as in Eq. (17a). From these two equations, the two
unknown values of INa(t) and IK(t) are obtained.
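The algebra implied by Eqs. (17a) and (17b) is simply a pair of linear equations; a sketch (the function name, clamp currents, and driving-force ratio below are hypothetical illustrations):

```python
def separate_currents(i_normal, i_reduced, r):
    # Eq. (17a): I = INa + IK;  Eq. (17b): I' = r*INa + IK,
    # where r = (Vm - E'Na)/(Vm - ENa) in the reduced-sodium trial.
    i_na = (i_normal - i_reduced) / (1.0 - r)
    i_k = i_normal - i_na
    return i_na, i_k

# Hypothetical clamp currents (mA/cm^2); r = 0 corresponds to E'Na = Vm
i_na, i_k = separate_currents(-0.9, 0.1, 0.0)
print(i_na, i_k)
```

Any r other than 1 (i.e., any actual change in the sodium driving force) suffices to make the pair solvable.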
The continuous functional expressions chosen by Hodgkin and Huxley to approximate their discrete measurements of the α's and β's are given below:
αn = 0.01(10 − vm)/{exp[(10 − vm)/10] − 1} ,   βn = 0.125 exp(−vm/80)   (18)

αm = 0.1(25 − vm)/{exp[(25 − vm)/10] − 1} ,   βm = 4 exp(−vm/18)   (19)

αh = 0.07 exp(−vm/20) ,   βh = 1/{exp[(30 − vm)/10] + 1}   (20)
In these expressions vm is the transmembrane potential variation from the resting value, that is, vm = Vm − Vrest, so that it reflects the true signal apart from a dc component. The numerical values in Eqs. (18) to (20) are based on vm in millivolts.
The total transmembrane current per unit area is evaluated by summing Eqs. (11a) to (11c) and adding the capacitive component, namely cm dvm/dt, where cm is the specific capacitance in farads per unit area. The capacitance has already been noted to correspond to the physical membrane structure and is consequently fairly uniform among various membrane types. For the squid axon, and many other membranes, it equals 1.0 μF/cm². As might be expected it does not vary with time or transmembrane voltage. A complete expression for membrane current density, im, may thus be given as
im = cm dvm/dt + IK + INa + Il   (21)
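Equations (10)–(21) can be integrated numerically; below is a minimal forward-Euler sketch of the space-clamped membrane using the rate functions of Eqs. (18)–(20). The constants ḡK = 36, ḡNa = 120, gl = 0.3 mS/cm² and EK = −12, ENa = 115, El = 10.6 mV (relative to rest) are the standard squid-axon values, assumed here since they are not listed in this passage; the stimulus amplitude and time step are arbitrary choices:

```python
import math

# Rate functions, Eqs. (18)-(20); guards remove the removable singularities
def a_n(v): return 0.1 if v == 10 else 0.01*(10 - v)/(math.exp((10 - v)/10) - 1)
def b_n(v): return 0.125 * math.exp(-v/80)
def a_m(v): return 1.0 if v == 25 else 0.1*(25 - v)/(math.exp((25 - v)/10) - 1)
def b_m(v): return 4 * math.exp(-v/18)
def a_h(v): return 0.07 * math.exp(-v/20)
def b_h(v): return 1 / (math.exp((30 - v)/10) + 1)

def simulate(t_end=20.0, dt=0.01, i_stim=50.0, t_stim=0.5):
    v = 0.0                              # vm relative to rest, mV
    n = a_n(v) / (a_n(v) + b_n(v))       # resting (steady-state) gate values
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    vs = []
    for k in range(int(t_end / dt)):
        # Eqs. (11a)-(11c) with gK = 36 n^4, gNa = 120 m^3 h (mS/cm^2)
        i_ion = 36*n**4*(v + 12) + 120*m**3*h*(v - 115) + 0.3*(v - 10.6)
        i_app = i_stim if k * dt < t_stim else 0.0   # uA/cm^2 stimulus pulse
        v += dt * (i_app - i_ion) / 1.0              # Eq. (21), cm = 1 uF/cm^2
        n += dt * (a_n(v)*(1 - n) - b_n(v)*n)        # Eq. (10)
        m += dt * (a_m(v)*(1 - m) - b_m(v)*m)        # Eq. (14)
        h += dt * (a_h(v)*(1 - h) - b_h(v)*h)        # Eq. (15)
        vs.append(v)
    return vs

vs = simulate()
print(max(vs))   # peak depolarization of the elicited action potential, mV
```

A transthreshold pulse produces the full action-potential waveshape, after which the membrane returns toward rest through the after-potential.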
For a membrane with time constant τm, the threshold current I0 for a rectangular stimulus of duration T can be written in terms of the rheobase current IR as

I0 (T) = IR (1 + τm/T)   (25)

Equation (25) is known as the Weiss–Lapicque formula. Multiplying Eq. (25) by T and describing QT = I0(T)T as the threshold charge results in

QT = IR (T + τm)   (26)
In any electrophysiological measurement a volume conductor exists between the excitable tissues and the recording electrode(s). To simulate this situation quantitatively there are
two main considerations. The first is to find an engineering
(quantitative) description of the sources that are generated by
tissue activation. The second, based on such a source description, is to evaluate the currents that will flow in the surrounding passive volume conductor. One is particularly interested in the associated electrical potential field, which will be
sampled by the recording electrodes. These two goals are the
subject of this section.
Example: Long Fiber in an Unbounded Volume Conductor
We begin by considering an excitable, infinitely long fiber lying in an unbounded volume conductor. This idealized model could approximate a long skeletal muscle or squid axon fiber in a volume conductor whose extent is large compared to fiber dimensions (we'll be more precise about this later). We assume that a propagating action potential has been initiated, so that at the moment a full spatial action potential vm(z) is present on the fiber. Since fibers are very long compared to their small diameter, we may assume no intracellular radial variation of current density or potential, and consequently we can describe Φi(z) as the intracellular potential, a function only of the axial coordinate, z. Ohm's law links the total intracellular axial current, Ii(z), with the intracellular field, −∂Φi(z)/∂z, according to
−∂Φi(z)/∂z = Ii ri   (27)
Conservation of current requires that the decrease in axial current equal the current crossing the membrane, namely

im = −∂Ii/∂z   (28)
where im is the transmembrane current per unit length. Combining Eqs. (27) and (28) gives the classical linear-core-conductor expression, namely

im(z) = (1/ri) ∂²Φi(z)/∂z²   (29)
The starting point for HodgkinHuxley simulation of a propagating action potential is equating Eq. (29) with Eq. (21).
Since physiological fibers are very long compared to their diameters, then for points in the volume conductor that are well outside the fiber we can consider the transmembrane current to arise from a line source on the fiber axis. A short fiber element of length dz will behave like a point source of current (im dz). Again, because fibers are normally very thin, the fiber's presence within the volume conductor may be ignored, and we may consider the current from the aforementioned element to flow into an unbounded conductor. Now the current density from a point current source, I0, through a concentric sphere of radius R, where all lie in a uniform, unbounded conductor, is by symmetry simply I0/(4πR²). The electric field at that point, −∂Φe/∂R, from Ohm's law is

−∂Φe/∂R = im dz/(4π σe R²)   (30)
where we replaced I0 by the point source element im dz. Substituting Eq. (29) into Eq. (30) [with ri = 1/(π a² σi)] and integrating over z gives an expression for the field of a fiber lying in an unbounded uniform volume conductor and carrying an action potential as

Φe = (a² σi /4σe) ∫ (∂²vm/∂z²)(1/R) dz   (31)
isource = π a² σi ∂²vm/∂z²   (32)
The line source density, isource, is thereby identified as a current source being generated by the propagating action potential vm(z). The relationship of isource and vm is specified by Eq.
(32) describing the source quantitatively. In turn isource will
generate an electric field in the surrounding volume conductor, and this is given by Eq. (31). The total field at any point
arises from a summation of contributions from every source
element Isourcedz.
To more clearly distinguish source points from field points,
since the same coordinate system is being utilized for both,
we choose here unprimed coordinates to describe the source
and primed coordinates for the field. We may therefore write
Eq. (31) as
Φe (x′, y′, z′) = (1/4πσe) ∫ isource(z) dz / √[(x − x′)² + (y − y′)² + (z − z′)²]   (33)
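Equations (32) and (33) can be evaluated numerically by superposing point-source contributions; in the sketch below a smooth Gaussian pulse is an assumed stand-in for a real spatial action potential vm(z), and the radius and conductivities are likewise assumed illustrative values:

```python
import math

a = 2.5e-3                      # fiber radius, cm (25 um), assumed
sigma_i, sigma_e = 0.01, 0.02   # conductivities, S/cm, assumed
dz = 0.01                       # axial integration step, cm

def vm(z):                      # stand-in spatial action potential, mV
    return 100.0 * math.exp(-(z / 0.1) ** 2)

def phi_e(rho, zp):
    # Eq. (32): i_source = pi a^2 sigma_i d2vm/dz2 (current per unit length),
    # summed as point-source contributions per Eq. (33).
    total = 0.0
    for k in range(201):                     # z from -1 to +1 cm
        z = -1.0 + k * dz
        d2vm = (vm(z - dz) - 2 * vm(z) + vm(z + dz)) / dz**2
        i_src = math.pi * a**2 * sigma_i * d2vm
        R = math.sqrt(rho**2 + (z - zp)**2)
        total += i_src * dz / (4 * math.pi * sigma_e * R)
    return total

print(phi_e(0.05, 0.0))   # potential 0.5 mm from the axis, over the pulse peak
```

Directly over the peak (where ∂²vm/∂z² < 0) the field is negative, with positive lobes toward either flank: the familiar triphasic extracellular waveform of a propagating pulse.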
While an extension of Eq. (32) to include multicellular tissues must still be considered, it seems reasonable to expect
that an expression similar to Eq. (32) will arise except that
the source needs to be described more generally as a volume
source, Iv, in which case isource is a degenerate case when the
source can be considered to be one-dimensional. For a volume
source Eq. (33) generalizes to
Φ(x′, y′, z′) = (1/4πσ) ∫V Iv (x, y, z) dv / √[(x − x′)² + (y − y′)² + (z − z′)²]   (34)
For distributed sources two additional results are useful. A volume source density Iv is related to the potential through Poisson's equation,

∇²Φ = −Iv /σ   (36)

For a surface source (single layer) of density KS lying in a surface S,

Φ(x, y, z) = (1/4πσ) ∫S [KS (S)/R] dS ,   R = √[(x − x′)² + (y − y′)² + (z − z′)²]   (37)

In particular, on the axis of a uniform disc of radius a and density KS(0),

ΦS (z) = [KS (0)/4σ] ∫₀ᵃ 2ρ dρ/√(ρ² + z²) = [KS (0)/2σ](√(a² + z²) − |z|)   (35)
Figure 6. (a) A surface source KS or double layer lies in the arbitrary surface S. The two sides are denoted 1 and 2 and the positive normal n is from 1 to 2. P is an arbitrary field point. (b) The behavior of the field at P0 is examined by decomposition of S into a small source disc centered at P0 and the remaining sources. The field of the latter is well behaved at P0, and hence whatever discontinuities might be present can be studied by examining the behavior of the disc field alone. From R. Plonsey, The formulation of bioelectric source-field relationships in terms of surface discontinuities, J. Franklin Inst., 297: 317–324, 1974.
Examining the single-layer disc field on each side of the disc shows that the potential is continuous while its normal derivative is not:

Φ|2 = Φ|1 ,   (∂Φ/∂z)|2 − (∂Φ/∂z)|1 = −KS /σ   (38)
A dipole consists of a point source I0 and sink −I0 separated by a small displacement d along the direction l̄; its field is

Φ = [I0 d/(4πσ)] ∂(1/R)/∂l   (39)

which can be written in terms of the dipole element d̄l as

Φ = [I0 /(4πσ)] ∇(1/R) · d̄l   (40)

or, with dipole moment p̄ = I0 d̄l,

Φ = p̄ · āR /(4πσR²)   (41)

A double layer is a surface distribution of dipole moment oriented along the surface normal; for a density ν its field is

Φ = (1/4πσ) ∫S ν (āR · dS̄)/R²   (42)
For a uniform double-layer disc of radius a and density ν(0), the axial field is

ΦD (z) = [ν(0)/4σ] ∫₀ᵃ 2zρ dρ/(ρ² + z²)^(3/2)   (43)
Evaluating Eq. (43) as z → 0 from each side shows that

ΦD |2 − ΦD |1 = ν/σ ,   (∂ΦD /∂n)|2 = (∂ΦD /∂n)|1   (44)
It is convenient to introduce the scalar function

Ψ = σΦ   (45)
Now Φ satisfies Laplace's equation everywhere, because, except for the membrane, which we assume to occupy zero volume, all space is source-free and passive (i.e., there are no volume sources). Consequently, since σ is piecewise constant,

∇²Ψo = σo ∇²Φo = 0 ,   ∇²Ψi = σi ∇²Φi = 0   (46)
Because the membrane is treated as having zero thickness, the normal component of current crossing it is continuous:

Jm = −σi (∂Φi /∂n)|S = −σe (∂Φe /∂n)|S   (47)

In terms of Ψ these membrane conditions read

Ψi |S − Ψe |S = σi Φi |S − σe Φe |S   (48)

(∂Ψi /∂n)|S − (∂Ψe /∂n)|S = σi (∂Φi /∂n)|S − σe (∂Φe /∂n)|S = 0   (49)
The notation in Eqs. (48) and (49) emphasizes that Ψ and its normal derivative are evaluated at the membrane surface; n is the outward membrane normal.
An examination of Eqs. (48) and (49) shows that the scalar function Ψ, which satisfies Laplace's equation, has a continuous normal derivative across the membrane, but the function itself is discontinuous. A comparison with Eq. (44) reveals that the membrane behaves like a double layer, lying in the membrane surface, whose strength is
ν = Ψ|2 − Ψ|1 = σe Φe − σi Φi   (50)
From Eq. (42), with unit conductivity for the function Ψ, the field of this double layer is

Ψp = (1/4π) ∫S ν (āR · dS̄)/R²   (51)

and, substituting Eq. (50),

Ψp = (1/4π) ∫S (σe Φe − σi Φi)(āR · dS̄)/R²   (52)
Since Ψ = σΦ, the potential at a field point p lying in a region of conductivity σp is

Φp = (1/4πσp) ∫S (σe Φe − σi Φi)(āR · dS̄)/R²   (53)
The function Ψ, defined in Eq. (45), then satisfies the following boundary conditions at an interface between regions of conductivity σ1 and σ2:

Ψ|2 − Ψ|1 = (σ2 − σ1)Φ ,   (∂Ψ/∂n)|2 − (∂Ψ/∂n)|1 = 0   (56)

so that, by comparison with Eq. (44), each such interface behaves as a double layer of strength

ν = (σ2 − σ1)Φ   (57)
In principle we can apply this treatment to a complex inhomogeneous volume conductor. Along each interface between regions of different conductivity Eq. (57) is used. The inhomogeneous volume conductor is, in this way, replaced by a uniform homogeneous medium for the scalar function Ψ [as discussed in connection with Eq. (47)], except that at all interfaces a double layer of the form given in Eq. (57) exists. Clearly the result is a scalar function, Ψ, satisfying Laplace's equation and all boundary conditions [in view of Eq. (44)]; the resultant Φ is necessarily the correct solution. Assuming multiple cells and an inhomogeneous medium, the previous results can be summarized as
Ψ = (1/4π) Σi ∮Si (σe Φe − σi Φi)(āR · dS̄)/R² + (1/4π) Σj ∮Sj (σ2 − σ1)Φ (āR · dS̄)/R²   (58)
where i enumerates all cells and j all interfacial surfaces. Using Eq. (45) and solving for the potential at the point p
results in
Φp = (1/4πσp) Σi ∮Si (σe Φe − σi Φi)(āR · dS̄)/R² + (1/4πσp) Σj ∮Sj (σ2 − σ1)Φ (āR · dS̄)/R²   (59)
If the transmembrane potential of a fiber is denoted

Vf (z) = Φi (z) − Φe (z)   (60)

where Φe(z) and Φi(z) are extracellular and intracellular potentials at the membrane interface, then, using Eq. (53) and neglecting σeΦe relative to σiΦi, we have

Φ(p) = −(σi /4πσp) ∫S Vf (z) ∇(1/R) · dS̄   (61)
Since Vf does not vary over the fiber cross section, it may be extended throughout the fiber volume as

Vf (ρ, z) = Vf (z)   (62)

and the surface integral of Eq. (61) converted, by the divergence theorem, into a volume integral:

Φ(p) = −(σi /4πσp) ∫V ∇ · [Vf (ρ, z) ∇(1/R)] dv   (63)

Expanding the integrand gives

∇ · [Vf ∇(1/R)] = (∂Vf /∂z) āz · ∇(1/R) + Vf ∇²(1/R)   (64)

and, since ∇²(1/R) = 0 for R ≠ 0,

Φ(p) = −(σi /4πσp) ∫V (∂Vf /∂z) āz · ∇(1/R) dv   (65)
Restoring the extracellular contribution [so that −σiVf is replaced by σeΦe − σiΦi, as in Eq. (53)] and taking the field point in the extracellular medium (σp = σe), the volume integral separates as

Φe = (1/4πσe) ∫z {∂[σe Φe (z) − σi Φi (z)]/∂z}[∫A ∂(1/R)/∂z dA] dz   (66)
where we have broken the volume integral into a cross-sectional integration and an axial integration.
Equation (66) is a new source-field relationship that evaluates the extracellular field from sources arising from the propagating action potential on a single fiber in an unbounded volume conductor. Using Eq. (41) we can identify the cross-sectional integral [including the coefficient 1/(4πσe)] as evaluating the field of a unit-magnitude double-layer disc (with the fiber radius a) oriented in the z direction. If z is the source location, the result of the cross-sectional integration can be written Wd[ρ′, (z − z′)], where (ρ′, z′) are the coordinates of the field point. The quantity in Eq. (66), namely τ(z) = ∂[σeΦe(z) − σiΦi(z)]/∂z, constitutes the double-layer (disc) density (a function of z). So, we also can write Eq. (66) as
Φe (ρ′, z′) = ∫z τ(z) Wd [ρ′, (z − z′)] dz   (67)
Successive integrations by parts convert Eq. (66) into

Φe = (1/4πσe) ∫z {∂²[σi Φi (z) − σe Φe (z)]/∂z²}[∫A (1/R) dA] dz   (68)
and
Φe (p) = (1/4πσe) ∫z [σi Φi (z) − σe Φe (z)][∫A ∂²(1/R)/∂z² dA] dz   (69)
In Eq. (68) the cross-sectional integral is the field of a single-layer disc lying in the fiber cross section. The extracellular field is the convolution of this field with the source density function given by

K(z) = ∂²[σi Φi (z) − σe Φe (z)]/∂z²   (70)

The extracellular field in this formulation arises from a volume current-source density Iv = K(z) confined to the fiber interior [equivalently, a line density π a² K(z)], with K(z) given in Eq. (70).
In Eq. (69) the cross-sectional integral can be identified as the field from a disc of axial quadrupoles (a single such quadrupole consists of two axial dipoles displaced in the z direction). In this formulation the function we called σiVf (z) is itself the source-density function. All the aforementioned sources are equivalent sources, and all give the same answer in the extracellular region. Depending on Vf and the geometry, one of these may be particularly attractive either for simplicity in calculation or for its clear physical interpretation or both. But, clearly, all formulations will give identical results.
For source-field distances that are large enough, the single-layer disc in Eq. (68) can be approximated by a localization of the source on the axis. One can examine this approximation by comparing the field of the disc with that of a point source of the same total strength at the disc origin. For R/a ≥ 5 the error will be under 1% (13). Given this condition, we may replace

∫A (1/R) dA = π a²/R   (71)
One can also show that for the unbounded volume conductor Φe is very small and can be neglected. (Under these conditions we can also replace Φi by vm.) Assuming both approximations permits the conversion of Eq. (68) into

Φe (ρ′, z′) = (a² σi /4σe) ∫z (1/R)(∂²vm/∂z²) dz   (72)
For an arbitrary volume distribution of dipole source density J̄i, the field follows from Eq. (40) as

Φe = (1/4πσ) ∫V J̄i · ∇(1/R) dv   (73)
Using the vector identity

∇ · (J̄i /R) = (1/R) ∇ · J̄i + J̄i · ∇(1/R)   (74)

Eq. (73) may be written

Φe = (1/4πσ) ∫V ∇ · (J̄i /R) dv − (1/4πσ) ∫V (1/R) ∇ · J̄i dv   (75)

The first integral converts to a surface integral over the bounding surface, where J̄i vanishes, leaving

Φe = −(1/4πσ) ∫V (1/R) ∇ · J̄i dv   (76)

which identifies −∇ · J̄i as an equivalent (monopole) volume source density. Conservation of current requires that

∇ · J̄ = 0   (77)

where the total current density is

J̄ = σĒ + J̄i   (78)

showing that the total current density is the sum of the ohmic (conduction) current, σĒ, plus J̄i. But whereas J̄i was introduced as a dipole source density, it now appears as an applied current density (both interpretations have the same dimensions, of course).

Bidomain: Mathematical Description
Although cardiac muscle is actually made up of a collection
of discrete cells we have introduced Ji as a source function
that is continuous. The same simplification can be introduced
in regard to the cardiac tissue structure. The fine details of
this structure include the collection of cells where each cell is
connected to its neighbors by roughly ten junctional elements
(15). But on a global basis the intracellular space can be regarded as a continuum. A similar argument can be applied to
the interstitial space, which, although containing many convolutions on a subcellular scale, may macroscopically be considered as an averaged, continuous medium. Such a
model is known as a bidomain since it consists of an intracellular and an interstitial (continuous) domain. In view of the
noted preferential propagation along fiber axes (reflecting
higher conductivity values in this direction) the cardiac bidomain can be expected to be anisotropic. A further simplification is to define the intracellular and interstitial domains on
the same tissue space. For a given (bidomain) coordinate an
intracellular and extracellular potential would be retrieved;
their difference is the transmembrane potential at that point.
Possibly a more satisfactory description of the bidomain utilizes the language of mathematics. In each domain we require
that Ohm's law be satisfied. Accordingly, letting i refer to intracellular and e to interstitial, we get
J̄i = −(gix ∂Φi/∂x āx + giy ∂Φi/∂y āy + giz ∂Φi/∂z āz)   (79)

and

J̄e = −(gex ∂Φe/∂x āx + gey ∂Φe/∂y āy + gez ∂Φe/∂z āz)   (80)

Taking z along the fiber axis, the axial bidomain conductivities follow from the microscopic conductivities σi, σe and the intracellular volume fraction p:

giz = p σi   (81)

gez = (1 − p) σe   (82)
Here we have taken into account that the actual relative intracellular cross-sectional area is p < 1, whereas in the bidomain, because the full tissue space is occupied, it is in effect raised to 1, and hence the bidomain conductivity must be proportionately reduced. A similar argument applies to the interstitial bidomain conductivity. For the transverse directions the geometrical factor is more difficult to evaluate. For circular cylindrical fiber arrays the transverse interstitial conductivity (16) has been found to be

gex = gey = σe (1 − p)/(1 + p)   (83)
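The conductivity assignments discussed above are simple enough to tabulate directly; the microscopic conductivities and volume fraction below are assumed illustrative values, and the axial scaling by volume fraction follows the discussion of Eqs. (81) and (82):

```python
sigma_i, sigma_e = 4.0, 20.0   # microscopic conductivities, mS/cm (assumed)
p = 0.7                        # intracellular volume fraction, p < 1 (assumed)

g_iz = p * sigma_i             # Eq. (81): axial intracellular conductivity
g_ez = (1 - p) * sigma_e       # Eq. (82): axial interstitial conductivity

# Eq. (83): transverse interstitial conductivity for circular
# cylindrical fiber arrays
g_ex = (1 - p) / (1 + p) * sigma_e

print(g_iz, g_ez, g_ex)
```

Note that g_ex < g_ez: the interstitium is less conductive transversely than axially, one contribution to the anisotropy of the cardiac bidomain.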
In view of the complex structure of cardiac tissue, experimental determination of bidomain conductivities is normally required.
Electrodes: Reciprocity
We have focused attention on the evaluation of volume conductor fields from sources in excitable tissues. If the potential
field is evaluated at the surface of the body then a pair of
small electrodes will sample this field and record the difference in potential. But if the electrode is large compared to
spatial variations in potential then the measured potential is
an averaged value. How should the average be taken? It can
be shown that, relative to a remote site, an electrode with a conducting surface Se lying in an (applied) potential field Φa (in the absence of the electrode) measures a voltage V (19) given by
V = ∮Se Φa J̄r · dS̄   (84)
where Jr is the volume conductor current density at the electrode surface Se that arises when a unit current is put into
the electrode and removed at the remote reference. This reciprocal current, Jr, behaves as a weighting function. For a
spherical or circular cylindrical electrode Jr may be uniform
and the weighting similarly uniform. For a surface electrode
one expects an increased weighting to arise near the edges.
Another useful formulation that is helpful in considering
the signals measured by an electrode comes from the application of reciprocity. Consider a bounded volume conductor of
volume V at the surface of which are placed two recording
electrodes, a and b, which yield a measured voltage Vab. The
source is described by a volume distribution Ji within V. The
reciprocity theorem states that
Vab = ∫V J̄i · ∇Φr dv   (85)

where Φr is the reciprocal potential field set up in V when a unit current is applied between electrodes a and b. For excitable tissue the equivalent dipole source density appearing in Eq. (85) can be expressed in terms of the transmembrane potential as

J̄i = −σi ∇vm   (86)
APPLICATIONS
This section is devoted to a presentation of four systems in
which electrical signals arising from specific organs are recorded; the goal in each case is to obtain information about
the sources of each signal. The aforementioned systems are
the electrocardiogram (ECG), electromyogram (EMG), electroencephalogram (EEG), and electrogastrogram (EGG). Each is
treated in detail as a separate article in this encyclopedia; the consideration here is limited solely to a discussion of source-field relationships introduced in this article. Our interest centers on the quantitative evaluation of sources and on pertinent aspects of the volume conductor for each of the aforementioned systems. This introduces more advanced material as well as illustrating the application of the earlier material of this article.
Electrocardiography
Information on the electrical activity within the heart itself
comes, mainly, from canine studies where multipoint (plunge)
electrodes are inserted into the heart. The instant in time
that an activation wave passes a unipolar electrode is marked
by a rapid change in potential (the so-called intrinsic deflection) and, based on recordings from many plunge electrodes,
it is possible to construct isochronous activation surfaces. The
cardiac conduction system initiates ventricular activation at many sites nearly simultaneously, and this results in an initial broad front. The syncytial nature of cardiac tissue appears to result in relatively smooth, broad activation surfaces, and because fibers lie parallel to the endocardium and epicardium, the anisotropy ensures that wavefronts also lie parallel to these surfaces.
The temporal cardiac action potential has a rising phase of approximately 1 msec, followed by a plateau of perhaps 100 msec and then by slow recovery, which also requires approximately 100 msec. Because activation is a propagated phenomenon, a spatial action potential can be obtained from the temporal version, since the space-time function must be of the form of a propagating wave vm(s − θt), where s is the local direction of propagation and θ the velocity. Thus behind the isochronal wavefront is a region undergoing depolarization (for θ = 50 cm/sec and a rise time of 1 msec its thickness is 0.5 mm); it is hence quite thin and often approximated as a surface. Behind that, the tissue is uniformly in the plateau state while ahead of the activation wave it is uniformly at rest. Application of Eq. (86) shows that there are no sources except in the region undergoing activation (the gradient being zero wherever vm is unvarying). Thus the activation source is a volume double layer with a thickness of around 0.5 mm lying behind the activation isochrone. The total strength of the double layer is given by integrating Eq. (86) in the direction of propagation; using the bidomain model this comes out
τ = (Vpeak − Vrest) re /(ri + re)   (87)
AQRST = C ∮heart ∇α · l̄j dv   (88)

where α is the area of the action potential (a function of position), l̄j is the lead vector of the lead in question, and the volume integral in Eq. (88) is taken throughout the heart. If the cardiac action potentials all had similar shapes but the duration of the plateau was a variable (possibly this is the leading difference in morphology), then
AQRST = C′ ∮heart ∇d · l̄j dv   (89)

where d is the local action potential duration.
Electromyography
In evaluating the field of a single muscle fiber, each point-source element im dz is weighted by the factor

W(ρ, z − z′) = 1/[4πσr √(Kρ² + (z − z′)²)]   (90)
where K z / r. This result differs from 1/R in Eq. (33) because it is assumed that a single fiber in a muscle bundle lies
in an anisotropic monodomain medium, which can be described by the radial, axial conductivity parameters r, z. A
further weighting arises from the large canula surface area,
and this necessitates a surface integration of Eq. (90) of the
type shown in Eq. (84), the result of which we designate WS.
(As an approximation the canula may be considered uniform
and its surface integral estimated from a line integral giving
a simple average). The potential function is given by
    Φ_e = i_m(z) ∗ W_S    (91)
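The weighting of Eq. (90) can be evaluated directly; a sketch with assumed conductivity values, showing how the anisotropy weights equal radial and axial displacements differently:

```python
import math

# Eq. (90): point-source weighting in an axially anisotropic medium,
#   W = 1 / (4*pi*sigma_r*sqrt(K*rho**2 + dz**2)),  K = sigma_z/sigma_r.
# The conductivities below are illustrative assumptions, not from the article.
sigma_r = 0.1          # radial conductivity, S/m (assumed)
sigma_z = 0.4          # axial conductivity, S/m (assumed)
K = sigma_z / sigma_r

def W(rho, dz):
    return 1.0 / (4.0 * math.pi * sigma_r * math.sqrt(K * rho**2 + dz**2))

# Unlike the isotropic 1/(4*pi*sigma*R) of Eq. (33), the weighting depends on
# direction, not just distance:
print(W(1e-3, 0.0))    # field point 1 mm off-axis radially
print(W(0.0, 1e-3))    # field point 1 mm along the fiber axis: larger weight
```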
    Φ_P = (a_R/4πσR²) · Σ_j ∫_Sj (σ_e Φ_e − σ_i Φ_i) dS    (92)

    Φ_P = (a_R/4πσR²) · ∫ J^i dv    (93)

    V_ab = (1/4πσ) ∫ J^i · ∇(1/r) dv    (85)
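The far-field 1/R² dipole approximation of Eq. (93) can be checked against the exact potential of a source-sink pair; a sketch with illustrative values:

```python
import math

# Far-field check: the potential of two opposite monopoles separated by d
# approaches the axial dipole form  phi = p/(4*pi*sigma*R**2).
# All numerical values below are illustrative assumptions.
sigma = 0.2          # conductivity, S/m
I0 = 1e-6            # source/sink strength, A
d = 1e-3             # separation, m
p = I0 * d           # dipole moment magnitude, A*m

def phi_pair(R):
    # source and sink at +/- d/2 on the axis, field point on the same axis
    return I0 / (4.0 * math.pi * sigma) * (1.0/(R - d/2) - 1.0/(R + d/2))

def phi_dipole(R):
    return p / (4.0 * math.pi * sigma * R**2)

R = 0.1              # field point 100x the separation away
err = abs(phi_pair(R) - phi_dipole(R)) / phi_pair(R)
print(err)           # well under 1%: the dipole term dominates at large R
```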
Electroencephalography
The electrical activity of the brain can be detected using scalp
electrodes; its spontaneous sources generate the electroencephalogram (EEG). Such sources arise from a large number
of cells. However, in contrast with the ECG and EMG, the
EEG apparently does not significantly involve action potentials of neuronal cells but rather is associated with the synaptic activity that precedes or follows activation. Actually EEG
sources are believed to arise mainly from postsynaptic potentials (PSP) on cortical pyramidal cells. A large number of cells
appear to underlie a detectable signal, and this implies temporal (and spatial) synchronization. If invasive intracellular recording is undertaken, both spike (action potential) and wave-type activity are found; abolition of the latter terminates the
EEG. In addition to the aforementioned spontaneous activity
giving rise to the EEG as measured at the scalp, one can also
obtain signals that are a response to stimuli using auditory,
visual, and tactile modalities. These are described as evoked
potentials.
As we learned from Eq. (86), a region consisting of excitable cells will establish an electrical source provided that a spatial gradient of v_m is present.
    V_Q = (1/4π) ∫ [(D · r)/r³] dS    (94)
BIBLIOGRAPHY

1. B. Hille, Ionic Channels of Excitable Membranes, 2nd ed., Sunderland, MA: Sinauer Assoc., 1992.
2. A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol., 117: 500–544, 1952.
3. D. E. Goldman, Potential, impedance, and rectification in membranes, J. Gen. Physiol., 27: 37–60, 1943.
4. A. L. Hodgkin and B. Katz, The effect of sodium ions on the electrical activity of the giant axon of the squid, J. Physiol., 108: 37–77, 1949.
5. A. Einstein, Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen, Ann. Physik, 17: 549–560, 1905.
6. E. Neher and B. Sakmann, The patch clamp technique, Sci. Am., 266: 28–35, 1992.
7. J. E. Randall, Microcomputers and Physiological Simulation, 2nd ed., New York: Raven Press, 1987.
8. J. Dempster, Computer Analysis of Electrophysiological Signals, London: Academic Press, 1993.
9. B. Frankenhaeuser, Quantitative description of sodium currents in myelinated nerve fibers of Xenopus laevis, J. Physiol., 151: 491–501, 1960.
10. C.-H. Luo and Y. Rudy, A dynamic model of the cardiac ventricular action potential, Circ. Res., 74: 1071–1096, 1994.
11. R. Plonsey and D. B. Heppner, Considerations of quasistationarity in electrophysiological systems, Bull. Math. Biophys., 29: 657–664, 1967.
12. R. Plonsey, The formulation of bioelectric source-field relationships in terms of surface discontinuities, J. Franklin Inst., 297: 317–324, 1974.
13. R. Plonsey, The active fiber in a volume conductor, IEEE Trans. Biomed. Eng., BME-21: 371–381, 1974.
14. C. S. Henriquez, N. Trayanova, and R. Plonsey, Potential and current distributions in a cylindrical bundle of cardiac tissue, Biophys. J., 53: 907–918, 1988.
15. R. H. Hoyt, M. L. Cohen, and J. E. Saffitz, Distribution and three-dimensional structure of intercellular junctions in canine myocardium, Circ. Res., 64: 563–574, 1989.
16. B. J. Roth and F. L. Gielen, A comparison of two models for calculating the electrical potential in skeletal muscle, Ann. Biomed. Eng., 15: 591–602, 1987.
17. L. Clerc, Directional differences of impulse spread in trabecular muscle from mammalian heart, J. Physiol., 255: 335–346, 1976.
18. D. E. Roberts and A. M. Scher, Effect of tissue anisotropy on extracellular potential fields in canine myocardium in situ, Circ. Res., 50: 342–351, 1982.
19. R. Plonsey, Dependence of scalar potential measurements on electrode geometry, Rev. Scientific Instrum., 36: 1034–1036, 1965.
20. R. Plonsey and A. van Oosterom, Implications of macroscopic source strength on cardiac cellular activation models, J. Electrocard., 24: 99–112, 1991.
21. D. W. Liu, G. A. Gintant, and C. Antzelevitch, Ionic basis for electrophysiological distinctions among epicardial, midmyocardial, and endocardial myocytes from the free wall of the canine left ventricle, Circ. Res., 72: 671–687, 1993.
22. D. B. Geselowitz, The ventricular gradient revisited: Relation to the area under the action potential, IEEE Trans. Biomed. Eng., BME-30: 76, 1983.
23. R. Plonsey, Recovery of cardiac activity: the T-wave and ventricular gradient, in J. Liebman, R. Plonsey, and Y. Rudy (eds.), Pediatric and Fundamental Electrocardiography, Boston: Martinus Nijhoff, 1987.
ROBERT PLONSEY
Duke University
BIOMAGNETISM
Biomagnetism describes the electromagnetic and magnetic
phenomena that arise in biological tissues. These phenomena include
Magnetic field at and beyond the body
Response of excitable cells to magnetic field stimulation
Intrinsic magnetic properties of the tissue
The magnetic field is generated either by the bioelectric currents or by magnetic material in the body. Similarly, by feed-
    ∇ · J = 0    (1)

    J = J^i − σ∇Φ    (2)
Because tissue capacitance is negligible (quasistatic conditions), charges redistribute themselves in a negligibly short
time in response to any source change. Because the divergence of J evaluates the rate of change of the charge density
with respect to time and because the charge density must be
zero, the divergence of J is necessarily zero. (We refer to the
total current J as being solenoidal, or forming closed lines of flow.)
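The "negligibly short time" can be quantified by the charge relaxation time τ = ε/σ; a sketch with typical order-of-magnitude tissue values (assumed for illustration, not quoted from the article):

```python
# Charge relaxation time tau = epsilon/sigma: the time constant with which
# free charge decays in a conductor.  Values are order-of-magnitude estimates.
eps0 = 8.854e-12       # permittivity of free space, F/m
eps_r = 8.0e4          # low-frequency relative permittivity of tissue (assumed)
sigma = 0.2            # tissue conductivity, S/m (assumed)

tau_relax = eps_r * eps0 / sigma
print(tau_relax)       # a few microseconds: far shorter than ~1 ms excitation events
```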
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
    ∇ · (σ∇Φ) = ∇ · J^i    (3)

The solution of Eq. (3) for the scalar function Φ, for a region that is uniform and infinite in extent (3), is

    4πσΦ = ∫_v (−∇ · J^i)(1/r) dv    (4)
Because a source element (−∇ · J^i) dv in Eq. (4) behaves like a point source, in that it sets up a field that varies as 1/r, the expression −∇ · J^i is defined as a flow source density I_F. Because we seek the solution for field points outside the region occupied by the volume source, Eq. (4) may be transformed (3) to
    4πσΦ = ∫_v J^i · ∇(1/r) dv    (5)
This equation represents the distribution of potential due to the bioelectric source J^i within an infinite, homogeneous volume conductor that has conductivity σ. Here J^i dv behaves like a dipole element (with a field that varies as its dot product with ∇(1/r)), and hence J^i can be interpreted as a volume dipole density.
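As a sketch of Eq. (5) for a single dipole element at the origin (the dipole strength and conductivity below are illustrative assumptions):

```python
import numpy as np

# Eq. (5) for one volume-dipole element J_i dv at the origin:
#   phi = (J_i dv) . grad'(1/r) / (4*pi*sigma),  with grad'(1/r) = r_vec/r**3
# (the gradient is taken with respect to the source coordinates).
sigma = 0.2                        # conductivity, S/m (assumed)
p = np.array([0.0, 0.0, 1e-9])     # dipole element J_i dv, A*m (assumed)

def phi(field_pt):
    r_vec = np.asarray(field_pt, dtype=float)   # source at the origin
    r = np.linalg.norm(r_vec)
    return (p @ r_vec) / (4.0 * np.pi * sigma * r**3)

print(phi([0.0, 0.0, 0.05]))   # on-axis field point: maximal potential
print(phi([0.05, 0.0, 0.0]))   # broadside: zero, by the dot product
```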
By using Green's theorem (4), Geselowitz (2) developed Eq. (6), which evaluates the electric potential anywhere within an inhomogeneous volume conductor containing internal volume sources:

    4πσ(r)Φ(r) = ∫_v J^i · ∇(1/r) dv + Σ_j ∫_Sj (σ′_j − σ″_j) Φ ∇(1/r) · dS_j    (6)

where σ′_j and σ″_j are the conductivities on the two sides of surface S_j.
The current density J throughout a volume conductor gives rise to a magnetic field given by the following relationship (3,5):

    4πH = ∫_v J × ∇(1/r) dv    (7)
where r is the distance from the external field point at which H is evaluated to an element of volume dv inside the body, J dv is a source element, and ∇ is an operator with respect to the source coordinates. Substituting Eq. (2) in Eq. (7) and dividing the inhomogeneous volume conductor into homogeneous regions with surfaces S_j,
    4πH = ∫_v J^i × ∇(1/r) dv − Σ_j σ_j ∫_vj ∇Φ × ∇(1/r) dv    (8)
Again using Green's theorem and making some vector manipulations, we obtain Eq. (9):

    4πH(r) = ∫_v J^i × ∇(1/r) dv + Σ_j ∫_Sj (σ′_j − σ″_j) Φ ∇(1/r) × dS_j    (9)
This equation describes the magnetic field outside a finite volume conductor containing internal (electric) volume sources J^i and inhomogeneities (σ′_j − σ″_j). It was first derived by Geselowitz (6).
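For an unbounded homogeneous conductor the surface sum in Eq. (9) vanishes, and a point current dipole p gives H = p × a_r/(4πr²); a minimal sketch (the dipole strength is an illustrative assumption):

```python
import numpy as np

# Point current dipole p at the origin in an unbounded homogeneous conductor:
# Eq. (9) with no interfaces reduces to  H = p x r_hat / (4*pi*r**2).
p = np.array([0.0, 0.0, 1e-9])     # current dipole, A*m (assumed)

def H(field_pt):
    r_vec = np.asarray(field_pt, dtype=float)
    r = np.linalg.norm(r_vec)
    return np.cross(p, r_vec / r) / (4.0 * np.pi * r**2)

print(H([0.05, 0.0, 0.0]))   # field circles the dipole axis (y component only)
print(H([0.0, 0.0, 0.05]))   # zero on the axis, where p x r_hat vanishes
```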
[Figure (caption lost): amplitudes of biomagnetic signals and of noise fields versus frequency (10⁻³ to 10³ Hz), spanning nT to fT. Biomagnetic signals: MCG (magnetocardiogram), MMG (magnetomyogram), MEG (magnetoencephalogram), MOG (magneto-oculogram). Noise fields: static field of the earth, geomagnetic fluctuations, laboratory noise, line-frequency and harmonic noise, radio-frequency noise, and thermal noise fields (eddy-current shield and the body itself). Equivalent input noise is shown for commercial and NASA flux-gate magnetometers, an induction-coil magnetometer (at Tampere), and a SQUID magnetometer.]
Figure 2. Sensitivity distribution of a detector that detects the electric dipole moment of a bioelectric volume source.
Figure 3. Sensitivity distribution of a detector that detects the magnetic dipole moment of a bioelectric volume source.
Figure 4. Various methods for detecting the magnetic dipole moment of the heart. (a) The basic principle, the XYZ-lead system. (b) Symmetrical XYZ-lead system. (c) Symmetrical unipositional lead system.
Magnetoencephalography
As in the cardiac application, the benefits and drawbacks of magnetic measurement of the electric activity of the brain (MEG) can be divided into theoretical and technical categories. First, the theoretical aspects are discussed.
Half-Sensitivity Volumes of Electro- and Magnetoencephalography. The half-sensitivity volumes for different EEG and MEG
leads as a function of electrode distance and gradiometer
baselines are shown in Fig. 7(a). The minimum half-sensitiv-
[Figure (caption lost): isosensitivity lines, half-sensitivity volume, and maximum-sensitivity region of a magnetometer coil (r = 10 mm, 20 mm from the scalp, reciprocal current 1 A/s) in a spherical head model (surface radii r_b = 76 mm, r_c = 80 mm, r_s = 85 mm, r_t = 92 mm); lead-field current density J_LM given in pA/m².]
Thus, contrary to general belief, the EEG can focus its sensitivity on a small region in the brain better than the whole-head MEG. At about 20° to 30° of separation, the two-electrode EEG lead needs a slightly smaller separation to achieve the same half-sensitivity volume as the planar gradiometer. The sensitivity distributions of these leads, however, are similar. Note that if the sensitivity distributions of two different electric or magnetic lead systems are the same, they detect exactly the same source and produce exactly the same signal. Therefore, the planar gradiometer and the two-electrode EEG lead detect similar source distributions.
Sensitivity of EEG and MEG to Radial and Tangential
Sources. The three-electrode EEG has its maximum sensitivity under the electrode that forms the terminal alone. This
sensitivity is mainly directed radially to the spherical head
model. With short electrode distances, the sensitivity of the
two-electrode EEG is directed mainly tangentially to the
spherical head model. Thus with the EEG it is possible to
detect sources in all three orthogonal directions, that is, in
the radial and in the two tangential directions relative to the
spherical head model.
In the axial gradiometer MEG lead, the sensitivity is directed tangentially to the gradiometer axis of symmetry and
BIOMAGNETISM
thus also tangentially to the spherical head model. In the planar gradiometer, the sensitivity has its maximum under the
center of the coils and is directed mainly linearly and tangentially to the spherical head model. The MEG lead fields are
oriented tangentially everywhere to the spherical head model.
This may be easily understood by recognizing that the lead
field current does not flow through the surface of the head
because no electrodes are used. Therefore, the MEG detects
only sources oriented in the two tangential directions relative
to the spherical head model.
[Figure (caption lost): isosensitivity lines, half-sensitivity volume, maximum-sensitivity region, and zero-sensitivity line of a planar gradiometer (coil radius 10 mm, reciprocal current 1 A/s) in the spherical head model; lead-field current density J_LM given in pA/m².]
Figure 7. (a) Half-sensitivity volumes of different EEG and MEG leads (1-, 2-, 3-, and 21-electrode EEG; axial gradiometer and planar gradiometer MEG; radial, tangential, and vortex source components; electrode arrays with d = 36 mm and d = 72 mm) as a function of electrode/coil separation; (b) lower left corner of the previous figure magnified (inner sphere volume = 683 cm³).
JAAKKO MALMIVUO
Tampere University of Technology
BIOMEDICAL ENGINEERING
Biomedical engineering is the collective term for the disciplines that bring the concepts and principles of engineering to
the field of medicine. The integration of chemical, mechanical,
electrical, and computer engineering fundamentals with biology and medical science has a relatively recent history that
began in the mid-1900s. Technological and scientific advances
in the twentieth century created the opportunity for biomedical engineering innovations, such as physiological simulation
and modeling, designing of implants and drug delivery systems, and development of instrumentation and diagnostic
tools.
The emergence of biomedical engineering followed the
movement of primary medical care from the home to the hospital in the 1930s and 1940s. Until this time, hospitals were
used mainly for care of the poor. Home care by physicians,
midwives, and family was the predominant form of health
care. The hospital, however, became the center of modern
medical care after the discovery of X rays and antibiotics. By
the 1930s, the use of barium salts and radio-opaque materials
allowed X-ray visualization of practically all organ systems
(1). Because of its cost, the improved diagnostic capability
that radiation equipment provided was available only at hospitals. The advent of antibacterial agents and antibiotics, for
example, sulfanilamide in the mid-1930s and penicillin in the
early 1940s (1), helped prevent cross-infection among patients
and staff, a previous deterrent to hospital care.
Electronic innovations developed for the military effort in
World War II provided the basis for advances in medical electronics in the post-war era. These advances made it possible
to measure low-level biosignals, which led to a better understanding of electrical impulses and the central nervous system. The biologists who had been recruited for radar work
during the war were prepared for these developments. However, the next generation of biologists was without this benefit, and now technology was advancing rapidly. The need for
a bridge between the gap of technical knowledge and biology
resulted in the emergence of the biomedical engineer (2).
The areas in which engineering blends with medicine are
abundant and diverse. Biomedical engineers design imaging
and diagnostic instrumentation, drug delivery systems, medical sensors, prostheses, rehabilitative devices, and artificial
organs. They develop biocompatible materials and model physiological systems.
BIOMATERIALS
Biomedical engineers working in this area are concerned with
researching and designing safe and reliable synthetic materials that can intimately contact living systems and tissues.
This contact makes it essential that these materials are physiologically acceptable and pharmacologically inert, that is,
nontoxic and noncarcinogenic. Additional requirements include (1) adequate mechanical strength, (2) adequate fatigue life, (3) proper weight and density, and (4) usability in reproducible and cost-effective large-scale fabrication (4). Examples of biomaterials range from replacement parts to sutures, diagnostic aids, and tooth fillings. The three main classes of biomaterials are metals, ceramics, and polymers.
Metallic Biomaterials
Metals in the body can corrode, possibly damaging an implant and causing harmful interactions with its corrosion products. Some metals, such as iron (Fe) and cobalt (Co), are required by the body for normal function but are still harmful
if available in more than minute quantities. Implants consisting of stainless steels, cobalt-chromium (CoCr) alloys, and
titanium (Ti) and titanium alloys are corrosion resistant and
biocompatible. Stainless steels with molybdenum (Mo), types
316 and 316L, have increased salt-water corrosion resistance
and are commonly found in temporary implants like fracture
plates or screws. Type 316L, which differs from type 316 only
in carbon content, is more widely recommended. Cobalt-chromium alloys are used in dentistry and artificial joints (castable CoCrMo) and in knee and hip prostheses (wrought CoNiCrMo). Titanium is a strong, lightweight metal that is ideal
for implants. However, its poor shear strength precludes it
from being used in bone screws and plates. Titanium-nickel
alloys (TiNi) exhibit the uncommon shape-memory effect (SME): after deformation, the material returns to its previous shape when heat is applied. Attempts to take advantage of this property include research
into intracranial aneurysm clips, contractile artificial muscles
for an artificial heart, and orthopedic implants. There are
other specialized uses for metals, such as platinum alloys for
electrodes and tantalum for wire sutures. In dentistry, gold
provides durable, corrosion-resistant fillings, and gold alloys
are implemented in cast restorations, inlays, crowns, and
cusps. Dental amalgam for cavities is a mixture of liquid mercury with silver, tin, copper, and zinc (5).
Ceramic Biomaterials
Ceramics are primarily inorganic, polycrystalline compounds,
such as silicates and metal oxides. However, the covalently
bonded forms of carbon, such as graphite and diamonds, are
also considered ceramics. Improvements in ceramic formation
in the late twentieth century have resulted in materials without the characteristic brittleness and low impact and tensile
strengths that previously limited the use of ceramics. Ceramics are typically used in bone replacement and dental crowns.
The three general classifications of ceramic biomaterials are
(1) nonabsorbable, (2) bioactive or surface reactive, and (3)
biodegradable or resorbable. Ceramics are also designated as
relatively inert, semi-inert, and noninert.
Common nonabsorbable or relatively inert ceramics include alumina (Al2O3), zirconia (ZrO2), and carbonaceous ce-
BIOMEDICAL ENGINEERING
ramics. Implant uses for nonabsorbable ceramics are generally for structural support, such as bone plates, bone screws,
and femoral heads. This class of ceramics has also been used
in ventilation tubes and in sterilization and drug delivery devices. Carbon ceramics are primarily for coating surfaces of
devices that are used to repair diseased heart valves and
blood vessels because of their high compatibility with tissue
and blood.
Surface-reactive ceramics are primarily used to coat metal
prostheses. When implanted, these ceramics actually form
strong bonds with surrounding tissue. Dense, nonporous
glass-ceramics, formed by controlled crystallization of glasses,
fall into this category.
Resorbable or noninert ceramics are used to make both implants and drug delivery devices. The implants, predominantly variations of calcium phosphate, typically act as a substitute for bone. After implantation, resorbable ceramics
degrade while endogenous tissue replaces them (6,7).
Polymeric Biomaterials
Various medical supplies, devices, and implants consist of
polymers. The biocompatibility, ease of processing into diverse shapes, and the relatively low cost make polymers ideal
biomaterials. Out of the multitude of polymers, only about
twenty are used as biomaterials. Polyvinylchloride (PVC) tubing, sheets and films form IVs, catheters, cannulae, blood and
solution bags, and surgical packaging. Pharmaceutical bottles, pouches and bags, and orthopedic implants are made
from polyethylene (PE) of varying densities. Artificial vascular grafts, suture, and packaging of devices are among the
medical uses of polypropylene (PP). Soft contact lenses, implantable ocular lenses, dentures, bone cement for joint prostheses fixation, blood pumps and reservoirs, membranes for
blood dialyzers, and IV systems contain polymethylmethacrylate (PMMA). Sutures, including resorbable sutures, and
artificial vascular graft applications involve polyesters. Nylon
thread is a common surgical material. Other polymeric biomaterials include polystyrene (PS) and polystyrene copolymers; rubbers, such as silicone rubber; polyurethane; polyacetal; polysulfone; polycarbonate; and fluorocarbon polymers
(primarily for coatings), such as Teflon (7).
Composite Biomaterials
Materials consisting of two distinct phases or components are
called composites. Various biological materials, such as bone
and skin, are naturally occurring composite materials. Research and development in the field of biomaterials includes
implants formed from composites. Composites offer a means
to manipulate properties, such as strength-to-weight ratios
and stiffness, in ways not possible with homogeneous materials.
The shape of the inclusion material of a composite can be
classified in three ways: (1) particulate, (2) fiber, and (3)
platelet or lamina. These consist of none, one, or two long
dimensions, respectively. Polymeric biomaterials can contain
particulates or fibers to improve stiffness. Examples include
inclusion of bone particles or metal fibers in PMMA to improve the stiffness and fatigue life of bone cement and silica
(SiO2) particles in rubber to strengthen catheters and rubber
gloves (8). Honeycombs and foams are composite materials containing voids that are filled with either air or a liquid.
The electrocardiograph (ECG), which first appeared in hospitals in 1910, measures the electrical activity of the heart
(18). Devices that measure the electrical activity in other
parts of the body also contribute to current diagnostic capabilities. In addition to the ECG, bioelectric phenomena that are
measured for research and diagnostic purposes include electroencephalography (EEG), electromyography (EMG), electroretinography (ERG), and electrogastrography (EGG), which
measure the electrical activity of brain, muscle, eye, and
stomach, respectively. The measurement of propagated neural impulses that result from electrical stimulation is used to
assess nerve damage.
Biomagnetic fields arise from the electrical activity of tissue. The magnetocardiogram (MCG), or magnetic measurement of the electric activity of the heart, has the highest amplitude of biomagnetic signals (50 pT) and was first detected
in 1963 by Baule and McFee. The MCG, unlike the other
lower amplitude biomagnetic signals, does not require a magnetically shielded room. Comparisons between the MCG and
the ECG have revealed similar capabilities for diagnosing
myocardial disorders with 50% improvement when combined
as an electromagnetocardiogram (EMCG) (19). The ECG is
still much more widely used than the MCG.
Other biomagnetic measurements, for example, the magnetic measurement of the electrical activity of the brain, called the magnetoencephalogram (MEG), are limited in location by the need for a room
with magnetic shielding because of the very low amplitude of
the signals. The development of the superconducting quantum interference device (SQUID) in 1970 made it possible to
record these low biomagnetic signals with good signal quality.
There are thought to be two advantages of MEG over the
EEG: (1) the ability to measure smaller regions of the brain
and (2) fundamental differences in the sensitivity distribution
between the two methods.
Implantable pacemakers help patients who cannot maintain a steady heartbeat by supplying a controlled, rhythmic
electric stimulus to the heart. This stimulus mimics the action of the sinoatrial node (SA node) of a healthy heart, the
heart's natural pacemaker. With modern implantable pacemakers, clinicians use telemetry to program and monitor
functions externally.
Ventricular fibrillation (VF) is a type of cardiac arrhythmia
that is lethal. Death occurs in minutes during VF if the condition is not corrected. Because self-correction is rarely possible,
defibrillation, typically by the application of an electrical
shock to the heart, resets the heart to normal beating. Defibrillators are used externally, as in emergency rooms or ambulances, or are implanted into patients who are at constant
risk of developing VF. Some commercial airlines are now
equipped with automatic defibrillators that will trigger a
shock if the device determines that the patient is having VF.
These devices do not have to be operated by clinically
trained personnel.
Bioelectric impedance analysis (BIA) of tissue provides information about the small pulsatile impedance changes that
occur during heart and respiratory action. BIA is used to determine body characteristics (e.g., percent body fat) or to reconstruct tomographical images of the body (20,21) by measuring conductivity and permittivity at different frequencies.
BIOMEDICAL SENSORS
Biomedical sensors, or biosensors, convert biologically significant signals into electrical signals (13,15,17,18,22,23).
These sensors have both diagnostic and therapeutic applications, and can be active or passive devices. Two major classes
of biomedical sensors, which are based on the variable measured, are physical and chemical sensors. Bioanalytic sensors
are a special class of chemical sensors that take advantage of
biochemical binding reactions to identify complex biomolecules with high specificity and selectivity. One of the earliest
and most clinically relevant biosensor applications was developed for measuring blood gases (O2, CO2) and pH. Measuring
blood gases and pH continues to be an important use of biomedical sensors. Other aspects of blood chemistry, for example, glucose and lactate, can now be measured. Another
method of classification, involving the method of application
of the sensor, is divided into four categories: (1) noncontacting, (2) skin surface, (3) indwelling (minimally invasive),
and (4) implantable. Indwelling and implantable devices involve carefully selecting inert biomaterials for the sensing interface and packaging. Implantable sensing devices need to
maintain long-term calibration and function.
BIOTECHNOLOGY
Biotechnology is not considered a discipline but rather a collection of procedures and techniques by which a scientist or
engineer attempts to modify biological organisms for the benefit of humanity. These attempts include improving plants and animals for agricultural and food production and genetic engineering.
MEDICAL IMAGING
Medical imaging provides vital information about a body's
structures and functions. Examples of medical imaging modalities include X rays, magnetic resonance imaging (MRI),
positron emission tomography (PET), single-photon emission
computed tomography (SPECT), ultrasound, and computed
tomography (CT). These areas have advanced rapidly with
the computer age. However, challenges still exist to reduce
the cost of common imaging equipment.
The discovery of X rays by Wilhelm Roentgen in 1895 provided the first technique for seeing inside the human body
(40). The theory behind the images involves the exposure of
the body to X rays which pass through to a detector or interact by being absorbed or scattered. When scattered, the X
rays may still reach the detector and cause a loss in image
quality. When there is not enough variation in the absorption
of X rays between the area of interest and the surrounding
tissues, contrast is provided by barium salts (strong X-ray absorbers). Radiopaque materials, such as iodine compounds,
provide the contrast in X-ray angiography (serial radiographs
of the circulatory system). Standard X-ray imaging is used to
detect disease or injury in bones or other body structures,
while mammograms are used to diagnose breast cancer
(41,42). The X-ray tube for mammograms is different from the
one used to detect changes in bony structures.
Computed tomography (CT), which was developed in the
1970s and is based on the same principles as X-rays, provided
the first cross-sectional images of internal body structures
(43). CT images are produced by reconstructing a large number of X-ray transmission measurements, called projection
data, into tomographic maps of the X-ray linear attenuation
coefficient. CT is now a standard procedure in most hospitals, and practically all parts of the body are imaged by CT technology. One of the problems associated with both CT and X-ray imaging is that tissue damage can occur if single exposures, or the accumulated lifetime exposure to X rays, exceed safe levels.
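The projection data described above are line integrals of the attenuation coefficient, related to the measured intensity by the Beer-Lambert law; a minimal sketch with a made-up one-dimensional attenuation profile:

```python
import math

# One CT measurement: intensity after traversing tissue of attenuation mu,
#   I = I0 * exp(-sum(mu * dl))   (Beer-Lambert law).
# The attenuation profile along this ray is illustrative.
mu = [0.0, 0.2, 0.5, 0.5, 0.2, 0.0]   # linear attenuation along one ray, 1/cm
dl = 1.0                               # path length per sample, cm
I0 = 1.0                               # incident intensity

line_integral = sum(m * dl for m in mu)
I = I0 * math.exp(-line_integral)

# Taking the log recovers the line integral; reconstruction inverts many such
# projections at different angles back into a map of mu:
recovered = -math.log(I / I0)
print(line_integral, recovered)
```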
Magnetic resonance imaging (MRI) uses a strong magnetic
field to align the weak nuclear moments of materials with
atoms containing an odd number of protons or neutrons (e.g., ¹H, ¹³C, and ³¹P) (44,45). Typically, MRI images the protons (¹H) of water because the body is two-thirds water. However,
it is not possible to directly measure the weak signals of the
protons that are aligned with the strong applied magnetic
field. Therefore, resonance techniques are employed to measure the collection of the nuclear moments, called spins. To
distinguish the locations of spins, the magnetic field that is
imposed in MRI has spatial variations. Primarily, MRI images provide diagnostic information. Recently, research efforts have addressed blood flow and brain perfusion, termed functional MRI (fMRI).
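The resonance technique mentioned above rests on the Larmor relation, f = (γ/(2π))B; for protons, γ/(2π) is approximately 42.577 MHz/T, so the resonance frequency scales directly with the applied field:

```python
# Larmor relation: nuclear spins precess at  f = (gamma/(2*pi)) * B.
gamma_bar = 42.577e6       # Hz per tesla, gyromagnetic ratio of the proton (1H)
for B in (0.5, 1.5, 3.0):  # common clinical field strengths, T
    print(B, "T ->", gamma_bar * B / 1e6, "MHz")
```

The spatial variation of the field mentioned in the text exploits exactly this proportionality: making B depend on position makes the resonance frequency a label for location.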
MEDICAL INFORMATICS
Biomedical engineers working in medical informatics develop
computer databases and networks that contain patient-related information (5255). This information facilitates healthcare delivery and assists in clinical decision making. Two
prime examples are the hospital information systems (HIS)
and computer-based patient medical records (CBPMR). The
HIS database encapsulates all of the information regarding
patients, not just a limited departmental or clinical view. A
modern HIS database includes (1) the entire clinical record of
a patient, including all inpatient and outpatient procedures;
(2) all patient charges and financial information; (3) admission, transfer, and release information (hospital bed control);
(4) patient management (prescribed therapy) information;
and (5) clinical decision making functions. The CBPMR is an
electronic form of a patient's medical record that includes radiological and pathological images. It has advantages of accessibility and ease of information retrieval over the typical
paper medical record. The CBPMR database supports clinical
decision-making functions by assisting in patient treatment
with suggestions for diagnosis and further testing and by providing therapeutic protocols and alerts for possible drug interactions. Confidentiality of patient information is protected by
having different layers of access available to users with different privileges.
The technological advances in computers and telecommunications have brought about the field of telemedicine. The
CBPMR contains the patient's entire record, including images, which can be transferred electronically to consulting
physicians in distant locations. The benefit of telemedicine becomes apparent when considering patients in areas without
major hospitals and medical universities who need the expertise of the medical profession to analyze digital images, such
as magnetic resonance images. One area of concern involves
the quality of digital representations, such as scanned X-rays.
REHABILITATION ENGINEERING
A rehabilitation engineer designs and develops technologies
that augment or replace impaired sensory, communication, or
motor systems. A device that augments an impaired function
is called an orthosis, and a replacement device is called a
prosthesis. Rehabilitation engineering is concerned with restoring the ability to perform activities of daily living (ADL),
such as (1) eating, brushing teeth, and reading; (2) public
transportation and building accessibility; (3) personal mobility; (4) sensory disabilities, such as impaired sight or hearing; and (5) communications. In addition to the biology, physiology, and engineering involved in design, a rehabilitation engineer needs to consider the social, financial, and psychological
impacts of a device or technology. One particular difficulty in
rehabilitation engineering is the variation in retained ability
and needs of patients. Even if a standard device exists, individual modifications are expected because of differences
among patients (68).
Traditional orthoses for sensory impairments are eyeglasses or contacts and hearing aids. The retention of some
function in the sensory system is required for these devices to
work. If vision has been completely impaired through damage
to the retina, optic nerve, or cerebral cortex, other methods
for restoring ADL have been developed. An example of this is
the development of Braille to allow the visually impaired to
read. Advances in computing make scanning text and conversion into either voice or Braille (by the movement of a matrix
of pins) available as other possible reading methods. Many
deaf individuals use their vision and sign language as a substitute for hearing.
CONCLUSION
A very brief overview of many of the areas that are currently
important in biomedical engineering has been presented.
However, this is a very diverse field that is constantly expanding. Future developments will occur in nanofabrication,
microelectromechanical (MEM) technology, sensory replacements (e.g., the artificial retina), engineered tissues, molecular electronics, low-cost medical devices that will help improve health care without increasing health care costs, and
other emerging areas.
BIBLIOGRAPHY
1. J. D. Bronzino, Biomedical engineering, in Encyclopedia of Applied Physics, New York: VCH, 1991, Vol. 2.
2. H. S. Wolff, Biomedical Engineering, New York: McGraw-Hill,
1970.
3. J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca
Raton, FL: CRC Press, 1995.
4. J. B. Park and R. S. Lakes, Biomaterials: An Introduction, 2nd ed., New York: Plenum, 1992.
BIOMEDICAL ENGINEERING
5. J. B. Park, Metallic biomaterials, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press,
1995.
6. P. K. Bajbai and W. G. Billotte, Ceramic biomaterials, in J. D.
Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
7. H. B. Lee, S. S. Kim, and G. Khang, Polymeric biomaterials, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
8. C.-C. Chu, Biodegradable polymeric biomaterials: An overview,
in J. D. Bronzino (ed.), The Biomedical Engineering Handbook,
Boca Raton, FL: CRC Press, 1995.
9. A. A. Biewener (ed.), Biomechanics (Structures and Systems): A
Practical Approach, Oxford: Oxford Univ. Press, 1992.
10. T. R. Canfield and P. B. Dobrin, Mechanics of blood vessels, in J.
D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca
Raton, FL: CRC Press, 1995.
11. R. B. Davis III, Musculoskeletal biomechanics: Fundamental
measurements and analysis, in J. D. Bronzino (ed.), Biomedical
Engineering and Instrumentation: Basic Concepts and Applications, Boston: PWS Engineering, 1986.
12. R. L. Lieber and T. J. Burkholder, Musculoskeletal soft tissue
mechanics, in J. D. Bronzino (ed.), The Biomedical Engineering
Handbook, Boca Raton, FL: CRC Press, 1995.
13. R. Aston, Principles of Biomedical Instrumentation and Measurement, New York: Macmillan, 1990.
14. J. J. Carr and J. M. Brown, Introduction to Biomedical Equipment
Technology, 3rd ed., Upper Saddle River, NJ: Prentice-Hall, 1998.
15. J. G. Webster (ed.), Medical Instrumentation: Application and Design, 3rd ed., New York: Wiley, 1998.
16. W. Welkowitz, S. Deutsch, and M. Akay, Biomedical Instruments:
Theory and Design, 2nd ed., San Diego, CA: Academic Press,
1992.
17. D. L. Wise, Bioinstrumentation: Research, Developments and Applications, Boston, MA: Butterworth, 1990.
18. L. Cromwell, F. J. Weibel, and E. A. Pfeiffer, Biomedical Instrumentation and Measurements, 2nd ed., Englewood Cliffs, NJ:
Prentice-Hall, 1980.
19. J. Malmivuo, Biomagnetism, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
20. R. Patterson, Bioelectric impedance measurements, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton,
FL: CRC Press, 1995.
21. J.-P. Morucci et al., Bioelectrical impedance techniques in medicine, Crit. Rev. Biomed. Eng., 24: 223-679, 1996.
22. E. A. H. Hall, Biosensors, Englewood Cliffs, NJ: Prentice-Hall,
1991.
23. M. R. Neuman, Physical measurements, in J. D. Bronzino (ed.),
The Biomedical Engineering Handbook, Boca Raton, FL: CRC
Press, 1995.
24. M. Akay, Biomedical Signal Processing, San Diego, CA: Academic
Press, 1994.
25. M. Akay (ed.), Time Frequency and Wavelets in Biomedical Signal
Processing, New York: IEEE Press, 1998.
26. J. Dempster, Computer Analysis of Electrophysiological Signals,
San Diego, CA: Academic Press, 1993.
27. W. J. Tompkins, Biomedical Digital Signal Processing, Englewood
Cliffs, NJ: Prentice-Hall, 1993.
28. R. E. Ziemer, W. H. Tranter, and D. R. Fannin, Signals and Systems: Continuous and Discrete, 3rd ed., New York: Macmillan,
1993.
29. A. Cohen, Biomedical signals: Origin and dynamic characteristics; frequency-domain analysis, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
51. W. J. Greenleaf, Medical applications of virtual reality technology, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
52. T. A. Pryor, Hospital information systems: Their function and
state, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
53. J. M. Fitzmaurice, Computer-based patient records, in J. D.
Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
54. M. H. Loew, Informatics and clinical imaging, in J. D. Bronzino
(ed.), The Biomedical Engineering Handbook, Boca Raton, FL:
CRC Press, 1995.
55. S. Sengupta, Computer networks in health care, in J. D. Bronzino
(ed.), The Biomedical Engineering Handbook, Boca Raton, FL:
CRC Press, 1995.
56. V. C. Rideout, Mathematical and Computer Modeling of Physiological Systems, Englewood Cliffs, NJ: Prentice-Hall, 1991.
57. C. Cobelli and M. P. Saccomani, Compartmental models of physiological systems, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
58. J. L. Palladino, G. M. Drzewiecki, and A. Noordergraaf, Modeling
strategies in physiology, in J. D. Bronzino (ed.), The Biomedical
Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
59. E. A. Woodruff, Clinical care of patients with closed-loop drug
delivery systems, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
60. P. M. Galletti, Prostheses and artificial organs, in J. D. Bronzino
(ed.), The Biomedical Engineering Handbook, Boca Raton, FL:
CRC Press, 1995.
61. A. P. Yoganathan, Cardiac valve prostheses, in J. D. Bronzino
(ed.), The Biomedical Engineering Handbook, Boca Raton, FL:
CRC Press, 1995.
62. P. M. Galletti and C. K. Colton, Artificial lungs and blood-gas
exchange devices, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
63. S. I. Fox, Human Physiology, 5th ed., Dubuque, IA: Wm. C.
Brown, 1996.
64. P. M. Galletti, C. K. Colton, and M. J. Lysaght, Artificial kidney,
in J. D. Bronzino (ed.), The Biomedical Engineering Handbook,
Boca Raton, FL: CRC Press, 1995.
65. P. M. Galletti and H. O. Jauregui, Liver support systems, in J.
D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca
Raton, FL: CRC Press, 1995.
66. P. M. Galletti et al., Artificial pancreas, in J. D. Bronzino (ed.),
The Biomedical Engineering Handbook, Boca Raton, FL: CRC
Press, 1995.
67. I. V. Yannas, Artificial skin and dermal equivalents, in J. D.
Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
68. C. J. Robinson, Rehabilitation engineering, science, and technology, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC Press, 1995.
TAMARA L. TATROE
SUSAN M. BLANCHARD
North Carolina State University
BIOMEDICAL TELEMETRY
BIOTELEMETRY
TELEMETRY IN MEDICINE
MEDICAL TELEMETRY
The word biotelemetry is an abbreviation of biomedical telemetry: measurements of electrical and nonelectrical quantities made on human or animal subjects are transmitted without physical contact to a stationary receiving site. Measurements on human subjects generally involve patients in hospitals or in follow-up during rehabilitation, where freedom of movement is required. Biotelemetry can also be applied to athletes. The most commonly measured physiological variables are heart and brain potentials, known as the electrocardiogram (ECG) and electroencephalogram (EEG), and, for athletes, muscle potentials [i.e., the electromyogram (EMG)] as well as accelerations and forces. In hospitals, in intensive care units or in wards where patient monitoring is performed, radio frequency (RF) telemetry is often applied between the free-moving patient and the bedside monitor or central unit. RF telemetry allows the patient more freedom and eliminates the need for nurses to be present when the patient needs to leave the bed. The quantities most often measured are the ECG, blood pressure, temperature, oxygen saturation, myocardial ischemia (by ST-segment level), and respiration. Biologists and human ecologists have great interest in measuring similar quantities in animals. Biotelemetry allows us to receive quantitative data from freely moving animals in their normal, unconfined environment without disturbing them; the transmitter must be small and lightweight in relation to the animal's size and should not exceed 1% of the body weight.
For the ecologists and biologists, the study of homing and
migration of birds and terrestrial and aquatic animals is
of great interest.
Besides telemetry over great distances, biotelemetry over very short distances is also realized when a transmitter is implanted under the skin or swallowed by the patient or animal. In the latter case, a receiver loop antenna is located over the patient, close to the place where the transmitter lies. Processes in the patient's intestines, such as pressure, temperature, or pH value, can then be investigated and measured. Swallowed transmitters are usually called endoradiosondes. Such a sonde passes through the stomach and the intestines, transmitting data until it reaches the exit.
BIOTELEMETRY SYSTEMS
A biotelemetry system consists of a transmitter and a receiver with a transmission link between them, as shown in Fig. 1. The transmitted information can be a biopotential or a nonelectric quantity such as arterial pressure, respiration, body temperature, or pH value. Transducers convert nonelectrical quantities into electrical signals; for biopotential measurements, adequate electrodes must be used. The voltage at the output of the transducer or on the electrodes is very low (in the range of a few microvolts to about 10 mV) and must be amplified. The measured signal must then change the amplitude or frequency of the carrier [i.e., the high-frequency (HF) signal that carries the information and enables transmission]; this process is called modulation. The modulated HF signal at the receiver is usually weak, depending on the distance between transmitter and receiver, and is influenced by outside noise. The receiver must first amplify this low-level signal, either directly or by the heterodyne principle, and then demodulate it to obtain the same signal waveshape as at the transmitter input. The same process occurs between a radio station and a radio receiver, except that the microphone is replaced by a transducer and the loudspeaker by a pen recorder or some other display. For HF transmission, antennas are required on the transmitting and receiving sides, and they must be in resonance with the carrier frequency. The length of the antenna is usually one-quarter wavelength (λ/4) for a whip antenna and λ/2 for a dipole antenna. In biotelemetry, loop antennas, or coils for implants, are also in use. This is wireless transmission realized by RF electromagnetic waves.
Wireless transmission can also be accomplished using infrared radiation or ultrasound. Biotelemetry over wires is used less frequently because the patient cannot move about freely and is limited to short distances. However, if the patient is stationary, telephone lines can be used over long distances. Telephone lines are also used in telemedicine, where pictures can be transmitted over the Internet. Wired and wireless systems can be combined for monitoring a patient at home, with a short-range wireless system connected to the telephone network. Telemetry over wires, however, being trivial, is not considered here.
MODULATION MODES
Of amplitude modulation (AM) and frequency modulation (FM), FM has the clear advantage. AM is sensitive to the patient's movement, which changes the signal amplitude, and to reflections from the walls of the room; AM is also very sensitive to noise. FM, in contrast, does not have these drawbacks, because the information is carried in the frequency changes of the carrier frequency f0, and amplitude changes of the signal do not influence the transmitted information. The deviation Δf of the frequency from the carrier frequency f0 is proportional to the signal voltage. In addition, FM electronic circuits can be very simple if only short distances are to be spanned (Fig. 2).
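The FM principle, an instantaneous frequency f0 + k·s(t) with constant amplitude, can be sketched numerically. This is an illustrative sketch only; the sample rate, carrier frequency, and deviation constant below are hypothetical values, not taken from the text:

```python
import math

# Hypothetical parameters for illustration only.
fs = 100_000            # sample rate (Hz)
f0 = 10_000             # carrier frequency (Hz)
k = 500                 # frequency deviation per volt (Hz/V)

def fm_modulate(signal, fs, f0, k):
    """Frequency-modulate: instantaneous frequency is f0 + k*s(t)."""
    out, phase = [], 0.0
    for s in signal:
        phase += 2 * math.pi * (f0 + k * s) / fs   # integrate frequency
        out.append(math.cos(phase))
    return out

# Example biosignal: a 100 Hz tone of 1 V amplitude.
signal = [math.sin(2 * math.pi * 100 * n / fs) for n in range(1000)]
fm = fm_modulate(signal, fs, f0, k)

# The envelope stays constant: the information rides on the frequency,
# so amplitude disturbances (movement, reflections) do not corrupt it.
print(max(abs(v) for v in fm))
```

The constant envelope is the point of the exercise: an AM receiver would read amplitude fluctuations as signal, whereas here they carry nothing.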
For greater energy saving, pulse modulation (PM) is more appropriate because of its low duty cycle. In this case the power consumption is much lower, mostly reduced to between 0.05 and 0.005 of that of a continuous power supply. This means that the pulse current can be 20 to 200 times stronger than the average current consumption.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
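The duty-cycle figures above follow from one line of arithmetic: the average current equals the pulse current times the duty cycle, so duty cycles of 0.05 to 0.005 correspond exactly to pulse-to-average ratios of 20 to 200. A small sketch (the pulse current value is hypothetical):

```python
# Duty-cycle arithmetic for pulsed telemetry transmitters.
# The pulse current below is a hypothetical illustration.

def average_current(pulse_current_ma, duty_cycle):
    """Average current drawn when transmitting in short pulses."""
    return pulse_current_ma * duty_cycle

pulse_ma = 40.0                      # current during a pulse (mA)
for d in (0.05, 0.005):              # duty cycles quoted in the text
    avg = average_current(pulse_ma, d)
    ratio = pulse_ma / avg
    print(f"duty cycle {d}: average {avg} mA, pulse/average ratio {ratio:.0f}")
```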
Medical Telemetry
Figure 3. (a) Frequency, (b) time division modulation, and (c) amplitude shift keying.
Figure 4. (a) Frequency division three-channel multiplex, (b) three-channel FM transmitter, and
(c) voltage-to-frequency converter with multivibrator.
and has six force transducers in the input (6). The signals
from the transducers are amplified and fed into the multiplexer (MUX). A clock circuit controls a series of binary-coded pulses that are sent to pins A, B, C through a
counter. These inputs to the multiplexer sequentially select the channel (1, 2, 3, 4, 5, 6) to be connected to the
comparator. The central unit also sends the pulses from
Figure 5. (a) Six-channel transmitter with PPM and (b) time diagram of PPM.
Cd is the negative terminal, and NiOOH is the positive terminal. NiCd batteries are manufactured as button cells
or cylindrical cells. Button cells are more appropriate for
small transmitters in biotelemetry. Cylindrical cells are adequate for large biotelemetry transmitters carried by patients where size is not restricted and multiple channels
with higher energy consumption are required. Rechargeable batteries can be implanted inside an animal or human subject (some of the first pacemakers had rechargeable batteries) and can be recharged during the night, while the subject is sleeping, through inductive coils (7).
Solar cells can also be mounted on an animal's back, but rechargeable batteries are usually used with them to avoid voltage changes when the light changes intensity. Solar cells can even be placed beneath the skin if greater light intensities are available (2).
BIOTELEMETRY APPLICATIONS
As already pointed out, the transmitter is the most important part of a biotelemetry system and usually requires
the most sophisticated design. A transmitter including its
When the antenna is a dipole with l = λ/4, the effective antenna length hef is λ/π, where λ is the wavelength of the transmitted electromagnetic wave, as shown in Fig. 6(a,b). This situation occurs when the transmitting antenna is either a dipole or a whip antenna (l = λ/4). The transmitting whip antenna is usually made shorter than l = λ/4 so as not to be clumsy, which greatly diminishes the effective antenna length hef, which is proportional to l/λ; see Fig. 6(a,b).
Figure 6. (a) Whip antenna, (b) dipole antenna, (c) antenna gain as a function of l/λ, and (d) Yagi antenna.
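As a numerical illustration of these relations, using the 150 MHz carrier mentioned later for Yagi tracking antennas, the wavelength, the effective length λ/π of a λ/4 dipole, and the penalty for shortening a whip below λ/4 (hef roughly proportional to l/λ) can be computed:

```python
import math

c = 3.0e8                 # speed of light (m/s)
f = 150e6                 # carrier frequency (Hz), as quoted for Yagi tracking
lam = c / f               # wavelength: 2 m at 150 MHz

h_ef_dipole = lam / math.pi          # effective length of a l = lambda/4 dipole
l_whip = 0.25 * lam                  # full-size quarter-wave whip
l_short = 0.10 * lam                 # a shortened, less clumsy whip (assumed)

# h_ef scales roughly with l/lambda, so shortening the whip from
# lambda/4 to lambda/10 costs a factor of 2.5 in effective length.
print(f"wavelength: {lam} m, h_ef (dipole): {h_ef_dipole:.3f} m")
print(f"shortening penalty: {l_whip / l_short:.1f}x")
```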
crutches (6, 8). IR telemetry is employed in the measurement of myographic potentials, temperature, and heart rate for sportsmen and athletes, but only inside rooms, where the IR rays can be reflected.
Ultrasonic Transmission
We have already shown in Eq. (6) that RF transmission is strongly attenuated in water, so that measuring aquatic animals in their normal habitat, where they enjoy free movement, is not possible with RF. In this case, only sound signals at frequencies above the audible threshold can be used. Sound signals propagate very well in water. The transmitting frequencies are usually low, between 50 kHz and 200 kHz, and the distances spanned by them do not exceed 200 m. The emitted power is usually not higher than 1 W/cm2 (2, 3).
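For orientation, the acoustic wavelengths in this band follow from the speed of sound in water (roughly 1500 m/s, a standard handbook value; the frequencies are those quoted above):

```python
SPEED_OF_SOUND_WATER = 1500.0   # m/s, typical handbook value for water

def wavelength(freq_hz):
    """Acoustic wavelength in water at the given frequency."""
    return SPEED_OF_SOUND_WATER / freq_hz

# The 50 kHz to 200 kHz band quoted in the text gives centimeter-scale
# wavelengths, which is why small piezoceramic discs radiate efficiently.
for f in (50e3, 200e3):
    print(f"{f/1e3:.0f} kHz -> {wavelength(f)*100:.2f} cm")
```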
The conversion of electric energy into mechanical energy is done by piezoelectric ceramics such as lead zirconate titanate. This material has a dual behavior. When a force F is applied to a disc of piezoelectric material, a charge q is induced on its electrodes according to q = kF (where k is a piezoelectric constant), as shown in Fig. 8(a). Because there is a capacitance C = εε0(S/d) between the electrodes, where S is the area of the electrodes, d the thickness of the ceramic, and ε the relative permittivity, a voltage u = q/C appears across them.
Figure 8. (a) Piezoelectric crystal, (b) ultrasonic transmitter, and (c) ultrasonic receiver.
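The open-circuit voltage of such a disc is u = q/C. Writing the piezoelectric proportionality constant as k (a symbol chosen here for illustration), a back-of-the-envelope sketch with hypothetical round material values is:

```python
# Open-circuit voltage of a piezoelectric disc: u = q/C with q = k*F
# and C = eps_r * eps0 * S / d.  All numeric values are hypothetical.
EPS0 = 8.854e-12            # vacuum permittivity (F/m)

def piezo_voltage(force_n, k, eps_r, area_m2, thickness_m):
    q = k * force_n                              # induced charge (C)
    c = eps_r * EPS0 * area_m2 / thickness_m     # electrode capacitance (F)
    return q / c                                 # voltage across electrodes (V)

u = piezo_voltage(force_n=10.0,      # applied force (N), assumed
                  k=300e-12,         # piezoelectric constant (C/N), assumed
                  eps_r=1500.0,      # relative permittivity, assumed
                  area_m2=1e-4,      # 1 cm^2 electrodes
                  thickness_m=1e-3)  # 1 mm thick disc
print(f"open-circuit voltage: {u:.2f} V")
```

A newton-scale force thus produces a voltage of a few volts, which is why such discs work both as transmitters and as receivers.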
oxygen O2 and, rarely, chlorine Cl. Endosondes can be active or passive. Active endosondes have their own power source; today, lithium batteries are mostly used. Passive endoradiosondes or implants do not have their own power source and are powered from an outside source through coils. Endoradiosondes can also be implanted after a surgical procedure inside the human body to follow up some healing
process. This type of endoradiosonde is passive, because it
must dwell inside the body for a long time, and small batteries do not store sufficient energy for such long-term operation.
Endoradiosondes that are swallowed should be as small as possible; their largest parts are the batteries and coils. They are usually very simple oscillator circuits whose frequency f0 is altered by the measured signal. This frequency modulation has a low deviation Δf/f0, not higher than 10% and sometimes lower than 1%. In Fig. 9, transformer-coupled, Colpitts, and Hartley oscillators are shown; these oscillators are usually applied in endoradiosondes. FM is the simplest modulation mode in one-channel biotelemetry. The transmitted signals are received by a loop antenna (up to 35 cm in diameter) located over the patient, who lies in a supine position. With smaller loop antennas only a few centimeters in diameter, endosonde tracking is possible because of their better directivity. Endosondes are primarily used for pressure measurement in the small intestine or bladder with inductive transducers. The inductance L is at the same time part of the oscillator's resonant circuit and determines the oscillation frequency f0 according to Eq. (5).
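Eq. (5), not reproduced in this excerpt, is presumably the usual LC resonance formula f0 = 1/(2π√(LC)), so a pressure-induced change in L shifts f0. A sketch with hypothetical component values, producing a deviation of about 1%, consistent with the figures quoted above:

```python
import math

def resonant_frequency(l_henry, c_farad):
    """f0 = 1 / (2*pi*sqrt(L*C)) for the oscillator's tank circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

C = 100e-12                     # 100 pF tank capacitor (hypothetical)
L0 = 10e-6                      # coil inductance at rest (hypothetical)

f0 = resonant_frequency(L0, C)
# Pressure moves a core in the coil and raises L by, say, 2%:
f_pressed = resonant_frequency(L0 * 1.02, C)

print(f"f0 = {f0/1e6:.3f} MHz, shifted = {f_pressed/1e6:.3f} MHz")
print(f"relative deviation: {abs(f_pressed - f0)/f0*100:.2f} %")
```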
Here the resistances R1 and R2 include the losses in the transmitting and receiving coils as well as in the capacitors C1 and C2. The mutual inductance M depends on the distance between coils L1 and L2. Both coils, L1 and L2, with capacitors C1 and C2, form resonant circuits tuned to the same frequency fp (5). The implanted circuit can last for more than one year. For example, after a hip operation, tissue transplant rejection or tumor development can be monitored by measuring its temperature. Also, after brain surgery for hydrocephalus, measurement of intracranial pressure might warn of impending danger. Besides taking measurements, electromyographic potentials could be transmitted to control an artificial limb, or strain gages could be used to monitor the forces in massive orthopedic implants (16, 17). Passive implanted devices are applied for relative strain measurements, with strain gages hermetically sealed in the bone cavity for monitoring fracture healing. Also, the temperature increase in an artificial joint at the hip or knee can be monitored (17).
Some stimulators, like the cochlear stimulator (an implant for the deaf), use telemetry for measurement of the stimulus amplitude and pulse width on any of its 16 control leads. This implant receives energy via an external coil (18). Figure 15 shows a demodulated signal taken by an endoradiosonde for acidity measurement: the pH value in a healthy man before and after breakfast, and its change after taking 0.5 g of NaHCO3.
Note that some implanted devices (even nonpassive)
also have small rechargeable cells (mostly NiCd), which
can be recharged with magnetic coils when necessary. In
this way, the transmitter can work continuously.
Figure 12. Endoradiosonde for pH-value measurement with (a) varactor and (b) astable multivibrator as voltage-to-frequency converter.
Figure 14. (a) Rectifier with voltage doubling and (b) full-wave rectifier.
The radio transmitter carried by the animal usually has a vertical whip antenna that radiates electromagnetic waves in all directions. The receiving antenna must be a directional antenna with maximal gain in the direction of the transmitter, determined by the azimuth angle. This can be a multielement Yagi antenna (three to five directors and a reflector) or a loop antenna (see the section entitled Electromagnetic Transmission). The transmitter frequency for a Yagi antenna with reasonably long elements must be about 150 MHz. By rotating a directional antenna at the researcher's location A, the maximum received signal is found, and with it the azimuth angle in whose direction the animal is located. The maximum received signal can be found with earphones or a pointer-scale indicator. To find the accurate position, a second receiver with a directional antenna must be located at some other place B. When the maximum is found there and the azimuth angle is determined, two straight lines from A and B are plotted in the directions defined by the azimuths. Their intersection is the location of the animal, as shown in Fig. 17.
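The intersection construction can be written out directly: each station defines a ray from its position along the measured azimuth, and solving the two ray equations gives the estimated location. A sketch in local x-y coordinates (the station positions and azimuths are made-up values):

```python
import math

def triangulate(ax, ay, az_a_deg, bx, by, az_b_deg):
    """Intersect two bearing rays; azimuths measured clockwise from north."""
    # Direction vectors: azimuth 0 deg = +y (north), 90 deg = +x (east).
    dax, day = math.sin(math.radians(az_a_deg)), math.cos(math.radians(az_a_deg))
    dbx, dby = math.sin(math.radians(az_b_deg)), math.cos(math.radians(az_b_deg))
    # Solve A + t*da = B + s*db for t by Cramer's rule.
    det = dax * (-dby) - (-dbx) * day
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    t = ((bx - ax) * (-dby) - (-dbx) * (by - ay)) / det
    return ax + t * dax, ay + t * day

# Hypothetical example: stations 1 km apart, both sighting the same animal.
x, y = triangulate(0.0, 0.0, 45.0, 1000.0, 0.0, 315.0)
print(f"estimated position: ({x:.0f} m, {y:.0f} m)")
```

With a 45 degree bearing from A and 315 degrees from B, symmetry puts the animal midway between the stations and 500 m north of the baseline.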
However, this method of determining the location is not very accurate, because the error in the azimuth angle may be as large as 5°. It is also possible to receive reflected signals over hilly terrain. In such cases, another location C must be used, and the systematic error needs to be analyzed by testing the triangulation system (21); see Fig. 17. It is easier, but less accurate, to find the location of the animal with only one direction-finding receiver and a transponder as the range-measuring instrument; refer to Fig. 16 for details. The distance that can be spanned depends on the output power of the transmitter carried by the animal. The spanned distance also depends on the altitude at which the animal is located: tracking birds that are flying high or sitting in a tree requires that much longer distances be spanned.
Magnetometers mostly employ Hall-effect sensors or inductance detectors. Radioactive iodine can be used as the radioactive marker. These tags are mostly intended for short distances and small animals.
Today the Global Positioning System (GPS) is employed more and more for animal tracking. It allows scientists to obtain the location of an animal with high accuracy (within 25 m). GPS uses 24 satellites 20,183 km above the earth's surface and provides 24 h coverage of the entire planet. The satellites work on two L-band frequencies (1575.42 MHz and 1227.6 MHz), and GPS users on earth have access to at least four of the satellites. The GPS tracking system consists of an animal-carried GPS receiver and transmitter, which is produced by many manufacturers (22, 23). A built-in microcontroller periodically (usually every 3 s) turns on the GPS receiver to get a position fix from the satellites. The GPS receiver computes its position and puts this information into RAM in the control unit. Then the GPS receiver is switched off and the RF transmitter is turned on in order to transmit these data via a digital telemetry link to the data-logging computer system at the remote researcher's site, as shown in Fig. 18. Automatic data recording by computer is possible, and a mapping program displays the information on the monitor. GPS gives the latitude, longitude, and elevation of the animal's location. The accuracy of within 25 to 40 m can be significantly improved using a differential technique, so-called differential GPS.
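The fix-store-transmit cycle described above can be outlined as a control loop. The function names and the returned fix below are hypothetical stand-ins, not a real receiver API:

```python
from collections import deque

# Illustrative sketch of the duty-cycled GPS tracking loop described in
# the text: wake the receiver, store the fix in RAM, then transmit.
# get_gps_fix() and rf_transmit() are hypothetical stand-ins for hardware.

def get_gps_fix():
    """Stand-in for the GPS receiver: returns (lat, lon, elevation_m)."""
    return (45.815, 15.982, 130.0)   # fabricated example fix

def rf_transmit(fix):
    """Stand-in for the RF telemetry link to the researcher's site."""
    return True

def tracking_cycle(ram_log):
    fix = get_gps_fix()              # 1. GPS receiver on: compute position
    ram_log.append(fix)              # 2. store the fix in RAM
    return rf_transmit(fix)          # 3. GPS off, RF on: send the data

ram_log = deque(maxlen=1000)         # bounded RAM buffer in the control unit
for _ in range(3):                   # three cycles (normally one every 3 s)
    tracking_cycle(ram_log)
print(f"fixes logged: {len(ram_log)}")
```

Keeping the receiver and transmitter on alternately, never simultaneously, is the same duty-cycling idea used earlier to stretch battery life.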
BIBLIOGRAPHY
1. Y. Winter, Techniques for the laboratory construction of miniature hybrid-circuits: A 350 mg ECG-transmitter, Proc. 14th ISOB, Biotelemetry XIV, Marburg: 1998, pp. 65-70.
2. R. S. Mackay, Biomedical Telemetry, Sensing and Transmitting Biological Information from Animals and Man, 2nd ed.,
Piscataway, NJ: IEEE Press, 1993.
3. C. A. Caceres (ed.), Biomedical Telemetry, New York: Academic
Press, 1965.
4. A. Šantić, Theory and application of diffuse infrared biotelemetry, Crit. Rev. Biomed. Eng., 18 (4): 289-309, 1991.
5. V. V. Parin (ed.), Biomedical Telemetry, Moscow: Medicina, 1971.
6. A. Šantić, M. Šaban, and V. Bilas, A multichannel telemetry system for force measurement in the legs and crutches, Proc. 10th Int. Symp. Biomed. Eng., Zagreb: 1994, pp. 28-33.
7. T. R. Crompton, Battery Reference Book, London: Butterworth,
1990.
8. A. Šantić, M. R. Neuman, and V. Bilas, Enhancement of infrared biotelemetry for monitoring ambulatory patients over wide areas, Proc. 14th ISOB, Biotelemetry XIV, Marburg: 1998, pp. 91-96.
9. J. W. Hines et al., Advanced biotelemetry systems for space life sciences: pH telemetry, Proc. 13th ISOB, Biotelemetry XIII, Williamsburg, VA: 1995.
10. T. Feuerabendt et al., Radiotelemetry on song-birds, Proc. 14th
ISOB, Biotelemetry XIV, Marburg: 1998.
11. B. Woodward and R. S. H. Istepanian, Acoustic biotelemetry of data from divers, Proc. 15th Annu. Int. IEEE Eng. Med. Biol. Soc. Conf., Paris: 1992, pp. 1000-1001.
12. M. R. Neuman, P. T. Schnatz, and R. J. Martin, Telemetry of basal temperature in women and respiration in neonates, Proc. 8th ISOB, Biotelemetry VIII, Dubrovnik: 1984, pp. 137-140.
13. J. G. Webster (ed.), Medical Instrumentation, Application and
Design, 3rd ed., New York: Wiley, 1998.
14. K. Saito et al., Telemetry capsule for measuring pH of gastric juice, Proc. 12th ISOB, Biotelemetry XII, Ancona: 1992, pp. 415-419.
15. K. Van Schuylenbergh and R. Puers, A computer assisted methodology for inductive link design for implant applications, Proc. 12th ISOB, Biotelemetry XII, Ancona: 1992, pp. 394-400.
16. G. Bergmann and F. Graichen, Telemetry for long term orthopaedic implants, Proc. 9th ISOB, Biotelemetry IX, Dubrovnik: 1987, pp. 295-302.
17. S. Taylor, A telemetry system for measurement of forces in
massive orthopaedic implants in vivo, CD-ROM Proc. 18th
Annu. Int. IEEE Eng. Med. Biol. Soc. Conf., Amsterdam: 1996.
18. P. M. Meadows and P. Strojnik, Powering Sensors with an Implanted FES System (online). Available: http://www.cdd.sc.edu/resweb/resna45.htm
19. G. Schreier et al., A non-invasive rejection monitoring system based on remote analysis of intramyocardial electrograms from heart transplants, Proc. 17th Annu. Int. IEEE Eng. Med. Biol. Soc. Conf., Montreal: 1995.
20. P. Wouters, M. De Cooman, and B. Puers, Injectable biotelemetry transponder for identification and temperature measurement, Proc. 12th ISOB, Biotelemetry XII, Ancona: 1992, pp. 383-391.
21. M. Taillade, Trends in satellite-based animal tracking, Proc. 12th ISOB, Biotelemetry XII, Ancona: 1992, pp. 291-297.
22. M. Soma, A. Nakamura, and M. Tsutsumi, The design of a satellite-linked transmitter for migratory study of dolphins, Proc. 9th ISOB, Biotelemetry IX, Dubrovnik: 1987, pp. 319-322.
23. J. J. Cupal, L. J. Lacy, and F. G. Lindzey, A GPS animal tracking system, Proc. 12th ISOB, Biotelemetry XII, Ancona: 1992, pp. 298-304.
24. Anonymous, Leaflets from Trimble Navigation Limited, Sunnyvale, CA, and Magellan Systems Corporation, San Dimas, CA.
25. H. Murakami et al., Telemedicine using mobile satellite communication, IEEE Trans. Biomed. Eng., 41: 488-496, 1994.
ANTE ŠANTIĆ
University of Zagreb, Zagreb, Croatia
[Figure 1. Block diagram of a biomedical instrumentation system: physiologic system, sensor, processor, display, observer.]
BIOMEDICAL SENSORS
A biomedical instrumentation system (Fig. 1) consists of three
main components: the sensor, the processor, and the recorder
and/or display. This article is concerned with the sensor portion of the instrumentation system. As seen in Fig. 1, the sensor is the interface between the biological system and the
electronic signal processing portion of the instrument. When
we consider a biomedical sensor, we must be concerned about
both the biological and the electronic aspects of sensor performance. Biological concerns involve the response of the biological system to the presence of the foreign sensor and the response of the sensor to the biological environment. Electronic
concerns relate to the type of signal that the sensor provides
and how this signal is interfaced to the processor portion of
the instrument. Thus, in considering biomedical sensors we
must look at both the biological and electronic performance of
this component.
Biomedical sensors are classified in many different ways,
as summarized in Table 1. Classifications are determined by
the type of biological variable measured by the sensor, the
technology used for sensing, the approach to obtaining the
output signal from the sensor, and the interface that the sensor establishes with the physiological system. All of these concerns are important in classifying sensors, but depending on
the sensor and the application, it may not be necessary to
use all of the descriptors in the columns of Table 1 for sensor
characterization. It is important to note, however, that the
ways that biomedical sensors differ from sensors used in nonbiomedical instrumentation systems are found in these classifications. Although any sensor can be described by the variable measured and the sensing technology used, its interaction with a physiological system represents a special characteristic of biomedical sensors that is not generally of concern with similar conventional sensors. There are some
Table 1. Classification of Biomedical Sensors
Variable sensed: physical, chemical, bioanalytical
Technology used for sensing: electronic, optical, electrochemical, mechanical, biologic
Sensing mechanism: single step, multistep
Application to biologic system: noncontacting, noninvasive, minimally invasive, invasive
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright # 1999 John Wiley & Sons, Inc.
BIOMEDICAL SENSORS
427
Table 2. Examples of Physical Sensors for Biomedical Measurements

Displacement: variable resistor, strain gage, linear variable differential transformer
Velocity: velocimeter (laser or ultrasound)
Acceleration: accelerometer
Temperature: thermistor, thermocouple
Pressure: strain gage pressure transducer
Force: load cell
Flow of electrolytic liquids: electromagnetic flow meter
Liquid Metal Strain Gages. The liquid metal strain gage was
described by Whitney (1) as a simple means of estimating
changes in limb volume by measuring changes in the limb's circumference. The sensor shown in Fig. 2 consists of a thin,
compliant silicone elastomer tube filled with mercury. Electrical contacts seal the mercury within the tube at each end and
are connected to lead wires. If the tube is arranged in a
straight line as shown in Fig. 2, the electrical resistance of
the mercury column between the electrical contacts, R, is
given by
vasive if they touch the body but do not enter its cavities or
tissues. Sensors placed on the skin surface, such as a transcutaneous carbon dioxide sensor, are considered noninvasive.
Minimally invasive sensors enter the body but only through
normal orifices, such as the mouth or urethra. These sensors
are often called indwelling sensors. A miniature pH sensor for
measuring gastric pH might seem very invasive to the individual on whom the measurement is made, but in fact it is
only minimally invasive because it enters a natural body cavity. Invasive sensors, on the other hand, must be placed surgically. Tissue must be incised or penetrated to position such
sensors. Sensors located within the cardiovascular system,
such as miniature intraarterial pressure transducers enter
the arterial system only by a surgical cut-down or a skin-penetrating needle. The biomedical environment is extremely
hostile especially for implanted sensors. Thus, special precautions must be made in packaging the sensors to minimize
problems resulting from this environment.
In the following sections we look at examples of biomedical
sensors based upon the variable sensed. We consider the operating principle of each sensor type and look at examples of
biomedical applications.
PHYSICAL SENSORS
A physical sensor is one that measures quantities, such as
geometrical, kinematic, electrical, force, pressure, and flow in
biological systems. Table 2 lists examples of important physical sensors for biomedical measurements. Although similar
sensing devices are used in biomedical and nonbiomedical applications, the realization of these devices as practical components is quite different depending on the application.
R = ρL/A  (1)

where ρ is the resistivity of the mercury, L is the length of the column, and A is its cross-sectional area. Because the volume V = AL of the mercury is constant as the tube stretches,

R = ρL²/V  (2)

Stretching the gage from length L0 to L1 therefore changes its resistance by

ΔR = (ρ/V)(L1² − L0²)  (3)

so that the fractional change is

ΔR/R0 = (ΔL/L0)[(L1 + L0)/L0]  (4)

For small stretches, (L1 + L0)/L0 ≈ 2, giving

ΔR/R0 ≈ 2 ΔL/L0  (5)
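The small-strain approximation ΔR/R0 ≈ 2ΔL/L0 is easy to check numerically. The sketch below uses illustrative values for the mercury resistivity, column volume, and stretch; none are taken from the text.

```python
# Fractional resistance change of a mercury-in-rubber strain gage
# stretched at constant volume. RHO, V, and the 10% stretch are
# illustrative values, not from the text.
RHO = 9.58e-7   # resistivity of mercury, ohm*m
V = 1.0e-9      # mercury volume, m^3 (constant during stretch)

def resistance(length_m):
    """R = rho * L^2 / V for a constant-volume conductor."""
    return RHO * length_m ** 2 / V

L0 = 0.10                    # unstretched length, m
L1 = 1.10 * L0               # stretched by 10%
R0, R1 = resistance(L0), resistance(L1)

exact = (R1 - R0) / R0       # (L1^2 - L0^2) / L0^2
approx = 2 * (L1 - L0) / L0  # small-strain approximation
print(exact, approx)         # approximately 0.21 vs 0.20
```

A 10% stretch gives a fractional resistance change of 0.21, against 0.20 from the small-strain approximation, showing how quickly the linear approximation degrades at large strains.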
A physical measurement frequently used in biomedical instrumentation is the determination of linear or angular displacement between two points. In biomedical measurements,
such displacements are frequently determined dynamically to
determine the function of an organ or organism. There are numerous sensors for this measurement; some are applied in biomedical measurements, but others are useful only for nonbiomedical applications. In this section, we describe several of those used in biomedical work.
Figure 2. Liquid metal strain gage: a compliant elastomer tube filled with mercury, sealed at each end by a metal end-plug connected to a lead wire.
L = (r/2540)[7.353 log10 (16r/d) − 6.386]  (6)
where L is the inductance in microhenries, r is the coil's radius in millimeters, and d is the diameter of the wire in the
coil in millimeters. By placing a compliant coil around a limb
or the chest or abdomen, its inductance is proportional to the
cross-sectional area of the structure it circumscribes, and this
is used to determine volume or changes in volume. The problem with this arrangement is that a coil of wire is not as compliant as the liquid metal strain gage, and so it must be modified to become a compliant sensor. A simple way of doing this
is to form a wire in a sinusoidal pattern and attach it to an
elastic band so that the wavelength of the sinusoidal pattern can change as the band stretches.

The transit time of an ultrasonic pulse traveling between two small transducers gives the distance between them as

d = c t  (7)
where c is the velocity of ultrasound in tissue, d is the distance between the two sensors, and t is the time it takes the
pulse to propagate from one sensor to the other. Miniature
ultrasonic transducers are made from piezoelectric ceramic or
crystal materials and are as small as a 1 mm to 2 mm cube.
These sensors are used to measure myocardial segment
length by implanting them at different points in the myocardium (5), and they also have been used to dynamically monitor the dilatation of the uterine cervix during labor (6).
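The transit-time computation is a one-line conversion; the sketch below assumes a typical soft-tissue sound speed of about 1540 m/s (an assumed value, not from the text).

```python
# Transit-time distance estimate between two implanted ultrasonic
# crystals, d = c * t. The sound speed and transit time are
# illustrative assumptions.
C_TISSUE = 1540.0  # speed of sound in soft tissue, m/s

def segment_length_mm(transit_time_us):
    """Distance between crystals in mm from transit time in microseconds."""
    return C_TISSUE * transit_time_us * 1e-6 * 1e3

print(segment_length_mm(10.0))  # 10 us of travel -> about 15.4 mm
```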
Measurement of Force
The measurement of forces in biology and medicine is important in understanding the biomechanics of organisms. As
with displacement sensors, many of the force sensors used for
this are the same as force sensors used in other applications.
These force measurements are based on a load cell structure.
Two variations of this fundamental device, however, are
found frequently applied in biomechanical measurements.
These are the force-sensitive resistor and the compliant dielectric-capacitance sensor.
Force-Sensitive Resistor. One of the simplest and, therefore,
least expensive force sensors consists of a carbon-loaded elastomer and metallic contact structure as shown in Fig. 3. The
carbon-filled elastomer is electrically conductive and has a
textured surface that contacts the metallic conductor. This
has been exaggerated in Fig. 3 to illustrate the operating
principle. When small normal forces are applied, the metallic
conductor contacts only the tips of the loaded elastomer layer,
but, as the force increases, the elastomer is compressed and
more of the textured surface makes contact with the metallic
electrode. This causes the electrical resistance between the
electrode and the metallic contact at the base of the conductive elastomer to decrease.
Sensors of this type are frequently used for measuring
forces between body surfaces and the external world. For example, with this type of device, it is possible to measure
grasping, sitting, and standing forces and their distributions.
By patterning the conducting contact, it is possible to have a
force sensor array to measure the distribution of forces over
an area. This is especially useful in studying seating pressures and the reduction of decubitus ulcers. The advantage
of this type of sensor is that it is very thin and relatively
Figure 3. Cross-sectional view of a force-sensitive resistor: a conductive compliant elastomer with a textured surface between upper and lower substrates.
C = εA/d  (8)

where ε is the permittivity of the dielectric elastomer, A is the plate area, and d is the thickness of the dielectric. An applied force F compresses the elastomer, which has Young's modulus E, by

Δd = F d0 /(AE)  (9)

where d0 is the unloaded thickness, so that d = d0(1 − F/AE) and

C = εA²E/[d0(AE − F)]  (10)

If a fixed charge Q is stored on the sensor, the voltage across it, V = Q/C, decreases linearly with the applied force:

V = [Q d0 /(εA²E)](AE − F)  (11)
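The constant-charge readout described above can be sketched numerically; all component values below are illustrative assumptions, not from the text.

```python
# Output voltage of a compliant-dielectric capacitive force sensor
# operated at constant charge: V = Q/C with C = eps*A/d and the
# dielectric thickness compressed by the applied force.
EPS = 8.854e-12 * 4.0   # permittivity of the elastomer (rel. permittivity ~4)
A = 1.0e-4              # plate area, m^2
D0 = 1.0e-3             # unloaded dielectric thickness, m
E_MOD = 1.0e6           # Young's modulus of the elastomer, Pa
Q = 1.0e-9              # stored charge, C

def voltage(force_n):
    d = D0 * (1.0 - force_n / (A * E_MOD))   # compressed thickness
    return Q * d / (EPS * A)                 # V = Q/C

# The voltage falls linearly as force compresses the dielectric;
# the second value printed is half the first.
print(voltage(0.0), voltage(50.0))
```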
Figure 4. Cross-sectional view of a compliant dielectric force sensor with low (left)
and high (right) applied force.
Figure 5. Fundamental pressure sensor structure, with a dome and fluid-filled chamber coupled through a diaphragm to a displacement sensor (a), and a disposable pressure sensor with a vented silicon diaphragm carrying integrated strain gauges (b).
The pressure drop along the length of a fluid flowing in a tube is proportional to the volume flow through that tube. Thus, if one
measures the pressure difference along a known resistance,
such as a rigid tube, this pressure drop is proportional to the
flow. Although it is not practical to make such a measurement
in a blood vessel whose geometry changes according to physiological and fluid dynamic conditions, this principle is used for
measuring gas flow.
The pneumotachograph is used for measuring the flow of
air into and out of the airway, and hence, the lungs. By placing a known resistance, such as a metal screen or a corrugated foil in a tube through which the breathing air flows and
measuring the differential pressure across this resistance, it
is possible to obtain a signal proportional to the flow of gas
through this system. This signal can then be electronically
integrated to determine volume.
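The electronic integration of flow to obtain volume can also be done numerically; the half-sine inspiratory flow waveform below is synthetic, used only for illustration.

```python
# Tidal volume from a pneumotachograph flow signal by trapezoidal
# integration (the flow is proportional to the pressure drop across
# a known resistance).
import math

def volume_from_flow(flow_lps, dt_s):
    """Trapezoidal integration of flow (L/s) sampled every dt_s seconds."""
    vol = 0.0
    for f0, f1 in zip(flow_lps, flow_lps[1:]):
        vol += 0.5 * (f0 + f1) * dt_s
    return vol

# One-second half-sine inspiration peaking at 0.5 L/s:
dt = 0.01
flow = [0.5 * math.sin(math.pi * i * dt) for i in range(101)]
print(volume_from_flow(flow, dt))  # close to 1/pi, about 0.318 L
```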
Pneumotachographs directly measure air flow into the respiratory tract because the actual gas entering the body must
pass through the sensing system. They, therefore, are used
only when there is a direct connection to the airway, such as
when a patient is intubated or a nasal-oral face mask is used.
Thus, this device is primarily used for diagnostic studies,
such as in a pulmonary function laboratory.
Electromagnetic Flow Meter. It is known from electromagnetic field theory that charged particles moving in a plane
transverse to a magnetic field experience a force mutually
perpendicular to the direction of their velocity and that of the
magnetic field. If blood or some other fluid containing positively and negatively charged ions flows with a velocity u in
a direction perpendicular to a magnetic field B, positive ions
are deflected transverse to the field and the direction of flow,
and negative ions are deflected in the opposite direction. This charge separation induces a voltage e between two electrodes placed on opposite sides of the vessel:

e = ∫ (u × B) · dL  (12)

For a uniform magnetic field and a uniform velocity profile, this reduces to

e = B D u  (13)

where D is the distance between the electrodes, which for a snugly fitting probe equals the vessel diameter.
Electromagnetic flow meters measure flow velocity rather than volumetric flow. It is possible, however, to obtain volume flow information from them because placing the flow probe around a blood vessel requires a snug fit so that the electrodes make good contact with the vessel, and this fixes the inner diameter of the vessel where the flow measurement is made. The inner diameter is used to determine the cross-sectional area of the blood vessel, which, multiplied by the flow velocity, gives the volumetric flow.
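A sketch of this velocity-to-volumetric-flow conversion, assuming the uniform-field relation e = BDu; the field strength, vessel diameter, and induced voltage are illustrative values, not from the text.

```python
# Volumetric flow from an electromagnetic flow probe: the snug-fitting
# probe fixes the lumen diameter D, so Q = u * pi * D^2 / 4.
import math

def volumetric_flow_ml_per_s(e_volts, b_tesla, d_m):
    u = e_volts / (b_tesla * d_m)          # flow velocity, m/s
    area = math.pi * d_m ** 2 / 4.0        # fixed lumen cross-section, m^2
    return u * area * 1e6                  # m^3/s -> mL/s

# 4 mm vessel, 0.1 T field, 80 uV induced voltage:
print(volumetric_flow_ml_per_s(80e-6, 0.1, 4e-3))  # about 2.5 mL/s
```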
Ultrasonic Flow Measurement. The Doppler effect states
that the frequency of a sound or ultrasound signal from a
moving reflector is shifted according to the velocity of the reflector and the angle between the direction of the incident and
reflected sound and that of the reflector:
Δf = (2 f0 u /c) cos θ  (14)

where f0 is the transmitted frequency, u is the velocity of the reflector, c is the velocity of sound in the medium, and θ is the angle between the sound beam and the direction of motion.
With such a system, it is possible to calculate the flow velocity regardless of the angle between the ultrasound beam and the flow direction.
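Solving the Doppler relation for velocity gives a direct estimator; the carrier frequency, insonation angle, and frequency shift below are illustrative values.

```python
# Flow velocity from the Doppler shift:
# u = delta_f * c / (2 * f0 * cos(theta)).
import math

def flow_velocity(delta_f_hz, f0_hz, theta_deg, c_mps=1540.0):
    """Reflector velocity (m/s) from the measured Doppler shift."""
    return delta_f_hz * c_mps / (2.0 * f0_hz * math.cos(math.radians(theta_deg)))

# 5 MHz carrier, 60-degree insonation angle, 1.3 kHz shift:
print(flow_velocity(1300.0, 5e6, 60.0))  # about 0.4 m/s
```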
Measurement of Temperature
Sensors for measuring biomedical temperature are the same
as those applied in other fields. Those most frequently used
include the thermistor, thermocouple, and temperature-sensitive metallic wire or film resistors. The thermistor is by far
the most common because of its relatively high sensitivity and
small size. The latter is important in many biomedical measurements so that the instrument has rapid response time.
Another area of temperature measurement becoming important for clinical and home applications is radiation measurement of temperature. Inexpensive devices that measure
the infrared radiation from the auditory canal are commercially available, and these respond almost instantaneously.
Tympanic membrane (ear drum) direct temperature measurement using a miniature thermistor or thermocouple has been
recognized as a minimally invasive method of determining
core temperature (15), and the infrared radiation devices take
advantage of this and the rapid response time of infrared detectors for making the measurement (16). Skin temperature
over a portion of the body, such as the breast or abdomen, is also measured by infrared radiation. The technique of thermography is useful in locating subcutaneous or deeper areas
of inflammation such as in the case of some tumors or localized infection.
In addition to the applications previously mentioned, the
most common medical application of temperature sensors is
determining body temperature. This, along with blood pressure is one of the fundamental vital signs used to evaluate
patients, and rapidly responding minimally or noninvasive
sensors are desirable. The most common approach to this
measurement is an electronic thermometer utilizing a lowmass thermistor placed orally. Because a nurse carries this
device on patient rounds, an important aspect of its design is
a protective disposable sheath that is placed over the temperature sensor before it is placed in a patient's mouth or elsewhere. This minimizes cross-contamination from one patient
to the next, but it also increases the response time of the sensor because of the series thermal resistance and increased
mass. Thus, an important aspect of this design is to minimize
response time so that temperature is rapidly obtained and
documented, thereby allowing the nurse more time for other
patient interactions.
BIOPOTENTIAL ELECTRODES
The body produces many electrical signals that are useful in
diagnosing and monitoring normal function and disease. The
most frequently measured of these is the electrocardiogram
(ECG) from the heart, the electroencephalogram (EEG) from
the brain, and the electromyogram (EMG) from muscle. Biopotential electrodes are sensors placed on or within the body to
pick up these signals for processing and display by an instrumentation system (17). Thus, electrodes serve as the sensor
for these instruments.
The basic operating principle of biopotential electrodes is converting an ionic current within the body to an electronic current in the electrode material and its associated electrical circuit.

Figure 7. Equivalent electrical circuit for a biopotential electrode: the half-cell potential Ehc in series with the parallel combination of Rp and Cp.
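The equivalent circuit of Fig. 7 can be explored numerically. The sketch below adds a series resistance Rs, a conventional element of this electrode model, and uses illustrative component values, not values from the text.

```python
# Impedance of a biopotential electrode equivalent circuit:
# series resistance Rs plus the parallel Rp-Cp combination.
import math

def electrode_impedance(f_hz, rs=100.0, rp=20_000.0, cp=10e-9):
    """Complex impedance of the electrode at frequency f_hz."""
    w = 2.0 * math.pi * f_hz
    zp = rp / (1.0 + 1j * w * rp * cp)   # Rp in parallel with Cp
    return rs + zp

# At low frequency the impedance approaches Rs + Rp; at high
# frequency Cp shorts out Rp and the impedance approaches Rs:
print(abs(electrode_impedance(1.0)), abs(electrode_impedance(1e6)))
```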
At the electrode-electrolyte interface, reactions of the general form

M ⇌ M^(n+) + n e−
A^(p−) ⇌ A + p e−  (15)

take place, where M is the electrode metal and A is an anion in the electrolyte.
These reactions can give rise to electrode polarization due to the current. Thus, it is important that the electrode current be as small as possible. Ideally, it should be zero.
One way to minimize electrode current is to have amplifiers
with very high input impedance and low bias current connected to the electrodes.
Silver/Silver Chloride Electrode
Although electrochemists know of several different electrode
systems that approach the behavior of a nonpolarizable electrode, only the Ag/AgCl electrode is used as a biomedical sensor. This use is generally limited to applications on the skin
surface because the silver ion is toxic in the body. The Ag/
AgCl electrode minimizes polarization because of the low solubility of silver chloride, resulting from oxidation of silver
atoms on the electrode surface in the presence of chloride, the
principal anion of the body (17). There are many ways to realize Ag/AgCl electrodes in practice (19). One of the most robust
forms is a sintered electrode with a silver wire placed along
the axis of a cylindrical mixture of finely powdered silver and
silver chloride compressed to form a pellet. A layer of silver
chloride is formed on a silver electrode surface by electrochemical oxidation in a chloride-containing solution. Exposing
the silver metal surface to an oxidizing chlorine source, such as sodium hypochlorite (ordinary household bleach), also produces a thin
layer of silver chloride. With the silver chloride surface on the
electrode, electrical motion artifact and noise are of much
lower amplitude than with unchloridized electrodes (20).
Examples of Electrodes and Applications
Figure 8 shows some of the common forms of biopotential
electrodes. Skin electrodes are made from Ag/AgCl disks
formed by any of the methods described in the previous section [Fig. 8(a)]. Often a silver foil or a silver-plated surface is
used as the basis of these electrodes. It is possible to make
electrodes in the form of a needle, as shown in Fig. 8(b), that is inserted into a muscle to pick up EMG signals. Single or multipolar coaxial electrodes are formed by running one or more insulated wires within the shank of the needle, with the wire exposed only at the tip.
Figure 8. Common forms of biopotential electrodes: (a) Ag/AgCl electrode; (b) coaxial needle
electrode; (c) microelectrode for intracellular measurement; and (d) intracardiac electrode for
sensing and pacing.
In many biomedical measurements it is necessary to determine the concentrations or activities of various substances to understand biological function. Chemical sensors are devices that convert concentrations or activities of chemicals into electrical or optical signals
related to these quantities. The major classes of chemical sensors are listed in Table 3. Electrochemical sensors convert the
chemical substance being measured into an electrical quantity, such as voltage, current, or charge. Optical sensors have
their optical properties changed by the chemical being measured or by light of a specific wavelength produced by the
chemical. There are also thermal methods for detecting concentrations of substances and major analytical techniques,
such as spectroscopy and nuclear magnetic resonance, that
involve complete instrumentation systems and are beyond the
scope of this article.
Electrochemical Sensors
An ion-selective electrode develops a potential that depends on the activity of the measured ion on either side of its membrane according to the Nernst relation

E = E0 + (RT/nF) ln(a1/a2)  (16)

where E0 is a constant potential, R is the gas constant, T is the absolute temperature, n is the valence of the ion, F is the Faraday constant, and a1 and a2 are the ion activities on the two sides of the membrane.
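A numerical sketch of the Nernst relation for a monovalent ion at body temperature; E0 and the activities are illustrative values, not from the text.

```python
# Ion-selective electrode potential from the Nernst relation:
# E = E0 + (R*T / (n*F)) * ln(a1/a2).
import math

R_GAS = 8.314      # gas constant, J/(mol*K)
FARADAY = 96485.0  # Faraday constant, C/mol

def electrode_potential(a1, a2, n=1, e0=0.0, temp_k=310.0):
    """Electrode potential in volts for ion activities a1 and a2."""
    return e0 + (R_GAS * temp_k / (n * FARADAY)) * math.log(a1 / a2)

# A tenfold activity ratio for a monovalent ion at body temperature:
print(electrode_potential(10.0, 1.0))  # about 0.0615 V (61.5 mV per decade)
```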
Figure 9. (a) Ion-selective electrode, with lead wires, an internal Ag/AgCl electrode, an internal electrolytic solution, and an ion-selective membrane; (b) amperometric oxygen sensor with a platinum cathode behind an oxygen-permeable membrane.
At the platinum cathode of the amperometric oxygen sensor, oxygen is reduced in the reaction

O2 + 2H2O + 4e− → 4OH−  (17)

and the resulting cathode current is proportional to the oxygen tension at the membrane. In optical oximetry, the oxygen saturation of hemoglobin is computed from the absorbances A1 and A2 of the blood at two wavelengths:

SO2 = a + b(A1/A2)  (18)
where a and b are constants based on the measurement conditions. It is important to note that oximetry gives the hemoglobin oxygen saturation, but it does not give the total oxygen
concentration in the blood, because the hemoglobin content is
unknown. If the hemoglobin is independently measured, however, it is possible to determine the total oxygen concentration. This is different from the oxygen tension (partial pressure of oxygen) in the blood. The partial pressure of oxygen
in well-saturated hemoglobin varies over a wide range of values even though the saturation is close to 100%. Oxygen tension is determined only by an electrochemical sensor, such as
the amperometric oxygen sensor, or by analytical laboratory
methods, such as Van Slyke analysis (30).
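The empirical calibration that maps a two-wavelength absorbance ratio to saturation is typically linear; the constants below are representative pulse-oximetry values quoted in the literature, not taken from this text.

```python
# Empirical two-wavelength oximetry calibration: saturation as a
# linear function of the red/infrared absorbance ratio R, often
# written SpO2 ~ a - b*R. The constants a and b are assumptions.
def spo2_percent(ratio, a=110.0, b=25.0):
    """Saturation estimate (%) from the ratio-of-ratios R."""
    return a - b * ratio

print(spo2_percent(0.5))  # 97.5 (% saturation)
```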
Although optical oximetry has been a technique for blood
analysis for over 60 years, only in recent years has it become
a major measurement for critical care medicine because of the
development of the noninvasive pulse oximeter (31,32). This
optical method is based on the transillumination of tissue at
the two wavelengths previously described. This is done by
placing light emitting diodes (LED) of the desired wavelengths on one side of a finger, toe, or earlobe and using a
light detector, such as a photodiode or phototransistor, on the
other side opposite the emitters. Now the tissue between the
light sources and detector is the cuvette that holds the blood,
but it differs from the laboratory instrument or invasive oximeter case in that the blood volume being measured is variable because the tissue is not made up entirely of blood. As a
matter of fact, the blood volume varies with time over the
cardiac cycle because of the compliance of the vasculature. At
systole a fresh bolus of arterial blood enters and distends the
vasculature of the tissue, thereby increasing the fraction of arterial blood in that tissue. At diastole the pressure is lower, and so new blood does not enter the tissue.
Blood continues to exit the tissue through the venules and
veins, so that the total blood volume in the tissue decreases
during diastole. These changes in blood volume result in similar changes in the optical absorption of the tissue.
Figure 10. Fiber optic chemical sensor
probe with transmission or reflectance
sample cuvette that contains the sample
itself or a dye in contact with the sample.
Some chemical sensors achieve specificity for a biologically important analyte by incorporating a biological recognition mechanism. These sensors, known as bioanalytical sensors, take advantage of one of the following general types of biochemical
reactions: (1) enzyme-substrate, (2) antigen-antibody, or (3) ligand-receptor. When these reactions are used, a sensor highly specific for a particular biological molecule can be developed. Such a sensor usually has two or more stages. The
first stage involves the biological sensing reaction, and this
part of the sensor contains one of the components of the reaction, such as an enzyme or an antibody. The second stage of
the sensor determines if, and to what extent, the biological
reaction has taken place. This portion of the sensor consists
of a physical or chemical sensor that senses the biological reaction based on changes in mass, electrical capacitance, electrical charge transfer, temperature, or optical properties. This
section of the sensor may also consist of a chemical sensor
that detects the product of a reaction or the depletion of one
of the reactants. Bioanalytical sensors have been described for many biological analytes. These sensors are often specific for a particular application (33–36). The most common example of a
bioanalytical sensor senses glucose by using the enzyme glucose oxidase. The fundamental reaction involved is
Glucose + O2 →(glucose oxidase) Gluconic acid + H2O2  (19)
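A second-stage amperometric sensor detecting the H2O2 product yields a current that is roughly linear in glucose concentration over the working range; the sensitivity and baseline values below are illustrative assumptions, not from the text.

```python
# Glucose estimate from the H2O2 oxidation current at the
# second-stage electrochemical sensor of a glucose-oxidase sensor.
SENSITIVITY_NA_PER_MM = 12.0   # assumed sensitivity, nA per (mmol/L)
BASELINE_NA = 3.0              # assumed background current, nA

def glucose_mmol_per_l(current_na):
    """Glucose concentration from the measured sensor current (nA)."""
    return (current_na - BASELINE_NA) / SENSITIVITY_NA_PER_MM

print(glucose_mmol_per_l(63.0))  # 5.0 mmol/L
```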
SUMMARY

Sensors serve an important function in biomedical instrumentation in that they are the interface between the electronic (or optical) instrument and the biological system being measured. The validity of the data provided by an instrumentation system is often linked to processes at this interface and
to the functionality of the sensor itself. Although electronic
signal processing compensates for some problems, in general
the quality of a measurement is determined by the quality
of the sensor making that measurement. Understanding the
physics, chemistry, engineering, biology, and applications of
sensors will lead to the development of better devices and
their meaningful application to biomedical problems.
BIBLIOGRAPHY
1. R. J. Whitney, The measurement of changes in human limb volume by means of a mercury-in-rubber strain gage, J. Physiol., 109: 5, 1949.
2. R. S. Mendenhall and M. R. Neuman, Efficacy of five noninvasive
infant respiration sensors. Proc. IEEE Frontiers Eng. Med. Biol.,
New York: IEEE, 1983.
3. J. A. Adams et al., Measurement of breath amplitudes: comparison of three noninvasive respiratory monitors to integrated pneumotachograph, Pediatr. Pulmonol., 16 (4): 254–258, 1993.
4. L. J. Brooks, J. M. DiFiore, R. J. Martin, and the CHIME Study Group, Assessment of tidal volume in preterm infants using respiratory inductance plethysmography, Pediatr. Pulmonol., 23: 429–433, 1997.
5. H. F. Stegall et al., A portable, simple sonomicrometer, J. Appl. Physiol., 23 (2): 289–293, 1967.
6. I. Zador, M. R. Neuman, and R. N. Wolfson, Continuous monitoring of cervical dilatation during labor by ultrasound transit time measurement, Med. Biol. Eng., 14: 299–305, 1976.
7. S. Miyazaki and A. Ishida, Capacitive transducer for continuous measurement of vertical foot force, Med. Biol. Eng. Comput., 22 (4): 309–316, 1984.
8. E. H. Lambert and E. H. Wood, The use of a resistance wire strain gage manometer to measure intraarterial pressure, Proc. Soc. Exper. Biol. Med., 64: 186–190, 1947.
9. M. Habibi et al., A surface micromachined capacitive absolute pressure sensor array on a glass substrate, Sens. Actuators A: Physical, 46: 125–128, 1995.
10. J. Mandle, O. Lefort, and A. Migeon, A new micromachined silicon high-accuracy pressure sensor, Sens. Actuators A: Physical, 46: 129–132, 1995.
11. J. A. Shercliff, The Theory of Electromagnetic Flow Measurement,
Cambridge: Cambridge University Press, 1962.
12. D. H. Bergel and U. Gessner, The electromagnetic flow meter, in
R. F. Rushmer (ed.), Methods in Medical Research, Chicago: Year
Book, 1966, Vol. XI.
13. C. J. Mills and J. P. Shillingford, A catheter tip electromagnetic velocity probe and its evaluation, Cardiovasc. Res., 1: 263–273, 1967.
14. L. N. Bohs, B. H. Friemel, and G. E. Trahey, Experimental velocity profiles and volumetric flow via two-dimensional speckle tracking, Ultrasound Med. Biol., 21 (7): 885–898, 1995.
15. J. M. Dabbs, Jr. and M. R. Neuman, Telemetry of human cerebral temperature, Psychobiology, 15 (6): 599–603, 1978.
18. J. G. Webster, Reducing motion artifacts and interference in biopotential recording, IEEE Trans. Biomed. Eng., 31: 823–826, 1984.
19. G. J. Janz and D. J. G. Ives, Silver-silver chloride electrodes, Ann. N.Y. Acad. Sci., 148: 210–221, 1968.
20. L. A. Geddes, L. E. Baker, and A. G. Moore, Optimum electrolytic chloriding of silver electrodes, Med. Biol. Eng., 7: 49–56, 1969.
21. K. D. Wise, J. B. Angell, and A. Starr, An integrated circuit approach to extracellular microelectrodes, IEEE Trans. Biomed. Eng., 17: 238–246, 1970.
22. J. J. Mastrototaro et al., Rigid and flexible thin-film microelectrode arrays for transmural cardiac recordings, IEEE Trans. Biomed. Eng., 39: 271–279, 1992.
23. O. J. Prohaska et al., Thin-film multiple electrode probes: Possibilities and limitations, IEEE Trans. Biomed. Eng., 33: 223–229, 1986.
24. G. T. A. Kovacs et al., Silicon-substrate microelectrode arrays for parallel recording of neural activity in peripheral and cranial nerves, IEEE Trans. Biomed. Eng., 41: 567–577, 1994.
25. J. Pine et al., Silicon cultured-neuron prosthetic devices for in vivo and in vitro studies, Sens. Actuators B: Chemical, 43: 105–109, 1997.
26. L. A. Geddes, Electrodes and the Measurement of Bioelectric
Events, New York: Wiley, 1972.
27. C. D. Ferris, Introduction to Bioelectrodes, New York: Plenum,
1974.
28. J. I. Peterson, S. R. Goldstein, and R. V. Fitzgerald, Fiber optic pH probe for physiological use, Anal. Chem., 52: 864–869, 1980.
29. J. I. Peterson, R. V. Fitzgerald, and D. K. Buckhold, Fiberoptic probe for in vivo measurement of oxygen partial pressure, Anal. Chem., 56 (1): 62–67, 1984.
30. J. P. Peters and D. D. van Slyke, Quantitative Clinical Chemistry, Baltimore: Williams & Wilkins, 1931–32.
31. T. Aoyagi, M. Kishi, and K. Yamaguchi, Improvement of the earpiece oximeter, Jpn. Soc. Med. Electron. Biomed. Eng., 90–91, 1974.
32. I. Yoshiya, Y. Shimada, and K. Tanaka, Spectrophotometric monitoring of arterial oxygen saturation in the fingertip, Med. Biol. Eng. Comput., 18: 27–32, 1980.
33. J. S. Schultz, Biosensors, Sci. Am., 265 (2): 64–69, 1991.
34. E. A. H. Hall, Biosensors, Englewood Cliffs, NJ: Prentice-Hall, 1991.
35. J. Janata, Principles of Chemical Sensors, New York: Plenum, 1989.
36. M. Lambrechts and W. Sansen, Biosensors: Microelectrochemical Devices, Bristol, UK: IOP Publishing, 1992.
37. Ph. Arquint et al., Integrated blood-gas sensor for pO2, pCO2 and pH, Sens. Actuators B, B13: 340–344, 1993.
38. W. Mokwa et al., Silicon thin film sensor for measurement of dissolved oxygen, Sens. Actuators B: Chemical, 43: 40–44, 1997.
MICHAEL R. NEUMAN
Case Western Reserve University
Figure 1. Diagram of the heart's electrical system. An electrical signal begins at the sino-atrial node (SA node) and travels to the left atrium (LA) and right atrium (RA). The signal also travels to the atrioventricular (AV) node. From the AV node, the electrical signal travels through the His-Purkinje system to the right ventricle (RV) and left ventricle (LV). The electrical signal causes the heart to contract in a coordinated fashion.
DEFIBRILLATORS
The function of the heart is to pump blood. The heart has two
sides, each consisting of an atrium and a ventricle. Deoxygenated blood is collected in the right atrium, passed to the
right ventricle, and pumped to the lungs where it is oxygenated. Blood is then collected in the left atrium, passed to the
left ventricle, and pumped to the rest of the body. Controlling
this pumping of blood is an electrical signal which passes
through the heart and triggers a coordinated mechanical contraction of the heart. During a normal or sinus beat, this
electrical activity starts in the sinus node, near the junction
of the superior vena cava and right atrium, and passes
through the right atrium to the atrioventricular node (Fig. 1).
From there, the electrical signal spreads down the His bundle
and throughout the left and right ventricles via a series of
specialized conducting cells called Purkinje fibers. Mechanical
action of the heart follows the same pattern, with the atria contracting first and the ventricles second.
The role of the Purkinje fibers is to rapidly spread the activation signal from the base to the apex of the heart so that
contraction can proceed from apex to base, pushing blood
out of the heart and into the aorta.
There are several disorders of this electrical system. Sometimes the heart beats too slowly, either because the sinus
node does not fire rapidly enough, or the signal is not able to
pass through the atrioventricular node to the ventricle. These
problems are best treated with an implanted pacemaker.
Figure 2. Diagram of an implantable cardioverter defibrillator. The pulse generator is implanted in the pectoral region. A transvenous catheter electrode is threaded from the subclavian vein to the superior vena cava and into the right ventricle of the heart. This catheter also contains a pace/sense electrode at the tip. Implantation of this system requires only sedation of the patient and a local anesthetic.
Figure 4. Probability of survival as a function of the collapse-to-defibrillation interval.
Figure 3. Diagram of electrode patch placement for external defibrillation. One electrode is placed over the right border of the sternum. The second electrode is placed on the left axillary line overlying
the apex of the heart.
Figure 5. Relationship between percent success of ventricular defibrillation and final current for exponential waveforms having an initial current (I0) of 50 A and time constants of decay (τ) of 10, 20, and 30 ms. Energy is shown in joules for each time constant of decay (τ). Reproduced with permission from The Institute of Electrical and Electronics Engineers (99).
Figure 6. A Bayesian approach to estimating the 95% probability of successful defibrillation. (a) The method assumes a dose-response curve (open squares) which follows the logistic equation, although any functional form could be used. Also shown is the cost function (closed circles) that was chosen to give the lowest error. Cost functions that minimize the absolute error or the patient risk are also possible. (b) Contour plot of a prior probability density function (pdf) constructed from a set of assumptions applicable to most implantable defibrillator electrode configurations. α and β are variables that describe the logistic equation at the 95% probability point. α is the subject's 95% probability point. One over β is the slope of the logistic equation at the 95% probability point. For any animal, it is assumed that the 95% probability point (α) will be between 0 V and 800 V and that one over the slope of the logistic equation (β) is between 0 V and 1700 V. (c) The simulated performance of the minimum squared error (MinSE) estimators developed from (a) and (b).
Figure 8. Parallel RC network model of the heart and its computed response (model response versus time) to defibrillation waveforms of different shapes and durations.
external defibrillators have been developed that employ truncated exponential biphasic waveforms, similar to those used in ICDs (19,26).
Many studies have shown that certain biphasic waveforms
defibrillate with a lower current and energy than a monophasic waveform. It is important to choose the relative durations of the two phases of the biphasic waveform carefully
in order to realize an improvement in efficacy over the monophasic waveform. For waveforms with long time constants,
the first phase should be longer than or equal to the second
phase (27,28). For waveforms with a short time constant, the
second phase can be slightly longer than the first phase
(2931).
Several groups have shown that for square waveforms, defibrillation efficacy follows a strength-duration relationship similar to cardiac stimulation (32,33); as the waveform gets longer, the average current at the 50% success point becomes progressively less, approaching an asymptote called the rheobase (34). Based on this observation, several groups have suggested that cardiac defibrillation can be mathematically modeled using a parallel resistor-capacitor (RC) network to represent the heart (Fig. 8) (29,35-37). Empirically, it has been determined that the time constant for the parallel RC network is in the range of 2.5 ms to 5 ms (29,31,36). In one version of the model (29), a current waveform is applied to the RC network. The voltage across the network is then calculated for each time point during the defibrillation pulse. The relative efficacy of different waveform shapes and durations can be compared by determining the current that is necessary to make the voltage across the RC network reach a particular value, called the defibrillation threshold.
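A minimal numerical sketch of this parallel RC model follows, using the parameter values from Figure 8 (a 2.8 ms network time constant driven by a truncated exponential with a 7 ms time constant and a 10 A leading edge); the unit resistance is an assumption made for illustration:

```python
import math

def rc_response(i_of_t, duration_ms, tau_m_ms=2.8, r_ohm=1.0, dt_ms=0.001):
    """Integrate C*dV/dt = I(t) - V/R for the parallel RC 'heart' model,
    where tau_m_ms = R*C is the fitted network time constant."""
    c = tau_m_ms / r_ohm
    v, t = 0.0, 0.0
    trace = []
    while t < duration_ms:
        v += dt_ms * (i_of_t(t) - v / r_ohm) / c  # forward Euler step
        t += dt_ms
        trace.append((t, v))
    return trace

# Truncated exponential input: 7 ms waveform time constant, 10 A leading edge
tau_w_ms, i0 = 7.0, 10.0
waveform = lambda t: i0 * math.exp(-t / tau_w_ms)

trace = rc_response(waveform, duration_ms=10.0)
t_peak, v_peak = max(trace, key=lambda p: p[1])
print(f"model voltage peaks at {t_peak:.1f} ms")  # approximately 4 ms
```

The peak near 4 ms reproduces the behavior described in the Figure 8 caption: truncating a monophasic exponential waveform at that peak wastes no current, consistent with the prediction discussed below.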
Several observations can be made from this model. First, for square waves, as the waveform duration gets longer, the voltage across the network gets progressively higher and approaches an asymptote or rheobase. For truncated exponential waveforms, however, the model voltage rises, reaches a peak, and then, if the waveform is long enough, begins to decrease (Fig. 8). Therefore, the model predicts that monophasic exponential waveforms should be truncated at the time when the peak voltage across the RC network is reached. Current or energy delivered after that time is wasted and may even be detrimental if the waveform gets too long (38). Supporting this prediction, strength-duration relationships measured in both animals (29) and humans (39) do not approach an asymptote but reach a minimum and remain there as the waveform gets longer.
Second, the model predicts that the heart acts as a low-pass filter (37). Therefore waveforms that rise gradually should have an improved efficacy over waveforms that turn on immediately. This prediction has been shown to hold true for external defibrillation (40), internal atrial defibrillation (41), and internal ventricular defibrillation (42). Ascending ramps defibrillate with a greater efficacy than do descending ramps (42,43). Sweeney et al. showed that a square-wave duty-cycle waveform (a waveform in which the current is rapidly turned on and off) defibrillates with the same efficacy as a square waveform delivering the same total charge as the duty-cycle waveform (37). This idea has implications for new defibrillators that would use a duty-cycle concept to shape a waveform.
Several groups have suggested that the optimal first phase of a biphasic waveform is the optimal monophasic waveform.
Figure 8. The response of a parallel resistor-capacitor network representation of the heart to a monophasic and a biphasic truncated exponential waveform with a time constant of 7 ms. The parallel resistor-capacitor network has a time constant of 2.8 ms. (a) Input monophasic waveforms. Leading edge current of the input waveform was 10 A. The waveforms were truncated at 1, 2, 3, 4, 5, 6, 8, and 10 ms. (b) Model response, V(t). Initially, as the waveform gets longer, V(t) increases until it reaches a maximum at approximately 4 ms, after which V(t) begins to decrease. (c) Input biphasic waveforms. Leading edge current was 10 A. Phase 1 was truncated at 6 ms. Phase 2 was truncated after 1, 2, 3, 4, 5, 6, 7, and 8 ms. (d) The model response does not change polarity until phase 2 duration is longer than 2 ms.
Potential Gradient
During a shock, different amounts of current flow through different parts of the heart. According to Ohm's law, the current density through each region of the heart is equal to the potential gradient in that region divided by the resistivity of that region. Although current density is difficult to measure directly, techniques to measure the potential gradient are well established. If we make the assumption that tissue resistivity is constant throughout the heart, then the potential gradient is directly related to current density. For shocks delivered from intracardiac electrodes, the distribution of potential gradients is highly uneven. High potential gradients occur near the defibrillation electrodes, and low potential gradients occur distant from them.
MECHANISMS OF DEFIBRILLATION
In the following sections, the interaction between the defibrillation shock and the fibrillating myocardium will be discussed. We start with how the distribution of the current from the shock affects defibrillation. Then we discuss how the shock interacts with the fibrillating myocardium. Finally, we discuss how the shock changes the action potential, the transmembrane potential, and the ion channels in the membrane. By looking at all of these interactions between the shock and the fibrillating heart, we will attempt to summarize what is known about how an electrical shock causes the fibrillating heart to return to sinus rhythm.
Figure 9. The potential gradient field from a 500 V, 6 ms unsuccessful defibrillation shock delivered from a catheter electrode in the right
ventricular apex as cathode and a cutaneous patch on the lower left
thorax as anode. Left-hand panels demonstrate the heart from the
left anterolateral view; the two right-hand panels represent the right
posterolateral angle. Numbers represent the potential gradient in
volts per centimeter. Isogradient lines are separated by 10 V/cm.
(Dashed line) The upper border of the right ventricular outflow tract.
(Asterisks) The top row of electrodes in the atrium and right ventricular outflow tract where potential gradients could not be calculated
because no recording sites were above them. Neighboring electrodes
are required to calculate the potential gradient, defined as the change
in potential with distance. (Solid circles) Electrodes from which good
recordings were not obtained. (EPI) epicardial, (ENDO) endocardial.
Reproduced with permission from The Institute of Electrical and
Electronics Engineers (101).
Therefore, at this level of consideration of the mechanism of defibrillation, one of the reasons that some biphasic waveforms require a smaller shock than some monophasic waveforms for defibrillation is that the minimum potential gradient they must create throughout the heart is lower.
Activation Sequence
There are two things that a shock must do in order to defibrillate the heart. First, it must stop all the fibrillation wavefronts on the heart. Second, it must not restart fibrillation. When shocks are given that are much lower than the strength needed to defibrillate, activation after the shock appears at numerous sites throughout the ventricles (52). As the shock strength is increased, the potential gradients are increased throughout the myocardium, and activation originates just in those regions in which the potential gradients are lowest. For shocks just slightly lower than the strength needed to defibrillate, postshock activation arises only in the small myocardial regions in which the potential gradients remain below the minimum needed for defibrillation (Fig. 10). Activation fronts arising from these low potential gradient regions propagate to activate the remainder of the myocardium for a few cycles following the shock. Then reentry occurs, activation becomes disorganized, and the heart begins to fibrillate again. Following shocks slightly stronger than that needed to defibrillate, postshock sites of early activation still arise in the regions of lowest potential gradient; however, successive cycles of activation originate from these regions more slowly than following unsuccessful, slightly lower strength shocks. After a few cycles, these activations terminate without reinducing fibrillation (52,53). These results suggest that a minimum potential gradient is required for defibrillation because, above this minimum, the shock does not generate activation fronts that can interact and reinduce fibrillation (53). For shocks delivered through transvenous electrodes at strengths a few hundred volts higher than needed to defibrillate, ectopic activation fronts first appear following the shock in regions exposed to the highest potential gradients generated by the shocks, adjacent to the defibrillation electrodes (54,55). Like activation fronts arising following shocks just above the defibrillation threshold, these activation fronts also terminate without reinitiating fibrillation. However, when the shock strength is increased still further, e.g., above 1000 V with transvenous electrodes, the activation fronts arising from the high potential gradient regions can reinduce ventricular fibrillation, so that the shock fails even though a lower strength shock succeeds.
Cellular Action Potential
One effect of the shock field is to initiate a new action potential [Fig. 11(a)]. If the shock strength is large enough, new
action potentials can be generated both in tissue adjacent to
the defibrillation electrodes, and in regions throughout the
myocardium distant from the defibrillation electrodes (56,57).
Under certain conditions, the shock can have a second effect
on the action potential. It can cause prolongation of the action
potential and, as a result, a prolongation of the refractory period, without giving rise to a new action potential [Fig. 11(b)].
This action potential prolongation, called a graded response
by some (58), occurs if a shock of sufficient strength is given
when the cells are in their refractory period. Action potential
prolongation occurs when the shock potential gradient deliv-
ered to the tissue is above the minimum level needed to defibrillate. At shock strengths less than the minimum needed to
defibrillate, only an all-or-none response is observed [Fig.
4(a)] (59,60). Although these effects are important for electrical induction of reentry, the fact that action potential prolongation occurs in response to shock field strengths that occur
during defibrillation suggests that action potential prolongation may also be important for defibrillation (59,61-65). Action potential prolongation and, hence, refractory period extension are hypothesized to play two different roles in the mechanism of defibrillation. First, they are thought to prevent the appearance of propagating activation fronts following the shock (52,66). Second, by causing a more uniform dispersion of refractoriness following the shock, they are thought to prevent the block and reentry that cause the activation fronts that do appear following the shock from degenerating into fibrillation (61-65).
In regions in which the shock potential gradient is high,
over 50 V/cm to 70 V/cm, the shock can have detrimental
effects, probably by causing electroporation of the cardiac cell
membranes (67,68). This can cause the transmembrane potential to temporarily "hang up" near the value of the plateau of the action potential (69). The cell is electrically paralyzed
and cannot conduct an action potential during this time. At
yet higher potential gradients, probably over 150 V/cm, the
exposed myocardium gives rise to arrhythmic beats (50). This
may be the mechanism that causes the probability of the defibrillation success curve to decrease for very large shocks
(Fig. 5).
Waveform shape alters the shock strength at which these detrimental effects on the myocardium occur. Yabe et al. showed that for a 10 ms truncated exponential monophasic waveform, conduction block occurred in dogs in regions where the potential gradient was greater than 64 ± 4 V/cm (70). Conduction block lasted longer for shocks that created even higher potential gradients in the myocardium. In contrast to monophasic shocks, conduction block occurred when the potential gradient in the myocardium reached 71 ± 6 V/cm for a 5 ms/5 ms truncated exponential biphasic shock. Jones et al. showed that adding a second phase to a monophasic waveform (i.e., making it a biphasic waveform) decreased the damage done to cultured chick myocytes by the monophasic waveform alone (71). Both results show that biphasic waveforms are less apt to cause damage or dysfunction in high-gradient regions. The therapeutic index has been defined as the range of energies over which a defibrillation waveform is both safe and effective. Since biphasic waveforms defibrillate at lower energies and cause myocardial damage only at higher energies than monophasic waveforms, they have been described as having a higher therapeutic index than monophasic waveforms.
Transmembrane Potential
For the shock to cause either a new action potential to be
triggered or to prolong an action potential, it must alter the
transmembrane potential. It has been estimated that only
about one quarter of the total current traversing the heart
crosses the membrane to enter the cells (72). Since the defibrillation electrodes are located extracellularly, current from
the shock that enters myocardial cells in some regions must
exit the cells in other regions. These currents, which flow
through the cell membrane, will introduce changes in the
Figure 10. Postshock activation sequence: the first three cycles after the unsuccessful 500 V, 6 ms defibrillation shock shown in Fig. 9. Numbers represent activation times in milliseconds. Isochronal lines are separated by 20 ms; sites of electrodes where adequate recordings were not obtained are marked, as are sites of conduction block. Dashed lines indicate a frame shift from one isochronal map to the next; such lines are necessary whenever a dynamic process such as reentrant activation is illustrated by a series of static maps. Reproduced with permission from The Institute of Electrical and Electronics Engineers (101).
transmembrane potential that include depolarization or hyperpolarization during the shock pulse. Several mathematical formulations have been proposed to describe which regions of the heart are depolarized and which are hyperpolarized during shocks from a particular defibrillation electrode configuration. These formulations include the cable equations, the sawtooth model (73,74), the bidomain model (75,76), and the secondary source model (77). In their simplest form, these formulations incorporate the extracellular and intracellular spaces as low-resistance media and the membrane as a high resistance in parallel with a capacitance. Therefore, these simple models incorporate only passive myocardial properties. Recently, the models have been made more realistic by the addition of active components to represent the ion channels.
Figure 11. (a) Recordings that illustrate the response to an S2 stimulus of 1.6 V/cm oriented along the fibers. The S1-S2 stimulus intervals for each of the responses are indicated to the right of the recordings. The responses are markedly different even though the change in S2 timing was only 3 ms. An S1-S2 interval of 222 ms caused almost no response, whereas an interval of 225 ms produced a new action potential. (b) A range of action potential extensions produced by an S2 stimulus generating a potential gradient of 8.4 V/cm oriented along the long axis of the myofibers. The recordings were obtained from the same cell as (a). The action potential recordings, obtained from one cellular impalement, are aligned with the S2 time. An S1 stimulus was applied 3 ms before phase zero of each recording. The longest and shortest S1-S2 intervals tested, 230 ms and 90 ms respectively, are indicated beneath their respective phase-zero depolarizations. The S1-S2 intervals for each response after S2 are indicated to the right. Reproduced with permission from the American Heart Association (59).
Alterations in the appearance of the shock waveform occur in the transmembrane potential. For example, a square wave shock may appear as an exponential change in the transmembrane potential that reaches an asymptote (Fig. 12). Because of the nonlinear behavior of the membrane introduced by the ion channels, reversing defibrillation shock polarity does not simply reverse the sign of the change in the transmembrane potential but also alters the magnitude and time course of the change (Fig. 12).
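For the purely passive case (ion channels omitted), the exponential approach to an asymptote mentioned above is just the step response of an RC membrane; a sketch, with the membrane time constant assumed to be 2.8 ms and a normalized asymptote:

```python
import math

def membrane_step_response(t_ms, v_inf=1.0, tau_ms=2.8):
    """Passive RC membrane: during a square shock, the transmembrane
    potential changes as an exponential toward the asymptote v_inf.
    tau_ms is an assumed membrane time constant."""
    return v_inf * (1.0 - math.exp(-t_ms / tau_ms))

# After one time constant the response reaches about 63% of the asymptote
print(round(membrane_step_response(2.8), 2))  # 0.63
```

The nonlinear, polarity-dependent behavior shown in Fig. 12 is precisely what this passive sketch cannot capture; reproducing it requires the active ion-channel components discussed above.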
[Figure 12 graphics, panels (a) and (b): the shock waveform in the extracellular space and the resulting transmembrane potential, on a 10 ms time scale.]
BIBLIOGRAPHY
1. AVID Trial Executive Committee, Are implantable cardioverter-defibrillators or drugs more effective in prolonging life?, Am. J. Cardiol., 79: 661-663, 1997.
2. A. J. Moss et al., Improved survival with an implanted defibrillator in patients with coronary disease at high risk for ventricular arrhythmia, N. Engl. J. Med., 335: 1933-1940, 1996.
3. M. L. Weisfeldt et al., American Heart Association report on the public access defibrillation conference, 1994. Automatic external defibrillation task force, Circulation, 92: 2740-2747, 1995.
4. B. E. Gliner, Y. Murakawa, and N. V. Thakor, The defibrillation success rate versus energy relationship: Part I: Curve fitting and the most efficient defibrillation energy, Pacing Clin. Electrophysiol., 13: 326-338, 1990.
5. M. H. Lehmann et al., Defibrillation threshold testing and other practices related to AICD implantation: do all roads lead to Rome?, Pacing Clin. Electrophysiol., 12: 1530-1537, 1989.
6. W. C. McDaniel and J. C. Schuder, An up-down algorithm for estimation of the cardiac ventricular defibrillation threshold, Med. Instrum., 22: 286-292, 1988.
7. L. Wang et al., Dependent success rate of repeated shocks at DFT determined by binary search, Pacing Clin. Electrophysiol., 20: 1169, 1997.
8. J. D. Bourland, W. A. Tacker, Jr., and L. A. Geddes, Strength-duration curves for trapezoidal waveforms of various tilts for transchest defibrillation in animals, Med. Instrum., 12: 38-41, 1978.
9. R. A. Malkin, T. C. Pilkington, and D. S. Burdick, Optimizing existing defibrillation thresholding techniques, presented at Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Piscataway, NJ, 1990.
10. R. A. Malkin et al., Estimating the 95% effective defibrillation dose, IEEE Trans. Biomed. Eng., 40: 256-265, 1993.
11. F. P. van Rugge, L. H. Savalle, and M. J. Schalij, Subcutaneous single-incision implantation of cardioverter-defibrillators under local anesthesia by electrophysiologists in the electrophysiology laboratory, Am. J. Cardiol., 81: 302-305, 1998.
12. R. A. S. Cooper et al., Comparison of multiple biphasic and monophasic waveforms for internal cardioversion of atrial fibrillation in humans, Circulation, 90: 113, 1994.
31. C. D. Swerdlow, W. Fan, and J. E. Brewer, Charge-burping theory correctly predicts optimal ratios of phase duration for biphasic defibrillation waveforms, Circulation, 94: 2278-2284, 1996.
48. O. Fujimura et al., Effects of time to defibrillation and subthreshold preshocks on defibrillation success in pigs, PACE, 12(2): 358-365, 1989.
54. R. G. Walker et al., Sites of earliest activation following transvenous defibrillation, Circulation, 90: 1447, 1994.
55. R. G. Walker, W. M. Smith, and R. E. Ideker, Activation patterns following defibrillation with different waveforms, Pacing Clin. Electrophysiol., 18 (part II): 835, 1995.
56. P. G. Colavita et al., Determination of effects of internal countershock by direct cardiac recordings during normal rhythm, Am. J. Physiol., 250: H736-H740, 1986.
57. K. F. Kwaku and S. M. Dillon, Shock-induced depolarization of refractory myocardium prevents wave-front propagation in defibrillation, Circ. Res., 79: 957-973, 1996.
58. C. Y. Kao and B. F. Hoffman, Graded and decremental response in heart muscle fibers, Am. J. Physiol., 194: 187-196, 1958.
59. S. B. Knisley, W. M. Smith, and R. E. Ideker, Effect of field stimulation on cellular repolarization in rabbit myocardium: implications for reentry induction, Circ. Res., 70: 707-715, 1992.
60. S. B. Knisley and B. C. Hill, Optical recordings of the effect of electrical stimulation on action potential repolarization and the induction of reentry in two-dimensional perfused rabbit epicardium, Circulation, 88 (part I): 2402-2414, 1993.
61. J. F. Swartz et al., Conditioning prepulse of biphasic defibrillator waveforms enhances refractoriness to fibrillation wavefronts, Circ. Res., 68: 438-449, 1991.
62. R. J. Sweeney et al., Ventricular refractory period extension caused by defibrillation shocks, Circulation, 82: 965-972, 1990.
63. S. M. Dillon, Optical recordings in the rabbit heart show that defibrillation strength shocks prolong the duration of depolarization and the refractory period, Circ. Res., 69: 842-856, 1991.
64. S. M. Dillon and R. Mehra, Prolongation of ventricular refractoriness by defibrillation shocks may be due to additional depolarization of the action potential, J. Cardiovasc. Electrophysiol., 3: 442-456, 1992.
65. S. M. Dillon, Synchronized repolarization after defibrillation shocks: a possible component of the defibrillation process demonstrated by optical recordings in rabbit heart, Circulation, 85: 1865-1878, 1992.
66. P.-S. Chen, P. D. Wolf, and R. E. Ideker, Mechanism of cardiac defibrillation: a different point of view, Circulation, 84: 913-919, 1991.
67. O. Tovar and L. Tung, Electroporation of cardiac cell membranes with monophasic or biphasic rectangular pulses, Pacing Clin. Electrophysiol., 14: 1887-1892, 1991.
68. S. B. Knisley and A. O. Grant, Asymmetrical electrically induced injury of rabbit ventricular myocytes, J. Mol. Cell. Cardiol., 27: 1111-1122, 1995.
71. J. L. Jones and R. E. Jones, Decreased defibrillator-induced dysfunction with biphasic rectangular waveforms, Am. J. Physiol., 247: H792-H796, 1984.
72. J. C. Eason, Membrane polarization in a bidomain model of electrical field stimulation of myocardial tissue, Duke Univ., Durham, NC, 1995.
89. S. B. Knisley, Transmembrane voltage changes during unipolar stimulation of rabbit ventricle, Circ. Res., 77: 1229-1239, 1995.
90. J. P. Wikswo, Jr., S.-F. Lin, and R. A. Abbas, Virtual electrodes in cardiac tissue: A common mechanism for anodal and cathodal stimulation, Biophys. J., 69: 2195-2210, 1995.
91. J. B. White et al., Myocardial discontinuities: a substrate for producing virtual electrodes to increase directly excited areas of the myocardium by shocks, Circulation, 97: 1998.
92. T. Akiyama, Intracellular recording of in situ ventricular cells during ventricular fibrillation, Am. J. Physiol., 240: H465-H471, 1981.
93. X. Zhou et al., Existence of both fast and slow channel activity during the early stage of ventricular fibrillation, Circ. Res., 70: 773-786, 1992.
94. J. L. Jones and R. E. Jones, Threshold reduction with biphasic defibrillator waveforms: role of excitation channel recovery in a computer model of the ventricular action potential, J. Electrocardiol., 23: 30-35, 1990.
95. L. Tung and J.-R. Borderies, Analysis of electric field stimulation of single cardiac muscle cells, J. Physiol., 63: 371-386, 1992.
96. X. Zhou et al., Transmembrane potential changes caused by shocks in guinea pig papillary muscle, Am. J. Physiol., 271: H2536-H2546, 1996.
97. J. L. Jones and R. E. Jones, Threshold reduction with biphasic defibrillator waveforms: role of excitation channel recovery in computer model of the ventricular action potential, J. Electrocardiol., 23: 30-35, 1991.
98. T. D. Valenzuela et al., Estimating effectiveness of cardiac arrest intervention: a logistic regression survival model, Circulation, 96: 3308-3313, 1997.
99. J. C. Schuder et al., Transthoracic ventricular defibrillation in the 100 kg calf with untruncated and truncated exponential stimuli, IEEE Trans. Biomed. Eng., BME-27: 37-43, 1980.
100. N. L. Gurvich and V. A. Markarychev, Defibrillation of the heart with biphasic electrical impulses, Kardiologiia, 7: 109-112, 1967.
101. A. S. L. Tang et al., Measurement of defibrillation shock potential distributions and activation sequences of the heart in three dimensions, Proc. IEEE, 76: 1176-1186, 1988.
102. G. P. Walcott et al., On the mechanism of ventricular defibrillation, Pacing Clin. Electrophysiol., 20: 422-431, 1997.
GREGORY P. WALCOTT
RAYMOND E. IDEKER
University of Alabama at Birmingham
ELECTROCARDIOGRAPHY
Physiological Basis
Electrocardiography is the study of the heart's electrical activity recorded from the surface of the body. Such recordings
represent a total or integrated view of all of the electrically
excitable cells in the heart. A sensitive medical recording device, called an electrocardiograph, is attached to the body
with special electrodes and records the voltage changes on
chart paper. This voltage versus time recording is the electrocardiogram. Both the device and its graphical output are abbreviated by the familiar acronym ECG and, depending on
its contextual use, one could be referring to either one. Because much of the original work in this field was performed
by Willem Einthoven in Holland, the abbreviation EKG, based on the Dutch root word kardio, is used interchangeably with ECG.
The ECG provides the physician with a significant series
of waves from which one can measure the rate, rhythm, and
many aspects of the health of the various cardiac muscle tissues that comprise the heart. The actual recording devices
have kept pace with advances in modern technology, so that
today's recording devices use integrated electronics and embedded microprocessors to record, analyze, and store the signals generated by the heart. In addition, a wide variety of
medical devices rely on an ECG signal, in part, to perform
their primary function. Examples of these devices are treadmill systems where the heart is monitored under exercise
workload conditions; cardiac pacemakers that monitor the
heart rhythm from internally implanted electrodes to determine if it is necessary to stimulate the heart because of loss of
function of the heart's natural pacemaker; and sophisticated
imaging systems that require synchronization with the cardiac cycle to minimize the effects of cardiac motion. Thus, the
ECG is still evolving as a tool for studying the heart even
though it is perhaps the oldest test instrument in medicine.
When two wires are placed anywhere on the body surface and
then attached to the inputs of a bioelectric amplifier, it is possible to record the voltage generated by the heart. There are
standard positions for placing the recording electrodes on the
body, but generally the potential difference measured between any two recording sites is the summation of electrical
signals generated by billions of cardiac cells. The adult heart
is a bit larger than a fist, and the sequence of its electrical
activation is directly related to the contractional sequence of
the various heart chambers. It is important to note that the
electrical signals are the triggering event for the mechanical
motion of the heart and that these electrical events precede
the heart contraction. The electromechanical coupling is a significant phenomenon for the overall function of a healthy heart, but it is possible to have an electrically normal heart while the mechanical function is significantly impaired, and vice versa. This article focuses on the electrical activity of the normal heart, for which it is important to understand some fundamental aspects of the heart's anatomy and physiology (1).
There are four chambers in the heart. The two upper
chambers are the atria, and the two lower chambers are the
ventricles. Another way to divide the heart is into the right
and left side with the result that the four chambers are the
right atrium, right ventricle, left atrium, and left ventricle.
Figure 1 is a schematic representation of the four chambered
heart with its physical connections to the veins which deliver
blood into the heart and the arteries, the vessels that carry
blood away from the heart. One could begin anywhere in the
Figure 1. A cutaway diagram of the heart showing the major chambers and vessels. The blood flow into and out of the heart is indicated
by the arrows.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
[Figure 2 graphics: Depolarization (Phase 0), Early repolarization (Phase 1), Plateau (Phase 2), Repolarization (Phase 3), Resting state (Phase 4); vertical scale from 0.0 V to −90 mV, time scale 300 ms.]
Figure 2. A cardiac action potential recorded with a microelectrode
inside of a single cardiac cell. There are five phases describing the
three electrical states of each cell.
[Figures 3 and 4 graphics: conduction system structures (SA node, right and left atria, AV node, His bundle, bundle branches, Purkinje fibers, right and left ventricles) with activation timings, on a 1.0 s scale.]
Figure 3. A schematic diagram of the cardiac conduction system.
The major components of the conduction system are shaded and
shown in their approximate anatomic position, but they are not to
scale.
This integrated view of the heart's mechanical function is called a syncytium (1).
Now let us consider the heart on a more anatomical basis
and describe the actual sequence of activation and how this
relates to generating the measured waves of the ECG. Figure
3 is a cross-sectional sketch of the heart. All four chambers
are respectively labeled. A network of structures has been added that comprises the heart's specialized conduction system. Consider Figs. 3 and 4 together to better understand the sequence of cardiac activation. Figure 4 is a timing diagram that indicates the length of time during which each particular structure has cells undergoing phase-zero depolarization. The bottom trace in Fig. 4 is a stylized ECG recording whose component waves are labeled, allowing one to compare the surface waves with the internal cardiac sources. Note that not all of the internal structures are observed on the standard ECG.
The electrical activation of the heart begins with automatic depolarization of an irregular mass of cells in the upper portion of the right atrium called the sinoatrial (SA) node. Once these cells end their repolarization, there is a gradual incline of phase 4 toward the threshold voltage. Thus these cells do not require an initial excitative current from any other cell, and they comprise the heart's natural pacemaker. The right atrial muscle cells respond to these neighboring, depolarizing SA nodal cells by initiating their own depolarizing currents. Notice that there is considerable timing overlap between the SA node and the right atrium because the SA node does not have a single point of interconnection with the right atrium. The speed at which the cardiac impulse travels is relatively slow in the SA node, on the order of 0.05 m/s.
Conduction velocity plays an important role in cardiac activation. It is easily measured by placing two electrodes with known spacing on the heart. The occurrence of depolarizing voltages is easily identified at each site, and the time between the two events is measured. Conduction velocity is the distance between the two recording sites divided by the time between the two events. Table 1 lists conduction velocities for the various cardiac tissues. Note also that physiologists refer to conduction velocity as the measure of speed of activation through electrically active tissues. In this sense conduction
Table 1. Conduction Velocities of Cardiac Tissues

SA node: 0.05 m/s
Atrial muscle: 0.5 m/s
AV node: 0.05 m/s
His bundle: 1.0 m/s
Bundle branches: 1.0 to 2.0 m/s
Purkinje fibers: 4.0 m/s
Ventricular muscle: 0.5 m/s
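The two-electrode measurement described above reduces to a single division; a minimal sketch (the function name and the sample spacing and delay values are illustrative, not from the source):

```python
def conduction_velocity(spacing_m, delay_s):
    """Conduction velocity = distance between recording sites divided by
    the time between the depolarization events detected at those sites."""
    return spacing_m / delay_s

# Electrodes 10 mm apart, depolarization arriving 2.5 ms later at the
# second site: 0.010 / 0.0025 = 4.0 m/s, the Purkinje-fiber value.
v = conduction_velocity(0.010, 0.0025)
```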
about the same time and have roughly the same duration of activation.
A sample ECG tracing is shown at the bottom of the timing diagram. The first rounded wave, called the P-wave, is the body surface's manifestation of atrial depolarization. Toward the end of the P-wave the AV node and His–Purkinje system depolarize. The standard ECG does not show any evidence that these structures depolarize because they produce very small signals compared with the large muscle masses of the atria and the ventricles. A specialized branch of electrocardiography, called high-resolution electrocardiography (2), uses computer-based enhancement techniques, described later in this article, to record these low-level signals from the body surface. Once the ventricles are depolarized, the ECG shows a rapidly changing voltage called the QRS-complex. The initial downstroke of this complex is called the Q-wave, and the initial upstroke is called the R-wave. The final downstroke is called an S-wave. The nature of the QRS-complex depends highly on the specific lead and any underlying cardiac abnormalities. Thus, there are times when the ventricular depolarization wave may have only an RS-pattern, or may be just a deep Q-wave. Generally, the complex is called a QRS-complex even if the pattern does not strictly follow the QRS-sequence. Following the QRS-complex is a smooth wave called the T-wave. This wave represents the repolarizing currents of the ventricles. Notice that the atria do not produce a similar repolarizing waveform. The atrial repolarization wave temporally overlaps with the QRS-complex and is masked by its higher voltage.
Figure 5 is a larger, stylized ECG recording showing both amplitude and timing scales. In ECG terminology an interval is the period from the beginning of one wave to the beginning of the next wave. An example is the PR-interval shown in this figure. A segment is the period between two waves, as demonstrated by the ST-segment in this figure. Finally, this tracing shows the U-wave, a small wave after the T-wave. The U-wave is associated with repolarization, but its actual origin has yet to be definitively determined.
Measurements of the amplitude and duration of each wave
and the above-mentioned intervals and segments are used by
electrocardiographers to diagnose cardiac pathologies such as
ventricular enlargement (hypertrophy), blocks of conduction
in the bundle branches, damage to the ventricles due to heart
attacks, and abnormalities of rhythm. Such diagnostic interpretation of the ECG is an important skill, and there are a
number of texts devoted to the subject (3).
Figure 5. A stylized ECG recording with timing (200 ms) and amplitude (1.0 mV) scales, labeling the P, Q, S, T, and U waves, the PR-interval, and the ST-segment.
History
The electrocardiogram was first recorded by Augustus D. Waller, an English physiologist, in 1887. His pet bulldog, Jimmie, was his first subject, using a device called the capillary electrometer. The device, crude by today's standards, used a voltage-sensitive column of mercury that reflected a beam of light from its meniscus onto a moving photographic plate. Waller is credited by many with having coined the term electrocardiogram. However, Willem Einthoven, a Dutch physiologist, is usually credited with bringing the ECG into clinical practice with a string galvanometer. This device suspended a thin wire between the poles of a magnet; the wire moved in proportion to the current flowing through it, and its motion scraped a carbon residue off a slowly rotating smoked drum. The evolution of ECG recording instrumentation closely followed the development of electronic technologies such as the vacuum tube, the transistor, the integrated circuit, and the microprocessor. The use of computers to automate the interpretation of the ECG was a very early application of computers in medicine (4). Since the 1960s the algorithms for a fully automatically interpreted ECG have been developed and optimized to the point where physician overreading, although still a technical requirement, is seldom necessary for normal ECGs. Complex arrhythmias have proven difficult for automated interpretation and usually require a highly trained physician to render an accurate reading. Systems in use today often resemble a fully functioning computer system with very specialized software; Figure 6 shows the block diagram of such a device.
Figure 6. A block diagram of a modern computer-based ECG recorder, comprising a CPU assembly with ROM and DRAM, a DSP-based patient module, disk and printer controllers, serial communications (modem or direct connect), keyboard and LCD display, expansion bus, preview display, and a battery-backed power supply. Such a device will amplify, digitize, and analyze the ECG and has all the features of a modern embedded microprocessor-based instrument (courtesy of Hewlett-Packard Company, Palo Alto, CA).
Figure 7. Each panel has an outline of a male torso with the diagram of the heart positioned approximately in its proper place within
the chest. The solid dots represent electrodes on the body surface.
The lines represent current flow between the electrodes and demonstrate the lead field concept described in the text.
limb leads: I, II, and III. These leads are defined as follows:

I = VLA − VRA
II = VLL − VRA
III = VLL − VLA

where the terms VLA, VRA, and VLL represent the voltages recorded at the left arm, right arm, and left leg, respectively. Note that since each measurement is the difference between two voltages sharing a common reference, the choice of the reference location is arbitrary, and its symbol disappears in the algebra. This is shown in the top panel of Fig. 8. The lines connecting the three limbs define a triangle known as Einthoven's triangle, and it demonstrates Einthoven's law:

III = II − I
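Because each lead is a difference of two potentials measured against a common (and therefore canceling) reference, Einthoven's law is an algebraic identity; a quick numerical check with made-up electrode potentials:

```python
# Hypothetical limb-electrode potentials in millivolts.
V_RA, V_LA, V_LL = -0.2, 0.6, 1.1

I = V_LA - V_RA
II = V_LL - V_RA
III = V_LL - V_LA

# III = II - I holds for any choice of potentials.
assert abs(III - (II - I)) < 1e-12
```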
In 1934, Frank N. Wilson (11), an American physician, introduced a concept whereby the signals from the two arms and the left leg were averaged by connecting them together with a set of equal resistors, as shown in the lower panel of Fig. 8. This common terminal was used as a reference for other electrodes attached to the body. Thus, new leads were formed using what is now commonly called the Wilson central terminal, or WCT. The voltage at the WCT is defined as

VWCT = (VLA + VRA + VLL)/3
The chest leads are measured against this terminal: Vi = vi − VWCT, i = 1 to 6.
Figure 8. The electrode positions for the 12-lead ECG are demonstrated. The top panel shows the formation of Einthoven's triangle from leads I, II, and III. The voltages on the limbs are from the right arm (VRA), the left arm (VLA), and the left leg (VLL). The middle panel shows the formation of the augmented leads, which are linear combinations of the limb leads. The bottom panel shows the Wilson central terminal (WCT) formed by averaging the limb voltages through a set of equal-value resistors. The WCT is the reference for the chest leads (V1, V2, . . ., V6).
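The WCT and chest-lead definitions reduce to a few lines of arithmetic; a sketch (the function names and potential values are mine, not from the source):

```python
def wilson_central_terminal(v_la, v_ra, v_ll):
    # The equal-resistor network averages the three limb potentials.
    return (v_la + v_ra + v_ll) / 3.0

def chest_lead(v_i, v_wct):
    # Each chest lead V1..V6 is the electrode potential minus the WCT.
    return v_i - v_wct

v_wct = wilson_central_terminal(0.6, -0.2, 1.1)  # 0.5 mV
V1 = chest_lead(0.9, v_wct)                      # 0.4 mV
```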
Figure 10. A stylized ECG showing the many amplitude, duration, and area measurements (for the P, Q, R, S, T waves and the ST segment, including the QRS area and notch, ST shape and slope, and the J point) used to develop the measurement matrix for automated ECG interpretation (courtesy of Hewlett-Packard Company, Palo Alto, CA).
electrode catheters inside the heart and in very close proximity to the respective structures. The noninvasive recording of His–Purkinje signals marked the advent of a new generation of ECG analysis. As the techniques for high-resolution ECG evolved, perhaps the most clinically significant application was in recording so-called cardiac late potentials.
Cardiac late potentials typically arise from ventricular cells which surround a dead region of the heart caused by a heart attack. These bordering regions with surviving cells appear on the outer edge of the scar tissue and also permeate into the scarred region. It is possible that complete but circuitous pathways of viable cells can actually traverse the scar tissue. Such a matrix is often the site where life-threatening arrhythmias originate and are sustained. During normal heart rates, partial activation of these arrhythmia pathways has been revealed, originally with electrodes in direct cardiac contact and eventually by using high-resolution ECG techniques similar to those used to record His–Purkinje signals noninvasively (15). The activation of border zone cells is often delayed past activation of the normal ventricular cells because they conduct poorly as a result of the heart attack. The artificially long pathways resulting from the mix of dead and surviving cells within and surrounding the infarct can also result in depolarizing signals which outlast the end of normal activation. These signals are not part of normal cardiac activation, and the use of computer-based enhancement techniques has been the only way to identify and quantify them. It has been shown in hundreds of studies that the presence of these late potentials after patients have had heart attacks indicates a very high risk of future life-threatening arrhythmias.
The primary method used to record both His–Purkinje and late potential signals is achieved by means of signal averaging.
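Signal averaging exploits the assumption that the beat-aligned cardiac signal is deterministic while the noise is random and zero-mean, so averaging N beats lowers the noise amplitude by about √N. A toy sketch with synthetic data (this is not a clinical algorithm; the alignment, filtering, and vector-magnitude steps are omitted):

```python
import random

def signal_average(beats):
    """Average equal-length, time-aligned beat recordings sample by sample."""
    n = len(beats)
    return [sum(samples) / n for samples in zip(*beats)]

# A fixed low-level "late potential" buried in much larger random noise.
template = [0.0] * 40 + [0.02, 0.04, 0.02] + [0.0] * 40
random.seed(0)
beats = [[s + random.gauss(0.0, 0.1) for s in template] for _ in range(400)]

avg = signal_average(beats)
# 400 averages cut the noise standard deviation by a factor of 20,
# leaving the small peak near sample 41 above the residual noise floor.
```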
Figure 11. A portion of a full disclosure ECG from an ambulatory recorder, each line time-stamped at 30 s intervals (03:37:00 through 03:44:00). This mode of presentation, although significantly condensed, allows the trained reader to assess rhythm alterations rapidly. This is a very abnormal recording.
Figure 12. The high-resolution ECG derived by signal averaging the XYZ leads. Panel (a) is a 3 s run of the normal-scale XYZ leads. Panel (b) is a 0.3 s window of the averaged XYZ leads with a fivefold increase in the voltage scale. Panel (c) is a 40 Hz high-pass version of panel (b). Panel (d) is the filtered vector magnitude derived from the XYZ leads with standard measurements indicated (see text). (Reprinted from the Archives of Internal Medicine, vol. 148, page 1862, 1988, Copyright 1988, American Medical Association.)
methods for identifying patients at high risk from life-threatening arrhythmias and for enabling lifesaving devices.

Figure 13. The top two traces are ECG leads II and V6, and the bottom trace is a bipolar catheter recording, properly positioned inside the heart, showing the His bundle deflection (H) and intracardiac atrial (A) and ventricular (V) activity.

BIBLIOGRAPHY

EDWARD J. BERBARI
Indiana University/Purdue University, Indianapolis
ELECTROENCEPHALOGRAPHY
An electroencephalogram (EEG) is a record of electric signals
generated by the cooperative action of brain cells or, more
precisely, the time course of extracellular field potentials generated by synchronous action of brain cells. The name is derived from the Greek words enkephalos (brain) and graphein
(to write). An electroencephalogram can be obtained by means
of electrodes placed on the scalp or directly on or in the cortex.
In the latter case it is sometimes called an electrocorticogram
(ECoG) or subdural EEG (SEEG). An EEG recorded in the
absence of external stimuli is called a spontaneous EEG; an
EEG generated as a response to an external stimulus is called
an event-related potential (ERP). The amplitude of an EEG measured with scalp electrodes is 50 μV to 200 μV.
In the EEG the following rhythms have been distinguished (1): delta (0.5 Hz to 4 Hz), theta (4 Hz to 8 Hz), alpha (8 Hz to 13 Hz), and beta (above 13 Hz, usually 14 Hz to 40 Hz) (Fig. 1). The term gamma rhythm for 35 Hz to 45 Hz activity is now seldom used. The contribution of different rhythms to the EEG depends on the age and behavioral state of the subject, mainly the level of alertness. There are also considerable intersubject differences in EEG characteristics. The EEG pattern changes in different neuropathological states and is also influenced by metabolic disorders (1).
The delta rhythm is a predominant feature in EEGs recorded during deep sleep. During deep sleep, delta waves usually have large amplitudes (75 μV to 200 μV peak-to-peak) and show strong coherence with signals acquired at different locations on the scalp.
Theta rhythms rarely occur in humans and primates, except during infancy and childhood. In humans, activity in the
theta band is mostly attributed to the slowing of alpha
rhythms due to pathology. However, theta rhythms are predominant in rodents; in their case the frequency range is
broader (4 Hz to 12 Hz) and the waves have a high amplitude
and characteristic sawtooth shape. It is hypothesized that
Figure 1. (a–c) The standard 10–20 electrode system: scalp electrode positions (Fp1/Fp2, F3/F4, F7/F8, C3/C4, T3–T6, P3/P4, O1/O2, the midline Fz, Cz, Pz, and earlobe electrodes A1, A2) shown relative to the nasion and inion.
The synapses of a neuron are in contact with the membranes of other neurons. When the action potential arrives at the synapse, it secretes a chemical substance, called a mediator or transmitter, which causes a change in the permeability of the postsynaptic membrane to ions. As a result, ions traverse the membrane, and a difference in potential (a postsynaptic potential, or PSP) across the membrane is created. When the negativity inside the neuron is decreased (e.g., by the influx of Na+), the possibility of firing is higher: an excitatory postsynaptic potential (EPSP) is generated. An inhibitory postsynaptic potential (IPSP) is created when the negativity inside the neuron is increased (by the flux of Cl− ions) and the neuron becomes hyperpolarized. Unlike the action potential, the PSPs are graded potentials: their amplitudes are proportional to the amount of secreted mediator, which depends on the excitation of the input neuron. Postsynaptic potentials typically have amplitudes of 5 mV to 10 mV and time spans of 10 ms to 50 ms. In order to obtain suprathreshold excitation, many postsynaptic potentials have to be superimposed in the soma of a neuron. A neuron can have very abundant arborizations, making up to 10,000 synaptic junctions with other neurons.
The electrical activity of neurons generates currents along the cell membrane in the intra- and extracellular spaces, producing an electric field conforming approximately to that of a dipole. Macroscopic observation of this electric field requires the synchronization of the electrical activity of a large number of dipoles oriented in parallel (6). Indeed, the parallel-oriented pyramidal cells of the cortex are to a large degree synchronized by virtue of common feeding by thalamocortical connections (2). The condition of synchrony is fulfilled by the PSPs, which are relatively long in duration. The contribution of action potentials to the electric field measured extracranially is negligible.
The problem of the origins of EEG rhythmical activity has
been approached by electrophysiological studies on brain
nerve cells and by the modeling of electrical activity of the
neural populations (2,3). The question arises whether the
rhythms are caused by single cells with pacemaker properties
or by oscillating neural networks. It has been shown that
some thalamic neurons display oscillatory behavior, even in
the absence of synaptic input (7). There is evidence that the
intrinsic oscillatory properties of some neurons contribute to
the shaping of the rhythmic behavior of networks to which
they belong. However, these properties may not be sufficient
to account for the network's rhythmic behavior (2). It seems
that cooperative properties of networks consisting of excitatory and inhibitory neurons connected by feedback loops
play the crucial role in establishing EEG rhythms. The frequency of oscillation depends on the intrinsic membrane properties, on the membrane potential of the individual neurons,
and on the strength of the synaptic interactions.
The role of EEG oscillations in information processing has
not been fully recognized. However, there is strong evidence
that coherent oscillations in the beta range in a population of
neurons might be the basic mechanism in feature binding of
the visual system (8). Indeed, it seems that this observation
is not limited to the visual system and that synchronized oscillatory activity provides an efficient way to switch the system between different behavior states and to cause a qualitative transition between different modes of information
processing. In this way, neuronal groups with a similar dy-
RECORDING STANDARDS
The EEG is usually registered by means of electrodes placed on the scalp. They can be secured by an adhesive such as collodion or embedded in a special snug cap. The resistance of the connection should be less than 5 kΩ, so the recording site is first cleaned with diluted alcohol, and conductive electrode paste is applied to the electrode cup.
Knowledge of the exact positions of the electrodes is very important both for interpretation of a single recording and for comparison of results; hence the need for standardization. The traditional 10–20 electrode system (9) fixes the positions of 19 EEG electrodes (and two electrodes placed on the earlobes: A1, A2) in relation to specific anatomic landmarks, such that 10% to 20% of the distance between them is used as the electrode interval [Fig. 1(a–c)]. The first part of a derivation's name indexes the array's row from the front of the head: Fp, F, C, P, and O. The second part is a number, odd on the left side and even on the right, with z (or 0) for the center. Progress in topographic representation of EEG recordings demands a larger number of electrodes. Electrode sites halfway between those defined by the standard 10–20 system have been introduced in the extended 10–20 system (10).
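The naming convention is regular enough to parse mechanically; a small helper of my own devising (not part of any standard library):

```python
def hemisphere(label):
    """Classify a 10-20 electrode label: an odd numeric suffix means the
    left hemisphere, an even one the right, 'z' (or '0') the midline."""
    suffix = label[-1]
    if suffix in ('z', '0'):
        return 'midline'
    return 'left' if int(suffix) % 2 == 1 else 'right'

# hemisphere('C3') -> 'left'; hemisphere('F4') -> 'right'; hemisphere('Cz') -> 'midline'
```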
The EEG is a measure of potential difference; in a referential (or unipolar) setup it is measured relative to the same electrode for all derivations. This reference electrode is usually placed on an earlobe, nose, mastoid, chin, neck, or scalp center. There is no universal consensus regarding its best location. In a bipolar setup (montage) each channel registers the potential difference between two particular scalp electrodes. Data recorded in a referential setup can be transformed into any bipolar montage, for the sake of display or further processing. The average reference montage can be obtained by subtracting from each channel the average activity of all the remaining derivations. The Hjorth transform references each electrode to its four closest neighbors, which is an approximation of the surface Laplacian; the Laplacian, calculated as the second spatial derivative of the signal, represents the scalp current density (11).
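The average reference montage described above is a pure re-referencing step; a sketch for multichannel epochs stored as lists of samples (the function name is mine):

```python
def average_reference(channels):
    """Subtract from each channel the mean of all remaining channels."""
    n = len(channels)
    rereferenced = []
    for i, channel in enumerate(channels):
        others = [c for j, c in enumerate(channels) if j != i]
        mean_others = [sum(s) / (n - 1) for s in zip(*others)]
        rereferenced.append([a - m for a, m in zip(channel, mean_others)])
    return rereferenced
```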
In contrast with the open question of the reference, the necessity of artifact rejection is universally acknowledged. The main problem lies in the lack of a working definition for an EEG artifact: it can stem from muscle or heart activity (EMG, ECG), eye movement (EOG), external electromagnetic fields, poor electrode contact, the subject's movement, and so on. Corresponding signals (EMG, EOG, ECG, and body movements) registered simultaneously with the EEG are helpful in the visual rejection of artifact-contaminated epochs.
An EEG is usually digitized by a 12-bit analog-to-digital converter (ADC), with the sampling frequency ranging from 100 Hz for spontaneous EEGs, to several hundred hertz for ERPs, to several kilohertz for recording short-latency far-field potentials.
(Figure: amplitude, occurrence rate, and power of the sleep EEG plotted against hours of sleep; sleep stages 1–4 and REM are indicated.)
The sleep pattern changes dramatically during maturation. For newborn babies REM takes most of the sleep time,
and in young children only REM and non-REM stages can
be distinguished.
Maturation changes in electrocortical activity of fetal animals also involve an increase of power in the higher frequency
bands, as was shown for fetal lambs by means of wavelet
transform (14). Increased correlation between EEG, respiratory activity, and blood pressure was also found with increasing age (15). However, morphine destroys these correlations.
These observations indicate that maturation is connected
with increased CNS integration.
Physiologically, the maturation process is connected with
the development of dendritic trees and myelination. Myelin
layers produced by glial cells cover the axons of neurons and
act as an insulator of the electrically conductive cells. The
propagation of electrical activity is faster and less energy-consuming in myelinated fibers.
EVENT-RELATED POTENTIALS
ERPs are the stimulus-induced synchronization and enhancement of spontaneous EEG activity (16). Among them, the most clinically used are the evoked potentials (EPs), usually defined as changes of the EEG triggered by particular stimuli: visual (VEP), auditory (AEP), and somatosensory (SEP). The basic problem in the analysis of EPs is detecting them within the usually larger EEG activity. EP amplitudes are one order of magnitude (or more) smaller than those of the ongoing EEG. Averaging is a common technique in EP analysis; it makes possible the reduction of background EEG noise on the assumption that the background noise is a random process but the EP is deterministic.
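Under that assumption, averaging N stimulus-locked epochs improves the amplitude SNR by √N, so recovering an EP one order of magnitude below the background takes on the order of a hundred epochs. A one-line consequence (the helper name is mine):

```python
import math

def epochs_needed(amplitude_snr_gain):
    """Epochs to average for a given amplitude SNR gain, assuming a
    deterministic EP in zero-mean random background EEG (gain = sqrt(N))."""
    return math.ceil(amplitude_snr_gain ** 2)

# A tenfold gain (one order of magnitude) requires 100 epochs.
```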
EEG ANALYSIS
The original method of EEG analysis is visual scoring of the
signals plotted on paper. Modern computer analysis can extend electroencephalographic capabilities by supplying information not directly available from the raw data. However, visual analysis is still a widespread technique, especially for
detection of transient features of signals. In most cases the
agreement of an automatic method with visual analysis is a
basic criterion for its acceptance.
Due to its complexity, the EEG time series can be treated
as a realization of a stochastic process, and its statistical
properties can be evaluated by typical methods based on the
theory of stochastic signals. These methods include probability distributions and their moments (means, variances,
higher-order moments), correlation functions, and spectra.
Estimation of these observables is usually based on the assumption of stationarity, which means that the statistical
properties of the signal do not change during the observation
time. While the EEG signals are ever changing, they can be
subdivided into quasistationary epochs when recorded under
constant behavioral conditions.
The EEG signal can be analyzed in the time or the frequency domain, and one or several channels can be analyzed at a time. The applied methods involve spectral analysis by the fast Fourier transform (FFT), autoregressive (AR) or autoregressive moving-average (ARMA) parametric models, time-frequency and time-scale methods (wavelets), nonlinear analysis (including the formalism for chaotic series), and artificial neural networks.
The estimation of power spectra is one of the most frequently used methods of EEG analysis (Fig. 3). It provides information about the basic rhythms present in the signal and can be calculated by means of the FFT. Spectral estimators with better statistical properties can be obtained by application of parametric models such as AR and ARMA models or, for time-varying signals, the Kalman filter. For quasistationary EEGs, an AR model is sufficient. The AR model represents a filter with white noise at the input and the EEG series at the output; it is compatible with a physiological model of alpha rhythm generation (19). The AR model also provides a parametric description of the signal and makes possible its segmentation into stationary epochs. It also offers the possibility of detecting nonstationarities by means of inverse filtering (1).
Interdependence between two EEG signals can be found by a cross-correlation function or its analog in the frequency domain: coherence. Cross-correlation can be used for comparison of EEGs from homologous derivations on the scalp. A certain degree of difference between these EEGs may be connected with functional differences between the brain hemispheres, but a low value of cross-correlation may also indicate pathology. Cross-covariance functions have been extensively used in the analysis of ERPs for the study of the electrophysiological correlates of cognitive functions (20). Coherences are useful in determining the topographic relations of EEG rhythms. Usually, ordinary coherence calculated pairwise between two signals is used. However, for an ensemble of channels taken from different derivations, the relationship between the signals may come from common driving by another site. In order to find intrinsic relationships between signals from different locations, partial coherences should be calculated: EEG signals recorded from the ensemble of electrodes are realizations of one EEG process and are usually correlated (21).
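The zero-lag value of the normalized cross-correlation (Pearson r) already captures the similarity measure discussed above; a sketch (the function name is mine):

```python
def zero_lag_correlation(x, y):
    """Normalized cross-correlation at zero lag between two equal-length epochs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Identical epochs give 1.0; phase-inverted epochs give -1.0.
```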
The representation of EEG activity in the spatial domain is usually performed by mapping. It is more effective for a human observer to look at a map than at a table of
numbers. A map may help to make a direct comparison be-
Figure 3. FFT and AR power spectra computed from 4 s EEG epochs (frequency axis in Hz).
Figure 4. Wavelets: time-frequency decomposition of a signal into approximation (A) and detail (D) components across log N levels.
1. E. Niedermeyer and F. Lopes da Silva (eds.), Electroencephalography: Basic Principles, Clinical Applications, and Related Fields, 3rd ed., Baltimore: Williams & Wilkins, 1993.
23. M. Matousek, P. Petersen, and S. Freeberg, Automatic assessment of randomly selected routine EEG records, in G. Dolce and H. Kunkel (eds.), CEAN: Computerized EEG Analysis, Stuttgart: Fischer, 1975, pp. 421–428.
METIN AKAY
Dartmouth College
KATARZYNA BLINOWSKA
PIOTR DURKA
University of Warsaw
ELECTROMYOGRAPHY
ANATOMY AND PHYSIOLOGY
Muscles convert chemical energy into mechanical energy.
Since they can pull but not push, at least two muscles are
needed for each joint connecting two body segments: the agonist and the antagonist. Cocontraction of both muscles causes
APPLICATIONS OF EMG
Today the main clinical application of the EMG techniques
is based on the use of needles for diagnosing neuromuscular
diseases that modify the morphology of MUAPs. However,
surface techniques are becoming more popular because they
are noninvasive and inexpensive and provide global information. The most important applications of surface techniques
are listed below.
Estimation of Nerve Fiber Conduction Velocity. A peripheral nerve is electrically stimulated and the muscle response (M-wave) is detected. Measurement of the distance between the stimulation and detection sites, along with appropriate measurement of the stimulus–response delay, allows estimation of the nerve fibers' conduction velocity (2).
Myoelectric Manifestations of Muscle Fatigue. As a voluntary
or electrically elicited muscle contraction is sustained in time
under isometric conditions, the EMG signal becomes progressively slower. This change precedes the inability to sustain
the required effort (mechanical fatigue), is referred to as myoelectric manifestations of muscle fatigue, and depends on the
fiber type constituency of the muscle. It is likely that current
Figure 1. (a) Depolarization zone of a muscle fiber, description of the membrane current, and monopolar surface potentials Vs generated on the skin by two depolarized zones at two depths h1 and h2. (b) Muscle fiber transmembrane voltage, current, and tripole model of the transmembrane current. (c) Schematic representation of a motor unit (example with three fibers only) and of the signal detected by a differential amplifier (surface electrodes) or by a coaxial needle. Physical dimensions are not in correct proportions.
research will lead to a noninvasive estimation of the percentage of Type I and Type II fibers, thereby reducing the need for muscle biopsies (3–5).
Myoelectric manifestations of muscle fatigue are also observable in intermittent isometric contractions, isokinetic contractions, and, in general, in dynamic contractions. Particularly important fields of interest concern respiratory muscle fatigue and back muscle fatigue in occupational medicine (6).
Gait Analysis and Muscle Activation Intervals. During movements, such as gait, sport activities, or rehabilitation exercises, it is important to detect the time and the level of individual muscle activations. Surface EMG is the appropriate
tool for this purpose. Crosstalk and relative movement between muscle and electrodes still represent important confounding factors (7,8).
Control of Myoelectric Prosthesis. The motors of artificial
limbs (mostly hands, wrists, and elbows) may be controlled by
surface EMG signals detected from muscles above the level
of amputation. Many systems of this kind are commercially
available (9).
Biofeedback. Providing a patient with real-time information about the level of activity of a particular muscle (or muscle group) is the basis of EMG biofeedback.
(Figure: decomposition of the raw EMG signal, detected by an electrode over the muscle, into the individual motor unit action potential trains (MUAPTs) generated by the motoneurons.)
Single differential (bipolar) detection provides V_D = V_1 − V_2. Double differential detection applies the spatial filter with impulse response h_DD(x) = δ(x + e) − 2δ(x) + δ(x − e), where e is the interelectrode distance. A Laplacian detection scheme performs a discrete two-dimensional differentiation, V_L = V_1 + V_3 + V_4 + V_5 − 4V_2.
Figure 4. Detection techniques and spatial filtering effects (v = conduction velocity, Vm = monopolar voltage). (a) Single differential (or bipolar) detection: impulse response and transfer function of the spatial filter. (b) Double differential detection: impulse response and transfer function of the spatial filter. (c) A Laplacian spatial filter performing a discrete two-dimensional differentiation.
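For a potential traveling at conduction velocity v, the spatial filters above become temporal filters of the frequency f. A minimal numerical sketch of their magnitude responses follows; the function names and the default values e = 10 mm and v = 4 m/s are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def sd_magnitude(f, e=0.01, v=4.0):
    """|H_SD(f)| of a bipolar (single differential) montage:
    H(f) = 1 - exp(-j*2*pi*f*e/v), so |H| = 2*|sin(pi*f*e/v)|."""
    return 2.0 * np.abs(np.sin(np.pi * f * e / v))

def dd_magnitude(f, e=0.01, v=4.0):
    """|H_DD(f)| of a double differential montage:
    h(x) = d(x+e) - 2*d(x) + d(x-e), so |H| = 4*sin^2(pi*f*e/v)."""
    return 4.0 * np.sin(np.pi * f * e / v) ** 2

# both filters suppress dc (nontraveling) components and have
# periodic zeros at multiples of v/e (here 400 Hz)
f = np.linspace(0.0, 400.0, 401)
sd, dd = sd_magnitude(f), dd_magnitude(f)
```

The dc gain of both filters is zero, which is why nontraveling components (cf. Fig. 5) are attenuated in differential recordings.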
Figure 5. (a) Linear array detection of a single motor unit action potential. A nontraveling potential is seen in the monopolar detection and is due to the extinction
of the depolarized zones at the tendon endings. Innervation and termination zones
can be clearly seen in the single differential recording. Conduction velocity may be
well estimated using the double differential signals. Signals are simulated using
the model depicted in Fig. 14. (b) Example of array detection of three motor units,
with different properties, from the biceps brachii of a healthy subject. Interelectrode
distance is 5 mm. Straight lines have been added to outline the bi-directional propagation and the different innervation zones [from Merletti and Lo Conte (15)].
f_mean = ∫₀^∞ f P(f) df / ∫₀^∞ P(f) df   (1)

and the median frequency f_med is defined by

∫₀^{f_med} P(f) df = ∫_{f_med}^∞ P(f) df = 0.5 ∫₀^∞ P(f) df

where P(f) is the power spectral density of the EMG signal.
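The mean (MNF) and median (MDF) frequency definitions of Eq. (1) discretize directly. A minimal sketch, assuming a uniformly sampled power spectrum (the function name and test spectrum are illustrative):

```python
import numpy as np

def mean_and_median_frequency(P, f):
    """MNF and MDF of a power spectrum P sampled at uniformly
    spaced frequencies f, discretizing Eq. (1) and the
    median-frequency definition."""
    P = np.asarray(P, dtype=float)
    f = np.asarray(f, dtype=float)
    mnf = np.sum(f * P) / np.sum(P)          # first spectral moment
    cum = np.cumsum(P) / np.sum(P)           # cumulative power fraction
    mdf = f[np.searchsorted(cum, 0.5)]       # frequency splitting power in half
    return mnf, mdf

# a spectrum symmetric about 100 Hz has MNF = MDF = 100 Hz
f = np.linspace(0.0, 200.0, 2001)
P = np.exp(-((f - 100.0) / 20.0) ** 2)
mnf, mdf = mean_and_median_frequency(P, f)
```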
Figure 6. (a) Schematic diagram of generation of voluntary EMG. (b) Schematic
diagram of generation of electrically elicited EMG. (c) Experimental data from a voluntary contraction of a tibialis anterior muscle. All variables are normalized to their initial value to obtain
the fatigue plot. Notice that voluntary torque could be maintained at 60% MVC (maximal voluntary contraction) for 60 s, but myoelectric variables started to change from the beginning of the
contraction, showing myoelectric manifestations of muscle fatigue. EMG signal detected bipolarly
with 10 mm interelectrode distance. PSD, power spectral density function; RMS, root mean
square value; MNF, mean frequency; CV, conduction velocity [from Merletti and Lo Conte (11)].
(d) Experimental data from an electrically elicited contraction of a vastus medialis muscle. Notice
the change of shape of the M-wave. Detection as in (c); f = 30 pulses per second [from Merletti
and Lo Conte (15)].
It can be shown that f_mean2 = k·f_mean1 and f_med2 = k·f_med1. In general the two signals (and the corresponding spectra) will not be exactly scaled, and the ratios f_mean2/f_mean1 and f_med2/f_med1 will not
be identical, but they can provide an estimate of k and quantify the
scaling phenomenon. If we define A as the average rectified
value (ARV) and R as the root mean square value (RMS), then
A2 = A1/k and R2 = R1/√k. The normalized plots of MNF,
MDF, ARV, RMS, and CV (see next section for a discussion of
CV estimation) versus time describe signal changes and are
often referred to as the fatigue plot (15).
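A fatigue plot can be built by computing the myoelectric variables epoch by epoch and normalizing each to its initial value. A minimal sketch (the function name, the epoch length, and the simple periodogram-based MNF are illustrative assumptions):

```python
import numpy as np

def fatigue_plot(emg, fs, epoch_s=0.5):
    """Normalized MNF, ARV, and RMS per epoch, as in a fatigue plot.
    Each variable is divided by its value in the first epoch."""
    n = int(epoch_s * fs)
    epochs = [emg[i:i + n] for i in range(0, len(emg) - n + 1, n)]
    mnf, arv, rms = [], [], []
    for x in epochs:
        X = np.abs(np.fft.rfft(x)) ** 2              # crude periodogram
        f = np.fft.rfftfreq(len(x), 1.0 / fs)
        mnf.append(np.sum(f * X) / np.sum(X))        # mean frequency
        arv.append(np.mean(np.abs(x)))               # average rectified value
        rms.append(np.sqrt(np.mean(x ** 2)))         # root mean square
    norm = lambda v: np.array(v) / v[0]
    return norm(mnf), norm(arv), norm(rms)
```

In a real recording the normalized MNF curve drifts below 1 during a sustained contraction, as in Fig. 6(c); for a stationary test signal all three curves stay at 1.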
Figures 6(c) and 6(d) show an example of fatigue plots and
signals in both experimental situations. It has been demonstrated from animal experiments that the rate of decay of
MNF or MDF in each muscle (that is, the estimated scaling
factor k) is related to the percentage of Type I and Type II
fibers in the muscle (5). This finding suggests the possibility
of noninvasive fiber type estimation and, if confirmed by human biopsies, is expected to be very relevant in future research concerning rehabilitation and sports medicine. More advanced approaches are being developed to separate the
contribution of scaling from that of spectral shape change and
to relate them to different underlying physiological phenomena.
Applications concern rehabilitation, sports and occupational medicine. An important clinical application of myoelectric manifestations of muscle fatigue concerns the analysis of
back muscle impairment for the investigation of back problems and low back pain (6,16). Issue 4 of volume 34 of the
Journal of Rehabilitation Research and Development (1997) is
devoted to this topic. In particular, the works of Roy et al. (17)
and Oddson et al. (18) focus on the classification of muscle
impairments and on the development of clinical protocols.
The spectral approach requires the signal to be quasi-stationary, that is, its statistical properties must not change
during each time epoch. This requirement is not satisfied during dynamic contractions, when the EMG is often generated
in short bursts. More sophisticated methods, based on time-frequency representations and wavelet expansions, are being investigated for the quantification of myoelectric manifestations of muscle fatigue in dynamic conditions.
M_1(t) = Σ_{i=1}^{N} S_i(t − τ_i)   (2)
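Equation (2) models the detected signal as a superposition of delayed motor unit contributions. A minimal simulation sketch; the biphasic MUAP shape, function names, and parameter values are illustrative assumptions, not the models of Refs. 41–43.

```python
import numpy as np

def muap(t, dur=0.01):
    """Hypothetical biphasic MUAP shape (derivative of a Gaussian)."""
    s = dur / 6.0
    return -(t - dur / 2) / s**2 * np.exp(-((t - dur / 2) ** 2) / (2 * s**2))

def surface_emg(firings, fs=2048, T=1.0):
    """Eq. (2): M(t) = sum_i S_i(t - tau_i), each S_i being a train of
    identical MUAPs fired at the instants listed in `firings[i]`."""
    t = np.arange(int(T * fs)) / fs
    m = np.zeros_like(t)
    kernel = muap(np.arange(int(0.01 * fs)) / fs)
    for times in firings:                       # one list of firing instants per MU
        train = np.zeros_like(t)
        idx = (np.asarray(times) * fs).astype(int)
        train[idx[idx < len(t)]] = 1.0
        m += np.convolve(train, kernel)[: len(t)]
    return t, m
```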
The muscle fiber CVs are distributed over a range, and the
measurement techniques described above give only a single number
related to that distribution. Noninvasive techniques for CV
distribution estimation based on surface EMG give significantly more information about the muscle state, and they
would be useful for both clinical and research purposes.
Two recent approaches are based on measurements of the
ratio of the cross and auto power spectra of the EMGs from the two
detection sites (22,24). From Eq. (2), with S_i(t) defined as the ith motor
unit train and assuming uncorrelated units with identical firing statistics, the ratio, γ(f), of the cross power spectrum to the auto power spectrum is given by
γ(f) = [ Σ_{i=1}^{N} P_ii(f) exp(−j2πf θ_i) ] / [ Σ_{i=1}^{N} P_ii(f) ]   (3)

Figure 8. Cross-correlation function between two EMG signal channels recorded from human biceps, showing the propagation delay as a
shift in the peak of the function [from Li and Sakamoto (26)]. Interelectrode distance: 5 mm.
where P_ii(f) is the autospectrum of the ith unit, θ_i is the
propagation delay of the ith unit between the two detection sites, and N is the
number of units. Note that only autospectra terms appear in
Eq. (3) because of the assumption of uncorrelated units, in
which case all the cross-spectra terms are zero. Now assuming identical MUAPs (a strong limitation) across the units,
the P_ii(f) are identical and Eq. (3) becomes
γ(f) = N^{−1} Σ_{i=1}^{N} exp(−j2πf θ_i)   (4)
For large N, Eq. (4) is approximately equivalent to the statistical expectation over the delay θ:

γ(f) ≈ ∫ p(θ) exp(−j2πfθ) dθ = F{p(θ)}   (5)

that is, the Fourier transform of the delay probability density p(θ); since each delay corresponds to a conduction velocity, the CV distribution can be recovered from γ(f).
Figure 9. Results from the measurement of conduction velocity probability density function for biceps using the impulse response function
method [from Hunter et al. (22)].
Figure 10. Surface EMG acquired with different electrode configurations showing single motor
unit signals [from Rau and Disselhorst-Klug (14)].
To control several degrees of freedom, it is necessary with current systems either to have parallel two- or three-state controllers (one for each degree of freedom) or to have a single controller with as many states as
required. Unfortunately, it has been found that multistate
systems with more than three states have unacceptable error
performance due to the excessive demand placed on the operator.
Figure 11. Example of surface EMG detected from a group of muscles during gait of a normal
human subject. TA, tibialis anterior; SOL, soleus; MG, medial gastrocnemius; LG, lateral gastrocnemius; VAM, vastus medialis; SM, semitendinosus; BF, biceps femoris; GLM, gluteus maximus
(courtesy of Dr. Carlo Frigo, Centro di Bioingegneria Politecnico di Milano and Fondazione Don
Gnocchi).
ELECTROMYOGRAPHY
CNS
Muscle
Joint
Control
Battery
537
Output
Prosthesis
Figure 12. Block diagram showing relationship between normal and myoelectric control systems
(shaded area is removed in amputation surgery) [from Parker and Scott (36)].
can be used as the ANN input. Hudgins et al. (39) have demonstrated that the initial 300 ms of dynamic contraction
EMGs contain deterministic components that are repeatable
in time and that differ over contraction functions (see Fig.
13). Thus time samples from these deterministic components
can form feature sets for an ANN trained to classify by function. A 30:8:4 perceptron ANN-based controller is used in this
application and is currently implemented on a TMS320 DSP
microprocessor for clinical testing.
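The 30:8:4 topology maps 30 EMG time samples through 8 hidden units to 4 output classes (one per contraction function). The sketch below only illustrates the forward pass of such a network with untrained random weights; it is not the trained controller of Ref. 39, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 30:8:4 multilayer perceptron, random (untrained) weights for illustration
W1, b1 = rng.standard_normal((8, 30)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((4, 8)) * 0.1, np.zeros(4)

def classify(samples):
    """Map a 30-sample EMG feature vector to one of 4 contraction classes."""
    h = np.tanh(W1 @ samples + b1)             # hidden layer (8 units)
    z = W2 @ h + b2                            # output layer (4 units)
    p = np.exp(z - z.max()); p /= p.sum()      # softmax class probabilities
    return int(np.argmax(p)), p

cls, probs = classify(rng.standard_normal(30))
```

In the actual controller the weights would be obtained by training on labeled 300 ms EMG records from each contraction function.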
An approach to prosthesis control with significant promise
is to estimate from EMGs the biomechanical variables of a
joint model and to drive the mechanical prosthesis accordingly [see Wood et al. (40)]. Such an approach can incorporate
stiffness control in which the stiffness (or compliance) of the
prosthetic limb is made to match that of the limb model for a
given client.
Figure 13. Initial 300 ms of EMG obtained from bipolar differential measurement with electrodes over biceps and triceps during elbow flexion. (a) Four 300 ms records and (b) the ensemble
average of sixty 300 ms records demonstrating the deterministic component of the initial phase
[from Hudgins et al. (35)].
Such models account for anatomical features of the MU, conduction velocity, and anisotropy of the tissue. Future research might lead to the development of systems for the automatic identification of the most
likely set of parameters for individual MUs and make them
available to the neurologist for diagnostic evaluation.
Figure 14. Tripole model of the current sources of a single muscle fiber in an anisotropic conductive medium (σ_y = σ_x ≠ σ_z), with comparison of simulated and experimental signals. The left and right tripole currents satisfy I_1 + I_2 + I_3 = 0 and I_1·a + I_3·b = 0. Model parameters: e = 10 mm, σ_z/σ_r = 6, b = 7 mm, a/b = 1/3, h = 4.5 mm, R = 2 mm, W_I = 5 mm, W_TR = 20 mm, W_TL = 10 mm, L_R = 56 mm, L_L = 73 mm.
BIBLIOGRAPHY
1. B. Saltin and P. Gollnick, Skeletal muscle adaptability: Significance for metabolism and performance, in Handbook of Physiology, Skeletal Muscle, Sec. 10, Chap. 19, American Physiological Society, pp. 555–631, 1983.
2. M. Smorto and J. V. Basmajian, Clinical Electroneurography, Baltimore: Williams and Wilkins, 1979.
3. J. Duchene and F. Goubel, Surface electromyogram during voluntary contraction: Processing tools and relation to physiological events, CRC Crit. Rev. Biomed. Eng., 21: 313–397, 1993.
4. R. Merletti, M. Knaflitz, and C. J. DeLuca, Electrically evoked myoelectric signals, CRC Crit. Rev. Biomed. Eng., 19: 293–340, 1992.
5. E. Kupa et al., Effects of muscle fiber type and size on EMG median frequency and conduction velocity, J. Appl. Physiol., 79: 23–32, 1995.
34. S. E. Mathiassen, J. Winkel, and G. Hagg, Normalization of surface EMG amplitude from the upper trapezius muscle in ergonomic studies: A review, J. Electromy. Kinesiol., 5: 197–226, 1995.
35. B. Hudgins, P. Parker, and R. Scott, Control of artificial limbs using myoelectric pattern recognition, Med. Life Sci. Eng., 13: 21–38, 1994.
36. P. Parker and R. Scott, Myoelectric control of prostheses, CRC Crit. Rev. Biomed. Eng., 13: 283–310, 1986.
37. D. Graupe, J. Salahi, and K. Kohn, Multifunction prosthesis and orthosis control via microcomputer identification of temporal pattern differences in single-site myoelectric signals, J. Biomed. Eng., 4: 17–22, 1982.
38. M. Kelly, P. Parker, and R. Scott, The application of neural networks to myoelectric signal analysis: A preliminary study, IEEE Trans. Biomed. Eng., 37: 221–230, 1990.
39. B. Hudgins, P. Parker, and R. Scott, A new strategy for multifunction myoelectric control, IEEE Trans. Biomed. Eng., 40: 82–94, 1993.
40. J. Wood, S. Meek, and S. Jacobson, Quantitation of human shoulder anatomy for prosthetic arm control, J. Biomech., 22: 273–292, 1989.
41. P. Rosenfalck, Intra- and extracellular potential fields of active nerve and muscle fibers: A physico-mathematical analysis of different models, Acta Physiol. Scand. Suppl., 321: 1–168, 1969.
42. N. Dimitrova, Model of the extracellular potential field of a single striated muscle fiber, Electromy. Clin. Neurophysiol., 14: 53–66, 1974.
43. T. Gootzen, D. Stegeman, and A. Van Oosterom, Finite limb dimensions and finite muscle length in a model for the generation of electromyographic signals, Electroencephalogr. Clin. Neurophysiol., 81: 152–162, 1991.
ROBERTO MERLETTI
Politecnico di Torino
LASER PHYSICS
The primary components of a laser (which is an acronym for
Light Amplification by Stimulated Emission of Radiation) are
(Fig. 1): (1) a lasing medium, which may be in a solid, liquid,
or gas phase, capable of undergoing stimulated emission; (2)
an excitation mechanism, which raises the atoms or molecules of the lasing medium to a higher electronic energy state by absorbing either electrical, thermal, or
optical energy (this process results in a condition known as a
population inversion where more atoms have electrons at a
higher energy state than at a lower energy level); and (3) a
positive feedback system that consists of a highly reflective
curved mirror and a partially transmitting flat mirror causing
the spontaneous photon emission from the active medium to
bounce back and forth between the two reflecting mirrors.
The collision between the spontaneous emission and the
atoms or molecules in the excited state stimulates additional
emission (stimulated emission) inside the active laser cavity
or optical resonator. If the frequencies are properly chosen,
the light will be amplified and emitted along the axis of the
resonator as an intense narrow beam. The result of the stimulated emission is two electromagnetic waves of the same
wavelength traveling parallel and in phase (spatial and temporal coherence) with one another. In contrast to gas-filled
lasers, solid-state lasers, such as the Nd:YAG and ruby lasers,
require an external optical light source as an excitation
source to pump the atoms in the solid-state crystal.
Lasers produce a beam of nonionizing radiation (spot size
on the order of 1 μm) that is highly coherent (and hence directional),
generally monochromatic (single wavelength) and collimated
(the beam remains almost parallel along its trajectory with
minimum loss of power due to divergence). In addition, a laser
beam has a high power density, or irradiance, usually expressed in watts per square centimeter (W/cm2). If the radiation is delivered as a pulse, the pulse duration becomes an
additional factor in determining the effect on tissue because
the energy applied during the exposure, which is expressed in
joules per square centimeter (J/cm2), is equal to the power
density times the pulse duration (also called the fluence).
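The fluence calculation described above is a simple product of irradiance and pulse duration. A minimal sketch, assuming a uniform (top-hat) beam; the function name and example numbers are illustrative:

```python
import math

def fluence_j_per_cm2(power_w, spot_diameter_cm, pulse_s):
    """Fluence (J/cm^2) = irradiance (W/cm^2) x pulse duration (s),
    for a uniform beam filling a circular spot of the given diameter."""
    area_cm2 = math.pi * (spot_diameter_cm / 2.0) ** 2
    irradiance = power_w / area_cm2          # W/cm^2
    return irradiance * pulse_s

# e.g., 10 W focused into a 1 mm (0.1 cm) spot for a 100 ms pulse
f_j = fluence_j_per_cm2(10.0, 0.1, 0.1)      # about 127 J/cm^2
```

The same routine shows why spot size matters for cutting: halving the diameter quadruples the irradiance and hence the fluence.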
When the laser is used as a cutting tool, the spot size must
be very small in order to concentrate the power into a tiny
spot. Conversely, when a laser is used for coagulation,
the beam is defocused to allow for an increased spot size to
spread the laser beam over a large area (Fig. 2). Power densities above 1 kW/cm2 tend to be used for incisions, whereas
Figure 2. A laser beam can be used for cutting (vaporization) when
the focal zone is near the surface of the tissue. For coagulation, the
laser beam is defocused causing the energy to be distributed over a
wider area.
power densities below 500 mW/cm2 are generally used for coagulation.
The transverse electromagnetic mode (TEM) is a term used
to describe the power distribution of a laser beam over the
spot area. For example, a TEM01 mode refers to a multimode
distribution and indicates that the spot has a cool region in
the center of the beam. A TEM00 mode, on the other hand,
produces a Gaussian power distribution with most of
the power concentrated in the center of the beam and the rest
decreasing in intensity toward the periphery of the beam.
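The TEM00 Gaussian distribution described above has a standard closed form. A minimal sketch; the function name is an assumption, and w denotes the conventional 1/e² beam radius:

```python
import math

def tem00_irradiance(r_cm, power_w, w_cm):
    """TEM00 (Gaussian) irradiance profile:
    I(r) = (2 P / (pi w^2)) * exp(-2 r^2 / w^2),
    where w is the 1/e^2 beam radius and P the total beam power."""
    peak = 2.0 * power_w / (math.pi * w_cm ** 2)   # on-axis irradiance
    return peak * math.exp(-2.0 * (r_cm / w_cm) ** 2)
```

At r = w the irradiance falls to exp(−2), about 13.5% of the on-axis peak, which is why most of the power is concentrated in the center of the beam.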
Q-Switched Lasers
When a laser is Q-switched, the energy that is normally
stored in the inverted atomic population is suddenly transferred to the oscillating laser cavity. The term Q is used to
convey the fact that a laser cavity is essentially a resonator
with a certain quality factor Q, similar to any conventional
electronic resonant circuit. Normally, the two highly reflecting mirrors inside a conventional laser tube bounce the
energy back and forth internally, thus leading to a high resonant Q factor. By inserting a lossy attenuator into the resonating laser cavity, it is possible to change the Q factor of
the laser considerably to a point where the energy build-up is
insufficiently high for laser oscillation to occur. Conversely, if
the attenuation is suddenly switched off, the energy builds up
inside the laser cavity so that the excess excitation can be
released in a controlled manner as a very intense burst of
short-duration energy (usually nanoseconds).
Pulsed and CW Lasers
Depending on how the excitation energy inside the laser cavity is applied, the output beam could be either in the form of
a continuous wave (CW) or a pulsed wave (PW). The output
of PW lasers can vary widely depending on the duration, repetition rate, and energy of the individual pulses.
The effect of the laser on the tissue can be enhanced with
PW rather than CW delivery because pulsed radiation limits heat diffusion into the surrounding tissue.
Figure 3. Laser–tissue interaction.
(Figure 4: absorption coefficient (cm⁻¹) of melanin, hemoglobin, protein, and water as a function of wavelength (μm).)
(Figure 5: zones of laser–tissue interaction around the beam: an ablation/vaporization zone surrounded by a coagulation/necrosis zone and a hyperthermia zone.)
Figure 6. Typical medical laser delivery systems. (a) Articulated
arm. (b) Fiber optic coupling.
Laser light is introduced into the proximal end of the fiber and is transmitted along the fiber length through a process known as total
internal reflection. Flexible fiber optic guides for laser beam
delivery are usually made of silica (SiO2) glass and some additives. These fibers are commonly used for visible wavelengths
as well as near-infrared wavelengths below 2.1 μm and can
deliver a relatively low output power. For example, argon and
YAG laser energy can be transmitted through glass optical
fibers with very little loss. Wavelengths above 2.1 μm are
usually absorbed by the fused silica or by water impurities
inside the fiber core. High-power UV radiation produced for
example by excimer lasers is difficult to transmit by optical
glass fibers. Therefore, regular glass fibers are used only for
light transmission in the visible region of the spectrum.
The advantages of fiber optic guides, compared to the more
traditional articulated arms or fixed delivery systems, are
their small size, flexibility, improved manipulation, low
weight, and reduced cost. The ability to launch high optical
power into small optical fibers extends the clinical application
of lasers considerably. It permits precisely controlled energy
delivery through flexible endoscopes to remote intravascular
or intracavitary regions of the body where conventional surgical procedures would otherwise be very invasive to perform.
Recently, new crystalline fiber optic materials have been evaluated for short-length delivery of the longer IR wavelengths. Examples include sapphire (Al2O3) fibers for transmission of
Er:YAG laser energy and silver halide fibers for use with
CO2 lasers.
Fiber optic systems used to deliver laser light culminate in
either "hot tip" (contact) interaction or "free beam" (noncontact) interaction, depending on what happens to the beam at
the end of the fiber. If the beam is absorbed by the fiber tip
or a tip affixed to the fiber distal end, it is called a hot tip; if
the beam is directed out of the fiber tip and travels a short
distance before it interacts with the tissue, it is considered a
free beam. Tips come in a wide variety of designs. Some tips
are intended to focus the beam, others are shaped to spread
the beam over a wider area (Fig. 7).
Articulated Arms
The multisegmented adjustable articulated arm, coupled to a
detachable lensed handpiece or endoscope, is the most widely used delivery system.
Laser               Wavelength (nm)   Color
ArF excimer         193               Ultraviolet
KrCl excimer        222               Ultraviolet
KrF excimer         248               Ultraviolet
XeCl excimer        308               Ultraviolet
XeF excimer         351               Ultraviolet
Helium-cadmium      325               Ultraviolet
Argon               488               Blue
                    514               Green
Copper vapor        511               Green
                    578               Yellow
Gold vapor          627               Red
Helium-neon         633               Red
Krypton             531               Green
                    568               Yellow
                    647               Red
Ruby                694               Red
Alexandrite         720-800           Near-IR
Diode               660-1500          Near-IR
Thulium:YAG         2010              Near-IR
Holmium:YAG         2120              Near-IR
Erbium:YAG          2940              Near-IR
Neodymium:YAG       1064              Near-IR
                    1318              Near-IR
Carbon dioxide      10,600            Mid-IR
Dye Lasers

Dye lasers are more complex than gas lasers and require optical pumping, either by an intense flash lamp or by another
laser, because the laser action is not very efficient. They typically use flowing organic dyes, operate over a broad wavelength range in the visible spectrum, and have relatively low efficiency. Dye lasers are useful
mostly in photodynamic therapy (PDT) and in retinal photocoagulation.

Neodymium-Doped Lasers

Neodymium-doped YAG (Nd:YAG) solid-state lasers, introduced in 1961 (11), produce 1.064 and 1.32 μm (near-infrared)
radiation. These lasers are widely employed when heavy coagulation is desired or when the use of a fiber optic-based delivery system is preferred.
Excimer Lasers

Excimer lasers are based on the ionization of rare gas atoms, such as xenon, argon, or krypton, combined with halogens, such as fluorine or chlorine, to generate a
source of high-power UV radiation. Examples of excimer lasers include XeF, which provides a source of 351 nm radiation; XeCl, which lases at 308 nm; KrF, which lases at 248
nm; and ArF, which lases at 193 nm. These lasers have a
relatively shallow depth of penetration into soft tissues, making them suitable for very delicate surgical procedures such as
the removal of occluding atherosclerotic plaques
inside the vascular system.

An excimer argon fluoride (ArF) laser produces radiation
at 193 nm. This radiation predominantly causes tissue ablation by a photochemical process, because such short-wavelength radiation has sufficient energy to break molecular bonds.
Pulsed ArF lasers enable the removal of approximately 0.2
μm-thick tissue layers and are therefore useful in ophthalmology for the correction of nearsightedness.
MEDICAL APPLICATIONS OF LASERS
Argon Lasers
Therapeutic Applications
Erbium-Yttrium-Aluminum-Garnet Lasers

Erbium (Er) has been used to dope yttrium-aluminum-garnet (a crystal composed of aluminum and yttrium oxides) to
form an Er:YAG laser. This solid-state laser emits at 2.94 μm
and can be used for ablation of tooth enamel and dentin, for
corneal ablation, and, more recently, in cosmetic skin resurfacing surgery.
Holmium-Doped Lasers

Holmium (a rare earth element)-doped YAG (Ho:YAG) lasers
emit pulses of 2.1 μm wavelength, typically with energies below 4 J. They are used by orthopedic surgeons for soft
tissue ablation in arthroscopic joint surgery and in some urologic applications because they can be used in fluid environments.
Ruby Lasers

Q-switched ruby (Cr:Al2O3) lasers, which emit pulses of 694.3
nm (red) wavelength and can deliver up to 2 J of energy, are
used in dermatological applications to disperse tattoo pigments and to treat various nonmalignant pigmented lesions.
(Figure: ocular effects of optical radiation. Visible and near-infrared (400–1400 nm) radiation is transmitted by the cornea, aqueous humor, lens, and vitreous body and reaches the retina, including the macula and fovea; mid- and far-infrared (1400 nm to 1 mm) radiation and near-ultraviolet (315–390 nm) radiation are absorbed in the anterior structures of the eye.)
Photodynamic Therapy. Photodynamic therapy is a photochemotherapy technique based on the photoactivation of exogenous photosensitizing drugs at specific target sites and the subsequent
selective destruction of certain tumors. Typically, a photosensitizing dye (e.g., hematoporphyrin derivative, or HpD) is first
introduced into malignant cells, which retain the dye, perhaps because of impaired lymphatic drainage or abnormal tumor vasculature. The tissue is then exposed to the incident
laser radiation. Dye lasers tuned near 630 nm (e.g., argon-pumped dye lasers)
are generally employed in PDT because this wavelength is
most effective in activating the photosensitizing dye. Even
though the exact mechanism involved in this process is not
fully understood, it is generally believed that the photosensitizer produces
highly toxic oxygen free radicals that damage intracellular organelles in the malignant cells, leading to cell
death. The technique has been used mostly to treat superficial
tumors in the skin (22,23). Among the side effects of PDT is
excessive skin photosensitivity, which requires that patients
avoid direct exposure to sunlight for several weeks, thereby
preventing potential sunburn.
Diagnostic Applications
Laser Doppler Velocimetry
Basic Principle. Laser Doppler velocimetry (LDV) is a relatively new clinical method for assessing cutaneous blood flow.
This real-time measurement technique is based on the Doppler shift of light backscattered from moving red blood cells
and is used to provide a continuous measurement of blood
flow through the microcirculation in the skin. Although LDV
provides a relative rather than an absolute measure of blood
flow, empirical observations have shown good correlation between this technique and other independent methods to measure skin blood flow.
According to the fundamental Doppler principle, the frequency of sound, or any other type of monochromatic and coherent electromagnetic radiation such as laser light, that is
emitted by a moving object is shifted in proportion to the velocity of the moving object relative to a stationary observer.
Accordingly, when the object is moving away from an observer, the observer will detect a lower wave frequency. Likewise, when the object moves toward the observer, the frequency of the wave will appear higher. By knowing the
difference between the frequencies of both the emitted and
the detected waves, the Doppler shift, it is possible to calculate the velocity of the moving object according to the following equation:
Δf = 2Vf cos θ / c   (1)

where Δf is the Doppler frequency shift, V is the velocity of the moving object, f is the frequency of the incident radiation, θ is the angle between the direction of motion and the line of observation, and c is the speed of light.
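The Doppler relation of Eq. (1) can be inverted to recover the velocity from a measured frequency shift. A minimal sketch; the function name and the numerical example (HeNe illumination at 633 nm, zero angle, a 4.74 kHz shift) are illustrative assumptions:

```python
import math

C = 3.0e8  # speed of light (m/s)

def scatterer_velocity(delta_f_hz, laser_freq_hz, theta_rad):
    """Invert Eq. (1), delta_f = 2*V*f*cos(theta)/c, for the velocity V (m/s)."""
    return delta_f_hz * C / (2.0 * laser_freq_hz * math.cos(theta_rad))

# hypothetical example: 633 nm HeNe light, theta = 0, 4.74 kHz shift
f_laser = C / 633e-9                      # optical frequency from wavelength
v = scatterer_velocity(4740.0, f_laser, 0.0)
```

Note that c and the optical frequency combine into the wavelength, so the result reduces to V = Δf·λ / (2 cos θ), here about 1.5 mm/s, a plausible red-cell speed in the microcirculation.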
F = ∫ ω² P(ω) dω   (2)

where P(ω) is the power spectrum of the Doppler signal; this moment of the spectrum is used as a blood flow index.
LASER CLASSIFICATIONS
Class 1 Lasers
Class 1 laser products produce very low power (e.g., semiconductor diode lasers used in video disc players); they pose no
known hazard under normal operating conditions.
Class 2 Lasers
Class 2 laser products produce low-power visible light, normally used for brief periods during alignments (e.g., in radiology). These lasers are therefore considered safe for momentary viewing (0.25 s or less) unless an individual stares
directly into the laser beam.
Class 3 Lasers
Class 3 laser products generally consist of medium-power lasers that pose potential hazards to a person during instantaneous exposure of the eyes. This classification is further subdivided into two subclassifications: Class 3A and Class 3B.
Subclassification 3A lasers emit visible light with an average
output power between 1 and 5 mW. Lasers that deliver visible
light with an average output power between 5 and 500 mW
are subclassified in Class 3B.
Class 4 Lasers
Class 4 lasers include most surgical lasers that have an average output power in excess of 500 mW. Very stringent control
measures are required for this class of lasers. These lasers
produce a hazard not only from direct or specular reflection,
but they may also be hazardous with diffuse reflection.
BIBLIOGRAPHY
1. H. C. Zweng and M. Flocks, Clinical experience with laser photocoagulation, Fed. Proc., Fed. Am. Soc. Exp. Biol., 24 (Suppl. 14): 65–70, 1965.
2. H. C. Zweng, L. L. Hunter, and R. R. Peabody, Laser Photocoagulation and Retinal Angiography, St. Louis: Mosby, 1969.
3. H. Beckman et al., Transscleral ruby laser irradiation of the ciliary body in the treatment of intractable glaucoma, Trans. Am. Acad. Ophthalmol. Otolaryngol., 76: 423–436, 1972.
4. L. Goldman et al., Laser radiation of malignancy in man, Cancer (Philadelphia), 18: 533–545, 1965.
5. H. L. Rosomoff and F. Carroll, Reaction of neoplasm and brain to laser, Arch. Neurol. (Chicago), 14: 143–148, 1966.
6. T. E. Brown et al., Laser radiation I: Acute effects of laser radiation on the cerebral cortex, Neurology, 16: 783, 1966.
Reading List
Books
M. H. Niemz, Laser-Tissue Interactions: Fundamentals and Applications, New York: Springer-Verlag, 1996.
J. A. S. Carruth and A. L. McKenzie, Medical Lasers: Science and
Clinical Practice, Bristol, UK: Adam Hilger, 1986.
D. H. Sliney and S. L. Trokel, Medical Lasers and Their Safe Use,
New York: Springer-Verlag, 1993.
K. A. Ball, Lasers: The Perioperative Challenge, St. Louis: Mosby,
1995.
C. A. Puliafito, Lasers in Surgery and Medicine: Principles and Practice, New York: Wiley-Liss, 1996.
G. T. Absten and S. N. Joffe, Lasers in Medicine: An Introductory
Guide, Cambridge: Cambridge University Press, 1985.
J. Wilson and J. F. B. Hawkes, Lasers: Principles and Applications,
London: Prentice Hall, 1987.
A. P. Shepherd and P. Å. Öberg, Laser-Doppler Blood Flowmetry, Boston: Kluwer, 1990.
Review Articles
M. M. Judy, Biomedical lasers, in J. D. Bronzino (ed.), The Biomedical Engineering Handbook, Boca Raton, FL: CRC/IEEE Press, 1995.
K. F. Gibson and W. G. Kernohan, Lasers in medicine: a review, J. Med. Eng. Tech., 17 (2): 51–57, 1993.
Chapters on laser scalpel and laser surgery. In J. G. Webster, (ed.),
Encyclopedia of Medical Devices and Instrumentation. New York:
Wiley, 1988.
J. A. Parrish and B. C. Wilson, Current and future trends in laser
medicine. Photochemistry Photobiology, 53 (6): 731738, 1991.
A. N. Obeid et al., A critical review of laser Doppler flowmetry. J.
Med. Eng. Tech., 14: 178181, 1990.
O
berg, Laser-Doppler flowmetry. Biomed. Eng., 18 (2): 125
P. A
163, 1990.
G. A. Holloway, Jr., Laser Doppler measurements of cutaneous blood
flow. In P. Rolfe (ed.), Non-Invasive Physiological Measurements,
vol. 2. London: Academic Press, 1983.
Periodicals
Research articles on current applications of lasers in medicine are
published in the following journals: Lasers in Medical Science, Lasers in Surgery and Medicine, Lasers in the Life Sciences, Lasers
in Ophthalmology, Photochemistry Photobiology, and Biophotonics International.
YITZHAK MENDELSON
Worcester Polytechnic Institute
MEDICAL COMPUTING
A computer in one form or another is present in almost every instrument used for making measurements or delivering therapy.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
MEDICAL INFORMATICS

The study of the use of computers in medicine is often called medical informatics, which has been termed an emerging discipline and defined as the study, invention, and implementation of structures and algorithms to improve communication, understanding, and management of medical information (1). (Further information can be found on the newsgroup sci.med.informatics.) This broad area comprises many of the subjects included in medical computing, and there is a vast literature on ways in which computers improve health care delivery through aids in organizing, ac-
ators in the heart and brain, respectively. For computer analysis, it is necessary to sample the waveform and convert it from analog to digital format (18). In electrocardiology, computers have been used for the analysis of clinical ECGs to identify patterns in the waveform that reflected underlying disorders of electrical activation or structural heart disease (19,20). Similarly, patterns in EEGs have been correlated with normal neurophysiology and with pathologies such as epilepsy (21). Commercial systems are based on similar analyses and are widely used in hospitals and clinics for diagnosing cardiac and neurological abnormalities.

At the same time, the use of computers has expanded the level of investigation that is possible in attempting to understand the scientific basis of normal and abnormal physiological function. Even though much was learned about cardiac electrophysiology by using instruments like string galvanometers (22), improved technology and the application of digital computers have allowed measurements to be made in situations that were not accessible to earlier investigators. For example, it is now possible to record from hundreds of sites on the surface of the heart and within the cardiac tissue to assemble a high-resolution reconstruction of the intrinsic or externally generated bioelectric events in the myocardium (23). Cardiac mapping studies that acquire data from many sites simultaneously have shown that there is considerable order in heart rhythms formerly thought to be completely disorganized (24,25). Figure 2 is an example of a signal processing result that demonstrates that electrograms recorded from a rectangular array during ventricular fibrillation have a great deal of organization even when they are recorded from sites separated by as much as 5 mm to 11 mm. Thus, computer-based multichannel acquisition and analysis of cardiac arrhythmias have revealed phenomena that might be crucial in the improved prevention and treatment of these often fatal derangements of rhythm.

Multichannel recording from the brain using computer-based systems has resulted in new insights into the way in which the brain's electrophysiological and psychological functions are organized (26). Similar systems have been developed to study the electrical activity associated with the gastrointestinal (27), genitourinary (28), and reproductive (29) systems.

The development and implementation on computers of the fast Fourier transform has allowed the examination of biosignals in the frequency domain, opening the doors for new insight into mechanisms of important clinical entities (30). Spectral methods allow the elucidation of relationships between different physiological systems (31). Wavelet theory has further extended the application of frequency-domain techniques by avoiding the limitations imposed by discrete Fourier analysis (32). Wavelets have been applied to electrocardiography, to detect irregularities in heart rhythm; to pho-

[Figure 2: surfaces of the correlation R(dx, dy) between electrograms recorded during ventricular fibrillation, shown in panels (a)-(d) as a function of electrode separation dx and dy (mm).]
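The frequency-domain examination of sampled biosignals described above can be sketched in a few lines. The sampling rate, synthetic test signal, and function name below are illustrative choices, not taken from the article:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) carrying the largest power in a sampled signal."""
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))  # windowed FFT
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power[0] = 0.0                      # ignore the DC component
    return freqs[np.argmax(power)]

# Synthetic "electrogram": an 8 Hz oscillation plus 50 Hz interference,
# sampled at 1000 Hz for 2 s.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 8.0 * t) + 0.3 * np.sin(2 * np.pi * 50.0 * t)

print(dominant_frequency(ecg_like, fs))  # → 8.0
```

The same spectrum could equally be examined for the relationships between physiological signals that spectral methods reveal.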
Computers are often embedded in other medical instruments that do not play as acutely critical a role as do implanted devices, but are very important in assessing health and disease. For example, a blood gas or electrolyte analyzer might have a microprocessor that controls the user interface, calibration procedures, and data acquisition, analysis, and display. The widespread use of microcomputers in these analyzers provides convenient access to functions such as calibration and standardization that previously were time-consuming and labor-intensive.

Stand-Alone Computers

Other medical instruments are based on general-purpose computers, typically engineering workstations. Examples of these are large imaging systems, such as magnetic resonance imaging (MRI) or computed tomography (CT) systems. In these cases, the computer fulfills a variety of roles. In MRI systems, the computer can provide the user interface to the highly specialized and complex hardware associated with the magnet. The echo sequences that determine the imaging parameters and quality can be controlled through the computer as a front end. These workstations often contain high-performance image processing and graphics hardware that can be used for manipulation of the acquired images and display of the results to the clinician in intuitive, usable formats. Figure 4 is a volume-rendered, ray-traced magnetic resonance image of a canine heart with an experimentally induced myocardial infarction (35). This image was generated on a high-performance workstation, and it demonstrates the kinds of displays that provide users with scientifically and clinically useful information. Furthermore, the workstation and associated peripheral devices can be used as archival storage systems for managing the overwhelming amounts of data that are generated by modern imaging modalities.

The computers associated with these clinical systems can be networked using industry-standard hardware and software to provide convenient access to remote sites, either within a hospital or medical center or outside the center to a referring physician or a specialist who might have more experience in interpreting certain imaging results. This is an example of the interaction between medical imaging and telemedicine (36), often thought of as one of the subdisciplines of medical informatics.

COMPUTER GRAPHICS AND MEDICINE

Another early use of computers in medicine was the application of graphics systems for the acquisition and realistic display of anatomic structures in research and clinical situations. Much of the emphasis has been on two- and three-dimensional reconstructions of data from medical imaging modalities. Early work focused on the use of computers to estimate cardiac function from single- and dual-plane cineangiography (37), as well as to reconstruct coronary artery anatomy from coronary arteriograms (38,39). As medical imaging technologies have advanced, the computational demands for extracting new information from image analysis and displaying the data in realistic ways have increased. Substantial portions of the techniques developed for nonmedical applications are not immediately applicable to biological and physiological systems because of the inherent variability and irregularity that are not present in, for example, computer-aided design/computer-aided manufacturing (CAD/CAM) structures (40). Another problem that is unique to medical applications is the recent emphasis on reducing health care costs, which limits the unfettered introduction of new technologies (40). At times, there is also the challenge of introducing creative, novel algorithms to a community that is sometimes reluctant to modify procedures that have been established as effective, comfortable, and productive (3).

Image Processing
data compression and decompression capabilities to accommodate extension of images from two to three dimensions, the
increased image resolutions achievable with modern devices,
and the need to transmit large datasets over networks.
Computer Graphics
Intimately related to the issues of image processing are the
techniques by which medical and biological images are displayed with enough realism to achieve the intended results
but with enough efficiency to be used in actual clinical situations. Algorithms and programs for accurately portraying
anatomy and, to some extent, function have improved steadily, sometimes exceeding the ability of the hardware to meet
the demands. Fortunately, the well-known advances in performance and cost of advanced graphics hardware, including
general-purpose computers as well as special-purpose graphics processors, have provided the platforms necessary for implementation of state-of-the-art graphics techniques.
The display of two-dimensional images is, in principle,
straightforward on a computer output screen with multiple
colors or gray levels per pixel. The display programs provide
an interface between the user, the image, and the graphics
hardware and software of the computer so that one pixel of
the image is translated to one pixel of the video screen. Complications arise when there is a mismatch between the image
and the screen, so that image pixels must be removed or display pixels must be interpolated. A further complication for
the developer of either two- or three-dimensional graphics
software is the plethora of data file formats that exist (43).
Fortunately, many public domain or proprietary software
packages provide excellent format conversion tools, but some
experimentation is frequently required to use them properly.
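The pixel-mismatch case mentioned above, where display pixels must be interpolated from image pixels, can be illustrated with a minimal bilinear resampler. This is a sketch of the general technique only; the function name and test image are hypothetical:

```python
import numpy as np

def resample_bilinear(img, out_h, out_w):
    """Resize a 2-D grayscale image to (out_h, out_w) by bilinear interpolation."""
    in_h, in_w = img.shape
    # Map each output pixel back to fractional input coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]          # vertical blend weights
    wx = (xs - x0)[None, :]          # horizontal blend weights
    # Blend the four surrounding input pixels.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# Upsample a 2x2 gradient to 3x3; the new center pixel is the average
# of the four original pixels.
img = np.array([[0.0, 2.0], [4.0, 6.0]])
out = resample_bilinear(img, 3, 3)
print(out[1, 1])  # → 3.0
```

Downsampling with the same routine corresponds to the pixel-removal case; production display code would add filtering to avoid aliasing.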
The development of methods for efficient and realistic rendering of three-dimensional images continues to be an area of
ongoing research. Early work reduced anatomic structures to
wire frame models (44), and that technique is still sometimes
used for previewing and rapid manipulation on hardware that
is not sufficiently powerful for handling full images in real
time or near real time. Several methods require the identification of surfaces through image segmentation, as described
above. The surfaces can be triangulated and displayed as essentially two-dimensional structures in three dimensions
(45). After initial processing, this is a rather efficient display
method, but much of the three-dimensional information is
lost. Alternatively, the image can be reduced to a series of volumetric structures that can be rendered by hardware specialized for their reproduction (46). One of the most realistic, but
computationally expensive, three-dimensional rendering
methods is ray tracing, in which an imaginary ray of light is
sent through the structures and is attenuated by the opacity
of the anatomic structures that it encounters along the way
(47). Different effects can be emphasized by modifying the dynamic range of the pixels in the image, that is, by changing
the relationship between the opacity of the image and the
pixel value to be displayed on the screen.
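The ray-tracing scheme described above, in which a ray is attenuated by the opacity of the structures it crosses, is commonly implemented as front-to-back compositing along each ray. A minimal sketch, with illustrative opacity and intensity values rather than data from an actual scan:

```python
def composite_ray(opacities, values):
    """Front-to-back alpha compositing of the samples along one ray.

    opacities: per-sample opacity in [0, 1]; values: per-sample intensity.
    Returns the accumulated pixel intensity, attenuated by the opacity of
    the structures the ray encounters along the way."""
    color = 0.0
    transmittance = 1.0
    for a, v in zip(opacities, values):
        color += transmittance * a * v     # light contributed by this sample
        transmittance *= (1.0 - a)         # attenuation for samples behind it
        if transmittance < 1e-3:           # early termination: ray absorbed
            break
    return color

# A ray that first crosses faint tissue, then dense, nearly opaque tissue:
print(composite_ray([0.1, 0.9, 0.5], [0.2, 1.0, 1.0]))  # ≈ 0.875
```

Changing the mapping from image value to opacity (the transfer function) is exactly the dynamic-range adjustment described in the text.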
Medical computer graphics are at their most useful when
it is possible to superimpose images from more than one modality into a single display or to superimpose functional information acquired from biochemical, electrical, thermal, or
other devices onto anatomical renderings. As an example of
the former, images from positron emission tomography (PET)
scans, which reflect metabolic activity, can be displayed on
anatomy acquired by magnetic resonance imaging. The combination provides a powerful correlation between structure
and function, but the technical challenges of registering images from two different devices or taken at different times are
significant (48). An example of the combination of functional
and anatomic data is the superposition of electrical activity,
either intrinsic or externally applied, of the heart onto realistic cardiac anatomy. This kind of technique can provide new
insights into the mechanisms and therapy of cardiac arrhythmias (49). Figure 6 is a sequence of still frames from a video
showing the progression of a wavefront of electrical activation
across a three-dimensional cardiac left ventricle after an unsuccessful defibrillation shock.
Computer graphics and image processing, along with advanced imaging technologies, are making a significant impact
in medical knowledge and practice and have the potential for
many more applications. A combination of traditional CAD/
CAM visualization and advanced imaging can be used for effective assessment of quality of fit of orthopedic prostheses
(50). Capabilities and functionality have increased dramatically with the advent of advanced graphics hardware and
commercial software packages aimed at scientists and clinicians who are not graphics experts. Full realization of the
benefits of these systems will require further advances in
these areas, along with adaptation to the needs of clinicians
and the constraints of the changing health care climate (51).
COMPUTER SIMULATIONS
Numerical and analytical simulations of physiological processes have intrigued investigators for many decades. The solution of inverse and forward problems in neurophysiology
and electrocardiology was considered to be an important exer-
Figure 6. A composite of eight magnetic resonance and isochronal surface images from the
second activation wavefront after an unsuccessful defibrillation shock. The electrical data were
acquired from about 60 plunge needles with endocardial and epicardial electrodes inserted
through the left and right ventricles of the heart of an experimental animal. Successive isochrones (left to right, top to bottom) are shown at 6 ms intervals. Visualization techniques that
allow the superposition of function and anatomy are very helpful in understanding the relationships between variables and how they affect physiological mechanisms, and they can potentially
lead to improved diagnosis and therapy. Reprinted from Ref. 49, with permission. Copyright CRC
Press, Boca Raton, FL.
tions of virtual reality to medical practice (68,69). Virtual reality has been applied to surgery planning (70), physical medicine and rehabilitation (71), parkinsonism (72), and
psychological disorders (73,74).
Computers have been used in a great many ways to assist
in surgical procedures (70). Surgeons can be trained in surgical techniques by using advanced computer graphics and virtual reality methods (75,76); similar techniques can be used
for surgical planning (77-79) and for improving the safety
and efficacy of the surgical procedure. Computers are used
during complex brain surgery as interactive tools for guiding
and measuring the progress of the procedure, with the hope
that resection of lesions could be performed with less damage
to surrounding tissue (80). It is possible to use high-resolution
graphics to traverse internal organs virtually, yielding much
of the same information that is available from standard endoscopic techniques, as shown in the image in Fig. 8 acquired
at the Mayo Clinic (68).
Figure 8. Virtual colonoscopy, with an internal view of the transverse colon. The image was acquired by a helical CT scan, segmented,
and reconstructed. Virtual procedures can replace or augment actual
endoscopic examinations, reducing or eliminating the attendant risk
and discomfort. The image was acquired at the Mayo Clinic. Reprinted from Ref. 68, with permission. Copyright 1998, IEEE.
[Figure 9 panels (a) and (b); scale bars: 100 ms and 400 mm/s.]
Figure 9. Results of a clinical electrical cardiac mapping study in a
patient undergoing ablation of atrial flutter. (a) Electrograms recorded during the arrhythmia from a catheter inserted into the right
atrium in a loop configuration. (b) Activations derived from the intrinsic deflections in the electrograms shown in panel a. The continuous
nature of the activity demonstrates the reentrant mechanism around
anatomical obstacles in the right atrium. Ablation can eliminate conductivity in part of the reentrant pathway, curing the atrial flutter.
ments are made and (2) the location of the electrodes used to
make the measurements. These variables can then be used
for further computations or to make the visualization of the
results more compelling and useful (104). Imaging techniques
as described above can be used for this purpose, allowing the
application of standard image processing packages for better
understanding of the electrophysiology (105). Image processing algorithms can also be used to improve our knowledge
of the underlying pathology and its relation to abnormalities
in electrical phenomena (35).
A traditional way of viewing activation sequences or other variables in the heart is through contour maps, that is, lines of equal values of activation time, potential, or another measured variable. The approach depends on whether the array
of recording electrodes is two- or three-dimensional and
whether the array is in a regular pattern or is irregularly
spaced over the tissue. The variable of interest is typically
interpolated over the region in which the measurements were
made for more pleasing visual effects (106). Figure 10 is a
simple isochronal map of the activation sequence beneath a
rectangular array of electrodes on the outer, or epicardial,
surface of the right ventricle of an experimental animal. Even
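The interpolation of irregularly spaced electrode measurements onto a regular grid, as described above, can be sketched with simple inverse-distance weighting. This is one common choice, not necessarily the method used in the studies cited, and the electrode coordinates and activation times below are hypothetical:

```python
def idw_interpolate(samples, grid_points, power=2.0):
    """Inverse-distance-weighted interpolation of scattered activation
    times (or potentials) onto the points of a regular display grid."""
    result = []
    for gx, gy in grid_points:
        num = den = 0.0
        exact = None
        for sx, sy, t in samples:
            d2 = (gx - sx) ** 2 + (gy - sy) ** 2
            if d2 == 0.0:
                exact = t              # grid point coincides with an electrode
                break
            w = 1.0 / d2 ** (power / 2.0)
            num += w * t
            den += w
        result.append(exact if exact is not None else num / den)
    return result

# Electrodes at three sites (x, y in mm) with activation times in ms:
electrodes = [(0, 0, 10.0), (10, 0, 20.0), (0, 10, 30.0)]
print(idw_interpolate(electrodes, [(0, 0), (5, 5)]))  # → [10.0, 20.0]
```

Contour (isochrone) lines would then be drawn through the interpolated grid for the smoother visual effect the text mentions.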
CONCLUSION

Computers are used in almost every aspect of clinical medicine and biomedical research. They are indispensable in advanced devices and instrumentation. They are widely used for the collection and analysis of demographic and clinical data that provide a basis for an improved understanding of the causes and epidemiologies of disease. They can be used very effectively for the training and accreditation of physicians and other health care providers. While obviously no substitute for human clinical and scientific judgement, computers have assumed a critical role as facilitators of diagnostic and therapeutic procedures. As the inevitable progress in computer software and hardware occurs, medical professionals will become more dependent on them. There will be continuing improvement in our understanding of methods for increased reliability and safety of computer-based devices and equipment, and regulatory agencies will develop procedures for evaluating these resources routinely and objectively. Advanced imaging technologies, higher-performance graphics hardware and software, and new surgical techniques will expand the use of microsurgery and remotely applied surgical and invasive procedures. These prospects will depend on investigators in computer science, biomedical and software engineering, clinical practice, and physiology, but the computer has the potential to be a positive force in improving health care delivery while decreasing the financial burden of the health care system on society.

ACKNOWLEDGMENTS

Institutes of Health, Bethesda, Maryland, and National Science Foundation Engineering Research Center Grant CDR-8622201.

BIBLIOGRAPHY

19. R. E. Ideker et al., Evaluation of a QRS scoring system for estimating myocardial infarct size: II. Correlation with quantitative anatomic findings for anterior infarcts, Amer. J. Cardiol., 49: 1604-1614, 1982.
41. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Reading, MA: Addison-Wesley, 1992.
42. R. K. Justice et al., Medical image segmentation using 3-D seeded region growing, Proc. SPIE Med. Imag., Newport Beach, CA, 1997, pp. 900-910.
43. D. C. Kay and J. R. Levine, Graphics File Formats, Blue Ridge Summit, PA: Windcrest/McGraw-Hill, 1992.
44. W. M. Newman and R. F. Sproull, Principles of Interactive Computer Graphics, New York: McGraw-Hill, 1979.
45. W. E. Lorensen and H. E. Cline, Marching cubes: A high resolution 3D surface construction algorithm, Comput. Graphics, 21: 163-169, 1987.
46. E. V. Simpson et al., Three-dimensional visualization of electrical variables in the ventricular wall of the heart, Proc. 1st Conf. Vis. Biomed. Comput., Atlanta, GA, 1990, pp. 190-194.
47. K. S. Klimaszewski and T. W. Sederberg, Faster ray tracing using adaptive grids, IEEE Comput. Graphics Appl., 17: 42-51, 1997.
48. K. R. Castleman, Digital Image Processing, Englewood Cliffs, NJ: Prentice-Hall, 1979.
49. T. C. Palmer et al., Visualization of bioelectric phenomena, CRC Crit. Rev. Biomed. Eng., 20: 355-372, 1992.
50. M. W. Vannier et al., Visualization of prosthesis fit in lower-limb amputees, IEEE Comput. Graphics Appl., 17: 16-29, 1997.
51. D. P. Mahoney, The art and science of medical visualization, Comput. Graphics World, 19: 25-32, 1996.
52. L. G. Horan et al., On the possibility of directly relating the pattern of ventricular surface activation to the pattern of body surface potential distribution, IEEE Trans. Biomed. Eng., 34: 173-179, 1987.
53. L. G. Horan and N. C. Flowers, The relationship between the vectorcardiogram and actual dipole moment, in C. V. Nelson and D. B. Geselowitz (eds.), The Theoretical Basis of Electrocardiology, London: Oxford Univ. Press, 1976, pp. 397-412.
54. R. J. MacGregor and E. R. Lewis, Neural Modeling: Electrical Signal Processing in the Nervous System, New York: Plenum, 1977.
55. B. He and R. J. Cohen, Body surface Laplacian ECG mapping, IEEE Trans. Biomed. Eng., 39: 1179-1191, 1992.
56. G. Huiskamp and F. Greensite, A new method for myocardial activation imaging, IEEE Trans. Biomed. Eng., 44: 433-446, 1997.
57. D. S. Khoury and Y. Rudy, A model study of volume conductor effects on endocardial and intracavity potentials, Circ. Res., 71: 511-525, 1992.
58. K. D. Bollacker et al., A cellular automata three-dimensional model of ventricular cardiac activation, Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Piscataway, NJ, 1991, pp. 627-628.
59. G. W. Beeler, Jr. and H. Reuter, Reconstruction of the action potential of ventricular myocardial fibers, J. Physiol. (London), 268: 177-210, 1977.
60. A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol. (London), 117: 500-544, 1952.
61. C.-H. Luo and Y. Rudy, A dynamic model of the cardiac ventricular action potential. II. Afterdepolarizations, triggered activity, and potentiation, Circ. Res., 74: 1097-1113, 1994.
62. C.-H. Luo and Y. Rudy, A dynamic model of the cardiac ventricular action potential. I. Simulations of ionic currents and concentration changes, Circ. Res., 74: 1071-1096, 1994.
63. E. E. Daniel et al., Relaxation oscillator and core conductor models are needed for understanding of GI electrical activities, Amer. J. Physiol., 266 (Gastrointest. Liver Physiol. 29): G339-G349, 1994.
65. E. S. Almeida and R. L. Spilker, Mixed and penalty finite element models for the nonlinear behavior of biphasic soft tissues in finite deformation: Part I. Alternate formulations, Comput. Methods Biomech. Biomed. Eng., 1: 25-46, 1997.
66. B. R. McCreadie and S. J. Hollister, Strain concentrations surrounding an ellipsoid model of lacunae and osteocytes, Comput. Methods Biomech. Biomed. Eng., 1: 61-68, 1997.
69. W. M. Smith, Scanning the technology: Engineering and medical science chart fantastic voyage, Proc. IEEE, 86: 474-478, 1998.
70. M. L. Rhodes and D. D. Robertson, Computers in surgery and therapeutic procedures, Computer, 29 (1): 23, 1996.
71. W. J. Greenleaf, Applying VR to physical medicine and rehabilitation, Commun. ACM, 40: 43-46, 1997.
72. S. Weghorst, Augmented reality and Parkinson's disease, Commun. ACM, 40: 47-48, 1997.
73. G. Riva, L. Melis, and M. Bolzoni, Treating body-image disturbances, Commun. ACM, 40: 69-71, 1997.
74. D. Strickland et al., Overcoming phobias by virtual exposure, Commun. ACM, 40: 35-39, 1997.
75. S. L. Dawson and J. A. Kaufman, The imperative for medical simulation, Proc. IEEE, 86: 479-483, 1998.
76. K. H. Höhne et al., A virtual body model for surgical education and rehearsal, Computer, 29 (1): 25-31, 1996.
77. M. Bro-Nielsen, Finite element modeling in surgery simulation, Proc. IEEE, 86: 503, 1998.
78. E. K. Fishman et al., Surgical planning for liver resection, Computer, 29 (1): 64-72, 1996.
79. R. A. Robb, D. P. Hanon, and J. J. Camp, Computer-aided surgery planning and rehearsal at Mayo Clinic, Computer, 29 (1): 39-47, 1996.
80. L. Adams et al., An optical navigator for brain surgery, Computer, 29 (1): 48-54, 1996.
81. E. Chen and B. Marcus, Force feedback for surgical simulation, Proc. IEEE, 86: 524-530, 1998.
82. H. Delingette, Toward realistic soft-tissue modeling in medical simulation, Proc. IEEE, 86: 512-523, 1998.
83. G. E. Christensen et al., Individualizing neuroanatomical atlases using a massively parallel computer, Computer, 29 (1): 32-38, 1996.
84. M. J. Ackerman, The visible human project, Proc. IEEE, 86: 504-511, 1998.
85. W. W. Gibbs, Software's chronic crisis, Sci. Amer., 271 (3): 86-95, 1994.
86. B. Littlewood and L. Strigini, The risks of software, Sci. Amer., 267 (5): 62-75, 1992.
87. W.-T. Tsai, R. Mojdehbakhsh, and S. Rayadurgam, Capturing safety-critical medical requirements, Computer, 31: 40-42, 1998.
94. P. D. Wolf et al., A 528 channel system for the acquisition and display of defibrillation and electrocardiographic potentials, Proc. Comput. Cardiol., Los Alamitos, CA, 1993, pp. 125-128.
96. K. P. Anderson et al., Determination of local myocardial electrical activation for activation sequence mapping: A statistical approach, Circ. Res., 69: 898-917, 1991.
97. E. V. Simpson et al., Evaluation of an automatic cardiac activation detector for bipolar electrograms, Med. Biol. Eng. Comput., 31: 118-128, 1993.
98. A. S. L. Tang et al., Measurement of defibrillation shock potential distributions and activation sequences of the heart in three dimensions, Proc. IEEE, 76: 1176-1186, 1988.
99. W. Krassowska et al., Finite element approximation of potential gradient in cardiac muscle undergoing stimulation, Math. Comput. Modelling, 11: 801-806, 1988.
100. P. V. Bayly et al., Estimation of conduction velocity vector fields from 504-channel epicardial mapping data, Proc. Comput. Cardiol., Indianapolis, IN, 1996, pp. 133-140.
101. A. H. Kadish et al., Vector mapping of myocardial activation, Circulation, 74: 603-615, 1986.
102. D. S. Rosenbaum, B. He, and R. J. Cohen, New approaches for evaluating cardiac electrical activity: Repolarization alternans and body surface Laplacian imaging, in D. P. Zipes and J. Jalife (eds.), Cardiac Electrophysiology: From Cell to Bedside, Philadelphia: Saunders, 1995, pp. 1187-1198.
103. F. X. Witkowski et al., Significance of inwardly directed transmembrane current in determination of local myocardial electrical activation during ventricular fibrillation, Circ. Res., 74: 507-524, 1994.
104. E. V. Simpson, T. C. Palmer, and W. M. Smith, Visualization in cardiac mapping of ventricular fibrillation and defibrillation, Proc. Comput. Cardiol., Los Alamitos, CA, 1992, pp. 339-342.
105. C. Laxer et al., An interactive graphics system for locating plunge electrodes in cardiac MRI images, in Y. Kim (ed.), Image Capture, Formatting and Display, Soc. Photo-Optical Instrum. Eng., 1991, pp. 190-195.
106. E. V. Simpson et al., Discrete smooth interpolation as an aid to visualizing electrical variables in the heart wall, Proc. Comput. Cardiol., Venice, Italy, 1991, pp. 409-412.
107. F. R. Bartram, R. E. Ideker, and W. M. Smith, A system for the parametric description of the ventricular surface of the heart, Comput. Biomed. Res., 14: 533-541, 1981.
108. C. Laxer et al., The use of computer animation of mapped cardiac potentials in studying electrical conduction properties of arrhythmias, Proc. Comput. Cardiol., Los Alamitos, CA, 1991, pp. 23-26.
109. M. Usui et al., Epicardial shock mapping following monophasic and biphasic shocks of equal voltage with an endocardial lead system, J. Cardiovasc. Electrophysiol., 7: 322-334, 1996.
110. E. J. Berbari et al., Ambiguities of epicardial mapping, J. Electrocardiol., 24 (Suppl.): 16-20, 1991.
111. P. V. Bayly et al., Spatial organization, predictability, and determinism in ventricular fibrillation, Chaos, 8: 103-115, 1998.
WILLIAM M. SMITH
The University of Alabama at
Birmingham
LIQUID INSULATION
ARC SUPPRESSION PROPERTIES

Switchgear using the arc suppression properties of insulating liquids (oils) was invented in the early 1880s. In the early days, the structure of switchgear was simple: a pair of electrodes was placed in insulating oil. In such switchgear the arc suppression mechanism is also simple: as the electrode spacing increases, the arc length increases and the electric arc is suppressed. The suppression results from the cooling effect of hydrogen gas produced when the arc decomposes the insulating oil; this hydrogen plays the central role in arc suppression in insulating oil.

Arc Suppression by Hydrogen

The energy of the arc between a pair of electrodes in the insulating oil is dissipated by the electrodes; by conduction and radiation; by evaporation and decomposition of the insulating oil; by heating and expansion of the gases produced by the decomposition of the oil; and by dissociation of hydrogen. Fifty to seventy percent of the produced gas is hydrogen, and the other gases are acetylene, methane, and ethane. As shown in Table 1, the thermal conductivity of hydrogen at room temperature is higher than that of the other gases; at 4000 °C it is about 50 W/(m·K), more than 5 times the value for the other gases. The cooling effect is therefore larger than that of the other gases, and by this cooling the arc is suppressed at the zero-current point of the alternating current. Thus the current is
DISCHARGE RESISTANCE
The behavior of insulating liquids under highly stressed conditions and under conditions of partial discharge are among
the most important items in screening tests for newly developed insulating liquids and also in the routine testing of
liquids.
Gassing Rate
Methods of evaluating gas absorption and evolution of insulating oils under high stress after saturation with a gas are
described in IEC 628 and ASTM D 2330. The fundamental
approaches are similar to each other and amount to a modified Pirelli method.
The conditions used in such methods differ from actual field conditions, especially in the case of hermetically sealed equipment such as power cables, capacitors, and many power transformers.
Discharge Resistance
To evaluate the behavior of insulating liquids in a highly stressed impregnated system, and to obtain numerical results for recently developed impregnants with very high resistance to partial discharge, the above-mentioned methods are not sufficient. As new liquids, especially those with high aromaticity, are developed and applied voltage stresses are progressively increased, a new method is needed to characterize the ability of such insulating liquids to prevent or suppress partial discharge under high stress. One such method, determination of the partial discharge inception voltage with a needle-and-sphere oil gap, is described in IEC 61294. The partial discharge inception voltage obtained by this method is largely related to the chemical structure of the liquid and correlates with partial discharge in impregnated insulating systems such as capacitor elements.
DIELECTRIC CONSTANT AND LOSS
Dielectric polarization occurs when an electric field is applied to insulating oil. When there is a time delay in the formation of the polarization, dielectric loss arises from the phase delay of the polarization under an alternating electric field. The loss due to this dielectric polarization is proportional to the dielectric loss tangent tan δ, which is equal to the ratio of the dielectric loss factor ε″ to the dielectric constant ε′:

tan δ = ε″/ε′    (1)
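As a numerical sketch of Eq. (1): the loss-density formula used below, p = ωε0ε′ tan δ E², is the standard result for a lossy dielectric under an alternating field and is an added illustration, not an equation taken from this article; the oil values are typical, assumed figures.

```python
import math

EPS0 = 8.854e-12  # permittivity of free space (F/m)

def tan_delta(eps_loss, eps_real):
    """Eq. (1): loss tangent = dielectric loss factor / dielectric constant."""
    return eps_loss / eps_real

def loss_density(freq_hz, eps_real, td, e_field):
    """Dielectric loss per unit volume, p = omega * eps0 * eps' * tan(delta) * E^2 (W/m^3)."""
    return 2 * math.pi * freq_hz * EPS0 * eps_real * td * e_field ** 2

# A typical mineral oil: eps' = 2.2, tan(delta) = 0.05% at 50 Hz
td = tan_delta(eps_loss=2.2 * 0.0005, eps_real=2.2)
p = loss_density(50, 2.2, td, e_field=10e6)  # W/m^3 at a 10 MV/m stress
```

Even at high stress the dielectric loss of a good mineral oil is small, which is why tan δ is so sensitive an indicator of contamination and aging.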
[Figure 1: temperature dependence of tan δ (%) and of the dielectric constant for oils A, B, and C, measured at 60 Hz between −80°C and 80°C]

log(fm0/fm) = C1(T − T0)/[C2 + (T − T0)]    (3)

ΔE = 2.303RC1C2T²/[C2 + (T − T0)]²    (4)
[Table 2: dielectric constant (2.17 to 6.0) and tan δ (%, 0.005 to 1.0) of various insulating liquids]
HEAT TRANSFER

The heat flux q from a surface S at temperature Tm to oil at bulk temperature T2 is

q = αS(Tm − T2)    (6)

The larger q is, the more effective the cooling. The amount of heat flux is controlled by the overall heat transfer coefficient K, and thus by the surface heat transfer coefficient α; for a surface covered with insulation of thickness d,

K = 1/(1/α + d/λ)

The magnitude of α depends on the physical properties of the insulating oil and on the structure of the insulating material. The important physical properties of the oil are the density, kinematic viscosity, thermal expansion coefficient, thermal conductivity, and velocity. α is obtained from the dimensionless groups

Nu = αl/λ
Re = vl/ν
Gr = gβl³(Tm − T2)/ν²
Pr = ν/a

where
l = length of the insulating materials
λ = heat conductivity of the insulating oil
v = velocity of the insulating oil
ν = kinematic viscosity of the insulating oil
g = acceleration of gravity
β = thermal expansion coefficient of the insulating oil
a = thermal diffusivity of the insulating oil

For a vertical cooling duct the winding temperature is Ta = Ti + ΔT + θ, where

Ti = temperature of the insulating oil at the inlet of the duct
θ = temperature rise of the winding above the average temperature of the insulating oil
ΔT = average temperature rise of the insulating oil in the duct
W = thermal loading into the duct (W/m²)
H = height of the duct (m)
d = half the duct depth (m)
C = specific heat of the insulating oil (W·s/kg)
ρ = density of the insulating oil

[Eqs. (7) to (16): duct-cooling formulas, with numerical factors 1.22 and 0.858, expressing θ and ΔT in terms of W, H, d, and the oil properties]

As shown in Eqs. (15) and (16), it is desirable for cooling that the kinematic viscosity be low and that the thermal expansion coefficient, density, specific heat, and thermal conductivity be high.

In Table 3 the physical properties of some mineral oils, silicone liquids, and high-molecular-weight hydrocarbon oils are shown. Using these data, ΔT and θ were calculated for three oils. The ratios of ΔT and θ of silicone liquids and high-molecular-weight hydrocarbon oils to those of mineral oils are shown in Table 4. The physical properties of the three oils are not very different, except for the kinematic viscosity; the high kinematic viscosity of the silicone and high-molecular-weight hydrocarbon oils reduces their heat transfer.
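The route from oil properties to a surface heat transfer coefficient can be sketched numerically with the dimensionless groups Gr and Pr. The correlation Nu = c(Gr·Pr)^n with c = 0.59, n = 1/4 is a standard textbook form for laminar natural convection, and the oil property values are illustrative assumptions, not data from this article.

```python
def grashof(g, beta, length, d_temp, nu):
    """Gr = g * beta * l^3 * (Tm - T2) / nu^2"""
    return g * beta * length ** 3 * d_temp / nu ** 2

def prandtl(nu, a):
    """Pr = nu / a"""
    return nu / a

def alpha_natural(lmbda, length, gr, pr, c=0.59, n=0.25):
    """Nu = c*(Gr*Pr)**n (laminar natural convection); alpha = Nu * lambda / l."""
    return c * (gr * pr) ** n * lmbda / length

# Illustrative mineral-oil properties near 60 C (assumed values)
g, beta, nu, a, lmbda = 9.81, 7.8e-4, 11e-6, 7.3e-8, 0.13
length, d_temp = 0.5, 20.0          # 0.5 m surface, 20 K temperature difference
gr = grashof(g, beta, length, d_temp, nu)
pr = prandtl(nu, a)
alpha = alpha_natural(lmbda, length, gr, pr)
q = alpha * d_temp  # heat flux per unit area, cf. Eq. (6) with S = 1 m^2
```

Repeating the calculation with a higher kinematic viscosity (as for a silicone liquid) lowers Gr and hence α, which is the effect discussed above for Table 4.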
RESISTANCE TO IGNITION
Because of increasing environmental problems due to the bioaccumulative nature of polychlorinated biphenyls (PCBs), the production and use of PCBs have been prohibited throughout the world. Almost all substitutes for PCBs are more flammable than the PCBs themselves, so the evaluation of the flammability of these liquids has become very important.
There are many test methods for the evaluation of the resistance to ignition and fire propagation of insulating liquids.
[Table 3: physical properties (thermal conductivity, specific heat, density, thermal expansion coefficient, and kinematic viscosity) of a naphthenic oil, a paraffinic oil, a silicone liquid, and a high-molecular-weight hydrocarbon oil]
Calorific Value

Determination of the net calorific value (net heat of combustion) of liquid hydrocarbons with the bomb calorimeter described in ASTM D 240 is specified in ISO 1928. This quantity represents the heat generated by the liquid during its combustion.
Other Fire Tests

The pool fire test (large scale and small scale), trough test, spray mist test, and heat release test, developed by Factory Mutual Research, are attractive and practical methods of evaluating the fire hazard of insulating liquids.

Table 4. Ratios of ΔT and θ of Silicone Liquids and High-Molecular-Weight Hydrocarbon Oils to Those of Mineral Oils

Property | Mineral Oil | Silicone Liquid | High-Molecular-Weight Hydrocarbon
ΔT       | 1.00        | 2.75            | 2.99
θ        | 1.00        | 1.18            | 1.26
MOISTURE EFFECT

The moisture condition of an oil is expressed as the relative saturation, the percentage ratio of the dissolved water content X to its saturation value Xmax; Xmax is proportional to the saturation water vapor pressure PWsat [Eqs. (17) and (18)].

[Figure 2: water solubility of insulating oil versus temperature (−40°C to 60°C) for water contents of 12 μL/L, 33 μL/L, 55 μL/L, and 106 μL/L]

[Figure 3: water content characteristics of insulating oil at temperatures of 0°C to 80°C]
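The relative-saturation idea behind Eqs. (17) and (18) can be sketched with a generic Arrhenius-type solubility law, log10 Wsat = A − B/T, which is a common model for water in mineral oil; the constants A and B below are illustrative assumptions, not values from this article.

```python
def wsat_ppm(t_celsius, a_const=7.42, b_const=1670.0):
    """Saturation water content of mineral oil (ppm), log10 Wsat = A - B/T.
    The constants are illustrative, not taken from this article."""
    return 10 ** (a_const - b_const / (t_celsius + 273.15))

def relative_saturation(w_ppm, t_celsius):
    """Percent saturation of dissolved water, X / Xmax * 100."""
    return 100.0 * w_ppm / wsat_ppm(t_celsius)

# The same absolute water content is far closer to saturation in cold oil
rs_warm = relative_saturation(33, 20)   # 33 ppm at 20 C
rs_cold = relative_saturation(33, 0)    # 33 ppm at 0 C
```

This strong temperature dependence is why a water content that is harmless in a warm transformer can approach saturation, and free water, when the oil cools.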
Figure 4. tan δ behavior of transformer oil in actual field service (A) and of insulating oil aged in the laboratory (B).
When any one of the three processes is hindered, streaming electrification can be prevented. The choice of insulating
oil can also affect streaming electrification.
Polarity of Streaming Electrification
In streaming electrification between insulating paper and insulating oil, the oil becomes positively charged and the insulating paper negatively charged. A possible reason is a peculiarity of the oxygen of the hydroxyl group (OH) in the
insulating paper (cellulose).
Oxygen, having high electronegativity (that is, ability to
attract electrons), attracts the electron of hydrogen. It thereby
becomes negatively charged, and the hydrogen becomes positively charged. The cellulose molecule surface is thus covered with positively charged hydrogen, which selectively adsorbs negative ions from the oil. Therefore the insulating oil becomes positively charged, and the insulating paper negatively charged.
Streaming Electrification and the Deterioration
of Insulating Paper
The hydroxyl group (OH) of cellulose is changed to the aldehyde group (CHO) or the carboxyl group (COOH) by oxidative deterioration. The extent of polarization due to electron
transfer from hydrogen to oxygen, mentioned above, is in the
following order:
hydroxyl group < aldehyde group < carboxyl group
Accordingly, streaming electrification increases as the insulating paper deteriorates.
Streaming Electrification of Transformers
[Figures 5 and 6: streaming electrification at the insulating-paper/insulating-oil interface, showing charge adsorption, flow, and relaxation, and the streaming currents measured for two oils (Oil 1 and Oil 2)]
MINERAL OILS

Mineral oils consist mainly of naphthenic, paraffinic, and aromatic hydrocarbons; small amounts of sulfur compounds, nitrogen compounds, and oxygen compounds also occur. The naphthenic hydrocarbons include dicyclic, tricyclic, and alkyl-substituted hydrocarbons; the paraffinic hydrocarbons include normal paraffinic and isoparaffinic hydrocarbons; and the aromatic hydrocarbons include dicyclic, tricyclic, and alkyl-substituted hydrocarbons.
The composition of crude oils depends on the area where
they are produced. There are three kinds of crude oils: naphthenic, paraffinic, and mixed. Naphthenic crude oils contain a
large amount of naphthenic hydrocarbons, and paraffinic
crude oils contain a large amount of paraffinic hydrocarbons.
Mixed crude oils are intermediate between naphthenic and
paraffinic. Naphthenic crude oils are produced in South
America, North America, and southern Asia. Paraffinic crude
oils are produced in some areas of North America and northern Asia. Mixed crude oils are produced in the Middle East.
The composition of mineral oils depends on that of the
crude oils from which they are manufactured. There are two
kinds of mineral oils: naphthenic and paraffinic.
Refining Process
Mineral oils are manufactured from distillate of heavy light
oil and light lubricant oil by the process shown in Fig. 7.
Where naphthenic oils are refined, acid treatment followed by
clay filtration is also used. In the case of paraffinic oils dewaxing is part of the refining process. Examples of the composition of a naphthenic oil and a paraffinic oil are shown in Table
5. It is seen that in both paraffinic and naphthenic oils the
amount of paraffinic compounds is greater than the amount
of naphthenic compounds.
To obtain good dielectric characteristics, the amounts of
nitrogen compounds and sulfur compounds should be as small
as possible. However, excessive refining also decreases the
amount of aromatic hydrocarbons. Decrease of the amount of
aromatic hydrocarbons means a decrease in hydrogen absorption, and decrease of the amounts of both aromatic hydrocarbons and sulfur compounds means a decrease in oxidation
stability. It is known that hydrogen adsorption relates to the
partial discharge characteristics of oil (11,12) and that aromatic hydrocarbons have high hydrogen absorption. It is also
known that coexistence of aromatic hydrocarbons and sulfur
compounds is effective for oxidation stability. Therefore, refining must be performed so that the insulating oils maintain
balanced characteristics. The optimum amount of aromatic
hydrocarbons is 10 wt% to 20 wt%. In this case the amount
Figure 7. Refining process of mineral insulating oils.
Naphthenic oil: atmospheric distillation → vacuum distillation → solvent refining → clay filtration → hydrogenation.
Paraffinic oil: atmospheric distillation → vacuum distillation → solvent refining → hydrogenation → dewaxing → clay filtration.
SYNTHETIC OILS
PCBs were among the best and most widely used synthetic
insulating liquids for electrical machines, such as power capacitors and transformers, due to their superb electrical characteristics and nonflammability, until a total ban on their use
and production was imposed, first in Japan in 1972, then in
the USA in 1976, and then in Europe in 1985.
In the 1960s alkylbenzenes were initially developed for
high-voltage cables in view of their superior gassing properties under high voltage stress, and especially for use with synthetic paper.
At the time PCBs were banned, other kinds of synthetic
aromatic hydrocarbons such as alkylnaphthalenes and alkyldiphenylethanes had been developed as candidates for improvements on mineral oils, but because of their higher cost,
they had not been put into practical use. PCBs were then replaced mainly by these new aromatic hydrocarbons.
Aromatic Hydrocarbons
Alkylbenzenes consist of a benzene ring and an alkyl group of the straight-chain or branched-chain type.
Table 5. Composition of a Paraffinic Oil and a Naphthenic Oil (proportion of C, %)

Type of Oil | Sample No. | Paraffinic | Naphthenic | Aromatic
Paraffinic  | 1          | 60.1       | 29.7       | 10.2
Paraffinic  | 2          | 59.9       | 27.5       | 12.6
Paraffinic  | 3          | 61.8       | 29.7       | 8.3
Naphthenic  | 1          | 45.1       | 36.3       | 18.6
Naphthenic  | 2          | 49.0       | 39.0       | 12.0
Naphthenic  | 3          | 50.7       | 40.8       | 8.5
[Table: typical properties (flash point 140°C to 148°C, kinematic viscosities at two temperatures, dielectric constant 2.1 to 2.16, tan δ ≤ 0.01%, and volume resistivity of order 10¹³) of Paraffinic Oil 1, Paraffinic Oil 2, and a Naphthenic Oil]
Silicone Liquids
Organic Esters
Dioctyl phthalate (DOP) and diisononyl phthalate (DINP) have been used as substitutes for PCBs, especially in capacitors, because they have a higher permittivity (4.5 to 5.5) and flash point (200°C to 240°C) than aromatic hydrocarbons. Di-2-ethylhexyl orthophthalate (DOP) is specified in IEC 61099 as a capacitor ester (type C1). As not easily flammable liquids, phosphoric acid esters such as tricresyl phosphate (TCP) and trixylenyl phosphate (TXP) are used in blends with aromatic hydrocarbons. Generally speaking, these esters have high permittivity and high inherent resistance to electrical stress, but as manufactured they contain much water and many impurities, and their dielectric dissipation factor is very high; they must therefore be carefully dehydrated and purified before impregnation, and they often need an antioxidant or scavenger.
Recently, organic tetraester liquids have been introduced in transformers because they are less flammable. Their fire point is higher than 300°C, but their viscosity is low compared with that of currently used mineral oils. The same precautions as mentioned above should be followed, and additives are effective, as in the case of other organic esters.
Esters of a tetrahydric alcohol with a mixture of monocarboxylic acids, with suitable stabilizing additives, are also specified in IEC 61099 (type T1).
Polybutenes
Polybutenes can have a large range of viscosity (1 mm²/s to 10⁵ mm²/s at 40°C), depending on the degree of polymerization.
VEGETABLE OILS
Vegetable oils (castor oil, rapeseed oil, etc.) are basically triglyceryl esters of fatty acids; the fatty acids can be saturated or unsaturated. They were once used for cables and capacitors, and are now mostly used for the impregnation of dc capacitors, especially energy storage capacitors, as they have high permittivity. They have not been used for ac power capacitors, because of their poor dielectric dissipation factor. Recently, however, they have been tried with metallized polypropylene films, with which they have good compatibility, and their dissipation factor and gas-absorbing ability have been improved by blending them with aromatic hydrocarbon liquids.
[Table: comparative properties (flash and fire points, net calorific value, density, kinematic viscosity, pour point, dielectric constant, tan δ, and volume resistivity) of a silicone liquid, ester oils, a tetrachloroethylene mixture, and a mineral oil]
LIQUIDS FOR POWER CABLES

Naphthenic mineral oils have traditionally been used for power cables because of their stability under high stress, but with the progressive improvement of process technology for refining crude oil, paraffinic crude oils and mixtures of naphthenic and paraffinic oils have also been used because of their wider availability.
Aromatic content in mineral oil is also important, and in
some cases synthetic aromatic hydrocarbons are added. Pure
synthetic aromatic hydrocarbons, mainly alkylbenzenes, are
also used, especially for ultrahigh-voltage power cables, because of their compatibility with synthetic papers, excellent
stability under high stress, and sufficient source of supply.
Polybutenes are used for hollow power cables because of
their wide range of viscosity.
Liquids for cables are specified in IEC 60465 (mineral oils),
60836 (silicone liquids), 60867 (aromatic hydrocarbons), and
60963 (polybutenes).
Cable oils must have the following properties.
1. High dielectric strength and high volume resistivity
2. Low dielectric losses and low dielectric constant
3. Low viscosity and good fluidity over a wide temperature
range (low pour point)
4. High chemical stability and high resistance to oxidation
5. Low temperature coefficient of expansion
6. Sufficient source of supply
7. Nontoxicity and environmental safety
Of these, properties 1, 2, and 3 are most important from the
viewpoint of power cable performance.
BIBLIOGRAPHY
1. C. H. Flurscheim (ed.), Power Circuit Breaker Theory and Design, IEE Power Engineering Series 1, Stevenage, UK: Peregrinus, 1982.
2. N. Ohoka and S. Maekawa, Transformers (in Japanese), Tokyo:
Tokyo Denki University Press, 1975.
3. R. K. Grubb et al., A transformer thermal duct study of various
insulation liquids, A-80 051-3, presented at IEEE PES Winter
Meeting, New York, 1980.
MINEAKI NISHIMATSU
Fukui University of Technology
TERUO MIYAMOTO
Mitsubishi Electric Corporation
TOSHIO SUZUKI
Aichi Electric Co., Ltd.
Given the many distributed locations involved, it is not surprising that computer support in this area will have a great impact on the efficiency, accuracy, and advancement of health care. Many projects, often concerted international efforts, address the issue of how to handle an ever-increasing amount of medical information. Universal classifications have been designed and regularly refined, while other approaches aim not only to collect, but also to structure and disclose, this exponentially growing body of medical information. The following sections are devoted to a more general description of various topics of relevance to the field of medical informatics and may be of interest to the average reader.
PATIENT DESCRIPTION AND THE ELECTRONIC
PATIENT FILE
The basic goals of the use of computers in medicine concern communication and clinically relevant combination of
data. This electronic medium is expected to enhance and
facilitate such interaction and data interpretation. Ideally,
every citizen should carry a patient data card, which in
an emergency case presents valuable information to the
physician. The Medical Records Institute is an instrumental force in the movement toward such an electronic patient
record. Locally, most hospitals have developed an information system [hospital information system (HIS)]. A patient
card may include information on medical history, familial traits, use of prescription drugs, allergies, lifestyle (including sports activities and use of alcohol and/or tobacco),
availability of x-ray pictures, electrocardiogram recording,
and blood chemistry (2). Obviously, these initiatives involve
delicate ethical issues, as well.
MEDICAL TERMINOLOGY AND EPONYMS
Knowledge obviously can be represented by symbols, words, definitions, and their interrelations. Knowledge may be expressed by spoken or written words, flow charts, (mathematical) equations, tables, or figures. Aspects of language and text interpretation are central issues in AI. A
powerful abstraction of language also provides a powerful
representation of knowledge. Various strategies have been
explored: semantic networks offer a versatile tool for representing knowledge of virtually any type that can be captured in words by employing nodes (representing things)
and links (referring to meaningful relationships), thus expressing causal, temporal, taxonomic, and associational
connections. Other approaches (such as frame systems and
production rule systems) have also been investigated. Conceptual graphs (3) are an emerging standard for knowledge
representation, and the method is particularly suited to
the representation of natural language semantics. Free-text data have limitations due to spelling errors, ambiguity, and incompleteness. However, formalisms that collect
data in a structured and coded format are more likely to
increase the usefulness regarding biomedical research, decision support, quality assessment, and clinical care (4).
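The node-and-link scheme described above can be sketched minimally in code. The concepts and relation names used here are illustrative only, not drawn from any standard medical vocabulary.

```python
from collections import defaultdict

class SemanticNet:
    """Tiny semantic network: nodes are things, links are named relationships."""

    def __init__(self):
        self.links = defaultdict(set)  # (source, relation) -> set of targets

    def add(self, src, rel, dst):
        self.links[(src, rel)].add(dst)

    def isa_closure(self, concept):
        """Transitive closure over 'isa' links (taxonomic reasoning)."""
        seen, stack = set(), [concept]
        while stack:
            c = stack.pop()
            for parent in self.links[(c, "isa")]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

net = SemanticNet()
net.add("myocardial infarction", "isa", "heart disease")      # taxonomic link
net.add("heart disease", "isa", "cardiovascular disease")
net.add("myocardial infarction", "causes", "chest pain")      # causal/associational link
ancestors = net.isa_closure("myocardial infarction")
```

Frame systems and production-rule systems organize the same kind of knowledge differently; the transitive `isa` walk here is the simplest case of the causal, temporal, taxonomic, and associational reasoning the text mentions.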
However, the lack of a standardized medical language limits the optimal use of computers in medicine. Incorporation of knowledge bases containing equivalent expressions may help to overcome this limitation.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
American Versus British Spelling. Two standard differences are evident, namely the use of the digraph ae in British spelling (e.g., anaemia versus anemia) and the preference for using c (e.g., in leucocyte) rather than k (as in the American word leukocyte). Interestingly, the British equivalent of the American word leukemia is spelled leukaemia.
Preferred Terminology. In radiology "air" means gas within the body, regardless of its composition or site, but the term should be reserved for inspired atmospheric gas; otherwise, the preferred term is "gas." Sometimes the preferred terminology reflects simplicity; the expression "lower extremity" must be replaced by "leg," for example. On other occasions the preferred terminology pertains to technical vocabulary that permits high precision if the available information is exact; the word "clumsiness" describes defective coordination of movement in general, whereas "dysdiadokokinesis" refers to a defect in the ability to perform rapid movements of both hands in unison (9).
Meaning Within a Certain Context. The quality "blue" primarily refers to a particular color. The actual meaning in medical language may, however, relate to a specific noun, for example, blue asphyxia, blue baby, blue bloater, blue diaper syndrome, blue dome, blue line, blue nevus, blue pus, blue sclera, blue stone, and blue toe syndrome (7).
Implicit Information. A particular statement may imply many relevant components; for example, if urinalysis is normal, then this result implies the absence of proteinuria, hematuria, glucosuria, and casts. Antonyms may also apply: leukopenia in particular implies no leukocytosis. This mutual exclusion principle applies to all terms beginning …
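The implicit-information and mutual-exclusion principles just described can be sketched as a small rule table; the finding names and rules below are illustrative only.

```python
# Each explicit finding implies that certain other findings are absent.
IMPLIES_ABSENT = {
    "normal urinalysis": {"proteinuria", "hematuria", "glucosuria", "casts"},
    "leukopenia": {"leukocytosis"},  # antonym (mutual exclusion)
}

def expand(findings):
    """Return the explicit positives and the negatives they imply."""
    absent = set()
    for f in findings:
        absent |= IMPLIES_ABSENT.get(f, set())
    return set(findings), absent

present, absent = expand({"normal urinalysis", "leukopenia"})
```

Making such implications explicit is what lets coded patient data answer queries ("any hematuria?") that the free-text record only answers implicitly.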
The ICD system has just entered its tenth version, although the ninth edition is still used. It is applied …

CMIT, developed by the AMA (13), forms a reference …
[List fragment: circular knowledge; redundant knowledge; unnecessary knowledge]
… It is absolutely essential to place more emphasis on differential aspects such as the acquisition of knowledge and learning.
ANNs can be defined as massively parallel distributed processors with a natural capacity not only for storing experience-based, that is, heuristic, knowledge, but also with a facility for making such knowledge available for use. ANNs resemble the brain in two ways: first, knowledge is acquired by means of a learning process, and second, the synaptic weights are used for storing the knowledge.
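The two properties above, learning and storage of knowledge in synaptic weights, can be illustrated with a toy perceptron; the training data (logical OR), learning rate, and epoch count are illustrative choices.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Acquire knowledge by a learning process; store it in weights w and bias b."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            w[0] += lr * err * x[0]   # error-driven weight update
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn logical OR from examples
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

After training, nothing but the numbers in `w` and `b` encodes what was learned, which is exactly the sense in which synaptic weights store knowledge.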
It is obvious that, in order to obtain acceptable results at any of the levels of AI described above, we need to draw from other fields such as mathematics (its language and procedures), medicine (especially neurophysiological models), computer science (particularly software engineering and systems architecture), linguistics (especially syntax and semantics), psychology (which allows us to analyze intelligent behavior models), and finally, even philosophy.
Advanced Aspects of AI
Dealing with Uncertainty. AI is not only concerned with general mechanisms related to the search for solutions within a given space, or with how to represent and utilize the knowledge of a specific discourse domain. Another aspect, up to now mentioned only in passing, concerns inferential mechanisms and processes, which are considered the starting point for the so-called reasoning models.
In any domain, the propagation of knowledge by means of AI programs is always carried out by following a well-defined reasoning model. These reasoning models contribute in a decisive way to the correct organization of the search for solutions. Normally, the domain characteristics and the characteristics of the problems to be solved determine the type of reasoning model to be employed. Thus, there are domains of a markedly symbolic nature, in which solutions can be established with absolute confidence. In these cases the use of categorical reasoning models is indicated (63). There are, on the other hand, domains that are of a statistical nature, where unique solutions cannot be obtained and where, in addition, a decision must be made as to which of the possible solutions arrived at is the most probable. In these cases it is preferable to reason with statistical models, of which, given the peculiarities of the inferential processes that AI deals with, the Bayesian scheme is the most widely used (64, 65).
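A minimal sketch of the Bayesian scheme for a single finding; the prevalence and test characteristics used here are illustrative assumptions, not data from the text.

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive finding) by Bayes' theorem."""
    # Total probability of a positive finding: true positives + false positives
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# 1% prevalence, a test with 95% sensitivity and 90% specificity
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
```

Even a fairly accurate test yields a modest posterior when the prior is low, which is why Bayesian reasoning models weigh prevalence as heavily as test accuracy.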
There are other domains in which the concept of uncertainty appears; the uncertainty may be inherent in the data of the problem and the facts of the domain, or in the inferential mechanisms themselves. In such cases reasoning models are chosen that are capable of correctly manipulating such uncertainty (66-68). Finally, there are domains in which the inferential elements include nuances of a linguistic nature, where hierarchies and classifications can be established. Indicated in these cases are reasoning models based on fuzzy sets (69, 70).
Obviously there are domains that manifest more than one of the characteristics just mentioned, in which case the …

This was precisely one of the major drawbacks of the probabilistic models. With this new theory, if …
The function f quantifies the degree of belonging of an element of the universe to the fuzzy set in question. Thus, a fuzzy set is one for which there does not exist a clear dividing line between belonging and not belonging for determinate elements of the universe. In order to establish the fuzzy limits of the corresponding set, we require a criterion, which naturally will be arbitrary. Let us examine the universe U of living persons along with the fuzzy set A of U answering the description "A is the set of young living persons." A property we consider appropriate for the characterization of the fuzzy subset A is the age of the universe elements; but how should we define age? We are faced with the not insignificant problem of the definition of criteria for the fuzzification of sets. In our example, we will consider as "young" all those elements of the universe whose age permits them legally to obtain Youth …
According to this scale, and using the facts from the example, we may now say that Tom is "totally young" (or simply young), Dick is "fairly young," and Harry is "not at all young" (or simply not young). These expressions represent a natural, human way of expressing judgments with respect to the ages of our friends.
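A minimal membership function f for the fuzzy set "young", with the arbitrary criterion discussed above made explicit in code; the breakpoints (25 and 45 years) and the ages of Tom, Dick, and Harry are assumptions for illustration.

```python
def young(age):
    """Degree of membership in 'young': 1 below 25, 0 above 45, linear between."""
    if age <= 25:
        return 1.0
    if age >= 45:
        return 0.0
    return (45 - age) / 20.0

def label(degree):
    """Map a membership degree to a linguistic judgment."""
    if degree >= 0.9:
        return "totally young"
    if degree >= 0.5:
        return "fairly young"
    if degree > 0.1:
        return "slightly young"
    return "not at all young"

# Tom, Dick, and Harry from the example (ages assumed)
ages = {"Tom": 22, "Dick": 33, "Harry": 60}
judgments = {name: label(young(a)) for name, a in ages.items()}
```

Unlike a classical set, membership here is a matter of degree, and the linguistic labels fall out of the membership value rather than from a sharp age threshold.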
Although an in-depth discussion of the problems deriving from knowledge representation and from fuzzy reasoning is well beyond the scope of this text, it is nevertheless necessary to include a reference to both. Remember that, from the perspective of AI, the fuzzy model permits us to represent and manipulate expressions appropriate to the language of human beings. In such expressions we come across fuzzy predicates, fuzzy quantifiers, and fuzzy probabilities. Other, more conventional approaches to the representation of knowledge lack the means for efficiently representing the meaning of fuzzy concepts. Models based on first-order logic, or those based on classical probability theories, do not allow us to manipulate the inappropriately named common-sense knowledge. The reasons for this are as follows.
Knowledge derived from common sense is lexically imprecise.
Knowledge derived from common sense is of a noncategorical nature.
The characteristics of the fuzzy sets examined in the previous paragraphs give us clues as to the procedure to follow if what we require is the application of knowledge representation models and reasoning models based on fuzzy logic(s). Thus (73):

1. In fuzzy logic, categorical reasoning is a special case of approximate reasoning.
2. In fuzzy logic, it is all a question of degree.
3. Any system can be fuzzified.
4. In fuzzy logic, knowledge should be interpreted as a collection of fuzzy restrictions placed on a collection of variables.
5. In fuzzy logic, reasoning problems, and therefore inferential processes, should be interpreted as propagations of the fuzzy restrictions mentioned previously.
Although the theoretical bases of fuzzy logic are quite clear, its application to systems of an inferential nature is problematic. Even at the time of writing, these difficulties have not been entirely overcome. Nevertheless, fuzzy-system theories applied to control problems, in place of more conventional approaches, are coming up with solutions that are both brilliant and elegant.
FURTHER READING
The fields of medical KBs and terminology are rapidly developing. There is no single comprehensive source of information available, but the professional reader is advised to scrutinize the following journals and organizations for updated information.

M.D. Computing, published by Springer-Verlag (New York and Berlin), reports on research in the field of medical informatics. Of special interest to the clinician is also the journal Expert Systems with Applications, published by Pergamon Press (New York).
IEEE Expert appears four times a year and presents
the latest on AI and expert systems. Contact P.O.
Box 3014, Los Alamitos, CA 90720. This journal is
for those interested in technical details on intelligent
systems and their applications. IEEE (P.O. Box 1331,
Piscataway NJ 08855-1331) also publishes a number
of related journals, for example on knowledge engineering, fuzzy logic, ANNs, and multimedia.
The NLM releases news bulletins and provides information on UMLS and contracts for cooperation.
Obviously, annual meetings form the forum for presentation of the latest developments:
IMIA conferences are well known, besides the world congress organized every four years. IMIA publishes a Yearbook of Medical Informatics. It offers the pearls of medical informatics, since it covers influential papers from 100 journals in the field. Contact Schattauer Publishers, P.O. Box 104545, 70040 Stuttgart, Germany.
USABILITY OF EXPERT SYSTEMS
In the development of medical software systems in general, and of medical KBSs in particular, there is currently a gradual shift in philosophy, from a system-directed one, in which internal architecture and functions set the pace for development, toward a user-centered philosophy (referred to as user-centered design, or UCD), in which the user is involved in design aspects. However, one of the main obstacles to this new approach to design is the lack of suitable tools. Consequently, greater effort is required from the software engineering community in the field of man-machine interaction or human-computer interaction (HCI). A wide range of techniques is currently available for analyzing the usability of computerized intelligent systems (74). The fact that so many techniques have been developed is due to the fact that, to date, no single method will ensure that a system is usable. In fact, the use of several approaches and an overall analysis of the results are generally recommended.
In an attempt to organize and facilitate the learning of usability analysis techniques, a number of authors have classified these in terms of hierarchical models. Of particular interest are the classifications drawn up by Ivory and Hearst (75), Adelman and Riedel (76), and Preece (77). The simplest of these is the Adelman and Riedel classification, consisting of three main categories, namely heuristic methods, subjective methods, and empirical methods. These are described in turn as follows:

1. Heuristic methods are based on the opinions of usability experts. These experts analyze the different system in-
… neural networks, fuzzy systems, or genetic algorithms, among others. These systems are called hybrid intelligent systems; they combine intelligent techniques as well as conventional computing techniques to achieve a higher level of machine intelligence. Hybrid systems help to:
1. Improve the available techniques, integrating several of them so as to compensate for the problems that each of them presents. For example, neural networks are good at learning but cannot do high-level reasoning. On the other hand, symbolic expert systems are good at high-level reasoning, but more limited in learning.
2. Find solutions for complex tasks. Most application domains present several subtasks with different characteristics. For example, the logic and static components can be adequately managed by symbolic expert systems, while other components that are dynamic, fuzzy, or poorly understood could be managed, for example, by neural networks.
3. Implement multifunctional systems. In this case, the goal is to create a system that can exhibit multiple capacities for information processing in a single architecture. That is, there is only one system, but it tries to emulate the behavior of different processing techniques. One example of this is the use of neural networks for symbolic processing.
Depending on factors such as functionality, processing architecture, and communication requirements, three basic types of architectures for hybrid systems can be distinguished (82):
1. Expert systems with function replacement, in which a principal function of a given technique is replaced by another intelligent processing technique. The aim of the replacement is either to increase the execution speed or to enhance reliability. An example of this type of hybrid system is the replacement of the backpropagation weight-changing mechanism of a neural network with genetic algorithm operators.
2. Intercommunicative hybrids, which are independent, self-contained intelligent processing modules that exchange information and perform separate functions to generate solutions. This architecture is used when a problem can be divided into subproblems, each of which can be solved using a different technique, such as neural networks or symbolic systems. An example is a diagnosis system in which an expert system performs inferences and calls neural networks when needed to analyze data and obtain patterns (83).
3. Polymorphic hybrids, which are systems that use a single processing architecture to achieve the functionality of different intelligent processing techniques. An example is a neural network that tries to perform symbolic tasks such as stepwise inferencing (84).
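The function-replacement architecture in item 1 can be illustrated with a toy sketch in which a genetic-style search takes over the weight-update role that backpropagation normally plays. The network topology, the XOR task, and all evolutionary parameters below are illustrative choices, not drawn from the text.

```python
import random
import math

# Toy "function replacement" hybrid: an evolutionary search replaces the
# backpropagation weight-update rule of a tiny 2-2-1 neural network.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Negative squared error over the training set: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=40, generations=300, sigma=0.6, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-2, 2) for _ in range(N_WEIGHTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]  # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(wa if rng.random() < 0.5 else wb) for wa, wb in zip(a, b)]  # crossover
            child = [w + rng.gauss(0, sigma) for w in child]                      # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([round(forward(best, x)) for x, _ in XOR])  # aims for the XOR outputs [0, 1, 1, 0]
```

Because the parents are always retained, the best fitness never decreases, mimicking the monotone improvement that a gradient step would otherwise provide.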
METHODOLOGIES
Nowadays, the perspective of knowledge transfer (eliciting knowledge from the expert and translating it to a tool using some kind of software methodology) has been superseded by the perspective of knowledge modeling. This has been achieved using model-based methodologies that approach the complex problem of knowledge engineering by constructing different aspect models of the human knowledge involved in some complex domain. Several such methodologies are available, including CommonKADS (85), MIKE (86), and Protege-II (87). All of them approach knowledge acquisition and modeling from a structural point of view and try to alleviate the knowledge acquisition bottleneck.
The last methodology, Protege, also allows for the definition of ontologies. Ontologies make knowledge sharing and reuse possible, playing a major role in supporting information exchange across various networks. An ontology describes the concepts and relationships that are important in a particular domain, providing a vocabulary for that domain as well as a computerized specification of the meaning of the terms used in the vocabulary. Ontologies range from taxonomies and classifications, through database schemas, to fully axiomatized theories. In recent years, ontologies have been adopted in many business and scientific communities as a way to share, reuse, and process domain knowledge. Ontologies are now central to many applications such as scientific knowledge portals, information management and integration systems, electronic commerce, and semantic web services. Ontologies applied to the World Wide Web are creating the Semantic Web (88). Several groups are attempting to implement content management infrastructures that support the management of the vast amount of knowledge encoded in clinical systems. These ontologies and rules are served up through applications and services to support guided observation capture, guided ordering, and guided interpretation of clinical data. Workflow portals leveraging this knowledge include the electronic health record for caregivers and consumers, quality performance management, and clinical research.
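As a minimal illustration of what an ontology provides, the sketch below encodes a tiny "is-a" taxonomy and answers subsumption queries over it. The clinical concept names are invented for the example and are not drawn from any standard terminology such as SNOMED or ICD.

```python
# A toy taxonomy: concepts linked by transitive "is-a" relations form the
# backbone of an ontology, giving applications a shared vocabulary to query.
IS_A = {
    "myocardial_infarction": "ischemic_heart_disease",
    "ischemic_heart_disease": "heart_disease",
    "heart_disease": "cardiovascular_disease",
    "hypertension": "cardiovascular_disease",
    "cardiovascular_disease": "disease",
}

def ancestors(concept):
    """All broader concepts reachable through transitive is-a links."""
    result = []
    while concept in IS_A:
        concept = IS_A[concept]
        result.append(concept)
    return result

def subsumes(general, specific):
    """True if 'general' is the same as, or an ancestor of, 'specific'."""
    return general == specific or general in ancestors(specific)

print(subsumes("heart_disease", "myocardial_infarction"))  # prints True
print(subsumes("hypertension", "myocardial_infarction"))   # prints False
```

Two applications that agree on this shared vocabulary can exchange the concept names and draw the same inferences, which is the knowledge-sharing role the text describes.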
BIBLIOGRAPHY
1. G. L. Solomon, M. Dechter, Are patients pleased with computer use in the examination room? J. Fam. Pract., 41:
241–244, 1995.
2. P. L. M. Kerkhof, M. P. van Dieijen-Visser (eds.), Laboratory Data and Patient Care. New York: Plenum Publishing,
1988.
3. J. F. Sowa, Conceptual Graphs, Information Processing in
Mind and Machine. Reading, MA: Addison-Wesley, 1984.
The ICD is a widely accepted system that organizes all possible medical diagnoses. The tenth version has been translated for
worldwide application. Developed by the Commission on Professional and Hospital Activities, Ann Arbor, Michigan.
2 8600 Rockville Pike, Bethesda, MD 20894. The NLM releases
news bulletins and provides information on UMLS and contracts
for cooperation.
3 The UMLS is a project initiated by the NLM and distributed
on CD-ROM. Cooperation with parties to implement the system
within their own environment is encouraged but requires a contract (25).
83. A. Alonso-Betanzos and O. Fontenla-Romero. Intelligent analysis and pattern recognition in cardiotocographic signals using a tightly coupled hybrid system. Artificial Intelligence, Vol. 136, No. 1, pp. 1–27, 2002.
84. V. Ajjanagadde and L. Shastri. Rules and variables in neural nets. Neural Computation, Vol. 3, No. 1, pp. 121–134, 1991.
85. G. Schreiber, H. Akkermans, A. Anjewierden, R. De Hoog, N. Shadbolt, W. Van de Velde, and B. Wielinga. Knowledge Engineering and Management: The CommonKADS Methodology. Third Edition. MIT Press, 2002.
86. J. Angele, S. Dekel, R. Perkuhn, and R. Studer. Developing knowledge-based systems with MIKE. Journal of Automated Software Engineering, Vol. 5, No. 4, pp. 326–389, 1998.
87. Protege. http://protege.stanford.edu/index.html. Accessed November 20, 2006.
88. IEEE Intelligent Systems. Special issue on The Semantic Web: A brain for humankind. Vol. 16, No. 2, 2001.
89. M. S. Blois, Information and Medicine. Berkeley: Univ. California Press, 1984.
90. P. L. M. Kerkhof, M. P. van Dieijen-Visser (eds.), Laboratory
Data and Patient Care. New York: Plenum Press, 1988.
91. E. A. Murphy, The Logic of Medicine. Baltimore: Johns Hopkins Univ. Press, 1976.
92. J. A. Reggia, S. Tuhrim (eds.), Computer-Assisted Medical
Decision Making. New York: Springer, 1985.
93. E. H. Shortliffe, Computer programs to support clinical decision making. J. Am. Med. Assoc., 258: 61–66, 1987.
94. E. H. Shortliffe, L. E. Perrault (eds.), Medical Informatics: Computer Applications in Healthcare. Reading, MA:
Addison-Wesley, 1990.
95. R. H. Taylor, S. Lavallee, G. C. Burdea, R. Mosges (eds.),
Computer-Integrated Surgery. Cambridge, MA: MIT Press,
1995.
96. B. T. Williams, Computer Aids to Clinical Decisions. Boca
Raton, FL: CRC Press, 1982.
97. J. L. Bowen, Medical education; educational strategies to promote clinical diagnostic reasoning. New Engl. J. Med., 355: 2217–2225, 2006.
98. Alphabetical list of disease names and corresponding ICD-9-CM: http://www.dmi.columbia.edu/hripcsak/icd9/2indexa.html or: http://icd9cm.chrisendres.com/index.php?action=contents
99. Public Health Image Library: http://phil.cdc.gov/phil/whatsnew.asp
100. Software for PDA: http://www.epocrates.com/index.html
101. http://www.merckmedicus.com/pp/us/hcp/templates/tier2/
PDAtools.jsp
102. http://www.skyscape.com/index/home.aspx
103. Differential Diagnosis: DXplain: http://www.lcs.mgh.harvard.edu/
104. Gideon: www.gideononline.com
105. Iliad: http://premieremedical.safeshopper.com/276/1904.htm
106. Scientific and Technical Acronyms, Symbols, and Abbreviations: http://www3.interscience.wiley.com/cgi-bin/mrwhome/104554766/HOME
PETER L. M. KERKHOF
Medwise Working Group
Maarssen, The Netherlands
AMPARO ALONSO-BETANZOS
VICENTE MORET-BONILLO
University of Coruna
Figure 2. The strategic choices that users have to make have increased in number and complexity. (The figure contrasts earlier decades, when a single vendor supplied both hardware and software and information systems simply supported the operational system of processes and resources, with the 1990s, when users must choose among application program manufacturers, system integrators, installation projects, applications, servers and workstations, hardware, networks, architectures, system tools, industrial software, and alliances.)
(Residue of intervening figures and tables: an information-strategy stack running from the physical network, network operating system, servers, and workstations through middleware, message standards, and classification terms up to the information strategy; stages of IT-enabled change from localized exploitation and internal integration to process redesign; and a table contrasting what information systems management is (system integrator, collaborator, change agent, customer advocate) and is not (tactical planner, automation of existing processes, technically driven, isolated islands of data), together with investment concepts such as investment appraisal, capital planning, benefits management, and asset accounting.)
Figure 5. Process-oriented organization of health care delivery. (The figure shows the clinical service line addressing the patient and problem, together with a secondary care unit, drawing on care services such as policlinics, wards, surgery, X-ray, laboratory, and intensive care, as well as support services.)
of resources needed during that process. The process paradigm leads to a new organizational structure for care delivery
(Fig. 5). Resources are reorganized to serve the main activity
of problem-solving. The core activity is the clinical service
line, which uses the skills of service units such as laboratory,
radiology, surgery, and wards, according to need.
In the organization unit structure MISs support different
units, including laboratories, radiology, picture archiving and
communication, pharmacy, intensive care, anesthesia and operating rooms, administration, blood-banking, kitchen, maintenance, cleaning, clinical engineering, and so forth.
This creates a need for glue that integrates the patient
data created in the different units and makes it available
where and when needed. This is supplied by the application
program interface (API) and the message brokering technology utilizing message standards, such as HL7 and DICOM.
The patient data store is the electronic patient record. Systems integration is the function where these MIS applications
are glued together into an interoperable environment.
The resulting MIS architecture has no order; all applications are equal. This equality has created a further need: clinicians need an overview of what is happening with the patient, that is, how the care plan that they devised is being implemented and with what success. Therefore applications like the clinician's workstation have been created to provide an integrated picture of the care process. In fact, although the organization is a top-down structure, care is administered in processes.
In the process paradigm, the MIS architecture centers on
service. The clinician responsible for a patient designs a care
plan. The care plan may contain orders for tests and procedures that are delivered by the care service units. The care
plan is reviewed at regular intervals and when new data become available it is adjusted/redesigned according to need.
The MIS paradigm for this approach is called order/entry
(O/E). O/E systems have lately been highly successful, as
they provide a way for the clinician to be in control of the
procedures performed on the patient and/or on samples taken
from the patient.
Common Services
As the understanding of care delivery and of ways to support
it with information technology has matured, the need for an
infrastructure architecture has emerged. An architecture defines what the total system is, what function it provides, and
how the pieces that make up the whole system interact. Integration with message brokering is an architecture. Some MIS
applications are used by all, whereas some serve only one
function. In other words, there are common and specific services provided by MIS applications. Additionally, there are
services that are even more common and that are needed in
all IT environments, independent of whether or not that environment has a medical purpose.
A number of consortia made up of industry and user organizations are attempting to identify these common and health-care-specific services. The Object Management
Group (OMG) is defining a common object request broker architecture (CORBA) and is in the process of identifying these
common services (3). As a part of that activity, a health care
specific task force has been established with the name CORBAmed (14). Microsoft's OLE/COM and its Healthcare Users
Group (HUG) compete with OMG, although some agreements
exist on how these can coexist (15). A third approach, known
as the Andover Group, combines the strengths of both, building on HL7 (18). HL7 itself may also be a contender as it moves toward full object orientation (9). In the European context, a prestandard on health-care-specific common services
exists. It has been produced by the medical informatics committee (TC 251) of the European Standardization Committee
(13). The health care common components (HCC) identified
are: patient, health datum, activity, resource, authorization
and concept (19). The roots of this activity are in a stream of
European Union-funded projects that started nearly 10 years
ago (16).
Another prestandard by TC 251 presents a health care information framework (HIF) that can be used to view any
health care organization and the MIS it uses (20). The HIF
comprises three views: (1) health care domain, (2) technology,
and (3) performance requirements (Fig. 6). All MIS environments must have the required functions, be dependable, and be controllable. These requirements are met by technology in
three layers. Where they are located depends on the solution.
For instance, data privacy and protection can be an integrated feature of an MIS application, or there can be a common middleware service for this across all MIS applications.
The management of data privacy and protection in an MIS environment is certainly going to be easier if that function is
located in the middleware layer than if changes and updates
require manipulation of all MIS applications.
bottom up (from the telecommunications infrastructure toward the application layer, Fig. 6). The MIS industry delivers applications and systems integration services. As health
care delivery becomes more integrated and extends to homes
and individuals the borders between telemedicine and MIS
are disappearing.
The original meaning of telemedicine was medicine at a
distance. Developments in telecommunications, telematics,
computers, and multimedia have amended this definition.
Now the emphasis is on the access to shared and remote expertise independent of where the patient or the expertise is
located, the multimedia nature of such contacts, and the
transfer of electronic medical data (e.g., high resolution images, sounds, live video, patient records) from one location to
another. Distances and geography are no longer obstacles to
delivering timely and quality health care.
Teleradiology is the most often cited telemedicine application. Other applications include dermatology, ophthalmology,
pathology, psychiatry, transmission of images and signals
generated by ultrasound and endoscopy and by physiological
transducers for diagnostic and monitoring purposes (29).
Taylor (30,31) separates telemedicine into systems and
services. The first deals with the technology needed to deliver
the second. Telemedicine is still mostly in the technology
phase. Numerous experiments and pilots have been conducted (and some are still running) that have established that
the technology works. However, because they have been
mostly closed environments with special funding, the pilots
have not survived in real life. Once the pilot is over it has
proved to be extremely difficult to build a convincing case to
continue with the service on a real cost basis. Teleradiology,
however, is an exception. There is evidence that it is cost-effective at case loads that are realistic in typical clinical practice. As teleradiology has been around the longest, it is
reasonable to assume that as other telemedicine services mature in the coming years, they will diffuse into clinical practice. The fact that health care services are becoming integrated on a community and regional basis and that the
telecommunications infrastructure necessary to support this
change is growing also strengthens the case for telemedicine.
A telemedicine system consists of input/output stations and a communications channel. The performance requirements depend on the application. In the case of teleradiology these include:

- Image capture, either directly from digital imaging modalities or indirectly from films scanned with digitizers.
- Transmission of image and associated patient data through a data channel. Depending on the speed requirements, the channel can be an ordinary phone line, an ISDN line, or even ATM. Satellite communication is also used.
- Compression. Because the size of a digitized X-ray image file is large (an image of 1000 × 1000 pixels with a 12-bit gray scale gives a file size of 12 Mbit) and the bandwidth of the data channel is limited, the files are usually compressed at the sending side. Efficient compression algorithms are lossy; that is, not all image detail can be re-created during decompression. Much effort has been invested in researching what compression ratios are acceptable in various radiology applications.
- Workstations to display X-ray images and associated patient data. The features needed are different at the sending and receiving sides.
- A user interface to operate the system.

Teleradiology systems are either standalone or integrated with other ISs at both ends. In such cases, image and patient data communications are usually based on the DICOM and HL7 standards, respectively.
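The file-size and bandwidth figures above can be made concrete with a small calculation. The channel rates and the 10:1 compression ratio below are illustrative assumptions, not values from the text.

```python
# Back-of-the-envelope teleradiology figures: a 1000 x 1000 pixel image at
# 12 bits per pixel is 12 Mbit, and transmission time depends on the channel.

def image_size_mbit(width_px, height_px, bits_per_pixel):
    """Uncompressed image size in megabits."""
    return width_px * height_px * bits_per_pixel / 1e6

def transfer_seconds(size_mbit, channel_mbit_per_s, compression_ratio=1.0):
    """Transmission time after optional compression."""
    return size_mbit / compression_ratio / channel_mbit_per_s

size = image_size_mbit(1000, 1000, 12)  # 12.0 Mbit, as in the text
channels = {  # nominal rates in Mbit/s (illustrative assumptions)
    "modem (28.8 kbit/s)": 0.0288,
    "ISDN (128 kbit/s)": 0.128,
    "ATM (25 Mbit/s)": 25.0,
}
for name, rate in channels.items():
    raw = transfer_seconds(size, rate)
    squeezed = transfer_seconds(size, rate, compression_ratio=10.0)  # lossy 10:1
    print(f"{name}: raw {raw:.1f} s, 10:1 compressed {squeezed:.2f} s")
```

The calculation shows why compression is essential on narrowband channels: over ISDN the raw image needs well over a minute, while a 10:1 lossy compression brings it under ten seconds.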
Diagnostic and therapeutic telemedicine services include teleconsulting, teleconferencing, telereporting, and telemonitoring (31). Although experiments and pilot projects for numerous telemedicine applications are being conducted, the
development of telemedicine services used in routine clinical
work has been slow. This is explained by a number of factors,
the most important of which is that a telemedicine service is
an add-on to existing services. Therefore it must either offer
benefits that cannot be disputed or replace a less cost-effective service. The argument that it provides a means to deliver
care over a distance is not enough. It must be supplemented
with facts about quality, acceptance, and cost in comparison
with the services it is replacing or augmenting. So far there
is not much data available on the utility of telemedicine with
the exception of teleradiology (32). Reimbursement policies are another barrier for telemedicine. However, with community- and region-wide integration of service providers, this barrier
will probably disappear.
A further problem is reliability and liability. Can users
trust what they access? Who is responsible if something goes
wrong? When the expert consulted is a human being, the
usual rules of practicing medicine apply. For servers available
through the Internet, however, the situation is quite different. Consequently, as this concern has been voiced, mechanisms have been created to provide guidelines and certification of these servers. Health on the Net (HON) is one such
service (33).
Data Confidentiality and Data Security
Confidentiality and security of patient data are issues that
cannot be compromised. National legislation defines how the
privacy of a person (even a patient) must be protected. Other
legislation provides the framework in which health care is
practiced. All MISs, MIS environments, and information
management strategies must minimally provide what is required by the relevant laws (34). Some countries require that
all software used in health care be certified that it meets the
national regulations (35).
Organizations should establish an information risk management plan for the implementation of these requirements
into operational processes and MIS. Elements to be included
in such a plan are:
1. How authorization to access patient data is obtained
from the patient (e.g., using individual health cards)
2. How access rights of health professionals are controlled,
maintained, and verified (e.g., audit trails and strong
authentication with electronic signature)
3. How patient data are grouped with different access
rights
4. How patient data are secured (e.g., by encryption)
5. How the training and education of health care professionals and patients in these issues is organized
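As a sketch of how points 2 and 4 of such a plan might look in software, the fragment below pairs a role-based access check that writes an audit-trail entry with a toy symmetric cipher. The roles, users, and XOR "cipher" are purely illustrative stand-ins; a real system would use strong authentication and a real cipher such as AES.

```python
import datetime

# Toy role-based access control with an audit trail (point 2) and
# symmetric "encryption" of patient data (point 4).
ACCESS_RIGHTS = {"physician": {"read", "write"}, "clerk": {"read"}}
AUDIT_TRAIL = []

def access(user, role, action, record_id):
    """Check a request against the role's rights; log every attempt."""
    allowed = action in ACCESS_RIGHTS.get(role, set())
    AUDIT_TRAIL.append({
        "time": datetime.datetime.now().isoformat(),
        "user": user, "role": role, "action": action,
        "record": record_id, "granted": allowed,
    })
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not {action} {record_id}")
    return True

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher: encryption and decryption are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

access("dr_smith", "physician", "write", "patient-042")
try:
    access("jones", "clerk", "write", "patient-042")
except PermissionError:
    pass  # denied, but still logged
print(len(AUDIT_TRAIL), AUDIT_TRAIL[-1]["granted"])  # prints "2 False"
```

Note that the denied attempt is logged as well; an audit trail that records only successful accesses cannot support the verification the plan calls for.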
Data confidentiality, however, is not a black-and-white issue. In real life, and especially in health care, not every situation that will arise can be legislated for, nor can the normative requirements be applied in all situations. Common sense must prevail in such situations.
THE FUTURE
The utilization of information technology applications in
health care is influenced by progress in IT, medicine, clinical
practice, and health care delivery. These elements are highly
intertwined with one feeding the others. In IT the major
trends are the Internet, Web technology, and mobile communication. Web browsers are an easy way to provide uniform
user interfaces within an organization. Similarly, Extranets
are a way for the organization to be in contact with its clients
(citizens, patients) without compromising data confidentiality
and security (although there are still doubts about the security features of Web implementations). Mobile communication, fueled by the explosive growth in cellular phones and
value-added services, seems to offer a limitless range of applications.
However, these are just technologies. They need to be applied in a way that results in benefits for the clients/patients,
users, and organizations. User organizations should be careful not to be too enthusiastic about the possibilities offered by
new technologies. New technologies obey the life cycle of early adoption, first by technology enthusiasts and then by early adopters. These provide the testing ground to perfect the technology and to make it available at an affordable price to all. If the technology does not survive the tests of the early adopters, it dies (36).
The process approach and the need to manage care jointly
are pushing service providers toward collaboration in order to
meet the needs of their customers and solve the problems of
their patients effectively and efficiently. The scenario of Fig.
7 and Table 4 rests on the idea that IT can integrate data
Figure 7. A care scenario combining care processes, plans, and clinical guidelines, with quality improvement both at the organizational and medical research levels. (The figure links the body of medical knowledge, clinical guidelines, and care plans, with their expected outcomes, data to be collected, and planned resource needs, to the patient's care processes, the outcomes, the data collected, and the resources used, feeding quality improvement (TQM, CQI) and clinical research, within the organization and its resources.)
Table 4. Future trends in health care delivery: health promotion and wellness, independence and security; virtual care (front lines and centers of excellence); client-centered care; and seamless service chains, logistics, and community health information networks (CHIN).
and make it and medical knowledge available in the right format anywhere and at any time. From the IT viewpoint, health
care will become virtual and transparent.
The development of MIS applications that are transportable and integratable naturally starts with identifying
user needs. User involvement in the development, testing,
and evaluation phases is equally important. The concept of a
user, however, needs to be viewed as widely as possible. This
means that one should include all categories of users from
daily end users to management. It also means that efforts
should be made to involve more than one health care organization. It also means that when the resulting product is taken
into use, its costs are offset by benefits and/or savings in
other areas, thus justifying the investment in that specific
product. According to Gremy and Sessler, the key elements in this are respect for professional identity and a mutual effort toward mutual understanding (37). The medical professions
should be empowered by the MIS applications instead of being forced into one working pattern.
BIBLIOGRAPHY
1. ISO/IEC DIS 13235-1, Information technology - Open Distributed Processing - Open trading function - Part 1: Specification, ISO Standard.
2. ANSA, Advanced Networked Systems Architecture consortium
[Online]. Available http://www.ansa.co.uk/Research/
3. OMG, Object Management Group [Online]. Available http://
www.omg.org
4. W. Robson, Strategic Management and Information Systems: An
Integrated Approach, London: Pitman, 1994.
5. ICD-10, International Statistical Classification of Diseases and
Related Health Problems. 10th rev., Geneva: WHO, 1992. Also
[Online]. Available http://www.who.ch/hst/icd-10/icd-10.htm
6. SNOMED, The Systematized Nomenclature of Human and Veterinary Medicine [Online]. Available http://snomed.org
7. Cochrane, The Cochrane Collaboration [Online]. Available http://
hiru.mcmaster.ca/cochrane/default.htm
8. DRG, Health Care Financing Administration (HCFA) DRG version 3 [Online]. Available http://www.hcfa.gov/stats/pufiles.htm
9. HL7, Health Level 7 [Online]. Available http://www.mcis.duke.edu/standards/HL7/hl7.htm
10. DICOM, Digital Imaging and Communications in Medicine, ACR-NEMA Digital Imaging and Communications in Medicine (DICOM) Standard version 3.0 [Online]. Available http://www.xray.hmc.psu.edu/dicom/dicom home.html
11. ASTM, The American Society for Testing and Materials [Online].
Available http://www.astm.org
517
12. Edifact, United Nations directories for Electronic Data Interchange for Administration, Commerce and Transport [Online].
Available http://unece.org/trade/untdid and Electronic Data Interchange [Online]. Available http://www.premenos.com/standards
13. CEN TC 251, European Committee for Standardization, Technical Committee for Health Informatics [Online]. Available http://
www.centc251.org
14. CORBAmed, Object Management Group Activity Focusing on
Healthcare Services [Online]. Available http://www.omg.org/
corbamed/corbamed.htm
15. Microsoft HUG, Microsoft Healthcare Users Group [Online].
Available http://www.mshug.org
16. STAR, Seamless Telematics Across Regions, a European Union
Supported Project in the Telematics Applications Program, Sector Health [Online]. Available http://www.mira.demon.ac.uk/star
17. HANSA, Healthcare Advanced Networked System Architecture,
a European Union Supported Project in the Telematics Applications Program, Sector Health [Online]. Available http://
www.effedue.com/hansa/
18. Andover Group, Andover Working Group for Open Healthcare Interoperability [Online]. Available http://www.dmo.hp.com/
mpginf/andover.html
19. HISA, Medical Informatics - Healthcare Information Systems Architecture - Part 1: Healthcare Middleware Layer, European preStandard ENV 12967-1 [Online]. Available http://www.centc251.org/ENV/12967-1/12967-1.htm
20. HIF, Medical informatics, Healthcare Information Framework,
European preStandard prENV 12443 [Online]. Available http://
www.centc251.org/ENV/12443/12443.htm
21. Prestige, Patient Record Supporting Telematics and Guidelines,
A European Union Supported Project in the Telematics Applications Program, Sector Health [Online]. Available http://
www.rbh.thames.nhs.uk/rbh/itdept/r&d/projects/prestige.htm
22. MYCIN, E. H. Shortliffe, Computer-based Medical Consultations:
MYCIN, New York: Elsevier, 1976, and B. G. Buchanan and
E. H. Shortliffe, Rule-Based Expert Systems. The MYCIN Experiments of the Stanford Heuristic Programming Project, Reading,
MA: Addison-Wesley, 1984.
23. M. J. O'Neil, C. Payne, and J. D. Read, Read Codes Version 3: A user led terminology, Meth. Inform. Med., 34: 187–192, 1995; also
[Online].
Available
http://www.mcis.duke.edu/standards/
termcode/read.htm
24. UMLS, Unified Medical Language System [Online]. Available
http://www.nlm.nih.gov/research/umls/UMLSDOC.HTML
25. Galen, Generalised Architecture for Languages, Encyclopaedias
and Nomenclatures in medicine, a European Union Supported
Project in the Telematics Applications Program, Sector Health
[Online]. Available http://www.cs.man.ac.uk/mig/giu
26. EHCRA, Medical Informatics, Electronic Healthcare Record Architecture, European preStandard, prENV 12265 [Online]. Available http://www.centc251.org/ENV/12265/12265.htm
27. MRI, Medical Records Institute [Online]. Available http://
www.medrecinst.com
28. Telematics Applications Report of the strategic requirements
board, Program, Sector Health Care, 1998 [Online]. Available
http://www.ehto.be/ht projects/report board/index.html
29. Telemedicine Information Exchange, TIE [Online]. Available at
http://208.129.211.51/
30. P. Taylor, A survey of research in telemedicine. 1: Telemedicine systems, J. Telemedicine and Telecare, 4: 1–17, 1998.
31. P. Taylor, A survey of research in telemedicine. 2: Telemedicine services, J. Telemedicine and Telecare, 4: 63–71, 1998.
NIILO SARANUMMI
VTT Information Technology
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright # 1999 John Wiley & Sons, Inc.
X-ray imaging,
Computer tomography,
Radionuclide imaging,
Magnetic resonance imaging,
Ultrasound, and
The ecology of medical imaging.
The first five sections deal with the medical imaging modalities. For each modality, we will describe the physical and
imaging principles, discuss the role of image processing, and
X-Ray Imaging
X-ray imaging is the most widely used medical imaging modality. A radiograph is a projection image, a shadow formed by
radiation to which the body is partially transparent. In Fig.
1 we show radiographs of a hand. The quantitative relation
between the body and the radiograph is quite complex. Some
radiographs have extremely high resolution and contrast
range. Both static and dynamic studies are performed, and
contrast agents are used to enhance the visibility of internal
structures. Recent developments in instrumentation are computed radiography and solid-state image enhancement systems.
The most commonly used image processing technique is subtraction. Even though extensive research has been performed
on image enhancement and recognition of X-ray images, there
has been very little practical acceptance of these techniques.
[See the article on X-RAY APPARATUS.]
Physical Principles. X rays are electromagnetic waves
whose frequencies are higher than those of ultraviolet light.
The frequency of X rays is normally not denoted in hertz but
rather in photon energy in (kilo) electron volts, which is re-
(Figure: the attenuation coefficient (cm⁻¹) of fat, muscle, and bone versus photon energy; panel (a) shows an incident intensity I0 attenuated to I1 in passing through a thickness t.)
describe recent developments and future trends. The last section will present the clinical and technical setting of medical
imaging and image processing and indicate the relative importance of the various image modalities.
An article of this length cannot fully describe all medical
image processing areas. We have selected the more important
methods that are currently used in modern hospitals. We do
not review microscope imaging, endoscopy, or picture archival
and communications systems (PACS). Microscopy is the essential tool of pathology, and much image pattern recognition
work has been done in this area. Endoscopic imaging is optical visualization through a body orifice (e.g., bronchoscopy, viewing lung passages; sigmoidoscopy, viewing the intestine) or during minimally invasive surgery. PACS, comprising the electronic storage, retrieval, transmission, and display of medical images, is a very active field; it includes teleradiology (the remote interpretation of radiological images) and is described in a separate article on TELEMEDICINE.
I1 = I0 exp[−μ(V)t]   (1)
where V is the photon energy, μ(V) is the attenuation coefficient as a function of photon energy, and t is the thickness of the object.
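Eq. (1) can be evaluated numerically. The attenuation coefficients below are round illustrative numbers, not measured values for any particular photon energy.

```python
import math

# Numerical illustration of Eq. (1): the transmitted intensity I1 falls off
# exponentially with thickness t and attenuation coefficient mu.
def transmitted(i0, mu_per_cm, t_cm):
    """Beer-Lambert attenuation: I1 = I0 * exp(-mu * t)."""
    return i0 * math.exp(-mu_per_cm * t_cm)

# Illustrative (assumed) coefficients in cm^-1, not measured data.
for tissue, mu in [("fat", 0.18), ("muscle", 0.20), ("bone", 0.48)]:
    i1 = transmitted(1.0, mu, t_cm=5.0)
    print(f"{tissue}: I1/I0 = {i1:.3f}")
```

Even modest differences in μ translate into visibly different transmitted fractions over a few centimeters of tissue, which is what makes the projection image informative.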
(Figure: the components of a radiographic imaging system - X-ray source, filter, field-defining diaphragm, subject, and recorder.)
Figure 4. A schematic view of a blood vessel surrounded by soft tissue. The contrast depends on the size of the blood vessel and on the difference between attenuation coefficients (see text).

|I1 − I2|/I1 = |1 − exp[−(μ2 − μ1)d]| ≈ |(μ2 − μ1)d|   (3)
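The linear approximation in Eq. (3) holds when the exponent is small. The short check below compares the exact and approximate contrast for an assumed attenuation-coefficient difference and a few vessel diameters; both values are illustrative, not measured.

```python
import math

# Checking the small-argument approximation in Eq. (3): for a thin vessel,
# |1 - exp(-(mu2 - mu1) * d)| is close to |(mu2 - mu1) * d|.
def contrast_exact(delta_mu, d):
    return abs(1.0 - math.exp(-delta_mu * d))

def contrast_approx(delta_mu, d):
    return abs(delta_mu * d)

delta_mu = 0.05  # cm^-1, an assumed vessel/soft-tissue difference
for d in (0.1, 0.5, 2.0):  # vessel diameter in cm
    exact, approx = contrast_exact(delta_mu, d), contrast_approx(delta_mu, d)
    print(f"d = {d} cm: exact {exact:.5f}, linear approx {approx:.5f}")
```

For millimeter-scale vessels the two agree to several decimal places, confirming that the contrast grows essentially linearly with both the vessel diameter and the attenuation difference.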
(Figure residue: penumbra formation, showing the source, subject, incident gamma-ray photons, recorder, and film in panels (a) and (b), with the penumbra at the image edges.)
indicates suspicious areas to a radiologist, who makes the final decision. This type of interactive diagnosis system has the
promise of combining a radiologist's extensive knowledge and ability to interpret complex images with a machine's capability to perform a thorough, systematic examination of a large
amount of data.
Computed Tomography
A tomogram is an image of a section through the body. Tomographic X-ray images were first formed in the 1930s by moving the source and film in such a way that one plane through the body remained in focus while others were blurred, but they were used for only very specialized investigations. X-ray computed tomography, abbreviated CT or CAT, was introduced in 1973 and gained almost instantaneous acceptance.
In CT, transmitted X-ray intensity measurements are made for a large number of rays passing through a single plane in the body. These measurements are subsequently processed to compute (reconstruct) the values of the attenuation coefficient at each point in the plane. This, in effect, produces a
map of the density through the tomographic section plane.
The geometric arrangement for performing these measurements is shown in Fig. 7. A ring of X-ray detectors surrounds
the body being scanned, and an X-ray generator rotates in an
orbit between the body and the detector ring. For each position of the source, intensities of the radiation passing through
the body and impinging on the detectors are digitized and
captured by a computer. A sectional image is computed from
the data collected while the source rotates through a full
circle.
Medical CT images typically have a resolution of 500 × 500 pixels, which is about the same as that of television, and
much lower than the resolution of radiograms. Radiograms
are projection images and superpose many anatomical structures onto one plane, but all structures in a tomogram are
shown in their correct geometrical relationship. Figure 8
shows a CT scan through the upper abdomen. The spleen
(lower right) contains a hematoma (arrow), a pool of blood
caused by an injury. The hematoma density is only about 5%
higher than that of surrounding tissues, and it would not be
visible in a conventional radiograph.
[Figure 7. Geometry of CT data collection: an X-ray source rotates about the subject.]
[Figure 9. A scintillation camera: a collimator, scintillating crystal, and photomultiplier tubes view the subject.]
CT reconstruction algorithms are perhaps the most successful use of image processing in medicine. Unlike X-ray projection images, which are inherently analog, CT images are
inherently digital and are impossible to obtain without sophisticated signal and image processing technology.
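The reconstruction step described above is classically performed by filtered back-projection. The sketch below is a deliberately simplified version (nearest-neighbor rotation, parallel-beam geometry) rather than a clinical algorithm, but it shows the two essential operations: ramp-filtering each projection and smearing it back across the image plane:

```python
import numpy as np

def radon(image, angles):
    """Forward model: for each angle, rotate the image and sum along
    columns, giving the transmitted-ray line integrals (a sinogram)."""
    n = image.shape[0]
    c0 = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.zeros((len(angles), n))
    for k, th in enumerate(angles):
        c, s = np.cos(th), np.sin(th)
        # nearest-neighbor rotation keeps the sketch short
        xr = np.rint(c * (xs - c0) - s * (ys - c0) + c0).astype(int)
        yr = np.rint(s * (xs - c0) + c * (ys - c0) + c0).astype(int)
        ok = (xr >= 0) & (xr < n) & (yr >= 0) & (yr < n)
        rot = np.zeros_like(image)
        rot[ys[ok], xs[ok]] = image[yr[ok], xr[ok]]
        sino[k] = rot.sum(axis=0)
    return sino

def fbp(sino, angles):
    """Filtered back-projection: ramp-filter each projection in the
    frequency domain, then back-project it along its ray direction."""
    n = sino.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    c0 = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for k, th in enumerate(angles):
        t = np.rint(np.cos(th) * (xs - c0) + np.sin(th) * (ys - c0) + c0).astype(int)
        ok = (t >= 0) & (t < n)
        recon[ok] += filt[k][t[ok]]
    return recon * np.pi / len(angles)
```

Reconstructing a small disk phantom from 60 projections recovers a dense region at the correct location; production scanners use far more projections, fan- or cone-beam geometry, and apodized filters.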
Radionuclide Imaging
Radionuclides are atoms that are unstable and decay spontaneously. Certain decay reactions produce high-energy photons, which are called gamma rays. There is no difference between gamma and X rays; the difference in terminology
reflects the process that produces the radiation, and not its
physical nature. In nuclear medicine a radiopharmaceutical
agent (a compound containing a gamma-ray-producing radionuclide) is injected into a body. Images formed from the emitted radiation can be used to visualize the location and distribution of the radioactivity. Radiopharmaceutical agents are
designed to migrate to specific structures and to reflect the
functioning of various organs or physiological processes. Consequently, radionuclide imaging produces functional images,
in contrast to the anatomical images formed by conventional
X rays. Some nuclear medicine procedures require only bulk
measurements of radioactivity; for example, the time-course
of radioactively labeled hippuric acid in the kidney is an indication of kidney function. In this article, we will concentrate
on the procedures that require image formation. We will describe the operation of a scintillation camera, the most commonly used device for radionuclide imaging. We will also discuss single-photon emission computed tomography (ECT), an
imaging method that produces three-dimensional data. We
will not discuss positron emission tomography (PET) scanners, which have unique technical and biological advantages
over ECT but are more expensive and are used primarily for
research.
Projection images of radionuclide distributions are most
commonly formed with a scintillation camera, which consists
of two major components: a collimator and a position-encoding radiation detector, as shown schematically in Fig. 9. The
collimator is made from a highly absorbing substance, such as lead, and contains a set of parallel holes that admit only radiation normal to the collimator. These gamma rays fall on a scintillating crystal, and the resulting flashes of light are detected by an array of photomultiplier tubes that encode the position of each event.
Figure 10. Schematic diagram of an Emission Computed Tomography (ECT) system. Three gamma cameras collect projection data from
the subject while rotating about a common center.
[Figure 12. Schematic of a magnetic resonance imager: dc magnet, gradient and RF coils, and pickup coil.]
to the dc field. If this field is at the Larmor frequency, nuclear magnets tip away from the direction of the dc field. The deflection angle from equilibrium (flip angle) is proportional to the product of the RF field strength and duration.
Magnetic Resonance Imaging Principles. A schematic diagram of a magnetic resonance imager is shown in Fig. 12. The
bulkiest and most expensive component is the dc magnet. The
RF coils excite the MR signal, and the pickup coils detect it.
To form an image, gradient coils impose linearly varying magnetic fields that are used to select the region to be viewed
and to modulate the signal during readout. In a typical data
collection step, the RF field is applied in the presence of a
gradient field (slice selection gradient), so that only one slice
through the body is at the Larmor frequency. After the magnetic moments are excited, another gradient field is applied (readout gradient) so that the moments in different portions of
the slice radiate at different frequencies. The signal from the
pickup coil is amplified and digitized. The Fourier transform
of this signal gives a map of the amount of magnetic materials
at regions of different gradient field value. A number of such
data collection steps, each with a different readout gradient,
are required to collect enough data to produce a sectional image. This sequence of data collection is programmed by
applying pulses of current to slice selection coils, RF coil, and
readout gradient coils. A magnetic resonance imager is a flexible instrument: image characteristics, such as slice thickness and resolution, are determined by this pulse sequence. The
slice orientation, thickness, and resolution can be varied.
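The Fourier relationship between the pickup signal and the proton distribution can be illustrated in one dimension. Under a readout gradient, a spin at position x radiates at a frequency proportional to x, so the summed pickup signal is the Fourier transform of the density profile; an inverse FFT recovers it. The numbers below are toy values, not scanner parameters:

```python
import numpy as np

# 1-D proton-density profile along the readout direction (arbitrary units)
n = 256
x = np.arange(n)
density = np.zeros(n)
density[100:140] = 1.0          # a slab of water-rich tissue

# Under the readout gradient, the spin at position x contributes a tone
# exp(-2j*pi*k*x/n) at "time" sample k, so the pickup signal is the
# discrete Fourier transform of the density profile.
k = np.arange(n)
signal = np.array([np.sum(density * np.exp(-2j * np.pi * kk * x / n)) for kk in k])

# Image reconstruction: inverse FFT of the acquired signal recovers the profile
profile = np.fft.ifft(signal).real
```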
Image intensity is basically proportional to proton concentration, or the amount of water in different tissues. Figure 13
ULTRASOUND
through the body. Reflections (echoes) from tissues radiate toward the transducer, which converts them to electrical energy. An image is formed by collecting a set of echoes from
beams sent out along parallel lines. We will describe the main
physical properties that govern propagation of sound in tissues and relate them to factors that affect the quality of images obtained with ultrasound. [See also the article on ULTRASONIC MEDICAL IMAGING.]
Propagation of ultrasound through tissue is governed by
the acoustical wave equation, and the wavelength of acoustical waves places critical limits on the resolution of ultrasound
scanners. Wavelength, frequency, and acoustic velocity are related by λf = c; in soft tissues c ≈ 1500 m/s, so that, for example, the wavelength at 5 MHz is 0.3 mm. The resolution
of the scanner in the depth direction is limited by the duration of the acoustic pulse and in the lateral direction by the
width of the sound beam. In Fig. 15 we show the schematic
pattern of acoustical energy produced by a concave (focused)
transducer excited with a constant frequency. The transducer
has diameter D and the focus (center of curvature of the surface) is at a distance F from the transducer. The field pattern
is quite complex: approximately, the acoustic beam converges toward the focus, where its diameter is w ≈ 0.6λF/D. The width of the beam is relatively constant over a depth of field d ≈ 2wF/D. To both sides of the region of focus the beam width increases linearly with distance. To obtain high resolution (small w), one should use a short wavelength (high frequency) and/or a large transducer diameter D, which leads to a small depth of field. This physical limitation can be overcome
by the principle of dynamic focusing with an array transducer. A flat transducer is decomposed into elements. When
an echo is received, signals from various transducer elements
are delayed by appropriate amounts, producing the same sort
of focusing obtained with a curved transducer surface. These
delays are varied with time, allowing focus to be maintained
for a large range of depth.
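The beam-geometry formulas and the element delays used in dynamic focusing can be sketched as follows. The delay rule (equalizing the path length from the focal point to every element) is the standard delay-and-sum idea; the specific array layout and focal depth below are assumed, for illustration:

```python
import math

C = 1500.0          # speed of sound in soft tissue, m/s

def beam_geometry(f_hz, focal_len, diameter):
    """Approximate focal beam width and depth of field for a focused
    transducer, per the relations in the text: w ~ 0.6*lambda*F/D and
    d ~ 2*w*F/D."""
    lam = C / f_hz
    w = 0.6 * lam * focal_len / diameter
    d = 2 * w * focal_len / diameter
    return lam, w, d

def focusing_delays(element_x, focus_depth):
    """Receive delays (s) that time-align echoes from a point at
    focus_depth on the array axis; sweeping focus_depth as echoes
    return implements dynamic focusing."""
    paths = [math.hypot(x, focus_depth) for x in element_x]
    longest = max(paths)
    return [(longest - p) / C for p in paths]

# 5 MHz transducer, F = 6 cm, D = 2 cm (assumed example values)
lam, w, d = beam_geometry(5e6, 0.06, 0.02)
# Three-element toy array at x = -1, 0, +1 cm focused at 5 cm depth
delays = focusing_delays([-0.01, 0.0, 0.01], 0.05)
```

The center element sees the shortest path to the focus and therefore receives the largest delay, exactly as a curved transducer surface would impose geometrically.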
In practice, only soft tissues (muscle, fat, and fluid-filled
cavities) can be imaged with acoustical imaging. Bone and
Figure 15. Depth-dependent beam width in a focused ultrasound transducer.
Modality                      Subspecialities                        Studies
Radiography                   Chest, bone, gastrointestinal,         157,958
                              genitourinary, neurological,
                              neurosurgical, angiography,
                              mammography
Computer tomography           Body, neurological (head)               14,343
Nuclear medicine                                                      10,772
Magnetic resonance imaging                                            12,380
Ultrasound                                                            15,844
The work reported in this section is supported by the National Cancer Institute and the National Institutes of Health under grant numbers CA52823 and P41 RR01638.
signal processing have been in design and construction of imaging devices: indeed, novel modalities such as CT, radioisotope imaging, MRI, and acoustic scanning are impossible
without signal processing. There is a modest but growing application of post-processing, the use of image processing techniques to produce novel images from the output of conventional imaging devices. The most common medical images,
radiograms, are largely formed with analog methods. However, there is great promise for digital storage and transmission of these images. This is likely to be a large area for development and application of image compression techniques.
There is also the promise, to date largely unrealized, of using
pattern recognition techniques to improve on human interpretation of medical images.
As in other biomedical areas, medical image processing is
driven by the needs and constraints of the health care system.
Innovations must compete with existing methods and require
extensive clinical testing before they are accepted.
In conclusion, the fusion of several indices and local decisions leads to a more reliable global decision mechanism to
improve on diagnostic imaging.
In this part of the article, we gave a few examples of medical applications that make direct use of various aspects of machine vision technology. We also showed how to formalize
many medical applications within the paradigm of image understanding involving low-level as well as high-level vision.
The low-level vision paradigm was illustrated by considering the problems of extracting a soft-tissue organ's structure from an ultrasound image of the organ, and of registering a set of 2-D histological images of rat brain sectional material with a 3-D brain. For the high-level vision paradigm, we considered the problem of combining local decisions based on different aspects or features extracted from an ultrasound image
of a liver to arrive at a more accurate global decision.
BANU ONARAL
OLEH TRETIAK
FERNAND COHEN
Drexel University
MICROELECTRODES. See BIOELECTRODES; MICROPIPETTES.
MICROPIPETTES
Microelectrodes have traditionally been developed and
used to measure voltage inside and outside biological
cells, and much of our understanding of the nervous system has come from these recordings. Microelectrodes have
been adapted with some clever arrangements to measure
membrane voltage and ion concentrations simultaneously.
More recently, microelectrodes capable of measuring partial pressures of various gases and concentrations of physiologically relevant chemical substances, such as neurotransmitters, have also been designed. The advantage of these microelectrodes is that they allow direct measurements within biological tissues to give information about
the local microscopic milieu. Therefore, these electrodes
must be small to minimize interference with physiological function and the damage generated by their insertion. Although these small structures are more fragile than
macrosensors, they usually have better time constants.
MEMBRANE POTENTIALS
All cells in the body have a nucleus and a cytoplasm surrounded by a lipid membrane. The cytoplasm is a good conductor with a resistivity varying between 50 Ω·cm and 300 Ω·cm. The cytoplasm is separated from the outside of
the cell by a thin (7.5 nm to 10 nm), resistive and capacitive membrane composed almost entirely of proteins and
lipids. Electrical potentials exist across this membrane in
practically all cells of the body. The resting potential difference is generally around 70 mV, and the inside of the
membrane is negative with respect to the outside (See Bioelectric potentials). This resting potential is generated
by the equilibrium of two forces, the diffusion of ions across
the membrane through protein channels and the electrical
force generated by accumulation of ions at the membrane.
The resting potential is sustained by pumps that maintain
diffusion gradients across the membrane. Some cells, such
as muscle fibers and neurons, are excitable and generate
large voltage signals (about 100 mV) either spontaneously
or when stimulated. Figure 1 shows the resting potential and action potential in a neuron. The action potential
is generated by nonlinear voltage-sensitive ion channels.
Sodium (Na) channels are normally at rest and open when
the membrane voltage reaches a threshold value. A large
influx of Na current depolarizes the membrane to positive potentials. The Na channels turn themselves off (inactivation), and the potassium channels then open, bringing the membrane voltage down to its resting value (1). This action
potential is an all-or-none phenomenon and carries information from sensory inputs to the brain, from the brain to
the muscles, and within various parts of the brain. Therefore, the measurement of the membrane potential is crucial to our understanding of the activity of excitable and
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
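The balance of diffusion and electrical force described above sets, for each ion, an equilibrium (Nernst) potential. A minimal sketch, using typical mammalian ion concentrations that are assumed here for illustration:

```python
import math

R, F, T = 8.314, 96485.0, 310.0      # gas constant, Faraday constant, body temp (K)

def nernst(z, c_out, c_in):
    """Equilibrium potential (V) for an ion of valence z: the membrane
    voltage at which the electrical force exactly balances the
    diffusion gradient, E = (RT/zF) * ln(c_out / c_in)."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

# Typical mammalian concentrations in mM (assumed, for illustration only)
E_K = nernst(+1, 4.0, 140.0)     # potassium equilibrium, near the resting potential
E_Na = nernst(+1, 145.0, 12.0)   # sodium equilibrium, strongly positive
```

The resting membrane sits close to E_K because potassium channels dominate at rest; when sodium channels open during the action potential, the membrane is driven toward the positive E_Na, which is the depolarization described in the text.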
[Figure 1. Intracellular recording with a micropipette: a glass capillary filled with KCl and fitted with an Ag/AgCl pellet measures the membrane voltage against an Ag/AgCl reference electrode in the extracellular medium; the trace shows the resting level near −70 mV and the action potential, in mV versus ms.]
Figure 2. Microelectrodes. (a) Micropipette puller: a glass capillary tube is heated and pulled. (b) Two small-diameter electrodes are produced. (c) Supported metal microelectrode with glass insulation.
across the interface below threshold values for these reactions and at least partially reverse some of the reactions.
For microelectrodes designed to stimulate neural tissue,
the charge-carrying capacity is the most important characteristic. This capacity is proportional to the surface area
of the electrode. To increase the amount of charge which
can be delivered by a small electrode, a layer of metal oxide (iridium oxide for an iridium electrode) is deposited
on the surface. The oxide layer formation or activation is
achieved by cycling the electrode between the anodic and
cathodic voltages which generate electrolysis. Iridium oxide is a conductive layer which exchanges an electron for
a hydroxyl ion across the interface. The charge-carrying
capacity of the electrode is effectively increased by adding
an iridium oxide layer. The impedance of the electrodes
is also reduced by this activation process by an order of
magnitude (6). The impedance decreases as a function of
frequency and is best modeled by a capacitor in series with
a resistor which has a 300 Hz cutoff.
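The series resistor-capacitor model just mentioned is easy to evaluate numerically. The component values below are hypothetical, chosen only so that the 1/(2πRC) cutoff lands at the 300 Hz quoted in the text:

```python
import math

def series_rc_impedance(f_hz, r_ohm, c_farad):
    """Magnitude of Z = R + 1/(j*2*pi*f*C) for the series-RC
    electrode model: capacitive (falling with frequency) below the
    cutoff, flattening toward R above it."""
    xc = 1.0 / (2 * math.pi * f_hz * c_farad)
    return math.hypot(r_ohm, xc)

# Hypothetical electrode values giving a 300 Hz cutoff (assumed, not measured)
R_e = 100e3                               # 100 kOhm series resistance
C_e = 1.0 / (2 * math.pi * R_e * 300.0)   # ~5.3 nF interface capacitance
```

At the cutoff the capacitive reactance equals R, so |Z| = R√2; well above the cutoff the impedance is essentially the series resistance alone.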
Metal electrodes are insulated with various types of
polymeric materials, such as Teflon or Parylene-C. Insulation at the tip of the electrode is removed with electric fields or lasers to burn the material away. By combining the properties of glass and metals, supported metal electrodes are fabricated [Fig. 2(c)]. A micropipette is filled with a metal which has a melting point below that of glass and is pulled to a fine tip. Thin metal films can also be deposited onto a
solid glass rod pulled to a sharp point. Then the electrode is
insulated with a polymer, except at the tip, to form a sharp
microelectrode (9).
ELECTRICAL PROPERTIES OF MICROELECTRODES
Micropipettes and metal microelectrodes consist of a conductive cylindrical core surrounded by an insulating layer made of polymer or glass. The electrode is inserted within a tissue which is a relatively good conductor (about 100 Ω·cm). The electrode is pulled to a very small diameter, and therefore the narrow shank region of the electrode generates a large resistance. This region is usually tapered and, assuming a small amount of taper, the resistance is given by

Re = 4ρL/(πd²)

where ρ is the specific resistivity of the electrode's conducting material, L is the length of the electrode shank, and d is the internal diameter of the electrode. A distributed capacitance is also formed along the immersed part of the shank between the interior and the extracellular (or intracellular) space. This capacitance, again assuming a small amount of taper, is approximately that of a coaxial capacitor,

Ce = 2πε0εrL/ln(D/d)

where εr is the relative permittivity of the insulating wall and D is its outer diameter.
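Assuming the shank behaves as a thin electrolyte-filled cylinder (Re = 4ρL/πd²) with a coaxial wall capacitance (Ce = 2πε0εrL/ln(D/d)), representative magnitudes can be estimated. All numerical values here are assumed, illustrative figures, not measurements:

```python
import math

EPS0 = 8.854e-12    # vacuum permittivity, F/m

def shank_resistance(rho, length, d_inner):
    """Re = 4*rho*L/(pi*d^2): resistance of a thin cylindrical shank
    filled with electrolyte of resistivity rho (taper neglected)."""
    return 4.0 * rho * length / (math.pi * d_inner ** 2)

def shank_capacitance(eps_r, length, d_outer, d_inner):
    """Coaxial-capacitor estimate of the distributed wall capacitance:
    Ce = 2*pi*eps0*eps_r*L / ln(D/d)."""
    return 2 * math.pi * EPS0 * eps_r * length / math.log(d_outer / d_inner)

# Assumed values: 3 M KCl filling (~0.037 ohm*m), 5 mm immersed shank,
# 1 um mean inner diameter, 2 um outer diameter, glass eps_r ~ 5
R_e = shank_resistance(rho=0.037, length=5e-3, d_inner=1e-6)
C_e = shank_capacitance(eps_r=5.0, length=5e-3, d_outer=2e-6, d_inner=1e-6)
```

With these assumptions Re comes out in the hundreds of megaohms and Ce in the low picofarads, which is why micropipettes behave as high-impedance, low-pass recording elements.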
Figure 4. Circuits for microelectrodes. (a) Current injection and
voltage measurement. (b) Bridge circuit for electrode resistance
measurement and compensation. (c) Negative capacitance compensation.
in parallel with the capacitance of the electrode. Then the net capacitance is decreased, improving the frequency response and the rise time of the electrode-amplifier recording system (12). To implement the circuit, the amplifier (A) is a buffer and is the same as in Fig. 4(a). The gain of the Ac amplifier is made variable and greater than 1.
Microelectrodes have also been used for voltage and
patch clamping of cells. These techniques allow researchers
MICROELECTRODE ARRAYS
The silicon technology used to make integrated circuits
can be adapted to manufacture arrays of microelectrodes.
The activity of single cells can be recorded with the micropipette technology discussed previously. Neuroscientists are now increasingly interested in recording simultaneously from a large number of cells. Moreover, by stimulating a large number of cells in the spinal cord or in
the brain selectively, it should be possible, in principle,
to restore motor function in paralyzed patients or vision
in blind patients, for example. Therefore, multiple arrays
of electrodes capable of recording or stimulating the nervous system are clearly important to understanding nervous system function and to designing neural prostheses.
Three silicon-based types of microelectrode arrays have
been developed: (1) A 1-D beam electrode, where a thin-film platinum-iridium layer is deposited on a thin silicon substrate (15). This thin substrate provides a surprising amount of flexibility and can be utilized for the leads and
the electrode pads. (2) A 2-D array for recording the activity
of neurons grown in cultures and axons in nerves. A thin-film microelectrode array is made of gold electrodes covered
with platinum black on a silicon substrate. The assembly
is built into the bottom of a neuron culture dish. Neurons
grow over these electrodes and make direct contact with
them (16). In another design, micromachining of a silicon
wafer generates a matrix of 64 square holes with a side
dimension of 90 m. Gold pads and leads are deposited
near each hole. Then the thin wafer is inserted between
the two sides of a severed nerve. As the axons grow inside
the holes, it is possible to record selectively from small groups of axons (17). (3) A 3-D array for cortical recording
and stimulation. The 1-D beam electrode discussed previously can be assembled to form three-dimensional arrays.
The longitudinal probes are inserted perpendicularly into a
silicon platform. The leads from each probe are transferred
to the silicon probe and are routed to a digital processing
unit (18). Current work also involves including low noise
amplification directly on the platform. Another implementation involving micromachining and etching techniques was used to fabricate a 10 × 10 electrode array. One hundred conductive needlelike electrodes (80 µm at the base and 1.5 mm long) are micromachined on a 4.2 mm × 4.2 mm
substrate (19). Aluminum pads are deposited on the other
side of the substrate and make contact with each needle
electrode. The tips of the electrodes are coated with gold or
platinum. A high-speed pneumatic device is used to place
the array into cortical tissue because the high density of
the electrodes makes insertion difficult. Then the microelectrode arrays are available for recording from a large
number of cortical sites.
19. K. E. Jones, P. K. Campbell, and R. A. Normann, A glass/silicon composite intracortical electrode array, Ann. Biomed. Eng., 20: 423–437, 1992.
20. D. W. Arrigan, Nanoelectrodes, nanoelectrode arrays and their applications, Analyst, 129 (12): 1157–1165, 2004.
21. F. Patolsky, B. P. Timko, G. Yu, Y. Fang, A. B. Greytak, G. Zheng, and C. M. Lieber, Detection, stimulation, and inhibition of neuronal signals with high-density nanowire transistor arrays, Science, 313 (5790): 1100–1104, 2006.
22. R. M. Penner, M. J. Heben, T. L. Longin, and N. S. Lewis, Fabrication and use of nanometer-sized electrodes in electrochemistry, Science, 250: 1118–1121, 1990.
23. J. J. Watkins, J. Chen, H. S. White, H. D. Abruna, E. Maisonhaute, and C. Amatore, Zeptomole voltammetric detection and electron-transfer rate measurements using platinum electrodes of nanometer dimensions, Anal. Chem., 75 (16): 3962–3971, 2003.
DOMINIQUE M. DURAND
Case Western Reserve
University, Cleveland, OH
PACEMAKERS
Today, implantable cardiac pacemakers are used with exceptional success as a long-term, safe, and reliable form of therapy for different kinds of cardiac rhythm disturbances. Numerous pioneering technological developments, such as
highly integrated circuits or lithium batteries, are significant
milestones in this field. During the short history of the pacemaker, beginning with the first implantation in 1958, a turbulent technical evolution has occurred. The early devices
were heavy, simple impulse generators with two transistors
and short operating lifetimes. These could pace only the right
ventricle asynchronously with impulses of a constant rate and
amplitude. Modern, rate-adaptive dual-chamber pacemakers
possess highly complex integrated circuits with several hundred thousand transistors. They can pace in the right atrium
and the ventricle, monitor the intrinsic cardiac activity, adapt
automatically to changing needs of the heart, be programmed
through inductive telemetry in a variety of ways, and guarantee an operating lifetime of 8 years or longer. These devices
now weigh approximately 25 g and are getting increasingly
smaller, given continued advances in electronics and battery
technology. Multidisciplinary cooperation between the diverse
fields of physiology, physics, electrochemistry, electronics, and
materials science has made these impressive developments possible. Consequently, the goal of pacemaker therapy now exceeds that of a merely life-maintaining function; it has instead become the reestablishment of a high quality of life for the patient.
Through different pacing functions, modern, rate-adaptive,
dual-chamber pacemakers can adjust to an optimal mode of
pacing and pacing rate at any time. Thus, interference between the pacemaker stimulus and the cardiac intrinsic rate
is prevented through continuous intracardiac electrogram
(IEGM) monitoring. If the intrinsic activity is absent for a
certain anticipated interval, the pacemaker commences pacing to prevent a falling heart rate or cardiac arrest. Today,
all single- and dual-chamber pacemakers are designed to pace
on demand. The increasingly used rate-adaptive pacemakers can control the stimulation rate with a physiological
sensor if a sufficient sinus rhythm is no longer present. A rate
increase is additionally accompanied by a dynamic shortening
of the atrioventricular (AV) delay to guarantee atrial and ventricular synchrony. This allows for the best possible pumping
performance at any time. In addition, modern pacemakers record large quantities of diagnostic information in an internal
memory over long periods of time. At follow-up, the physician
can use the programmer to analyze data that greatly contribute to therapy optimization. Rhythm disturbances requiring
treatment and the basic, technical solutions in a modern cardiac pacemaker will be described to augment understanding
of the versatile functions of the pacemaker.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
[Figure: classification of cardiac rhythm disturbances. Impaired pacemaking is subdivided into normotopic disturbances (sinus tachycardia, sinus bradycardia, sinus arrhythmia) and ectopic disturbances, which are either passive (AV escape rhythm, ventricular escape rhythm) or active (extrasystoles, ventricular tachycardia, ventricular flutter, ventricular fibrillation). Impaired conduction is subdivided into sinoatrial, atrioventricular, and intraventricular disturbances.]
To further subdivide these, a medical differentiation is made between conduction disturbances of the first degree (delayed
conduction), the second degree (occasional interruption), and
the third degree (total interruption). A first-degree AV block
is diagnosed if the AV conduction time, the PQ interval on
the ECG, is greater than 210 ms. For second-degree AV block,
two types exist. In type I (Wenckebach or Mobitz I block), the
PQ interval is extended with each heartbeat, until a ventricular contraction is missing. In type II (Mobitz II block), the PQ
interval is constant as a rule, whereby individual atrial activity is either irregularly or regularly not continued on to the
ventricle (e.g., in a 2 : 1, 3 : 1, 4 : 1 rhythm). A total block of
conduction is called third-degree AV block. Here, a ventricular substitute rhythm with a much lower rate is normally formed within a few seconds below the block, but it secures patient survival. The situation referred to as an Adams–Stokes attack occurs when there is temporarily diminished cerebral circulation resulting from an acutely occurring rhythm disturbance, as a corollary to some of the conditions described previously. Depending on the type and
length of the rhythm disturbance, symptoms of failure include
intermittent dizziness, loss of consciousness, cramps, and arrested breathing. If no substitute rhythm has formed after a
3 to 4 min arrest, even death can occur. Because of the high
level of technology attained in modern pacemakers, an artificial cardiac pacemaker can treat practically every kind of
bradycardic rhythm disturbance today. Also, in another category of electronic therapeutic devices, the implantable defibrillators are increasingly able to treat tachycardic rhythm
disturbances. Thus, rectifying a slow and a fast heart rate is
now achievable with state-of-the-art electronic technology.
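The degrees of AV block described above map naturally onto a simple rule over a series of beats. The hypothetical helper below follows the textual criteria (first degree if every PQ interval exceeds 210 ms; second degree if conduction is intermittent; third degree if nothing is conducted) and is a teaching sketch, not a clinical classifier:

```python
def classify_av_conduction(pq_intervals_ms):
    """Rough AV-conduction classification for a beat series. An entry of
    None marks an atrial beat not conducted to the ventricle (dropped
    QRS); numeric entries are PQ intervals in milliseconds."""
    conducted = [pq for pq in pq_intervals_ms if pq is not None]
    if not conducted:
        return "third-degree block (no conduction)"
    if any(pq is None for pq in pq_intervals_ms):
        return "second-degree block (intermittent conduction)"
    if all(pq > 210 for pq in conducted):
        return "first-degree block (PQ > 210 ms)"
    return "normal AV conduction"
```

A Mobitz II 2:1 rhythm, for instance, would appear as alternating numeric and None entries and be reported as second-degree block.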
THE DEVELOPMENT OF PACEMAKER THERAPY
The roots of contemporary pacemaker therapy can be traced
back to the eighteenth century. In 1791, Luigi Galvani documented, for the first time, experiments in which the heart and
muscle tissue were electrically stimulated. In the nineteenth
century, additional experiments, such as those conducted by Bichat in 1800, electrically stimulated the hearts of animals and of decapitated humans.
The syndrome of bradycardia and syncope, which would be
The NASPE/BPEG generic (NBG) pacemaker code:

Position I, Chamber(s) Paced: 0 = None; A = Atrium; V = Ventricle; D = Dual (A + V); S = Single (A or V)
Position II, Chamber(s) Sensed: 0 = None; A = Atrium; V = Ventricle; D = Dual (A + V); S = Single (A or V)
Position III, Response to Sensing: 0 = None; T = Triggered; I = Inhibited; D = Dual (T + I)
Position IV, Programmability, Rate Modulation: 0 = None; P = Simple programmable; M = Multiprogrammable; C = Communicating; R = Rate modulation
Position V, Antitachyarrhythmia Function(s): 0 = None; P = Pacing (antitachyarrhythmia); S = Shock; D = Dual (P + S)
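The positional letter code can be expanded mechanically, which makes mode strings such as VVI or DDDR self-describing. A small sketch (the table data follow the NBG positions listed above; the helper itself is hypothetical):

```python
# Positions and letter meanings of the NASPE/BPEG (NBG) pacemaker code
NBG = [
    ("paced",     {"0": "none", "A": "atrium", "V": "ventricle",
                   "D": "dual (A+V)", "S": "single (A or V)"}),
    ("sensed",    {"0": "none", "A": "atrium", "V": "ventricle",
                   "D": "dual (A+V)", "S": "single (A or V)"}),
    ("response",  {"0": "none", "T": "triggered", "I": "inhibited",
                   "D": "dual (T+I)"}),
    ("program",   {"0": "none", "P": "simple programmable",
                   "M": "multiprogrammable", "C": "communicating",
                   "R": "rate modulation"}),
    ("antitachy", {"0": "none", "P": "pacing", "S": "shock",
                   "D": "dual (P+S)"}),
]

def decode_mode(code):
    """Expand a pacemaker mode string such as 'VVI' or 'DDDR' into a
    dictionary; trailing positions may be omitted, as is common in
    clinical shorthand (zip stops at the shorter sequence)."""
    return {name: table[letter] for (name, table), letter in zip(NBG, code.upper())}
```

Decoding "DDDR" yields dual-chamber pacing and sensing, a combined triggered/inhibited response, and rate modulation, matching the description of modern rate-adaptive dual-chamber devices in the text.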
Figure 2. Flow chart of pacing mode selection and its dependence on the rhythm disturbance.
Ventricular single-chamber pacing (VVI) should be applied only for chronic atrial fibrillation.
Otherwise, DDD(R) or, if AV conduction is still present, AAI(R) modes are indicated.
ally and uses the natural intrinsic activity to pace the ventricle, functions in the VAT mode. However, it is often necessary
to pace and sense in both chambers (DDD, DDDR). A flow
chart in Fig. 2 illustrates the selection of the most important
pacing modes given the profile of the disease. With the exception of chronic atrial fibrillation, which today still requires
implantation of single-chamber systems (VVI, VVIR), rate-adaptive dual-chamber pacemakers are becoming more and
more the standard for providing therapy.
ELECTRODES
Equally important progress has been made in electrode technology in parallel with semiconductor technology. Through
new electrode technologies and coating methods, decisive
progress was achievable for better electrode-tissue interfacing. Improved ingrowth characteristics and a drastic reduction of the pacing threshold, and thus the energy consumption, are examples of these innovations. The pacemaker
impulse is conducted through a special electrode (or in the
case of a dual-chamber pacemaker, through two separate electrodes) to the atrial and/or ventricular myocardium to trigger
depolarization. To this end, principally only the chambers on
the right side are considered (right atrium or right ventricle).
Introducing the pacemaker electrode(s) is feasible only with
a low-pressure system approach. One such case would be a
transvenous entry, through the right half of the heart. Excitation of the tissue is done through the electrically conductive
tip of the electrode, which is formed or surface-treated for optimal charge transmission (3). The geometric (macroscopic)
[Figure: a bipolar transvenous pacing lead, showing the tip connection to the inner conductor, the ring connection to the outer conductor, the insulation, helical coil conductors, the tip electrode (cathode), the ring electrode (anode), and the tines.]
high internal losses (10 to 20%) limited the actual service lifetime (5). Various other solution concepts such as biological,
piezoelectric, biogalvanic, and even nuclear batteries were researched as well. With the exception of the nuclear battery,
none of these methods went beyond the experimental stage.
Since 1972, different types of lithium batteries have been
used and in the meantime have come to be the standard solution. Although improved rechargeable generators have been
available since the early 1970s, this has not yet had an impact on present pacemaker application. This is because lithium batteries have enabled pacemaker operation for more
than 10 years because of their significantly higher energy
density and low internal energy dissipation. An additional advantage to the lithium battery is that of no gas evolution. This
allows pacemakers to be encapsulated largely by titanium for
a hermetic seal. Electronic circuit reliability has therefore
been significantly improved because circuit failures caused by
released hydrogen have been eliminated. Of the numerous types developed (5), mainly the lithium-iodine (LiI) battery (Fig. 4), which delivers a voltage of 2.8 V with capacities of 0.8 to 3 Ah, is used for pacemakers. The voltage falls gradually during discharge, resulting in a very flat curve in the first year; with increasing discharge, however, the curve becomes much steeper. Concurrently, the internal resistance increases greatly, from approximately 100 Ω with a new battery to approximately 40 kΩ at the end of the service lifetime. This point in time is usually expressed in two stages. If the voltage has attained a value between 2.0 and 2.2 V [internal resistance then being 20 to 30 kΩ (8)], the beginning of the elective replacement interval (ERI) is indicated. ERI is defined as the interval between this stage of battery depletion and the end-of-service (EOS) stage, where the voltage has dropped further to 1.8 V and the internal resistance has increased to 40 kΩ. This additionally discharged state, in which the device no longer functions regularly, is reached about six months after the start of ERI, depending on the device and pacing mode. The pacemaker should be replaced during ERI and must be replaced immediately when the EOS criteria are met.
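As an illustration, the replacement criteria above can be expressed as a simple classification over the measured battery voltage and internal resistance. This is only a sketch: the thresholds are taken from the text, while the function name and the idea of a three-way classification are hypothetical, not a manufacturer's algorithm.

```python
def battery_stage(voltage_v: float, resistance_ohm: float) -> str:
    """Classify LiI battery depletion from terminal voltage and
    internal resistance (thresholds as given in the text above)."""
    if voltage_v <= 1.8 or resistance_ohm >= 40_000:
        return "EOS"   # end of service: replace immediately
    if voltage_v <= 2.2 or resistance_ohm >= 20_000:
        return "ERI"   # elective replacement interval has begun
    return "OK"        # normal operation

print(battery_stage(2.8, 100))      # new battery
print(battery_stage(2.1, 25_000))   # depleted into the ERI stage
print(battery_stage(1.8, 40_000))   # EOS criteria met
```

In practice such telemetered values are evaluated by the programmer at follow-up rather than by the implant itself.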
[Figure 4. Construction of a lithium-iodine pacemaker battery: lithium anode, cathode, collector, insulating tube, and glass feedthrough.]
excitation, such as premature supraventricular or retrogradely conducted ventricular extrasystoles, and ectopic ventricular rhythms, limits the wide application of atrial control.
Only through additional control functions for the ventricle
can pacemaker-mediated parasystoles be suppressed. Only the dual-chamber pacemaker with an atrial and a ventricular control circuit for pacing (DDD) guarantees physiological atrial and ventricular pacing. This is accomplished by recognizing atrial as well as ventricular arrhythmias under nearly all conditions of pacemaking and conduction disturbances (10). Sick sinus syndrome is an exception, in that no physiologic rate adaptation is possible under stress unless a rate-adaptive pacemaker is implemented.
The technical challenge for re-creating the natural course
of excitation is that of synchronizing the ventricles with the
atria. The AV delay and the atrial and ventricular refractory
periods regulate the course of pacing. Adapting AV delays automatically to the heart rate optimizes the atrial hemodynamic contribution for filling the ventricle, which contributes
up to 30% to the cardiac output. By reducing the artificial
conduction time with sensed P waves, the latency period between the atrial stimulus and atrial excitation is additionally
compensated. Arrhythmias can be eliminated by selecting appropriate control intervals. These rhythmic disturbances may
result from ventricular extrasystoles or the abnormal spread
of excitation. For the majority of clinically significant rhythm
disturbances, this option allows for an automatic adaptation
of pacing to the therapy requirements (11). In the event of
extreme arrhythmias, programmability offers specific control
programs that interrupt supraventricular and ventricular reentry tachycardia by prematurely exciting the myocardium.
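The rate-dependent shortening of the AV delay mentioned above can be illustrated by a simple linear interpolation between programmed limits. This is an illustrative sketch only; the parameter values and the function name are assumptions, and real devices use manufacturer-specific algorithms.

```python
def adaptive_av_delay(rate_bpm: float,
                      low_rate=60.0, high_rate=150.0,
                      av_at_low_ms=200.0, av_at_high_ms=100.0) -> float:
    """Shorten the paced AV delay linearly as the heart rate rises,
    mimicking the physiological shortening of AV conduction."""
    r = min(max(rate_bpm, low_rate), high_rate)        # clamp to the rate range
    frac = (r - low_rate) / (high_rate - low_rate)     # 0 at low rate, 1 at high rate
    return av_at_low_ms + frac * (av_at_high_ms - av_at_low_ms)

print(adaptive_av_delay(60))    # longest AV delay at the lower rate limit
print(adaptive_av_delay(150))   # shortest AV delay at the upper rate limit
```

A sensed P wave would additionally be given a shorter AV delay than a paced one, to compensate for the latency between the atrial stimulus and atrial excitation.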
TECHNICAL SOLUTION
The block diagram of a multiprogrammable dual-chamber
pacemaker in Fig. 5 affords an overview of the basic components. Atrial and ventricular channels each possess separate
input and output amplifiers connected to the myocardium
through the atrial or the ventricular electrode. The program
memory digitally controls the adjustment of the impulse energy through the amplitude and the impulse duration of the
stimulus and the sensitivity of the input amplifier. The central, quartz-controlled clock and the counter keep the timing
of all control processes, such as pacing rate, refractory period,
hysteresis, and program transmission. Programming is performed inductively through a program amplifier with a decoder, a control circuit, and a register that stores the temporary and permanent programs.
Figure 6 shows the construction of an implant, in which
the two significant components are the lithium battery, occupying the right two-thirds of the housing, and the hybrid circuit. The pacemaker housing consists of a dual-sectioned titanium capsule, which is reliably sealed by laser welding and
has a long-term resistance to corrosion. The electrode connection block consists of a cast epoxy resin head with hermetically sealed feedthrough. The hybrid substrate contains all
electronic components, including a coil that enables bidirectional inductive telemetry for postoperative programming
through a programmer.
[Figure 5. Block diagram of a multiprogrammable dual-chamber pacemaker: atrial and ventricular sensing amplifiers with noise detection and refractory periods; atrial and ventricular output amplifiers with AV delay and run-away protection; quartz-controlled oscillators, timers, and an event counter under the main control logic; battery with EOL indicator; temporary and permanent program registers, A/D converter, 8 KB RAM, bus control, and reed switch; and the telemetry section with coil, receiver, decoder, coder, and transmitter for IEGM, histogram, and analog data.]
Some applications for this programmer are an inductive transmission of pacing and function parameters or intracardiac signals and such operating parameters as battery current, voltage, and internal resistance;
electrode impedance; and patient information (12). The value
of programmability lies not only in the possibility of postoperative corrections but also in an effective reduction of the variety of pacemaker types, lowering the overall costs involved.
Figure 6. Major components of an assembled dual-chamber pacemaker. The hermetically sealed titanium housing contains the hybrid
(left), which holds the IC and all other electronic components, and
the battery (right), which occupies about two-thirds of the housing.
Leads are connected to the device via the cast epoxy resin head.
multilayer technology, in which conductor track levels, including the insulating dielectric layers, are produced by repeated printing and sintering. With fewer production steps
and by substituting screen-printed for laminated dielectrics, a
higher level of reliability is reached. Additional advantages
are the possibilities to integrate passive components into the
layered structure and fabricate a three-dimensional substrate
by creating depressions. Both aspects lead to an increased
packing density and therefore additional miniaturization.
Reliability and Quality Assurance. The reliability of the discussed monolithic circuit and its hybrid construction has been proven clinically in more than 600,000 implants; failure rates of 10⁻⁷/h at a 90% confidence level are attained for the hybrid circuit and 10⁻⁹/h for ICs and passive components. In pacemaker technology, this high reliability is not attained by the usual method of making critical circuits and components redundant, because the required operating time can be achieved, given the volume-limited battery capacity, only by minimizing current consumption.
Instead, it is necessary to research failure mechanisms and error sources specifically and to eliminate them in the development phase as a preventive measure. Above all, it is important to apply control measures consistently during the different phases of production (13). This experience has led, for example, to special design rules for dimensioning the current mirror circuits, because these strongly influence the characteristics of the ECG and measuring amplifiers, VCOs, and reference voltage sources. Especially critical production tolerances are also identified in this manner. Parameter drift from ionic impurities in the semiconductor can be largely avoided by exact control of the process parameters during chip production (14).
Quality assurance measures for pacemaker production include a 100% final inspection of all components, semifinished
products, and the finished product. In general, the military
standards (MIL-STD-883 and MIL-M-38510) are used for implantable devices. In some cases, the specifications even exceed these standards. The requirements of the standards
(MIL-Q-9858 and MIL-STD-1772) must be met for the production of implantable semiconductors and hybrid circuits. All
aspects of quality control and assurance are included in these
measures. Standards set by the International Organization
for Standardization (ISO), the ISO 9000 standards, are gaining importance worldwide. Quality assurance measures for
the production process and measures for qualification and
type testing are contained in the ISO 9002 guidelines. The
ISO 9001 guidelines cover the realm of development, thereby
having an effect on the design process. In addition to strict
quality control during hybrid construction, it must be guaranteed that these high standards are being met in testing the
materials applied.
Input Amplifier. The input or sense amplifier must detect the intrinsic electrical cardiac activity; that is, it must amplify and filter the IEGM as recorded by the electrode and sense the intrinsic activity (P waves in the atrium or R waves in the ventricle). The central control unit (see Fig. 5) turns
off the amplifier during the pacing impulse (blanking). Protecting the amplifier and other components (e.g., the output
amplifier) during use of an external defibrillator is accom-
the battery and electrode parameters, and a multitude of diagnostic data. This includes the number and histogram distribution of the paced and sensed events in each chamber that
were recorded during the previous follow-up interval. These
data provide the physician with valuable diagnostic information about the pacemaker function and changes in the patient's symptomatology. The stored data can be roughly divided into four categories: patient data, battery and electrode parameters, pacing parameters, and diagnostic information.
Patient Data
Figure 8. Chronaxie-rheobase curve showing the relationship between the pulse width and amplitude values necessary for tissue
stimulation. Only typical values are shown; in practice, individual
values have to be measured for each patient to determine adequate
pacing parameters.
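The curve in Fig. 8 is commonly described by the Lapicque strength-duration relation V(t) = V_rh(1 + t_c/t), where V_rh is the rheobase voltage and t_c the chronaxie, the pulse width at which the threshold is exactly twice the rheobase. A small sketch under this assumption; the default values are only typical, and, as the caption notes, individual values must be measured per patient.

```python
def pacing_threshold(pulse_width_ms: float,
                     rheobase_v: float = 0.5,
                     chronaxie_ms: float = 0.3) -> float:
    """Lapicque strength-duration curve: threshold amplitude needed
    for myocardial capture at a given stimulus pulse width."""
    return rheobase_v * (1.0 + chronaxie_ms / pulse_width_ms)

# At the chronaxie, the threshold is exactly twice the rheobase:
print(pacing_threshold(0.3))   # 2 * 0.5 V
```

Programming the pulse width near the chronaxie is a common compromise between a safe capture margin and low energy consumption.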
The patient data memory stores the most important information about the patient and the pacemaker. The date of implantation, the date of the last follow-up, information concerning the symptoms, etiology, and ECG indication, in the
form of code letters according to an international code list,
are some of these particulars. Others include the implanted
electrode configuration, the initials of the patient, and the serial number of the pacemaker. Thus, the most crucial information can be retrieved in the case of an emergency, an unexpected change of the physician, or the loss of the pacemaker
identification card.
Battery and Electrode Parameters
Precise, prevailing values of the battery voltage, the internal
resistance of the battery, and the power consumption allow a
correct estimate of the operation time to be expected. Data
about the (real) lead impedance, which normally has a value around 500 Ω, indicate possible malfunctions such as breaks
in the lead or insulation defects. Furthermore, the pulse amplitude, current, energy, and charge are displayed separately
for the atrial and the ventricular channel.
Pacing Parameters
All rhythmological parameters, such as pacing mode (e.g.,
DDD), rates, AV delays, refractory periods, and hysteresis
setting, are considered pacing parameters. During diagnostic
examinations, temporary programs with other parameter settings need to be activated on a short-term basis. To simplify
this process, some pacemakers possess two complete program
memories. This makes it possible to switch quickly between
two programs (e.g., within one cardiac cycle).
Diagnostic Information
This category includes the internal event counter with trend
monitor. The event counter normally registers events that occur over very long periods, such as atrial sensing and pacing,
ventricular sensing and pacing, or also ventricular extrasystoles (ventricular sensing outside the AV delay). The trend
monitor graphically depicts the heart rate (paced or sensed)
over time, where the temporal resolution can be selected
within a wide range (several minutes to several days or even
months). Frequently, one can choose between a rolling and a fixed mode: either the oldest values are constantly overwritten by new values, or the trend recording stops after one complete run. The stored rates are not momentary values but equal the average over the respective scanning interval.
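The two recording modes can be sketched as a fixed-size buffer that either wraps around (rolling) or stops when full (fixed). This is a minimal illustration; the class and method names are hypothetical, not taken from any device.

```python
class TrendRecorder:
    """Store averaged heart rates in a fixed-size buffer, either
    overwriting the oldest samples (rolling) or stopping when full."""
    def __init__(self, size: int, rolling: bool = True):
        self.buf = [None] * size
        self.rolling = rolling
        self.count = 0          # total samples offered so far

    def record(self, avg_rate: float) -> None:
        if not self.rolling and self.count >= len(self.buf):
            return              # fixed mode: stopped after one complete run
        self.buf[self.count % len(self.buf)] = avg_rate
        self.count += 1

rolling = TrendRecorder(3, rolling=True)
fixed = TrendRecorder(3, rolling=False)
for rate in (62, 70, 85, 90):
    rolling.record(rate)
    fixed.record(rate)
print(rolling.buf)  # oldest value overwritten by the newest
print(fixed.buf)    # recording stopped after one complete run
```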
Figure 9. Bidirectional data transfer between pacemaker and programmer via the programming head by means of inductive telemetry, which is used to interrogate programmed parameters and stored Holter data from the pacemaker at the beginning of each follow-up and to reprogram it when necessary.
Aside from the event counter and trend monitor, another application of analog telemetry exists for diagnostic purposes. The PHYSIOS TC (Biotronik, Inc.), for example, offers the option to transmit the filtered or unfiltered atrial and ventricular IEGM with markers to the programmer in real time. The sampling rate is 256 Hz for dual-channel and 512 Hz for single-channel operation. The transmitted IEGM and marker signals can be read on the monitor or by the programmer printer. They can also be read on a connected ECG recorder, which allows for the simultaneous display of these signals with the surface ECG. In this manner, the especially noise-free IEGM secures the diagnosis of atrial arrhythmias, simplifies the analysis of complicated ECGs, and helps recognize muscle potentials.
Technical Realization
To interrogate and program the pacemaker, the programming head, which is connected to the programmer, is positioned over the pacemaker (Fig. 9). The magnet within the programming head closes a reed switch in the pacemaker, activating the transmission and receiving mode. The components specific to telemetry are illustrated in the lower part of the block diagram in Fig. 5. The data are transmitted inductively between the coil within the programming head and the telemetry coil of the pacemaker, in most cases mounted on the hybrid circuit (Fig. 7, right). During data transmission, a temporary program paces at a constant rate (usually asynchronously) to avoid malfunctions resulting from incomplete data transmission. Because the relative positions of both coils are important for inductive coupling, modern systems facilitate exact positioning by indicating the optimum position using LEDs in the programming head.
Manufacturers do not use a unified coding procedure. Pulse-position coding is frequently used (17). Here, the information is transmitted by a sequence of pulses: the value of a parameter to be transmitted is coded either in digital or analog form by the distance between pulse flanks or between pulses of constant length. With digital coding, two different pulse distances are used (e.g., to code the binary values 0 and 1, which are transmitted in sequence following a defined start coding). Some manufacturers send a certain access code in each data package, in addition to the parameter value. This access code is checked for exact agreement to exclude erroneous programming by the wrong programmer, faulty transmission, or other interference signals (18). Also, a parity bit follows at the end of each data package as another of several safety measures during data transmission. The pulse sequence is subsequently amplitude-modulated onto a carrier frequency, which corresponds to the resonance frequency of the telemetry coils. This frequency is not standardized and thus depends on the manufacturer; it usually lies between 10⁴ Hz and 10⁵ Hz.
RATE-ADAPTIVE PACEMAKERS
Figure 10. Principle of rate-adaptive cardiac pacing using corporeal or cardiac control parameters. The lower part of the picture shows the major components of the cardiovascular control
system, while the upper part summarizes different strategies to reestablish chronotropic adaptation by the pacemaker.
[Figure: 24-hour trend recording of the heart rate (60 to 100 beats per minute) during travelling, sports, walking, housework, and sleeping; MSR marks the maximum sensor rate.]
BIBLIOGRAPHY
1. H. Antoni, Funktionen des Herzens. In R. F. Schmidt and G. Thews (eds.), Physiologie des Menschen, 24th ed., Berlin: Springer, 1990.
2. A. D. Bernstein et al., The NASPE/BPEG generic pacemaker code for antibradyarrhythmia and adaptive-rate pacing and antitachyarrhythmia devices, PACE, 10: 794–799, 1987.
3. M. Schaldach, The myocardium-electrode interface at the cellular level. In A. E. Aubert, H. Ector, and R. Stroobandt (eds.), Cardiac Pacing and Electrophysiology, Dordrecht: Kluwer, 1994.
4. A. Bolz, R. Frohlich, and M. Schaldach, Pacing and sensing performance of fractally coated leads, Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 16: 365–366, 1994.
5. M. Schaldach, Electrotherapy of the Heart, Berlin: Springer, 1992.
6. A. Lindgren and S. Jansson, Heart Physiology and Stimulation, Solna, Sweden: Siemens-Elema AB, 1992.
7. W. Greatbatch, Twenty-five years of pacemaking, PACE, 7: 143–147, 1984.
8. S. Furman, Basic concepts. In S. Furman, D. L. Hayes, and D. R. Holmes (eds.), A Practice of Cardiac Pacing, 3rd ed., Mount Kisco, NY: Futura Publishing, 1993.
MAX SCHALDACH
University of Erlangen–Nuremberg
PATIENT MONITORING
Modern medicine allows for the monitoring of high-risk patients so that medical treatment can be applied adequately as
their condition worsens. To detect changes in the physiological condition of each patient, appropriate monitoring is applied routinely according to the patient's condition, at least in
well-equipped hospitals. Patient monitoring usually means
the physiological monitoring of high-risk patients using appropriate instruments.
In hospitals, there are many sites where patient monitoring is especially important. For example, in the operating
room, instruments such as a pulse oximeter are used for monitoring anesthesia; in the intensive care unit, vital signs are
monitored; in the coronary care unit, the patient's electrocardiogram (ECG) is routinely monitored and analyzed automatically; and in the incubator, the vital signs of the infant as
well as the internal environment of the incubator are monitored. In addition, during examinations such as cardiac catheterization, and therapeutic procedures such as hyper- or hypothermia therapy, patient monitoring is required for
ensuring safety. Even in the general ward, monitoring is performed fairly often when some risks are suspected. By using
a telemetry system, the patient is not constrained to a bed.
Even out of the hospital, patient monitoring is still performed
in some situations. In the ambulance, postresuscitation management requires the use of a cardiac monitor. In the home
where medical care such as oxygen therapy and intravenous
infusion therapy is carried out, monitoring instruments are
helpful. A so-called Holter recorder is used in which 24-h ECG
is recorded for detecting spontaneous events such as cardiac
arrhythmia.
There are many parameters that are used for patient monitoring: Among them are heart rate, ECG, blood pressure, cardiac output, rate of respiration, tidal volume, expiratory gas
content, blood gas concentrations, body temperature, metabolism, electroencephalogram (EEG), intracranial pressure,
blood glucose levels, blood pH, electrolytes, and body motion.
Many types of monitoring techniques and instruments have
been developed to enable measurement of these parameters.
For high-risk patients, monitoring should be performed
continuously. The real-time display of the trend or waveform
of each parameter is helpful especially in a patient who is
experiencing cardiopulmonary function problems, because if
a sudden failure of respiration or circulation is not detected
immediately it may result in the physiological state of the
patient becoming critical. The reliability of monitoring is
quite important. In some situations, invasive procedures for
monitoring are allowed if they are considered essential. For
example, an indwelling arterial catheter is used when instantaneous blood pressure has to be monitored continuously.
However, invasive methods are undesirable if the patient's
condition is less critical. In some situations, noninvasive
methods are preferred. Because noninvasive methods are always more difficult to perform or less accurate than invasive
methods, the development of reliable noninvasive monitoring
techniques is highly desirable; many smart noninvasive tech-
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright # 1999 John Wiley & Sons, Inc.
[Figure: (a) disposable ECG electrode with metal snap, plastic body, and adhesive pad.]
[Figure: direct arterial pressure measurement with an intraarterial catheter, infusion tube, stopcock, and pressure transducer.]
Figure 3. Indirect methods of instantaneous arterial pressure monitoring: (a) vascular unloading technique, and (b) tonometry.
The vascular unloading method is used to measure instantaneous intraarterial pressure by balancing externally applied pressure to the intravascular pressure using a fast
pneumatic servo-controlled system (5). As shown schematically in Fig. 3(a), a cuff is attached to a finger, and near-infrared light transmittance is measured at the site where the cuff
pressure is applied uniformly. Because near-infrared absorption is mainly due to the hemoglobin in blood, the change in light absorption corresponds to the change of blood volume along the optical path; thus a pulsatile change in transmitted light intensity is observed, caused by the pulsation of the artery. It is
possible to compensate for the pulsatile change of arterial
blood volume by introducing a servocontrol in which cuff pressure is controlled by the intensity of the transmitted light so
that an increase of arterial blood increases light absorption
and the signal increases cuff pressure so as to obstruct further increase of arterial flow. If such a servocontrol works fast
enough and with a sufficient loop gain at an adequate level of
light intensity, a condition is realized where the intraarterial
and the cuff pressures are balanced. At this condition, the
circumferential tension of the arterial wall is reduced to zero;
such a situation is called vascular unloading. It has been
shown that accurate blood pressure together with instantaneous arterial pressure waveforms can be obtained when an
adequate servocontrol system is introduced and adjusted correctly (6). A commercial unit that uses this principle has been
developed (Finapres, Ohmeda, Englewood, Colorado). In this system, the interface module, which contains a pneumatic servo valve, is attached to the back of the hand so that the connection from the valve to the finger cuff is minimized, thus reducing the time delay.
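The servo principle described above can be sketched as a discrete-time control loop that drives the cuff pressure until the transmitted light returns to its set point, so that the cuff pressure tracks the intraarterial pressure. This is a toy simulation under stated assumptions: the linear light-versus-pressure plant model, the gain, and all names are illustrative and are not the Finapres design.

```python
def simulate_unloading(p_art_wave, loop_gain=0.9, c=1.0, l0=1.0):
    """Toy vascular-unloading servo: transmitted light drops in
    proportion to the excess of arterial over cuff pressure; an
    integral controller raises the cuff pressure until the light
    is back at its set point l0, so the cuff pressure tracks p_art."""
    p_cuff, track = 0.0, []
    for p_art in p_art_wave:
        light = l0 - c * (p_art - p_cuff)   # plant: arterial volume shadows the light
        error = l0 - light                  # deviation from the unloaded set point
        p_cuff += loop_gain * error / c     # servo valve adjusts the cuff pressure
        track.append(p_cuff)
    return track

# With a fast enough loop, the cuff pressure follows the arterial wave:
wave = [80, 80, 80, 120, 120, 120, 80, 80, 80]   # crude pressure steps (mm Hg)
print(simulate_unloading(wave))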
[Figure: balloon-tipped thermodilution catheter passing through the superior vena cava, right atrium, and right ventricle into the pulmonary artery, with injection port, balloon, and thermistor; the aorta, left atrium, and left ventricle are shown for orientation.]
[Figure: mainstream infrared CO2 analyzer between the endotracheal tube and the breathing circuit, with sapphire windows, rotating filters, motor, and IR detector.]
Blood gas refers to the oxygen and carbon dioxide contents of the blood. Because most of the oxygen in the blood
exists in combination with hemoglobin, the oxygen content of
the blood can be expressed in terms of the ratio of the amount
of oxyhemoglobin to that of total hemoglobin; this ratio is
called the oxygen saturation. A small amount of oxygen, usually less than 1%, remains in the plasma as dissolved oxygen,
and its amount is expressed in terms of oxygen partial pressure. Although there is a relationship between oxygen saturation and oxygen partial pressure, this relationship is nonlinear so that saturation increases steeply with increasing
oxygen partial pressure when the latter lies in the range 20
to 40 mm Hg (2.7 to 5.3 kPa), but tends to saturate when the
oxygen partial pressure reaches above 60 mm Hg (8 kPa). In
normal arterial blood, oxygen saturation is above 98%, and
oxygen partial pressure is approximately 100 mm Hg (13.3
kPa). The main purpose of monitoring oxygen level is to confirm the oxygen transport which sustains metabolic demand.
Carbon dioxide is highly soluble in body fluids, and it is
also converted, reversibly, to bicarbonate ions. Therefore,
blood plasma as well as interstitial fluids have an apparently
large storage capacity for carbon dioxide. However, changes
in the carbon dioxide content of the body fluids cause a
change in the acid-base balance of those body fluids, which is
expressed by pH. Thus, it is important to maintain an adequate carbon dioxide level in the body fluids. It is therefore
monitored by measuring the partial pressure of carbon dioxide of arterial blood.
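The link between carbon dioxide and pH noted above is quantified by the Henderson-Hasselbalch equation for the bicarbonate buffer, pH = 6.1 + log10([HCO3-] / (0.03 · pCO2)), with pCO2 in mm Hg and bicarbonate in mmol/L. A quick check with typical arterial values; the numbers are illustrative, not taken from the text.

```python
import math

def blood_ph(hco3_mmol_l: float, pco2_mmhg: float) -> float:
    """Henderson-Hasselbalch equation for the bicarbonate buffer
    (pK = 6.1; 0.03 mmol/L per mm Hg is the CO2 solubility factor)."""
    return 6.1 + math.log10(hco3_mmol_l / (0.03 * pco2_mmhg))

# Typical arterial values: 24 mmol/L bicarbonate, pCO2 of 40 mm Hg
print(round(blood_ph(24, 40), 2))  # 7.4
```

The equation also shows why a rising pCO2 at constant bicarbonate drives the pH down (respiratory acidosis), which is what continuous CO2 monitoring is meant to catch.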
Blood gas levels can be measured by taking a blood sample
and analyzing it using a blood gas analyzer which provides
information about the partial pressures of oxygen and carbon
dioxide, and about the pH of the blood. However, in a patient
whose respiration is unstable, blood gas values may fluctuate
so that frequent measurement is required, and hence continuous blood gas monitoring is preferred.
Arterial blood oxygen saturation can be monitored noninvasively using a pulse oximeter (15). Due to the difference in
the spectral absorption of oxyhemoglobin and reduced hemoglobin, the oxygen saturation of a particular blood sample can
be determined by absorption measurements at two wavelengths, typically in a red band between 600 nm and 750 nm
and in an infrared band between 850 nm and 1000 nm. However, the tissue in vivo contains both arterial and venous
blood, and hence light absorption occurs by both components.
To obtain the arterial component selectively, the pulsatile
component is extracted. As shown in Fig. 6(a), light absorption is usually measured in a finger. Two light-emitting diodes of different wavelengths, for example 660 nm and 910
nm, are operated alternately, and the transmitted light is detected by a photocell. The pulsatile components of both wavelengths are then extracted by a bandpass filter. Arterial oxygen saturation is determined from the ratio of these two
components.
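The two-wavelength principle is usually implemented as a "ratio of ratios": the pulsatile (AC) component at each wavelength is normalized by its nonpulsatile (DC) component, and the ratio R = (AC_red/DC_red)/(AC_ir/DC_ir) is mapped to saturation through an empirical calibration curve. A sketch using the common linear approximation SpO2 ≈ 110 − 25R; the calibration constants are illustrative assumptions, as real oximeters use curves calibrated against blood samples.

```python
def spo2_from_signals(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate arterial oxygen saturation from the pulsatile (AC) and
    steady (DC) photoplethysmogram components at the red and infrared
    wavelengths, via the ratio-of-ratios and a linear calibration."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    spo2 = 110.0 - 25.0 * r          # empirical linear calibration (approximate)
    return max(0.0, min(100.0, spo2))

# A ratio near 0.5 corresponds to a healthy saturation in the high 90s:
print(spo2_from_signals(red_ac=0.01, red_dc=1.0, ir_ac=0.02, ir_dc=1.0))
```

Extracting only the pulsatile component is what makes the measurement selective for arterial, rather than venous, blood.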
Although the pulse oximeter is reliable enough and has
been used successfully for patient monitoring in most cases,
measurement sites of the transmittance measurement are
limited, and thus a reflection-type pulse oximeter in which
[Figure 6. (a) Transmission pulse oximeter: two LEDs and a photocell at the finger, followed by a logarithmic amplifier, multiplexer, and bandpass filter; SpO2 is derived from log I1 and log I2. (b) Transcutaneous oxygen electrode: heating element, thermistor, cathode, anode, electrolyte, and oxygen-permeable membrane placed on the epidermis over the capillary bed.]
Figure 7. Two methods of body temperature monitoring: (a) thermistor-tipped bladder catheter, and (b) zero-heat-flow method.
Figure 8. Methods of intracranial pressure monitoring: (a) placement of a pressure catheter in the subarachnoidal space, (b) implanting a transducer in a bore hole through the skull, and (c) fontanometry.
toring for fairly long periods of time has been attained. For
example, a ferrocene-mediated type of glucose sensor covered
with a newly designed biocompatible membrane could be used
for 7 days without calibration, and for 14 days with in vivo
calibration by comparison with blood sampling data (29).
As a solution for the difficulty of the in vivo application of
biosensors, ex vivo measurement has been attempted in which
a small amount of body fluid is drained from the body and
perfused through a flow-through sensor cell. An advantage of this type of ex vivo measurement is the ease of calibration and replacement of the sensor. However, continuous drainage
causes loss of body fluids. Microdialysis could be a solution to
this difficulty. The microdialysis probe has a semipermeable
membrane at the tip, and a fluid is perfused through it by a
fine double-lumen catheter at a very low flow rate. When the
probe is placed in the subcutaneous tissue, small molecules,
such as glucose, are able to diffuse through the membrane
from the body fluid to the perfusion solution, and are then analyzed by a biosensor. In microdialysis, the permeability of the
membrane affects the measurement. To realize accurate measurements through the membrane without being affected by
the membrane permeability, the application of a null method
was proposed (30). In this system, the perfusion solution is
adjusted using a servocontrol system so that concentrations
at the inlet and the outlet of the probe are equal. This method
is advantageous, not only for eliminating the effect of the
change in membrane permeability but also for eliminating the
effect of drift and sensitivity change of the sensor.
As a less-invasive chemical measurement method, effluent fluid analysis has been attempted (31). If the outermost layer of the skin, the stratum corneum, is removed by repeated stripping with adhesive tape and negative pressure is then applied to the skin surface, a small amount of fluid,
called effluent fluid, can be collected. It has been shown that
blood glucose can be monitored quasicontinuously using this
method.
Monitoring Brain Function
Monitoring brain function is required during anesthesia, in a
patient who lacks consciousness, and in the neonate. Electroencephalography (EEG) has been used widely for monitoring
the electrical activity of the brain. The responses of the brain
to sensory inputs such as visual and auditory stimulation can
be examined by monitoring the resultant slight changes in the
EEG waveform that are known as evoked potentials. While
the amplitude of the evoked potentials is smaller than that of
ordinary EEG activity, it can be extracted by averaging many
responses. The function of the motor system can be examined
by stimulating the motor cortex. A strong magnetic pulse can
be used for this purpose. The magnetic stimulation induces
eddy currents that cause firing of motor neurons, and visible
muscle contractions or muscular activities visualized in the
form of an electromyogram are induced if the function of the
motor system is normal (32).
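The extraction-by-averaging idea rests on the fact that the evoked response is time-locked to the stimulus while the background EEG is not, so averaging N sweeps leaves the response intact while shrinking the background by a factor of the square root of N. All waveforms and amplitudes below are invented for illustration:

```python
import math
import random

random.seed(0)
N_SWEEPS, N_SAMPLES = 400, 100

def evoked(t):
    # invented evoked-potential waveform, much smaller than the background EEG
    return 0.2 * math.sin(2 * math.pi * t / N_SAMPLES)

# each sweep = time-locked evoked response + large random background activity
sweeps = [[evoked(t) + random.gauss(0.0, 2.0) for t in range(N_SAMPLES)]
          for _ in range(N_SWEEPS)]

# averaging across sweeps: residual background noise shrinks as 1/sqrt(N_SWEEPS)
avg = [sum(s[t] for s in sweeps) / N_SWEEPS for t in range(N_SAMPLES)]

# worst-case deviation of the average from the true evoked waveform
residual = max(abs(avg[t] - evoked(t)) for t in range(N_SAMPLES))
```

With 400 sweeps the background standard deviation drops from 2.0 to about 0.1, so the 0.2-amplitude evoked response, invisible in any single sweep, emerges clearly in the average.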
Brain function is sustained by the oxygen supply through
the cerebral circulation, and thus a sufficient supply of oxygen
to the brain is of primary importance. When the blood supply
to the brain is decreased, the oxygen partial pressure in the
tissue decreases, and consequently the oxygen saturation of
venous blood returning from the brain will also decrease.
When a decrease in cerebral circulation is suspected, jugular ve-
slight change in such parameters can be identified. If a long-term record of health parameters is obtained, it can be used for retrospective analysis when symptoms appear, not only for accurate diagnosis but also for epidemiological studies in a population if such data are accumulated for many people. It is expected that this approach would contribute to a reduction in the need for medical services and, consequently, a reduction in medical expenses.
BIBLIOGRAPHY
1. N. V. Thakor, Computers in electrocardiography. In J. G. Webster (ed.), Encyclopedia of Medical Devices and Instrumentation,
New York: Wiley, 1988, pp. 1040–1061.
2. B. D. Bertolet et al., Evaluation of a novel miniature digital ambulatory ECG transient myocardial ischemia detection system, J.
Ambulat. Monitoring, 5: 33–39, 1992.
3. J. De Maso et al., Ambulatory high-resolution ECG recorder using disk storage, J. Ambulatory Monitoring, 5: 317–322, 1992.
4. V. L. Gordon et al., Zero stability of disposable and reusable pressure transducers, Med. Instrum., 21: 87–91, 1987.
5. J. Penas, Photoelectric measurement of blood pressure, volume
and flow in the finger, Digest of the 10th International Conference
on Medical and Biological Engineering, Dresden, 1973, p. 104.
6. K. Yamakoshi, H. Shimazu, and T. Togawa, Indirect measurement of instantaneous arterial blood pressure in the human finger by vascular unloading technique, IEEE Trans. Biomed. Eng.,
BME-27: 150–155, 1980.
7. J. S. Eckerle, Arterial tonometry. In J. G. Webster (ed.), Encyclopedia of Medical Devices and Instrumentation, New York: Wiley,
1988, pp. 2770–2776.
8. G. Manning, S. G. Vijan, and M. W. Millar-Craig, Technical and
clinical evaluation of the Medilog ABP non-invasive blood pressure monitor, J. Ambulatory Monitoring, 7: 255–264, 1994.
9. W. Ganz et al., A new technique for measurement of cardiac output by thermodilution in man, Am. J. Cardiol., 27: 392–396,
1971.
10. W. G. Kubicek et al., Development and evaluation of an impedance cardiac output system, Aerospace Med., 37: 12081212,
1966.
11. T. S. Chadha et al., Validation of respiratory inductive plethysmography using different calibration procedures, Am. Rev. Respirat. Dis., 125: 644–649, 1982.
12. L. E. Baker, Electrical impedance pneumography. In P. Rolfe
(ed.), Noninvasive Physiol. Meas., London: Academic Press, 1979,
pp. 65–94.
13. K. Bhavani-Shanker et al., Capnometry and anesthesia, Can. J.
Anesth., 39: 617–632, 1992.
14. I. E. Sodal, J. S. Clark, and G. D. Swanson, Mass spectrometers
in medical monitoring. In J. G. Webster (ed.), Encyclopedia of
Medical Devices and Instrumentation, New York: Wiley, 1988,
pp. 1848–1859.
15. J. W. Severinghaus and J. F. Kelleher, Recent developments in pulse oximetry, Anesthesiology, 76: 1018–1038, 1992.
16. A. Takatani et al., Experimental and clinical evaluation of a noninvasive reflectance pulse oximeter sensor, J. Clin. Monitoring, 8:
257–266, 1992.
17. A. C. M. Dassel et al., Reflectance pulse oximetry at the forehead
of newborns: the influence of varying pressure on the probe, J.
Clin. Monit., 12: 421–428, 1996.
18. A. Huch et al., Continuous transcutaneous oxygen tension measurement with a heated electrode, Scand. J. Clin. Lab. Invest.,
31: 269–275, 1973.
21. J. W. Severinghaus, A combined transcutaneous pO2-pCO2 electrode with electrochemical HCO3 stabilization, J. Appl. Physiol. Respirat. Environ. Exercise Physiol., 51: 1027–1032, 1981.
Reading List
R. S. C. Cobbold, Transducers for Biomedical Measurements: Principles and Applications, New York: Wiley, 1974.
P. Rolfe (ed.), Non-invasive Physiological Measurements, Vol. 1, London: Academic Press, 1979.
P. Rolfe (ed.), Non-invasive Physiological Measurements, Vol. 2, London: Academic Press, 1983.
D. W. Hill and A. M. Dolan, Intensive Care Instrumentation, 2nd ed., London: Academic Press, 1982.
J. G. Webster (ed.), Encyclopedia of Medical Devices and Instrumentation, Vol. 1–Vol. 4, New York: Wiley, 1988.
T. Togawa, T. Tamura, and P. Å. Öberg, Biomedical Transducers and Instruments, Boca Raton, FL: CRC Press, 1997.
TATSUO TOGAWA
Figure 1. Relationship of systems and models. [Reprinted with permission from W. L. Chapman, A. T. Bahill, and A. W. Wymore, Engineering Modeling and Design, Boca Raton, FL: CRC Press, © 1992, p. 45 (1).]
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
properties, such as latency, speed, and bandwidth, are different, and they are affected differently by fatigue, drugs,
and disease.
The specific actions of these four systems can be illustrated by the example of a duck hunter sitting in a rowboat
on a lake. He scans the sky using saccadic eye movements,
jerking his eyes quickly from one fixation point to the next.
When he sees a duck, he tracks it using smooth-pursuit eye
movements. If the duck lands in the water right next to his
boat, he moves his eyes toward each other with vergence
eye movements. Throughout all this, he uses vestibuloocular eye movements to compensate for the movement of
his head caused by the rocking of the boat. Thus, all four
systems are continually used to move the eyes.
This section is primarily about developing and validating a model for the human smooth-pursuit eye movement
system. Other systems are only included when they interact with the smooth-pursuit system.
The purpose of this model is to help understand the human smooth-pursuit eye movement system. The model covers eye movements from a few minutes of arc to a few dozen degrees, speeds up to 30 degrees per second, and durations up to 20 seconds.
Figure 2. Typical beginning (top) and ending (bottom) of sinusoidal tracking. When the target (dashed line) started there was a 150 ms delay before the eye velocity increased; when the target stopped there was a 120 ms delay before the eye velocity began to decrease. Target movements were 5° from primary position. The time axis is labeled in seconds, and upward deflections represent rightward movements. [Reprinted from A. T. Bahill and J. D. McDonald, Smooth pursuit eye movements in response to predictable target motions, Vision Res., 23: 1573–1583, 1983, with permission from Elsevier Science. (8).]
Figure 3. Binocular eye movements for good tracking of the cubical target waveform. [From D. E. McHugh and A. T. Bahill, Learning to track predictable target waveforms without a time delay,
Invest. Ophthalmol. Vis. Sci., 26: 933, 1985. (21).]
thought professional athletes would fit the bill. So we invited some professional baseball players to participate in
our experiments. The MSEs for three members of the Pittsburgh Pirates Baseball Club are represented by circles, asterisks, and squares in Fig. 4. In viewing the target for the
first time, professional baseball players 1 and 2 had much
smaller MSEs (0.05 and 0.08) than our other subjects. They
had never seen a cubical waveform before, yet they started
out with low MSEs. Baseball players 1 and 2 played in the
major leagues for over 10 years. Player number 3 never
got out of the class A Farm System. These data seem to
indicate that the ability to track the cubical waveform is
correlated with the ability to hit a baseball.
DEVELOPING THE MODEL
An important decision in making a model is determining its architecture. Among the architectural decisions that must be made is whether it should be state-based or memoryless. Botta, Bahill, and Bahill (22a) offer recommendations to help make this determination. In this case, we decided that the smooth-pursuit system was a state-based system. (The state dynamics are modeled with a leaky integrator, K/(s + 1).) Next, choosing between continuous and discrete, we decided the smooth-pursuit system was discrete. To complete the high-level architecture we chose a closed-loop feedback control system.
Most physiological systems are closed-loop negative-feedback control systems. For example, consider someone trying to touch his or her nose with a finger. He or she would command a new reference position and let the arm start to move. But before long, sensory information from the visual and kinesthetic systems would signal the actual finger position. This sensory feedback signal would be compared to the reference or command signal to create the error signal that drives the arm to the commanded output position.
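A minimal discrete-time sketch of such a negative-feedback loop, with an illustrative gain and a pure-gain plant (not a model of the arm itself):

```python
def track(reference, gain=0.3, steps=40):
    """Closed-loop negative feedback: the position is driven by the error
    between the reference (command) signal and the fed-back position."""
    position = 0.0
    for _ in range(steps):
        error = reference - position     # comparator: command minus feedback
        position += gain * error         # plant moves in proportion to the error
    return position

final = track(reference=1.0)
```

Whatever the starting position, the error shrinks geometrically each step and the output settles at the commanded reference, which is the defining behavior of a negative-feedback loop.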
tains a differentiator and a limiter. The box labeled smooth-pursuit controller and dynamics contains a first-order lag (called a leaky integrator), a gain element, a time delay, a saturation element, and an integrator to change the velocity signals into the position signals used by the extraocular motor system. The bottom branch contains the target-selective adaptive controller that identifies and evaluates target motion and synthesizes an adaptive signal, Rc, that is fed to the smooth-pursuit branch. This signal permits zero-latency tracking of predictable visual targets, which the human subject can do, despite the time delays present in the oculomotor system. The adaptive controller must be able to predict future target velocity, and it must know and compensate for the dynamics of the rest of the system. The adaptive controller is separate from the smooth-pursuit system in the model and also in the brain (11). The adaptive controller sends signals to the smooth-pursuit system and also to other movement systems (34). All of these branches send their signals to the extraocular motor system, consisting of motoneurons, muscles, the globe, ligaments, and orbital tissues. And of course, the final component of the model is a unity-gain feedback loop that subtracts eye position from target position to provide the error signals that drive the system. The solid lines in this figure are signal pathways, while the dashed lines are control pathways. For instance, the dashed line between the saccadic controller and the smooth-pursuit controller carries the command to turn off integration of retinal error velocity during a saccade.
In the experiments, many different target waveforms were used. The step target was presented to the subject to verify that the technique of opening the loop using electronic feedback was working. Because the step target introduced a position error rather than a velocity error, this experiment opened the loop on the saccadic system rather than the pursuit system. A position error with the feedback loop opened should have elicited a staircase of saccades. If this expected open-loop response to the step target was seen, then the electronic feedback was opening the loop correctly, as between 1.5 s and 2.5 s of Fig. 7.
There was difficulty in getting consistent results for sinusoids with the loop opened. The most consistent results obtained for such presentations came from the first few seconds after the loop had been opened. This finding suggests that the difficulties with open-loop sinusoids were proba-
W(j + 1) = W(j) − ks∇[E²(j)] = W(j) + 2ksE(j)X(j)

where
W(j + 1) = the weight vector after adaptation
W(j) = the weight vector before adaptation
ks = the proportionality constant controlling stability and the rate of convergence
E(j) = the difference between the desired response and the filter's output, the error
X(j) = the vector of input signals
∇[E²(j)] = the gradient of the error squared with respect to W(j)
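This weight-update rule is the standard Widrow–Hoff LMS iteration. A compact sketch identifying an unknown two-tap filter, with all data and filter values invented for illustration:

```python
import random

random.seed(1)

true_w = [0.6, -0.3]                      # unknown system to be identified
w = [0.0, 0.0]                            # W(j): the adaptive weights
ks = 0.05                                 # proportionality constant

x_prev = 0.0
for _ in range(3000):
    x_now = random.uniform(-1, 1)
    X = [x_now, x_prev]                   # tapped delay-line input vector X(j)
    d = true_w[0] * X[0] + true_w[1] * X[1]   # desired response
    y = w[0] * X[0] + w[1] * X[1]             # adaptive filter's output
    E = d - y                                 # error E(j)
    # W(j+1) = W(j) + 2*ks*E(j)*X(j), the negative-gradient step on E^2
    w = [w[i] + 2 * ks * E * X[i] for i in range(2)]
    x_prev = x_now
```

After a few thousand iterations the adapted weights converge to the weights of the unknown system, with no knowledge of that system other than its input and output.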
In order to find the best possible weights, we computed the gradient (with respect to W) of the squared error, set this equal to zero, and solved for the optimum weights. The result is the Wiener–Hopf equation:

WLMS = Φ⁻¹(x, x)Φ(x, d)

where
WLMS = the vector of weights that would give the LMS error
Φ(x, x) = the autocorrelation matrix of the input signals
Φ(x, d) = the covariance matrix between the input signal and the desired output signal
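As a numerical check on the Wiener–Hopf solution, one can estimate Φ(x, x) and Φ(x, d) from data generated by a known two-tap filter and solve the resulting 2 × 2 system; the solution recovers the true weights. All data below are synthetic:

```python
import random

random.seed(2)

true_w = [0.6, -0.3]
xs = [random.uniform(-1, 1) for _ in range(20000)]
# desired output of the "unknown" two-tap filter
ds = [true_w[0] * xs[n] + true_w[1] * xs[n - 1] for n in range(1, len(xs))]
X = [[xs[n], xs[n - 1]] for n in range(1, len(xs))]

N = len(X)
# sample autocorrelation matrix Phi(x, x) and cross-correlation vector Phi(x, d)
phi_xx = [[sum(v[i] * v[j] for v in X) / N for j in range(2)] for i in range(2)]
phi_xd = [sum(X[n][i] * ds[n] for n in range(N)) / N for i in range(2)]

# solve the 2x2 system phi_xx * w = phi_xd (Cramer's rule)
det = phi_xx[0][0] * phi_xx[1][1] - phi_xx[0][1] * phi_xx[1][0]
w_lms = [(phi_xd[0] * phi_xx[1][1] - phi_xd[1] * phi_xx[0][1]) / det,
         (phi_xd[1] * phi_xx[0][0] - phi_xd[0] * phi_xx[1][0]) / det]
```

Because the desired signal here is an exact linear function of the inputs, the Wiener–Hopf solution recovers the true weights to machine precision in one shot, whereas the LMS iteration only approaches them gradually.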
Figure 12. The LMS adaptive filter. The boxes labeled Weight adjustment contain systems like Fig. 11. [From D. R. Harvey and A. T. Bahill, Development and sensitivity analysis of adaptive predictor for human eye movement model, Transactions of the Society for Computer Simulation, December 1985. © 1985 by Simulation Councils, Inc., San Diego, CA. Reprinted by permission. (20).]
According to Widrow (43, 44), for a filter using tapped delay-line input signals, the time constant is related to the proportionality constant by
The predicted target velocity from the adaptive predictor compensates for the effects of the time delay in the numerator of the transfer function of Eq. (6). To overcome the effects of the time delay in the denominator, compensation for the model's dynamics must be done. This means that the brain must have a model for itself and the rest of the physiological system, and that it uses this model to generate the required compensation signal.
Compensating for Plant Dynamics
When linear state-variable feedback notation is used for a
system, its closed-loop transfer function is
where
Figure 15. The learning curve for the adaptive predictor. [From D. R. Harvey and A. T. Bahill, Development and sensitivity analysis of adaptive predictor for human eye movement model, Transactions of the Society for Computer Simulation, December 1985. © 1985 by Simulation Councils, Inc., San Diego, CA. Reprinted by permission. (20).]
The e^{+sT} term shows that predictions must be made. However, the smooth-pursuit system is a velocity tracking system, not a position tracking system, so the controller must be able to predict future values of target velocity. For example, if rs(t) is the present target velocity, it must be able to produce rs(t + T), where T is the time delay of the smooth-pursuit system. And the controller must modify this pre-
This compensation signal allows the smooth-pursuit system to overcome the time delay. To synthesize this signal the adaptive controller must be able to both predict future values of the target velocity and compute first derivatives. These are reasonable computations for the neurons of the human brain. Therefore, Eq. (14) is the algorithm that is in the box of Fig. 10 labeled Target-selective adaptive controller.
Perform a Sensitivity Analysis
To determine which parameters have the greatest effect on the model and when they exert their influence, we computed the semirelative sensitivity function, S, for each parameter (5,46,47):
This equation shows that a 5% change in either the proportionality constant, ks, or the number of weights, n, will change the misadjustment of the predictor in a similar manner.
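Numerically, a semirelative sensitivity of the form S = p0(∂y/∂p), the raw partial derivative weighted by the nominal parameter value so that differently scaled parameters can be compared, can be estimated with a central finite difference. The first-order-lag response, nominal values, and parameter names below are illustrative, not the model's actual elements:

```python
import math

def response(t, gain, tau):
    """Step response of a first-order lag: y = gain * (1 - exp(-t/tau))."""
    return gain * (1.0 - math.exp(-t / tau))

def semirelative_sensitivity(t, p0, which, h=1e-6, gain=1.0, tau=0.15):
    """S = p0 * dy/dp, with dy/dp estimated by a central finite difference
    about the nominal parameter value p0."""
    params = {"gain": gain, "tau": tau}
    params[which] = p0 + h
    y_plus = response(t, **params)
    params[which] = p0 - h
    y_minus = response(t, **params)
    return p0 * (y_plus - y_minus) / (2 * h)

# late in the step response, the output is sensitive to the gain
# but almost insensitive to the time constant
s_gain = semirelative_sensitivity(t=1.0, p0=1.0, which="gain")
s_tau = semirelative_sensitivity(t=1.0, p0=0.15, which="tau")
```

Plotting such curves against time shows when, not just whether, each parameter exerts its influence, which is how the sensitivity analysis in the text ranks the model's parameters.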
The other parameter changed for the predictor was the
prediction time, the desired time to be estimated. The S
curve for this case also had the same shape as the curve
for the number of weights and the proportionality constant,
Figure 20. The change in the MSE of the predictor as the prediction time is changed. [From D. R. Harvey and A. T. Bahill, Development and sensitivity analysis of adaptive predictor for human eye movement model, Transactions of the Society for Computer Simulation, December 1985. © 1985 by Simulation Councils, Inc., San Diego, CA. Reprinted by permission. (20).]
so the SI units come second in this section), producing angular velocities greater than 500°/s as the ball passes the batter. Humans cannot track targets moving faster than 70°/s (50) or perhaps 100°/s (51); yet, professional batters manage to hit the ball with force consistently and are able to get a piece of the ball on an average of more than 80% of their batting attempts. In this section we investigate how they do this by examining a professional baseball player tracking a pitched ball, and we demonstrate the superiority of his eye movements and head–eye coordination to
the ball was 5.5 ft (1.7 m) from the plate. The peak velocity of his smooth-pursuit tracking was 120°/s; at this point, his head velocity was 30°/s, thus producing a gaze velocity of 150°/s. In three simulated pitches to the professional baseball player, at speeds of 60 mph (27 m/s), 67 mph (30 m/s), and 70 mph (31 m/s), the overall tracking patterns were the same; his maximum smooth-pursuit eye velocities were 120, 130, and 120°/s (52).
The gaze graph also takes into account the side-to-side and front-to-back movements of the head; such translations of the head can produce changes in the gaze angle (53). The data show that the contribution of the translation angle was slight until the ball was almost over the plate.
The professional baseball player had faster smooth-pursuit eye movements than our other subjects. In fact, he had faster smooth-pursuit eye movements than any reported in the literature. He also had better head–eye coordination, tracking the ball with equal-sized head and eye movements, whereas our other subjects usually had disproportionately large head or eye movements.
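These angular velocities follow from simple geometry: for a ball approaching at speed v along a line offset laterally by a distance d from the batter's eyes, the horizontal gaze angle changes at the rate vd/(x² + d²), which grows rapidly as the remaining distance x shrinks. The 2 ft offset below is an assumed figure for illustration, not measured data:

```python
import math

def angular_velocity_deg(v_ftps, d_ft, x_ft):
    """Rate of change of the gaze angle (deg/s) for a ball a distance x
    from the batter's eyes, traveling at speed v along a line offset d."""
    return math.degrees(v_ftps * d_ft / (x_ft ** 2 + d_ft ** 2))

v = 90 * 5280 / 3600           # 90 mph fastball expressed in ft/s (132 ft/s)
d = 2.0                        # assumed lateral offset of the flight path, ft
far = angular_velocity_deg(v, d, x_ft=30.0)   # roughly halfway to the plate
near = angular_velocity_deg(v, d, x_ft=2.0)   # as the ball passes the batter
```

Halfway to the plate the required tracking rate is comfortably below the human smooth-pursuit limit, but in the last few feet it blows past 500°/s, which is why even the best trackers must fall behind.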
Keep Your Eye on the Ball. Although the professional baseball player was better than the college students at tracking the simulated fastball, it is clear from our simulations that batters, even professional batters, cannot keep their eyes on the ball. Our professional baseball player was able to track the ball until it was 5.5 ft (1.7 m) in front of the plate. This could hardly be improved on; we hypothesize that the best imaginable athlete could not track the ball closer than 5 ft (1.5 m) from the plate, at which point it would be moving three times faster than the fastest human could track. This finding runs contrary to one of the most often repeated axioms of batting instructors, "Keep your eye on the ball," and makes it difficult to account for the widely reported claim that Ted Williams could sometimes see the ball hit his bat.
If Ted Williams were indeed able to do this, it could only
be possible if he made an anticipatory saccade that put his
eye ahead of the ball and then let the ball catch up to his
eye. This was the strategy employed by the subject of Fig.
26; this batter observed the ball over the first half of its
trajectory, predicted where it would be when it crossed the
plate, and then made an anticipatory saccade that put his
eye ahead of the ball. Using this strategy, the batter could
see the ball hit the bat.
Figure 26. In order to see the ball hit his bat, this subject made an anticipatory saccade, indicated by the jump in the gaze angle (thick line) that put his eye ahead of the ball (thin line); as a result, the ball was on the fovea at the point of contact. The subject did not move his head until after the ball crossed the plate. [From A. T. Bahill and T. LaRitz, Why can't batters keep their eyes on the ball, Am. Sci., 72: 251, 1984 (41).]
Figure 25. The success of a professional baseball player in tracking a simulated 60-mph (27 m/s) pitch is shown in these graphs. The thin line in the top graph represents the horizontal angle of the ball, θ, as it would be seen by a right-handed batter facing a left-handed pitcher; the thick line represents the actual horizontal angle of gaze of the subject trying to track this ball. This gaze angle curve is generated by combining the horizontal head angle, the horizontal eye angle, and the head-translation angle, which represents the eye movement necessary to compensate for side-to-side and front-to-back movement of the head. Movements to the right appear as upward deflections. [From A. T. Bahill and T. LaRitz, Why can't batters keep their eyes on the ball, Am. Sci., 72: 251, 1984 (41).]
But why would a batter want to see the ball hit the bat? Because of his slow reaction time, he could not use the information gained in the last portion of the ball's flight to alter the course of the bat. We suggest that he uses the information to discover the ball's actual trajectory; that is, he uses it to learn how to predict the ball's location when it crosses the plate, and thus how to be a better hitter in the future. The anticipatory saccade must be made before the end of the trajectory, because saccadic suppression prevents us from seeing during saccades (54, 55). This suppression of vision extends about 20 msec after the saccade. So if you want to see the ball hit the bat, you must make your antic-
Figure 27. The model trying to track a baseball with the predictor turned off. The top trace is the angular position of the ball (dotted) and gaze (solid) and the bottom trace is velocity. The record is
1 s long. The model kept its eye on the ball until the ball was 9
ft (2.7 m) in front of the plate. This tracking resembles that of our
best-tracking college students.
let your body move your head, but it's okay to move your head a little in order to track the ball.
Batters do not use vergence eye movements. This is reasonable, since vergence eye movements are not needed to track the ball between 60 ft (18 m) and 6 ft (1.8 m) from the plate and since there is not sufficient time to make such movements between 6 ft (1.8 m) and the point of contact; indeed, our data contained no vergence eye movements. So any claim that a batter actually saw the ball hit the bat must be based on monocular vision; only the dominant eye tracks the ball.
Strategies. Sometimes our subjects used the strategy of tracking with head and eyes and falling behind in the last 5 ft (1.5 m), and sometimes they used the strategy of tracking with head and eyes but also using an anticipatory saccade. It has been speculated that baseball players might use the latter strategy when they are learning the trajectory of a new pitch and use the former strategy when hitting home runs.
The professional baseball player tracked our simulated pitch better than any other subjects did. This superior tracking was due to (a) his use of both head and eye movements, (b) very fast smooth-pursuit eye movements, and (c) giving his head a head start.
Modeling Baseball Players. The eye movements of baseball players were not used in the development of the TSAC model. So if the model could simulate such eye movements, it would be a strong validation of the model. First, the limiter in the TSAC model was increased from its nominal value of 70°/s to the 130°/s that the professional baseball player exhibited. Figure 27 shows the model with the predictor turned off trying to track a baseball. It fell behind when the ball was 9 ft (2.7 m) from the plate. Figure 28 shows the model with the predictor turned on tracking a baseball. It was able to track the ball until the ball was 5.5 ft (1.7 m) from the plate. The predictor makes a big difference. With the predictor, the model does as well as the professional baseball player whose data are shown in Fig. 25. The ball and gaze traces of Fig. 25 look very much like those of Fig. 28.
Models Are Simplifications
Remember that a model is a simplified representation of some particular aspect of a real-world system.
The data presented in this section prompt the following summary about modeling human eye movements. Humans can overcome the time delays of the eye movement systems and track predictable visual targets with no latency or phase lag. To do the same, the TSAC model had to compensate for system dynamics and predict target velocity. Therefore, we think humans must use mental models of their eye movement systems to help compensate for system dynamics. These mental models must be adaptive, so that they can change due to muscular activity, fatigue, temperature, and so on. One good way to predict target velocity is menu selection. The baseball player's menu contains fastball, curveball, and slider.
ACKNOWLEDGMENT
BIBLIOGRAPHY
1. W. L. Chapman, A. T. Bahill, A. W. Wymore, Engineering Modeling and Design, Boca Raton FL: CRC Press, 1992.
2. A. T. Bahill, W. J. Karnavas, The ideal baseball bat, New Sci.,
130 (1763): 26–31, 1991.
3. A. T. Bahill, M. Morna Freitas, Two methods for recommending
bat weights, Ann. Biomed. Eng., 23 (4): 436–444, 1995.
4. A. T. Bahill, The ideal moment of inertia for a baseball or softball bat, IEEE Transactions on Systems, Man and Cybernetics,
Part A: Systems and Humans, 34(2): 197–204, 2004.
5. S. J. Yakowitz, F. Szidarovszky, An Introduction to Numerical
Computations, New York: Macmillan, 1989.
6. A. T. Bahill, Bioengineering: Biomedical, Medical and Clinical
Engineering, Englewood Cliffs, NJ: Prentice-Hall, 1981.
7. A. T. Bahill, B. Gissing, Re-evaluating systems engineering
concepts using systems thinking, IEEE Trans. Syst. Man Cybern., Part C, 28: November 1998.
8. A. T. Bahill, R. Botta and J. Daniels, The Zachman framework
populated with baseball models, Journal of Enterprise Architecture, November 2006.
9. C. Rashbass, The relationship between saccadic and smooth
tracking eye movements, J. Physiol., 159: 326–338, 1961.
10. A. T. Bahill, J. D. McDonald, Smooth pursuit eye movements
in response to predictable target motions, Vision Res., 23:
1573–1583, 1983.
11. G. Westheimer, Eye movement responses to a horizontally
moving visual stimulus, AMA Arch. Ophthalmol., 52: 932–941,
1954.
12. L. Stark, G. Vossius, L. R. Young, Predictive control of eye
tracking movements, IRE Trans. Hum. Factors Electron.,
HFE-3: 52–57, 1962.
13. R. E. Kettner et al., Prediction of complex two-dimensional
trajectories by a cerebellar model of smooth pursuit eye movement, J. Neurophysiol., 77: 2115–2130, 1997.
14. P. J. Dallos, R. W. Jones, Learning behavior of the eye fixation control system, IEEE Trans. Autom. Control, AC-8: 218–227,
1963.
15. S. Yasui, L. Young, Perceived visual motion as effective stimulus to pursuit eye movement system, Science, 190: 906908,
1975.
16. L. Young, Pursuit eye movements: what is being pursued, in R. Baker and A. Berthoz (eds.), Control of Gaze by Brain Stem Neurons, pp. 29–36, Amsterdam: Elsevier/North-Holland, 1977.
62. A. T. Bahill, D. G. Baldwin, Describing baseball pitch movement with right-hand rules, Computers in Biology and
Medicine, 2007.
A. TERRY BAHILL
University of Arizona, Tucson,
AZ, 85721
NEURAL PROSTHESES
Visual prosthesis
Auditory prosthesis
Respiratory prosthesis
Breathing assistance
Cough
Upper-extremity function
Grasp and release
Reaching
Genitourinary prosthesis
Bladder control
Bowel control
Erection and ejaculation
Lower-extremity function
Standing
Transfers
Stepping and walking
Neural prostheses are a developing technology that uses electrical activation of the nervous system to restore function to individuals with neurological impairment. Applications have included stimulation in both the sensory and motor systems (Fig. 1) and range in scope from experimental trials in single individuals, as in the case of the visual prosthesis, to commercially available devices placed in thousands of individuals, as in the case of auditory prostheses (Fig. 2). Neural prostheses function by initiating action potentials in nerve fibers, which carry the signal to an endpoint where chemical neurotransmitters are released to affect either an end organ or another neuron. Thus, all neural prostheses are devices that enable selective and graded control of neurotransmitter release, and, in principle, any end organ under neural control is a candidate for neural prosthetic control.
Figure 2. Cumulative numbers of devices implanted for restoration of hand grasp (FreeHand), bladder and bowel function (Vocare), and restoration of hearing (Clarion, Nucleus). Data for FreeHand and Vocare provided by NeuroControl Corp., Cleveland, OH. Data for Clarion provided by Advanced Bionics Corp., Sylmar, CA. Data for Nucleus (right axis) provided by Cochlear Corp., Denver, CO.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
tion potentials. Figure 3 shows the responses of a model neuron to two different amplitude stimuli. In response to the
smaller stimulus, the neural membrane responds like a parallel RC circuit. However, above a critical amplitude (threshold)
the membrane initiates an action potential as a result of flux
of sodium ions from the extracellular space to the intracellular space. This action potential is then propagated down the
fiber to its terminal where neurotransmitter is released to affect the end organ or another neuron. Artificial generation of
action potentials is the basis for all neural prostheses.
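The subthreshold behavior in Fig. 3 can be reproduced with the parallel-RC membrane model: a current step charges the membrane toward I·R with time constant τ = RC. The numeric values below are illustrative, not measured membrane constants:

```python
import math

def subthreshold_v(t_ms, i_ua=1.0, r_mohm=0.01, c_uf=1.0):
    """Passive (subthreshold) membrane response to a current step:
    V(t) = I*R*(1 - exp(-t/(R*C))), with tau = R*C.
    Units: t in ms, I in uA, R in megohms, C in uF; result in mV."""
    tau_ms = r_mohm * c_uf * 1000.0          # MOhm * uF = s, so x1000 -> ms
    v_inf_mv = i_ua * r_mohm * 1000.0        # steady-state I*R in mV
    return v_inf_mv * (1.0 - math.exp(-t_ms / tau_ms))

v_early = subthreshold_v(1.0)    # early in the charge-up, well below I*R
v_late = subthreshold_v(100.0)   # after many time constants, ~I*R = 10 mV
```

As long as this passive response stays below the excitation threshold the membrane simply charges and discharges like the RC circuit; once threshold is crossed, the active sodium conductance takes over and the all-or-none action potential of Fig. 3 results.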
The threshold for excitation (i.e., initiation of an action potential) depends on both the amplitude and the duration of
the stimulus. As shown in Fig. 4(a), the stimulus amplitude
necessary for excitation, Ith, increases as the duration of the
stimulus, PW, is decreased. This relationship is termed the strength–duration relationship and is given by Eq. (1).
Figure 4. (a) Threshold current amplitude, Ith, as a function of pulse duration, PW (µs), showing the rheobase current, Irh, and the chronaxie, Tch. (b) Threshold charge as a function of pulse duration (µs).

Ith/Irh = 1/(1 − exp[−ln(2)PW/Tch])    (1)

Qth = Ith·PW = PW·Irh/(1 − exp[−ln(2)PW/Tch])    (2)
Minimizing charge, by use of short pulses, is an important consideration for preventing tissue damage, preventing electrode corrosion, and minimizing power consumption.
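The trade-off between threshold current and threshold charge can be illustrated numerically with the exponential strength–duration model: as PW shrinks, Ith rises but the charge per pulse, Qth = Ith·PW, falls. The rheobase and chronaxie values below are arbitrary illustrations, not physiological constants:

```python
import math

def i_threshold(pw_us, i_rh=1.0, t_ch_us=100.0):
    """Strength-duration curve: threshold current Ith rises as the pulse
    shortens; i_rh is the rheobase and t_ch_us the chronaxie, defined so
    that Ith(Tch) = 2 * Irh."""
    return i_rh / (1.0 - math.exp(-math.log(2.0) * pw_us / t_ch_us))

def q_threshold(pw_us, **kw):
    """Threshold charge per pulse, Qth = Ith * PW."""
    return i_threshold(pw_us, **kw) * pw_us

short_pw, long_pw = 20.0, 500.0    # pulse widths in microseconds
i_short, i_long = i_threshold(short_pw), i_threshold(long_pw)
q_short, q_long = q_threshold(short_pw), q_threshold(long_pw)
# the short pulse needs more current but delivers far less charge per pulse
```

The model also reproduces the chronaxie definition directly: at PW equal to the chronaxie, the threshold current is exactly twice the rheobase, and for very long pulses it approaches the rheobase.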
The current required for extracellular stimulation of axons also depends on the spatial relationship between the electrode and the nerve fiber, and on the nerve fiber diameter (3). Transmembrane potentials generated by extracellular current are largest in the fibers close to the stimulating electrode; thus, less current is required to stimulate neurons in the proximity of the electrode (Fig. 5). As the distance between the electrode and the fiber increases, the threshold, Ith, increases, and for excitation of myelinated nerve fibers with a point-source electrode, this relationship is described by Eq. (3).
Ith = Io + k·r²    (3)
Figure 3. Subthreshold and suprathreshold responses of a nerve fiber to a stimulus current pulse. The traces show the transmembrane
voltage as a function of time in response to stimuli of subthreshold
and suprathreshold amplitude. In the subthreshold regime, the membrane behaves as a parallel RC circuit, while in the suprathreshold
regime the membrane generates an action potential.
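Equation (3) implies that the threshold current grows with the square of the electrode-to-fiber distance. A brief numerical illustration, where the offset Io and proportionality constant k are invented values rather than physiological data:

```python
def i_threshold_distance(r_mm, i0=0.05, k=0.2):
    """Current-distance relation for a point-source electrode:
    Ith = Io + k * r^2, with r in mm (constants purely illustrative)."""
    return i0 + k * r_mm ** 2

near = i_threshold_distance(0.5)   # fiber 0.5 mm from the electrode
far = i_threshold_distance(2.0)    # fiber 2 mm away needs far more current
```

This quadratic growth is what gives stimulating electrodes their spatial selectivity: modest currents recruit only the fibers nearest the electrode, while distant fibers stay below threshold.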
cur in younger individuals and the level of movement impairment depends upon the location of the injury. SCI at thoracic
(mid-back) or lower levels results in paraplegia (paralysis of
the legs and pelvis), while SCI at cervical levels results in
tetraplegia (also known as quadriplegia: paralysis of the legs, trunk, and arms). Within either paraplegia or tetraplegia, the level of impairment increases as the level of spinal
cord injury progresses toward the head. SCI can be complete
(i.e., interrupting all communication with the brain) or incomplete (i.e., some communication with the brain is retained).
Both motor and sensory functions are typically affected.
Design Challenges for Movement Neural Prostheses. The design of neural prostheses must take into account a number of
physiological changes that accompany neurological disorders,
as well as limitations in the current technology for artificially
exciting the nervous system. Following stroke or SCI, paralyzed muscles undergo disuse atrophy, characterized by a
rapid and marked decrease in muscle mass, a decrease in
force-generating capacity, and increased susceptibility to fatigue (7,8). In SCI, denervation occurs when direct physical
damage due to the injury or its subsequent consequences
(swelling, release of various chemical factors, etc.) leads to the
death of motoneurons in and near the area of injury (8). As
the motoneurons degenerate, the muscle fibers they innervate
can no longer be activated by the nervous system, and electrical activation is difficult or impossible. The disuse of limbs
because of paralysis often leads to a rapid reduction in bone
density (osteoporosis), which is especially of concern in lower
extremity neural prostheses because of weight bearing and
the large muscle forces required. Disuse of a limb also often
results in an increase in the passive resistance of joints to
movement (contractures), which may make it difficult for already weakened muscles to move the limb through its needed
range. Paralyzed or paretic muscles are often spastic, that is,
have hyperactive stretch reflexes causing inappropriate muscle contractions or spasms. All currently available neural
prostheses recruit the motor units within the muscle in an
order that is the reverse of natural recruitment. In many muscles,
full activation via electrical stimulation cannot be achieved
without undesirable spillover to other muscles, limiting the
forces available. Usually, the number of stimulation channels
available is far less than the number of muscles normally participating in the movement function, so simpler alternate
strategies for completing functional tasks must be developed.
Many tasks also involve the simultaneous movement of several different joints, all of which may be impaired. Providing
the user with reasonably natural control over all these functions simultaneously can be a significant challenge.
At least partial solutions to each of these problems have
been developed. Exercise of paralyzed muscle by electrically
stimulated contractions for 2 to 8 hours per day has been
found (7,9) to make the muscle more resistant to fatigue, although increases in force appear to occur consistently in some
muscles but not in others. The effects of denervation following
SCI may be partially compensated by sprouting, a process by
which surviving motoneurons in a muscle reinnervate nearby
denervated muscle fibers and thus maintain their ability to
produce force when electrically stimulated. In the upper extremity, muscle tendon transfer of a nondenervated (either
voluntary or paralyzed) muscle can sometimes be performed
to replace the function of a denervated muscle. Appropriate
weight bearing and exercise can arrest and perhaps even reverse bone demineralization. Joint and muscle contractures
can often be prevented by appropriate therapy, including
movement through the range of movement. Surgical procedures can also be performed in some cases to release tight
joints (10). Spasticity can often be controlled pharmacologically.
Movement Neural Prostheses, Past and Current. Neural prostheses have been developed for restoring specific movements
of both the upper and lower extremities, as summarized in
Table 1. Applications for upper extremity movements have
historically focused on hand grasp and release, primarily in
individuals with cervical SCI. Several systems are currently
available for providing these functions, including two that are
based upon surface electrodes, one using percutaneous electrodes and an external stimulator, and one using a totally
implanted stimulator and implanted electrodes. The surface
systems are relatively inexpensive and are noninvasive since
no surgery is required. However, they require accurate electrode placement before each use, and individuals with denervation may not be able to use the system. The Handmaster
(Ness Ltd.) (11,12) device has been used in individuals with
C4 to C7 SCI, and in hemiplegia. The Tetron Glove (Neuromotion, Inc.) (13) utilizes voluntary wrist function of the user to
control the stimulation, so its applications are primarily limited to C6 to C7 individuals and those with hemiplegia. The
percutaneous electrodes used by Handa, Hoshimiya, and associates at Tohoku University (12,14) offer higher selectivity
and can reach deeper muscles inaccessible from the surface,
and are relatively inexpensive because the electrodes are implemented without open surgery. Muscle-stimulation patterns
are based on templates of natural muscle activation, so separate templates must be developed and stored for each task to be
performed. This system has been applied to individuals with
cervical SCI (C4 to C6) and hemiplegia to produce hand, forearm, elbow, and shoulder function. The Freehand (NeuroControl, Inc.), developed originally by Peckham and associates
(7,15) uses 7 to 8 implanted epimysial stimulating electrodes
and a pacemaker-like stimulator implanted in the upper chest
to restore two grasp patterns (key grip and palmar grasp) for
individuals with C5 to C6 level SCI. Power and stimulus commands are transmitted electromagnetically via a skin-mounted antenna to the implanted stimulator. Stimulus patterns are controlled voluntarily by the user via a joystick-like
device mounted on the opposite shoulder or on the ipsilateral
wrist. This implanted technology is more expensive and invasive than the other alternatives, but it is highly reliable, has
few external components, and is thus easy to put on and take
off. Furthermore, the implant procedure is usually performed
simultaneously with reconstructive surgeries such as muscle
tendon transfers (10) to maximize voluntary and stimulated
contractions, and to release passive constraints. Continuing
research with this system is extending its functionality to include stimulation of intrinsic hand muscles, wrist function,
forearm function, elbow function, shoulder function, and bilateral hand function. The use of implanted sensors is being
investigated, and the use of movement neural prostheses is
also being extended to individuals with hemiplegia and different levels (C3 to C4, C7) of SCI.
A number of neural prostheses for lower extremity function have also been developed. As noted in Table 1, several
Table 1. Movement neural prostheses, past and current. For each system the table lists references, electrode type, stimulator type, user-control method, functions, disorders, and the cumulative number of systems. Upper-extremity entries: Handmaster (Ness, Ltd.; ref. 11; surface electrodes; external stimulator; preprogrammed control); Tetron Glove (Neuromotion, Inc.; ref. 13; surface; external; wrist-motion control); Tohoku Univ. research system (ref. 14; percutaneous; external; preprogrammed); Freehand (NeuroControl, Inc.; refs. 7,15; implanted electrodes; implanted stimulator); Cleveland VA/CWRU research systems (refs. 7,39; implanted and percutaneous electrodes; implanted and external stimulators; implanted sensors and voluntary function for control); and a percutaneous system controlled by nerve recording (refs. 37,38). Lower-extremity entries: early-work and commercial foot-drop systems with surface, epineural, or implanted electrodes and stimulators, controlled by foot switches, tilt sensors, or nerve recording, for hemiplegia and multiple sclerosis; multichannel surface, percutaneous, and implanted systems for standing, walking, and stair climbing in SCI and hemiplegia (refs. 16–28), controlled by hand switches or automatic sensor-based methods; LARSI (ref. 29; implanted spinal root electrodes); the Vienna group system (ref. 30); the LSU-RGO II hybrid (ref. 31); the Andrews et al. hybrid (ref. 32); and the Cleveland VA/CWRU research hybrid (refs. 23,24).
The numbers of systems listed are cumulative and not limited to current users. Some studies have been discontinued. Some subjects have used more than
one system and thus may be counted more than once. References may not contain most recent number of systems, which were in many cases obtained via
personal communication.
HP = hemiplegia; SCI = spinal cord injury; MS = multiple sclerosis.
LARSI = lumbosacral anterior root stimulator implant; LSU-RGO = Louisiana State University Reciprocating Gait Orthosis; CWRU = Case Western Reserve University; VA = Dept. of Veterans Affairs.
available. The Footlifter (Elmetec A/S) is a two-channel surface system using a foot switch. The Walkaide system (Neuromotion, Inc.) uses a tilt sensor built directly into the stimulation unit. More than 5000 individuals have used foot-drop
neural prostheses.
Stimulation of the peroneal nerve has also been used as
part of a system for restoring standing and walking in individuals with hemiplegia or thoracic SCI (20–22). The user
stands up using stimulation of the quadriceps muscles to lock
the knees, in combination with upper-body exertion. During
walking, the flexion withdrawal reflex is evoked alternately
in each leg by two channels of stimulation to allow the non-weight-bearing foot to clear the ground, substituting for the
tion of most of the system components (e.g., electrodes, stimulator, sensors for control). Low-cost surface systems will be
effective in some individuals, but the effectiveness of neural
prostheses will continue to be greatly enhanced by reconstructive surgeries. Limitations in current stimulation technology, such as spillover and incomplete activation, may be
addressed by specialized nerve cuff electrodes (34) or other
approaches that selectively target motoneurons within the
nerve trunks rather than in the muscle. Routing leads to an
ever-increasing number of electrodes may be addressed by
leadless injectable electrodes (35) or other approaches which
do not require a separate lead wire from the stimulator to
each electrode. Control methods will integrate with and make
full use of retained voluntary control. Signals recorded from
natural sensory receptors in the paralyzed limbs (36–38) and/
or from external or implanted artificial sensors will be used
both as command signals from the user and to implement
closed-loop or feedforward controllers (33,39–43) to compensate for internal (e.g., fatigue) and external (e.g., unexpected
loads) disturbances.
Bladder and Bowel Function
Loss of control of bladder and bowel function occurs after spinal injury, and is one of the leading causes of morbidity and
mortality. Complications include frequent urinary tract infections, incontinence, damage to the upper urinary tract, and
constipation. A number of approaches using electrical stimulation to restore bladder function in individuals with spinal
cord injury have been attempted and are outlined in Fig. 6
(44,45). Many attempts have been hampered by direct or reflex activation of the urethral sphincter, which closes the outlet from the bladder, at the same time as the bladder is contracting.
Figure 6. Approaches to neural prosthetic control of the bladder include stimulation of the spinal cord (1), stimulation of the intradural
(2), and extradural (3) sacral nerve roots, stimulation of the pelvic
nerve (4), and direct stimulation of the bladder wall (5).
sidual volumes, urinary tract infections, bladder trabeculation, and vesicoureteral reflux, and increases bladder capacity and continence (44). Furthermore, since the lower bowel
also receives efferent innervation from the sacral roots, many
patients using the stimulator have an increased frequency of
defecation, a reduction in constipation and fecal impaction,
and a reduction in time spent defecating (44). Penile erection
is also achieved by stimulation in some male users.
The other technique under active investigation to prevent
coactivation of the bladder and external urethral sphincter is
selective stimulation of the small fibers innervating the bladder (50). Selective stimulation of small fibers may be achieved
by arresting action potentials in large fibers (51) or by elevating the threshold of large fibers above that of the small fibers
(52). These techniques should enable selective contraction of
the bladder or lower bowel without contraction of the external
sphincters, and thereby produce better emptying.
Restoration of Respiratory Function
Maintenance of respiratory function is essential for life.
Breathing provides the lungs with fresh air for the exchange
of oxygen with carbon dioxide in the blood so that all cells
of the body can function. Coughing is used to expel foreign
substances and normal secretions from the lungs and thus
prevents obstructions and infection. If respiratory muscle
function is inadequate, a mechanical respirator is often used
to force air into and out of the lungs. Although mechanical
ventilation can maintain life, the individual is continuously
dependent on the respirator, and its use can lead to infection
and bleeding around the tracheotomy site, trauma to the
bronchi within the lungs, and impaired speech and sense of
smell. Impaired cough can lead to obstruction of portions of
the lung and/or pneumonia.
If respiratory impairment results from inadequate activation of the motoneurons of the respiratory muscles, neural
prostheses based upon electrical stimulation are often a viable option for long-term respiratory maintenance. Such systems allow users to decrease or even discontinue the use of
mechanical ventilation, reducing its side effects and significantly enhancing their independence. More than 1000 individuals worldwide have been provided with neural prostheses
that control paralyzed diaphragm function via stimulation of
the phrenic nerve (5356). The primary applications for these
devices have been individuals with high-level cervical (C3 or
above) spinal cord injury, in whom the diaphragm is paralyzed while the phrenic motoneurons remain intact, and in
individuals with central alveolar hypoventilation, where the
brain fails to activate the muscles of respiration due to a
deficit in the chemoreceptors in the carotid body.
All commercially available phrenic pacing systems work
using the principles illustrated in Fig. 7. Electrodes are implanted upon the phrenic nerves, with lead wires from the
electrodes running under the skin to an implanted pacemaker-like device that generates the electrical stimuli. The
stimulators receive power and commands from an external
controller via an electromagnetic link. In all of these systems,
stimulus parameters are set by the clinician to achieve adequate and smooth recruitment of the diaphragm. Stimulation
of the diaphragm (see Fig. 7) acts to expand the volume of the
chest cavity, lowering chest cavity pressure below atmospheric pressure and resulting in air flow into the lungs. The
Figure 7. A schematic diagram of the respiratory system and a typical phrenic nerve pacing system. Inhalation is provided to individuals with paralyzed diaphragms by stimulation of the phrenic nerve (usually on both sides) via an electrode implanted onto the nerve. An implanted stimulator receives power and stimulus commands from a small external unit via an electromagnetic link across the skin. Stimulated contractions of the diaphragm pull it down into the abdomen, drawing air in through the mouth and nose. Exhalation occurs passively when the diaphragm stimulation is withdrawn and the elastic properties of the lungs and chest wall force air out of the lungs. (Labeled structures: glottis, trachea, spinal cord, phrenic nerves, phrenic nerve electrode, lead wires, implanted stimulator, skin-mounted antenna, external controller/transmitter, diaphragm at rest and during inspiration, ribs, left lung, abdomen.)
15. B. Smith et al., An externally powered, multichannel, implantable stimulator for versatile control of paralyzed muscle, IEEE Trans. Biomed. Eng., 34: 499–508, 1987.
16. A. R. Kralj et al., FES gait restoration and balance control in spinal cord-injured patients, Prog. Brain Res., 97: 387–396, 1993.
17. M. Kljajic et al., Gait evaluation in hemiparetic patients using subcutaneous peroneal electrical stimulation, Scand. J. Rehabil. Med., 24: 121–126, 1992.
18. P. Strojnik et al., Treatment of drop foot using an implantable peroneal underknee stimulator, Scand. J. Rehabil. Med., 19: 37–43, 1987.
19. D. R. McNeal, R. Waters, and J. Reswick, Experience with implanted electrodes, Neurosurgery, 1: 228–229, 1977.
20. A. Kralj and T. Bajd, Functional Electrical Stimulation: Standing and Walking After Spinal Cord Injury, Boca Raton, FL: CRC Press, 1989.
21. A. Kralj, R. Acimovic, and U. Stanic, Enhancement of hemiplegic patient rehabilitation by means of functional electrical stimulation, Prosthetics & Orthotics Int., 17: 107–114, 1993.
22. D. Graupe and K. H. Kohn, Transcutaneous functional neuromuscular stimulation of certain traumatic complete thoracic paraplegics for independent short-distance ambulation, Neurol. Res., 19: 323–333, 1997.
23. E. B. Marsolais and R. Kobetic, Implantation techniques and experience with percutaneous intramuscular electrodes in the lower extremities, J. Rehabil. Res. Dev., 23: 1–8, 1986.
24. E. B. Marsolais and R. Kobetic, Development of a practical electrical stimulation system for restoring gait in the paralyzed patient, Clin. Orthop. Relat. Res., 233: 64–74, 1988.
25. R. Kobetic, R. J. Triolo, and E. B. Marsolais, Muscle selection and walking performance of multichannel FES systems for ambulation in paraplegia, IEEE Trans. Rehabil. Eng., 5: 23–29, 1997.
26. R. J. Triolo et al., Implanted Functional Neuromuscular Stimulation systems for individuals with cervical spinal cord injuries: Clinical case reports, Arch. Phys. Med. Rehabil., 77: 1119–1128, 1996.
27. R. J. Triolo, R. Kobetic, and R. Betz, Standing and walking with FNS: Technical and clinical challenges, in G. Harris (ed.), Human Motion Analysis, New York: IEEE Press, 1996, pp. 318–350.
28. R. Davis, W. C. MacFarland, and S. E. Emmons, Initial results of the nucleus FES-22-implanted system for limb movement in paraplegia, Stereotact. Funct. Neurosurg., 63: 192–197, 1994.
29. D. N. Rushton et al., Lumbar root stimulation for restoring leg function: Results in paraplegia, Artif. Organs, 21: 180–182, 1997.
30. H. Kern et al., Functional electrostimulation of paraplegic patients: 1 year of practical application. Results in patients and experiences (in German), Z. Orthop., 123: 1–12, 1985.
31. M. Solomonow et al., Reciprocating gait orthosis powered with electrical muscle stimulation (RGO II), part I: Performance evaluation of 70 paraplegic patients, Orthopedics, 20: 315–324, 1997.
32. B. J. Andrews et al., Hybrid FES orthosis incorporating closed loop control and sensory feedback, J. Biomed. Eng., 10: 189–195, 1988.
33. D. B. Popovic, Functional electrical stimulation for lower extremities, in R. B. Stein, P. H. Peckham, and D. B. Popovic (eds.), Neural Prostheses: Replacing Motor Function After Disease or Disability, New York: Oxford Univ. Press, 1992, pp. 233–251.
34. C. Veraart, W. M. Grill, and J. T. Mortimer, Selective control of muscle activation with a multipolar nerve cuff electrode, IEEE Trans. Biomed. Eng., 40: 640–653, 1993.
35. T. Cameron et al., Micromodular implants to provide electrical stimulation of paralyzed muscles and limbs, IEEE Trans. Biomed. Eng., 44: 781–790, 1997.
36. J. A. Hoffer et al., Neural signals for command control and feedback in functional neuromuscular stimulation: A review, J. Rehabil. Res. Dev., 33: 145–157, 1996.
37. M. Haugland et al., Restoration of lateral hand grasp using natural sensors, Artif. Organs, 21: 250–253, 1997.
39. P. E. Crago et al., New control strategies for neuroprosthetic systems, J. Rehabil. Res. Dev., 33: 158–172, 1996.
40. H. J. Chizeck, Adaptive and nonlinear control methods for neural prostheses, in R. B. Stein, P. H. Peckham, and D. B. Popovic (eds.), Neural Prostheses: Replacing Motor Function After Disease or Disability, New York: Oxford Univ. Press, 1992, pp. 298–328.
41. J. J. Abbas and H. J. Chizeck, Feedback control of coronal plane hip angle in paraplegic subjects using functional neuromuscular stimulation, IEEE Trans. Biomed. Eng., 38: 687–698, 1991.
42. A. Kostov et al., Machine learning in control of functional electrical stimulation systems for locomotion, IEEE Trans. Biomed. Eng., 42: 541–51, 1995.
43. P. H. Veltink, Control of FES-induced cyclical movements of the lower leg, Med. Biol. Eng. Comput., 29: NS8–N12, 1991.
44. G. H. Creasey, Electrical stimulation of sacral roots for micturition after spinal cord injury, Urol. Clin. North Amer., 20: 505–515, 1993.
45. N. J. Rijkhoff et al., Urinary bladder control by electrical stimulation: Review of electrical stimulation techniques in spinal cord injury, Neurourol. Urodyn., 16: 39–53, 1997.
46. G. S. Brindley, The first 500 patients with sacral anterior root stimulator implants: General description, Paraplegia, 32: 795–805, 1994.
47. B. S. Nashold, H. Friedman, and J. Grimes, Electrical stimulation of the conus medullaris to control the bladder in the paraplegic patient: A ten year review, Appl. Neurophysiol., 44: 225–232, 1981.
48. R. R. Carter et al., Micturition control by microstimulation of the sacral spinal cord of the cat: Acute studies, IEEE Trans. Rehabil. Eng., 3: 206–214, 1995.
49. W. M. Grill and N. Bhadra, Genitourinary responses to microstimulation of the sacral spinal cord, Neurosci. Abstr., 22: 1842, 1996.
50. N. J. Rijkhoff et al., Selective detrusor activation by electrical sacral nerve root stimulation in spinal cord injury, J. Urol., 157: 1504–1508, 1997.
51. Z.-P. Fang and J. T. Mortimer, Selective activation of small motor axons by quasitrapezoidal current pulses, IEEE Trans. Biomed. Eng., 38: 168–174, 1991.
52. W. M. Grill and J. T. Mortimer, Inversion of the current-distance relationship by transient depolarization, IEEE Trans. Biomed. Eng., 44: 1–9, 1997.
53. W. W. L. Glenn et al., Twenty years of experience in phrenic nerve stimulation to pace the diaphragm, Pacing Clin. Electrophysiol., 9: 780–784, 1986.
54. J. I. Miller et al., Phrenic pacing of the quadriplegic patient, J. Thorac. Cardiovasc. Surg., 99: 35–40, 1990.
55. G. Creasey et al., Electrical stimulation to restore respiration, J. Rehabil. Res. Dev., 33: 123–132, 1996.
56. D. Chervin and C. Guilleminault, Diaphragm pacing for respiratory insufficiency, J. Appl. Neurophysiol., 14: 369–377, 1997.
57. D. K. Peterson et al., Long-term intramuscular electrical activation of the phrenic nerve: Efficacy as a ventilatory prosthesis, IEEE Trans. Biomed. Eng., 41: 1127–1135, 1994.
81. P. J. Blamey and G. M. Clark, A wearable multiple-electrode electrotactile speech processor for the profoundly deaf, J. Acoust. Soc. Amer., 77: 1619–1621, 1985.
82. J. A. Sabolich and G. M. Ortega, Sense of feel for lower-limb amputees: A phase-one study, J. Prosthetics Orthotics, 6: 36–41, 1994.
WARREN M. GRILL
Case Western Reserve University
ROBERT F. KIRSCH
Case Western Reserve University
BIOIMPEDANCE
As an introduction to electrical impedance and conductance in biology, a review of the relevant terminology is
given and the scope of the discipline is presented. The
area of bioimpedance is broad, including, for example,
impedance cardiography, electrode impedance, impedance
spectroscopy, intraluminal conductance, and impedance tomography. The field of bioimpedance deals with the electrical conduction properties of biological materials in response to the injection of current. It has been known
for more than two centuries that biological structures display the phenomenon of electrical conduction. Later, it
was found that the precise electrical properties of tissues
depend on their cellular composition and their coupling.
These characteristics imply that the voltage changes at a
particular site may provide valuable information regarding the biological materials and processes concerned.
However, to date, our understanding of the electrical
impedance of biological tissues and their changes, as far
as they are associated with physiological activity, is still
limited. This article discusses the application of electrical
impedance in medicine. Briey, bioimpedance can be used
to quantitate extracellular uid, to assess volume changes,
and as an imaging tool similar to ultrasonography. Reviews
can be found in (13), and (4).
In order to meet the requirements of specific applications, electrodes for delivering or recording electrical potentials in biological structures appear in a variety of materials, sizes, and shapes. The interface between electrode and tissue has been studied extensively (5, 6).
IMPEDANCE CARDIOGRAPHY
Impedance cardiography (ICG) is the noninvasive measurement of physiologically and clinically relevant parameters of the heart and circulation, based on electrical
impedance measurements of the thorax during the cardiac cycle. Recent reviews have been presented by (7) and
(8). The technique uses a low-current (0.5 mA to 4 mA),
high-frequency (50 kHz to 100 kHz), alternating current
across the thorax, not perceivable to the subject. The resulting impedance changes associated with the cardiac cycle (impedance decreases by about 0.2 Ω from diastole to systole) provide information on stroke volume, cardiac output, pulmonary capillary wedge pressure (9), and systolic time intervals (by calculating the first time derivative of the impedance waveform). In 1969, (10) suggested an index
of cardiac function based on particular calculations applied
to the impedance tracing. This so-called Heather index was
shown to correlate with the severity of cardiac pathology
(7).
Among other techniques to measure cardiac size (e.g.,
X ray, ultrasound, magnetic resonance imaging), external
impedance cardiography has the advantages of being noninvasive, requiring only relatively low-cost equipment, and
permitting continuous monitoring of signals originating
from the beating heart, even during exercise (11). Major
shortcomings result from the fact that a sound physical
model and a comprehensive theory still need to be developed. (12, 13), and (14) were among the first researchers to study the feasibility of the method. Chest impedance is determined by the relatively constant electrical conduction properties of all tissues concerned, plus a modulated component caused by the combination of respiration, thoracic dimensional changes, and a cardiovascular
size-related factor. The latter component is due to cyclic
changes of size (i.e., the geometry changes with contraction) of the four compartments of the heart and the major
blood vessels, as well as to the periodic alignment and deformation of the erythrocytes in the flow. Ejection of blood
from the heart distends the walls of the arteries, thus increasing their blood volume and resulting in an impedance
decrease, besides a decrease of lung resistivity owing to
blood perfusion. During relaxation of the cardiac ventricles, blood in the systemic circulation travels downstream,
causing a reduction of the arterial diameter while the erythrocytes lose their orientation owing to the lower velocity, all leading to an increase of the thoracic impedance.
Using suitable filtering techniques, the contributions derived from breathing and locomotion can be eliminated. Obviously, respiratory components are simply removed if patients are instructed to temporarily hold their breath, but in animals this approach would not be feasible. Various filtering procedures have been developed, including Fourier linear combiner (FLC) and event-related
including Fourier linear combiner (FLC) and event-related
transversal types. A major problem is that many physiological signals are quasiperiodic; that is, they have a mean
period with a small random variation around this mean at
each interval. (15) introduced a scaling factor to enhance flexibility when choosing filter parameters in the FLC approach. They successfully applied their method to ICG-derived stroke volume (SV) in a volunteer during exercise.
Alternatively, one may apply an ensemble averaging technique to 20 beats, thus eliminating respiratory influences
(16). The number and position of the electrodes employed
may vary depending on the specific purpose of the study
or the particular geometrical model assumed. Basically, a
four-electrode (tetrapolar) arrangement is employed: two
current injecting electrodes form the outer pair, and the
inner two are the sensing electrodes. This setup overcomes
not only impedance problems related to electrode polarization but also the relatively high skin impedance. A typical
configuration for ICG is illustrated in Fig. 1.
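The beat-wise ensemble-averaging approach mentioned above can be sketched as follows; the beat segmentation, beat count, and synthetic respiratory offset are illustrative assumptions, not details from the article:

```python
# Ensemble averaging of an ICG signal over cardiac beats: beats (assumed
# already segmented to equal length and aligned, e.g., on the ECG R wave)
# are averaged sample-by-sample. The beat-locked cardiac component is
# preserved, while respiration, which is not phase-locked to the
# heartbeat, averages toward zero.

def ensemble_average(beats):
    """Average a list of equal-length beat waveforms sample-by-sample."""
    n_beats = len(beats)
    n_samples = len(beats[0])
    return [sum(beat[i] for beat in beats) / n_beats for i in range(n_samples)]

# Illustrative use: 20 "beats", each the same cardiac template plus a
# beat-varying offset standing in for respiratory drift.
template = [0.0, 0.1, 0.2, 0.1, 0.0]
beats = [[s + 0.05 * ((k % 4) - 1.5) for s in template] for k in range(20)]
avg = ensemble_average(beats)  # close to the cardiac template
```

Averaging over 20 beats, as the text suggests, trades temporal resolution (beat-to-beat variation is lost) for suppression of the respiratory component.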
The amount of blood pumped per minute by one side
of the heart is termed cardiac output (CO) and equals the
product of heart rate (HR) and SV. In ICG it is the passage of blood in the major arterial vessel (called the aorta)
during the ejection phase of the ventricle that mainly determines the changes in electrical impedance. In contrast,
the electrocardiogram (ECG) is a recording of the electrical wavefront as it spreads over the cardiac tissues, and
this signal provides no information on the amount of blood
pumped nor does it give any insight into the size of the
ventricle or the strength of contraction. Therefore it is important to emphasize that the actual (internal) source of
electricity in the heart, namely, the action potentials that
cause a periodic voltage change on the order of 100 mV over the cell membranes of the heart, is unrelated to the electrical impedance variations resulting from the external stimulation electrodes and recorded by the sensing
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
Bioimpedance
Figure 1. Electrode configuration for thoracic impedance cardiography: V, the voltage recording electrodes on the lateral base of the neck and another pair on the chest at the level of the xyphoid; C, constant current-injecting electrodes on the forehead and at the abdomen, placed 15 cm caudally from the voltage electrodes.
electrodes. It may be concluded that the ECG and the thoracic impedance signal provide complementary information on the activity of the heart: the ECG on the internally
generated electricity, and the thoracic impedance on the
hemodynamic changes owing to the mechanical action of
the heart.
The electrical impedance signal is obtained from a special arrangement of disposable spot or band electrodes
placed on the skin of the head, neck, and chest. A set of
current-injecting electrodes is driven by a constant sinusoidal current of less than 1 mA root mean square at a
frequency ranging from 50 kHz to 100 kHz, while another
set of electrodes senses the resulting voltage from which
the impedance signal is calculated. The peripheral ECG is
usually recorded from the limb leads (Fig. 2).
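A minimal sketch of how the impedance signal follows from this tetrapolar measurement, assuming an ideal sinusoidal sensed voltage; the 25 Ω load, carrier frequency, and sampling details are illustrative:

```python
import math

# Tetrapolar impedance estimate: a known sinusoidal current is injected
# through the outer electrode pair, the voltage sensed by the inner pair
# is digitized, and the impedance magnitude is |Z| = V_rms / I_rms.

def impedance_magnitude(v_samples, i_rms):
    """Estimate |Z| in ohms from sensed voltage samples and known I_rms."""
    v_rms = math.sqrt(sum(v * v for v in v_samples) / len(v_samples))
    return v_rms / i_rms

# Simulate one full cycle of a 50 kHz sensed voltage across an assumed
# 25-ohm thorax, with a 1 mA RMS injected current.
i_rms = 1e-3                           # A
v_peak = 25.0 * i_rms * math.sqrt(2)   # V, peak for |Z| = 25 ohms
n = 1000
v = [v_peak * math.sin(2 * math.pi * k / n) for k in range(n)]
z = impedance_magnitude(v, i_rms)      # recovers about 25 ohms
```

Because the sensing pair carries essentially no current, the electrode polarization and skin impedances in series with it drop out of the estimate, which is the advantage of the four-electrode arrangement noted earlier.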
Thoracic impedance (Z) has a baseline component Z0 and a time-variable component (dZ). Z0 depends on posture, tissue composition, and the volume of fluids within
the chest. Cardiac edema causes a decrease of the value
for Z0 (17). In noncardiac edema (e.g., in the adult respiratory distress syndrome), Z0 can increase or decrease owing
to capillary leakage of proteins (18). The component dZ corresponds to the volume change in the thoracic aorta during
the cardiac cycle. The maximum value of the time derivative of Z is proportional to the peak ascending aortic blood
flow. (13) developed a formula to derive SV (in mL) from the
thoracic ICG:
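The formula itself is assumed here to be the widely used Kubicek relation, SV = ρ (L/Z0)² T (dZ/dt)max, with ρ the blood resistivity (Ω·cm), L the sensing-electrode separation (cm), Z0 the baseline impedance (Ω), T the ventricular ejection time (s), and (dZ/dt)max the peak impedance derivative (Ω/s); all parameter values in the sketch are illustrative:

```python
# Kubicek-style stroke volume estimate from thoracic ICG, combined with
# the cardiac output relation CO = HR * SV from the text. The relation
# SV = rho * (L / Z0)**2 * T * dZdt_max is assumed; values are illustrative.

def stroke_volume_ml(rho_ohm_cm, l_cm, z0_ohm, t_eject_s, dzdt_max_ohm_s):
    """Stroke volume (mL) from a Kubicek-style ICG formula."""
    return rho_ohm_cm * (l_cm / z0_ohm) ** 2 * t_eject_s * dzdt_max_ohm_s

def cardiac_output_l_min(sv_ml, hr_bpm):
    """Cardiac output (L/min) = heart rate * stroke volume."""
    return sv_ml * hr_bpm / 1000.0

sv = stroke_volume_ml(rho_ohm_cm=150.0, l_cm=30.0, z0_ohm=25.0,
                      t_eject_s=0.3, dzdt_max_ohm_s=1.0)
co = cardiac_output_l_min(sv, hr_bpm=70)
```

Note how the baseline impedance Z0 and the peak derivative (dZ/dt)max discussed above enter the estimate directly, which is why both components of the thoracic impedance signal must be measured.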
Figure 2. Left ventricular (LV) volume as obtained by intraventricular impedance catheter in the open-chest dog. Also shown is
the flow in the aorta resulting from these volume changes during
an episode of irregular heart rhythm. (From Ref. (35) with permission.)
CONDUCTANCE MEASUREMENTS
Besides the method of measuring external impedance
changes caused by the mechanical activity of the heart, it
is also technically possible to record conductance changes
within the lumen of each cardiac compartment. Since the
beginning of this century, scattered reports were published
about the recording of cardiac volume changes derived
from impedance measurements by placing electrodes on
the heart. Rushmer et al. (30) placed electrodes on the inside of the ventricular wall, but practical problems concerning the electrodes delayed progress in this field. In 1961 the Brazilian dentist A. Mello-Sobrinho started experiments with a
Figure 3. Left ventricular pressure–volume loops recorded in the anesthetized horse. (From Ref. 43, with permission.)
Figure 4. Variations of left ventricular and atrial cross-sectional area in the magnetic resonance imaging (MRI) and the electrical impedance tomography (EIT) images made with the person in the supine position. (From Ref. 29, with permission.)
APNEA MONITORING

IMPEDANCE PNEUMOGRAPHY

The plethysmographic technique employs a device that records respiratory excursions from movements of the chest surface on the basis of electrical impedance variations. Originally, the method was applied to the detection of apnea (i.e., suspension of respiration) in newborns, and for tracking changes in intrathoracic fluid accumulation (46). Other approaches, such as the spirometer and the pneumotachometer, have the disadvantage that they require insertion into the airway. The four-electrode arrangement uses an electrode on each wrist, to which a constant 10 kHz low-intensity current is applied, and an electrode on each arm to record the impedance changes. As an alternative, one may measure variations in the electrical resistance of a mercury-in-silastic-rubber gauge mounted around the thorax. Using a Wheatstone bridge it is rather easy to follow the periodic alterations caused by respiratory movements of the chest (Fig. 5). For further details see the paper by Kerkhof (47).
Figure 6. Variations of left ventricular and atrial cross-sectional area in the magnetic resonance imaging (MRI) and the electrical
impedance tomography (EIT) images made with the person in the supine position. (From Ref. 29, with permission.)
conductor model that sufficiently describes the geometry and conductive properties of the medium under study. Often, the medium can be approximated as an infinite or semi-infinite structure, or as a thin layer bounded by air on both sides. Anisotropy is an important but complex aspect of the dielectric properties of tissue (26,51), which may be accommodated by considering a multilayered volume conductor model. Steendijk et al. (52) determined local anisotropic resistivity of canine epicardial tissue (i.e., the outer muscle layer of the ventricular wall) in vivo in two orthogonal directions, with special attention to sample volume. The effective sample volume for a simple homogeneous isotropic medium primarily depends on the distance between the current electrodes, but for real anisotropic media these researchers found that both longitudinal and transverse resistivity (Ω·cm) not only varied during the cardiac cycle but also depended on the driving frequency studied (which was between 5 kHz and 60 kHz). Finally, it must be emphasized that electrode construction affects accuracy (51).
IMPEDANCE TOMOGRAPHY
This is a technically advanced approach whereby the imaging of an object is realized from measurements in multiple directions. Usually, 16 to 32 electrodes are placed equidistantly in a plane around the patient. This yields anatomical slices or sections, which can be viewed from various angles. Reasonably good soft tissue contrast can be achieved
by impedance imaging, because of the different electrical
resistivities of the various tissues. Impedance images are inferior in spatial resolution to alternative techniques such as computed tomography and magnetic resonance imaging (MRI). Due to the three-dimensional spread of current into the object, the slice thickness cannot be confined to 1 mm or 2 mm. The strength of impedance tomography, however, resides in its functional imaging capabilities. Functional imaging is possible if variations in tissue resistivity are associated with particular physiological events. The first in vivo impedance tomography images were produced in 1983 at the University of Sheffield, and the theoretical background as well as illustrations have been summarized by Eyuboglu et al. (53). A newer example has already been mentioned, when the right atrium
was selected as the region of interest, and compared with
results from MRI. Vonk Noordegraaf et al. (21) noninvasively assessed right ventricular diastolic function in patients with chronic obstructive pulmonary disease and in controls by means of region-of-interest analysis applied to electrical impedance tomography. Comparison with MRI data showed a correlation of r = 0.78 (n = 15), while pulmonary artery pressure (measured by right-sided heart catheterization) yielded an exponential relationship with r = 0.83 (p < .001). The same authors (29) also improved cardiac imaging in electrical impedance tomography by means of a new electrode configuration, whereby the traditional transversal positioning at the level of the fourth intercostal space on the anterior side was replaced by attachment at an oblique plane at the level of the ictus cordis anteriorly and 10 cm higher posteriorly. Comparison with MRI findings gave good results (Fig. 6), while the reproducibility coefficient was 0.98 at
rest and 0.85 during exercise.
IMPEDANCE SPECTROSCOPY
As stated before, tissues can be considered as composites of cells surrounded by extracellular fluids. Each cell has a cell membrane that encloses intracellular fluid and consists of a thin layer of lipoproteins (6 nm). At low frequencies (<10 kHz) the cell membranes have relatively high resistances, and the current is conducted mainly by the extracellular fluid. At high frequencies the membrane capacitance causes a decrease of membrane impedance, so that the current flows through the cells. The electrical behavior of a cell can be modelled by a membrane capacitance in parallel with a membrane resistance, in series with purely passive resistive elements representing the extra- and intracellular fluids. Coupling of several such modelled cells results in 2- or even 3-dimensional models of tissue. For the measurement of the frequency dependence of tissues two representations are often used: a resistance R in series with a reactance X, or a conductance G in parallel with a capacitance C. The complex impedance Z in the first approach is given by Z = R + jX. In the second approach the admittance Y is given by Y = G + jωC. As the complex impedance is the reciprocal of the admittance, the following relations hold: R = G/(G² + ω²C²) and X = −ωC/(G² + ω²C²).
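These conversion relations are easy to verify numerically; a minimal sketch in plain Python (the component values are illustrative):

```python
import math

def series_from_parallel(g, c, omega):
    """Convert a parallel G-C admittance into its equivalent series R and X
    at angular frequency omega, using Z = 1/Y with Y = G + j*omega*C."""
    z = 1.0 / complex(g, omega * c)
    return z.real, z.imag   # X comes out negative (capacitive reactance)

# Example: G = 1 mS in parallel with C = 10 nF, measured at 50 kHz.
omega = 2.0 * math.pi * 50e3
r, x = series_from_parallel(1e-3, 10e-9, omega)
denom = 1e-3 ** 2 + (omega * 10e-9) ** 2   # G^2 + omega^2 * C^2
```

The returned pair reproduces R = G/(G² + ω²C²) and X = −ωC/(G² + ω²C²) exactly, which is just the complex reciprocal written out component by component.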
The complex series impedance (R + jX) can be visualized
in a diagram, in which the real component R is plotted
versus the imaginary component X. In the literature this
plot is often called the Cole–Cole diagram (54). In Fig. 7
the Cole–Cole diagram is given for a single time constant,
where the impedance as a function of frequency is given by
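For a single time constant the impedance takes the standard Debye form Z(ω) = R∞ + (R0 − R∞)/(1 + jωτ) (reconstructed here, as the equation itself is missing; R0 and R∞ are the low- and high-frequency resistances). Its locus in the impedance plane is a semicircle, which a short numerical check confirms (the values of R0, R∞, and τ are illustrative):

```python
import math

def debye_impedance(omega, r0, rinf, tau):
    """Single-time-constant (Debye) impedance:
    Z(omega) = Rinf + (R0 - Rinf) / (1 + j*omega*tau)."""
    return rinf + (r0 - rinf) / complex(1.0, omega * tau)

# The locus in the R-X plane is a semicircle centred on the real axis
# at (R0 + Rinf)/2 with radius (R0 - Rinf)/2.
r0, rinf, tau = 100.0, 20.0, 1e-4
centre = (r0 + rinf) / 2.0
radius = (r0 - rinf) / 2.0
distances = [abs(debye_impedance(2.0 * math.pi * f, r0, rinf, tau) - centre)
             for f in (1.0, 100.0, 1e3, 1e4, 1e6)]
```

Every frequency sample lies at the same distance from the centre, i.e., on the semicircle; real tissue spectra typically show a depressed (flattened) semicircle, which the Cole model handles with an extra exponent.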
BIBLIOGRAPHY
21. A. Vonk Noordegraaf, et al., Noninvasive assessment of right ventricular diastolic function by electrical impedance tomography, Chest, 111: 1222–1228, 1997.
22. P. E. Marik, J. Pendelton, R. Smith, A comparison of hemodynamic parameters derived from transthoracic electrical bioimpedance with those parameters obtained by thermodilution and ventricular angiography, Crit. Care Med., 25: 1545–1550, 1997.
23. R. M. Heethaar, et al., Thoracic electrical bioimpedance: Suitable for monitoring stroke volume during pregnancy? Eur. J. Obstet. Gynecol. Reprod. Biol., 58: 183–190, 1995.
24. A. C. C. van Oppen, et al., Use of cardiac index in pregnancy: Is it justified? Am. J. Obstet. Gynecol., 173: 923–928, 1995.
25. H. J. Burgess, et al., Sleep and circadian influences on cardiac autonomic nervous system activity, Am. J. Physiol., 42: H1761–H1768, 1997.
26. L. Wang, R. Paterson, Multiple sources of the impedance cardiogram based on 3-D finite difference human thorax models, IEEE Trans. Biomed. Eng., 42: 141–148, 1995.
27. M. Qu, et al., Motion artifact from spot and band electrodes during impedance cardiography, IEEE Trans. Biomed. Eng., 33: 1029–1036, 1986.
28. D. W. Kim, Detection of physiological events by impedance, Yonsei Med. J., 30: 1–11, 1989.
29. A. Vonk Noordegraaf, et al., Improvement of cardiac imaging in electrical impedance tomography by means of a new electrode configuration, Physiol. Meas., 17: 179–188, 1996.
30. R. F. Rushmer, et al., Intracardiac impedance plethysmography, Am. J. Physiol., 174: 171, 1953.
31. M. E. Valentinuzzi, J. C. Spinelli, Intracardiac measurements with the impedance technique, IEEE Eng. Med. Biol. Mag., 8 (1): 27–34, 1989.
32. G. Mur, J. Baan, Computation of the input impedance of a catheter for cardiac volumetry, IEEE Trans. Biomed. Eng., 31: 448–453, 1984.
33. J. Baan, et al., Continuous stroke volume and cardiac output from intraventricular dimensions obtained with impedance catheter, Cardiovasc. Res., 15: 328–334, 1981.
34. J. Baan, et al., Ventricular volume measured from intracardiac dimensions with impedance catheter: Theoretical and experimental aspects, in T. Kenner, R. Busse, and H. Hinghofer-Szalkay (eds.), Cardiovascular System Dynamics, New York: Plenum, 1982, pp. 569–579.
35. P. L. M. Kerkhof, End-systolic volume and the evaluation of cardiac pump function, Thesis, Leiden University, 1981.
36. T. J. Gawne, K. Gray, R. E. Goldstein, Estimating left ventricular offset volume using dual-frequency conductance catheters, J. Appl. Physiol., 63: 872–876, 1987.
37. E. Zheng, S. Shao, J. G. Webster, Impedance of skeletal muscle from 1 Hz to 1 MHz, IEEE Trans. Biomed. Eng., 31: 477–481, 1984.
38. R. S. Szwarc, et al., Conductance catheter measurement of left ventricular volume in the intact dog: Parallel conductance is independent of left ventricular size, Cardiovasc. Res., 28: 252–258, 1994.
39. R. J. M. Klautz, et al., Interaction between afterload and contractility in the newborn heart, J. Am. Coll. Cardiol., 25: 1428–1435, 1995.
40. P. Schiereck, et al., Direct recording of EDP-EDV relationship in isolated rat left ventricle: Effect of diastolic crossbridge formation, Cardiovasc. Res., 28: 715–719, 1994.
41. H. Ito, et al., Left ventricular volumetric conductance catheter for rats, Am. J. Physiol., 270: H1509–H1514, 1996.
42. J. Vogel, Measurement of cardiac output in small laboratory animals using recordings of blood conductivity, Am. J. Physiol., 273: H2520–H2527, 1997.
43. P. L. M. Kerkhof, Combination of Millar and conductance catheter for the estimation of left ventricular function in the equine heart, Proc. 21st Int. Conf. IEEE Eng. Med. Biol. Soc., 1999.
44. J. E. Axenborg, B. Olsson, An electrical impedance method for measurement of aortic cross-sectional areas, Proc. XII Int. Conf. Med. Biol. Eng., Jerusalem, 1979, p. 48.1.
45. L. Kornet, Extension and improvements of the electrical conductance method, Thesis, Erasmus University, Rotterdam, The Netherlands, 1996.
46. L. E. Baker, Applications of the impedance technique to the respiratory system, IEEE Eng. Med. Biol. Mag., 8 (1): 50–52, 1989.
47. P. L. M. Kerkhof, Beat-to-beat analysis of high-fidelity signals obtained from the left ventricle and aorta in chronically instrumented dogs, Automedica, 7: 83–90, 1986.
48. I. Giaever, C. R. Keese, A morphological biosensor for mammalian cells, Nature, 366: 591–592, 1993.
49. R. Plonsey, R. C. Barr, The four-electrode resistivity technique as applied to cardiac muscle, IEEE Trans. Biomed. Eng., 29: 541–546, 1982.
50. B. R. Epstein, K. R. Foster, Anisotropy in the dielectric properties of skeletal muscle, Med. Biol. Eng. Comput., 21: 51–55, 1983.
51. Y. Wang, et al., Geometric effects on resistivity measurements with four electrode probes in isotropic and anisotropic tissues, IEEE Trans. Biomed. Eng., 45: 877–884, 1998.
52. P. Steendijk, et al., The four-electrode resistivity technique in anisotropic media: Theoretical analysis and application to myocardial tissue in vivo, IEEE Trans. Biomed. Eng., 40: 1138–1148, 1993.
53. B. M. Eyuboglu, B. H. Brown, D. C. Barber, In vivo imaging of cardiac related impedance changes, IEEE Eng. Med. Biol. Mag., 8 (1): 39–45, 1989.
54. K. S. Cole, Membranes, Ions and Impulses, Berkeley: Univ. California Press, 1972.
55. M. Gheorghiu, E. Gersing, E. Gheorghiu, Quantitative analysis of impedance spectra of organs during ischemia, Proc. 10th Int. Conf. Electrical Bio-impedance, 1998, pp. 73–76.
56. H. Schafer, et al., Dielectric properties of skeletal muscle during ischemia in the frequency range from 50 to 200 Hz, Proc. 10th Int. Conf. Electrical Bio-impedance, 1998, pp. 77–80.
Further reading:
S. Grimnes & Ø. G. Martinsen: Bioimpedance and Bioelectricity Basics, Academic Press (2000). ISBN 0-12-303260-1
D. S. Holder: Electrical Impedance Tomography. Institute of Physics Publishing (2005). ISBN 0-7503-0952-0
Relevant URLs:
Oslo Bioimpedance Group: http://www.fys.uio.no/elg/bioimp/
Thoracic electrical bioimpedance: http://www.aetna.com/cpb/data/
CPBA0472.html
Clinical applications and guidelines: http://www.regence.com/
trgmedpol/medicine/med33.html
Animation:
Thorax cross section: http://www.cplire.ru/html/tomo/eitimage.
html
PETER L. M. KERKHOF
ROBERT M. HEETHAAR
Vrije Universiteit Medical
Center, Amsterdam,
the Netherlands
Vrije University, Amsterdam,
the Netherlands
HYPERTHERMIA THERAPY
The body temperature of human beings is normally maintained at a relatively stable value near 37 °C. Organs and tissues function most efficiently in this range. Temperature elevation even a few degrees above this norm is associated with varying levels of biological response. Hyperthermia is the term used to describe significant departure of tissue temperature beyond the usual limit (40 °C) encompassed by
thermoregulatory activity. Its use for therapeutic purposes has expanded in recent years to include a variety
of abnormal conditions. Investigations to date have shown that while hyperthermia can produce whole-body
(regional) and local tissue modifications for effective therapy, temperatures at which the desired tissue response
occurs vary over a wide range. Moreover, final tissue temperature is a complex function of energy deposition,
blood flow, and heat conduction in tissue.
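The interplay of energy deposition, perfusion, and conduction is commonly modelled with the Pennes bioheat equation; the following 1-D finite-difference sketch illustrates the idea (all tissue parameters are illustrative assumptions, not measured values):

```python
# Pennes bioheat equation, explicit finite differences on a 1-D slab:
#   rho*c * dT/dt = k * d2T/dx2 - w*(T - T_a) + Q
rho_c = 3.6e6   # volumetric heat capacity of tissue, J/(m^3 K) (assumed)
k = 0.5         # thermal conductivity, W/(m K) (assumed)
w = 2.0e3       # perfusion term w_b*rho_b*c_b, W/(m^3 K) (assumed)
t_a = 37.0      # arterial blood temperature, degC
q = 5.0e4       # deposited power density (rho * SAR), W/m^3 (assumed)

n, dx, dt = 50, 1e-3, 0.05           # 5 cm slab, 1 mm grid, 50 ms steps
T = [t_a] * n
for _ in range(10000):               # 500 s of simulated heating
    nxt = T[:]                       # boundaries held at 37 degC
    for i in range(1, n - 1):
        cond = k * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx ** 2
        src = q if 20 <= i < 30 else 0.0   # heat only the central 1 cm
        nxt[i] = T[i] + dt * (cond - w * (T[i] - t_a) + src) / rho_c
    T = nxt
peak = max(T)   # peak tissue temperature after 500 s
```

Even in this toy model, raising the perfusion term lowers the temperature rise for the same deposited power, which is why the final tissue temperature cannot be predicted from energy deposition alone.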
Hyperthermia has been used therapeutically since very early in human history. However, aside from a few
well-established medical applications, hyperthermia is still in a relatively early stage of development. Current medical applications fall into three broad categories: musculoskeletal conditions, cancer treatment, and
coagulative ablation therapy. An important aspect of its development is the production of adequate temperature distribution in the target tissue, superficial or deep-seated. Moreover, successful hyperthermia therapy
requires not only a suitable energy source for heat production, but also an understanding of the underlying
pathological condition being treated to define the critical target temperature as well as the ability to reach that
tissue with the heating modality. Energy sources that can be used for hyperthermia include ultrasonic wave
and electromagnetic field and radiation as well as conducted heat or convection.
pressure, in membrane permeability, and in the rate of metabolism. These increases can facilitate tissue
healing and can also facilitate clearance of metabolites, debris, and toxic substances from diseased tissue under
treatment. Diathermic heating of deep tissues promotes relaxation in muscles, reduces pain, and provides relief
from muscle spasms (2,3). Heating can also produce greater extensibility in fibrous collagen tissues, which is
significant in the management of joint contractures due to tightness of the capsule, fibrosis of muscle, and
scarring.
while maintaining the ability to focus energy into the tumor to raise the temperature of the tumor volume above the
minimum therapeutic temperature. Its clinical application is constrained by the available ultrasound window
between the transducer and target tumor, as well as by the presence of bone and soft tissue interfaces in the
propagation pathway. Differences in tissue density can give rise to excessive temperature elevation resulting
from accumulation of reflected power at these interfaces. For a given set of anatomic and physiologic parameters, temperature distribution in a tumor is determined by transducer design, scanning pattern, scanning
speed, and output power.
Ultrasonic modalities for noninvasive hyperthermia cancer treatment include spot focus transducers and
phased arrays. By mechanically scanning a focused transducer around a treatment volume, uniform temperature distribution with a sharp falloff outside the treatment volume may be obtained (23,24,25). However, for
large, deep-seated tumors, scanning transducers often produce hot spots proximal to the tumor along the central axis ahead of the focal plane (26). It should be mentioned that frequency sweeping and transducers with a
nonvibrating center can be used to reduce the central hot spot (27). There are two classes of phased arrays that
do not require physical movement of the transducer elements (28,29,30,31,32). The class of phased arrays with
geometric focusing and spot scanning has features and limitations similar to those of mechanically scanned
spot focus transducers (28). In this case, the transducer array is fixed in position and electrical spot scanning is
accomplished through adjustment of array element phases which maximize constructive interference at each
focal plane.
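The phase adjustment described above amounts to compensating each element's path length to the focus: driving element n with phase φn = k·dn (mod 2π), where dn is its distance to the focal point, makes all contributions arrive in phase there. A small numerical sketch (geometry, frequency, and sound speed are illustrative assumptions):

```python
import math

SOUND_SPEED = 1540.0   # m/s in soft tissue (nominal value)

def focusing_phases(elements, focus, freq):
    """Drive phases that compensate each element's path length so that
    all contributions arrive in phase at the focal point (positions in m)."""
    k = 2.0 * math.pi * freq / SOUND_SPEED
    return [(k * math.dist(e, focus)) % (2.0 * math.pi) for e in elements]

def field_magnitude(point, elements, phases, freq):
    """|sum of unit-amplitude contributions|; spherical spreading ignored
    to keep the sketch minimal."""
    k = 2.0 * math.pi * freq / SOUND_SPEED
    return abs(sum(complex(math.cos(p - k * math.dist(e, point)),
                           math.sin(p - k * math.dist(e, point)))
                   for e, p in zip(elements, phases)))

# Eight elements on a line at 5 mm pitch, focused 5 cm away, at 1 MHz.
elements = [(x * 5e-3, 0.0) for x in range(-4, 4)]
focus = (0.0, 0.05)
phases = focusing_phases(elements, focus, 1e6)
at_focus = field_magnitude(focus, elements, phases, 1e6)        # coherent sum
off_focus = field_magnitude((0.03, 0.05), elements, phases, 1e6)
```

At the focus all eight unit phasors add coherently (magnitude ≈ 8), while a few centimetres away the phases scatter and the sum is much smaller; electronic scanning simply recomputes the phase set for a new focal point.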
Alternative array element configurations and phase excitations can avoid hot spots that result from constructive interference along the array's central axis (29,30,31,32). Several phased array configurations have
been proposed. They include the concentric ring, sector vortex, spherical section, and the square arrays. With
proper selection of array element phases, these phased arrays can be operated to directly synthesize, without scanning, ultrasonic power deposition patterns for improved localization of heating within the tumor
volume. Phased arrays offer another advantage over a single focused transducer: They enable electronically
programmable treatment planning (33). Pretreatment analysis can provide strategies aimed at satisfying
therapeutic requirements for individual patients and specific tumor sites and spare other sensitive anatomic
structures. It is noted that recently ultrasonic applicators have been tested for interstitial hyperthermia (34).
Electromagnetic Heating. Various frequencies of electromagnetic energy within the range of 0.05 MHz
to 2450 MHz have been used for hyperthermia treatment of cancer. The interaction of electromagnetic fields
and waves in biological tissue is governed by (1) source frequency and intensity, (2) antenna or applicator
design and polarization, (3) tissue structure, and (4) dielectric permittivity (35,36,37). In thermal therapeutic
applications, the final temperature is affected also by tissue blood flow and heat conduction. However, the
time rate of heating and spatial distribution of electromagnetic energy at any given moment in time are direct
functions of specific absorption rate (SAR or power deposition), which are functions of antenna or applicator
design.
At frequencies below a few hundred megahertz [i.e., radio frequencies (RF)], wavelengths in tissue are
100 cm or longer (see Table 1). Power deposition for local tissue heating is characterized by quasistatic displacement or conduction currents, and heating comes about through tissue resistance to current flow. At microwave
frequencies, wavelengths are much shorter, radiated power dominates, and dielectric loss gives rise to heat
production. Power coupling from air into tissue is substantial and can exceed 50%. In addition, the effective
depth of penetration can provide useful insight into the performance of various applicators. For example, because of both the focusing ability and the depth of energy penetration, single-contact applicators operating at
915 MHz or 2450 MHz have been used to heat well-localized superficial tumors extending to a depth up to 3
cm to 6 cm.
RF Heating. For noninvasive subcutaneous tissue heating by RF energy, simple capacitive plates and
inductive coil applicators have been used. Tissues are positioned between the plates and are heated by displacement currents (38,39). A water bolus is often placed between the plate and skin to prevent superficial
burns from large electric field concentration near the edges. A limitation of the capacitive applicator is that the
electric field is predominately normal to the interface between fat and other tissues. Overheating by as much
as 20 times that in muscle can occur in subcutaneous fat greater than 2 cm in thickness. Note that it is possible
to treat tumors of patients with subcutaneous fat as thick as 3 cm by precooling the fat prior to the initiation
of heating (40).
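The fat-overheating factor follows from current continuity: with the electric field normal to the interface, the normal current density J is the same in both tissues, so the local dissipation J²/σ scales inversely with conductivity. With illustrative conductivities (assumed values, not measured ones) this alone gives roughly an order of magnitude; the poorer blood flow of fat pushes the effective factor toward the quoted 20:

```python
def fat_muscle_sar_ratio(sigma_fat, sigma_muscle):
    """SAR ratio (fat/muscle) when current crosses the interface normally.

    The normal current density J is continuous across the boundary, so
    the local dissipation J**2 / sigma scales inversely with conductivity
    (mass densities taken as equal for simplicity)."""
    return sigma_muscle / sigma_fat

# Illustrative RF conductivities in S/m (assumptions for the sketch).
ratio = fat_muscle_sar_ratio(sigma_fat=0.05, sigma_muscle=0.6)
```

For the inductive applicator discussed next the induced field is parallel to the interface, the tangential E is continuous instead, and the same argument reverses: dissipation σE² is then highest in the better-conducting muscle.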
The common inductive applicator consisting of a planar or pancake coil with a small number of turns
when placed parallel to the body surface can avoid the excessive heating problem in fatty tissue. Since the
induced electric fields form eddy currents that flow parallel to the tissue interface, heating is highest in muscle
instead of fat. The heating pattern is toroidal with a null along the axis of the applicator (41). Some recent
designs have SARs that do not include a null in the center and are considerably more uniform than those of the
planar coil (38,42,43). While inductive applicators are used predominately for superficial treatment, some of
the newer applicators can produce effective heating up to a depth of nearly 7 cm.
Several RF applicators have been invented to provide noninvasive heating of deep-seated tumors. These
include the large capacitive applicator mentioned previously (44,45) as well as the ridged waveguide (46), helical
coils (47,48), and multielement arrays (49,50,51,52). The helical coil applicator is simple in construction, and it
provides SAR patterns that vary slowly with radial distance. However, the region to be heated must be located
near the center since the axially directed electric field has a maximum near the center of the coil structure.
Its performance may be improved by judicious selection of the diameter-to-length ratio and incorporation
of external tuning. The multielement array concept has gained considerable utility in the clinic. A primary
advantage of multielement array systems is the ability to steer the heating pattern electronically by varying
the amplitude and phase of each element, thereby allowing phased arrays operating at RF to be used for
selective heating of deep-seated tumors in a variety of anatomic sites.
In particular, the annular phased array system is utilized to heat large anatomical regions such as
the thorax, abdomen, and pelvic area (49,50,53). Regional heating is frequently complicated by systemic
hyperthermia and hemodynamic compensation and by excessive heating of adjacent normal tissue structures, especially the bone–tissue interface. However, recent advances using feedback algorithms and adaptive
software modifications to control the amplitude and phase of each element showed that it is possible to maximize the SAR at a target tumor position in a complex anatomy and simultaneously minimize or reduce the
power deposition at locations where undesirable hot spots may occur.
A novel noninvasive or minimally invasive concept using ferro- or paramagnetic compounds for intracellular hyperthermia treatment of both primary and metastatic cancers was first proposed in the late 1950s.
Earlier studies have demonstrated both the preferential accumulation of submicron-sized magnetic particles
(magnetites) in tumors and the feasibility of selective heating using 0.24 MHz to 80 MHz RF magnetic fields.
Current investigations (54,55,56) are directed toward cellular uptake of fluidized magnetic particles, binding
of magnetite with targeting activity towards cancer cells, and the hyperthermic effects of fine magnetic particles on tumor cells in vitro. It is expected that a magnetite-labelled antibody may soon be available clinically
as a therapeutic agent for hyperthermia treatment of cancer.
A related technique for RF hyperthermia involves implanted ferromagnetic seeds activated by externally
applied 0.05 to 2 MHz magnetic fields. Heating is produced by eddy currents induced on the surface of the
implant and is therefore dependent on the permeability of the thermoseed material (57,58,59,60,61,62). Using
Curie temperatures close to the maximum temperature desired in the tissue, the ferromagnetic seeds can be
designed to provide thermal self-regulation so that a constant tumor temperature can be maintained throughout
the treatment regime. Since volume tissue heating is by passive thermal conduction, these 0.1 mm to 1.0 mm diameter thermoseeds of various lengths must be implanted close together. Nevertheless, under certain conditions this
invasive seed implant method, like the interstitial RF electrodes and microwave antennas to be discussed later, may be preferable for local hyperthermia of deep-seated tumors. It is noteworthy that ferromagnetic seed hyperthermia in combination with other modalities has been used in the control of ocular tumors in animals (63,64). Recently, multifilament seeds such as the palladium–nickel (PdNi) thermoseeds have gained interest because
of a more effective power deposition than solid seeds (65).
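The Curie-point self-regulation described above can be illustrated with a toy lumped thermal model in which induction heating switches off above the Curie temperature (an idealized step transition; all parameter values are arbitrary but self-consistent):

```python
def simulate_seed(t_curie=48.0, t_blood=37.0, p_heat=0.5,
                  g_loss=0.02, heat_cap=0.8, dt=0.1, steps=20000):
    """Lumped thermal model of a thermoseed: full induction heating below
    the Curie temperature, none above it (idealized step transition).
    Units are arbitrary but self-consistent."""
    t = t_blood
    for _ in range(steps):
        p = p_heat if t < t_curie else 0.0   # ferromagnetic only below T_Curie
        t += dt * (p - g_loss * (t - t_blood)) / heat_cap
    return t

final_temp = simulate_seed()   # settles just around the Curie point
</```

Even though the unregulated steady state of this model lies well above 48 °C, the seed temperature converges to the Curie point and stays there, which is the thermostat behaviour the passage describes.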
For some deep-seated tumors or tumors of large volume, interstitial techniques have been employed to
generate the desired hyperthermic field. RF electrodes operate in the frequency range of 0.5 MHz to 1 MHz
(66,67,68,69). The advantages of interstitial techniques are safety (no skin burns) and more uniform heat
distribution within the tumor. RF current flowing between pairs of needle-like bare electrodes is dissipated
by the ohmic resistance of tissue and is converted to heat. The temperature distribution produced is strongly
dependent upon blood flow in the tissue and spacing between electrodes. Most clinical applications require an
array of these electrodes spaced at 1.0 cm to 1.5 cm intervals in parallel for optimal temperature uniformity.
Excessive or inadequate heating could be minimized by independent control of RF currents and by varying the
lengths of electrodes.
Microwave Heating. For superficial tumors, single-contact applicators operating at 433 MHz to
2450 MHz have been used. The shorter wavelength at these frequencies allows microwave radiation from
a small applicator some focusing ability in tissues for selective hyperthermia. Because of the limited depth of
energy penetration, these antennas have been applied to heating well-localized tumors extending to depths of
up to 3 cm to 6 cm depending on the particular applicator (39,44,69,70).
The types of external applicators that have been reported for cancer hyperthermia include horns, microstrip applicators, and circular, rectangular, and ridged waveguides (71,72,73,74,75,76,77,78,79). These applicators are used with a high-permittivity, dielectric material to match them to tissue. In the case of a water-like
bolus, it serves also to provide surface cooling of the skin and to avoid the problem of burns and blisters. Microstrip applicators are lightweight and have a low profile. They offer efficient energy coupling and are easier
to use clinically (77,78,79). One limitation of a single applicator is its small area of tissue coverage. Another
is that the SAR distribution cannot be modified during use, making it difficult to improve the nonuniform temperature distributions that are inevitably produced during patient treatments. One approach to overcome
this problem is to scan the applicator over the tissues (80).
A favorable external system to treat tumors of wide area (tumors that exceed several cm in diameter) is the phased array consisting of multiple microstrip applicators. The primary advantage of the multielement
array system is the ability to control electronically the SAR distribution by varying the amplitude and relative
phase of each element, independently. Moreover, a planar or quasiplanar phased array operating at microwave
frequencies can be used to improve depth of penetration for selective heating of deep-seated tumors in a variety
of anatomic sites (81,82,83,84,85,86). A further advantage is that the SAR distribution can be adjusted during
treatment, enabling it to enhance the homogeneity of temperature distribution in the target region. The added
sophistication needed for controlling a multitude of array parameters is well within the capability of current
electronic technology. Although a bolus of cooling fluids can be used to prevent undesirable heating of superficial
tissues, tissue layers and curvatures in the near field of the applicator present considerable challenge to quality
control in patient treatments. In practice, the commonly accepted SAR variation is ±50% throughout the entire
treatment region.
Intracavitary techniques can be used for certain tumors at hollow viscera and cavity sites such as the
esophagus, cervix, bladder, prostate, and rectum (69,87). Properly designed intracavitary applicators and
antennas can lead to a highly targeted heating of tumors and a reduced risk of unwanted heating of normal
tissues. There are several reports of devices designed for various tumor sites (88,89,90,91). Clinical applications
may require the antennas to be equipped with an integrated cooling system.
The technical difficulty confronted by external applicators in heating deep-seated tumors without overheating adjacent normal tissue has helped interstitial array techniques become a viable treatment modality
(67,69,92,93,94,95). The technique has the capacity to adapt its SAR distribution to an irregularly shaped tumor
volume and to provide uniform temperature in deep-seated tumors. Also in combination with brachytherapy,
interstitial hyperthermia renders a treatment modality for malignancies with little additional risk to the
patient (69,95,96).
The efficacy of interstitial microwave heat treatment of soft-tissue tumors is predicated on a sufficient
temperature distribution throughout the tumor. A major determinant is the catheter antenna. Recent designs
have provided microwave interstitial array systems capable of inducing uniform temperature distribution
throughout the entire tumor volume without the need for insertion of the tip of the antenna well beyond the
tumor boundary (97,98,99,100). That requirement was a major drawback of many older catheter antennas
which had the tendency to produce a cold spot or low-heating zone near the distal tip of the antenna (67,101,102,103), creating an unnecessary risk of damage to normal tissue. A desirable feature of some of
the newer catheter antenna designs, especially those with integral sleeves or coaxial chokes, is that the SAR
distribution is independent of insertion depth (90,104,105). These antennas have also managed to alleviate the
common problem of excessive heating of the skin from current accumulation at the insertion point. In the clinic,
interstitial microwave antennas are inserted into plastic catheters implanted into the tumor. Computational
and experimental studies have shown that SAR distributions vary with antenna design, catheter size and
material, and air space between the antenna and the catheter (100,105,106).
Array configurations (i.e., geometry and antenna spacing) would also dictate the performance of the
interstitial array treatment modality. Current microwave interstitial array systems rely mostly on equilateral
triangle and square arrays of catheter antennas operating at 433 MHz to 2450 MHz and use element spacings
of 10 mm to 20 mm. Theoretical and experimental results have shown that uniform power deposition and
temperature distribution can be attained from both triangular and square arrays. However, power deposition
and temperature elevation are higher for the triangular configuration at a given level of delivered microwave
power. Moreover, for coherent phase excitations, constructive interference can provide SARs at the array centers
an order of magnitude higher than those corresponding to a single interstitial microwave antenna. A flexibility
afforded by an array of interstitial antennas is that the point of maximum SAR may be shifted from location
to location by changing the amplitude and phase of each antenna. This would avoid low SAR spots during
treatment and would ensure uniform tumor temperature over the entire treatment session. Nevertheless, it
should be noted that ideal operating conditions are difficult to assure in the clinical setting.
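The constructive-interference gain described above can be sketched numerically. This is a minimal illustration, not a model of any particular applicator: it assumes equal unit antenna amplitudes and equal path lengths to the array center, so the complex antenna fields simply add before squaring.

```python
import cmath

def centre_sar_gain(amplitudes, phases_rad):
    """Relative SAR at a point where the antenna fields superpose.

    SAR is proportional to |E|**2; with equal path lengths to the
    point, the complex fields of the antennas add before squaring.
    """
    field = sum(a * cmath.exp(1j * p) for a, p in zip(amplitudes, phases_rad))
    return abs(field) ** 2

# Three coherently driven antennas (an equilateral-triangle array):
gain = centre_sar_gain([1.0, 1.0, 1.0], [0.0, 0.0, 0.0])  # 9x a single antenna
```

For a three-element triangular array the in-phase gain is 9, consistent with the order-of-magnitude SAR enhancement cited above; changing the element phases moves the constructive-interference maximum, which is the mechanism behind shifting the point of maximum SAR from location to location.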
HYPERTHERMIA THERAPY
quite large. The higher 2450 MHz frequency is chosen because at this frequency the dielectric constant of
blood is 20% higher than that of muscle, and the dielectric constant of muscle is about 800% higher than that
of fat. While the conductivities of blood and muscle are approximately the same, they are about 300% higher than
that of fat. As the microwave radiates into the tissue medium, energy is absorbed and converted to heat by
dielectric loss. This absorption will result in a progressive reduction of the microwave power intensity as it
advances in the tissue. The time rate of heating and spatial distribution of radiated microwave energy at any
given moment in time are direct functions of SAR and antenna radiation pattern, respectively.
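As a rough illustration of the link between SAR and heating rate: before conduction and blood perfusion begin to carry heat away, the initial temperature rise follows dT/dt = SAR/c, where c is the tissue specific heat. The sketch below assumes a nominal soft-tissue specific heat of about 3500 J/(kg·°C), an illustrative value, not one taken from this article.

```python
def initial_heating_rate(sar_w_per_kg, specific_heat_j_per_kg_c=3500.0):
    """Initial rate of temperature rise (deg C/s), dT/dt = SAR / c,
    before conduction and blood perfusion begin to carry heat away."""
    return sar_w_per_kg / specific_heat_j_per_kg_c

# An SAR of 100 W/kg in soft tissue (c ~ 3500 J/(kg*degC), assumed):
rate = initial_heating_rate(100.0)   # ~0.029 degC/s
rate_per_min = 60.0 * rate           # ~1.7 degC/min
```

At clinically relevant SAR levels of tens of W/kg, this simple estimate already suggests why minutes of exposure suffice to raise tissue temperature by several degrees.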
The reduction is quantified by the depth of penetration: the distance through which the
intensity of a plane-wave field is reduced to 13.5% of its initial level in a medium. At 2450 MHz, the depths
of plane wave penetration for blood, muscle, and fat are 19 mm, 17 mm, and 81 mm, respectively (35,36,37).
For microwave catheter antennas which do not have plane wavefronts, the penetration depth is reduced
according to the specific antenna design. Nevertheless, these values clearly suggest that microwaves can
deposit energy directly into deep tissues. Furthermore, the difference in dielectric permittivity yields a
depth of penetration in tissues with low water content about four times greater than in muscle or other
high-water-content tissues at 2450 MHz. This means that a microwave field propagates more readily through,
and is absorbed less by, low-water-content tissue than high-water-content tissue. It also implies that microwaves
can propagate through intervening desiccated tissue or fat to deposit energy directly into more deeply seated
tissue.
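The penetration depths quoted above can be reproduced from the plane-wave attenuation constant of a lossy dielectric. The sketch below uses the standard plane-wave expressions; the muscle permittivity and conductivity are assumed, illustrative values for the 2450 MHz region, not numbers taken from this article's references.

```python
import math

EPS0 = 8.854e-12        # permittivity of free space, F/m
MU0 = 4.0e-7 * math.pi  # permeability of free space, H/m

def penetration_depth(f_hz, eps_r, sigma_s_per_m):
    """Plane-wave depth of penetration (m): the distance over which
    power density falls to e**-2 (about 13.5%) of its surface value,
    i.e. 1/alpha for attenuation constant alpha in Np/m."""
    w = 2.0 * math.pi * f_hz
    eps = eps_r * EPS0
    loss_tangent = sigma_s_per_m / (w * eps)
    alpha = w * math.sqrt(
        MU0 * eps / 2.0 * (math.sqrt(1.0 + loss_tangent**2) - 1.0))
    return 1.0 / alpha

# Illustrative muscle properties near 2450 MHz (assumed values):
depth_mm = 1e3 * penetration_depth(2.45e9, 47.0, 2.2)   # roughly 17 mm
```

With these inputs the result is close to the 17 mm quoted for muscle; substituting low-water-content (fat-like) properties yields a several-fold larger depth, in line with the roughly four-fold difference noted above.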
Cardiac Ablation for Tachyarrhythmia. For a significant portion of patients suffering from tachyarrhythmias, available drug therapy has been found unsatisfactory because of a lack of meaningful response
or unacceptable side effects (107,108,109). In some cases, these patients can be managed by open-heart
surgery. Percutaneous catheter ablation of arrhythmogenic foci inside the heart is a potentially curative mode
of treatment. Indeed, RF ablation has emerged as an effective therapy for many supraventricular tachycardias
and has become accepted as the standard treatment for arrhythmias associated with the Wolff-Parkinson-White
syndrome (107,108,130). Typically, the catheter is inserted percutaneously into the femoral vein and,
under the guidance of a fluoroscope, is then advanced to inside the heart chamber. The cardiac conducting
tissue responsible for the tachycardia is identified with the aid of endocardiac electrograms. A burst of RF
energy is delivered through the electrodes to thermally ablate the cardiac conducting tissue responsible for the
tachycardia, restoring the heart to its normal rhythm.
Rapid and reliable mapping of the endocardiac electrogram for identification remains a technical challenge. Also, the lesions induced by RF current are quite small and shallow (125,127,131). Increasing the output
power to heat tissue at a distance often results in excessive temperatures at the electrode-tissue interface
without the desired enlargement of lesion size (126,132). Note that temperature-guided RF catheter ablation
with very large distal electrodes can be used to improve lesion size (133). The impedance of the ablating
electrode rises because of poor coupling between the electrode and adjacent tissue; desiccated and coagulated
tissue raises the resistance to current flow, thwarts effective tissue heating, and limits the size of RF-induced
lesions.
There is a need for energy sources that can produce larger and deeper lesions than RF currents. Large
lesions are required for certain types of cardiac ablation to cure ventricular tachycardias secondary to coronary
artery disease, for example, and arrhythmias due to reentry located deep in the myocardium in particular.
The radiating and dielectric heating features of microwave energy may, in theory, be useful for ventricular ablation. As mentioned earlier, the interaction of microwaves with tissue can result in a greater volume distribution
of energy and deeper penetration. The feasibility of ablating the atrioventricular (AV) junction in dogs with
microwave catheter antennas has been shown both in vitro (134,135,136) and in vivo (137,138,139,140). Furthermore, using fresh bovine hearts and closed-chest dogs, the feasibility of a larger (4 mm long) split-tip
catheter antenna has been demonstrated for ablation treatment of ventricular tachycardia (141,142). The results suggest that if the lesion size is sufficiently large, it would be possible to ablate a ventricular tachycardia
focus using this split-tip microwave catheter antenna system. In addition to the split-tip catheter antenna,
microwave antennas reported for cardiac ablation include monopole, helical coil, cap-slot, and cap-choke designs (134,135,136,137,138,139,140,141,142,143,144,145,146). A drawback of some catheter antennas is that
a considerable amount of microwave energy is reflected by the antennas to the skin surface and is deposited at
the point of antenna insertion into the blood vessel. The problem has been addressed by integrating a sleeve
or choke in the antenna design (134,135,136,139,141,142,143,144).
The feasibility of using ultrasound for cardiac ablation has also been investigated, and a catheter-mounted
transducer has been reported (147,148).
Endometrial Ablation. Hysterectomy is performed to surgically remove the uterus in order to stop
intractable bleeding or menorrhagia (149,150). Endometrial ablation is a relatively new treatment for menorrhagia and a reliable alternative for patients with dysfunctional uterine bleeding (151,152,153).
It is superior to hysterectomy in terms of operative complications and postoperative recovery. While still in its
early stages, RF and microwave thermal ablation of the endometrium have been reported as efficacious
procedures for treatment of abnormal uterine bleeding (154,155,156,157). The technique is easier and quicker
to perform than current alternatives. Quantitative measures and patients' subjective responses suggest that a
meaningful fraction of patients treated with RF and microwave ablation experience significant flow reduction.
Investigations with microwave energy indicate that a treatment temperature of 55°C is associated with significant reduction or complete elimination of menstrual flow (156,157). Besides the difference between microwave
and RF approaches mentioned already, the use of high-intensity RF power (500 W) could produce burns at
points where electrocardiographic (ECG) electrodes come in contact with the body (158). Considerably more
investigation is needed before microwave or RF ablation can become a safe and efficacious clinical modality.
Treatment of Benign Prostate Hyperplasia. Benign prostatic hyperplasia or hypertrophy is a major
cause of morbidity in the adult male. At present, open surgery and transurethral resection of the prostate are the
gold standards for treatment of benign prostatic hypertrophy. They can provide immediate relief of obstructive
symptoms that is fairly to extremely durable (159). A new, less invasive procedure uses thermal energy
delivered by microwaves (160–165). An early report of a thermal microwave technique from 1985 employed a
transurethral microwave applicator. It showed coagulation of the prostate in mongrel dogs and some salutary
effects in an initial six patients treated with this device (160). An ensuing study used 2450 MHz microwave
energy to treat 35 patients and compared transurethral resection alone to preliminary microwave coagulation
followed by transurethral resection of the gland (161). Significant reduction in blood loss by initial treatment
with microwave thermal therapy was observed. Numerous reports have appeared since that time on various
aspects of both transrectal and transurethral microwave therapy of the prostate using 915 MHz and 2450 MHz
energy (162,163,164,165,166,167).
Most of the research in human subjects to date has focused on methods of delivery. Initial attempts to
deliver the energy transrectally have not been effective, and injury to the rectal mucosa has occurred due to
the difficulty of interface cooling of this organ (166,167). Recent investigations have focused on transurethral
delivery of the energy with cooling systems within the catheter to ensure urethral preservation (143,144,
162,163,164,165,168,169). Sensors placed in the microwave antenna maintain the temperature of the urethral
surface between 43°C and 45°C. It is noted that while the number of treatment sessions and the temperature
attained are extremely important predictors of response, sufficient hyperthermia volume is crucial for enhanced
efficacy. Virtually no data clearly demonstrating reduction in prostate volume in human subjects have been
reported, although most investigators have shown improvement in measured urinary flow rates compared
to preoperative studies. Randomized studies comparing microwave thermotherapy to transurethral resection
conclude that microwave hyperthermia treatment has a definite therapeutic effect on symptomatic prostatic
hypertrophy (169,170,171,172). Thus, microwave thermal ablation of prostatic tissue and enlargement of the
urethra with minimal clinical complications offers a therapeutic alternative to surgery in select patients with
benign prostatic hyperplasia.
BIBLIOGRAPHY
1. J. F. Lehmann (ed.), Therapeutic Heat and Cold, Baltimore, MD: Williams & Wilkins, 1990.
2. J. F. Lehmann, Diathermy, in F. H. Krusen, F. J. Kottke, and P. M. Elwood (eds.), Handbook of Physical Medicine and Rehabilitation, Philadelphia: Saunders, 1971, Chap. 11, pp. 273–345.
3. E. Fisher, S. Solomon, Physiological responses to heat and cold, in S. Licht (ed.), Therapeutic Heat and Cold, New Haven, CT: Licht, 1965, pp. 126–169.
4. G. ter Haar, Effects of increased temperature on cells, on membranes and on tissues, in D. J. Watmough and W. M. Ross (eds.), Hyperthermia, Glasgow and London: Blackie, 1986, pp. 14–41.
5. R. J. Griffin et al., Mild temperature hyperthermia combined with carbogen breathing increases tumor partial pressure of oxygen (pO2) and radiosensitivity, Cancer Res., 56: 5590–5593, 1996.
6. D. M. Brizel et al., Radiation therapy and hyperthermia improve the oxygenation of human soft tissue sarcomas, Cancer Res., 56: 5347–5350, 1996.
7. J. C. Lin, M. F. Lin, Microwave hyperthermia-induced blood–brain barrier alterations, Radiat. Res., 89: 77–87, 1982.
8. E. W. Gerner, T. C. Cetas (eds.), Hyperthermia Oncology 1992, Tucson: Arizona Board of Regents, 1993.
9. C. C. Vernon et al., Radiotherapy with or without hyperthermia in the treatment of superficial localized breast cancer: results from five randomized controlled trials, Int. J. Radiat. Oncol. Biol. Phys., 35: 731–744, 1996.
10. J. Overgaard et al., Randomised trial of hyperthermia as adjuvant to radiotherapy for recurrent or metastatic malignant melanoma, Lancet, 345: 540–543, 1995; Hyperthermia as an adjuvant to radiation therapy of recurrent or metastatic malignant melanoma: a multicentre randomized trial by the European Society for Hyperthermic Oncology, Int. J. Hyperthermia, 12: 3–20, 1996.
11. B. Emami et al., Phase III study of interstitial thermoradiotherapy compared with interstitial radiotherapy alone in the treatment of recurrent or persistent human tumors: a prospectively controlled randomized study by the Radiation Therapy Oncology Group, Int. J. Radiat. Oncol. Biol. Phys., 34: 1097–1104, 1996.
12. H. Kuwano et al., Preoperative hyperthermia combined with chemotherapy and irradiation for the treatment of patients with esophageal carcinoma, Tumori, 81: 18–22, 1995.
13. J. M. C. Bull, A review of systemic hyperthermia, Hyperthermia Radiat. Ther./Chemother. Treat. Cancer, 18: 171–176, 1984.
14. J. van der Zee et al., Whole body hyperthermia as a treatment modality, in S. B. Field and C. Franconi (eds.), Physics and Technology of Hyperthermia, Dordrecht, The Netherlands: Martinus Nijhoff, 1987, pp. 420–440.
15. S. B. Field, J. W. Hand (eds.), An Introduction to the Practical Aspects of Clinical Hyperthermia, London: Taylor & Francis, 1990.
16. H. Matsuda et al., Long duration mild whole body hyperthermia of up to 12 hours in rats: Feasibility and efficacy on primary tumour and axillary lymph node metastases of a mammary adenocarcinoma; implications for adjuvant therapy, Int. J. Hyperthermia, 13: 89–98, 1997.
17. H. I. Robins et al., Phase I clinical trial of melphalan and 41.8-degrees-C whole-body hyperthermia in cancer patients, J. Clin. Oncol., 15: 158–164, 1997.
18. P. Vanbaren, E. S. Ebbini, Multipoint temperature control during hyperthermia treatments: theory and simulation, IEEE Trans. Biomed. Eng., 42: 818–827, 1995.
19. C. J. Lewa, J. D. Decertaines, Body temperature mapping by magnetic resonance imaging, Spectrosc. Lett., 27: 1369–1419, 1994.
20. J. R. Macfall et al., H-1 MRI phase thermometry in vivo in canine brain, muscle, and tumor tissue, Med. Phys., 23: 1775–1782, 1996.
21. R. Seip et al., Noninvasive real-time multipoint temperature control for ultrasound phased array, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 43: 1063–1073, 1996.
22. R. Benyosef, D. S. Kapp, Direct clinical comparison of ultrasound and radiative electromagnetic hyperthermia applicators in the same tumours, Int. J. Hyperthermia, 11: 1–10, 1995.
23. P. Lele, Physical aspects and clinical study with ultrasound hyperthermia, in F. K. Storm (ed.), Hyperthermia in Cancer Therapy, Boston: Hall Medical, 1983, pp. 333–367.
24. K. Hynynen et al., A scanned focused multiple transducer ultrasound system for localized hyperthermia treatment, Int. J. Hyperthermia, 3: 21–25, 1987.
25. E. Moros, R. Roemer, K. Hynynen, Pre-focal plane high temperature regions induced by scanning focused ultrasound beams, Int. J. Hyperthermia, 6: 351–366, 1990.
26. W. L. Straube et al., An ultrasound system for simultaneous ultrasound hyperthermia and photon beam irradiation, Int. J. Radiat. Oncol. Biol. Phys., 36: 1189–1200, 1996.
27. M. Mitsumori et al., A phase I and III clinical trial of a newly developed ultrasound hyperthermia system with an improved planar transducer, Int. J. Radiat. Oncol. Biol. Phys., 36: 1169–1175, 1996.
28. K. B. Ochetree et al., An ultrasonic phased array applicator for hyperthermia, IEEE Trans. Sonics Ultrason., SU-31: 526–531, 1984.
29. C. A. Cain, S. Umemura, Concentric ring and sector vortex phased arrays for tumor treatment, IEEE Trans. Microw. Theory Tech., MTT-34: 542–551, 1986.
30. E. S. Ebbini, C. A. Cain, A spherical sector ultrasound phased array applicator for deep localized hyperthermia, IEEE Trans. Biomed. Eng., BME-38: 634–643, 1991.
31. S. Umemura, C. A. Cain, Acoustical evaluation of a prototype sector-vortex phased-array applicator, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 39: 32–38, 1992.
32. R. J. McGough et al., Mode scanning: Heating pattern synthesis with ultrasound phased arrays, Int. J. Hyperthermia, 10: 433–442, 1994.
33. R. J. McGough et al., Treatment planning for hyperthermia with ultrasound phased arrays, IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 43: 1074–1084, 1996.
34. C. J. Diederich, Ultrasound applicators with integrated catheter-cooling for interstitial hyperthermia: theory and preliminary experiments, Int. J. Hyperthermia, 12: 279–297, 1996.
35. J. C. Lin, Engineering and biophysical aspects of microwave and radio-frequency radiation, in D. J. Watmough and W. M. Ross (eds.), Hyperthermia, Glasgow: Blackie, 1986, pp. 42–75.
36. S. M. Michaelson, J. C. Lin, Biological Effects and Health Implications of Radiofrequency Radiation, New York: Plenum, 1987.
37. J. C. Lin, O. P. Gandhi, Computer methods for predicting field intensity, in C. Polk and E. Postow (eds.), Handbook of Biological Effects of Electromagnetic Fields, Boca Raton, FL: CRC Press, 1996, pp. 337–402.
38. J. Hand, Technical and clinical advances in hyperthermia treatment of cancer, in J. C. Lin (ed.), Electromagnetic Interaction with Biological Systems, New York: Plenum, 1989, pp. 59–80.
39. M. Gautherie (ed.), Methods of External Hyperthermia Heating, Berlin: Springer-Verlag, 1990.
40. C. K. Lee et al., Clinical experience using 8 MHz radiofrequency capacitive hyperthermia in combination with radiotherapy: results of a phase I/II study, Int. J. Radiat. Oncol. Biol. Phys., 32: 733–745, 1995.
41. A. W. Guy, J. F. Lehmann, J. B. Stonebridge, Therapeutic applications of electromagnetic power, Proc. IEEE, 62: 55–75, 1974.
42. C. Franconi et al., Low-frequency RF dipole applicator for intermediate depth hyperthermia, IEEE Trans. Microw. Theory Tech., MTT-34: 612–619, 1986.
43. Y. Fujita, H. Kato, T. Ishida, An RF concentrating method using inductive aperture-type applicators, IEEE Trans. Biomed. Eng., 40: 110–113, 1993.
44. S. B. Field, C. Franconi (eds.), Physics and Technology of Hyperthermia, Dordrecht, The Netherlands: Martinus Nijhoff, 1987.
45. C. W. Song et al., Capacitive heating of phantom and human tumors with an 8 MHz radiofrequency applicator, Int. J. Radiat. Oncol. Biol. Phys., 12: 365–372, 1986.
46. R. Paglione et al., 27 MHz ridged waveguide applicators for localized hyperthermia treatment of deep-seated malignant tumors, Microw. J., 24: 71–80, 1981.
47. P. S. Ruggera, G. Kantor, Development of a family of helical coil applicators which produce transversely uniform axially distributed heating in cylindrical fat–muscle phantoms, IEEE Trans. Biomed. Eng., BME-31: 98–106, 1984.
48. M. J. Hagmann, R. L. Levin, Analysis of the helix as an RF applicator for hyperthermia, Electron. Lett., 20: 337–338, 1984.
49. P. Turner, Regional hyperthermia with an annular phased array, IEEE Trans. Biomed. Eng., BME-31: 106–114, 1984.
50. P. Turner, Mini-annular phased array for limb hyperthermia, IEEE Trans. Microw. Theory Tech., MTT-34: 508–513, 1986.
51. P. Wust et al., Simulation studies promote technological development of radio frequency phased array hyperthermia, Int. J. Hyperthermia, 12: 477–494, 1996.
52. A. J. Fenn, G. A. King, Experimental investigation of an adaptive feedback algorithm for hot spot reduction in radio-frequency phased-array hyperthermia, IEEE Trans. Biomed. Eng., 43: 273–280, 1996.
53. K. Urata et al., Radiofrequency hyperthermia for malignant liver tumors: the clinical results of seven patients, Hepatogastroenterology, 42: 492–496, 1995.
54. A. Jordan et al., Inductive heating of ferromagnetic particles and magnetic fluid: Physical evaluation of their potential for hyperthermia, Int. J. Hyperthermia, 9: 51–68, 1993.
55. M. Suzuki et al., Preparation and characteristics of magnetite-labelled antibody with the use of poly(ethylene glycol) derivatives, Biotechnol. Appl. Biochem., 21: 335–345, 1995.
56. A. Jordan et al., Cellular uptake of magnetic fluid particles and their effects on human adenocarcinoma cells exposed to AC magnetic fields in vitro, Int. J. Hyperthermia, 12: 705–722, 1996.
57. J. C. Lin, Induction thermocoagulation of the brain: quantitation of absorbed power, IEEE Trans. Biomed. Eng., BME-22: 542–546, 1975.
58. W. J. Atkinson, I. A. Brezovich, D. P. Chakraborty, Usable frequencies in hyperthermia with thermal seeds, IEEE Trans. Biomed. Eng., BME-31: 70–75, 1984.
59. P. R. Stauffer, T. C. Cetas, R. C. Jones, Magnetic induction heating of ferromagnetic implants for inducing localized heating in deep-seated tumors, IEEE Trans. Biomed. Eng., BME-31: 235–251, 1984.
60. R. F. Meredith et al., Ferromagnetic thermoseeds suitable for an afterloading interstitial implant, Int. J. Radiat. Oncol. Biol. Phys., 17: 1341–1346, 1989.
61. S. K. Jones et al., Evaluation of ferromagnetic materials for low frequency hysteresis heating of tumors, Phys. Med. Biol., 37: 293–299, 1992.
62. J. A. Paulus et al., Evaluation of inductively heated ferromagnetic alloy implants for therapeutic interstitial hyperthermia, IEEE Trans. Biomed. Eng., 43: 406–413, 1996.
63. R. A. Steeves et al., Thermoradiotherapy of intraocular tumors in an animal model: concurrent vs. sequential brachytherapy and ferromagnetic hyperthermia, Int. J. Radiat. Oncol. Biol. Phys., 33: 659–662, 1995.
64. T. G. Murray et al., Radiation therapy and ferromagnetic hyperthermia in the treatment of murine transgenic retinoblastoma, Arch. Ophthalmol., 114: 1376–1381, 1996.
65. N. Vanwieringen et al., Power absorption and temperature control of multi-filament palladium–nickel thermoseeds for interstitial hyperthermia, Phys. Med. Biol., 41: 2367–2380, 1996.
66. J. M. Cosset, Interstitial techniques, in J. Overgaard (ed.), Hyperthermia Oncology 1984, London: Taylor & Francis, 1985, pp. 309–316.
67. J. M. Strohbehn, T. A. Mechling, Interstitial techniques for clinical hyperthermia, in J. W. Hand and J. R. James (eds.), Handbook of Techniques for Clinical Hyperthermia, London: Research Studies Press, 1986, pp. 210–219.
68. B. Gao, S. Langer, P. M. Corry, Application of the time-dependent Green's function and Fourier transforms to the solution of the bioheat equation, Int. J. Hyperthermia, 11: 267–285, 1995.
69. M. H. Seegenschmiedt, P. Fessenden, C. C. Vernon (eds.), Medical Radiology: Thermoradiotherapy and Thermochemotherapy, Berlin: Springer-Verlag, 1996, Vol. 2.
70. J. C. Lin (ed.), Special issue on phased arrays for hyperthermia treatment of cancer, IEEE Trans. Microw. Theory Tech., MTT-34: 481–648, 1986.
71. A. W. Guy et al., Development of a 915-MHz direct contact applicator for therapeutic heating of tissues, IEEE Trans. Microw. Theory Tech., MTT-26: 550–556, 1978.
72. J. C. Lin, G. Kantor, A. Grods, A class of new microwave therapeutic applicators, Radio Sci., 17: 119s–123s, 1982.
73. Y. Nikawa et al., A direct-contact microwave lens applicator with a microcomputer-controlled heating system for local hyperthermia, IEEE Trans. Microw. Theory Tech., MTT-34: 481–648, 1986.
74. M. Hiraoka et al., Clinical evaluation of 430 MHz microwave hyperthermia system with lens applicator for cancer therapy, Med. Biol. Eng. Comput., 33: 44–47, 1995.
75. M. D. Sherar et al., Beam shaping for microwave waveguide hyperthermia applicators, Int. J. Radiat. Oncol. Biol. Phys., 25: 849–857, 1993.
76. E. G. Moros et al., Clinical system for simultaneous external superficial microwave hyperthermia and Cobalt-60 radiation, Int. J. Hyperthermia, 11: 11–26, 1995.
77. T. V. Samulski et al., Spiral microstrip hyperthermia applicator: technical design and clinical performance, Int. J. Radiat. Oncol. Biol. Phys., 18: 233–242, 1990.
78. Y. Nikawa, M. Yamamoto, A multielement flexible microstrip patch applicator for microwave hyperthermia, IEICE Trans. Commun., 78B: 145–151, 1995.
79. C. Michel et al., Design and modeling of microstrip–microslot applicators with several patches and apertures for microwave hyperthermia, Microw. Opt. Tech. Lett., 14: 121–126, 1997.
80. F. Sterzer et al., A robot-operated microwave hyperthermia system for treating large malignant surface lesions, Microw. J., 29: 147–152, 1986.
81. J. W. Hand, J. L. Cheeham, A. J. Hind, Absorbed power distributions from coherent microwave arrays for localized hyperthermia, IEEE Trans. Microw. Theory Tech., MTT-34: 484–489, 1986.
82. J. T. Loane et al., Experimental investigation of a retrofocusing microwave hyperthermia applicator: conjugate-field matching scheme, IEEE Trans. Microw. Theory Tech., MTT-34: 490–494, 1986.
83. J. T. Loane, S. W. Lee, Gain optimization of a near field focusing array for hyperthermia applications, IEEE Trans. Microw. Theory Tech., MTT-37: 1629–1635, 1989.
84. E. J. Gross et al., Experimental assessment of phased array heating of neck tumors, Int. J. Hyperthermia, 6: 453–474, 1990.
85. T. P. Ryan, V. I. Backus, C. T. Coughlin, Large stationary microstrip arrays for superficial microwave hyperthermia, Int. J. Hyperthermia, 11: 187–209, 1995.
86. R. M. Najafabadi, A. F. Peterson, Focusing and impedance properties of conformable phased array antennas for microwave hyperthermia, IEEE Trans. Microw. Theory Tech., 44: 1799–1802, 1996.
87. A. Yerushalmi, Fifteen years of experience with intracavitary hyperthermia in cancer therapy, Exp. Oncol., 17: 325–332, 1995.
88. R. L. Liu et al., Heating patterns of helical microwave intracavitary oesophageal applicator, Int. J. Hyperthermia, 7: 577–586, 1991.
89. D. J. Li et al., Design of intracavitary microwave applicators for the treatment of uterine cervix carcinoma, Int. J. Hyperthermia, 7: 693–701, 1991.
90. J. C. Lin, Y. J. Wang, The cap-choke catheter antenna for microwave ablation treatment, IEEE Trans. Biomed. Eng., 43: 657–660, 1996.
91. D. Roos et al., A new microwave applicator with integrated cooling system for intracavitary hyperthermia of vaginal carcinoma, Int. J. Hyperthermia, 12: 743–756, 1996.
92. J. W. Strohbehn, E. B. Douple, Hyperthermia and cancer therapy: a review of biomedical engineering contributions and challenges, IEEE Trans. Biomed. Eng., BME-32: 779–787, 1984.
93. Y. Zhang, W. T. Joines, J. R. Oleson, Microwave hyperthermia induced by a phased interstitial antenna array, IEEE Trans. Microw. Theory Tech., MTT-38: 217–221, 1990.
94. J. W. Hand, R. Cardossi, Therapeutic applications of electromagnetic fields, in W. R. Stone (ed.), Review of Radio Science 1990–1992, London: Oxford Univ. Press, 1993, pp. 779–796.
95. D. J. Lee, R. Mayer, L. Hallinan, Outpatient interstitial thermoradiotherapy, Cancer, 77: 2363–2370, 1996.
96. T. Nakajima et al., Pattern of response to interstitial hyperthermia and brachytherapy for malignant intracranial tumour: A CT analysis, Int. J. Hyperthermia, 9: 491–502, 1993.
97. Y. Wang, J. C. Lin, A comparison of microwave interstitial antennas for hyperthermia, Proc. IEEE Eng. Med. Biol. Conf., 1986, pp. 1463–1466.
98. J. C. Lin, Y. J. Wang, Interstitial microwave antennas for thermal therapy, Int. J. Hyperthermia, 3: 37–47, 1987.
99. V. Sathiaseelan et al., Performance characteristics of improved microwave interstitial antennas for local hyperthermia, Int. J. Radiat. Oncol. Biol. Phys., 20: 531–539, 1991.
100. G. Schaller, J. Erb, R. Engelbrecht, Field simulation of dipole antennas for interstitial microwave hyperthermia, IEEE Trans. Microw. Theory Tech., 44: 887–895, 1996.
101. J. W. Strohbehn et al., Evaluation of an invasive microwave antenna system for heating deep-seated tumor, J. Natl. Cancer Inst. Monogr., 61: 489–491, 1982.
102. T. Z. Wong et al., SAR patterns from an interstitial microwave antenna-array hyperthermia system, IEEE Trans. Microw. Theory Tech., MTT-34: 560–567, 1986.
103. P. F. Turner, Interstitial equal-phased arrays for EM hyperthermia, IEEE Trans. Microw. Theory Tech., MTT-34: 572–578, 1986.
104. T. P. Ryan, J. A. Mechling, J. W. Strohbehn, Absorbed power deposition for various insertion depths for 915 MHz interstitial dipole antenna arrays: Experiment versus theory, Int. J. Radiat. Oncol. Biol. Phys., 19: 377–387, 1990.
105. J. C. Lin, Y. J. Wang, Power deposition patterns for miniature 2450 MHz interstitial antenna and arrays, in T. Sugahara and M. Saito (eds.), Hyperthermia Oncology 1988, London: Taylor & Francis, 1988, Vol. 1, pp. 891–893.
106. M. S. Wu et al., Effect of a catheter on SAR distribution around interstitial antenna for microwave hyperthermia, IEICE Trans. Commun., 78B: 845–850, 1995.
107. S. K. S. Huang (ed.), Radiofrequency Catheter Ablation of Cardiac Arrhythmias: Basic Concepts and Clinical Applications, Armonk, NY: Futura Publ. Co., 1995.
108. A. B. Wagshal, S. K. S. Huang, Application of radiofrequency energy as an energy source for ablation of cardiac arrhythmias, in J. C. Lin (ed.), Advances in Electromagnetic Fields in Living Systems, New York: Plenum, 1997, Vol. 2, pp. 205–254.
109. G. Breithardt, M. Borggrefe, D. P. Zipes, Nonpharmacological Therapy of Tachyarrhythmias, Mount Kisco, NY: Futura Publ. Co., 1988.
110. A. W. Shaw, A risk benefit analysis of drugs used in the treatment of endometriosis, Drug Saf., 11: 104–113, 1994.
111. A. Lalonde, Evaluation of surgical options in menorrhagia, Br. J. Obstet. Gynecol., 101: 8–14, 1994.
112. H. Lepor et al., A randomized, placebo-controlled multicenter study of the efficacy and safety of terazosin in the treatment of benign prostatic hyperplasia, J. Urol., 148: 1467–1474, 1992.
113. G. J. Gormley et al., The effect of finasteride in men with benign prostatic hyperplasia, New Engl. J. Med., 327: 1185–1191, 1992.
114. M. J. Barry, Medical outcome research and benign prostatic hyperplasia, Prostate (Suppl.), 3: 61–74, 1990.
115. J. Davis, The principles and the use of the Nd-YAG laser in gynecological surgery, Clin. Obstet. Gynecol., 1: 331–350, 1987.
116. D. C. Marlin, R. Vander Zwaag, Excisional techniques for endometriosis with the CO2 laser laparoscope, J. Reprod. Med., 32: 753–758, 1987.
117. P. C. Reid et al., Nd-YAG laser endometrial ablation: histological aspects of uterine healing, Int. J. Gynecol. Pathol., 11: 174–179, 1992.
118. D. R. Phillips, A comparison of endometrial ablation using the Nd-YAG laser or electrosurgical techniques, J. Amer. Assoc. Gynecol. Laparosc., 1: 235–239, 1994.
119. M. Kelly, H. M. L. Mathews, P. Weir, Carbon dioxide embolism during laser endometrial ablation, Anaesthesia, 52: 65–67, 1997.
120. H. P. Weber et al., Mapping guided laser catheter ablation of the atrioventricular conduction in dogs, Pace-Pacing Clin. Electrophysiol., 19: 176–187, 1996.
121. J. N. Kabalin et al., Comparative study of laser versus electrocautery prostatic resection: 18-month follow-up with complex urodynamic assessment, J. Urol., 153: 94–97, 1995.
122. E. Rihuela et al., Histopathological evaluation of laser thermocoagulation in the human prostate: optimization of laser irradiation for benign prostatic hyperplasia, J. Urol., 153: 1531–1535, 1995.
123. P. Narayan et al., A randomized study comparing visual laser ablation and transurethral evaporation of prostate in the management of benign prostatic hyperplasia, J. Urol., 154: 2083–2088, 1995.
124. R. H. Hoyt et al., Factors influencing transcatheter radiofrequency ablation of the myocardium, J. Appl. Cardiol., 1: 469–486, 1986.
125. L. T. Blouin, F. I. Marcus, The effect of electrode design on the efficiency of delivery of RF energy to cardiac tissue in vitro, PACE, 12: 136–143, 1989.
126. F. H. M. Wittkampf, R. N. W. Hauer, E. O. Robles de Medina, Control of RF lesion size by power regulation, Circulation, 80: 962–968, 1989.
127. J. J. Langberg et al., Radiofrequency catheter ablation: The effect of electrode size on lesion volume in vivo, PACE, 13: 1242–1248, 1990.
128. J. C. Lin, Y. J. Wang, R. J. Hariman, Comparison of power deposition patterns produced by microwave and radio frequency cardiac ablation catheters, Electron. Lett., 30: 922–923, 1994.
129. T. L. Wonnell, P. R. Stauffer, J. J. Langberg, Evaluation of microwave and radio frequency catheter ablation in a myocardium-equivalent phantom model, IEEE Trans. Biomed. Eng., 39: 1086–1095, 1992.
130. M. M. Scheinman, Pattern of catheter ablation practice in the United States: results of the 1992 NASPE survey, PACE, 17: 873–875, 1994.
131. S. Nathan, J. P. Dimarco, D. E. Haines, Basic aspects of radiofrequency ablation, J. Cardiovasc. Electrophysiol., 5: 863–876, 1994.
132. I. D. McRury, D. E. Haines, Ablation for the treatment of arrhythmias, Proc. IEEE, 84: 404–416, 1996.
133. J. J. Langberg et al., Temperature-guided radiofrequency catheter ablation with very large distal electrode, Circulation, 88: 245–249, 1993.
134. J. C. Lin, Transcatheter microwave technology for treatment of cardiovascular diseases, in M. E. O'Connor, R. H. C. Bentall, and J. C. Monahan (eds.), Emerging Electromagnetic Medicine, New York: Springer-Verlag, 1990, pp. 125–134.
135. K. J. Beckman et al., Production of reversible and irreversible atrioventricular block by microwave energy [abstract], Circulation, 16: 1612, 1987.
136. J. C. Lin et al., Microwave ablation of the atrioventricular junction in open heart dogs, Bioelectromagnetics, 16: 97–105, 1995.
137. J. J. Langberg et al., Catheter ablation of the atrioventricular junction using a helical microwave antenna: A novel means of coupling energy to the endocardium, PACE, 14: 2105–2113, 1991.
138. J. C. Lin et al., Microwave catheter ablation of the canine atrio-ventricular junction [abstract], J. Amer. Coll. Cardiol., 21: 357a, 1993.
139. J. C. Lin et al., Microwave catheter ablation of the atrioventricular junction in closed-chest dogs, Med. Biol. Eng. Comput., 34: 295–298, 1996.
140. L. B. Liem et al., In vitro and in vivo results of transcatheter microwave ablation using forward-firing tip antenna design, Pacing Clin. Electrophysiol., 19: 2004–2008, 1996.
141. S. K. S. Huang et al., Percutaneous microwave ablation of the ventricular myocardium using a 4-mm split tip antenna electrode: A novel method for potential ablation of ventricular tachycardia [abstract], J. Amer. Coll. Cardiol., 25: 285a, 1994.
142. F. S. Mazzola et al., Determinants of lesion size using a 4-mm split-tip antenna electrode for microwave catheter ablation, in North American Society of Pacing and Electrophysiology (NASPE), Memphis, TN, Newton Upper Falls, MA: NASPE, 1994, p. 814.
143. J. C. Lin, Y. J. Wang, A catheter antenna for percutaneous microwave therapy, Microw. Opt. Technol. Lett., 8: 70–72, 1995.
144. J. C. Lin, Y. J. Wang, The cap-choke catheter antenna for microwave ablation treatment, IEEE Trans. Biomed. Eng.,
43: 657660, 1996.
145. S. Shetty et al., Microwave applicator design for cardiac tissue ablations, J. Microw. Power Electromagn. Energy, 31:
5966, 1996.
146. S. Labonte et al., Monopole antennas for microwave catheter ablation, IEEE Trans. Microw. Theory Tech., 44: 1832
1840, 1996.
16
HYPERTHERMIA THERAPY
147. J. E. Zimmer et al., The feasibility of using ultrasound for cardiac ablation, IEEE Trans. Biomed. Eng., 42: P891897,
1995.
148. K. Hynynen et al., Cylindrical ultrasonic transducers for cardiac catheter ablation, IEEE Trans. Biomed. Eng., 44:
144151, 1997.
149. A. H. DeCherny, M. L. Polan, Hysteroscopic management of intrauterine lesions and intractable uterine bleeding,
Obstet. Gynecol., 61: 392397, 1983.
150. A. Lalonde, Evaluation of surgical options in menorrhagia, Br. J. Obstet. Gynecol., 101: 814, 1994.
151. I. S. Fraser et al., Short and medium term outcomes after rollerball endometrial ablation for menorrhagia, Med. J.
Aust., 158: 454457, 1993.
152. M. S. Baggish, E. H. M. Sze, Endometrial ablationa series of 568 patients treated over an 11-year period, Amer. J.
Obstet. Gynecol., 174: 908913, 1996.
153. R. Garry et al., Six hundred endometrial laser ablations, Obstet. Gynecol., 85: 2429, 1995.
154. J. H. Phipps et al., Treatment of functional menorrhagia by radiofrequency-induced thermal endometrial ablation,
Lancet, 335: 374376, 1990.
155. J. H. Phipps et al., Validation of a method of treating menorrhagia by endometrial ablation, Clin. Phys. Physiol.
Meas., 13: 273280, 1992.
156. J. C. Lin, Microwave technology for minimally invasive interventional procedures, Chin. J. Med. Biol. Eng., 13:
293304, 1993.
157. N. C. Sharp et al., Microwave for menorrhagiaa new fast technique for endometrial ablation, Lancet, 346: 1003
1004, 1995.
158. V. J. Page, Anaesthesia and radiofrequency endometrial ablation, Eur. J. Anaesthesiol., 10: 2526, 1993.
159. J. Aagaard et al., Total transurethral resection vs minimal transurethral resection of the prostatea 10-year followup
study of urinary symptoms, uroflowmetry and residual volume, Br. J. Urol., 74: 333336, 1994.
160. T. Harada et al., Microwave surgical treatment of diseases of the prostate, Urology, 26: 572576, 1985.
161. T. Harada et al., Microwave surgical treatment of diseases of the prostate: clinical application of microwave surgery
as a tool for improved prostatic electroresection, Urol. Int., 42: 127131, 1987.
162. M. A. Astrahan et al., Microwave applicator of transurethral hyperthermia of benign prostatic hyperplasia, Int. J.
Hyperthermia, 5: 383396, 1989.
163. W. L. Strohmaier et al., Local microwave hyperthermia of benign prostatic hyperplasia, J. Urol., 144: 913917, 1990.
164. A. Lindner et al., Local hyperthermia of the prostatic gland for the treatment of benign prostate hypertrophy and
urinary retention, Br. J. Urol., 65: 201203, 1990.
165. S. St C. Carter et al., Single session transurethral microwave thermotherapy for the treatment of benign prostate
obstruction, J. Endourol., 5: 137143, 1991.
166. F. Montorsi et al., Transrectal microwave hyperthermia for benign prostatic hyperplasialong term clinical, pathological and ultrastructural patterns, J. Urol., 148: 321325, 1992.
167. P. Debicki et al., Temperature steering in prostate by simultaneous transurethral and transrectal hyperthermia,
Urology, 40: 300307, 1992.
168. L. Baert et al., Transurethral microwave hyperthermia: An alternative treatment for prostdynia, Prostate, 19: 113
119, 1991.
169. D. G. Bostwick, T. R. Larson, Transurethral microwave thermal therapypathologic findings in the canine prostate,
Prostate, 26: 116122, 1995.
170. M. Zerbib et al., Localized hyperthermia vs. the sham procedure in obstructive benign hyperplasia of the prostatea
prospective randomized study, J. Urol., 147: 10481052, 1992.
171. C. Dahlstrand et al., Transurethral microwave thermotherapy vs. transurethral resection for benign prostatic hyperplasia: Preliminary results of a randomized study, Eur. Urol., 23: 292298, 1993.
172. H. Matzkin, Hyperthermia as a treatment modality in benign prostatic hyperplasia, Urology, 43: 1720, 1994.
CLINICAL ENGINEERING
OVERVIEW
Definition
Clinical engineering is a relatively new profession. The term
clinical engineering was coined by Cesar A. Caceres, M.D. in
1967 to describe a George Washington University Medical
School program in which he envisioned engineers and physicians working together to provide better patient care. This
program, which was neither medicine, engineering, nor statistics,
contained elements of all of these disciplines. As the program
focus was to be patient oriented, he chose to couple the term
clinical with engineering to describe it, so as to distinguish it
from the research-oriented activities of biomedical engineering (1).
In 1992 the American College of Clinical Engineers
(ACCE) defined a clinical engineer as "a professional who supports and advances patient care by applying engineering and
management skills to healthcare technology" (2). The Clinical
Engineering Board of Examiners of the International Certification Commission for Clinical Engineering and Biomedical
Technology endorses this definition.
Environment of Patient Care
At the heart of clinical engineering is the concept of providing
engineering expertise to ensure that the environment of patient care (EC) is safe for both patient and clinician. Medical
equipment used for patient care comprises a large part of this
environment. It must be safe, efficacious (performing the
function for which it was intended), and cost-effective. Clinical engineers are the professionals who provide technical support services to ensure this. Their knowledge is invaluable to
health-care provider institutions, such as hospitals, nursing
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
Directory of Engineering and Engineering Technology: Undergraduate Programs from the American Society for
Engineering Education
Self-Study. Formal training can be supplemented with self-study of technical journals, periodicals, and trade publications, as well as equipment operator and service manuals,
VCR training tapes and computer-based training programs.
A clinical engineering library provides an invaluable tool for
the clinical engineering staff and for other health-care workers (physicians, nurses, laboratory technicians) to whom clinical engineering services are provided. Libraries could include
technical videos, equipment operator and service manuals, and
technical publications (books and magazines). Such material
also allows staff to keep pace with changes in regulatory requirements and biomedical standards.
Safety Training. Employee right-to-know and safety training that discusses the hazards encountered in the workplace
is also necessary. This includes subject matter related to
blood-borne pathogens, hazardous materials, proper protection when entering patient-care areas (gloves, masks, etc.),
environmental hazards, fire hazards, the patient's bill of rights,
and other items. The latest trend makes use of interactive
computer program modules. This allows training at a time
convenient to the employee and no longer requires attendance
at lengthy seminars.
Certification
Certification is provided for clinical engineers [Certified Clinical Engineer (CCE)] and biomedical equipment technicians
[Certified Biomedical Equipment Technician (CBET)] by examining boards guided by the International Certification
Commission for Clinical Engineering and Biomedical Technology. The Commission is composed of health-care community members including engineering, medical, industrial, and
governmental groups and agencies. Certification provides formal recognition that an individual has mastered a body of
knowledge that is useful in job performance. This knowledge,
which is both theoretical and practical, includes theory of operation of medical equipment, physiological principles, and
safety issues related to medical equipment (8).
Clinical engineering certification requires passing a written exam (multiple-choice and essay questions) and an oral
interview aimed at determining the candidate's depth and
breadth of experience. BMET certification requires passing a
written multiple-choice examination. The Association for the
Advancement of Medical Instrumentation (AAMI) assists candidates by providing certification training courses and study
materials.
Certification renewal requires demonstration of continued
training. Points are assigned and accumulated for various activities that contribute to one's ability to do the job.
Ethics
Confidentiality. Working in a health-care environment,
clinical engineers and BMETs have access to information that
must be kept confidential. If confidentiality is not maintained,
credibility is soon lost. For example:
Patient data must not be indiscriminately discussed.
Some service manuals are proprietary.
During the bid process in which new equipment is being
purchased, bidder quotes and bid evaluations must not
be shared with competitors.
Research activities must not be discussed until data are
published.
Mid-1960s to 1970s
Clinical engineering's great impetus for growth occurred in
the 1970s. This came about as follows.
Equipment Problems
During the mid-1960s the medical device industry as a whole
did not yet have adequate performance or safety standards.
Equipment designers were not fully familiar with the requirements of the hospital environment. Equipment design defects
included inadequate energy from defibrillators, ungrounded
equipment chassis, and alarms that could be falsely triggered.
Quality control was also poor as evidenced by physiological
monitors that were grossly out of calibration and equipment
that was cracked, broken, or missing components. New medical equipment that was purchased and delivered in supposedly ready-to-use condition was found to have an incidence of defects ranging from 25% to 50% (9).
At this time the dangers of microshock and leakage current were starting to be recognized and discussed. Of special
concern was the medical equipment used for coronary care
and the procedures used to maintain this equipment.
Ralph Nader
Ralph Nader raised national consciousness about accidents
that could occur in hospitals as a result of poorly designed or
faulty medical equipment. His article in the March 1971 Ladies' Home Journal claimed that too many hospitals are "hazardous electrical horror chambers." To eliminate these dangers, Nader suggested that hospitals hire engineers to provide
advice on electrical equipment and its installation, as well as
on electrical wiring (10). As a result, the clinical engineering
profession was spurred forward as hospitals hired additional
staff to test their equipment and verify electrical safety. This
was also the beginning of independent service organizations
(ISO), which provided an alternative to original equipment
manufacturer (OEM) service. Nader's claims have since been
refuted (11).
Kellogg Foundation. The W. K. Kellogg Foundation (established in 1930 to help people improve their quality of life by
providing grants to solve identifiable problems) addressed the
need for improved equipment maintenance prior to the Nader
article when it funded the nation's first experimental preventive maintenance (PM) program for hospital equipment. A
three-year grant starting May 1, 1970 was awarded to the
biomedical/clinical engineering department of the State University of New York's Downstate Medical Center. The Downstate Medical Center has since changed its name to the
Health Science Center at Brooklyn, University Hospital of
Brooklyn. The department, the Scientific and Medical Instrumentation Center (SMIC), established in 1963 as one of the first
biomedical/clinical engineering programs in the nation, is
still active today. The Kellogg Foundation also funded the nation's first shared clinical engineering program in 1972, the
Northwest Ohio Clinical Engineering Center. The center provided equipment maintenance, consultation, and educational
services to hospitals in that locale (12,13).
Response of the Joint Commission on the Accreditation of
Healthcare Organizations. The Joint Commission on the Accreditation of Healthcare Organizations (JCAHO) also responded to the apparent need for additional safety testing.
Their 1974 standard required quarterly leakage testing for
electrically powered equipment. Their April 1976 Accreditation Manual for Hospitals required hospitals to establish comprehensive instrumentation programs that included preventive maintenance programs with written records of inspection,
testing, and corrective action taken. It also required all new
patient-related equipment to be evaluated for proper performance before being used clinically (9). The JCAHO requirements further hastened the establishment of in-house clinical
engineering departments that strove to satisfy these requirements as well as to improve on manufacturer-provided maintenance.
sure the safety and efficacy of the hospital's medical instrumentation. This goal is achieved by providing appropriate
technical services. These services can range from the basic
maintenance, calibration, and repair of medical equipment, to
the more sophisticated research activities of design and development of medical equipment and devices usually associated
with biomedical engineering, thus resulting in some overlap
between these two disciplines (Fig. 3).
Equipment Responsibilities. Clinical engineers apply engineering and management principles to issues that relate to
medical equipment's entire life cycle. They help determine
what equipment to purchase and how long it is cost-effective
to keep this equipment in service, and when to turn to newer
technologies. Such guidance saves hospitals money and reduces liability.
Clinical engineers manage a diverse group of medical devices located throughout their institutions. This includes instrumentation used in cardiology, intensive care, clinical laboratory, respiratory therapy, anesthesiology, neurology,
physical therapy, ultrasound, and the operating rooms. Some
clinical engineering departments also provide service for x-ray or ionizing radiation devices used in radiology, radiation
therapy, or nuclear medicine that are typically managed by
radiation physics staff. Others may service purely mechanical
devices such as stretchers, hospital beds, and wheelchairs,
but these are usually managed by facilities engineering.
Clinical engineers provide emergency instrumentation
troubleshooting expertise. This can take place within an operating room during cardiothoracic surgery, a patient-care
area (Fig. 4), or in a researcher's laboratory during animal
experimentation. Typically clinical engineers do not operate
the medical equipment or select levels of treatment (i.e., balloon pump inflate or deflate timing), or give fluids to or take
RFQ generation
Purchase requisition review
Bid evaluation
Vendor and manufacturer interface
Coordination of outside services
Acceptance testing (initial checkout, incoming inspection,
incoming test) of new equipment safety, operation, and
technical specifications
Clinical equipment installation coordination and supervision
Defect resolution and documentation
User in-service training
Preventive maintenance test procedure generation and update
Testing of rental, loaner, demonstration, patient-owned,
and physician-owned equipment for hospital use
Scheduled PM
Equipment repairs
Equipment upgrades
Oversight and evaluation of equipment service contracts
Emergency clinical engineering support to all patient-care
areas
On-call and recall for critical care areas
Specialized clinical engineering support dedicated to cardiothoracic surgery
Quality assurance and risk management
Regulatory agency survey support
Equipment related patient incident investigation
Hazard and recall alert notification
Clinical engineering participation on hospital and health
center committees
Represent hospital on the University Healthcare Consortium (UHC) Clinical Engineering Council
Represent hospital in the New York City Metropolitan
Area Clinical Engineering Directors group
BMET internship programs
NYC Board of Education Substitute Vocational Assistance
(SVA) internship programs
Volunteer training
Clinical engineering staff and departmental development.
Equipment Modification. At times clinical engineering is
called on to modify instrumentation. Modification must not be
done indiscriminately. Care must be taken so as not to violate
the integrity of the equipment. It is best to limit modifications
to external operations, such as securing devices to a cart so
they will not fall off in transit or be stolen, assembling devices into working systems, and modifying equipment to
allow easier PM. Non-manufacturer-approved internal
modifications must be approached with extreme caution and
are best not done, as they may void warranties and violate
FDA guidelines. For example, a monitor used in an endoscopic video system may have to be secured to a cart, or a
medication cart may have to be modified to allow its use as a
crash cart. Crash carts typically are medication carts that
house a defibrillator, suction device, O2 tank, and supplies.
Purchased as separate entities, these components require integration. Elec-
Joint Commission
The Joint Commission on Accreditation of Healthcare Organizations (JCAHO) runs voluntary three-year accreditation programs for health-care facilities aimed at improving the quality of patient care. Information gathered about a hospital can
be released to the public. Accredited health-care facilities are
eligible to receive federal Medicare reimbursement. Many
state governments recognize accreditation as a requirement
for licensure and Medicaid reimbursement (22).
The environment of care (EC) in which today's health care
is provided is complex. It includes plant facilities, medical
equipment, drugs, information, finance, staff, third-party services, and diverse technologies (23). JCAHO is concerned that
this environment be managed so as to provide a hazard-free
environment that reduces the risk of human injury.
JCAHO requires management programs to be set up that
deal with safety, security, hazardous waste, emergency preparedness, life safety, medical equipment, and utility systems. Clinical engineers tend to be most involved with activities that constitute a medical equipment management
program, the purpose of which is to promote the safe and effective use of the institutions medical equipment. The medical equipment management program encompasses equipment
acquisition, technical management, and education for both
equipment operators and maintainers. As part of this program, JCAHO requires clinical engineering to submit periodic
reports to the institution's safety committee. Performance
standards (quantifying factors relevant to program effectiveness) are developed and selected indicators (activities) such
as those dealing with timely PM and repair performance are
tracked. Changes observed in the indicators are used to spot
and correct deficiencies in the clinical engineering program.
The aim of this activity is to improve the quality and cost-effectiveness of the clinical engineering services provided (24).
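The indicator-tracking idea can be sketched in code. The sketch below is illustrative only: an on-time PM completion rate is one plausible indicator, and the record layout, function name, and sample dates are hypothetical rather than JCAHO-prescribed.

```python
from datetime import date

def pm_on_time_rate(pm_records):
    """Fraction of scheduled PMs completed on or before their due date.

    Each record is a (due_date, completed_date) tuple; completed_date
    is None when the PM was missed entirely.
    """
    if not pm_records:
        return 1.0
    on_time = sum(
        1 for due, done in pm_records if done is not None and done <= due
    )
    return on_time / len(pm_records)

records = [
    (date(1998, 3, 1), date(1998, 2, 27)),  # completed early
    (date(1998, 3, 1), date(1998, 3, 10)),  # completed late
    (date(1998, 4, 1), None),               # missed
    (date(1998, 4, 1), date(1998, 4, 1)),   # completed on the due date
]
rate = pm_on_time_rate(records)  # 2 of 4 on time -> 0.5
```

A department might trend such a rate month to month and investigate the underlying causes when it drops.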
Safe Medical Devices Act
The Safe Medical Devices Act (SMDA) of 1990 and its 1992
amendment require health-care institutions to report equipment incidents resulting in serious injury or death to a patient or employee. An institution's risk manager determines
whether the incident is reportable, using a documented decision-making process. Medical-device-related deaths must be
reported within 10 days to the FDA and to the manufacturer.
Medical-device-related serious injuries or illnesses must be
reported within 10 days to the manufacturer, or if the manufacturer is unknown, to the FDA. Periodic summary reports
are also required. The SMDA also requires that specific medical devices be tracked, and that medical equipment be properly disposed of when it is taken out of service (25,26). SMDA
compliance is also a requirement of the JCAHO.
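The reporting rules just described can be captured in a small routing helper. This is only a sketch of the rules as summarized above, not legal guidance; the function name and return format are invented for illustration.

```python
def smda_recipients(outcome, manufacturer_known=True):
    """Return (recipients, deadline in days) for a medical-device
    incident per the SMDA rules summarized above, or None when the
    risk manager's decision process finds it nonreportable."""
    if outcome == "death":
        # Device-related deaths go to both the FDA and the manufacturer.
        return (["FDA", "manufacturer"], 10)
    if outcome == "serious injury":
        # Serious injuries go to the manufacturer, or to the FDA
        # when the manufacturer is unknown.
        return (["manufacturer"] if manufacturer_known else ["FDA"], 10)
    return None

recipients, days = smda_recipients("serious injury", manufacturer_known=False)
# recipients == ["FDA"], days == 10
```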
However, as a result of a reform bill, the FDA Modernization Act of 1997, in a few years all hospitals may no longer be
required to submit reports to the FDA when patient deaths
or serious injuries involving medical devices occur. Instead,
the FDA will rely on a small sample of representative hospitals and nursing homes, called "sentinels," to collect the data.
Patient Incident Investigation
When a patient incident occurs, incident reports from nursing, physicians, and others are submitted to the risk manage-
required removes it from service and sequesters it until remedial action is taken. Should equipment retrofitting be required,
the manufacturer may choose to provide an upgrade kit with
instructions, opt to send a field-service engineer on-site, or
require that the equipment be picked up or sent to the factory. Appropriate paperwork must be provided to the institution for inclusion in the instrument's history folders, and entries made into the computerized equipment records. Updated
operator's manuals and additional user in-service training
may also be required.
MEDICAL EQUIPMENT
Patient-Care Equipment
Equipment used on patients or for patient care in health-care facilities is both varied and numerous. It runs the gamut
from simple thermometers to sophisticated MRI machines.
Equipment used on the patient, such as an electrocardiogram (ECG) monitor, is readily visible in the patient's immediate physical vicinity. Equipment used for patient care,
such as a clinical chemistry analyzer, may be housed in a
laboratory at a location remote from the patient. Both types
are important when considering the environment of patient
care.
Medical equipment falls mainly into three categories: diagnostic, therapeutic, and assistive.
Diagnostic equipment such as a monitor acquires data and
uses transducers to enhance and supplement human senses.
Therapeutic instruments such as high-voltage X-ray machines, pacemakers, and defibrillators arrest or control physiological processes affected by disease or trauma. Assistive devices supplement diminished or lost functions, and include life-support
(ventilator) and life-sustaining (dialysis unit) devices (27).
Patient Monitoring
Medical instrumentation used for patient monitoring has become quite sophisticated. This microprocessor-controlled
equipment provides multiphysiological parameter monitoring
with alarm generation and recording capability. It incorporates telemetry, ST segment analysis, and full physiological
parameter disclosure capability (which stores selected waveforms for recall), allowing clinical study of abnormalities. It
also includes automatic arrhythmia detection at the bedside,
which until a few years ago required a large stand-alone computer housed in a specially cooled room. Using individual personal computers, patient data can also be collected and archived for additional statistical studies.
The patient's physiological parameters are viewed on bedside monitors as well as on remote slave displays. Parameters
monitored include ECG, heart rate, respiration rate, cardiac
output, noninvasive blood pressure, invasive blood pressures
(arterial, pulmonary artery, central venous, etc.), oxygen saturation (SaO2), pulse rate, end-tidal carbon dioxide (ETCO2),
and temperature.
In critical care areas, the bedside monitors are hard-wired
to central nursing stations, allowing centralized
Figure 7. Central nursing station. Physiological monitoring has grown quite sophisticated.
Patient information gathered at the bedside is
routed to a central nursing station providing
clinical staff with a comprehensive viewing
area. Each central station monitor typically
shows waveforms and parameters for four different patients and has the ability to zoom in
on a specific patient to show all monitored parameters. Recorders provide documented
printouts of alarm conditions including detected arrhythmias. Closed-circuit TVs visually
monitor patient isolation rooms as well. Clinical engineering is involved with the entire life
cycle of such equipment from prepurchase selection through acceptance testing, PM, repair,
and eventual obsolescence retirement.
viewing at one location (Fig. 7). Nursing stations may be connected via Ethernet-type local area networks,
allowing remote patient viewing between nursing stations
and sharing of full disclosure equipment. Telemetry information is likewise routed to a nursing station for centralized
viewing of ambulatory patients. In this case, the telemetry
transmitter takes the place of the bedside monitor, transmitting a signal to an antenna system that routes it to a receiver
and display unit.
Equipment Inventory List
To be effective, an equipment management program requires
maintenance of an up-to-date, complete inventory of medical
equipment used in the health-care institution. This equipment inventory list helps identify equipment for product recall and hazard alerts, as well as to locate equipment due for
scheduled PM. Typical entries for each device include:
Departmental owner
Service organization responsible for the equipment (in-house, contract, etc.)
Acceptance date, when approved for clinical usage
Warranty expiration date
Equipment acquisition cost
PM frequency and PM procedure number to be used
Additional information the organization believes useful for
proper equipment management
Device descriptions follow a standard nomenclature, for example:
Cart, resuscitation
Pacemaker, cardiac
Heart rate monitor, ECG
ECG monitor
Diathermy unit
Equipment Records
Equipment history files are maintained to provide information for equipment management and technology assessment
purposes, as well as to satisfy regulatory requirements. When
equipment is taken out of service and is disposed of, its history file should be maintained for a minimum of three additional years (31), or longer if an institution's legal counsel or
risk manager deems it necessary. This will offer the institution some protection in the event that a patient incident lawsuit, of which
clinical engineering or risk management is unaware, arises after equipment disposal. Records
for equipment involved in patient incidents are usually sequestered by the risk manager so as to avoid possible tampering.
ELECTRICAL SAFETY
Ongoing Testing
Medical equipment is tested for electrical safety throughout
its lifetime. Baseline tests are run during acceptance testing.
Tests are also run during PM, following equipment repair,
upgrade, or patient incident. The measurements are recorded
and compared to previous readings. Changes indicate possible
electrical degradation that must be investigated to eliminate
electrical hazards before an incident can occur. Training of
equipment users in electrical safety concepts is also important.
Electrical Safety Analyzers. Electrical safety analyzers are
used to determine that electrical devices, ac receptacles, and
conductive surfaces meet required safety standards and are
safe for use. These solid-state instruments incorporate true
rms measurement capability. They allow testing of portable
medical equipment and fixed (hard-wired) installations. Internal circuitry [AAMI test load (14)] simulates the human
body's impedance to current flow. The measurements made
are representative of the leakage currents (if present), which
could flow through the body. Normal and reverse polarity
tests, as well as current source tests, are run (32).
Micro- and Macroshock
Electrical safety as related to medical instrumentation concerns itself with limiting the amount of electric current allowed to pass through the body to a few microamps. This limits the current density (current per unit area) to values below
a threshold that could affect or damage tissue and vital organs such as the heart and brain (33).
In a health-care setting patients are compromised when
their skin is punctured and catheters are inserted, or when
their skin is prepped (rubbed and cleaned with alcohol) prior
to the placement of electrodes, and where moist environments
exist. The electrical resistance of patients' bodies to current
flow is reduced from its normal range of 10,000 Ω to 100,000
Ω to a range of 1000 Ω to 10,000 Ω. Under normal conditions
110 V ac applied to the skin results in currents of 1 mA to 10
mA. Under these compromised conditions larger currents of
10 mA to 100 mA result.
Macroshock (current above 1 mA) can be hazardous when
delivered at the bodys surface. For example, 100 mA applied
at the skin could cause ventricular fibrillation. Microshock
(current below 1 mA) can be hazardous when delivered directly or close to heart tissue. For example, current in the
order of 0.1 mA may cause ventricular fibrillation. Currents
such as these that can injure the patient are usually too low
to affect the uncompromised equipment operator.
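The current ranges above follow directly from Ohm's law. The short sketch below uses the resistance figures and thresholds given in the text; the function names are invented for illustration.

```python
def body_current_ma(voltage_v, resistance_ohm):
    """Ohm's law estimate of current through the body, in milliamperes."""
    return voltage_v / resistance_ohm * 1000.0

# 110 V across normal skin resistance (10 kOhm to 100 kOhm) gives
# currents on the order of 1 mA to 10 mA, as stated in the text.
low = body_current_ma(110, 100_000)   # about 1.1 mA
high = body_current_ma(110, 10_000)   # about 11 mA

def shock_category(current_ma, at_heart=False):
    """Classify a current per the thresholds above: around 0.1 mA is
    hazardous delivered at the heart (microshock), while surface
    currents become hazardous above about 1 mA (macroshock)."""
    if at_heart:
        return "microshock hazard" if current_ma >= 0.1 else "below threshold"
    return "macroshock hazard" if current_ma > 1.0 else "below threshold"
```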
Ac Leakage Current
Ac leakage currents are found in electrical instruments other
than battery-operated direct-current (dc) devices. Leakage
currents are produced as a result of the ac signal coupling to
the chassis of the instrument due to capacitance effects. Such
currents flow from chassis to ground when a low-resistance
path is made available.
The ground wire within the equipment's three-wire line
cord provides a safe low-resistance path for the leakage current. It is for this very reason that two-wire line cords are
prohibited for hospital use. The ground wire is connected to
the chassis of the instrument on one end and to the ground
pin of the ac plug on the other. While this connection is intact
the leakage current is safely conducted away from the patient, as it flows from the chassis through the ground wire to
ground via the ac wall outlet. Should this path open, or present a high resistance from chassis to ground due to a loose wire connection in the plug or an improperly grounded ac outlet, the leakage current, seeking other pathways, could flow through the compromised patient. Leakage currents can also
flow between patient leads and ground due to poor lead isolation. The large number of medical devices that surround and
could route electrical current to the patient compounds the
problem.
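How a stray line-to-chassis capacitance produces a leakage current can be sketched as I = V · 2πfC; the capacitance value below is an illustrative assumption, not a measured figure for any device:

```python
import math

# Chassis leakage current from capacitive coupling between the ac line
# and the chassis: I = V * 2*pi*f*C.

def leakage_current_ua(v_rms: float, freq_hz: float, capacitance_f: float) -> float:
    """Leakage current in microamps through a stray capacitance."""
    return v_rms * 2.0 * math.pi * freq_hz * capacitance_f * 1e6

# e.g. an assumed 200 pF of line-to-chassis capacitance on a 120 V, 60 Hz line:
i_ua = leakage_current_ua(120.0, 60.0, 200e-12)
print(round(i_ua, 1))  # 9.0 (microamps), which the ground wire must carry away
```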
Manufacturers limit leakage current by (34):
Incorporating patient isolation circuitry utilizing isolation
amplifiers, optical coupling, and infrared transmission
techniques
Doubly insulating some devices with an outer nonconductive plastic housing so that even if touched, they cannot
conduct electricity
Using specially constructed low-leakage ac line cords
Incorporating isolation transformers into systems, which
have components whose total leakage current exceeds
safety standards
Hospital Grade Plugs and Outlets
Safety is also provided by use of heavy-duty hospital grade ac
plugs (with a green dot). These plugs are mechanically keyed
to prevent polarity reversal. Explosion-proof plugs previously
used due to the explosive nature of some anesthetic gases are
no longer prevalent. Prior to opening new clinical areas, in
addition to having the clinical gases certified, all ac outlets
should be tested with a tension tester to verify that the ac
outlets will tightly grip equipment plugs when inserted and
with an ac polarity checker to ensure that the wiring has been
properly done.
PROCUREMENT OF MEDICAL DEVICES
Reasons for Equipment Acquisition
Equipment is acquired by a health-care facility for a multitude of reasons, including the following:
Replacement of obsolete equipment that cannot be repaired as parts are no longer available or that is not
cost-effective to repair as a new unit would be comparable in price to the repair cost. Included is equipment
that breaks down frequently, resulting in lost patient
revenue to the institution. Such equipment replacement increases the hospital's cost-effectiveness and reduces its risk exposure.
Replacement of technologically obsolete equipment that is
not as precise as newer microprocessor equipment, to
improve diagnostic and therapeutic efficiency.
Introduction of new types of technologies, such as magnetic resonance imaging (MRI) and computed tomography (CT), to provide enhanced services.
CLINICAL ENGINEERING
Fee-for-Service. OEM service is also available on an as-needed basis (fee-for-service). Fees include travel time (to or from the institution), labor, parts and materials, or, using
printers as an example, a flat fee may be specified. Repair
and/or PM service can be provided. Service may be provided
either on-site (infusion pump) or at a remote depot or facility
(glucometer). A vendor-supplied repair estimate assists in determining if the repair is cost-effective. Fee-for-service may be
chosen for sophisticated repairs of equipment or when clinical
engineering staff cannot find the cause of a problem after a
reasonable troubleshooting time period has elapsed.
Third-Party Service Providers
Independent service organizations (ISO) tend to be less expensive than OEM. Service vendors should be selected based
on the quality and timeliness of past service. Service contracts and fee-for-service are available. Repair and/or PM service can be provided. It should be determined if parts other
than OEM will be used and whether the manufacturer might
void the warranty or negate product liability if a nonfactory
authorized service provider is used. The equipment has less
chance of getting factory upgrades and product recall retrofits.
Shared-Service Providers
Services can also be obtained from shared-service providers,
which can be for-profit or nonprofit. These organizations are
formed by health-care institutions usually located close together that do not have the resources necessary to maintain
an equipment management program on their own. Instead
they pool their resources and have a common entity provide
service to all of them. They share in the capital cost of setting
up such an entity and then pay for services in proportion to
their use (20). The logistical problems of providing such services must be overcome.
Some clinical engineering programs, after becoming successful within their own institution, expand and provide
shared services to neighboring institutions as well. As an example, Thomas Jefferson University Hospital in Philadelphia,
Pennsylvania, has a full-service in-house program as well as
a shared-service component.
Maintenance Insurance
This insurance protects against catastrophic failures by
smoothing out service cost. Service is done on an as-needed,
fee-for-service basis. The insurance company either pays the
vendor directly or reimburses the institution for the service.
Some programs pay clinical engineering personnel to handle in-house those repairs the department wishes to take on. Proper clinical engineering screening of service calls and good equipment management decisions can result in year-end rebates. However,
the paperwork in managing an insurance program often requires dedicating at least one full-time employee (FTE) to this
task. Maintenance insurance backup provides a reasonable
way for clinical engineering to start assuming equipment
maintenance duties in areas in which they may not as yet be
involved, such as radiology and clinical laboratories.
Preventive Maintenance (Scheduled Maintenance)
Purpose and Methodology. Scheduled maintenance ensures
that equipment previously acquired continues to function
that the equipment has been properly cleaned and/or sterilized before work on it is attempted. The infection control department has guidelines on cleaning prior to repair. Notation
must be made in the computerized maintenance management
system (CMMS) of equipment that is temporarily taken out
of service, its storage location, and whether PM must be done
while it is stored. The unit should be tagged indicating that
clinical engineering staff must inspect it prior to its being put
back into service. Clinical engineering's test equipment used to maintain and calibrate the medical instrumentation must also be periodically checked and calibrated. Certification against standards traceable to the National Bureau of Standards may be required.
A recent trend is to use laptop computers to collect test data on-site, which are then imported into the CMMS. Computer-compatible test equipment can also be used to somewhat automate the test process. Such systems are available from Bio-Tek and DNI Nevada Inc.
PM Risk Management. Risk factors are used to determine if
equipment requires scheduled PM, and if so, how often. This
allows health-care organizations to concentrate their resources on equipment presenting the greatest risk. All patient-care equipment is evaluated, independent of the manner
in which the institution acquired it.
During risk analysis, consideration is given to equipment
function, physical risk associated with clinical application,
and equipment maintenance requirements. A weighted numbering system is used and an appropriate threshold is set.
Clinical engineering experience (incident history and frequency of use) is used to modify the initial assessment as required (22). Some low-risk devices with no PM requirements
only require acceptance testing when first acquired, and a
zero PM frequency assigned. The following is one example of
assigning risk levels.
Equipment Function. This assessment considers how a device and its data are used and the possible consequences of
its failure. It is important whether a device is used for life
support, routine treatment, diagnosis, monitoring, or for minor functions.
Equipment function is weighted as follows (38):
Therapeutic
  Life support: 10
  Surgical and intensive care: 9
  Physical therapy and treatment: 8
Diagnostic
  Surgical and intensive care monitoring: 7
  Additional physiological monitoring and diagnostic: 6
Analytical
  Analytical laboratory: 5
  Laboratory accessories: 4
  Computer and related: 3
Miscellaneous
  Patient related and other: 2
Physical Risk. This assessment considers the possible consequences to the patient and/or operator in the event of an
equipment failure or malfunction.
[Physical risk and equipment maintenance requirements are each weighted on a scale from 5 down to 1.]
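The weighted scoring idea can be sketched as follows; the function weights are those tabulated above, while the physical-risk and maintenance weights and the inclusion threshold are illustrative assumptions rather than values taken from Ref. 38:

```python
# Sketch of risk-based PM selection: sum the weighted factors and compare
# against a threshold. Threshold and 1-5 factor weights are assumptions.

EQUIPMENT_FUNCTION_WEIGHTS = {
    "life support": 10,
    "surgical and intensive care": 9,
    "physical therapy and treatment": 8,
    "surgical and intensive care monitoring": 7,
    "additional physiological monitoring and diagnostic": 6,
    "analytical laboratory": 5,
    "laboratory accessories": 4,
    "computer and related": 3,
    "patient related and other": 2,
}

def risk_score(function: str, physical_risk: int, maintenance: int) -> int:
    """Total score = function weight + physical risk (1-5) + maintenance (1-5)."""
    return EQUIPMENT_FUNCTION_WEIGHTS[function] + physical_risk + maintenance

def needs_scheduled_pm(score: int, threshold: int = 12) -> bool:
    """Devices at or above the (assumed) threshold go on the PM schedule."""
    return score >= threshold

# A life-support device scores high and gets scheduled PM; a low-risk
# accessory may only need acceptance testing (zero PM frequency).
print(risk_score("life support", 5, 4))                              # 19
print(needs_scheduled_pm(risk_score("computer and related", 1, 1)))  # False
```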
specifications. This is accomplished by determining the malfunction and fixing it so as to retain the efficacy and safety of
the device (Fig. 11).
Prior to doing a repair, a determination should be made as to whether the device is under warranty or service contract. If so, the appropriate service provider should be contacted. If under contract, clinical engineering screening may be required to verify that a problem does exist and warrants a vendor service call. Determination should be made as to whether
it is cost-effective to repair the device or if it should be retired
from service (due to lack of parts availability or expense) and
a replacement purchased.
Depending upon severity, repairs can be done either in the clinical engineering laboratories or on-site in the user facility. In general, unless one has a good reason not to, original OEM parts should be used. During repair, built-in diagnostics are helpful, and the instrument operator and service manuals from the clinical engineering technical library, as well as the device's history file, prove invaluable.
On-site emergency support during the day allows the engineer to witness the problem first-hand. On-call/recall for emergencies during off-hours allows troubleshooting of instrumentation problems by phone. This, coupled with substitution of spare equipment, often eliminates the need to return to the institution.
Work reports are filled out in a similar manner as for acceptance testing and PM. Included should be the problem, steps taken to resolve the problem, parts used, and pertinent test data. These reports, keyed to the instrument's unique identification number, are filed in the instrument's history folder, and a suitable computerized data entry is made.
Parts and Service Manuals. A problem faced when repairing medical devices is that manufacturers are sometimes unwilling to provide them.
The goal of the analysis is to reduce the possibility of purchasing expensive and inappropriate equipment that cannot generate income for the institution. By assisting in this task, the clinical engineering staff's awareness of new and emerging technologies aids in more wisely allocating capital resources (3).
Computerized Maintenance Management Systems
Computerized maintenance management systems (CMMS)
software is a powerful technology management tool used to
collect, store, and analyze data (Fig. 12). To utilize such programs, some clinical engineering departments rely on their
institution's mainframe computer or a file server and local area network maintained by the information services department, while others maintain their own file server and local area network, taking on the responsibility for data backup and integrity. Some use internally developed CMMS software, while others purchase commercially available packages.
CMMS systems generate reports relating to all aspects of the
operation of a clinical engineering department, including
equipment management. Technology assessment software is
also available.
Functions of CMMS are as follows (30):
Maintain an equipment inventory and nomenclature
system
Select and schedule PM (based on risk factors)
Track work including repair, user in-service training, construction and research projects
Prioritize work load
Track equipment and vendor services provided under warranty or service contract
Track loaner and leased equipment
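Two of the CMMS functions listed above, holding an equipment inventory and scheduling PM, can be sketched as follows; the record fields, control-number format, and intervals are illustrative assumptions, not those of any commercial package:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Toy CMMS sketch: an inventory of devices with unique identification
# numbers, and selection of devices whose scheduled PM has come due.

@dataclass
class Device:
    control_number: str      # unique identification number
    description: str
    pm_interval_days: int    # 0 means no scheduled PM (acceptance test only)
    last_pm: date

def pm_due(devices, today):
    """Return control numbers of devices whose PM is due on or before today."""
    return [d.control_number for d in devices
            if d.pm_interval_days > 0
            and d.last_pm + timedelta(days=d.pm_interval_days) <= today]

inventory = [
    Device("CE-0001", "defibrillator", 180, date(2024, 1, 10)),
    Device("CE-0002", "otoscope", 0, date(2024, 1, 10)),   # zero PM frequency
    Device("CE-0003", "infusion pump", 365, date(2023, 6, 1)),
]
print(pm_due(inventory, date(2024, 8, 1)))  # ['CE-0001', 'CE-0003']
```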
meetings is required to educate them as to clinical engineering's vital role within the institution.
Remote Service
As health-care institutions merge and collaborate, and as additional satellite clinics are established to provide hospitals
with clients, clinical engineering is faced with the logistics of
providing services to remote locations.
Regulation
The FDA is considering extending good manufacturing practices (GMP) rules and regulations to medical equipment refurbishers, reconditioners, and servicers. This would require
them to meet requirements similar to those of original medical device manufacturers and remanufacturers. Clinical engineering departments as equipment servicers may be impacted (44).
Home Safety
Clinical engineers may have to become more involved in
safety issues related to the increased use of medical equipment for home care and how to provide such services. For
example, home dialysis requires preliminary inspection of the patient's home site to ensure adequate electricity and water, and then periodic visits for PM and repair.
Year 2000 (Y2K) Compliance
Healthcare devices and systems (information systems, medical equipment, and general hospital systems) that use software or contain microprocessors may be prone to the Year 2000 problem. If so, as the date changes from Dec. 31, 1999 to Jan. 1, 2000, they may incorrectly represent the year 2000 as 1900 (or some other date).
Some equipment might operate erroneously, others not at all.
Such failure could affect patient safety, produce incorrect
printouts and archiving, and increase risk to the institution.
Clinical engineering involvement and allocation of resources
are required to ensure equipment compliance (45).
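The failure mode can be sketched in a few lines: firmware that stores only two-digit years computes a nonsensical elapsed time at the century rollover, and a common remediation windows the two-digit year around a pivot. Both functions here are illustrative sketches, not code from any actual device:

```python
# Y2K failure mode: two-digit-year arithmetic jumps negative at the
# century rollover; a windowing pivot is one common remediation.

def elapsed_years_two_digit(start_yy: int, now_yy: int) -> int:
    """Naive two-digit-year arithmetic, as in pre-Y2K firmware."""
    return now_yy - start_yy

def elapsed_years_windowed(start_yy: int, now_yy: int, pivot: int = 70) -> int:
    """Window two-digit years around an assumed pivot year."""
    expand = lambda yy: 1900 + yy if yy >= pivot else 2000 + yy
    return expand(now_yy) - expand(start_yy)

# A PM interval started in '99 and checked in '00:
print(elapsed_years_two_digit(99, 0))  # -99: the device "thinks" time ran backward
print(elapsed_years_windowed(99, 0))   # 1
```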
Seeking New Opportunities
Clinical engineering departments must be flexible, adapting
to the times and conditions of the ever-changing health-care
institutions they serve. The feasibility of providing additional
services for X-ray and ionizing radiation equipment, computers, computer networks, patient information systems, telecommunications, and nurse call systems should be investigated. Although with proper training, clinical engineering
staff should be able to repair these items just as they repair
other sophisticated equipment falling within their domain, a
realistic approach must be taken with consideration given to
available resources (i.e., funds for training and FTE allocation), as well as the political realities of turf within their
particular institutions. Clinical engineers should also strive
to become more involved in technology assessment issues for
new technologies including telemedicine, robotics, PACS, and
wireless LAN, helping to determine the value of introducing
them into their institution.
Such flexibility will ensure that the relatively new profession of clinical engineering will mature and continue to provide value to the institutions it serves, as it moves forward
into the 21st century.
ACKNOWLEDGMENTS
Photographs were taken by Ernest Cuni, Biomedical Communication, SUNY HSCB, 1998.
I wish to thank all members of SMIC's staff, both past and present, with whom I have worked over the past 20 years. It is through our daily interaction that the concepts presented have been better defined. In particular, thanks to John Czap, Luis Cornejo, Leonard Klebanov, and Marcia Wilkow. Also, M. K. Venugopal, who understood and valued SMIC's services.
Thanks also to Barbara Donohue and Kelly Galanopoulos who
offered specific suggestions to enrich the content.
BIBLIOGRAPHY
1. Phone conversation between Dr. Caceres and the author, November 1997.
2. Anonymous, What's a Clinical Engineer?, Pamphlet, Houston, TX: ACCE.
3. Anonymous, Special report on technology management: preparing
your hospital for the 1990s, Health Technol. 3 (1): Plymouth Meeting, PA: ECRI, Spring 1989.
4. M. J. Shaffer, The reengineering of clinical engineering, Biomed. Instrum. Technol., 31 (2): 177–178, 1997.
5. T. J. Bauld, The definition of a clinical engineer, J. Clin. Eng., 16 (5): 403–405, 1991.
6. Anonymous, Frequently asked questions: biomedical engineering,
Department of Biomedical Engineering, Tulane University,
http://www.bmen.tulane.edu/BMEFAQ/Welcome.html, 5/23/96.
7. S. Thanasas and Y. Dathatri, Course description of the Biomedical Engineering Technology Associates in Applied Science Degree, Farmingdale, NY: SUNY Farmingdale, January 1997.
8. Anonymous, Certification Program for Clinical Engineers, Virginia: International Certification Commission for Clinical Engineering and Biomedical Technology, February 1996.
9. S. Ben-Zvi and W. Gottlieb, Inspection and maintenance of medical instrumentation. In B. Feinberg (ed.), Handbook of Clinical Engineering, Boca Raton, FL: CRC Press, 1980, Vol. 1, pp. 57–78.
10. W. J. Curran, P. E. Stanley, and D. F. Phillips, Electrical Safety
and Hazards in Hospitals, New York: MSS Information Corporation, 1974.
11. J. M. R. Bruner and P. F. Leonard, Electricity, Safety and the
Patient. Chicago: Year Book Medical Publishers, 1989.
12. D. Roth, The History of Medical Equipment Service, Part I, 24x7,
2 (4): 24-27, 34, April 1997.
13. D. Roth, The History of Medical Equipment Service, Part II, 24x7,
2 (5): 32, 3435, May 1997.
14. American National Standard, Safe current limits for electromedical apparatus, ANSI/AAMI ES1-1993, Arlington, VA: AAMI, December 2, 1993.
15. B. Klein (ed.), Health Care Facilities Handbook. 4th ed., Quincy,
MA: NFPA, 1993.
16. E. B. Fein, N.Y.U. Center and Mt. Sinai resume talks, The New
York Times, B1, B6, September 24, 1997.
17. G. Gordon, Breakthrough Management, A New Model for Hospital
Technical Services. Arlington, VA: AAMI, 1995.
18. G. F. Nighswonger, 1997 survey of salaries and responsibilities for hospital biomedical/clinical engineering and technology personnel, J. Clin. Eng., 22 (4): 214–232, 1997.
19. M. Wilkow and I. Soller, The Scientific and Medical Instrumentation Center Brochure. Brooklyn, NY: SUNY Health Science Center at Brooklyn, University Hospital of Brooklyn, 1995.
474
20. Guideline for establishing and administering medical instrumentation maintenance programs, AAMI MIM3/84, Vol. 1: Biomedical Equipment, AAMI Standards and Recommended Practices.
Arlington, VA: AAMI, 1989, pp. 342.
21. L. C. Brush et al., The Guide to Biomedical Standards. 20th ed.,
Brea, CA: Quest Publishing Company, 1995/1996.
22. Inspections and enforcement, chapter 1510, JCAHO accreditation, BNA's Health Care Facilities Guide, No. 36, 1500:1001–1500:1106, The Bureau of National Affairs, 1997.
23. Y. David and T. M. Judd, Medical Technology Management, Redmond, WA: Space Labs Medical, 1993.
24. O. R. Keil, Accreditation and clinical engineering, J. Clin. Eng., 21 (6): 410, 412, 440, November/December, 1996.
25. M. Shepherd, SMDA 90: User facility requirements of the final medical device reporting regulation, J. Clin. Eng., 21 (2): 114–118, March/April, 1996.
26. Safe Medical Devices Act: Medical Device Tracking Regulations.
Plymouth Meeting, PA: ECRI Advisory, July 1997.
27. L. A. Geddes and L. E. Baker, Principles of Applied Biomedical
Instrumentation. 3rd ed., New York: Wiley, 1989.
28. Health Devices Sourcebook Medical Product Purchasing Directory
with Official Universal Medical Device Nomenclature System.
Plymouth Meeting, PA: ECRI, 1997.
29. Medical Device Register, The Official Directory of Medical Suppliers. Montvale, NJ: Medical Economics Data Production Company, 1994.
30. T. Cohen, Computerized Maintenance Management Systems for
Clinical Engineering. Arlington, VA: AAMI, 1994.
31. Conversation between C. Yeaton (SUNY University Hospital of
Brooklyn, Risk Manager) and the author, October 1997.
32. Digital Safety Analyzer Model 501 Operator's Manual, Rev. C. Burlington, VT: BIO-TEK Instruments, 1993.
33. S. A. Rubin, The Principles of Biomedical Instrumentation: A Beginner's Guide. Chicago: Year Book Medical Publishers, 1987.
34. R. Aston, Principles of Biomedical Instrumentation and Measurement. Columbus, OH: Merrill Publishing Company, 1990.
35. I. Soller and L. Klebanov, SMIC checklist for the purchase of scientific and medical instrumentation. Brooklyn, NY: SUNY Health
Science Center at Brooklyn, University Hospital of Brooklyn,
September, 1992.
36. S. Polaniecki, SMIC guide for the performance of initial checkout.
Brooklyn, NY: SUNY Health Science Center at Brooklyn, University Hospital of Brooklyn, August, 1989.
37. Siemens-BJC multivendor service breaks new ground with remote
diagnostics, 24x7, 2 (9): September 7, 1997.
38. L. Fennigkoh and B. Smith, Clinical Equipment Management.
Plant Technology and Safety Management Series, implementing
the 1989 PTSM standards: case studies, Number 2, 1989, The
Joint Commission.
39. O. R. Keil, Is preventive maintenance still a core element of clinical engineering?, Biomed. Instrum. Technol., 31 (4): 408–409, 1997.
40. K. Beckley, SpaceLabs explains service stance, 24x7, 2 (11): November 7, 1997.
41. Recommended practices for a medical equipment management
program, final draft standard, AAMI EQ56-1997, June 1997, Arlington, VA: AAMI.
42. Clinical Benchmarking, Oak Brook, IL: University Healthcare
Consortium, 1993.
43. M. S. Gordon, The long and short of benchmarking, Biomed. Technol., 6 (3): 20–26, May/June, 1995.
44. FDA eyes GMP regs for refurbishers and servicers, 24x7, 2 (9): September 7, 1997.
45. Medical devices and the year 2000 problem, Health Devices Alerts
Number 1998-F2, Plymouth Meeting, PA: ECRI, February 27,
1998.
IRA SOLLER
State University of New York
Health Science Center at
Brooklyn
NEUROTECHNOLOGY
more or less on/off way of operation, which causes fast fatiguing of the muscle. More complicated everyday functions will
fascicles/muscle units, which allows finely tuned motion and
does not cause fatigue. Besides highly developed, multisite
contacting technology, sophisticated closed-loop control is necessary for those functions, as well as the help of mechanical
and other nonelectrical prosthetic aids. Research on all aspects is in full swing but will take many years to reach the
clinical application level.
Nonmotor Systems
Figure 2. The volume conduction model of the nerve and its surroundings, with a current source inside the fascicle. Longitudinal and radial conductivities inside the fascicle are σz and σr, respectively. Perineural sheath conductivity is σs, epineural conductivity σo, and extraneural conductivity σe.
f_n = (Ve,n−1 − 2Ve,n + Ve,n+1) / (Δx)²   (1)
If an electrode is sufficiently close to a node of Ranvier, compared to the internodal distance Δx, the two terms Ve,n−1 and Ve,n+1 may be set to zero.
This is the local approach.
The activating function sets the external potential condition but does not take into account ionic currents through the
membrane ion channels, which can be modeled by the famous
HodgkinHuxley equations and their refined forms. Because
of this, the activating function approach is only valid for short rectangular stimulus current pulses, in the range of 10 μs to 100 μs duration. Also, the well-known relationship at the
threshold of stimulation between amplitude and duration of
the stimulus (strength-duration threshold curve) is not contained in the activating function.
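The activating-function idea can be sketched numerically as the second difference of the extracellular node potentials (after Refs. 1 and 2); the point-source geometry, conductivity, and node spacing below are illustrative assumptions:

```python
import math

# Activating-function sketch for a myelinated fiber: the second difference
# of the extracellular node potentials predicts where a short rectangular
# pulse depolarizes the axon. All geometry/source values are assumed.

def node_potentials(i_source, sigma, z_elec, dx, n_nodes):
    """External potential of a point source in a homogeneous isotropic
    medium, sampled at nodes of Ranvier spaced dx apart."""
    x0 = (n_nodes // 2) * dx  # electrode placed over the middle node
    return [i_source / (4.0 * math.pi * sigma *
                        math.hypot(n * dx - x0, z_elec))
            for n in range(n_nodes)]

def activating_function(ve, dx):
    """f_n = (Ve[n-1] - 2 Ve[n] + Ve[n+1]) / dx^2 (interior nodes only)."""
    return [(ve[n - 1] - 2.0 * ve[n] + ve[n + 1]) / dx**2
            for n in range(1, len(ve) - 1)]

ve = node_potentials(i_source=-1e-3, sigma=0.3, z_elec=0.5e-3,
                     dx=1.0e-3, n_nodes=9)
f = activating_function(ve, dx=1.0e-3)
# For a cathodic (negative) source, f is largest (depolarizing) under the
# electrode, flanked by negative (hyperpolarizing) side lobes.
print(max(f) == f[len(f) // 2])  # True
```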
The effect of pulse duration has been taken into account recently by Warman et al. (3), Nagarajan and Durand (4), Grill and Mortimer (5), and others. It was demonstrated that
it may be a tool to influence spatial selectivity of stimulation.
The metal electrode itself, with its interface to the fluid
environment (Helmholtz layer, Warburg impedance, Faradaic
current), is not dealt with here but is an important part of
the stimulation system.
[Figure: network model of a myelinated axon. Nodes of Ranvier, separated by myelin sheath along the axon, each have membrane resistance Rm, membrane capacitance Cm, and resting potential Vr across the cell membrane, and are connected by intracellular resistances Ri; extracellular node potentials are Ve,n−1, Ve,n, and Ve,n+1, and intracellular potentials are Vi,n−1, Vi,n, and Vi,n+1.]
Ves(x, y, z) = I / [4π √(σr σz) √((x − r)² + y² + (σr/σz) z²)]   (2)
and a boundary term Veb, which is an expansion of Bessel functions. Similarly, Vse(x, y, z) follows from Eq. (2).
Electrode configurations may be monopolar, bipolar, tripolar, and so on. Combinations of anodes and cathodes may
yield some field-steering capability, although at the expense
of higher stimulus currents (6,7).
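The unbounded-medium source term of such a model can be sketched as the standard potential of a point current source in a homogeneous anisotropic medium; this sketch omits the boundary term Veb, places the source at the origin, and uses illustrative values throughout:

```python
import math

# Potential of a point current source in an unbounded medium with radial
# conductivity sigma_r (x, y directions) and longitudinal conductivity
# sigma_z (fiber direction z). Boundary effects are not included.

def v_point_aniso(i_src, sigma_r, sigma_z, x, y, z):
    # Anisotropy rescales the longitudinal coordinate inside the distance.
    r_eff = math.sqrt(x * x + y * y + (sigma_r / sigma_z) * z * z)
    return i_src / (4.0 * math.pi * math.sqrt(sigma_r * sigma_z) * r_eff)

# In the isotropic limit this reduces to the familiar I / (4 pi sigma r):
v_iso = v_point_aniso(1e-3, 0.5, 0.5, 1e-3, 0.0, 0.0)
print(abs(v_iso - 1e-3 / (4 * math.pi * 0.5 * 1e-3)) < 1e-12)  # True
```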
While the cylindrical idealization of the nerve or fascicle permits the analytical solution of Laplace's equation, as summarized previously, the more general case of a nerve volume conductor with many irregular, inhomogeneous, anisotropic fascicular cross sections inside calls for finite-difference modeling of the tissue (8,9).
SELECTIVITY OF STIMULATION AND EFFICIENCY OF A
STIMULATION DEVICE
At low current, an electrode can stimulate one fiber if its position is close to that fiber, compared to other fibers. Increase
of current will expand the stimulation volume, thus including
more and more fibers.
The ultimate selectivity would be reached if each fiber had its own electrode. This would require, however, both a blueprint of the positions of fibers in the nerve, so that electrodes could be positioned close to a node of Ranvier, and enough electrodes. In practice, no blueprint is available, and
microfabrication has technological limits. Therefore, with a
limited number of electrodes, placed optimally (in a statistical
sense), it is important to consider and test how selective stimulation can be.
In this respect one has to measure the extent to which each electrode controls as few fibers as possible at low current, before its potential field starts to overlap with those of other electrodes as the current increases. Greater overlap means lower selectivity.
From another point of view, one might define the efficiency
of a multielectrode device: the number of distinct fibers that
can be contacted, divided by the total number of electrodes.
Greater overlap means reduced efficiency.
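The efficiency bookkeeping described above can be sketched directly; the recruitment sets below are invented for illustration:

```python
# Device efficiency: distinct fibers contacted divided by the number of
# electrodes. Fibers recruited by several electrodes (overlap) count once,
# so greater overlap lowers the efficiency.

def device_efficiency(recruited_by_electrode):
    """recruited_by_electrode: one set of fiber ids per electrode."""
    distinct = set().union(*recruited_by_electrode)
    return len(distinct) / len(recruited_by_electrode)

# Four electrodes; electrodes 3 and 4 overlap on fiber 6, 2 and 3 on fiber 5:
recruited = [{1, 2}, {3, 4, 5}, {5, 6}, {6, 7}]
print(device_efficiency(recruited))  # 7 distinct fibers / 4 electrodes = 1.75
```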
Fiber selectivity has been addressed in Rutten et al. (10),
among others. It was concluded, on statistical grounds and by
overlap experiments, that an electrode separation of 128 μm was optimal for a rat peroneal nerve fascicle with 350 alpha motor fibers.
Limited force recruitment experiments with a 2-D 24-electrode array (electrode separation 120 μm) (11) yielded that 10
[Figure: probability (0.00 to 1.00) plotted against the conductivity of the extraneural tissue (0.00 to 1.00, in 1/Ωm), for curves S = 1.1, S = 1.5, and S = 2.]
Figure 6. (a) Schematic representation of an intelligent neural interface implanted into an intersected nerve. (From Ref. 43, their Fig. 1.) (b) Schematic drawing of the silicone chamber model with the inserted silicon chip bridging a 4 mm gap between the proximal and distal stumps of a transected rat sciatic nerve. (From Ref. 42, their Fig. 3.) (c) SEM photograph view of a fabricated chip with 100 μm diameter holes. (From Ref. 42, their Fig. 2.) (d) SEM photograph of nerve tissue sections distal to a chip with hole diameters of 100 μm after 16 weeks of regeneration. Shown is a minifascicular pattern on the distal surface of the chip. The regenerated nerve structure has a smaller diameter than that of the perforated area of the chip. The circumferential perineurial-like cell layer is clearly visible. (From Ref. 42, their Fig. 5, top.)
Figure 7. (a) Low-density neuronal monolayer culture composed of 76 neurons growing over a matrix of 64 electrodes. The recording craters are spaced 40 μm laterally and 200 μm between rows. The transparent indium tin oxide conductors are 10 μm wide. Tissue is mouse spinal cord; culture age is 27 days in vitro; histology is Loots-modified Bodian stain. (From Ref. 60, their Fig. 2, p. 284.) (b) Cultured hippocampal neurons on patterned self-assembled monolayers. A hybrid substrate pattern of trimethyloxysilyl propyldiethylenetriamine (DETA) and perfluorated alkylsilane (13F) showing selective adhesion and excellent retention of the neurites to the DETA regions of the pattern. (From Ref. 6, their Fig. 4, p. 18.)
trunk or a nerve cell body (called soma) lying over an electrode site on a multielectrode substrate. This is studied by
modeling and measurement of electrode impedance as a function of cell coverage and adhesion (5456).
Except for neural network studies, cultured arrays may one day be used as cultured neuron probes. They may be implanted in living nerve tissue to serve as a hybrid interface between electronics and nerve. The advantage would be that the electrode-cell interface may be established and optimized
in the lab, while the nerve network after implantation may be
a realistic target for ingrowth of nerve (collaterals). Studies of
the feasibility of this approach are currently underway.
CHRONIC IMPLANTATION AND BIOCOMPATIBILITY
For future use in humans, chronic implantation behavior and
biocompatibility studies of microelectrode arrays will become
of crucial importance.
McCreery et al. (57) implanted single Ir microwire electrodes in cat cochlear nucleus and found tissue damage after
long stimulation, highly correlated to the amount of charge
per phase. The safe threshold was 3 nC/phase (while the
stimulus threshold was about 1 nC/phase). Lefurge et al. (32) implanted Teflon-coated Pt-Ir wires, 25 μm in diameter, intrafascicularly. They appeared to be tolerated well by cat nerve tissue for six months, causing little damage. The influence of silicon materials (a silicon microshaft array) on rabbit and cat cortical tissue was investigated by Edell et al. (58) and Schmidt et al. (59). While neuron density around the 40 μm shafts decreased, tissue response along the shafts was minimal over six months (58), except at the sharp tips.
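The safe-charge comparison above can be sketched as amplitude times pulse width for a rectangular pulse; the pulse parameters themselves are illustrative assumptions:

```python
# Charge per phase of a rectangular stimulus pulse, compared against the
# ~3 nC/phase damage threshold and ~1 nC/phase excitation threshold
# reported by McCreery et al. (Ref. 57).

SAFE_THRESHOLD_NC = 3.0    # tissue-damage threshold (Ref. 57)
STIM_THRESHOLD_NC = 1.0    # approximate excitation threshold (Ref. 57)

def charge_per_phase_nc(current_ua: float, width_us: float) -> float:
    """Charge per phase in nC for a rectangular pulse (uA * us = pC)."""
    return current_ua * width_us * 1e-3

q = charge_per_phase_nc(current_ua=20.0, width_us=100.0)  # 2.0 nC
print(STIM_THRESHOLD_NC < q < SAFE_THRESHOLD_NC)  # True: effective yet safe
```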
BIBLIOGRAPHY
1. D. McNeal, Analysis of a model for excitation of myelinated nerve, IEEE Trans. Biomed. Eng., 23: 329–337, 1976.
2. F. Rattay, Analysis of models for extracellular fiber stimulation, IEEE Trans. Biomed. Eng., 36: 676–683, 1989.
3. E. Warman, W. M. Grill, and D. Durand, Modeling the effect of electric fields on nerve fibers: Determination of excitation threshold, IEEE Trans. Biomed. Eng., 39: 1244–1254, 1992.
4. S. Nagarajan and D. Durand, Effects of induced electric fields on finite neuronal structures: A simulation study, IEEE Trans. Biomed. Eng., 40: 1175–1188, 1993.
5. W. M. Grill, Jr. and J. T. Mortimer, The effect of stimulus pulse duration on selectivity of neural stimulation, IEEE Trans. Biomed. Eng., 43: 161–166, 1996.
6. J. H. Meier et al., Simulation of multipolar fiber selective neural stimulation using intrafascicular electrodes, IEEE Trans. Biomed. Eng., 39: 122–134, 1992.
7. J. H. Meier, W. L. C. Rutten, and H. B. K. Boom, Force recruitment during electrical nerve stimulation with multipolar intrafascicular electrodes, Med. Biol. Eng. Comput., 33 (3): 409–417, 1995.
8. P. H. Veltink et al., A modeling study of nerve fascicle stimulation, IEEE Trans. Biomed. Eng., 36: 683–692, 1989.
9. E. V. Goodall et al., Modeling study of size selective activation of peripheral nerve fibers with a tripolar cuff electrode, IEEE Trans. Rehabil. Eng., 3: 272–282, 1995.
10. W. L. C. Rutten, H. van Wier, and J. M. H. Put, Sensitivity and selectivity of intraneural stimulation using a silicon electrode array, IEEE Trans. Biomed. Eng., 38: 192–198, 1991.
Reading List
I. A. Boyd and M. R. Davey, Composition of Peripheral Nerves, Edinburgh: Livingstone, 1968.
W. F. Agnew and D. B. McCreery (eds.), Neural Prostheses Fundamental Studies, Englewood Cliffs, NJ: Prentice-Hall, 1990.
J. Malmivuo and R. Plonsey, Bioelectromagnetism, Oxford, UK: Oxford Univ. Press, 1995.
WIM L. C. RUTTEN
University of Twente
∇ · (γ ∇φ) = 0   (2)
(which is the continuum equivalent of Ohm's law and Kirchhoff's law combined) subject to the Neumann boundary condition
γ ∂φ/∂n |_∂Ω = J   (3)
and the grounding condition ∫_∂Ω φ dS = 0.
Permittivity is measured when investigating low conductivity substances like air/oil/water mixtures in pipelines by
using capacitively coupled electrode systems. Resistive values
are typically found when low-frequency excitation is used
with directly coupled electrodes on conductive objects. Many
of the resistive substances also have a small reactance that
becomes measurable when high-frequency excitation is used.
PHYSICAL THEORY
Take a body Ω in three-dimensional space with spatial variable x = (x, y, z) and outward unit normal n. Suppose the body has possibly inhomogeneous isotropic conductivity σ(x), permittivity ε(x), and permeability μ(x). A time-harmonic current density J(x, t) = J(x)e^{jωt} with angular frequency ω is applied to the surface ∂Ω, and this results after some settling time in an electric field E(x, t) = E(x)e^{jωt} and a magnetic field H(x, t) = H(x)e^{jωt} in the body. Maxwell's equations then give us
∇ × E = −jωμH
∇ × H = (σ + jωε)E    (1)
∫_∂Ω J dS = 0 (conservation of current)
With these conditions, surface current density and surface potential are related by a linear operator, the transfer impedance operator: φ = R(γ)J (referred to in the mathematical literature as the Neumann-to-Dirichlet mapping), where γ = σ + jωε is the admittivity. The operator R(γ) represents a complete knowledge of boundary electrical data. In EIT we sample this operator using a system of electrodes to apply current and measure potential.
The first problem that needs to be addressed is the theoretical possibility of determining γ from R(γ). Specifically, the question "Does R(γ1) = R(γ2) imply γ1 = γ2?" has been answered in the affirmative under a variety of smoothness assumptions on the γi. For details of these results, including the case where the γi are complex, see Isakov (4). The closely related problem of recovering the resistance values of a planar resistor network by boundary current and voltage measurement has been investigated by Curtis and Morrow (5) and Colin de Verdière (6).
For the case where ωμ is not negligible, Ola and Somersalo (7) show that the electrical parameters σ, ε, and μ are uniquely determined by a complete knowledge of the boundary data n × E and n × H, provided ω is not a resonant frequency.
The problem of actually recovering the admittivity γ from noisy, sampled boundary data is difficult for two main reasons: the problem is nonlinear and ill posed. Notice that the potential φ depends on γ, so that Eq. (2) is a nonlinear equation in γ.
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright © 1999 John Wiley & Sons, Inc.
Figure 1. The simple example illustrates the typical sigmoid response of boundary impedance measurement to interior conductivity change. Let Ω be a cylinder of unit height and unit radius. Suppose that the current density on the surface, using cylindrical coordinates (ρ, θ, z), is J(1, θ, z) = cos θ and J(ρ, θ, ±1/2) = 0. Let us assume a cylindrical anomaly with radius r and conductivity

σ(ρ, θ, z) = s for 0 ≤ ρ ≤ r, and 1 for r < ρ ≤ 1    (4)

The plotted curve shows the transfer impedance Z(0.6, s) against log10(s).

RECONSTRUCTION ALGORITHMS
The sensitivity (Jacobian) matrix S of the boundary voltages with respect to the admittivity in each pixel has entries of the form

S_(i,j),k = −∫_Ωk ∇φi · ∇φj dV    (7)

where φi and φj are the potentials for the ith drive and jth measurement patterns and Ωk is the kth pixel. A regularized linear system with matrix S^T S + α²I is then solved for an update to the admittivity to produce an approximate image. This simple linear reconstruction algorithm is similar to the NOSER algorithm used by Rensselaer Polytechnic Institute (RPI) (8). As the inverse of S^T S + α²I can be precomputed assuming a suitable background conductivity, the algorithm is quite fast (quadratic in the number of measurements used). However,
as it is a linear approximation the admittivity contrast will
be underestimated and some detail lost (see Fig. 1). A fully
nonlinear algorithm can be implemented by recalculating the
Jacobian using the updated admittivity and solving the regularized linear system to produce successive updates to the
admittivity until the numerical model fits the measured data
to within measurement precision. This requires an accurate
forward model, including the shape of the domain (8,9) and
modeling of the electrode boundary conditions (10). The nonlinear algorithm is more computationally expensive as at each
iteration the voltages have to be recalculated and the linear
system solved.
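As a concrete illustration, the regularized linearized update step described above can be sketched as follows; the sensitivity matrix S, the voltage vectors, and the regularization parameter alpha are illustrative stand-ins for quantities that a real forward model would supply.

```python
import numpy as np

def linear_eit_update(S, v_meas, v_calc, alpha):
    """One regularized linearized (NOSER-style) update: solve
    (S^T S + alpha^2 I) d_gamma = S^T (v_meas - v_calc) for the
    admittivity update d_gamma."""
    m = S.shape[1]
    lhs = S.T @ S + alpha**2 * np.eye(m)
    rhs = S.T @ (v_meas - v_calc)
    return np.linalg.solve(lhs, rhs)

# Toy example: 2 boundary measurements, 2 image pixels.
S = np.array([[1.0, 0.2],
              [0.1, 0.8]])
v_meas = np.array([1.0, 0.5])
v_calc = np.array([0.9, 0.4])
d_gamma = linear_eit_update(S, v_meas, v_calc, alpha=0.1)
```

Because the left-hand matrix does not depend on the measured data, its inverse can be precomputed for a fixed background conductivity, which is what makes the linear algorithm fast.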
There is still debate about the ideal current patterns Ji to
drive. For a given constraint on the allowable current levels,
an optimal set of current drives can be calculated. In the case
where the total dissipated power is the active constraint, the
optimal currents are as described by Cheney (12). In the case
of a two-dimensional disk where the unknown conductivity is
rotationally symmetric, these are the trigonometric current
patterns I_ik = cos(iθ_k), where θ_k is the angular coordinate of the
kth drive electrode and 1 ≤ i ≤ l/2 (similarly for sine). If the
active constraint is the total injected current, only pairs of
electrodes should be driven (13); on the other hand, if the
maximum current on each electrode is the only constraint,
then all electrodes should be driven with positive or negative
currents (Walsh functions). In medical applications the belief
that limiting the dissipated power is the most important
safety criterion has led to the design of systems with multiple
current drives.
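For illustration, the trigonometric patterns for l equally spaced electrodes can be generated as below; this is a sketch, and amplitude and normalization conventions differ between systems.

```python
import numpy as np

def trig_patterns(l):
    """Trigonometric current patterns I_ik = cos(i*theta_k) and the
    corresponding sine patterns for l equally spaced electrodes,
    1 <= i <= l/2. sin((l/2)*theta_k) vanishes at every electrode,
    so l - 1 independent patterns result."""
    theta = 2.0 * np.pi * np.arange(l) / l   # electrode angles theta_k
    patterns = []
    for i in range(1, l // 2 + 1):
        patterns.append(np.cos(i * theta))
        if i < l // 2:
            patterns.append(np.sin(i * theta))
    return np.array(patterns)

P = trig_patterns(8)   # 7 independent patterns on 8 electrodes
```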
TISSUE IMPEDANCE
Spectral information is of particular interest in medical applications, where it can improve tissue characterization. A variety of electrical models of tissue have been proposed to explain the variation of impedance with frequency, the most
widely used being the Cole plot. The tissue model in Fig. 3
would give rise to the Cole plot in Fig. 4. The difference between the model and experimental findings is explained by
assuming that the capacitive element has a complex reactance given by K(jω)^−α, where α = 1 would be a standard capacitor, but in tissue α ≈ 0.8 typically. A different interpre-
Figure 3. Simplest tissue impedance model where Z(m) is the cell
membrane capacitance, S is the intracellular impedance, and Z(dc) is
extracellular impedance.
Figure 4. Locus of impedance versus frequency for the simple tissue model: negative imaginary impedance plotted against real impedance, with high- and low-frequency intercepts Z∞ and Z(dc) and a depressed center of the impedance locus.
tation of tissue impedance is that the capacitance is distributed, and this may give a similarly depressed Cole plot. A
comprehensive treatment is given in a review by Rigaud (14).
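The depressed locus can be reproduced numerically from the Cole equation; the parameter values below are purely illustrative.

```python
import numpy as np

def cole_impedance(omega, r0, rinf, tau, alpha):
    """Cole model: Z(omega) = Rinf + (R0 - Rinf) / (1 + (j*omega*tau)**alpha).
    alpha = 1 corresponds to an ideal capacitor (semicircular locus);
    alpha ~ 0.8, as typically found in tissue, depresses the center
    of the impedance locus below the real axis."""
    return rinf + (r0 - rinf) / (1.0 + (1j * omega * tau) ** alpha)

omega = np.logspace(-2, 6, 200)          # angular frequency, rad/s
Z = cole_impedance(omega, r0=100.0, rinf=20.0, tau=1e-3, alpha=0.8)
```

Plotting −Z.imag against Z.real traces out the depressed Cole arc between the dc resistance R0 and the high-frequency resistance Rinf.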
ELECTRONIC SYSTEMS
In any application, the impedance contrast in the region, the
size of the smallest distinguishable object, and the rate of
change of any impedance in the whole region set the measurement parameters for the electronic and computation tasks.
The majority of systems apply current and measure voltage, and, as the analysis shows, accurate current sources are
required in multiple-source systems. Single-current drive systems only require measurement of the current, and no adjustments will be required as long as the current stays constant
for the duration of a measurement set. This will be achieved
if the output impedance is much larger than the changes in
the impedance of the region being imaged. Multiple-drive systems may include current measurement and subsequent adjustment to generate a particular current field, but greater
operating speed is possible with deterministic current
sources.
High-output impedance current sources have been adapted
or developed for EIT in a number of ways. The RPI group
uses digitally adjustable negative resistance and negative capacitance circuits on each current source to obtain 64 MΩ
(15).
The Oxford Brookes University group uses modified Howland sources to obtain 300 kΩ at 160 kHz (Fig. 5). A calibration system is used to measure the output characteristics and
compensate the set current level for each source, thereby
allowing very precise currents to be set for as long as the
calibration coefficients remain constant.
Alternating currents (ac) are used for several reasons: dc
applied to the skin causes electrolytic action under the electrode and may result in ulcers. The maximum ac current that
may safely be applied is frequency dependent, with the maximum safe current increasing linearly with frequency between
1 kHz and 100 kHz. Since voltage data accuracy improves
with larger applied currents, this implies that the use of
higher frequencies may result in more accurate data. However, the accuracy of the data acquisition system may become
degraded at higher frequencies due to the effects of parasitic
capacitances. The most popular frequency range is from 10
measurement circuits are used or if less than ideal measurement patterns are used.
This approach is applicable to both the pair-drive excitation method and multiple drive, since both require the establishment of multiple current patterns and concurrent voltage
measurement for a full data set for the reconstruction of a
single image. With 32 drive electrodes, 31 independent adjacent pair-drive combinations are possible; 31 independent
trigonometric or optimal patterns can be set by multiple-drive systems.
APPLICATION AREAS
Medical Imaging
Figure 5. Supply-current-sensing current source, modified to stabilize the dc level at the output; loop gain A_L = A_0 R_I / R_L.
CHRISTOPHER N. MCLEOD
WILLIAM R. B. LIONHEART
Oxford Brookes University
J. Webster (ed.), Wiley Encyclopedia of Electrical and Electronics Engineering. Copyright 2007 John Wiley & Sons, Inc.
SIMULATION IN PHYSIOLOGY
Figure 2. A control system with a feedback loop that can be interrupted, thus permitting the estimation of the so-called open-loop gain.
1. In order to keep a process under control, information must be received about the actual status of that
process, that is, the comparator must get feedback
about the outcome of its regulatory action. Feedback
circuits result in either a negative or a positive action;
in the terminology of physiology these effects are referred to as inhibitory and excitatory, respectively.
2. The on/off switch clearly introduces nonlinearities
in the behavior of the system. Such discontinuities
are not uncommon in biology. Sometimes a linearized
approach is selected for simplicity, which still yields
useful results.
3. The regulatory capabilities of a control system depend on the wiring (i.e., how the various elements
are connected), and the gain. When the feedback is
interrupted (a so-called open-loop state) during an experiment or due to a particular disease, the performance may be different from that under the normal
closed-loop condition (Fig. 2). Apart from how powerful a control system is (gain), one may also consider
the time course of a change imposed on the system.
This is generally done by calculating a time constant
that indicates the typical rate by which the system is
regulated.
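The effect of the open-loop gain on regulation can be illustrated with a small numerical example; the numbers are hypothetical.

```python
def residual_error(disturbance, open_loop_gain):
    """Steady-state behavior of a negative-feedback regulator: a
    disturbance is attenuated by a factor 1/(1 + open-loop gain).
    A high gain means tight regulation; an interrupted loop
    (gain 0) leaves the disturbance completely uncorrected."""
    return disturbance / (1.0 + open_loop_gain)

closed = residual_error(10.0, 4.0)   # closed loop: 2.0 units remain
opened = residual_error(10.0, 0.0)   # open loop: all 10.0 units remain
```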
MODELING IN BIOLOGY
Mathematical and graphical models are convenient because they aid in organizing the pattern of thinking and
thus facilitate communication among colleagues. Also, they
are relatively inexpensive and often applicable to the description and analysis of complex systems. In addition, they
potentially enhance insight by providing suggestions about
clues to critical experiments and validation of methods.
In the mathematical approach an analytical expression
is formulated that indicates how one variable depends on
other parameters. The equation may be precise, based on
a formal derivation, or may be a convenient approximation for the sake of simplicity. A graphical representation
may be multidimensional, with linear or (semi-)logarithmic scales.
A convergent circuitry [Fig. 3(b)], which has an additive effect with possible threshold features
The delicate interplay between macroscopic and microscopic structures such as muscle fibers, bones, nerves, and tendons permits the maintenance of posture as well as the
control of movements of humans and most animals. In contrast, some creatures such as spiders use no muscles to extend their legs. Instead, they extend them hydraulically,
and the blood pressures involved may exceed 450 mmHg.
For vertebrates, the so-called motor unit forms the basic entity for skeletal muscle action (Fig. 4). Such a unit
consists of a motor neuron (located in the spinal cord) plus
all muscle fibers that it serves; physiologically this implies that a distinct group of muscle fibers contracts simultaneously once a single motor neuron fires (i.e., generates an action potential). A skeletal muscle consists of a multitude of such parallel fiber groups, and total force development depends on the number of simultaneously active motor units. At the level of muscle fibers we also observe the phenomenon of variability, called jitter, which refers to the variation in the time interval between consecutive discharges of muscle action potentials of two fibers from the same motor unit.
Biomechanics is the branch of biophysics that is concerned with the effects of forces on the motion of a living organism or one or more of its parts, such as a leg.
Forces, torques, fulcrum, and lever action form the basic
ingredients of biomechanics. The center of mass (CM) of
the standing human body is centrally located in the abdomen at the level of the navel. The CM obviously shifts,
for example, when carrying a load with one arm. Performing dynamic exercise causes the body to become unstable.
Therefore, sports such as running, cycling, and skating imply the management of instability.
Because a complete description of a real biological system in terms of its movements is often extremely complex,
on the skin (transdermal route) is slow but has the advantage of being long-acting and rather constant. In particular,
in iontophoresis a drug is delivered transdermally by using
an electric field to enhance the transport of small, poorly
absorbed ionic drugs across the skin surface with the advantage that only a low dosage of the drug is required (16).
All these considerations make clear that it is useful to develop models that incorporate the various (anatomical and
physiological) compartments that are spatial and chemical
in nature in order to predict the concentration and time
pattern at a target site dependent on the location of introduction of the substance, as well as the particular time
sequence (e.g., bolus versus repeated doses) of administration.
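A minimal example of such a compartmental prediction is the classical one-compartment model with first-order absorption (the Bateman equation); all parameter values below are illustrative, not drawn from the text.

```python
import numpy as np

def bateman(t, dose, ka, ke, v):
    """Plasma concentration for a one-compartment model with first-order
    absorption rate ka, first-order elimination rate ke, and apparent
    distribution volume v."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.0, 24.0, 241)                      # hours
c = bateman(t, dose=100.0, ka=1.0, ke=0.1, v=10.0)   # concentration curve
```

The curve rises during absorption, peaks, and then decays at the elimination rate; a repeated-dose schedule corresponds to superposing time-shifted copies of this response.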
VISION RESEARCH AND EYE MOVEMENT
Figure 4. Schematic representation of a motor unit, consisting of a motor neuron plus all connected muscle fibers and intermediate structures such as collaterals of the efferent nerve fiber and neuromuscular synapses that functionally connect the nerve fiber endings with the corresponding muscle fibers.
many researchers employ reduced models that nevertheless provide enough insight into the problem under investigation. A simple model of a control system encountered
in locomotion will be presented. In general, the current
contraction state of a particular muscle is reported to the
motor center (feedback loop), but the anticipated action is
also considered (feedforward control). Furthermore, information is routed to the antagonistic muscle, enabling a
smooth and balanced movement pattern. A defective coordination of movement (loosely referred to as clumsiness)
actually covers a spectrum of abnormalities. Thus, there
may be a number of potential defects in the control systems
that would involve substantial consequences. For example,
dysdiadochokinesis is a defect in which an individual lacks
the ability to perform rapid movements of both hands in
unison.
CLOSED LOOP DRUG DELIVERY
The field of pharmacokinetics concerns the analysis of factors that affect absorption, distribution, and elimination of
drugs in the body. In contrast, pharmacodynamics refers
to interactions at the receptor site. Drugs may primarily
act locally (the so-called topical type such as eye drops or
inhaled aerosols) or they may more or less simultaneously
(intentionally or not) influence many organs in the whole
body (i.e., systemic type). Obviously, temporal as well as
spatial considerations are relevant in pharmacokinetics.
The route of administration mainly determines where a
pharmacological substance exerts its primary action. Because of the frequently occurring side effects of almost all
drugs, it is also important to gather information about the
distribution and uptake of a drug at a site that is not the
primary target. Apart from the site of administration, it is
important to have some estimate about the optimal speed
of delivery at the desired location. Injection into the bloodstream is fast and potentially transient, while application
exchange. Expiration implies transport in the opposite direction, from the lungs towards the final tube (the windpipe or trachea). Expansion of the lungs is normally realized by muscular activity of both the diaphragm and the
intercostal muscles (19).
The dynamics of respiration are commonly described in
terms of a pressure-volume relationship (to study restrictive diseases such as interstitial fibrosis and pulmonary edema) and derived quantities such as flow (Q, to study obstructive diseases such as lung emphysema) and compliance (the ratio of volume changes resulting from variations in pressure). In contrast to emphysema, asthma is a
reversible obstructive airway disease, because it is caused
by an increase in smooth muscle tone in the large bronchi.
Figure 5 illustrates the nature of these abnormalities in a
lumped parameter model, consisting of the thoracic wall (T), an overall spherical elastic element with alveolar pressure (Palv) inside, atmospheric pressure (Patm), pleural pressure (Ppl), total airway resistance (Raw), and C, the usual point of collapse of the airways acting as a Starling resistor (i.e., a collapsible tube affected by the pressure of its surroundings). It can be derived that
is expelled from the ventricles similar to the optimal direction for squeezing out the contents of a tube of toothpaste,
that is, running from the very bottom up towards the opening at the opposite side.
The rhythm of the healthy heart is not constant but exhibits a certain degree of heart rate variability (HRV) (20). Many studies have employed advanced mathematical techniques to analyze the
cardiac rhythm and estimate the relative contribution of
the sympathetic and parasympathetic drives, respectively
(21, 22). But the perpetual change of the rhythm has also
profound mechanical consequences: An increase in cycle
length implies facilitation of ejection both by increased filling (i.e., elevated preload) and reduced opposing pressure
at the time of the valve opening (i.e., lower afterload), while
during the next beat with a shorter interval the opposite
applies. In other words, impeded and facilitated beats appear to alternate, thus possibly improving stability of the
complete circulatory system.
The left ventricle has often been modeled as a sphere
or prolate ellipsoid, but neither geometry conforms with
reality. Independent of geometrical assumptions, Beringer
and Kerkhof (23) have shown that a fairly linear relationship exists between end-systolic volume (ESV) and end-diastolic volume (EDV). This notion has been verified in human patients and also in their experimental investigations when studying volume regulation in physiologically operating, chronically instrumented dogs (24). The
regression coefficients appear to be characteristic of ventricular volume regulation and are sensitive to inotropic
intervention (i.e., adrenergic agonists and blockers). This
relationship implies that a clinically important cardiac performance indicator, namely ejection fraction (EF), is inversely related to ESV (25). Furthermore, using the Suga model, myocardial oxygen consumption can potentially be predicted from a single noninvasive determination of ESV.
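The inverse relation between EF and ESV follows directly from a linear ESV-EDV regression; the regression coefficients below are hypothetical, chosen only to illustrate the algebra.

```python
import numpy as np

# Hypothetical linear regression ESV = a + b * EDV (coefficients illustrative).
a, b = -5.0, 0.5
edv = np.linspace(60.0, 200.0, 50)     # end-diastolic volume, mL
esv = a + b * edv                      # end-systolic volume, mL
ef = (edv - esv) / edv                 # ejection fraction = 1 - ESV/EDV
```

Along the regression line EF = (1 - b) - a/EDV, so as ESV grows, EF falls monotonically, reproducing the inverse EF-ESV relation noted in the text.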
Quite similar to the description of pulmonary dynamics,
the dynamics of heart and vessels are also characterized by
pressure-volume (P-V) relationships. Flow (Q) equals the time derivative of a changing volume. Another frequently employed derived index is elastance (E), the reciprocal of compliance.
The values of the measured properties of many physiological systems seem random. This view originates from the
tools that have been used when analyzing the details of organs. Nowadays there are new methods to analyze seemingly random experimental data, such as a time-series approach. These methods use many of the properties and
ideas of fractal analysis. The mathematician Mandelbrot
chose the word fractal to denote objects or processes with
multiple-scale properties, that is, the ever finer subdivisions of objects or processes as they are viewed at progressively higher magnification. Until recently, our ability to
understand many physiological systems was hampered by
our failure to appreciate their fractal properties and to interpret scale-free structures (29).
Wavelet analysis
Fourier analysis (i.e., decomposition of a signal into sinusoidal waveforms) is not suitable for signals with discontinuities. This type of computational headache can be successfully treated by wavelet (ondelettes) analysis, a powerful technique developed by Mallat and Meyer in France
(30). Later, Daubechies (at Princeton) discovered the dual
family, representing the high-frequency range and the
smooth parts (low frequencies). Wavelet analysis owes its
efficiency to the fast pyramid algorithm, which reduces the number of calculations by down-sampling operations, that is,
steps that remove every other sample at each operation
(halving the data each time). The method has vital applications in compression of signals and images, in addition to
noise reduction (a common problem in biomedical recording, in particular in magnetic resonance imaging). The current popularity of wavelet analysis may be exemplified by the commercially available software "Wavelets for Kids" developed at Duke University.
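One level of the pyramid algorithm is easy to sketch with the Haar wavelet: filter, then down-sample by two, halving the data while splitting it into smooth and detail parts.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar fast wavelet transform: pairwise sums give
    the smooth (low-frequency) part and pairwise differences the detail
    (high-frequency) part, each half the length of the input."""
    x = np.asarray(x, dtype=float)
    smooth = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return smooth, detail

s, d = haar_step([4.0, 2.0, 5.0, 5.0])
```

Repeating the step on the smooth part yields the full pyramid; discarding small detail coefficients is the basis of the compression and noise-reduction applications mentioned above.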
Nonlinear analysis
Heart rate (HR) and blood pressure (BP) are controlled
by several central nervous system oscillators and different control loops (see Fig. 7). Interactions among these
units may induce irregular time courses in the processes
they govern, but the underlying subprocesses also include
deterministic behavior. These irregular time courses can
be more accurately characterized by dynamic nonlinear
analysis rather than by linear time-series methods. Typically, one-dimensional time series data are transformed into multidimensional phase-space plots, thus filling selected regions.
Two essential aspects of such plots include:
1. The correlation dimension, which is a measure of the
complexity of the process studied, that is, the distribution of points in the phase space
2. The Lyapunov exponent, a measure of the predictability of the process, quantifying the exponential divergence of initially close state-space trajectories.
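The phase-space reconstruction underlying both quantities is usually done with delay coordinates; a minimal sketch, with an illustrative signal:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Map a one-dimensional time series onto m-dimensional phase-space
    points (x_t, x_{t+tau}, ..., x_{t+(m-1)tau}); correlation-dimension
    and Lyapunov-exponent estimates operate on such point clouds."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

series = np.sin(np.linspace(0.0, 20.0, 200))   # illustrative signal
pts = delay_embed(series, m=2, tau=10)         # 2-D phase-space points
```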
Figure 7. Gain of the various arterial pressure control mechanisms as a function of time after the onset of a disturbance.
(Reproduced from A. C. Guyton, Human Physiology and Mechanisms of Disease, 5th ed., Philadelphia: W.B. Saunders, 1992, with permission.)
ENDOCRINOLOGY
Cells communicate by means of electrical coupling (e.g., direct contact interaction or neurocontrollers) and chemical interaction (i.e., endocrine control, often for the long-distance type). These communication networks regulate vital processes such as growth, differentiation, and
metabolism of tissues and organs in all living systems. The
nervous system and the endocrine system are linked by the
integrating function of the hypothalamus, a specific region
of the brain.
The specific endocrine contribution to regulation forms
the subject of this section. It involves signal generation, propagation, recognition, signal transduction, and response to a particular stimulus. In endocrine signaling, the
pertinent cells release substances called hormones. The
main endocrine glands are the hypothalamus, the anterior
and posterior pituitary, the thyroid, parathyroid, adrenal
cortex and medulla, pancreas, and the gonads (i.e., ovaries
or testes). Hormones can be chemically classied into three
groups: steroid hormones (e.g., estrogen and progesterone),
peptide or protein hormones (such as insulin, prolactin,
and the various releasing hormones), and a category derived from the amino acid tyrosine (i.e., thyroxine and triiodothyronine). These substances are usually transported
via the bloodstream. Therefore they are distributed within
the whole body. Lifetime in plasma may range from seconds to days. In order to ensure a selective action at a
special site, target cells exhibit sensitivity for a particular hormone, and only in these cells will an appropriate response be induced. The ability of target cells to exclusively respond to specific hormones depends on the presence on the membrane of receptor proteins that are unique
for a particular hormone. The action may take place on the
cell surface (with or without an intracellular mediator
BIBLIOGRAPHY
1. B. W. Hyndman, R. I. Kitney, and B. M. Sayers, Spontaneous rhythms in physiological control systems, Nature, 233: 339-341, 1971.
2. J. A. Palazzolo, F. G. Estafanous, and P. A. Murray, Entropy measures of heart rate variation in conscious dogs, Am. J. Physiol., 274: H1099-H1105, 1998.
3. V. Shusterman, K. P. Anderson, and O. Barnea, Spontaneous skin temperature oscillations in normal human subjects, Am. J. Physiol., 42: R1173-R1181, 1997.
4. K. B. Campbell et al., Left ventricular pressure response to small-amplitude sinusoidal volume changes in isolated rabbit heart, Am. J. Physiol., 273: H2044-H2061, 1997.
5. P. Schiereck and H. B. K. Boom, Left ventricular force-velocity relations measured from quick volume changes, Pflügers Arch., 379: 251-258, 1979.
6. R. L. Kirby et al., Coupling of cardiac and locomotor rhythms, J. Appl. Physiol., 66: 323-329, 1989.
7. M. A. Radermecker et al., Biomechanical characteristics of unconditioned and conditioned latissimus dorsi muscles used for cardiocirculatory assistance, Cardiovasc. Surg., 5: 516-525, 1997.
8. C. Bernard, An Introduction to the Study of Experimental
Medicine, New York: Dover, 1957, p. 135.
das Sehen
mit zwei Augen, Kiel: Schwersche-Buchhandlung, 1858.
18. G. K. Hung, Quantitative analysis of the accommodative convergence to accommodation ratio: Linear and nonlinear static models, IEEE Trans. Biomed. Eng., 44: 306-316, 1997.
19. D. L. Fry and R. E. Hyatt, Pulmonary mechanics, Am. J. Med., 29: 672-689, 1960.
20. M. L. Appel et al., Beat to beat variability in cardiovascular variables: Noise or music? J. Am. Coll. Cardiol., 14: 1139-1148, 1989.
21. M. Malik and A. J. Camm (eds.), Heart Rate Variability, Armonk, NY: Futura, 1995.
22. M. Pagani et al., Power spectral analysis of heart rate and arterial pressure variabilities as a marker of sympathovagal interaction in man and conscious dog, Circ. Res., 59: 178-193, 1986.
23. J. Y. Beringer and P. L. M. Kerkhof, A unifying representation of ventricular volumetric indexes, IEEE Trans. Biomed. Eng., 45: 365-371, 1998.
24. P. L. M. Kerkhof, Beat-to-beat analysis of high-fidelity signals obtained from the left ventricle and aorta in chronically instrumented dogs, Automedica, 7: 83-90, 1986.
25. P. L. M. Kerkhof, Importance of end-systolic volume for the evaluation of cardiac pump performance, in E. I. Chazov, V. N. Smirnov, and R. G. Oganov (eds.), Cardiology, An International Perspective, New York: Plenum, 1984, pp. 1339-1352.
26. D. Hayoz et al., Spontaneous diameter oscillations of the radial artery in humans, Am. J. Physiol., 264: H2080-H2084, 1993.
27. C. M. Quick et al., Unstable radii in muscular blood vessels, Am. J. Physiol., 271: H2669-H2676, 1996.
28. A. C. Guyton, Human Physiology and Mechanisms of Disease, 5th ed., Philadelphia: Saunders, 1992.
29. J. B. Bassingthwaighte, L. S. Liebovitch, and B. J. West, Fractal Physiology, New York: Oxford University Press, 1994.
30. A. Bruce et al., Wavelet analysis, IEEE Spectrum, 33 (10): 26-35, 1996.
READING LIST
31. F. W. Campbell, J. G. Robson, and G. Westheimer, Fluctuations of accommodation under steady viewing conditions, J. Physiol., 145: 579-594, 1959.
32. T. Coleman, Mathematical analysis of cardiovascular function, IEEE Trans. Biomed. Eng., 32: 289-294, 1985.
33. H. Collewijn, F. van der Mark, and T. C. Jansen, Precise recording of human eye movements, Vision Res., 15: 447-450, 1975.
34. J. Darnell, H. Lodish, and D. Baltimore, Molecular Cell Biology, New York: Scientific American Books, 1986.
35. P. H. Forsham (ed.), The Ciba Collection of Medical Illustrations, Vol. 4: Endocrine System, Summit, NJ: CIBA/Novartis, 1981.
36. P. A. Insel, Adrenergic receptors: Evolving concepts and clinical implications, N. Engl. J. Med., 334: 580-585, 1996.
37. J. Jalife (ed.), Mathematical approaches to cardiac arrhythmias, Ann. NY Acad. Sci., 591, 1990.
38. A. T. Johnson, Biomechanics and Exercise Physiology, New York: Wiley, 1991.
39. E. R. Kandel, J. H. Schwartz, and T. M. Jessell (eds.), Essentials of Neural Science and Behavior, Norwalk, CT: Appleton and Lange, 1995.
40. P. L. M. Kerkhof and J. Y. Kresh, Cardiovascular function analysis, in D. N. Ghista (ed.), Clinical Cardiac Assessment, Interventions and Assist Technology, Basel: Karger, 1990, Vol. 7, pp. 1-25.
41. M. Levy, The cardiac and vascular factors that determine systemic blood flow, Circ. Res., 44: 739-747, 1979.
42. E. Maddox, Investigations in the relation between convergence and accommodation of the eyes, J. Anat. Physiol., 20: 475-508, 1886.
43. V. Z. Marmarelis, Advanced Methods of Physiological System Modeling, New York: Plenum, 1994, Vol. III.
44. J. F. Martin, A. M. Schneider, and N. T. Smith, Multiple-model adaptive control of blood pressure using sodium nitroprusside, IEEE Trans. Biomed. Eng., 34: 603-611, 1987.
45. T. A. McMahon, Muscles, Reflexes, and Locomotion, Princeton, NJ: Princeton University Press, 1984.
46. S. Permutt and R. Riley, Hemodynamics of collapsible vessels with tone: The vascular waterfall, J. Appl. Physiol., 18: 924-932, 1963.
47. D. A. Robinson, The oculomotor control system: A review, Proc. IEEE, 56: 1032-1049, 1968.
48. L. B. Rowell and J. T. Shepherd, Handbook of Physiology, New York: Oxford University Press, 1996, Sec. 12.
49. T. C. Ruch and H. D. Patton (eds.), Physiology and Biophysics, Philadelphia: Saunders, 1965.
50. A. Vander, J. Sherman, and D. Luciano (eds.), Human Physiology, 5th ed., New York: McGraw-Hill, 1990.
51. P. Welling, Pharmacokinetics: Processes and Mathematics, Washington, DC: American Chemical Society, 1986.
52. M. W. Whittle, Gait Analysis, Oxford: Butterworth-Heinemann, 1991.
REVIEWS
53. E. J. Crampin, M. Halstead, P. Hunter, P. Nielsen, D. Noble, N. Smith, and M. Tawhai, Computational physiology and the physiome project, Exp. Physiol., 89: 1-26, 2004.
54. T. S. Deisboeck and J. Y. Kresh (eds.), Complex Systems Science in Biomedicine, Springer, 2006.
55. A. A. Cuellar, C. M. Lloyd, P. F. Nielsen, D. P. Bullivant, D. P. Nickerson, and P. J. Hunter, An overview of CellML 1.1, a biological model description language, Simulation, 79: 740-747, 2003.
56. G. Goel, I-C. Chou, and E. O. Voit, Biological systems modeling and analysis: A biomolecular technique of the twenty-first century, J. Biomol. Techn., 17: 252-269, 2006.
Relevant URLs:
Physiologic modeling: http://nsr.bioeng.washington.edu/
Physiome: http://www.physiome.org/Models/ and http://www.ngi-nz.co.nz/applications/physiome.html
TELECOMMUNICATION EXCHANGES
HISTORICAL BACKGROUND
A great step from a functional point of view was the introduction of radio transmission for cellular coverage and
wireless access to/from mobile terminals. This allows a call
to be directed to a mobile terminal carried by a person
rather than just to a fixed place (terminating a wire or fiber). Technically it was not new to reuse the radio spectrum by dividing a geographical area into regions, in this
case called cells, but connecting the cells covered by base
stations to the switched telecommunication network was
new and created many new challenges and opportunities.
Other steps in this direction are: digital subscriber lines;
new call services (often for redirection of calls) sometimes
substituting what a manual operator previously could give
help with; introduction of personal numbers that in conjunction with mobility services can help to make it easier
to reach a specific person rather than a phone terminal and
also can increase competition among operators if the personal phone number becomes a property of a person rather
than an operator; the merger of telecommunication and
data communication networks that enables new multimedia communication services.
A recent such network convergence, one that will simplify the evolution of streamed and real-time multimedia services, is the sharing of an increasingly common infrastructure for fixed and mobile voice, data, and video services. In this new setting some of the functions of a telecommunication exchange become obsolete, and other, more dedicated nodes become more important. For example, circuit switch functions may be replaced by packet forwarding, while new forms of access control, authorization, authentication, address location, and charging services will become more important. This development, along with international standardization, is also believed to increase the competition between different equipment suppliers.
An exchange can include support for almost all functions in a telephone network. However, one can also distinguish specialized network nodes. Early examples of these specialized nodes are local exchanges and transit exchanges. Today one can also distinguish other types of nodes, such as network access nodes; switch, call, and service control nodes; mobile switching nodes; and network database nodes (for example, databases for number translations, handling of subscriber service profiles, location information supporting the mobility of subscribers, authentication/authorization data, and equipment data). Other nodes are different types of information service, media content, e-trade transaction, and access right handling servers.
can also be more or less distributed and placed in more specialized networks and nodes. With well-defined functional areas and interfaces between these, it is possible to configure networks and network nodes in many ways. However, for a node or cluster of nodes to be regarded as an exchange, it must have some basic call access, control, switching, and connection handling capability.
Routing and Switching Techniques
The task of finding a path from source to destination is called routing. The task of following such a route from source to destination, guided by end-to-end address information, is called route selection or forwarding. Each node that participates in this task does not know the whole path, but must be able to analyze the destination address, find out in which direction the path should go, and find such a path with a trunk that is free to use.
Forwarding information (a bit, a voice sample, a message, a cell, a packet, or a frame) from a fixed or mobile input terminal (address, channel, line, or trunk) to a selected output terminal (address, channel, line, or trunk) of an exchange, where the route selection or forwarding is controlled by local address information (based on the selected path), is called switching.
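The digit analysis behind route selection can be pictured as a longest-prefix lookup against a table of destination prefixes. The sketch below is purely illustrative; the prefixes and route names are invented, not taken from any real numbering plan:

```python
# Illustrative sketch of route selection by digit analysis: the dialed
# digits are matched against destination prefixes, and the longest match
# decides the outgoing route. Prefixes and route names are invented.

ROUTE_TABLE = {
    "46": "route-international",
    "468": "route-stockholm-transit",
    "4685": "route-local-exchange-5",
}

def select_route(dialed_digits: str) -> str:
    """Return the route for the longest matching prefix, or raise."""
    for length in range(len(dialed_digits), 0, -1):
        route = ROUTE_TABLE.get(dialed_digits[:length])
        if route is not None:
            return route
    raise ValueError("no route for " + dialed_digits)

print(select_route("46851234"))  # longest match is "4685"
```

In a real exchange the routing data also records which trunks on each route are currently free, so digit analysis is combined with the hunt for a free trunk mentioned above.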
The telephone exchange will use network topology or routing information to prepare a connection. Hence, routing precedes the establishment of a full circuit-switched connection and is done only once, while switching is done
Network Standards
The classic telecommunications network available all over the world is the public switched telephone network (PSTN). It is basically designed to allow the transmission of speech between two or more users, together with services related to that. However, this network is also used for facsimile traffic and data traffic via modems. Examples of services are alarm calls, abbreviated dialing, call forwarding, and three-party calls.
Integrated services digital network (ISDN) is an evolution of PSTN that gives the subscribers access to integrated or combined services. ISDN integrates different telecommunication services into the same network, which transports voice and data in digital form between network access points. The main advantage of the evolution from analog to digital end-to-end communication is safer and more flexible transfer of information. ISDN provides a wide range of services divided into bearer services and teleservices. ISDN is based on the digital telephony network using ordinary two-wire subscriber lines, 24- or 32-channel PCM link structures, and Signaling System No. 7. Integrated access implies that the user has access to both voice and non-voice services through a single subscriber line, whereas combined access implies the use of several subscriber lines. Services include voice, facsimile, and computer connections.
There are two types of user-network accesses defined by ITU-T:
the radio base stations and the calls within the PLMN
and calls to and from other telephony and data communication networks such as PSTN and ISDN.
The home location register (HLR) contains subscriber information, such as which supplementary services are activated and in which MSC area the subscriber is currently located.
The visitor location register (VLR) is a database with information about the locations of the mobile stations in the area controlled by the MSC. The VLR also fetches information from the HLR so that the call setup can be performed without using the HLR each time.
The media gateway (MGW) is a switching/routing/transferring node in the UMTS transport network, used to facilitate communication between RNCs, between RNCs and the core network nodes, and between RNCs and O&M nodes.
The base station controller (BSC), called radio network controller (RNC) in UMTS, coordinates and controls a number of radio resources, usually located in base stations, and some interwork functions, such as handover between the cells covered by these base stations.
The Radio Base Station (RBS) is responsible for radio
transmission/reception in one or more cells to/from
the User Equipment (UE).
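The division of labor between the HLR and the VLR described above can be sketched as a cache lookup: the HLR is consulted once to learn the subscriber's current MSC area, after which the VLR answers locally. The subscriber number, record layout, and MSC names below are invented for illustration:

```python
# Hypothetical sketch of HLR/VLR cooperation at call setup. The VLR acts
# as a per-MSC cache in front of the HLR, so repeated call setups toward
# the same subscriber do not burden the HLR. All data is invented.

HLR = {"+46701234567": {"services": {"call-forwarding"}, "msc_area": "MSC-2"}}
VLR = {}  # cache kept by one MSC, keyed by subscriber number

def locate(subscriber: str) -> str:
    """Return the MSC area, querying the HLR only on a VLR miss."""
    if subscriber not in VLR:
        VLR[subscriber] = HLR[subscriber]["msc_area"]  # one-time HLR fetch
    return VLR[subscriber]

print(locate("+46701234567"))  # first call consults the HLR
print(locate("+46701234567"))  # later calls are answered from the VLR
```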
The open systems interconnection (OSI) reference model is a standardized layered model of how computer systems can be interconnected and interoperate; it has influenced the way signaling networks and protocols are viewed and built. The model defines seven layers: application, presentation, session, transport, network, link, and physical.
The telecommunication information networking architecture (TINA) is an international collaboration for defining an open architecture for telecommunication systems. It focuses on the software architecture. To some extent this effort can be seen as an attempt to combine other standardization efforts, such as OSI, IN, and TMN, from a software architecture point of view.
THE FUNCTIONS OF AN EXCHANGE
The telecommunications exchange is a multi-application digital switching product that offers its main services to its subscribers, but also services to the operator of the exchange. One example of a service offered to the subscribers is telephony calls; a service offered to the operator is the ability to charge for such services by registering charging data in the exchange.
A telecommunications network offers various services to the users and the operator. ITU-T has divided these services into two main categories:
Voice Mail. This service offers the subscribers the possibility to forward calls to a central location in the network
routing algorithms that dynamically choose a link in order to minimize the congestion in the network. These dynamic routing algorithms can be either local or central; the local algorithms work on data available in their own exchange, such as previous success rates on different link choices, while the central algorithms collect input data from other exchanges in the network.
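A local dynamic routing rule of the kind just described can be sketched as follows; the link names and attempt counts are invented, and real algorithms would also age the statistics over time:

```python
# Sketch of a local dynamic routing decision: the exchange keeps per-link
# success statistics from its own recent call attempts and prefers the
# link with the best observed success rate. Names and counts are invented.

stats = {  # link -> (successful attempts, total attempts)
    "link-A": (90, 100),
    "link-B": (40, 80),
    "link-C": (55, 60),
}

def best_link() -> str:
    """Pick the link with the highest locally observed success rate."""
    return max(stats, key=lambda link: stats[link][0] / stats[link][1])

print(best_link())  # link-C, with 55/60 = 0.917 (approximately)
```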
Connection. The connection is the through-connection of two normally 64 kb/s circuits, one in each direction, in the hardware devices, and particularly in the switch fabric. The connection is required to have a limited probability of blocking, from end to end. This means that the switch fabric must add very low blocking probabilities in order to fulfill the end-to-end requirements for calls that pass several transit exchanges. The connection also must be well synchronized with the rest of the exchange and with the rest of the network, in order to handle the digital speech connections properly.
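Blocking probability for a group of circuits is commonly quantified with the Erlang B formula, which the text above does not derive; the sketch below, offered as an illustrative aside, uses its standard recurrence B(E, 0) = 1 and B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)):

```python
# Erlang B blocking probability via the standard recurrence, where E is
# the offered traffic in erlangs and m the number of circuits:
#   B(E, 0) = 1
#   B(E, m) = E * B(E, m-1) / (m + E * B(E, m-1))

def erlang_b(erlangs: float, circuits: int) -> float:
    b = 1.0
    for m in range(1, circuits + 1):
        b = erlangs * b / (m + erlangs * b)
    return b

# 10 erlangs offered to a group of 15 circuits:
print(round(erlang_b(10.0, 15), 4))  # 0.0365, i.e. about 3.7% blocking
```

The recurrence shows why transit exchanges must keep their individual blocking contributions very low: the end-to-end blocking of a multi-exchange call accumulates the contributions of every fabric it traverses.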
Trunk Signaling. Trunk signaling enables a call to be connected between subscribers in separate exchanges. The basic data in all trunk signaling systems are alert messages that a call is to be connected or disconnected, along with routing information, mainly the relevant parts of the dialed digits. Modern signaling systems can transmit all types of data, for instance, in order to support detailed billing, advanced network services, and transparent user data.
Early signaling was done on the same line where the speech was transmitted, first by decadic pulses and later by tones of different frequencies. A still common such in-band signaling system is multi-frequency signaling, where
called party. Often the freephone service can be directed to various physical subscribers depending on time, date, and traffic.
Conference Call. More than two parties take part in
a call.
Transfer Services. The call is transferred to another
telephone immediately or when the called number is
busy or not replying.
Universal Personal Number. One phone number is used regardless of which mobile or fixed physical connection a person is using.
Call Completion Services. When the called party is no longer busy or not replying, the call is reinitiated.
Virtual Private Network. A group of subscribers, for
instance a corporation, form a private network with
their own charging and telephone numbers.
A modern telecommunications exchange should offer operation and maintenance functions that guarantee a high
quality of service to the operators and the subscribers. Operation is the normal everyday running of the exchange.
This includes activities to adapt the exchange to continuously changing demands. Examples of operational activities are:
Fault detection, testing, and repair of exchange hardware, for example, trunk lines or subscriber lines
Resource Modules
The resource modules typically handle and coordinate
the use of common resources and may contain both software and hardware. The most important part is the group
switch. Trunks and remote and central subscriber switches
(RSS and CSS respectively) are connected to the group
switch. The trunks are used to connect the switch to other
switches, to data networks, mobile base stations, etc. The subscriber switch handles the subscriber calls and concentrates the traffic (see Fig. 5).
Group Switch. The main function of the group switch
is selection, connection, and disconnection of concentrated
speech or signal paths. The group switch often has a general structure.
The overall control of the group switch is performed by
the central processor system. The regional processors take
care of simpler and more routine tasks, such as periodic
scanning of the hardware, whereas the central control system handles the more complex functions. Associated functions included in the group switching resource module are
network synchronization devices and devices to create multiparty calls.
Subscriber Switch. The subscriber switch handles selection and concentration of the subscriber lines; its main functions are as follows:
Usage
Provision/withdrawal of subscriber services and supplementary services
Central control. One or several processors that perform the non-routine, complex program control and data-handling tasks, such as execution of subscriber services, collection of statistics and charging data, and updating of exchange data and exchange configuration.
Regional control. A set of distributed processors that perform routine, simple, and repetitive tasks but
fixed size (53 bytes), divided into header (5 bytes) and payload (48 bytes). The header contains virtual path identifiers (VPIs) and virtual channel identifiers (VCIs), where a virtual path (VP) is a bundle of virtual channels (VCs). Traffic can be switched at the VP or VC level, cell by cell. Associated
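Unpacking the 5-byte cell header can be sketched as below; the code assumes the standard UNI header layout (4-bit GFC, 8-bit VPI, 16-bit VCI, then payload type, CLP, and HEC), and the example header bytes are invented:

```python
# Sketch of extracting VPI and VCI from an ATM cell header, assuming the
# UNI layout: 4-bit GFC, 8-bit VPI, 16-bit VCI, then PT/CLP and the HEC.

def parse_uni_header(header: bytes):
    """Return (vpi, vci) from the first four bytes of a UNI cell header."""
    vpi = ((header[0] & 0x0F) << 4) | (header[1] >> 4)
    vci = ((header[1] & 0x0F) << 12) | (header[2] << 4) | (header[3] >> 4)
    return vpi, vci

# An invented header carrying VPI = 5, VCI = 33:
print(parse_uni_header(bytes([0x00, 0x50, 0x02, 0x10, 0x00])))  # (5, 33)
```

Switching at the VP level means the switch examines and rewrites only the VPI, forwarding all virtual channels in the bundle together, while VC-level switching uses both identifiers.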
The following are the common system limits for downward scalability of an exchange:
Transfer Capacity. A third part of the switching system capacity is the data transfer from the exchange to other nodes, for example, to other exchanges and network databases, billing centers, statistical post-processing, and nodes for centralized operation. There has been a growing demand for signaling link capacity due to large STPs, for transfer capacity from the exchange to a billing center due to detailed billing and large amounts of real-time statistics, and for transfer capacity into the exchange due to the increased amount of memory to reload at exchange failure.
Dependability Risks. Although dependability has increased for digital exchanges, there is a limit to how large the nodes in the network can be built. First, the more hardware and software functions assembled in one exchange, the more errors there are. The vast majority of these faults will be handled by low-level recovery, transparent to the telecom function, or will only affect one process. However, a small fraction of the faults can result in a major outage that affects the entire exchange for some time.
As an example, assume that the risk of a one-hour complete exchange failure during a year is 1% for one exchange. If we add the functionality of an SCP to an HLR node, then we more than double the amount of software, and presumably the number of faults, in the node. The risk of a major outage will also be larger in a new exchange introducing new software with new faults. Only if the unavailability due to completely stopped traffic execution is much smaller than the total effect of process abortions and blocked hardware devices can we build exchanges of unlimited software complexity.
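The example above can be restated as a small arithmetic sketch; the 1% figure and the doubling assumption are the article's own illustration, not measured data:

```python
# Arithmetic sketch of the dependability example: a 1% yearly risk of a
# one-hour total outage, and a merged SCP+HLR node with roughly doubled
# software and therefore roughly doubled fault exposure.

single_node_risk = 0.01                  # P(one-hour outage in a year)
merged_node_risk = 2 * single_node_risk  # doubled software, doubled faults

expected_minutes_single = single_node_risk * 60
expected_minutes_merged = merged_node_risk * 60
print(expected_minutes_single)  # 0.6 expected outage minutes per year
print(expected_minutes_merged)  # 1.2 expected outage minutes per year
```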
The second reason for limited complexity from a dependability point of view is that network redundancy is required, and it can only be used if there are several transit exchanges in the network.
Life-Cycle Cost
Since the 1980s, the operating cost has become larger than the investment cost of an exchange. Thus, the emphasis on efficient operation and maintenance has increased, regarding both ease of use and the utilization of centers that remotely operate a number of exchanges that are not staffed. For ease of use, the telecommunication management network (TMN) was an attempt by the ITU to standardize the operator interface. After several years this standard is still not widely used. Instead, the operator interface is to a large extent dependent on the exchange manufacturer as well as on the requirements of the telecom operator company. Several open and proprietary user interfaces are common.
For central operation, more robust methods for remote activities have evolved. Software upgrades and corrections, alarm supervision and handling, collection of statistics and charging data, and handling and definition of subscriber data are all done remotely. Transmission uses a multitude of techniques and protocols. Open standard protocols have taken over from proprietary protocols.
In addition, important parts of the life-cycle cost are
(a) product handling for ordering and installation and (b)
spare part supply.
EVOLUTION TRENDS
Technology Trends
Due to the effects of semiconductor process scaling, improved chip fabrication yield, and increasing numbers of connectivity layers, the storage capability of memory and the execution speed of processors have doubled every 18 months during the last 40 years, following the so-called Moore's law. This exponential growth of transistors per chip will continue, but it will force new hardware architectures, such as chip multiprocessors and systems on chip, in order to keep energy use within reasonable limits. The development of optical fibers, including the introduction of wavelength multiplexing, is perhaps even faster. Thus there are many factors that lead to cheaper nodes and higher bit rates in the network. At the same time, digital coding and compression techniques have been improved, making it possible to transmit voice with traditional telecom quality using only a fraction of the bandwidth that is used today. These developments are changing the design of both nodes and networks.
It is also important to provide increased interoperability between network standards, since the end users do not want to be concerned about where a person is physically located or to which network the person is connected. The introduction of universal personal numbers can solve this and lead to a convergence of fixed and mobile telephony. The possibility of accessing short message services (SMS), fax, and e-mail via fixed and mobile devices, and so-called Internet telephony, are examples of services that illustrate the need for interoperability and convergence of telecommunication and data communication.
High bit rates at a low price, combined with the demand for real-time multimedia services, indicate that the network must either become more flexible or consist of several
TONY LARSSON
ALEXANDER KOTSINAS
BJORN KIHLBLOM
Halmstad University, Halmstad, Sweden
3 Scandinavia, Stockholm, Sweden
Ericsson Networks, Stockholm, Sweden