
More Calculus For Biologists: Partial Differential Equations

and Control Theory

We are all made of the same stuff, Dewey.

Jim Peterson
Department of Biological Sciences
Department of Mathematical Sciences
Clemson University
email: petersj@clemson.edu
© James K. Peterson, Version November 3, 2008
Gneural Gnome Press

Version 10.01.2009-1: Compiled October 6, 2009


Dedication

I dedicate this work to my students who have learned this material in its various preliminary
versions, the practicing biologists and immunologists who have helped an outsider think more
biologically and to my family who have listened to my ideas in the living room and over dinner
for many years. I hope that this new text helps to inspire all my students to consider mathematics
and computer science as indispensable tools in their own work in the biological sciences.

Abstract

This book tries to show beginning biology majors how mathematics, computer science and biology
can be usefully and pleasurably intertwined. This is a follow up text to our first set of notes on
mathematics for biologists, which is our companion volume Calculus for Biologists. In these
notes, we add the new mathematical tools of partial differentiation and a more complete study
of nonlinear differential equation models using these new tools. We then discuss carefully the
modeling of excitable neurons which leads to various types of partial differential equations. As
always, we use a few select models to illustrate how these three fields influence each other in
interesting and useful ways. We also stress our underlying motto: always take the modeling
results and go back to the scientists to make sure they retain relevance.

Acknowledgements

We would like to thank all the students who have used the various iterations of these notes as
they have evolved from handwritten to the fully typed version here. We particularly appreciate
your interest as this course is part of a new quantitative emphasis area in biological sciences.
It continues to use mathematics and computer tools; a combination that causes fear in many
biological science majors. We have been pleased by the enthusiasm you have brought to this
interesting combination of ideas from many disciplines. Currently, this new course is being taught
as an overload to small numbers of students. We appreciate them very much!
Finally, we gratefully acknowledge the support of Hap Wheeler in the Department of Biological
Sciences for believing that this course would be useful to his students.

History

Based On:
Research Notes: 1992 - 1998
Class Notes: MTHSC 982 Spring 1997
Research Notes: 1998 - 2000
Class Notes: MTHSC 860 Summer Session I 2000
Class Notes: MTHSC 450, MTHSC 827 Fall 2001
Research Notes: Fall 2007 and Spring 2008
Class Notes: MTHSC 450 Fall 2008
Class Notes: MTHSC 450 Fall 2009

Table Of Contents

Table Of Contents xi

I Introductory Matter 1

1 Introduction 3
1.1 Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 The Threads of Our Tapestry: . . . . . . . . . . . . . . . . . . 5
1.3 What Kinds of Abstraction Are Useful? . . . . . . . . . . . . 6

II Quantitative Tools I 7

2 Multivariable Calculus 9
2.1 Partial Derivatives . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1.1 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Gradients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.1 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.1 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.2 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 13

3 Linear Algebra Concepts 15


3.1 Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.1 The Zero Matrices . . . . . . . . . . . . . . . . . . . . 16
3.1.2 Square Matrices . . . . . . . . . . . . . . . . . . . . . 17
3.1.3 The Identity Matrices . . . . . . . . . . . . . . . . . . 18
3.1.4 The Transpose Of A Matrix . . . . . . . . . . . . . . . 18
3.1.5 Homework . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Matrix Operations . . . . . . . . . . . . . . . . . . . . . . . . 20


3.2.1 Homework . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.4 Vector Operations . . . . . . . . . . . . . . . . . . . . . . . . 23
3.5 Vector Magnitude . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.5.1 Some Matrix - Vector Calculations . . . . . . . . . . . 26
3.5.2 The Inner Product Of Two Column Vectors . . . . . . 27

4 Eigenvalues and Eigenvectors 33


4.1 Homework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2 The General Case . . . . . . . . . . . . . . . . . . . . . . . . . 40

5 Linear Systems Review 43


5.1 A Judicious Guess . . . . . . . . . . . . . . . . . . . . . . . . 44
5.1.1 Sample Characteristic Equation Derivations . . . . . . 50
5.1.2 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.2 Two Distinct Eigenvalues . . . . . . . . . . . . . . . . . . . . 53
5.2.1 Worked Out Solutions . . . . . . . . . . . . . . . . . . 58
5.2.2 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.3 Graphical Analysis . . . . . . . . . . . . . . . . . . . . . . . . 65
5.3.1 Graphing The NullClines . . . . . . . . . . . . . . . . 65
5.3.2 Graphing The Eigenvector Lines . . . . . . . . . . . . 65
5.3.3 Graphing Region I Trajectories . . . . . . . . . . . . . 67
5.3.4 Can Trajectories Cross? . . . . . . . . . . . . . . . . . 70
5.3.5 Graphing Region II Trajectories . . . . . . . . . . . . 73
5.3.6 Graphing Region III Trajectories . . . . . . . . . . . . 73
5.3.7 Graphing Region IV Trajectories . . . . . . . . . . . . 73
5.3.8 The Combined Trajectories . . . . . . . . . . . . . . . 75
5.3.9 Two Negative Eigenvalues . . . . . . . . . . . . . . . . 77
5.3.10 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.4 Repeated Eigenvalues . . . . . . . . . . . . . . . . . . . . . . 81
5.5 Complex Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . 81

6 Nonlinear ODEs 83
6.1 Predator Prey . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.1.1 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.1.2 Predator - Prey Trajectories . . . . . . . . . . . . . . . 86
6.1.3 Only Quadrant One Is Biologically Relevant . . . . . . 91
6.1.4 The Nonlinear Conservation Law . . . . . . . . . . . . 93
6.1.5 Can a Trajectory Hit the y axis Redux? . . . . . . . . 96
6.1.6 Qualitative Analysis . . . . . . . . . . . . . . . . . . . 98
6.1.7 The Predator - Prey Growth Functions . . . . . . . . 100


6.1.8 The Nonlinear Conservation Law Using f and g . . . . . . . . . . . . . 102
6.1.9 Each Trajectory Stays Bounded . . . . . . . . . . . . . 103
6.1.10 The Trajectory Must Be Periodic . . . . . . . . . . . . 108
6.1.11 The Average Value of a Predator - Prey Solution . . . 115
6.1.12 A Sample Predator - Prey Model . . . . . . . . . . . . 118
6.2 Predator - Prey With Fishing Rates . . . . . . . . . . . . . . 118
6.2.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.3 Predator - Prey With Self Interaction . . . . . . . . . . . . . 122
6.3.1 Trajectories Starting On the y Axis . . . . . . . . . . 124
6.3.2 Starting On the x Axis . . . . . . . . . . . . . . . . . . 126
6.3.3 The Nullclines Cross . . . . . . . . . . . . . . . . . . . 127
6.3.4 The Nullclines Don’t Cross . . . . . . . . . . . . . . . 131
6.3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.4 Linearizing Nonlinear Systems . . . . . . . . . . . . . . . . . . 135
6.4.1 Problems . . . . . . . . . . . . . . . . . . . . . . . . . 136

7 Numerical Eigenvalues/Eigenvectors 139


7.1 Symmetric Arrays: . . . . . . . . . . . . . . . . . . . . . . . . 140

8 ODE 143
8.1 Euler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.1.1 The Matlab Implementation: . . . . . . . . . . . . . . 143
8.1.2 The RunTime: Just Tables . . . . . . . . . . . . . . . 144
8.1.3 The RunTime: Plots! . . . . . . . . . . . . . . . . . . 146
8.2 Runge-Kutta . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.2.1 Estimating The Solution Numerically: . . . . . . . . . 148
8.2.2 The Matlab Implementation: . . . . . . . . . . . . . . 155
8.2.3 The RunTime: Just Tables . . . . . . . . . . . . . . . 156
8.2.4 The RunTime: Plots! . . . . . . . . . . . . . . . . . . 158
8.3 Solving Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.3.1 Setting Up the Vector Functions: . . . . . . . . . . . . 160
8.3.2 Updating Our Solver Codes: . . . . . . . . . . . . . . . 161
8.4 Predator-Prey . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
8.4.1 Updating Our Solver Codes: . . . . . . . . . . . . . . . 167
8.4.2 Estimating The Period T Numerically . . . . . . . . . 168
8.4.3 Updating Our Plotting Scripts . . . . . . . . . . . . . 170
8.4.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 172
8.5 Self Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . 175
8.5.1 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . 177


9 Nonlinear Models Numerically 181


9.1 Predator-Prey . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
9.2 Harder Nonlinear Model . . . . . . . . . . . . . . . . . . . . . 185
9.2.1 Local Analysis . . . . . . . . . . . . . . . . . . . . . . 186
9.2.2 Generating A Phase Plane Portrait . . . . . . . . . . . 190

III Basic BioPhysics and Cellular Modeling 193

10 Chemistry 195
10.1 Molecular Bonds: . . . . . . . . . . . . . . . . . . . . . . . . . 195
10.2 Bond Comparisons: . . . . . . . . . . . . . . . . . . . . . . . . 199
10.3 Energy Considerations: . . . . . . . . . . . . . . . . . . . . . . 199
10.4 Hydrocarbons: . . . . . . . . . . . . . . . . . . . . . . . . . . 202

11 Amino Acids 205


11.1 Peptide Bonds: . . . . . . . . . . . . . . . . . . . . . . . . . . 215
11.2 Chains of Amino Acids: . . . . . . . . . . . . . . . . . . . . . 216

12 Nucleic Acids 221


12.1 Sugars: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
12.2 Nucleotides: . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
12.3 Complementary Base Pairing: . . . . . . . . . . . . . . . . . . 225
12.4 A Quick Look at How Proteins Are Made: . . . . . . . . . . . 229

13 Ion Movement 233


13.1 Cell Membranes . . . . . . . . . . . . . . . . . . . . . . . . . . 233
13.2 Physics of Ion Movement . . . . . . . . . . . . . . . . . . . . 235
13.2.1 Ficke’s law of Diffusion: . . . . . . . . . . . . . . . . . 235
13.2.2 Ohm’s Law of Drift: . . . . . . . . . . . . . . . . . . . 236
13.2.3 Einstein’s Relation: . . . . . . . . . . . . . . . . . . . . 237
13.2.4 Space Charge Neutrality: . . . . . . . . . . . . . . . . 238
13.2.5 Ions, Volts and a Simple Cell: . . . . . . . . . . . . . . 238
13.3 Nernst-Planck Equation . . . . . . . . . . . . . . . . . . . . . 239
13.4 Nernst Equilibrium . . . . . . . . . . . . . . . . . . . . . . . . 241
13.4.1 An Example: . . . . . . . . . . . . . . . . . . . . . . . 242
13.5 One Ion Nernst Computations . . . . . . . . . . . . . . . . . . 244
13.5.1 Exercises: . . . . . . . . . . . . . . . . . . . . . . . . . 246

14 Signalling 247
14.1 Cell Before KCl Dissolves . . . . . . . . . . . . . . . . . . . . 247
14.2 Cell With K + Gates . . . . . . . . . . . . . . . . . . . . . . . 248


14.2.1 The Cell With Outer KCl Reduced: . . . . . . . . . . 248


14.3 Cell With NaCl . . . . . . . . . . . . . . . . . . . . . . . . . . 249
14.4 Cell With Na Gates . . . . . . . . . . . . . . . . . . . . . . . 250
14.5 Nernst Two Ions . . . . . . . . . . . . . . . . . . . . . . . . . 250
14.6 Nernst Multiple Ions . . . . . . . . . . . . . . . . . . . . . . . 252
14.7 Multiple Ions . . . . . . . . . . . . . . . . . . . . . . . . . . . 255

15 Transports 257
15.1 Transport Mechanisms: . . . . . . . . . . . . . . . . . . . . . 257
15.1.1 Ion Channels: . . . . . . . . . . . . . . . . . . . . . . . 258
15.1.2 Active Transport Using Pumps: . . . . . . . . . . . . . 262
15.1.3 A Simple Compartment Model: . . . . . . . . . . . . . 263

16 Ion Movement 269


16.1 Permeability . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
16.2 GHK Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
16.3 GHK Voltage Equation . . . . . . . . . . . . . . . . . . . . . 278
16.3.1 Examples: . . . . . . . . . . . . . . . . . . . . . . . . . 280
16.4 Electrogenic Pumps . . . . . . . . . . . . . . . . . . . . . . . 281
16.5 Excitable Cells: . . . . . . . . . . . . . . . . . . . . . . . . . . 284

17 Lumped Models 291


17.1 Modeling Radial Current: . . . . . . . . . . . . . . . . . . . . 292
17.2 Modeling Resistance: . . . . . . . . . . . . . . . . . . . . . . . 293
17.3 Longitudinal Properties: . . . . . . . . . . . . . . . . . . . . . 294
17.4 Thin Wall Currents . . . . . . . . . . . . . . . . . . . . . . . . 296

18 Cable Models 301


18.1 Core Model Assumptions . . . . . . . . . . . . . . . . . . . . 302
18.2 Building The Model . . . . . . . . . . . . . . . . . . . . . . . 304

19 Transient Cables 311


19.1 Deriving the Transient Cable Equation: . . . . . . . . . . . . 312
19.2 The Space and Time Constant of a Cable: . . . . . . . . . . . 314

20 Time Independent Cables 317


20.1 Infinite Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
20.2 Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
20.2.1 Solving the Homogeneous Equation: . . . . . . . . . . 319
20.2.2 Solving the Non-homogeneous Equation: . . . . . . . . 320
20.3 Injection Currents . . . . . . . . . . . . . . . . . . . . . . . . 323
20.4 Impulse Currents . . . . . . . . . . . . . . . . . . . . . . . . . 327


20.4.1 What Happens Away from 0? . . . . . . . . . . . . . . 328


20.4.2 What Happens at Zero? . . . . . . . . . . . . . . . . . 330
20.5 Ideal Impulses . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
20.6 Currents Solutions . . . . . . . . . . . . . . . . . . . . . . . . 333
20.7 Voltages Solutions . . . . . . . . . . . . . . . . . . . . . . . . 337
20.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
20.9 Normalized Solutions: . . . . . . . . . . . . . . . . . . . . . . 339
20.10 Matlab Fragments . . . . . . . . . . . . . . . . . . . . . . . . 341
20.10.1 Runtime Results: . . . . . . . . . . . . . . . . . . . . . 343
20.10.2 Exercises: . . . . . . . . . . . . . . . . . . . . . . . . . 344

IV Quantitative Tools II 347

21 Boundary Value Problems 349

22 Integral Transforms 351

23 Numerical Linear Algebra 353


23.1 Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 353
23.1.1 A Simple Lower Triangular System: . . . . . . . . . . 353
23.1.2 A Lower Triangular Solver: . . . . . . . . . . . . . . . 353
23.1.3 An Upper Triangular Solver: . . . . . . . . . . . . . . 355
23.1.4 The LU Decomposition of A Without Pivoting: . . . . 356
23.1.5 The LU Decomposition of A With Pivoting: . . . . . . 358

24 Root Finding and Optimization 361


24.1 Bisection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
24.1.1 The Bisection Matlab Code: . . . . . . . . . . . . . . . 361
24.1.2 Running the Code: . . . . . . . . . . . . . . . . . . . . 363
24.1.3 Exercises: . . . . . . . . . . . . . . . . . . . . . . . . . 364
24.2 Newton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
24.2.1 A Run Time Example: . . . . . . . . . . . . . . . . . . 366
24.2.2 Some Exercises: . . . . . . . . . . . . . . . . . . . . . . 368
24.2.3 Adding Finite Difference Approximations to the Derivative: . . . . . . 368
24.2.4 A Finite Difference Global Newton Method: . . . . . . 368
24.2.5 A Run Time Example: . . . . . . . . . . . . . . . . . . 370
24.2.6 Some Exercises: . . . . . . . . . . . . . . . . . . . . . . 371


V Finite Length Cables 373

25 Finite Cables 375


25.1 On Half Line . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
25.2 Finite Cable Starting Current . . . . . . . . . . . . . . . . . . 378
25.2.1 Parametric Studies: . . . . . . . . . . . . . . . . . . . 383
25.2.2 Some MatLab Implementations: . . . . . . . . . . . . 384
25.2.3 Run-Time Results: . . . . . . . . . . . . . . . . . . . . 386
25.2.4 Exercises: . . . . . . . . . . . . . . . . . . . . . . . . . 388
25.3 Finite Cable Starting Voltage . . . . . . . . . . . . . . . . . . 388
25.3.1 Exercises: . . . . . . . . . . . . . . . . . . . . . . . . . 391
25.4 Synaptic Currents: . . . . . . . . . . . . . . . . . . . . . . . . 391
25.4.1 A Single Impulse: . . . . . . . . . . . . . . . . . . . . . 393
25.4.2 Forcing Continuity in the Model: . . . . . . . . . . . . 395
25.4.3 The Limiting Solution: . . . . . . . . . . . . . . . . . . 396
25.4.4 Satisfying the Boundary Conditions: . . . . . . . . . . 398
25.4.5 Some Results: . . . . . . . . . . . . . . . . . . . . . . . 399
25.5 Implications: . . . . . . . . . . . . . . . . . . . . . . . . . . . 402

26 Neural Processing 405


26.1 Dendrite Model . . . . . . . . . . . . . . . . . . . . . . . . . . 407
26.2 Cable Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 411
26.2.1 Time Independent Solutions: . . . . . . . . . . . . . . 411
26.2.2 Time Dependent Solutions: . . . . . . . . . . . . . . . 413
26.3 Ball and Stick Model . . . . . . . . . . . . . . . . . . . . . . . 417
26.3.1 Applied Voltage Profile . . . . . . . . . . . . . . . . . 421
26.4 Ball and Stick Numerically . . . . . . . . . . . . . . . . . . . 426
26.4.1 Numerical Eigenvalues . . . . . . . . . . . . . . . . . . 426
26.4.2 Ball and Stick Matrix . . . . . . . . . . . . . . . . . . 431
26.4.3 Numerical Impulse Voltages . . . . . . . . . . . . . . . 433
26.5 Ball and Stick Project . . . . . . . . . . . . . . . . . . . . . . 443

27 Diffusion Equation 445


27.1 Particle Evolution . . . . . . . . . . . . . . . . . . . . . . . . 446
27.2 Random Walks . . . . . . . . . . . . . . . . . . . . . . . . . . 449
27.3 Right Moves . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
27.3.1 Finding The Average Of The Particles Distribution In Space And Time: . 451
27.3.2 Finding The Standard Deviation Of The Particles Distribution In Space And Time: . . . 453


27.3.3 Specializing To An Equal Probability Left And Right Random Walk: . . . 454
27.4 Macroscopic Scale . . . . . . . . . . . . . . . . . . . . . . . . 454
27.5 PDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
27.5.1 p and q Are Equal: . . . . . . . . . . . . . . . . . . . . 458
27.6 Particle Distribution . . . . . . . . . . . . . . . . . . . . . . . 458

28 Time Dependent Cable 461


28.1 Current Impulses . . . . . . . . . . . . . . . . . . . . . . . . . 461
28.1.1 Modeling The Current Pulses: . . . . . . . . . . . . . 462
28.1.2 Scaling the Cable Equation: . . . . . . . . . . . . . . . 463
28.1.3 Applying the Laplace Transform In Time: . . . . . . . 464
28.1.4 Applying the Fourier Transform In Space: . . . . . . . 465
28.1.5 The T Transform Of the Pulse: . . . . . . . . . . . . 467
28.1.6 The Idealized Impulse T Transform Solution: . . . . . 468
28.1.7 Inverting The T Transform Solution: . . . . . . . . . 468
28.1.8 A Few Computed Results: . . . . . . . . . . . . . . . . 471
28.1.9 Reinterpretation In Terms of Charge: . . . . . . . . . 471
28.2 Constant Currents . . . . . . . . . . . . . . . . . . . . . . . . 473

VI Hodgkin - Huxley Models 475

29 Hodgkin-Huxley Models 477


29.1 Voltage Clamped . . . . . . . . . . . . . . . . . . . . . . . . . 480
29.2 Hodgkin-Huxley Gates . . . . . . . . . . . . . . . . . . . . . . 481
29.2.1 Activation and Inactivation Variables: . . . . . . . . . 483
29.3 Na and K Gates . . . . . . . . . . . . . . . . . . . . . . . . . 484
29.4 Encoding The Dynamics: . . . . . . . . . . . . . . . . . . . . 486
29.5 Numerical Solution . . . . . . . . . . . . . . . . . . . . . . . . 489
29.5.1 The MatLab Implementation: . . . . . . . . . . . . . . 490
29.6 Action Potentials . . . . . . . . . . . . . . . . . . . . . . . . . 499
29.6.1 Exercise: . . . . . . . . . . . . . . . . . . . . . . . . . . 506

VII Brain Structure 513

30 Neural Structure 515


30.1 The Basic Model: . . . . . . . . . . . . . . . . . . . . . . . . . 515
30.2 Brain Structure: . . . . . . . . . . . . . . . . . . . . . . . . . 522
30.3 The Brain Stem: . . . . . . . . . . . . . . . . . . . . . . . . . 522


31 Cortex 529
31.1 Cortical Processing: . . . . . . . . . . . . . . . . . . . . . . . 530
31.1.1 Isocortex Modeling: . . . . . . . . . . . . . . . . . . . 532

VIII References 537

References 539

IX Code Examples 541

Code Examples 543

X Detailed Indices 547

Index 549


Part I

Introductory Matter

Chapter 1
Mathematical Modeling For Biology

In this book, we will discuss some of the principles behind the modeling of biological processes.
We begin with models of biological information processing, then use them to develop solutions to
the ubiquitous cable equation in both the abstract infinite length and the finite cable version. We
have to introduce new mathematical tools as we do this and we try to motivate them carefully
as we do so. We then use the cable equation in two ways. We look at the standard model of the
action potential of an excitable neuron and we also use it to model the development of organs. In
the process, we therefore use tools at the interface between science, mathematics and computer
science. We finish with a model of schizophrenia which requires an introduction to ideas from
control theory as we treat dopamine as a control in the model.

1.1 The Proper Level Of Abstraction:

In the field of biological information processing, the usual tools used to build models follow
from techniques that are loosely based on rather simplistic models from animal neurophysiology
called artificial neural architectures. We believe we can do much better than this if we can find
the right abstraction of the wealth of biological detail that is available; the right design to guide
us in the development of our modeling environment. Indeed, there are three distinct and equally
important areas that must be jointly investigated, understood and synthesized for us to make
major progress. We refer to this as the Software, Wetware and Hardware (SWH) triangle, shown in
Figure 1.1.
We use double–edged arrows to indicate that ideas from these disciplines can both enhance and
modify ideas from the others. The labels on the edges indicate possible intellectual pathways
we can travel upon in our quest for unification and synthesis: an example of the HW leg of
the triangle is algorithms mapped into the VHSIC Hardware Description Language (VHDL), which
are currently used to build hardware versions of a variety of low-level biological computational
units.

Figure 1.1: Software–Hardware–Wetware Triangle

In addition, there are the new concepts of what is called evolvable hardware. Here, new
hardware primitives referred to as Field Programmable Gate Arrays (FPGAs) are offering us the ability to
program a device's Input/Output response via a bit string which in principle can be chosen as
a consequence of environmental input (see (Higuchi et al. (11) 1997), (E. Sanchez (6) 1996) and
(Sipper (21) 1997)). There are several ways to do this: online and offline. In online strategies,
groups of FPGAs are allowed to interact and "evolve" toward an appropriate bit string input to
solve a given problem and in offline strategies, the evolution is handled via software techniques
similar to those used in genetic programming and the solution is then implemented in a chosen
FPGA. In a related approach, it is even possible to perform software evolution within a pool
of carefully chosen hardware primitives and generate output directly to a standard hardware
language such as VHDL so that the evolved hardware can then be fabricated once an appropriate
fitness level is reached. Thus current active areas of research use new relatively plastic hardware
elements (their I/O capabilities are determined at run-time) or carefully reverse-engineered analog
VLSI chipsets to provide us with a means to take abstractions of neurobiological information
and implement them on silicon substrates; in effect, there is a blurring between the traditional
responsibilities of hardware and software for the kinds of typically event driven modeling tasks
we envision here. The term Wetware in the figure is then used to denote things that are of
biological and/or neurobiological scope. And, of course, we attempt to build software to tie these
threads together.

These issues in evolvable hardware are highly relevant to any search for useful ideas and tools
for building biological models. Algorithms that can run in hardware need to be very efficient and
to really build useful models of complicated biology will require large amounts of memory and
computational resources unless we are very careful. So there are lessons we can learn by looking
at how these sorts of problems are being solved in these satellite disciplines. Even though our
example discussion here is from the arena of information processing in neurobiology, our point is
that Abstraction is a principal tool that we use to move back and forth in the fertile grounds of
software, biology and hardware.

In general, we will try hard to illuminate your journey as a reader through this material.
Let’s begin with a little philosophy; there are some general principles to pay attention to.


1.2 The Threads of Our Tapestry:

As we have mentioned, there are many threads to pull together in our research. We should not
be dismayed by this as interesting things are complex enough to warrant it. You can take heart
from what the historian Barbara Tuchman said in speaking about her learning the art of being
a historian. She was working one of her first jobs and her editor was unhappy with her: as she
says (Tuchman (23, Page 17) 1982) in her essay In Search of History

The desk editor, a newspaperman by training, grew very impatient with my work. ‘Don’t
look up so much material,’ he said. ‘You can turn out the job much faster if you don’t know
too much.’ While this was no doubt true for a journalist working against a deadline, it was
not advice that suited my temperament.

There is a specific lesson for us in this anecdote. We also are in the almost unenviable position of
realizing that we can never know too much. The problem is that in addition to mixing disciplines,
languages and points of view, we must also find the right level of abstraction for our blended
point of view to be useful for our synthesis. However, this process of finding the right level of
abstraction requires reflection and much reading and thinking.
In fact, in any endeavor in which we are trying to create a road map through a body of material,
to create a general theory that explains what we have found, we, in this process of creating the
right level of abstraction, like the historian Tuchman, have a number of duties (Tuchman (23,
Page 17) 1982):

The first is to distill. [We] must do the preliminary work for the reader, assemble the
information, make sense of it, select the essential (boldface added), discard the irrelevant
- above all discard the irrelevant - and put the rest together so that it forms a developing
... narrative. ...To discard the unnecessary requires courage and also extra work. ...[We] are
constantly being beguiled down fascinating byways and sidetracks. But the art of writing -
the test of [it] - is to resist the beguilement and cleave to the subject.

Although our intellectual journey will be through the dense technical material of computer
science, mathematics, biology and even neuroscience, Tuchman’s advice is very relevant. It is
our job to find this proper level of abstraction and this interesting road map that will illuminate
this incredible tangle of facts that are at our disposal. We have found that this need to explain
is common in most human endeavors. In music, textbooks on the art of composing teach the
beginner to think of writing music in terms of musical primitives like the nouns, verbs, sentences,
phrases, paragraphs of a given language where in the musical context each phoneme is a short
collection of five to seven notes. Another example from history is the very abstract way that
Toynbee attempted to explain the rise and fall of civilizations by defining the individual entities
city, state and civilization (Toynbee and Caplan (22) 1995).
From these general ideas about appropriate abstraction, we can develop an understanding of which
abstractions might be of good use in an attempt to develop a working model of autonomous or
cognitive behavior.


1.3 What Kinds of Abstraction Are Useful?


One way to develop computational models in biology is to use a top-down point of view. A good
description of this philosophy in neuroscience comes from (Arbib et al. (1, Page 38) 1998):
The work of the nineteenth century neurologists led us to think of the brain in terms of large
interacting regions each with a more or less specified function....The issue for the brain theorist,
then, is to map complex functions, behaviors, and patterns of thought either on the interactions
of these rather large entities (anatomically defined brain regions) or on these very small and
numerous components (neurons). This issue has led many neuroscientists to look for structures
intermediate in size and complexity between brain regions and neurons to provide stepping stones
in an analysis of how neural structures subserve various functions. Thus, the notion of the brain as
an interconnected set of modules, intermediate in complexity between neurons and brain regions,
was established with the module as the structural entity....
Top-down brain theory is essentially functional in nature: It starts with the isolation of some
overall function, such as some pattern of behavior or linguistic performance or type of perception,
and seeks to explain it by decomposing it into the interaction of a number of subsystems. What
makes this exercise brain theory as distinct from cognitive psychology or almost all of the current
connectionism is that the choice of subsystem is biased in part by what we know about the
function of different parts of the brain.
To develop models of organ genesis, visual processing or mental disease (as we will do in
this volume!), we will have to build appropriate abstractions of how cells interact. We will
need sophisticated models of connections between computational elements (synaptic links between
classical lumped sum neuronal models are one such type of connection) and new ideas for the
asynchronous learning between objects. Our challenges are great as biological systems are much
more complicated than those we see in engineering or mathematics.
These are sobering thoughts, aren't they? Still, we are confident we can shed some insight on how
to handle these modeling problems. In this volume, we will discuss relevant abstractions that can
open the doorway for you to get involved in useful and interesting biological modeling.
Unfortunately, as we have tried to show, all of the aforementioned items are very nicely (or very
horribly!) intertwined (it depends on your point of view!). We hope that you the reader will
persevere. As you have noticed, our style in this report is to use the “royal” we rather than
any sort of third person narrative. We have always felt that this should be interpreted as the
author asking the reader to explore the material with the author. To this end, the text is liberally
sprinkled with you to encourage this point of view. Please take an active role!

Part II

Quantitative Tools I

Chapter 2
Multivariable Calculus

2.1 Partial Derivatives


We introduce partial derivatives here.

Example 2.1.1. Let $z = f(x, y) = x^2 + 4y^2$ be a function of two variables. Find $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$.

Solution 2.1.1. Thinking of $y$ as a constant, we take the derivative in the usual way with respect
to $x$. This gives

$$\frac{\partial z}{\partial x} = 2x$$

as the derivative of $4y^2$ with respect to $x$ is $0$. So, we know $f_x = 2x$.

In a similar way, we find $\frac{\partial z}{\partial y}$. We see

$$\frac{\partial z}{\partial y} = 8y$$

as the derivative of $x^2$ with respect to $y$ is $0$. So $f_y = 8y$.

Example 2.1.2. Let $z = f(x, y) = 4x^2 y^3$. Find $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$.

Solution 2.1.2. Thinking of $y$ as a constant, take the derivative in the usual way with respect
to $x$. This gives

$$\frac{\partial z}{\partial x} = 8xy^3$$

as the term $4y^3$ is considered a "constant" here. So $f_x = 8xy^3$.

Similarly,

$$\frac{\partial z}{\partial y} = 12x^2 y^2$$

as the term $4x^2$ is considered a "constant" here. So $f_y = 12x^2 y^2$.
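If you have MatLab's Symbolic Math Toolbox available, you can check hand computations like these. The fragment below is a minimal sketch assuming that toolbox is installed.

% Check the partials of z = 4 x^2 y^3 symbolically
% (requires the Symbolic Math Toolbox)
syms x y
z  = 4*x^2*y^3;
fx = diff(z, x)   % expect 8*x*y^3
fy = diff(z, y)   % expect 12*x^2*y^2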

2.1.1 Problems
These are for you:

Exercise 2.1.1.

$$f(x, y) = \frac{x^2 + 2y}{5x + y^3}.$$

Find $f_x$ and $f_y$.

Exercise 2.1.2.

$$f(x, y) = \frac{x}{x + 4y}$$

Find $f_x$ and $f_y$.

Exercise 2.1.3.

$$f(x, y) = e^{x^2 + y^2}$$

Find $f_x$ and $f_y$.

Exercise 2.1.4.

$$f(x, y) = e^{-3xy}$$

Find $f_x$ and $f_y$.

Exercise 2.1.5.

$$f(t, x) = \frac{1}{\sqrt{4xt}}\, e^{-\frac{(x-2)^2}{8t}}$$

Find $f_t$ and $f_x$.

2.2 Gradients and All That


2.2.1 Problems
Exercise 2.2.1. For the following function, find its gradient and Hessian.

$$f(x, y) = x^2 + x^4 y^3$$

Exercise 2.2.2. For the following function, find its gradient and Hessian.

$$f(x, y) = x^2 - 20 y^4 x^5$$
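Assuming again that the Symbolic Math Toolbox is installed, here is a sketch of how you might check a gradient and Hessian computation in MatLab.

% Gradient and Hessian of f(x,y) = x^2 + x^4 y^3
syms x y
f = x^2 + x^4*y^3;
g = gradient(f, [x y])   % column vector [2*x + 4*x^3*y^3; 3*x^4*y^2]
H = hessian(f, [x y])    % 2 x 2 symmetric matrix of second partials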

2.3 Approximating Multivariable Functions

2.3.1 Problems

For a function of two variables, $f(x, y)$, we can estimate the error made in approximating $f$ using
the gradient at a given point $(x^*, y^*)$ as follows. We know

$$f(x, y) = f(x^*, y^*) + \nabla(f)^* \, [x - x^*, y - y^*]^T + \frac{1}{2}\, [x - x^*, y - y^*] \, H^c \, [x - x^*, y - y^*]^T$$

where $\nabla(f)^*$ is the gradient of $f$ evaluated at the point $(x^*, y^*)$ and the notation $H^c$ means the
Hessian is evaluated at a point $(x_c, y_c)$ on the line segment connecting $(x^*, y^*)$ to $(x, y)$. Further,
if we know $f$ and its partials are bounded locally, then on the rectangle

$$R_r = [x^* - r, x^* + r] \times [y^* - r, y^* + r]$$

there is a constant $M_f^r$ for which

$$\max_{(x,y) \in R_r} |f(x, y)| \le M_f^r, \qquad \max_{(x,y) \in R_r} |f_x(x, y)| \le M_f^r, \qquad \max_{(x,y) \in R_r} |f_y(x, y)| \le M_f^r,$$
$$\max_{(x,y) \in R_r} |f_{xx}(x, y)| \le M_f^r, \qquad \max_{(x,y) \in R_r} |f_{xy}(x, y)| \le M_f^r, \qquad \max_{(x,y) \in R_r} |f_{yy}(x, y)| \le M_f^r.$$

Thus, the error satisfies

$$\left| f(x, y) - f(x^*, y^*) - \nabla(f)^* \, [x - x^*, y - y^*]^T \right| \le \frac{1}{2} M_f^r \left( |x - x^*| + |y - y^*| \right)^2.$$

Since the biggest $|x - x^*|$ and $|y - y^*|$ can be is $r$, we therefore see the biggest absolute error is
bounded by $\frac{1}{2} M_f^r \, (2r)^2$, or $2 M_f^r r^2$.

Example 2.3.1. Let $f(x, y) = x^2 y^4 + 2x + 3y + 10$. Then

$$f_x = 2xy^4 + 2, \quad f_y = 4x^2 y^3 + 3, \quad f_{xx} = 2y^4, \quad f_{xy} = f_{yx} = 8xy^3, \quad f_{yy} = 12x^2 y^2$$

So at $(x^*, y^*) = (0, 0)$, letting $E$ denote the error, we have

$$f(x, y) = 10 + \langle [2, 3]^T, [x, y]^T \rangle + E$$

with the largest error in $R_r = [-r, r] \times [-r, r]$ given by $2 M_f^r r^2$. Here, using the second
partials above, in $R_r$ we find these estimates:

$$\max_{(x,y) \in R_r} |f_{xx}| = 2r^4, \qquad \max_{(x,y) \in R_r} |f_{xy}| = 8r^4, \qquad \max_{(x,y) \in R_r} |f_{yy}| = 12r^4.$$

So we can use $M_f^r = 12r^4$ and get that the largest error for the approximation is

$$\left| f(x, y) - 10 - \langle [2, 3]^T, [x, y]^T \rangle \right| \le 2 M_f^r \, r^2 = 2 (12 r^4) r^2 = 24 r^6.$$

So for $r = .8$, the maximum error is overestimated by $24(.8)^6 = 6.29$ – probably bad! For $r = .4$,
the maximum error is $24(.4)^6 = .1$ – better! To make the largest error $< 10^{-4}$, solve $24 r^6 < 10^{-4}$.
This gives $r^6 < 4.17 \times 10^{-6}$. Thus, $r < .12687$ will do the job.
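We can test this bound numerically. The sketch below samples the true error of the linear approximation on a grid in $R_r$ and compares it to the overestimate $24 r^6$; the grid resolution of 101 points per side is simply our choice.

% Compare the actual approximation error to the bound 24 r^6 at (0,0)
f = @(x,y) x.^2.*y.^4 + 2*x + 3*y + 10;
T = @(x,y) 10 + 2*x + 3*y;             % tangent plane approximation at (0,0)
r = 0.4;
[X,Y] = meshgrid(linspace(-r,r,101));
worst = max(max(abs(f(X,Y) - T(X,Y))));
bound = 2*(12*r^4)*r^2;                % this is 24 r^6
fprintf('worst error = %g, bound = %g\n', worst, bound);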


We can also do this sort of error estimation at another point, say $(x^*, y^*) = (1, 2)$, which, of
course, is much yuckier to do! Then,

$$R_r = [1 - r, 1 + r] \times [2 - r, 2 + r]$$

and for our second order partials, in $R_r$,

$$\max_{(x,y) \in R_r} |f_{xx}| = 2(2 + r)^4, \qquad \max_{(x,y) \in R_r} |f_{xy}| = 8(1 + r)(2 + r)^3, \qquad \max_{(x,y) \in R_r} |f_{yy}| = 12(1 + r)^2 (2 + r)^2.$$

Since $(1 + r) < (2 + r)$, we can use

$$M_f^r = 12(2 + r)^2 (2 + r)^4 = 12(2 + r)^6$$

and to make the largest error in the approximation $< 10^{-4}$, we note

$$2 M_f^r \left( (1 + r) + (2 + r) \right)^2 = 24(2 + r)^6 \left( (1 + r) + (2 + r) \right)^2 < 24(2 + r)^6 \left( (2 + r) + (2 + r) \right)^2 = 96(2 + r)^8.$$

Now, set this estimate less than $.0001$ and we can find the $r$ we need. We can always do this sort
of thing, but you can see doing this at $(0, 0)$ is similar and has less mess!

2.3.2 Problems
Exercise 2.3.1. Approximate $f(x, y) = x^2 + y^4 x^5 + 3x + 4y + 25$ near $(0, 0)$ as usual. Find the
$r$ where the error is less than $10^{-3}$.

Exercise 2.3.2. Approximate $f(x, y) = 4x^4 y^4 x^5 + 3x + 40y + 5$ near $(0, 0)$. Find the $r$ where
the error is less than $10^{-6}$.

Chapter 3
Linear Algebra Concepts

We need to use both vector and matrix ideas in this course. Most of you should have seen this
kind of material before in earlier classes, but we will do a little review before we introduce some
computational tools in MatLab to help us solve some of the problems that arise in our models.

3.1 Matrices

A matrix is a rectangular collection of real numbers organized like this:

$$\begin{bmatrix} -2 & 4 & 5 & 1 \\ -12 & 14 & 15 & -6 \\ 20 & 4 & 1 & 2 \\ 8 & 14 & 5 & 11 \end{bmatrix} \tag{3.1}$$

In Equation 3.1, we have a collection of numbers which are organized into 4 rows and 4 columns.
We call this a square matrix because the number of rows and columns are the same. This particular
matrix has only positive or negative integers in it, but of course the number 0 could be used as
well as real numbers like 1.2356, π and e. It is just easier to type integers! We would call this a
4 × 4 matrix and read this as a 4 by 4 matrix. A matrix can also have a different number of rows
and columns. Consider the matrices shown in Equation 3.2, which is a 5 × 4 matrix, and Equation
3.3, which is a 5 × 3 matrix. We call 5 × 4 the size of the matrix in Equation 3.2. In general, if
a matrix has m rows and n columns, we say its size is m × n.



$$\begin{bmatrix} -2 & 4 & 5 & 1 \\ -12 & 14 & 15 & -6 \\ 20 & 4 & 1 & 2 \\ 8 & 14 & 5 & 11 \\ -2 & -23 & 7 & -3 \end{bmatrix} \tag{3.2}$$

$$\begin{bmatrix} 4 & 5 & 1 \\ 14 & 15 & -6 \\ 4 & 1 & 2 \\ 14 & 5 & 11 \\ -23 & 7 & -3 \end{bmatrix} \tag{3.3}$$

We usually denote a matrix by a capital letter such as $A$. Hence, Equation 3.1, Equation 3.2 and
Equation 3.3 could be labeled as follows:

$$A = \begin{bmatrix} -2 & 4 & 5 & 1 \\ -12 & 14 & 15 & -6 \\ 20 & 4 & 1 & 2 \\ 8 & 14 & 5 & 11 \end{bmatrix}, \quad B = \begin{bmatrix} -2 & 4 & 5 & 1 \\ -12 & 14 & 15 & -6 \\ 20 & 4 & 1 & 2 \\ 8 & 14 & 5 & 11 \\ -2 & -23 & 7 & -3 \end{bmatrix}, \quad C = \begin{bmatrix} 4 & 5 & 1 \\ 14 & 15 & -6 \\ 4 & 1 & 2 \\ 14 & 5 & 11 \\ -23 & 7 & -3 \end{bmatrix}$$

Each entry in a matrix can be labeled by the row and column it occurs in. Thus, the entry in
row 2 and column 3 of a matrix is labeled $A_{23}$. So, the matrix in Equation 3.2 has the labels
shown in Equation 3.4:

$$B = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \\ A_{51} & A_{52} & A_{53} & A_{54} \end{bmatrix} = \begin{bmatrix} -2 & 4 & 5 & 1 \\ -12 & 14 & 15 & -6 \\ 20 & 4 & 1 & 2 \\ 8 & 14 & 5 & 11 \\ -2 & -23 & 7 & -3 \end{bmatrix}. \tag{3.4}$$

3.1.1 The Zero Matrices

There are some special matrices. A matrix that only has 0 as its entries is called a zero matrix.
Now, since there are matrices of all different sizes, we can not pick just one to call the zero matrix.
So when we are working on a problem, we just use the size of the zero matrix that is appropriate
for the problem’s context. For example, a 4 × 3 zero matrix would be



$$0 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

while a $2 \times 2$ zero matrix would be

$$0 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$$

We could denote these two matrices by, say, $0^{4 \times 3}$ or $0^{2 \times 2}$ to distinguish them. However, that
is really cumbersome! So as we mentioned above, it is usually pretty easy to figure out the
appropriate size of the zero matrix from context.

3.1.2 Square Matrices

Square matrices often occur in our work, i.e. matrices that have the same number of rows and
columns. Consider


$$A = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix} = \begin{bmatrix} -2 & 4 & 5 & 1 \\ -12 & 14 & 15 & -6 \\ 20 & 4 & 1 & 2 \\ 8 & 14 & 5 & 11 \end{bmatrix} \tag{3.5}$$

A square matrix has three important parts, each a subset of the original matrix.

1. The Lower Triangular Part of $A$ is $L$ given by

$$L = \begin{bmatrix} A_{11} & 0 & 0 & 0 \\ A_{21} & A_{22} & 0 & 0 \\ A_{31} & A_{32} & A_{33} & 0 \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix} = \begin{bmatrix} -2 & 0 & 0 & 0 \\ -12 & 14 & 0 & 0 \\ 20 & 4 & 1 & 0 \\ 8 & 14 & 5 & 11 \end{bmatrix} \tag{3.6}$$

2. The Upper Triangular Part of $A$ is $U$ given by

$$U = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ 0 & A_{22} & A_{23} & A_{24} \\ 0 & 0 & A_{33} & A_{34} \\ 0 & 0 & 0 & A_{44} \end{bmatrix} = \begin{bmatrix} -2 & 4 & 5 & 1 \\ 0 & 14 & 15 & -6 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 11 \end{bmatrix} \tag{3.7}$$

3. The Diagonal Part of $A$ is $D$ given by

$$D = \begin{bmatrix} A_{11} & 0 & 0 & 0 \\ 0 & A_{22} & 0 & 0 \\ 0 & 0 & A_{33} & 0 \\ 0 & 0 & 0 & A_{44} \end{bmatrix} = \begin{bmatrix} -2 & 0 & 0 & 0 \\ 0 & 14 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 11 \end{bmatrix} \tag{3.8}$$
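MatLab can extract these three parts directly; a quick sketch using the matrix from Equation 3.5:

A = [-2 4 5 1; -12 14 15 -6; 20 4 1 2; 8 14 5 11];
L = tril(A)          % lower triangular part, as in Equation 3.6
U = triu(A)          % upper triangular part, as in Equation 3.7
D = diag(diag(A))    % diagonal part, as in Equation 3.8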

3.1.3 The Identity Matrices

We can also define what is called the identity matrix. An identity matrix is a square matrix
whose only nonzero entries are ones on the diagonal. For example,

$$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

is a $3 \times 3$ identity matrix, while a $4 \times 4$ identity matrix would be

$$I = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

We could denote these two matrices by, say, $I^{3 \times 3}$ or $I^{4 \times 4}$ to distinguish them. However, this
notation is just as irritating to use as the analogous one for the zero matrices. Hence, we
usually figure out the appropriate size from context.

3.1.4 The Transpose Of A Matrix

We can illustrate this easiest with a simple example. Consider the $5 \times 4$ matrix $A$ defined by

$$A = \begin{bmatrix} -2 & 4 & 5 & 1 \\ -12 & 14 & 15 & -6 \\ 20 & 4 & 1 & 2 \\ 8 & 14 & 5 & 11 \\ -2 & -23 & 7 & -3 \end{bmatrix}$$

The transpose of $A$ is the matrix formed by switching the rows and columns of $A$. We denote
this new matrix by $A^T$ or sometimes $A'$. If the entries of $A$ are as usual given by $A_{ij}$, where $i$
is the row number and $j$ is the column number, the entries of $A^T$ have these indices reversed.
That sounds confusing, doesn't it? Try this: if $A_{14} = \beta$, then the row 4 and column 1 entry of
$A^T$ becomes $\beta$. Note the switching of rows and columns? It is that easy to find the transpose of
a matrix. So $A^T$ here is

$$A^T = \begin{bmatrix} -2 & -12 & 20 & 8 & -2 \\ 4 & 14 & 4 & 14 & -23 \\ 5 & 15 & 1 & 5 & 7 \\ 1 & -6 & 2 & 11 & -3 \end{bmatrix}$$

So you should be able to see that the row $i$, column $j$ entry of $A^T$ is precisely the row $j$, column
$i$ entry of $A$. We can also say this by stating $\left( A^T \right)_{ij} = A_{ji}$.

Comment 3.1.1. If a matrix $A$ has size $m \times n$, then its transpose, $A^T$, has size $n \times m$.

Comment 3.1.2. If a matrix $A$ equals its own transpose, then first, we know $A$ must be a square
matrix of size $n \times n$ for some positive integer $n$. Thus,

$$\left( A^T \right)_{ij} = A_{ij} = A_{ji}$$

In this case, we say $A$ is symmetric.

Thus, the matrix $A$ below is symmetric.

$$A = \begin{bmatrix} -2 & -12 & 20 & 8 \\ -12 & 14 & 4 & 14 \\ 20 & 4 & 1 & 5 \\ 8 & 14 & 5 & 11 \end{bmatrix}$$
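In MatLab, the transpose of a real matrix A is written A', so a symmetry check is a one-liner; a sketch:

A = [-2 -12 20 8; -12 14 4 14; 20 4 1 5; 8 14 5 11];
At = A'              % the transpose; same as A.' for real matrices
isequal(A, At)       % returns 1 (true): this A is symmetric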

3.1.5 Homework
Exercise 3.1.1. Find the transpose of

$$\begin{bmatrix} 2 & 3 & 4 \\ -1 & 4 & 90 \end{bmatrix}$$

Exercise 3.1.2. Find the transpose of

$$\begin{bmatrix} 2 & 3 & 4 \\ -11 & 4 & 9 \\ 6 & -3 & 8 \end{bmatrix}$$

Exercise 3.1.3. Is this matrix symmetric?

$$\begin{bmatrix} 2 & 3 & 4 \\ 3 & 4 & -3 \\ 4 & -3 & 8 \end{bmatrix}$$

Exercise 3.1.4. Is this matrix symmetric?

$$\begin{bmatrix} 2 & -3 \\ 3 & 4 \end{bmatrix}$$

3.2 Operations On Matrices

We can also perform many operations on matrices. It is easiest to show these operations with
examples.

1. We can add matrices of the same size by adding their components.

$$\begin{bmatrix} 1 & -2 & 3 \\ 4 & 1 & -8 \\ -7 & 6 & 12 \end{bmatrix} + \begin{bmatrix} -20 & 3 & -11 \\ 16 & 9 & 5 \\ 16 & 2 & -8 \end{bmatrix} = \begin{bmatrix} 1 - 20 & -2 + 3 & 3 - 11 \\ 4 + 16 & 1 + 9 & -8 + 5 \\ -7 + 16 & 6 + 2 & 12 - 8 \end{bmatrix} = \begin{bmatrix} -19 & 1 & -8 \\ 20 & 10 & -3 \\ 9 & 8 & 4 \end{bmatrix}$$

2. We can subtract matrices of the same size by subtracting their components.

$$\begin{bmatrix} 1 & -2 & 3 \\ 4 & 1 & -8 \\ -7 & 6 & 12 \end{bmatrix} - \begin{bmatrix} -20 & 3 & -11 \\ 16 & 9 & 5 \\ 16 & 2 & -8 \end{bmatrix} = \begin{bmatrix} 1 + 20 & -2 - 3 & 3 + 11 \\ 4 - 16 & 1 - 9 & -8 - 5 \\ -7 - 16 & 6 - 2 & 12 + 8 \end{bmatrix} = \begin{bmatrix} 21 & -5 & 14 \\ -12 & -8 & -13 \\ -23 & 4 & 20 \end{bmatrix}$$

3. We can scale a matrix by multiplying each component of the matrix by the same number.

$$-3 \times \begin{bmatrix} 1 & -2 & 3 \\ 4 & 1 & -8 \\ -7 & 6 & 12 \end{bmatrix} = \begin{bmatrix} -3 & 6 & -9 \\ -12 & -3 & 24 \\ 21 & -18 & -36 \end{bmatrix}$$

4. We can multiply two matrices $A$ and $B$ if their sizes are just right. The number of columns
of $A$ must match the number of rows of $B$. In the example below, the number of columns of
the first matrix is 3, which matches the number of rows in the second matrix. So the matrix
multiplication is defined. Since the size of $A$ is $4 \times 3$ and the size of $B$ is $3 \times 2$, the size of
the product will be $4 \times 2$. In this example, each row of the first matrix has 3 entries and
each column of the second matrix has 3 rows. Look at row 1 of the first matrix and column
1 of the second matrix. We multiply row 1 and column 1 like this:

$$\begin{bmatrix} 1 & -2 & 3 \end{bmatrix} \times \begin{bmatrix} -20 \\ 16 \\ 16 \end{bmatrix} = (1)(-20) + (-2)(16) + (3)(16).$$

In general, if the entries were generic, we would have for the $i^{th}$ row of $A$ and the $j^{th}$ column
of $B$

$$\begin{bmatrix} A_{i1} & A_{i2} & A_{i3} \end{bmatrix} \times \begin{bmatrix} B_{1j} \\ B_{2j} \\ B_{3j} \end{bmatrix} = (A_{i1})(B_{1j}) + (A_{i2})(B_{2j}) + (A_{i3})(B_{3j}) = \sum_{k=1}^{3} A_{ik} B_{kj},$$

where the individual components of $A$ are denoted by $A_{ij}$ and those of $B$ by $B_{ij}$ for appropriate
indices $i$ and $j$. Hence, the full matrix multiplication of these two matrices is given by

$$\begin{bmatrix} 1 & -2 & 3 \\ 4 & 1 & -8 \\ -7 & 6 & 12 \\ 12 & -2 & 3 \end{bmatrix} \times \begin{bmatrix} -20 & 3 \\ 16 & 9 \\ 16 & 2 \end{bmatrix} = \begin{bmatrix} (1)(-20) + (-2)(16) + (3)(16) & (1)(3) + (-2)(9) + (3)(2) \\ (4)(-20) + (1)(16) + (-8)(16) & (4)(3) + (1)(9) + (-8)(2) \\ (-7)(-20) + (6)(16) + (12)(16) & (-7)(3) + (6)(9) + (12)(2) \\ (12)(-20) + (-2)(16) + (3)(16) & (12)(3) + (-2)(9) + (3)(2) \end{bmatrix}$$

$$= \begin{bmatrix} -20 - 32 + 48 & 3 - 18 + 6 \\ -80 + 16 - 128 & 12 + 9 - 16 \\ 140 + 96 + 192 & -21 + 54 + 24 \\ -240 - 32 + 48 & 36 - 18 + 6 \end{bmatrix} = \begin{bmatrix} -4 & -9 \\ -192 & 5 \\ 428 & 57 \\ -224 & 24 \end{bmatrix}$$

Comment 3.2.1. If A is a square matrix of size n × n, then if I denotes the identity matrix of
size n × n, both multiplications I A and A I are possible and give the answer A. This is why I
is called the identity matrix!

Comment 3.2.2. If A is a matrix of any size and 0 is the appropriate zero matrix of the same
size, then both 0 + A and A + 0 are nicely defined operations and the result is just A.

Comment 3.2.3. Matrix multiplication is not commutative: i.e. for square matrices A and
B, the matrix product A B is not necessarily the same as the product B A.
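These operations are single statements in MatLab. The sketch below redoes the 4 × 2 product computed by hand above and illustrates Comment 3.2.3 with a small pair of matrices of our own choosing.

A = [1 -2 3; 4 1 -8; -7 6 12; 12 -2 3];
B = [-20 3; 16 9; 16 2];
C = A*B              % the 4 x 2 product worked out by hand above
P = [1 2; 3 4];  Q = [0 1; 1 0];
P*Q - Q*P            % nonzero: matrix multiplication does not commute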


3.2.1 Homework
Exercise 3.2.1. Compute

$$\begin{bmatrix} -1.0 & 2.5 & 6.0 \\ 8.0 & -1.0 & 2.5 \\ -3.0 & 4.2 & 12.0 \end{bmatrix} \begin{bmatrix} 2 \\ -5 \\ 7 \end{bmatrix}$$

Exercise 3.2.2. Compute

$$\begin{bmatrix} -1.0 & 2.5 & 6.0 & 5 \\ 8.0 & -1.0 & 2.5 & -8.2 \\ -3.0 & 4.2 & 12.0 & 6.1 \end{bmatrix} \begin{bmatrix} 2 & -5 & 7 \\ -10 & 0.3 & 8 \\ 1 & -1 & 2 \\ 6 & 16.5 & -2 \end{bmatrix}$$

Exercise 3.2.3. Consider

$$C = \begin{bmatrix} -1.0 & 2.5 & 6.0 & 5 \\ 8.0 & -1.0 & 2.5 & -8.2 \\ -3.0 & 4.2 & 12.0 & 6.1 \\ 2 & -3 & 7.2 & 9.4 \end{bmatrix} \quad \text{and} \quad D = \begin{bmatrix} 2 & -5 & 7 & 9 \\ -10 & 0.3 & 8 & 10 \\ 1 & -1 & 2 & -5 \\ 6 & 16.5 & -2 & 14 \end{bmatrix}$$

1. Compute $C + D$

2. Compute $C - D$

3. Compute $2C + 3D$

4. Compute $-4C + 5D$

5. Compute $C D - D C$

Exercise 3.2.4. Consider

$$C = \begin{bmatrix} -1.0 & 2.5 & 6.0 \\ 8.0 & -1.0 & 2.5 \\ -3.0 & 4.2 & 12.2 \end{bmatrix} \quad \text{and} \quad D = \begin{bmatrix} 12 & -5 & 7.1 \\ -10 & 8.3 & 8 \\ 10 & -1 & 2.4 \end{bmatrix}$$

1. Compute $C + D$

2. Compute $C - D$

3. Compute $3C + 2D$

4. Compute $-2C + 4D$

5. Compute $C D - D C$


Exercise 3.2.5. Consider

$$P = \begin{bmatrix} -1.0 & 12.5 \\ 18.0 & -1.0 \end{bmatrix} \quad \text{and} \quad Q = \begin{bmatrix} 2 & -6 \\ -20 & 3.3 \end{bmatrix}$$

1. Compute $P + Q$

2. Compute $P - Q$

3. Compute $8P + 2Q$

4. Compute $-6P + 5Q$

5. Compute $P Q - Q P$

3.3 Vectors
A vector is a matrix which has only one row or one column. We will call vectors with one
column column vectors and those with one row row vectors. Thus,

$$V = \begin{bmatrix} 4 \\ 14 \\ 4 \\ 14 \\ -23 \end{bmatrix} \quad \text{and} \quad W = \begin{bmatrix} -2 & 4 & 5 & 1 \end{bmatrix}$$

define a $5 \times 1$ column vector $V$ and a $1 \times 4$ row vector $W$. The addition and subtraction of vectors
follows the usual matrix type rules. However, $V - W$ here is not defined, as the sizes do not
match!

3.4 Operations On Vectors


It is easiest to show these operations with examples.

1. We can add row vectors of the same size by adding their components.

$$\begin{bmatrix} 1 & -2 & 3 \end{bmatrix} + \begin{bmatrix} -20 & 3 & -11 \end{bmatrix} = \begin{bmatrix} 1 - 20 & -2 + 3 & 3 - 11 \end{bmatrix} = \begin{bmatrix} -19 & 1 & -8 \end{bmatrix}$$

Addition of column vectors is similar, of course.

2. We can subtract column vectors of the same size by subtracting their components. Subtraction of row vectors is similar!

$$\begin{bmatrix} 1 \\ 4 \\ -7 \end{bmatrix} - \begin{bmatrix} -20 \\ 16 \\ 16 \end{bmatrix} = \begin{bmatrix} 1 + 20 \\ 4 - 16 \\ -7 - 16 \end{bmatrix} = \begin{bmatrix} 21 \\ -12 \\ -23 \end{bmatrix}$$

3. We can scale a vector by multiplying each component of the vector by the same number.

$$-3 \times \begin{bmatrix} 1 \\ 4 \\ -7 \end{bmatrix} = \begin{bmatrix} -3 \\ -12 \\ 21 \end{bmatrix}$$

4. The transpose of a row vector is a column vector and vice versa. If $V$ is defined by

$$V = \begin{bmatrix} 1 \\ 4 \\ -7 \end{bmatrix}$$

then the transpose of the column vector $V$ is the row vector $V^T$ given by

$$V^T = \begin{bmatrix} 1 & 4 & -7 \end{bmatrix}$$
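The same operations are one-liners in MatLab; a short sketch:

V  = [1; 4; -7];     % a 3 x 1 column vector
W  = -3*V            % scaling: gives [-3; -12; 21]
Vt = V'              % transpose: the 1 x 3 row vector [1 4 -7]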

3.5 The Magnitude Of A Vector

The magnitude of a column or a row vector is defined to be its length. We start with a vector of
two components,

$$V = \begin{bmatrix} a \\ c \end{bmatrix}$$

where for convenience we use letters to denote the components of $V$. We usually say $V$ is a Two
Dimensional vector because it has two components. Since it is easy to casually identify row and
column vectors, we typically think of a $1 \times 2$ row vector also as two dimensional even though,
intellectually, they are actually different mathematical beasts! Let's graph this vector using its
components as coordinates in the standard $x$-$y$ plane. So we identify $V$ with the ordered pair
$(a, c)$. Note this ordered pair $(a, c)$ determines a line segment of length $\sqrt{a^2 + c^2}$, as is shown in Figure
3.1. Hence, the vector $V$ has a polar coordinate representation with

$$a = r \cos(\theta), \qquad c = r \sin(\theta)$$

just like before.

Figure 3.1: Graphing A Two Dimensional Vector. A vector $V$ can be identified with an ordered pair $(a, c)$. The components $(a, c)$ are graphed in the usual Cartesian manner as an ordered pair in the plane. The magnitude of $V$ is $\sqrt{a^2 + c^2}$, which is shown on the graph as $r$. The angle associated with $V$ is drawn as an arc of angle $\theta$.

Since the vector $V$ is graphed in the $x$-$y$ plane as the coordinate $(a, c)$ as shown in Figure 3.1,
we see the line connecting $(0, 0)$ and $(a, c)$ has equation $y = (c/a)x$. Thus, one way to visualize
the vector $V$ is as a line segment starting at $(0, 0)$ with an arrow at its head $(a, c)$.
What if $V$ was of size $3 \times 1$? We would call this column vector a Three Dimensional vector
because it has three components. Assume $V$ is defined by

$$V = \begin{bmatrix} a \\ c \\ e \end{bmatrix}$$

where our use of $e$ here is not at all related to the $e$ we use in defining the natural logarithm!
Again, think context at all times!! We can identify this vector with a triple $(a, c, e)$ in three
dimensional space as shown in Figure 3.2.

Figure 3.2: Graphing A Three Dimensional Vector. A vector $V$ can be identified with an ordered triple $(a, c, e)$. The components $(a, c, e)$ are graphed in the usual Cartesian manner. The magnitude of $V$ is $\sqrt{a^2 + c^2 + e^2}$, which is shown on the graph as $r$.

Now let's return to vectors of size $n \times 1$, which we will call $n$ Dimensional Vectors. We clearly
can only graph two and three dimensional vectors, but the formulas we have used for the magnitude
of the two and three dimensional vectors ($\sqrt{a^2 + c^2}$ and $\sqrt{a^2 + c^2 + e^2}$, respectively)
suggest the $n$ dimensional magnitude should be defined by

Definition 3.5.1 (The Norm Of A Vector).

The norm of the row or column vector $V$ with components $\{V_1, \ldots, V_n\}$ is

$$|| V || = \sqrt{V_1^2 + \cdots + V_n^2}$$
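MatLab's built-in norm function computes exactly this quantity; a quick sketch:

V = [1; 4; -7];
norm(V)              % sqrt(1 + 16 + 49) = sqrt(66)
sqrt(sum(V.^2))      % the same computation written out from Definition 3.5.1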

3.5.1 Some Matrix - Vector Calculations

Note that the multiplication of a matrix $A$ and a column vector $V$ is defined as long as the
number of columns of $A$ is the same as the number of rows of $V$. Thus, if

$$A = \begin{bmatrix} -1 & 2 & 6 \\ 12 & -1 & 2 \\ -3 & 4 & 9 \end{bmatrix} \quad \text{and} \quad V = \begin{bmatrix} 1 \\ 4 \\ 3 \end{bmatrix}$$

then $A V$ is

$$\begin{bmatrix} (-1)(1) + (2)(4) + (6)(3) \\ (12)(1) + (-1)(4) + (2)(3) \\ (-3)(1) + (4)(4) + (9)(3) \end{bmatrix} = \begin{bmatrix} 25 \\ 14 \\ 40 \end{bmatrix}$$

Note that the product $V^T A^T$ is also defined and gives

$$\begin{bmatrix} 1 & 4 & 3 \end{bmatrix} \begin{bmatrix} -1 & 12 & -3 \\ 2 & -1 & 4 \\ 6 & 2 & 9 \end{bmatrix} = \begin{bmatrix} 25 & 14 & 40 \end{bmatrix}$$

which is the transpose of the first calculation. Note that in this case, we have found that $(A V)^T =
V^T A^T$. This is true in general.
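A quick MatLab check of this calculation and of the identity (A V)^T = V^T A^T:

A = [-1 2 6; 12 -1 2; -3 4 9];
V = [1; 4; 3];
A*V                  % the column vector [25; 14; 40]
V'*A'                % the row vector [25 14 40], its transpose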

3.5.2 The Inner Product Of Two Column Vectors

If $V$ and $W$ are two column vectors of size $n \times 1$, then the product $V^T W$ is a matrix of size
$1 \times 1$ which we identify with a real number. We see if

$$V = \begin{bmatrix} V_1 \\ V_2 \\ V_3 \\ \vdots \\ V_n \end{bmatrix} \quad \text{and} \quad W = \begin{bmatrix} W_1 \\ W_2 \\ W_3 \\ \vdots \\ W_n \end{bmatrix}$$

then we define the $1 \times 1$ matrix

$$V^T W = W^T V = [V_1 W_1 + V_2 W_2 + V_3 W_3 + \cdots + V_n W_n]$$

and we identify this one by one matrix with the real number

$$V_1 W_1 + V_2 W_2 + V_3 W_3 + \cdots + V_n W_n$$

This product is so important, it is given a special name: it is the inner product of the two
vectors $V$ and $W$. Let's make this formal with Definition 3.5.2.


Definition 3.5.2 (The Inner Product Of Two Vectors).

If $V$ and $W$ are two column vectors of size $n \times 1$, the inner product of these
vectors is denoted by $\langle V, W \rangle$; it is defined as the matrix product $V^T W$,
which is equivalent to $W^T V$, and we interpret this $1 \times 1$ matrix product
as the real number

$$V_1 W_1 + V_2 W_2 + V_3 W_3 + \cdots + V_n W_n$$

where $V_i$ are the components of $V$ and $W_i$ are the components of $W$.

What could this number $\langle V, W \rangle$ possibly mean? To figure this out, we have to do some
algebra. Let's specialize to nonzero column vectors with only 2 components. Let

$$V = \begin{bmatrix} a \\ c \end{bmatrix} \quad \text{and} \quad W = \begin{bmatrix} b \\ d \end{bmatrix}$$

Since these vectors are not zero, at most one of the terms in $(a, c)$ and in $(b, d)$ can be zero,
because otherwise both components would be zero and we are assuming these vectors are not the zero
vector. We will use this fact in a bit. Now here $\langle V, W \rangle = ab + cd$. So

$$(ab + cd)^2 = a^2 b^2 + 2abcd + c^2 d^2$$

$$|| V ||^2 \, || W ||^2 = \left( a^2 + c^2 \right) \left( b^2 + d^2 \right) = a^2 b^2 + a^2 d^2 + c^2 b^2 + c^2 d^2$$

Thus,

$$|| V ||^2 \, || W ||^2 - (\langle V, W \rangle)^2 = a^2 b^2 + a^2 d^2 + c^2 b^2 + c^2 d^2 - a^2 b^2 - 2abcd - c^2 d^2 = a^2 d^2 - 2abcd + c^2 b^2 = (ad - bc)^2.$$

Now, this does look complicated, doesn't it? But this last term is something squared and so it
must be nonnegative! Hence, taking square roots, we have shown that

$$|\langle V, W \rangle| \le || V || \, || W ||$$


Note, since a real number is always less than or equal to its absolute value, we can also say

$$\langle V, W \rangle \le || V || \, || W ||$$

And we can say more. If it turned out that the term $(ad - bc)^2$ was zero, then $ad - bc = 0$.
There are then a few cases to look at.

1. If all the terms $a$, $b$, $c$ and $d$ are not zero, then $ad = bc$ implies $c/a = d/b$.
We know the vector $V$ can be interpreted as the line segment starting at $(0, 0)$ on the line
with equation $y = (c/a)x$. Similarly, the vector $W$ can be interpreted as the line segment
connecting $(0, 0)$ and $(b, d)$ on the line $y = (d/b)x$. Since $c/a = d/b$, these lines are the
same. So both points $(a, c)$ and $(b, d)$ lie on the same line. Thus, we see these vectors lie
on top of each other or point directly opposite each other in the $x$-$y$ plane; i.e. the angle
between these vectors is $0$ or $\pi$ radians (that is, $0$ or $180$ degrees).

2. If $a = 0$, then $bc$ must be $0$ also. Since we know the vector $V$ is not the zero vector, we can't
have $c = 0$ also. Thus, $b$ must be zero. This tells us $V$ has components $(0, c)$ for some nonzero
$c$ and $W$ has components $(0, d)$ for some nonzero $d$. These components also determine
two lines like in the case above which either point in the same direction or opposite one
another. Hence, again, the angle between the lines determined by these vectors is either $0$
or $\pi$ radians.

3. We can argue just like the case above if $d = 0$. We would find the angle between the lines
determined by the vectors is either $0$ or $\pi$ radians.

We can summarize our results as a theorem, which is called the Cauchy–Schwarz Theorem
for two dimensional vectors.

Theorem 3.5.1 (Cauchy–Schwarz Theorem For Two Dimensional Vectors).

If $V$ and $W$ are two dimensional column vectors with components $(a, c)$ and
$(b, d)$ respectively, then it is always true that

$$\langle V, W \rangle \le |\langle V, W \rangle| \le || V || \, || W ||$$

Moreover,

$$|\langle V, W \rangle| = || V || \, || W ||$$

if and only if the quantity $ad - bc = 0$. Further, this quantity is equal to $0$ if
and only if the angle between the line segments determined by the vectors $V$
and $W$ is $0$ or $180$ degrees.


Theorem 3.5.1 also holds for vectors that are n dimensional, although our proof would no
longer work. To do the more general case, we can’t resort to techniques specialized to what we
can see in the x − y plane! In particular, we don’t get that wonderful condition ad − bc = 0 if
and only if the angle between the lines determined by the vectors is 0 or 180.
Here is yet another way to look at this: assume there is a nonzero value of $t$ so that the
equation below is true.

$$V + t\, W = \begin{bmatrix} a \\ c \end{bmatrix} + t \begin{bmatrix} b \\ d \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

This implies

$$\begin{bmatrix} a \\ c \end{bmatrix} = -t \begin{bmatrix} b \\ d \end{bmatrix}$$

Since these two vectors are equal, their components must match. Thus, we must have

$$a = -t\, b, \qquad c = -t\, d$$

Thus,

$$a d = (-t\, b)\, d = b\, (-t\, d) = bc$$

and we are back to $ad - bc = 0$! Hence, another way of saying that the vectors $V$ and $W$ are
either $0$ or $180$ degrees apart is to say that as vectors they are multiples of one another! Such
vectors are called collinear vectors to save writing. In general, we say two $n$ dimensional vectors
are collinear if there is a nonzero constant $t$ so that $V = t\, W$, although, of course, we can't really
figure out a way to visualize these vectors!
Now, the scaled vectors E = V / || V || and F = W / || W || have magnitudes of 1. Their
components are (a / || V ||, c / || V ||) and (b / || W ||, d / || W ||). These points lie on a circle
of radius 1 centered at the origin. Let θ1 be the angle E makes with the positive x-axis. Then,
since the hypotenuse distance that defines cos(θ1 ) and sin(θ1 ) is 1, we must have

    cos(θ1 ) = a / || V ||
    sin(θ1 ) = c / || V ||


We can do the same thing for the angle θ2 that F makes with the positive x-axis to see

    cos(θ2 ) = b / || W ||
    sin(θ2 ) = d / || W ||

The angle between vectors V and W is the same as between vectors E and F . Call this angle θ.
Then θ = θ1 − θ2 and using the formula for the cos of the difference of angles

    cos(θ) = cos(θ1 − θ2 )
           = cos(θ1 ) cos(θ2 ) + sin(θ1 ) sin(θ2 )
           = (a / || V ||)(b / || W ||) + (c / || V ||)(d / || W ||)
           = (ab + cd) / (|| V || || W ||)
           = < V, W > / (|| V || || W ||)

Hence, the ratio < V, W > /(|| V || || W ||) is the same as cos(θ)! So we can use this simple
calculation to find the angle between a pair of two dimensional vectors.
The more general proof of the Cauchy Schwartz Theorem for n dimensional vectors is a jour-
ney you can take in another mathematics class! We will state it though so we can use it later if
we need it.

Theorem 3.5.2 (Cauchy Schwartz Theorem For n Dimensional Vectors).


If V and W are n dimensional column vectors with components (V1 , . . . , Vn )
and (W1 , . . . , Wn ) respectively, then it is always true that

< V, W > ≤ |< V, W >| ≤ || V || || W ||

Moreover,

|< V, W >| = || V || || W ||

if and only if the vector V is a nonzero multiple of the vector W .

Theorem 3.5.2 then tells us that if the vectors V and W are not zero, then


    −1  ≤  < V, W > / (|| V || || W ||)  ≤  1

and by analogy to what works for two dimensional vectors, we can use this ratio to define the cos
of the angle between two n dimensional vectors even though we can’t see them at all! We do this
in Definition 3.5.3.

Definition 3.5.3 (The Angle Between n Dimensional Vectors).


If V and W are two non zero n dimensional column vectors with components
(V1 , . . . , Vn ) and (W1 , . . . , Wn ) respectively, the angle θ between these vectors
is defined by

    cos(θ) = < V, W > / (|| V || || W ||)

Moreover, the angle between the vectors is 0 degrees if this ratio equals 1 and is
180 degrees if this ratio equals −1.
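
As a quick computational companion to Definition 3.5.3, here is a small Python sketch (the helper
names are our own, not from the text) that computes the angle between two n dimensional vectors:

import math

def inner_product(V, W):
    # < V, W > is the sum of the products of matching components.
    return sum(v * w for v, w in zip(V, W))

def magnitude(V):
    # || V || is the square root of < V, V >.
    return math.sqrt(inner_product(V, V))

def angle_between(V, W):
    # cos(theta) = < V, W > / (|| V || || W ||) for nonzero V and W.
    ratio = inner_product(V, W) / (magnitude(V) * magnitude(W))
    # Clamp to [-1, 1] to guard against tiny floating point overshoot.
    ratio = max(-1.0, min(1.0, ratio))
    return math.degrees(math.acos(ratio))

print(angle_between([1, 0], [0, 1]))        # 90.0
print(angle_between([1, 2, 3], [2, 4, 6]))  # 0.0; these vectors are collinear

The collinear pair comes out at 0 degrees, just as Theorem 3.5.2 predicts.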

Chapter 4
Eigenvalues and Eigenvectors

Another important aspect of matrices is the notion of the eigenvalues and eigenvectors of a matrix. We
will motivate this in the context of 2 × 2 matrices of real numbers, and then note it can also be
done for the general square n × n matrix. Consider the general 2 × 2 matrix A given by

" #
a b
A =
c d

Is it possible to find a non zero vector v and a number r so that

A v = r v? (4.1)

There are many ways to interpret what such a number and vector pair means, but for the moment,
we will concentrate on finding such a pair (r, v). Now, if this was true, we could rewrite the
equation as

rv − Av = 0 (4.2)

where 0 denotes the vector of all zeros


" #
0
0 = .
0

Next, recall that the two by two identity matrix I is given by


" #
1 0
I =
0 1


and it acts like multiplying by one with numbers; i.e. I v = v for any vector v. Thus, instead
of saying r v, we could say r I v. We can therefore write Equation 4.2 as

rI v − Av = 0 (4.3)

We know that we can factor the vector v out of the left hand side and rewrite again as Equation
4.4.
    ( r I − A ) v  =  0     (4.4)

Now recall that we want the vector v to be non zero. Note, in solving this system, there are two
possibilities:

(i): the determinant of B is non zero which implies the only solution is v = 0.

(ii): the determinant of B is zero which implies there are infinitely many solutions for v, all
of the form a constant c times some nonzero vector E.

Here the matrix B = rI − A. Hence, if we want a non zero solution v, we must look for the
values of r that force det(rI − A) = 0. Thus, we want

    0 = det(rI − A)
      = det [ r − a    −b   ]
            [  −c    r − d  ]
      = (r − a)(r − d) − b c
      = r² − (a + d) r + ad − bc.

This important quadratic equation in the variable r determines what values of r will allow us
to find non zero vectors v so that A v = r v. Note that although we started out in our minds
thinking that r would be a real number, what we have done above shows us that it is possible
that r could be complex.


Definition 4.0.4 (The Eigenvalues and Eigenvectors of a 2 by 2 Matrix).


Let A be the 2 × 2 matrix

    A  =  [ a  b ]
          [ c  d ] .

Then an eigenvalue r of the matrix A is a solution to the quadratic equation


defined by

det(rI − A) = 0.

Any non zero vector that satisfies the equation

Av = rv

for the eigenvalue r is then called an eigenvector associated with the eigenvalue
r for the matrix A.

Comment 4.0.1. Since this is a quadratic equation, there are always two roots, which take the
forms below; a small computational check follows the list:

(i): the roots r1 and r2 are real and distinct,

(ii): the roots are repeated r1 = r2 = c for some real number c,

(iii): the roots are complex conjugate pairs; i.e. there are real numbers α and β so that r1 = α +β i
and r2 = α − β i.
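
Here is a small Python sketch (the helper is our own, not from the text) that classifies the two
roots of det(rI − A) = 0 using the discriminant of r² − (a + d) r + (ad − bc):

import cmath

def characteristic_roots(a, b, c, d):
    # det(rI - A) = r^2 - (a + d) r + (ad - bc) for A = [a b; c d].
    trace, det = a + d, a * d - b * c
    disc = trace * trace - 4.0 * det
    r1 = (trace + cmath.sqrt(disc)) / 2.0
    r2 = (trace - cmath.sqrt(disc)) / 2.0
    if disc > 0:
        kind = 'real and distinct'
    elif disc == 0:
        kind = 'repeated'
    else:
        kind = 'complex conjugate pair'
    return r1, r2, kind

print(characteristic_roots(-3.0, 4.0, -1.0, 2.0))
# ((1+0j), (-2+0j), 'real and distinct'), matching Example 4.0.1 below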

Let’s look at some examples:

Example 4.0.1. Find the eigenvalues and eigenvectors of the matrix


" #
−3 4
A =
−1 2

Solution 4.0.1. The characteristic equation is


" # " #!
1 0 −3 4
det r − = 0
0 1 −1 2

or

" #!
r+3 −4
0 = det
1 r−2

35
CHAPTER 4. EIGENVALUES AND EIGENVECTORS

= (r + 3)(r − 2) + 4
= r2 + r − 2
= (r + 2)(r − 1)

Hence, the roots, or eigenvalues, of the characteristic equation are r1 = −2 and r2 = 1.


Next, we find the eigenvectors associated with these eigenvalues.

1. For eigenvalue r1 = −2, substitute the value of this eigenvalue into

       [ r + 3   −4   ]
       [   1    r − 2 ]

   This gives

       [ 1  −4 ]
       [ 1  −4 ]

   The two rows of this matrix should be multiples of one another. If not, we made a mistake
   and we have to go back and find it. Our rows are indeed multiples, so pick one row to solve
   for the eigenvector. We need to solve

       [ 1  −4 ] [ v1 ]     [ 0 ]
       [ 1  −4 ] [ v2 ]  =  [ 0 ]

   Picking the top row, we get

       v1 − 4 v2 = 0
       v2 = (1/4) v1

   Letting v1 = A, we find the solutions have the form

       [ v1 ]       [  1  ]
       [ v2 ]  =  A [ 1/4 ]

   The vector

       [  1  ]
       [ 1/4 ]

   is our choice for an eigenvector corresponding to eigenvalue r1 = −2.


2. For eigenvalue r2 = 1, substitute the value of this eigenvalue into

       [ r + 3   −4   ]
       [   1    r − 2 ]

   This gives

       [ 4  −4 ]
       [ 1  −1 ]

   Again, the two rows of this matrix should be multiples of one another. If not, we made a
   mistake and we have to go back and find it. Our rows are indeed multiples, so pick one row
   to solve for the eigenvector. We need to solve

       [ 4  −4 ] [ v1 ]     [ 0 ]
       [ 1  −1 ] [ v2 ]  =  [ 0 ]

   Picking the bottom row, we get

       v1 − v2 = 0
       v2 = v1

   Letting v1 = B, we find the solutions have the form

       [ v1 ]       [ 1 ]
       [ v2 ]  =  B [ 1 ]

   The vector

       [ 1 ]
       [ 1 ]

   is our choice for an eigenvector corresponding to eigenvalue r2 = 1.
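
You can check hand computations like these numerically. Here is a minimal sketch (assuming the
numpy library; numpy scales its eigenvectors to unit length, so they appear as multiples of our
hand-picked choices):

import numpy as np

A = np.array([[-3.0, 4.0], [-1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # approximately [-2.  1.]

# Each column of vecs is an eigenvector; verify A v = r v for each pair.
for r, v in zip(vals, vecs.T):
    print(r, np.allclose(A @ v, r * v))   # True for both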

Example 4.0.2. Find the eigenvalues and eigenvectors of the matrix


" #
4 9
A =
−1 −6


Solution 4.0.2. The characteristic equation is


" # " #!
1 0 4 9
det r − = 0
0 1 −1 −6

or

" #!
r−4 −9
0 = det
1 r+6
= (r − 4)(r + 6) + 9
= r2 + 2 r − 15
= (r + 5)(r − 3)

Hence, the roots, or eigenvalues, of the characteristic equation are r1 = −5 and r2 = 3.


Next, we find the eigenvectors associated with these eigenvalues.
1. For eigenvalue r1 = −5, substitute the value of this eigenvalue into

       [ r − 4   −9   ]
       [   1    r + 6 ]

   This gives

       [ −9  −9 ]
       [  1   1 ]

   The two rows of this matrix should be multiples of one another. If not, we made a mistake
   and we have to go back and find it. Our rows are indeed multiples, so pick one row to solve
   for the eigenvector. We need to solve

       [ −9  −9 ] [ v1 ]     [ 0 ]
       [  1   1 ] [ v2 ]  =  [ 0 ]

   Picking the bottom row, we get

       v1 + v2 = 0
       v2 = − v1

   Letting v1 = A, we find the solutions have the form

       [ v1 ]       [  1 ]
       [ v2 ]  =  A [ −1 ]

   The vector

       [  1 ]
       [ −1 ]

   is our choice for an eigenvector corresponding to eigenvalue r1 = −5.

2. For eigenvalue r2 = 3, substitute the value of this eigenvalue into

       [ r − 4   −9   ]
       [   1    r + 6 ]

   This gives

       [ −1  −9 ]
       [  1   9 ]

   Again, the two rows of this matrix should be multiples of one another. If not, we made a
   mistake and we have to go back and find it. Our rows are indeed multiples, so pick one row
   to solve for the eigenvector. We need to solve

       [ −1  −9 ] [ v1 ]     [ 0 ]
       [  1   9 ] [ v2 ]  =  [ 0 ]

   Picking the bottom row, we get

       v1 + 9 v2 = 0
       v2 = −(1/9) v1

   Letting v1 = B, we find the solutions have the form

       [ v1 ]       [   1  ]
       [ v2 ]  =  B [ −1/9 ]

   The vector

       [   1  ]
       [ −1/9 ]

   is our choice for an eigenvector corresponding to eigenvalue r2 = 3.


4.1 Homework

Exercise 4.1.1. Find the eigenvalues and eigenvectors of the matrix


" #
6 3
A =
−11 −8

Exercise 4.1.2. Find the eigenvalues and eigenvectors of the matrix


" #
2 1
A =
−4 −3

Exercise 4.1.3. Find the eigenvalues and eigenvectors of the matrix


" #
−2 −1
A =
8 7

Exercise 4.1.4. Find the eigenvalues and eigenvectors of the matrix


" #
−6 −3
A =
4 1

Exercise 4.1.5. Find the eigenvalues and eigenvectors of the matrix


" #
−4 −2
A =
13 11

4.2 The General Case

For a general n × n matrix A, we have the following:


Definition 4.2.1 (The Eigenvalues and Eigenvectors of a n by n Matrix).


Let A be the n × n matrix

    A  =  [ A11  A12  · · ·  A1n ]
          [ A21  A22  · · ·  A2n ]
          [  ..   ..          .. ]
          [ An1  An2  · · ·  Ann ]

Then an eigenvalue r of the matrix A is a solution to the polynomial equation defined by

    det(rI − A) = 0.

Any non zero vector that satisfies the equation

Av = rv

for the eigenvalue r is then called an eigenvector associated with the eigenvalue
r for the matrix A.

Comment 4.2.1. Since this is a polynomial equation, there are always n roots: some may be real
and distinct, some may be repeated, and some may be complex conjugate pairs (and those can be
repeated also!). An example will help. Suppose we started with a 5 × 5 matrix. Then, the roots
could follow one of the patterns listed below; a short numerical illustration follows the list.

1. All the roots are real and distinct; for example, 1, 2, 3, 4 and 5.

2. Two roots are the same and three roots are distinct; for example, 1, 1, 3, 4 and 5.

3. Three roots are the same and two roots are distinct; for example, 1, 1, 1, 4 and 5.

4. Four roots are the same and one root is distinct from that; for example, 1, 1, 1, 1 and 5.

5. Five roots are the same; for example, 1, 1, 1, 1 and 1.

6. Two pairs of roots are the same and one root is different from them; for example, 1, 1, 3,
3 and 5.

7. One triple root and one pair of real roots; for example, 1, 1, 1, 3 and 3.

8. One triple root and one complex conjugate pair of roots; for example, 1, 1, 1, 3 + 4i and
3 − 4i.

9. One double root, one complex conjugate pair of roots and one different real root; for
example, 1, 1, 2, 3 + 4i and 3 − 4i.

10. Two complex conjugate pairs of roots and one different real root; for example, −2, 1 + 6i,
1 − 6i, 3 + 4i and 3 − 4i.
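
For a general n × n matrix, these roots are normally found numerically rather than by hand. A
minimal sketch (assuming the numpy library; the sample matrix is our own illustration):

import numpy as np

# A lower triangular sample, so the roots can be read off the diagonal.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 3.0]])

vals, vecs = np.linalg.eig(A)
print(vals)   # 2 and the repeated root 3: one root occurs twice here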

Chapter 5
Linear Systems Review

Recall a linear system of differential equations has the form

x0 (t) = a x(t) + b y(t) (5.1)


y 0 (t) = c x(t) + d y(t) (5.2)
x(0) = x0 (5.3)
y(0) = y0 (5.4)

for any numbers a, b, c and d and initial conditions x0 and y0 . The full problem is called, as
usual, an Initial Value Problem or IVP for short. The two initial conditions are just called the
IC’s for the problem to save writing. For example, we might be interested in the system

x0 (t) = −2 x(t) + 3 y(t)


y 0 (t) = 4 x(t) + 5 y(t)
x(0) = 5
y(0) = −3

Here the IC’s are x(0) = 5 and y(0) = −3. Another sample problem might be the one below.

x0 (t) = 14 x(t) + 5 y(t)


y 0 (t) = −4 x(t) + 8 y(t)
x(0) = 2


y(0) = 7

We are interested in learning how to solve these problems.

5.1 A Judicious Guess

For linear first order problems like u0 = 3u and so forth, we have found the solution has the form
u(t) = α e3t for some number α. We would then determine the value of α to use by looking at
the initial condition. To see what to do with Equation 5.1 and Equation 5.2, first let’s rewrite
the problem in terms of matrices and vectors. In this form, Equation 5.1 and Equation 5.2 can
be written as

" # " # " #


x0 (t) a b x(t)
= .
y 0 (t) c d y(t)

The initial conditions Equation 5.3 and Equation 5.4 can then be redone in vector form as

" # " #
x(0) x0
= .
y(0) y0

Here are some examples of the conversion of a system of two linear differential equations into
matrix - vector form.

Example 5.1.1. Convert

x0 (t) = 6 x(t) + 9 y(t)


y 0 (t) = −10 x(t) + 15 y(t)
x(0) = 8
y(0) = 9

into a matrix - vector system.

Solution 5.1.1. The new form is seen to be

" # " # " #


x0 (t) 6 9 x(t)
=
y 0 (t) −10 15 y(t)
" # " #
x(0) 8
= .
y(0) 9


Example 5.1.2. Convert

x0 (t) = 2 x(t) + 4 y(t)


y 0 (t) = − x(t) + 7 y(t)
x(0) = 2
y(0) = −3

into a matrix - vector system.

Solution 5.1.2. The new form is seen to be

" # " # " #


x0 (t) 2 4 x(t)
=
y 0 (t) −1 7 y(t)
" # " #
x(0) 2
= .
y(0) −3

Now that we know how to do this conversion, it seems reasonable to believe that, since a constant
times ert solves a first order linear problem like u0 = ru, perhaps a vector times ert will work here.
Let's make this formal. We'll work with a specific system first because numbers are always easier
to make sense of in the initial exposure to a technique. So let's look at the problem below

x0 (t) = 3 x(t) + 2 y(t)


y 0 (t) = −4 x(t) + 5 y(t)
x(0) = 2
y(0) = −3

Let's assume the solution has the form V~ ert because, by our remarks above, since this is a
vector system it seems reasonable to move to using a vector rather than a constant. Let's denote
the components of V~ as follows:

    V~  =  [ V1 ]
           [ V2 ] .

Then, it is easy to see that the derivative of V~ ert is

    ( V~ ert )0  =  [ (V1 ert)0 ]
                    [ (V2 ert)0 ]

                 =  [ V1 r ert ]
                    [ V2 r ert ]

                 =  r ert [ V1 ]
                          [ V2 ]

                 =  r V~ ert

Let's plug in our possible solution into the original problem. That is, we assume the solution is

    [ x(t) ]
    [ y(t) ]  =  V~ ert .

Hence,

    [ x0 (t) ]
    [ y0 (t) ]  =  r V~ ert .

When we plug these terms into the matrix - vector form of the problem, we find

    r V~ ert  =  [  3  2 ] V~ ert .
                 [ −4  5 ]

We can rewrite this as

    r V~ ert  −  [  3  2 ] V~ ert  =  [ 0 ]
                 [ −4  5 ]            [ 0 ] .

Since one of these terms is a matrix and one is a vector, we need to write all the terms in terms
of matrices if possible. Recall, the two by two identity matrix is

" #
1 0
I =
0 1

~ = V
and I V ~ always. Thus, we can rewrite our system as

46
5.1. A JUDICIOUS GUESS CHAPTER 5. LINEAR SYSTEMS REVIEW

" # " # " #


1 0 ~ ert − 3 2 ~ ert = 0
r V V .
0 1 −4 5 0

Now, factor out the common ert term to give

" # " # ! " #


1 0 ~ − 3 2 ~ 0
r V V ert = .
0 1 −4 5 0

Even though we don't know yet what values of r will work for this problem, we do know that the
term ert is never zero no matter what value r has. Hence, we can say that we are looking for a
value of r and a vector V~ so that

    r [ 1  0 ] V~  −  [  3  2 ] V~  =  [ 0 ]
      [ 0  1 ]        [ −4  5 ]        [ 0 ] .

For convenience, let the matrix of coefficients determined by our system of differential equations
be denoted by A, i.e.,

    A  =  [  3  2 ]
          [ −4  5 ] .

Then, the equation that r and V~ must satisfy becomes

    r I V~ − A V~  =  [ 0 ]
                      [ 0 ] .

Finally, noting the vector V~ is common, we factor again to get our last equation

    ( r I − A ) V~  =  [ 0 ]
                       [ 0 ] .

We can then plug in the values of I and A to get the system of equations that r and V~ must
satisfy in order for V~ ert to be a solution.

    [ r − 3    −2   ] [ V1 ]     [ 0 ]
    [ −(−4)  r − 5  ] [ V2 ]  =  [ 0 ] .


To finish this discussion, note that for any value of r, this is a system of two linear equations in
the two unknowns V1 and V2 . If we choose a value of r for which det (rI − A) is nonzero,
then theory tells us the two lines determined by row 1 and row 2 of this system have different
slopes. This means this system of equations has only one solution. Since both lines cross
through the origin, this unique solution must be V1 = 0 and V2 = 0. But, of course, this tells
us the solution is x(t) = 0 and y(t) = 0! We will not be able to satisfy the initial conditions
x(0) = 2 and y(0) = −3 with this solution. So we must reject any choice of r which gives us
det (rI − A) ≠ 0.

This leaves only one choice: the values of r where det (rI − A) = 0. The values of r where
det (rI − A) = 0 are what we called the eigenvalues of our matrix A and for these values of r,
we must find nonzero vectors V~ (nonzero because otherwise, we can't solve the IC's!) so that

    [ r − 3    −2   ] [ V1 ]     [ 0 ]
    [   4    r − 5  ] [ V2 ]  =  [ 0 ] .

Note, the system above is the same as

    [  3  2 ] [ V1 ]       [ V1 ]
    [ −4  5 ] [ V2 ]  =  r [ V2 ] .

Then, for each eigenvalue we find, we should have a solution of the form

    [ x(t) ]     [ V1 ]
    [ y(t) ]  =  [ V2 ] ert .

In this class, we will only be interested in systems where the two eigenvalues are real numbers
which are different (so these examples are not ones of interest to us for a complete solution!). So
our general solution will be

    [ x(t) ]
    [ y(t) ]  =  a E~1 er1 t  +  b E~2 er2 t ,

where E~1 is the eigenvector for eigenvalue r1 , E~2 is the eigenvector for eigenvalue r2 , and a
and b are arbitrary real numbers chosen to satisfy the IC's.

We are now ready for some definitions for the characteristic equation of a linear system of
differential equations.


Definition 5.1.1 (The Characteristic Equation Of A Linear System Of


ODEs).
For the system

x0 (t) = a x(t) + b y(t)


y 0 (t) = c x(t) + d y(t)
x(0) = x0
y(0) = y0 ,

the characteristic equation is the polynomial defined by

    det( r I − A )

where A is the coefficient matrix

    A  =  [ a  b ]
          [ c  d ]

We can then define the eigenvalue of a linear system of differential equations.

Definition 5.1.2 (The Eigenvalues Of A Linear System Of ODEs).


The roots of the characteristic equation of the linear system are called its
eigenvalues, and any nonzero vector V~ satisfying

    ( r I − A ) V~  =  [ 0 ]
                       [ 0 ]

for an eigenvalue r is called an eigenvector associated with the eigenvalue r.

Finally, the general solution of this system can be built from its eigenvalues and associated eigen-
vectors.


Definition 5.1.3 (The General Solution Of A Linear System Of ODEs).


In the case where the two roots are distinct real numbers, the general solution
of the system is

    [ x(t) ]
    [ y(t) ]  =  a E~1 er1 t  +  b E~2 er2 t ,

where E~1 is the eigenvector for eigenvalue r1 , E~2 is the eigenvector for
eigenvalue r2 , and a and b are arbitrary real numbers chosen to satisfy the IC's.

For our problem, then, the characteristic equation is

    det [ r − 3    −2   ]  =  (r − 3)(r − 5) + 8
        [   4    r − 5  ]
                           =  r² − 8r + 23  =  0.

We should now be able to derive the characteristic equation of any linear system of this type.

5.1.1 Sample Characteristic Equation Derivations

Let’s do some examples. Here is the first one.

Example 5.1.3. Derive the characteristic equation for the system below

x0 (t) = 8 x(t) + 9 y(t)


y 0 (t) = 3 x(t) − 2 y(t)
x(0) = 12
y(0) = 4

Solution 5.1.3. First, note the matrix - vector form is

" # " # " #


x0 (t) 8 9 x(t)
= .
y 0 (t) 3 −2 y(t)
" # " #
x(0) 12
= .
y(0) 4

50
5.1. A JUDICIOUS GUESS CHAPTER 5. LINEAR SYSTEMS REVIEW

The coefficient matrix A is thus


" #
8 9
A = .
3 −2

~ ert and plug this into the system. This gives


We assume the solution has the form V

" # " #
~ ert − 8 9 ~ ert . = 0
rV V .
3 −2 0

Now rewrite using the identity matrix I and factor to obtain

" #! " #
8 9 ~ ert = 0
rI − V .
3 −2 0

Then, since ert can never be zero no matter what value r is, we find the values of r and the vectors
~ we seek satisfy
V

" #! " #
8 9 ~ 0
rI − V = .
3 −2 0

Now, if r is chosen so that det(rI −A) 6= 0, the only solution to this system of two linear equations
in the two unknowns V1 and V2 is V1 = 0 and V2 = 0. This leads to the solution x(t) = 0 and
y(t) = 0 always and this solution does not satisfy the initial conditions. Hence, we must find
values r which give det (rI − A) = 0. The resulting polynomial is

" #! " #
8 9 r−8 −9
det rI − = det
3 −2 −3 r + 2
= (r − 8)(r + 2) − 27 = r2 − 6r − 43.

This is the characteristic equation of this system.

The next one is very similar. We will expect you to be able to do this kind of derivation also.

Example 5.1.4. Derive the characteristic equation for the system below

x0 (t) = −10 x(t) − 7 y(t)


y 0 (t) = 8 x(t) + 5 y(t)
x(0) = −1


y(0) = −4

Solution 5.1.4. We see the matrix - vector form is

    [ x0 (t) ]     [ −10  −7 ] [ x(t) ]
    [ y0 (t) ]  =  [   8   5 ] [ y(t) ] .

    [ x(0) ]     [ −1 ]
    [ y(0) ]  =  [ −4 ] .

with coefficient matrix A given by

    A  =  [ −10  −7 ]
          [   8   5 ] .

Assume the solution has the form V~ ert and plug this into the system giving

    r V~ ert  −  [ −10  −7 ] V~ ert  =  [ 0 ]
                 [   8   5 ]            [ 0 ] .

Rewriting using the identity matrix I and factoring, we obtain

    ( r I  −  [ −10  −7 ] ) V~ ert  =  [ 0 ]
              [   8   5 ]              [ 0 ] .

Then, since ert can never be zero no matter what value r is, we find the values of r and the
vectors V~ we seek satisfy

    ( r I  −  [ −10  −7 ] ) V~  =  [ 0 ]
              [   8   5 ]          [ 0 ] .

Again, if r is chosen so that det (rI − A) ≠ 0, the only solution to this system of two linear
equations in the two unknowns V1 and V2 is V1 = 0 and V2 = 0. This gives us the solution
x(t) = 0 and y(t) = 0 always and this solution does not satisfy the initial conditions. Hence, we
must find values r which give det (rI − A) = 0. The resulting polynomial is

    det( r I  −  [ −10  −7 ] )  =  det [ r + 10    7   ]
                 [   8   5 ]           [  −8    r − 5  ]
                                 =  (r + 10)(r − 5) + 56  =  r² + 5r + 6.

This is the characteristic equation of this system.
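
A computer algebra system can reproduce a derivation like this symbolically. A minimal sketch
(assuming the sympy library):

import sympy as sp

r = sp.symbols('r')
A = sp.Matrix([[-10, -7], [8, 5]])

# det(rI - A) expands to the characteristic polynomial.
char_poly = sp.expand((r * sp.eye(2) - A).det())
print(char_poly)             # r**2 + 5*r + 6
print(sp.roots(char_poly))   # {-3: 1, -2: 1}: the eigenvalues with multiplicities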

5.1.2 Problems

Exercise 5.1.1.

x0 = 2 x + 3 y
y0 = 8 x − 2 y
x(0) = 3
y(0) = 5.

1. Write the matrix - vector form.

2. Derive the characteristic equation but you don’t have to find the roots.

Exercise 5.1.2.

x0 = −4 x + 6 y
y0 = 9 x + 2 y
x(0) = 4
y(0) = −6.

1. Write the matrix - vector form.

2. Derive the characteristic equation but you don’t have to find the roots.

5.2 Two Distinct Eigenvalues


Each eigenvalue r has a corresponding eigenvector E~. Since in this course we want to concentrate
on the situation where the two roots of the characteristic equation are distinct real numbers, we
will want to find the eigenvector, E~1 , corresponding to eigenvalue r1 and the eigenvector, E~2 ,
corresponding to eigenvalue r2 . The general solution will then be of the form

    [ x(t) ]
    [ y(t) ]  =  a E~1 er1 t  +  b E~2 er2 t


where we will use the IC’s to choose the correct values of a and b. Let’s do a complete example
now.
We start with the system


x0 (t) = −3 x(t) + 4 y(t)


y 0 (t) = −1 x(t) + 2 y(t)
x(0) = 2
y(0) = −4

First, note the matrix - vector form is

" # " # " #


x0 (t) −3 4 x(t)
= .
y 0 (t) −1 2 y(t)
" # " #
x(0) 2
= .
y(0) −4

The coefficient matrix A is thus


" #
−3 4
A = .
−1 2

The characteristic equation is thus

" #! " #
−3 4 r+3 −4
det rI − = det
−1 2 1 r−2
= (r + 3)(r − 2) + 4 = r2 + r − 2
= (r + 2)(r − 1).

Thus, the eigenvalues of the coefficient matrix A are r1 = −2 and r2 = 1. The general solution
will then be of the form

    [ x(t) ]
    [ y(t) ]  =  a E~1 e−2t  +  b E~2 et

Next, we find the eigenvectors associated with these eigenvalues.

1. For eigenvalue r1 = −2, substitute the value of this eigenvalue into

       [ r + 3   −4   ]
       [   1    r − 2 ]

   This gives

       [ 1  −4 ]
       [ 1  −4 ]

   The two rows of this matrix should be multiples of one another. If not, we made a mistake
   and we have to go back and find it. Our rows are indeed multiples, so pick one row to solve
   for the eigenvector. We need to solve

       [ 1  −4 ] [ v1 ]     [ 0 ]
       [ 1  −4 ] [ v2 ]  =  [ 0 ]

   Picking the top row, we get

       v1 − 4 v2 = 0
       v2 = (1/4) v1

   Letting v1 = a, we find the solutions have the form

       [ v1 ]       [  1  ]
       [ v2 ]  =  a [ 1/4 ]

   The vector

       E~1  =  [  1  ]
               [ 1/4 ]

   is our choice for an eigenvector corresponding to eigenvalue r1 = −2. Thus, the first solution
   to our system is

       [ x1 (t) ]                  [  1  ]
       [ y1 (t) ]  =  E~1 e−2t  =  [ 1/4 ] e−2t .

2. For eigenvalue r2 = 1, substitute the value of this eigenvalue into

       [ r + 3   −4   ]
       [   1    r − 2 ]

   This gives

       [ 4  −4 ]
       [ 1  −1 ]

   Again, the two rows of this matrix should be multiples of one another. If not, we made a
   mistake and we have to go back and find it. Our rows are indeed multiples, so pick one row
   to solve for the eigenvector. We need to solve

       [ 4  −4 ] [ v1 ]     [ 0 ]
       [ 1  −1 ] [ v2 ]  =  [ 0 ]

   Picking the bottom row, we get

       v1 − v2 = 0
       v2 = v1

   Letting v1 = b, we find the solutions have the form

       [ v1 ]       [ 1 ]
       [ v2 ]  =  b [ 1 ]

   The vector

       E~2  =  [ 1 ]
               [ 1 ]

   is our choice for an eigenvector corresponding to eigenvalue r2 = 1. Thus, the second
   solution to our system is

       [ x2 (t) ]                [ 1 ]
       [ y2 (t) ]  =  E~2 et  =  [ 1 ] et .

The general solution is therefore

    [ x(t) ]       [ x1 (t) ]       [ x2 (t) ]
    [ y(t) ]  =  a [ y1 (t) ]  +  b [ y2 (t) ]

              =  a [  1  ] e−2t  +  b [ 1 ] et .
                   [ 1/4 ]            [ 1 ]

Finally, we solve the IVP. Given the IC's, we find two equations in two unknowns for a and b:

    [ x(0) ]     [  2 ]       [  1  ]          [ 1 ]
    [ y(0) ]  =  [ −4 ]  =  a [ 1/4 ] e0  +  b [ 1 ] e0

                          =  [    a + b    ]
                             [ (1/4)a + b  ] .

This is the usual system

    a + b = 2
    (1/4)a + b = −4.

This easily solves to give a = 8 and b = −6. Hence, the solution to this IVP is

    [ x(t) ]        [  1  ]            [ 1 ]
    [ y(t) ]  =  8  [ 1/4 ] e−2t  −  6 [ 1 ] et .

This can be rewritten as

    x(t) = 8 e−2t − 6 et
    y(t) = 2 e−2t − 6 et .

Note when t is very large, the only terms that matter are the ones which grow fastest. Hence, we
could say

    x(t) ≈ −6 et
    y(t) ≈ −6 et ,

or in vector form

    [ x(t) ]         [ 1 ]
    [ y(t) ]  ≈  −6  [ 1 ] et .


This is just a multiple of the eigenvector E~2 ! Note the graph of x(t) and y(t) on an x − y plane
will get closer and closer to the straight line determined by this eigenvector. So we will call
E~2 the dominant eigenvector direction for this system.
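
A numerical integration gives an independent check on a worked solution like this one. A minimal
sketch (assuming the numpy and scipy libraries):

import numpy as np
from scipy.integrate import solve_ivp

# x' = -3x + 4y, y' = -x + 2y with x(0) = 2 and y(0) = -4.
def rhs(t, z):
    x, y = z
    return [-3.0 * x + 4.0 * y, -x + 2.0 * y]

sol = solve_ivp(rhs, (0.0, 2.0), [2.0, -4.0], dense_output=True)

t = 1.5
x_exact = 8.0 * np.exp(-2.0 * t) - 6.0 * np.exp(t)
y_exact = 2.0 * np.exp(-2.0 * t) - 6.0 * np.exp(t)
print(sol.sol(t))             # numerical values of (x, y) at t = 1.5
print((x_exact, y_exact))     # should agree closely with the line above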

5.2.1 Worked Out Solutions


You need some additional practice. Let’s work out a few more. Here is the first one.
Example 5.2.1. For the system below
" # " #" #
x0 (t) −20 12 x(t)
=
y 0 (t) −13 5 y(t)
" # " #
x(0) −1
=
y(0) 2

• Find the characteristic equation

• Find the general solution

• Solve the IVP


Solution 5.2.1. The characteristic equation is

    det( r [ 1  0 ]  −  [ −20  12 ] )  =  0
           [ 0  1 ]     [ −13   5 ]

or

    0  =  det [ r + 20   −12  ]
              [   13    r − 5 ]
       =  (r + 20)(r − 5) + 156
       =  r² + 15r + 56
       =  (r + 8)(r + 7)

Hence, the eigenvalues of the characteristic equation are r1 = −8 and r2 = −7. We need to
find the associated eigenvectors for these eigenvalues.
1. For eigenvalue r1 = −8, substitute the value of this eigenvalue into

       [ r + 20   −12  ]
       [   13    r − 5 ]

   This gives

       [ 12  −12 ]
       [ 13  −13 ]

   Again, the two rows of this matrix should be multiples of one another. If not, we made a
   mistake and we have to go back and find it. Our rows are indeed multiples, so pick one row
   to solve for the eigenvector. We need to solve

       [ 12  −12 ] [ v1 ]     [ 0 ]
       [ 13  −13 ] [ v2 ]  =  [ 0 ]

   Picking the top row, we get

       12 v1 − 12 v2 = 0
       v2 = v1

   Letting v1 = a, we find the solutions have the form

       [ v1 ]       [ 1 ]
       [ v2 ]  =  a [ 1 ]

   The vector

       E~1  =  [ 1 ]
               [ 1 ]

   is our choice for an eigenvector corresponding to eigenvalue r1 = −8.

2. For eigenvalue r2 = −7, substitute the value of this eigenvalue into

       [ r + 20   −12  ]
       [   13    r − 5 ]

   This gives

       [ 13  −12 ]
       [ 13  −12 ]

   Again, the two rows of this matrix should be multiples of one another.

   Picking the bottom row, we get

       13 v1 − 12 v2 = 0
       v2 = (13/12) v1

   Letting v1 = b, we find the solutions have the form

       [ v1 ]       [   1   ]
       [ v2 ]  =  b [ 13/12 ]

   The vector

       E~2  =  [   1   ]
               [ 13/12 ]

   is our choice for an eigenvector corresponding to eigenvalue r2 = −7.


The general solution to our system is thus

" # " # " #


x(t) 1 1
= a e−2t + b et
y(t) 1 13/12

We solve the IVP by finding the a and b that will give the desired initial conditions. This
gives

" # " # " #


−1 1 1
= a + b
2 1 13/12

or

−1 = a + b
2 = a + (13/12)b

This is easily solved using elimination to give a = −19/25 and b = −6/25. The solution to
the IVP is therefore

" # " # " #


x(t) 1 −8t 1
= −(19/25) e + −(6/25) e−7t
y(t) 1 13/12
" #
−(19/25) e−8t − (6/25) e−7t
=
−(19/25) e−8t − (6/25)(13/12) e−7t

Note when t is very large, the only terms that matter are the ones which grow fastest or, in this
case, the ones which decay the slowest. Hence, we could say

60
5.2. TWO DISTINCT EIGENVALUES CHAPTER 5. LINEAR SYSTEMS REVIEW

" # " #
x(t) 1
≈ −(6/25) e−7t .
y(t) (13/12)

This is just a multiple of the eigenvector E~2 ! Note the graph of x(t) and y(t) on an x − y plane
will get closer and closer to the straight line determined by this eigenvector. So we will call
E~2 the dominant eigenvector direction for this system. We now have all the information needed
to analyze the solutions to this system graphically.

Here is another example in great detail. Again, remember you will have to know how to do
these steps yourselves.
Example 5.2.2. For the system below
" # " #" #
x0 (t) 4 9 x(t)
=
y 0 (t) −1 −6 y(t)
" # " #
x(0) 4
=
y(0) −2

• Find the characteristic equation

• Find the general solution

• Solve the IVP


Solution 5.2.2. The characteristic equation is

    det( r [ 1  0 ]  −  [  4   9 ] )  =  0
           [ 0  1 ]     [ −1  −6 ]

or

    0  =  det [ r − 4   −9   ]
              [   1    r + 6 ]
       =  (r − 4)(r + 6) + 9
       =  r² + 2 r − 15
       =  (r + 5)(r − 3)

Hence, the eigenvalues of the characteristic equation are r1 = −5 and r2 = 3. Next, we find the
eigenvectors.
1. For eigenvalue r1 = −5, substitute the value of this eigenvalue into

       [ r − 4   −9   ]
       [   1    r + 6 ]

   This gives

       [ −9  −9 ]
       [  1   1 ]

   We need to solve

       [ −9  −9 ] [ v1 ]     [ 0 ]
       [  1   1 ] [ v2 ]  =  [ 0 ]

   Picking the bottom row, we get

       v1 + v2 = 0
       v2 = − v1

   Letting v1 = a, we find the solutions have the form

       [ v1 ]       [  1 ]
       [ v2 ]  =  a [ −1 ]

   The vector

       E~1  =  [  1 ]
               [ −1 ]

   is our choice for an eigenvector corresponding to eigenvalue r1 = −5.

2. For eigenvalue r2 = 3, substitute the value of this eigenvalue into

       [ r − 4   −9   ]
       [   1    r + 6 ]

   This gives

       [ −1  −9 ]
       [  1   9 ]

   This time, we need to solve

       [ −1  −9 ] [ v1 ]     [ 0 ]
       [  1   9 ] [ v2 ]  =  [ 0 ]

   Picking the bottom row, we get

       v1 + 9 v2 = 0
       v2 = −(1/9) v1

   Letting v1 = b, we find the solutions have the form

       [ v1 ]       [   1  ]
       [ v2 ]  =  b [ −1/9 ]

   The vector

       E~2  =  [   1  ]
               [ −1/9 ]

   is our choice for an eigenvector corresponding to eigenvalue r2 = 3.

The general solution to our system is thus

    [ x(t) ]       [  1 ]            [   1  ]
    [ y(t) ]  =  a [ −1 ] e−5t  +  b [ −1/9 ] e3t

We solve the IVP by finding the a and b that will give the desired initial conditions. This
gives

    [  4 ]       [  1 ]       [   1  ]
    [ −2 ]  =  a [ −1 ]  +  b [ −1/9 ]

or

     4 = a + b
    −2 = −a − (1/9) b

This is easily solved using elimination to give a = 7/4 and b = 9/4. The solution to the
IVP is therefore

    [ x(t) ]            [  1 ]                [   1  ]
    [ y(t) ]  =  (7/4)  [ −1 ] e−5t  +  (9/4) [ −1/9 ] e3t

              =  [  (7/4) e−5t + (9/4) e3t  ]
                 [ −(7/4) e−5t − (1/4) e3t  ]

Again, the dominant eigenvector is E~2 .
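
The whole pattern in these worked solutions (eigenvalues, then eigenvectors, then a and b from
the IC's) can be automated. A hypothetical helper along these lines (assuming the numpy library
and distinct real eigenvalues):

import numpy as np

def linear_system_weights(A, ic):
    # Solution form: z(t) = a E1 exp(r1 t) + b E2 exp(r2 t).
    vals, vecs = np.linalg.eig(np.asarray(A, dtype=float))
    # Columns of vecs are E1 and E2; solve [E1 E2][a, b]^T = z(0).
    ab = np.linalg.solve(vecs, np.asarray(ic, dtype=float))
    return vals, vecs, ab

vals, vecs, ab = linear_system_weights([[4.0, 9.0], [-1.0, -6.0]], [4.0, -2.0])
print(vals)   # -5 and 3
print(ab)     # weights for numpy's unit eigenvectors, which are rescaled
              # versions of the hand-picked (1, -1) and (1, -1/9)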

5.2.2 Problems

For these problems:

1. write the matrix - vector form.

2. find characteristic equation. No derivation needed this time.

3. find the two eigenvalues.

4. find the two associated eigenvectors in glorious detail.

5. write the general solution.

6. solve the IVP.

Exercise 5.2.1.

x0 = 3 x + y
y0 = 5 x − y
x(0) = 4
y(0) = −6.

The eigenvalues should be −2 and 4.

Exercise 5.2.2.

x0 = x + 4 y
y0 = 5 x + 2 y
x(0) = 4
y(0) = −5.

The eigenvalues should be −3 and 6.

Exercise 5.2.3. For this problem:


x0 = −3 x + y
y 0 = −4 x + 2 y
x(0) = 1
y(0) = 6.

The eigenvalues should be −2 and 1.

5.3 Graphical Analysis

Let’s try to analyze these systems graphically. We are interested in what these solutions look like
for many different initial conditions. So let’s look at the problem

x0 (t) = −3 x(t) + 4 y(t)


y 0 (t) = −x(t) + 2 y(t)
x(0) = x0
y(0) = y0

5.3.1 Graphing The Nullclines

The set of (x, y) pairs where x0 = 0 is called the nullcline for x; similarly, the set of points where
y0 = 0 is the nullcline for y. The x0 equation can be set equal to zero to get −3x + 4y = 0. This is the
same as the straight line y = (3/4) x. This straight line divides the x − y plane into three pieces:
the part where x0 > 0; the part where x0 = 0; and the part where x0 < 0. In Figure 5.1, we show
the part of the x − y plane where x0 > 0 with one shading and the part where it is negative with
another. Similarly, the y0 equation can be set to 0 to give the equation of the line −x + 2y = 0.
This gives the straight line y = (1/2) x. In Figure 5.2, we show how this line also divides the x − y
plane into three pieces.
The shaded areas shown in Figure 5.1 and Figure 5.2 can be combined into Figure 5.3. In this
figure, we divide the x − y plane into four regions marked with a I, II, III or IV. In each region,
x0 and y 0 are either positive or negative. Hence, each region can be marked with an ordered pair,
(x0 ±, y 0 ±).

5.3.2 Graphing The Eigenvector Lines

Now we add the eigenvector lines. In Section 5.2, we found that this system has eigenvalues
r1 = −2 and r2 = 1 with associated eigenvectors


Figure 5.1: Finding where x0 < 0 and x0 > 0. The x0 equation for our system is x0 = −3x + 4y.
Setting this to 0, we get y = (3/4) x, whose graph is the x0 = 0 line. At the evaluation point
(0, 1), x0 = −3 · 0 + 4 · 1 = +4. Hence, every point in the x − y plane on this side of the x0 = 0
line gives x0 > 0, and we shade that part of the plane; the other side gives x0 < 0.

Figure 5.2: Finding where y0 < 0 and y0 > 0. The y0 equation for our system is y0 = −x + 2y.
Setting this to 0, we get y = (1/2) x, whose graph is the y0 = 0 line. At the evaluation point
(0, 1), y0 = −1 · 0 + 2 · 1 = +2. Hence, every point in the x − y plane on this side of the y0 = 0
line gives y0 > 0; the other side gives y0 < 0.


Figure 5.3: Combining the x0 and y0 Algebraic Sign Regions. The x0 = −3x + 4y and the
y0 = −x + 2y equations determine four regions in the plane, labeled I, II, III and IV. In each
region, the algebraic signs of x0 and y0 are shown as an ordered pair. For example, in region I,
x0 is positive and y0 is positive and so this is denoted by (+, +).

" # " #
~1 = 1 ~2 = 1
E , E .
1/4 1

~ with components a and b,


Recall a vector V

" #
~ a
V =
b

determines a straight line with slope b/a. Hence, these eigenvectors each determine a straight
~ 1 line has slope 1 and the E
line. The E ~ 2 line has slope (1/4)/1 = 1/4. We can graph these two
lines overlaid on the graph shown in Figure 5.3.

5.3.3 Graphing Region I Trajectories

In each of the four regions, we know the algebraic signs of the derivatives x0 and y 0 . If we are
given an initial condition (x0 , y0 ) which is in one of these regions, we can use this information to
draw the set of points (x(t), y(t)) corresponding to the solution to our system


Figure 5.4: Drawing the nullclines and the eigenvector lines on the same graph. In this figure,
we show the sign pairs determined by the x0 = −3x + 4y and the y0 = −x + 2y equations for
regions I, II, III and IV. In addition, the lines corresponding to the eigenvector E~1 and the
dominant eigenvector E~2 are drawn.


x0 (t) = −3 x(t) + 4 y(t)


y 0 (t) = −x(t) + 2 y(t)
x(0) = x0
y(0) = y0 .

This set of points is called the trajectory corresponding to this solution. The first point on the
trajectory is the initial point (x0 , y0 ) and the rest of the points follow from the solution

" # " # " #


x(t) 1 −2t 1
= a e +b et .
y(t) 1/4 1

where a and b satisfy the system of equations

x0 = a + b
y0 = (1/4)a + b

This can be rewritten as

x(t) = a e−2t + b et
y(t) = (1/4)a e−2t + b et .

Hence,

    dy/dx  =  y0 (t) / x0 (t)
           =  ( (−2/4) a e−2t + b et ) / ( −2a e−2t + b et )

When t is large, as long as b is not zero, the terms involving e−2t are negligible and so we have

    dy/dx  =  y0 (t) / x0 (t)  ≈  (b et) / (b et)  =  1,

which is the slope of the E~2 line.


Hence, when t is large, the slopes of the trajectory approach 1, the slope of E~2 . So, we can
conclude that for large t, as long as b is not zero, the trajectory either parallels the line determined
by E~2 or approaches it asymptotically.

Of course, if an initial condition is chosen that lies on the line determined by E~1 , then a little
thought will tell you that b is zero in this case and we have

    dy/dx  =  y0 (t) / x0 (t)
           =  ( (−2/4) a e−2t ) / ( −2a e−2t )
           =  1/4,

the slope of the E~1 line.

In this case,

    x(t) = a e−2t
    y(t) = (1/4) a e−2t ,

and so the coordinates (x(t), y(t)) go to (0, 0) along the line determined by E~1 .
We conclude that unless an initial condition is chosen exactly on the line determined by E~1 , all
trajectories eventually begin to either parallel the line determined by E~2 or approach it asymptotically.
If the initial condition is chosen on the line determined by E~1 , then the trajectories
stay on this line and approach the origin, where they stop, as that is a place where both x0 and
y0 become 0. In Figure 5.5, we show three trajectories which begin in Region I. They all have a
(+, +) sign pattern for x0 and y0 , so the x and y components should both increase. We draw the
trajectories with the concavity as shown because that is the only way they can smoothly approach
the eigenvector line E~2 . We show this in Figure 5.5.

5.3.4 Can Trajectories Cross?

Is it possible for two trajectories to cross? Consider the trajectories shown in Figure 5.6. These
two trajectories cross at some point. The two trajectories correspond to different initial conditions,
which means that the a and b associated with them will be different. Further, these initial
conditions don't start on eigenvector E~1 or eigenvector E~2 , so the a and b values for both
trajectories will be nonzero. If we label these trajectories by (x1 , y1 ) and (x2 , y2 ), we see

    x1 (t) = a1 e−2t + b1 et
    y1 (t) = (1/4) a1 e−2t + b1 et .

and

Figure 5.5: Trajectories In Region I. In this figure, we show the sign pairs determined by the
x0 = −3x + 4y and the y0 = −x + 2y equations for regions I, II, III and IV. In addition, the lines
corresponding to the eigenvector E~1 and the dominant eigenvector E~2 are drawn. Finally, we
show three trajectories in Region I.


x2 (t) = a2 e−2t + b2 et
y2 (t) = (1/4)a2 e−2t + b2 et .

Since we assume they cross, there has to be a time point, t∗ , so that (x1 (t∗ ), y1 (t∗ )) and (x2 (t∗ ), y2 (t∗ ))
match. This means, using vector notation,

" # " # " #


x1 (t) 1 −2t 1
= a1 e + b1 et ,
y1 (t) 1/4 1
" # " # " #
x2 (t) 1 −2t 1
= a2 e + b2 et .
y2 (t) 1/4 1

Setting these two equal at t∗ then gives

    a1 [  1  ] e−2t∗  +  b1 [ 1 ] et∗   =   a2 [  1  ] e−2t∗  +  b2 [ 1 ] et∗ .
       [ 1/4 ]              [ 1 ]              [ 1/4 ]              [ 1 ]

This is a bit messy, so for convenience, let the number et∗ be denoted by U and the number e−2t∗
be V . Then, we can rewrite as

    a1 [  1  ] V  +  b1 [ 1 ] U   =   a2 [  1  ] V  +  b2 [ 1 ] U.
       [ 1/4 ]          [ 1 ]            [ 1/4 ]          [ 1 ]

Next, we can combine like vectors to find

    (a1 − a2 ) V [  1  ]   =   (b2 − b1 ) U [ 1 ]
                 [ 1/4 ]                    [ 1 ] .

No matter what the values of a1 , a2 , b1 and b2 , this tells us that

    [  1  ]   is a multiple of   [ 1 ]
    [ 1/4 ]                      [ 1 ] .

This is clearly not possible, so we have to conclude that trajectories can't cross. We can do this
sort of analysis for trajectories that start in any region, whether it is I, II, III or IV. Further, a
similar argument shows that a trajectory can't cross an eigenvector line, as if it did, the argument
above would lead us to the conclusion that E~1 is a multiple of E~2 , which it is not.
We can state the results here as formal rules for drawing trajectories.


Theorem 5.3.1 (Trajectory Drawing Rules).


Given the system

x0 (t) = a x(t) + b y(t)


y 0 (t) = c x(t) + d y(t)
x(0) = x0
y(0) = y0 ,

assume the eigenvalues r1 and r2 are different with either both negative or one
negative and one positive. Let E~1 and E~2 be the associated eigenvectors. Then,
the trajectories of this system corresponding to different initial conditions can
not cross each other. In particular, trajectories can not cross eigenvector lines.

5.3.5 Graphing Region II Trajectories

In region II, trajectories start where x0 < 0 and y 0 > 0. Hence, the x values must decrease and
the y values, increase in this region. We draw the trajectory in this way, making sure it curves
in such a way that it has no corners or kinks, until it hits the nullcline x0 = 0. At that point,
the trajectory moves into region I. Now x0 > 0 and y 0 > 0, so the trajectory moves upward along
~ 2 line like we showed in the Region I trajectories. We show this in Figure 5.7.
the eigenvector E
Note although the trajectories seem to overlap near the E ~ 2 line, they actually do not because
trajectories can not cross as was be explained in Section 5.3.4.

5.3.6 Graphing Region III Trajectories

Next, we examine trajectories that begin in region III. Here x0 and y0 are negative, so the x and
y values will decrease and the trajectories will approach the dominant eigenvector E~2 line from
the right side as is shown in Figure 5.8. The initial condition that starts in Region III above the
eigenvector E~2 line will move towards the y0 = 0 line following x0 < 0 and y0 < 0; after crossing
into Region II, it moves with x0 < 0 and y0 > 0 until it hits the line x0 = 0. Then it moves
upward towards the eigenvector E~2 line as shown. It is easier to see this in a magnified view as
shown in Figure 5.9.

5.3.7 Graphing Region IV Trajectories

Finally, we examine trajectories that begin in region IV. Here x0 is positive and y0 is negative,
so the x values will grow and the y values will decrease. The trajectories will behave in this manner
until they intersect the x0 = 0 nullcline. Then, they will cross into Region III and approach the
dominant eigenvector E~2 line from the left side as is shown in Figure 5.10.


Figure 5.6: Two trajectories in Region I that cross. In this figure, we show two trajectories in
Region I that cross. In the text, we explain why this is not possible. The nullclines and the
eigenvector lines E~1 and E~2 are drawn as before.


Figure 5.7: Trajectories In Region II. In this figure, we show the sign pairs determined by the
x0 = −3x + 4y and the y0 = −x + 2y equations for regions I, II, III and IV. In addition, the
lines corresponding to the eigenvector E~1 and the dominant eigenvector E~2 are drawn. Finally,
we show two trajectories in Region II. These trajectories do not cross, but it is hard to draw
them as they approach the dominant eigenvector!

5.3.8 The Combined Trajectories

In Figure 5.11, we show all the region trajectories on one plot. We can draw more, but these
should be enough to give you an idea of how to draw them. In addition, there is a type of
trajectory we haven’t drawn yet. Recall, the general solution is

" # " # " #


x1 (t) 1 −2t 1
= a e +b et .
y1 (t) 1/4 1

~ 1 line, then b = 0. Hence, for these initial


If an initial condition was chosen to lie on eigenvector E
conditions, we have


Figure 5.8: Region III Trajectories. In this figure, we show the sign pairs determined by the
x0 = −3x + 4y and the y0 = −x + 2y equations for regions I, II, III and IV. In addition, the
lines corresponding to the eigenvector E~1 and the dominant eigenvector E~2 are drawn. Finally,
we show three trajectories in Region III. One is a trajectory that moves from Region III through
Region II to Region I.


Figure 5.9: A Magnified Region III Trajectory.

" # " #
x1 (t) 1
= a e−2t .
y1 (t) 1/4

~ 1 line and then as t increases, x(t)


Thus, these trajectories start somewhere on the eigenvector E
and y(t) go to (0, 0) along this eigenvector. You can easily imagine these trajectories by placing
~ 1 line with an arrow pointing towards the origin.
a dot on the E

5.3.9 Two Negative Eigenvalues


In this case, the graph of Figure 5.11 is essentially repeated. The dominant eigenvector direction
is the one whose eigenvalue has the least negative value. For example, if eigenvalue r1 = −3 and
eigenvalue r2 = −2, since −2 is the less negative value, its eigenvector E~2 is the dominant one. All


Figure 5.10: Region IV Trajectories. In this figure, we show the sign pairs determined by the
x0 = −3x + 4y and the y0 = −x + 2y equations for regions I, II, III and IV. In addition, the
lines corresponding to the eigenvector E~1 and the dominant eigenvector E~2 are drawn. Finally,
we show two trajectories in Region IV.

trajectories will eventually approach (0, 0) as t increases, asymptotically along E~2 , except the
special trajectories that start on the E~1 line. Those trajectories will also approach (0, 0) but will
do so along the E~1 line.
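
When you work the problems in the next section, a short plotting sketch like this can check your
hand-drawn phase portraits (assuming the numpy and matplotlib libraries):

import numpy as np
import matplotlib.pyplot as plt

# The system x' = -3x + 4y, y' = -x + 2y from this section.
X, Y = np.meshgrid(np.linspace(-4, 4, 25), np.linspace(-4, 4, 25))
U, V = -3 * X + 4 * Y, -X + 2 * Y

plt.streamplot(X, Y, U, V, density=1.2)    # trajectories
xs = np.linspace(-4, 4, 2)
plt.plot(xs, 0.75 * xs, label="x' = 0")    # nullcline y = (3/4) x
plt.plot(xs, 0.50 * xs, label="y' = 0")    # nullcline y = (1/2) x
plt.plot(xs, 0.25 * xs, label='E1 line')   # eigenvector line, slope 1/4
plt.plot(xs, 1.00 * xs, label='E2 line')   # dominant eigenvector, slope 1
plt.legend()
plt.show()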

5.3.10 Problems
You are now ready to do some problems on your own. For the problems below

• Find the characteristic equation

• Find the general solution

• Solve the IVP

• On the same x − y graph,


Figure 5.11: Trajectories In All Regions. In this figure, we show the sign pairs determined by
the x0 = −3x + 4y and the y0 = −x + 2y equations for regions I, II, III and IV. In addition,
the lines corresponding to the eigenvector E~1 and the dominant eigenvector E~2 are drawn.
Finally, we show trajectories in all of the regions.


1. draw the x0 = 0 line


2. draw the y 0 = 0 line
3. draw the eigenvector one line
4. draw the eigenvector two line
5. divide the x − y plane into four regions corresponding to the algebraic signs of x0 and y0
6. draw the trajectories of enough solutions for various initial conditions to create the
phase plane portrait

Exercise 5.3.1.
" # " #" #
x0 (t) 1 3 x(t)
=
y 0 (t) 3 1 y(t)
" # " #
x(0) −3
=
y(0) 1

Exercise 5.3.2.
" # " #" #
x0 (t) 3 12 x(t)
=
y 0 (t) 2 1 y(t)
" # " #
x(0) 6
=
y(0) 1

Exercise 5.3.3.
" # " #" #
x0 (t) −1 1 x(t)
=
y 0 (t) −2 −4 y(t)
" # " #
x(0) 3
=
y(0) 8

Exercise 5.3.4.
" # " #" #
x0 (t) 3 4 x(t)
=
y 0 (t) −7 −8 y(t)
" # " #
x(0) −2
=
y(0) 4

Exercise 5.3.5.
" # " #" #
x0 (t) −1 1 x(t)
=
y 0 (t) −3 −5 y(t)
" # " #
x(0) 2
=
y(0) −4


Exercise 5.3.6.
" # " #" #
x0 (t) −5 2 x(t)
=
y 0 (t) −4 1 y(t)
" # " #
x(0) 21
=
y(0) 5

5.4 Repeated Eigenvalues

5.5 Complex Eigenvalues

Chapter 6
Nonlinear Differential Equations

We are now ready to study nonlinear systems of differential equations. Some of this has
been discussed in (Peterson (18) 2008) and we include a review in Chapter 5. In (Peterson
(18) 2008), the focus was on linear systems of differential equations with distinct real eigenvalues
followed by a thorough treatment of nonlinear predator - prey models as well as a simple SIR
disease model. We are now interested in doing more than that. We introduced repeated and
complex eigenvalues for linear systems of differential equations in Chapter 5. Now, in this
chapter, we will review a few standard nonlinear systems of differential equations (also included
in (Peterson (18) 2008)!) and introduce very new methods to analyze these problems. These
include

1. The use of linearization of nonlinear ordinary differential equation systems to gain insight
into their long term behavior. This requires the use of partial derivatives which we have
learned about in Chapter 2.

2. More extended qualitative graphical methods.

6.1 A Sample Nonlinear System: The Predator Prey Model


We now review a classical approach to studying the well - know nonlinear Predator - Prey model.
This was thoroughly discussed in ((Peterson (18) 2008)). In the 1920’s, the Italian biologist
Umberto D’Ancona studied population variations of various species of fish that interact with one
another. He came across the data shown in Table 6.1.

Here, we interpret the percentage we see in column two of Table 6.1 as predator fish, such as
sharks, skates and so forth. Also, the catches used to calculate these percentages were reported
from all over the Mediterranean. The tonnage from all the different catches for the entire year


Year Percent Not Food Fish


1914 11.9%
1915 21.4%
1916 22.1%
1917 21.2%
1918 36.4%
1919 27.3%
1920 16.0%
1921 15.9%
1922 14.8%
1923 10.7%

Table 6.1: The percent of the total fish catch in the Mediterranean Sea which was considered not
food fish.

Year Percent Predator Percent Food


1914 11.9% 88.1%
1915 21.4% 78.6%
1916 22.1% 77.9%
1917 21.2% 78.8%
1918 36.4% 63.6%
1919 27.3% 72.7%
1920 16.0% 84.0%
1921 15.9% 84.1%
1922 14.8% 85.2%
1923 10.7% 89.3%

Table 6.2: The percent of the total fish catch in the Mediterranean Sea considered predator and
considered food.

were then added and used to calculate the percentages in the table. Thus, we can also calculate
the percentage of the catch that was food fish by subtracting the predator percentages from 100%.
This leads to what we see in Table 6.2.

D’Ancona noted the time period coinciding with World War One, when fishing was drastically
cut back due to military actions, had puzzling data. Let’s highlight this in Table 6.3. D’Ancona
expected both food fish and predator fish to increase when the rate of fishing was cut back. But
in these war years, there is a substantial increase in the percentage of predators caught at the
same time the percentage of food fish went down. Note, we are looking at percentages here. Of
course, the raw tonnage of fish caught went down during the war years, but the expectation was
that since there is reduced fishing, there should be a higher percentage of food fish because they
have not been harvested. D’Ancona could not understand this, so he asked the mathematician
Vito Volterra for help.


Year Percent Predator Percent Food


1915 21.4% 78.6%
1916 22.1% 77.9%
1917 21.2% 78.8%
1918 36.4% 63.6%
1919 27.3% 72.7%

Table 6.3: During World War One, fishing is drastically curtailed, yet the predator percentage
went up while the food percentage went down.

6.1.1 Theory

Volterra approached the modeling this way. He let the variable x(t) denote the population of food
fish and y(t), the population of predator fish at time t. He was constructing what you might call
a coarse model. The food fish are not divided into categories like halibut and mackerel with a
separate variable for each, and the predators are also not divided into different classes like sharks,
squids and so forth. Hence, instead of dozens of variables for both the food and predator populations,
everything was lumped together. Following Volterra, we make the following assumptions:

1. The food population grows exponentially. Letting x0g denote the growth rate of the food
fish, we must have

x0g = a x

for some positive constant a.

2. The number of contacts per unit time between predators and prey is proportional to the
product of their populations. We assume the food fish are eaten by the predators at a rate
proportional to this contact rate. Letting the decay rate of the food be denoted by x0d , we
see

x0d = − b x y

for some positive constant b.

Thus, the net rate of change of food is x0 = x0g + x0d giving

x0 = a x − b x y.

for some positive constants a and b. He made assumptions about the predators as well.


1. Predators naturally die following an exponential decay; letting this decay rate be given by
yd0 , we have

yd0 = −c y

for some positive constant c.

2. We assume the predator fish can grow proportional to how much they eat. In turn, how much
they eat is assumed to be proportional to the rate of contact between food and predator fish.
We model the contact rate just like before and let yg0 be the growth rate of the predators.
We find

yg0 = d x y

for some positive constant d.

Thus, the net rate of change of predators is y 0 = yg0 + yd0 giving

y 0 = − c y + d x y.

for some positive constants c and d. The full Volterra model is thus

x0 = ax − bxy (6.1)
0
y = −cy + dxy (6.2)
x(0) = x0 (6.3)
y(0) = y0 (6.4)

Equation 6.1 and Equation 6.2 give the dynamics of this system. Note these are nonlinear dynamics,
the first we have seen since the logistic model. Equation 6.3 and Equation 6.4 are the initial
conditions for the system. Together, these four equations are called a Predator Prey system.
Since Volterra's work, this model has been applied in many other places. A famous example is
the wolf - moose predator - prey system which has been extensively modeled for Isle Royale in
Lake Superior. We are now going to analyze this model. We have been inspired by the analysis
given in (Braun (2) 1978), but Braun uses a bit more mathematics in his explanations and we
will try to use only calculus ideas.
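
Before diving into the analysis, it can help to simulate the model once. A minimal sketch
(assuming the scipy library; the parameter values a = 1, b = 0.1, c = 1, d = 0.05 are illustrative
only and are not fit to D'Ancona's data):

from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 0.1, 1.0, 0.05    # hypothetical rate constants

def predator_prey(t, z):
    x, y = z                         # x = food fish, y = predator fish
    return [a * x - b * x * y, -c * y + d * x * y]

sol = solve_ivp(predator_prey, (0.0, 30.0), [30.0, 4.0], max_step=0.05)
# Plotting sol.y[0] and sol.y[1] against sol.t shows the two populations
# rising and falling in repeating cycles, out of phase with each other.
print(sol.y[0].min(), sol.y[0].max())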

6.1.2 Predator - Prey Trajectories


Once we obtain a solution (x, y) to the Predator - Prey problem, we have two nice curves x(t)
and y(t) defined for all non negative time t. As we did when we developed qualitative graphs for
linear systems, if we graph in the x − y plane the ordered pairs (x(t), y(t)), we will draw a curve


C where any point on C corresponds to an ordered pair (x(t), y(t)) for some time t. At t = 0,
we are at the point (x0 , y0 ) on C . Hence, the initial conditions for the Predator - Prey problem
determine the starting point on C . As time increases, the pairs (x(t), y(t)) move in the direction
of the tangent line to C . If we knew the algebraic sign of the derivatives x0 and y 0 at any point
on C , we could decide the direction in which we are moving along the curve C .
So we begin our analysis by looking at the curves in the x − y plane where x0 and y 0 become 0.
From these curves, we will be able to find out the different regions in the plane where each is
positive or negative. From that, we will be able to decide in which direction a point moves along
the curve. Looking at the predator - prey equations, we see that if t∗ is a time point when x0 (t∗ )
is zero, the food dynamics of the predator - prey system reduce to

    0 = a x(t∗ ) − b x(t∗ ) y(t∗ )

or

    0 = x(t∗ ) ( a − b y(t∗ ) )

Thus, the (x, y) pairs in the x − y plane where

    0 = x ( a − b y )

are the ones where the rate of change of the food fish will be zero. Now these pairs can correspond
to many different time values t∗ so what we really need to do is to find all the (x, y) pairs where
this happens. Since this is a product, there are two possibilities:

• x = 0; the y axis and

• y = a/b; a horizontal line.

In a similar way, the pairs (x, y) where y 0 becomes zero satisfy the equation

 
    0 = y ( −c + d x ) .

Again, there are two possibilities:

• y = 0; the x axis and


• x = c/d; a vertical line.

Just like we did in our analysis of linear systems, we find the parts of the x - y plane where the algebraic signs of x' and y' are (+, +), (+, -), (-, +) and (-, -). As usual, the set of (x, y) pairs where x' = 0 is called the nullcline for x; similarly, the set of points where y' = 0 is the nullcline for y. The x' = 0 equation gives us the y axis and the horizontal line y = a/b, while the y' = 0 equation gives the x axis and the vertical line x = c/d. The plane is thus divided into three pieces: the part where x' > 0; the part where x' = 0; and the part where x' < 0. In Figure 6.1, we show the part of the x - y plane where x' > 0 with one shading and the part where it is negative with another. In Figure 6.2, we show how the y' nullcline divides the x - y plane into three pieces as well.
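It is easy to check these sign regions numerically. The following hedged sketch evaluates the signs of (x', y') at one sample point on each side of the nullclines; the parameters are the same illustrative values used in the earlier sketch.

```python
# Classify the algebraic signs of (x', y') at sample points in Quadrant I.
a, b, c, d = 2.0, 10.0, 3.0, 18.0   # illustrative; a/b = 0.2, c/d = 0.1667

def sign_pair(x, y):
    xp = x * (a - b * y)            # x' = x (a - b y)
    yp = y * (-c + d * x)           # y' = y (-c + d x)
    return ('+' if xp > 0 else '-'), ('+' if yp > 0 else '-')

# One sample point in each of the four Quadrant I regions.
for pt in [(0.3, 0.3), (0.1, 0.3), (0.1, 0.1), (0.3, 0.1)]:
    print(pt, sign_pair(*pt))       # (-,+), (-,-), (+,-), (+,+)
```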

Figure 6.1: Finding where x' < 0 and x' > 0 for the Predator - Prey Model. (The x' equation for our system is x' = x (a - b y). Setting this to 0, we get x = 0 and y = a/b, whose graphs are shown. At the evaluation point shown, y is above the critical value a/b and hence a - b y is negative. Hence, every point in the x - y plane with x > 0 above the horizontal line y = a/b gives x' < 0; below that line, x' > 0.)

The shaded areas shown in Figure 6.1 and Figure 6.2 can be combined into Figure 6.3. In this figure, we divide the x - y plane into four regions marked with a I, II, III or IV. In each region, x' and y' are either positive or negative. Hence, each region can be marked with an ordered pair of algebraic signs, (x' ±, y' ±).
For example, if we are in Region I, the sign of x' is negative and the sign of y' is positive. Thus, the variable x decreases and the variable y increases in this region. So if we graphed the ordered pairs (x(t), y(t)) in the x - y plane for all t > 0, we would plot a y versus x curve. That is, we would have y = f(x) for some function of x. Note that, by the chain rule,

    dy/dt = f'(x) dx/dt.


Figure 6.2: Finding where y' < 0 and y' > 0 for the Predator - Prey Model. (The y' equation for our system is y' = y (-c + d x). Setting this to 0, we get y = 0 and x = c/d, whose graphs are shown. At the evaluation point shown, x is to the right of the critical value c/d and hence -c + d x is positive. Hence, every point in the x - y plane with y > 0 to the right of the vertical line x = c/d gives y' > 0; to the left, y' < 0.)

Figure 6.3: Combining the x' and y' Algebraic Sign Regions. (The x' = 0 and the y' = 0 equations determine four regions in the first quadrant. In each region, the algebraic signs of x' and y' are shown as an ordered pair: Region I is (-, +), Region II is (-, -), Region III is (+, -) and Region IV is (+, +). The figure also indicates the sign pairs in the other quadrants, Regions V - IX.)


Hence, as long as x' is not zero (and this is true in Region I!), we have at each time t that the slope of the curve y = f(x) is given by

    df/dx (t) = y'(t) / x'(t).

Since our pair (x, y) is the solution to a differential equation, we expect that x and y are both continuously differentiable with respect to t. So if we draw the curve for y vs x in the x - y plane, we do not expect to see a corner in it (as a corner means the derivative fails to exist). So we can see three possibilities:

• a straight line, if the ratio y'(t)/x'(t) is the same at each t so the slope never changes,

• a curve that is concave up or

• a curve that is concave down.

We illustrate these three possibilities in Figure 6.4.

Figure 6.4: Trajectories In Region I. (In this figure, we show the three trajectory types, straight, concave up and concave down, for the sign pattern of Region I, drawn against the nullclines y = a/b and x = c/d and the sign regions I - IV.)

When we combine trajectories from one region with another, we must attach them so that we do
not get corners in the curves. This is how we can determine whether or not we should use concave
up or down or straight in a given region. We can do this for all the different regions shown in
Figure 6.4.


6.1.3 Only Quadrant One Is Biologically Relevant

To analyze this nonlinear model, we need a fact from more advanced courses.

Assumption 6.1.1 (Trajectories do not cross in the Predator - Prey model).

We can show, in a more advanced course, that two distinct trajectories of the Predator - Prey model

    x'   = a x - b x y      (6.5)
    y'   = -c y + d x y     (6.6)
    x(0) = x0               (6.7)
    y(0) = y0               (6.8)

can not cross.

Let's begin by looking at a trajectory that starts on the positive y axis. We therefore need to solve the system

    x'   = a x - b x y      (6.9)
    y'   = -c y + d x y     (6.10)
    x(0) = 0                (6.11)
    y(0) = y0 > 0           (6.12)

It is easy to guess the solution is the pair (x(t), y(t)) with x(t) = 0 always and y(t) satisfying y' = -c y. Hence,

    y(t) = y0 e^{-c t}

and y decays nicely down to 0 as t increases. If we start on the positive x axis, we want to solve

    x'   = a x - b x y      (6.13)
    y'   = -c y + d x y     (6.14)
    x(0) = x0 > 0           (6.15)
    y(0) = 0                (6.16)

Again, it is easy to guess the solution is the pair (x(t), y(t)) with y(t) = 0 always and x(t) satisfying x' = a x. Hence,

    x(t) = x0 e^{a t}

and the trajectory moves along the positive x axis, always increasing as t increases. Since trajectories can't cross other trajectories, this tells us a trajectory that begins in Quadrant I with a positive (x0, y0) can't hit the x axis or the y axis in a finite amount of time because if it did, we would have two trajectories crossing.
This is good news for our biological model. Since we are trying to model food and predator
interactions in a real biological system, we always start with initial conditions (x0 , y0 ) that are
in Quadrant One. It is very comforting to know that these solutions will always remain positive
and, therefore, biologically realistic. In fact, it doesn’t seem biologically possible for the food
or predators to become negative, so if our model permitted that, it would tell us our model is
seriously flawed! Hence, for our modeling purposes, we need not consider initial conditions that
start in Regions V - IX. Indeed, if you look at Figure 6.4, you can see that a solution trajectory could only hit the y axis from Region II, and a trajectory could only hit the x axis from a start in Region III. And we have ruled out these possibilities already.
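As a hedged numerical sanity check (same illustrative constants as before), integrating the system from a start on each axis should reproduce the exponential solutions above:

```python
# Check the axis solutions: on the y axis, y(t) = y0 e^{-ct} with x = 0;
# on the x axis, x(t) = x0 e^{at} with y = 0.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 2.0, 10.0, 3.0, 18.0

def volterra(t, s):
    x, y = s
    return [a * x - b * x * y, -c * y + d * x * y]

t = np.linspace(0.0, 2.0, 201)
on_y = solve_ivp(volterra, (0.0, 2.0), [0.0, 0.5], t_eval=t, rtol=1e-10)
on_x = solve_ivp(volterra, (0.0, 2.0), [0.5, 0.0], t_eval=t, rtol=1e-10)
print("y-axis error:", np.max(np.abs(on_y.y[1] - 0.5 * np.exp(-c * t))))
print("x-axis error:", np.max(np.abs(on_x.y[0] - 0.5 * np.exp(a * t))))
```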

Exercises

For the following problems, find the x' and y' nullclines and sketch, using multiple colors, the algebraic sign pairs (x', y') the nullclines determine in Quadrant I.

Exercise 6.1.1.

    x'(t) = 100 x(t) - 25 x(t) y(t)
    y'(t) = -200 y(t) + 40 x(t) y(t)

Exercise 6.1.2.

    x'(t) = 1000 x(t) - 250 x(t) y(t)
    y'(t) = -2000 y(t) + 40 x(t) y(t)

Exercise 6.1.3.

    x'(t) = 900 x(t) - 45 x(t) y(t)
    y'(t) = -100 y(t) + 50 x(t) y(t)

Exercise 6.1.4.

    x'(t) = 10 x(t) - 25 x(t) y(t)
    y'(t) = -20 y(t) + 40 x(t) y(t)

Exercise 6.1.5.

    x'(t) = 90 x(t) - 2.5 x(t) y(t)
    y'(t) = -200 y(t) + 4.5 x(t) y(t)

6.1.4 The Nonlinear Conservation Law

So we can assume that for a start in Quadrant I, the solution pair is always positive. Let's see how far we can get with a preliminary mathematical analysis. We can analyze these trajectories like this. For convenience, assume we start in Region II and the resulting trajectory hits the y = a/b line at some time t*. At that time, we will have x'(t*) = 0 and y'(t*) < 0. We show this situation in Figure 6.5.

Figure 6.5: Trajectories In Region II. (We show a trajectory that starts at (x0, y0) in Region II and terminates on the y = a/b line. This point is hit at time t* and we show the intersection as the pair (x(t*), a/b).)

Look at the Predator - Prey model dynamics for 0 ≤ t < t*. Since both variables are positive and their derivatives are not zero for these times, we can look at the fraction y'(t)/x'(t):

    y'(t)/x'(t) = [ y(t) (-c + d x(t)) ] / [ x(t) (a - b y(t)) ].


The equation above will not hold at t*, however, because at that point x'(t*) = 0. But for t below that critical value, it is fine to look at this fraction.
Rearranging a bit, we find

    (a - b y(t)) y'(t) / y(t) = (-c + d x(t)) x'(t) / x(t),

or, switching to the variable s for 0 ≤ s < t, for any value t strictly less than our special value t*, we have

    a y'(s)/y(s) - b y'(s) = -c x'(s)/x(s) + d x'(s).

Now integrate from s = 0 to s = t to obtain

    ∫_0^t ( a y'(s)/y(s) - b y'(s) ) ds = ∫_0^t ( -c x'(s)/x(s) + d x'(s) ) ds.

These integrals can be split into separate pieces giving

    a ∫_0^t y'(s)/y(s) ds - b ∫_0^t y'(s) ds = -c ∫_0^t x'(s)/x(s) ds + d ∫_0^t x'(s) ds.

These can be integrated easily (yes, it's true!) and we get

    a ln y(s) |_0^t - b y(s) |_0^t = -c ln x(s) |_0^t + d x(s) |_0^t.

Evaluating these expressions at s = t and s = 0, using the initial conditions x(0) = x0 and y(0) = y0, we find

    a ( ln y(t) - ln y0 ) - b ( y(t) - y0 ) = -c ( ln x(t) - ln x0 ) + d ( x(t) - x0 ).

Now we simplify a lot (remember x0 and y0 are positive, so absolute values are not needed around them). First, we use a standard logarithm property:

    a ln( y(t)/y0 ) - b ( y(t) - y0 ) = -c ln( x(t)/x0 ) + d ( x(t) - x0 ).

Then, put all the logarithm terms on the left side and pull the powers a and c inside the logarithms:


    ln( (y(t)/y0)^a ) + ln( (x(t)/x0)^c ) = b ( y(t) - y0 ) + d ( x(t) - x0 ).

Then using properties of the logarithm again,

    ln( (y(t)/y0)^a (x(t)/x0)^c ) = b ( y(t) - y0 ) + d ( x(t) - x0 ).

Now exponentiate both sides and use the properties of the exponential function to simplify. We find

    (y(t)/y0)^a (x(t)/x0)^c = ( e^{b y(t)} e^{d x(t)} ) / ( e^{b y0} e^{d x0} ).

We can rearrange this as follows:

    [ (y(t))^a / e^{b y(t)} ] [ (x(t))^c / e^{d x(t)} ] = ( y0^a x0^c ) / ( e^{b y0} e^{d x0} ).    (6.17)

Now the right hand side is a positive number which for convenience we will call α. Hence, we
have the equation

    [ (y(t))^a / e^{b y(t)} ] [ (x(t))^c / e^{d x(t)} ] = α

holds for all time t strictly less than t∗ . Thus, as we allow t to approach t∗ from below, the
continuity of our solutions x(t) and y(t) allows us to say

    lim_{t→t*} [ (y(t))^a / e^{b y(t)} ] [ (x(t))^c / e^{d x(t)} ] = [ (y(t*))^a / e^{b y(t*)} ] [ (x(t*))^c / e^{d x(t*)} ] = α.

Thus, Equation 6.17 holds at t∗ also.


We can do a similar analysis for a trajectory that starts in Region IV and moves up until it hits the y = a/b line where x' = 0. This one will start at an initial point (x0, y0) in Region IV and terminate on the y = a/b line at the point (x(t*), a/b) for some time t*. In this case, we continue the analysis as before. For any time t < t*, the variables x(t) and y(t) are positive and their derivatives nonzero. Hence, we can manipulate the Predator - Prey Equations just like before to end up with


    a ∫_0^t y'(s)/y(s) ds - b ∫_0^t y'(s) ds = -c ∫_0^t x'(s)/x(s) ds + d ∫_0^t x'(s) ds.

We integrate in the same way and apply the initial conditions to obtain Equation 6.17 again.

    [ (y(t))^a / e^{b y(t)} ] [ (x(t))^c / e^{d x(t)} ] = ( y0^a x0^c ) / ( e^{b y0} e^{d x0} ).

Then, taking the limit as t goes to t*, we see this equation holds at t* also. Again, label the right hand side as the positive constant α. We then have

    [ (y(t))^a / e^{b y(t)} ] [ (x(t))^c / e^{d x(t)} ] = α.

Letting t approach t∗ , as we did earlier, we find

    [ (y(t*))^a / e^{b y(t*)} ] [ (x(t*))^c / e^{d x(t*)} ] = α.

We conclude Equation 6.17 holds for trajectories that start in regions that terminate on the x' = 0 line y = a/b. Since trajectories that start in Regions I and III never have x' become 0, all of the analysis we did above works perfectly. Hence, we can conclude that Equation 6.17 holds for all trajectories starting at a positive initial point (x0, y0) in Quadrant I.
We know the pairs (x(t), y(t)) are on the trajectory that corresponds to the initial start (x0, y0). Hence, we can drop the time dependence (t) above and write Equation 6.18, which holds for any (x, y) pair that is on the trajectory.

    ( y^a / e^{b y} ) ( x^c / e^{d x} ) = ( y0^a x0^c ) / ( e^{b y0} e^{d x0} ).    (6.18)

Equation 6.18 is called the Nonlinear Conservation Law associated with the Predator - Prey
model.
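As a numerical sanity check, here is a hedged sketch that integrates the system and evaluates the conserved quantity E(x, y) = ( y^a / e^{b y} )( x^c / e^{d x} ) along the trajectory; up to integration error it should stay constant. The constants are the same illustrative values as before.

```python
# Verify the nonlinear conservation law along a numerical trajectory.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 2.0, 10.0, 3.0, 18.0

def volterra(t, s):
    x, y = s
    return [a * x - b * x * y, -c * y + d * x * y]

def E(x, y):
    return (y**a / np.exp(b * y)) * (x**c / np.exp(d * x))

sol = solve_ivp(volterra, (0.0, 20.0), [0.25, 0.25],
                t_eval=np.linspace(0.0, 20.0, 4001),
                rtol=1e-10, atol=1e-12)
vals = E(sol.y[0], sol.y[1])
print("relative drift in E:", (vals.max() - vals.min()) / vals[0])  # tiny
```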

6.1.5 Can a Trajectory Hit the y axis Redux?

Although we have assumed trajectories can’t cross and therefore a trajectory starting in Region
II can’t hit the y axis for that reason, we can also see this using the nonlinear conservation law.
We can do the same derivation for a trajectory starting in Region II with a positive x0 and y0
and this time assume the trajectory hits the y axis at a time t∗ at the point (0, y1 ) with y1 > 0.
We can repeat all of the integration steps to obtain


    [ (y(t))^a / e^{b y(t)} ] [ (x(t))^c / e^{d x(t)} ] = ( y0^a x0^c ) / ( e^{b y0} e^{d x0} ).

This equation holds for all t before t∗ . Taking the limit as t goes to t∗ , we obtain

    [ (y(t*))^a / e^{b y(t*)} ] [ (x(t*))^c / e^{d x(t*)} ] = [ (y1)^a / e^{b y1} ] [ 0^c / e^{0} ] = 0 = ( y0^a x0^c ) / ( e^{b y0} e^{d x0} ).

This is not possible, so we have another way of seeing that a trajectory can't hit the y axis. A similar argument shows a trajectory in Region III can't hit the x axis. We will leave the details of that argument to you!

Exercises

For the following Predator - Prey models, derive the nonlinear conservation law. Since our discussions have shown us the times when x' = 0 in the fraction y'/x' do not give us any trouble, you can derive this law by integrating

    y'(t)/x'(t) = [ y(t) (-c + d x(t)) ] / [ x(t) (a - b y(t)) ]

in the way we have described in this section for the particular values of a, b, c and d in the given model. So you should derive the equation

    ( y^a x^c ) / ( e^{b y} e^{d x} ) = ( (y(0))^a (x(0))^c ) / ( e^{b y(0)} e^{d x(0)} ).

Exercise 6.1.6.

    x'(t) = 100 x(t) - 25 x(t) y(t)
    y'(t) = -200 y(t) + 40 x(t) y(t)

Exercise 6.1.7.

    x'(t) = 1000 x(t) - 250 x(t) y(t)
    y'(t) = -2000 y(t) + 40 x(t) y(t)

Exercise 6.1.8.

    x'(t) = 900 x(t) - 45 x(t) y(t)
    y'(t) = -100 y(t) + 50 x(t) y(t)

Exercise 6.1.9.

    x'(t) = 10 x(t) - 25 x(t) y(t)
    y'(t) = -20 y(t) + 40 x(t) y(t)

Exercise 6.1.10.

    x'(t) = 90 x(t) - 2.5 x(t) y(t)
    y'(t) = -200 y(t) + 4.5 x(t) y(t)

6.1.6 Qualitative Analysis

From the discussions above, we now know that given an initial start (x0, y0) in Quadrant I of the x - y plane, the solution to the Predator - Prey system will not leave Quadrant I. If we piece the various trajectories together for Regions I, II, III and IV, the solution trajectories will either be periodic, spiral in to some center or spiral out to give unbounded motion. These possibilities are shown in Figure 6.6 (periodic), Figure 6.7 (spiraling out) and Figure 6.8 (spiraling in). We want to find out which of these possibilities actually occurs.

Figure 6.6: A Periodic Trajectory. (We show a possible periodic trajectory from a given start (x0, y0) in Quadrant I. Note that there is a time value, call it T, so that x(0) = x(T) = x0 and y(0) = y(T) = y0. T is called the period of the trajectory.)


Figure 6.7: A Spiraling Out Trajectory. (We show a trajectory starting from (x0, y0) in Quadrant I that spirals out.)

Figure 6.8: A Spiraling In Trajectory. (We show a spiraling in trajectory that starts at (x0, y0) in Quadrant I.)


6.1.7 The Predator - Prey Growth Functions

Recall the Predator - Prey nonlinear conservation law is given by

    ( y^a / e^{b y} ) ( x^c / e^{d x} ) = ( y0^a / e^{b y0} ) ( x0^c / e^{d x0} ).

Define the functions f and g for all nonnegative real numbers by

    f(x) = x^c / e^{d x},
    g(y) = y^a / e^{b y}.

Both f and g have the same form, so to see what these functions look like, let's analyze the function

    h(u) = u^α / e^{β u}

for all nonnegative u and any positive α and β. We note that since e^{β u} grows faster than u^α for any α and β, we must have that h decays to 0 from above as u grows larger. This means that h has the horizontal asymptote h = 0. Also, we have h(0) = 0. If you think about it a bit, since h is differentiable, this means it must rise to a maximum at some value of u before it begins to drop down towards 0. So, all we need to do is find the critical point of h. Taking the derivative of h, we find

    h'(u) = ( α u^{α-1} e^{β u} - u^α β e^{β u} ) / ( e^{β u} )^2
          = ( α u^{α-1} - β u^α ) / e^{β u}
          = ( u^{α-1} / e^{β u} ) ( α - β u ).

When we set h' = 0, we clearly see the positive critical point is α/β. Depending on whether or not α is smaller than one, we can also have a critical point at u = 0 corresponding to a 0 slope or an undefined slope. From our earlier analysis, we know that if we start with an initial condition (x0, y0) in Quadrant I, the solutions will never become 0. Hence, the behavior of the h curve at 0 is not of interest to us. Note that at the critical point α/β, the value of h is

    h(α/β) = (α/β)^α / e^{β(α/β)} = (α/β)^α e^{-α},

which is a completely uninteresting number. We will simply call it hmax. The generic graph of h is thus given by Figure 6.9.

Figure 6.9: The Generic Predator - Prey growth graph. (The Predator - Prey model growth functions have the form h(u) = u^α / e^{β u} for nonnegative u, rising to hmax at u = α/β and decaying toward 0 afterwards. The slope at u = 0 is drawn as a reasonable positive number; of course, if α < 1 this slope would be vertical, but it will just be understood this is possible.)

The analysis we just did for the growth curve h means that f has a maximum at x = c/d and g
has a maximum at y = a/b. This gives us the graphs shown in Figure 6.10 and Figure 6.11.

Figure 6.10: The Predator - Prey f growth graph. (The f growth function has the form f(x) = x^c / e^{d x} for nonnegative x, rising to its maximum fmax at x = c/d.)

Figure 6.11: The Predator - Prey g growth graph. (The g growth function has the form g(y) = y^a / e^{b y} for nonnegative y, rising to its maximum gmax at y = a/b.)
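A quick hedged numeric check of this calculus fact: sample h(u) = u^α / e^{β u} on a fine grid and confirm the maximizer sits at α/β. The exponents below are illustrative (they match f when c = 3 and d = 18).

```python
# Locate the maximum of h(u) = u^alpha / e^(beta u) numerically and
# compare with the calculus answer u = alpha/beta.
import numpy as np

alpha, beta = 3.0, 18.0
u = np.linspace(1e-6, 2.0, 200001)
h = u**alpha / np.exp(beta * u)
print("numeric argmax :", u[np.argmax(h)])                  # ~ 0.1667
print("alpha / beta   :", alpha / beta)                     # 0.1666...
print("h_max          :", (alpha / beta)**alpha * np.exp(-alpha))
```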

Exercises

For the following Predator - Prey models, state what the f and g growth functions are, use calculus to derive where their maximums occur (you can do either f or g as the derivation is the same for both) and sketch their graphs nicely.

Exercise 6.1.11.

    x'(t) = 10 x(t) - 25 x(t) y(t)
    y'(t) = -20 y(t) + 40 x(t) y(t)

Exercise 6.1.12.

    x'(t) = 100 x(t) - 25 x(t) y(t)
    y'(t) = -200 y(t) + 40 x(t) y(t)

Exercise 6.1.13.

    x'(t) = 90 x(t) - 45 x(t) y(t)
    y'(t) = -10 y(t) + 5 x(t) y(t)

Exercise 6.1.14.

    x'(t) = 10 x(t) - 2.5 x(t) y(t)
    y'(t) = -20 y(t) + 4 x(t) y(t)

Exercise 6.1.15.

    x'(t) = 9 x(t) - 3 x(t) y(t)
    y'(t) = -300 y(t) + 50 x(t) y(t)

6.1.8 The Nonlinear Conservation Law Using f and g

We can write the nonlinear conservation law using the growth functions f and g in the form of
Equation 6.19:

f (x) g(y) = f (x0 ) g(y0 ). (6.19)


The trajectories formed by the solutions of the Predator - Prey model that start in Quadrant I are
powerfully shaped by these growth functions. First note that given an initial condition (x0 , y0 ),
we know f (x0 ) is some percentage of fmax and g(y0 ) is some percentage of gmax . So we can say
f (x0 ) = rf fmax and g(y0 ) = rg gmax for some positive numbers rf ≤ 1 and rg ≤ 1. So the
nonlinear conservation law tells us that

    f(x) g(y) = rf rg fmax gmax = µ fmax gmax

for a constant µ which is positive and less than or equal to 1. It is easy to see that if we choose
(x0 = c/d, y0 = a/b), i.e. we start at the places where f and g have their maximums, the
resulting trajectory is very simple. It is the single point (x(t) = c/d, y(t) = a/b) for all time t.
Note, the Predator - Prey model, in this case, gives

    x'(t) = x(t) ( a - b y(t) )        (6.20)
          = x0 ( a - b y0 )            (6.21)
          = (c/d) ( a - b (a/b) )      (6.22)
          = 0,                         (6.23)
    y'(t) = y(t) ( -c + d x(t) )       (6.24)
          = y0 ( -c + d x0 )           (6.25)
          = (a/b) ( -c + d (c/d) )     (6.26)
          = 0,                         (6.27)
    x(0)  = x0,                        (6.28)
    y(0)  = y0.                        (6.29)

The solution to this Predator - Prey model with this initial condition is thus to simply stay at
the point where we start.
The next step is to examine what happens if we choose a value of µ < 1.

6.1.9 Each Trajectory Stays Bounded

If µ < 1, draw the f curve along with the horizontal line of value µfmax . We show this in Figure
6.12. This horizontal line intersects the f curve exactly twice. We label these intersection points
as (x1 , f (x1 )) and (x2 , f (x2 )). You can see that we have 0 < x1 < c/d < x2 .


Figure 6.12: The Conservation Law f(x) g(y) = µ fmax gmax implies there are two critical points x1 and x2 of interest. (The initial conditions (x0, y0) correspond to f(x0) g(y0) = µ fmax gmax. The horizontal line at height µ fmax intersects the f curve in two points, (x1, f(x1)) and (x2, f(x2)). See text for further explanation.)

The solutions (x(t), y(t)) must satisfy the nonlinear conservation law. So at any time t, we must
have

f (x(t)) g(y(t)) = µ fmax gmax .

There are two questions to ask:

Question 1: Is it possible for the trajectory to pass through a point (u, v) with 0 < u < x1? If this happened, the nonlinear conservation law tells us that the pair (u, v) must satisfy

    f(u) g(v) = µ fmax gmax.

However, we know that f (x1 ) = µ fmax and so we must have

f (u) g(v) = f (x1 ) gmax .

Now divide through by f(u) to get

    g(v) = ( f(x1) / f(u) ) gmax.


Looking at the graph of f in Figure 6.12, we see the ratio f (x1 )/f (u) > 1! But that tells
us we are looking for a value of v which satisfies g(v) > gmax . This is not possible. So no
trajectory can pass through this kind of point.

Question 2: Is it possible for the trajectory to pass through a point (u, v) with x2 < u? The
analysis is virtually identical to what we did for Question 1. If this happened, the nonlinear
conservation law tells us that the pair (u, v) must satisfy

f (u) g(v) = µ fmax gmax .

However, we also know f (x2 ) = µ fmax and so we must have

f (u) g(v) = f (x2 ) gmax .

Now divide through by f(u) to get

    g(v) = ( f(x2) / f(u) ) gmax.

Looking at the graph of f in Figure 6.12, we see again that the ratio f (x2 )/f (u) > 1! But
that tells us we are looking for a value of v which satisfies g(v) > gmax . This is not possible.
So no trajectory can pass through this kind of point either.

Note the answers to Question 1 and Question 2 tell us that valid trajectories have x(t) values that are bounded: i.e. x1 ≤ x(t) ≤ x2. Now, let's look at the y variable.
For the y variable, if µ < 1, draw the g curve along with the horizontal line of value µ gmax. We show this in Figure 6.13. This horizontal line intersects the g curve exactly twice. We label these intersection points as (y1, g(y1)) and (y2, g(y2)). You can see that we have 0 < y1 < a/b < y2.

Figure 6.13: The Conservation Law f(x) g(y) = µ fmax gmax implies there are two critical points y1 and y2 of interest. (The horizontal line at height µ gmax intersects the g curve in two points, (y1, g(y1)) and (y2, g(y2)). See text for further explanation.)
The solutions (x(t), y(t)) must satisfy the nonlinear conservation law. So at any time t, we must
have

f (x(t)) g(y(t)) = µ fmax gmax .

Again, there are two questions to ask:

Question 1: Is it possible for the trajectory to pass through a point (u, v) with 0 < v < y1? If this happened, the nonlinear conservation law tells us that the pair (u, v) must satisfy

    f(u) g(v) = µ fmax gmax.

However, we know that g(y1) = µ gmax and so we must have

    f(u) g(v) = g(y1) fmax.

Now divide through by g(v) to get

    f(u) = ( g(y1) / g(v) ) fmax.

Looking at the graph of g in Figure 6.13, we see the ratio g(y1)/g(v) > 1! But that tells us we are looking for a value of u which satisfies f(u) > fmax. This is not possible. So no trajectory can pass through this kind of point.

Question 2: Is it possible for the trajectory to pass through a point (u, v) with y2 < v? The
analysis is virtually identical to what we did for Question 1. If this happened, the nonlinear
conservation law tells us that the pair (u, v) must satisfy

f (u) g(v) = µ fmax gmax .


However, we also know g(y2) = µ gmax and so we must have

    f(u) g(v) = g(y2) fmax.

Now divide through by g(v) to get

    f(u) = ( g(y2) / g(v) ) fmax.

Again, looking at the graph of g in Figure 6.13, we see that the ratio g(y2)/g(v) > 1! But that tells us we are looking for a value of u which satisfies f(u) > fmax. This is not possible. So no trajectory can pass through this kind of point either.

The answers to Question 1 and Question 2 tell us that valid trajectories have y(t) values that are also bounded: i.e. y1 ≤ y(t) ≤ y2. We conclude that a trajectory that starts in Quadrant I must live in the box shown in Figure 6.14. Thus, any trajectory that starts in Quadrant I satisfies 0 < x1 ≤ x(t) ≤ x2 and 0 < y1 ≤ y(t) ≤ y2. We know immediately therefore that the trajectories can not be spiraling out.

Figure 6.14: Predator - Prey trajectories with initial conditions from Quadrant I are bounded. (The solutions must live in the bounding box shown: (x(t), y(t)) lives in the rectangle [x1, x2] × [y1, y2] for initial conditions (x0, y0) from Quadrant I.)

Now that we have discussed these two cases, note that we could have just done the x variable case
and said that a similar thing happens for the y variable. In many texts, it is very common to do
this. Since you are beginners at this kind of reasoning, we have presented both cases in detail.
But you should start training your mind to see that presenting one case is actually enough!
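To make the bounding box concrete, here is a hedged sketch that computes x1 and x2 by bisection on each branch of f; y1 and y2 come from g in exactly the same way. All constants and the start (x0, y0) are illustrative.

```python
# Bounding box values x1 < c/d < x2 solving f(x) = mu * f_max by bisection.
import numpy as np

a, b, c, d = 2.0, 10.0, 3.0, 18.0
x0, y0 = 0.25, 0.25

f = lambda x: x**c * np.exp(-d * x)
g = lambda y: y**a * np.exp(-b * y)
f_max, g_max = f(c / d), g(a / b)
mu = (f(x0) * g(y0)) / (f_max * g_max)          # here mu < 1

def bisect(fn, target, lo, hi, steps=200):
    # Assumes fn - target changes sign exactly once on [lo, hi].
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if (fn(mid) - target) * (fn(lo) - target) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x1 = bisect(f, mu * f_max, 1e-9, c / d)         # rising branch of f
x2 = bisect(f, mu * f_max, c / d, 5.0)          # falling branch of f
print("x1, x2:", x1, x2)                        # x1 < c/d < x2
```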


Exercises

For these Predator - Prey models, follow the analysis of the section above to show that the trajectories must be bounded.

Exercise 6.1.16.

    x'(t) = 10 x(t) - 25 x(t) y(t)
    y'(t) = -20 y(t) + 40 x(t) y(t)

Exercise 6.1.17.

    x'(t) = 100 x(t) - 25 x(t) y(t)
    y'(t) = -20 y(t) + 4 x(t) y(t)

Exercise 6.1.18.

    x'(t) = 80 x(t) - 4 x(t) y(t)
    y'(t) = -10 y(t) + 5 x(t) y(t)

Exercise 6.1.19.

    x'(t) = 10 x(t) - 2 x(t) y(t)
    y'(t) = -25 y(t) + 10 x(t) y(t)

Exercise 6.1.20.

    x'(t) = 12 x(t) - 4 x(t) y(t)
    y'(t) = -60 y(t) + 15 x(t) y(t)

6.1.10 The Trajectory Must Be Periodic


There are more questions to ask now that we know the trajectories must lie within the rectangle
[x1 , x2 ] × [y1 , y2 ].

Question 3: Is it possible for the trajectory to pass through the point (u, v) with u = x1 or
u = x2 ? Let’s do this for the case u = x1 . If this happens, the nonlinear conservation law
tells us that the pair (u, v) must satisfy

f (x1 ) g(v) = µ fmax gmax .

However, we know that f (x1 ) = µ fmax and so we must have


    f(x1) g(v) = f(x1) gmax

or

    g(v) = ( f(x1) / f(x1) ) gmax = gmax.

Hence, we are looking for a value of v which satisfies g(v) = gmax . The only possible v is
v = a/b. Hence, the trajectory can pass through the point (x1 , a/b). A similar analysis
shows that the trajectory can pass through the point (x2 , a/b) as well. Thus, we know the
trajectory actually hits these two points shown as dots in Figure 6.14.

Question 4: Is it possible for the trajectory to pass through a point (u, v) with x1 < u < x2 ?
For convenience, let’s look at the case x1 < u < c/d and the case u = c/d separately.

Case 1: u = c/d: In this case, the nonlinear conservation law gives

f (u) g(v) = µ fmax gmax .

However, we also know f(c/d) = fmax and so we must have

fmax g(v) = µ fmax gmax .

or

g(v) = µ gmax .

Since µ is less than 1, we draw the µ gmax horizontal line on the g graph as usual to obtain the figure we previously drew as Figure 6.13. Hence, there are two values of v that give the value µ gmax; namely, v = y1 and v = y2. We conclude there are two possible points on the trajectory, (c/d, y1) and (c/d, y2). This gives two more points shown as dots in Figure 6.14.
Case 2: x1 < u < c/d: The analysis is very similar to the one we just did for u = c/d. First, for this choice of u, we can draw a new graph as shown in Figure 6.15. Here, the conservation law gives

    f(u) g(v) = µ fmax gmax.


Figure 6.15: The Predator - Prey f growth graph trajectory analysis for Question 4. (The horizontal line µ fmax intersects the f curve in two points, (x1, f(x1)) and (x2, f(x2)). The choice x1 < u < c/d gives the vertical line shown, which intersects the f curve in the point (u, f(u)) with f(x1) < f(u) < f(c/d).)

Dividing through by f(u), we seek v values satisfying

    g(v) = µ ( fmax / f(u) ) gmax.

Here the ratio fmax/f(u) is larger than 1 (just look at Figure 6.15 to see this). Call this ratio r. Hence, µ < µ (fmax/f(u)) = µ r and so µ gmax < µ r gmax. Also from Figure 6.15, we see µ fmax < f(u), which tells us µ (fmax/f(u)) gmax < gmax. Now look at Figure 6.16. The inequalities above show us we must draw the horizontal line µ r gmax above the line µ gmax and below the line gmax. So we seek v values that satisfy

    µ gmax < g(v) = µ ( fmax / f(u) ) gmax = µ r gmax < gmax.

We already know the values of v that satisfy g(v) = µ gmax, which are labeled in Figure 6.16 as v = y1 and v = y2. Since the number µ r is larger than µ, we see from Figure 6.16 there are two values of v, v = z1 and v = z2, for which g(v) = µ r gmax and y1 < z1 < a/b < z2 < y2 as shown.
From the above, we see that in the case x1 < u < c/d, there are always 2 and only 2 possible v values on the trajectory. These points are (u, z1) and (u, z2).

Figure 6.16: The Predator - Prey g growth graph Question 4: the g growth analysis for one point x1 < u < c/d. (The horizontal lines µ gmax and µ r gmax cut the g curve at the pairs (y1, y2) and (z1, z2) respectively, with y1 < z1 < a/b < z2 < y2.)

What happens if we pick two points, x1 < u1 < u2 < c/d? The f curve analysis is essentially the same but now there are two vertical lines that we draw as shown in Figure 6.17.

Figure 6.17: The Predator - Prey f growth graph trajectory analysis for Question 4 for two u points. (The horizontal line µ fmax intersects the f curve in two points, (x1, f(x1)) and (x2, f(x2)). The choices x1 < u1 < u2 < c/d give vertical lines which intersect the f curve in the points (u1, f(u1)) and (u2, f(u2)) with f(x1) < f(u1) < f(u2) < f(c/d).)

Now, applying the conservation law gives two equations:

    f(u1) g(v) = µ fmax gmax,
    f(u2) g(v) = µ fmax gmax.

This implies we are searching for v values in the following two cases:

    g(v) = µ ( fmax / f(u1) ) gmax

and

    g(v) = µ ( fmax / f(u2) ) gmax.

Since f(u1) is smaller than f(u2), we see the ratio fmax/f(u1) is larger than fmax/f(u2) and both ratios are larger than 1 (just look at Figure 6.17 to see this). Call these ratios r1 (the one for u1) and r2 (for u2). It is easy to see r2 < r1 from the figure. We also still have (as in our analysis of the case of one point u) that both µ r1 and µ r2 are less than 1. We conclude

    µ < µ ( fmax / f(u2) ) = µ r2 < µ ( fmax / f(u1) ) = µ r1 < 1.

Now look at Figure 6.18. The inequalities above show us we must draw the horizontal line µ r1 gmax above the line µ r2 gmax, which is above the line µ gmax. We already know the values of v that satisfy g(v) = µ gmax, which are labeled in Figure 6.16 as v = y1 and v = y2. Since the number µ r2 is larger than µ, there are two values of v, v = z21 and v = z22, for which g(v) = µ r2 gmax and y1 < z21 < a/b < z22 < y2. But we can also do this for the line µ r1 gmax to find two more points z11 and z12 satisfying

    y1 < z21 < z11 < a/b < z12 < z22 < y2

as seen in Figure 6.18 also.


We also see that the largest spread in the y direction is at x = c/d giving the two points (c/d, y1 )
and (c/d, y2 ) which corresponds to the line segment [y1 , y2 ] drawn at the x = c/d location. If


we pick the point x1 < u2 < c/d, the two points on the trajectory give a line segment [z21 , z22 ]
drawn at the x = u2 location. Note this line segment is smaller and contained in the largest one
[y1 , y2 ]. The corresponding line segment for the point u1 is [z11 , z12 ] which is smaller yet.

Figure 6.18: The spread of the trajectory through fixed lines on the x axis gets smaller as we move away from the center point c/d. (The trajectory points (u1, z11), (u1, z12), (u2, z21) and (u2, z22) live in the bounding box [x1, x2] × [y1, y2]. Note the length of the line segments in the vertical direction is decreasing as we move away from the center line through x = c/d.)

If you think about it a bit, if we picked three points as follows, x1 < u1 < u2 < u3 < c/d and
three more points c/d < u4 < u5 < u6 < x2 , we would find line segments as follows:

    Point   Spread
    x1      One point (x1, a/b)
    u1      [z11, z12]
    u2      [z21, z22] contains [z11, z12]
    u3      [z31, z32] contains [z21, z22]
    c/d     [y1, y2] contains [z31, z32]
    u4      [z41, z42] inside [y1, y2]
    u5      [z51, z52] inside [z41, z42]
    u6      [z61, z62] inside [z51, z52]
    x2      One point (x2, a/b) inside [z61, z62]

We draw these line segments in Figure 6.19. We know the Predator - Prey trajectory must go through these points. Every time the trajectory hits the x value c/d, the corresponding y spread is [y1, y2]. If the trajectory were spiraling inwards, then the first time we hit c/d, the spread would be [y1, y2] and the next time, the spread would have to be less so that the trajectory moved inwards. This can't happen as the second time we hit c/d, the spread is exactly the same. The points shown in Figure 6.19 are always the same. We say a trajectory is periodic if there is a number T so that

    x(t + T) = x(t) and y(t + T) = y(t)

for all values of t. This is the behavior we are seeing in Figure 6.19. The smallest value of T for which this happens is called the period of the trajectory. Note the value of this period is really determined by the initial values (x0, y0) as they determine the bounding box [x1, x2] × [y1, y2].

Figure 6.19: The trajectory must be periodic. (The trajectory passes through the points (u1, z11), (u1, z12), (u2, z21), (u2, z22), (u3, z31), (u4, z42), (u5, z51) and (u6, z62), which all live in the bounding box [x1, x2] × [y1, y2]. The length of the line segments in the vertical direction decreases as we move away from the center line through x = c/d.)
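We can also watch this periodicity numerically. Here is a hedged sketch that estimates the period T as the first time the trajectory returns closest to its start; all values are illustrative.

```python
# Estimate the period T of a Predator - Prey orbit numerically.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 2.0, 10.0, 3.0, 18.0
start = np.array([0.25, 0.25])

def volterra(t, s):
    x, y = s
    return [a * x - b * x * y, -c * y + d * x * y]

t = np.linspace(0.0, 10.0, 100001)
sol = solve_ivp(volterra, (0.0, 10.0), start, t_eval=t,
                rtol=1e-10, atol=1e-12)
dist = np.hypot(sol.y[0] - start[0], sol.y[1] - start[1])
# Interior local minima of the distance mark near-returns to the start;
# the first one approximates the period T.
mins = np.where((dist[1:-1] < dist[:-2]) & (dist[1:-1] <= dist[2:]))[0] + 1
print("estimated period T ~", t[mins[0]])
```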

Exercises

For the following problems, show why the trajectories are periodic by mimicking the analysis in the section above.

Exercise 6.1.21.

    x'(t) = 8 x(t) - 25 x(t) y(t)
    y'(t) = -10 y(t) + 50 x(t) y(t)

Exercise 6.1.22.

    x'(t) = 30 x(t) - 3 x(t) y(t)
    y'(t) = -45 y(t) + 9 x(t) y(t)

Exercise 6.1.23.

    x'(t) = 50 x(t) - 12.5 x(t) y(t)
    y'(t) = -100 y(t) + 50 x(t) y(t)

Exercise 6.1.24.

    x'(t) = 10 x(t) - 2.5 x(t) y(t)
    y'(t) = -2 y(t) + 1 x(t) y(t)

Exercise 6.1.25.

    x'(t) = 7 x(t) - 2.5 x(t) y(t)
    y'(t) = -13 y(t) + 2 x(t) y(t)

6.1.11 The Average Value of a Predator - Prey Solution

If we have a positive function h defined on an interval [a, b], we can define the average value of h over [a, b] by the integral

    h̄ = ( 1/(b - a) ) ∫_a^b h(t) dt.    (6.30)

We will use this idea in a bit. Now, recall the Predator - Prey model is given by

    x'   = x ( a - b y )
    y'   = y ( -c + d x )
    x(0) = x0
    y(0) = y0

Rearrange the x' equation like this:

    x'(s)/x(s) = a - b y(s)

for all 0 ≤ s ≤ T where T is the period for this trajectory. Now integrate from s = 0 to s = T to get


    ∫_0^T x'(s)/x(s) ds = ∫_0^T ( a - b y(s) ) ds.

Hence, we have

    ln( x(s) ) |_0^T = a T - b ∫_0^T y(s) ds.

Simplifying, we find

    ln( x(T)/x0 ) = a T - b ∫_0^T y(s) ds.

However, since T is the period for this trajectory, we know x(T ) must equal x(0). Hence,
ln(x(T )/x0 ) = ln(1) = 0. Rearranging, we conclude

    0 = a T - b ∫_0^T y(s) ds,
    b ∫_0^T y(s) ds = a T,
    (1/T) ∫_0^T y(s) ds = a/b.

The term on the left hand side is the average value of the solution y over the one period of time,
[0, T ]. Using the usual average notation, we will call this ȳ. Thus, we have

    ȳ = (1/T) ∫_0^T y(s) ds = a/b.    (6.31)

We can do a similar analysis for the average value of the x component of the solution. We find

    y'(s)/y(s) = -c + d x(s),   0 ≤ s ≤ T,
    ∫_0^T y'(s)/y(s) ds = ∫_0^T ( -c + d x(s) ) ds,
    ln( y(s) ) |_0^T = -c T + d ∫_0^T x(s) ds,
    ln( y(T)/y0 ) = -c T + d ∫_0^T x(s) ds.


However, since T is the period for this trajectory, we know y(T ) must equal y(0). Hence,
ln(y(T )/y0 ) = ln(1) = 0. Rearranging, we conclude

    0 = -c T + d ∫_0^T x(s) ds,
    d ∫_0^T x(s) ds = c T,
    (1/T) ∫_0^T x(s) ds = c/d.

The term on the left hand side is the average value of the solution x over the one period of time,
[0, T ]. Using the usual average notation, we will call this x̄. Thus, we have

    x̄ = (1/T) ∫_0^T x(s) ds = c/d.    (6.32)

The point (x̄ = c/d, ȳ = a/b) has an important interpretation. It is the average value of the
solution over the period of the trajectory.
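A hedged numerical check of Equations 6.31 and 6.32: integrate over (approximately) one period and compare the time averages with c/d and a/b. The period value below is an illustrative estimate, for instance from the period sketch earlier; with the exact period the match is tight.

```python
# Time averages over one period: x should average c/d, y should average a/b.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 2.0, 10.0, 3.0, 18.0
T = 2.75                       # illustrative period estimate

def volterra(t, s):
    x, y = s
    return [a * x - b * x * y, -c * y + d * x * y]

t = np.linspace(0.0, T, 20001)
sol = solve_ivp(volterra, (0.0, T), [0.25, 0.25], t_eval=t,
                rtol=1e-10, atol=1e-12)
print("x average:", sol.y[0].mean(), " vs c/d =", c / d)
print("y average:", sol.y[1].mean(), " vs a/b =", a / b)
```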

Exercises

For the following Predator - Prey models, derive the average x and y equations.

Exercise 6.1.26.

    x'(t) = 100 x(t) - 25 x(t) y(t)
    y'(t) = -200 y(t) + 40 x(t) y(t)

Exercise 6.1.27.

    x'(t) = 1000 x(t) - 250 x(t) y(t)
    y'(t) = -2000 y(t) + 40 x(t) y(t)

Exercise 6.1.28.

    x'(t) = 900 x(t) - 45 x(t) y(t)
    y'(t) = -100 y(t) + 50 x(t) y(t)

Exercise 6.1.29.

    x'(t) = 10 x(t) - 25 x(t) y(t)
    y'(t) = -20 y(t) + 40 x(t) y(t)

Exercise 6.1.30.

    x'(t) = 90 x(t) - 2.5 x(t) y(t)
    y'(t) = -200 y(t) + 4.5 x(t) y(t)

6.1.12 A Sample Predator - Prey Model


Example 6.1.1. Consider the following Predator - Prey model:

    x'(t) = 2 x(t) - 10 x(t) y(t)
    y'(t) = -3 y(t) + 18 x(t) y(t)

For any choice of initial conditions (x0, y0), we can solve this as discussed in the previous sections. We find a = 2, b = 10 so that a/b = 0.2 and c = 3, d = 18 so that c/d = 0.16666̄. We know a lot about these solutions now.

1. The solution (x(t), y(t)) has average values x̄ = c/d = 0.16666̄ and ȳ = a/b = 0.2.

2. The initial condition (x0, y0) is some point on the trajectory curve.

3. For each choice of initial condition (x0, y0), there is a corresponding period T so that (x(t), y(t)) = (x(t + T), y(t + T)) for all time t.

4. Looking at Figure 6.19, we can connect the dots, so to speak, to generate the trajectory; the sketch after this list does that numerically.
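Here is the hedged sketch for this example: simulate for several periods and confirm boundedness and the long-run averages (the initial condition is an arbitrary choice).

```python
# Example 6.1.1 numerically: boundedness and long-run averages.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 2.0, 10.0, 3.0, 18.0

def model(t, s):
    x, y = s
    return [a * x - b * x * y, -c * y + d * x * y]

t = np.linspace(0.0, 30.0, 60001)         # several periods
sol = solve_ivp(model, (0.0, 30.0), [0.1, 0.1], t_eval=t,
                rtol=1e-10, atol=1e-12)
x, y = sol.y
print("bounding box: x in [%.4f, %.4f], y in [%.4f, %.4f]"
      % (x.min(), x.max(), y.min(), y.max()))
# Over many periods, the sample means approach the period averages.
print("x mean:", x.mean(), " (c/d =", c / d, ")")
print("y mean:", y.mean(), " (a/b =", a / b, ")")
```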

6.2 Predator - Prey With Fishing Rates


The Predator - Prey model we have looked at so far did not help Volterra explain the food and predator fish data seen in the Mediterranean Sea during World War I. The model must also handle changes in fishing rates. War activities had decreased the rate of fishing from 1915 to 1919 or so, as shown in Table 6.2. To understand this data, Volterra added a new decay rate to the model. He let the positive constant r represent the rate of fishing and assumed that r x would be removed from the food fish due to fishing; he also assumed that the same rate would apply to predator removal. Hence, r y would be removed from the predators. This led to the Predator - Prey model with fishing rates:

    x'(t) = a x(t) - b x(t) y(t) - r x(t)      (6.33)
    y'(t) = -c y(t) + d x(t) y(t) - r y(t).    (6.34)

Figure 6.20: The theoretical trajectory for x' = 2x - 10xy; y' = -3y + 18xy. (We do not know the actual trajectory as we can not solve for x and y explicitly as functions of time; however, our analysis tells us the trajectory has the qualitative features shown. The trajectory has average x value x̄ = 3/18 = .1666̄ and average y value ȳ = 2/10 = .2, and the spreads for six different u values all lie inside the bounding box [x1, x2] × [y1, y2], with vertical segment lengths decreasing as we move away from the center line through x = .1666̄.)

We don't have to work too hard to understand what adding the fishing does to our model results. We can rewrite the model as

    x'(t) = (a - r) x(t) - b x(t) y(t)         (6.35)
    y'(t) = -(c + r) y(t) + d x(t) y(t).       (6.36)

We see immediately that it doesn't make sense for the fishing rate to exceed a, as we want a - r to be positive. We also know the new averages are

    x̄_r = (c + r)/d,
    ȳ_r = (a - r)/b,

where we label the new averages with a subscript r to denote their dependence on the fishing rate r. What happens if we halve the fishing rate r? The new model is

    x'(t) = a x(t) - b x(t) y(t) - (r/2) x(t)
    y'(t) = -c y(t) + d x(t) y(t) - (r/2) y(t),

which can be reorganized as

    x'(t) = (a - r/2) x(t) - b x(t) y(t)       (6.37)
    y'(t) = -(c + r/2) y(t) + d x(t) y(t),     (6.38)

leading to the new averages

    x̄_{r/2} = (c + r/2)/d,
    ȳ_{r/2} = (a - r/2)/b.

Note that as long as we use a feasible r value (i.e. r < a), we have the following inequality relationships:

    x̄_{r/2} = (c + r/2)/d < x̄_r = (c + r)/d,
    ȳ_{r/2} = (a - r/2)/b > ȳ_r = (a - r)/b.

Hence, if we decrease the fishing rate r, the predator percentage goes up and the food percentage goes down. Now look at Table 6.2 rewritten with the percentages listed as fractions and interpreted as x̄ and ȳ. We show this in Table 6.4.

    Year   x̄      ȳ      Fishing Rate Change           (∆x̄, ∆ȳ)
    1914   .881   .119   starting value                no change yet
    1915   .786   .214   down relative to 1914         (-, +)
    1916   .779   .221   down relative to 1914         (-, +)
    1917   .788   .212   down relative to 1914         (-, +)
    1918   .636   .364   down relative to 1914         (-, +)
    1919   .727   .273   increased relative to 1918    (+, -)
    1920   .840   .160   increased relative to 1918    (+, -)
    1921   .841   .159   increased relative to 1918    (+, -)
    1922   .852   .148   increased relative to 1918    (+, -)
    1923   .893   .107   back to normal 1914 rate      back to normal

Table 6.4: The average food and predator fish caught in the Mediterranean Sea.

Note that Volterra's Predator - Prey model with fishing rates added has now explained this data. During the war years, predator amounts went up and food fish amounts went down. A wonderful use of modeling, don't you think? Insight was gained from the modeling that could not have been achieved using other types of analysis.
Let's do an example to set this in place. Consider the following Predator - Prey model with fishing added.


Example 6.2.1.

    x'(t) = 4 x(t) - 18 x(t) y(t) - 2 x(t)
    y'(t) = -3 y(t) + 21 x(t) y(t) - 2 y(t)

Solution 6.2.1. The averages are as follows:

    x̄_{r=0} = 3/21 = .1429,   ȳ_{r=0} = 4/18 = .2222
    x̄_{r=2} = 5/21 = .2381,   ȳ_{r=2} = 2/18 = .1111
    x̄_{r=1} = 4/21 = .1905,   ȳ_{r=1} = 3/18 = .1667.

We see that halving the fishing rate decreases the food fish average (.2381 down to .1905) and increases the predator average (.1111 up to .1667). We could also show this graphically by drawing all three average pairs on the same x - y plane, but we will leave that to you in the exercises.
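This bookkeeping is easy to automate; a hedged sketch matching Example 6.2.1:

```python
# Fishing-adjusted averages: x_bar = (c + r)/d, y_bar = (a - r)/b.
a, b, c, d = 4.0, 18.0, 3.0, 21.0

def averages(r):
    assert r < a, "feasible fishing rates satisfy r < a"
    return (c + r) / d, (a - r) / b

for r in (0.0, 2.0, 1.0):
    xb, yb = averages(r)
    print("r = %.1f: x_bar = %.4f, y_bar = %.4f" % (r, xb, yb))
```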

6.2.1 Exercises

For the following problems, add fishing to the model at the rate r which is given. Find the new average solutions (x̄, ȳ) and explain what happens if we halve the fishing rate and how this relates to the way Volterra explained the Mediterranean Sea fishing data from World War I. Draw a simple picture showing these three averages on the same x - y graph: show the original (x̄, ȳ), the (x̄, ȳ) when the fishing is added and the (x̄, ȳ) when the fishing is halved. You should clearly see that halving the fishing rate leads to the average predator value going up and the average food fish value going down.

Exercise 6.2.1.

    x'(t) = 4 x(t) - 18 x(t) y(t) - 2 x(t)
    y'(t) = -3 y(t) + 21 x(t) y(t) - 2 y(t)

Exercise 6.2.2.

    x'(t) = 2 x(t) - 10 x(t) y(t) - 1 x(t)
    y'(t) = -4 y(t) + 20 x(t) y(t) - 1 y(t)

Exercise 6.2.3.

    x'(t) = 1 x(t) - 2 x(t) y(t) - 0.5 x(t)
    y'(t) = -4 y(t) + 8 x(t) y(t) - 0.5 y(t)

Exercise 6.2.4.

    x'(t) = 40 x(t) - 18 x(t) y(t) - .2 x(t)
    y'(t) = -30 y(t) + 20 x(t) y(t) - .2 y(t)

Exercise 6.2.5.

    x'(t) = 7 x(t) - 8 x(t) y(t) - .2 x(t)
    y'(t) = -3 y(t) + 4 x(t) y(t) - .2 y(t)

6.3 Predator - Prey With Self Interaction

Many biologists of Volterra's time criticized his Predator - Prey model because it did not include self - interaction terms. These are terms that model how food fish interactions with other food fish and predator interactions with other predators affect their populations. We can model these effects by assuming their magnitude is proportional to the interaction. Mathematically, we assume these are both decay terms:

    x'_self = -e x x
    y'_self = -f y y

for positive constants e and f. We are thus led to the new self - interaction model given below:

    x'(t) = a x(t) - b x(t) y(t) - e x(t)^2
    y'(t) = -c y(t) + d x(t) y(t) - f y(t)^2

The nullclines for the self - interaction model are a bit more complicated, but still straightforward to work with. First, we can factor the dynamics to obtain

    x' = x ( a - b y - e x ),
    y' = y ( -c + d x - f y ).

First, let's look at the x' = 0 nullcline. We find x' = 0 if x = 0 or a - b y - e x = 0. Hence, the relevant x' = 0 equations are the y axis and the straight line

    y = a/b - (e/b) x.


In a bit, we will show how we can concentrate on what happens in the first quadrant only, so in Figure 6.21 we show how the x - y plane is divided into x' + and x' - regions in Quadrant I only.

Figure 6.21: Finding where x' < 0 and x' > 0 for the Predator - Prey Self Interaction Model. (The x' equation for our system is x' = x (a - b y - e x). Setting this to 0, we get x = 0 and the line y = a/b - (e/b) x, which runs from (0, a/b) to (a/e, 0). In Quadrant I, x' > 0 below this line and x' < 0 above it.)

Next, look at the y' = 0 nullcline. We have y' = 0 if y = 0 or -c + d x - f y = 0. Hence, the relevant y' = 0 equations are the x axis and the straight line

    y = -c/f + (d/f) x.

In Figure 6.22 we show how the x - y plane is divided into y' + and y' - regions in Quadrant I only.

Figure 6.22: Finding where y' < 0 and y' > 0 for the Predator - Prey Self Interaction Model. (The y' equation for our system is y' = y (-c + d x - f y). Setting this to 0, we get y = 0 and the line y = -c/f + (d/f) x, which crosses the x axis at x = c/d. In Quadrant I, y' > 0 below and to the right of this line and y' < 0 otherwise.)

We then combine Figure 6.21 and Figure 6.22 to create the combined graph we can use to analyze trajectories. There are now two cases: the first is when c/d < a/e, so that the x' = 0 line a - b y - e x = 0 and the y' = 0 line -c + d x - f y = 0 cross in Quadrant I. However, first, we look at trajectories that start on the axes.


6.3.1 Trajectories Starting On the y Axis


If we start at an initial condition on the y axis, x0 = 0 and y0 > 0, the self - interaction model can be solved by choosing x(t) = 0 for all time t and solving the first order equation

    y'(t) = y ( -c - f y ).

This becomes

    y' / ( y (c + f y) ) = -1.

Integrating from 0 to t, we find

    ∫_0^t y'(s) / ( y(s) (c + f y(s)) ) ds = -∫_0^t ds = -t.

Making the substitution u = y(s), so that du = y'(s) ds, we find

    ∫_{s=0}^{s=t} du / ( u (c + f u) ) = -t.

This is a problem that needs a partial fraction decomposition approach. We search for α and β so that

    1 / ( u (c + f u) ) = α/u + β/(c + f u).

This leads to α = 1/c and β = −f /c. Hence, we can complete the integration to obtain

    ∫_{s=0}^{s=t} du / ( u (c + f u) ) = ∫_{s=0}^{s=t} ( α/u + β/(c + f u) ) du
                                       = (1/c) ∫_{s=0}^{s=t} du/u - (f/c) ∫_{s=0}^{s=t} du/(c + f u)
                                       = (1/c) ln u(s) |_0^t - (1/c) ln( c + f u(s) ) |_0^t.

But u = y(t) at s = t and so we have

    (1/c) ln( y(t)/y0 ) - (1/c) ln( (c + f y(t))/(c + f y0) ) = (1/c) ln( ( y(t)/y0 ) ( (c + f y0)/(c + f y(t)) ) ).


Since the right hand side of the integration was -t, we can combine the result above with the right hand side to see the solution to the model integration is

    ln | ( y(t)/y0 ) ( (c + f y0)/(c + f y(t)) ) | = -c t.

Exponentiating, we have

    | ( y(t)/y0 ) ( (c + f y0)/(c + f y(t)) ) | = e^{-c t}.

Using our standard arguments, we see that the solution y(t) can not be zero at any finite time. Hence, the absolute values are not needed. We have

    ( y(t)/y0 ) ( (c + f y0)/(c + f y(t)) ) = e^{-c t}

or

    y(t)/(c + f y(t)) = ( y0/(c + f y0) ) e^{-c t}.

After some manipulation, we find

    y(t) = ( y0/(c + f y0) ) ( c + f y(t) ) e^{-c t}.

Now solve for y(t) to find

    y(t) = ( c y0 e^{-c t} ) / ( (c + f y0) - f y0 e^{-c t} ).

This trajectory thus clearly stays on the y axis and moves straight down to the origin (0, 0). We conclude the solution to the self - interaction problem

    x'(t) = x ( a - b y - e x )
    y'(t) = y ( -c + d x - f y )
    x(0) = 0
    y(0) = y0 > 0

is

    x(t) = 0,
    y(t) = ( c y0 e^{-c t} ) / ( (c + f y0) - f y0 e^{-c t} ).

We know trajectories can not cross, so a trajectory that starts in Quadrant I with (x0, y0) both positive can not hit the y axis, because if it did we would have its trajectory intersecting the trajectory above.
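A hedged check of this closed form against a direct numerical integration (the constants c, f and the start y0 are illustrative):

```python
# Compare the closed-form y-axis solution of the self-interaction model,
# y(t) = c y0 e^{-ct} / ((c + f y0) - f y0 e^{-ct}), with a numeric solve.
import numpy as np
from scipy.integrate import solve_ivp

c, f, y0 = 3.0, 0.5, 1.0

t = np.linspace(0.0, 3.0, 301)
closed = c * y0 * np.exp(-c * t) / ((c + f * y0) - f * y0 * np.exp(-c * t))
num = solve_ivp(lambda t, y: [y[0] * (-c - f * y[0])], (0.0, 3.0), [y0],
                t_eval=t, rtol=1e-10, atol=1e-12)
print("max abs difference:", np.max(np.abs(num.y[0] - closed)))  # ~ 0
```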

6.3.2 Starting On the x Axis

If we start at a point on the positive x axis, we must solve the self - interaction model

    x' = x ( a - b y - e x )
    y' = y ( -c + d x - f y )
    x(0) = x0 > 0
    y(0) = 0.

It is easy to see that guessing y(t) = 0 for all t is a good choice. That leaves us to solve the first order equation

    x' = x ( a - e x ) = e x ( a/e - x ),
    x(0) = x0 > 0.

This is a standard logistic problem with α = e and resource capacity L = a/e. We therefore know the solution is

    x(t) = L / ( 1 + ( L/x0 - 1 ) e^{-α L t} )
         = a x0 / ( e x0 + ( a - e x0 ) e^{-a t} ).

It is thus easy to see that if a trajectory starts at a point x0 > 0 with y0 = 0, the trajectory moves along the x axis towards the point (a/e, 0). It is also clear now that trajectories that start in Quadrant I with positive (x0, y0) can not hit the x or y axis, as we know trajectories can't cross. We are now in a position to see what happens if we pick initial conditions in Quadrant I.
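And a matching hedged check of the logistic closed form (a, e and x0 illustrative):

```python
# Check the logistic x-axis solution x(t) = a x0 / (e x0 + (a - e x0) e^{-at})
# against direct integration of x' = x (a - e x).
import numpy as np
from scipy.integrate import solve_ivp

a, e, x0 = 2.0, 4.0, 0.1        # resource capacity a/e = 0.5

t = np.linspace(0.0, 5.0, 501)
closed = a * x0 / (e * x0 + (a - e * x0) * np.exp(-a * t))
num = solve_ivp(lambda t, x: [x[0] * (a - e * x[0])], (0.0, 5.0), [x0],
                t_eval=t, rtol=1e-10, atol=1e-12)
print("max abs difference:", np.max(np.abs(num.y[0] - closed)))  # ~ 0
```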


6.3.3 The Nullclines Cross

The first case is the one where the nullclines cross. We show this situation in Figure 6.23. The two lines cross when

    e x + b y = a,
    d x - f y = c.

Solving using Cramer's rule, we find the intersection (x*, y*) to be

    x* = det[ a  b ; c  -f ] / det[ e  b ; d  -f ] = ( a f + b c ) / ( e f + b d ),

    y* = det[ e  a ; d  c ] / det[ e  b ; d  -f ] = ( a d - e c ) / ( e f + b d ).

In this case, we have a/e > c/d, or a d - e c > 0, so y* is positive.
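The 2 × 2 solve is easy to automate; a hedged numpy sketch with illustrative constants:

```python
# Nullcline intersection: solve e x + b y = a, d x - f y = c two ways.
import numpy as np

a, b, c, d, e, f = 2.0, 1.0, 1.0, 2.0, 1.0, 1.0   # illustrative constants

A = np.array([[e, b], [d, -f]])
x_star, y_star = np.linalg.solve(A, np.array([a, c]))
print("numpy :", x_star, y_star)
print("Cramer:", (a * f + b * c) / (e * f + b * d),
                 (a * d - e * c) / (e * f + b * d))
```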


We assume we start in Quadrant I in a region where (x', y') has algebraic signs (-, -) or (+, -). In these two regions, we know the corresponding trajectories can't cross the ones that start on the positive x or y axis. Hence, at any time t, the solutions x(t) and y(t) must be positive. Rewrite the model as follows:

    x'/x + e x = a - b y,
    y'/y + f y = -c + d x.

Now integrate from s = 0 to s = t to obtain

    ∫_0^t x'(s)/x(s) ds + e ∫_0^t x(s) ds = a t - b ∫_0^t y(s) ds,
    ∫_0^t y'(s)/y(s) ds + f ∫_0^t y(s) ds = -c t + d ∫_0^t x(s) ds.

Figure 6.23: The qualitative nullcline regions for the Predator - Prey Self Interaction Model when c/d < a/e. (The combined (x', y') algebraic sign graph in Quadrant I. The nullclines y = a/b - (e/b) x and y = -c/f + (d/f) x cross at (x*, y*), creating four regions of interest with sign pairs (-, -), (+, -), (-, +) and (+, +).)

We obtain

    ln( x(t)/x0 ) + e ∫_0^t x(s) ds = a t - b ∫_0^t y(s) ds,
    ln( y(t)/y0 ) + f ∫_0^t y(s) ds = -c t + d ∫_0^t x(s) ds.

Now the solutions x and y are continuous, so the integrals

    X(t) = ∫_0^t x(s) ds,
    Y(t) = ∫_0^t y(s) ds

are also continuous by the Fundamental Theorem of Calculus. Using the new variables X and Y, we can rewrite these integrations as

 
    ln( x(t)/x0 ) = a t - e X(t) - b Y(t),
    ln( y(t)/y0 ) = -c t + d X(t) - f Y(t).

Hence,


 
    d ln( x(t)/x0 ) = a d t - e d X(t) - b d Y(t),
    e ln( y(t)/y0 ) = -c e t + d e X(t) - e f Y(t).

Now add the top and bottom equations to get

    ln( (x(t)/x0)^d (y(t)/y0)^e ) = ( a d - c e ) t - ( e f + b d ) Y(t).

Now divide through by t to get

    (1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) = ( a d - c e ) - ( e f + b d ) (1/t) Y(t).    (6.39)

From Figure 6.23, it is easy to see that no matter what (x0, y0) we choose in Quadrant I, the trajectories are bounded and so there is a positive constant we will call B so that

    | ln( (x(t)/x0)^d (y(t)/y0)^e ) | ≤ B.

Thus, for all t, we have

    | (1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) | ≤ B/t.

Hence, if we let t grow larger and larger, B/t gets smaller and smaller, and in fact

    lim_{t→∞} | (1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) | ≤ lim_{t→∞} B/t = 0.

But the left hand side is always nonnegative also, so we have

    0 ≤ lim_{t→∞} | (1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) | ≤ 0,

which tells us that

    lim_{t→∞} | (1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) | = 0.

Finally, the above also implies

    lim_{t→∞} (1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) = 0.

Now let t go to infinity in Equation 6.39 to get

    0 = lim_{t→∞} (1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) = ( a d - c e ) - ( e f + b d ) lim_{t→∞} (1/t) Y(t).

The term Y(t)/t is actually (1/t) ∫_0^t y(s) ds, which is the average of the solution y on the interval [0, t]. It therefore follows that

    lim_{t→∞} (1/t) ∫_0^t y(s) ds = ( a d - c e ) / ( e f + b d ).

But the term on the right hand side is exactly the y coordinate of the intersection of the nullclines, y*. We conclude the limiting average value of the solution y is given by

    lim_{t→∞} (1/t) ∫_0^t y(s) ds = y*.    (6.40)

We can do a similar analysis (although there are differences in approach) to show that the limiting average value of the solution x is given by

    lim_{t→∞} (1/t) ∫_0^t x(s) ds = x*.    (6.41)

These two results are similar to what we saw in the Predator - Prey model without self - interaction. Of course, we only had to consider the averages over the period before, whereas in the self - interaction case, we must integrate over all time. It is instructive to compare these results:

    Model                    Average x    Average y
    No Self - Interaction    c/d          a/b
    Self - Interaction       x*           y*
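A hedged numerical illustration of Equations 6.40 and 6.41: integrate the self - interaction model for a long time and compare the running averages with (x*, y*). All constants below are illustrative and satisfy c/d < a/e.

```python
# Long-run averages of the self-interaction model approach (x*, y*).
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d, e, f = 2.0, 1.0, 1.0, 2.0, 1.0, 1.0

def model(t, s):
    x, y = s
    return [x * (a - b * y - e * x), y * (-c + d * x - f * y)]

T = 200.0
t = np.linspace(0.0, T, 200001)
sol = solve_ivp(model, (0.0, T), [0.5, 0.5], t_eval=t,
                rtol=1e-9, atol=1e-11)
x_star = (a * f + b * c) / (e * f + b * d)
y_star = (a * d - e * c) / (e * f + b * d)
print("x average:", sol.y[0].mean(), " vs x* =", x_star)
print("y average:", sol.y[1].mean(), " vs y* =", y_star)
```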


6.3.4 The Nullclines Don’t Cross

The second case is the one where the nullclines do not cross. We show this situation in Figure 6.24. This occurs when c/d > a/e. The two lines now cross at a negative y value, but since in this model also, trajectories that start in Quadrant I can't cross the x or y axis, we only draw the situation in Quadrant I. By Cramer's rule, the solution to

    e x + b y = a,
    d x - f y = c

is the pair (x*, y*) as shown below:

    x* = ( a f + b c ) / ( e f + b d ),
    y* = ( a d - e c ) / ( e f + b d ).

In this case, we have a/e < c/d or a d − e c < 0 and so y ∗ is negative and not biologically
interesting.

Figure 6.24: The qualitative nullcline regions for the Predator - Prey Self Interaction Model when c/d > a/e. (The combined (x', y') algebraic sign graph in Quadrant I for the case where the nullclines don't cross in Quadrant I. There are three regions of interest: Region I, where the algebraic signs of (x', y') are (-, +); Region II, where the signs are (-, -); and Region III, where the signs are (+, -).)

To avoid cluttering Figure 6.24, we don't label the Regions. We use as Region I the points where the algebraic signs of (x′, y′) are (−, +); Region II, where the signs are (−, −); and as Region III, where the signs are (+, −). Thus, trajectories that start in Region I go up and to the left until they enter Region II. The analysis we did for trajectories starting on the x axis or y axis in the crossing nullclines case is still appropriate. So we know that once a trajectory is in Region II, it can't


hit the x axis in a finite amount of time. Similarly, a trajectory that starts in Region III moves right and down towards the x axis, but can't hit the x axis in a finite amount of time. Most of the analysis we have done for the nullclines crossing case is still appropriate. We won't repeat these steps (although you should be able to reconstruct them on your own!); instead, we state our conclusions.

1. A trajectory that starts on the positive y axis at a point (0, y0) with y0 > 0 moves smoothly down to (0, 0).

2. A trajectory that starts on the positive x axis obeys a standard logistic equation with resource capacity L = a/e. So if a trajectory starts at a point (x0, 0) below the resource capacity a/e, it moves to the right towards a/e along the x axis. In the limit, as t grows infinitely large, the asymptotic value of x is then a/e. If it starts at x0 > a/e, the trajectory moves along the x axis to the left towards the resource capacity a/e. The asymptotic value of x is again a/e. Call the asymptotic x and y values x∞ and y∞ for convenience.

Now let’s look at a trajectory starting in Region III. It must satisfy


(1/t) ln( (x(t)/x0)^d (y(t)/y0)^e ) = (a d − c e) − (e f + b d) (1/t) Y(t),     (6.42)

with the big difference that the term a d − c e is now negative. Multiply through by t and exponentiate to obtain

(x(t)/x0)^d (y(t)/y0)^e = e^{(a d − c e) t} e^{−(e f + b d) Y(t)}.     (6.43)

Now note

• The term e^{−(e f + b d) Y(t)} is bounded by 1.

• The term e^{(a d − c e) t} goes to zero as t gets large because a d − c e is negative.

• Looking at Figure 6.24, we can see the solutions x(t) and y(t) are bounded.

Hence, as t increases to infinity, we find

lim_{t→∞} (x(t)/x0)^d (y(t)/y0)^e = 0.

We conclude the asymptotic values satisfy

(x∞/x0)^d (y∞/y0)^e = 0.


Since it is easy to see that x∞ is positive, we must have y∞ = 0. We now know a trajectory starting in Region III hits the x axis as t goes to infinity. Is it possible for the asymptotic value x∞ to be less than the critical value a/e? We claim this is not possible. Here is how we reason it out. It is possible to prove that as t goes to infinity, the asymptotic value (x′)∞ is 0. The argument requires a bit of more advanced mathematical reasoning which you would definitely be ready for after a few more mathematics classes, but it is not essential for us here. So skip this explanation if you want! Here goes (the explanation is in italics so it is set off from the rest of the text):
First, we claim that since x′ is always positive here, x is always increasing, so that x(n + 1) is always bigger than or equal to x(n) for all positive integers n. In addition, since all these values must end up at x∞ and come up from below, these values must get very close together. We can be very precise about this. A little thought tells us that since the asymptotic value of x is a constant x∞, given any small positive number ε, there must be a positive number R so that

x(n + 1) − x(n) < ε,

for all n > R. Also, we claim

( x(n + 1/2) − x(n) ) / (1/2) ≤ x(n + 1) − x(n) < ε.

To see this, note that since x is increasing, we have

x(n + 1) − x(n + 1/2) ≥ 0,
x(n + 1) − x(n) ≥ 0,

and so adding

2 x(n + 1) − x(n + 1/2) − x(n) ≥ 0.

Rearranging, we have

x(n + 1) − x(n) + x(n + 1) − x(n + 1/2) ≥ 0,

which implies


2 x(n + 1) − x(n) − x(n + 1/2) ≥ 0.

This can be rearranged as

 
2 x(n + 1) − 2 x(n) − ( x(n + 1/2) − x(n) ) ≥ 0.

Thus, we find

x(n + 1) − x(n) ≥ ( x(n + 1/2) − x(n) ) / (1/2).

This shows our original claim is true. In fact, we can show

( x(n + 1/3) − x(n) ) / (1/3) ≤ x(n + 1) − x(n) < ε.

and, indeed, for all positive integers p,

( x(n + 1/p) − x(n) ) / (1/p) ≤ x(n + 1) − x(n) < ε.

Now let p go to infinity and we find

0 ≤ x′(n) = lim_{p→∞} ( x(n + 1/p) − x(n) ) / (1/p) ≤ ε.

So we have shown x′(n) ≤ ε. This holds for all n. Now let n go to infinity also and we see

0 ≤ (x′)∞ = lim_{n→∞} x′(n) ≤ ε.

We know the value of the small number ε is arbitrary. From this we can conclude (x′)∞ = 0 as we proposed.
We also know that

(x′)∞ = x∞ (a − e x∞),


and since (x′)∞ = 0 and x∞ > 0, we must have x∞ = a/e. We conclude that a trajectory starting in Region III hits the point (a/e, 0) as t goes to infinity. A similar analysis holds for a trajectory that starts in Region II. Such a trajectory can't asymptotically hit a point x∞ that is above a/e.
Our analysis is now complete. All trajectories end at the point (a/e, 0).
Now is this model biologically plausible? It seems not! It doesn't seem reasonable for the predator population to shrink to 0 while the food population converges to some positive number x∞!

6.3.5 Exercises

Analyze the following self - interaction Predator - Prey models.

Exercise 6.3.1.

x′(t) = 10 x(t) − 20 x(t) y(t) − 2 x(t)^2
y′(t) = −8 y(t) + 30 x(t) y(t) − 3 y(t)^2

Exercise 6.3.2.

x′(t) = 6 x(t) − 2 x(t) y(t) − 2 x(t)^2
y′(t) = −8 y(t) + 3 x(t) y(t) − 3 y(t)^2

Exercise 6.3.3.

x′(t) = 6 x(t) − 2 x(t) y(t) − 2 x(t)^2
y′(t) = −12 y(t) + 3 x(t) y(t) − 3 y(t)^2

Exercise 6.3.4.

x′(t) = 8 x(t) − 2 x(t) y(t) − 4 x(t)^2
y′(t) = −18 y(t) + 3 x(t) y(t) − 1 y(t)^2

Exercise 6.3.5.

x′(t) = 10 x(t) − 10 x(t) y(t) − 2 x(t)^2
y′(t) = −80 y(t) + 30 x(t) y(t) − 3 y(t)^2

6.4 Linearizing Nonlinear Systems


First, we have already reviewed the simple nonlinear system called the Predator - Prey Model in
the previous section.


6.4.1 Problems

For each of the following systems, do this:

1. Graph the nullclines x′ = 0 and y′ = 0 and show on the x − y plane the regions where x′ and y′ take on their various algebraic signs.

2. Find the equilibrium points.

3. At each equilibrium point, find the Jacobian of the system and analyze the linearized system as we have discussed in class. This means:

• find eigenvalues and eigenvectors if the system has real eigenvalues. You don't need the eigenvectors if the eigenvalues are complex.
• sketch a graph of the linearized solutions near the equilibrium point.

4. Use 1 and 3 to combine all this information into a full graph of the system.

Exercise 6.4.1.

x′ = y
y′ = −x + x^3/6 − y

Exercise 6.4.2.

x′ = −x + y
y′ = .1x − 2y − x^2 − .1x^3

Exercise 6.4.3.

x′ = y
y′ = −x + y (1 − 3x^2 − 2y^2)

Exercise 6.4.4.

x′ = .5(−h(x) + y)
y′ = .2(−x − 1.5y + 1.2)

for

h(x) = 17.76x − 103.79x^2 + 229.62x^3 − 226.31x^4 + 83.72x^5,

you should find three equilibrium points


Q1 = (0.063, 0.758)
Q2 = (0.285, 0.61)
Q3 = (0.884, 0.21)

with the eigenvalues for Q1 of −3.57 and −0.33; for Q2, of 1.77 and −0.25; and for Q3, of −1.33 and −0.4. Note if you do this carefully, you will be able to see that we can "trigger" a move from Q1 to Q3 or vice versa by choosing the right initial condition! So this gives us a way to implement computer memory: choose, say, Q1 to represent a binary "1" and Q3 to represent a binary "0". It also gives us a model of how emotional states can switch quickly.

Chapter 7
Numerical Eigenvalues and Eigenvectors

We will now discuss certain ways to compute eigenvalues and eigenvectors for a square matrix in
MatLab. For a given A, we can compute its eigenvalues as follows:

>> A = [1 2 3; 4 5 6; 7 8 -1]

A =

1 2 3
4 5 6
7 8 -1

>> E = eig(A)

E =

-0.3954
11.8161
-6.4206

>>

So we have found the eigenvalues of this small 3 × 3 matrix. To get the eigenvectors, we do this:

>> [V,D] = eig(A)

V =

 0.7530   -0.3054   -0.2580
-0.6525   -0.7238   -0.3770
 0.0847   -0.6187    0.8896

D =


-0.3954 0 0
0 11.8161 0
0 0 -6.4206

Note the eigenvalues are not returned in ranked order. The eigenvalue/ eigenvector pairs are thus

λ1 = −0.3954,   V1 = [ 0.7530; −0.6525; 0.0847 ]
λ2 = 11.8161,   V2 = [ −0.3054; −0.7238; −0.6187 ]
λ3 = −6.4206,   V3 = [ −0.2580; −0.3770; 0.8896 ]
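As a quick sanity check (our own addition, not part of the session above), each pair should satisfy A Vi = λi Vi. Since the columns of V collect the eigenvectors and D holds the eigenvalues on its diagonal, we can test all three pairs at once:

% The residual should be near machine precision if the pairs are correct.
norm(A*V - V*D)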

7.1 Symmetric Arrays:


Now let’s try a nice 5 × 5 array that is symmetric:

>> B = [1 2 3 4 5;
2 5 6 7 9;
3 6 1 2 3;
4 7 2 8 9;
5 9 3 9 6]

B =

1 2 3 4 5
2 5 6 7 9
3 6 1 2 3
4 7 2 8 9
5 9 3 9 6

>> [W,Z] = eig(B)

W =

0.8757 0.0181 -0.0389 0.4023 0.2637


-0.4289   -0.4216   -0.0846    0.6134    0.5049
 0.1804   -0.6752    0.4567   -0.4866    0.2571
-0.1283    0.5964    0.5736   -0.0489    0.5445
 0.0163    0.1019   -0.6736   -0.4720    0.5594

Z =

0.1454 0 0 0 0
0 2.4465 0 0 0
0 0 -2.2795 0 0
0 0 0 -5.9321 0
0 0 0 0 26.6197

It is possible to show that the eigenvalues of a symmetric matrix will be real and eigenvectors
corresponding to distinct eigenvalues will be 90◦ apart. Such vectors are called orthogonal and
recall this means their inner product is 0. Let’s check it out. The eigenvectors of our matrix are
the columns of W above. So their dot product should be 0!

>> C = dot(W(1:5,1),W(1:5,2))

C =

1.3336e-16

>>

Well, the dot product is not actually 0 because we are dealing with floating point numbers here, but as you can see it is close to machine zero (roughly the smallest number our computer chip can detect). Welcome to the world of computing!
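We can take this check further. Since the eigenvalues here are distinct and eig returns unit length eigenvectors, the whole matrix W should be numerically orthonormal; this one line sketch (ours) tests every pairwise inner product simultaneously:

% W'*W collects all the pairwise dot products; for an orthonormal set of
% columns this is the identity matrix up to floating point error.
norm(W'*W - eye(5))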

Chapter 8
Ordinary Differential Equations:

We will now try to solve differential equations of this form:

dy/dt = f(t, y)
y(t0) = y0

where f is sufficiently smooth, using a variety of numerical schemes.

8.1 Euler’s Method:


This method is based essentially on tangent line approximations. The iteration scheme generates a sequence {y_n} starting at y_0 using the following recursion equation:

y_{n+1} = y_n + h × f(t_n, y_n)
y_0 = y_0

where h is the step size we use for our underlying partition of the time space giving

ti = t0 + i × h

for appropriate indices.
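For example, for the test problem y′ = −y with y(0) = 1 and step size h = 0.4 that we solve below, each Euler step just multiplies the current value by 1 − h:

y_1 = y_0 + h (−y_0) = (1 − 0.4)(1) = 0.6,    y_2 = 0.36,    and in general y_n = 0.6^n.

Keep this pattern in mind: it is exactly what shows up in the tabulated run later in this section.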

8.1.1 The Matlab Implementation:


The basic code to implement the Euler method is quite simple. Here it is:


Listing 8.1: Euler Method

function [tvals, yvals] = FixedEuler(fname, y0, t0, tmax, h)
%
% fname  the name of the function f(t,y)
% h      our choice of step size
% t0     starting point
% tmax   end point
%
n = ceil((tmax - t0)/h) + 1;
yvals = zeros(n,1);
tvals = zeros(n,1);
tvals(1) = t0;
for i = 1:n-1
  tvals(i+1) = tvals(i) + h;
end
if tvals(n) >= tmax
  tvals(n) = tmax;
end
yvals(1) = y0;
for i = 1:n-1
  fval = feval(fname, tvals(i), yvals(i));
  yvals(i+1) = yvals(i) + h*fval;
end

8.1.2 The RunTime: Just Tables

We will test the Euler method code using two types of scripts; one just does the Euler computation
and the other will generate a plot of the Euler approximation of the solution vs. the true solution.
Of course, this means you have to solve the ordinary differential system for the solution by hand!
We will start by solving the system

dy/dt = −y
y(t0) = y0

for some choices of initial time t0 and initial state y0 . From basic differential equations, this
system has the solution

y(t) = y0 exp(t0 − t)

So we encode the right hand side function as func.m and the true solution as truefunc.m with
code

Listing 8.2: System Dynamics Function


function y = func(t, x)
y = -x;

and

Listing 8.3: True Solution Function

function y = truefunc(t0, y0, t)
y = y0*exp(t0 - t);

First, let's see what the Euler method generates as a table of values via the script ShowEuler.m below

Listing 8.4: Generating Tabular Results

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  h     = input('Input Step Size ');
  t0    = input('Enter initial time ');
  tmax  = input('Enter final time: ');
  y0    = input('Enter initial state: ');
  freq  = input('Input print frequency ');
  s = ['Euler(' fname sprintf(',%6.3f,%6.3f,%6.3f,%6.3f)', y0, t0, tmax, h)];
  disp(['   tvals    ' s])
  disp(' ')
  [tvals, yvals] = FixedEuler(fname, y0, t0, tmax, h);
  N = size(tvals, 1);
  disp(sprintf(' %5.2f   %20.16f', tvals(1), yvals(1)))
  for m = freq+1:freq:N
    disp(sprintf(' %5.2f   %20.16f', tvals(m), yvals(m)))
  end
  disp(sprintf(' %5.2f   %20.16f', tvals(N), yvals(N)))
end

We then can test our Euler code by running this script. Sample output is given below:

Listing 8.5: The Euler Method Test Script

>> ShowEuler
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'func'
Input Step Size 0.4
Enter initial time 0.0
Enter final time: 5.0
Enter initial state: 1.0
Input print frequency 5
   tvals    Euler(func, 1.000, 0.000, 5.000, 0.400)

  0.00   1.0000000000000000
  2.00   0.0777600000000000
  4.00   0.0060466176000000
  5.00   0.0013060694016000
Another Step Size Choice? (1=yes, 0=no). 0

8.1.3 The RunTime: Plots!


Now let’s generate some plots. We will use the following script

Listing 8.6: The Euler Method Plot Script

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  truename = input('Enter True Solution Name ');
  h    = input('Input Step Size ');
  t0   = input('Enter Initial Time ');
  tmax = input('Enter Final Time: ');
  y0   = input('Enter Initial State: ');
  s = ['Euler(' fname sprintf(',%6.3f,%6.3f,%6.3f,%6.3f)', y0, t0, tmax, h)];
  disp(['   tvals    ' s])
  disp(' ')
  [tvals, yvals] = FixedEuler(fname, y0, t0, tmax, h);
  N = size(tvals, 1);
  truevals = zeros(N, 1);
  for n = 1:N
    truevals(n) = feval(truename, t0, y0, tvals(n));
  end
  plot(tvals, truevals, tvals, yvals, '*');
  print -dpng eulerplot
end

Here is the Matlab session:

Listing 8.7: The Euler MatLab Session

>> PlotEuler
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'func'
Enter True Solution Name 'truefunc'
Input Step Size 0.4
Enter Initial Time 1.0
Enter Final Time: 5.0
Enter Initial State: 1.0
   tvals    Euler(func1, 1.000, 1.000, 5.000, 0.400)

Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'func'
Enter True Solution Name 'truefunc'
Input Step Size 0.1
Enter Initial Time 0.0
Enter Final Time: 5.0
Enter Initial State: 1.0
   tvals    Euler(func1, 1.000, 0.000, 5.000, 0.100)

Another Step Size Choice? (1=yes, 0=no). 0

The plots for the case of stepsize h = 0.4 show a lot of deviation from the true solution (at t = 5 the Euler value is about 0.00131 while the true value is e^{−5} ≈ 0.00674), as we might expect since we are just using a tangent line approximation.

Figure 8.1: Comparing The True Solution To Euler Method Solution: Step Size = 0.4

The plot for the case of stepsize h = 0.1 is much better!

8.2 Runge-Kutta Methods:

These methods are based on multiple function evaluations. The iteration scheme generates a sequence {y_n} starting at y_0 using the following recursion equation:

y_{n+1} = y_n + h × F(t_n, y_n, h, f)
y_0 = y_0

where h is the step size we use for our underlying partition of the time space giving

ti = t0 + i × h

for appropriate indices and F is some function of the previous approximate solution, the step size
and the right hand side function f .


Figure 8.2: Comparing The True Solution To Euler Method Solution: Step Size = 0.1

8.2.1 Estimating The Solution Numerically:

If we want to solve the scalar equation (this means x is not a vector)

dx/dt = f(t, x)
x(t0) = x0

one way we can attack it is to assume the solution x has enough smoothness to allow us to expand
it in a Taylor series expansion about the base point (t0 , x0 ). Using the Taylor Remainder theorem,
this gives for a given time point t:

x(t) = x(t0) + (dx/dt)(t0) (t − t0) + (1/2)(d^2x/dt^2)(t0) (t − t0)^2 + (1/6)(d^3x/dt^3)(c_t) (t − t0)^3

where c_t is some point in the interval [t0, t]. We also know by the chain rule that, since x′ is f,

d^2x/dt^2 = ∂f/∂t + (∂f/∂x)(dx/dt) = ∂f/∂t + (∂f/∂x) f


which implies, switching to a standard subscript notation for partial derivatives (yes, these calculations are indeed yucky!),

d^3x/dt^3 = (f_tt + f_tx f) + (f_xt + f_xx f) f + f_x (f_t + f_x f)

The important thing to note is that this third order derivative is made up of algebraic combinations of f and its various partial derivatives. We typically assume that on the interval [t0, t], all of these functions are continuous and bounded. Thus, letting ||g|| represent the maximum value of the continuous function g(s, u) on the interval [t0, t] × [x0, x1] for suitable x1, we know there is a constant C so that

| (d^3x/dt^3)(c_t) | ≤ (||f_tt|| + ||f_tx|| ||f||) + (||f_xt|| + ||f_xx|| ||f||) ||f|| + ||f_x|| (||f_t|| + ||f_x|| ||f||) = C

Now using the standard abbreviations

f^0 = f(t0, x0)
f_t^0 = (∂f/∂t)(t0, x0)
f_x^0 = (∂f/∂x)(t0, x0)
f_tt^0 = (∂^2f/∂t^2)(t0, x(t0))
f_tx^0 = (∂^2f/∂x∂t)(t0, x(t0))
f_xt^0 = (∂^2f/∂t∂x)(t0, x(t0))
f_xx^0 = (∂^2f/∂x^2)(t0, x(t0))

we see our solution can be written as

x(t) = x0 + f^0 (t − t0) + (1/2)(f_t^0 + f_x^0 f^0)(t − t0)^2 + (1/6)(d^3x/dt^3)(c_t)(t − t0)^3


Some Detailed Error Estimates:

Now let’s use the results we just derived as follows: let’s choose a time step h and set t to be
t0 + h. Then we have (t − t0 ) is h and our solution is

x(t0 + h) = x0 + f^0 h + (1/2)(f_t^0 + f_x^0 f^0) h^2 + (1/6)(d^3x/dt^3)(c_t) h^3

Now, note that using a Taylor expansion for the function f , we can write for a base point (t0 , x0 )

f(t0 + (1/2)h, x0 + (1/2)h f^0) = f^0 + (1/2) f_t^0 h + (1/2) f_x^0 h f^0 + (1/2) H^0(d, u)

where H^0(d, u) is what is called a Hessian term and is defined by

H^0(d, u) = [ (1/2)h   (1/2)h f^0 ] [ f_tt(d, u)   f_tx(d, u) ; f_xt(d, u)   f_xx(d, u) ] [ (1/2)h ; (1/2)h f^0 ]

The point (d, u) satisfies: d is in the interval [t0, t0 + (1/2)h] and u is in the interval [x0, x0 + (1/2)h f^0]. Note that we can estimate the effect of this term by

||(1/2) H^0|| ≤ (1/2)(h^2/4)( ||f_tt|| + 2 ||f_tx|| |f^0| + ||f_xx|| |f^0|^2 ) = (h^2/8)( ||f_tt|| + 2 ||f_tx|| |f^0| + ||f_xx|| |f^0|^2 )

Thus for a suitable constant D we can say

||(1/2) H^0|| ≤ D h^2/8

From the above, we note we can write

x(t0 + h) = x0 + h f(t0 + (1/2)h, x0 + (1/2)h f^0) − (1/2)h H^0(d, u) + (1/6)(d^3x/dt^3)(c_t) h^3

Finally, we see that if we take our maximum function values on the domain [t0, t0 + h] × [x0, x0 + h f^0] for all the terms in d^3x/dt^3, then there is a constant C so that

| (1/6)(d^3x/dt^3)(c_t) h^3 | ≤ C h^3/6


and so the error in approximating the true solution x by the term x0 + h f(t0 + (1/2)h, x0 + (1/2)h f^0) can be written as e(t) where on [t0, t0 + h]

|e(t)| ≤ D h^3/8 + C h^3/6

Thus, we can build an approximate solution φ to the true solution x that uses the information
we know at t0 with data x0 to construct the approximation

φ(t) = x0 + h f(t0 + (1/2)h, x0 + (1/2)h f^0)

which has a known error proportional to h^3. Since this error is due to the fact that we truncate the Taylor series expansions for f early, this is known as truncation error. We can follow the process above to build other sorts of approximations that use more terms of the Taylor series expansions. We would call this approximation an order 3 method because the local error is proportional to the third power of the step size h. Note our simple approximation

• Uses two function evaluations to build the approximate value: f(t0, x0) and f(t0 + (1/2)h, x0 + (1/2)h f(t0, x0)), as the sketch below illustrates.
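Here is a minimal MatLab sketch (ours) of that single order 2 step, using the test equation y′ = −y that we solve later in this chapter; the function handle f and the numbers are just for illustration.

% One step of the order 2 (midpoint) approximation derived above.
f = @(t, x) -x;            % test dynamics; any smooth f(t, x) would do
t0 = 0; x0 = 1; h = 0.4;
phi = x0 + h*f(t0 + h/2, x0 + (h/2)*f(t0, x0));   % two evaluations of f
% phi is 0.68 here, while the true value exp(-0.4) is about 0.6703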

If we carried out our Taylor series approximations one more term, that is, we use remainder terms due to d^4x/dt^4, we would find our local error is proportional to h^4 and we would need three function values to build our approximation. In general, to obtain a local order h^p method, we need p − 1 function evaluations if we use this technique. The example we have worked out here is called the Runge - Kutta Order 2 Method. Now to solve a problem on a long time interval, we do this approximation for each step and consecutively build an approximate solution sequence x0, x(t0 + h1), x(t0 + h1 + h2) and so forth, where hi denotes the step size we use at the ith step.
Finally, if we solve a vector system, we basically apply the technique above to each component of the vector solution. So if the problem is four dimensional, we are getting a constant C and D for each component. This of course complicates our life, as one component might have such a fast growing error that the C and D for it is very large compared to the others. To control this, we generally cut the step size h to bring down the error. Hence, the component whose error grows the fastest controls how big a step size we can use in this type of technique.
Thus, the Runge-Kutta methods are based on multiple function evaluations. Say you want
to solve our problem on the interval [0, tf ]. If we divide this interval into a collection of discrete
points somehow, say {tn } for n ranging from 0 to a largest index N , then at each such time point,
we generate a new estimate of the vector solution yn . So we start at the initial data y0 at time 0
and use an estimation formula to generate a new approximate solution value y1 for time t1 and
so forth until we hit the final time tN with associated final approximate solution value yN .
If we want a solution on the interval [0, 100], for example, and we used a uniform step size of h = 1.0e−3, we would generate 1000 y vectors for each unit of time. Hence, our solution would


be encoded in a vector of size 10^5 and y_n would be the numerical solution to our problem for time t_n = n h.
Now in the theory of the numerical solution of ordinary differential equations (ODEs) we find that there are several sources of error in this process:

1. As discussed earlier, usually we approximate the function f using a few terms of the Taylor series expansion of f around the point (t_n, y_n) at each iteration. This truncation of the Taylor series expansion is not the same as the true function f of course, so there is an error made. Depending on how many terms we use in the Taylor series expansion, the error can be large or small. If we let h_n denote the difference t_{n+1} − t_n, a fourth order method is one where this error is of the form C h_n^5 for some constant C. This means that if you use a step size which is one half h_n, the error decreases by a factor of 32. For a Runge - Kutta method to have a local truncation error of order 5, we would need to do 4 function evaluations. Now this error is local to this time step, so if we solve over a very long interval with say N being 100,000 or more, the global error can grow quite large due to the addition of so many small errors. So the numerical solution of an ODE can be very accurate locally but still have a lot of problems when we try to solve over a long time interval. For example, if we want to track a voltage pulse over 1000 milliseconds, a step size of 10^{−4} amounts to 10^7 individual steps. Even a fifth order method can begin to have problems. This error is called truncation error.

2. We can’t represent numbers with perfect precision on any computer system, so there is
an error due to that which is called round - off error which is significant over long
computations.

3. There is usually a modeling error also. The model we use does not perfectly represent the
physics, biology and so forth of our problem and so this also introduces a mismatch between
our solution and the reality we might measure in a laboratory.

The solution scheme discussed above generates a sequence {y_n} starting at y_0 using some sort of recursion equation:

y_{n+1} = y_n + h_n × F(t_n, y_n, h_n, f)
y_0 = y_0

where h_n is the step size we use at time point t_n and the function F is some function of the previous approximate solution y_n, the step size h_n and the dynamics vector f. We usually choose a numerical method which allows us to estimate what a good step size would be at each step n and then alter the step size to that optimal choice. Our order two method discussed earlier used

F(t_n, y_n, h_n, f) = f(t_n + (1/2)h_n, x_n + (1/2)h_n f(t_n, x_n))


A typical numerical algorithm which will do this is the Runge - Kutta - Fehlberg 45 (RKF45) algorithm, which works like this:

• Use a Runge - Kutta order 4 method to generate a solution with local error proportional to h^5. This needs four function evaluations.

• Do one more function evaluation to obtain a Runge - Kutta order 5 method which generates a solution with local error proportional to h^6.

• We now have two ways to approximate the true solution x at t + h. This gives us a way to compare the two approximations and to use one more function evaluation to get an estimate of the error we are making. We can then use that error estimate to see if our step size h is too large (that is, the error is too big and we can compute a new h using our information), just right (we keep h as it is) or too small (we double the step size).

Since this algorithm uses six function evaluations per time step that is actually used, it is very costly. Indeed, if we alter h, we go back and recompute all of the solutions. There are better methods such as Adams - Bashforth and Adams - Moulton. However, for our tutorial here, we will start with this method.
Finally, just to give you the flavor of what needs to be computed, here is an outline of the standard Runge - Kutta - Fehlberg method: in what follows, K is a six dimensional collection of stage vectors which we use to store intermediate results. Now recall that the vector function f we use in our Hodgkin - Huxley simulations is forbiddingly complex and so these six function evaluations really slow us down!

Listing 8.8: The Runge - Kutta - Fehlberg Flowchart

h ⇐ h0
t ⇐ t0
while ( t is less than final time tf ) {
   Compute f(t, y)
   K[0] ⇐ h f
   z ⇐ y + 0.25 K[0]

   Compute f(t + (1/4) h, z)
   K[1] ⇐ h f
   z ⇐ y + (1/32)(3 K[0] + 9 K[1])

   Compute f(t + (3/8) h, z)
   K[2] ⇐ h f
   z ⇐ y + (1/2197)(1932 K[0] − 7200 K[1] + 7296 K[2])

   Compute f(t + (12/13) h, z)
   K[3] ⇐ h f
   z ⇐ y + (439/216) K[0] − 8 K[1] + (3680/513) K[2] − (845/4104) K[3]

   Compute f(t + h, z)
   K[4] ⇐ h f
   z ⇐ y − (8/27) K[0] + 2 K[1] − (3544/2565) K[2] + (1859/4104) K[3] − (11/40) K[4]

   Compute f(t + (1/2) h, z)
   K[5] ⇐ h f

   // Compute the error vector e for this step size h
   e ⇐ (1/360) K[0] − (128/4275) K[2] − (2197/75240) K[3] + (1/50) K[4] + (2/55) K[5]
   ||e||∞ ⇐ max{ |e_i| }
   ||y||∞ ⇐ max{ |y_i| }

   /*
    See if this step size is acceptable. Given the tolerances ε1, the amount
    of error we are willing to make in our discretization of the problem,
    and ε2, the amount of weight we wish to place on the solution we
    previously computed y, compute the local decision parameter
   */
   η ⇐ ε1 + ε2 ||y||∞
   if ( ||e||∞ < η ) {
      // this step size is acceptable; use it to compute the next value of y
      δy ⇐ (16/135) K[0] + (6656/12825) K[2] + (28561/56430) K[3] − (9/50) K[4] + (2/55) K[5]
      y ⇐ y + δy
      t ⇐ t + h

      /*
       Now although this step size is acceptable, it might be smaller than
       necessary; determine if ||e||∞ is smaller than some fraction of η --
       heuristically, 0.3 is a reasonable fraction to use.
      */
      if ( ||e||∞ < 0.3 η ) {
         // Double the step size in this case
         h ⇐ 2.0 h
      }
   }
   /*
    It is possible that the step size is too big,
    which can be determined by checking the local error
   */
   if ( ||e||∞ ≥ η ) {
      if ( ||e||∞ > ε1 ) {
         /*
          The maximum error is larger than the discretization error. A
          reasonable way to reset the step size is to reset to
          0.9 h η/||e||∞. The rationale behind this choice is that if the
          biggest error in the newly computed solution is 5 times the
          allowed tolerance sought -- ε1, then
             (ε1 + ε2 ||y||∞)/(5 ε1) = 0.2 (1 + (ε2/ε1) ||y||∞).
          Now if the tolerance ε2 is r times ε1, the computation above
          would reset h to be 0.18 h (1 + r ||y||∞). Hence the maximum
          component size of the solution y influences how the step size
          is reset. If ||y||∞ was 10, then for r = 0.1, the new step size
          is 0.36 h. Of course, the calculations are a bit more messy, as
          the ||y||∞ term is constantly changing, but this should give
          you the idea.
         */
         h ⇐ 0.9 h η/||e||∞
      }
   }
}

8.2.2 The Matlab Implementation:


The basic code to implement the Runge-Kutta methods is broken into two pieces. The first one, RKstep.m, implements the evaluation of the next approximate solution at the point (t_n, y_n) given the old approximation at (t_{n−1}, y_{n−1}). Here is that code for Runge-Kutta codes of orders one to four.

Listing 8.9: Runge - Kutta Codes

function [tnew, ynew, fnew] = RKstep(fname, tc, yc, fc, h, k)
%
% fname  the name of the right hand side function f(t,y);
%        t is a scalar usually called time and y is a vector of size d
% yc     approximate solution to y'(t) = f(t, y(t)) at t = tc
% fc     f(tc, yc)
% h      the time step
% k      the order of the Runge - Kutta method, 1 <= k <= 4
%
% tnew   tc + h
% ynew   approximate solution at tnew
% fnew   f(tnew, ynew)
%
if k == 1
  k1 = h*fc;
  ynew = yc + k1;
elseif k == 2
  k1 = h*fc;
  k2 = h*feval(fname, tc+(h/2), yc+(k1/2));
  % the midpoint (order 2) update derived earlier in the text
  ynew = yc + k2;
elseif k == 3
  k1 = h*fc;
  k2 = h*feval(fname, tc+(h/2), yc+(k1/2));
  k3 = h*feval(fname, tc+h, yc-k1+2*k2);
  ynew = yc + (k1+4*k2+k3)/6;
elseif k == 4
  k1 = h*fc;
  k2 = h*feval(fname, tc+(h/2), yc+(k1/2));
  k3 = h*feval(fname, tc+(h/2), yc+(k2/2));
  k4 = h*feval(fname, tc+h, yc+k3);
  ynew = yc + (k1+2*k2+2*k3+k4)/6;
else
  disp(sprintf('The RK method %2d order is not allowed!', k));
end
tnew = tc + h;
fnew = feval(fname, tnew, ynew);

Once the step is implemented, we solve the system using the RK steps like this:

Listing 8.10: The Runge - Kutta Solution

function [tvals, yvals] = FixedRK(fname, t0, y0, h, k, n)
%
% Gives approximate solution to
%    y'(t) = f(t, y(t))
%    y(t0) = y0
% using a kth order RK method
%
% t0     initial time
% y0     initial state
% h      stepsize
% k      RK order, 1 <= k <= 4
% n      number of steps to take
%
% tvals  time values of form
%        tvals(j) = t0 + (j-1)*h, 1 <= j <= n
% yvals  approximate solution
%        yvals(:,j) = approximate solution at tvals(j), 1 <= j <= n
%
tc = t0;
yc = y0;
tvals = tc;
yvals = yc;
fc = feval(fname, tc, yc);
for j = 1:n-1
  [tc, yc, fc] = RKstep(fname, tc, yc, fc, h, k);
  yvals = [yvals yc];
  tvals = [tvals tc];
end

8.2.3 The RunTime: Just Tables

We will test the RK methods code using two types of scripts; one just does the RK computations
and the other will generate a plot of the RK approximation of the solution vs. the true solution.
Again, we will solve the system


dy/dt = −y
y(t0) = y0

for some choices of initial time t0 and initial state y0 . From basic differential equations, this
system has the solution

y(t) = y0 exp(t0 − t)

We can then compare the RK solutions to the Euler solutions.


First, let's see what the RK method generates as a table of values via the script ShowFixedRK.m below

Listing 8.11: Generating Tabular Data For The Runge - Kutta Method

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  h    = input('Input Step Size ');
  t0   = input('Enter initial time ');
  tmax = input('Enter final time: ');
  y0   = input('Enter initial state: ');
  k    = input('Enter RK Order in range [1,4]: ');
  freq = input('Input print frequency ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  disp(['   tvals    ' s])
  disp(' ')
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  disp(sprintf(' %5.2f   %20.16f', tvals(1), yvals(1)))
  for m = freq+1:freq:n
    disp(sprintf(' %5.2f   %20.16f', tvals(m), yvals(m)))
  end
  disp(sprintf(' %5.2f   %20.16f', tvals(n), yvals(n)))
end

We then can test our RK code by running this script. Sample output is given below:

Listing 8.12: RK MatLab Session

>> ShowFixedRK
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'func'
Input Step Size 0.4
Enter initial time 0.0
Enter final time: 5.0
Enter initial state: 1.0
Enter RK Order in range [1,4]: 2
Input print frequency 5
   tvals    FixedRK(func, 0.000, 1.000, 0.400,   2,  14)

  0.00   1.0000000000000000
  2.00   0.1073741824000000
  4.00   0.0115292150460685
  5.20   0.0030223145490366
Another Step Size Choice? (1=yes, 0=no). 0

8.2.4 The RunTime: Plots!


Now let’s generate some plots. We will use the following script

Listing 8.13: The Generating Plots Script

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  truename = input('Enter True Solution Name ');
  h    = input('Input Step Size ');
  t0   = input('Enter Initial Time ');
  tmax = input('Enter Final Time: ');
  y0   = input('Enter Initial State: ');
  k    = input('Enter RK Order in range [1,4]: ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  truevals = zeros(n, 1);
  for i = 1:n
    truevals(i) = feval(truename, t0, y0, tvals(i));
  end
  plot(tvals, truevals, tvals, yvals, '*');
  print -dpng fixedrkplot
end

Here is the Matlab session:

Listing 8.14: The Runge - Kutta MatLab Session

>> PlotFixedRK
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'func'
Enter True Solution Name 'truefunc'
Input Step Size 0.4
Enter Initial Time 0.0
Enter Final Time: 5.0
Enter Initial State: 1.0
Enter RK Order in range [1,4]: 2
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'func'
Enter True Solution Name 'truefunc'
Input Step Size 0.4
Enter Initial Time 0.0
Enter Final Time: 5.0
Enter Initial State: 1.0
Enter RK Order in range [1,4]: 3
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'func'
Enter True Solution Name 'truefunc'
Input Step Size 0.4
Enter Initial Time 0.0
Enter Final Time: 5.0
Enter Initial State: 1.0
Enter RK Order in range [1,4]: 4
Another Step Size Choice? (1=yes, 0=no). 0

The plots for the case of stepsize h = 0.4 and RK order 2 are much better than the corresponding
Euler plot.

Figure 8.3: True and RK Order 2 Solution: step size = 0.4

The plots for the case of stepsize h = 0.4 and RK order 3 are even better:
The plots for the case of stepsize h = 0.4 and RK order 4 are the best yet!


Figure 8.4: True and RK Order 3 Solution: step size = 0.4

8.3 How Do We Solve Systems?


We need to update our codes to handle systems.

8.3.1 Setting Up the Vector Functions:

The right hand side of our system is now a column vector

f(t, y) = [ y(2) ; 6t ]

and the true solution is now the vector function

θ(t, y) = [ φ(t) ; dφ/dt ]

We will implement these vector functions in the Matlab code vecfunc1.m and truevecfunc1.m
respectively. The right hand side is thus:

Listing 8.15: System Dynamics


Figure 8.5: True and RK Order 4 Solution: step size = 0.4

function f = vecfunc1(t, y)
f = [y(2); 6*t];

while the true vector function is

Listing 8.16: True Vector Solution

function f = truevecfunc1(t0, t1, b, t)
c = (b(2)-b(1))/(t1-t0) - t1^2 - t0*t1 + 2*t0^2;
d = b(1);
f = [d+(t-t0)*(c+t^2+t0*t-2*t0^2); c+t^2+t0*t-2*t0^2+(t-t0)*(2*t+t0)];
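As a quick check (our own verification): the first component above is φ(t) = b(1) + (t − t0)(c + t^2 + t0 t − 2 t0^2), so

φ′(t) = c + t^2 + t0 t − 2 t0^2 + (t − t0)(2t + t0)    and    φ″(t) = (2t + t0) + (2t + t0) + 2(t − t0) = 6t.

Thus θ really does solve the system above (which encodes the second order problem y″ = 6t), and the constants c and d are chosen so that φ(t0) = b(1) and φ(t1) = b(2).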

8.3.2 Updating Our Solver Codes:


We will possibly need to update our codes

• ShowFixedRK.m

• PlotFixedRK.m

Recall the ShowFixedRK.m script


Listing 8.17: Old ShowFixedRK Script

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  h    = input('Input Step Size ');
  t0   = input('Enter initial time ');
  tmax = input('Enter final time: ');
  y0   = input('Enter initial state: ');
  k    = input('Enter RK Order in range [1,4]: ');
  freq = input('Input print frequency ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  disp(['   tvals    ' s])
  disp(' ')
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  disp(sprintf(' %5.2f   %20.16f', tvals(1), yvals(1)))
  for m = freq+1:freq:n
    disp(sprintf(' %5.2f   %20.16f', tvals(m), yvals(m)))
  end
  disp(sprintf(' %5.2f   %20.16f', tvals(n), yvals(n)))
end

We now run this script using the initial data

y0 = [ 1 ; 2 ]

This generates the Matlab session

Listing 8.18: System Solution Matlab Session

>> ShowFixedRK
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: 'vecfunc1'
Input Step Size 0.2
Enter initial time 0.0
Enter final time: 1.0
Enter initial state: [1;2]
Enter RK Order in range [1,4]: 4
Input print frequency 5
   tvals    FixedRK(vecfunc1, 0.000, 1.000, 2.000, 2.000000e-01, 4), 6.000,

  0.00   1.0000000000000000
  1.00   4.0000000000000000
  1.00   4.0000000000000000
Another Step Size Choice? (1=yes, 0=no).

and everything looks fine except that our header

s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];

will no longer work, as we are trying to print out the vector y0 into a single float data field! This could be fixed, but note the fix would require that we know the size of the vector to set up the print statement. Similar problems arise when we try to use the plotting script from before.
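One possible fix is sketched below (our own suggestion, not from the original scripts). MatLab's sprintf cycles its format specification over every element of an array argument, so we can let the vector y0 generate one field per component without hard coding the dimension:

% Each component of y0 gets its own %6.3f field because sprintf reuses
% the format string for every element of the array argument.
s = ['FixedRK(' fname sprintf(',%6.3f', y0) sprintf(',%6.3f,%4d,%4d)', h, k, n)];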
Recall our original plotting script:

Listing 8.19: Original Plotting Script

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  truename = input('Enter True Solution Name ');
  h    = input('Input Step Size ');
  t0   = input('Enter Initial Time ');
  tmax = input('Enter Final Time: ');
  y0   = input('Enter Initial State: ');
  k    = input('Enter RK Order in range [1,4]: ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  truevals = zeros(n, 1);
  for i = 1:n
    truevals(i) = feval(truename, t0, y0, tvals(i));
  end
  plot(tvals, truevals, tvals, yvals, '*');
  print -dpng fixedrkplot
end

To get this to work, we need to do several things:

• We need to decide which component of our vectors to plot.

• We need to worry more about row vs. column orientations of our vectors in our calculations.

• We need to distinguish between the data vector which controls the solution for the boundary
value problem and the initial data vector which controls the numerical solution of the ODE.

This is our altered code:

Listing 8.20: System Plotting Code

while input('Do you want to Solve This Problem? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  truename = input('Enter True Solution Name: ');
  dim  = input('Enter Dimension of State: ');
  h    = input('Input Step Size: ');
  t0   = input('Enter Initial Time: ');
  tmax = input('Enter Final Time: ');
  y0   = input('Enter Initial Data: ');
  b    = input('Enter Boundary Data: [value at t0;value at t1] ');
  index = input('Enter Which Component to Plot: ');
  k    = input('Enter RK Order in range [1,4]: ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  truevals = zeros(n, dim);
  for i = 1:n
    w = feval(truename, t0, tmax, b, tvals(i))';
    truevals(i,:) = w;
  end
  plot(tvals, truevals(:,index), tvals, yvals(:,index), '*');
  print -deps fixedvecrkplot
end

Here is a run-time test: we will try to zero in on the right initial condition to use to solve the
boundary value problem:

Listing 8.21: System MatLab Session: First Initial Condition

>> PlotVecFixedRK
Do you want to Solve This Problem? (1=yes, 0=no). 1
Enter Function Name: 'vecfunc1'
Enter True Solution Name: 'truevecfunc1'
Enter Dimension of State: 2
Input Step Size: 0.2
Enter Initial Time: 0.0
Enter Final Time: 1.0
Enter Initial Data: [1;2]
Enter Boundary Data: [value at t0;value at t1] [1;2]
Enter Which Component to Plot: 1
Enter RK Order in range [1,4]: 4
Do you want to Solve This Problem? (1=yes, 0=no). 1
Enter Function Name: 'vecfunc1'
Enter True Solution Name: 'truevecfunc1'
Enter Dimension of State: 2
Input Step Size: 0.2
Enter Initial Time: 0.0
Enter Final Time: 1.0
Enter Initial Data: [1;0.05]
Enter Boundary Data: [value at t0;value at t1] [1;2]
Enter Which Component to Plot: 1
Enter RK Order in range [1,4]: 4

with associated plots.

We use initial data

y0 = [ 1 ; 2 ]

Figure 8.6: System Plot First Initial Condition

Next, we use initial data

y0 = [ 1 ; 0.3 ]

Clearly, initial data

y0 = [ 1 ; 0 ]

does the job.
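The hunt for the right initial slope can also be automated. The sketch below is our own (it assumes the FixedRK and vecfunc1 codes above): we bisect on the unknown slope s so that the first component of the solution hits the boundary value 2 at t = 1. It should close in on s = 0, the value we found by hand.

% Bisection on the initial slope s for this boundary value problem:
% we want the first solution component to equal 2 at the final time 1.
slo = -1.0; shi = 1.0;                 % a bracket for the unknown slope
h = 0.2; n = ceil(1.0/h) + 1;
for iter = 1:40
  s = (slo + shi)/2;
  [tv, yv] = FixedRK('vecfunc1', 0.0, [1; s], h, 4, n);
  if yv(1, end) < 2                    % ended below the target value
    slo = s;
  else
    shi = s;
  end
end
disp(sprintf('initial slope s = %10.6f', s))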

8.4 Predator - Prey Numerical Solutions


Let’s try to solve a typical predator prey system such as the one given below numerically.


Figure 8.7: System MatLab Session: Second Initial Condition

x′(t) = a x(t) − b x(t) y(t)
y′(t) = −c y(t) + d x(t) y(t)

We can easily convert our model to a matrix - vector system. The right hand side of our system is now a column vector: we identify x with the component x(1) and y with the component x(2). This gives the vector function

f(t, y) = [ a x(1) − b x(1) x(2) ; −c x(2) + d x(1) x(2) ]

and we can no longer find the true solution, although our theoretical investigations have told us
a lot about the behavior that the true solution must have.
We will implement this vector function in the MatLab code PredPrey.m. The right hand side
is thus for the following predator - prey problem:

x′(t) = 2 x(t) − 3 x(t) y(t)
y′(t) = −4 y(t) + 5 x(t) y(t)


Figure 8.8: System MatLab Session: Third Initial Condition

Listing 8.22: System Dynamics

function f = PredPrey(t, x)
f = [2*x(1)-3*x(1)*x(2); -4*x(2)+5*x(1)*x(2)];

Of course, the numbers we put in this file would change if we changed the problem!

8.4.1 Updating Our Solver Codes:


We will possibly need to update our codes

• ShowFixedRK.m

• PlotFixedRK.m

to handle these systems. Recall the ShowFixedRK.m script

Listing 8.23: Old ShowFixedRK Script

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  h    = input('Input Step Size ');
  t0   = input('Enter initial time ');
  tmax = input('Enter final time: ');
  y0   = input('Enter initial state: ');
  k    = input('Enter RK Order in range [1,4]: ');
  freq = input('Input print frequency ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  disp(['   tvals    ' s])
  disp(' ')
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  disp(sprintf(' %5.2f   %20.16f', tvals(1), yvals(1)))
  for m = freq+1:freq:n
    disp(sprintf(' %5.2f   %20.16f', tvals(m), yvals(m)))
  end
  disp(sprintf(' %5.2f   %20.16f', tvals(n), yvals(n)))
end

This script is now altered to ShowSystemFixedRK.m which is shown below:

Listing 8.24: ShowSystemFixedRK Script

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  h    = input('Input Step Size ');
  t0   = input('Enter initial time ');
  tmax = input('Enter final time: ');
  y0   = input('Enter initial state: ');
  k    = input('Enter RK Order in range [1,4]: ');
  freq = input('Input print frequency ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  disp(['   tvals    ' s])
  disp(' ')
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  % print both components of the state at each selected time
  disp(sprintf(' %5.2f   %20.16f %20.16f', tvals(1), yvals(1,1), yvals(1,2)))
  for m = freq+1:freq:n
    disp(sprintf(' %5.2f   %20.16f %20.16f', tvals(m), yvals(m,1), yvals(m,2)))
  end
  disp(sprintf(' %5.2f   %20.16f %20.16f', tvals(n), yvals(n,1), yvals(n,2)))
end

8.4.2 Estimating The Period T Numerically

We can use this script to estimate the period T for a given Predator - Prey problem. To see how
we can do this, let’s use this script, with the problem already encoded, with the initial condition

y0 = [ 2 ; 1 ].

This generates the MatLab session

>> ShowSystemFixedRK
Another Step Size Choice? (1=yes, 0=no). 1
Enter Function Name: ’PredPrey’
Input Step Size 0.2
Enter initial time 0.0
Enter final time: 5.0
Enter initial state: [2;1]
Enter RK Order in range [1,4]: 4
Input print frequency 5

where we have chosen various values for the step size and so forth. This generates the table of
values:

Time Food Fish Sharks


0.00 2.0000000000000000 1.0000000000000000
1.00 0.2233823750099171 0.3873694199750348
2.00 1.0475810038309334 0.0884347592409032
3.00 0.4041621681612533 1.9339134009672625
4.00 0.3916750648746422 0.1339651272051812
5.00 1.9584190505447110 0.4005970029037315
5.00 1.9584190505447110 0.4005970029037315

We start at (2, 1) and so after some time T we will return to that value. We can’t see this return
in the table above, so let’s try again with a smaller step size.

Another Step Size Choice? (1=yes, 0=no). 1


Enter Function Name: ’PredPrey’
Input Step Size 0.2
Enter initial time 0.0
Enter final time: 10.0
Enter initial state: [2;1]
Enter RK Order in range [1,4]: 4
Input print frequency 5
tvals FixedRK(PredPrey, 0.000, 2.000, 1.000,
2.000000e-01, 4),51.000,

Time Food Fish Sharks


0.00 2.0000000000000000 1.0000000000000000
1.00 0.2233823750099171 0.3873694199750348
2.00 1.0475810038309334 0.0884347592409032
3.00 0.4041621681612533 1.9339134009672625
4.00 0.3916750648746422 0.1339651272051812
5.00 1.9584190505447110 0.4005970029037315
6.00 0.2106014152745480 0.6036132502832914


7.00 0.8171614441112490 0.0838469239322386


8.00 0.7489400343298448 2.3007865099324856
9.00 0.3192779157695743 0.1842898716604741
10.00 1.6711511945987814 0.2033853579598897
10.00 1.6711511945987814 0.2033853579598897

We still can’t see a return to the initial values. So let’s try again:

Another Step Size Choice? (1=yes, 0=no). 1


Enter Function Name: ’PredPrey’
Input Step Size 0.05
Enter initial time 0.0
Enter final time: 20.0
Enter initial state: [2;1]
Enter RK Order in range [1,4]: 4
Input print frequency 10
tvals FixedRK(PredPrey, 0.000, 2.000, 1.000,
5.000000e-02, 4),401.000,

Time Food Fish Sharks


0.00 2.0000000000000000 1.0000000000000000
0.50 0.2987655689253791 1.6233936537789422
1.00 0.2199998527938605 0.3819097054760270
1.50 0.4362516977262944 0.1110980902833591
2.00 1.0418758936663761 0.0851522960338094
2.50 2.0582266521304660 0.5941165722918020
... lots of missing values
13.00 1.7237968736733214 1.5577860095057854
13.50 0.2456285690840624 1.3058347997560626
14.00 0.2383901289983887 0.3020753689986878
14.50 0.5010880148218820 0.0969265619996626
15.00 1.2006250859695471 0.0970645264489389
15.50 2.0031526397787780 0.9908588436175139
16.00 0.3000483343170592 1.6296797368353371
... more missing values
20.00 0.8971803854286021 0.0792541910843149
20.00 0.8971803854286021 0.0792541910843149
Another Step Size Choice? (1=yes, 0=no). 0

Now we see that the period of this solution trajectory is about 15.5 time units. But as you can
see it is not easy to get this estimate as it involves a lot of trial and error. It also is a trial and
error process to get the right step size h!
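We can take some of the guesswork out of this. The sketch below is our own (it reuses FixedRK and PredPrey): it measures the distance from each trajectory point back to the starting point (2, 1) and reports the time of the closest return, which is one simple way to estimate the period T automatically.

% Estimate the period: find when the trajectory first comes back
% close to its starting point (2, 1).
[timevals, solutionvals] = FixedRK('PredPrey', 0.0, [2;1], 0.05, 4, 401);
tvals = timevals';
yvals = solutionvals';
d = sqrt((yvals(:,1) - 2).^2 + (yvals(:,2) - 1).^2);
[dmin, m] = min(d(50:end));    % skip the early samples near t = 0
T = tvals(m + 49);
disp(sprintf('closest return at T = %8.4f (distance %8.4f)', T, dmin))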

8.4.3 Updating Our Plotting Scripts


Recall our original plotting script:

Listing 8.25: Original Plotting Script

while input('Another Step Size Choice? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  truename = input('Enter True Solution Name ');
  h    = input('Input Step Size ');
  t0   = input('Enter Initial Time ');
  tmax = input('Enter Final Time: ');
  y0   = input('Enter Initial State: ');
  k    = input('Enter RK Order in range [1,4]: ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  truevals = zeros(n, 1);
  for i = 1:n
    truevals(i) = feval(truename, t0, y0, tvals(i));
  end
  plot(tvals, truevals, tvals, yvals, '*');
  print -dpng fixedrkplot
end

We now want to plot the x(2) values versus the x(1) values: we should see a periodic orbit! This
is our altered code:

Listing 8.26: System Plotting Code

while input('Do you want to Solve This Problem? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  h    = input('Input Step Size: ');
  t0   = input('Enter Initial Time: ');
  tmax = input('Enter Final Time: ');
  y0   = input('Enter Initial Data: ');
  figname = input('Enter the name for the plot: ');
  k    = input('Enter RK Order in range [1,4]: ');
  n = ceil((tmax-t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  plot(yvals(:,1), yvals(:,2), 'g-');
  xlabel('x');
  ylabel('y');
  title('Phase Plane Plot of y vs x');
  print('-dpng', figname);
end

Note that the plot command looks a little different. Also, since we enter the name of the file we
want to print to as a variable, we print the plot to the file a little differently.

plot(yvals(:,1),yvals(:,2),’g-’);
print(’-dpng’,figname);

The associated run time session is


>> PlotSystemFixedRK
Do you want to Solve This Problem? (1=yes, 0=no). 1
Enter Function Name: ’PredPrey’
Input Step Size: .05
Enter Initial Time: 0.0
Enter Final Time: 15.5
Enter Initial Data: [2;1]
Enter the name for the plot: ’predprey1.png’
Enter RK Order in range [1,4]: 4
Do you want to Solve This Problem? (1=yes, 0=no).

which generates the plot predprey1.png seen in Figure 8.9.

Figure 8.9: System Plot First Initial Condition

Here is what happens at a different initial condition:

>> PlotSystemFixedRK
Do you want to Solve This Problem? (1=yes, 0=no). 1
Enter Function Name: ’PredPrey’
Input Step Size: .05
Enter Initial Time: 0.0
Enter Final Time: 30.0
Enter Initial Data: [.4;.6]
Enter the name for the plot: ’predprey2.png’
Enter RK Order in range [1,4]: 4

which generates the plot predprey2.png we see in Figure 8.10.

8.4.4 Exercises

Here are some problems on using MatLab.


Figure 8.10: System Plot Second Initial Condition

Exercise 8.4.1.

x′(t) = 4 x(t) − 7 x(t) y(t)
y′(t) = −9 y(t) + 7 x(t) y(t)

1. Solve using the Runge - Kutta codes for h sufficiently small to generate a periodic orbit using initial conditions:

   (a) [ 2 ; 1 ]
   (b) [ 5 ; 2 ]

2. Use our MatLab codes to estimate the period in each case

3. Generate plots of these trajectories

Exercise 8.4.2.

x′(t) = 90 x(t) − 45 x(t) y(t)
y′(t) = −180 y(t) + 20 x(t) y(t)


1. Solve using the Runge - Kutta codes for h sufficiently small to generate a periodic orbit using initial conditions:

   (a) [ 4 ; 12 ]
   (b) [ 5 ; 20 ]

2. Use our MatLab codes to estimate the period in each case

3. Generate plots of these trajectories

Exercise 8.4.3.

x′(t) = 10 x(t) − 5 x(t) y(t)
y′(t) = −4 y(t) + 20 x(t) y(t)

1. Solve using the Runge - Kutta codes for h sufficiently small to generate a periodic orbit using initial conditions:

   (a) [ 40 ; 2 ]
   (b) [ 5 ; 25 ]

2. Use our MatLab codes to estimate the period in each case

3. Generate plots of these trajectories

Exercise 8.4.4.

x′(t) = 7 x(t) − 14 x(t) y(t)
y′(t) = −6 y(t) + 3 x(t) y(t)

1. Solve using the Runge - Kutta codes for h sufficiently small to generate a periodic orbit using initial conditions:

   (a) [ 7 ; 12 ]
   (b) [ .2 ; 2 ]

2. Use our MatLab codes to estimate the period in each case

3. Generate plots of these trajectories

Exercise 8.4.5.

x′(t) = 8 x(t) − 4 x(t) y(t)
y′(t) = −10 y(t) + 2 x(t) y(t)

1. Solve using the Runge - Kutta codes for h sufficiently small to generate a periodic orbit using initial conditions:

   (a) [ .1 ; 18 ]
   (b) [ 6 ; .1 ]

2. Use our MatLab codes to estimate the period in each case

3. Generate plots of these trajectories

8.5 Predator - Prey Self Interaction Numerically


From our theoretical investigations, we know if the ratio c/d exceeds the ratio a/e, the solutions should approach the point (a/e, 0) as time gets large. Let's see if we get that result numerically. Let's try this problem,

x′(t) = 2 x(t) − 3 x(t) y(t) − 3 x(t)^2
y′(t) = −4 y(t) + 5 x(t) y(t) − 3 y(t)^2


which is encoded in the file PredPrey2.m listed below:

Listing 8.27: Predator Prey System With Self Interaction On Both Species

function f = PredPrey2(t, x)
f = [2*x(1) - 3*x(1)*x(2) - 3*x(1)*x(1); -4*x(2) + 5*x(1)*x(2) - 3*x(2)*x(2)];
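Before we run it, note that we can check the nullcline claim by hand; this little verification is ours. Away from the axes, the nullclines are 2 − 3y − 3x = 0 and −4 + 5x − 3y = 0, so at a crossing

3x + 3y = 2,  5x − 3y = 4  ⟹  8x = 6,  x = 3/4,  y = 2/3 − 3/4 = −1/12 < 0.

Since the crossing occurs at a negative y value, there is no equilibrium with both populations positive, and so (a/e, 0) = (2/3, 0) is the relevant attractor here.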

We generate the plot predpreyself11a.png as shown in figure 8.11.

Figure 8.11: Predator Prey System: Self Interaction 3 on Food Fish and 3 on Sharks

We expect the solution to approach the fixed solution (a/e, 0) = (2/3, 0) and it does. Now let’s
look at what happens when the nullclines cross. We now use the problem

x'(t) = 8 x(t) − 3 x(t) y(t) − 3 x(t)^2
y'(t) = −4 y(t) + 5 x(t) y(t) − 3 y(t)^2

which is encoded in the file PredPrey3.m listed below:

Listing 8.28: Predator Prey System With Self Interaction On Both Species

function f = PredPrey3(t, x)
f = [8*x(1) - 3*x(1)*x(2) - 3*x(1)*x(1); -4*x(2) + 5*x(1)*x(2) - 3*x(2)*x(2)];

Since a/e = 8/3 and c/d = 4/5, the nullclines cross and the trajectories should converge to
x* = 36/24 = 1.5 and y* = 28/24 ≈ 1.1667.
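As a quick check we have added, this fixed point comes from solving the nontrivial nullclines 8 − 3y − 3x = 0 and −4 + 5x − 3y = 0 simultaneously:

3x + 3y = 8,  5x − 3y = 4  ⟹  8x = 12,  x* = 3/2 = 36/24,  y* = (8 − 3x*)/3 = 7/6 = 28/24.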
We generate the plot predpreyself11b.png as shown in Figure 8.12 using this MatLab session


>> PlotSystemFixedRK
Do you want to Solve This Problem? (1=yes, 0=no). 1
Enter Function Name: ’PredPrey3’
Input Step Size: .025
Enter Initial Time: 0
Enter Final Time: 30
Enter Initial Data: [2;1]
Enter the name for the plot: ’jim’
Enter RK Order in range [1,4]: 4
Do you want to Solve This Problem? (1=yes, 0=no). 0

Figure 8.12: Predator Prey System: Self Interaction 3.0 on Food Fish and 3.0 on Sharks. In
            this example, the nullclines cross so the trajectories move toward a fixed point
            (1.5, 1.1667) as shown.

8.5.1 Exercises

Here are some MatLab exercises.

Exercise 8.5.1.

x'(t) = 8 x(t) − 4 x(t) y(t) − 2 x(t)^2
y'(t) = −10 y(t) + 2 x(t) y(t) − 1.5 y(t)^2

1. Solve using the Runge - Kutta codes for h sufficiently small to generate trajectories using
initial conditions

(a) [1.1; 1.8]

(b) [6; 2]

2. Generate plots of these trajectories

Exercise 8.5.2.

x'(t) = 12 x(t) − 4 x(t) y(t) − 2 x(t)^2
y'(t) = −10 y(t) + 2 x(t) y(t) − 1.5 y(t)^2

1. Solve using the Runge - Kutta codes for h sufficiently small to generate trajectories using
initial conditions

(a) [4.1; 1.8]

(b) [3.4; 2.5]

2. Generate plots of these trajectories

Exercise 8.5.3.

x'(t) = 6 x(t) − 4 x(t) y(t) − 2 x(t)^2
y'(t) = −3 y(t) + 2 x(t) y(t) − 1.5 y(t)^2

1. Solve using the Runge - Kutta codes for h sufficiently small to generate trajectories using
initial conditions

(a) [3.1; 1.8]

(b) [2.5; 2.3]

2. Generate plots of these trajectories


Exercise 8.5.4.

x'(t) = 6 x(t) − 4 x(t) y(t) − 2 x(t)^2
y'(t) = −12 y(t) + 2 x(t) y(t) − 1.5 y(t)^2

1. Solve using the Runge - Kutta codes for h sufficiently small to generate trajectories using
initial conditions

(a) [1.6; 1.8]

(b) [1.9; 3]

2. Generate plots of these trajectories

Exercise 8.5.5.

x'(t) = 15 x(t) − 4 x(t) y(t) − 2 x(t)^2
y'(t) = −10 y(t) + 2 x(t) y(t) − 1.5 y(t)^2

1. Solve using the Runge - Kutta codes for h sufficiently small to generate trajectories using
initial conditions

(a) [2.1; 1.8]

(b) [1.6; 2]

2. Generate plots of these trajectories

Chapter 9
Solving Nonlinear Models Numerically

We can now start using our MatLab tools to generate solutions to nonlinear system models. Let’s
begin by returning to the Predator - Prey model and see if we can automate the generation of
the phase plane plots. Now there are many packages that have already been written that can do
this much better, but there is great value in learning how to do it ourselves.

9.1 A Predator - Prey Example

We already know how to solve these systems. We use the MatLab script in the file PlotSystemFixedRK.m as listed below.

while input('Do you want to Solve This Problem? (1=yes, 0=no). ');
  fname = input('Enter Function Name: ');
  h = input('Input Step Size: ');
  t0 = input('Enter Initial Time: ');
  tmax = input('Enter Final Time: ');
  y0 = input('Enter Initial Data: ');
  k = input('Enter RK Order in range [1,4]: ');
  n = ceil((tmax - t0)/h) + 1;
  s = ['FixedRK(' fname sprintf(',%6.3f,%6.3f,%6.3f,%4d,%4d)', t0, y0, h, k, n)];
  [timevals, solutionvals] = FixedRK(fname, t0, y0, h, k, n);
  tvals = timevals';
  yvals = solutionvals';
  xlabel('x');
  ylabel('y');
  title('Phase Plane Plot of y vs x');
  plot(yvals(:,1), yvals(:,2));
end


This is a script which asks for its inputs using MatLab's built in input() function. Once the
inputs are completed, the MatLab function FixedRK() is called with appropriate arguments to
compute a numerical solution to the model. Here, the model has the form

x' = f(x, y)
y' = g(x, y)

and the right hand side dynamics are encoded in the file whose name is given by the string fname.
For example, to solve a particular Predator Prey problem,

x' = 2x − 3xy
y' = −4y + 5xy

we could use the dynamics in the file PredPrey.m which are coded as

function f = PredPrey(t, x)
f = [2*x(1) - 3*x(1)*x(2); -4*x(2) + 5*x(1)*x(2)];

The MatLab session is then

>> PlotSystemFixedRK
Do you want to Solve This Problem? (1=yes, 0=no). 1
Enter Function Name: ’PredPrey’
Input Step Size: .1
Enter Initial Time: 0
Enter Final Time: 5
Enter Initial Data: [2;4]
Enter RK Order in range [1,4]: 4

Then, after saving the plot, we see from Figure 9.1 that our step size was too large: the plot
is not periodic as we know it should be.
Hence, we rerun with the smaller step size 0.05 as shown in Figure 9.2. The MatLab session is
now

Do you want to Solve This Problem? (1=yes, 0=no). 1


Enter Function Name: ’PredPrey’
Input Step Size: .05
Enter Initial Time: 0
Enter Final Time: 5
Enter Initial Data: [2;4]
Enter RK Order in range [1,4]: 4

Although it is nice to do things interactively, sometimes it gets a bit tedious. Let’s rewrite
this while loop as a function. Let’s try this:


Figure 9.1: Predator Prey Plot With Step Size Too Large

Figure 9.2: Predator Prey Plot With Step Size Better!

function AutoSystemFixedRK(fname, stepsize, tinit, tfinal, yinit, rkorder)
% fname is the name of the model dynamics
% stepsize is our step size choice
% tinit is the initial time
% tfinal is the final time
% rkorder is the RK order
% yinit is the initial data entered as [number1; number2]
n = ceil((tfinal - tinit)/stepsize) + 1;
[timevals, solutionvals] = FixedRK(fname, tinit, yinit, stepsize, rkorder, n);
tvals = timevals';
yvals = solutionvals';
plot(yvals(:,1), yvals(:,2));
xlabel('x');
ylabel('y');
title('Phase Plane Plot of y vs x');

This is saved in the file AutoSystemFixedRK.m and we use it in MatLab like this:

>> AutoSystemFixedRK(’PredPrey’,0.02,0,5,[2;4],4)


Figure 9.3: Predator Prey Plot With Step Size Even Better!

This is much more compact! Since we decreased the step size again, we see in Figure 9.3 a
better plot.
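One practical note we can add here: a simple way to decide whether h is small enough is to halve it and see whether the computed solution barely moves. The check below is our suggestion, not one of the course codes; it only uses the FixedRK() call we already have.

n1 = ceil(5/0.02) + 1;
n2 = ceil(5/0.01) + 1;
[t1, y1] = FixedRK('PredPrey', 0, [2;4], 0.02, 4, n1);
[t2, y2] = FixedRK('PredPrey', 0, [2;4], 0.01, 4, n2);
norm(y1(:,end) - y2(:,end))   % a small value suggests h is small enough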
Now let's try to generate a real phase plane portrait by automating the phase plane plots for
a selection of initial conditions. Consider the code below which is saved in the file
AutoPhasePlanePlot.m.

function AutoPhasePlanePlot(fname, stepsize, tinit, tfinal, rkorder, ...
                            xboxsize, yboxsize, xmin, xmax, ymin, ymax)
% fname is the name of the model dynamics
% stepsize is the chosen step size
% tinit is the initial time
% tfinal is the final time
% rkorder is the RK order
% we will use initial conditions chosen from the box
% [xmin, xmax] x [ymin, ymax]
% This is done using the linspace command
% so xboxsize is the number of points in the interval [xmin, xmax]
% yboxsize is the number of points in the interval [ymin, ymax]
n = ceil((tfinal - tinit)/stepsize) + 1;
% u and v are the vectors we use to compute our ICs
u = linspace(xmin, xmax, xboxsize);
v = linspace(ymin, ymax, yboxsize);
% hold plot and cycle line colors
xlabel('x(1)');
ylabel('x(2)');
newplot;
hold all;
for i = 1:xboxsize
  for j = 1:yboxsize
    y0 = [u(i); v(j)];
    [tvals, yvals, fcvals] = FixedRK(fname, tinit, y0, stepsize, rkorder, n);
    y = yvals';
    plot(y(:,1), y(:,2));
  end
end
hold off;

Figure 9.4: Predator Prey Plot For Multiple Initial Conditions!

There are some new elements here. We set up vectors u and v to construct our initial conditions
from. Each initial condition is of the form (u_i, v_j) and we use that to set the initial condition y0
we pass into FixedRK() as usual. We start by telling MatLab the plot we are going to build is a
new one, so the previous plot should be erased. The command hold all then tells MatLab to keep
all the plots we generate, as well as the line colors and so forth, until a hold off is encountered. So
here we generate a bunch of plots and we then see them on the same plot at the end! We generate
phase plane plots for initial conditions from the box [.1, 4.5] × [.1, 4.5] using a fairly small step
size of 0.02. We generate a very nice phase plane plot as shown in Figure 9.4.

>> AutoPhasePlanePlot(’PredPrey’,0.02,0.0,16.5,4,5,5,.1,4.5,.1,4.5);

9.2 A Harder Nonlinear Model


Now let’s look at the model

x' = (1 − x)x − 2xy/(1 + x)
y' = (1 − y/(1 + x)) y

Clearly, we need to stay away from the line x = −1 for initial conditions as there the dynamics
themselves are not defined! However, we can analyze at other places. We define the nonlinear
functions f and g by

f(x, y) = (1 − x)x − 2xy/(1 + x)


 
g(x, y) = (1 − y/(1 + x)) y

We find that f(x, y) = 0 when either x = 0 or 2y = 1 − x^2. When x = 0, we find g(0, y) = (1 − y) y
which is 0 when y = 0 or y = 1. So we have equilibrium points at (0, 0) and (0, 1). Next, when
2y = 1 − x^2, we find g becomes

g(x, (1 − x^2)/2) = (1/4) (1 + x)^2 (1 − x).

We need to discard x = −1 as the dynamics are not defined there. So the last equilibrium point
is at (1, 0). We can then use MatLab to find the Jacobians and the associated eigenvalues and
eigenvectors at each equilibrium point. We encode the Jacobian

J(x, y) = [ 1 − 2x − 2y/(1+x)^2    −2x/(1+x)    ]
          [ y^2/(1+x)^2            1 − 2y/(1+x) ]

in the file Jacobian.m as

function A = Jacobian(x, y)
A = [1 - 2*x - 2*y/((1+x)*(1+x)), -2*x/(1+x); ...
     y*y/((1+x)*(1+x)), 1 - 2*y/(1+x)];
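Since it is easy to make a sign or denominator mistake in a Jacobian, here is a quick finite difference sanity check we have added; it is only a sketch, not one of the course codes. We encode the dynamics inline and compare difference quotients against Jacobian().

fg = @(x, y) [(1 - x)*x - 2*x*y/(1 + x); y - y*y/(1 + x)];
x = 0.5; y = 0.5; del = 1.0e-6;
col1 = (fg(x + del, y) - fg(x, y))/del;   % approximates column 1 of J
col2 = (fg(x, y + del) - fg(x, y))/del;   % approximates column 2 of J
Japprox = [col1, col2]
Jexact = Jacobian(x, y)

The two matrices should agree to five or six digits; if they do not, the Jacobian file has an error.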

We can then find the local linear systems at each equilibrium.

9.2.1 Local Analysis


Here are the MatLab sessions for all the equilibrium points. As discussed in Chapter ??, the
MatLab command eig is used to find the eigenvalues and eigenvectors of a matrix.

Equilibrium Point (0, 0)

For the first equilibrium point (0, 0), we find the Jacobian at (0, 0) and the associated eigenvalues
and eigenvectors with the following MatLab commands.

>> J0 = Jacobian(0,0)

J0 =

1 0
0 1

>> [V0,D0] = eig(J0)

V0 =

1 0


0 1

D0 =

1 0
0 1

>>

Hence, there is a repeated eigenvalue, r = 1, but there are two different eigenvectors:

E1 = [1; 0],  E2 = [0; 1]

with general local solution near (0, 0) of the form

[x(t); y(t)] = a [1; 0] e^t + b [0; 1] e^t

where a and b are arbitrary. Hence, trajectories move away from the origin locally. Recall, the
local linear system is

[x'(t); y'(t)] = [1 0; 0 1] [x(t); y(t)]

which is the same as the local variable system using the change of variables u = x and v = y:

[u'(t); v'(t)] = [1 0; 0 1] [u(t); v(t)]

Equilibrium Point (1, 0)

For the second equilibrium point (1, 0), we find the Jacobian at (1, 0) and the associated
eigenvalues and eigenvectors in a similar way.

>> J1 = Jacobian(1,0)

J1 =

-1 -1
0 1

>> [V1,D1] = eig(J1)


V1 =

1.0000 -0.4472
0 0.8944

D1 =

-1 0
0 1
>>

Now there are two different eigenvalues, r1 = −1 and r2 = 1, with associated eigenvectors

E1 = [1; 0],  E2 = [−0.4472; 0.8944]

with general local solution near (1, 0) of the form

[x(t); y(t)] = a [1; 0] e^(−t) + b [−0.4472; 0.8944] e^t

where a and b are arbitrary. Hence, trajectories move away from (1, 0) locally for all trajectories
except those that start on E1. Recall, the local linear system is

[x'(t); y'(t)] = [−1 −1; 0 1] [x(t) − 1; y(t)]

which, using the change of variables u = x − 1 and v = y, is the local variable system

[u'(t); v'(t)] = [−1 −1; 0 1] [u(t); v(t)]

Equilibrium Point (0, 1)

For the third equilibrium point (0, 1), we again find the Jacobian at (0, 1) and the associated
eigenvalues and eigenvectors in a similar way.

>> J2 = Jacobian(0,1)

J2 =

-1 0
1 -1


>> [V2,D2] = eig(J2)

V2 =

0 0.0000
1.0000 -1.0000

D2 =

-1 0
0 -1
>>

Now there is again a repeated eigenvalue, r1 = −1. If you look at the V2 matrix, you see the two
columns are parallel: they are the same up to sign. In this case, MatLab does not give us two
independent eigenvectors. We can use the first column as our eigenvector E1, but we still must
find the other vector F.
Recall, the local linear system is

[x'(t); y'(t)] = [−1 0; 1 −1] [x(t); y(t) − 1]

which, using the change of variables u = x and v = y − 1, is the local variable system

[u'(t); v'(t)] = [−1 0; 1 −1] [u(t); v(t)]

Recall, the general solution to a model with a repeated eigenvalue with only one eigenvector is
given by

[x(t); y(t)] = a E1 e^(−t) + b (F e^(−t) + E1 t e^(−t))

where F solves (−I − A)F = −E1. Here, that gives

F = [1; 0]

and so the local solution near (0, 1) has the form

[x(t); y(t)] = a [0; 1] e^(−t) + b ([1; 0] e^(−t) + [0; 1] t e^(−t))


Figure 9.5: Nonlinear Phase Plot For Multiple Initial Conditions!

where a and b are arbitrary. Hence, trajectories move toward (0, 1) locally.
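We can also let MatLab find F for us. The matrix −I − A is singular here, so the usual backslash solve is not appropriate; one workable sketch (ours, not from the course codes) uses pinv() to pick out a particular solution of (−I − A)F = −E1.

A = [-1 0; 1 -1];             % the Jacobian at (0, 1)
E1 = [0; 1];                  % the single eigenvector
F = pinv(-eye(2) - A)*(-E1)   % returns F = [1; 0] as claimed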

9.2.2 Generating A Phase Plane Portrait

Piecing together the global behavior from the local trajectories is difficult, so it is helpful to write
scripts in MatLab to help us. We can use the AutoPhasePlanePlot() function from before, but
this time we use different dynamics. The dynamics are stored in the file autonomousfunc.m
which encodes the right hand side of the model

x' = (1 − x)x − 2xy/(1 + x)
y' = (1 − y/(1 + x)) y

in the usual way.

function f = autonomousfunc(t, x)
f = [(1 - x(1)).*x(1) - 2.*x(1).*x(2)./(1 + x(1)); ...
     x(2) - x(2).*x(2)./(1 + x(1))];

Then, to generate a nice phase plane portrait, we try a variety of [xmin, xmax] × [ymin, ymax]
initial condition boxes until it looks right! Here, we avoid any initial conditions that have negative
values as for those the trajectories go off to infinity and the plots are not manageable. We show
the plot in Figure 9.5.

>> AutoPhasePlanePlot(’autonomousfunc’,0.1,0.0,3.9,4,8,8,0.0,2.5,0.0,2.5);

You should play with this function. You'll see it involves a lot of trial and error. Any box
[xmin, xmax] × [ymin, ymax] which includes trajectories whose x or y values increase exponentially
causes the overall plot's x and y ranges to be skewed toward those large numbers. This causes
a huge loss in resolution on all other trajectories! So it takes time and a bit of skill to generate
a nice collection of phase plane plots!
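One small trick we can suggest (it is not in the original codes): after AutoPhasePlanePlot() finishes, clamp the axes to the box of initial conditions so that a few runaway trajectories do not skew the whole view.

AutoPhasePlanePlot('autonomousfunc', 0.1, 0.0, 3.9, 4, 8, 8, 0.0, 2.5, 0.0, 2.5);
axis([0 2.5 0 2.5]);   % keep the view on the region of interest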

Part III

Basic BioPhysics and Cellular Modeling

Chapter 10
Some Chemistry Background

We will begin by going through some background material on what might be called the chemistry
of life; we had a hard time getting all of these things straight, so for all of those mathematician
and computer scientist types out there, here is the introduction we wished we had when we
started out.
Molecules depend on the interplay of non covalent and covalent interactions. Recall covalent
bonds share electrons in a tightly coupled way. There are three fundamental non covalent bonds:

• Electrostatic Bonds due to Coulomb’s Law

• Hydrogen Bonds

• Vanderwaals Bonds

10.1 Molecular Bonds:


The force F between two charges is given by the formula:

F = (1/(4 π ε0)) q1 q2 / r^2                                  (10.1)

where q1 and q2 are the charges on the bonding units, r is the distance between the units and ε0 is
a basic constant which depends on the solution (air, water and so forth) that the units live within.
For example, in Figure 10.1, we see a representation of the electrostatic attraction between two
common molecules; one with a carboxyl and the other with an amide group on their ends.
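To get a feel for the sizes involved in Equation 10.1, here is a small MatLab calculation we have added (the physical constants are standard values; the example itself is ours): the force between two opposite elementary charges 3 Å apart in a vacuum.

e0 = 8.854e-12;     % permittivity of free space in C^2/(N m^2)
q1 = 1.602e-19;     % one elementary charge in Coulombs
q2 = -1.602e-19;    % an opposite elementary charge
r = 3.0e-10;        % 3 Angstroms expressed in meters
F = (1/(4*pi*e0))*q1*q2/r^2   % a negative value means attraction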
Hydrogen Bonds occur when a hydrogen atom is shared between two other atoms as shown in
Figure 10.2. The atom to which the hydrogen is held more tightly is called the hydrogen donor

The molecule on the left has a net negative charge and the one on the
right has a positive charge. Hence, there is an attraction due to the
charges.

Figure 10.1: Electrostatic Attraction

(left) Strong hydrogen bond  (right) Weak hydrogen bond

Figure 10.2: Typical Hydrogen Bonds


Example Bond Length


O-H· · · O 2.70 Å
O-H· · · O− 2.63 Å
O-H· · · N 2.88 Å
N-H· · · O 3.04 Å
N+ -H· · · O 2.93 Å
N-H· · · N 3.10 Å

Table 10.1: Hydrogen Bond Lengths

and the other one which is less tightly linked is called the hydrogen acceptor. You can see this
represented abstractly as follows:

H Donor    Length    H Acceptor

−O−H       2.88 Å    ···N−
−N−H       3.04 Å    ···O−

Recall, one Angstrom (1 Å) is 10^−10 meters or 10^−8 cm. Hydrogen bond lengths vary as we can
see in Table 10.1:

The donor in biological systems is an oxygen or nitrogen with a covalently attached hydrogen.
The acceptor is either oxygen or nitrogen. These bonds are highly directional. The strongest
bonds occur when all the atoms line up or are collinear (think alignment of the planets like in
the Tomb Raider movie!). In Figure 10.3, we see an idealized helix structure with two attached
carbon groups. The carbons are part of the helix backbone and the rest of the molecular group
spills out from the backbone. If the two groups are separated enough along the backbone, the
oxygen of the top one is close enough physically to the nitrogen of the bottom group to allow
them to share a hydrogen. As you might expect, this sort of hydrogen bonding stabilizes helical
structures. This sort of upward spiral occurs in some protein configurations.

There is also a nonspecific attractive force which occurs when any two atoms are 3 to 4 Å
apart which is called a Vanderwaals Bond. The basis of this bond is that the charge distri-
bution around an atom is time dependent and at any instant is not perfectly symmetrical. This
”transient” asymmetry in an atom encourages a similar asymmetry in the electron distribution
around its neighboring atoms. The standard picture of this force is shown in Figure 10.4. The
distance for maximum attraction varies with the atom involved as we can see in Table 10.2:

Table 10.2 shows us that H − C has a 3.20 Å Vanderwaals interaction radius (1.20 + 2.00); H − N
has a 2.70 Å radius; and H − P has a 3.10 Å radius.


Figure 10.3: Typical Hydrogen Bonds In A Helix Structure

The horizontal axis measures the distance between molecules while the vertical axis gives the
force between them: repulsion increases at short distances, decreases with separation, and there
is a distance of maximum attraction.

Figure 10.4: Vanderwaals Forces Between Molecules

Atom Radius of Maximum Attraction


H 1.20 Å
C 2.00 Å
N 1.50 Å
O 1.40 Å
S 1.85 Å
P 1.90 Å

Table 10.2: Maximum Attraction Distances


Bond Interaction Distance Bond Energy


Electrostatic 2.80 Å 3 - 7 kcal/mole
Hydrogen 2.70 - 3.10 Å 3 - 7 kcal/mole
Vanderwaals 2.70 - 3.20 Å 1 kcal/mole
Covalent 1.00 Å 80 kcal/mole

Table 10.3: Bond Distances by Type

Source Energy Stored


Green Photon Energy                                57 kcal/mole
ATP (Adenosine Triphosphate)                       12 kcal/mole
  (the universal currency of biochemical energy)
Each vibrational degree of freedom in a molecule   0.6 kcal/mole at 25 degrees C
Covalent bond                                      80 kcal/mole

Table 10.4: Energy Sources

10.2 Bond Comparisons:


These three different types of bonds are therefore ranked according to their interaction distance as
shown in Table 10.3. Although we haven't really discussed it, it should be easy to understand
that to pull a bond apart you would have to exert a force, or in other words, do some work.

The amount of work you do can be measured in many units, but a common one in biology
is the calorie, which can be converted to the standard physics energy measure of ergs. The
abbreviation kcal refers to 10^3 calories and the term mole refers to a collection of 6.02 × 10^23
(Avogadro's Number) molecules. Note that the covalent bond is far stronger than the other
bonds we have discussed!
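As a small worked example we have added (the numbers are standard conversion factors), here is the kcal/mole to ergs per molecule conversion in MatLab:

kcal_per_mole = 80;          % a covalent bond, from Table 10.3
ergs_per_kcal = 4.184e10;    % 1 kcal expressed in ergs
avogadro = 6.02e23;          % molecules per mole
ergs_per_bond = kcal_per_mole*ergs_per_kcal/avogadro

So breaking a single covalent bond costs roughly 5.6 × 10^−12 ergs.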

10.3 Energy Considerations:


The ultimate source of energy for most life is the sun (we will neglect here those wonderful
mid-Atlantic smokers with a sulfur based life chemistry that does not require any oxygen or
photosynthesis at all). Energy can be stored in various ways for later use - kind of like a battery.
Some common energy sources are given in Table 10.4:

Basically, energy is moved from one storage source or another via special helper molecules
so that a living thing can perform its daily tasks of eating, growing, reproducing and so forth.
Now for us, we are going to concentrate on what happens inside a cell. Most of us already know
that the inside of a cell is a liquid solution which is mostly water but contains many ions. You
probably have this picture in your head of an ion, like N a+ sitting inside the water close to other
such ions with which they can interact. The reality is more complicated. Water is made up of
two hydrogens and one oxygen and is what is called a polar molecule. As you can see from the


left side of Figure 10.5, the geometry of the molecule means that the minus and plus charges are
not equally balanced. Hence, there is a charge gradient which we show with the vertical arrow in
the figure. The asymmetrical distribution of charge in water implies that water molecules have a
high affinity for each other and so water will compete with other ions for hydrogens to share in
a hydrogen bond.


(left) A Water Molecule  (right) Polar Bonding Without Water

The water molecule shown on the left has an asymmetric charge distribution which has a
profound effect on how molecules interact in liquid. There is an attraction due to this
asymmetry: the positive side of water attracts the negative side of other molecules.

Figure 10.5: Special Forces Act On Water

If we had two molecules, say NH3 and COH2, shown on the right side of Figure 10.5, we
would see a hydrogen bond form between the oxygen of COH2 and the central hydrogen of NH3
due to the asymmetrical distribution of charge in these molecules. However, in an environment
with water, the polarized water molecules are attracted to these asymmetrical charge distributions
also, and so each of these molecules will actually be surrounded by a cage of water molecules as
shown in Figure 10.6(a). This shield of polar water molecules around the molecules markedly
reduces the attraction between the + and − sites of the molecules. The same thing is true for
ionized molecules. If we denote a minus ion by a triangle with a minus inside it and a plus ion
by a square with a plus inside it, then this caging effect would look something like what is shown
in Figure 10.6(b).
This reduction occurs because the electric field of the shield opposes the electric field of the
ion and so weakens it. Consider common table salt, NaCl. This salt can ionize into Na+ and
Cl−, the exact amount that ionizes being dependent on the solution you drop the salt into. The
minus side of NaCl is very attractive to the positive side of a polar water molecule which in turn
is attracted to the negative side of another water molecule. The same thing can be said about
the positive side of the salt. The presence of the polar water molecules actively encourages the
splitting or disassociating of the salt into two charged pieces. Hence, water can dissolve many
polar molecules, like the salt mentioned above, that serve as fuels, building blocks, catalysts and
information carriers. This causes a problem because the caging of an ion by water molecules also
inhibits ion interactions. Biological systems have solved this problem by creating water free
micro environments where polar interactions have maximal strength. A consequence of this is
that non polar groups aren't split apart by water, and so it is energetically more favorable for non
polar molecules to be placed into one cage rather than to have a separate water cage for each one.
If we denote a non polar molecule by the symbol NP enclosed in a circle, as a thought experiment
we can add our non polar molecule to a water environment. A cavity in the water is created
because the non polar molecule disrupts some hydrogen bonds of water to itself. We see a picture
like the one in Figure 10.7(a). This means that the number of ways to form hydrogen bonds in
the cage around the non polar molecule is smaller than the number of ways to form hydrogen
bonds without the non polar molecule present. This implies a cost to caging NP as order is
created. Now if a second non polar molecule NP is added, where will it go? If a second cage is
created, more hydrogen bonds are used up than if both NP molecules clump together inside one
cage. Remember always that order is costly and disorder is energetically favored. So two non
polar molecules clump together to give the picture shown in Figure 10.7(b).

(a) Polar Bonding With Water    (b) Ion Water Cages

Figure 10.6: The Polarity Of Water Induces Cages To Form

(a) Non Polar Molecule Water Cages    (b) Non Polar Molecule Group Water Cages

Figure 10.7: It Is More Efficient To Group Non polar Molecules In Cages


10.4 Hydrocarbons:
We need to learn a bit about the interesting molecules we will see in our quest to build interesting
models of cell function. Any good organic chemistry book is good to have as a general reference
on your shelf if you really like this stuff. We will begin with simple hydrocarbons.

Methyl with Residue R    Ethyl with Residue R

Carbon is represented by the symbol C and it can form four simple bonds. In the methyl
molecule, three of the bonds are taken up by hydrogen (symbol H) and the last one is used
to attach the residue (symbol R). In the ethyl molecule, the two carbons bond to each other;
the remaining carbon bonds are filled with hydrogens except for one, which is used to attach
the residue. The residue group can itself be quite complicated.

Figure 10.8: Methyl and Ethyl Can Have Side Chains of Arbitrary Complexity Called Residues

The basic thing to remember is that carbon's outer shell of electrons is four shy of being filled
up. The far right edge of the periodic table consists of the noble gases or elements because
their outer most electron shells are completely populated. Hence, they are considered special or
noble. For us, a quick way to understand hydrocarbons is to think of a carbon as wanting to add
four more electrons so that it can be like a noble gas. Of course, there are less anthropomorphic
ways to look at it and really nice physical models, but looking at it in terms of a want really helps!
We typically draw a carbon atom, denoted by C, surrounded by four lines. Each of the lines is
ninety degrees apart and the lines represent a covalent bond with something. On the left side of
Figure 10.8, we see a carbon with three of its covalent bonds with hydrogen (hence, these lines go
from the central C to an H) and the last covalent bond on the left goes to an unknown molecule
we denote as R. The R is called a residue and must supply an electron for this last covalent
bond. The molecule CH4 is called methyl and is the simplest hydrocarbon (note the residue
here is just a hydrogen); in general we have the molecule CH3R; for example, an ionized phosphate
group, PO3−, would supply such an electron and we would have the molecule CH3PO3. The next
simplest hydrocarbon is one that is built from two carbons. If the two carbons bond with each
other, that will leave six bonds left over to fill. If all of these bonds are filled with hydrogens, we
get the molecule ethyl with chemical formula C2H6. Usually, we think of the left most hydrogen
as being replaced by a residue R which we show on the right side of Figure 10.8.


Clearly, these groupings can get complicated very quickly! We won't show any more specific
molecules, but you can get the flavor of all this by looking at Figure 10.8, which shows a few of
the residues we will see attached to various hydrocarbons. Of course, we can also use more than
two carbons and, if you think about it a bit, as the number of carbons we use goes up, there is
no particular reason to think our pictures will always organize the carbons in a central chain like
beads on a string. Instead, the carbons may form side chains or branches, or form circular groupings
where the last carbon in a group bonds to the first carbon, and so forth. Also, our pictures are just
idealizations to help us think about things; these molecules live in solutions, so there are water
cages, there are three dimensional concerns - like which carbons are in a particular plane - and so
forth. We won't really discuss those things in a lot of detail. However, the next step is to look at
a special class of molecules called amino acids and we will see some of this complication show
up in our discussions there.

Chapter 11
Amino Acids

An α amino acid consists of the following things:

• an amide group NH2

• a carboxyl group COOH

• a hydrogen atom H

• a distinctive residue R

These groups are all attached to a central carbon atom which is called the α carbon. There
are many common residues. First, let’s look at a methyl molecule with some common residues
attached.

methyl + residue hydroxyl: The chemical formula here is OH and since O needs to add
electrons so that its outer electron shell can be filled, we think of it as having a polarity
or charge of −2. Oxygen shares the single electron of hydrogen to add one electron to
oxygen's outer shell, bringing oxygen closer to a filled outer shell. Hence, the bond with
hydrogen here brings the net charge of the hydroxyl group to −1 as hydroxyl still needs one
more electron. In fact, it is energetically favorable for the hydroxyl group to accept an electron
to fill the outermost shell. Hence, hydroxyl can act as OH− in an ionic bond. Since carbon
wants electrons to fill its outer shell, it saves energy for the hydroxyl group and the carbon
to share one of carbon's outer electrons in a covalent bond. If we use the hydroxyl group
as the residue for methyl, we replace one of the hydrogens in methyl with hydroxyl, giving
CH3−OH.

methyl + residue amide: The chemical formula here is NH2 and since N needs three electrons
to fill its outer most electron shell, we think of it as having a polarity or charge of −3.
Here N forms a single bond with each of the two hydrogens. This molecule can accept an electron
and act as the ion NH2−, or it can bond covalently with carbon, replacing one of methyl's
hydrogens. The methyl plus amide residue would then be written as CH3−NH2.

methyl + residue carboxyl: A carbon can form a double covalent bond with one oxygen and
a single covalent bond with a hydroxyl group OH; this gives the group COOH, which is
called the carboxyl group. This molecule finds it favorable to add an electron to its outer
shell, so it can function in ionic bonds as COOH−. The carboxyl carbon can also form a
covalent bond with another carbon. The methyl molecule plus a carboxyl residue would
then be written as CH3−COOH.

methyl + phosphate: The chemical formula here is PO4. Phosphorus, P, has five outer shell
electrons: two in its 3s orbital and three in its 3p orbitals. Since oxygen needs two electrons
to fill its outermost shell, the 3s electrons of phosphorus can be shared with one oxygen.
This is still considered a single bond as only two electrons are involved (it is actually called
a coordinate covalent bond). However, this bond is often drawn as a double bond in pictorial
representations anyway. The remaining three electrons phosphorus carries in its 3p orbitals
are then shared covalently with three more oxygens. Each of these three oxygens sharing a
3p phosphorus orbital still needs an electron. Hence, phosphate can form ionic bonds as PO4−3.
The three oxygens covalently sharing 3p orbitals can then ionically bond with hydrogens to
create the molecule PO4H3, which is usually written reversed as H3PO4. If you leave off
one of the hydrogens, we have H2PO4−, which can form a residue on methyl giving methyl
phosphate H2PO4CH3.
Phosphates in dilute water solutions exist in four forms. In a strongly basic solution, PO4−3
predominates. However, in a weakly basic setting, HPO4−2 is more common. In a weakly
acid water solution, the dominant form is H2PO4−1 and finally, in a strongly acidic solution,
H3PO4 has the highest concentration.

methyl + thiol: The chemical formula here is SH and since sulphur has two covalent bonds
that can be used to fill its outer most electron shell, we think of it as having a polarity or
charge of −2. Hence, the bond with hydrogen here gives us an ion with charge −1. So we can
denote this group as SH−. A methyl molecule with a thiol residue would thus have the formula
SH−CH3.

If we replace two of the hydrogens on the methyl group with an amide group and a carboxyl
group, we obtain the molecule NH2−CH2−COOH. We can see how a molecule of this type
will be assembled using our simple pictorial representation of covalent bonds in Figure 11.1. As we

mentioned earlier, we think of the central carbon as the α carbon so that we distinguish it from
any other carbons that are attached to it. In fact, one way to look at all this is to think that since
a carbon lacks four electrons in its outer shell, it is energetically favorable for it to seek alliances with
other molecules so that it can fill these empty spots. This is, of course, a very anthropomorphic
way to look at it, but it helps you see what is going on at a gut level. So draw the α carbon with
four dots around it. These are the places in the outer shell that are not filled. Chemical theory
tells us that the outer shell of a carbon consists of four groupings of two electrons each. So when
we draw four groupings of just one electron per group, we are clearly indicating that one electron
is missing in each group. Now the amide molecule is handled in a similar way: the nitrogen is
missing three electrons in its outer shell, which we indicate by three single dots placed around the
nitrogen. A hydrogen has only one electron as it has a very simple shell structure; hence it is
drawn as a single dot. So hydrogen would like one more electron to fill its outer shell and the
nitrogen would also like an additional electron to fill one of its groups. So a good solution is for
the two atoms to share an electron, which we denote by a box around the two single electrons.
Continuing in this way, we can build an electron dot diagram for the amide and its connection
to the α carbon. The carboxyl is a little more complicated. So far we have only looked at bonds
between atoms where one electron is shared. Another type of bond which is even stronger is one
where two of the outer shell electrons are shared between two atoms. The carboxyl group would
be written as C = OOH to reflect the fact that there is such a double bond between the carbon
and the first oxygen. Oxygen is missing only two electrons in its outer shell and so this double
bond, which is indicated by the double bars =, completely fills the outer shell of the oxygen and
half of the outer shell of the carbon. One of the remaining two outer shell groups is then filled by
a hydroxyl group, OH. Note the hydroxyl group is an oxygen with one of its outer shell groups
filled by a hydrogen, leaving one group to fill. It does this by sharing with one of the remaining
two groups that are open on the carbon of the carboxyl group. This leaves one opening left on
the carbon of the carboxyl, which is used to make a shared bond with the α carbon of the amino
acid. The entire electron dot diagram is shown in Figure 11.1 and it is very complex. So we
generally do not use this kind of explicit notation to draw an amino acid. All of this detail is
assumed in the simple skeleton formula we see on the left hand side of Figure 11.2 and in the
notation CH2NH2COOH. The structure shown in Figure 11.1 is that of the amino acid glycine
(G or Gly).
An amino acid can exist in ionized or non ionized forms as shown on the right side of Figure
11.2 and of course all we have said about water cages is still relevant.
Another important thing is the three dimensional (3D) configuration of an amino acid. The
L-form is shown in Figure 11.3(a) and the R-form is shown in Figure 11.3(b).
In the L-form, the α carbon, Cα , comes out of the page. To see this, take your right hand,
line up the fingers along the N H2 line and rotate your fingers left towards R. Note your thumb
points out of the page. This is called the L-form because you rotate left. If you switch the
position of the amide and carbonyl group you get the R-form. Now line your fingers along the
N H2 line again so that you can curl them naturally towards the R group. Note you can’t do this


Figure 11.1: Methyl with Carboxyl and Amide Residues with Electron Shells

Unionized Form    Double Ionized Form

Here we see a typical amino acid. The common elements are the carboxyl and amide groups.
Each amino acid then has a different residue. The carbon atom in the middle is called the
central carbon and the elements attached to it can be ionized in a variety of ways. This
ionization can substantially affect how the amino acid reacts with other molecules. Also,
charge distribution in amino acids is not uniform and so one side of an amino acid may act
more positive than the other.

Figure 11.2: A typical amino acid in normal and ionized forms


(a) L-form of an Amino Acid (b) R-form of an Amino Acid

Figure 11.3: Three Dimensional Configuration Of Amino Acids

Amino Acid      Abbreviation    Amino Acid      Abbreviation

Glycine         G, Gly          Methionine      M, Met
Alanine         A, Ala          Serine          S, Ser
Valine          V, Val          Lysine          K, Lys
Leucine         L, Leu          Threonine       T, Thr
Isoleucine      I, Ile          Arginine        R, Arg
Proline         P, Pro          Histidine       H, His
Phenylalanine   F, Phe          Aspartate       D, Asp
Tyrosine        Y, Tyr          Glutamate       E, Glu
Tryptophan      W, Trp          Asparagine      N, Asn
Cysteine        C, Cys          Glutamine       Q, Gln

Table 11.1: Abbreviations for the Amino Acids

unless you let your thumb point into the page. So here, we have Cα going into the page. For
unknown reasons, only L-forms are used in life on earth.

Now there are a total of twenty amino acids: we list them in Figure 11.4 (glycine and alanine),
Figure 11.5 (valine and leucine), Figure 11.6 (isoleucine and proline), Figure 11.7 (phenylalanine
and tyrosine), Figure 11.8 (tryptophan and cysteine), Figure 11.9 (methionine and serine), Figure
11.10 (lysine and threonine), Figure 11.11 (arginine and histidine), Figure 11.12 (aspartate and
glutamate) and Figure 11.13 (asparagine and glutamine). We have organized all of these figures so
that the residues are underneath the central carbon. As you can see, all have the common amino
acid structure with different residues R attached. The type of residue R determines the chemical
and optical reactivity of the amino acids. For convenience, we list the standard abbreviations for
the names of the amino acids in Table 11.1 as well as in the figures.
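Since we will shortly be writing amino acid chains as strings of these one letter codes, here is a small MatLab sketch we have added (not one of the course codes; it assumes a MatLab version that provides the containers.Map class) that turns Table 11.1 into a lookup from one letter codes to three letter abbreviations.

codes = {'G','A','V','L','I','P','F','Y','W','C', ...
         'M','S','K','T','R','H','D','E','N','Q'};
names = {'Gly','Ala','Val','Leu','Ile','Pro','Phe','Tyr','Trp','Cys', ...
         'Met','Ser','Lys','Thr','Arg','His','Asp','Glu','Asn','Gln'};
lookup = containers.Map(codes, names);
chain = 'GAVLK';                 % a five amino acid chain
for i = 1:length(chain)
  fprintf('%s ', lookup(chain(i)));
end
fprintf('\n');                   % prints: Gly Ala Val Leu Lys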


Glycine    Alanine

Glycine's residue is H and alanine's is CH3. Glycine is the simplest amino acid and is
optically inactive.

Figure 11.4: The amino acids Glycine and Alanine

Valine    Leucine

Valine's residue is CH(CH3)2 and Leucine's is CH2CH(CH3)2. Note Valine and Leucine
have a longer residue which makes them hydrophobic.

Figure 11.5: The amino acids Valine and Leucine


Isoleucine    Proline

Isoleucine's residue is HCCH3CH2CH3 and Proline's is a cyclic structure H2CCH2CH2
attaching to both the amide and the central carbon. Isoleucine is hydrophobic but Proline's
cyclic residue is indifferent to water.

Figure 11.6: The amino acids Isoleucine and Proline

Phenylalanine    Tyrosine

Phenylalanine has a very hydrophobic phenyl group as a residue. The ring structure of the
phenyl group creates a localized cloud of pi electrons which makes it very reactive. Tyrosine
replaces the bottom C−H with C−OH. The addition of the hydroxyl group makes this
amino acid hydrophilic. It is also very reactive due to the localized pi electron cloud.

Figure 11.7: The amino acids Phenylalanine and Tyrosine


Tryptophan    Cysteine

Tryptophan's residue is fairly complicated with a phenyl group off to the left. It is very
hydrophobic. Cysteine plays a special role in biology because bonds similar to hydrogen
bonds can form between sulphur atoms occurring on different cysteine molecules. These
bonds are called disulfide links.

Figure 11.8: The amino acids Tryptophan and Cysteine

Methionine    Serine

Methionine is hydrophobic. If you look at Alanine on the right side of Figure 11.4, you'll
see that Serine is formed by adding a hydroxyl group to the methyl residue on Alanine.
This is called hydroxylation.

Figure 11.9: The amino acids Methionine and Serine


Lysine    Threonine

Lysine is very polar and hence it is very hydrophilic. Threonine is a hydroxylated version
of Valine (see the left side of Figure 11.5).

Figure 11.10: The amino acids Lysine and Threonine

Arginine    Histidine

Arginine and Histidine are very polar, hydrophilic and are positive ions at neutral pH.
However, Histidine can lose its positive charge near physiological pH.

Figure 11.11: The amino acids Arginine and Histidine


Aspartate    Glutamate

Aspartate and Glutamate have similar residues.

Figure 11.12: The amino acids Aspartate and Glutamate

Asparagine    Glutamine

The oxygen ion in Aspartate and Glutamate is replaced by the amide group in both
Asparagine and Glutamine. This change makes these amino acids neutral in charge.

Figure 11.13: The amino acids Asparagine and Glutamine


11.1 Peptide Bonds:


Amino acids can link up in chains because the COOH on one can bond with the NH2 on another
as is seen in Figure 11.14(a). In this figure, there is an outlined box that contains the bond
between the COOH and the NH2; this is called the peptide bond and is shown again in Figure
11.14(b).

(a) The Bond Between Two Amino Acids (b) The Peptide Bond

Figure 11.14: Bonds Between Amino Acids

In Figure 11.14(a), R1 is the residue or side chain for the first amino acid and R2 is the side
chain for the other. Recall, the central carbon in the amino acid is labeled the α carbon. The
important characteristic about the peptide bond is that the grouping shown in Figure 11.15(a)
forms a rigid two dimensional plane. The peptide bond allows amino acids to link into chains as
is shown in Figure 11.15(b). In this picture there are two peptide bonds which are shaded and
three amino acids.
If you look at the shaded boxes that represent the peptide bonds, it isn’t hard to imagine that
these two boxes could rotate or spin about the Cα to Cα axes you see. This is indeed true and
the best way to see it is to add the actual detail of the peptide bond to the boxes. In the first
box, there is a N − Cα axis and in the second box, there is a Cα − C axis. These two axes have
a common center at the central Cα carbon of the amino acid chain. We have tried to represent
this in Figure 11.16(a) in which we show the two axes in question. Then in Figure 11.16(b) we
show that the first box can be rotated Φ degrees about the N − Cα axis and the second box can
be rotated by Ψ degrees about the Cα − C axis.
You can see that the side chains, R1 , R2 and R3 , then hang off of this chain of linked peptide
bonds. A longer chain is shown in Figure 11.17. In this long chain, there are two rotational degrees
of freedom at the central carbon of any two peptide bonds. This means that if we imagine the
linked peptide bonds as beads on a string, there is a great deal of flexibility possible in the three
dimensional configuration of these beads on the string. It isn't hard to imagine that if the string
of beads were long enough, full loops could form and there could even be complicated repeated
patterns or motifs. Also, these beads on a string are molecular groupings that are inside a solution


(a) The Peptide Bond Plane (b) A Three Amino Acid Chain

Figure 11.15: Amino Acid Bond Into Chains

that is full of various charged groups and the residues or side chains coming off of the string are
also potentially charged. Hence, there are many forces that act on this string including hydrogen
bonds between residues, Vanderwaals forces acting on motifs and so forth. If we look at one three
amino acid piece of a long chain, we would see the following molecular form as represented in
Figure 11.18.
As discussed above, as more and more amino acids link up, we get a chain of peptide bonds
whose geometry is very complicated in solution. To get a handle on this kind of chain at a high
level, we need to abstract out of this a simpler representation. In Figure 11.19, we show how we
can first drop most of the molecular detail and just label the peptide bond planes using the letter
P.
We see we can now represent our chain in the very compact form −NCCNCCNCCN−. This
is called the backbone of the chain. The molecules such as side chains and hydrogen atoms hang
off the backbone in the form that is most energetically favorable. Of course, this representation
does not show the particular amino acids in the chain, so another representation for a five amino
acid chain would be A1 A2 A3 A4 A5 where the symbol Ai for appropriate indices i represents one
of the twenty amino acids. The peptide bonds are not even mentioned as it is assumed that they
are there. Also, note that in these amino acid chains, the left end is an amino group and the
right end is a carboxyl group.

11.2 Chains of Amino Acids:


We roughly classify chains of amino acids by their length. Hence, we say


(a) Details of the Peptide Bonds in a Chain

(b) The Peptide Bond Axis Rotations

Figure 11.16: Peptide Bond Details


Figure 11.17: A Five Amino Acid Chain

Figure 11.18: Molecular Details of a Chain


Figure 11.19: A First Chain Abstraction

• polypeptides are chains of amino acids less than or equal to 50 units long. Clearly, this
naming is a judgment call.

• longer chains of amino acids are called proteins

As we mentioned earlier, these long chains have side chains and other things that interact via
weak bonds or via other sorts of special bonds. For example, the amino acid cysteine (see the
right side of Figure 11.8) has the residue CH2SH and if the residues of two cysteines in an amino
acid chain become physically close (this can happen even if the two cysteines are very far
apart on the chain because the chain twists and bends in three dimensional space!), an S − S bond
can form between the sulphur atoms in the SH groups. This is called a disulfide bond and it is yet
another important bond for amino acid chains. An example of this bond occurs in the protein
insulin as shown in Figure 11.20.

Figure 11.20: Disulfide Bonds in the Insulin Protein

In Figure 11.20, note we represent the amino acid chains by drawing them as beads on a string:
each bead is a circle containing the abbreviation of an amino acid as we listed in Table 11.1. This
is a very convenient representation even if much detail is hidden. In general, for a protein, there
are four ways to look at its structure: The primary structure is the sequence of amino acids in
the chain as shown in Figure 11.21(a). To know this, we need to know the order in which the
amino acids occur in the chain: this is called sequencing the protein.
Due to amino acid interactions along the chain, different regions of the full primary chain may
form local three dimensional structures. If we can determine these, we can say we know the
secondary structure of the protein. An example is a helix structure as shown in Figure 11.21(b).


(a) Primary Structure of a Protein    (b) Secondary Structure of a Protein

Figure 11.21: First Order Protein Foldings

If we pack secondary structures into one or more compact globular units called domains, we
obtain the tertiary structure, an example of which is shown in Figure 11.22(a). In this figure,
each rectangle represents secondary structural elements. Finally, the protein may contain several
tertiary elements which are organized into larger structures. This way, amino acids far apart
in the primary sequence can be brought close enough together in three dimensions to
interact. This is called the quaternary structure of the protein. An example is shown in Figure
11.22(b).

(a) Tertiary Structure of a Protein    (b) Quaternary Structure of a Protein

Figure 11.22: Higher Order Protein Foldings

Chapter 12
Nucleic Acids

Our genetic code is contained in linear chains of what are called nucleic acids in combination with
a particular type of sugar. These nucleic acid plus sugar groups are used in a very specific way
to code for each of the twenty amino acids we mentioned in Chapter 11. So our next task is to
discuss sugars and nucleic acids and the way these things are used to code for the amino acids.

12.1 Sugars:

Consider the cyclic hydrocarbon shown in Figure 12.1(a). The ring you see in Figure 12.1(a) is
formed from five carbons and one oxygen. For sugars, we label the carbons with primes as 1C'
to 6C' because it will be important to remember which side chains are attached to which carbons.
This type of structure is called a pyran and can be indicated more schematically as in Figure
12.1(b). Another common type of structure is that shown in Figure 12.1(c) which is formed from
four carbons and one oxygen. We label the carbons in a similar fashion to the way we labeled in
the pyran molecule. More symbolically, we would draw a furan as shown in Figure 12.1(d).
We will spend most of our time with the furan structures which have the very particular three
dimensional geometry shown in Figure 12.2(a). Note that 3C' and 5C' are out of the plane formed
by O − 1C' − 2C' − 4C'; this is called the 3C' endo form. Another three dimensional version of
the furan molecule is the 2C' endo form shown in Figure 12.2(b). Here, 2C' and 5C' are out of
the plane formed by O − 1C' − 3C' − 4C'. Looking ahead some, these three dimensional forms are
important because only certain ones are used in biologically relevant structures.
Later, we will define the large molecules DNA and RNA and we will find that DNA uses the
2C' endo and RNA, the 3C' endo form. Now the particular sugar we are interested in is called
ribose which will come in an oxygenated and non-oxygenated (deoxy) form. Consider the formula
for a ribose sugar as shown in Figure 12.3(a). Note the 2C' carbon has a hydroxyl group on it.


(a) Details of a Cyclic Hydrocarbon Sugar: Pyran    (b) The Schematic for Pyran
(c) Details of a Cyclic Hydrocarbon Sugar: Furan    (d) The Schematic for Furan

Figure 12.1: The Pyran and Furan Structures

(a) The 3C' endo Furan    (b) The 2C' endo Furan

Figure 12.2: Three Dimensional Furan Structures


(a) The Ribose Sugar (b) The Deoxy Ribose Sugar

Figure 12.3: Oxygenated and De-oxygenated Ribose Sugars

(a) The Generic Purine (b) The Generic Pyrimidine

Figure 12.4: Forms Of Nitrogenous Bases

If we remove the oxygen from this hydroxyl, the resulting sugar is known as the deoxy-ribose
sugar (see Figure 12.3(b)).

12.2 Nucleotides:
There are four special nitrogenous bases which are important. They come in two flavors: purines
and pyrimidines. The purines have the form shown in Figure 12.4(a) while the pyrimidines
have the one shown in Figure 12.4(b).
There are two purines and two pyrimidines we need to know about: the purines adenine
and guanine and the pyrimidines thymine and cytosine. These are commonly abbreviated as
shown in Table 12.1:

Their chemical formulae are important, so we show their respective forms in Figure 12.5(a)
(Adenine is a purine with an attached amide on 6C'), Figure 12.5(b) (Guanine is a purine with
an attached oxygen on 6C'), Figure 12.5(d) (Cytosine is a pyrimidine with an attached amide
on 4C'), and Figure 12.5(c) (Thymine is a pyrimidine with an attached oxygen on 4C'). These

Type Name Abbreviation


Purine Adenine A
Purine Guanine G
Pyrimidine Thymine T
Pyrimidine Cytosine C

Table 12.1: Abbreviations for the Nitrogenous Bases

(a) Adenine (b) Guanine

(c) Thymine (d) Cytosine

Figure 12.5: Purine and Pyrimidine Nucleotides

four nitrogenous bases can bond to the ribose or deoxyribose sugars to create what are called
nucleosides. For example, adenine plus deoxyribose would give a compound called deoxy-
adenoside as shown in Figure 12.6(a).
In general, a sugar plus a purine or pyrimidine nitrogenous base gives us a nucleoside. If we
add phosphate to the 5C' of the sugar, we get a new molecule called a nucleotide (note the change
from side to tide!). In general, sugar plus phosphate plus nitrogenous base gives a nucleotide. An
example is deoxy-adenotide as shown in Figure 12.6(b).
This level of detail is far more complicated and messy than we typically wish to show; hence,
we generally draw this in the compact form shown in Figure 12.8(a). There, we have replaced
the base with a simple shaded box and simply labeled the primed carbons with their numerical
ranking.

(a) deoxy-adenoside (b) deoxy-adenotide

Figure 12.6: Deoxy Forms of Adenine
In Figure 12.7 we show how nucleotides can link up into chains: bond the 5C′ of the ribose on
one nucleotide to the 3C′ of the ribose on another nucleotide with a phosphate ($PO_3^-$) bridge.
This chain of three nucleotides has a terminal OH on the 5C′ of the top sugar and a terminal
OH on the 3C′ of the bottom sugar. We often write this even more abstractly as shown in Figure
12.8(b) or just OH−Base3−P−Base2−P−Base1−OH, where the P denotes a phosphate
bridge. For example, for a chain with bases adenine, adenine, cytosine and guanine, we would
write OH−A−p−A−p−C−p−G−OH or OHApApCpGOH. Even this is cumbersome, so
we will leave out the common phosphate bridges and terminal hydroxyl groups and simply write
AACG. It is thus understood that the left end is a hydroxyl terminated 5C′ and the right end a
hydroxyl terminated 3C′.

12.3 Complementary Base Pairing:

The last piece in this puzzle is the fact that the purine and pyrimidine nucleotides can bond
together in the following ways: A to T or T to A and C to G or G to C. We say that adenine and
thymine and cytosine and guanine are complementary nucleotides. This bonding occurs because
hydrogen bonds can form between the adjacent nitrogen or between adjacent nitrogen and oxygen
atoms. For example, look at the T − A bond in Figure 12.9(a). Note the bases are inside and the
sugars outside. Finally, note how the bonding is done for the cytosine and guanine components
in Figure 12.9(b).
Now as we have said, nucleotides can link into a long chain via the phosphate bond. Each base
in this chain is attracted to a complementary base. It is energetically favorable for two chains to
form: chain one and its complement, chain two. In Table 12.2, we see how this pairing is done for
a short sequence of nucleotides.

Figure 12.7: A Nucleotide Chain

(a) A General Nucleotide (b) An Abstract Nucleotide Chain

Figure 12.8: Nucleotide Chain Abstractions

(a) The Thymine - Adenine Bond (b) The Cytosine - Guanine Bond

Figure 12.9: T-A and C-G Complementary Bonds

Figure 12.10: A Cross-section of a Nucleotide Helix

Note that the 5′ end pairs with a 3′ end and vice versa. In the table, each pair of complementary
nucleotides is called a complementary base pair. The forces that act on the residues of the nucleotides
and between the nucleotides themselves, coupled with the rigid nature of the phosphodiester bond
between two nucleotides, induce the two chains to form a double helix structure under cellular
conditions which in cross-section (see Figure 12.10) has the bases inside and the sugars outside.
The complementary nucleotides fit into the spiral most efficiently with 100 degrees of rotation
and 1.5 Angstroms of rise between each base pair.


Chain One    Chain Two
5′ end       3′ end
C            G
T            A
A            T
C            G
G            C
G            C
C            G
T            A
A            T
T            A
T            A
C            G
G            C
3′ end       5′ end

Table 12.2: Two Complementary Nucleotide Chains

Thus, there are 3.6 base pairs for every 360 degrees of rotation around the spiral, with a rise
of 3.6 × 1.5 = 5.4 Angstroms per turn. This is, of course, hard to draw! If you were looking down
at the spiral from the top, you could imagine each base to base pair as a set of bricks. You would
see a lower set of bricks and then the next pair of bricks above that pair would be rotated 100
degrees as shown in Figure 12.11(a). To make it easier to see what is going on, only the top pair
of bases have the attached sugars drawn in. You can see that when you look down at this spiral,
all the sugars are sticking outwards. The double helix is called DNA when deoxy-ribose sugars
are used on the nucleotides in our alphabet. The name DNA stands for deoxyribonucleic acid.
A chain structure closely related to DNA is what is called RNA, where the R refers to the fact
that oxygenated ribose sugars, or simply ribose sugars, are used on the nucleotides in the alphabet
used to build RNA. The RNA alphabet is slightly different, as the nucleotide Thymine, T, in the
DNA alphabet is replaced by the similar nucleotide Uracil, U. The chemical structure of uracil is
shown in Figure 12.11(b) right next to the formula for thymine. Note that the only difference is
that carbon 5C′ holds a methyl group in thymine and just a hydrogen in uracil. Despite these
differences, uracil will still bond to adenine via a complementary bond.
It is rare for the long string of oxygenated ribose nucleotides to form a double helix, although
within that long chain of nucleotides there can indeed be local hairpin like structures and so forth.
Amino acids are coded using nucleotides with what is called the triplet code. This name
came about because each set of three nucleotides is used to construct one of the twenty amino
acids through a complicated series of steps. We will simply say that each triplet is mapped to an
amino acid as a shorthand for all of this detail. There are 20 amino acids and only 4 nucleotides.
Hence, our alphabet here is {A, C, T, G}. The number of ordered triplets we can build from an
alphabet of 4 things is 64. To see this, think of a given triplet as a set of three empty slots; there
are 4 ways to fill slot 1, 4 independent ways to fill slot 2 (we now have 4 × 4 ways to fill the first
two slots) and finally, 4 independent ways to fill slot 3. This gives a total of 4 × 4 × 4 or 64 ways
to fill the three slots independently. Since there are only 20 amino acids, it is clear that more
than one nucleotide triplet could be mapped to a given amino acid! In a similar way, there are 64
different ways to form triplets from the RNA alphabet {A, C, U, G}. We tend to identify these
two sets of triplets and the associated mapping to amino acids as it is just a matter of replacing
the T in one set with a U to obtain the other set.

(a) The Base-Base Pair Rotation (b) The Nucleotide Uracil

Figure 12.11: Helix Base Pair Rotation And RNA Substitution Uracil
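To make the counting argument concrete, here is a small MatLab sketch of our own (it is not in the original notes) that builds all of the DNA triplets by filling the three slots independently:

alphabet = {'A','C','T','G'};
triplets = {};
% fill slot 1, slot 2 and slot 3 independently: 4 x 4 x 4 choices
for i = 1:4
  for j = 1:4
    for k = 1:4
      triplets{end+1} = [alphabet{i} alphabet{j} alphabet{k}];
    end
  end
end
length(triplets)    % prints 64, just as the slot argument predicts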

12.4 A Quick Look at How Proteins Are Made:

Organisms on earth have evolved to use this nucleotide triplet to amino acid mapping (it is not
clear why this is the mapping used over other possible choices!). Now proteins are strings of
amino acids. So each amino acid in this string can be thought of as the output of a mapping
from the triplet code we have been discussing. Hence, associated to each protein of length N is
a long chain of nucleotides of length 3N . Even though the series of steps by which the triplets
are mapped into a protein is very complicated, we can still get a reasonable grasp of how proteins
are made from the information stored in the nucleotide chain by looking at the process with the
right level of abstraction.
Here is an overview of the process. When a protein is built, certain biological machines are
used to find the appropriate place in the DNA double helix where a long string of nucleotides
which contains the information needed to build the protein is stored. This long chain of
nucleotides which encodes the information to build a protein is called a gene. Biological
machinery unzips the double helix at this special point into two chains as shown in Figure 12.12.

Figure 12.12: The Unzipped Double Helix

A complementary copy of a DNA single strand fragment is made using complementary pairing,
but this time adenine pairs to uracil to create a fragment of RNA. This fragment of RNA serves
to transfer information encoded in the DNA fragment to other places in the cell where the actual
protein can be assembled. Hence, this RNA fragment is given a special name – Messenger RNA
or mRNA for short. This transfer process is called transcription.
Hence, the DNA fragment 5′-ACCGTTACCGT-3′ would induce the complementary DNA fragment
3′-TGGCAATGGCA-5′, but in the cell, the complementary RNA fragment 3′-UGGCAAUGGCA-5′ is
produced instead. Note again that the 5′ end pairs with a 3′ end and vice versa. There are many
details of course that we are leaving out. For example, there must be a special chunk of nucleotides
in the original DNA string that the specialized biological machines can locate as the place to begin
the unzipping process. The mRNA is transferred to a special protein manufacturing facility called
the ribosome where three nucleotides at a time from the mRNA string are mapped into their
corresponding amino acid. From what we said earlier, there are 64 different triplets that can be
made from the alphabet {A, C, U, G} and it is this mapping that is used to assemble the protein
chain a little at a time. For each chain that is unzipped, a complementary chain is attracted to
it in the fashion shown by Table 12.2. This complementary chain will however be built from the
oxygenated ribose, or simply ribose, nucleotides. Hence, this complementary chain is part of
a complementary RNA helix. As the amino acids encoded by mRNA are built and exit from the
ribosome into the fluid inside the cell, the chain of amino acids, or polypeptide, begins to twist
and curl into its three dimensional shape based on all the forces acting on it. We can write this
whole process symbolically as DNA → mRNA → ribosome → Protein. This is known as the
Central Dogma of Molecular Biology.
Hence, to decode a particular gene stored in DNA which has been transcribed to its complementary
mRNA form, all we need to know is which triplets are associated with which amino acids.
These triplets are called DNA Codons. The DNA alphabet form of this mapping is given in
Table 12.3; remember, the RNA form is the same, we just replace the thymines (T's) by uracils
(U's).

For example, the DNA sequence TAC|TAT|GTG|CTT|ACC|TCG|ATT is transcribed into the
mRNA sequence AUG|AUA|CAC|GAA|UGG|AGC|UAA which corresponds to the amino acid
string

Start|Isoleucine|Histidine|Glutamic Acid|Tryptophan|Serine|Stop
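As a small illustration of this complementary mapping, here is a hypothetical MatLab helper of our own (the notes do not contain it), which would live in its own file, say Transcribe.m:

function mrna = Transcribe(dna)
%
% map a DNA fragment to its complementary RNA fragment:
% A -> U, T -> A, C -> G and G -> C
%
mrna = dna;
for i = 1:length(dna)
  switch dna(i)
    case 'A'
      mrna(i) = 'U';
    case 'T'
      mrna(i) = 'A';
    case 'C'
      mrna(i) = 'G';
    case 'G'
      mrna(i) = 'C';
  end
end

With this helper, Transcribe('TACTATGTGCTTACCTCGATT') returns the string AUGAUACACGAAUGGAGCUAA used above.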

Note that shifting the reading by one base to the right or left completely changes which triplets
we read for coding into amino acids. This is called a frame shift and it can certainly lead to a
very different decoded protein. Changing one base in a given triplet is a very local change and
is a good example of a mutation, a kind of damage produced by the environment or by aging
or disease. Since the triplet code is quite redundant, this may or may not result in an amino acid
change. Even if it does, it corresponds to altering one amino acid in a potentially long chain.
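To see a frame shift concretely, here is a short MatLab fragment of our own (not in the original notes) that chops the mRNA string from the example above into triplets starting at two different offsets:

% split a string s into consecutive triplets starting at position k
codons = @(s,k) cellstr(reshape(s(k : k + 3*floor((length(s)-k+1)/3) - 1), 3, [])');
s = 'AUGAUACACGAAUGGAGCUAA';
codons(s,1)    % AUG, AUA, CAC, GAA, UGG, AGC, UAA as before
codons(s,2)    % UGA, UAC, ACG, AAU, GGA, GCU: a completely different read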


Amino Acid            DNA Triplet                  RNA Triplet
Alanine               GCA, GCC, GCG, GCT           GCA, GCC, GCG, GCU
Arginine              AGA, AGG, CGA, CGC,          AGA, AGG, CGA, CGC,
                      CGG, CGT                     CGG, CGU
Asparagine            AAC, AAT                     AAC, AAU
Aspartic Acid         GAC, GAT                     GAC, GAU
Cysteine              TGC, TGT                     UGC, UGU
Glutamic Acid         GAA, GAG                     GAA, GAG
Glutamine             CAA, CAG                     CAA, CAG
Glycine               GGA, GGC, GGG, GGT           GGA, GGC, GGG, GGU
Histidine             CAC, CAT                     CAC, CAU
Isoleucine            ATA, ATC, ATT                AUA, AUC, AUU
Leucine               CTA, CTC, CTG, CTT,          CUA, CUC, CUG, CUU,
                      TTA, TTG                     UUA, UUG
Lysine                AAA, AAG                     AAA, AAG
Methionine (Start)    ATG                          AUG
Phenylalanine         TTC, TTT                     UUC, UUU
Proline               CCA, CCC, CCG, CCT           CCA, CCC, CCG, CCU
Serine                AGC, AGT, TCA, TCC,          AGC, AGU, UCA, UCC,
                      TCG, TCT                     UCG, UCU
Threonine             ACA, ACC, ACG, ACT           ACA, ACC, ACG, ACU
Tryptophan            TGG                          UGG
Tyrosine              TAC, TAT                     UAC, UAU
Valine                GTA, GTC, GTG, GTT           GUA, GUC, GUG, GUU
Stop                  TAA, TAG, TGA                UAA, UAG, UGA

Table 12.3: The Triplet Code
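In MatLab, a convenient way to hold a mapping like Table 12.3 is a struct with dynamic field names. The tiny, partial sketch below is ours (only a few entries from the table are filled in), just enough to decode the mRNA example given earlier:

code = struct();
code.AUG = 'Start';
code.AUA = 'Isoleucine';
code.CAC = 'Histidine';
code.GAA = 'Glutamic Acid';
code.UGG = 'Tryptophan';
code.AGC = 'Serine';
code.UAA = 'Stop';
% look up one RNA triplet using a dynamic field name
code.('CAC')    % returns Histidine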

Chapter 13
Cell Membranes and Ion Movement

We will now begin our study of a living cell. Our first abstract cell is a spherical ball which encloses
a fluid called cytoplasm. The surface of the ball is actually a membrane with an inner and outer
part. Outside the cell there is a solution called the extracellular fluid. Both the cytoplasm and
extracellular fluid contain many molecules, polypeptides and proteins disassociated into ions as
well as sequestered into storage units. In this chapter, we will be interested in what this biological
membrane is and how we can model the flow of ionized species through it. This modeling is
difficult because some of the ions we are interested in can diffuse or drift across the membrane
and others must be allowed entry through specialized holes in the membrane called gates or even
escorted, i.e. transported or pumped, through the membrane by specialized helper molecules. So
we have a lot to talk about!

13.1 Membranes in Cells:

The functions carried out by membranes are essential to life. Membranes are highly selective
permeability barriers instead of impervious containers because they contain specific pumps and
gates as we have mentioned. Membranes control the flow of information between cells and their
environment because they contain specific receptors for external stimuli and they have mechanisms
by which they can generate chemical or electrical signals.
Membranes have several important common attributes. They are sheet like structures a few
molecules thick (60 - 100 Å). They are built from specialized molecules called lipids together
with proteins. The weight ratio of proteins to lipids is about 4 : 1. They also contain specialized
molecules called carbohydrates (we haven't yet discussed these) that are linked to the lipids
and proteins. Membrane lipids are small molecules with a hydrophilic (i.e. attracted to water)
part and a hydrophobic (i.e. repelled by water) part. These lipids spontaneously assemble
into closed bimolecular sheets in an aqueous medium. Essentially, it is energetically most favorable


for the hydrophilic parts to be on the outside near the water and the hydrophobic parts to be
inside away from the water. If you think about it a bit, it is not hard to see that forming a
sphere is a great way to get the water hating parts away from the water by placing them inside
the sphere and to get the water loving parts near the water by placing them on the outside of the
sphere. This lipid sheet is of course a barrier to the flow of various kinds of molecules. Specific
proteins mediate distinctive functions of these membranes. Proteins serve many functions: as
pumps, gates, receptors, energy transducers and enzymes among others.
Membranes are thus structures or assemblies whose constituent protein and lipid molecules are
held together by many non-covalent interactions which are cooperative. Since the two faces of
the membrane are different, they are called asymmetric fluid structures which can be regarded as
2-D solutions of oriented proteins and lipids. A typical membrane is built from what are called
phospholipids which have the generic appearance shown in Figure 13.1(a). Note the group
shown in Figure 13.1(d) is polar and water soluble, i.e. hydrophilic, and the other side, Figure
13.1(c), is hydrophobic. In these drawings, we are depicting the phosphate bond as a double one:
see the discussion in Chapter 11 where this bond is called a coordinate covalent bond.

(a) A Typical Phospholipid (b) An Abstract Lipid
(c) The Hydrophobic Phospholipid End Group (d) The Hydrophilic Phospholipid End Group

Figure 13.1: The Phospholipid Membranes

Of course, this is way too much detail to draw; hence, we use the abstraction shown in Figure
13.1(b) using the term head for the hydrophilic part and tail for the hydrophobic part. Thus, these
lipids will spontaneously assemble so that the heads point out and the tails point in allowing us


to draw the self assembled membrane as in Figure 13.2(a).


We see the heads orient towards the water and the tails away from the water spontaneously
into this sheet structure. The assembly can also form a sphere rather than a sheet as shown in
Figure 13.2(b). A typical mammalian cell is 25 µm in radius where a µm is 10−6 meter. Since
1Å is 10−10 meter or 10−4 µm, we see a cell’s radius is around 250,000 Å. Since the membrane
is only 60Å or so in thickness, we can see the percentage of real estate of the cell concentrated
in the membrane is very small. So a molecule only has to go a small distance to get through the
membrane but to move through the interior of the cell (say to get to the nucleus) is a very long
journey! Another way of looking at this is that the cell has room in it for a lot of things!

(a) The Self Assembled Lipid Sheet Membrane Structure (b) The Self Assembled Lipid Sphere
Membrane Structure

Figure 13.2: Membrane Structures

13.2 The Physical Laws of Ion Movement:


We have relied on the wonderful books of Johnston and Wu (Johnston and Wu (14) 1995) and
Weiss (Weiss (26) 1996) in developing this discussion. They provide even more details and you
should feel free to look at these books. However, the amount of detail in them can be
overwhelming, so we are trying to offer a short version with just enough detail for our
mathematical/biological engineer and computer scientist audience! An ion c can move across a
membrane due to several forces.

13.2.1 Ficke’s law of Diffusion:

First, let’s talk about what concentration of a molecule means. For an molecule b, the concen-
molecules
tration of the ion is denoted by the symbol [b] and is measured in liter . Now, we hardly
ever measure concentration in molecules per unit volume; instead we use the fact that there are
M oles
6.02 × 1023 molecules in a Mole and usually measure concentration in the units cm3
= M where


for simplicity, the symbol M denotes the concentration in Moles per cm3 . This special number is
called Avogadro’s Number and we will denote it by NA . In the discussions that follow, we will
at first write all of our concentrations in terms of molecules, but remember what we have said
about Moles as we will eventually switch to those units as they are more convenient.
The force that arises from the rate of change of the concentration of molecule b acts on the
molecules in the membrane to help move them across. The amount of molecules that move across
per unit area due to this force is labeled the diffusion flux, as a flux is defined to be a rate of
transfer (something per second) per unit area. Fick's Law of Diffusion is an empirical law which
says the diffusion flux is proportional to the spatial rate of change of the concentration of molecule
b; in mathematical form:

J_{diff} = -D \frac{\partial [b]}{\partial x}    (13.1)

where

• J_{diff} is the diffusion flux, which has units of molecules/(cm^2 - second).

• D is the diffusion coefficient, which has units of cm^2/second.

• [b] is the concentration of molecule b, which has units of molecules/cm^3.

The minus sign implies that flow is from high to low concentration; hence diffusion takes place
down the concentration gradient. Note that D is the proportionality constant in this law.

13.2.2 Ohm’s Law of Drift:

Ohm’s Law of Drift relates the electrical field due to an charged molecule, i.e. an ion, c across
a membrane to the drift of the ion across the membrane where drift is the amount of ions that
moves across the membrane per unit area. In mathematical form

Jdrif t = − ∂el E (13.2)

where it is important to define our variables and units very carefully. We have:

molecules
• Jdrif t is the drift of the ion which has units of cm2 −second
.

molecules
• ∂el is electrical conductivity which has units of volt−cm−second .

We know from basic physics that an electrical field is the negative gradient of the potential, so
if V is the potential across the membrane and x is the variable that measures our position in the
membrane, we have

E = -\frac{\partial V}{\partial x}

Now the valence of an ion c is the charge on the ion as an integer; i.e. the valence of Cl^- is -1
and the valence of Ca^{+2} is +2. We let the valence of the ion c be denoted by z. It is possible to
derive the following relation between the concentration [c] and the electrical conductivity \sigma_{el}:

\sigma_{el} = \mu\, z\, [c]

where dimensional analysis shows us that the proportionality constant \mu, called the mobility of
ion c, has units cm^2/(volt - second). Hence, we can rewrite Ohm's Law of Drift as

J_{drift} = -\mu\, z\, [c] \frac{\partial V}{\partial x}    (13.3)

We see that the drift of charged particles goes against the electrical gradient.

13.2.3 Einstein’s Relation:


There is a relation between the diffusion coefficient D and the mobility \mu of an ion which is called
Einstein's Relation. It says

D = \frac{\kappa T}{q}\, \mu    (13.4)

where

• \kappa is Boltzmann's constant, which is 1.38 \times 10^{-23} joules/degree Kelvin.

• T is the temperature in degrees Kelvin.

• q is the charge of the ion c, which has units of coulombs.

To see that the units work out, we have to recall some basic physics: we know that electrical
work is measured in volt-coulombs or joules. Hence, we see that \frac{\kappa T}{q}\, \mu has units

\frac{volt \cdot coulomb}{degree\ Kelvin} \times \frac{degree\ Kelvin}{coulomb} \times \frac{cm^2}{volt \cdot second}

which reduces to the units of D.

Further, we see that Einstein's Relation says that diffusion and drift processes are additive because
Ohm's Law of Drift says J_{drift} is proportional to \mu, which by Einstein's Relation is proportional
to D and hence to J_{diff}.


13.2.4 Space Charge Neutrality:

When we look at a given volume element enclosed by a non permeable membrane, we also know
that the total charge due to positively charged ions, cations, and negatively charged ions, anions,
is the same. If we have N cations c_i with valences z_i^+ and M anions a_j with valences z_j^-, then,
since the charge due to an ion is its valence times the charge e of an electron, we have

\sum_{i=1}^{N} z_i^+\, e\, [c_i] = \sum_{j=1}^{M} z_j^-\, e\, [a_j]    (13.5)

Of course, in a living cell, the membrane is permeable, so equation 13.5 is not valid!

13.2.5 Ions, Volts and a Simple Cell:

The membrane capacitance of a typical cell is one microfarad per unit area. Typically, we use F
to denote the unit farads and the unit of area is cm^2. Also, recall that 1 F = 1 coulomb/volt.
Thus, the typical capacitance is 1.0 µF/cm^2. Now our simple cell will be a sphere of radius 25 µm
with the inside and outside filled with a fluid. Let's assume the ion c is in the extracellular fluid
with concentration [c] = 0.5 M and is inside the cell with concentration [c] = 0.5 M. The inside
and outside of the cell are separated by a biological membrane of the type we have discussed. We
show our simple cell model in Figure 13.3.

Figure 13.3: A Simple Cell

Right now in our picture, the number of ions on both sides of the membrane is the same.
What if one side had more or fewer ions than the other? These uncompensated ions would produce
a voltage difference across the membrane because charge is capacitance times voltage (q = CV).
Hence, if we wanted to produce a one volt potential difference across the membrane, we can
compute how much uncompensated charge per unit area, \delta[c], would be needed:

\delta[c] = 10^{-6} \frac{F}{cm^2} \times 1.0\, V = 10^{-6} \frac{coulombs}{cm^2}

Now the typical voltage difference across a biological membrane is on the order of 100 millivolts
or less (1 millivolt is 10^{-3} V and is abbreviated mV). The capacitance of the membrane per cm^2
multiplied by the desired voltage difference of 100 mV will give the uncompensated charge, n,
per cm^2 we need. Thus, we find

n = 10^{-6} \frac{coulombs}{volt \cdot cm^2} \times 10^{-1}\, V = 10^{-7} \frac{coulombs}{cm^2}


For convenience, let’s assume our ion c has a valence of −1. Now one electron has a charge of e of
1.6 × 10−19 coulombs, so the ratio n
e tells us that 6.3 × 1011 ions
cm2
are needed. We know the surface
area, SA, and volume, V ol, of our simple cell of radius r are SA = 4πr2 = 7.854 × 10−5 cm2
and V ol = 4
3 πr
3 = 6.545 × 10−8 cm3 . Thus, the number of uncompensated ions per cell to get
this voltage difference is m = n SA giving

n
m = SA = 4.95 107 ions
e

This is a very tiny fraction of the total number of ions inside or outside the cell as for a .5M
solution, there are

ions M ole
0.5 × NA × V ol = 1.97 × 1016 ions
M ole cm3

implying the percentage of uncompensated ions to give rise to a voltage difference of 100 mV is
only 2.51 10−7 %.
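These numbers are easy to check in MatLab; the fragment below is our own quick verification (it is not part of the original notes) and simply redoes the arithmetic above:

% uncompensated ion arithmetic for the simple spherical cell
r   = 25.0e-4;             % cell radius in cm (25 micrometers)
Cm  = 1.0e-6;              % membrane capacitance in F/cm^2
V   = 0.1;                 % desired voltage difference in volts
e   = 1.6e-19;             % charge on one electron in coulombs
NA  = 6.02e23;             % Avogadro's number
n   = Cm*V;                % uncompensated charge per cm^2: 1.0e-7
SA  = 4*pi*r^2;            % surface area: 7.854e-5 cm^2
Vol = (4/3)*pi*r^3;        % volume: 6.545e-8 cm^3
m   = (n/e)*SA             % about 4.9 x 10^7 ions
total = 0.5*NA*Vol         % about 1.97 x 10^16 ions
percentage = 100*m/total   % about 2.5 x 10^-7 percent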

13.3 The Nernst - Planck Equation:


Under physiological conditions, ion movement across the membrane is influenced by both electric
fields and concentration gradients. Let J denote the total flux; then we will assume that we can
add linearly the diffusion due to the molecule c and the drift due to the ion c, giving

J = J_{drift} + J_{diff}

Thus, applying Ohm's Law of Drift 13.3 and Fick's Law 13.1, we have

J = -\mu\, z\, [c] \frac{\partial V}{\partial x} - D \frac{\partial [c]}{\partial x}

Next, we use Einstein's Relation 13.4 to replace the diffusion constant D and obtain what is called
the Nernst - Planck equation:

J = -\mu\, z\, [c] \frac{\partial V}{\partial x} - \mu \frac{\kappa T}{q} \frac{\partial [c]}{\partial x}
  = -\mu \left( z\, [c] \frac{\partial V}{\partial x} + \frac{\kappa T}{q} \frac{\partial [c]}{\partial x} \right)    (13.6)

We can rewrite this result by moving to units of moles/(cm^2 - second). To do this, note that
J/N_A has the proper units; dividing the Nernst-Planck equation 13.6 by N_A, we obtain


 
\frac{J}{N_A} = -\frac{\mu}{N_A} \left( z\, [c] \frac{\partial V}{\partial x} + \frac{\kappa T}{q} \frac{\partial [c]}{\partial x} \right)    (13.7)

The relationship between charge and moles is given by Faraday's Constant, F, which has the
value F = 96,480 coulombs/mole. Hence, the total charge in a mole of ions is the valence of the ion
times Faraday's constant, zF. Multiply equation 13.7 by zF on both sides to obtain

zF \frac{J}{N_A} = -\frac{\mu\, zF}{N_A} \left( z\, [c] \frac{\partial V}{\partial x} + \frac{\kappa T}{q} \frac{\partial [c]}{\partial x} \right)    (13.8)

This equation has the units of current per unit area because

zF \frac{J}{N_A} = \frac{moles}{cm^2 \cdot second} \times \frac{coulombs}{mole} = \frac{coulombs}{cm^2 \cdot second} = \frac{amps}{cm^2}

We can measure energy in two different units: joules (we use these in Boltzmann's constant \kappa)
or calories. One calorie is the amount of energy needed to raise one gram of water at 25 degrees
Centigrade one degree Centigrade. So it is certainly not obvious how to convert from joules to
calories. An argument based on low level principles from physics gives us the following conversion:
one calorie is 4.184 joules. Also from basic physics, there is another fundamental physical
constant called the Gas Constant, which is traditionally denoted by R. This constant can be
expressed in terms of calories or joules as follows:

R = 1.98 \frac{calories}{degree\ Kelvin \cdot Mole} = 8.31 \frac{joules}{degree\ Kelvin \cdot Mole}

Hence, if we let q be the charge e on one electron, we have for T equal to one degree Kelvin

\frac{\kappa\, (T=1)}{e} = \frac{1.38 \times 10^{-23}\ joules/degree\ Kelvin}{1.6 \times 10^{-19}\ coulombs}\, (1\ degree\ Kelvin) = 8.614 \times 10^{-5} \frac{joules}{coulomb}

\frac{R\, (T=1)}{F} = \frac{8.31\ joules/(degree\ Kelvin \cdot Mole)}{96,480\ coulombs/Mole}\, (1\ degree\ Kelvin) = 8.614 \times 10^{-5} \frac{joules}{coulomb}

For later purposes, we will need to remember that

\frac{RT}{F} = 8.614 \times 10^{-5}\, T\ \frac{joules}{coulomb}    (13.9)

where T is measured in degrees Kelvin. Thus, since \kappa T/q is the same as RT/F, they are
interchangeable in equation 13.8, giving

I = zF \frac{J}{N_A} = -\frac{\mu}{N_A} \left( z^2 F\, [c] \frac{\partial V}{\partial x} + z\, RT \frac{\partial [c]}{\partial x} \right)    (13.10)

where the symbol I denotes the current density (amps/cm^2) that we obtain with this equation. The
current I is the ion current that flows across the membrane per unit area due to the forces acting
on the ion c. Clearly, the next question to ask is what happens when this system is at equilibrium
and the net current is zero?

13.4 Equilibrium Conditions: The Nernst Equation:

The current form of the Nernst-Planck equation given in equation 13.10 describes ionic current
flow driven by electro-chemical potentials (concentration gradients and electric fields). We know
that the current I runs opposite to \partial V/\partial x; it runs with \partial [c]/\partial x if the valence z is negative and
against \partial [c]/\partial x if the valence z is positive. When the net current due to all of these contributions
is zero, we have I = 0 and by the Nernst-Planck Current Equation 13.10, we have

0 = -\frac{\mu}{N_A} \left( z^2 F\, [c] \frac{\partial V}{\partial x} + z\, RT \frac{\partial [c]}{\partial x} \right)

implying

z^2 F\, [c] \frac{\partial V}{\partial x} = -z\, RT \frac{\partial [c]}{\partial x}

or, since there is only one independent variable x,

\frac{dV}{dx} = -\frac{RT}{zF} \frac{1}{[c]} \frac{d[c]}{dx}

This equation can be integrated as follows: between positions x_1 and x_2, we find

\int_{x_1}^{x_2} \frac{dV}{dx}\, dx = -\frac{RT}{zF} \int_{x_1}^{x_2} \frac{1}{[c]} \frac{d[c]}{dx}\, dx


Now assume that the membrane voltage and the concentration [c] are functions of the position
x in the membrane and hence can be written as V(x) and [c](x); we will then let V(x_1) = V_1,
V(x_2) = V_2, [c](x_1) = [c]_1 and [c](x_2) = [c]_2. Then, upon integrating, we find

\int_{V_1}^{V_2} dV = -\frac{RT}{zF} \int_{[c]_1}^{[c]_2} \frac{d[c]}{[c]}

V_2 - V_1 = -\frac{RT}{zF} \ln \frac{[c]_2}{[c]_1}

It is traditional to define the membrane potential V_m of a cell to be the difference between
the inside potential (V_{in}) and the outside potential (V_{out}); hence we say

V_m = V_{in} - V_{out}

For a given ion c, the equilibrium potential of the ion is denoted by E_c and is defined as the
potential across the membrane which gives a zero Nernst-Planck current. We will let the position
x_1 be the place where the membrane starts and x_2 the place where the membrane ends. Here, the
thickness of the membrane is not really important. So the potential at x_1 will be considered the
inner potential V_{in} and the potential at x_2 will be considered the outer potential V_{out}. From our
discussions above, we see that the assumption that I is zero implies that the difference V_1 - V_2 is
E_c, and so, labeling [c]_2 and [c]_1 as [c]_{out} and [c]_{in} respectively, we arrive at the following equation:

E_c = \frac{RT}{zF} \ln \frac{[c]_{out}}{[c]_{in}}    (13.11)

This important equation is called the Nernst equation and is an explicit expression for the
equilibrium potential of an ion species in terms of its concentrations inside and outside of the cell
membrane.

13.4.1 An Example:

Let's compute some equilibrium potentials. In Table 13.1, we see some typical inner and outer
ion concentrations and the corresponding equilibrium voltages. Unless otherwise noted, we will
assume a temperature of 70 degrees Fahrenheit – about normal room temperature – which is 21.11
Celsius and 294.11 Kelvin, as Kelvin is 273 plus Celsius. Since the factor R/F is always a constant
here, note

\frac{R}{F} = \frac{8.31}{96,480} \frac{joules}{coulomb \cdot degrees\ Kelvin}    (13.12)
            = 8.61 \times 10^{-5} \frac{Volts}{degrees\ Kelvin}    (13.13)
            = 0.0861 \frac{mV}{degrees\ Kelvin}    (13.14)

Let’s look at some examples of this sort of calculation. While it is not hard to do this
calculation, we have found that all the different units are confusing to students coming from the
RT
mixed background we see. Now at a temperature of 294.11 Kelvin, F becomes 25.32 mV. Hence,
all of our equilibrium voltage calculations take the form

1 [c]out
Ec = 25.32 ln mV
z [c]in

where all we have to do is to use the correct valence of our ion c. Also, remember that the
symbol ln means we should use a natural logarithm! Here are some explicit examples for this
temperature:

1. For frog muscle, typical inner and outer concentrations for potassium are [K^+]_{out} = 2.25
mM (the unit mM means milliMoles) with [K^+]_{in} = 124.0 mM. Then, since z is +1, we
have

E_{K^+} = 25.32 \ln \frac{2.25}{124.0}\ mV = 25.32 \times (-4.0094)\ mV = -101.52\ mV

2. For frog muscle, typical inner and outer concentrations for chlorine are [Cl^-]_{out} = 77.5 mM
with [Cl^-]_{in} = 1.5 mM. Then, since z is -1, we have

E_{Cl^-} = -25.32 \ln \frac{77.5}{1.5}\ mV = -25.32 \times (3.944)\ mV = -99.88\ mV

3. For frog muscle, typical inner and outer concentrations for calcium are [Ca^{+2}]_{out} = 2.1 mM
with [Ca^{+2}]_{in} = 10^{-4} mM. Then, since z is +2, we have

E_{Ca^{+2}} = 0.5 \times 25.32 \ln \frac{2.1}{10^{-4}}\ mV = 12.66 \times (9.9523)\ mV = 126.00\ mV

We summarize the results above as well as two other sets of calculations in Table 13.1. In the
first two parts of the table we use a temperature of 294.11 Kelvin and the conversion RT/F = 25.32
mV. In the last part, the temperature is higher (310 Kelvin) and so RT/F becomes 26.69 mV. All
concentrations are given in mM.


Frog Muscle (Conway 1957)    [c]in            [c]out        Ec
K+                           124.0            2.25          -101.52
Na+                          10.4             109.0         59.30
Cl-                          1.5              77.5          -99.88
Ca+2                         10^-4            2.1           126.00
Squid Axon (Hodgkin 1964)
K+                           400.0            20.0          -75.85
Na+                          50.0             440.0         55.06
Cl-                          40.0 to 150.0    560.0         -66.82 to -33.35
Ca+2                         10^-4            10.0          145.75
Mammalian Cell
K+                           140.0            5.0           -88.94
Na+                          5.0 to 15.0      145.0         89.87 to 60.55
Cl-                          4.0              110.0         -88.46
Ca+2                         10^-4            2.5 to 5.0    135.13 to 144.39

Table 13.1: Typical Inner and Outer Ion Concentrations

13.5 One Ion Nernst Computations in MatLab:


Now let’s do some calculations using MatLab for various ions. First, we will write a MatLab
function to compute the Nernst voltage. Here is a simple MatLab function to do this

Listing 13.1: Nernst.m

function voltage = Nernst( valence, Temperature, InConc, OutConc )
%
% compute Nernst voltage for a given ion
%
R = 8.31;
T = Temperature + 273.0;
F = 96480.0;
Prefix = (R*T)/(valence*F);
%
% output voltage in millivolts
%
voltage = 1000.0*( Prefix*log( OutConc/InConc ) );

It is then straightforward to compute a Nernst potential. We are computing the Nernst
potential for a potassium ion. The valence is thus 1. We will use a temperature of 37 degrees
Centigrade (quite hot!) with an inside concentration of 124 milliMoles and an outside
concentration of 2.5 milliMoles.

% Our function expects the arguments to be
% entered as follows:
%
% Your variable name   valence   temperature(C)   InsideConc   OutsideConc
%        |                |            |               |            |
%        v                v            v               v            v
%       E_K   = Nernst(   1,         37.0,           124,          2.5)
%
% so here is our line
>> E_K = Nernst(1,37.0,124,2.5)

This function call produces the following output (edited to remove extra blank lines)

valence =
1
Temperature =
37
OutConc =
2.5000
InConc =
   124
T =
310
F =
96480
Prefix =
0.0267
E_K =
-104.2400
>>

Now let’s do a simple plot. We are still in MatLab, so we divide the interval [15,37] into 200
uniformly spaced points and save these points into a vector called DegreesC.

>> DegreesC = linspace(15,37,201);

Using this vector, we call our Nernst voltage function to compute a corresponding vector of
voltage values and generate a plot of voltage versus temperature.

>> V_K = Nernst(1,DegreesC,124,2.5);


>> plot(DegreesC,V_K,’g-’);

As discussed above, we saved this plot as a file. In Figure 13.4, you can see what we have
generated:

Figure 13.4: The Potassium Voltage vs. Temperature


13.5.1 Exercises:
Exercise 13.5.1. 1. Use the MatLab functions we have written above to generate a plot of
the Nernst potential versus inside concentration for the Sodium ion at T = 20 degrees C.
Assume the outside concentration is always 440 milliMoles and let the inner concentration
vary from 2 to 120 in 200 uniformly spaced steps.

2. Rewrite our Nernst and NernstMemVolt functions to accept temperature arguments in de-
grees Fahrenheit.

3. Rewrite our NernstMemVolt function for just Sodium and Potassium ions.

Exercise 13.5.2. For the following outside and inside ion concentrations, calculate the Nernst
voltages at equilibrium for the temperatures 45°F, 55°F, 65°F and 72°F.

        [c]in     [c]out
K+      130.0     5.25
Na+     15.4      129.0
Cl-     1.8       77.5
Ca+2    10^-5     3.1

Chapter 14
Electrical Signaling

The electrical potential across the membrane is determined by how well molecules get through the
membrane (its permeability) and the concentration gradients for the ions of interest. To get a
handle on this, let's look at an imaginary cell which we will visualize as an array. The two vertical
sides you see on each side of the array represent the cell membrane. There is cell membrane on
the top and bottom of this array also, but we don't show it. We will label the part outside the
box as the Outside and the part inside as Inside. If we wish to add a way for a specific type of
ion to enter the cell, we will label this entry port as Gates on the bottom of the array. We will
assume our temperature is 70 degrees Fahrenheit, which is 21.11 Celsius and 294.11 Kelvin.

14.1 The Cell Prior to KCl Disassociation:

To get started, let’s assume no potential difference across the membrane and add 100 mM of KCl
to both the inside and outside of the cell as shown.

 

Outside                        Inside
100 mM KCl            |        100 mM KCl
 
 

The KCl promptly disassociates into an equal amount of K + and Cl− in both the inside and
outside of the cell as shown below:


 

Outside                          Inside
100 mM K+, 100 mM Cl-    |       100 mM K+, 100 mM Cl-
 
 

14.2 The Cell With K + Gates:

Now, we add to the cell channels that are selectively permeable to the ion K+. These gates allow
K+ to flow back and forth across the membrane until there is a balance between the chemical
diffusion force and the electric force. We don't expect a nonzero equilibrium potential because
there is already charge and concentration balance. Using Nernst's Equation 13.11, since z is 1,
RT/zF is 25.32 mV for our temperature and

E_K = \frac{RT}{F} \ln \frac{[K^+]_{out}}{[K^+]_{in}} = 25.32 \ln \frac{100}{100} = 0

 
 

Outside                          Inside
100 mM K+, 100 mM Cl-    |       100 mM K+, 100 mM Cl-
                 (K+ Gates)

14.2.1 The Cell With Outer KCl Reduced:

Now reduce the outside KCl to 10 mM giving the following cell:


 
 

Outside                          Inside
10 mM K+, 10 mM Cl-      |       100 mM K+, 100 mM Cl-
                 (K+ Gates)

This sets up a concentration gradient with an implied chemical and electrical force. We see

E_K = 25.32 \ln \frac{10}{100} = -58.30\ mV

Hence, V_{in} - V_{out} is -58.30 mV and so the outside potential is 58.30 mV more than the inside;
more commonly, we say the inside is 58.30 mV more negative than the outside.
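We can confirm this using the Nernst function from Chapter 13; this check is ours, not part of the original notes, and the temperature argument is in degrees Celsius:

>> E_K = Nernst(1, 21.11, 100, 10)    % about -58.3 mV, matching the hand computation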

Exercise 14.2.1. Consider a cell permeable to K+ with the following concentrations of ions:

Outside                          Inside
90 mM K+, 90 mM Cl-      |       120 mM K+, 120 mM Cl-
                 (K+ Gates)

Find the equilibrium potassium voltage at temperature 69°F.

14.3 The Cell With NaCl Inside and Outside Changes:

Now add 100 mM NaCl to the outside and 10 mM to the inside of the cell. We then have


 
 

Outside                              Inside
10 mM K+, 100 mM Na+        |        100 mM K+, 10 mM Na+
10 mM Cl- + 100 mM Cl-      |        100 mM Cl- + 10 mM Cl-
                 (K+ Gates)

Note there is charge balance inside and outside, but since the membrane is permeable to K+,
there is still a concentration gradient for K+. There is no concentration gradient for Cl-. Since
the membrane wall is not permeable to Na+, the Na+ concentration gradient has no effect. Since
the K+ concentration gradient is still the same as in our previous example, the equilibrium voltage
for potassium remains the same.

14.4 The Cell with Na+ Gates:

Next replace the potassium gates with sodium gates as shown below. We can then use the Nernst
equation to calculate the equilibrium voltage for Na+:

E_{Na} = 25.32 \ln \frac{100}{10} = 58.30\ mV

which is the exact opposite of the equilibrium voltage for potassium.

 
 

Outside                              Inside
10 mM K+, 100 mM Na+        |        100 mM K+, 10 mM Na+
10 mM Cl- + 100 mM Cl-      |        100 mM Cl- + 10 mM Cl-
                 (Na+ Gates)

14.5 The Nernst Equation For Two Ions:

Let’s look at what happens if the cell has two types of gates. The cell now looks like this:


 
 

Outside                              Inside
10 mM K+, 100 mM Na+        |        100 mM K+, 10 mM Na+
10 mM Cl- + 100 mM Cl-      |        100 mM Cl- + 10 mM Cl-
          (K+ Gates)(Na+ Gates)

There are now two currents: K+ is leaving the cell because there is more K+ inside than
outside, and Na+ is going into the cell because there is more Na+ outside. What determines the
resting potential now?
Recall Ohm's Law for a simple circuit: the current across a resistor is the voltage across
the resistor divided by the resistance; in familiar terms, I = V/R using time honored symbols for
current, voltage and resistance. It is easy to see how this idea fits here: there is resistance to
the movement of an ion through the membrane. Since I = (1/R) V, we see that the ion current
through the membrane is inversely proportional to the resistance to that flow. The term 1/R
is thus a nice measure of how well ions flow or conduct through the membrane. We will call 1/R the
conductance of the ion through the membrane. Conductance is generally denoted by the symbol
g. Clearly, the resistance and conductance associated to a given ion are things that will require
very complex modeling even if they are pretty straightforward concepts. We will develop some
very sophisticated models in future chapters, but for now, for an ion c, we will use

I_c = g_c (V_m - E_c)

where E_c is the equilibrium voltage for the ion c that comes from the Nernst Equation, I_c is
the ionic current for ion c and g_c is the conductance. Finally, we denote the voltage across the
membrane by V_m. Note the difference between the membrane voltage and the equilibrium ion
voltage provides the electromotive force or emf that drives the ion. In our cell, we have both
potassium and sodium ions, so we have

i_K = g_K (V_m - E_K)
i_{Na} = g_{Na} (V_m - E_{Na})

At steady state, the current flows due to the two ions should sum to zero; hence

i_K + i_{Na} = 0

implying

0 = g_K (V_m - E_K) + g_{Na} (V_m - E_{Na})

which we can solve for the membrane voltage at equilibrium:

V_m = \frac{g_K}{g_K + g_{Na}} E_K + \frac{g_{Na}}{g_K + g_{Na}} E_{Na}

You will note that although we have already calculated E_K and E_{Na} here, we are not able to
compute V_m because we do not know the particular ionic conductances.
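If we are handed a conductance ratio, though, the computation is immediate. Here is a small MatLab fragment of our own (not from the notes) that evaluates the formula above after dividing numerator and denominator by g_{Na}, so only the ratio r = g_K/g_{Na} is needed:

% equilibrium membrane voltage for two ions from the ratio r = gK/gNa
E_K  = -58.30;     % mV, from our K+ gate example
E_Na =  58.30;     % mV, from our Na+ gate example
r    =  5.0;       % an assumed gK to gNa ratio
V_m  = (r/(r+1))*E_K + (1/(r+1))*E_Na    % about -38.87 mV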

Exercise 14.5.1. Let’s look at what happens if the cell has K + and N a+ gates. The cell now
looks like this:

 
 

 Inside 

 
20 mM K + 130 mM N a+
 
80 mM K + 10 mM N a+
 
 
Outside 



20 mM Cl− 130 mM Cl−
 
15 mM Cl− 15 mM Cl−
 
 
 
 
 
(K + Gates)(N a+ Gates)

1. Compute EK and EN a at temperature 71◦ F.

2. Compute the equilibrium membrane voltage for a gK to gN a ratio of 5.0.

3. Compute the equilibrium membrane voltage for a gK to gN a ratio of 3.0.

4. Compute the equilibrium membrane voltage for a gK to gN a ratio of 2.0.

14.6 The Nernst Equation For More Than Two Ions:


Consider Figure 14.1. We are thinking of a patch of membrane as a parallel circuit with
one branch for each of the three ions K+, Na+ and Cl- and a branch for the capacitance of the
membrane. We think of this patch of membrane as having a voltage difference of V_m across it. In
general, there will be current that flows through each branch. We label these currents as I_K, I_{Na},


Figure 14.1: A Simple Membrane Model

I_{Cl} and I_c, where I_c denotes the capacitative current. The conductances for each ion are labeled
with resistance symbols with a line through them to indicate that these conductances might be
variable in a real model. For right now, we will assume all of these conductances are constant.
Each of our ionic currents has the form

i_{ion} = g_c (V_m - E_c)

where V_m, as mentioned, is the actual membrane voltage, c denotes our ion, g_c is the
conductance associated with ion c and E_c is the Nernst equilibrium voltage. Hence, for the three ions
potassium (K+), sodium (Na+) and chlorine (Cl-), we have

i_K = g_K (V_m - E_K)
i_{Na} = g_{Na} (V_m - E_{Na})
i_{Cl} = g_{Cl} (V_m - E_{Cl})

There is also a capacitative current. We know the voltage drop across the capacitor C_m is
given by V_m = q_m/C_m; hence, the charge across the capacitor is q_m = C_m V_m, implying the
capacitative current is

i_m = C_m \frac{dV_m}{dt}

At steady state, i_m is zero and the ionic currents must sum to zero giving

i_K + i_{Na} + i_{Cl} = 0

Hence,


0 = g_K (V_m - E_K) + g_{Na} (V_m - E_{Na}) + g_{Cl} (V_m - E_{Cl})

leading to a Nernst Voltage equation for the equilibrium membrane voltage V_m of a membrane
permeable to several ions:

V_m = \frac{g_K}{g_K + g_{Na} + g_{Cl}} E_K + \frac{g_{Na}}{g_K + g_{Na} + g_{Cl}} E_{Na} + \frac{g_{Cl}}{g_K + g_{Na} + g_{Cl}} E_{Cl}

We usually rewrite this in terms of conductance ratios, where r_{Na} is the g_{Na} to g_K ratio and
r_{Cl} is the g_{Cl} to g_K ratio:

V_m = \frac{1}{1 + r_{Na} + r_{Cl}} E_K + \frac{r_{Na}}{1 + r_{Na} + r_{Cl}} E_{Na} + \frac{r_{Cl}}{1 + r_{Na} + r_{Cl}} E_{Cl}

Hence, if we are given the needed conductance ratios, we can compute the membrane voltage
at equilibrium for multiple ions. Some comments are in order:

• By convention, we set V_{out} to be 0 so that V_m = V_{in} - V_{out} is -58.3 mV in our first K+
gate example.

• These equations are only approximately true, but still of great importance in guiding our
understanding.

• Ion currents flowing through a channel try to move the membrane potential toward the
equilibrium potential value for that ion.

• If several different ion channels are open, the summed currents drive the membrane potential
to a value determined by the relative conductances of the ions.

• Since an ion channel is open briefly, the membrane potentials can’t stabilize and so there
will always be transient ion currents. For example, if N a+ channels pop open briefly, there
will be transient N a+ currents.

Finally, let’s do a simple example: assume there are just sodium and potassium gates an the
relative conductances of N a+ and K + are 4 : 1. Then rN a is 4 and for the concentrations in our
examples above:

1 4
Vm = (−58.3)mV + (58.3)mV
5 5
= 34.98mV
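The same arithmetic is a one-liner in MatLab; this little check is ours (the notes move straight on to the exercises). With only sodium and potassium gates we can take r_{Cl} = 0 in the three-ion formula:

% the 4:1 sodium to potassium example above
E_K = -58.3; E_Na = 58.3; rNa = 4.0; rCl = 0.0;
V_m = (E_K + rNa*E_Na)/(1 + rNa + rCl)    % 34.98 mV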

Exercise 14.6.1. Assume we know


• The temperature is 67 degrees Fahrenheit.

• The ratio gN a to gK is 0.08.

• The ratio of gCl to gK is 0.12.

• The inside and outside concentrations for potassium are 410 and 35 milliMoles respectively.

• The inside and outside concentrations for sodium are 89 and 407 milliMoles respectively.

• The inside and outside concentrations for chlorine are 124 and 450 milliMoles respectively.

Calculate the equilibrium membrane voltage.

14.7 Multiple Ion Nernst Computations in MatLab:


Now let’s do some calculations using MatLab for this situation. First, we will write a MatLab
function to compute the Nernst voltage across the membrane using our conductance model. Here
is a simple MatLab function to do this which uses our previous Nernst function. We will build
a function which assumes the membrane is permeable to using potassium, sodium and chlorine
ions.

Listing 14.1: NernstMemVolt.m

function voltage = NernstMemVolt( Temperature, rNa, rCl, KIn, KOut, NaIn, NaOut, ClIn, ClOut )
%
% compute the equilibrium membrane voltage for a membrane
% permeable to potassium, sodium and chlorine;
% rNa is the gNa to gK ratio and rCl is the gCl to gK ratio
%
E_K  = Nernst(  1, Temperature, KIn,  KOut  );
E_Na = Nernst(  1, Temperature, NaIn, NaOut );
E_Cl = Nernst( -1, Temperature, ClIn, ClOut );
%
% output voltage in millivolts
%
voltage = ( E_K + rNa*E_Na + rCl*E_Cl )/( 1.0 + rNa + rCl );

Let’s try this out for the following example: we assume we know

• The temperature is 20 degrees Celsius.

• The ratio gN a to gK is 0.03.

• The ratio of gCl to gK is 0.1.

• The inside and outside concentrations for potassium are 400 and 20 milliMoles respectively.

• The inside and outside concentrations for sodium are 50 and 440 milliMoles respectively.


• The inside and outside concentrations for chlorine are 40 and 560 milliMoles respectively.

We then enter the function call into MatLab like this:

>> NernstMemVolt(20,0.03,0.1,400,20,50,440,40,560)
ans =
-71.3414

If there were an explosive change in the gNa to gK ratio (this happens in a typical axonal pulse
which we will discuss in later chapters), we would see a large swing of the equilibrium membrane
voltage from the previous -71.34 mV to 46.02 mV. The code below resets this ratio to 15 from its
previous value of 0.03:

>> NernstMemVolt(20,15.0,0.1,400,20,50,440,40,560)
ans =
46.0241

Chapter 15
Transport Mechanisms

15.1 Transport Mechanisms:

There are five general mechanisms by which molecules are moved across biological membranes.
All of these methods operate simultaneously so we should know a little about all of them.

• Diffusion: here dissolved substances are transported because of a concentration gradient.
These substances can go right through the membrane. Some examples are water, certain
molecules known as anesthetics, and other large proteins which are soluble in the lipids
which comprise the membrane, such as the hormones known as steroids.

• Transport Through Water Channels: water will flow into a region in which a dissolved
substance is at high concentration in order to equalize a high - low concentration gradient.
This process is called osmosis and the force associated with it is called osmotic pressure.
Biological membranes are thus semi-permeable to water and there are channels or pores
in the membrane which selectively allow the passage of water molecules. In Figure 15.1(b),
we see an abstract picture of the process of diffusion (part a) and the water channels (part
b). The movement of water from a low to a high concentration environment of the ion c is
shown in Figure 15.1(a).

• Transport Through Gated Channels: some ion species are able to move through a membrane
because there are special proteins embedded in the membrane which can be visualized as
a cylinder whose throat can be open or blocked depending on some external signal. This
external signal can be the voltage across the membrane or a molecule called a trigger. If the
external signal is voltage, we call these channels voltage gates. When the gate is open,
the ion is able to physically move through the opening.


(a) Osmosis (b) Diffusion and Water Channel Transport

Figure 15.1: Transport Through Water Channels

• Carrier Mediated Transport: in this case, the dissolved substance combines with a carrier
molecule on one side of the membrane. The resulting complex that is formed moves
through the membrane. At the opposite end of the membrane, the solute is released from
the complex. A probable abstraction of this process is shown in Figure 15.2.

• Ion Pumps: ions can also be transported by a mechanism which is linked to the addition of
an OH group to the molecule adenosine triphosphate. This process is called hydrolysis
and this common molecule is abbreviated ATP. The addition of the hydroxyl group to ATP
liberates energy. This energy is used to move ions against their concentration gradient.
In Figure 15.3, parts c through e, we see abstractions of the remaining types of transport
embedded in a membrane.

15.1.1 Ion Channels:


In Figure 15.4, we see a schematic of a typical voltage gate. Note that the inside of the gate
shows a structure which can be in an open or closed position. The outside of the gate has a
variety of molecules with sugar residues which physically extend into the extracellular fluid and
carry negative charges on their tips. At the outer edge of the gate, you see a narrowing of the
channel opening which is called the selectivity filter. As we have discussed, proteins can take
on very complex three dimensional shapes. Often, their actual physical shape can switch from one
form to another due to some external signal such as voltage. This is called a conformational
change. In a voltage gate, the molecule which can block the inner throat of the gate moves
from its blocking position to its open position due to such a conformational change. In fact,
this molecule can also be in between open and closed as well. The voltage gated channel is
actually a protein macromolecule which is inserted into an opening in the membrane called a
pore. We note that this macromolecule is quite big (1800 - 4000 amino acids) with one or more
polypeptide chains and 100’s of sugar residues hang off the extracellular face. When open, the


Figure 15.2: A Carrier Moves a Solute Through the Membrane

Figure 15.3: Molecular Transport Mechanisms


channel is a water filled pore with a fairly large inner diameter which would allow the passage
of many things except that there is one narrow stretch of the channel called a selectivity filter
which inhibits access. The inside of the pore is lined with hydrophilic amino acids which therefore
like being near the water in the pore and the outside of the pore is lined with hydrophobic
amino acids which therefore dislike water contact. These therefore lie next to the lipid bilayer.
Ion concentration gradients can be maintained by selective permeabilities of the membrane to
various ions. Most membranes are permeable to K+, maybe Cl-, and much less permeable to Na+
and Ca+2. This type of passage of ions through the membrane requires no energy and so it is
called the passive distribution of the ions. If there is no other way to transport ions, a cell
membrane permeable to several ion species will reach an equilibrium potential determined by the
Nernst equation. Let c^{+n} denote a cation of valence n and a^{-m} an anion of valence -m. Then
the Nernst equilibrium potentials for these ions are given by

E_c = \frac{RT}{nF} \ln \frac{[c^{+n}]_{out}}{[c^{+n}]_{in}}

E_a = \frac{RT}{-mF} \ln \frac{[a^{-m}]_{out}}{[a^{-m}]_{in}}

Figure 15.4: Typical Voltage Channel

At equilibrium, the ionic currents must sum to zero, forcing these two potentials to be the
same. Hence, E_c is the same as E_a and we find

\frac{RT}{nF} \ln \frac{[c^{+n}]_{out}}{[c^{+n}]_{in}} = \frac{RT}{-mF} \ln \frac{[a^{-m}]_{out}}{[a^{-m}]_{in}}

implying

\frac{m}{n} \ln \frac{[c^{+n}]_{out}}{[c^{+n}]_{in}} = \ln \frac{[a^{-m}]_{in}}{[a^{-m}]_{out}}

Exponentiating these expressions, we find that the ion concentrations must satisfy the following
expression, which is known as Donnan's Law of Equilibrium:

\left( \frac{[c^{+n}]_{out}}{[c^{+n}]_{in}} \right)^{1/n} = \left( \frac{[a^{-m}]_{in}}{[a^{-m}]_{out}} \right)^{1/m}

K+ and Cl- Donnan Equilibrium:

For example, for the ions K+ and Cl-,

Outside                          Inside
[K+]out, [Cl-]out        |       [K+]in, [Cl-]in

and we would have that at Donnan equilibrium, the inner and outer ion concentrations would
satisfy

\frac{[K^+]_{out}}{[K^+]_{in}} = \frac{[Cl^-]_{in}}{[Cl^-]_{out}}

or

[K^+]_{out}\, [Cl^-]_{out} = [K^+]_{in}\, [Cl^-]_{in}
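As a quick numerical sanity check of our own (it is not in the original notes), we can test this product rule against the frog muscle values from Table 13.1:

% Donnan product check using the frog muscle numbers from Table 13.1
Kout  = 2.25;  Kin  = 124.0;    % mM
Clout = 77.5;  Clin = 1.5;      % mM
Kout*Clout    % about 174 mM^2
Kin*Clin      % about 186 mM^2

The two products come out within roughly six percent of each other, a reasonable match for measurements on a real cell where active transport is also at work.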

K+, Cl- and A-m Donnan Equilibrium:

Now assume that there are other negative ions in the cell, say A^{-m}. Then [A^{-m}]_{out} is zero and
we have

Outside                          Inside
[K+]out, [Cl-]out        |       [K+]in, [Cl-]in, [A-m]in

Then, because there must be charge neutrality, [K^+]_{in} = [Cl^-]_{in} + [A^{-m}]_{in} and
[K^+]_{out} = [Cl^-]_{out}.
Also, we must have the usual Donnan Equilibrium state (remember the ion A^{-m} does not
play a role in this):

[K^+]_{out}\, [Cl^-]_{out} = [K^+]_{in}\, [Cl^-]_{in}

Thus,

[K^+]_{in}^2 = [K^+]_{in} \left( [Cl^-]_{in} + [A^{-m}]_{in} \right)
            = [K^+]_{in}\, [Cl^-]_{in} + [K^+]_{in}\, [A^{-m}]_{in}
            = [K^+]_{out}\, [Cl^-]_{out} + [K^+]_{in}\, [A^{-m}]_{in}
            = [K^+]_{out}^2 + [K^+]_{in}\, [A^{-m}]_{in}
            > [K^+]_{out}^2

Hence, if we can only use passive transport mechanisms, we must have

[K^+]_{in} > [K^+]_{out}

Now in Table 13.1, we list some typical potassium inner and outer concentrations. Note that
in all of these examples, the inner concentration does indeed exceed the outer.

15.1.2 Active Transport Using Pumps:

There are many pumps within a cell that move substances in or out of a cell with or against a
concentration gradient.

• N a+ K + Pump: this is a transport mechanism driven by the energy derived from the
hydrolysis of ATP. Here 3 N a+ ions are pumped out and 2 K + ions are pumped in. So

262
15.1. TRANSPORT MECHANISMS: CHAPTER 15. TRANSPORTS

this pump gives us sodium and potassium currents which raise [N a+ ]out and [K + ]in and
decrease [N a+ ]in and [K + ]out . From Table 13.1, we see that [N a+ ]out is typically bigger, so
this pump is pushing N a+ out against its concentration gradient and so it costs energy.

• Na^+ - Ca^{+2} Pump: this pump brings 3 Na^+ ions into the cell for every Ca^{+2} ion that is moved out. Hence, this pump gives us sodium and calcium currents which raise [Na^+]_{in} and [Ca^{+2}]_{out} and decrease [Na^+]_{out} and [Ca^{+2}]_{in}. Here, the Na^+ movement is with its concentration gradient and so there is no energy cost.

• Ca^{+2} Pump: this pump is also driven by the hydrolysis of ATP but it needs the magnesium ion Mg^{+2} to work. We say Mg^{+2} is a cofactor for this pump. Here, Ca^{+2} is pumped into a storage facility inside the cell called the endoplasmic reticulum, which therefore takes Ca^{+2} out of the cytoplasm and so brings down [Ca^{+2}]_{in}.

• HCO_3^- - Cl^- Exchange: this is driven by the Na^+ concentration gradient and pumps HCO_3^- into the cell and Cl^- out of the cell.

• Cl^- - Na^+ - K^+ Cotransport: this is driven by the influx of Na^+ into the cell. For every one Na^+ and one K^+ that are carried into the cell via this mechanism, 2 Cl^- are carried in as well.

15.1.3 A Simple Compartment Model:

Consider a system which has two compartments filled with ions and a membrane with K + and
Cl− gates between the two compartments as shown below:

 

Compartment One: 100 mM A^-, 150 mM K^+, 50 mM Cl^-
Compartment Two: 0 mM A^-, 150 mM K^+, 150 mM Cl^-
(The membrane between the two compartments carries Cl^- gates and K^+ gates.)
We will assume the system is held at the temperature of 70 degrees Fahrenheit – about normal room temperature – which is 21.11 Celsius and 294.11 Kelvin. This implies our Nernst conversion factor is 25.32 mV for K^+ and -25.32 mV for Cl^-. Is this system in electrochemical equilibrium or ECE? In each compartment, we do have space charge neutrality: in Compartment One, 150 mM of K^+ balances 100 mM of A^- and 50 mM of Cl^-; and in Compartment Two, 150 mM of K^+ balances 150 mM of Cl^-. However, Cl^- is not concentration balanced and so Cl^- is not at ECE, implying that Cl^- will diffuse from Compartment Two into Compartment One. This diffusion shifts ions in the compartments to

 

Compartment One: 100 mM A^-, 150 mM K^+, 100 mM Cl^-
Compartment Two: 0 mM A^-, 150 mM K^+, 100 mM Cl^-
 
 

So to get concentration balance, 50 mM of Cl^- moves from Compartment Two to Compartment One. Now counting ion charges in each compartment, we see that we have lost space charge neutrality. We


can regain this by shifting 50 mM of K + from Compartment Two to Compartment One to give:

 

Compartment One: 100 mM A^-, 200 mM K^+, 100 mM Cl^-
Compartment Two: 0 mM A^-, 100 mM K^+, 100 mM Cl^-
 
 

Now we see that there is a concentration imbalance in K^+! The point here is that this kind of analysis, while interesting, is so qualitative that it doesn't give us the final answer quickly! To get to the final punch line, we just need to find the Donnan Equilibrium point. This is where

[K^+]_{One} [Cl^-]_{One} = [K^+]_{Two} [Cl^-]_{Two}

From our discussion above, we know that the number of mM’s of K + and Cl− that move from
Compartment Two to Compartment One will always be the same in order to satisfy space charge
neutrality. Let x denote this amount. Then we must have

[K^+]_{One} = 150 + x
[K^+]_{Two} = 150 - x
[Cl^-]_{Two} = 150 - x
[Cl^-]_{One} = 50 + x

yielding after a little algebra:


(150 + x)(50 + x) = (150 - x)(150 - x)
7500 + 200x + x^2 = 22,500 - 300x + x^2
500x = 15,000
x = 30
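We can also check this little calculation numerically. The following short Python sketch (the function name donnan_gap and the bisection scaffolding are our own, not anything from the text) solves the Donnan condition for the shift x:

# A minimal sketch: solve (150 + x)(50 + x) = (150 - x)^2 for the
# shift x in mM by bisection; this just checks the algebra above.

def donnan_gap(x):
    # difference between the two sides of the Donnan condition
    return (150.0 + x) * (50.0 + x) - (150.0 - x) ** 2

lo, hi = 0.0, 150.0            # the shift must lie between 0 and 150 mM
for _ in range(60):            # donnan_gap(0) < 0 < donnan_gap(150)
    mid = 0.5 * (lo + hi)
    if donnan_gap(mid) < 0.0:
        lo = mid
    else:
        hi = mid

print(0.5 * (lo + hi))         # prints 30.0 (to rounding)

Running this returns x = 30, matching the algebra.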

So at ECE, we have

 

Compartment One: 100 mM A^-, 180 mM K^+, 80 mM Cl^-
Compartment Two: 0 mM A^-, 120 mM K^+, 120 mM Cl^-
 
 

What are the membrane voltages? Of course, they should match! Recall, in the derivation of
the Nernst equation, the voltage difference for ion c between side 2 and 1 was

E_c = V_1 - V_2 = \frac{RT}{zF} \ln \frac{[c]_2}{[c]_1}

Here, Compartment One plays the role of side 1 and Compartment Two plays the role of side
2. So for K + , we have

E_K = V_{One} - V_{Two} = 25.32 \ln \frac{[K^+]_{Two}}{[K^+]_{One}} = 25.32 \ln \frac{120}{180} = -10.27 \; mV

and for Cl− ,

E_{Cl} = V_{One} - V_{Two} = -25.32 \ln \frac{[Cl^-]_{Two}}{[Cl^-]_{One}} = -25.32 \ln \frac{120}{80} = -10.27 \; mV
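As a quick numerical check, here is a small Python sketch of these two Nernst computations (the 25.32 mV conversion factor is the one used in the text; the variable names are ours):

import math

RTF = 25.32                             # mV, the Nernst factor RT/F at 294.11 K

E_K  = RTF * math.log(120.0 / 180.0)    # z = +1 for K+
E_Cl = -RTF * math.log(120.0 / 80.0)    # z = -1 for Cl- flips the sign

print(E_K, E_Cl)                        # both are about -10.27 mV

Both calls return about -10.27 mV, so the two ions do indeed see the same membrane voltage at ECE.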

Are we at osmotic equilibrium? To see this, we need to add up how many ions are in each compartment. In Compartment One, there are 360 mM of ions and in Compartment Two there are 240 mM. Since there are more ions in Compartment One than Compartment Two, water will flow from Compartment Two into Compartment One to try to dilute the ionic strength in Compartment One. We are ignoring this effect here.

Exercise 15.1.1. Consider a system which has two compartments filled with ions and a membrane
with K + and Cl− gates between the two compartments as shown below:

 

Compartment One: 80 mM A^-, 160 mM K^+, 80 mM Cl^-
Compartment Two: 30 mM A^-, 50 mM K^+, 20 mM Cl^-
(The membrane between the two compartments carries Cl^- gates and K^+ gates.)
 
 

Note the ion A^- can't pass between compartments. Assume the system is held at the temperature of 70 degrees Fahrenheit.

1. Find the concentrations of ions that this system has at ECE.

2. For the ECE ion concentrations, calculate ECl and EK .

Chapter 16
Movement of Ions Across Biological
Membranes

We now will work our way through another way to view the movement of ions across biological
membranes that was developed by Goldman in 1943 and Hodgkin and Katz in 1949. This is the
Goldman - Hodgkin - Katz or GHK model. Going through all of this will give you an increased appreciation for the kind of modeling we need to do: what to keep from the science, what to throw away and what tools to use for the model itself. We start with defining carefully what
we mean by the word permeability. We have used this word before but always in a qualitative
sense. Now we will be quantitative.

16.1 Membrane Permeability:


Let’s consider a substance c moving across a biological membrane. We will assume that the
number of moles of c moving across the membrane per unit time and unit area is proportional to the change in the concentration of c. This gives

Jmolar = −P ∆[c] (16.1)

where

• J_{molar} is the molar flux, measured in the units \frac{moles}{cm^2-second}.

• P is the membrane permeability of the ion c, which has units of \frac{cm}{second}.

• \Delta[c] is the change in the concentration of c, measured in the units \frac{moles}{cm^3}.


Fick's Law of diffusion gives J_{diff} = -D \frac{\partial [c]}{\partial x}, where J_{diff} is measured in \frac{molecules}{cm^2-second}. Hence, we can convert to \frac{moles}{cm^2-second} by dividing through by Avogadro's number N_A. This gives (remember, partial differentiation becomes regular differentiation when there is just one variable)

\frac{J_{diff}}{N_A} = - \frac{D_m}{N_A} \frac{d[c]}{dx}

where we now explicitly label the diffusion constant as belonging to the membrane. These two
fluxes should be the same; hence

J_{molar} = -P \Delta[c] = - \frac{D_m}{N_A} \frac{d[c]}{dx}    (16.2)

We will want to use the above equation for the concentration of c that is in the membrane
itself. So we need to distinguish from the concentration of c inside and outside the cell and the
concentration in the membrane. We will let [c] denote the concentration inside and outside the
cell and [cm ], the concentration inside the membrane. Hence, we have

J_{molar} = -P \Delta[c_m] = - \frac{D_m}{N_A} \frac{d[c_m]}{dx}    (16.3)

Now a substance will probably dissolve differently in the cellular solution and in the biological membrane; let's model this quantitatively as follows. Let [c]_{in} denote the concentration of c inside the cell, and [c]_{out} denote the concentration of c outside the cell. The membrane has two sides; the side facing into the cell and the side facing to the outside of the cell. If we took a slice through the cell, we would see a straight line along which we can measure our position in the cell by the variable x. We have x is 0 when we are at the center of our spherical cell and x is x_0 when we reach the inside wall of the membrane (i.e. x_0 is the radius of the cell for this slice). We will assume that the thickness of the membrane is a uniform \ell. Hence, we are at position x_0 + \ell when we come to the outer boundary of the membrane. Inside the membrane, we will let [c_m] denote the concentration of c. We know the membrane concentration will be some fraction of the cell concentration. So

[c_m](x_0 + \ell) = \beta_o [c]_{out}
[c_m](x_0) = \beta_i [c]_{in}

for some constants βi and βo . The ratio of the boundary concentrations of c and the membrane
concentrations is an important parameter. From the above discussions, we see these critical ratios
are given by:


Figure 16.1: The Linear Membrane Concentration Model

\beta_o = \frac{[c_m](x_0 + \ell)}{[c]_{out}}
\beta_i = \frac{[c_m](x_0)}{[c]_{in}}

For this model, we will assume that these ratios are the same: i.e. βo = βi ; this common
value will be denoted by β. The parameter β is called the partition coefficient between the
solution and the membrane. We will also assume that the membrane concentration [c_m] varies linearly across the membrane from \beta [c]_{in} at x_0 to \beta [c]_{out} at x_0 + \ell. This model is shown in Figure 16.1.
Thus, since we assume the concentration inside the membrane is linear, we have for some
constants a and b:

[c_m](x) = a x + b
[c_m](x_0) = [c_m]_{in} = \beta [c]_{in}
[c_m](x_0 + \ell) = [c_m]_{out} = \beta [c]_{out}

This implies that

[c_m]_{in} = a x_0 + b
[c_m]_{out} = a (x_0 + \ell) + b = [c_m]_{in} + a \ell

Thus, letting ∆C denote [c]out − [c]in , we have


a = \frac{[c_m]_{out} - [c_m]_{in}}{\ell} = \frac{\beta}{\ell} ([c]_{out} - [c]_{in}) = \frac{\beta}{\ell} \Delta C
`

We can also solve for the value of b:

b = [c_m]_{in} - a x_0 = [c_m]_{in} - x_0 \frac{\beta}{\ell} ([c]_{out} - [c]_{in}) = \beta \left( (1 + \frac{x_0}{\ell}) [c]_{in} - \frac{x_0}{\ell} [c]_{out} \right)

Hence,

\frac{d[c_m]}{dx} = \frac{\beta}{\ell} ([c]_{out} - [c]_{in})

Now recall that

J_{molar} = -P \Delta[c_m] = - \frac{D_m}{N_A} \frac{d[c_m]}{dx} = - \frac{D_m \beta}{N_A \ell} ([c]_{out} - [c]_{in})

Thus, we have a way to express the permeability P in terms of low level fundamental constants:

P = \frac{D_m \beta}{N_A \ell}

Finally, letting \mu_m be the mobility within the membrane, we note that Einstein's relation tells us that D_m = \frac{\kappa T}{q} \mu_m = \frac{RT}{F} \mu_m, so we conclude

P = \frac{\beta RT \mu_m}{\ell F N_A}    (16.4)


16.2 The Goldman-Hodgkin-Katz (GHK) Model:


The GHK model is based on several assumptions about how the substance c behaves in the
membrane:

1. [cm ] varies linearly across the membrane as discussed in the previous subsection.

2. The electric field in the membrane is constant.

3. The Nernst-Planck equation holds inside the membrane.

We let V m (x) denote the voltage or potential and E m (x), the electrical field, in the membrane
at position x. By assumption (2), we know that E m (x) is a constant we will call E m in the
membrane for all x. Now we also know from standard physics that an electric field is the negative
of the derivative of the potential so

E^m = - \frac{dV^m}{dx}

implying

-E^m \, dx = dV^m

\int_{x_0}^{x_0 + \ell} -E^m \, dx = \int_{x_0}^{x_0 + \ell} dV^m = V^m_{out} - V^m_{in}

Hence, we have

-E^m \ell = V^m_{out} - V^m_{in}

The Nernst equation will hold across the membrane, so we have that the voltage due to c across
the membrane is Vcm with

E_c = V_c^m = V^m_{in} - V^m_{out} = \frac{RT}{zF} \ln \frac{[c]_{out}}{[c]_{in}}

Further, by assumption (3), the Nernst-Planck equation holds across the membrane, so the ion current in the membrane, I^m, is given by


I^m = - \frac{\mu_m}{N_A} \left( z^2 F [c_m] \frac{dV^m}{dx} + z RT \frac{d[c_m]}{dx} \right)

Also, since,

\frac{dV^m}{dx} = -E^m = \frac{V^m_{out} - V^m_{in}}{\ell} = \frac{-V_c^m}{\ell}

leading to

I^m = - \frac{\mu_m}{N_A} \left( z^2 F [c_m] \frac{-V_c^m}{\ell} + z RT \frac{d[c_m]}{dx} \right) = \frac{\mu_m}{N_A \ell} z^2 F [c_m] V_c^m - \frac{\mu_m}{N_A} z RT \frac{d[c_m]}{dx}

This implies that

I^m - \frac{\mu_m z^2 F [c_m]}{N_A \ell} V_c^m = - \frac{\mu_m}{N_A} z RT \frac{d[c_m]}{dx}    (16.5)
NA ` NA dx

To make sense out of this complicated looking equation, we will make a change of variable: define
y m as follows:

y^m = I^m - \frac{\mu_m z^2 F [c_m]}{N_A \ell} V_c^m    (16.6)
NA `

This implies that

\frac{dy^m}{dx} = \frac{dI^m}{dx} - \frac{\mu_m z^2 F}{N_A \ell} V_c^m \frac{d[c_m]}{dx}

We know that at steady state, the current does not vary with position, so \frac{dI^m}{dx} = 0. Hence, at steady state, we have

\frac{dy^m}{dx} = - \frac{\mu_m z^2 F}{N_A \ell} V_c^m \frac{d[c_m]}{dx}

or


- \frac{\ell}{z F V_c^m} \frac{dy^m}{dx} = \frac{\mu_m z}{N_A} \frac{d[c_m]}{dx}

This gives us an expression for the term \frac{d[c_m]}{dx} in Equation 16.5. Using our change of variable y^m and substituting for the concentration derivative, we find

y^m = -RT \left( \frac{\mu_m z}{N_A} \frac{d[c_m]}{dx} \right) = -RT \left( - \frac{\ell}{z F V_c^m} \frac{dy^m}{dx} \right)

We have thus obtained the simpler differential equation

y^m = \frac{RT \ell}{z F V_c^m} \frac{dy^m}{dx}    (16.7)

This equation can be integrated easily:

dx = \frac{RT \ell}{z F V_c^m} \frac{dy^m}{y^m}
\int_{x_0}^{x_0 + \ell} dx = \frac{RT \ell}{z F V_c^m} \int_{x_0}^{x_0 + \ell} \frac{dy^m}{y^m}

which gives

\ell = \frac{RT \ell}{z F V_c^m} \ln \frac{y^m(x_0 + \ell)}{y^m(x_0)}    (16.8)

To simplify this further, we must go back to the definition of the change of variable y m . From
Equation 16.6, we see

y^m(x_0 + \ell) = I^m(x_0 + \ell) - \frac{\mu_m z^2 F [c_m](x_0 + \ell)}{N_A \ell} V_c^m
y^m(x_0) = I^m(x_0) - \frac{\mu_m z^2 F [c_m](x_0)}{N_A \ell} V_c^m

At steady state the currents I m (x0 + `) and I m (x0 ) are the same value. Call this constant steady
state value I0m . Then we have


y^m(x_0 + \ell) = I_0^m - \frac{\mu_m z^2 F [c_m](x_0 + \ell)}{N_A \ell} V_c^m
y^m(x_0) = I_0^m - \frac{\mu_m z^2 F [c_m](x_0)}{N_A \ell} V_c^m

Finally, remember how we set up the concentration terms. We used the symbols [cm ]out to
represent [cm ](x0 + `) and [cm ]in to represent [cm ](x0 ). Thus, we have

y^m(x_0 + \ell) = I_0^m - \frac{\mu_m z^2 F [c_m]_{out}}{N_A \ell} V_c^m
y^m(x_0) = I_0^m - \frac{\mu_m z^2 F [c_m]_{in}}{N_A \ell} V_c^m

Now, to finish our derivation, let’s introduce an additional parameter ξ defined by

\xi = \frac{z F V_c^m}{RT}    (16.9)

Recall that we defined the permeability by

P = \frac{D_m \beta}{N_A \ell}

Now from the definition of \xi, we find that Equation 16.8 gives

\ell = \frac{\ell}{\xi} \ln \frac{y^m(x_0 + \ell)}{y^m(x_0)}

Thus, after canceling \ell from both sides and multiplying through by \xi, we obtain

\xi = \ln \frac{y^m(x_0 + \ell)}{y^m(x_0)}

Now, plug in the expressions for the y m terms evaluated at x0 + ` and x0 to find

 
\xi = \ln \left( \frac{I_0^m - \frac{\mu_m z^2 F [c_m]_{out}}{N_A \ell} V_c^m}{I_0^m - \frac{\mu_m z^2 F [c_m]_{in}}{N_A \ell} V_c^m} \right)
    = \ln \left( \frac{I_0^m - \frac{z \mu_m}{N_A \ell} z F V_c^m [c_m]_{out}}{I_0^m - \frac{z \mu_m}{N_A \ell} z F V_c^m [c_m]_{in}} \right)
    = \ln \left( \frac{I_0^m - \frac{z \mu_m}{N_A \ell} \xi RT [c_m]_{out}}{I_0^m - \frac{z \mu_m}{N_A \ell} \xi RT [c_m]_{in}} \right)
    = \ln \left( \frac{I_0^m - \frac{\mu_m \xi RT z}{N_A \ell} [c_m]_{out}}{I_0^m - \frac{\mu_m \xi RT z}{N_A \ell} [c_m]_{in}} \right)

But we also know that by assumption

[c_m]_{out} = \beta [c]_{out}
[c_m]_{in} = \beta [c]_{in}

and so

\frac{\mu_m \xi RT z}{N_A \ell} [c_m]_{out} = \xi z \, \frac{\beta RT \mu_m}{N_A \ell} \, [c]_{out}

But we know from the definition of P in Equation 16.4 that the fractional expression \frac{\beta RT \mu_m}{N_A \ell} is just P F and so we have

\frac{\mu_m \xi RT z}{N_A \ell} [c_m]_{out} = z P F \xi \, [c]_{out}

In a similar fashion, we can derive that

\frac{\mu_m \xi RT z}{N_A \ell} [c_m]_{in} = z P F \xi \, [c]_{in}

Substituting these identities into the expression for \xi we derived earlier, we find

\xi = \ln \left( \frac{I_0^m - z P F \xi [c]_{out}}{I_0^m - z P F \xi [c]_{in}} \right)

Now exponentiate these expressions to get

e^{\xi} = \frac{I_0^m - z P F \xi [c]_{out}}{I_0^m - z P F \xi [c]_{in}}
(I_0^m - z P F \xi [c]_{in}) e^{\xi} = I_0^m - z P F \xi [c]_{out}

leading to


I_0^m (e^{\xi} - 1) = z P F \xi ([c]_{in} e^{\xi} - [c]_{out})

We can solve the above equation for the steady state current term I0m . This gives

I_0^m = z P F \xi \left( \frac{[c]_{in} e^{\xi} - [c]_{out}}{e^{\xi} - 1} \right)

When we replace \xi by its definition and reorganize the resulting expression, we obtain the GHK current equation for the ion c.

I_0^m = \frac{z^2 P F^2 V_c^m}{RT} \left( \frac{[c]_{in} - [c]_{out} \, e^{-z F V_c^m / RT}}{1 - e^{-z F V_c^m / RT}} \right)    (16.10)
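Here is a minimal Python sketch of Equation 16.10 (the function name and the unit conventions are our own choices; V must stay away from 0 since the formula has a removable singularity there):

import math

F = 96485.0          # Faraday's constant, C/mol
R = 8.314            # gas constant, J/(mol K)

def ghk_current(P, z, c_in, c_out, V, T=294.11):
    # GHK current for one ion: Equation 16.10 with xi = z F V / (R T).
    # P is the permeability, c_in and c_out the concentrations and V
    # the membrane potential in volts; the units of the result follow
    # from the units chosen for P and the concentrations.
    xi = z * F * V / (R * T)
    return z * P * F * xi * (c_in - c_out * math.exp(-xi)) / (1.0 - math.exp(-xi))

Note that z P F \xi is exactly z^2 P F^2 V / (RT), so this is the same expression as Equation 16.10.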

16.3 The GHK Voltage Equation:

Consider a cell that is permeable to K + , Cl− and N a+ ions with no active pumping. Let’s
calculate the resting potential of the cell. For each ion, we can apply our GHK theory to obtain
an ionic current, IK , IN a and ICl . At equilibrium, the current through the membrane should be
zero and so all of the ionic currents should sum to zero. Thus

IK + IN a + ICl = 0

Further, the associated Nernst potentials for the ions should all match because otherwise there
would be current flow:

V_0^m = V^m_{K^+} = V^m_{Na^+} = V^m_{Cl^-}

where we denote this common potential by V0m . From the GHK current equation, we have (note
the valences for potassium and sodium are +1 and for chlorine is −1)

I_K = \frac{P_K F^2 V_0^m}{RT} \left( \frac{[K^+]_{in} - [K^+]_{out} e^{-F V_0^m / RT}}{1 - e^{-F V_0^m / RT}} \right)

I_{Na} = \frac{P_{Na} F^2 V_0^m}{RT} \left( \frac{[Na^+]_{in} - [Na^+]_{out} e^{-F V_0^m / RT}}{1 - e^{-F V_0^m / RT}} \right)

I_{Cl} = \frac{P_{Cl} F^2 V_0^m}{RT} \left( \frac{[Cl^-]_{in} - [Cl^-]_{out} e^{F V_0^m / RT}}{1 - e^{F V_0^m / RT}} \right)

For convenience, let \xi_0 denote the common term \frac{F V_0^m}{RT}. Then we have

I_K = P_K F \xi_0 \left( \frac{[K^+]_{in} - [K^+]_{out} e^{-\xi_0}}{1 - e^{-\xi_0}} \right)
I_{Na} = P_{Na} F \xi_0 \left( \frac{[Na^+]_{in} - [Na^+]_{out} e^{-\xi_0}}{1 - e^{-\xi_0}} \right)
I_{Cl} = P_{Cl} F \xi_0 \left( \frac{[Cl^-]_{in} - [Cl^-]_{out} e^{\xi_0}}{1 - e^{\xi_0}} \right)

Now sum these ionic currents to obtain

0 = P_K F \xi_0 \left( \frac{[K^+]_{in} - [K^+]_{out} e^{-\xi_0}}{1 - e^{-\xi_0}} \right) + P_{Na} F \xi_0 \left( \frac{[Na^+]_{in} - [Na^+]_{out} e^{-\xi_0}}{1 - e^{-\xi_0}} \right) - P_{Cl} F \xi_0 \left( \frac{[Cl^-]_{in} e^{-\xi_0} - [Cl^-]_{out}}{1 - e^{-\xi_0}} \right)

or

0 = \frac{F \xi_0}{1 - e^{-\xi_0}} \left( P_K [K^+]_{in} + P_{Na} [Na^+]_{in} + P_{Cl} [Cl^-]_{out} \right) - \frac{F \xi_0 e^{-\xi_0}}{1 - e^{-\xi_0}} \left( P_K [K^+]_{out} + P_{Na} [Na^+]_{out} + P_{Cl} [Cl^-]_{in} \right).

Rewriting, we obtain

e^{\xi_0} \left( P_K [K^+]_{in} + P_{Na} [Na^+]_{in} + P_{Cl} [Cl^-]_{out} \right) = P_K [K^+]_{out} + P_{Na} [Na^+]_{out} + P_{Cl} [Cl^-]_{in}

e^{\xi_0} = \frac{P_K [K^+]_{out} + P_{Na} [Na^+]_{out} + P_{Cl} [Cl^-]_{in}}{P_K [K^+]_{in} + P_{Na} [Na^+]_{in} + P_{Cl} [Cl^-]_{out}}

Finally, taking logarithms, we obtain the GHK Voltage equation for these ions:

V_0^m = \frac{RT}{F} \ln \left( \frac{P_K [K^+]_{out} + P_{Na} [Na^+]_{out} + P_{Cl} [Cl^-]_{in}}{P_K [K^+]_{in} + P_{Na} [Na^+]_{in} + P_{Cl} [Cl^-]_{out}} \right)    (16.11)

Now to actually compute the GHK voltage, we will need the values of the three permeabilities.
It is common to rewrite this equation in terms of permeability ratios. We choose one of the
permeabilities as the reference, say PN a , and compute the ratios


r_K = \frac{P_K}{P_{Na}}
r_{Cl} = \frac{P_{Cl}}{P_{Na}}

We can then rewrite the GHK voltage equation as

V_0^m = \frac{RT}{F} \ln \left( \frac{[Na^+]_{out} + r_K [K^+]_{out} + r_{Cl} [Cl^-]_{in}}{[Na^+]_{in} + r_K [K^+]_{in} + r_{Cl} [Cl^-]_{out}} \right)    (16.12)
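A small Python sketch of Equation 16.12 will be handy for the examples in the next subsection (the function name and argument order are our own choices):

import math

def ghk_voltage(Na_out, Na_in, K_out, K_in, Cl_out, Cl_in,
                rK, rCl, RTF=25.32):
    # GHK voltage, Equation 16.12, in mV when RTF = RT/F is in mV.
    # Note the chlorine terms: the INNER Cl- concentration goes in the
    # numerator and the OUTER Cl- in the denominator because z = -1.
    top = Na_out + rK * K_out + rCl * Cl_in
    bot = Na_in + rK * K_in + rCl * Cl_out
    return RTF * math.log(top / bot)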

16.3.1 Examples:

Now let’s use the GHK voltage equation for a few sample calculations.

A First Example:

Consider a squid giant axon. Typically, the axon at rest has the following permeability ratios:

PK : PN a : PCl = 1.0 : 0.03 : 0.1

where we interpret this string of ratios as

\frac{P_K}{P_{Na}} = \frac{1.0}{0.03} = 33.33
\frac{P_{Na}}{P_{Cl}} = \frac{0.03}{0.1} = 0.3

Thus, we have

r_K = \frac{P_K}{P_{Na}} = 33.33
r_{Cl} = \frac{P_{Cl}}{P_{Na}} = 3.33

Now look at the part of Table 13.1 which shows the ion concentrations for the squid axon. As we discussed in Chapter 13, we will use a temperature of 294.11 Kelvin and the conversion \frac{RT}{F} is 25.32 mV. Also, recall that all concentrations are given in mM. Thus, the GHK voltage equation for the squid axon gives

V_0^m = \frac{RT}{F} \ln \left( \frac{[Na^+]_{out} + r_K [K^+]_{out} + r_{Cl} [Cl^-]_{in}}{[Na^+]_{in} + r_K [K^+]_{in} + r_{Cl} [Cl^-]_{out}} \right)
      = 25.32 (mV) \ln \left( \frac{440.0 + 33.33 \times 20.0 + 3.333 \times 40.0}{50.0 + 33.33 \times 400.0 + 3.333 \times 560.0} \right)
      = 25.32 (mV) \ln \left( \frac{1239.9}{15248.48} \right)
      = -63.54 \; mV

A Second Example:

Now assume there is a drastic change in these permeability ratios: we now have:

PK : PN a : PCl = 1.0 : 15.0 : 0.1

where we interpret this string of ratios as

r_K = \frac{P_K}{P_{Na}} = \frac{1.0}{15.0} = 0.0667
r_{Cl} = \frac{P_{Cl}}{P_{Na}} = \frac{0.1}{15.0} = 0.00667

The GHK voltage equation now gives

 
V_0^m = 25.32 (mV) \ln \left( \frac{440.0 + 0.0667 \times 20.0 + 0.00667 \times 40.0}{50.0 + 0.0667 \times 400.0 + 0.00667 \times 560.0} \right)
      = 25.32 (mV) \ln \left( \frac{441.60}{80.4152} \right)
      = 43.12 \; mV
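Using the ghk_voltage sketch from the previous section, both examples are one-liners (the concentrations below are the Table 13.1 squid axon values used above):

v_rest  = ghk_voltage(440.0, 50.0, 20.0, 400.0, 560.0, 40.0,
                      rK=33.33, rCl=3.333)
v_shift = ghk_voltage(440.0, 50.0, 20.0, 400.0, 560.0, 40.0,
                      rK=0.0667, rCl=0.00667)
print(v_rest)     # about -63.5 mV, the resting case
print(v_shift)    # about +43.1 mV, after the permeability shift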

We see that this large change in the permeability ratio between potassium and sodium triggers a
huge change in the equilibrium voltage across the membrane. It is actually far easier to see this
effect when we use permeabilities than when we use conductances. So as usual, insight is often
improved by looking at a situation using different parameters! Later, we will begin to understand
how this explosive switch from a negative membrane voltage to a positive membrane voltage is
achieved. That’s for a later chapter!

16.4 The Effects of an Electrogenic Pump:

We haven’t talked much about how the pumps work in our model yet, so just to give the flavor
of such calculations, let’s look at a hypothetical pump for an ion c. Let Ic denote the passive
current for the ion which means no energy is required for the ion flow (see Chapter 15 for further
discussion). The ion current due to a pump requires energy and we will let Icp denote the ion
current due to the pump. At steady state, we would have


I_{Na^+} + I^p_{Na^+} = 0
I_{K^+} + I^p_{K^+} = 0

Let the parameter r be the number of N a+ ions pumped out for each K + ion pumped in.
This then implies that

r I^p_{K^+} + I^p_{Na^+} = 0

From the GHK current equation, we know

I_{K^+} = P_K F \xi_0 \left( \frac{[K^+]_{in} - [K^+]_{out} e^{-\xi_0}}{1 - e^{-\xi_0}} \right)
I_{Na^+} = P_{Na} F \xi_0 \left( \frac{[Na^+]_{in} - [Na^+]_{out} e^{-\xi_0}}{1 - e^{-\xi_0}} \right)

and thus from our steady state assumptions, we find

I^p_{Na^+} = -I_{Na^+}
I^p_{K^+} = -I_{K^+}

and so

r I_{K^+} + I_{Na^+} = 0    (16.13)

Substituting the GHK current expressions into Equation 16.13, we have

\frac{F \xi_0}{1 - e^{-\xi_0}} \left( r P_K ([K^+]_{in} - [K^+]_{out} e^{-\xi_0}) + P_{Na} ([Na^+]_{in} - [Na^+]_{out} e^{-\xi_0}) \right) = 0

giving

   
r P_K ([K^+]_{in} - [K^+]_{out} e^{-\xi_0}) + P_{Na} ([Na^+]_{in} - [Na^+]_{out} e^{-\xi_0}) = 0
(r P_K [K^+]_{in} + P_{Na} [Na^+]_{in}) - (r P_K [K^+]_{out} + P_{Na} [Na^+]_{out}) e^{-\xi_0} = 0
We can manipulate this equation a bit to get


r P_K [K^+]_{in} + P_{Na} [Na^+]_{in} = (r P_K [K^+]_{out} + P_{Na} [Na^+]_{out}) e^{-\xi_0}
or

e^{\xi_0} = \frac{r P_K [K^+]_{out} + P_{Na} [Na^+]_{out}}{r P_K [K^+]_{in} + P_{Na} [Na^+]_{in}}

Since we know that

\xi_0 = \frac{F V_0^m}{RT}

we can solve for V0m and obtain

V_0^m = \frac{RT}{F} \ln \left( \frac{r P_K [K^+]_{out} + P_{Na} [Na^+]_{out}}{r P_K [K^+]_{in} + P_{Na} [Na^+]_{in}} \right)

The Na^+ - K^+ pump moves out 3 Na^+ ions for every 2 K^+ ions that go in. Hence the pumping ratio r is 3/2 = 1.5. Thus for our usual squid axon, using only potassium and sodium ions and neglecting chlorine, we find for a permeability ratio of

PK : PN a = 1.0 : 0.03

that

 
V_0^m = 25.32 (mV) \ln \left( \frac{1.5 \cdot 33.33 \cdot 20.0 + 440.0}{1.5 \cdot 33.33 \cdot 400.0 + 50.0} \right)
      = 25.32 (mV) \ln \left( \frac{1439.90}{20048.0} \right)
      = -66.68 \; mV

Without a pump, r will be 1 and the calculation becomes

 
V_0^m = 25.32 (mV) \ln \left( \frac{33.33 \cdot 20.0 + 440.0}{33.33 \cdot 400.0 + 50.0} \right)
      = 25.32 (mV) \ln \left( \frac{1106.60}{13382.0} \right)
      = -63.11 \; mV


Our sample calculation thus shows that the pump contributes -66.68 - (-63.11) mV or -3.57 mV to the rest voltage. This is only \frac{3.57}{66.68} or 5.35 percent!
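A short Python sketch of the pump-modified voltage equation reproduces both numbers (the function name is ours; r = 1 recovers the no-pump case):

import math

def pump_voltage(r, rK, K_out, K_in, Na_out, Na_in, RTF=25.32):
    # Rest voltage in mV from the electrogenic pump equation above;
    # rK = PK / PNa is the permeability ratio and r the pump ratio.
    return RTF * math.log((r * rK * K_out + Na_out) /
                          (r * rK * K_in + Na_in))

print(pump_voltage(1.5, 33.33, 20.0, 400.0, 440.0, 50.0))  # about -66.68
print(pump_voltage(1.0, 33.33, 20.0, 400.0, 440.0, 50.0))  # about -63.11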

16.5 Excitable Cells:

There are specialized cells in most living creatures called neurons which are adapted for generating signals which are used for the transmission of sensory data, control of movement and cognition through mechanisms we don't fully understand. However, a neuron is an example of what is called an excitable cell, which is a cell whose membrane is studded with many voltage gated sodium and potassium channels. In terms of ionic permeabilities, the GHK voltage equation for the usual sodium, potassium and chlorine ions gives

V_0^m = 25.32 \; (mV) \ln \left( \frac{P_K [K^+]_{out} + P_{Na} [Na^+]_{out} + P_{Cl} [Cl^-]_{in}}{P_K [K^+]_{in} + P_{Na} [Na^+]_{in} + P_{Cl} [Cl^-]_{out}} \right)

which is about −60 mV at rest but which we have already seen can rapidly increase to +40 mV
upon a large shift in the sodium and potassium permeability ratio. Recalling our discussion in
Chapter 14, we can also write the rest voltage in terms of conductances as

V_0^m = \frac{g_K}{g_K + g_{Na} + g_{Cl}} E_K + \frac{g_{Na}}{g_K + g_{Na} + g_{Cl}} E_{Na} + \frac{g_{Cl}}{g_K + g_{Na} + g_{Cl}} E_{Cl}

Either the conductance or the permeability model allows us to understand this sudden increase in
voltage across the membrane in terms of either sodium to potassium permeability or conductance
ratio shifts.
We will study all of this in a lot of detail later, but for right now, let’s just say that under
certain circumstances, the rest potential across the membrane can be stimulated just right to
cause a rapid rise in the equilibrium potential of the cell, followed by a sudden drop below the
equilibrium voltage and then ended by a slow increase back up to the rest potential. The shape
of this wave form is very characteristic and is shown in Figure 16.2. This type of wave form is
called an action potential and is a fundamental characteristic of excitable cells. In the figure,
we draw the voltage across the membrane and simultaneously we draw the conductance curves
for the sodium and potassium ions. Recall that conductance is reciprocal resistance, so a spike
in sodium conductance, for example, is proportional to a spike in sodium ion current. So in the
figure we see that sodium current spikes first and potassium second. Now that we have discussed
so many aspects of cellular membranes, we are at a point where we can develop a qualitative
understanding of how this behavior is generated. We can’t really understand the dynamic nature
of this pulse yet (that is, its time and spatial dependence) but we can explain in a descriptive fashion how the physical characteristics of the potassium and sodium gates cause the behavior we see in Figure 16.2.


Figure 16.2: A Typical Action Potential

The changes in the sodium and potassium conductances that we see in the action potential can be explained by molecular properties of the sodium and potassium channels. As in the example we worked through for the resting potential for the squid nerve cells, for this explanation, we will assume the chlorine permeability or, equivalently, the chlorine conductance does not change. So all the qualitative features of the action potential will be explained solely in terms of relative sodium to potassium conductance ratios. We have already seen that if we allow a huge increase in the sodium to potassium conductance ratio, we generate a massive depolarization of the membrane. So now, we will try to explain how this happens. We have talked about basic voltage operated gates before in Chapter 15, but now let's specialize to a typical sodium channel as shown in Figure 16.3. The drawing of a potassium channel will be virtually identical.

Figure 16.3: A Typical Sodium Channel

When you look at the drawing of the sodium channel, you'll see it is drawn in three parts. Our idealized channel has a hinged cap which can cover the part of the gate that opens into the cell. We call this the inactivation gate. It also has a smaller flap inside the gate which can close off the throat of the channel. This is called the activation gate. As you see in the drawing, these two pieces can be in one of three positions: resting (the activation gate is closed and the inactivation gate is open); open (the activation gate is open and the inactivation gate is open); and closed (the inactivation gate is closed, whether the activation gate is open or closed). Since this is a voltage activated gate, the transition from resting to open depends on the voltage across the cell membrane. We typically use the following terminology:

• When the voltage across the membrane is above the resting membrane voltage, we say the
cell is depolarized.

• When the voltage across the membrane is below the resting membrane voltage, we say the
cell is hyperpolarized.

These gates transition from resting to open when the membrane depolarizes. In detail, the
probability that the gate opens increases upon membrane depolarization. However, the probability
that the gate transitions from open to closed is NOT voltage dependent. Hence, no matter what
the membrane voltage, once a gate opens, there is a fixed probability it will close again.
Hence, an action potential can be described as follows: when the cell membrane is sufficiently
depolarized, there is an explosive increase in the opening of the sodium gates which causes a
huge influx on sodium ions which produces a short lived rapid increase in the voltage across the
membrane followed by a rapid return to the rest voltage with a typical overshoot phase which
temporarily keeps the cell membrane hyperpolarized. We can explain much of this qualitatively
as follows:
The voltage gated channels move randomly back and forth between their open and closed
states. At rest potential, the probability that a channel will move to the open position is small
and hence, most channels are in the closed position. When the membrane is depolarized from
some mechanism, the probability that a channel will transition to the open position increases. If
the depolarization of the membrane lasts long enough or is large enough, then there is an explosive
increase in the cell membrane potential fueled by

Explosive Depolarization of the Membrane: The depolarization causes a few N a+ channels


to open. This opening allows Na^+ to move into the cell through the channel, which increases
the inside N a+ concentration. Now our previous attempts to explain the voltage across the
membrane were all based on equilibrium or steady state arguments; here, we want to think about
what happens in that brief moment when the N a+ ions flood into the cell. This is not a steady
state analysis. Here, the best way to think of it is that the number of plus ions in the cell has
gone up. So since the electrical field is based on minus to plus gradients, we would anticipate that
the electrical field temporarily increases. We also know from basic physics that

E = - \frac{dV}{dx}

implying, since the membrane starts at x_0 and ends at x_0 + \ell, that

E \ell = -V(x_0 + \ell) + V(x_0) = V_{in} - V_{out}.

where, as usual, we have labeled the voltage at the inside, V (x0 ) as Vin and the other as Vout .
Hence, since we define the membrane voltage V0m as

V0m = Vin − Vout ,

we see that

E \ell = V_0^m

Thus if N a+ ions come in, the electric field goes up and so does the potential across the mem-
brane. Hence, the initial depolarizing event further depolarizes the membrane. This additional
depolarization of the membrane increases the probability that the sodium gates open. Hence,
more sodium gates transition from resting to open and more Na^+ ions move into the cell caus-
ing further depolarization as outlined above. We see depolarization induces further depolarization
which is known as positive feedback. Roughly speaking, the membrane voltage begins to drive
towards EN a as determined by the Nernst voltage equation. For biological cells, this is in the
range of 70 mV. Qualitatively, this is the portion of the voltage curve that is rising from rest
towards the maximum in Figure 16.2.

Sodium Inactivation: The probability that a sodium gate will move from open to closed is independent of the membrane voltage. Hence, as soon as sodium gates begin to open, there is a fixed probability they will then begin to move towards closed. Hence, even as the membrane begins its rapid depolarization phase, the sodium gates whose opening provided the trigger begin to close. This is most properly described in terms of competing rates in which one rate is highly voltage dependent, but we can easily see that essentially, the membrane continues to depolarize until the rate of gate closing exceeds the rate of opening and the net flow of sodium ions is out of the cell rather than into the cell. At this point, the membrane voltage begins to fall. The sodium gate therefore does not remain open for long. Its action provides a brief spike in sodium conductance as shown in Figure 16.4.

Figure 16.4: Spiking Due To Sodium Ions


Potassium Channel Effects: The potassium gate is also a voltage dependent channel whose
graphical abstraction is similar to that shown in Figure 16.3. These channels respond much slower
to the membrane depolarization induced by the sodium channel activations. As the K + channels
open, K + inside the cell flows outward and thereby begins to restore the imbalance caused by
increased N a+ in the cytoplasm. The explanation for this uses the same electrical field and
potential connection we invoked earlier to show why the membrane depolarizes. This outward
flow, however, moves the membrane voltage the other way and counteracts the depolarization
and tries to move the membrane voltage back to rest potential. The K+ channels stay open
a long time compared to the N a+ channels. Since the sodium channels essentially pop open
and then shut again quickly, this long open time of the potassium channel means the potassium
conductance, g_K, falls very slowly to zero. As the potassium outward current overpowers the
diminishing sodium inward current, the potential accelerates its drop towards rest which can be
interpreted as a drive towards the Nernst potential EK which for biological cells is around −80
mV.

Chlorine Ion Movement: The rest potential is somewhere in between the sodium and potas-
sium Nernst potentials and the drive to reach it is also fueled by several other effects. There is also
a chlorine current which is caused by the movement of chlorine through the membrane without
gating. The movement of negative ions into the cell is qualitatively the same as the movement
of positive potassium ions out of the cell – the membrane voltage goes down. Thus, chlorine
current in the cell is always acting to bring the cell back to rest. We see the cell would return to
its rest voltage even without the potassium channels, but the potassium channels accelerate the
movement. One good way to think about it is this: the potassium channels shorten the length
of time the membrane stays depolarizes by reducing the maximum height and the width of the
pulse we see in Figure 16.2.

The N a+ - K + Pump: The active sodium - potassium pump moves 3 sodium ions out of the
cell for each 2 potassium ions that are moved in. This means that the cell’s interior plus charge
goes down by one unit per pump cycle. This means the membrane voltage goes down. Hence, the pump acts to bring
the membrane potential back to rest also.

The Hyperpolarization Phase: In Figure 16.2, you can see that after the downward voltage
pulse crosses the rest potential line, the voltage continues to fall before it bottoms out and slowly
rises in an asymptotic fashion back up to the rest potential. This is called the hyperpolarization
phase and the length of time a cell spends in this phase and the shape of this phase of the
curve are important to how this potential change from this cell can be used by other cells for
communication. As the potassium channels begin to close, the K + outward current drops and
the voltage goes up towards rest.


The Refractory Period: During the hyperpolarization phase, many sodium channels are in-
activated. The probability that a channel will reopen immediately is small and most sodium gates
are closed. So for a short time, even if there is an initial depolarizing impulse event, only a small
number of sodium channels are in a position to move to open. However, if the magnitude of the
depolarization event is increased, more channels will open in response to it. So it is possible that
with a large enough event, a new positive feedback loop of depolarization could begin and gen-
erate a new action potential. However, without such an event, the membrane is hyperpolarized,
the probability of opening is a function of membrane voltage and so is very low, and hence most
gates remain closed. The membrane is continuing to move towards rest though and the closer
this potential is to the rest value, the higher the probability of opening for the sodium gates.
Typically, this period where the cell has a hard time initiating a new action potential is on the
order of 1 − 2 mS. This period is called the refractory period of the cell.

With all this said, let’s go back to the initial depolarizing event. If this event is a small pulse of
N a+ current that is injected into a resting cell, there will be a small depolarization which does
not activate the sodium channels, and so the ECE forces acting on the K^+ and Cl^- ions increase.
K + goes out of the cell and Cl− comes in. This balances the inward N a+ ion jet and the cell
membrane voltage returns to rest. If the initial depolarization is larger, some N a+ channels open
but if the K + and Cl− net current due to the resulting ECE forces is larger than the inward N a+
current due to the jet and the sodium channels that are open, the membrane potential still goes
back to rest. However, if the depolarization is sufficiently large, the ECE derived currents cannot
balance the inward N a+ current due to the open sodium gates and the explosive depolarization
begins.

Chapter 17
Lumped and Distributed Cell Models

We can now begin to model a simple biological cell. We can think of a cell as having an input
line (this models the dendritic tree), a cell body (this models the soma) and an output line (this
models the axon). We could model all these elements with cables – thin ones for the dendrite and
axon and a fat one for the soma. To make our model useful, we need to understand how current
injected into the dendritic cable propagates a change in the membrane voltage to the soma and
then out across the axon. Our simple model is very abstract and looks like that of Figure 17.1.

Note that the dendrite in Figure 17.1 is of length L and its position variable is w and the soma
also has length L and its position variable is w. Note that each of these cables has two endcaps
– the right and left caps on the individual cylinders – and we eventually will have to understand
the boundary conditions we need to impose at these endcaps when current is injected into one of
these cylinders. We could also model the soma as a spherical cell, but for the moment let’s think
of everything as the cylinders you see here. So to make progress on the full model, we first need
to understand a general model of a cable.

Figure 17.1: A Simple Cell Model


Figure 17.2: Cable Cylinders. (a) The Cable Cylinder Model. (b) 2D Cross-section.

17.1 Modeling Radial Current:

Now a fiber or cable can be modeled by some sort of cylinder framework. In Figure 17.2(a), we
show the cylinder model. In our cylinder, we will first look at currents that run out of the cylinder
walls and ignore currents that run parallel or longitudinally down the cylinder axis. We label the
interesting variables as follows:

• The inner cylinder has radius a.

• L is the length of the wire.

• V is the voltage across the wall or membrane.

• I_r is the radial current flow through the cylinder. This means the current is flowing through the walls. This is measured in amps.

• J_r is the radial current flow density which has units of amps per unit area – for example, \frac{amps}{cm^2}.

• K_r is the radial current per unit length – typical units would be \frac{amps}{cm}.

The 2D cross-section shown in Figure 17.2(b) further illustrates that the radial current J_r
moves out of the cylinder.
Clearly, since the total surface area of our cylinder is 2πaL, we have the following conversions
between the various current variables.

I_r = 2 \pi a L \, J_r
K_r = \frac{I_r}{L} = 2 \pi a \, J_r


Figure 17.3: Cable Details. (a) The Cable Cylinder Model Cross Section. (b) Part of the Cable Bilipid Membrane Structure.

Now, imagine our right circular cylinder as being hollow with a very thin skin filled with some
conducting solution, as shown in Figure 17.3(a). Further, imagine the wall is actually a bilipid
layer membrane as shown in Figure 17.3(b).
We now have a simple model of a biological cable we can use for our models. This is called
the annular model of a cable. In other words, if we take a piece of dendritic or axonal fiber, we
can model it as two long concentric cylinders. The outer cylinder is filled with a fluid which is
usually considered to be the extracellular fluid outside the nerve fiber itself. The outer cylinder
walls are idealized to be extremely thin and indeed we don’t usually think about them much. In
fact, the outer cylinder is actually just a way for us to handle the fact that the dendrite or axon
lives in an extracellular fluid bath. The inner cylinder has a thin membrane wall wrapped around
an inner core of fluid. This inner fluid is the intracellular fluid and the membrane wall around
it is the usual membrane we have discussed already. The real difference here is that we are now
looking at a specific geometry for our membrane structure.

17.2 Modeling Resistance:

We also will think of these cables as much as possible in terms of simple circuits. Hence, our finite
piece of cable should have some sort of resistance associated with it. We assume from standard
physics that the current density per unit surface area, Jr , is proportional to the electrical field
density per length E . The proportionality constant is called conductivity and is denoted by σe .
The resulting equation is thus

Jr = σe E


Figure 17.4: Longitudinal Cable Currents

Another traditional parameter is called the resistivity which is reciprocal conductivity and is
labeled ρ.

\rho = \frac{1}{\sigma_e}

The units here are

• E is measured in \frac{Volts}{cm}.

• \sigma_e is measured in \frac{1}{ohm-cm}.

• \rho is measured in ohm-cm.

This gives us the units of \frac{Volts}{ohm-cm^2} or \frac{amps}{cm^2} for the current density J_r, which is what we expect them to be! The parameters \rho and \sigma_e measure properties that tell us what to do for one cm of our cable but do not require us to know the radius. Hence, these parameters are material properties.

17.3 Longitudinal Properties:


Now let's look at currents that run parallel to the axis of the cable. These are called longitudinal currents which we will denote by the symbol I_a. Now that we want to look at currents down the fiber, we need to locate at what position we are on the fiber. We will let the variable z denote the cable position and we will assume the cable has length L. We show this situation in Figure 17.4.
The surface area of each endcap of the cable cylinder is πa2 . This implies that if we cut the
cable perpendicular to its longitudinal axis, the resulting slice will have cross-sectional area πa2 .
Our currents are thus:

• I_a is the longitudinal current flow which has units of amps.

• J_a is the longitudinal current flow density which has units of amps per unit area – \frac{amps}{cm^2}.


with the usual conversion

Ia = πa2 Ja

Since the inside of the cable is filled with the same conducting fluid we used in our radial current
discussions, we know

Ja = σe E

and so

Ia = πa2 σe E
πa2 E
=
ρ

using the definition of resistivity. From standard physics, it follows that

E = - \frac{d\Psi}{dz}

where Ψ is the potential in the conductor. Hence, we have

\frac{\rho I_a}{\pi a^2} = - \frac{d\Psi}{dz}

implying

V = \Psi(0) - \Psi(L) = \frac{\rho I_a}{\pi a^2} L

Now if we use Ohm’s law to relate potential, current and resistance as usual, we would expect a
relationship of the form

V = Resistance \times I_a = \frac{\rho L}{\pi a^2} I_a

This suggests that the term in front of Ia plays the role of a resistance. Hence, we will define R
to be the resistance of the cable by


Figure 17.5: Adding Thickness To The Cable Wall. (a) Cable Membrane With Wall Thickness Shown. (b) The Detailed Cross Section.

R = \frac{\rho L}{\pi a^2}

The resistance per unit length would then be r = \frac{R}{L}, which is \frac{\rho}{\pi a^2}. This notation is a bit unfortunate as we usually use the symbol r in other contexts to be a resistance measured in ohms; here it is a resistance per unit length measured in \frac{ohms}{cm}. Note that the resistance per unit length here is of the form \frac{\rho}{A}.
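To get a feel for the sizes involved, here is a hedged Python sketch computing R and r for a made-up but biologically plausible fiber (all three numbers below are assumptions, not values from the text):

import math

rho = 60.0       # ohm-cm, assumed resistivity of the inner solution
a   = 2.0e-4     # cm, assumed fiber radius (about 2 microns)
L   = 0.1        # cm, assumed fiber length

R = rho * L / (math.pi * a ** 2)   # total axial resistance, ohms
r = R / L                          # resistance per unit length, ohm/cm
print(R, r)                        # roughly 4.8e7 ohms and 4.8e8 ohm/cm

Even this tiny fiber has a huge axial resistance because the cross-sectional area \pi a^2 is so small.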

17.4 Current in a Cable with a Thin Wall:

So far, we haven't actually thought of the cable wall as having a thickness. Now let's let the wall thickness be \ell and consider the picture shown in Figure 17.5(a), with exaggerated cross section as shown in Figure 17.5(b). We let the variable s indicate where we are inside the membrane wall. Hence, s is a variable which ranges from a value of a (the inner side of the membrane) to the value a + \ell (the outer side of the membrane). Let I_r(s) denote the radial current at s; then the radial current density is defined by

J_r(s) = \frac{I_r(s)}{2 \pi s L}

We will assume the radial current is constant; i. e. Ir is independent of s. Thus, we have

J_r(s) = \frac{I_r}{2 \pi s L}.


Now following the arguments we used before, we assume the current density is still proportional to the electric field density and that the electrical field density E is constant. Thus, we have J_r(s) = \sigma_e E. Hence

E = \frac{J_r(s)}{\sigma_e} = - \frac{d\Psi}{ds}

This implies

\frac{I_r}{2 \pi s L} = - \sigma_e \frac{d\Psi}{ds}

Integrating, we find

\int_a^{a+\ell} \frac{I_r}{2 \pi s L} \, ds = - \sigma_e \int_{\Psi(a)}^{\Psi(a+\ell)} d\Psi
\frac{I_r}{2 \pi L} \ln \left( \frac{a + \ell}{a} \right) = - \sigma_e (\Psi(a + \ell) - \Psi(a))

Now Ψ(a + `) is the voltage outside the cable at the end of the cell wall (Vout ) and Ψ(a) is the
voltage inside the cable at the start of the cell wall (Vin ). As usual, we represent the potential
difference Vin − Vout by Vr . Thus, using the definition of ρ, we have

 
\ln \left( \frac{a + \ell}{a} \right) = \frac{2 \pi L \sigma_e V_r}{I_r} = \frac{2 \pi L V_r}{\rho I_r}

We conclude

 
V_r = \frac{\rho I_r}{2 \pi L} \ln \left( 1 + \frac{\ell}{a} \right)

Recall that \ell is the thickness of the cell wall, which in general is very small compared to the radius a. Hence the ratio of \ell to a is usually quite small. Recall from basic calculus that for a twice differentiable function f defined near the base point x_0, we can replace f near x_0 as follows:

f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2} f''(c)(x - x_0)^2

where the number c lies in the interval (x0 , x). Unfortunately, this number c depends on the
choice of x, so this equality is not very helpful in general. The straight line L(x, x0 ) given by

297
17.4. THIN WALL CURRENTS CHAPTER 17. LUMPED MODELS

L(x, x_0) = f(x_0) + f'(x_0)(x - x_0)

is called the tangent line approximation or first order approximation to the function f near x0 .
Hence, we can say

f(x) = L(x, x_0) + \frac{1}{2} f''(c)(x - x_0)^2

This tells us the error we make in replacing f by its first order approximation, e(x, x0 ), can be
defined by

e(x, x_0) = |f(x) - L(x, x_0)| = \frac{1}{2} |f''(c)| (x - x_0)^2

Now ln(1 + x) is twice differentiable on (0, \infty), so the first order approximation to ln(1 + x) near the base point 0 is given by


L(x, 0) = \ln(1) + \left. \frac{1}{1 + x} \right|_{x=0} (x - 0) = x

with error

e(x, 0) = | \ln(1 + x) - x | = \frac{1}{2} \frac{1}{(1 + c)^2} x^2

where c is some number between 0 and x. Even though we can’t say for certain where c is in this
interval, we can say that

\frac{1}{(1 + c)^2} \leq 1

no matter what x is! Hence, the error in making the first order approximation on the interval
[0, x] is always bounded above like so

e(x, 0) \leq \frac{1}{2} x^2


Now for our purposes, let's replace \ln(1 + \frac{\ell}{a}) with a first order approximation on the interval (0, \frac{\ell}{a}). Then the x above is just \frac{\ell}{a} and we see that the error we make is

e(\frac{\ell}{a}, 0) \leq \frac{1}{2} \left( \frac{\ell}{a} \right)^2

If the ratio of the cell membrane thickness to radius is small, the error we make is negligible. For example, a biological membrane is about 70 nm thick and the cell is about 20,000 nm in radius. Hence the membrane to radius ratio is 3.5e-3, implying the error is on the order of 10^{-5}, which is relatively small. Since in biological situations, this ratio is small enough to permit a reasonable first order approximation with negligible error, we replace \ln(1 + \frac{\ell}{a}) by \frac{\ell}{a}. Thus, we have to first order

V_r = \frac{\rho I_r}{2 \pi L} \frac{\ell}{a}
R = \frac{V_r}{I_r} = \frac{\rho \ell}{2 \pi L a}

We know R is measured in ohms, so the resistance of a unit surface area of membrane is R times the inner surface area of the cable, which is 2 \pi L a. Hence, this specific resistance is \rho \ell, which has units of ohms-cm^2. We know the wall has an inner and an outer surface area. The outer surface area is 2 \pi a (1 + \frac{\ell}{a}) L, which is a bit bigger. In order to define the resistance of a unit surface area, we need to know which surface area we are talking about. So here, since we have already approximated the logarithm term around the base point 0 (which means the inside wall, as 1 + 0 corresponds to the inside wall!) we choose to use the surface area of the inside wall. Note how there are many approximating ideas going on here behind the scenes, if you will. We must always remember these assumptions in the models we build!
The upshot of all of this discussion is that for a cable model with a thin membrane, the resistance per unit length, r, with units \frac{ohms}{cm}, should be defined to be

r = \frac{R}{\ell} = \frac{\rho}{2 \pi L a}
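The quality of the first order approximation is easy to check numerically. This Python sketch uses the 70 nm membrane and 20,000 nm radius quoted above:

import math

ell, a = 70.0, 20000.0            # nm: membrane thickness, cell radius
x = ell / a                       # about 3.5e-3

exact = math.log(1.0 + x)
approx = x
print(abs(exact - approx))        # about 6.1e-6
print(0.5 * x * x)                # the (1/2)(l/a)^2 bound, about 6.1e-6

The actual error sits right at the x^2/2 bound we derived, and both are on the order of 10^{-5} as claimed.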

Chapter 18
The Cable Model

In a uniform isolated cell, the potential difference across the membrane depends on where you are
on the cell surface. For example, we could inject current into a cylindrical cell at position z0 as
shown in Figure 18.2(a). In fact, in the laboratory, we could measure the difference between V^m (the membrane potential) and V_0^m (the rest potential) that results from the current injection at z = 0 at various positions z downstream, and we would see potential difference curves vs. position that have the appearance of Figure 18.1. The z = 0 curve is what we would measure at the site of the current injection; the z = 1/2 and z = 2 curves are what we would measure 1/2 or 2 units downstream from the injection site respectively. Note the spike in potential we see is quite localized to the point of injection and falls

Figure 18.1: Potential Difference vs Position Curves


Figure 18.2: Cylindrical Cell Details. (a) Current Injection. (b) Radial Cross Section.

rapidly off as we move to the right or left away from the site. Our simple cylindrical model gave
us

V_r = \frac{\rho I_r \ell}{2 \pi L a}

as the voltage across the membrane or cell wall due to a radial current Ir flowing uniformly
radially across the membrane along the entire length of cable. So far our model does not allow us
to handle dependence on position so we can’t reproduce voltage vs position curves as shown in
Figure 18.1. We also currently can’t model explicit time dependence in our models!
Now we wish to find a way to model V m as a function of the distance downstream from the
current injection site and the time elapsed since the injection of the current. This model will be
called the Core Conductor Model.

18.1 The Core Conductor Model Assumptions:


Let’s start by imagining our cable as a long cylinder with another cylinder inside it. The inner
cylinder has a membrane wall of some thickness small compared to the radius of the inner cylinder.
The outer cylinder simply has a skin of negligible thickness. If we take a radial cross section as
shown in Figure 18.2(b), we see this structure in cross section.
In this radial cross section, let’s label the important currents and voltages as shown in Figure
18.3(a). As you can see, we are using the following conventions:

• t is time usually measured in milli-seconds or mS.

• z is position usually measured in cm.

• (r, θ) are the usual polar coordinates we could use to label points in any given radial cross
section.


Figure 18.3: Radial and Longitudinal Details. (a) Currents and Voltages in the Radial Cross Section. (b) A Longitudinal Slice of the Cable.

• K_e(z, t) is the current per unit length across the outer cylinder due to external sources applied in a cylindrically symmetric fashion. This will allow us to represent current applied to the surface through external electrodes. This is usually measured in \frac{amp}{cm}.

• K_m(z, t) is the membrane current per unit length from the inner to outer cylinder through the membrane. This is also measured in \frac{amp}{cm}.

• V_i(z, t) is the potential in the inner conductor measured in milli-volts or mV.

• V_m(z, t) is the membrane potential measured in milli-volts or mV.

• V_o(z, t) is the potential in the outer conductor measured in milli-volts or mV.

We can also look at a longitudinal slice of the cable as shown in Figure 18.3(b). The longitudinal slice allows us to see the two main currents of interest, I_i and I_o, as shown in Figure 18.4, where

• Io (z, t) is the total longitudinal current flowing in the +z direction in the outer conductor
measured in amps.

• Ii (z, t) is the total longitudinal current flowing in the +z direction in the inner conductor
measured in amps.

The Core Conductor Model is built on the following assumptions:

1. The cell membrane is a cylindrical boundary separating two conductors of current called
the intracellular and extracellular solutions. We assume these solutions are homogeneous,
isotropic and obey Ohm’s Law.


Figure 18.4: Longitudinal Currents

2. All electrical variables have cylindrical symmetry which means the variables do not depend
on the polar coordinate variable θ.

3. A circuit theory description of currents and voltages is adequate for our model.

4. Inner and outer currents are axial or longitudinal only. Membrane currents are radial only.

5. At any given position longitudinally (i.e. along the cylinder) the inner and outer conductors
are equipotential. Hence, potential in the inner and outer conductors is constant radially.
The only radial potential variation occurs in the membrane.

Finally, we assume the following geometric parameters:

• r_o is the resistance per unit length in the outer conductor measured in \frac{ohm}{cm}.

• r_i is the resistance per unit length in the inner conductor measured in \frac{ohm}{cm}.

• a is the radius of the inner cylinder measured in cm.

It is also convenient to define the current per unit area variable Jm :

• J_m(z, t) is the membrane current density per unit area measured in \frac{amp}{cm^2}.

18.2 Building the Core Conductor Model:


Now let’s look at a slice of the model between positions z − ∆z and z + ∆z. In Figure 18.5, we
see one half of the full model that stretches a full 2∆z in length.
From the 2∆z slice in Figure 18.5, we can abstract the electrical network model we see in
Figure 18.6:
Now in the inner conductor, we have current Ii (z, t) entering the face of the inner cylinder. At
that point the inner cylinder is at voltage V_i(z, t). A distance \Delta z away, we see current I_i(z + \Delta z, t) leaving the cylinder and we note the voltage of the cylinder is now at V_i(z + \Delta z, t). Finally, there


Figure 18.5: The Two Slice Model

Figure 18.6: The Equivalent Electrical Network Model

Figure 18.7: Inner Cylinder Currents


is a radial membrane current coming out of the cylinder uniformly all through this piece of cable.
We illustrate this in Figure 18.7:
Now we know that the total current I through the membrane is given by

I(z, t) = 2 \pi a \, J_m(z, t) \, \Delta z = K_m(z, t) \, \Delta z

and at Node d, the currents going into the node should match the currents coming out of the
node:

Io (z, t) + Km (z, t) ∆z = Io (z + ∆z, t) + Ke (z, t) ∆z

Also, the voltage drop across the resistance ri ∆z between Node a and Node b satisfies

ri ∆z Ii (z + ∆z, t) = Vi (z, t) − Vi (z + ∆z, t)

Similarly, between Node d and Node c we find the voltage drop satisfies

ro ∆z Io (z + ∆z, t) = Vo (z, t) − Vo (z + ∆z, t)

Also at Node a, we find

Ii (z, t) − Km (z, t) ∆z = Ii (z + ∆z, t)

Next, note that Vm is Vi − Vo . Now we know that the inner current per unit area density is Ji
and it is defined by

J_i(z, t) = \frac{I_i(z, t)}{\pi a^2} = \sigma_i E_i = - \sigma_i \frac{d\Psi}{dz}

where z is the position in the inner cylinder, σi is the conductivity and Ei is the electrical field
density per length of the inner solution, respectively and Ψ is the potential at position z in the
inner cylinder. It then follows that


\frac{I_i(z, t)}{\pi a^2 \sigma_i} = - \frac{d\Psi}{dz}

Now let's assume the inner longitudinal current from position z to position z + \Delta z is constant with value I_i(z, t). Then, integrating we obtain

\frac{I_i(z, t)}{\pi a^2 \sigma_i} \Delta z = - \int_z^{z + \Delta z} \frac{d\Psi}{dz} \, dz = \Psi(z) - \Psi(z + \Delta z).

But this change in potential from z to z + ∆z can be approximated by the inner cylinder voltage
at z at time t, Vi (z, t). Thus, noting the resistivity of the inner solution, ρi , is the reciprocal of
the conductivity, our approximation allows us to say

\frac{\rho_i I_i(z, t)}{\pi a^2} \Delta z = V_i(z, t)

This implies that resistance of this piece of cable can be modeled by

R_i(z, t) = \frac{V_i(z, t)}{I_i(z, t)} = \frac{\rho_i \Delta z}{\pi a^2}

and so we conclude that since Ri must be the same as ri ∆z, we have

ri = ρi/(πa²)

Rearranging the relationships we have found, we summarize as follows:

[ Ii(z + ∆z, t) − Ii(z, t) ]/∆z = −Km(z, t)    (current balance at Node a)
[ Io(z + ∆z, t) − Io(z, t) ]/∆z = Km(z, t) − Ke(z, t)    (current balance at Node d)
[ Vi(z + ∆z, t) − Vi(z, t) ]/∆z = −ri Ii(z + ∆z, t)    (voltage drop between Nodes a and b)
[ Vo(z + ∆z, t) − Vo(z, t) ]/∆z = −ro Io(z + ∆z, t)    (voltage drop between Nodes c and d)

Now from standard multivariable calculus, we know that if a function f(z, t) has a partial derivative at the point (z, t), then the following limit exists


lim_{∆z→0} [ f(z + ∆z, t) − f(z, t) ]/∆z

and the value of this limit is denoted by the symbol ∂f/∂z. Now the equations above apply for
any choice of ∆z. Physically, we expect the voltages and currents we see here to be smooth
differentiable functions of z and t. Hence, we expect that if we let ∆z go to zero, we will obtain
the equations:

∂Ii/∂z = −Km(z, t)    (18.1)
∂Io/∂z = Km(z, t) − Ke(z, t)    (18.2)
∂Vi/∂z = lim_{∆z→0} ( −ri Ii(z + ∆z, t) ) = −ri Ii(z, t)    (18.3)
∂Vo/∂z = lim_{∆z→0} ( −ro Io(z + ∆z, t) ) = −ro Io(z, t)    (18.4)
Vm = Vi − Vo    (18.5)

We call Equations 18.1 - 18.5 the Core Equations. Note we can manipulate these equations as follows: Equation 18.5 implies that

∂Vm/∂z = ∂Vi/∂z − ∂Vo/∂z

From Equations 18.3 and 18.4, it then follows that

∂Vm/∂z = −ri Ii + ro Io

Thus, using Equations 18.1 and 18.2, we find

∂²Vm/∂z² = −ri ∂Ii/∂z + ro ∂Io/∂z = ri Km + ro Km − ro Ke

Thus, the core equations imply that the membrane voltage satisfies the partial differential equation


∂²Vm/∂z² = (ri + ro) Km − ro Ke    (18.6)

Note that the units here do work out. By the definition of the partial derivative, the first order partials of Vm with respect to z have units of volt/cm; hence, the second order partial has units of volt/cm². Each of the ri and ro terms has units of ohm/cm, and Km and Ke are current per length terms with units of amp/cm. Thus the products have units of (amp·ohm)/cm², which is volt/cm². This partial differential equation is known as the Core Conductor Equation.
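
Although we will solve equations like this analytically in the coming sections, Equation 18.6 is also easy to attack numerically: replace the second partial in z by a second difference on a grid and solve the resulting linear system. Here is a minimal MatLab sketch; the values of ri and ro, the sample membrane current profile and the zero end conditions are all assumptions made purely for illustration.

% Sketch: finite difference solve of the Core Conductor Equation,
% d^2 Vm/dz^2 = (ri + ro) Km - ro Ke, with assumed parameter values.
ri = 1.0; ro = 0.5;                    % ohm/cm, assumed
N  = 201;
z  = linspace(-1, 1, N)';
dz = z(2) - z(1);
Km = exp(-z.^2);                       % assumed membrane current profile
Ke = zeros(N, 1);                      % no external current in this sketch
% second difference matrix approximating d^2/dz^2
e  = ones(N, 1);
D2 = spdiags([e -2*e e], -1:1, N, N)/dz^2;
rhs = (ri + ro)*Km - ro*Ke;
% impose Vm = 0 at both ends and solve the linear system
D2(1,:) = 0; D2(1,1) = 1; rhs(1) = 0;
D2(N,:) = 0; D2(N,N) = 1; rhs(N) = 0;
Vm = D2\rhs;
plot(z, Vm);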

Chapter 19
The Transient Cable Equations

Normally, we find it useful to model stuff that is happening in terms of how far things deviate or
move away from what are called nominal values. We can use this idea to derive a new form of
the Core Conductor Equation which we will call the Transient Cable Equation. Let’s denote the
rest values of voltage and current in our model by

• Vm0 is the rest value of the membrane potential.

• Km0 is the rest value of the membrane current per length density.

• Ke0 is the rest value of the externally applied current per length density.

• Ii0 is the rest value of the inner current.

• Io0 is the rest value of the outer current.

• Vi0 is the rest value of the inner voltage.

• Vo0 is the rest value of the outer voltage.

It is then traditional to define the transient variables as perturbations from these rest values using
the same variables but with lower case letters:

• vm is the deviation of the membrane potential from rest.

• ii is the deviation of the current in the inner fluid from rest.

• io is the deviation of the current in the outer fluid from rest.

• vi is the deviation of the voltage in the inner fluid from rest.

• vo is the deviation of the voltage in the outer fluid from rest.

311
19.1. DERIVING THE TRANSIENT CABLE CHAPTER
EQUATION:
19. TRANSIENT CABLES

• km is the deviation of the membrane current density from rest.

These variables are defined by

vm(z, t) = Vm(z, t) − Vm0
vi(z, t) = Vi(z, t) − Vi0
vo(z, t) = Vo(z, t) − Vo0
km(z, t) = Km(z, t) − Km0
ii(z, t) = Ii(z, t) − Ii0
io(z, t) = Io(z, t) − Io0

19.1 Deriving the Transient Cable Equation:

Now in our core conductor model so far, we have not modeled the membrane at all. For this transient version, we need to think more carefully about the membrane boxes we showed in Figure 18.6. We will replace our empty membrane box by a parallel circuit model. Now this box is really a chunk of membrane that is ∆z wide. We will assume our membrane has a constant resistance and capacitance. We know that conductance is reciprocal resistance, so our model will consist of a two branch circuit: one branch contains a capacitor and the other, the conductance element. We will let cm denote the membrane capacitance density per unit length (measured in farad/cm). Hence, since the membrane box is ∆z wide, we see the value of the capacitance should be cm ∆z. Similarly, we let gm be the conductance per unit length (measured in 1/(ohm·cm)) for the membrane. The amount of conductance for the box element is thus gm ∆z. In Figure 19.1, we illustrate our new membrane model. Since this is a resistance - capacitance parallel circuit, it is traditional to call this an RC membrane model.
In Figure 19.1, the current going into the element is Km(z, t)∆z and we draw the rest voltage for the membrane as a battery of value Vm0.
What happens when the membrane is at rest? At rest, all of the transients must be zero. At rest,
there can be no capacitative current and so all the current flows through the conductance branch.
Thus applying Ohm’s law,

Km0 ∆z = gm ∆z Vm0    (19.1)

On the other hand, when we are not in a rest position, we have current through both branches. Recall that for a capacitor C, the charge q held in the capacitor due to a voltage V is q = CV, which implies that the current through the capacitor due to a time varying voltage is i = dq/dt, given by


Figure 19.1: The RC Membrane Model

i = C ∂V/∂t

From Kirchhoff's current law, we see

( Km0 + km(z, t) ) ∆z = gm ∆z ( Vm0 + vm(z, t) ) + cm ∆z ∂/∂t ( Vm0 + vm(z, t) )

Using Equation 19.1 and taking the indicated partial derivative, we simplify this to

km(z, t) ∆z = gm ∆z vm(z, t) + cm ∆z ∂vm/∂t

Upon canceling the common ∆z term, we find the fundamental identity

km(z, t) = gm vm(z, t) + cm ∂vm/∂t    (19.2)
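
Equation 19.2 is easy to explore numerically. If we hold the membrane potential uniform in space (a space clamp) and apply a constant current density km, Equation 19.2 becomes the ordinary differential equation cm dvm/dt = km − gm vm. Here is a minimal MatLab sketch using Euler steps; all the parameter values are assumptions chosen only for illustration.

% Sketch: response of the RC membrane model (19.2) to a step in km.
gm = 1.0e-4;                 % conductance per length, assumed
cm = 1.0e-6;                 % capacitance per length, assumed
km = 1.0e-5;                 % constant applied current density, assumed
dt = 1.0e-4; n = 500;
t  = (0:n-1)*dt;
vm = zeros(1, n);
for i = 1:n-1
  % explicit Euler step for cm vm' = km - gm vm
  vm(i+1) = vm(i) + dt*(km - gm*vm(i))/cm;
end
plot(t, vm);
% vm rises to km/gm = 0.1 with time constant cm/gm = 0.01 seconds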

Now the core conductor equation in terms of our general variables is

∂²Vm/∂z² = (ri + ro) Km − ro Ke

∂²/∂z² ( Vm0 + vm(z, t) ) = (ri + ro)( Km0 + km(z, t) ) − ro ( Ke0 + ke(z, t) )

Thus, in terms of transient variables, we have


∂²vm/∂z² = (ri + ro) Km0 − ro Ke0 + (ri + ro) km − ro ke

This leads to

∂²vm/∂z² − (ri + ro) km + ro ke = (ri + ro) Km0 − ro Ke0

Now at steady state, both sides of the above equation must be zero. This gives us the identities:

∂²vm/∂z² = (ri + ro) km − ro ke
(ri + ro) Km0 = ro Ke0

However, Equation 19.2, allows us to replace km by an equivalent relationship. We obtain

∂²vm/∂z² = (ri + ro)( gm vm + cm ∂vm/∂t ) − ro ke
(ri + ro) Km0 = ro Ke0

The Transient Cable Equation or just Cable Equation is then

∂²vm/∂z² − (ri + ro) cm ∂vm/∂t − (ri + ro) gm vm = −ro ke    (19.3)

19.2 The Space and Time Constant of a Cable:

The Cable Equation 19.3 can be further rewritten in terms of two new constants, the space
constant of the cable, λc , and the time constant, τm . Note, we can rewrite 19.3 as

( 1/((ri + ro) gm) ) ∂²vm/∂z² − (cm/gm) ∂vm/∂t − vm = −( ro/((ri + ro) gm) ) ke

Define the new constants

λc = √( 1/((ri + ro) gm) )
τm = cm/gm


Then

ro/((ri + ro) gm) = ro λc²

and the Cable Equation 19.3 can be written in a new form as

λc² ∂²vm/∂z² − τm ∂vm/∂t − vm = −ro λc² ke    (19.4)
∂z ∂t

The new constants τm and λc are very important to understanding how the solutions to this
equation will behave. We call τm the time constant and λc the space constant of our cable.

The Time Constant τm: Consider the ratio cm/gm. Note that the units of this ratio are farad-ohm. Recall that the charge deposited on a capacitor of value C farads is q = CV, where V is the voltage across the capacitor. Hence, a dimensional analysis shows us that coulombs equal farad-volts, so a farad is a coulomb per volt. But we also know from Ohm's law that voltage is current times resistance; hence, dimensional analysis tells us that a volt equals (coulomb/sec)·ohm. We conclude that farad-ohm equals (coulomb/volt) times ohm, and substituting volt = (coulomb/sec)·ohm, this simplifies to seconds. Hence the ratio cm/gm has units of seconds, so this ratio is a time variable. This is why we define this constant to be the Time Constant of the cable, τm. Note that τm is a constant whose value is independent of the size of the cell; hence it is a membrane property.
Also, let Gm be the conductance of a square centimeter of cell membrane; then Gm has units of 1/(ohm·cm²). The conductance of our ∆z box of membrane in our model was gm ∆z, and the total conductance of the box is also Gm times the surface area of the box, or Gm 2πa ∆z. Equating these expressions, we see that

gm = 2πa Gm

In a similar manner, if Cm is the membrane capacitance per unit area, we have

cm = 2πa Cm

We see the time constant can thus be expressed as Cm/Gm also.

The Space Constant λc: Consider the dimensional analysis of the term 1/((ri + ro) gm). The sum of the resistances per length has units ohm/cm and gm has units 1/(ohm·cm). Hence, the denominator of this fraction has units of (ohm/cm) times 1/(ohm·cm), or cm⁻². Hence, this ratio has units of cm².


This is why the square root of the ratio functions as a length parameter. Now in Chapter 17, we looked carefully at how to define the notion of resistance for a longitudinal flow. Applying this to the inner flow in our cable, we see

ri = ρi/(πa²)

Now in biological settings, we typically have that ri is very large compared to ro. Hence, the term ri + ro is nicely approximated by just ri. In this case, since gm is 2πa Gm, we see

λc = √( 1/(ri gm) ) = √( 1/( (ρi/(πa²)) 2πa Gm ) ) = √( a/(2 ρi Gm) )

Now ρi and Gm are material constants independent of cell geometry. So we see that the space constant is proportional to the square root of the fiber radius. Note also that the space constant decreases as the fiber radius shrinks.
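
We can get a feel for the size of these two constants with a quick MatLab computation. The values below are rough, assumed numbers of the right order of magnitude for an unmyelinated fiber; they are illustrations, not measurements.

% Sketch: space and time constants from assumed membrane values.
rho_i = 60.0;            % ohm-cm, assumed axoplasm resistivity
Gm    = 5.0e-4;          % conductance per cm^2, assumed
Cm    = 1.0e-6;          % farads per cm^2, assumed
a     = 10.0e-4;         % fiber radius: 10 microns expressed in cm
lambda_c = sqrt(a/(2.0*rho_i*Gm))   % about 0.13 cm
tau_m    = Cm/Gm                    % 0.002 seconds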

Chapter 20
Time Independent Solutions to the
Cable Equation

In Figure 20.1, we see a small piece of cable as described in Chapter 17. We are injecting current Ie into the cable via an external current source at z = 0. We assume the current that is injected is uniformly distributed around the cable membrane.

20.1 The Infinite Cable:

We will begin by assuming that the cable extends to infinity both to the right and to the left
of the current injection site. This is actually easier to handle mathematically, although you will
probably find it plenty challenging! The picture we see in Figure 20.1 is thus just a short piece
of this infinitely long cable. Now if a dendrite or axon was really long, this would probably not

Figure 20.1: Current Injection Into a Cable


be a bad model, so there is a lot of utility in examining this case as an approximation to reality.
We also assume the other end of the line that delivers the current Ie is attached some place so
far away it has no effect on our cable.
The time dependent cable equation is just the full cable equation 19.4 from Chapter 19. Recall
that ke is the external current per unit length. Hence, λc ke is a current which is measured in
amps. We also know that λc ro is a resistance measured in ohms. Hence, the product ro λc times
λc ke is a voltage as from Ohm’s law, resistance times current is voltage. Thus, the right hand
side of the cable equation is the voltage due to the current injection. On the left hand side, a
similar analysis shows that each term represents a voltage.

• λc² ∂²vm/∂z² is measured in cm² times volt/cm², or volts.

• τm ∂vm/∂t is measured in seconds times volt/second, or volts.

• vm is measured in volts.

Now if we were interested only in solutions that did not depend on time, then the term ∂vm/∂t would be zero. Also, we could write all of our variables as position dependent only; i.e. vm(z, t) as just
vm (z) and so on. In this case, the partial derivatives are not necessary and we obtain an ordinary
differential equation:

λc² d²vm/dz² − vm(z) = −ro λc² ke(z)    (20.1)

20.2 Solving the Time Independent Infinite Cable Equation:

Equation 20.1 as written does not impose any boundary or initial conditions on the solution.
Eventually, we will have to make a decision about these conditions, but for the moment, let’s
solve this general differential equation as it stands. The typical solution to such an equation is
written in two parts: the homogeneous part and the particular part. The homogeneous part or
solution is the function φh that solves

λc² d²vm/dz² − vm(z) = 0    (20.2)

This means that if we plug φh into 20.2, we would find

λc² d²φh/dz² − φh(z) = 0

The particular part or solution is any function φp that satisfies the full equation 20.1; i.e.


λc² d²φp/dz² − φp(z) = −ro λc² ke(z)

It is implied in the above discussion that φh and φp must be functions that have two derivatives for all values of z that are interesting. Since this first case concerns a cable of infinite length, this means here that φh and φp must be twice differentiable on the entire z axis. We can also clearly see that adding the homogeneous and particular parts together will always satisfy the full time independent cable equation. Let φ denote the general solution. Then

φ(z) = φh (z) + φp (z)

and φ will satisfy the time independent cable equation. If the external current ke is continuous in
z, then since the right hand side must equal the left hand side, the continuity of the right hand
side will force the left hand side to be continuous also. This will force the solution φ we seek to
be continuous in the second derivative. So usually we are looking for solutions that are very nice:
they are continuous in the second derivative. This means that there are no corners in the second
derivative of voltage.

20.2.1 Solving the Homogeneous Equation:

The standard way to solve the homogeneous equation is to assume the solution φh has the form
erz . Plugging this into the homogeneous equation and taking the needed derivatives, we find the
factored form

( λc² r² − 1 ) e^{rz} = 0

Since this equation must hold for all z and the exponential term is never zero, the term in parentheses must be zero. Thus

λc² r² − 1 = 0

This is called the characteristic or auxiliary equation of the homogeneous equation. This is a
simple quadratic equation which has the roots

r+ = 1/λc
r− = −1/λc


From what we have said above, we see that the functions e^{r+ z} and e^{r− z} are both homogeneous solutions in the sense we have mentioned. Thus, any combination of the form A1 e^{r+ z} + A2 e^{r− z}, for real numbers A1 and A2, will also work. Thus, the homogeneous solution we seek is

φh(z) = A1 e^{z/λc} + A2 e^{−z/λc}    (20.3)
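
As a quick sanity check, we can ask MatLab for the roots of the characteristic polynomial directly; the value of λc below is an arbitrary assumption.

% roots of lambda_c^2 r^2 - 1 = 0 for an assumed space constant
lambda_c = 0.13;
r = roots([lambda_c^2, 0.0, -1.0])   % returns 1/lambda_c and -1/lambda_c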

20.2.2 Solving the Non-homogeneous Equation:


Since we don’t know the explicit function ke we wish to use in the non-homogeneous equation, the
common technique to use to find the particular solution is the one called Variation of Parameters.
In this technique, we take the homogeneous solution and replace the constants A1 and A2 by
unknown functions U1 (z) and U2 (z). Then we see if we can derive conditions that the unknown
functions U1 and U2 must satisfy in order to work.
So we start by assuming

φp(z) = U1(z) e^{z/λc} + U2(z) e^{−z/λc}

Now, typing the term z/λc makes the equations we must present look quite cluttered. So, let’s
make the following notational assumption for convenience:

Assumption 20.2.1 (Notation Assumptions For The Cable Model Derivations).
We will let Z denote the fraction z/λc. Note this means when we take derivatives that

d/dz (z/λc) = dZ/dz = 1/λc
d/dz (−z/λc) = d/dz (−Z) = −1/λc

Using the chain and product rule for differentiation, the first derivative of φp thus gives:

dφp/dz = (dU1/dz) e^Z + (dU2/dz) e^{−Z} + U1(z) (1/λc) e^Z − U2(z) (1/λc) e^{−Z}

The theory of ordinary differential equations forces us to impose the first condition:


(dU1/dz) e^Z + (dU2/dz) e^{−Z} = 0

This simplifies the first derivative of φp to be

dφp/dz = U1(z) (1/λc) e^Z − U2(z) (1/λc) e^{−Z}

Now take the second derivative to get

d²φp/dz² = (dU1/dz)(1/λc) e^Z − (dU2/dz)(1/λc) e^{−Z} + U1(z)(1/λc²) e^Z + U2(z)(1/λc²) e^{−Z}

Now plug these derivative expressions into the non-homogeneous equation to find


−ro λc² ke(z) = λc² [ (dU1/dz)(1/λc) e^Z − (dU2/dz)(1/λc) e^{−Z} + U1(z)(1/λc²) e^Z + U2(z)(1/λc²) e^{−Z} ] − [ U1(z) e^Z + U2(z) e^{−Z} ]
Now

 
0 = λc² [ U1(z)(1/λc²) e^Z + U2(z)(1/λc²) e^{−Z} ] − [ U1(z) e^Z + U2(z) e^{−Z} ]

so all of this reduces to

(dU1/dz)(1/λc) e^Z − (dU2/dz)(1/λc) e^{−Z} = −ro ke(z)

This gives us a second condition on the unknown functions U1 and U2 . Combining we have

(dU1/dz) e^Z + (dU2/dz) e^{−Z} = 0
(dU1/dz)(1/λc) e^Z − (dU2/dz)(1/λc) e^{−Z} = −ro ke(z)


This can be rewritten in matrix form:

[ e^Z           e^{−Z}         ] [ U1′ ]   [ 0         ]
[ (1/λc) e^Z   −(1/λc) e^{−Z}  ] [ U2′ ] = [ −ro ke(z) ]

where, for simplicity of exposition, we just write U1′ and U2′ for the derivatives dU1/dz and dU2/dz. We then use Cramer's Rule to solve for the unknown functions U1′ and U2′. Let W denote the coefficient matrix

W = [ e^Z           e^{−Z}        ]
    [ (1/λc) e^Z   −(1/λc) e^{−Z} ]

Then the determinant of W is det(W) = −2/λc and by Cramer's Rule

U1′(z) = (det(W))⁻¹ det [ 0  e^{−Z} ; −ro ke  −(1/λc) e^{−Z} ]
       = ( ro ke e^{−Z} )/( −2/λc )
       = −(ro λc/2) ke e^{−Z}

and

U2′(z) = (det(W))⁻¹ det [ e^Z  0 ; (1/λc) e^Z  −ro ke ]
       = ( −ro ke e^Z )/( −2/λc )
       = (ro λc/2) ke e^Z

Thus, integrating,

U1(z) = −(ro λc/2) ∫_0^z ke(u) e^{−u/λc} du
U2(z) = (ro λc/2) ∫_0^z ke(u) e^{u/λc} du

where 0 is a convenient starting point for our integration. Hence, the particular solution to the
non-homogeneous time independent infinite cable equation is


φp(z) = U1(z) e^Z + U2(z) e^{−Z}
      = (ro λc/2) ( ∫_0^z ke(u) e^{u/λc} du ) e^{−Z} − (ro λc/2) ( ∫_0^z ke(u) e^{−u/λc} du ) e^Z.

The general solution is thus

φ(z) = A1 e^Z + A2 e^{−Z} − (ro λc/2) e^Z ∫_0^z ke(s) e^{−s/λc} ds + (ro λc/2) e^{−Z} ∫_0^z ke(s) e^{s/λc} ds

for any real constants A1 and A2 .
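
Even before choosing A1 and A2, we can evaluate the particular part of this solution numerically by approximating the two integrals with the trapezoidal rule. The sketch below assumes illustrative values for ro and λc, a sample smooth current density ke, and sets A1 = A2 = 0; it is a check of the formula, not a physiological model.

% Sketch: evaluating the variation of parameters solution with trapz.
ro = 0.5; lambda_c = 0.13;             % assumed values
zv  = linspace(-0.5, 0.5, 101);
phi = zeros(size(zv));
for k = 1:length(zv)
  s  = linspace(0, zv(k), 200);        % quadrature nodes from 0 to z
  ke = exp(-(s/0.05).^2);              % assumed smooth current density
  U1 = -(ro*lambda_c/2)*trapz(s, ke.*exp(-s/lambda_c));
  U2 =  (ro*lambda_c/2)*trapz(s, ke.*exp( s/lambda_c));
  phi(k) = U1*exp(zv(k)/lambda_c) + U2*exp(-zv(k)/lambda_c);
end
plot(zv, phi);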

20.3 Modeling Current Injections:


We are interested in understanding what the membrane voltage solution should look like in the
event of a current injection at say z = 0 for a very short period of time. We could then use this
idealized solution to understand the response of the cable model to current injections occurring
over very brief time periods of various magnitudes at various spatial locations. Now let’s specialize
as follows: we assume

• The current injection ke is symmetric about 0.

• The current ke (z) is zero on (−∞, −C) ∪ (C, ∞) for some nonzero C.

• The current smoothly varies to zero at C and −C; i.e. ke is at least differentiable on the z
axis.
• The area under the curve, which is the current applied to the membrane, ∫_{−C}^{C} ke(u) du, is I.

Given this type of current injection, we see we are looking for a solution to the problem

λc² d²vm/dz² − vm = 0,             z < −C
λc² d²vm/dz² − vm = −ro λc² ke,    −C ≤ z ≤ C
λc² d²vm/dz² − vm = 0,             C < z

This amounts to solving three differential equations and then recognizing that the total solution is
the sum of the three solutions with the condition that the full solution is smooth, i. e. continuous,
at the points −C and C where the solutions must connect. Now we know that membrane voltages
are finite. In the first and third region we seek a solution, we are simply solving the homogeneous
equation. We know then the solution is of the form AeZ +Be−Z in both of these regions. However,
in the (−∞, −C) region, the finiteness of the potential means that the AeZ solution is the only one
possible and in the (C, ∞) region, the only solution is therefore of the form Be−Z . In the middle


region, the solution is given by the general solution we found from the Variation of Parameters
method. Thus, for each choice of positive C, we seek a solution of the form

φC(z) = A e^Z,    z < −C

φC(z) = A1 e^Z + A2 e^{−Z} − (ro λc/2) e^Z ∫_0^z ke(s) e^{−s/λc} ds + (ro λc/2) e^{−Z} ∫_0^z ke(s) e^{s/λc} ds,    −C ≤ z ≤ C

φC(z) = B e^{−Z},    C < z

Our solution and its derivative should be continuous at −C and C. Between −C and C, we
compute the derivative of the solution to be

φC′(z) = (A1/λc) e^Z − (A2/λc) e^{−Z}
  − (ro λc/2) ke(z) e^{−Z} e^Z − (ro/2) e^Z ∫_0^z ke(s) e^{−s/λc} ds
  + (ro λc/2) ke(z) e^Z e^{−Z} − (ro/2) e^{−Z} ∫_0^z ke(s) e^{s/λc} ds

This simplifies to

φC′(z) = (A1/λc) e^Z − (A2/λc) e^{−Z} − (ro/2) e^Z ∫_0^z ke(s) e^{−s/λc} ds − (ro/2) e^{−Z} ∫_0^z ke(s) e^{s/λc} ds    (20.4)
2 0

Continuity of the solution at the point −C gives:

A e^{−C/λc} = A1 e^{−C/λc} + A2 e^{C/λc} − (ro λc/2) e^{−C/λc} ∫_0^{−C} ke(s) e^{−s/λc} ds + (ro λc/2) e^{C/λc} ∫_0^{−C} ke(s) e^{s/λc} ds

This can be rewritten as

A e^{−C/λc} = A1 e^{−C/λc} + A2 e^{C/λc} + (ro λc/2) e^{−C/λc} ∫_{−C}^0 ke(s) e^{−s/λc} ds − (ro λc/2) e^{C/λc} ∫_{−C}^0 ke(s) e^{s/λc} ds.

Then, the continuity condition at C gives

B e^{−C/λc} = A1 e^{C/λc} + A2 e^{−C/λc} − (ro λc/2) e^{C/λc} ∫_0^C ke(s) e^{−s/λc} ds + (ro λc/2) e^{−C/λc} ∫_0^C ke(s) e^{s/λc} ds.

Continuity of the derivative at −C gives:

(A/λc) e^{−C/λc} = (A1/λc) e^{−C/λc} − (A2/λc) e^{C/λc} − (ro/2) e^{−C/λc} ∫_0^{−C} ke(s) e^{−s/λc} ds − (ro/2) e^{C/λc} ∫_0^{−C} ke(s) e^{s/λc} ds

This simplifies to

(A/λc) e^{−C/λc} = (A1/λc) e^{−C/λc} − (A2/λc) e^{C/λc} + (ro/2) e^{−C/λc} ∫_{−C}^0 ke(s) e^{−s/λc} ds + (ro/2) e^{C/λc} ∫_{−C}^0 ke(s) e^{s/λc} ds.
2 −C

Finally, the continuity condition on the derivative at the point C gives

−(B/λc) e^{−C/λc} = (A1/λc) e^{C/λc} − (A2/λc) e^{−C/λc} − (ro/2) e^{C/λc} ∫_0^C ke(s) e^{−s/λc} ds − (ro/2) e^{−C/λc} ∫_0^C ke(s) e^{s/λc} ds
2 0

To simplify the exposition, define

J1+ = ∫_0^C ke(s) e^{−s/λc} ds
J1− = ∫_{−C}^0 ke(s) e^{−s/λc} ds
J2+ = ∫_0^C ke(s) e^{s/λc} ds
J2− = ∫_{−C}^0 ke(s) e^{s/λc} ds

and

w = C/λc
M = ro λc/2.

We can then rewrite our continuity conditions as

A e^{−w} = A1 e^{−w} + A2 e^{w} + M e^{−w} J1− − M e^{w} J2−
B e^{−w} = A1 e^{w} + A2 e^{−w} − M e^{w} J1+ + M e^{−w} J2+
(A/λc) e^{−w} = (A1/λc) e^{−w} − (A2/λc) e^{w} + (ro/2) e^{−w} J1− + (ro/2) e^{w} J2−
−(B/λc) e^{−w} = (A1/λc) e^{w} − (A2/λc) e^{−w} − (ro/2) e^{w} J1+ − (ro/2) e^{−w} J2+

Multiplying the third and fourth equations by λc, we find

A e^{−w} = A1 e^{−w} + A2 e^{w} + M e^{−w} J1− − M e^{w} J2−
B e^{−w} = A1 e^{w} + A2 e^{−w} − M e^{w} J1+ + M e^{−w} J2+
A e^{−w} = A1 e^{−w} − A2 e^{w} + M e^{−w} J1− + M e^{w} J2−
−B e^{−w} = A1 e^{w} − A2 e^{−w} − M e^{w} J1+ − M e^{−w} J2+

This gives us the equations:

(A − A1 − M J1−) e^{−w} + (−A2 + M J2−) e^{w} = 0    (20.5)
(B − A2 − M J2+) e^{−w} + (−A1 + M J1+) e^{w} = 0    (20.6)
(A − A1 − M J1−) e^{−w} + (A2 − M J2−) e^{w} = 0    (20.7)
(−B + A2 + M J2+) e^{−w} + (−A1 + M J1+) e^{w} = 0    (20.8)

Computing the four new equations (20.5) − (20.7), (20.5) + (20.7), (20.6) − (20.8) and (20.6) + (20.8), we find

(−2A2 + 2M J2−) e^{w} = 0
(2A − 2A1 − 2M J1−) e^{−w} = 0
(2B − 2A2 − 2M J2+) e^{−w} = 0
(−2A1 + 2M J1+) e^{w} = 0

Since the exponentials can never be zero, we have

−2A2 + 2M J2− = 0
2A − 2A1 − 2M J1− = 0
2B − 2A2 − 2M J2+ = 0
−2A1 + 2M J1+ = 0

Hence, the solution we seek is

A2 = M J2−
A1 = M J1+
A = A1 + M J1− = M (J1+ + J1−)
B = A2 + M J2+ = M (J2+ + J2−)

Then the solution for this sort of current injection is thus

φC(z) = (ro λc/2)(J1+ + J1−) e^Z,    z < −C

φC(z) = (ro λc/2)( J2− e^{−Z} + J1+ e^Z ) − (ro λc/2) e^Z ∫_0^z ke(s) e^{−s/λc} ds + (ro λc/2) e^{−Z} ∫_0^z ke(s) e^{s/λc} ds,    −C ≤ z ≤ C

φC(z) = (ro λc/2)(J2+ + J2−) e^{−Z},    C < z.

20.4 Modeling Instantaneous Current Injections:


We can perform the analysis we did in the previous section for any pulse of that form. What happens as the parameter C approaches 0? As before, we look at pulses which have these properties for each positive C:

• The current injection keC is symmetric about 0.

• The current keC(z) is zero on (−∞, −C) ∪ (C, ∞).

• The current smoothly varies to zero at ±C; i.e. keC is at least differentiable on the z axis.

• The area under the curve, which is the current applied to the membrane, ∫_{−C}^{C} keC(s) ds, is I for all C. This means that as the width of the symmetric pulse goes to zero, the height of the pulse goes to infinity but in a controlled way: the area under the pulses is always the same number I. So no matter what the base of the pulse, the same amount of current is delivered to the membrane.

We let φC denote our solution:

φC(z) = (ro λc/2)(J1C+ + J1C−) e^Z,    z < −C

φC(z) = (ro λc/2)( J2C− e^{−Z} + J1C+ e^Z ) − (ro λc/2) e^Z ∫_0^z keC(s) e^{−s/λc} ds + (ro λc/2) e^{−Z} ∫_0^z keC(s) e^{s/λc} ds,    −C ≤ z ≤ C

φC(z) = (ro λc/2)(J2C+ + J2C−) e^{−Z},    C < z,

where

J1C+ = ∫_0^C keC(s) e^{−s/λc} ds
J1C− = ∫_{−C}^0 keC(s) e^{−s/λc} ds
J2C+ = ∫_0^C keC(s) e^{s/λc} ds
J2C− = ∫_{−C}^0 keC(s) e^{s/λc} ds

Now we have to look at the limit as C goes to 0.

20.4.1 What Happens Away from 0?

Pick a point z0 that is not zero. Since z0 is not zero, there is a positive number C* so that z0 is not inside the interval (−C*, C*). This means that z0 is outside of (−C, C) for all C smaller than C*. Hence, either the first part or the third part of the definition of φC applies. We will argue the case where z0 is positive, which implies that the third part of the definition is the one that is applicable. The case where z0 is negative would be handled in a similar fashion.
Since we assume that z0 is positive, we have for all C smaller than C*:

φC(z) = (ro λc/2)( J2C+ + J2C− ) e^{−Z}

Next, we can prove that as C goes to 0, we obtain


Lemma 20.4.1 (Limiting Current Densities).

lim_{C→0} (ro/2) J1C+ = ro I/4
lim_{C→0} (ro/2) J1C− = ro I/4
lim_{C→0} (ro/2) J2C+ = ro I/4
lim_{C→0} (ro/2) J2C− = ro I/4

Proof 20.4.1. It suffices to show this for just one of these four cases. We will show the first one. In mathematics, if the limit as C goes to 0 of (ro/2) J1C+ exists and equals ro I/4, this means that given any positive number ε, we can find a C** so that

| (ro/2) J1C+ − ro I/4 | < ε

whenever C < C**. We will show how to do this argument below. Consider

| (ro/2) J1C+ − ro I/4 | = | (ro/2) ∫_0^C keC(s) e^{−s/λc} ds − (ro/2)(I/2) |

However, we know that the area under the curve is always I; hence for any C,

I = ∫_{−C}^{C} keC(s) ds

Since our pulse is symmetric, this implies that

I/2 = ∫_0^C keC(s) ds

Substituting into our original expression, we find

| (ro/2) J1C+ − ro I/4 | = (ro/2) | ∫_0^C keC(s) e^{−s/λc} ds − ∫_0^C keC(s) ds |
                         ≤ (ro/2) ∫_0^C keC(s) | e^{−s/λc} − 1 | ds


Now we know that this exponential function is continuous at 0; hence, given 4ε/(ro I), there is a δ so that

| e^{−s/λc} − 1 | < 4ε/(ro I)    if |s| < δ

Since C goes to zero, we see that there is a positive number C**, which we can choose smaller than our original C*, so that C** < δ. Thus, if we integrate over [0, C] for any C < C**, all the s values inside the integral are less than δ. So we can conclude

| (ro/2) J1C+ − ro I/4 | < (ro/2) ∫_0^C keC(s) ( 4ε/(ro I) ) ds = ε.

This shows that the limit as C goes to 0 of (ro/2) J1C+ is ro I/4. □
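
We can also watch this limit happen numerically. The sketch below uses a family of smooth cosine bump pulses of total area I; this particular pulse family, and the values of I, λc and ro, are assumptions chosen only to illustrate the lemma.

% Sketch: (ro/2) J1C+ should approach ro I/4 as C shrinks.
I = 1.0; lambda_c = 0.13; ro = 0.5;    % assumed values
for C = [0.4 0.2 0.1 0.05 0.025]
  s  = linspace(0, C, 2000);
  % cosine bump (I/(2C))(1 + cos(pi s/C)) has area I/2 on [0,C]
  ke = (I/(2.0*C))*(1.0 + cos(pi*s/C));
  J1plus = trapz(s, ke.*exp(-s/lambda_c));
  disp([C, (ro/2.0)*J1plus, ro*I/4.0])
end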

Hence, for z0 positive, we find

lim_{C→0} φC(z) = lim_{C→0} (ro λc/2)( J2C+ + J2C− ) e^{−z0/λc}
                = λc lim_{C→0} ( (ro/2) J2C+ + (ro/2) J2C− ) e^{−z0/λc}
                = (ro λc I/2) e^{−z0/λc}

In a similar fashion, for z0 negative, we find

lim_{C→0} φC(z) = lim_{C→0} (ro λc/2)( J1C+ + J1C− ) e^{z0/λc}
                = (ro λc I/2) e^{z0/λc}

20.4.2 What Happens at Zero?

When we are at zero, since 0 is in (−C, C) for all C, we must use the middle part of the definition of φC. Now at 0, the two integral terms vanish and we see

φC(0) = (ro λc/2)( J2C− e^{−0} + J1C+ e^{0} ) = (ro λc/2)( J2C− + J1C+ ).

From Lemma 20.4.1, we then find that

lim_{C→0} φC(0) = ro λc I/4 + ro λc I/4 = ro λc I/2

Combining all of the above parts, we see we have shown that as C goes to 0, the solutions φC converge pointwise to the limiting solution φ defined by

φ(z) = (ro λc/2) I e^{−|Z|}

or in terms of the original space variable z

φ(z) = (ro λc/2) I e^{−|z|/λc}

Now if the symmetric pulse sequence was centered at z0 with pulse width C, we can do a similar analysis (it is messy and tedious though!) to show that the limiting solution would be

φ(z) = (ro λc/2) I e^{−|z−z0|/λc}.

20.5 Idealized Impulse Currents

Now as C gets small, our symmetric pulses have a very narrow base but very large magnitudes. Clearly, these pulses are not defined in the limit as C goes to 0, as the limit process leads to a current Iδ(z) which is the limit of the base pulses we have described:

Iδ(z) = lim_{C→0} keC(z).

Note, in the limit as C gets small, we can think of this process as delivering an instantaneous current value. This "idealized" current thus delivers a fixed amount I at z = 0 instantly! Effectively, this means that the value of the current density keeps getting larger while its spatial window gets smaller. A good example (though not quite right, as this example is not continuous at ±1/n) is a current density of size nI/2 on the spatial window [−1/n, 1/n]. For any n, the current that is applied is the product of the current density and the width of the window over which it is applied: (nI/2) × (2/n) = I. From a certain point of view, we can view the limiting current density as depositing an "infinite" amount of current in zero time: crazy, but easy to remember! Hence, we can use the definition


below as a short hand reminder of what is going on:

Iδ(z) = 0,  z ≠ 0
Iδ(z) = ∞,  z = 0

In fact, since the total current deposited over the whole cable is I, and integration can be interpreted as giving the area under a curve, we can use the integration symbol ∫_{−∞}^{∞} to denote that the total current applied over the cable is always I and write

∫_{−∞}^{∞} Iδ(z) dz = I.

Again, this is simply a mnemonic to help us remember complicated limiting current density behavior! Again, think of Iδ as an amount of current I which is delivered instantaneously at z = 0. Of course, the only way to really understand this idea is to do what we have done and consider a sequence of constant current density pulses keC.
Summarizing, it is convenient to think of a unit idealized impulse, called the δ function, defined by

δ(z − 0) = 0,  z ≠ 0
δ(z − 0) = ∞,  z = 0

∫_{−∞}^{∞} δ(u − 0) du = 1

Then using this notation, we see our idealized current Iδ can be written as

Iδ(z − 0) = I δ(z − 0).

Moreover, if an idealized pulse is applied at z0 rather than 0, we can abuse this notation to define

δ(z − z0) = 0,  z ≠ z0
δ(z − z0) = ∞,  z = z0

∫_{−∞}^{∞} δ(u − z0) du = 1

and hence I δ(z − z0) is an idealized pulse applied at z0. Note that the notion of an idealized pulse allows us to write the infinite cable model for an idealized pulse applied at z0 of magnitude I in a compact form. We let the applied current density be ke(z) = I δ(z − z0), giving us the differential equation

λc² d²vm/dz² − vm(z) = −ro λc² I δ(z − z0)


which we know has solution

φ(z) = (ro λc/2) I e^{−|z−z0|/λc}
2

In all of our analysis, we use a linear ordinary differential equation as our model. Hence, the linear superposition principle applies, and for N idealized pulses applied at differing centers and with different magnitudes, the underlying differential equation is

λc² d²vm/dz² − vm(z) = −ro λc² Σ_{i=0}^{N} Ii δ(z − zi)

with solution

φ(z) = (ro λc/2) Σ_{i=0}^{N} Ii e^{−|z−zi|/λc}

Note that the membrane voltage solution is a linear superposition of the applied idealized current injections.
For example, let's inject currents of magnitude I at 0 and 2I at λc, respectively. The solution to the current injection at 0 is vm1 and the solution to the injection at λc is vm2. Thus,

vm1(z) = (ro λc/2) I e^{−|z|/λc}
vm2(z) = (ro λc/2) 2I e^{−|z−λc|/λc}

We get a lot of information by looking at these two solutions in terms of units of the space constant λc. If you look at Figure 20.2, you'll see the two membrane voltages plotted together on the same axes. Now the actual membrane voltage is, of course, vm1 added to vm2, which we don't show. However, you can clearly see how quickly the peak voltages fall off as you move away from the injection site.
Note that one space constant away from an injection site z0, the voltage falls by a factor of 1/e, to about 37% of its peak value. This is familiar exponential decay behavior.
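
A minimal MatLab fragment that generates a plot like Figure 20.2 is given below; the values of ro, λc and I are arbitrary assumptions, since only the shape of the curves matters here.

% Sketch: the two impulse responses of the example above.
ro = 0.5; lambda_c = 0.13; I = 1.0;    % assumed values
z   = linspace(-4*lambda_c, 4*lambda_c, 401);
vm1 = (ro*lambda_c*I/2)*exp(-abs(z)/lambda_c);
vm2 = (ro*lambda_c*(2*I)/2)*exp(-abs(z - lambda_c)/lambda_c);
plot(z/lambda_c, vm1, 'r-', z/lambda_c, vm2, 'b-', ...
     z/lambda_c, vm1 + vm2, 'k-');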

20.6 The Inner and Outer Current Solutions:


Recall that the membrane current density km satisfies

km(z, t) = gm vm(z, t) + cm ∂vm/∂t


Figure 20.2: Voltages Due to Two Current Impulses

Here, we are interested in time independent solutions, so we have the simpler equation

km (z) = gm vm (z)

Thus, we have for an idealized current impulse injection at 0

km(z) = gm (ro λc/2) I e^{−|z|/λc}
2

From our core conductor equations, we know that

∂Ii/∂z = −Km(z)
∂z

and thus, using our transient variables

Io(z) = Io0 + io(z)
Ii(z) = Ii0 + ii(z)
Km(z) = Km0 + km(z)

we see that


∂ii/∂z = −Km0 − km(z)

Integrating, we see

ii(z) = −∫_{−∞}^{z} ( Km0 + km(u) ) du

We expect that the internal current is finite and this implies that the rest current density Km0 must be zero, as otherwise we would obtain unbounded current. We thus have

ii(z) − ii(−∞) = −∫_{−∞}^{z} km(u) du = −∫_{−∞}^{z} gm vm(u) du

= −gm (ro λc/2) I ∫_{−∞}^{z} e^{u/λc} du,    z < 0

= −gm (ro λc/2) I ( ∫_{−∞}^{0} e^{u/λc} du + ∫_0^{z} e^{−u/λc} du ),    z ≥ 0

Also, for an impulse current applied at zero, we would expect that the inner current vanishes at both ends of the infinite cable. Hence, ii(−∞) must be zero. We conclude

ii(z) = −gm (ro λc/2) I λc e^{Z},    z < 0
ii(z) = −gm (ro λc/2) I λc ( 2 − e^{−Z} ),    z ≥ 0

Using the definition of the space constant λc, we note the identity

ro λc² gm = ro/(ri + ro)

allowing us to rewrite the inner current solution as

ii(z) = −( ro I/(2(ri + ro)) ) e^{Z},    z < 0
ii(z) = −( ro I/(2(ri + ro)) ) ( 2 − e^{−Z} ),    z ≥ 0

Now, from Chapter 18, Equations 18.1 and 18.2, we know that in our time independent case

∂Ii/∂z = −Km(z)
∂Io/∂z = Km(z) − Ke(z)


which implies

∂ii/∂z + ∂io/∂z = −Ke(z) = −Ke0 − ke(z)

Integrating, we have

∫_{−∞}^{z} (∂ii/∂z) du + ∫_{−∞}^{z} (∂io/∂z) du = −∫_{−∞}^{z} ( Ke0 + ke(u) ) du

In order for this integration to give us finite currents, we see the constant Ke0 must be zero, implying

ii(z) − ii(−∞) + io(z) − io(−∞) = −∫_{−∞}^{z} ke(u) du

We already know that ii is zero at z = −∞ for our idealized current impulse ke = Iδ(u). Further, we know from our lengthy analysis of the sequence of current pulses keC of constant area I that integrating from −∞ to z gives I if z > 0 and 0 if z ≤ 0. Hence, the inner and outer transient currents for an idealized pulse must satisfy

ii(z) + io(z) − io(−∞) = −I,    z > 0
ii(z) + io(z) − io(−∞) = 0,     z ≤ 0

The argument to see this can be sketched as follows. If z is positive, for small enough C, the impulse current keC is active on the interval [−C, C] with C less than z. For such values of C, the integral becomes

∫_{−C}^{C} keC(u) du = I

On the other hand, if z is negative, eventually the interval where keC is nonzero lies outside the region of integration and so the value of the integral must be zero. Physically, since we are using an idealized injected current, we expect that the outer current satisfies io(−∞) = 0, giving

ii(z) + io(z) = −I,    z > 0
ii(z) + io(z) = 0,     z ≤ 0


We can rewrite our current solutions more compactly by defining the signum function sgn and the unit step function u as follows:

sgn(z) = −1,  z < 0
sgn(z) = +1,  z ≥ 0

u(z) = 0,  z < 0
u(z) = 1,  z ≥ 0

Then we have

ii(z) = ( ro I/(2(ri + ro)) ) ( e^{−|z|/λc} sgn(z) − 2 u(z) )

Next, we can solve for io to obtain

io(z) = −I u(z) − ii(z)
      = −I u(z) − ( ro I/(2(ri + ro)) ) ( e^{−|z|/λc} sgn(z) − 2 u(z) )
      = −u(z) ( 1 − ro/(ri + ro) ) I − ( ro I/(2(ri + ro)) ) e^{−|z|/λc} sgn(z)
      = −u(z) ( ri/(ri + ro) ) I − ( ro I/(2(ri + ro)) ) e^{−|z|/λc} sgn(z)

This further simplifies to the final forms

io(z) = −( ro I/(2(ri + ro)) ) ( e^{−|z|/λc} sgn(z) + 2 (ri/ro) u(z) )
      = −( ro λc² gm I/2 ) ( e^{−|z|/λc} sgn(z) + 2 (ri/ro) u(z) )

20.7 The Inner and Outer Voltage Solutions:

From Chapter 18, Equation 18.3 and Equation 18.4, we see that for our time independent case

∂Vi/∂z = −ri Ii(z)
∂Vo/∂z = −ro Io(z)

Rewriting in terms of transient variables, we have


∂vi/∂z = −ri [ Ii0 + ii(z) ]
∂vo/∂z = −ro [ Io0 + io(z) ]
∂z

We expect our voltages to remain bounded and so we must conclude that Ii0 and Io0 are zero,
giving

vi(z) − vi(−∞) = −ri ∫_{−∞}^{z} ii(u) du
vo(z) − vo(−∞) = −ro ∫_{−∞}^{z} io(u) du

To finish this step of our work, we must perform these messy integrations. They are not hard, but they are messy! After a bit of arithmetic, we find

vi(z) − vi(−∞) = ( ri ro I λc/(2(ri + ro)) ) ( e^{−|z|/λc} + 2 (z/λc) u(z) )
vo(z) − vo(−∞) = ( ro² I λc/(2(ri + ro)) ) ( −e^{−|z|/λc} + 2 (ri/ro)(z/λc) u(z) )

Finally, note that after a bit of algebraic magic

vi(z) − vo(z) = vi(−∞) − vo(−∞) + ( ro I λc/2 ) e^{−|z|/λc}

Recall that vm is precisely the last term in the equation above; hence we have

vi (z) − vo (z) = vi (−∞) − vo (−∞) + vm (z)

The usual convention is that the voltages at infinity vanish; hence vi (−∞) and vo (−∞) are zero
and we have the membrane voltage solution we expect:

vi (z) − vo (z) = vm (z)

20.8 Summarizing The Infinite Cable Solutions:

We have shown that the solutions here are

vm(z) = ( ro λc I/2 ) e^{−|z|/λc}
ii(z) = ( ro λc² gm I/2 ) ( e^{−|z|/λc} sgn(z) − 2 u(z) )
io(z) = −( ro λc² gm I/2 ) ( e^{−|z|/λc} sgn(z) + 2 (ri/ro) u(z) )
vi(z) = ( ri ro λc³ gm I/2 ) ( e^{−|z|/λc} + 2 (z/λc) u(z) )
vo(z) = ( ro² λc³ gm I/2 ) ( −e^{−|z|/λc} + 2 (ri/ro)(z/λc) u(z) )
vm(z) = vi(z) − vo(z)

where we assume vi and vo are zero at negative infinity. We can also write these in a normalized form by noting that the parameter α = ri/ro defined below can be used to modify the equations above into a dimensionless form.

Since λc² can be expressed in terms of ri and ro, we have also shown that

vm(z) = ( ro λc I/2 ) e^{−|z|/λc}
ii(z) = ( ro I/(2(ri + ro)) ) ( e^{−|z|/λc} sgn(z) − 2 u(z) )
io(z) = −( ro I/(2(ri + ro)) ) ( e^{−|z|/λc} sgn(z) + 2 (ri/ro) u(z) )
vi(z) = ( ri ro λc I/(2(ri + ro)) ) ( e^{−|z|/λc} + 2 (z/λc) u(z) )
vo(z) = ( ro² λc I/(2(ri + ro)) ) ( −e^{−|z|/λc} + 2 (ri/ro)(z/λc) u(z) )

20.9 Normalized Solutions:

Then we can normalize (and therefore remove most of the interesting physical content!) with the change of variables:

α = ri/ro
λ = z/λc
vm* = ( 2/(ro λc I) ) vm
vi* = ( 2/(ro λc I) ) vi
vo* = ( 2/(ro λc I) ) vo
ii* = ii/I
io* = io/I

Figure 20.3: Normalized Current Solutions

which leads to the solutions you see in Figure 20.3.


With this change of variables, the solutions to the cable equation can be written:


vm*(λ) = e^{−|λ|}
ii*(λ) = ( 1/(2(α + 1)) ) ( e^{−|λ|} sgn(λ) − 2 u(λ) )
io*(λ) = −( 1/(2(α + 1)) ) ( e^{−|λ|} sgn(λ) + 2 α u(λ) )
vi*(λ) = ( α/(α + 1) ) ( e^{−|λ|} + 2 λ u(λ) )
vo*(λ) = ( 1/(α + 1) ) ( −e^{−|λ|} + 2 α λ u(λ) )


The behavior of these normalized solutions at 0 is interesting. Note

lim_{λ→0+} ii* = −1/(2(α + 1))
lim_{λ→0−} ii* = −1/(2(α + 1))
lim_{λ→0+} io* = −(2α + 1)/(2(α + 1))
lim_{λ→0−} io* = 1/(2(α + 1))

and the asymptotic values at ∞ and −∞ are given by

lim_{λ→∞} ii* = −1/(α + 1)
lim_{λ→−∞} ii* = 0
lim_{λ→∞} io* = −α/(α + 1)
lim_{λ→−∞} io* = 0

From this, we see that io* is not continuous at λ = 0. Also, note that the common value of ii* at λ = 0 (call this B) and the right hand limit of io* at zero (call this A) satisfy

B = −1/(2(α + 1))
A = −(2α + 1)/(2(α + 1)) = −1/(2(α + 1)) − 2α/(2(α + 1)) = B − α/(α + 1)

Since α is non-negative, this tells us that we should draw A below B in Figure 20.3.

20.10 Some MatLab Implementations:


We can implement the sgn or signum function in MatLab as follows:

Listing 20.1: MySignum.m


function t = MySignum(arg)
%
%  arg is a vector of real numbers and
%  MySignum(arg) returns 1 for each component that is >= 0 and -1 else.
%
n = length(arg);
t = zeros(1,n);
for i = 1:n
  t(i) = 1;
  if arg(i) < 0
    t(i) = -1.0;
  end
end

The unit step function u is listed below:

Listing 20.2: MyStep.m

function t = MyStep(arg)
%
%  arg is a vector of real numbers and
%  MyStep(arg) returns 1 for each component that is >= 0 and 0 else.
%
n = length(arg);
t = zeros(1,n);
for i = 1:n
  t(i) = 1;
  if arg(i) < 0
    t(i) = 0.0;
  end
end

We can then define the normalized inside current using these two files:

Listing 20.3: InSideCurrent.m

function t = InSideCurrent(alpha, lambda)
%
%  compute the normalized current inside the fiber
%
n = length(lambda);
t = zeros(1,n);
for i = 1:n
  t(i) = (exp(-1.0*abs(lambda(i)))*MySignum(lambda(i)) ...
          - 2.0*MyStep(lambda(i)))/ ...
         (2.0*(alpha + 1.0));
end


You might wonder why we made the signum and unit step function so complicated looking.
A first try at the signum function, step function and inner current function might be

Listing 20.4: NaiveStep.m

function t = MySignum(arg)
%
t = 1;
if arg < 0
  t = -1;
end

function t = MyStep(arg)
%
t = 1.0;
if arg < 0
  t = 0.0;
end

function t = InSideCurrent(alpha, lambda)
%
t = (exp(-1.0*abs(lambda))*MySignum(lambda) - 2.0*MyStep(lambda)) ...
    /(2.0*(alpha + 1.0));

This seems fine, but when we want to plot, we must use the lines

Listing 20.5: naivefrag1.m

>> lambda = linspace(-6, 6, 200);
>> I0 = InSideCurrent(0.5, lambda);

and you'll see the plot is wrong. The reason is that when we send a vector lambda into InSideCurrent, a vector is sent into the MyStep and MySignum functions. The conditional if arg in the MySignum function does not behave the way we want it to in this case. What we want is for the vector lambda to be used in the inequality check to create a new vector t whose value t(i) is either 1 or −1 depending on the value of lambda(i).
However, what happens is that the inequality check is applied to the vector as a whole: in Matlab, an if applied to a vector is true only when every component satisfies the test. Since lambda contains both negative and non-negative entries, the test fails and the scalar value returned from MySignum is always 1. A similar thing happens in MyStep and the value returned from MyStep is always 1. Hence the inner current value is wrong and we must handle the conditional differently. We do this by introducing the vector character of the space argument lambda directly. Now the inner current plots are correct.
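
As an aside, once we think in terms of vectors, the loops in MySignum and MyStep can each be collapsed to a single vectorized line using a logical comparison. This is an alternative style, not a change in behavior:

% vectorized alternatives to the loops above
t = 2.0*(arg >= 0) - 1.0;    % signum: +1 where arg >= 0, -1 else
t = double(arg >= 0);        % unit step: 1 where arg >= 0, 0 else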

20.10.1 Runtime Results:


To use our functions to generate some plots is very easy. Here is a code fragment to do that:


Figure 20.4: Inner Currents

Listing 20.6: Plotting Cable Currents

>> path(path, '/local/petersj/BioInfo/Cable');
>> lambda = linspace(-6, 6, 200);
>> I0 = InSideCurrent(0.5, lambda);
>> I1 = InSideCurrent(1.0, lambda);
>> I2 = InSideCurrent(2.0, lambda);
>> I3 = InSideCurrent(4.0, lambda);
>> I4 = InSideCurrent(20.0, lambda);
%
% plot all graphs on the same plot
%
>> plot(lambda, I0, 'r-', ...
        lambda, I1, 'g-', ...
        lambda, I2, 'b-', ...
        lambda, I3, 'y-', ...
        lambda, I4, 'k-');

We show the normalized inner current ii* versus λ plots for a variety of α ratios in Figure 20.4. After we generated the plot via the MatLab command, we used options in the pop-up graph window for the axis legends, titles and so forth to alter the appearance of the graph to our liking.

20.10.2 Exercises:

Exercise 20.10.1. Write Matlab functions to implement all the infinite cable transient variable
solutions using as many arguments to the functions as are needed.


1. vm

2. ii

3. io

4. vi

5. vo

Exercise 20.10.2. Generate a parametric plot of each of these variables versus the space variable
z on a reasonable size range of z for the parameters λc and Ie∗ .

1. vm

2. ii

3. io

4. vi

5. vo

Exercise 20.10.3. Write Matlab functions to implement all the remaining infinite cable normal-
ized transient variable solutions:

1. vm*

2. i∗o

3. vi∗

4. vo∗

Exercise 20.10.4. Generate a parametric plot of each of these variables versus the space variable
λ on a reasonable size range of λ for the parameter α. This is what we did for the inner normalized
current already.

1. vm*

2. i∗i

3. i∗o

4. vi∗

5. vo∗

Part IV

Quantitative Tools II

Chapter 21
Boundary Value Problems

Here we discuss boundary value problems.

Chapter 22
Integral Transforms

Chapter 23
Numerical Linear Algebra

We will need to solve large linear systems of equations now and then for our work. We now
introduce a few basic tools using MatLab that can help us with this.

23.1 Solving Systems of linear Equations


We need to explore how to solve the general linear system of equations

Ax = b

where A is an n × n matrix, x is a column vector with n rows whose components are the unknowns we wish to solve for, and b is the data vector.

23.1.1 A Simple Lower Triangular System:

We will start by writing a function to solve a special system of equations; we begin with a lower
triangular matrix system Lx = b.

23.1.2 A Lower Triangular Solver:

Here is a simple function to solve such a system.

Listing 23.1: LowerTriangular Solver

function x = LTriSol(L, b)
%
%  L is an n x n lower triangular matrix
%  b is an n x 1 data vector
%  Obtain x by forward substitution
%
n = length(b);
x = zeros(n,1);
for j = 1:n-1
  x(j) = b(j)/L(j,j);
  b(j+1:n) = b(j+1:n) - x(j)*L(j+1:n,j);
end
x(n) = b(n)/L(n,n);

To use this function, we would enter the following commands at the Matlab prompt. For now, we
are assuming that you are running Matlab in a local directory which contains your Matlab code
LTriSol.m. So we fire up Matlab and enter these commands:

Listing 23.2: Lower Triangular Solver Matlab Session

albert:Source) matlab
----------------------------------------------------------
Your MATLAB license will expire in 57 days.
Please contact your system administrator or
The MathWorks to renew this license.
----------------------------------------------------------

              < M A T L A B >
    Copyright 1984-1999 The MathWorks, Inc.
    Version 5.3.1.29215a (R11.1)
                Oct 6 1999

To get started, type one of these: helpwin, helpdesk, or demo.

For product information, type tour or visit www.mathworks.com.

>> A = [2 0 0; 1 5 0; 7 9 8]

A =

     2     0     0
     1     5     0
     7     9     8

>> b = [6; 2; 5]

b =

     6
     2
     5

>> x = LTriSol(A,b)

x =

    3.0000
   -0.2000
   -1.7750

which solves the system as we wanted.

23.1.3 An Upper Triangular Solver:


Here is a simple function to solve a similar system where A is upper triangular.

Listing 23.3: Upper Triangular Solver

function x = UTriSol (U, b )


%
% U i s nxn n o n s i n g u l a r Upper T r i a n g u l a r m a t r i x
% b i s nx1 d a t a v e c t o r
5 % x i s s o l v e d by b a c k s u b s t i t u t i o n
%
n = length ( b ) ;
x = zeros ( n , 1 ) ;
f o r j = n : −1:2
10 x ( j ) = b ( j ) /U( j , j ) ;
b ( 1 : j −1) = b ( 1 : j −1) − x ( j ) ∗U( 1 : j −1, j ) ;
end
x ( 1 ) = b ( 1 ) /U( 1 , 1 ) ;

As usual, to use this function, we would enter the following commands at the Matlab prompt.
We are still assuming that you are running Matlab in a local directory /LOCAL and that your
Matlab code UTriSol.m is also in this directory.
So we fire up Matlab and enter these commands:

Listing 23.4: Upper Triangular Solver Matlab Session

>> C = [7 9 8; 0 1 5; 0 0 2]

C =

     7     9     8
     0     1     5
     0     0     2

>> b = [6; 2; 5]

b =

     6
     2
     5

>> x = UTriSol(C,b)

x =

   11.5000
  -10.5000
    2.5000
which solves the system as we wanted.

23.1.4 The LU Decomposition of A Without Pivoting:


It is possible to take a general matrix A and rewrite it as the product of a lower triangular matrix
L and an upper triangular matrix U . Here is a simple function to solve a system using the LU
decomposition of A. First, it finds the LU decomposition and then it uses the lower triangular
and upper triangular solvers we wrote earlier.

Listing 23.5: LU Decomposition of A Without Pivoting

function [L,U] = GE(A)
%
%  A is an n x n matrix
%  L is n x n lower triangular
%  U is n x n upper triangular
%
%  We compute the LU decomposition of A using
%  Gaussian Elimination
%
[n,n] = size(A);
for k = 1:n-1
  % find multipliers for column k
  A(k+1:n,k) = A(k+1:n,k)/A(k,k);
  % zero out column k below the diagonal
  A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - A(k+1:n,k)*A(k,k+1:n);
end
L = eye(n,n) + tril(A,-1);
U = triu(A);

Now in MatLab, we enter these commands:

Listing 23.6: LU Decomposition No Pivoting MatLab Session

>> A = [17 24 1 8 15; 23 5 7 14 16; 4 6 13 20 22; 10 12 19 21 3; ...
        11 18 25 2 9]

A =

    17    24     1     8    15
    23     5     7    14    16
     4     6    13    20    22
    10    12    19    21     3
    11    18    25     2     9

>> [L,U] = GE(A);

>> L

L =

    1.0000         0         0         0         0
    1.3529    1.0000         0         0         0
    0.2353   -0.0128    1.0000         0         0
    0.5882    0.0771    1.4003    1.0000         0
    0.6471   -0.0899    1.9366    4.0578    1.0000

>> U

U =

   17.0000   24.0000    1.0000    8.0000   15.0000
         0  -27.4706    5.6471    3.1765   -4.2941
         0         0   12.8373   18.1585   18.4154
         0         0         0   -9.3786  -31.2802
         0         0         0         0   90.1734

>> b = [1; 3; 5; 7; 9]

b =

     1
     3
     5
     7
     9

>> y = LTriSol(L,b)

y =

    1.0000
    1.6471
    4.7859
   -0.4170
    0.9249

>> x = UTriSol(U,y)

x =

    0.0103
    0.0103
    0.3436
    0.0103
    0.0103

>> c = A*x

c =

    1.0000
    3.0000
    5.0000
    7.0000
    9.0000

which solves the system as we wanted.
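
A sanity check we can always run on a factorization like this is to measure the residuals; both quantities below should be zero up to roundoff, though the exact values printed will vary from machine to machine.

% the factors should reproduce A and x should solve the system
norm(L*U - A)
norm(A*x - b)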

23.1.5 The LU Decomposition of A With Pivoting:


Here is a simple function to solve a system using the LU decomposition of A with what is called
pivoting. This means we find the largest absolute value entry in the column we are trying to
zero out and perform row interchanges to bring that one to the pivot position. The MatLab code
changes a bit; see if you can see what we are doing and why!

Listing 23.7: LU Decomposition of A With Pivoting

function [L,U,piv] = GePiv(A)
%
%  A is an n x n matrix
%  L is an n x n lower triangular matrix
%  U is an n x n upper triangular matrix
%  piv is an n x 1 integer vector to hold the row
%  permutations
%
[n,n] = size(A);
piv = 1:n;
for k = 1:n-1
  [maxc,r] = max(abs(A(k:n,k)));
  q = r+k-1;
  piv([k q]) = piv([q k]);
  A([k q],:) = A([q k],:);
  if A(k,k) ~= 0
    A(k+1:n,k) = A(k+1:n,k)/A(k,k);
    A(k+1:n,k+1:n) = A(k+1:n,k+1:n) - A(k+1:n,k)*A(k,k+1:n);
  end
end
L = eye(n,n) + tril(A,-1);
U = triu(A);

We use this code to solve a system as follows:

Listing 23.8: LU Decomposition of A With Pivoting MatLab Session

>> A = [17 24 1 8 15; 23 5 7 14 16; 4 6 13 20 22; 10 12 19 21 3; ...
        11 18 25 2 9]

A =

    17    24     1     8    15
    23     5     7    14    16
     4     6    13    20    22
    10    12    19    21     3
    11    18    25     2     9

>> b = [1; 3; 5; 7; 9]

b =

     1
     3
     5
     7
     9

>> [L,U,piv] = GePiv(A);
>> L

L =

    1.0000         0         0         0         0
    0.7391    1.0000         0         0         0
    0.4783    0.7687    1.0000         0         0
    0.1739    0.2527    0.5164    1.0000         0
    0.4348    0.4839    0.7231    0.9231    1.0000

>> U

U =

   23.0000    5.0000    7.0000   14.0000   16.0000
         0   20.3043   -4.1739   -2.3478    3.1739
         0         0   24.8608   -2.8908   -1.0921
         0         0         0   19.6512   18.9793
         0         0         0         0  -22.2222

>> piv

piv =

     2     1     5     3     4

>> y = LTriSol(L, b(piv));
>> y

y =

    3.0000
   -1.2174
    8.5011
    0.3962
   -0.2279

>> x = UTriSol(U,y);
>> x

x =

    0.0103
    0.0103
    0.3436
    0.0103
    0.0103

>> c = A*x

c =

    1.0000
    3.0000
    5.0000
    7.0000
    9.0000

which solves the system as we wanted.
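
Here the corresponding sanity check is slightly different: since GePiv interchanges rows, L*U reproduces the rows of A in the permuted order recorded in piv. The value below should be zero up to roundoff.

% with pivoting, L*U matches the permuted rows of A
norm(A(piv,:) - L*U)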

Chapter 24
Root Finding And Simple Optimization

Let's discuss root finding using both the bisection method and Newton's method.

24.1 The Bisection Method:


We need a simple function to find the root of a nice function f of the real variable x using what is called bisection. The method is actually quite simple. We know that if f is a continuous function on the finite interval [a, b], then f must have a zero inside the interval [a, b] if f has a different algebraic sign at the endpoints a and b; that is, if the product f(a) f(b) is negative. So we assume we can find an interval [a, b] on which this change in sign satisfies f(a) f(b) ≤ 0 (which we can do by switching to −f if we have to!) and then if we divide the interval [a, b] into two equal pieces [a, m] and [m, b], f(m) can't have the same sign as both f(a) and f(b) because of the assumed sign difference. So at least one of the two halves has a sign change.
Note that if f(a) or f(b) were zero, then we still have f(a) f(b) ≤ 0 and either a or b could be our chosen root and either half interval works fine. If only one of the endpoint function values is zero, then the bisection of [a, b] into the two halves still finds the one half interval that has the root.
So our prototyping Matlab code should use tests like f(x) f(y) ≤ 0 rather than f(x) f(y) < 0 to make sure we catch the root.

24.1.1 The Bisection Matlab Code:


Here is a simple Matlab function to perform the Bisection routine.

Listing 24.1: Bisection Code

function root = Bisection(fname, a, b, delta)
%
%  fname  is a string that gives the name of a
%         continuous function f(x)
%  a,b    this is the interval [a,b] on
%         which f is defined and for which
%         we assume that the product
%         f(a)*f(b) <= 0
%  delta  this is a non negative real number
%
%  root   this is the midpoint of the interval
%         [alpha,beta] having the property that
%         f(alpha)*f(beta) <= 0 and
%         |beta-alpha| <= delta + eps*max(|alpha|,|beta|)
%         and eps is machine zero
%
disp(' ')
disp('   k  |    a(k)      |    b(k)      | b(k) - a(k) ')
k = 1;
disp(sprintf(' %6d | %12.7f | %12.7f | %12.7f',k,a,b,b-a));
fa = feval(fname,a);
fb = feval(fname,b);
while abs(a-b) > delta + eps*max(abs(a),abs(b))
  mid = (a+b)/2;
  fmid = feval(fname,mid);
  if fa*fmid <= 0
    % there is a root in [a,mid]
    b = mid;
    fb = fmid;
  else
    % there is a root in [mid,b]
    a = mid;
    fa = fmid;
  end
  k = k+1;
  disp(sprintf(' %6d | %12.7f | %12.7f | %12.7f',k,a,b,b-a));
end
root = (a+b)/2;

We should look at some of these lines more closely. First, to use this routine, we need to write a
function definition for the function we want to apply bisection to. We will do this in a file called
func.m (Inspired Name, eh?) An example would be the one we wrote for the function

f(x) = tan(x/4) − 1

which is coded in Matlab by

Listing 24.2: Function Definition In MatLab

function y = func(x)
%
%  x real input
%  y real output
%
y = tan(x/4) - 1;

So to apply bisection to this function on the interval [2, 4] with a stopping tolerance of, say, 10⁻⁴, in Matlab, we would type the command

root = Bisection('func',2,4,10^-4)

Note that the name of our supplied function, the uninspired choice func, is passed in as the first argument in single quotes as it is a string.
Also, in the Bisection routine, we have added the code to print out what is happening at each
iteration of the while loop. Matlab handles prints to the screen a little funny, so do set up a table
of printed values we use this syntax:

% this prints a blank line and then a table heading.
% note disp prints a string only
disp(' ')
disp('      k |     a(k)     |     b(k)     |  b(k) - a(k) ')
% now to print k, a, b and b-a, we must first
% put their values into a string using the C-like function
% sprintf and then use disp to display that string:
%   disp( sprintf(' output specifications here ', variables here))
% so inside the while loop we use
disp(sprintf(' %6d | %12.7f | %12.7f | %12.7f', k, a, b, b-a));
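As an aside, Matlab's fprintf can produce the same table line in a single call; we keep the disp(sprintf(...)) idiom in these notes, but the following should behave the same way:

% equivalent one-call version; note the explicit newline \n
fprintf(' %6d | %12.7f | %12.7f | %12.7f\n', k, a, b, b-a);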

24.1.2 Running the Code:


As mentioned above, we will test this code on the function

    f(x) = tan(x/4) − 1

on the interval [2, 4] with a stopping tolerance of δ = 10^-6. Our function has been written as
the Matlab function func supplied in the file func.m.
The Matlab run time looks like this:

Listing 24.3: Bisection MatLab Session

>> eps

ans =

   2.2204e-16

>> root = Bisection('func', 2, 4, 10^-6)

      k |     a(k)     |     b(k)     |  b(k) - a(k)
      1 |   2.0000000  |   4.0000000  |   2.0000000
      2 |   3.0000000  |   4.0000000  |   1.0000000
      3 |   3.0000000  |   3.5000000  |   0.5000000
      4 |   3.0000000  |   3.2500000  |   0.2500000
      5 |   3.1250000  |   3.2500000  |   0.1250000
      6 |   3.1250000  |   3.1875000  |   0.0625000
      7 |   3.1250000  |   3.1562500  |   0.0312500
      8 |   3.1406250  |   3.1562500  |   0.0156250
      9 |   3.1406250  |   3.1484375  |   0.0078125
     10 |   3.1406250  |   3.1445312  |   0.0039062
     11 |   3.1406250  |   3.1425781  |   0.0019531
     12 |   3.1406250  |   3.1416016  |   0.0009766
     13 |   3.1411133  |   3.1416016  |   0.0004883
     14 |   3.1413574  |   3.1416016  |   0.0002441
     15 |   3.1414795  |   3.1416016  |   0.0001221
     16 |   3.1415405  |   3.1416016  |   0.0000610
     17 |   3.1415710  |   3.1416016  |   0.0000305
     18 |   3.1415863  |   3.1416016  |   0.0000153
     19 |   3.1415863  |   3.1415939  |   0.0000076
     20 |   3.1415901  |   3.1415939  |   0.0000038
     21 |   3.1415920  |   3.1415939  |   0.0000019
     22 |   3.1415920  |   3.1415930  |   0.0000010

root =

    3.1416

24.1.3 Exercises:
Well, you have to practice this stuff to see what is going on. So here are two problems to sink
your teeth into!

Exercise 24.1.1. Use bisection to find the first five positive solutions of the equation x = tan(x).
You can see where these are roughly by graphing tan(x) and x simultaneously. Do this for tolerances
{10^-1, 10^-2, 10^-3, 10^-4, 10^-5, 10^-6, 10^-7}. For each root, choose a reasonable bracketing interval
[a, b], explain why you chose it, provide a table of the number of iterations to achieve the accuracy
and a graph of this number vs. accuracy.

Exercise 24.1.2. Use the Bisection Method to find the largest real root of the function f(x) =
x^6 − x − 1. Do this for tolerances {10^-1, 10^-2, 10^-3, 10^-4, 10^-5, 10^-6, 10^-7}. Choose a reasonable
bracketing interval [a, b], explain why you chose it, provide a table of the number of iterations to
achieve the accuracy and a graph of this number vs. accuracy.

24.2 Newton’s Method:


First, we have to decide whether we should take a gradient-based Newton step or just use a bisection
step. The following code uses a simple test to see which we should do in our zero finding routine.


Listing 24.4: Should We Do A Newton Step?

function ok = StepIsIn(x, fx, fpx, a, b)
%
% x     current approximate root
% fx    value of f at the approximate root
% fpx   value of the derivative of f at the
%       approximate root
% a, b  the interval the root is in
%
% ok    1 if the Newton step x - fx/fpx is in [a,b]
%       0 if not
%
if fpx > 0
   ok = ((a-x)*fpx <= -fx) & (-fx <= (b-x)*fpx);
elseif fpx < 0
   ok = ((a-x)*fpx >= -fx) & (-fx >= (b-x)*fpx);
else
   ok = 0;
end
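To see why this test works, note that the Newton step lands in [a, b] exactly when

    a ≤ x − fx/fpx ≤ b

Multiplying through by fpx clears the division (and so avoids dividing by a possibly tiny derivative). When fpx > 0 the inequalities keep their direction, giving

    (a − x) fpx ≤ −fx ≤ (b − x) fpx

and when fpx < 0 they flip, which is exactly what the two branches of the code implement. When fpx is zero the Newton step is undefined, so the routine reports 0 and a bisection step is taken instead.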

A Global Newton Method:

Listing 24.5: Global Newton Method

function [x, fx, nEvals, aF, bF] = ...
         GlobalNewton(fName, fpName, a, b, tolx, tolf, nEvalsMax)
%
% fName      a string that is the name of the function f(x)
% fpName     a string that is the name of the function's
%            derivative f'(x)
% a, b       we look for the root in the interval [a,b]
% tolx       tolerance on the size of the interval
% tolf       tolerance on f(current approximation to root)
% nEvalsMax  maximum number of derivative evaluations
%
% x          approximate zero of f
% fx         the value of f at the approximate zero
% nEvals     the number of derivative evaluations needed
% aF, bF     the final interval the approximate root lies in,
%            [aF, bF]
%
% Termination:  interval [a,b] has size < tolx
%               |f(approximate root)| < tolf
%               have exceeded nEvalsMax derivative evaluations
%
fa  = feval(fName, a);
fb  = feval(fName, b);
x   = a;
fx  = feval(fName, x);
fpx = feval(fpName, x);

nEvals = 1;
k = 1;
disp(' ')
disp('   Step    |    k   |     a(k)     |     x(k)     |     b(k) ')
disp(sprintf('Start      | %6d | %12.7f | %12.7f | %12.7f', k, a, x, b));
while ( (abs(a-b) > tolx) & (abs(fx) > tolf) & (nEvals < nEvalsMax) ) | (nEvals == 1)
   % [a,b] brackets a root and x = a or x = b
   check = StepIsIn(x, fx, fpx, a, b);
   if check
      % take a Newton step
      x = x - fx/fpx;
   else
      % take a bisection step
      x = (a+b)/2;
   end
   fx  = feval(fName, x);
   fpx = feval(fpName, x);
   nEvals = nEvals + 1;
   if fa*fx <= 0
      % there is a root in [a,x]; bring in the right endpoint
      b  = x;
      fb = fx;
   else
      % there is a root in [x,b]; bring in the left endpoint
      a  = x;
      fa = fx;
   end
   k = k+1;
   if (check)
      disp(sprintf('Newton     | %6d | %12.7f | %12.7f | %12.7f', k, a, x, b));
   else
      disp(sprintf('Bisection  | %6d | %12.7f | %12.7f | %12.7f', k, a, x, b));
   end
end
aF = a;
bF = b;

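It is worth recording why this hybrid is attractive. Bisection is safe but only linear: the bracket, and hence the error bound, is halved each step. Newton's method, when it converges, is quadratic: near a simple root x* the error e_k = |x_k − x*| satisfies, roughly,

    e_{k+1} ≈ | f''(x*) / (2 f'(x*)) | e_k²

so the number of correct digits approximately doubles each step. The hybrid keeps the guaranteed bracket of bisection and switches to Newton steps whenever they stay inside the bracket, which is where the rapid final convergence in the sessions below comes from.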
24.2.1 A Run Time Example:


We will apply our global Newton method root finding code to a simple example: find a root for
f(x) = sin(x) in the interval [−7π/2, 15π + 0.1]. We code the function and its derivative in two
simple Matlab files, f1.m and f1p.m. These are

Listing 24.6: Global Newton Function

function y = f1(x)
y = sin(x);

and


Listing 24.7: Global Newton Function Derivative

function y = f1p(x)
y = cos(x);

To run this code on this example, we would then type a phrase like the one below:

[x,fx,nEvals,aLast,bLast] = GlobalNewton('f1','f1p',-7*pi/2,15*pi+.1,...
                                         10^-6,10^-8,200)

Here is the runtime output:

Listing 24.8: Global Newton MatLab Session

>> [x,fx,nEvals,aLast,bLast] = GlobalNewton('f1','f1p',-7*pi/2,15*pi+.1,...
                                            10^-6,10^-8,200)

   Step    |    k   |     a(k)     |     x(k)     |     b(k)
Start      |      1 |  -10.9955743 |  -10.9955743 |   47.2238898
Bisection  |      2 |  -10.9955743 |   18.1141578 |   18.1141578
Bisection  |      3 |  -10.9955743 |    3.5592917 |    3.5592917
Newton     |      4 |    3.1154761 |    3.1154761 |    3.5592917
Newton     |      5 |    3.1154761 |    3.1415986 |    3.1415986
Newton     |      6 |    3.1415927 |    3.1415927 |    3.1415986

x =

    3.1416

fx =

   1.2246e-16

nEvals =

     6

aLast =

    3.1416

bLast =

    3.1416


24.2.2 Some Exercises:


Exercise 24.2.1. Use the Global Newton Method to find the first five positive solutions of the
equation x = tan(x). You can see where these are roughly by graphing tan(x) and x simultaneously.
Do this for tolerances {10^-1, 10^-2, 10^-3, 10^-4, 10^-5, 10^-6, 10^-7}. For each root, choose a reason-
able bracketing interval [a, b], explain why you chose it, provide a table of the number of iterations
to achieve the accuracy and a graph of this number vs. accuracy.

Exercise 24.2.2. Use the Global Newton Method to find the largest real root of the function
f(x) = x^6 − x − 1. Do this for tolerances {10^-1, 10^-2, 10^-3, 10^-4, 10^-5, 10^-6, 10^-7}. Choose
a reasonable bracketing interval [a, b], explain why you chose it, provide a table of the number of
iterations to achieve the accuracy and a graph of this number vs. accuracy.

24.2.3 Adding Finite Difference Approximations to the Derivative:


We can also choose to replace the derivative function for f with a finite difference approximation.
We will use

    f'(xc) ≈ ( f(xc + δc) − f(xc) ) / δc

to approximate the value of the derivative at the point xc. As we have discussed earlier, some
care is required to pick a size for δc so that round-off errors do not destroy the accuracy of our
finite difference approximation to f'.
The simple Matlab code to implement this is given below:

fval = feval(fname,x);
fpval = (feval(fname,x+delta) - fval)/delta;
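A rough error balance explains the stepsize chosen in the listing below. The forward difference has truncation error of order δc |f''(x)|/2, while round-off in evaluating f contributes an error of order eps |f(x)|/δc; the sum of the two is smallest when δc is on the order of √eps. This is why the listing below uses delta = sqrt(eps)*abs(x). A slightly more defensive variant (the max guard is our addition, keeping delta positive even at x = 0) is:

% sqrt(eps) balances truncation against round-off; the max(...,1)
% guard keeps delta positive even when x = 0
delta = sqrt(eps)*max(abs(x), 1);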

We can also use a secant approximation as follows:

    f'(xc) ≈ ( f(xc) − f(x_) ) / ( xc − x_ )

where x_ is the previous iterate from our routine. The Matlab fragment we need is then:

fpc = (fc - f_)/(xc - x_);
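For the secant update to work, the loop must carry the previous iterate along. A minimal sketch of the bookkeeping, using the names xc, fc for the current point and x_, f_ for the previous one as in the fragment above:

% secant slope from the current and previous iterates
fpc = (fc - f_)/(xc - x_);
% remember the current point before stepping
x_  = xc;
f_  = fc;
% take the Newton-like step using the secant slope
xc  = xc - fc/fpc;
fc  = feval(fname, xc);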

24.2.4 A Finite Difference Global Newton Method:


We add the finite difference routines into our Global Newton’s Method as follows:

Listing 24.9: Finite Difference Global Newton Method


function [x, fx, nEvals, aF, bF] = ...
         GlobalNewtonFD(fName, a, b, tolx, tolf, nEvalsMax)
%
% fName      a string that is the name of the function f(x)
% a, b       we look for the root in the interval [a,b]
% tolx       tolerance on the size of the interval
% tolf       tolerance on f(current approximation to root)
% nEvalsMax  maximum number of derivative evaluations
%
% x          approximate zero of f
% fx         the value of f at the approximate zero
% nEvals     the number of derivative evaluations needed
% aF, bF     the final interval the approximate root lies in,
%            [aF, bF]
%
% Termination:  interval [a,b] has size < tolx
%               |f(approximate root)| < tolf
%               have exceeded nEvalsMax derivative evaluations
%
fa = feval(fName, a);
fb = feval(fName, b);
x  = a;
fx = feval(fName, x);
% forward difference approximation to f'(x)
delta = sqrt(eps)*abs(x);
fpval = feval(fName, x + delta);
fpx   = (fpval - fx)/delta;

nEvals = 1;
k = 1;
disp(' ')
disp('   Step    |    k   |     a(k)     |     x(k)     |     b(k) ')
disp(sprintf('Start      | %6d | %12.7f | %12.7f | %12.7f', k, a, x, b));
while ( (abs(a-b) > tolx) & (abs(fx) > tolf) & (nEvals < nEvalsMax) ) | (nEvals == 1)
   % [a,b] brackets a root and x = a or x = b
   check = StepIsIn(x, fx, fpx, a, b);
   if check
      % take a Newton step
      x = x - fx/fpx;
   else
      % take a bisection step
      x = (a+b)/2;
   end
   fx    = feval(fName, x);
   fpval = feval(fName, x + delta);
   fpx   = (fpval - fx)/delta;
   nEvals = nEvals + 1;
   if fa*fx <= 0
      % there is a root in [a,x]; bring in the right endpoint
      b  = x;
      fb = fx;
   else
      % there is a root in [x,b]; bring in the left endpoint
      a  = x;
      fa = fx;
   end
   k = k+1;
   if (check)
      disp(sprintf('Newton     | %6d | %12.7f | %12.7f | %12.7f', k, a, x, b));
   else
      disp(sprintf('Bisection  | %6d | %12.7f | %12.7f | %12.7f', k, a, x, b));
   end
end
aF = a;
bF = b;

Note that for our finite difference stepsize we use √eps · |x|, where eps is machine zero.

24.2.5 A Run Time Example:


We will apply our finite difference global Newton method root finding code to the same simple
example: find a root for f(x) = sin(x) in the interval [−7π/2, 15π + 0.1]. We only need the code
for the function now, which is as usual in the file f1.m.
To run this code on this example, we would then type a phrase like the one below:

[x,fx,nEvals,aLast,bLast] = GlobalNewtonFD('f1',-7*pi/2,15*pi+.1,...
                                           10^-6,10^-8,200)

Here is the runtime output:

Listing 24.10: Finite Difference Newton Method MatLab Session

>> [x,fx,nEvals,aLast,bLast] = GlobalNewtonFD('f1',-7*pi/2,15*pi+.1,...
                                              10^-6,10^-8,200)

   Step    |    k   |     a(k)     |     x(k)     |     b(k)
Start      |      1 |  -10.9955743 |  -10.9955743 |   47.2238898
Bisection  |      2 |  -10.9955743 |   18.1141578 |   18.1141578
Bisection  |      3 |  -10.9955743 |    3.5592917 |    3.5592917
Newton     |      4 |    3.1154761 |    3.1154761 |    3.5592917
Newton     |      5 |    3.1154761 |    3.1415986 |    3.1415986
Newton     |      6 |    3.1154761 |    3.1415927 |    3.1415927

x =

    3.1416

fx =

  -4.3184e-15

nEvals =

     6

aLast =

    3.1155

bLast =

    3.1416

24.2.6 Some Exercises:


Exercise 24.2.3. Use the Finite Difference Global Newton Method to find the second positive
solution of the equation x = tan(x). Do this with tolerance 10^-8. This time, alter the GlobalNew-
tonFD code to allow the finite difference step size delta to be a parameter and do a parametric
study of the effects of delta. Note that the code now uses the reasonable choice of √eps · |x|,
but you should also use the additional δ choices {10^-4, 10^-6, 10^-8, 10^-10}. This will give you five δ
choices. Provide a table and a graph of δ vs. accuracy of the root approximation.

Exercise 24.2.4. Use the Finite Difference Global Newton Method to find the largest real root of
the function f(x) = x^6 − x − 1. Do this with tolerance 10^-8. Again use the altered GlobalNewtonFD
code with the finite difference step size delta as a parameter and do a parametric study of the
effects of delta. Note that the code now uses the reasonable choice of √eps · |x|, but you should
also use the additional δ choices {10^-4, 10^-6, 10^-8, 10^-10}. This will give you five δ choices. Provide
a table and a graph of δ vs. accuracy of the root approximation.

Exercise 24.2.5. Do the same thing for the problems above, but replace the Finite Difference
Global Newton Code with a Secant Global Newton Code. This will only require a few lines of code
to change really, so don’t freak out!

Part V

Finite Length Cables

Chapter 25
The Finite and Half-Infinite Space Cable

We are actually interested in a model of information processing that includes a dendrite, a cell
body and an axon. Now we know that the cables that make up dendrites and axons are not
infinite in extent. So although we understand how the currents and voltages change in our infinite
cable model, we still need to figure out how these solutions change when the cable is only finite
in extent. A first step in this direction is to consider a half-infinite cable such as shown in Figure
25.1 and then a finite length cable like in Figure 25.2.
In both figures, we think of a real biological dendrite or axon as an inner cable surrounded by
a thin cylindrical sheath of seawater. So the outer resistance ro will be the resistance of seawater.
At first, we think of the cable as extending to the right forever; i.e. the soma is infinitely far
from the front end cap of the cable. This is of course not realistic, but we can use this thought
experiment as a vehicle towards understanding how to handle a finite cable attached to a soma.

Figure 25.1: A Half-Infinite Dendrite or Axon Model


Before, we had an impulse current of magnitude I injected at some point on the outer cylinder
and uniformly distributed around the outer cylinder wall. Now we inject current directly into
the front face of our cables. In the finite length L cable case, there will also be a back end cap
at z = L. This back endcap will be attached to the soma. Then, although we could have the
membrane properties of the cable endcap and the soma itself be different, a reasonable assumption
is to make them identical. Hence the back endcap of the cable is a portion of the cell soma. At
any rate, in both situations, the front endcap of the cable is a logical place to think of current as
being injected.
We will begin our modeling with a half-infinite cable. Once we know how to model the front
endcap current injection in this case, we will move directly to the finite cable model.

25.1 The Half-Infinite Cable Solution:

We inject an idealized pulse of current into the inner cable at z = 0, but now through the
front face of the cable. Since the cable is of radius a, this means we are injecting current into a
membrane cap of surface area π a². Note there is no external current source here and hence ke is
zero. The underlying cable equation is

    λc² (d²vm/dz²) − vm = 0,   z ≥ 0

with solution

    vm(z) = A e^{−z/λc},   z ≥ 0

for some value of A which we will determine in a moment. We know that the membrane current
density is given by

    km(z) = gm vm(z)

and further, the inner current satisfies

    dii/dz = −km(z) = −gm vm(z)

Since the cable starts at z = 0 and moves off to infinity, we will integrate this differential equation
from z to ∞ so that we can take advantage of the fact that the current at ∞ must be zero. We
find

    ∫_z^∞ (dii/du) du = −gm A ∫_z^∞ e^{−u/λc} du
    ii(∞) − ii(z) = −gm λc A e^{−z/λc}

We conclude

    ii(z) = gm λc A e^{−z/λc}

We also know that current of magnitude I is being injected into the front face of the inner cable;
hence ii(0) must be I. This tells us that

    ii(0) = gm λc A = I

Combining, we have for positive z:

    vm(z) = ( I/(gm λc) ) e^{−z/λc}
    ii(z) = I e^{−z/λc}
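Anticipating the Matlab implementations that appear later in this chapter, the half-infinite solution is a one-liner. A sketch (the function name here is ours):

function t = HalfInfiniteMembraneVoltage(I, gm, lambda_c, z)
%
% membrane voltage for the half-infinite cable when current I
% is injected into the front face at z = 0
%
t = (I/(gm*lambda_c))*exp(-z/lambda_c);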

From Ohm's Law, we know that the ratio of current to voltage is conductance. The work above
suggests that we can define the conductance of the inner cable at z = 0 by the ratio ii(0)/vm(0).
Therefore the ratio of this current to voltage at z = 0 defines an idealized conductance for the
end cap of the cable. This conductance is called the Thevenin Equivalent Conductance of the
half-infinite cable. It is denoted by G∞. The ratio is easy to compute:

    G∞ = ii(0)/vm(0) = gm λc

We can show that G∞ is dependent on the geometry of the cable; indeed, G∞ is proportional to a
to the three-halves power when ri is much bigger than ro. To see this, recall that the definition
of λc gives us that

    G∞ = √( gm / (ri + ro) )

and so if ri is much bigger than ro, we find that

    G∞ ≈ √( gm / ri )

For a cylindrical cell of radius a, we know

    ri = ρi / (π a²)
    gm = 2π a Gm

and so

    G∞ ≈ √( 2Gm/ρi ) π a^{3/2}
ρi

which tells us that G∞ is proportional to the 3/2 power of a. Thus, larger fibers have larger
characteristic conductances!
If a cable is many space constants long (remember the space constant is proportional to the square
root of a), then the half-infinite model we discuss here may be appropriate. We will be able to
show this is a reasonable thing to do after we handle the true finite cable model. Once that is
done, we will see that the solutions there approach the half-infinite model solutions as L goes to
infinity.

25.2 The Finite Cable Solution: Current Initialization:


If the cable becomes a piece of length L as shown in Figure 25.2, then there are now two faces to
deal with: the input face, through which a current pulse of size I is delivered into some conductance,
and an output face at z = L which has an output load conductance of Ge. Ge represents either
the conductance of the membrane that caps the cable or the conductance of another cable or cell
soma attached at this point.
We again have no external source and so the cable equation is

    λc² (d²vm/dz²) − vm(z) = 0,   0 ≤ z ≤ L
dz 2

The general solution to this homogeneous equation has been discussed before. The solution we seek
will also need two boundary conditions to be fully specified. The general form of the membrane
potential solution is

    vm(z) = A1 e^{z/λc} + A2 e^{−z/λc},   0 ≤ z ≤ L


Figure 25.2: Finite Cable

We will rewrite this in terms of the new functions hyperbolic sine and hyperbolic cosine defined
as follows:

    cosh(z/λc) = ( e^{z/λc} + e^{−z/λc} ) / 2
    sinh(z/λc) = ( e^{z/λc} − e^{−z/λc} ) / 2

leading to the new form of the homogeneous solution

    vm(z) = A1 cosh(z/λc) + A2 sinh(z/λc),   0 ≤ z ≤ L
λc λc

It is convenient to reorganize this yet again and rewrite it in another equivalent form as

    vm(z) = A1 cosh((L−z)/λc) + A2 sinh((L−z)/λc),   0 ≤ z ≤ L
λc λc

Now the membrane current density satisfies

    km(z) = gm vm(z) = gm A1 cosh((L−z)/λc) + gm A2 sinh((L−z)/λc)

Further, since

    (d/dz) cosh(z) = sinh(z)
    (d/dz) sinh(z) = cosh(z)

we can use the internal current equation to find

    dii/dz = −km(z) = −gm A1 cosh((L−z)/λc) − gm A2 sinh((L−z)/λc)
λc λc

Integrating, we find the possible inner current solution to be

    ii(z) = gm λc A1 sinh((L−z)/λc) + gm λc A2 cosh((L−z)/λc)
λc λc

At z = 0, ii is I, and since the conductance at z = L is Ge, the current at L must be

    ii(L) = Ge vm(L)

Now

    vm(L) = A1 cosh(0) + A2 sinh(0) = A1

and

    ii(0) = gm λc A1 sinh(L/λc) + gm λc A2 cosh(L/λc)
    ii(L) = gm λc A1 sinh(0) + gm λc A2 cosh(0) = A2 gm λc

and so

    I = gm λc ( A1 sinh(L/λc) + A2 cosh(L/λc) )
    A1 Ge = A2 gm λc


This implies, using the definition of G∞,

    I = G∞ ( A1 sinh(L/λc) + A2 cosh(L/λc) )
    A1 Ge = A2 G∞

Thus,

    I = A1 G∞ ( sinh(L/λc) + (Ge/G∞) cosh(L/λc) )

giving us

    A1 = (I/G∞) · 1 / ( sinh(L/λc) + (Ge/G∞) cosh(L/λc) )
    A2 = (I Ge/G∞²) · 1 / ( sinh(L/λc) + (Ge/G∞) cosh(L/λc) )

This leads to the solution we are looking for:

    vm(z) = (I/G∞) ( cosh((L−z)/λc) + (Ge/G∞) sinh((L−z)/λc) ) / ( sinh(L/λc) + (Ge/G∞) cosh(L/λc) )
    ii(z) = (I/G∞) ( G∞ sinh((L−z)/λc) + Ge cosh((L−z)/λc) ) / ( sinh(L/λc) + (Ge/G∞) cosh(L/λc) )

Note that at 0, we find

    vm(0) = (I/G∞) ( cosh(L/λc) + (Ge/G∞) sinh(L/λc) ) / ( sinh(L/λc) + (Ge/G∞) cosh(L/λc) )
c

From Ohm's Law, the conductance we see at 0 is given by ii(0)/vm(0). We will call this the Thevenin
Equivalent Conductance looking into the cable of length L at 0, or simply the Thevenin Input
Conductance of the Finite Cable, and denote it by the symbol GT(L) since its value clearly depends
on L. It is given by

    GT(L) = G∞ ( sinh(L/λc) + (Ge/G∞) cosh(L/λc) ) / ( cosh(L/λc) + (Ge/G∞) sinh(L/λc) )

The hyperbolic function tanh is defined by

    tanh(u) = sinh(u)/cosh(u)

and it is easy to show that as u goes to infinity, tanh(u) goes to 1. We can rewrite the formula
for GT(L) as

    GT(L) = G∞ ( (Ge/G∞) + tanh(L/λc) ) / ( (Ge/G∞) tanh(L/λc) + 1 )

and so as L goes to infinity, we find

    lim_{L→∞} GT(L) = G∞ ( (Ge/G∞) + 1 ) / ( (Ge/G∞) + 1 ) = G∞

Hence, the Thevenin input conductance of the cable approaches the idealized Thevenin input
conductance of the half-infinite cable.
There are several interesting cases; for convenience of exposition, let's define

    Z = (L − z)/λc,   H = Ge/G∞,
    L* = L/λc,   D = sinh(L*) + H cosh(L*)

These symbols allow us to rewrite our solutions more compactly as

    vm(z) = (I/G∞) ( cosh(Z) + H sinh(Z) ) / ( sinh(L*) + H cosh(L*) )
          = (I/G∞) ( cosh(Z) + H sinh(Z) ) / D
    ii(z) = (I/G∞) ( G∞ sinh(Z) + Ge cosh(Z) ) / ( sinh(L*) + H cosh(L*) )
          = (I/G∞) ( G∞ sinh(Z) + Ge cosh(Z) ) / D
    GT(L) = G∞ ( sinh(L*) + H cosh(L*) ) / ( cosh(L*) + H sinh(L*) )

Ge = 0 If the conductance of the end cap is zero, then H is zero and no current flows through
the endcap. We see

    GT(L) = G∞ sinh(L*)/cosh(L*) = G∞ tanh(L*)

Ge = G∞ If the conductance of the end cap is G∞, then H is one and we see the finite cable acts
like the half-infinite cable:

    GT(L) = G∞ ( sinh(L*) + cosh(L*) ) / ( cosh(L*) + sinh(L*) ) = G∞

Ge = ∞ If the conductance of the end cap is ∞, the end of the cable acts like it is short-circuited
and H is infinity. Dividing our original conductance solution top and bottom by H, and letting K
denote the reciprocal of H, we see:

    GT(L) = G∞ ( K sinh(L*) + cosh(L*) ) / ( K cosh(L*) + sinh(L*) )

Now K is zero here, so we get

    GT(L) = G∞ cosh(L*)/sinh(L*) = G∞ coth(L*)

25.2.1 Parametric Studies:


We can calculate that

    vm(L)/vm(0) = 1 / ( cosh(L/λc) + (Ge/G∞) sinh(L/λc) )

For convenience, write L* = L/λc and ρ = Ge/G∞ as before. Then the ratio of the voltage at the end of
the cable to the voltage at the beginning can be expressed as

    vm(L)/vm(0) = 1 / ( cosh(L*) + ρ sinh(L*) )

This ratio measures the attenuation of the initial voltage as we move down the cable toward the
end. We can plot a series of these attenuations for a variety of values of ρ. In Figure 25.3, the


Figure 25.3: Attenuation Increases with ρ

highest ρ value is associated with the bottommost plot and the lowest value is associated with
the top plot. Note that as ρ increases, there is more conductivity through the end cap and the voltage
drops faster. The top plot is for ρ = 0, which is the case of no current flow through the end
cap.
We can also look at how the Thevenin equivalent conductance varies with the value of ρ. We
can easily show that

    GT*(L) = GT(L)/G∞ = ( tanh(L*) + ρ ) / ( 1 + ρ tanh(L*) )

In Figure 25.4, we see that the ρ = 1 curve is the one which the other choices approach. When
ρ is above one, the input conductance ratio curve starts above the ρ = 1 curve; when ρ is below
one, the ratio curve approaches it from below.
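The curves in Figures 25.3 and 25.4 are easy to regenerate. A sketch of the commands for the input conductance ratio (the variable names here are ours):

% input conductance ratio versus scaled cable length for several rho
Lstar = linspace(0, 5, 200);
rho   = [0.0, 0.5, 1.0, 2.0, 10.0];
hold on
for i = 1:length(rho)
   GTratio = (tanh(Lstar) + rho(i))./(1 + rho(i)*tanh(Lstar));
   plot(Lstar, GTratio);
end
hold off
xlabel('L/\lambda_c'); ylabel('G_T(L)/G_\infty');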

25.2.2 Some MatLab Implementations:

We want to compare the membrane voltages we see in the infinite cable case to the ones we see
in the finite cable case. Now in the infinite cable case, current I is deposited at z = 0 and the
current injection spreads out uniformly to both sides of 0, leading to the solution

    vm(z) = ( r0 I λc / 2 ) e^{−|z|/λc}


Figure 25.4: Finite Cable Input Conductance Study

where the finite cable case is missing the division by 2 because, in a sense, the current is not
allowed to spread backwards. Hence, to compare solutions, we will inject 2I into the infinite cable
at 0 and I into the finite cable.
Now at z = 0, the finite cable solution gives

    vm(0) = (I/G∞) ( cosh(L*) + H sinh(L*) ) / ( sinh(L*) + H cosh(L*) )
          = (I/G∞) ( 1 + H tanh(L*) ) / ( H + tanh(L*) )

In the examples below, we set G∞, λc and r0 to one and set the cable length to 3. Hence, since
tanh(L*) is very close to one for L* = 3, we have

    vm(0) ≈ I (1 + H)/(H + 1) ≈ I

So for these parameters, we should be able to compare the infinite and finite cable solutions nicely.
MatLab code to implement the infinite cable voltage solution is given below:

Listing 25.1: InfiniteMembraneVoltage.m

function t = InfiniteMembraneVoltage(Iehat, r0, lambda_c, z)
%
% compute membrane voltage inside an infinite fiber
%
Prefix = (Iehat*r0*lambda_c)/2.0;
t = Prefix*exp(-1.0*abs(z)/lambda_c);

It is straightforward to modify the code above to handle the finite cable case:

Listing 25.2: FiniteMembraneVoltage.m

function t = FiniteMembraneVoltage(Iehat, Ginfinity, Ge, L, lambda_c, z)
%
% compute membrane voltage inside a finite fiber
%
Prefix      = Iehat/Ginfinity;
FixedArg    = L/lambda_c;
Ratio       = Ge/Ginfinity;
Denominator = sinh(FixedArg) + Ratio*cosh(FixedArg);
Arg         = (L - z)/lambda_c;
Numerator   = cosh(Arg) + Ratio*sinh(Arg);
t           = Prefix*(Numerator/Denominator);

25.2.3 Run-Time Results:


We try out these new functions in the following MatLab sessions:

Listing 25.3: A Finite Membrane Voltage MatLab Session

>> path(path, '/local/petersj/BioInfo/Cable');

>> Ge = 0.1;
>> Iehat = 1.0;
>> Ginfinity = 1.0;
>> L = 3.0;
>> lambda_c = 1.0;
>> r0 = 1.0;
>> Z = linspace(0, L, 200);
%
% Find voltage for infinite cable for 2*Iehat
>> V_m_infinity = InfiniteMembraneVoltage(2*Iehat, r0, lambda_c, Z);

% Find voltage for finite cable for Iehat, Ge = 0.1
>> V_m0 = FiniteMembraneVoltage(Iehat, Ginfinity, Ge, L, lambda_c, Z);

% Find voltage for finite cable for Iehat, Ge = 1.0
>> V_m8 = FiniteMembraneVoltage(Iehat, Ginfinity, 1.0, L, lambda_c, Z);

% Find voltage for finite cable for Iehat, Ge = 5.0
>> V_m6 = FiniteMembraneVoltage(Iehat, Ginfinity, 5.0, L, lambda_c, Z);

% Find voltage for finite cable for Iehat, Ge = 50.0
>> V_m7 = FiniteMembraneVoltage(Iehat, Ginfinity, 50.0, L, lambda_c, Z);

% Plot infinite cable and finite cable voltages for Iehat, Ge = 0.1
>> plot(Z, V_m_infinity, 'g-', Z, V_m0, 'r-');

% Plot infinite cable and finite cable voltages for Iehat, Ge = 1.0
>> plot(Z, V_m_infinity, 'g-', Z, V_m8, 'r-.');

% Plot infinite cable and finite cable voltages for Iehat, Ge = 5.0
>> plot(Z, V_m_infinity, 'g-', Z, V_m6, 'r-');

% Plot infinite cable and finite cable voltages for Iehat, Ge = 50.0
>> plot(Z, V_m_infinity, 'g-', Z, V_m7, 'r-.');

(a) End Cap Load is 0.1    (b) End Cap Load is 1.0

Figure 25.5: Low End Cap Loads

We are choosing to look at these solutions with all the parameters set to 1 for convenience,
except for the cable length, which will be 3, and the endcap load conductance Ge, which we will
vary. We are also injecting 2 Ie* into the infinite cable, as we mentioned we would do. Note the
finite cable response attenuates more quickly than the infinite cable unless Ge is G∞! You can see
a variety of results in Figures 25.5(a) - 25.6(b).


(a) End Cap Load is 5.0 (b) End Cap Load is 50.0

Figure 25.6: High End Cap Loads

25.2.4 Exercises:

Exercise 25.2.1. Write Matlab functions to implement the finite cable transient variable solutions
using as many arguments to the functions as are needed.

1. vm

2. ii

Exercise 25.2.2. Generate a parametric plot of each of these variables versus the space variable
z on a reasonable size range of z for the parameters λc , Ie∗ , cable length L and Ge . The ratio of
Ge to G∞ is a very reasonable parameter to use.

1. vm

2. ii

Exercise 25.2.3. Plot GT (L) versus L and the horizontal line G∞ on the same plot and discuss
what is happening.

25.3 The Finite Cable: Voltage Initialization:


We can redo what we have discussed in the above using a different initial condition. Instead of
specifying initial current, we will specify initial voltage. As you might expect, this will generate
slightly different solutions. We use the same start:

    vm(z) = A1 cosh((L−z)/λc) + A2 sinh((L−z)/λc)
    ii(z) = A1 G∞ sinh((L−z)/λc) + A2 G∞ cosh((L−z)/λc)

Our boundary conditions are now vm(0) = V0 and ii(L) = Ge vm(L). Also, we will use the
abbreviations H, Z and L* from before, but change D to

    E = cosh(L*) + H sinh(L*)

Now let VL denote vm(L). Then it follows that A1 = VL. If we set BL = A2/VL, then

    vm(z) = VL ( cosh((L−z)/λc) + BL sinh((L−z)/λc) )
    ii(z) = VL G∞ ( sinh((L−z)/λc) + BL cosh((L−z)/λc) )

We know that ii(L) = Ge VL and so

    ii(L) = VL G∞ ( sinh(0) + BL cosh(0) ) = VL BL G∞
    Ge VL = VL BL G∞

The above implies BL = H. Finally, note that

    vm(0) = VL ( cosh(L/λc) + H sinh(L/λc) )
          = VL ( cosh(L*) + H sinh(L*) )
          = VL E

Hence, VL = V0/E and from this we obtain our final expression for the solution:

    vm(z) = (V0/E) ( cosh(Z) + H sinh(Z) )
    ii(z) = (V0/E) G∞ ( sinh(Z) + H cosh(Z) )

We will find that the Thevenin equivalent conductance at the end cap is still the same. We have
the same calculation as before:

    GT(L) = ii(0)/vm(0) = G∞ ( sinh(L*) + H cosh(L*) ) / ( cosh(L*) + H sinh(L*) )


Finally, it is easy to see that the relationship between the current and voltage initialization
conditions is given by

    V0 GT(L) = ii(0)

We can then find voltage and current equations for various interesting end cap conductance loads:

Ge = 0: If the conductance of the end cap is zero, then H is zero and no current flows through
the endcap. We see

    vm(z) = V0 cosh(Z)/cosh(L*),
    ii(z) = V0 G∞ sinh(Z)/cosh(L*),
    GT(L) = G∞ tanh(L*).

Ge = G∞: If the conductance of the end cap is G∞, then H is one and we see the finite cable acts
like the half-infinite cable:

    vm(z) = V0 ( cosh(Z) + sinh(Z) ) / ( cosh(L*) + sinh(L*) ) = V0 e^{−z/λc},
    ii(z) = V0 G∞ ( sinh(Z) + cosh(Z) ) / ( cosh(L*) + sinh(L*) ) = V0 G∞ e^{−z/λc},
    GT(L) = G∞.

Ge = ∞: If the conductance of the end cap is ∞, the end of the cable acts like it is short-circuited
and H is infinity. We see

    vm(z) = V0 sinh(Z)/sinh(L*),
    ii(z) = V0 G∞ cosh(Z)/sinh(L*),
    GT(L) = G∞ coth(L*).


25.3.1 Exercises:
Exercise 25.3.1. Write Matlab functions to implement vm , ii and GT (L) using as many argu-
ments to the functions as are needed. The arguments will be L, λc , z, V0 , G∞ and Ge .

Exercise 25.3.2. Using V0 = 1, λc = 5, L = 10, and G∞ = 2.0, generate a parametric plot
of each of these variables versus the space variable z on [0, 10] for a reasonable size range of Ge.
Discuss how the special conductance cases above fit into these plots.

Exercise 25.3.3. Assume L = 1 and λc = 1 also. Suppose sufficient current is injected to give
V0 = 10 mV. Let G∞ = 2.0.

1. Let Ge = 0.5

(a) Compute vm at z = 0.6 and 1.0. Compute vm for the infinite cable model too at these
points.

2. Let Ge = 2.0

(a) Compute vm at z = 0.6 and 1.0. Compute vm for the infinite cable model too at these
points.

3. Let Ge = 50.0

(a) Compute vm at z = 0.6 and 1.0. Compute vm for the infinite cable model too at these
points.

4. Discuss the results.

25.4 Synaptic Currents:


We can have two types of current injection. The first is injected through the outer membrane
and is modeled by the pulses ke. We know the outer cylinder is a theoretical abstraction and so
there is really no such physical membrane. The second is injected into the front face of the cable
in either the finite or half-infinite case. This current is modeled as an initialization of the inner
current ii. So consider the model we see in Figure 25.7.
The differential equation we would solve here would be

    λc² (d²vm/dz²) − vm = −ro λc² I0 δ(z − z0) − ro λc² I1 δ(z − z1),   0 ≤ z ≤ L
    ii(0) = I
    ii(L) = Ge vm(L)                                                    (25.1)

Here current I is injected into the front face and two impulse currents are injected into the outer
cylinder. If we are thinking of this as a model of an axon, we could throw away the external sources


Figure 25.7: Dendrite Model With Synaptic Current Injections

and think of the current I injected into the front face as the current that arises from membrane
voltage changes that propagate forward from the dendrite and soma system. Hence, the front face
current I is a lumped sum approximation of the entire dendrite and soma subsystem response.
On the other hand, if we are modeling a dendrite, we could think of the front face current I as a
lumped sum model of the synaptic voltages that have propagated forward up to that point from
the rest of the dendrite system we are not modeling. The external current sources are then
currents induced by synaptic interactions or currents flowing through pores in the membrane. Of
course, voltage modulated gates would be handled differently!
Since our differential equation system is linear, to solve a problem like (25.1), we can simply add
together the solutions to individual problems. This is called superposition of solutions and it is a
very important tool in our work. Hence, to solve (25.1) we solve

    λc² (d²vm/dz²) − vm = 0,   0 ≤ z ≤ L
    ii(0) = I
    ii(L) = Ge vm(L)                                                    (25.2)

and

    λc² (d²vm/dz²) − vm = −ro λc² I0 δ(z − z0),   0 ≤ z ≤ L
    ii(0) = 0
    ii(L) = Ge vm(L)                                                    (25.3)


and

    λc² (d²vm/dz²) − vm = −ro λc² I1 δ(z − z1),   0 ≤ z ≤ L
    ii(0) = 0
    ii(L) = Ge vm(L)                                                    (25.4)

and add the solutions together. Since we already know the solution to (25.2), it suffices to solve
(25.3).

25.4.1 A Single Impulse:

Consider a family of problems of the form (25.5):

    λc² (d²vm/dz²) − vm = −ro λc² I0 keC(z − z0),   0 ≤ z ≤ L
    ii(0) = 0
    ii(L) = Ge vm(L)                                                    (25.5)

where the family {keC} of impulses is modeled similarly to what we have done before: each is zero
off [z0 − C, z0 + C], symmetric around z0 and the area under the curve is 1 for all C. So the only
difference here is that our pulse family always delivers a constant 1 amp of current and we control
the magnitude of the delivered current by the multiplier I0. This way we can do the argument
just once and know that our results hold for different magnitudes I1 and so on. We assume for
now that the site of current injection is z0, which is in (0, L). The case of z0 being 0 or L will then
require only a slight modification of our arguments, which we will only sketch, leaving the details
to the reader. As usual, each current pulse delivers I0 amps of current even though the base of
the pulse is growing smaller and smaller. Physically, we expect, like in the infinite cable problem,
that the voltage acts like a decaying exponential on either side of z0. To see this is indeed the
idealized solution we obtain when we let C go to 0, we resort to the following model:

    vm^C(z) =
        A1^C e^{(z−z0)/λc},                                   0 ≤ z ≤ z0 − C
        α^C e^{(z−z0)/λc} + β^C e^{−(z−z0)/λc} + φp^C(z),     z0 − C ≤ z ≤ z0 + C
        A2^C e^{−(z−z0)/λc},                                  z0 + C ≤ z ≤ L

The parts of the model before z0 − C and after z0 + C are modeled with exponential decay, while
the part where the pulse is active is modeled with the full general solution to the problem, having
the form φh(z) + φp^C(z), where φh is the homogeneous solution to the problem and φp^C is the
particular solution obtained from the variation of parameters technique for a pulse keC. Since the
pulse keC is smooth, we expect the voltage solution to be smooth also; hence our solution and its

derivative must be continuous at the points z0 − C and z0 + C. This will give us four equations
in four unknowns which we can solve for the constants A1^C, A2^C, α^C and β^C.
Recall the auxiliary equation for this differential equation is

    λc² r² − 1 = 0

with roots 1/λc and −1/λc. The homogeneous solution is then

    φh(z) = B1 e^{(z−z0)/λc} + B2 e^{−(z−z0)/λc}

Using the method of Variation of Parameters, we search for a particular solution of the form

    φp^C(z) = U1(z) e^{(z−z0)/λc} + U2(z) e^{−(z−z0)/λc}

where the coefficient functions U1 and U2 satisfy

    [ e^{(z−z0)/λc}               e^{−(z−z0)/λc}        ] [ dU1/dz ]   [ 0          ]
    [ (1/λc) e^{(z−z0)/λc}   −(1/λc) e^{−(z−z0)/λc}     ] [ dU2/dz ] = [ −ro I0 keC ]

This is easily solved using Cramer's rule to give

    dU1/dz = −(ro λc I0/2) keC e^{−(z−z0)/λc}
    dU2/dz = (ro λc I0/2) keC e^{(z−z0)/λc}

We can then integrate to obtain

    U1(z) = −(r0 λc I0/2) ∫_{z0}^{z} keC(s − z0) e^{−(s−z0)/λc} ds
    U2(z) = (r0 λc I0/2) ∫_{z0}^{z} keC(s − z0) e^{(s−z0)/λc} ds

giving

    φp^C(z) = U1(z) e^{(z−z0)/λc} + U2(z) e^{−(z−z0)/λc}
            = −( (r0 λc I0/2) ∫_{z0}^{z} keC(s − z0) e^{−(s−z0)/λc} ds ) e^{(z−z0)/λc}
              + ( (r0 λc I0/2) ∫_{z0}^{z} keC(s − z0) e^{(s−z0)/λc} ds ) e^{−(z−z0)/λc}


Now we will rewrite this a bit more compactly than usual:

    φp^C(z) = −(r0 λc I0/2) ∫_{z0}^{z} keC(s − z0) e^{(z−s)/λc} ds + (r0 λc I0/2) ∫_{z0}^{z} keC(s − z0) e^{−(z−s)/λc} ds
            = −r0 λc I0 ∫_{z0}^{z} keC(s − z0) sinh( (z−s)/λc ) ds

Hence, the particular solution for the pulse keC is

    φp^C(z) = −r0 λc I0 ∫_{z0}^{z} keC(s − z0) sinh( (z−s)/λc ) ds

25.4.2 Forcing Continuity in the Model:

Note that

    dvm^C/dz =
        (A1^C/λc) e^{(z−z0)/λc},                                         0 ≤ z ≤ z0 − C
        (α^C/λc) e^{(z−z0)/λc} − (β^C/λc) e^{−(z−z0)/λc} + dφp^C/dz,     z0 − C ≤ z ≤ z0 + C
        −(A2^C/λc) e^{−(z−z0)/λc},                                       z0 + C ≤ z ≤ L

Now continuity of vm^C and its derivative at z0 − C and z0 + C gives

    A1^C e^{−C/λc} = α^C e^{−C/λc} + β^C e^{C/λc} + φp^C(z0 − C)
    A2^C e^{−C/λc} = α^C e^{C/λc} + β^C e^{−C/λc} + φp^C(z0 + C)
    (A1^C/λc) e^{−C/λc} = (α^C/λc) e^{−C/λc} − (β^C/λc) e^{C/λc} + (dφp^C/dz)(z0 − C)
    −(A2^C/λc) e^{−C/λc} = (α^C/λc) e^{C/λc} − (β^C/λc) e^{−C/λc} + (dφp^C/dz)(z0 + C)

which can be rewritten in the form

    A1^C e^{−C/λc} − α^C e^{−C/λc} − β^C e^{C/λc} = φp^C(z0 − C)               (25.6)
    A2^C e^{−C/λc} − α^C e^{C/λc} − β^C e^{−C/λc} = φp^C(z0 + C)               (25.7)
    A1^C e^{−C/λc} − α^C e^{−C/λc} + β^C e^{C/λc} = λc (dφp^C/dz)(z0 − C)      (25.8)
    −A2^C e^{−C/λc} − α^C e^{C/λc} + β^C e^{−C/λc} = λc (dφp^C/dz)(z0 + C)     (25.9)


Computing (Equation 25.6 + Equation 25.8), (Equation 25.6 − Equation 25.8), (Equation 25.7
+ Equation 25.9) and (Equation 25.7 − Equation 25.9), we find

    2 A1^C e^{−C/λc} − 2 α^C e^{−C/λc} = φp^C(z0 − C) + λc (dφp^C/dz)(z0 − C)
    −2 β^C e^{C/λc} = φp^C(z0 − C) − λc (dφp^C/dz)(z0 − C)
    −2 α^C e^{C/λc} = φp^C(z0 + C) + λc (dφp^C/dz)(z0 + C)
    2 A2^C e^{−C/λc} − 2 β^C e^{−C/λc} = φp^C(z0 + C) − λc (dφp^C/dz)(z0 + C)

Although a bit messy, this can easily be solved to give

    2 A1^C e^{−C/λc} = 2 α^C e^{−C/λc} + φp^C(z0 − C) + λc (dφp^C/dz)(z0 − C)
                     = −( φp^C(z0 + C) + λc (dφp^C/dz)(z0 + C) ) e^{−2C/λc}
                       + φp^C(z0 − C) + λc (dφp^C/dz)(z0 − C)
    2 A2^C e^{−C/λc} = 2 β^C e^{−C/λc} + φp^C(z0 + C) − λc (dφp^C/dz)(z0 + C)
                     = −( φp^C(z0 − C) − λc (dφp^C/dz)(z0 − C) ) e^{−2C/λc}
                       + φp^C(z0 + C) − λc (dφp^C/dz)(z0 + C)

or

    A1^C = −(1/2)( φp^C(z0 + C) + λc (dφp^C/dz)(z0 + C) ) e^{−C/λc}
           + (1/2)( φp^C(z0 − C) + λc (dφp^C/dz)(z0 − C) ) e^{C/λc}
    A2^C = −(1/2)( φp^C(z0 − C) − λc (dφp^C/dz)(z0 − C) ) e^{−C/λc}
           + (1/2)( φp^C(z0 + C) − λc (dφp^C/dz)(z0 + C) ) e^{C/λc}

25.4.3 The Limiting Solution:

The above solutions work for all positive C. We note

    φp^C(z0 + C) = −r0 λc I0 ∫_{z0}^{z0+C} keC(s − z0) sinh( (z0 + C − s)/λc ) ds

and so, using arguments very similar to those presented in Lemma 20.4.1, we can show

    lim_{C→0} φp^C(z0 + C) = −r0 λc I0 (1/2) sinh(0) = 0


In a similar fashion, we can show

    lim_{C→0} φp^C(z0 − C) = r0 λc I0 (1/2) sinh(0) = 0

Finally, since

    (dφp^C/dz)(z) = −r0 λc I0 keC(z − z0) sinh(0) − r0 I0 ∫_{z0}^{z} keC(s − z0) cosh( (z−s)/λc ) ds

we see

    (dφp^C/dz)(z0 + C) = −r0 I0 ∫_{z0}^{z0+C} keC(s − z0) cosh( (z0 + C − s)/λc ) ds

which gives

    lim_{C→0} λc (dφp^C/dz)(z0 + C) = −r0 I0 λc (1/2) cosh(0) = −(r0 I0 λc)/2

and similarly

    lim_{C→0} λc (dφp^C/dz)(z0 − C) = r0 I0 λc (1/2) cosh(0) = (r0 I0 λc)/2

Thus in the limit, we obtain the limiting constants

    A1 = lim_{C→0} A1^C = r0 λc I0 / 2
    A2 = lim_{C→0} A2^C = r0 λc I0 / 2

This gives the limiting solution

    vm(z) = (r0 λc I0 / 2) e^{(z−z0)/λc},    0 ≤ z ≤ z0
    vm(z) = (r0 λc I0 / 2) e^{−(z−z0)/λc},   z0 ≤ z ≤ L

which is essentially our usual idealized impulse solution from the infinite cable model:

    vm(z) = (r0 λc I0 / 2) e^{−|z−z0|/λc},   0 ≤ z ≤ L

Note the values of α^C and β^C are not important for the limiting solution.
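This limiting impulse solution is simple to code in the style of the earlier membrane voltage functions; a sketch (the function name here is ours):

function t = ImpulseMembraneVoltage(I0, r0, lambda_c, z0, z)
%
% idealized membrane voltage due to an impulse of strength I0
% applied at z0 along the cable
%
t = (r0*lambda_c*I0/2.0)*exp(-abs(z - z0)/lambda_c);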

25.4.4 Satisfying the Boundary Conditions:

Since

    dii/dz = −gm vm

we see that for z below z0 − C we have, for an integration constant γ^C,

    ii(z) = −gm λc A1^C e^{(z−z0)/λc} + γ^C
          = −G∞ A1^C e^{(z−z0)/λc} + γ^C

and the boundary condition ii(0) = 0 then implies

    γ^C = G∞ A1^C e^{−z0/λc}

For the second boundary condition, ii(L) = Ge vm(L), we use the last part of the definition
of vm^C to obtain, for a new integration constant ζ^C,

    ii(z) = G∞ A2^C e^{−(z−z0)/λc} + ζ^C

and so

    G∞ A2^C e^{−(L−z0)/λc} + ζ^C = Ge A2^C e^{−(L−z0)/λc}

or

    ζ^C = A2^C e^{−(L−z0)/λc} (Ge − G∞)

Hence, from the above, we see that the limiting inner current constants satisfy

    γ = lim_{C→0} γ^C = G∞ (r0 λc I0 / 2) e^{−z0/λc}
    ζ = lim_{C→0} ζ^C = (Ge − G∞) (r0 λc I0 / 2) e^{−(L−z0)/λc}

giving, for z below z0,

    ii(z) = −G∞ (r0 λc I0 / 2) e^{(z−z0)/λc} + G∞ (r0 λc I0 / 2) e^{−z0/λc}

and for z above z0,

    ii(z) = G∞ (r0 λc I0 / 2) e^{−(z−z0)/λc} + (Ge − G∞) (r0 λc I0 / 2) e^{−(L−z0)/λc}

25.4.5 Some Results:

We can now see the membrane voltage solutions for the stated problem

    λc² (d²vm/dz²) − vm = 0,   0 ≤ z ≤ L
    ii(0) = I
    ii(L) = Ge vm(L)

in Figure 25.8. Note here there are no impulses applied. If we add the two impulses at z0 and
z1 as described by the differential Equations (25.3) and (25.4), the solution to the full problem
for somewhat weak impulses at two points on the cable can be seen in Figure 25.9. For another
choice of impulse strengths, we see the summed solution in Figure 25.10. For ease of comparison,
we can plot the solutions for no impulses, two impulses of low strength and two impulses of higher
strength simultaneously in Figure 25.11.
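Since the cable equation is linear, the plotted solutions are literally sums of the pieces we have computed. A sketch of how a figure like 25.9 could be assembled, reusing FiniteMembraneVoltage and the impulse function above (the parameter values here are illustrative):

Z = linspace(0, L, 200);
% endcap current injection alone, as in Figure 25.8
V = FiniteMembraneVoltage(Iehat, Ginfinity, Ge, L, lambda_c, Z);
% add the two synaptic impulse contributions at z0 and z1
V = V + ImpulseMembraneVoltage(I0, r0, lambda_c, z0, Z);
V = V + ImpulseMembraneVoltage(I1, r0, lambda_c, z1, Z);
plot(Z, V);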


Figure 25.8: Finite Cable Initial Endcap Current

Figure 25.9: Finite Cable Current Initializations: Weak Impulses


Figure 25.10: Finite Cable Current Initializations: Strong Impulses

Figure 25.11: Three Finite Cable Current Initializations


Species                               a      Cm        Gm      ri      λc     τm
Squid (Loligo Peali)                  250    1         1       0.015   6.5    1
Lobster (Homarus Vulgaris)            37.5   1         0.5     1.4     2.5    2
Crab (Carcinus Maenas)                15     1         0.14    13.0    2.3    7
Lobster (Homarus Americanus)          50     -         0.13    1.0     5.1    -
Earthworm (Lumbricus Terrestris)      52.5   0.3       0.083   2.3     4.0    3.6
Marine Worm (Myxicola Infundibulum)   280    0.75      0.012   0.023   5.4    0.9
Cat Motoneuron                        -      2         0.4     -       -      5.0
Frog Muscle Fiber (Sartorius)         37.5   2.5-6.0   0.25    4.5     2.0    10.0-24.0

Table 25.1: Typical Cable Constants

25.5 Implications:
Even though we have still not looked into the case of time dependent solutions, we can still say
a lot about the nature of the solutions we see in the time independent case.

• The space constant λc tells us how far we must be from the site of input current to see
significant attenuation of the resulting potential. Thus, if the cable length is small compared
to λc, the cable's membrane potential is close to position independent.

• An electrode used to input current or measure voltage can be thought of as infinitesimal in
tip size if the tip size is very small compared to λc.

• Our underlying cable equation is a linear PDE. Hence, the superposition principle applies
and we can use it to handle arbitrary arrangements of current sources.

• We already know that

    λc ≈ √( a / (2 ρi Gm) ) = √( 1/(2 ρi Gm) ) √a

implying λc decreases as the cable fiber inner radius decreases.

Now let, as usual, a be the cable radius in µm, Cm the membrane capacitance in µF/cm², Gm the
membrane conductance in mS/cm², ri the internal resistance of the protoplasm per unit length in
MΩ/cm, λc the cable space constant in mm and τm the cable membrane time constant in msec.
Then consider a table of these typical cable constants as shown in Table 25.1.

From Table 25.1, we see that for the earthworm the ratio of λc to a is 4000 µm / 52.5 µm, or 76.2.
If we assume this ratio holds, then for a cable with a equal to 0.5 µm, we see that the space
constant for this cable would be 76.2 × 0.5 µm, or 38.1 µm, which is 0.038 mm. The earthworm
cable fiber is unmyelinated and so signals are not insulated from transmission loss. This
extrapolation shows that in this unmyelinated case, we would expect the fiber to transmit signal
poorly as the fiber radius drops. Of course, many species protect themselves against this
transmission loss by shielding the fibers using myelin, but that is another story.

Chapter 26
Simplified Dendrite - Soma - Axon
Information Processing

Let's review the basics of information processing in a typical neuron. There are many first sources
for this material; some of them are Introduction to Neurobiology (Hall (10) 1992), Ionic Channels
of Excitable Membranes (Hille (12) 1992), Foundations of Cellular Neurophysiology (Johnston and
Wu (14) 1995), Rall's review of cable theory in the 1977 Handbook of Physiology (Rall (20) 1977)
and Cellular Biophysics: Transport and Electrical Properties (Weiss (24) 1996) and (Weiss (25)
1996).
Our basic model consists of the following structural elements: A neuron which consists of
a dendritic tree (which collects sensory stimuli and sums this information in a temporally and
spatially dependent way), a cell body (called the soma) and an output fiber (called the axon).
Individual dendrites of the dendritic tree and the axon are all modeled as cylinders of some radius
a whose length ℓ is very long compared to this radius and whose walls are made of a bilipid
membrane. The inside of each cylinder consists of an intracellular fluid and we think of the
cylinder as lying in a bath of extracellular fluid. So for many practical reasons, we can model a
dendritic or axonal fiber as two concentric cylinders; an inner one of radius a (this is the actual
dendrite or axon) and an outer one with the extracellular fluid contained in the space between
the inner and outer membranes.
The potential difference across the inner membrane is essentially due to a balance between the
electromotive force generated by charge imbalance, the driving force generated by charge concen-
tration differences in various ions and osmotic pressures that arise from concentration differences
in water molecules on either side of the membrane. Roughly speaking, the ions of importance in
our simplified model are the potassium K+, sodium Na+ and chloride Cl− ions. The equilibrium
potential across the inner membrane is about −70 millivolts and when the membrane potential is
driven above this rest value, we say the membrane is depolarized and when it is driven below the


rest potential, we say the membrane is hyperpolarized. The axon of one neuron interacts with the
dendrite of another neuron via a site called a synapse. The synapse is physically separated into
two parts: the presynaptic side (the side the axon is on) and the postsynaptic side (the side the
dendrite is on). There is an actual physical gap, the synaptic cleft, between the two parts of the
synapse. This cleft is filled with extracellular fluid.
If there is a rapid depolarization of the presynaptic site, a chain of events is initialized which
culminates in the release of specialized molecules called neurotransmitters into the synaptic cleft.
There are pores embedded in the postsynaptic membrane whose opening and closing are dependent
on the potential across the membrane that are called voltage-dependent gates. In addition, the
gates generally allow the passage of a specific ion; so for example, there are sodium, potassium
and chloride gates. The released neurotransmitters bind with the sites specific for the Na+ ion.
Such sites are called receptors. Once bound, Na+ ions begin to flow across the membrane into
the fiber at a greater rate than before. This influx of positive ions begins to drive the membrane
potential above the rest value; that is, the membrane begins to depolarize. The flow of ions across
the membrane is measured in gross terms by what are called conductances. Conductance has
the units of reciprocal ohms; hence, high conductance implies high current flow per unit voltage.
Thus the conductance of a gate is a good way to measure its flow. We can say that as the
membrane begins to depolarize, the sodium conductance, gN a , begins to increase. This further
depolarizes the membrane. However, the depolarization is self-limited as the depolarization of
the membrane also triggers the activation of voltage-dependent gates for the potassium ion, K + ,
which allow potassium ions to flow through the membrane out of the cell. So the increase in the
sodium conductance, gN a triggers a delayed increase in potassium conductance, gK (there are also
conductance effects due to chloride ions which we will not mention here). The net effect of these
opposite driving forces is the generation of a potential pulse that is fairly localized in both time
and space. It is generated at the site of the synaptic contact and then begins to propagate down
the dendritic fiber toward the soma. As it propagates, it attenuates in both time and space. We
call these voltage pulses Post Synaptic Pulses or PSPs.
We model the soma itself as a small isopotential sphere, small in surface area compared to the
surface area of the dendritic system. The possibly attenuated values of the PSPs generated in the
dendritic system at various times and places are assumed to propagate without change from any
point on the soma body to the initial segment of the axon which is called the axon hillock. This
is a specialized piece of membrane which generates a large output voltage pulse in the axon by
a coordinated rapid increase in gNa and gK once the axon hillock membrane depolarizes above a
critical trigger value. The axon itself is constructed in such a way that this output pulse, called
the action potential, travels without change throughout the entire axonal fiber. Hence, the initial
depolarizing voltage impulse that arrives at a given presynaptic site is due to the action potential
generated in the presynaptic neuron by its own dendritic system.
The salient features of our model are thus:

• Axonal and dendritic fibers are modeled as two concentric membrane cylinders.


• The axon carries action potentials which propagate without change along the fiber once
they are generated. Thus if an axon makes 100 synaptic contacts, we assume that the
depolarizations of each presynaptic membrane are the same.

• Each synaptic contact on the dendritic tree generates a time and space localized depolariza-
tion of the postsynaptic membrane which is attenuated in space as the pulse travels along
the fiber from the injection site and which decreases in magnitude the longer the time is since
the pulse was generated.

• The effect of a synaptic contact is very dependent on the position along the dendritic fiber
at which the contact is made; in particular, on how far the contact is from the axon hillock
(in our model, how far from the soma). Contacts made in essentially the same space
locality have a high probability of reinforcing each other and thereby possibly generating a
depolarization high enough to trigger an action potential.

• The effect of a synaptic contact is very dependent on the time at which the contact is made.
Contacts made in essentially the same time frame have a high probability of reinforcing each
other and thereby possibly generating a depolarization high enough to trigger an action
potential.

26.1 A Simple Model of a Dendrite: The Core Conductor Model:


We can model the above dendrite fiber reasonably accurately by using what is called the core
conductor model (see Figure 26.1).
We assume the following:

• The dendrite is made up of two concentric cylinders. Both cylinders are bilayer lipid mem-
branes with the same electrical characteristics. There is conducting fluid between both the
inner and outer cylinder (extracellular solution) and inside the inner cylinder (intracellular
solution). These solutions are assumed homogeneous and isotropic; in addition, Ohm’s law
is valid within them.

• All electrical variables are assumed to have cylindrical symmetry; hence, all variables are
independent of the traditional polar angle θ. In particular, currents in the inner and outer
fluids are longitudinal only (that is, up and down the dendritic fiber). Finally, current
through the membrane is always normal to the membrane (that is the membrane current is
only radial).

• A circuit theory description of currents and voltages is adequate for our model. At a given
position along the cylinder, each conductor is isopotential across its cross section (if you
slice the cylinder at some point perpendicular to its length, all voltage measurements within
the inner solution agree, as do all measurements within the outer solution). The only radial
voltage variation occurs in the membrane itself.


Figure 26.1: The Dendrite Fiber Model

There are many variables associated with this model; although the model is very simplified
from the actual biological complexity, it is still formidably detailed. These variables are described
below:

z: the position of along the cable measured from some reference zero (m),

t: the current time (sec),

Io (z, t): the total longitudinal current flowing in the +z direction in the outer conductor (amps),

Ii (z, t): the total longitudinal current flowing in the +z direction in the inner conductor (amps),

Jm (z, t): the membrane current density from the inner conductor to the outer conductor (amp/m²),

Km (z, t): the membrane current per unit length from the inner conductor to the outer conductor
(amp/m),

Ke (z, t): the current per unit length due to external sources applied in a cylindrically symmetric manner. So if we think of a presynaptic neuron's axon generating a postsynaptic pulse in a given postsynaptic neuron, we can envision this synaptic contact occurring at some point z along the cable and the resulting postsynaptic pulse as a Dirac delta function impulse applied to the cable as a high current K_e which lasts for a very short time (amp/m),

Vm (z, t): the membrane potential, which is considered positive when the inner membrane is positive with respect to the outer one (volts),

Vi (z, t): the potential in the inner conductor (volts),

Vo (z, t): the potential in the outer conductor (volts),

ro : the resistance per unit length in the outer conductor (ohms/m),

ri : the resistance per unit length in the inner conductor (ohms/m),

a: the radius of the inner cylinder (m).

Careful reasoning using Ohm's law and Kirchhoff's laws for current and voltage balance leads to the well-known steady state equations:

∂I_i/∂z = −K_m(z,t),   ∂I_o/∂z = K_m(z,t) − K_e(z,t)   (26.1)

∂V_i/∂z = −r_i I_i(z,t),   ∂V_o/∂z = −r_o I_o(z,t),   V_m = V_i − V_o   (26.2)

These equations look at what is happening in the concentric cylinder model at equilibrium; hence,
the change in the potential across the inner membrane is due entirely to the longitudinal variable
z. From equation 26.2, we see

∂V_m/∂z = ∂V_i/∂z − ∂V_o/∂z = r_o I_o(z,t) − r_i I_i(z,t)

implying

∂²V_m/∂z² = r_o ∂I_o/∂z − r_i ∂I_i/∂z = (r_i + r_o) K_m(z,t) − r_o K_e(z,t).

Using equation 26.1, we then obtain the core conductor equation

∂²V_m/∂z² = (r_i + r_o) K_m(z,t) − r_o K_e(z,t).   (26.3)

It is much more useful to look at this model in terms of transient variables which are pertur-
bations from rest values. We define

V_m(z,t) = V_m^0 + v_m(z,t),   K_m(z,t) = K_m^0 + k_m(z,t),   K_e(z,t) = K_e^0 + k_e(z,t)   (26.4)

where the rest values are respectively V_m^0 (membrane rest voltage), K_m^0 (membrane current per length base value) and K_e^0 (injected current per length base value). With the introduction of
these transient variables, we are able to model the flow of current across the inner membrane more precisely. We introduce the conductance per length g_m (Siemens/cm, or 1/(ohms·cm)) and capacitance per length c_m (farads/cm) of the membrane and note that we can think of a patch of membrane as a simple RC circuit. This leads to the transient cable equation

∂²v_m/∂z² = (r_i + r_o) g_m v_m(z,t) + (r_i + r_o) c_m ∂v_m/∂t − r_o k_e(z,t).   (26.5)

If we write the transient cable equation in an appropriately scaled form, we gain great insight into how membrane voltages propagate in time and space relative to what may be called fundamental scales. Define

τ_M = c_m/g_m   (26.6)

λ_C = 1/√((r_i + r_o) g_m)   (26.7)

Note that τ_M is independent of the geometry of the cable and depends only on dendritic fiber characteristics. We will call τ_M the fundamental time constant of the solution for reasons we will see shortly (this constant determines how quickly a membrane potential decays; it drops by a factor of 1/e in one time constant). On the other hand, the constant λ_C is dependent on the geometry of the cable fiber and we will find it is the fundamental space constant of the system (that is, the membrane potential drops by a factor of 1/e within this distance along the cable fiber).
If we let C_M and G_M denote the capacitance and conductance per square cm of the membrane, the circumference of the cable is 2πa and hence the capacitance and conductance per length are given by

c_m = 2πa C_M   (26.8)
g_m = 2πa G_M   (26.9)

and we see clearly that the ratio τ_M is simply C_M/G_M. This has no dependence on the cable radius, showing yet again that this constant is geometry independent. We note that the units of τ_M are seconds. The space constant can be shown to have units of cm and, defining ρ_i to be the resistivity (ohm-cm) of the inner conductor, we can show that

r_i = ρ_i/(π a²).   (26.10)

From this it follows (neglecting the small outer resistance r_o) that
λ_C = (1/√(2ρ_i G_M)) a^{1/2}.   (26.11)

Clearly, the space constant is proportional to the square root of the fiber radius and the pro-
portionality constant is geometry independent. Another important constant is the Thévenin
equivalent conductance, G∞ , which is defined to be

G_∞ = λ_C g_m = √(g_m/(r_i + r_o)).   (26.12)

For most biologically plausible situations, the outer conductor resistance per unit length is very small in comparison to the inner conductor's resistance per unit length. Hence, r_i ≫ r_o and equation 26.12 can be rewritten using equations 26.9 and 26.10 to have the form

G_∞ = √(g_m/r_i) = π √(2G_M/ρ_i) a^{3/2},   (26.13)

which shows that G_∞ is proportional to the three-halves power of the fiber radius a with a proportionality constant which is geometry independent. With all this said, we note that r_i + r_o = 1/(λ_C² g_m) and hence we can rewrite the transient cable equation as

λ_C² ∂²v_m/∂z² = v_m + τ_M ∂v_m/∂t − r_o λ_C² k_e.   (26.14)
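To get a feel for the sizes these formulas produce, here is a small MatLab calculation of τ_M, λ_C and G_∞ from a set of membrane parameters. The numerical values of C_M, G_M, ρ_i and a below are illustrative choices only, not measurements from any particular fiber.

% a minimal sketch: fundamental constants from illustrative membrane parameters
C_M   = 1.0e-6;   % membrane capacitance, F/cm^2 (illustrative)
G_M   = 1.0e-4;   % membrane conductance, S/cm^2 (illustrative)
rho_i = 60;       % resistivity of the inner conductor, ohm-cm (illustrative)
a     = 5.0e-4;   % fiber radius, cm (illustrative)
tau_M    = C_M/G_M                       % fundamental time constant, sec
lambda_C = sqrt(a/(2*rho_i*G_M))         % space constant, cm (neglecting r_o)
G_inf    = pi*sqrt(2*G_M/rho_i)*a^(3/2)  % semi-infinite input conductance, S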

26.2 Solutions to the Cable Model:

There are three important classes of solutions to the properly scaled cable equation: for the cases of an infinite, semi-infinite and finite length cable respectively; further, we are interested in both time independent and time dependent solutions. Since these kinds of solutions will be useful in understanding why we choose to augment our feed forward models of computation in the way that we do, we need to discuss some of these details. We will avoid the derivations of these solutions as there are a number of places where you can find such information, e.g. Rall (Rall (20) 1977) (though it is NOT an easy journey). Instead, let's build a table which lists the salient characteristics of these solutions.

26.2.1 Time Independent Solutions:

The three types of solutions here are for the infinite, semi-infinite and finite cable models. We model the applied current as a current impulse of the form k_e = I_e δ(z − 0), where I_e is the magnitude of the impulse, applied at position z = 0 using the Dirac delta function δ(z − 0). The resulting steady state equation is given by

λ_C² d²v_m/dz² = v_m − r_o λ_C² I_e δ(z − 0),

which has solution

v_m(z) = (λ_C r_o I_e / 2) e^{−|z|/λ_C}   (26.15)

The noticeable characteristics of this solution are that its spatial decay is completely determined by the space constant λ_C and the decay is symmetric across the injection site at z = 0. Within one space constant, the potential drops by a factor of 1/e.
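A quick way to see this attenuation is to plot equation 26.15 directly; the following sketch uses made-up values of r_o, I_e and λ_C just to show the shape.

% a minimal sketch: spatial decay of the steady state solution, eq. 26.15
lambda_C = 0.1; r_o = 1.0; I_e = 1.0;    % illustrative values only
z  = linspace(-0.5,0.5,501);
vm = (lambda_C*r_o*I_e/2)*exp(-abs(z)/lambda_C);
plot(z,vm); xlabel('z'); ylabel('v_m(z)');
% check: one space constant from the injection site the potential is down by 1/e
vm_at_lambda = (lambda_C*r_o*I_e/2)*exp(-1)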
In the semi-infinite cable case, we assume the cable begins at z = 0 and extends out to infinity
to the right. We assume a current Ie is applied at z = 0 and we can show the appropriate
differential equation to solve is

λ_C² d²v_m/dz² = v_m,   i_i(0) = I_e,

where ii denotes the transient inner conductor current. This system has solution

v_m(z) = (I_e/(g_m λ_C)) e^{−z/λ_C} = I_e √((r_i + r_o)/g_m) e^{−z/λ_C},

which we note is defined only for z ≥ 0. Note that the ratio i_i(0)/v_m(0) reduces to the Thévenin constant G_∞. This ratio is current to voltage, so it has units of conductance. Therefore, it is very useful to think of this ratio as telling us what conductance we see looking into the mouth of the semi-infinite cable. While heuristic in nature, this will give us a powerful way to judge the capabilities of dendritic fibers for information transmission.
Finally, in the finite cable version, we consider a finite length, 0 ≤ z ≤ ℓ, of cable with current I_e pumped in at z = 0 and a conductance load of G_e applied at the far end z = ℓ.
be thought of as a length of fiber whose two ends are capped with membrane that is identical to
the membrane that makes up the cable. The load conductance Ge represents the conductance of
the membrane capping the cable or the conductance of another cell body attached at that point.
The system to solve is now

λ_C² d²v_m/dz² = v_m,   i_i(0) = I_e,   i_i(ℓ) = G_e v_m(ℓ).

This has solution

v_m(z) = (I_e/G_∞) [cosh((ℓ−z)/λ_C) + (G_e/G_∞) sinh((ℓ−z)/λ_C)] / [sinh(ℓ/λ_C) + (G_e/G_∞) cosh(ℓ/λ_C)],   0 ≤ z ≤ ℓ,   (26.16)

with Thévenin equivalent conductance i_i(0)/v_m(0) given by

G_T(ℓ) = G_∞ [sinh(ℓ/λ_C) + (G_e/G_∞) cosh(ℓ/λ_C)] / [cosh(ℓ/λ_C) + (G_e/G_∞) sinh(ℓ/λ_C)].   (26.17)

Now if the end caps of the cable are patches of membrane whose specific conductance is the same as the rest of the fiber, the surface area of a cap is πa², which is much smaller than the surface of the cylindrical fiber, 2πaℓ. Hence, the conductance of the cylinder without caps, 2πaℓ G_M, is very large compared to the conductance of a cap, πa² G_M. So little current will flow through the caps and we can approximate this situation by thinking of the cap as an open circuit (thereby setting G_e = 0). We can think of this as the open-circuit case. Using equations 26.16 and 26.17, we find

v_m(z) = I_e cosh((ℓ−z)/λ_C) / (G_∞ sinh(ℓ/λ_C)),   0 ≤ z ≤ ℓ,   (26.18)

G_T(ℓ) = G_∞ tanh(ℓ/λ_C).   (26.19)
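Note how quickly the finite cable starts to look semi-infinite from its mouth: since G_T(ℓ) = G_∞ tanh(ℓ/λ_C) and tanh(3) ≈ 0.995, a cable only a few space constants long already presents essentially the input conductance G_∞. A quick sketch:

% a minimal sketch: open-circuit input conductance relative to G_infinity, eq. 26.19
ell = linspace(0,5,501);       % cable length measured in space constants
plot(ell, tanh(ell));          % G_T(l)/G_infinity with lambda_C = 1
xlabel('length in space constants'); ylabel('G_T/G_\infty');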

26.2.2 Time Dependent Solutions:

If we look for solutions to the infinite cable model that are time dependent, we must use the full transient cable equation, which is given by

λ_C² ∂²v_m(z,t)/∂z² = v_m(z,t) + τ_M ∂v_m/∂t − r_o λ_C² Q_e δ(z, t),   (26.20)

where the external current term k_e is modeled by a time-space impulse of magnitude Q_e using a two dimensional Dirac delta function. By a variety of techniques (not covered here!), this equation can be solved to obtain

v_m(z,t) = (r_o λ_C Q_e/τ_M) (1/√(4π(t/τ_M))) exp(−(z/λ_C)²/(4(t/τ_M))) e^{−t/τ_M}.   (26.21)
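Equation 26.21 is easy to explore numerically. The sketch below evaluates the impulse response on a space-time grid; the constants are illustrative values only.

% a minimal sketch: evaluate the infinite cable impulse response, eq. 26.21
lambda_C = 1.0; tau_M = 1.0; r_o = 1.0; Q_e = 1.0;   % illustrative values
z = linspace(-3,3,201); t = linspace(0.01,3,201);
[Z,T] = meshgrid(z,t);
V = (r_o*lambda_C*Q_e/tau_M)./sqrt(4*pi*(T/tau_M)) ...
    .*exp(-(Z/lambda_C).^2./(4*(T/tau_M))).*exp(-T/tau_M);
mesh(Z,T,V); xlabel('z'); ylabel('t'); zlabel('v_m');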

If we apply a current I_e for all t greater than zero, it is possible to apply the superposition principle for linear partial differential equations and write the solution in terms of a standard mathematical function, the error function, erf(x) = (2/√π) ∫_0^x e^{−y²} dy. Note that this situation is the time dependent analogue of the steady state problem for the infinite cable with a current applied at z = 0:

v_m(z,t) = (r_o λ_C I_e/4) (erf(√(t/τ_M) + |z/λ_C|/(2√(t/τ_M))) − 1) e^{|z|/λ_C}
         + (r_o λ_C I_e/4) (erf(√(t/τ_M) − |z/λ_C|/(2√(t/τ_M))) + 1) e^{−|z|/λ_C}   (26.22)

Although this solution is much more complicated in appearance, note that as t → ∞, we obtain the usual steady state solution given by equation 26.15. We can use this solution to see how quickly the voltage due to current applied to the fiber decays. We want to know when the voltage v_m decays to one half of its starting value. This is difficult to do analytically, but if you plot the voltage v_m vs. t at various positions z along the fiber, you can read off the time at which the voltage crosses the one half value. This leads to the important empirical result that the time it takes for the voltage to drop to half of its starting value, t_{1/2}, satisfies

z = 2(λ_C/τ_M) t_{1/2} − λ_C/2.   (26.23)

This gives us a relationship between the position on the fiber, z, and the half-life time. The slope of this line can then be interpreted as a velocity, the rate at which the fiber position for half-life changes; this is a conduction velocity and is given by

v_{1/2} = 2(λ_C/τ_M),   (26.24)

having units of cm/sec. Using our standard assumption that r_i ≫ r_o and our equations for λ_C and g_m in terms of membrane parameters, we find

v_{1/2} = 2 √(2G_M/ρ_i) (1/(2C_M)) a^{1/2},   (26.25)

indicating that the conduction velocity is proportional to the square root of the fiber radius a. Hence, twice the ratio of the fundamental space constant to the fundamental time constant is an important heuristic measure of the propagation speed of the voltage pulse.
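Since MatLab has a built-in erf, we can evaluate equation 26.22 and estimate the half-value time numerically instead of reading it off a plot. This is a minimal sketch with illustrative constants; only the last two lines do the actual half-life estimate.

% a minimal sketch: step response of the infinite cable, eq. 26.22
lambda_C = 1.0; tau_M = 1.0; r_o = 1.0; I_e = 1.0;   % illustrative values
vm = @(z,t) (r_o*lambda_C*I_e/4)*( ...
  (erf(sqrt(t/tau_M) + abs(z/lambda_C)./(2*sqrt(t/tau_M))) - 1).*exp(abs(z)/lambda_C) ...
+ (erf(sqrt(t/tau_M) - abs(z/lambda_C)./(2*sqrt(t/tau_M))) + 1).*exp(-abs(z)/lambda_C));
z0 = 2.0; t = linspace(0.01,10,5000);
v = vm(z0,t);
vss   = vm(z0,1e6);                  % effectively the steady state value
thalf = t(find(v >= vss/2, 1))       % first time v reaches half its final value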
The case of a finite cable, 0 ≤ z ≤ ℓ, requires a different set of boundary conditions than we have seen before. We think of the dendritic cylinder as having end caps made of the same membrane as its walls. For boundary conditions, we will impose what are called zero-rate end cap potentials; this means that each end cap is at equipotential, so the rate of change of the potential v_m with respect to the space variable λ is zero at both ends. Thus, we look for the solution to the homogeneous full transient cable equation

λ_C² ∂²v_m(z,t)/∂z² = v_m(z,t) + τ_M ∂v_m/∂t,   0 ≤ z ≤ ℓ, t ≥ 0,
∂v̂_m(0,τ)/∂λ = 0,   ∂v̂_m(L,τ)/∂λ = 0.   (26.26)

For convenience, we can replace equation 26.26 with a normalized form using the transformations

τ = t/τ_M,   λ = z/λ_C,   L = ℓ/λ_C,
v̂_m(λ, τ) = v_m(λ_C λ, τ_M τ),

∂²v̂_m/∂λ² = v̂_m + ∂v̂_m/∂τ,   0 ≤ λ ≤ L, τ ≥ 0.   (26.27)

Assuming a solution of the form v̂_m(λ, τ) = u(λ) w(τ), the technique of separation of variables leads to the coupled system as follows. The original equation becomes

(d²u/dλ²) w(τ) = u(λ) w(τ) + u(λ) (dw/dτ).

This leads immediately to the ratio equation

(d²u/dλ² − u(λ)) / u(λ) = (dw/dτ) / w(τ).

Since these ratios are equal for all choices of λ and τ , they must equal a common constant value we
will denote by β which is called the separation of variables constant. This leads to the decoupled
equations Equation 26.28 and Equation 26.29.

d²u/dλ² = (1 + β) u,   0 ≤ λ ≤ L,   du/dλ(0) = 0,   du/dλ(L) = 0,   (26.28)

dw/dτ = β w,   τ ≥ 0,   (26.29)

where the separation constant β, determined by looking for nontrivial solutions to equations 26.28 and 26.29, must have the form

β = −1 − α_n²,   α_n = nπ/L,

for integers n ≥ 0. This leads to solutions of the form

v̂_m^n(λ, τ) = A_n cos(α_n λ) e^{−(1+α_n²)τ}.   (26.30)

This implies that the most general solution is given by

v̂_m(λ, τ) = A_0 e^{−τ} + Σ_{n=1}^{∞} A_n cos(α_n λ) e^{−(1+α_n²)τ}   (26.31)

If we apply voltage data to the finite cable with spatial distribution v̂m (λ, 0) = V (λ), since the
eigenfunctions given by equation 26.30 form a complete or total orthonormal set in the space of
square integrable functions on [0, L], by the superposition principle, we can match the desired
voltage data V using standard Fourier series techniques to obtain:

V(λ) = A_0 + Σ_{n=1}^{∞} A_n cos(α_n λ),   (26.32)

A_0 = (1/L) ∫_0^L V(λ) dλ,   (26.33)

A_n = (2/L) ∫_0^L V(λ) cos(α_n λ) dλ.   (26.34)
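For a concrete profile V(λ), the coefficients in equations 26.33 and 26.34 are just integrals and can be approximated by simple quadrature. The sketch below uses trapz and an illustrative Gaussian voltage profile; rebuilding V from the truncated series is a useful sanity check.

% a minimal sketch: Fourier coefficients for the sealed finite cable, eqs. 26.33-26.34
L = 5; N = 10;                       % illustrative cable length and mode count
lam = linspace(0,L,2001);
V = exp(-(lam-2.5).^2);              % an illustrative applied voltage profile
A0 = (1/L)*trapz(lam,V);
A = zeros(N,1);
Vapprox = A0*ones(size(lam));
for n = 1:N
  A(n) = (2/L)*trapz(lam, V.*cos(n*pi/L*lam));
  Vapprox = Vapprox + A(n)*cos(n*pi/L*lam);
end
plot(lam,V,lam,Vapprox);             % compare V to its truncated expansion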

We are primarily interested in modeling the effect of the brief pulse of charge that is generated by a synaptic event. We will think of these Post Synaptic Pulses or PSPs as generated by Dirac delta functions which are normalized a bit differently: we assume that (1/L) ∫_0^L δ(λ) dλ = 1. We can show that if the amount of charge deposited in the PSP is Q_e, then a voltage impulse applied at the spatial point λ = ξ will be given by V(λ) = (Q_e/(λ_C c_m)) δ(λ − ξ). This leads to analogs of the Fourier coefficient equations 26.33 - 26.34 given by

V(λ) = A_0 + Σ_{n=1}^{∞} A_n cos(α_n λ),   (26.35)

Figure 26.2: The Ball and Stick Model

A_0 = Q_e/(L λ_C c_m),   (26.36)

A_n = 2 A_0 cos(α_n ξ),   (26.37)

with solution

v̂_m(λ, τ) = A_0 e^{−τ} + Σ_{n=1}^{∞} 2A_0 cos(α_n ξ) cos(α_n λ) e^{−(1+α_n²)τ}.   (26.38)
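We can watch this PSP spread and decay by summing a modest number of modes in equation 26.38. In the sketch below, A0 simply stands in for the physical prefactor Q_e/(L λ_C c_m) and all the numbers are illustrative.

% a minimal sketch: truncated mode sum for the PSP response, eq. 26.38
L = 5; xi = 2.5; N = 60; A0 = 1.0;   % illustrative values
lam = linspace(0,L,301); tau = [0.01 0.1 0.5 1.0];
for k = 1:length(tau)
  v = A0*exp(-tau(k))*ones(size(lam));
  for n = 1:N
    an = n*pi/L;
    v = v + 2*A0*cos(an*xi)*cos(an*lam)*exp(-(1+an^2)*tau(k));
  end
  plot(lam,v); hold on;                % voltage profile at each time
end
xlabel('\lambda'); ylabel('v_m'); hold off;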

26.3 The Ball and Stick Model:

We can now extend our model to what is called the ball and stick neuron model, which consists of an isopotential sphere (the soma) coupled to a single dendritic fiber input line. We model the soma as a simple parallel resistance/capacitance network and the dendrite as a finite length cable as previously discussed (see Figure 26.2).

In Figure 26.2, you see the terms I_0, the input current at the soma/dendrite junction starting at τ = 0; I_D, the portion of the input current that enters the dendrite (effectively determined by the input conductance to the finite cable, G_D); I_S, the portion of the input current that enters the soma (effectively determined by the soma conductance G_S); and C_S, the soma membrane capacitance. We assume that the electrical properties of the soma and dendrite membrane are the same; this implies that the fundamental time and space constants of the soma and dendrite are given by the same constants (we will use our standard notation τ_M and λ_C as usual). It takes a bit of work, but it is possible to show that with a reasonable zero-rate left end cap condition the appropriate boundary condition at λ = 0 is given by

ρ ∂v̂_m/∂λ(0, τ) = tanh(L) (v̂_m(0, τ) + ∂v̂_m/∂τ(0, τ)),   (26.39)

where we introduce the fundamental ratio ρ = G_D/G_S, the ratio of the dendritic conductance to soma
conductance. For more discussion of the ball and stick model boundary conditions we use here,
you can look at the treatments in (Johnston and Wu (14) 1995) and (Rall (20) 1977). The full
system to solve is therefore:

∂²v̂_m/∂λ² = v̂_m + ∂v̂_m/∂τ,   0 ≤ λ ≤ L, τ ≥ 0,   (26.40)

∂v̂_m/∂λ(L, τ) = 0,   (26.41)

ρ ∂v̂_m/∂λ(0, τ) = tanh(L) (v̂_m(0, τ) + ∂v̂_m/∂τ(0, τ)).   (26.42)

Applying the technique of separation of variables, v̂_m(λ, τ) = u(λ) w(τ), leads to the system:

u''(λ) w(τ) = u(λ) w(τ) + u(λ) w'(τ),
ρ u'(0) w(τ) = tanh(L) (u(0) w(τ) + u(0) w'(τ)),
u'(L) w(τ) = 0.

This leads again to the ratio equation

(u''(λ) − u(λ)) / u(λ) = w'(τ) / w(τ).

Since these ratios hold for all τ and λ, they must equal a common constant β. Thus, we have

d²u/dλ² = (1 + β) u,   0 ≤ λ ≤ L,   (26.43)

dw/dτ = β w,   τ ≥ 0.   (26.44)

The boundary conditions then become

u'(L) = 0,
ρ u'(0) = (1 + β) tanh(L) u(0).

The only case where we can have nontrivial solutions to Equation 26.43 occurs when 1 + β = −α² for some constant α. The general solution to Equation 26.43 is then of the form A cos(α(L − λ)) + B sin(α(L − λ)). Since

u'(λ) = α A sin(α(L − λ)) − α B cos(α(L − λ)),

we see

u'(L) = −α B = 0,

and so B = 0. Then,

u(0) = A cos(αL),
u'(0) = α A sin(αL).

To satisfy the last boundary condition, we find, since 1 + β = −α²,

α A (ρ sin(αL) + α tanh(L) cos(αL)) = 0.

A nontrivial solution for A requires that α satisfy the transcendental equation

tan(αL) = −α tanh(L)/ρ = −k (αL),   (26.45)

where k = tanh(L)/(ρL). The values of α that satisfy Equation 26.45 give us the eigenvalues of our original problem, β = −1 − α². The eigenvalues of our system can be determined by the solution of the transcendental equation 26.45. This is easy to do graphically, as you can see in Figure 26.3. It can be shown that the eigenvalues form a monotonically increasing sequence starting with α_0 = 0 and with the values α_n L approaching asymptotically the values (2n−1)π/2. Hence, there
are a countable number of eigenvalues of the form β_n = −1 − α_n², leading to a general solution of the form

v̂_m^n(λ, τ) = A_n cos(α_n(L − λ)) e^{−(1+α_n²)τ}.   (26.46)
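As a cross-check on the graphical picture (and on the home grown root finder we develop later in this chapter), MatLab's built-in fzero can locate the roots of equation 26.45. This sketch works in the variable u = αL and uses the sin/cos form of the equation to avoid the singularities of tan; each root is bracketed between successive asymptotes.

% a minimal sketch: first few roots of tan(u) = -k u via fzero, with u = alpha L
L = 5; rho = 10; k = tanh(L)/(rho*L);
f = @(u) sin(u) + k*u.*cos(u);
for n = 1:4
  u0 = fzero(f, [(2*n-1)*pi/2 + 1e-6, n*pi - 1e-6]);   % bracket the nth root
  fprintf('n = %d  alpha_n L = %10.7f\n', n, u0);
end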

Figure 26.3: The Ball and Stick Eigenvalue Problem

Hence, this system has the eigenvalue/eigenfunction pairs given by α_n (the solutions to the transcendental equation 26.45) and cos(α_n(L − λ)). These eigenfunctions are not mutually orthogonal in the L² inner product (this system is not a Stürm-Liouville system). In fact, we can show by direct integration that for n ≠ m,

∫_0^L cos(α_n(L − λ)) cos(α_m(L − λ)) dλ = sin((α_n + α_m)L)/(α_n + α_m) + sin((α_n − α_m)L)/(α_n − α_m).

Since α_n L approaches (2n−1)π/2 as n → ∞, we see there is an integer Q so that

∫_0^L cos(α_n(L − λ)) cos(α_m(L − λ)) dλ ≈ 0

if n and m exceed Q. Thus, as usual, we expect the most general solution is given by

v̂_m(λ, τ) = A_0 e^{−τ} + Σ_{n=1}^{Q} A_n cos(α_n(L − λ)) e^{−(1+α_n²)τ}   (26.47)
           + Σ_{n=Q+1}^{∞} A_n cos(α_n(L − λ)) e^{−(1+α_n²)τ}.   (26.48)

Since the spatial eigenfunctions are approximately orthogonal, the computation of the coefficients
An for n > Q can be handled with a straightforward inner product calculation. The calculation
of the first Q coefficients must be handled as a linear algebra problem.

26.3.1 The Ball and Stick Model With Applied Voltage


Let's assume that we are applying a spatially varying voltage V(λ) at τ = 0 to our model. This means our full problem is

∂²v̂_m/∂λ² = v̂_m + ∂v̂_m/∂τ,   0 ≤ λ ≤ L, τ ≥ 0,
v̂_m(λ, 0) = V(λ),   0 ≤ λ ≤ L,
∂v̂_m/∂λ(L, τ) = 0,
ρ ∂v̂_m/∂λ(0, τ) = tanh(L) (v̂_m(0, τ) + ∂v̂_m/∂τ(0, τ)).

We know that we can approximate the solution v̂_m as

v̂_m(λ, τ) = A_0 e^{−τ} + Σ_{n=1}^{Q} A_n cos(α_n(L − λ)) e^{−(1+α_n²)τ}

for any choice of integer Q we wish. Of course, we make an error in making this approximation! At τ = 0, to satisfy the desired applied voltage condition, we must have

v̂_m(λ, 0) = A_0 + Σ_{n=1}^{Q} A_n cos(α_n(L − λ)).

We can approximately solve this problem then by finding constants A_0 to A_Q so that

V(λ) = A_0 + Σ_{n=1}^{Q} A_n cos(α_n(L − λ)).

For convenience, let φ_i(λ) denote the function cos(α_i(L − λ)), where φ_0(λ) = 1 always. Then, note for each integer i, 0 ≤ i ≤ Q, we have

(2/L) ∫_0^L V(s) φ_i(s) ds = A_0 (2/L) ∫_0^L φ_0(s) φ_i(s) ds + Σ_{n=1}^{Q} A_n (2/L) ∫_0^L φ_n(s) φ_i(s) ds.

To make this easier to write out, let's introduce a kind of “inner product” notation: we define <f, g> here to be

<f, g> = (2/L) ∫_0^L f(s) g(s) ds.

Using this new notation, we see we are trying to find constants A_0 to A_Q so that

[ <φ_0, φ_0>  <φ_1, φ_0>  ···  <φ_Q, φ_0> ] [ A_0 ]   [ <V, φ_0> ]
[ <φ_0, φ_1>  <φ_1, φ_1>  ···  <φ_Q, φ_1> ] [ A_1 ]   [ <V, φ_1> ]
[     ...          ...             ...    ] [ ... ] = [    ...   ]
[ <φ_0, φ_Q>  <φ_1, φ_Q>  ···  <φ_Q, φ_Q> ] [ A_Q ]   [ <V, φ_Q> ]

For convenience, call this coefficient matrix M and define the vectors B and D by

B = [A_0, A_1, ..., A_Q]^T,   D = [<V, φ_0>, <V, φ_1>, ..., <V, φ_Q>]^T.

Then we find the desired constants A_0 to A_Q by solving

M B = D.
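Before we simplify these inner products analytically below, note that the whole system can also be assembled by brute-force quadrature; this is a useful numerical check. The sketch assumes the eigenvalues α_1, ..., α_Q have already been found and stored in a column vector alpha, and that V is a function handle for the applied voltage profile.

% a minimal sketch: assemble M and D by quadrature and solve M*B = D
L = 5; Q = length(alpha);
s = linspace(0,L,2001);
phi = [ones(1,length(s)); cos(alpha*(L - s))];   % rows are phi_0, ..., phi_Q
M = zeros(Q+1); D = zeros(Q+1,1);
for i = 1:Q+1
  for j = 1:Q+1
    M(i,j) = (2/L)*trapz(s, phi(j,:).*phi(i,:)); % <phi_j, phi_i>
  end
  D(i) = (2/L)*trapz(s, V(s).*phi(i,:));         % <V, phi_i>
end
B = M\D;                                         % coefficients A_0, ..., A_Q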

If M is invertible, this is an easy calculation. We can simplify this a bit further. We know

1. The case n ≠ m with n and m not equal to 0: Here, we have

∫_0^L cos(α_n(L − λ)) cos(α_m(L − λ)) dλ = sin((α_n + α_m)L)/(α_n + α_m) + sin((α_n − α_m)L)/(α_n − α_m).

However, we also know

sin(α_n L) = −k (α_n L) cos(α_n L).

Standard trigonometric formulae tell us

sin(α_n L + α_m L) = sin(α_n L) cos(α_m L) + sin(α_m L) cos(α_n L),
sin(α_n L − α_m L) = sin(α_n L) cos(α_m L) − sin(α_m L) cos(α_n L).

So, the pieces of the “inner products” become

sin((α_n + α_m)L) = sin(α_n L) cos(α_m L) + sin(α_m L) cos(α_n L)
                  = −k (α_n L) cos(α_m L) cos(α_n L) − k (α_m L) cos(α_m L) cos(α_n L)
                  = −k (α_n + α_m) L cos(α_m L) cos(α_n L)

and

sin((α_n − α_m)L) = sin(α_n L) cos(α_m L) − sin(α_m L) cos(α_n L)
                  = −k (α_n L) cos(α_m L) cos(α_n L) + k (α_m L) cos(α_m L) cos(α_n L)
                  = −k (α_n − α_m) L cos(α_m L) cos(α_n L),

and hence,

(2/L) ∫_0^L cos(α_n(L − λ)) cos(α_m(L − λ)) dλ = 2 sin((α_n + α_m)L)/((α_n + α_m)L) + 2 sin((α_n − α_m)L)/((α_n − α_m)L)
                                               = −4k cos(α_m L) cos(α_n L).

2. If n = 0 with m > 0: Since n = 0, the “inner product” integration is a bit simpler. We have

(2/L) ∫_0^L cos(α_m(L − λ)) dλ = 2 sin(α_m L)/(α_m L).

However, of course, we also know

sin(α_m L) = −k (α_m L) cos(α_m L).

So, the “inner product” becomes

2 sin(α_m L)/(α_m L) = −2k cos(α_m L).

Hence, we conclude

(2/L) ∫_0^L cos(α_m(L − λ)) dλ = −2k cos(α_m L).

3. n = m, n > 0: Here, a direct integration gives

(2/L) ∫_0^L cos²(α_n(L − λ)) dλ = (2/L) ∫_0^L (1 + cos(2α_n(L − λ)))/2 dλ = 1 + sin(2α_n L)/(2α_n L).

4. n = m = 0: Finally, we have

(2/L) ∫_0^L dλ = 2.

Plugging this into our “inner products”, we see we need to solve

[ −1/(2k)          (1/2) cos(β_1)      ···   (1/2) cos(β_Q)    ] [ A_0 ]              [ <V, φ_0> ]
[ (1/2) cos(β_1)   −(1/(4k)) γ_1       ···   cos(β_1) cos(β_Q) ] [ A_1 ]              [ <V, φ_1> ]
[      ...              ...                        ...         ] [ ... ]  = −(1/(4k)) [    ...   ]
[ (1/2) cos(β_Q)   cos(β_Q) cos(β_1)   ···   −(1/(4k)) γ_Q     ] [ A_Q ]              [ <V, φ_Q> ]

where we use β_i to denote α_i L and γ_i to replace the term 1 + sin(2α_i L)/(2α_i L).
For example, if L = 5, ρ = 10 and Q = 5, we find with an easy MatLab calculation that

-25.0023   -0.4991    0.4962   -0.4917    0.4855   -0.4778
 -0.4991  -12.2521   -0.9906    0.9815   -0.9691    0.9538
  0.4962   -0.9906  -12.2549   -0.9760    0.9637   -0.9485
 -0.4917    0.9815   -0.9760  -12.2594   -0.9548    0.9397
  0.4855   -0.9691    0.9637   -0.9548  -12.2655   -0.9279
 -0.4778    0.9538   -0.9485    0.9397   -0.9279  -12.2728

which has a nonzero determinant, so we can solve the required system.

Applied Voltage Pulses

If we specialize to an applied voltage pulse V(λ) = V* δ(λ − λ_0), note this is interpreted as a sequence of pulses k_e^C satisfying, for all positive C,

∫_{λ_0−C}^{λ_0+C} k_e^C(s) ds = V*.

Hence, for all C, we have

<k_e^C, φ_0> = (2/L) ∫_{λ_0−C}^{λ_0+C} k_e^C(s) ds = (2/L) V*.

Thus, we use

<V* δ(λ − λ_0), φ_0> = 2V*/L.

Following previous discussions, we then find, for any i larger than 0, that

<V* δ(λ − λ_0), φ_i> = (2V*/L) cos(α_i(L − λ_0)).

Therefore, for an impulse voltage of size V* applied to the cable at position λ_0, we find the constants we need by solving M B = D for

D = −(V*/(2kL)) [ 1, cos(α_1(L − λ_0)), ..., cos(α_Q(L − λ_0)) ]^T.

If we had two impulses applied, one at λ_0 of size V_0^* and one at λ_1 of size V_1^*, we can find the desired solution by summing the solutions to each of the respective parts. In this case, we would find vector solutions B^0 and B^1 following the procedure above. The solution at λ = 0, the axon hillock, would then be

v̂_m(0, τ) = (B_0^0 + B_0^1) e^{−τ} + Σ_{n=1}^{Q} (B_n^0 + B_n^1) cos(α_n L) e^{−(1+α_n²)τ}.

26.4 Ball and Stick Numerical Solutions


To see how to handle the Ball and Stick model numerically with MatLab, we will write scripts that do several things. We will try to explain, as best we can, the general principles of our numerical modeling using home grown code rather than built-in MatLab functions. We need

1. A way to find Q eigenvalues numerically.

2. A routine to construct the matrix M we need for our approximations.

3. A way to solve the resulting matrix system for the coefficients we will use to build our
approximate voltage model.

26.4.1 Numerical Eigenvalues


To solve for the eigenvalues numerically, we will use root finding techniques that were discussed in Chapter 24. Using the finite difference Newton's method, we can solve for the roots of

tan(αL) + (tanh(L)/(ρL)) (αL) = 0

for various choices of L and ρ using a MatLab script. We know the roots αL we seek are always in the intervals [π/2, π], [3π/2, 2π] and so on, so it is easy for us to find useful lower and upper bounds for the root finding method. Since tan is undefined at the odd multiples of π/2, numerically it is much easier to solve instead

sin(αL) + (tanh(L)/(ρL)) (αL) cos(αL) = 0.

To do this, we write the function f2 as

Listing 26.1: Ball and Stick Eigenvalues

function y = f2(x,L,rho)
%
% L   = length of the dendrite cable in spatial lengths
% rho = ratio of dendrite to soma conductance, G_D/G_S
%
kappa = tanh(L)/(rho*L);
u = x*L;
y = sin(u) + kappa*u*cos(u);

The original finite difference Newton Method was given in Chapter 24 and we reproduce it here
as we will be modifying it.

Listing 26.2: Original Newton Finite Difference Method

function [x,fx,nEvals,aF,bF] = ...
          GlobalNewton(fName,a,b,tolx,tolf,nEvalsMax)
%
% fName      a string that is the name of the function f(x)
% a,b        we look for the root in the interval [a,b]
% tolx       tolerance on the size of the interval
% tolf       tolerance of f(current approximation to root)
% nEvalsMax  maximum number of derivative evaluations
%
% x          approximate zero of f
% fx         the value of f at the approximate zero
% nEvals     the number of derivative evaluations needed
% aF,bF      the final interval the approximate root lies in, [aF,bF]
%
% Termination: interval [a,b] has size < tolx
%              |f(approximate root)| < tolf
%              have exceeded nEvalsMax derivative evaluations
%
fa = feval(fName,a);
fb = feval(fName,b);
x = a;
fx = feval(fName,x);
delta = sqrt(eps)*abs(x);
fpval = feval(fName,x+delta);
fpx = (fpval-fx)/delta;

nEvals = 1;
k = 1;
disp(' ')
disp('    Step   |   k  |    a(k)     |    x(k)     |    b(k)')
disp(sprintf('Start      | %6d | %12.7f | %12.7f | %12.7f',k,a,x,b));
while (abs(a-b)>tolx) & (abs(fx)>tolf) & (nEvals<nEvalsMax) | (nEvals==1)
  % [a,b] brackets a root and x = a or x = b
  check = StepIsIn(x,fx,fpx,a,b);
  if check
    % take a Newton step
    x = x - fx/fpx;
  else
    % take a bisection step
    x = (a+b)/2;
  end
  fx = feval(fName,x);
  fpval = feval(fName,x+delta);
  fpx = (fpval-fx)/delta;
  nEvals = nEvals+1;
  if fa*fx <= 0
    % there is a root in [a,x]; use the right endpoint
    b = x;
    fb = fx;
  else
    % there is a root in [x,b]; bring in the left endpoint
    a = x;
    fa = fx;
  end
  k = k+1;
  if (check)
    disp(sprintf('Newton     | %6d | %12.7f | %12.7f | %12.7f',k,a,x,b));
  else
    disp(sprintf('Bisection  | %6d | %12.7f | %12.7f | %12.7f',k,a,x,b));
  end
end
aF = a;
bF = b;

Since our function f2(x,L,rho) depends on three parameters, we need to modify our Newton
finite difference method feval statements to reflect this. The new code is pretty much the same
with each feval statement changed to add ,L,rho to the argument list.

Listing 26.3: Modified Finite Difference Newton Method

function [x,fx,nEvals,aF,bF] = ...
          GlobalNewtonFDTwo(fName,L,rho,a,b,tolx,tolf,nEvalsMax)
%
% fName      a string that is the name of the function f(x)
% a,b        we look for the root in the interval [a,b]
% tolx       tolerance on the size of the interval
% tolf       tolerance of f(current approximation to root)
% nEvalsMax  maximum number of derivative evaluations
%
% x          approximate zero of f
% fx         the value of f at the approximate zero
% nEvals     the number of derivative evaluations needed
% aF,bF      the final interval the approximate root lies in, [aF,bF]
%
% Termination: interval [a,b] has size < tolx
%              |f(approximate root)| < tolf
%              have exceeded nEvalsMax derivative evaluations
%
fa = feval(fName,a,L,rho);
fb = feval(fName,b,L,rho);
x = a;
fx = feval(fName,x,L,rho);
delta = sqrt(eps)*abs(x);
fpval = feval(fName,x+delta,L,rho);
fpx = (fpval-fx)/delta;

nEvals = 1;
k = 1;
disp(' ')
disp('    Step   |   k  |    a(k)     |    x(k)     |    b(k)')
disp(sprintf('Start      | %6d | %12.7f | %12.7f | %12.7f',k,a,x,b));
while (abs(a-b)>tolx) & (abs(fx)>tolf) & (nEvals<nEvalsMax) | (nEvals==1)
  % [a,b] brackets a root and x = a or x = b
  check = StepIsIn(x,fx,fpx,a,b);
  if check
    % take a Newton step
    x = x - fx/fpx;
  else
    % take a bisection step
    x = (a+b)/2;
  end
  fx = feval(fName,x,L,rho);
  fpval = feval(fName,x+delta,L,rho);
  fpx = (fpval-fx)/delta;
  nEvals = nEvals+1;
  if fa*fx <= 0
    % there is a root in [a,x]; use the right endpoint
    b = x;
    fb = fx;
  else
    % there is a root in [x,b]; bring in the left endpoint
    a = x;
    fa = fx;
  end
  k = k+1;
  if (check)
    disp(sprintf('Newton     | %6d | %12.7f | %12.7f | %12.7f',k,a,x,b));
  else
    disp(sprintf('Bisection  | %6d | %12.7f | %12.7f | %12.7f',k,a,x,b));
  end
end
aF = a;
bF = b;

We can then write the script FindRoots.

Listing 26.4: Finding Ball and Stick Eigenvalues

function z = FindRoots(Q,L,rho,lbtol,ubtol)
%
% Q     = the number of eigenvalues we want to find
% L     = the length of the dendrite in space constants
% rho   = the ratio of dendrite to soma conductance, G_D/G_S
% lbtol = a small tolerance factor applied to the lower bounds we use
% ubtol = a small tolerance factor applied to the upper bounds we use
%
z  = zeros(Q,1);
LB = zeros(Q,1);
UB = zeros(Q,1);
kappa = tanh(L)/(rho*L);

for n = 1:Q
  LB(n) = ((2*n-1)/(2*L))*pi*lbtol;
  UB(n) = (n*pi/L)*ubtol;
  [z(n),fx,nEvals,aF,bF] = GlobalNewtonFDTwo('f2',L,rho,LB(n),UB(n),10^-6,10^-8,500);
  w  = z*L;
  lb = L*LB;
  disp(sprintf('n = %3d EV = %12.7f LB = %12.7f Error = %12.7f',n,w(n),lb(n),w(n)-lb(n)));
  disp(sprintf('n %3d function value = %12.7f original function value = %12.7f', ...
       n,sin(w(n))+kappa*w(n)*cos(w(n)),tan(w(n))+kappa*w(n)));
end

Let's try running this for the first two eigenvalues. Here is the session:

>> FindRoots(2,5,10,1.0,1.0);

Step | k | a(k) | x(k) | b(k)


Start | 1 | 0.3141593 | 0.3141593 | 0.6283185
Bisection | 2 | 0.4712389 | 0.4712389 | 0.6283185
Bisection | 3 | 0.5497787 | 0.5497787 | 0.6283185
Newton | 4 | 0.5497787 | 0.6186800 | 0.6186800
Newton | 5 | 0.6160148 | 0.6160148 | 0.6186800
Newton | 6 | 0.6160148 | 0.6160149 | 0.6160149
n = 1 EV = 3.0800745 LB = 1.5707963 Error = 1.5092782
n 1 function value = -0.0000000 original function value = 0.0000000

Step | k | a(k) | x(k) | b(k)


Start | 1 | 0.9424778 | 0.9424778 | 1.2566371
Bisection | 2 | 1.0995574 | 1.0995574 | 1.2566371
Bisection | 3 | 1.1780972 | 1.1780972 | 1.2566371
Newton | 4 | 1.1780972 | 1.2335644 | 1.2335644
Newton | 5 | 1.2321204 | 1.2321204 | 1.2335644
n = 2 EV = 6.1606022 LB = 4.7123890 Error = 1.4482132
n 2 function value = -0.0000000 original function value = -0.0000000
>>

Now we will run this for 20 eigenvalues for L = 5 and ρ = 10, but we have gone back into GlobalNewtonFDTwo and commented out the intermediate prints to save space! We keep printing out the actual f2 values just to make sure our numerical routines are working like we want.

>> FindRoots(20,5,10,1.0,1.0);
n = 1 EV = 3.0800745 LB = 1.5707963 Error = 1.5092782
n 1 function value = -0.0000000 original function value = 0.0000000
n = 2 EV = 6.1606022 LB = 4.7123890 Error = 1.4482132
n 2 function value = -0.0000000 original function value = -0.0000000
n = 3 EV = 9.2420168 LB = 7.8539816 Error = 1.3880352
n 3 function value = -0.0000000 original function value = 0.0000000
n = 4 EV = 12.3247152 LB = 10.9955743 Error = 1.3291410
n 4 function value = 0.0000000 original function value = 0.0000000
...

n =  20 EV =  61.9402349 LB =  61.2610567 Error =   0.6791782
n  20 function value = 0.0000000 original function value = 0.0000000

26.4.2 The Ball and Stick Matrix Construction


To construct the matrix M we need, we will use the following function.

Listing 26.5: Finding the Ball and Stick Coefficient Matrix

function M = FindM(Q,L,rho,z)
%
% Q   = the number of eigenvalues we want to find
% L   = the length of the dendrite in space constants
% rho = the ratio of dendrite to soma conductance, G_D/G_S
% M   = matrix of coefficients for our approximate voltage model
%
% In MatLab the root numbering is off by 1; so
% the roots we find start at 1 and end at Q.
% So M is Q+1 by Q+1.
kappa = tanh(L)/(rho*L);
w = zeros(1+Q,1);
M = zeros(1+Q,1+Q);
w(1) = 0;
% set first diagonal position
M(1,1) = -1/(2*kappa);
for n = 1:Q
  w(n+1) = z(n)*L;
end
% set first row
for n = 2:Q+1
  M(1,n) = 0.5*cos(w(n));
end
% set first column
for n = 2:Q+1
  M(n,1) = 0.5*cos(w(n));
end
% set main block
for m = 2:Q+1
  for n = 2:Q+1
    if m ~= n
      M(m,n) = cos(w(m))*cos(w(n));
    end
    if m == n
      M(m,m) = -1/(4*kappa)*(1 + sin(2*w(m))/(2*w(m)));
    end
  end
end

This is pretty easy to use. For example, for Q = 6, we have

>> Q = 6;
>> L = 5;
>> rho = 10;

>> z = FindRoots(Q,L,rho,1.0,1.0);
n = 1 EV = 3.0800745 LB = 1.5707963 Error = 1.5092782
n 1 function value = -0.0000000 original function value = 0.0000000
n = 2 EV = 6.1606022 LB = 4.7123890 Error = 1.4482132
n 2 function value = -0.0000000 original function value = -0.0000000
n = 3 EV = 9.2420168 LB = 7.8539816 Error = 1.3880352
n 3 function value = -0.0000000 original function value = 0.0000000
n = 4 EV = 12.3247152 LB = 10.9955743 Error = 1.3291410
n 4 function value = 0.0000000 original function value = 0.0000000
n = 5 EV = 15.4090436 LB = 14.1371669 Error = 1.2718767
n 5 function value = -0.0000000 original function value = 0.0000000
n = 6 EV = 18.4952884 LB = 17.2787596 Error = 1.2165288
n 6 function value = 0.0000000 original function value = 0.0000000
>> M = FindM(Q,L,rho,z);

The resulting M matrix is

M =

-25.0023   -0.4991    0.4962   -0.4917    0.4855   -0.4778    0.4690
 -0.4991  -12.2521   -0.9906    0.9815   -0.9691    0.9538   -0.9361
  0.4962   -0.9906  -12.2549   -0.9760    0.9637   -0.9485    0.9309
 -0.4917    0.9815   -0.9760  -12.2594   -0.9548    0.9397   -0.9223
  0.4855   -0.9691    0.9637   -0.9548  -12.2655   -0.9279    0.9106
 -0.4778    0.9538   -0.9485    0.9397   -0.9279  -12.2728   -0.8963
  0.4690   -0.9361    0.9309   -0.9223    0.9106   -0.8963  -12.2812

Since we know that eventually the roots α_n L become close to an odd multiple of π/2, we see that the applied voltage at τ = 0, which is v̂_m(λ, 0), is

v̂_m(λ, 0) = A_0 + Σ_{n=1}^{Q} A_n cos(α_n(L − λ)) + Σ_{n=Q+1}^{∞} A_n cos(α_n(L − λ))

where Q + 1 is the index past which cos(α_n L) = 0 within our tolerance choice. At λ = 0, we have

v̂_m(0, 0) = A_0 + Σ_{n=1}^{Q} A_n cos(α_n L) + Σ_{n=Q+1}^{∞} A_n cos(α_n L)

However, the terms past Q + 1 are zero and so our approximate solution at the axon hillock at time 0 is just

v̂_m(0, 0) = A_0 + Σ_{n=1}^{Q} A_n cos(α_n L).

An interesting question is how to estimate Q. We did it with a short MatLab script.

Listing 26.6: Finding Q

function N = FindRootSize(EndValue,L,rho,lbtol,ubtol)
%
% EndValue = how far we want to search to find
%            where the approximation can stop
% Q        = the number of eigenvalues we want to find
% L        = the length of the dendrite in space constants
% rho      = the ratio of dendrite to soma conductance, G_D/G_S
% lbtol    = a small tolerance factor applied to the lower bounds we use
% ubtol    = a small tolerance factor applied to the upper bounds we use
%
% set Q
Q = EndValue;
z  = zeros(Q,1);
LB = zeros(Q,1);
UB = zeros(Q,1);
kappa = tanh(L)/(rho*L);
N = Q;
for n = 1:Q
  LB(n) = ((2*n-1)/(2*L))*pi*lbtol;
  UB(n) = (n*pi/L)*ubtol;
  [z(n),fx,nEvals,aF,bF] = GlobalNewtonFDTwo('f2',L,rho,LB(n),UB(n),10^-6,10^-8,500);
  w = z*L;
  testvalue = abs(cos(w(n)));
  if (abs(testvalue) < 10^-3)
    disp(sprintf('n = %3d root = %12.7f testvalue = %12.7f',n,w(n),testvalue));
    N = n;
    break;
  end
end
disp(sprintf('n = %3d root = %12.7f testvalue = %12.7f',n,w(n),testvalue));

We can find Q for given values of L and ρ as follows:

>> L = 5;
>> rho = 10;
>> N = FindRootSize(20000,L,rho,1.0,1.0);
n = 15918 root = 50006.3020635 testvalue = 0.0010000
>>

Of course, we can’t really use this information! If you try to solve our approximation problem
using Q = 15918, you will probably do as we did and cause your laptop to die in horrible ways.
The resulting matrix M we need is 15918 × 15918 in size and most of us don’t have enough
memory for that! There are ways around this, but for our purposes, we don’t really need to use
a large value of Q.

26.4.3 Solving For Impulse Voltage Sources


First, we need to set up the data vector. We do this with the script

Listing 26.7: Finding the Ball and Stick Data Vector

function D = FindData(Q,L,rho,z,Vmax,lambda0)
%
% Q       = the number of eigenvalues we want to find
% L       = the length of the dendrite in space constants
% rho     = the ratio of dendrite to soma conductance, G_D/G_S
% z       = eigenvalue vector
% D       = data vector
% Vmax    = size of voltage impulse
% lambda0 = location of the voltage impulse
%
kappa = tanh(L)/(rho*L);
w = zeros(1+Q,1);
D = zeros(1+Q,1);
multiplier = -Vmax/(2*kappa*L);
%
w(1) = 0;
for n = 1:Q
  w(n+1) = z(n);
end
D(1) = multiplier;
for n = 2:Q+1
  D(n) = multiplier*cos(w(n)*(L - lambda0));
end

Next, we can use the linear system solvers we have discussed in Chapter 23. To solve our system M B = D, we use the LU decomposition and the lower and upper triangular solvers as follows:

1. We find the LU decomposition of M (with pivoting).

2. Then, we use our lower triangular solver to carry out the forward substitution step.

3. Finally, we use our upper triangular solver to find the solution.

A typical MatLab session would have this form for our previous L = 5, Q = 6 (remember, this gives us 7 values as the value 0 is also used in our solution) and ρ = 10, for a pulse of size 200 administered at location λ = 2.5.

Listing 26.8: Finding The Approximate Solution

>> [L,U,piv] = GePiv(M);
>> y = LTriSol(L,D(piv));
>> B = UTriSol(U,y);

where B is the vector of coefficients we need to form the solution

V(λ, τ) = B_0 e^{−τ} + Σ_{n=1}^{Q} B_n cos(α_n(L − λ)) e^{−(1+α_n²)τ}.

The axon hillock voltage is then V (0, τ ). We compute the voltage with the script

Listing 26.9: Finding the Axon Hillock Voltage

function [V,AxonHillock] = FindVoltage(Q,L,rho,z,M,D,Vmax,location)
%
% Q        = the number of eigenvalues we want to find
% L        = the length of the dendrite in space constants
% rho      = the ratio of dendrite to soma conductance, G_D/G_S
% z        = eigenvalue vector
% D        = data vector
% Vmax     = size of voltage impulse
% location = location of the pulse
% M        = matrix of coefficients for our approximate voltage model
%
[Lower,Upper,piv] = GePiv(M);
y = LTriSol(Lower,D(piv));
B = UTriSol(Upper,y);
% set spatial and time bounds
lambda = linspace(0,5,301);
tau = linspace(0,5,101);
V = zeros(301,101);
% find voltage at space point lambda(s) and time point tau(t)
for s = 1:301
  for t = 1:101
    V(s,t) = B(1)*exp(-tau(t));
    for n = 1:Q
      V(s,t) = V(s,t) + B(n+1)*cos(z(n)*(L - lambda(s)))*exp(-(1+z(n)^2)*tau(t));
    end
  end
end
AxonHillock = V(1,:);

We can compute a representative axon hillock voltage as follows:

>> Q = 40;
>> L = 5;
>> rho = 10;
>> z = FindRoots(Q,L,rho,1.0,1.0);
>> M = FindM(Q,L,rho,z);
>> Vmax = 100;
>> location = 2.0;
>> D = FindData(Q,L,rho,z,Vmax,location);

>> [V,AxonHillock] = FindVoltage(Q,L,rho,z,M,D,Vmax,location);


>> plot(t,AxonHillock);

This generates the plot of Figure 26.4.

Figure 26.4: Axon Hillock Voltage for Pulse of Size 100 at location 2.0

Note that the model V returned here, evaluated near time 0, tells us how well we have approximated our impulse at location 2.0 of magnitude 100. We show this in Figure 26.5.

>> x = linspace(0,5,301);
>> Input = V(:,2);
>> plot(x,Input);

Now, we suspect that the axon hillock response to this impulse near time 0 is a numerical artifact. If this is true, we should do better by increasing the value of Q. We can test this as follows:

Listing 26.10: Using more terms in the approximation

>> x = linspace(0,5,301);
>> t = linspace(0,5,101);
>> [V,A,I] = GetAxonHillock(80,5,10,100,2.0);
norm of LU residual = 0.0000000 norm of MB-D = 0.0000000
>> plot(x,I);
>> plot(t,A);

The new plot in Figure 26.6 shows that the axon hillock voltage is indeed better and the odd negative axon hillock voltages near 0 are artifacts, as we suspected.
Now, just to carry on this discussion to the point of "beating you over the head", let's run this experiment for Q = 40, 80, 160, 800 and 1600. Suppressing some MatLab output, we use these commands:

Figure 26.5: Model Approximation to an Impulse of Size 100 at location 2.0

Figure 26.6: Axon Hillock Response to an Impulse of Size 100 at location 2.0 Using Q = 80

Listing 26.11: Many Choices For Q!

>> [V,A,I]    = GetAxonHillock(40,5,10,100,2.0);
>> [V2,A2,I2] = GetAxonHillock(80,5,10,100,2.0);
>> [V3,A3,I3] = GetAxonHillock(160,5,10,100,2.0);
>> [V4,A4,I4] = GetAxonHillock(800,5,10,100,2.0);
>> [V5,A5,I5] = GetAxonHillock(1600,5,10,100,2.0);
>> plot(t,A,t,A2,t,A3,t,A4,t,A5);
>> plot(x,I,x,I2,x,I3,x,I4,x,I5);

It should be clear to you that we could rewrite our GetAxonHillock script to use vector inputs for the Vmax and location arguments!
The choice of Q = 1600 takes a lot of time, by the way! We show the axon hillock voltages on the same plot in Figure 26.7.

Figure 26.7: Axon Hillock Response to an Impulse of Size 100 at location 2.0 Using Q = 40,
80, 160, 800 and 1600

We show the model approximations in Figure 26.8


It is quite clear that we are definitely obtaining a depolarization of the membrane potential due
to this impulse in all cases. However, a Q of at least 80 is needed to avoid the hyperpolarization
artifact.

Figure 26.8: Model Approximation to an Impulse of Size 100 at location 2.0 Using Q = 40, 80,
160, 800 and 1600

Next, we add a second impulse of size 300 at location 4.5 to see how the axon hillock voltage
changes. This response is seen in Figure 26.9.

>> Vmax2 = 300;


>> location2 = 4.5;
>> D2 = FindData(Q,L,rho,z,Vmax2,location2);
>> [V2,AxonHillock2] = FindVoltage(Q,L,rho,z,M,D2,Vmax2,location2);
>> plot(t,AxonHillock2);

Again, we see the model V2 evaluated near time 0 tells us how we have approximated our impulse at location 4.5 of magnitude 300. We show this in Figure 26.10.

>> x = linspace(0,5,301);
>> Input2 = V2(:,2);
>> plot(x,Input2);

Finally, we add the responses together in Figure 26.11 and the inputs in Figure 26.12.

>> plot(t,AxonHillock+AxonHillock2);
>> plot(x,Input+Input2);

Figure 26.9: Axon Hillock Voltage for Pulse of Size 300 at location 4.5

Figure 26.10: Model Approximation to an Impulse of Size 300 at location 4.5

Figure 26.11: Axon Hillock Voltage for Pulse of Size 300 at location 4.5 and Size 100 at location
2.0

Figure 26.12: Model Approximation for Pulse of Size 300 at location 4.5 and Size 100 at location
2.0

We can put all of these things together in a short MatLab script called GetAxonHillock. Here
it is:

Listing 26.12: Getting The Axon Hillock Voltage

function [V,AxonHillock,Input] = GetAxonHillock(Q,L,rho,Vmax,location)
%
% Q        = the number of eigenvalues we want to find
% L        = the length of the dendrite in space constants
% rho      = the ratio of dendrite to soma conductance, G_D/G_S
% z        = eigenvalue vector
% D        = data vector
% Vmax     = size of voltage impulse
% location = location of pulse
% M        = matrix of coefficients for our approximate voltage model
% Input    = the solution at (z,0) to see if the match to the
%            input voltage is reasonable
% AxonHillock = the solution at (0,t)
% V        = the solution at (z,t)
%
% get eigenvalue vector z
z = FindRoots(Q,L,rho,1.0,1.0);
% get coefficient matrix M
M = FindM(Q,L,rho,z);
% compute data vector for impulse
D = FindData(Q,L,rho,z,Vmax,location);
% solve the M B = D system
[Lower,Upper,piv] = GePiv(M);
y = LTriSol(Lower,D(piv));
B = UTriSol(Upper,y);
% check errors
Error = Lower*Upper*B - D(piv);
Diff = M*B - D;
e = norm(Error);
e2 = norm(Diff);
display(sprintf(' norm of LU residual = %12.7f norm of MB-D = %12.7f',e,e2));
% set spatial and time bounds
lambda = linspace(0,5,301);
tau = linspace(0,5,101);
V = zeros(301,101);
% find voltage at space point lambda(s) and time point tau(t)
for s = 1:301
  for t = 1:101
    V(s,t) = B(1)*exp(-tau(t));
    for n = 1:Q
      V(s,t) = V(s,t) + B(n+1)*cos(z(n)*(L - lambda(s)))*exp(-(1+z(n)^2)*tau(t));
    end
  end
end
AxonHillock = V(1,:);
Input = V(:,2);

To use, we do something like this:

>> Q = 45;
>> L = 5;
>> rho = 10;
>> Vmax = 275;
>> location = 1.3;
>> [V,AxonHillock,Input] = GetAxonHillock(Q,L,rho,Vmax,location);
norm of LU residual = 0.0000000 norm of MB-D = 0.0000000
>> plot(x,Input);
>> plot(t,AxonHillock);

26.5 Ball and Stick Project


Let's compute the axon hillock voltage given above for a number of possible choices. It is clear that the solution v̂_m we obtain depends on

1. The length of the cable L.

2. The ratio of the dendritic to soma conductance, ρ.

3. The space constant λc as it depends on the cable radius a.

4. The truncation constant Q; we clearly make error by using only a finite number of terms.

5. The values of α_1 to α_Q. These have to be determined numerically as the solutions of a transcendental equation as discussed in the text.

With this said, let's calculate the axon hillock voltage for various given applied voltages to the cable.

Exercise 26.5.1. Compute this voltage for sums of impulse voltages applied at integer spatial constant distances from λ = 0 to λ = L:

V(λ) = Σ_{i=0}^{L} V_i^* δ(λ − i).

Use the following voltage profiles and use a variety of values of Q to do the approximations. Do
these for the choices ρ ∈ {0.1, 1.0, 10.0}.

1. L = 1: Here V_0^* is applied at location 0.25 and V_1^* is applied at location 0.75.

(a) Use V_0^* = 10 and V_1^* = 0.
(b) Use V_0^* = 0 and V_1^* = 10.
(c) Use V_0^* = 100 and V_1^* = 0.
(d) Use V_0^* = 0 and V_1^* = 100.

2. L = 5: Here V_0^* is applied at location 0.25 and V_5^* is applied at location 4.75. The others are applied at integer locations.

(a) Use V_0^* = 10, V_1^* = 0, V_2^* = 0, V_3^* = 0, V_4^* = 0 and V_5^* = 10.
(b) Use V_0^* = 0, V_1^* = 0, V_2^* = 10, V_3^* = 0, V_4^* = 0 and V_5^* = 0.
(c) Use V_0^* = 100, V_1^* = 0, V_2^* = 0, V_3^* = 0, V_4^* = 0 and V_5^* = 100.
(d) Use V_0^* = 0, V_1^* = 0, V_2^* = 100, V_3^* = 0, V_4^* = 0 and V_5^* = 0.

3. L = 10: Since the cable is so long, we will assume all V_i^* are 0 except the ones listed. Here V_0^* is applied at location 0.25 and V_{10}^* is applied at location 9.75. The others are applied at integer locations.

(a) Use V_0^* = 10 and V_{10}^* = 10.
(b) Use V_5^* = 10.
(c) Use V_0^* = 100 and V_{10}^* = 100.
(d) Use V_5^* = 100.

• Use your judgment to provide reasonable plots of the information you have gathered above.

• Can you draw any conclusions about how to choose Q, L and ρ for our modeling efforts?

Chapter 27
The Diffusion Equation

The full cable equation

λ_c² ∂²v_m/∂z² = v_m + τ_m ∂v_m/∂t − r_o λ_c² k_e,

where k_e is current per unit length, can be converted into a simpler partial differential equation called a diffusion equation with a change of variables. First, we introduce a dimensionless scaling to make it easier via the change of variables y = z/λ_c and s = t/τ_m. With these changes, space will be measured in units of space constants and time in units of time constants. We then define a new voltage variable w by

w(s, y) = v_m(τ_m s, λ_c y).

It is then easy to show using the chain rule that

∂²w/∂y² = λ_c² ∂²v_m/∂z²,
∂w/∂s = τ_m ∂v_m/∂t,

giving us the scaled cable equation

∂²w/∂y² = w + ∂w/∂s − r_o λ_c² k_e(τ_m s, λ_c y).

Now to further simplify our work, let’s make the additional change of variables

Φ(s, y) = w(s, y) e^s.

Then

∂Φ/∂s = (∂w/∂s + w) e^s,
∂²Φ/∂y² = (∂²w/∂y²) e^s,

leading to

(∂²Φ/∂y²) e^{−s} = (∂Φ/∂s) e^{−s} − r_o λ_c² k_e(τ_m s, λ_c y).
∂y 2 ∂s

After rearranging, we have the version of the transformed cable equation we need to solve:

∂2Φ ∂Φ
= − r0 λ2c τm ke (τm s, λc y) es (27.1)
∂y 2 ∂s

Recall that k_e is current per unit length. We are going to show you a physical way to find a solution to the above diffusion equation when there is an instantaneous current impulse of strength I_0 applied at some nonnegative time t_0 and nonnegative spatial location z_0. Essentially, we will think of this instantaneous impulse as a Dirac delta function input as we have discussed before; i.e., we will need to solve

∂²Φ/∂y² = ∂Φ/∂s − r_o λ_c² I_0 δ(s − s_0, y − y_0) e^s.

In this chapter, we will show how we can reason out what the solution to this equation looks like
and its qualitative behavior relative to the important physical constants λc and τm .

27.1 The Microscopic Space-Time Evolution of a Particle:

The simplest probabilistic model that links the Brownian motion of particles to the macroscopic laws of diffusion is the 1D random walk model. In this model, we assume a particle moves every τ_m seconds along the y axis a distance of either +λ_c or −λ_c, each with probability 1/2. Consider the thought experiment shown in Figure 27.1 where we see a volume element which has length 2λ_c and cross sectional area A. Since we want to do a microscopic analysis of the space time evolution of the particles, we assume that λ_c ≪ y.

Figure 27.1: Walking In a Volume Box

Let φ_+(s, y) denote the flux density of particles crossing from left to right across the plane located at position y at time s; similarly, let φ_−(s, y) be the flux density of particles crossing from right to left. Further, let c(s, y) denote the concentration of particles at coordinates (s, y). What is the net number of particles that cross the plane of area A?

Since the particles randomly change their position every τ_m seconds by ±λ_c, we can calculate the flux as follows: first, recall that flux here is the number of particles per unit area and time; i.e., the units are particles/(sec·cm²). Since the walk is random, half of the particles will move to the right and half to the left. Since the distance moved is λ_c, half the concentration at c(y − λ_c/2, s) will move to the right and half the concentration at c(y + λ_c/2, s) will move to the left. Now the number of particles crossing the plane is concentration times the volume. Hence, the flux terms are

φ_+(s, y) = (1/2) (A λ_c c(y − λ_c/2, s)) / (A τ_m) = (λ_c/(2τ_m)) c(y − λ_c/2, s),

φ_−(s, y) = (1/2) (A λ_c c(y + λ_c/2, s)) / (A τ_m) = (λ_c/(2τ_m)) c(y + λ_c/2, s).

The net flux, φ(s, y) is thus

447
27.1. PARTICLE EVOLUTION CHAPTER 27. DIFFUSION EQUATION

φ(s, y) = φ+ (s, y) − φ− (s, y)


λc λc λc
= (c(y − , s) − c(y + , s))
2τm 2 2

λc
Since, y is very small in microscopic analysis, we can approximate the concentration c using a first
order Taylor series expansion in two variables if we assume that the concentration is sufficiently
smooth. Our knowledge of the concentration functions we see in the laboratory and other physical
situations implies that it is very reasonable to make such a smoothness assumption. Hence, for
small perturbations s + a and y + b from the base point (s, y), we find

∂c ∂c
c(s + a, y + b) = c(s, y) + (s, y) a + (s, y) b + e(s, y, a, b)
∂s ∂y

where e(s, y, a, b is an error term which is proportional to the size of the largest of |a| and |b|.
Thus, e goes to zero as (a, b) goes to zero in a certain way. In particular, for a = 0 and b = ± λ2c ,
we obtain

λc ∂c λc λc
c(s, y − ) = c(s, y) − (s, y) + e(s, y, 0, − )
2 ∂y 2 2
λc ∂c λc λc
c(s, y + ) = c(s, y) + (s, y) + e(s, y, 0, )
2 ∂y 2 2

and we note that the error terms are proportional to λ2c Thus,

λc ∂c λc λc
φ(s, y) = − (s, y)) λc + (e(s, y, 0, − ) − e(s, y, 0, ))
2τm ∂y 2 2
λ2 ∂c
≈ − c (s, y)
2τm ∂y

as the difference in error terms is still proportional to λ2c at worst and since λc is very small
compared to y, the error term is negligible.
Recall from Chapter 13, that Ficke’s law of diffusion for particles across a membrane (think
of our plane at y as the membrane) is given by Equation (13.1) which is written as

∂ [b]
Jdif f = −D
∂x

Equating c with [b] and Jdif f with φ, we see that the diffusion constant D in Ficke’s Law of
diffusion can be interpreted in this context as

448
27.2. RANDOM WALKS CHAPTER 27. DIFFUSION EQUATION

Figure 27.2: The Random Walk Of A Particle

λ2c
D =
τm

This will give us a powerful connection between the macroscopic diffusion coefficient of Ficke’s
Law with the microscopic quantities that define a random walk as we will see in the next section.
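
To see this connection concretely, here is a small MatLab sketch of our own (not taken from the laboratory literature) that simulates many independent 1D random walkers and compares the growth of the mean square displacement to the 2Dt predicted by D = λ_c²/(2τ_m). All parameter values are illustrative.

% randomwalkD.m: a minimal Monte Carlo check that <x^2> ~ 2 D t
% for the 1D random walk; all values below are illustrative.
lambda_c = 1.0e-4;                 % space constant (cm)
tau_m    = 1.0e-3;                 % time constant (sec)
D        = lambda_c^2/(2*tau_m);   % predicted diffusion constant
N        = 5000;                   % number of walkers
n        = 1000;                   % number of time steps
x        = zeros(1,N);             % all walkers start at 0
for i = 1:n
  % each walker steps +lambda_c or -lambda_c with probability 1/2
  x = x + lambda_c*(2*(rand(1,N) > 0.5) - 1);
end
t   = n*tau_m;
msd = mean(x.^2);                  % measured mean square displacement
fprintf('measured <x^2> = %g, predicted 2 D t = %g\n', msd, 2*D*t);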

27.2 The Random Walk And The Binomial Distribution:


In this section, we will be following the discussion presented in Weiss (Weiss (24) 1996), but paraphrasing and simplifying it for our purposes. More details can be gleaned from a careful study of that volume. Let's assume that a particle is executing a random walk starting at position x = 0 and time t = 0. This means that from that starting point, the particle can move either +λ_c or −λ_c at each tick of a clock with time measured in time constant units τ_m. We can draw this as a tree as shown in Figure 27.2.

Figure 27.2: The Random Walk Of A Particle

In this figure, the labels shown in each node refer to three things: the time, in units of the time constant τ_m (hence, t = 3 denotes a time of 3τ_m); spatial position in units of the space constant λ_c (thus, x = −2 means a position of −2λ_c units); and the number of possible paths that reach that node (therefore, Paths equal to 6 means there are six possible paths that can be taken to arrive at that terminal node).
Since time and space are discretized into units of time and space constants, we have a physical system where time and space are measured as integers. Thus, we can ask what is the probability, W(m, n), that the particle will be at position m at time n. In the time interval of n units, let's define some auxiliary variables: n₊ is the number of time steps where the particle moves to the right – i.e., the movement is +1; and n₋ is the number of time steps the particle moves to the left, a movement of −1. We clearly see that m is really the net displacement and

\[ n = n_{+} + n_{-}, \qquad m = n_{+} - n_{-} \]

Solving, we see

\[ n_{+} = \frac{n + m}{2}, \qquad n_{-} = \frac{n - m}{2} \]

Let the usual binomial coefficients be denoted by B_{n,j} where

\[ B_{n,j} = \binom{n}{j} = \frac{n!}{j! \, (n-j)!} \]

Now look closely at Figure 27.2. At a given node, all of the paths that can be taken to that node have the same n₊ value as shown in Table 27.1:

Time | Paths     | n₊        | Binomial Coefficients B_{n,n₊}
 1   | 1-1       | 0-1       | B_{1,0} - B_{1,1}
 2   | 1-2-1     | 0-1-2     | B_{2,0} - B_{2,1} - B_{2,2}
 3   | 1-3-3-1   | 0-1-2-3   | B_{3,0} - B_{3,1} - B_{3,2} - B_{3,3}
 4   | 1-4-6-4-1 | 0-1-2-3-4 | B_{4,0} - B_{4,1} - B_{4,2} - B_{4,3} - B_{4,4}

Table 27.1: Comparing Paths And Rightward Movements
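
A quick MatLab sketch of our own confirms the path counts in Table 27.1 by brute-force enumeration of every length n sequence of left/right moves:

% pathcount.m: verify the path counts in Table 27.1 by brute force.
n = 4;                             % number of time steps
steps = dec2bin(0:2^n-1) - '0';    % all 2^n sequences of 0/1 moves
npls  = sum(steps,2);              % rightward moves in each sequence
m     = 2*npls - n;                % terminal position of each sequence
for pos = -n:2:n
  % paths reaching pos should equal B_{n,(n+pos)/2}
  fprintf('m = %2d: paths = %d, B = %d\n', pos, ...
          sum(m == pos), nchoosek(n,(n+pos)/2));
end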

27.3 Rightward Movement Has Probability 0.5 Or Less:

Let the probability that the particle moves to the right be p and the probability it moves to the left be q. Then we have p + q = 1. Let's assume that p ≤ 0.5. This forces q ≥ 0.5. Then, in n time units, the probability a given path of length n is taken is ρ(n) where

\[ \rho(n) = p^{n_{+}} q^{n_{-}} = p^{n_{+}} q^{n - n_{+}} \]

and the probability that a path will terminate at position m, W(m, n), is just the number of paths that reach that position in n time units multiplied by ρ(n). We know that for a given number of time steps, certain positions will never be reached. Note that if the fraction 0.5(n + m) is not an integer, then there will be no paths that reach that position m. Let n_f be the result of the computation n_f = 0.5(n + m). We know n_f need not be an integer. Define the extended binomial coefficients C_{n,n_f} by

\[ C_{n,n_f} = \begin{cases} B_{n,n_f} & \text{if } n_f \text{ is an integer} \\ 0 & \text{else} \end{cases} \]

Then, we see

\[ W(m, n) = C_{n,n_f} \, p^{n_f} q^{n - n_f} \]

From our discussions above, it is clear for any position m that is reached in n time units, that this can be rewritten in terms of n₊ as

\[ W(m, n) = B_{n,n_{+}(m)} \, p^{n_{+}(m)} q^{n - n_{+}(m)} \]

where n₊(m) is the value of n₊ associated with paths terminating on position m. If you think about this a bit, you'll see that for even times n, only even positions m are reached; similarly, for odd times, only odd positions are reached.

27.3.1 Finding The Average Of The Particle's Distribution In Space And Time:

The average position m̄ is defined by

\[ \bar{m} = \sum_{m = -n}^{n} m \, W(m, n) \]

where of course, many of the individual terms W(m, n) are actually zero for a given time n because the positions are never reached. From this, we can infer that

\[ \bar{m} = \sum_{n_{+} = 0}^{n} m \, B_{n,n_{+}} \, p^{n_{+}} q^{n - n_{+}} \]

To compute this, first, switch to a simple notation. Let j = n₊. Then, we note that we have, for m = 2n₊ − n or m = 2j − n,

\[ \bar{m} = \sum_{j=0}^{n} m \, B_{n,j} \, p^{j} q^{n-j} = \sum_{j=0}^{n} (2j - n) \, B_{n,j} \, p^{j} q^{n-j} = 2\bar{j} - n S \]

where

\[ \bar{j} = \sum_{j=0}^{n} j \, B_{n,j} \, p^{j} q^{n-j}, \qquad S = \sum_{j=0}^{n} B_{n,j} \, p^{j} q^{n-j} \]

Since p + q = 1, we know that

\[ (p + q)^{n} = S = \sum_{j=0}^{n} B_{n,j} \, p^{j} q^{n-j} = 1 \]

Further, by taking derivatives with respect to p, we see that

\[ p \frac{d}{dp} \left( (p + q)^{n} \right) = p \, n (p + q)^{n-1} = p \, n \]

Thus,

\[ \bar{j} = \sum_{j=0}^{n} j \, B_{n,j} \, p^{j} q^{n-j} = \sum_{j=0}^{n} j \, p \, B_{n,j} \, p^{j-1} q^{n-j} = \sum_{j=0}^{n} p \frac{d}{dp} \left( B_{n,j} \, p^{j} q^{n-j} \right) = p \frac{d}{dp} \left( \sum_{j=0}^{n} B_{n,j} \, p^{j} q^{n-j} \right) = p \frac{d}{dp} \left( (p + q)^{n} \right) = p \, n \]

by our calculations above. We conclude that

\[ \bar{m} = 2\bar{j} - n S = 2pn - n \]
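
It takes only a few lines of MatLab to check this formula against the direct sum; this is our own sketch with an arbitrary choice of n and p:

% meancheck.m: verify mbar = 2 p n - n by direct summation.
n = 40; p = 0.3; q = 1 - p;
j = 0:n;                                   % j = n_plus
W = arrayfun(@(jj) nchoosek(n,jj), j) .* p.^j .* q.^(n-j);
mbar = sum((2*j - n).*W);                  % average of m = 2 j - n
fprintf('sum = %g, 2 p n - n = %g\n', mbar, 2*p*n - n);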

27.3.2 Finding The Standard Deviation Of The Particle's Distribution In Space And Time:

We compute the mean square of our particle's movement through space and time in a similar way. We need to find

\[ \sigma^2 = \sum_{m=-n}^{n} m^2 \, W(m, n) \]

Our earlier discussions still apply and we find we can rewrite this as

\[ \sigma^2 = \sum_{j=0}^{n} (2j - n)^2 \, B_{n,j} \, p^{j} q^{n-j} = \sum_{j=0}^{n} (4j^2 - 4jn + n^2) \, B_{n,j} \, p^{j} q^{n-j} = 4\overline{j^2} - 4 n \bar{j} + n^2 S \]

where \overline{j^2} is the second moment of the usual binomial distribution. We know S is 1 and j̄ is pn. So we only have to compute \overline{j^2}. Note that

\[ p^2 \frac{d^2}{dp^2} \left( (p + q)^{n} \right) = p^2 \, n (n-1) (p + q)^{n-2} = p^2 \, n (n-1) \]

Also,

\[ p^2 \, n (n-1) = p^2 \frac{d^2}{dp^2} \left( (p + q)^{n} \right) = p^2 \frac{d^2}{dp^2} \left( \sum_{j=0}^{n} B_{n,j} \, p^{j} q^{n-j} \right) = \sum_{j=0}^{n} p^2 \frac{d^2}{dp^2} \left( B_{n,j} \, p^{j} q^{n-j} \right) = \sum_{j=0}^{n} p^2 \, j (j-1) \, B_{n,j} \, p^{j-2} q^{n-j} = \sum_{j=0}^{n} (j^2 - j) \, B_{n,j} \, p^{j} q^{n-j} = \overline{j^2} - \bar{j} \]

We conclude that

\[ \overline{j^2} = p^2 \, n (n-1) + p \, n \]

Since we also have a formula for σ², we see

\[ \sigma^2 = 4\overline{j^2} - 4 n \bar{j} + n^2 S = 4 \left( p^2 n (n-1) + p n \right) - 4 n p n + n^2 = n^2 - 4 p n (1-p)(n-1) = (1 - \alpha) n^2 + \alpha n \]

where α = 4p(1 − p) ranges from 0 to 1, with the maximum value α = 1 attained at p = 0.5.

27.3.3 Specializing To An Equal Probability Left And Right Random Walk:

Here, p and q are both 0.5. We see that α = 1 and

\[ \bar{m} = 2pn - n = 0, \qquad \overline{m^2} = (1 - \alpha) n^2 + \alpha n = n \]

Note that if the random walk is skewed, with say p = 0.1, then we would obtain α = 0.36 and

\[ \bar{m} = 2 p n - n = -0.8n, \qquad \overline{m^2} = (1 - \alpha) n^2 + \alpha n = 0.64 n^2 + 0.36 n \]

so that for large n, the root mean square of our particle's position grows like 0.8n; the drift dominates the spread of the walk.
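
The same kind of direct summation verifies both moment formulas for the skewed walk; again this is our own sketch with illustrative n:

% momentcheck.m: check mbar and m2bar for a skewed walk, p = 0.1.
n = 50; p = 0.1; q = 1 - p; alpha = 4*p*(1-p);
j = 0:n;
W = arrayfun(@(jj) nchoosek(n,jj), j) .* p.^j .* q.^(n-j);
m = 2*j - n;
fprintf('mbar : sum = %g, formula 2pn - n = %g\n', sum(m.*W), 2*p*n - n);
fprintf('m2bar: sum = %g, formula (1-a)n^2 + a n = %g\n', ...
        sum(m.^2.*W), (1-alpha)*n^2 + alpha*n);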

27.4 Macroscopic Scale


For a very large number of steps, the probability distribution W(m, n) will approach a limiting form. This is derived by using an approximation to k! that is known as the Stirling approximation.

It is known that for very large k,

\[ k! \approx \sqrt{2\pi k} \left( \frac{k}{e} \right)^{k}. \]

The distribution of our particle's position throughout space and time can be written as

\[ W(m, n) = B_{n, \frac{n+m}{2}} \, p^{\frac{n+m}{2}} q^{\frac{n-m}{2}} \]

using our definitions of n₊ and n₋ (it is understood that W(m, n) is zero for non-integer values of these fractions). We can apply Stirling's approximation to B_{n,(n+m)/2}:

\[ n! \approx \sqrt{2\pi n} \left( \frac{n}{e} \right)^{n} \]
\[ \left( \frac{n+m}{2} \right)! \approx \sqrt{2\pi \frac{n+m}{2}} \left( \frac{n+m}{2e} \right)^{\frac{n+m}{2}} \approx \sqrt{\pi (n+m)} \left( \frac{n}{2e} \right)^{\frac{n+m}{2}} \left( 1 + \frac{m}{n} \right)^{\frac{n+m}{2}} \]
\[ \left( \frac{n-m}{2} \right)! \approx \sqrt{2\pi \frac{n-m}{2}} \left( \frac{n-m}{2e} \right)^{\frac{n-m}{2}} \approx \sqrt{\pi (n-m)} \left( \frac{n}{2e} \right)^{\frac{n-m}{2}} \left( 1 - \frac{m}{n} \right)^{\frac{n-m}{2}} \]

From this, we find

\[ \left( \frac{n+m}{2} \right)! \left( \frac{n-m}{2} \right)! \approx \pi n \sqrt{1 - \frac{m^2}{n^2}} \left( \frac{n}{2e} \right)^{n} \left( 1 - \frac{m^2}{n^2} \right)^{\frac{n}{2}} \left( 1 - \frac{m}{n} \right)^{-\frac{m}{2}} \left( 1 + \frac{m}{n} \right)^{\frac{m}{2}} \]

Hence, we see

\[ B_{n, \frac{n+m}{2}} \approx \frac{\sqrt{2\pi n} \left( \frac{n}{e} \right)^{n}}{\pi n \sqrt{1 - \frac{m^2}{n^2}} \left( \frac{n}{2e} \right)^{n} \left( 1 - \frac{m^2}{n^2} \right)^{\frac{n}{2}} \left( 1 - \frac{m}{n} \right)^{-\frac{m}{2}} \left( 1 + \frac{m}{n} \right)^{\frac{m}{2}}} \approx \sqrt{\frac{2}{\pi n}} \, 2^{n} \left( 1 - \frac{m^2}{n^2} \right)^{-\frac{1}{2}} \left( 1 - \frac{m^2}{n^2} \right)^{-\frac{n}{2}} \left( 1 - \frac{m}{n} \right)^{\frac{m}{2}} \left( 1 + \frac{m}{n} \right)^{-\frac{m}{2}} \]

Thus,

\[ \ln \left( B_{n, \frac{n+m}{2}} \right) \approx \frac{1}{2} \ln \left( \frac{2}{\pi n} \right) + n \ln 2 - \frac{1}{2} \ln \left( 1 - \frac{m^2}{n^2} \right) - \frac{n}{2} \ln \left( 1 - \frac{m^2}{n^2} \right) + \frac{m}{2} \ln \left( 1 - \frac{m}{n} \right) - \frac{m}{2} \ln \left( 1 + \frac{m}{n} \right) \]

Now, for small x, the standard Taylor series approximation gives ln(1 + x) ≈ x; hence, for m/n sufficiently small, we can say

\[ \ln \left( B_{n, \frac{n+m}{2}} \right) \approx \frac{1}{2} \ln \left( \frac{2}{\pi n} \right) + n \ln 2 + \frac{1}{2} \frac{m^2}{n^2} + \frac{n}{2} \frac{m^2}{n^2} - \frac{m}{2} \frac{m}{n} - \frac{m}{2} \frac{m}{n} \approx \frac{1}{2} \ln \left( \frac{2}{\pi n} \right) + n \ln 2 + \frac{1}{2} \frac{m^2}{n^2} - \frac{m^2}{2n} \]

For very large n (i.e. after a very large number of time steps nτ_m), since |m| ≤ n, the term m²/n² is negligible. Hence, dropping that term and exponentiating, we find

\[ B_{n, \frac{n+m}{2}} \approx \sqrt{\frac{2}{n\pi}} \, 2^{n} \exp\left( \frac{-m^2}{2n} \right) \]

This implies that

\[ W(m, n) \approx \sqrt{\frac{2}{n\pi}} \exp\left( \frac{-m^2}{2n} \right) 2^{n} p^{\frac{n+m}{2}} q^{\frac{n-m}{2}} \approx \sqrt{\frac{2}{n\pi}} \exp\left( \frac{-m^2}{2n} \right) (4pq)^{\frac{n}{2}} \left( \frac{p}{q} \right)^{\frac{m}{2}} \]

Note that if the particle moves with equal probability 0.5 to the right or the left at any time tick, this reduces to

\[ W(m, n) \approx \sqrt{\frac{2}{n\pi}} \exp\left( \frac{-m^2}{2n} \right) \]

and for p = 1/3 and q = 2/3, this becomes

\[ W(m, n) \approx \sqrt{\frac{2}{n\pi}} \exp\left( \frac{-m^2}{2n} \right) \left( \frac{8}{9} \right)^{\frac{n}{2}} \left( \frac{1}{2} \right)^{\frac{m}{2}} \]
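
It is instructive to see how good this limiting form is at modest n. The following MatLab sketch (ours; the choice n = 100 is illustrative) compares the exact binomial W(m, n) to the approximation, using gammaln to avoid overflow in the factorials:

% limitcheck.m: compare exact W(m,n) to the large-n approximation.
n = 100; p = 1/3; q = 1 - p;
m = -n:2:n;                    % even n, so only even positions occur
j = (n + m)/2;                 % rightward step counts
Wexact  = exp(gammaln(n+1) - gammaln(j+1) - gammaln(n-j+1) ...
              + j*log(p) + (n-j)*log(q));
Wapprox = sqrt(2/(n*pi))*exp(-m.^2/(2*n)) .* (4*p*q).^(n/2) .* (p/q).^(m/2);
plot(m, Wexact, 'o', m, Wapprox, '-');
legend('exact binomial', 'Stirling approximation');
xlabel('position m'); ylabel('W(m,n)');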


27.5 Obtaining The Probability Density Function:

From our discrete approximations in previous sections, we can now derive the probability, P(x, t), that the particle will be at position x at time t. In what follows, we will assume that p ≤ 0.5. Let Δx be a small number which is approximately mλ_c for some m. The probability that the particle is in an interval [x − Δx/2, x + Δx/2] can then be approximated by

\[ P(x, t) \, \Delta x \approx \sum_{k} W(k, n) \]

where the sum is over all indices k such that the position kλ_c lies in the interval [x − Δx/2, x + Δx/2]. Hence,

\[ \left[ x - \frac{\Delta x}{2}, \, x + \frac{\Delta x}{2} \right] \equiv \{ (m-j)\lambda_c, \ldots, m\lambda_c, \ldots, (m+j)\lambda_c \} \]

for some integer j. Now from the way the particle moves in a random walk, only half of these tick marks will actually be positions the particle can occupy. Hence, half of the probabilities W(m − i, n) for i from −j to j are zero. The number of nonzero probabilities is thus ≈ Δx/(2λ_c). We can therefore approximate the sum by taking the middle term W(m, n) and multiplying by the number of nonzero probabilities:

\[ P(x, t) \, \Delta x \approx W(m, n) \, \frac{\Delta x}{2\lambda_c} \]

which implies, since x = mλ_c and t = nτ_m, that for very large n,

\[ P(x, t) = \frac{1}{2\lambda_c} W(m, n) = \frac{1}{\sqrt{4\pi \frac{\lambda_c^2}{2\tau_m} t}} \exp\left( \frac{-x^2}{4 \frac{\lambda_c^2}{2\tau_m} t} \right) (4pq)^{\frac{t}{2\tau_m}} \left( \frac{p}{q} \right)^{\frac{x}{2\lambda_c}} \]

Note that the term λ_c²/(2τ_m) is the diffusion constant D. Thus,

\[ P(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left( \frac{-x^2}{4 D t} \right) (4pq)^{\frac{t}{2\tau_m}} \left( \frac{p}{q} \right)^{\frac{x}{2\lambda_c}} \]

Next, rewrite all the power terms as exponentials:

\[ P(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left( \frac{-x^2}{4 D t} \right) \exp\left( \frac{t}{2\tau_m} \ln(4pq) \right) \exp\left( \frac{x}{2\lambda_c} \ln\left( \frac{p}{q} \right) \right) \]

27.5.1 p and q Are Equal:

It is convenient to write the last two exponentials above as exp(−A t/(2τ_m)) and exp(−B x/(2λ_c)), where A = −ln(4pq) and B = −ln(p/q); both A and B are zero when p = q = 0.5. The equal probability random walk therefore generates the probability density function

\[ P(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left( -\frac{x^2}{4Dt} \right) \]

On the other hand, for p = 1/6, we find A = 0.59 and B = 1.61, and so (B² − 2A)/4 is 0.35, giving the density

\[ P(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left( -\frac{1}{4Dt} \left( x + \frac{1.61 D t}{\lambda_c} \right)^2 \right) \exp\left( \frac{0.35 \, t}{\tau_m} \right) \]

Similarly, if p = 1/30, we find A = 2.05 and B = 3.36, and so (B² − 2A)/4 is 1.81, giving the density

\[ P(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left( -\frac{1}{4Dt} \left( x + \frac{3.36 D t}{\lambda_c} \right)^2 \right) \exp\left( \frac{1.81 \, t}{\tau_m} \right) \]
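
A short MatLab sketch (ours) makes the contrast between the symmetric and skewed densities visible; the values of D, λ_c, τ_m and the time t are illustrative choices:

% densityplot.m: symmetric versus skewed (p = 1/6) densities at fixed t.
lambda_c = 1.0; tau_m = 1.0;
D = lambda_c^2/(2*tau_m);
t = 2.0; x = linspace(-10,10,401);
P0 = exp(-x.^2/(4*D*t))/sqrt(4*pi*D*t);                  % p = q = 0.5
P1 = exp(-(x + 1.61*D*t/lambda_c).^2/(4*D*t))/sqrt(4*pi*D*t) ...
     * exp(0.35*t/tau_m);                                % p = 1/6, from the text
plot(x,P0,x,P1); legend('p = 0.5','p = 1/6');
xlabel('x'); ylabel('P(x,t)');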

27.6 Understanding The Probability Distribution Of The Particle:


It is important to get a strong intuitive feel for the probability distribution of the particle under the random walk and skewed random walk protocols. A normal distribution has the form

\[ P(x) = \frac{1}{\sqrt{2\pi} \, \sigma} \exp\left( -\frac{x^2}{2\sigma^2} \right) \]

In Figure 27.3, we see a plot of a particle's position probability as a function of position for three different standard deviations, σ. In the plot, the standard deviation is labeled as D.

Figure 27.3: Normal Distribution: Spread Depends on Standard Deviation

Now if we skew the distribution so that the probability of moving to the right is now 1/6, we find

\[ P(x, t) = \frac{1}{\sqrt{4\pi D t}} \exp\left( -\frac{1}{4Dt} \left( x + \frac{1.61 D t}{\lambda_c} \right)^2 \right) \exp\left( \frac{0.35 \, t}{\tau_m} \right) \]

which generates the plot shown in Figure 27.4.

Figure 27.4: Skewed Random Walk Probability Distribution: p is 0.1666
Chapter 28
The Time Dependent Cable Solution

Recall the full cable equation

\[ \lambda_c^2 \frac{\partial^2 v_m}{\partial z^2} = v_m + \tau_m \frac{\partial v_m}{\partial t} - r_o \lambda_c^2 k_e \]

Recall that k_e is current per unit length. We are going to show you a mathematical way to solve the above cable equation when there is an instantaneous current impulse applied at some nonnegative time t_0 and nonnegative spatial location z_0. Essentially, we will think of this instantaneous impulse as a Dirac delta function input as we have discussed before: i.e. we will need to solve

\[ \lambda_c^2 \frac{\partial^2 v_m}{\partial z^2} = v_m + \tau_m \frac{\partial v_m}{\partial t} - r_o \lambda_c^2 I_e \, \delta(t - t_0, z - z_0) \]

where δ(t − t_0, z − z_0) is a Dirac impulse applied at the ordered pair (t_0, z_0). We will simplify our reasoning by thinking of the impulse applied at (0, 0) as we can simply translate our solution later as we did before for the idealized impulse solution to the time independent cable equation.

28.1 The Solution For A Current Impulse:

We assume that the cable is infinitely long and there is a current injection at some point z_0 on the cable which instantaneously delivers I_0 amps of current. As usual, we will model this instantaneous delivery of current using a family of pulses. For convenience of exposition, we will consider the point of application of the pulses to be z_0 = 0.

28.1.1 Modeling The Current Pulses:

Consider the two parameter family of pulses below:

\[ P_{nm}(t, z) = \frac{I_0 \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{\tau_m^2}{n^2 t^2 - \tau_m^2}} \, e^{\frac{\lambda_c^2}{m^2 z^2 - \lambda_c^2}} \]

At this point, γ is a constant to be determined. Note that P_{nm} is measured in amp/(cm · sec). We know the currents k_e have units of amp/cm; hence, we model the family of current impulses k_e^{nm} by

\[ k_e^{nm}(t, z) = \tau_m P_{nm}(t, z) \]

This gives us the proper units for the current impulses. This is then a specific example of the type of pulses we have used in the past. There are several differences:

• Here we give a specific functional form for our pulses which we did not do before. It is straightforward to show that these pulses are zero off the (t, z) rectangle [−τ_m/n, τ_m/n] × [−λ_c/m, λ_c/m] and are infinitely differentiable for all time and space points. Most importantly, this means the pulses are very smooth at the boundary points t = ±τ_m/n and z = ±λ_c/m. Note we are indeed allowing these pulses to be active for a small interval of time before zero. When we solve the actual cable problem, we will only be using the positive time portion.

• The current delivered by this pulse is obtained by the following integration:

\[ J = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} P_{nm}(t, z) \, dt \, dz = \int_{-\frac{\lambda_c}{m}}^{\frac{\lambda_c}{m}} \int_{-\frac{\tau_m}{n}}^{\frac{\tau_m}{n}} P_{nm}(t, z) \, dt \, dz \]

Since we will be interested only in positive time, we will want to evaluate

\[ \frac{J}{2} = \int_{-\frac{\lambda_c}{m}}^{\frac{\lambda_c}{m}} \int_{0}^{\frac{\tau_m}{n}} P_{nm}(t, z) \, dt \, dz \]

The constant γ will be chosen so that these integrals give the constant value of J = 2I_0 for all n and m. If we integrate over the positive time half of the pulse, we will then get the constant I_0 instead.

• The units of our pulse are amps/(cm · sec). The time integration then gives us amp/cm and the following spatial integration gives us amperes.

Let's see how we should choose γ: consider

\[ J = \int_{-\frac{\lambda_c}{m}}^{\frac{\lambda_c}{m}} \int_{-\frac{\tau_m}{n}}^{\frac{\tau_m}{n}} P_{nm}(t, z) \, dt \, dz \]

Make the substitutions β₁ = nt/τ_m and β₂ = mz/λ_c. We then obtain with a bit of algebra

\[ J = \int_{-1}^{1} \int_{-1}^{1} \frac{I_0 \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{1}{\beta_1^2 - 1}} \, e^{\frac{1}{\beta_2^2 - 1}} \, \frac{\tau_m \lambda_c}{nm} \, d\beta_1 \, d\beta_2 = \frac{I_0}{\gamma^2} \int_{-1}^{1} e^{\frac{1}{\beta_1^2 - 1}} \, d\beta_1 \int_{-1}^{1} e^{\frac{1}{\beta_2^2 - 1}} \, d\beta_2 = \frac{I_0}{\gamma^2} \left( \int_{-1}^{1} e^{\frac{1}{x^2 - 1}} \, dx \right)^2 \]

Clearly, if we choose the constant γ to be

\[ \gamma = \frac{1}{\sqrt{2}} \int_{-1}^{1} e^{\frac{1}{x^2 - 1}} \, dx \]

then all full pulses P_{nm} deliver 2I_0 amperes of current when integrated over space and time and all pulses with only nonnegative time deliver I_0 amperes.
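
The constant γ has no closed form, but it is easy to evaluate numerically. A minimal MatLab sketch of our own (the quadrature tolerance and printed value are only approximate):

% gammacheck.m: evaluate gamma by quadrature and confirm J = 2 I0.
bump = @(x) exp(1./(x.^2 - 1));      % smooth bump integrand on (-1,1)
base = integral(bump, -1, 1);        % approximately 0.444
gamma0 = base/sqrt(2);               % the constant gamma
I0 = 1.0;
J = I0*base^2/gamma0^2;              % should be 2*I0
fprintf('gamma = %g, J/I0 = %g\n', gamma0, J);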

28.1.2 Scaling the Cable Equation:

We can convert the cable equation into the scaled version. Recall, we introduce the changes of variables y = z/λ_c and s = t/τ_m and define the new voltage variable w by

\[ w(s, y) = v_m(\tau_m s, \lambda_c y) \]

This gives the scaled equation

\[ \frac{\partial^2 w}{\partial y^2} = w + \frac{\partial w}{\partial s} - r_o \lambda_c^2 k_e(\tau_m s, \lambda_c y) \]

In particular, for our family of pulses, we have

\[ \frac{\partial^2 w}{\partial y^2} = w + \frac{\partial w}{\partial s} - r_o \lambda_c^2 \tau_m P_{nm}(\tau_m s, \lambda_c y) \]

and since we know the functional form of P_{nm}, we have

\[ P_{nm}(\tau_m s, \lambda_c y) = \frac{I_0 \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{\tau_m^2}{n^2 \tau_m^2 s^2 - \tau_m^2}} \, e^{\frac{\lambda_c^2}{m^2 \lambda_c^2 y^2 - \lambda_c^2}} = \frac{I_0 \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{1}{n^2 s^2 - 1}} \, e^{\frac{1}{m^2 y^2 - 1}} \]

Then, introducing the further change of variables

\[ \Phi(s, y) = w(s, y) \, e^{s} \]

we find

\[ \frac{\partial^2 \Phi}{\partial y^2} = \frac{\partial \Phi}{\partial s} - r_o \lambda_c^2 \tau_m P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \tag{28.1} \]

28.1.3 Applying the Laplace Transform In Time:

We will apply some rather sophisticated mathematical tools to solve Equation (28.1).
The Laplace transform acts on the time domain of a problem as follows: given a function x defined on s ≥ 0, we define the Laplace transform of x to be

\[ \mathcal{L}(x) = \int_{0}^{\infty} x(s) \, e^{-\beta s} \, ds \]

The new function L(x) is defined for some domain of the new variable β. The variable β's domain is called the frequency domain and in general, the values of β where the transform is defined depend on the function x we are transforming. Also, in order for the Laplace transform of the function x to work, x must not grow too fast – roughly, x must grow no faster than some exponential function. The solutions we seek to our cable equation are expected on physical grounds to decay to zero exponentially as we let t (and therefore, also s!) go to infinity and as we let the space variable z (and hence y) go to ±∞. Hence, the function Φ we seek will have a well-defined Laplace transform with respect to the s variable.
Now, what about the Laplace transform of a derivative? Consider

\[ \mathcal{L}\left( \frac{dx}{ds} \right) = \int_{0}^{\infty} \frac{dx}{ds} \, e^{-\beta s} \, ds \]

Integrating by parts, we find

\[ \int_{0}^{\infty} \frac{dx}{ds} \, e^{-\beta s} \, ds = \left. x(s) \, e^{-\beta s} \right|_{0}^{\infty} + \beta \int_{0}^{\infty} x(s) \, e^{-\beta s} \, ds = \lim_{R \to \infty} \left( x(R) \, e^{-\beta R} \right) - x(0) + \beta \, \mathcal{L}(x) \]

Now if the function x grows more slowly than e^{cR} for some constant c, then for β > c the limit will be zero and we obtain

\[ \mathcal{L}\left( \frac{dx}{ds} \right) = \beta \, \mathcal{L}(x) - x(0) \]

Hence, applying the Laplace transform to both sides of Equation (28.1), we have

\[ \mathcal{L}\left( \frac{\partial^2 \Phi}{\partial y^2} \right) = \mathcal{L}\left( \frac{\partial \Phi}{\partial s} \right) - r_o \lambda_c^2 \tau_m \, \mathcal{L}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) \]
\[ \frac{\partial^2 \mathcal{L}(\Phi)}{\partial y^2} = \beta \, \mathcal{L}(\Phi) - \Phi(0, y) - r_o \lambda_c^2 \tau_m \, \mathcal{L}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) \]

This is just the transform of the time portion of the equation. The space portion has been left alone. We will now further assume that

\[ \Phi(0, y) = 0, \quad y \neq 0 \]

This is the same as assuming

\[ v_m(0, z) = 0, \quad z \neq 0 \]

which is a reasonable physical initial condition. This gives us

\[ \frac{\partial^2 \mathcal{L}(\Phi)}{\partial y^2} = \beta \, \mathcal{L}(\Phi) - r_o \lambda_c^2 \tau_m \, \mathcal{L}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) \]

28.1.4 Applying the Fourier Transform In Space:

To handle the space portion of the function, we will use the Fourier transform. Given a function g defined on the y axis, we define the Fourier transform of g to be

\[ \mathcal{F}(g) = \int_{-\infty}^{\infty} g(y) \, e^{-j\xi y} \, dy \]

where j denotes the square root of minus 1 and the exponential term is defined to mean

\[ e^{-j\xi y} = \cos(\xi y) - j \sin(\xi y) \]

To properly understand this transform, you need to have studied what is called complex analysis as this integral is a complex line integral, but for our purposes it suffices to note that for decaying functions of the type we expect Φ to be, this integral will be well-defined.
Note that we can compute the Fourier transform of the derivative of g as follows:

\[ \mathcal{F}\left( \frac{dg}{dy} \right) = \int_{-\infty}^{\infty} \frac{dg}{dy} \, e^{-j\xi y} \, dy \]

Integrating by parts, we find

\[ \int_{-\infty}^{\infty} \frac{dg}{dy} \, e^{-j\xi y} \, dy = \left. g(y) \, e^{-j\xi y} \right|_{-\infty}^{\infty} + j\xi \int_{-\infty}^{\infty} g(y) \, e^{-j\xi y} \, dy = \lim_{R \to \infty} \left( g(R) \, e^{-j\xi R} \right) - \lim_{R \to -\infty} \left( g(R) \, e^{-j\xi R} \right) + j\xi \, \mathcal{F}(g) \]

Since we assume that the function g decays sufficiently quickly as y → ±∞, the two boundary terms vanish and we have

\[ \int_{-\infty}^{\infty} \frac{dg}{dy} \, e^{-j\xi y} \, dy = j\xi \, \mathcal{F}(g) \]

Applying the same type of reasoning, we can see that

\[ \int_{-\infty}^{\infty} \frac{d^2 g}{dy^2} \, e^{-j\xi y} \, dy = \left. \frac{dg}{dy} \, e^{-j\xi y} \right|_{-\infty}^{\infty} + j\xi \int_{-\infty}^{\infty} \frac{dg}{dy} \, e^{-j\xi y} \, dy = \lim_{R \to \infty} \left( \frac{dg}{dy}(R) \, e^{-j\xi R} \right) - \lim_{R \to -\infty} \left( \frac{dg}{dy}(R) \, e^{-j\xi R} \right) + j\xi \, \mathcal{F}\left( \frac{dg}{dy} \right) \]

We also assume that the function g's derivative decays sufficiently quickly as y → ±∞. Thus the boundary terms vanish and we have

\[ \int_{-\infty}^{\infty} \frac{d^2 g}{dy^2} \, e^{-j\xi y} \, dy = j\xi \, \mathcal{F}\left( \frac{dg}{dy} \right) = j^2 \xi^2 \, \mathcal{F}(g) = -\xi^2 \, \mathcal{F}(g) \]

because j² = −1. Hence,

\[ \mathcal{F}\left( \frac{d^2 g}{dy^2} \right) = -\xi^2 \, \mathcal{F}(g) \]

Now apply the Fourier transform in space to the equation we obtained after applying the Laplace transform in time. We have

\[ \mathcal{F}\left( \frac{\partial^2 \mathcal{L}(\Phi)}{\partial y^2} \right) = \beta \, \mathcal{F}(\mathcal{L}(\Phi)) - r_o \lambda_c^2 \tau_m \, \mathcal{F}\left( \mathcal{L}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) \right) \]

or

\[ -\xi^2 \, \mathcal{F}(\mathcal{L}(\Phi)) = \beta \, \mathcal{F}(\mathcal{L}(\Phi)) - r_o \lambda_c^2 \tau_m \, \mathcal{F}\left( \mathcal{L}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) \right) \]

For convenience, let

\[ \mathcal{T}(\Phi) = \mathcal{F}(\mathcal{L}(\Phi)) \]

and we see we have

\[ -\xi^2 \, \mathcal{T}(\Phi) = \beta \, \mathcal{T}(\Phi) - r_o \lambda_c^2 \tau_m \, \mathcal{T}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) \]

28.1.5 The T Transform Of the Pulse:

We must now compute the T transform of the pulse term P_{nm}(τ_m s, λ_c y) e^s. Since the pulse is zero off the (t, z) rectangle [−τ_m/n, τ_m/n] × [−λ_c/m, λ_c/m], the (s, y) integration rectangle reduces to [−1/n, 1/n] × [−1/m, 1/m]. Hence, keeping only the nonnegative time portion,

\[ \mathcal{T}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) = \int_{-\frac{1}{m}}^{\frac{1}{m}} \int_{0}^{\frac{1}{n}} P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \, e^{-\beta s} \, e^{-j\xi y} \, ds \, dy = \int_{-\frac{1}{m}}^{\frac{1}{m}} \int_{0}^{\frac{1}{n}} \frac{I_0 \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{1}{n^2 s^2 - 1}} \, e^{\frac{1}{m^2 y^2 - 1}} \, e^{s} \, e^{-\beta s} \, e^{-j\xi y} \, ds \, dy \]

Now use the change of variables ζ = ns and u = my, and replace the ζ integral over [0, 1] by half the symmetric integral over [−1, 1] (this replacement is exact in the limit of large n), to obtain

\[ \mathcal{T}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) = \frac{1}{2} \int_{-1}^{1} \int_{-1}^{1} \frac{I_0}{\tau_m \lambda_c \gamma^2} \, e^{\frac{1}{\zeta^2 - 1}} \, e^{\frac{1}{u^2 - 1}} \, e^{\frac{\zeta}{n}} \, e^{-\frac{\beta \zeta}{n}} \, e^{-\frac{j\xi u}{m}} \, d\zeta \, du \]

Note that this implies that

\[ \lim_{n, m \to \infty} \mathcal{T}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) = \frac{1}{2} \frac{I_0}{\tau_m \lambda_c \gamma^2} \int_{-1}^{1} \int_{-1}^{1} e^{\frac{1}{\zeta^2 - 1}} \, e^{\frac{1}{u^2 - 1}} \, d\zeta \, du = \frac{I_0}{2 \tau_m \lambda_c \gamma^2} \, 2\gamma^2 = \frac{I_0}{\tau_m \lambda_c} \]

28.1.6 The Idealized Impulse T Transform Solution:

For a given impulse P_{nm}, the T transform solution satisfies

\[ -\xi^2 \, \mathcal{T}(\Phi) = \beta \, \mathcal{T}(\Phi) - r_o \lambda_c^2 \tau_m \, \mathcal{T}\left( P_{nm}(\tau_m s, \lambda_c y) \, e^{s} \right) \]

and so the idealized solution we seek is obtained by letting n and m go to infinity to obtain

\[ -\xi^2 \, \mathcal{T}(\Phi) = \beta \, \mathcal{T}(\Phi) - r_o \lambda_c^2 \tau_m \frac{I_0}{\tau_m \lambda_c} = \beta \, \mathcal{T}(\Phi) - r_o \lambda_c I_0 \]

or

\[ (\xi^2 + \beta) \, \mathcal{T}(\Phi) = r_o \lambda_c I_0 \]

or finally

\[ \Phi^{*} = \mathcal{T}(\Phi) = r_o \lambda_c I_0 \, \frac{1}{\beta + \xi^2} \]

where for convenience, we denote the T transform of Φ by Φ*.

28.1.7 Inverting The T Transform Solution:

To move back from the transform (β, ξ) space to our original (s, y) space, we apply the inverse of our T transform: we first invert the Laplace transform in the β variable and then invert the Fourier transform in the ξ variable. For a function h(β, ξ), this is

\[ \mathcal{T}^{-1}(h)(s, y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathcal{L}^{-1}(h)(s, \xi) \, e^{j\xi y} \, d\xi \]

To find the solution to our cable equation, we will apply this inverse transform to Φ*.
Consider the Laplace transform of the simple function

\[ f(s) = \begin{cases} e^{-as} & s \geq 0 \\ 0 & s < 0 \end{cases} \]

where a is positive. It is easy to show that

\[ \mathcal{L}(f) = \int_{0}^{\infty} e^{-as} \, e^{-\beta s} \, ds = \frac{1}{\beta + a} \]

Hence, with a = ξ², the inner inverse Laplace transform of Φ* is

\[ \mathcal{L}^{-1}\left( \frac{r_o \lambda_c I_0}{\beta + \xi^2} \right) = r_o \lambda_c I_0 \, e^{-\xi^2 s} \]

Thus, we can compute the inverse T transform to obtain

\[ \mathcal{T}^{-1}(\Phi^{*}) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( r_o \lambda_c I_0 \, e^{-\xi^2 s} \right) e^{j\xi y} \, d\xi = \frac{r_o \lambda_c I_0}{2\pi} \int_{-\infty}^{\infty} e^{-\xi^2 s} \, e^{j\xi y} \, d\xi = \frac{r_o \lambda_c I_0}{2\pi} \int_{-\infty}^{\infty} e^{-\xi^2 s + j\xi y} \, d\xi \]

Now to invert the remaining part, we rewrite the exponent by completing the square:

\[ -\xi^2 s + j\xi y = -s \left( \xi^2 - \frac{jy}{s} \xi + \left( \frac{jy}{2s} \right)^2 - \left( \frac{jy}{2s} \right)^2 \right) = -s \left( \left( \xi - \frac{jy}{2s} \right)^2 - \left( \frac{jy}{2s} \right)^2 \right) \]

Hence,

\[ e^{-\xi^2 s + j\xi y} = e^{-s \left( \xi - \frac{jy}{2s} \right)^2} \, e^{-\frac{y^2}{4s}} \]

because j² = −1.
To handle the inversion here, we note that for any positive a and base points x_0 and y_0:

\[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-a(x - x_0)^2} \, e^{-a(y - y_0)^2} \, dx \, dy = \frac{1}{a} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-u^2} \, e^{-v^2} \, du \, dv \]

using the change of variables u = √a (x − x_0) and v = √a (y − y_0).
Now we convert to polar coordinates to find

\[ \frac{1}{a} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-u^2} \, e^{-v^2} \, du \, dv = \frac{1}{a} \int_{0}^{\infty} \int_{0}^{2\pi} e^{-r^2} \, r \, d\theta \, dr = \frac{\pi}{a} \]

This implies that

\[ \left( \int_{-\infty}^{\infty} e^{-a(x - x_0)^2} \, dx \right)^2 = \frac{\pi}{a} \]

or

\[ \int_{-\infty}^{\infty} e^{-a(x - x_0)^2} \, dx = \sqrt{\frac{\pi}{a}} \]

We can apply this result to our problem (with a = s and the base point jy/(2s)) to see

\[ \frac{r_o \lambda_c I_0}{2\pi} \int_{-\infty}^{\infty} e^{-\xi^2 s + j\xi y} \, d\xi = \frac{r_o \lambda_c I_0}{2\pi} \, e^{-\frac{y^2}{4s}} \int_{-\infty}^{\infty} e^{-s \left( \xi - \frac{jy}{2s} \right)^2} \, d\xi = \frac{r_o \lambda_c I_0}{2\pi} \sqrt{\frac{\pi}{s}} \, e^{-\frac{y^2}{4s}} = r_o \lambda_c I_0 \, \frac{1}{\sqrt{4\pi s}} \, e^{-\frac{y^2}{4s}} \]

Hence, our idealized solution is

\[ \Phi(s, y) = r_o \lambda_c I_0 \, \frac{1}{\sqrt{4\pi s}} \, e^{-\frac{y^2}{4s}} \]

which tells us that

\[ w(s, y) = \Phi(s, y) \, e^{-s} = r_o \lambda_c I_0 \, \frac{1}{\sqrt{4\pi s}} \, e^{-\frac{y^2}{4s}} \, e^{-s} \]

Figure 28.1: One Time Dependent Pulse

We can write this in the unscaled form at pulse center (t_0, z_0) as

\[ v_m(t, z) = r_o \lambda_c I_0 \, \frac{1}{\sqrt{4\pi \frac{t - t_0}{\tau_m}}} \, e^{-\frac{\left( \frac{z - z_0}{\lambda_c} \right)^2}{4 \left( \frac{t - t_0}{\tau_m} \right)}} \, e^{-\left( \frac{t - t_0}{\tau_m} \right)} \tag{28.2} \]

28.1.8 A Few Computed Results:

We can use the scaled solutions to generate a few surface plots. In Figure 28.1 we see a pulse applied at time zero and spatial location 1.0 of magnitude 4.
We can use the linear superposition principle to sum two applied pulses: in Figure 28.2, we see the effects of a pulse applied at space position 1.0 of magnitude 4.0 added to a pulse of strength 10.0 applied at position 7.0.
Finally, we can take the results shown in Figure 28.2 and simply plot the voltage at position 10.0 for the first three seconds. This is shown in Figure 28.3.
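
A minimal MatLab sketch of how such plots can be generated from the scaled solution is given below. The pulse positions and strengths match the description above, but the plotting ranges are our own choices and we set r_o λ_c = 1 for illustration.

% pulsesurface.m: surface plot of the scaled impulse solution,
% w(s,y) = r0*lambda_c*I0*exp(-(y-y0)^2/(4s))/sqrt(4*pi*s)*exp(-s),
% summed over two pulses; r0*lambda_c is taken to be 1 here.
w = @(s,y,I0,y0) I0*exp(-(y-y0).^2./(4*s))./sqrt(4*pi*s).*exp(-s);
[s,y] = meshgrid(linspace(0.01,3,120), linspace(-2,12,140));
V = w(s,y,4.0,1.0) + w(s,y,10.0,7.0);   % superposition of the two pulses
surf(s,y,V); shading interp;
xlabel('time (time constants)'); ylabel('space (space constants)');
zlabel('voltage');
% voltage trace at 10 space constants over the first 3 time constants:
figure;
sline = linspace(0.01,3,300);
plot(sline, w(sline,10.0,4.0,1.0) + w(sline,10.0,10.0,7.0));
xlabel('time (time constants)'); ylabel('voltage at y = 10');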

Figure 28.2: Two Time Dependent Pulses

Figure 28.3: Summed Voltage at 3 Time and 10 Space Constants

28.1.9 Reinterpretation In Terms of Charge:

Note that our family of impulses is

\[ k_e^{nm}(t, z) = \tau_m P_{nm}(t, z) = \tau_m \left( \frac{I_0 \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{\tau_m^2}{n^2 t^2 - \tau_m^2}} \, e^{\frac{\lambda_c^2}{m^2 z^2 - \lambda_c^2}} \right) = \frac{(\tau_m I_0) \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{\tau_m^2}{n^2 t^2 - \tau_m^2}} \, e^{\frac{\lambda_c^2}{m^2 z^2 - \lambda_c^2}} = \frac{Q_0 \, nm}{\tau_m \lambda_c \gamma^2} \, e^{\frac{\tau_m^2}{n^2 t^2 - \tau_m^2}} \, e^{\frac{\lambda_c^2}{m^2 z^2 - \lambda_c^2}} \]

where Q_0 = τ_m I_0 is the amount of charge deposited in one time constant. The rest of the analysis is quite similar. Thus, we can also write our solutions as

\[ v_m(t, z) = \frac{r_o \lambda_c Q_0}{\tau_m} \, \frac{1}{\sqrt{4\pi \frac{t - t_0}{\tau_m}}} \, e^{-\frac{\left( \frac{z - z_0}{\lambda_c} \right)^2}{4 \left( \frac{t - t_0}{\tau_m} \right)}} \, e^{-\left( \frac{t - t_0}{\tau_m} \right)} \tag{28.3} \]

28.2 The Solution To A Constant Current:

Now we need to attack a harder problem: we will apply a constant external current i_e which is defined by

\[ i_e(t) = \begin{cases} I_e, & t > 0 \\ 0, & t \leq 0 \end{cases} \]

Recall that we could rewrite this as i_e = I_e u(t) for the standard unit step function u. Now in a time interval h, charge Q_e = I_e h is delivered to the cable through the external membrane. Fix the positive time t. Now divide the time interval [0, t] into K equal parts using h = t/K. This gives us a set of K + 1 points {t_i},

\[ t_i = i \frac{t}{K}, \quad 0 \leq i \leq K \]

where we note that t_0 is 0 and t_K is t. This is called a partition of the interval [0, t] which we denote by the symbol P_K. Here h is the fraction t/K.
Now let's think of the charge deposited into the outer membrane at t_i as being the full amount I_e h deposited between t_i and t_{i+1}. Then the time dependent solution due to the injection of this charge at t_i is given by

\[ v_m^{i}(t, z) = \frac{r_o \lambda_c I_e h}{\tau_m} \, \frac{1}{\sqrt{4\pi \frac{t - t_i}{\tau_m}}} \, e^{-\frac{\left( \frac{z}{\lambda_c} \right)^2}{4 \left( \frac{t - t_i}{\tau_m} \right)}} \, e^{-\left( \frac{t - t_i}{\tau_m} \right)} \]

and since our problem is linear, by the superposition principle, we find the solution due to charge I_e h injected at each point t_i is given by

\[ v_m(t, z, P_K) = \frac{r_o \lambda_c I_e h}{\tau_m} \sum_{i=0}^{K} \frac{1}{\sqrt{4\pi \frac{t - t_i}{\tau_m}}} \, e^{-\frac{\left( \frac{z}{\lambda_c} \right)^2}{4 \left( \frac{t - t_i}{\tau_m} \right)}} \, e^{-\left( \frac{t - t_i}{\tau_m} \right)} = \frac{r_o \lambda_c I_e}{\tau_m} \sum_{i=0}^{K} \frac{1}{\sqrt{4\pi \frac{t - t_i}{\tau_m}}} \, e^{-\frac{\left( \frac{z}{\lambda_c} \right)^2}{4 \left( \frac{t - t_i}{\tau_m} \right)}} \, e^{-\left( \frac{t - t_i}{\tau_m} \right)} \, (t_{i+1} - t_i) \]

Now we can reorder the partition P_K using

\[ u_{K-i} = t - t_i \]

which gives u_K = t and u_0 = 0; hence, we are just moving backwards through the partition. Note that

\[ t_{i+1} - t_i = u_{K-i} - u_{K-i-1} = h \]

This relabeling allows us to rewrite the solution as

\[ v_m(t, z, P_K) = \frac{r_o \lambda_c I_e}{\tau_m} \sum_{i=0}^{K} \frac{1}{\sqrt{4\pi \frac{u_i}{\tau_m}}} \, e^{-\frac{\left( \frac{z}{\lambda_c} \right)^2}{4 \left( \frac{u_i}{\tau_m} \right)}} \, e^{-\left( \frac{u_i}{\tau_m} \right)} \, (u_{i+1} - u_i) \]

We can do this for any choice of partition P_K. Since all of the functions involved here are continuous, we see that as K → ∞, we obtain the Riemann integral of the idealized solution for an impulse of size I_e applied at the point u,

\[ v_m^{I}(u, z) = \frac{r_o \lambda_c I_e}{\tau_m} \, \frac{1}{\sqrt{4\pi \frac{u}{\tau_m}}} \, e^{-\frac{\left( \frac{z}{\lambda_c} \right)^2}{4 \left( \frac{u}{\tau_m} \right)}} \, e^{-\left( \frac{u}{\tau_m} \right)} \]

leading to

\[ v_m(t, z) = \int_{0}^{t} v_m^{I}(u, z) \, du = \int_{0}^{t} \frac{r_o \lambda_c I_e}{\tau_m} \, \frac{1}{\sqrt{4\pi \frac{u}{\tau_m}}} \, e^{-\frac{\left( \frac{z}{\lambda_c} \right)^2}{4 \left( \frac{u}{\tau_m} \right)}} \, e^{-\left( \frac{u}{\tau_m} \right)} \, du \]
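
This last integral has no elementary closed form, but it is straightforward to evaluate numerically. The MatLab sketch below is our own; the parameter values are illustrative and the singularity of the integrand at u = 0 is integrable since the exponential term dominates for z ≠ 0.

% constantcurrent.m: evaluate the constant current solution by quadrature.
r0 = 1.0; lambda_c = 1.0; tau_m = 1.0; Ie = 1.0;   % illustrative values
z = 2.0; t = 5.0;
kernel = @(u) (r0*lambda_c*Ie/tau_m) .* ...
    exp(-(z/lambda_c)^2./(4*u/tau_m)) ./ sqrt(4*pi*u/tau_m) .* exp(-u/tau_m);
vm = integral(kernel, 0, t);
fprintf('v_m(%g, %g) = %g\n', t, z, vm);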

Part VI

Hodgkin - Huxley Models

Chapter 29
The Basic Hodgkin - Huxley Model

Let’s recall the standard setup of our cable model. The salient variables needed to describe what
is happening inside and outside the cellular membrane and to some extent, inside the membrane
are

• Vm0 is the rest value of the membrane potential.

0 is the rest value of the membrane current per length density.


• Km

• Ke0 is the rest value of the externally applied current per length density.

• Ii0 is the rest value of the inner current.

• Io0 is the rest value of the inner current.

• Vi0 is the rest value of the inner voltage.

• Vo0 is the rest value of the inner voltage.

• ri is the resistance of the inner fluid of the cable.

• ro is the resistance of the outer fluid surrounding the cable.

• gm is the membrane conductance per unit length.

• cm is the membrane capacitance per unit length.

The membrane voltage can be shown to satisfy the partial differential equation 29.1

∂ 2 Vm
= (ri + ro )Km − ro Ke (29.1)
∂z 2

Figure 29.1: The Equivalent Electrical Network Model

In the standard core conductor model, the membrane is not modeled at all. We will think more carefully about the membrane boxes shown in Figure 29.1.
We will replace our empty membrane box by a parallel circuit model. Now this box is really a chunk of membrane that is Δz wide. In the first cable model, we assume our membrane has a constant resistance and capacitance. We know that conductance is reciprocal resistance, so our model will consist of a two branch circuit: one branch contains a capacitor and the other, the conductance element. We will let c_m denote the membrane capacitance density per unit length (measured in farads/cm). Hence, since the box is Δz wide, we see the value of capacitance should be c_m Δz. Similarly, we let g_m be the conductance per unit length (measured in 1/(ohm · cm)) for the membrane. The amount of conductance for the box element is thus g_m Δz. In Figure 29.2, we illustrate our new membrane model. Since this is a resistance - capacitance parallel circuit, it is traditional to call this an RC membrane model. In Figure 29.2, the current going into the element is K_m(z, t) Δz and we draw the rest voltage for the membrane as a battery of value V_m^0.

Figure 29.2: The RC Membrane Model

We know that the membrane current, K_m, satisfies Equation 29.2:

\[ K_m(z, t) = g_m V_m(z, t) + c_m \frac{\partial V_m}{\partial t} \tag{29.2} \]

In terms of membrane current densities, all of the above details come from modeling the simple equation

\[ K_m = K_c + K_{ion} \]

where K_m is the membrane current density, K_c is the current through the capacitative side of the circuit and K_{ion} is the current that flows through the side of the circuit that is modeled by the conductance term, g_m. We see that in this model

\[ K_c = c_m \frac{\partial V_m}{\partial t}, \qquad K_{ion} = g_m V_m \]

However, we can come up with a more realistic model of how the membrane activity contributes to the membrane voltage by adding models of ion flow controlled by gates in the membrane. We have not done this before. Our models will initially be based on work that Hodgkin and Huxley performed in the 1950's.
We start by expanding our membrane model to handle potassium, sodium and an all purpose current, called the leakage current, using a modification of our original simple electrical circuit model of the membrane. We will think of a gate in the membrane as having an intrinsic resistance and the cell membrane itself as having an intrinsic capacitance as shown in Figure 29.3:

Figure 29.3: The Membrane and Gate Circuit Model

Thus, we expand the single branch of our old circuit model to multiple branches – one for each ion flow we wish to model. The ionic current consists of the portions due to potassium, K_K, sodium, K_{Na}, and leakage, K_L. The leakage current is due to all other sources of ion flow across the membrane which are not being explicitly modeled. This would include ion pumps; gates for other ions such as calcium and chlorine; neurotransmitter activated gates and so forth. We will assume that the leakage current is chosen so that there is no excitable neural activity at equilibrium.
The standard Hodgkin - Huxley model of an excitatory neuron then consists of the equation for the total membrane current, K_m, obtained from Ohm's law:

\[ K_m = c_m \frac{\partial V_m}{\partial t} + K_K + K_{Na} + K_L \tag{29.3} \]

The new equation for the membrane voltage is thus

\[ \frac{\partial^2 V_m}{\partial z^2} = (r_i + r_o) K_m - r_o K_e = (r_i + r_o) c_m \frac{\partial V_m}{\partial t} + (r_i + r_o) K_K + (r_i + r_o) K_{Na} + (r_i + r_o) K_L - r_o K_e \]

which can be simplified to what is seen in Equation 29.5:

\[ \frac{1}{r_i + r_o} \frac{\partial^2 V_m}{\partial z^2} = K_m - \frac{r_o}{r_i + r_o} K_e \tag{29.4} \]
\[ = c_m \frac{\partial V_m}{\partial t} + K_K + K_{Na} + K_L - \frac{r_o}{r_i + r_o} K_e \tag{29.5} \]

29.1 The Voltage Clamped Protocol:

Under certain experimental conditions, we can force the membrane voltage to be independent of the spatial variable z. In this case, we find

\[ \frac{\partial^2 V_m}{\partial z^2} = 0 \]

which allows us to rewrite Equation 29.5 as Equation 29.6:

\[ c_m \frac{dV_m}{dt} + K_K + K_{Na} + K_L - \frac{r_o}{r_i + r_o} K_e = 0 \tag{29.6} \]

The replacement of the partial derivatives with a normal derivative reflects the fact that in the voltage clamped protocol, the membrane voltage depends only on the one variable, time t. Since c_m is capacitance per unit length, the above equation can also be interpreted in terms of capacitance, C_m, and currents, I_K, I_{Na}, I_L and an external type current I_e. This leads to Equation 29.7:

\[ C_m \frac{dV_m}{dt} + I_K + I_{Na} + I_L - \frac{r_o}{r_i + r_o} I_e = 0 \tag{29.7} \]

Finally, if we label as external current, I_M (think of it as a membrane current), the term

\[ I_M = \frac{r_o}{r_i + r_o} I_e, \]

the equation we need to solve under the voltage clamped protocol becomes Equation 29.8:

\[ \frac{dV_m}{dt} = \frac{1}{C_M} \left( I_M - I_K - I_{Na} - I_L \right) \tag{29.8} \]

29.2 The Hodgkin - Huxley Gate Model:

In Figure 29.4, we show an idealized cell with a small portion of the membrane blown up into an idealized circuit. We see a small piece of the lipid membrane with an inserted gate. We think of the gate as having some intrinsic resistance and capacitance. Now for our simple Hodgkin - Huxley model here, we want to model a sodium and potassium gate as well as the cell capacitance. So we will have a resistance for both the sodium and potassium. In addition, we know that other ions move across the membrane due to pumps, other gates and so forth. We will temporarily model this additional ion current as a leakage current with its own resistance. We also know that each ion has its own equilibrium potential which is determined by applying the Nernst equation. The driving electromotive force or driving emf is the difference between the voltage across the membrane and the ion equilibrium potential. Hence, if E_c is the equilibrium potential due to ion c and V_m is the membrane potential, the driving force is V_m − E_c. In Figure 29.4, we see an electric schematic that summarizes what we have just said. We model the membrane as a parallel circuit with a branch for the sodium and potassium ions, a branch for the leakage current and a branch for the membrane capacitance.

Figure 29.4: The Simple Hodgkin - Huxley Membrane Circuit Model

From circuit theory, we know that the charge q across a capacitor is q = C E, where C is the capacitance and E is the voltage across the capacitor. Hence, if the capacitance C is a constant, we see that the current through the capacitor is given by the time rate of change of the charge

\[ \frac{dq}{dt} = C \frac{dE}{dt} \]

If the voltage E were also space dependent, then we would write E(z, t) to indicate its dependence on both a space variable z and the time t. Then the capacitive current would be

\[ \frac{dq}{dt} = C \frac{\partial E}{\partial t} \]

From Ohm's law, we know that voltage is current times resistance; hence for each ion c, we can say

\[ V_c = I_c R_c \]

where we label the voltage, current and resistance due to this ion with the subscript c. This implies

\[ I_c = \frac{1}{R_c} V_c = G_c V_c \]

where G_c is the reciprocal resistance or conductance of ion c. Hence, we can model all of our ionic currents using a conductance equation of the form above. Of course, the potassium and sodium conductances are nonlinear functions of the membrane voltage V and time t. This reflects the fact that the amount of current that flows through the membrane for these ions is dependent on the voltage differential across the membrane which in turn is also time dependent. The general functional form for an ion c is thus

\[ I_c = G_c(V, t) \left( V(t) - E_c(t) \right) \]

where, as we mentioned previously, the driving force, V − E_c, is the difference between the voltage across the membrane and the equilibrium value for the ion in question, E_c. Note, the ion battery voltage E_c itself might also change in time (for example, extracellular potassium concentration changes over time). Hence, the driving force could be time dependent. The conductance is modeled as the product of an activation, M_c, and an inactivation, H_c, term that are essentially sigmoid nonlinearities. The activation and inactivation are functions of V and t also. The conductance is assumed to have the form

\[ G_c(V, t) = G_0 \, M_c^{p}(V, t) \, H_c^{q}(V, t) \]

where appropriate powers of p and q are found to match known data for a given ion conductance. We model the leakage current, I_L, as

\[ I_L = g_L \left( V(t) - E_L \right) \]

where the leakage battery voltage, E_L, and the conductance g_L are constants that are data driven. Hence, in terms of current densities, letting g_K, g_{Na} and g_L respectively denote the ion conductances per length, our full model would be

\[ K_K = g_K (V - E_K), \qquad K_{Na} = g_{Na} (V - E_{Na}), \qquad K_L = g_L (V - E_L) \]

We know the membrane voltage satisfies:

\[ \frac{1}{r_i + r_o} \frac{\partial^2 V_m}{\partial z^2} = c_m \frac{\partial V_m}{\partial t} + K_K + K_{Na} + K_L - \frac{r_o}{r_i + r_o} K_e \]

We can rewrite this as

\[ \frac{1}{r_i + r_o} \frac{\partial^2 V_m}{\partial z^2} = c_m \frac{\partial V_m}{\partial t} + g_K (V_m - E_K) + g_{Na} (V_m - E_{Na}) + g_L (V_m - E_L) - \frac{r_o}{r_i + r_o} K_e \]

29.2.1 Activation and Inactivation Variables:

We assume that the voltage dependence of our activation and inactivation has been fitted from data. Hodgkin and Huxley modeled the time dependence of these variables using first order kinetics. They assumed a typical variable of this type, say Φ, satisfies for each value of voltage, V:

\[ \frac{d\Phi(V)}{dt} = \alpha_{\Phi}(V) \left( 1 - \Phi(V) \right) - \beta_{\Phi}(V) \, \Phi(V), \qquad \Phi(V, 0) = \Phi_0(V) \]

For convenience of exposition, we usually drop the functional dependence of Φ on V and just write:

\[ \frac{d\Phi}{dt} = \alpha_{\Phi} (1 - \Phi) - \beta_{\Phi} \Phi, \qquad \Phi(0) = \Phi_0 \]

Rewriting, we see

\[ \frac{1}{\alpha_{\Phi} + \beta_{\Phi}} \frac{d\Phi}{dt} = \frac{\alpha_{\Phi}}{\alpha_{\Phi} + \beta_{\Phi}} - \Phi, \qquad \Phi(0) = \Phi_0 \]

We let

\[ \tau_{\Phi} = \frac{1}{\alpha_{\Phi} + \beta_{\Phi}}, \qquad \Phi_{\infty} = \frac{\alpha_{\Phi}}{\alpha_{\Phi} + \beta_{\Phi}} \]

allowing us to rewrite our rate equation as

\[ \tau_{\Phi} \frac{d\Phi}{dt} = \Phi_{\infty} - \Phi, \qquad \Phi(0) = \Phi_0 \]
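
For a fixed voltage V (so that τ_Φ and Φ_∞ are constants), this first order equation has the familiar closed form Φ(t) = Φ_∞ + (Φ_0 − Φ_∞) e^{−t/τ_Φ}: each gate variable simply relaxes exponentially toward its steady state. A quick MatLab check of this against a numerical solve (our own sketch; the rate values are arbitrary):

% gatecheck.m: compare the closed form relaxation to a numerical solve.
alpha = 0.8; beta = 0.3; Phi0 = 0.1;        % arbitrary fixed-voltage rates
tau = 1/(alpha + beta); PhiInf = alpha/(alpha + beta);
rhs = @(t,Phi) (PhiInf - Phi)/tau;          % tau*Phi' = PhiInf - Phi
[t,Phi] = ode45(rhs, [0 10*tau], Phi0);
exact = PhiInf + (Phi0 - PhiInf)*exp(-t/tau);
fprintf('max |numerical - exact| = %g\n', max(abs(Phi - exact)));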

29.3 The Hodgkin-Huxley Sodium and Potassium Model:

Hodgkin and Huxley modeled the sodium and potassium gates as

\[ g_{Na}(V) = g_{Na}^{Max} \, M_{NA}^{3}(V) \, H_{NA}(V), \qquad g_{K}(V) = g_{K}^{Max} \, M_{K}^{4}(V) \]

where the two activation variables, M_{NA} and M_K, and the one inactivation variable, H_{NA}, all satisfy the first order Φ kinetics as we have discussed. Hence, we know

\[ \tau_{M_{NA}} M_{NA}'(t) = (M_{NA})_{\infty} - M_{NA}, \qquad \tau_{H_{NA}} H_{NA}'(t) = (H_{NA})_{\infty} - H_{NA}, \qquad \tau_{M_{K}} M_{K}'(t) = (M_{K})_{\infty} - M_{K} \]

with

\[ \tau_{M_{NA}} = \frac{1}{\alpha_{M_{NA}} + \beta_{M_{NA}}}, \qquad (M_{NA})_{\infty} = \frac{\alpha_{M_{NA}}}{\alpha_{M_{NA}} + \beta_{M_{NA}}} \]
\[ \tau_{H_{NA}} = \frac{1}{\alpha_{H_{NA}} + \beta_{H_{NA}}}, \qquad (H_{NA})_{\infty} = \frac{\alpha_{H_{NA}}}{\alpha_{H_{NA}} + \beta_{H_{NA}}} \]
\[ \tau_{M_{K}} = \frac{1}{\alpha_{M_{K}} + \beta_{M_{K}}}, \qquad (M_{K})_{\infty} = \frac{\alpha_{M_{K}}}{\alpha_{M_{K}} + \beta_{M_{K}}} \]

Further, the coefficient functions α and β for each variable required data fits as functions of voltage. These were determined to be

\[ \alpha_{M_{NA}} = -0.10 \, \frac{V + 35.0}{e^{-0.1 (V + 35.0)} - 1.0} \]
\[ \beta_{M_{NA}} = 4.0 \, e^{-\frac{V + 60.0}{18.0}} \]
\[ \alpha_{H_{NA}} = 0.07 \, e^{-0.05 (V + 60.0)} \tag{29.9} \]
\[ \beta_{H_{NA}} = \frac{1.0}{1.0 + e^{-0.1 (V + 30.0)}} \]
\[ \alpha_{M_{K}} = -0.01 \, \frac{V + 50.0}{e^{-0.1 (V + 50.0)} - 1.0} \]
\[ \beta_{M_{K}} = 0.125 \, e^{-0.0125 (V + 60.0)} \]

Of course these data fits were obtained at a certain temperature and assumed values for all the other constants needed. These other parameters are given in the units below:

voltage        mV   milli volts     10⁻³ Volts
current        na   nano amps       10⁻⁹ Amps
time           ms   milli seconds   10⁻³ Seconds
concentration  mM   milli moles     10⁻³ Moles
conductance    µS   micro Siemens   10⁻⁶ ohms⁻¹
capacitance    nF   nano farads     10⁻⁹ Farads
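
It is instructive to plot the steady state values and time constants these fits produce as functions of voltage. A small MatLab sketch of our own using the α_{M_{NA}} and β_{M_{NA}} fits above (the voltage range is an illustrative choice):

% gatesteadystate.m: steady state activation and time constant for M_NA.
V = linspace(-100, 40, 300);
alpha_mNA = -0.10*(V + 35.0)./(exp(-0.1*(V + 35.0)) - 1.0);
beta_mNA  = 4.0*exp(-(V + 60.0)/18.0);
mInf = alpha_mNA./(alpha_mNA + beta_mNA);
tauM = 1.0./(alpha_mNA + beta_mNA);
subplot(2,1,1); plot(V, mInf); ylabel('(M_{NA})_\infty');
subplot(2,1,2); plot(V, tauM); ylabel('\tau_{M_{NA}} (ms)'); xlabel('V (mV)');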


Our model of the membrane dynamics here thus consists of the following differential equations:

\[ \tau_{M_{NA}} \frac{dM_{NA}}{dt} = (M_{NA})_{\infty} - M_{NA} \]
\[ \tau_{H_{NA}} \frac{dH_{NA}}{dt} = (H_{NA})_{\infty} - H_{NA} \]
\[ \tau_{M_{K}} \frac{dM_{K}}{dt} = (M_{K})_{\infty} - M_{K} \]
\[ \frac{dV}{dt} = \frac{I_M - I_K - I_{Na} - I_L}{C_M} \]

with initial conditions

\[ M_{NA}(0) = (M_{NA})_{\infty}(V_0, 0), \quad H_{NA}(0) = (H_{NA})_{\infty}(V_0, 0), \quad M_{K}(0) = (M_{K})_{\infty}(V_0, 0), \quad V(0) = V_0 \]

We note that at equilibrium there is no current across the membrane. Hence, the sodium and potassium currents are zero and the activation and inactivation variables should achieve their steady state values, which would be m_∞, h_∞ and n_∞ computed at the equilibrium membrane potential, here denoted by V_0.

29.4 Encoding The Dynamics:

Now these dynamics are more difficult to solve than you might think. The sequence of steps is this:

• Given the time t and voltage V, compute

\[ \alpha_{M_{NA}} = -0.10 \, \frac{V + 35.0}{e^{-0.1 (V + 35.0)} - 1.0}, \qquad \beta_{M_{NA}} = 4.0 \, e^{-\frac{V + 60.0}{18.0}} \]
\[ \alpha_{H_{NA}} = 0.07 \, e^{-0.05 (V + 60.0)}, \qquad \beta_{H_{NA}} = \frac{1.0}{1.0 + e^{-0.1 (V + 30.0)}} \]
\[ \alpha_{M_{K}} = -0.01 \, \frac{V + 50.0}{e^{-0.1 (V + 50.0)} - 1.0}, \qquad \beta_{M_{K}} = 0.125 \, e^{-0.0125 (V + 60.0)} \]

• Then compute the τ and steady state activation and inactivation variables

\[ \tau_{M_{NA}} = \frac{1}{\alpha_{M_{NA}} + \beta_{M_{NA}}}, \qquad (M_{NA})_{\infty} = \frac{\alpha_{M_{NA}}}{\alpha_{M_{NA}} + \beta_{M_{NA}}} \]
\[ \tau_{H_{NA}} = \frac{1}{\alpha_{H_{NA}} + \beta_{H_{NA}}}, \qquad (H_{NA})_{\infty} = \frac{\alpha_{H_{NA}}}{\alpha_{H_{NA}} + \beta_{H_{NA}}} \]
\[ \tau_{M_{K}} = \frac{1}{\alpha_{M_{K}} + \beta_{M_{K}}}, \qquad (M_{K})_{\infty} = \frac{\alpha_{M_{K}}}{\alpha_{M_{K}} + \beta_{M_{K}}} \]

• Then compute the sodium and potassium potentials. In this model, this is easy as each is set only once, since the internal and external ion concentrations always stay the same and so Nernst's equation only has to be used one time. Here we use the concentrations

\[ [NA]_o = 491.0, \quad [NA]_i = 50.0, \quad [K]_o = 20.11, \quad [K]_i = 400.0 \]

These computations are also dependent on the temperature.

• Next compute the conductances since we now know M_{NA}(V), H_{NA}(V) and M_{K}(V):

\[ g_{Na}(V) = g_{Na}^{Max} \, M_{NA}^{3}(V) \, H_{NA}(V), \qquad g_{K}(V) = g_{K}^{Max} \, M_{K}^{4}(V) \]

Now here we will need the maximum sodium and potassium conductances g_{Na}^{Max} and g_{K}^{Max} to finish the computation. These values must be provided as data and in this model are not time dependent. Here we use

\[ g_{Na}^{Max} = 120.0, \qquad g_{K}^{Max} = 36.0 \]

• Then compute the ionic currents:

\[ I_{Na} = g_{Na}(V) (V(t) - E_{Na}), \qquad I_K = g_{K}(V) (V(t) - E_{K}), \qquad I_L = g_L (V(t) - E_L) \]

where we use suitable values for the leakage conductance and battery voltage such as

\[ g_L = -0.0013, \qquad E_L = -50.0 \]

We generally obtain these values with the rest.m computation which we will discuss later.

• Finally compute the total current

\[ I_T = I_{Na} + I_K + I_L \]

• We can now compute the dynamics of our system at time t and voltage V: we let I_M denote the external current to our system which we must supply.

\[ \frac{dV}{dt} = \frac{I_M - I_T}{C_M}, \quad \frac{dM_{NA}}{dt} = \frac{(M_{NA})_{\infty} - M_{NA}}{\tau_{M_{NA}}}, \quad \frac{dH_{NA}}{dt} = \frac{(H_{NA})_{\infty} - H_{NA}}{\tau_{H_{NA}}}, \quad \frac{dM_{K}}{dt} = \frac{(M_{K})_{\infty} - M_{K}}{\tau_{M_{K}}} \]

where we use C_M = 1.0.

The basic Hodgkin - Huxley model thus needs four independent variables which we place in a four dimensional vector y whose components are assigned as follows:

\[ y = \begin{pmatrix} y[0] = V \\ y[1] = M_{NA} \\ y[2] = H_{NA} \\ y[3] = M_{K} \end{pmatrix} \]

We encode the dynamics calculated above into a four dimensional vector f whose components are interpreted as follows:

\[ f = \begin{pmatrix} f[0] = \frac{I_M - I_T}{C_M} \\ f[1] = \frac{(M_{NA})_{\infty} - M_{NA}}{\tau_{M_{NA}}} \\ f[2] = \frac{(H_{NA})_{\infty} - H_{NA}}{\tau_{H_{NA}}} \\ f[3] = \frac{(M_{K})_{\infty} - M_{K}}{\tau_{M_{K}}} \end{pmatrix} \]

which in terms of our vector y becomes

\[ f = \begin{pmatrix} f[0] = \frac{I_M - I_T}{C_M} \\ f[1] = \frac{(M_{NA})_{\infty} - y[1]}{\tau_{M_{NA}}} \\ f[2] = \frac{(H_{NA})_{\infty} - y[2]}{\tau_{H_{NA}}} \\ f[3] = \frac{(M_{K})_{\infty} - y[3]}{\tau_{M_{K}}} \end{pmatrix} \]

So to summarize, at each time t and V, we need to calculate the four dimensional dynamics vector f for use in our choice of ODE solver.

29.5 Computing the Solution Numerically:

As an example of what we might do to solve this kind of a problem, we note that this system can now be written in vector form as

\[ \frac{dy}{dt} = f(t, y), \qquad y(0) = y_0 \]

where we have found that

\[ y = \begin{pmatrix} V \\ M_{NA} \\ H_{NA} \\ M_{K} \end{pmatrix}, \qquad f = \begin{pmatrix} \frac{I_M - I_T}{C_M} \\ \frac{(M_{NA})_{\infty} - M_{NA}}{\tau_{M_{NA}}} \\ \frac{(H_{NA})_{\infty} - H_{NA}}{\tau_{H_{NA}}} \\ \frac{(M_{K})_{\infty} - M_{K}}{\tau_{M_{K}}} \end{pmatrix} \]

with initial data

\[ y_0 = \begin{pmatrix} V_0 \\ (M_{NA})_{\infty} \\ (H_{NA})_{\infty} \\ (M_{K})_{\infty} \end{pmatrix} \]

The hardest part is to encode all of this in the vector form. The dynamic component f[0] is actually a complicated function of M_{NA}, H_{NA} and M_{K} as determined by the encoding sequence we have previously discussed. Hence, the calculation of the dynamics vector f is not as simple as defining each component of f as a functional expression. Now recall that the Runge-Kutta order 4 method requires multiple vector function f evaluations per time step, so there is no way around the fact that solving this system numerically is expensive!

29.5.1 The MatLab Implementation:

This MatLab code is fairly complicated and it is worth your while to study it carefully. Simple code to manage our solution of the Hodgkin-Huxley equations is given below. We simply integrate the equations using a fixed time step repeatedly with a Runge-Kutta method of order 4. This is not terribly accurate but it illustrates the results nicely nevertheless. We begin with an auxiliary function which computes ion battery voltages.

Listing 29.1: Battery Voltage Calculation

function [EK,ENA,T] = IonBatteryVoltages(KIn,KOut,NaIn,NaOut,TF)
%
% KIn   is the inside potassium concentration
% KOut  is the outside potassium concentration
% NaIn  is the inner sodium concentration
% NaOut is the outer sodium concentration
% TF    is the temperature in Fahrenheit
%
% ================================================
% Constants for Equilibrium Voltage Calculations
% ================================================
%
% Gas constant
R = 8.31;
% convert Fahrenheit to Celsius; default is 9.3 Celsius
T = (5/9)*(TF-32);
% Faraday's constant
F = 9.649e+4;
%
% Compute Nernst voltages E_K and E_NA
% voltage = Nernst(valence,Temperature,InConc,OutConc)
%
% Sodium
%   NA_O = 491.0; NA_I = 50.0;
ENA = Nernst(1,T,NaIn,NaOut);
%
% Potassium
%   K_O = 20.11; K_I = 400.0;
EK = Nernst(1,T,KIn,KOut);

The code to manage the solution of the ODE is then given below:

Listing 29.2: Managing The MatLab Hodgkin - Huxley Simulation

function [tvals,g_NA,g_K,V,m_NA,h_NA,n_K] = ...
  SolveSimpleHH(fname,t0,tf,y0,h,k,IE,VE,EK,ENA,EL,GL,g_Kmax,g_NAmax)
%
% We use a simple Runge-Kutta scheme
% via the Matlab function HHFixedRK below:
%
% function [tvals,g_Na,g_K,V] = HHFixedRK(fname,....)
%
% Gives approximate solution to
%   y'(t) = f(t,y(t))
%   y(t0) = y0
% using a kth order RK method
%
% t0    initial time
% tf    final time
% y0    initial state
% h     stepsize
% k     RK order 1 <= k <= 4
%
% tvals = linspace(t0,tf,n)
%   where n = round((tf-t0)/h);
%
% fname = dynamics function
% IE    = external current function
% VE    = external voltage function
% g_NA  = sodium conductance
% g_K   = potassium conductance
% V     = membrane voltage
%
n    = round((tf-t0)/h);
m_NA = zeros(1,n);
h_NA = zeros(1,n);
n_K  = zeros(1,n);
g_NA = zeros(1,n);
g_K  = zeros(1,n);
V    = zeros(1,n);

[tvals,yvals] = HHFixedRK(fname,t0,tf,y0,IE,VE,EK,ENA,EL,GL,g_NAmax,g_Kmax,h,k);

%
% store values in physical variables
%
V    = yvals(1,1:n);
m_NA = yvals(2,1:n);
h_NA = yvals(3,1:n);
n_K  = yvals(4,1:n);

%
% Compute g_NA and g_K
%
for i = 1:n
  u = m_NA(i)*m_NA(i)*m_NA(i)*h_NA(i);
  g_NA(i) = g_NAmax*u;
end

for i = 1:n
  u = n_K(i)*n_K(i)*n_K(i)*n_K(i);
  g_K(i) = g_Kmax*u;
end

The initial conditions for the activation and inactivation variables for this model should be calculated carefully. When the excitable nerve cell is at equilibrium, with no external current applied, there is no net current flow across the membrane. At this point, the voltage across the membrane should be the applied voltage E_M. Therefore,

\[ 0 = I_T = I_{NA} + I_K + I_L = g_{Na}^{Max} (E_M - E_{NA}) \left( (M_{NA})_{\infty} \right)^3 (H_{NA})_{\infty} + g_{K}^{Max} (E_M - E_K) \left( (M_{K})_{\infty} \right)^4 + g_L (E_M - E_L) \]

The two parameters, g_L and E_L, are used to take into account whatever ionic currents flow across the membrane that are not explicitly modeled. We know this model does not deal with various pumps, a variety of potassium gates, calcium gates and so forth. We also know there should be no activity at equilibrium in the absence of an external current. However, it is difficult to choose these parameters. So first, we solve for the leakage parameters in terms of the rest of the variables. Also, we can see that the activation and inactivation variables for each gate at equilibrium will take on the values that are calculated using the voltage E_M. We can use the formulae given in the Hodgkin - Huxley dynamics to compute the values of activation/inactivation that occur when the membrane voltage is at rest, E_M. We label these values with a superscript r for notational convenience.

\[ (M_{NA})^r \equiv (M_{NA})_{\infty}(E_M), \qquad (H_{NA})^r \equiv (H_{NA})_{\infty}(E_M), \qquad (M_{K})^r \equiv (M_{K})_{\infty}(E_M) \]

Then, we can calculate the equilibrium currents

\[ I_{NA}^{r} = g_{Na}^{Max} (E_M - E_{NA}) \left( (M_{NA})^r \right)^3 (H_{NA})^r, \qquad I_{K}^{r} = g_{K}^{Max} (E_M - E_K) \left( (M_{K})^r \right)^4 \]

Thus, we see we must choose g_L and E_L so that

\[ g_L (E_L - E_M) = I_{NA}^{r} + I_{K}^{r} \]

If we choose to fix E_L, we can solve for the leakage conductance

\[ g_L = \frac{I_{NA}^{r} + I_{K}^{r}}{E_L - E_M} \]

We do this in a startup function rest.m. This code is almost identical to the code we see in simpleHH.m.

Listing 29.3: Initializing The Simulation: rest.m

function [g_L,m_NA_infinity,h_NA_infinity,m_K_infinity] = rest(E_L,V_R,g_NA_bar,g_K_bar)
%
% inputs:
%   E_L      = leakage voltage
%   g_NA_bar = maximum sodium conductance
%   g_K_bar  = maximum potassium conductance
%   V_R      = rest voltage
%
% outputs:
%   g_L = leakage conductance
%
% ==================================================
% Constants for Ion Equilibrium Voltage Calculations
% ==================================================
[E_K,E_NA] = IonBatteryVoltages(400.0,20.11,50.0,491.0,69);
%
% activation/inactivation parameters for NA
%
% alpha_mNA, beta_mNA
%
sum = V_R + 35.0;
alpha_mNA = -0.10*sum/(exp(-sum/10.0) - 1.0);
%
sum = V_R + 60.0;
beta_mNA = 4.0*exp(-sum/18.0);
%
m_NA_infinity = alpha_mNA/(alpha_mNA + beta_mNA)

%
% activation, inactivation parameter for I_NA
%
% alpha_hNA, beta_hNA
%
sum = (V_R + 60.0)/20.0;
alpha_hNA = 0.07*exp(-sum);
%
sum = V_R + 30.0;
beta_hNA = 1.0/(1.0 + exp(-sum/10.0));
%
h_NA_infinity = alpha_hNA/(alpha_hNA + beta_hNA)

%
% I_NA current
%
I_NA = g_NA_bar*(V_R - E_NA)*m_NA_infinity*m_NA_infinity*m_NA_infinity*h_NA_infinity

%
% activation/inactivation parameters for K
%
% alpha_mK
%
sum = V_R + 50.0;
alpha_mK = -0.01*sum/(exp(-sum/10.0) - 1.0);
%
sum = (V_R + 60.0)*0.0125;
beta_mK = 0.125*exp(-sum);

m_K_infinity = alpha_mK/(alpha_mK + beta_mK)

%
% I_K current
%
I_K = g_K_bar*(V_R - E_K)*m_K_infinity*m_K_infinity*m_K_infinity*m_K_infinity

%
% compute g_L
%
% Note we want
%   I_NA + I_K + g_L*(V_R - E_L) = 0
% which gives the equation below assuming we are given E_L.
%
g_L = -(I_NA + I_K)/(V_R - E_L)
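
With all of these pieces in place, a typical session stitches them together. The following driver is our own sketch, not from the original code base: the rest voltage, leakage voltage and pulse shape are illustrative choices, and it assumes rest.m, simpleHH.m, HHFixedRK.m and IonBatteryVoltages.m are all on the MatLab path.

% driver.m: a hypothetical end-to-end run of the simulation above.
V_R = -60.0; E_L = -50.0;                 % illustrative rest and leakage voltages
g_NA_bar = 120.0; g_K_bar = 36.0;
[g_L,m0,h0,n0] = rest(E_L,V_R,g_NA_bar,g_K_bar);
[E_K,E_NA] = IonBatteryVoltages(400.0,20.11,50.0,491.0,69);
t0 = 0.0; tf = 40.0; h = 0.01; k = 4;     % time in ms; RK order 4
n  = round((tf - t0)/h);
IE = zeros(1,n); IE(round(5/h):round(6/h)) = 20.0;  % 1 ms current pulse
VE = zeros(1,n);                          % no external voltage input
y0 = [V_R m0 h0 n0];
[tvals,g_NA,g_K,V,m_NA,h_NA,n_K] = ...
  SolveSimpleHH('simpleHH',t0,tf,y0,h,k,IE,VE,E_K,E_NA,E_L,g_L,g_K_bar,g_NA_bar);
plot(tvals,V); xlabel('time (ms)'); ylabel('membrane voltage (mV)');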


The Hodgkin-Huxley dynamics are then encoded in the function below as we have previously
discussed.

Listing 29.4: The MatLab Hodgkin - Huxley Dynamics: simpleHH.m

function f = simpleHH(t,y,timeindex,IE,VE,EK,ENA,EL,GL,gNAmax,gKmax)
% Standard Hodgkin - Huxley Model
%   voltage        mV
%   current        na
%   time           ms
%   concentration  mM
%   conductance    micro Siemens
%   capacitance    nF
% ===============================================
% y vector assignments
% ===============================================
%   y(1) = V
%   y(2) = m_NA
%   y(3) = h_NA
%   y(4) = m_K
%
% t = time counter: t starts at 1 and goes to n;
% the value of t passed in is an integer in this
% range and corresponds to t0 + (t-1)h
%
% IE  is an external current source vector of size n
% VE  is an external voltage source vector of size n
% EK  is the potassium battery voltage
% ENA is the sodium battery voltage
% EL  is the leakage voltage
% GL  is the leakage conductance

% set size of f
f = zeros(1,4);

% ===============================================
% f vector assignments
% ===============================================
%   f(1) = V dynamics
%   f(2) = m_NA dynamics
%   f(3) = h_NA dynamics
%   f(4) = m_K dynamics
%
% ================================================
% Fast Sodium Current
% ================================================

% max conductance for NA: default is 120
g_NA_bar = gNAmax;

% max conductance for K: default is 36.0
g_K_bar = gKmax;

%
% activation/inactivation parameters for NA
%
% alpha_mNA, beta_mNA
%
sum = y(1) + 35.0;
alpha_mNA = -0.10*sum/(exp(-sum/10.0) - 1.0);
%
sum = y(1) + 60.0;
beta_mNA = 4.0*exp(-sum/18.0);
%
m_NA_infinity = alpha_mNA/(alpha_mNA + beta_mNA);
t_m_NA = 1.0/(alpha_mNA + beta_mNA);
%
f(2) = (m_NA_infinity - y(2))/t_m_NA;

%
% activation, inactivation parameter for I_NA
%
% alpha_hNA, beta_hNA
%
sum = (y(1) + 60.0)/20.0;
alpha_hNA = 0.07*exp(-sum);
%
sum = y(1) + 30.0;
beta_hNA = 1.0/(1.0 + exp(-sum/10.0));
%
h_NA_infinity = alpha_hNA/(alpha_hNA + beta_hNA);
t_h_NA = 1.0/(alpha_hNA + beta_hNA);
f(3) = (h_NA_infinity - y(3))/t_h_NA;

%
% I_NA current
%
I_NA = g_NA_bar*(y(1) - ENA)*y(2)*y(2)*y(2)*y(3);

%
% activation/inactivation parameters for K
%
% alpha_mK
%
sum = y(1) + 50.0;
alpha_mK = -0.01*sum/(exp(-sum/10.0) - 1.0);
%
sum = (y(1) + 60.0)*0.0125;
beta_mK = 0.125*exp(-sum);
%
t_m_K = 1.0/(alpha_mK + beta_mK);
m_K_infinity = alpha_mK/(alpha_mK + beta_mK);
f(4) = (m_K_infinity - y(4))/t_m_K;

%
% I_K current
%
I_K = g_K_bar*(y(1) - EK)*y(4)*y(4)*y(4)*y(4);

%
% leakage current: run rest.m to find an appropriate g_L value
% default GL = 0.0092, EL = -50.0
g_leak = GL;
E_leak = EL;

%
% I_L current
%
I_leak = g_leak*(y(1) - E_leak);

%
% Cell Capacitance
%
C_M = 1.0;
f(1) = (IE(timeindex) + VE(timeindex) - I_NA - I_K - I_leak)/C_M;

HHFixedRK.m

The code that manages the call to a fixed-order Runge-Kutta method from an earlier chapter has
been modified to accept more arguments, such as the axon hillock current vector.


Listing 29.5: The MatLab Fixed Runge - Kutta Method

function [tvals, yvals] = HHFixedRK(fname, t0, tf, y0, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax, h, k)
%
% Gives the approximate solution to
%   y'(t) = f(t, y(t))
%   y(t0) = y0
% using a kth order RK method
%
% t0     initial time
% tf     final time
% y0     initial state
% h      stepsize
% k      RK order, 1 <= k <= 4
% n      number of steps to take
%
% tvals  time values of the form
%          tvals(j) = t0 + (j-1)*h, 1 <= j <= n
% yvals  approximate solution
%          yvals(:,j) = approximate solution at
%          tvals(j), 1 <= j <= n
% fname  name of the dynamics function
% IE     external current injection vector
% VE     external voltage input vector
%
tc = t0;
yc = y0;
% compute how many steps we take
n = round((tf - t0)/h);
tvals = linspace(t0, tf, n);
yvals = zeros(4, n);
yvals(1:4, 1) = transpose(yc);
tc = tvals(1);
timeindex = 1;
fc = feval(fname, tc, yc, timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);
% note: the first pass of this loop overwrites column one of yvals
% with the state after one step of size h
for timeindex = 1:n
  tc = tvals(timeindex);
  [yc, fc] = HHRKstep(fname, tc, yc, fc, timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax, h, k);
  yvals(1:4, timeindex) = transpose(yc);
end
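A direct call is sketched below; the battery voltages EK and ENA and the vectors ie, VAH and g_L must already be defined, as in the MatLab session later in this section, and the trailing arguments are the step size and RK order.

% usage sketch: fourth order RK with step size 0.01 ms on [0, 15] ms
y0 = [-70.0, m_NA_infinity, h_NA_infinity, m_K_infinity];
[tvals, yvals] = HHFixedRK('simpleHH', 0, 15, y0, ie, VAH, EK, ENA, -50.0, g_L, 120.0, 36.0, 0.01, 4);

The session below instead uses the wrapper SolveSimpleHH, which presumably calls this routine and also returns the conductance traces.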

HHRKstep.m

The actual Runge-Kutta code was also modified to allow additional arguments.

Listing 29.6: Adding Injection Current To The Runge - Kutta Code

function [ynew, fnew] = HHRKstep(fname, tc, yc, fc, timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax, h, k)
%
% fname  the name of the right hand side function f(t,y);
%        t is a scalar usually called time and
%        y is a vector of size d
% yc     approximate solution to y'(t) = f(t, y(t)) at t = tc
% fc     f(tc, yc)
% h      the time step
% k      the order of the Runge-Kutta method, 1 <= k <= 4
%
% tnew   tc + h
% ynew   approximate solution at tnew
% fnew   f(tnew, ynew)
%
if k == 1
  k1 = h*fc;
  ynew = yc + k1;
elseif k == 2
  k1 = h*fc;
  k2 = h*feval(fname, tc + (h/2), yc + (k1/2), timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);
  ynew = yc + (k1 + k2)/2;
elseif k == 3
  k1 = h*fc;
  k2 = h*feval(fname, tc + (h/2), yc + (k1/2), timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);
  k3 = h*feval(fname, tc + h, yc - k1 + 2*k2, timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);
  ynew = yc + (k1 + 4*k2 + k3)/6;
elseif k == 4
  k1 = h*fc;
  k2 = h*feval(fname, tc + (h/2), yc + (k1/2), timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);
  k3 = h*feval(fname, tc + (h/2), yc + (k2/2), timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);
  k4 = h*feval(fname, tc + h, yc + k3, timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);
  ynew = yc + (k1 + 2*k2 + 2*k3 + k4)/6;
else
  disp(sprintf('The RK method %2d order is not allowed!', k));
end
tnew = tc + h;
fnew = feval(fname, tnew, ynew, timeindex, IE, VE, EK, ENA, EL, GL, gNAmax, gKmax);

29.6 The Development of An Action Potential:


Let’s examine how an action potential is generated with this model. We will both inject current
using the function IE.m and apply an axon hillock voltage using the techniques from Chapter 26.
The injected current is modeled in this way.

Listing 29.7: Injected Current


function ie = IE(t0, tf, h, ts, te, Imax)
%
% build an external current vector on [t0, tf] with step h:
% the current is Imax on [ts, te] and zero elsewhere
%
n = round((tf - t0)/h);
t = linspace(t0, tf, n);
ie = zeros(1, n);
for i = 1:n
  if t(i) >= ts && t(i) <= te
    ie(i) = Imax;
  end
end

This injects a current of magnitude $I_{max}$ lasting only $t_e - t_s$ milliseconds. We start with the excitation
given by external current as shown in Figure 29.5, which injects a modest amount of current, 100
nanoamps, over a 0.62 millisecond window.
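A pulse like this is built with a call of the form below; the exact ts, te and Imax used to produce Figure 29.5 are not listed, so the values here are illustrative.

ie = IE(0, 15, 0.01, 10.0, 10.062, 100.0);  % 100 nA between t = 10.0 and t = 10.062 ms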

Figure 29.5: An Injected Current

In addition to this injected current, we can generate a voltage trace applied to the axon hillock
with the function AxonHillock. Now, the axon hillock voltage trace generates a current which is
applied to the Hodgkin - Huxley model as an external current. Hence, our term $I_M$ corresponds in
a sense to an axon hillock voltage generated by some sequence of voltage impulses on the dendritic
cable. Recall, the voltage we obtain on the dendrite due to a voltage impulse has the form

\[
\hat{v}_m(\lambda, \tau) \;\approx\; A_0\, e^{-\tau} \;+\; \sum_{n=1}^{Q} A_n \cos(\alpha_n (L - \lambda))\, e^{-(1+\alpha_n^2)\tau}
\]

where we solve for the coefficient vector $\vec{A}$ as outlined in Chapter 26. The axon hillock voltage,
$\Psi$, is thus


\[
\Psi(\tau) \;\approx\; A_0\, e^{-\tau} \;+\; \sum_{n=1}^{Q} A_n \cos(\alpha_n L)\, e^{-(1+\alpha_n^2)\tau}.
\]

We also know that

\[
\frac{\partial \hat{v}_m}{\partial \tau} \;=\; \frac{\partial^2 \hat{v}_m}{\partial \lambda^2} \;-\; \hat{v}_m,
\]

and hence, it is reasonable to approximate the current $\partial \Psi / \partial \tau$ by

\[
\frac{\partial \Psi}{\partial \tau} \;\approx\; -A_0\, e^{-\tau} \;-\; \sum_{n=1}^{Q} (1+\alpha_n^2)\, A_n \cos(\alpha_n L)\, e^{-(1+\alpha_n^2)\tau}.
\]

The axon hillock generated current is thus

\[
\Psi'(\tau) \;\approx\; -A_0\, e^{-\tau} \;-\; \sum_{n=1}^{Q} (1+\alpha_n^2)\, A_n \cos(\alpha_n L)\, e^{-(1+\alpha_n^2)\tau}.
\]

The one-sided value at $\lambda = 0$, $\tau = 0$ is numerically unstable, so we will neglect it when we calculate
the current. We simply set the value $\Psi'(0)$ to $\Psi'(h)$, where $h$ is our numerical time step. We
do this in the following code.

Listing 29.8: Generate An Axon Hillock Voltage Curve

function [tau, VAH, dVAHdt] = AxonHillock(Q, L, rho, t0, tf, h, Vmax, location)
%
% Q        = the number of eigenvalues we want to find
% L        = the length of the dendrite in space constants
% rho      = the ratio of dendrite to soma conductance, G_D/G_S
% t0       = initial time
% tf       = final time
% h        = time increment
% z        = eigenvalue vector
% D        = data vector
% Vmax     = size of the voltage impulse
% location = location of the pulse
% M        = matrix of coefficients for our approximate voltage model
% Input    = the solution at (z,0) to see if the match to the
%            input voltage is reasonable
% AxonHillock = the solution at (0,t)
% V        = the solution at (z,t)
%
% get the eigenvalue vector z
z = FindRoots(Q, L, rho, 1.0, 1.0);
% get the coefficient matrix M
M = FindM(Q, L, rho, z);
% compute the data vector for the impulse
D = FindData(Q, L, rho, z, Vmax, location);
% solve the M B = D system
[Lower, Upper, piv] = GePiv(M);
y = LTriSol(Lower, D(piv));
B = UTriSol(Upper, y);
% check errors
Error = Lower*Upper*B - D(piv);
Diff = M*B - D;
e = norm(Error);
e2 = norm(Diff);
display(sprintf('norm of LU residual = %12.7f norm of MB-D = %12.7f', e, e2));
% set spatial and time bounds
lambda = linspace(0, 5, 301);
N = round((tf - t0)/h);
tau = linspace(t0, tf, N);
VAH = zeros(1, N);
dVAHdt = zeros(1, N);
A = zeros(Q, 1);
for j = 1:Q
  A(j) = B(j+1);
end
% find the voltage at space point 0 and time point tau(t)
for t = 2:N
  w = z*L;
  u = -(1 + z.*z)*tau(t);
  x = (1 + z.*z);
  y = cos(w).*exp(u);
  VAH(t) = B(1)*exp(-tau(t)) + dot(A, cos(w).*exp(u));
  dVAHdt(t) = -B(1)*exp(-tau(t)) - dot(A, x.*y);
end
% the one sided value at tau = 0 is unstable, so copy the next value
dVAHdt(1) = dVAHdt(2);

A typical voltage trace generated by an impulse of size 200 at location 1.0 is seen in Figure 29.6
and the corresponding axon hillock current is shown in Figure 29.7.
As this current flows into the membrane, the membrane begins to depolarize. The current flow
of sodium and potassium through the membrane is voltage dependent and so this increase in the
voltage across the membrane causes changes in the ion gates. We have used our computational
tools to compute the voltage and current plots for this input. Here is a typical MatLab session.
We will start with no external current IE. We will generate our results over the time interval
[0, 15].

% generate the injected current


>> ie = IE(0,15,.01,10.0,10.062,0.0);
% generate the axon hillock voltage
>> [t,VAH] = AxonHillock(80,5,10,0,15.0,.01,200,1.0);
% Calculate the initial conditions with rest.m
>> [g_L,m_NA_infinity,h_NA_infinity,m_K_infinity] = rest(-50,-70,120.0,36.0);
% This gives


Figure 29.6: A Voltage Pulse

Figure 29.7: The Axon Hillock Current


m_NA_infinity =

0.0154

>> h_NA_infinity

h_NA_infinity =

0.8652

>> m_K_infinity

m_K_infinity =

0.1810

>> g_L

g_L =

0.0084
% Find the action potential response
>> [tvals,g_NA,g_K,V,m_NA,h_NA,m_K] = ...
SolveSimpleHH('simpleHH',0,15,...
[-70.0,m_NA_infinity,h_NA_infinity,m_K_infinity],...
.01,4,ie,VAH,EK,ENA,-50.0,g_L,36.0,120.0);
% plot results
>> plot(tvals,V);
% Compute ion currents
>> INA = g_NA.*(V - ENA);
>> IK = g_K.*(V - EK);
% plot currents
>> plot(tvals,INA,tvals,IK);

We show in Figure 29.8 and Figure 29.9 the voltage time trace and the superimposed plots of
sodium and potassium current.

Recall the ion current equations. The nonlinear conductances are modeled by

\[
g_{Na}(V, t) \;=\; g_{Na}^{Max}\, M_{NA}^3(V)\, H_{NA}(V)
\]
\[
g_K(V, t) \;=\; g_K^{Max}\, M_K^4(V)
\]

and the full current equations are

\[
I_{Na} \;=\; g_{Na}(V, t)\,(V(t) - E_{NA})
\]
\[
I_K \;=\; g_K(V, t)\,(V(t) - E_K)
\]
\[
I_L \;=\; g_L\,(V(t) - E_L).
\]


Figure 29.8: Voltage Trace

Figure 29.9: Potassium and Sodium Ion Current for Voltage Impulse Of Size 200 at 1.0


The conductance is in micro Siemens, or units of $10^{-6}$ 1/ohms. Time is in milliseconds ($10^{-3}$
seconds) and voltage is in millivolts ($10^{-3}$ volts). The currents here are thus in units of $10^{-9}$ amps,
or nanoamps. Note the sodium current is negative and the potassium current is positive. To see
how the sodium current activates first and then the potassium current, we can plot the absolute
values of $I_{Na}$ and $I_K$ simultaneously. This is done in Figure 29.10.
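As a quick check on this unit bookkeeping, micro Siemens times millivolts gives $10^{-6} \times 10^{-3} = 10^{-9}$ amps; a two line MatLab confirmation with illustrative values is

g = 2.5;               % conductance in micro Siemens (illustrative value)
I = g*(-70.0 - 55.0);  % times (V - E) in mV gives current in nanoamps: -312.5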

Figure 29.10: Absolute Sodium and Potassium Current During a Pulse

We can study the activation and inactivation terms more closely using some carefully chosen plots.
We show the plot of $M_{NA}$ in Figure 29.11 and $H_{NA}$ in Figure 29.12.
Finally, the potassium activation $M_K$ is shown in Figure 29.13.
Of even more interest are the products that are proportional to the sodium and potassium conductances.
We plot $M_{NA}^3 H_{NA}$ versus time in Figure 29.14 and $M_K^4$'s time trace in Figure 29.15.
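These product plots come straight from the session variables; a minimal sketch, where m_NA, h_NA and m_K are the gating traces returned by SolveSimpleHH above, is

plot(tvals, m_NA.^3 .* h_NA);  % proportional to the sodium conductance
figure;
plot(tvals, m_K.^4);           % proportional to the potassium conductance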
We note that if we apply a voltage pulse at a location further from the axon hillock, say at location
3, we will not generate an action potential!
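This can be checked with the same tools; a sketch, with all other arguments exactly as in the session above, is

% move the impulse from location 1.0 out to location 3.0
[t, VAH3] = AxonHillock(80, 5, 10, 0, 15.0, 0.01, 200, 3.0);

Feeding VAH3 into SolveSimpleHH in place of VAH then produces only a subthreshold response rather than a spike.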

29.6.1 Exercises:

Exercise 29.6.1. Run the Hodgkin - Huxley simulation as has been done in the work above and
write a complete report in your favorite editor which includes the following things. For this study,
the maximum sodium conductance is 120, the maximum potassium conductance is 36 and the
leakage voltage is −50, with temperature 70 degrees Fahrenheit. We will take the external current
$I_M$ to be zero always. Then, find

1. The plots of voltage $V$, $M_{NA}$, $H_{NA}$ and $M_K$ versus time for 20 milliseconds.


Figure 29.11: Sodium Activation $M_{NA}$ During A Pulse

Figure 29.12: Sodium Inactivation $H_{NA}$ During A Pulse


Figure 29.13: Potassium Activation $M_K$ During A Pulse

Figure 29.14: The product $M_{NA}^3 H_{NA}$ During A Pulse


Figure 29.15: The product $M_K^4$ During A Pulse

2. The plots of the time constant and the asymptotic value for $M_{NA}$, $H_{NA}$ and $M_K$ versus
time.

3. The plots of $M_{NA}^3 H_{NA}$ and $M_K^4$ versus time.

4. The plots of sodium and potassium current versus time.

for the following applied voltage pulses on a cable of length 5 with ρ = 10. Use a suitable Q for the
following pulses:

1. Pulse of size 300 at 1.5.

2. Pulse of size 200 at 2.0 and 100 at 1.0.

3. Pulse of size 300 at 4.0 and 200 at 1.0.

Exercise 29.6.2. Run the Hodgkin - Huxley simulation as has been done in the work above and
write a complete report in your favorite editor which includes the following things. For this study,
the maximum sodium conductance is 120, the maximum potassium conductance is 36 and the
leakage voltage is −50, with temperature 70 degrees Fahrenheit. We will still take the external
current $I_M$ to be zero always. Then, find

1. The plots of voltage $V$, $M_{NA}$, $H_{NA}$ and $M_K$ versus time for 20 milliseconds.

2. The plots of the time constant and the asymptotic value for $M_{NA}$, $H_{NA}$ and $M_K$ versus
time.

3. The plots of $M_{NA}^3 H_{NA}$ and $M_K^4$ versus time.


4. The plots of sodium and potassium current versus time.

for the following applied voltage pulses on a cable of length 5 with ρ = 1. Use a suitable Q for the
following pulses:

1. Pulse of size 300 at 1.5.

2. Pulse of size 200 at 2.0 and 100 at 1.0.

3. Pulse of size 300 at 4.0 and 200 at 1.0.

Exercise 29.6.3. Run the Hodgkin - Huxley simulation as has been done in the work above and
write a complete report in your favorite editor which includes the following things. For this study,
the maximum sodium conductance is 120, the maximum potassium conductance is 36 and the
leakage voltage is −50, with temperature 70 degrees Fahrenheit. We will still take the external
current $I_M$ to be zero always. Then, find

1. The plots of voltage $V$, $M_{NA}$, $H_{NA}$ and $M_K$ versus time for 20 milliseconds.

2. The plots of the time constant and the asymptotic value for $M_{NA}$, $H_{NA}$ and $M_K$ versus
time.

3. The plots of $M_{NA}^3 H_{NA}$ and $M_K^4$ versus time.

4. The plots of sodium and potassium current versus time.

for the following applied voltage pulses on a cable of length 5 with ρ = 0.1. Use a suitable Q for the
following pulses:

1. Pulse of size 300 at 1.5.

2. Pulse of size 200 at 2.0 and 100 at 1.0.

3. Pulse of size 300 at 4.0 and 200 at 1.0.

Exercise 29.6.4. Run the Hodgkin - Huxley simulation as has been done in the work above and
write a complete report in your favorite editor which includes the following things. For this study,
the maximum sodium conductance is 120, the maximum potassium conductance is 36 and the
leakage voltage is −50, with temperature 70 degrees Fahrenheit. We will still take the external
current $I_M$ to be zero always. Then, find

1. The plots of voltage $V$, $M_{NA}$, $H_{NA}$ and $M_K$ versus time for 20 milliseconds.

2. The plots of the time constant and the asymptotic value for $M_{NA}$, $H_{NA}$ and $M_K$ versus
time.

3. The plots of $M_{NA}^3 H_{NA}$ and $M_K^4$ versus time.

4. The plots of sodium and potassium current versus time.


for the following applied voltage pulses on a cable of length 1 with ρ = 10.0. Use a suitable Q for
the following pulses:

1. Pulse of size 300 at 1.0.

2. Pulse of size 200 at 1.0 and 100 at 0.6.


Part VII

Brain Structure

Chapter 30
Neural Structure

In this chapter, we will discuss the basic principles of the neural organization that subserves
cognitive function.

30.1 The Basic Model:

Our basic neural model is based on abstractions from neurobiology. The two halves or hemispheres
of the brain are connected by the corpus callosum which is like a cap of tissue that sits on top
of the brain stem. The structures in this area are very old in an evolutionary sense. The outer
surface of the brain is the cortex which is a thin layer organized into columns. There is too
much cortex to fit comfortably inside the human skull, so as the human species evolved and the
amount of cortical tissue expanded, the cortex began to develop folds. Imagine a deep canyon in
the earth’s surface. The walls of the canyon are the cortical tissue and form a gyrus. The
canyon itself is called a sulcus or fissure. There are many such sulci with corresponding gyri. Some
of the gyri are deep enough to touch the corpus callosum. One such fissure is the longitudinal
cerebral fissure, which contains the cingulate gyri at the very bottom of the fissure touching
the corpus callosum.

Consider the simplified model of information processing in the brain that is presented in Figure
30.2. This has been abstracted out of much more detail (Nolte (17) 2002). In (Brodal (3)
1992) and (Diamond et al. (5) 1985), we can trace out the details of the connections between
cortical areas and deeper brain structures near the brain stem and construct a very simplified
version which will be suitable for our modeling purposes. Raw visual input is sent to area 17
of the occipital cortex where it is further processed by the occipital association areas 18 and 19.
Raw auditory input is sent to area 41 of the Parietal cortex. Processing continues in areas 5,
7 (the parietal association areas) and 42. There are also long association nerve fiber bundles


starting in the cingulate gyrus which connect the temporal, parietal and occipital cortex together.
The Temporal - Occipital connections are labeled C2, the superior longitudinal fasciculus; and
B2, the inferior occipitofrontal nerve bundles, respectively. The C2 pathway also connects to the
cerebellum for motor output. The Frontal - Temporal connections labeled A, B and C are the
cingular, uncinate fasciculus and arcuate fasciculus bundles and connect the Frontal association
areas 20, 21, 22 and 37.
Area 37 is an area where inputs from multiple sensory modalities are fused into higher level
constructs. The top boundary of area 17 in the occipital cortex is marked by a fold in the surface
of the brain called the lunate sulcus. This sulcus occurs much higher in a primate such as a
chimpanzee. Effectively, human like brains have been reorganized so that the percentage of cortex
allotted to vision has been reduced. Comparative studies show that the human area 17 is 121%
smaller than it should be if its size was proportionate to other primates. The lost portion of area
17 has been reallocated to area 7 of the parietal cortex. There are special areas in each cortex that
are devoted to secondary processing of primary sensory information and which are not connected
directly to output pathways. These areas are called associative cortex and they are primarily
defined by function, not a special cell structure. In the parietal cortex, the association areas are
5 and 7; in the temporal cortex, areas 20, 21, 22 and 37; and in the frontal, areas 6 and 8. Hence,
human brains have evolved to increase the amount of associative cortex available for what can be
considered symbolic processing needs. Finally, the same nerve bundle A connects the Parietal and
Temporal cortex. In addition to these general pathways, specific connections between the cortex
association areas are shown as bidirectional arrows. The box labeled cingular gyrus is essentially a
simplified model of the processing that is done in the limbic system. Note the bidirectional arrows
connecting the cingulate gyrus to the septal nuclei inside the subcallosal gyrus, the anterior nuclei
inside the thalamus and the amygdala. There is also a two way connection between the anterior
nuclei and the mamillary body inside the hypothalamus. Finally, the cingulate gyrus connects to
the cerebellum for motor output. We show the main cortical areas of the brain in Figure 30.1.

Figure 30.1: The main cortical subdivisions of the brain.

Our ability to process symbolic information is probably due to changes in the human brain that


have occurred over evolutionary time. A plausible outline of the reorganizations in the brain that
have occurred in human evolution has been presented in (Holloway (13, page 96) 1999) which we
have paraphrased somewhat in Table 30.1:

Brain Changes (Reorganization)              Hominid Fossil   Time (mya)   Evidence
Reduction of Area 17;                       A. afarensis     3.5 - 3.0    brain endocast
  increase in posterior parietal cortex
Reorganization of frontal cortex            Homo Habilis     2.0 - 1.8    brain endocast
  (third inferior frontal convolution,
  Broca’s area)
Cerebral asymmetries                        Homo Habilis     2.0 - 1.8    brain endocast
  (left occipital, right frontal petalias)
Refinements in cortical organization        Homo Erectus     1.5 - 0.0    brain endocast
  to a modern human pattern

Table 30.1: Brain Reorganizations

In the table presented, we use some possibly unfamiliar terms: the abbreviation mya, which
refers to millions of years ago; the term petalias, which are asymmetrical projections of the
occipital and frontal cortex; and endocast, which is a cast of the inside of a hominid fossil’s skull.
An endocast must be carefully planned and examined, of course, as there are many opportunities
for wrong interpretations. (Holloway (13, page 77) 1999) believes that there are no new evolu-
tionarily derived structures in the human brain as compared to other animals - nuclear masses
and the fiber systems interconnecting them are the same. The differences are in the quantitative
relationships between and among these nuclei and fiber tracts, and the organization of cerebral
cortex structurally, functionally and in integration. We make these prefatory remarks to motivate
why we think a reasonable model of cognition (and hence, cognitive dysfunction) needs a work-
ing model of cortical processing. Our special symbolic processing abilities appear to be closely
linked to the reorganizations of our cortex that expanded our use of associative cortex. Indeed, in
(Holloway (13) 1999, p. 97), we find a statement of one of the guiding principles in our model
building efforts.

our humanness resides mostly in our brain, endowed with symbolic abilities, which have
permitted the human animal extraordinary degrees of control over its adaptive powers in both
the social and material realms.

As might be expected, clear evidence of our use of symbolic processing does not occur until
humans began to leave archaeological artifacts which were well enough preserved. Currently,
such evidence first appears in painted shells used for adornment found in African sites from
approximately seventy thousand years ago. Previous to those discoveries, the strongest evidence
for abstract and symbolic abilities came in the numerous examples of Paleolithic art from the
Europe and Russia in the form of cave paintings and carved bone about twenty-five thousand


years ago (25 kya). Paleolithic art comprises thousands of images made on bone, antler, ivory
and limestone cave walls using a variety of techniques, styles and artistic conventions. In (Conkey
(4) 1999, p. 291), it is asserted that

If,...,we are to understand the ‘manifestations and evolution of complex human behavior’,
there is no doubt that the study of paleolithic ‘art’ has much to offer.

In the past, researchers assumed that the Paleolithic art samples that appeared so suddenly 25
kya were evidence of the emergence of a newly reorganized human brain that was now capable
of symbolic processing of the kind needed to create art. Conkey argues persuasively that this is
not likely. Indeed, in the table of brain reorganization presented earlier, we noted that the increase
in associative parietal cortex in area 7 occurred approximately 3 mya. Hence, the capability of
symbolic reasoning probably steadily evolved even though the concrete evidence of cave art and
so forth does not occur until really quite recently. However, our point is that the creation of ‘art’
is intimately tied up with the symbolic processing capabilities that must underlie any model of
cognition. Further, we assert that the abstractions inherent in mathematics and optimization are
additional examples of symbolic processing.

In addition, we indicate in simplified form, three major neurotransmitter pathways. In the brain
stem, we focus on the groups of neurons called the raphe and the locus coeruleus as well as dopamine
producing cells which are labeled by their location, the substantia nigra in the brain stem. These
cell groups produce neurotransmitters of specific types and send their outputs to a collection
of neural tissues that surround the thalamus called the basal ganglia. The basal ganglia are not
shown in Figure 30.2 but you should imagine them as another box surrounding the thalamus. The
basal ganglia then send outputs to portions of the cerebral cortex; the cerebral cortex in turn
sends connections back to the basal ganglia. These connections are not shown explicitly; instead,
for simplicity of presentation, we use the thalamus to cingulate gyrus to associative connections
that are shown. The raphe nuclei produce serotonin, the locus coeruleus produces norepinephrine
and the substantia nigra (and other cells in the brain stem) produce dopamine. There are many
other neurotransmitters, but the model we are presenting here is a deliberate choice to focus on
a few basic neurotransmitter pathways.

The limbic processing presented in Figure 30.2 is shown in more detail in Figure 30.4. Cross
sections of the brain in a plane perpendicular to a line through the eyes and back of head show
that the cingulate gyrus sits on top of the corpus callosum. Underneath the corpus callosum is
a sheath of connecting nerve tissue known as the fornix which is instrumental in communication
between these layers and the structures that lie below. The arrows in Figure 30.4 indicate struc-
tural connections only and should not be used to infer information transfer. A typical model of
biological information processing that can be abstracted from what we know about brain func-
tion is thus shown in the simplified brain model of Figure 30.2. This shows a chain of neural
modules which subserve cognition. It is primarily meant to illustrate the hierarchical complexity


Figure 30.2: This diagram shows a simplified path of information processing in the brain. Arrows
indicate information processing pathways.

of the brain structures we need to become familiar with. There is always cross-talk and feedback
between modules which is not shown.
Some of the principal components of the information processing system of the brain are given
in Table 30.2. The central nervous system (CNS) is divided roughly into two parts; the brain
and the spinal cord. The brain can be subdivided into a number of discernible modules. For
our purposes, we will consider the brain to consist of the cerebrum, the cerebellum and the brain
stem. Finer subdivisions are then shown in Table 30.2 where some structures are labeled with
a corresponding number for later reference in other figures. The numbers we use here bear no
relation to the numbering scheme that is used for the cortical subdivisions shown in Figure 30.1
and Figure 30.2. The numbering scheme there has been set historically. The numbering scheme
shown in Table 30.2 will help us locate brain structures deep in the brain that can only be seen
by taking slices. These numbers thus correspond to the brain structures shown in Figure 30.3
(modules that can be seen on the surface) and the brain slices of Figure 30.5(a), Figure 30.5(b),
Figure 30.6(a) and Figure 30.6(b).

A useful model of the processing necessary to combine disparate sensory information into higher


Brain → Cerebrum, Cerebellum (1), Brain Stem
Cerebrum → Cerebral Hemisphere, Diencephalon
Brain Stem → Medulla (4), Pons (3), Midbrain (2)
Cerebral Hemisphere → Cerebral Cortex, Basal Ganglia, Amygdala (6), Hippocampus (5)
Diencephalon → Hypothalamus (8), Thalamus (7)
Cerebral Cortex → Limbic (13), Temporal (12), Occipital (11), Parietal (10), Frontal (9)
Basal Ganglia → Lenticular Nucleus (15), Caudate Nucleus (14)
Lenticular Nucleus → Globus Pallidus (16), Putamen (15)

Table 30.2: Information Processing Components

Figure 30.3: Brain Structures: The numeric labels correspond to structures listed in Table 30.2

level concepts is clearly built on models of cortical processing. There is much evidence that
cortex is initially uniform prior to exposure to environmental signal and hence a good model of
such generic cortical tissue, isocortex, is needed. A model of isocortex is motivated by recent
models of cortical processing outlined in (Raizada and Grossberg (19) 2003). This article uses
clues from visual processing to gain insight into how virgin cortical tissue (isocortex) is wired to
allow for its shaping via environmental input. Clues and theoretical models for auditory cortex
can then be found in the survey paper of (Merzenich (16) 2001).
For our purposes, we will use the terms auditory cortex for area 41 of the Parietal cortex
and visual cortex for area 17 of the occipital cortex. These are the areas which receive primary
sensory input with further processing occurring first in cortex where the input is received and
then in the temporal cortex in areas 20, 21, 22 and 37 as shown in Figure 30.2. The first layer of
auditory cortex is bathed in an environment where sound is chunked or batched into pieces of 200
ms in length, which is the approximate size of the phonemes of a person’s native language. Hence,
the first layer of cortex develops circuitry specialized to this time constant. The second layer of


cortex then naturally develops a chunk size focus that is substantially larger, perhaps on the order
of 1000 ms to 10000 ms. Merzenich details how errors in the imprinting of these cortical layers
can lead to cognitive impairments such as dyslexia. As processing is further removed from the
auditory cortex via myelinated pathways, additional meta level concepts (tied to even longer time
constants) are developed. While it is clear that a form of Hebbian learning is used to set up these
circuits, the pitfalls of such learning are discussed clearly in the literature (McClelland (15) 2001).
For example, the inability of adult speakers of Japanese to distinguish the sound of an l and
an r is indicative of a bias of Hebbian learning that makes it difficult to learn new strategies by
unlearning previous paradigms. For this reason, we will not use strict Hebbian learning protocols;
instead, we will model auditory and visual cortex with three layers using modified Grossberg
processing with Hebbian learning as discussed in Chapter 31. Our third layer of cortex is then
an abstraction of the additional anatomical layers of cortex as well as appropriate myelinated
pathways which conduct upper layer processing results to other cognitive modules.

Figure 30.4: This diagram shows the major structures of the limbic system in the brain. Arrows
indicate structural connections only.


30.2 Brain Structure:


The connections from the limbic core to the cortex are not visible from the outside. The outer
layer of the cortex is deeply folded and wraps around an inner core that consists of the limbic lobe
and corpus callosum. Neural pathways always connect these structures. In Figure 30.5(a), we
can see the inner structures known as the amygdala and putamen in a brain cross-section. Figure
30.5(a) shows the orientation of the brain cross-section while Figure 30.5(b) displays a cartoon of
the slice itself indicating various structures.

(a) Slice 1 Orientation (b) Neural Slice 1 Cartoon

Figure 30.5: Brain Slice 1 Details

The thalamus is located in a portion of the brain which can be seen using the cross-section
indicated by Figure 30.6(a). The structures the slice contains are shown in Figure 30.6(b).

30.3 The Brain Stem:


Cortical columns interact with other parts of the brain in sophisticated ways. The patterns of
cortical activity (modeled by the Folded Feedback Pathway (FFP) and On-Center Off-Surround
(OCOS) circuits as discussed in Section 31.1.1) are modulated by neurotransmitter inputs that
originate in the reticular formation or RF of the brain. The outer cortex is wrapped around an
inner core which contains, among other structures, the midbrain. The midbrain is one of the
most important information processing modules. Consider Figure 30.7 which shows the midbrain
in cross section. A number of integrative functions are organized at the brainstem level. These
include complex motor patterns, respiratory and cardiovascular activity and some aspects of
consciousness.


(a) Slice 2 Orientation (b) Neural Slice 2 Cartoon

Figure 30.6: Brain Slice 2 Details

Figure 30.7: The Brainstem Structure


The brain stem has historically been subdivided into functional groups starting at the spinal cord
and moving upward toward the middle of the brain. The groups are, in order, the caudal and
rostral medulla, the caudal, mid and rostral pons and the caudal and rostral midbrain.
We will refer to these as the brainstem layers 1 to 7. In this context, the terms caudal and rostral
refer to whether a slice is closest to the spinal cord or not. Hence, the pons is divided into the
rostral pons (farthest from the spinal cord) and the caudal pons (closest to the spinal cord). The
seven brain stem layers can be seen by taking cross-sections through the brain as indicated by
the numbers 1 to 7 shown in Figure 30.7. Each slice of the midbrain has specific functionality and
shows interesting structure which is not visible other than in cross section. The layers are shown
in Figure 30.8.

Figure 30.8: The Brainstem Layers

Slice 1 is the caudal medulla and is shown in Figure 30.9(a). The Medial Longitudinal Fasciculus
(MLF) controls head and eye movement. The Medial Lemniscus is not shown but is right below
the MLF.

Slice 2 is the rostral medulla. The Fourth Ventricle is not shown in Figure 30.9(b) but would be
right at the top of the figure. The Medial Lemniscus is still not shown but is located below the
MLF like before. In slice 2, a lot of the inferior olivary nucleus can be seen. The caudal pons is
shown in Slice 3, Figure 30.10(a), and the mid pons in Slice 4, Figure 30.10(b).


(a) The Caudal Medulla: Slice 1 (b) The Rostral Medulla: Slice 2

Figure 30.9: The Medulla Cross-sections

(a) The Caudal Pons: Slice 3 (b) The Mid Pons: Slice 4

(c) The Rostral Pons: Slice 5

Figure 30.10: The Pons Cross-sections


The rostral pons and the caudal and rostral midbrain are shown in Figure 30.10(c), Figure 30.11(a)
and Figure 30.11(b), respectively. Slices 4 and 5 contain pontine nuclei and Slice 7 contains the
cerebral peduncle, a massive nerve bundle containing cortico-pontine and cortico-spinal fibers.
Fibers originating in the frontal, parietal, and temporal cortex descend to the pontine nuclei. The
pontine nuclei also connect to the cerebral peduncles.

(a) The Caudal Midbrain: Slice 6 (b) The Rostral Midbrain: Slice 7

Figure 30.11: The Rostral Midbrain Cross-sections

The Reticular Formation or RF is the central core of the brain stem which contains neurons
whose connectivity is characterized by huge fan-in and fan-out. Reticular neurons therefore have
extensive and complex axonal projections. Note we use the abbreviation PAG to denote the cells
known as periaqueductal gray and NG for the Nucleus Gracilis neurons. The extensive nature
of the connections for an RF neuron is shown in Figure 30.12. The indicated neuron in
Figure 30.12 sends its axon to many CNS areas. If one cell has projections this extensive, we can
imagine the complexity of reticular formation as a whole. Its neurons have ascending projections
that terminate in the thalamus, subthalamus, hypothalamus, cerebral cortex and the basal ganglia
(caudate nucleus, putamen, globus pallidus, substantia nigra).

The midbrain and rostral pons RF neurons thus collect sensory modalities and project this infor-
mation to the intralaminar nuclei of the thalamus. The intralaminar nuclei project to widespread areas
of cortex, causing heightened arousal in response to sensory stimuli; e.g., attention. Our corti-
cal column models must therefore be able to accept modulatory inputs from the RF formation.
It is also known that some RF neurons release monoamines which are essential for maintenance
of consciousness. For example, bilateral damage to the midbrain RF and the fibers through it
causes prolonged coma. Hence, even a normal intact cerebrum cannot maintain consciousness.
It is clear that input from brainstem RF is needed. The monoamine releasing RF neurons release
norepinephrine, dopamine and serotonin. The noradrenergic neurons are located in the pons and


Figure 30.12: Typical Neuron in the Reticular Formation

medulla (locus coeruleus). This is Slice 5 of the midbrain and it connects to the cerebral cortex. Figure
30.13(b) shows how thoroughly noradrenergic neurons innervate the brain.

(a) Location (b) The Innervation Pathways

Figure 30.13: The Norepinephrine Releasing Neurons

This system is inactive in sleep and most active in startling situations, or those calling for
watchfulness. Dopaminergic neurons in slice 7 of the midbrain are located in the Substantia Nigra
(SN). Figure 30.14(b) shows the brain areas that are influenced by dopamine. Figure 30.14(a) also
shows the SC (superior colliculus), PAG (periaqueductal gray) and RN (red nucleus) structures.

These neurons project in overlapping fiber tracts to other parts of the brain. The nigrostriatal
bundle sends information from the substantia nigra to the caudate, putamen and midbrain. The
medial forebrain bundle projects from the substantia nigra to the frontal and limbic lobes. The
indicated projections to the motor cortex are consistent with initiation of movement. We know
that there is disruption to cortex function due to the ablation of dopamine neurons in Parkinson’s disease.
The projections to other frontal cortical areas and limbic structures imply there is a motivation


(a) Location (b) The Innervation Pathways

Figure 30.14: The Dopamine Releasing Neurons

and cognition role. Hence, imbalances in these pathways will play a role in mental dysfunction.
Furthermore, certain drugs cause dopamine release in limbic structures which implies a pleasure
connection. Serotonergic neurons occur in most levels of the brain stem, but concentrate in the raphe
nuclei in slice 5 of the midbrain, also near the locus coeruleus (Figure 30.15(a)). Their axons innervate
many areas of the brain as shown in Figure 30.15(b).

(a) Location (b) The Innervation Pathways

Figure 30.15: The Serotonin Releasing Neurons

It is known that serotonin levels determine the set point of arousal and influence the pain control
system.

Chapter 31
Cortical Structure

The cortex consists of the frontal, occipital, parietal, temporal and limbic lobes. The cortex is folded
as shown in Figure 31.1(a) and consists of a number of regions which have historically been
classified as illustrated in Figure 31.1(b).
The cortical layer is thus subdivided into several functional areas known as the Frontal, Oc-
cipital, Parietal, Temporal and Limbic lobes. Most of what we know about the function of these
lobes comes from the analysis of the diminished abilities of people who have unfortunately had
strokes or injuries. By studying these brave people very carefully we can discern what functions
they have lost and correlate these losses with the areas of their brain that have been damaged. This
correlation is generally obtained using a variety of brain imaging techniques such as Computer
Assisted Tomography (CAT) and Functional Magnetic Resonance Imaging (fMRI) scans.
The Frontal lobe has a number of functionally distinct areas. It contains the primary motor
cortex which is involved in the initiation of voluntary movement. Also, it has a specialized area known
as Broca’s area which is important in both written and spoken language ability. Finally, it has the
prefrontal cortex which is instrumental in the maintenance of our personality and is involved in the
critical abilities of insight and foresight. The Occipital lobe is concerned with visual processing
and visual association. In the Parietal lobe, primary somato-sensory information is processed in

(a) Cortical Folding (b) Cortical Lobes

Figure 31.1: The Outer Appearance of the Cortex


Figure 31.2: The Limbic Lobe is Interior to the Cortex

the area known as the primary somato-sensory cortex. There is also initial cortical processing of
tactile and proprioceptive input. In addition, there are areas devoted to language comprehension
and complex spatial orientation and perception. The Temporal lobe contains auditory processing
circuitry and develops higher order auditory associations as well. For example, the temporal lobe
contains Wernicke’s area which is involved with language comprehension. It also handles higher
order visual processing and learning and memory. Finally, the Limbic system lies beneath the
cortex as is shown in Figure 31.2 and is involved in emotional modulation of cortical processing.

31.1 Cortical Processing:

In order to build a useful model of cognition, we must be able to model interactions of cortical
modules with the limbic system. Further, these interactions must be modulated by a number of
monoamine neurotransmitters. It also follows that we need a reasonable model of cortical process-
ing. There is much evidence that cortex is initially uniform prior to exposure to environmental
signal. Hence, a good model of generic cortical tissue, called isocortex, is needed. A model of
isocortex is motivated by recent models of cortical processing outlined in (Raizada and Grossberg
(19) 2003). This article uses clues from visual processing to gain insight into how virgin cortical
tissue (isocortex) is wired to allow for its shaping via environmental input. Clues and theoretical
models for auditory cortex can then be found in the survey paper of (Merzenich (16) 2001). We
begin with a general view of a typical cortical column taken from the standard references of (Bro-
dal (3) 1992), (Diamond et al. (5) 1985) and (Nolte (17) 2002) as shown in Figure 31.3(a). This
column is oriented vertically with layer one closest to the skull and layer six furthest in. We show
layer four having a connection to primary sensory data. The details of some of the connections
between the layers are shown in Figure 31.3(b). The six layers of the cortical column consist of specific
cell types and mixtures described in Table 31.1.

We can make some general observations about the cortical architecture. First, layers three and
five contain pyramidal cells which collect information from layers above themselves and send their
processed output for higher level processing. Layer three outputs to motor areas and layer five, to


Layer   Description                 Use

One     Molecular
Two     External Granule Layer
Three   External Pyramidal Layer    Output to other cortex areas
Four    Internal Granule Layer      Collects primary sensory input or input from other brain areas
Five    Internal Pyramidal Layer    Output to motor cortex
Six     Multiform Layer             Output to thalamus and other brain areas

Table 31.1: Cortical Column Cell Types

the motor cortex. Layer six contains cells whose output is sent to the thalamus
or other brain areas. Layer four is a collection layer which collates input from primary sensory
modalities or from other cortical and brain areas. We see illustrations of the general cortical
column structure in Figure 31.3(a) and Figure 31.3(b).

(a) Generic Overview (b) Anatomical Structure

Figure 31.3: The Structure of a cortical column

The cortical columns are organized into larger vertical structures following a simple stacked pro-
tocol: sensory data → cortical column 1 → cortical column 2 → cortical column 3 and so forth.
For convenience, our models will be shown with three stacked columns. The output from the last
column is then sent to other cortex, thalamus and the brain stem.


31.1.1 Isocortex Modeling:

A useful model of generic cortex, isocortex, is that given in (Grossberg (8) 2003), (Grossberg and
Seitz (9) 2003) and (Raizada and Grossberg (19) 2003). Two fundamental cortical circuits are
introduced in these works: the on - center, off - surround (OCOS) and the folded feedback pathway
(FFP) seen in Figure 31.4(a) and Figure 31.4(b). In Figure 31.4(a), we see the On - Center, Off
- Surround control structure that is part of the cortical column control circuitry. Outputs from
the thalamus (perhaps from the nuclei of the Lateral Geniculate Body) filter upward into the
column at the bottom of the picture. At the top of the figure, the three circles that are not filled
in represent neurons in layer four whose outputs will be sent to other parts of the column. There
are two thalamic output lines: the first is a direct connection to the input layer four, while the
second is an indirect connection to layer six itself. This connection then connects to a layer of
inhibitory neurons which are shown as circles filled in with black. The middle layer four output
neuron is thus innervated by both inhibitory and excitatory inputs while the left and right layer
four output neurons only receive inhibitory impulses. Hence, the center is excited and the part of
the circuit that is off the center, is inhibited. We could say the surround is off. It is common to
call this type of activation the off surround.
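To make the OCOS wiring concrete, here is a minimal numerical sketch; this is our own illustration with arbitrary weights, not code from the cited papers. A single active thalamic line excites the center layer four neuron directly, while the indirect layer six route drives the inhibitory interneurons that contact all three layer four neurons.

% minimal on-center, off-surround sketch; the weights are illustrative
drive = 1.0;                 % activity on the single thalamic input line
wE    = 1.0;                 % direct excitatory weight onto the center neuron
wI    = 0.4;                 % inhibitory weight via the layer six route
exc   = [0, wE*drive, 0];    % only the center layer four neuron is excited
inh   = wI*drive*ones(1,3);  % the inhibition reaches all three neurons
y     = max(exc - inh, 0)    % rectified responses: [0, 0.6, 0]

The center unit responds while its neighbors are held at zero, which is exactly the on-center, off-surround signature described above.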

Next consider a stacked cortical column consisting of two columns, column one and column two.
There are cortico-cortical feedback axons originating in layer six of column two which input into
layer one of column one. From layer one, the input connects to the dendrites of layer five pyramidal
neurons which connect to the thalamic neuron in layer six. Hence, the higher level cortical input
is fed back into the previous column layer six and then can excite column one’s fourth layer via
the on - center, off -surround circuit discussed previously. This description is summarized in
Figure 31.4(b). We call this type of feedback a folded feedback pathway. For convenience, we use
the abbreviations OCOS and FFP to indicate the on - center, off - surround and folded feedback
pathway, respectively.

The layer six - four OCOS is connected to the layer two - three circuit as shown in Figure 31.5(a).
Note that the layer four output is forwarded to layer two - three and then sent back to layer six
so as to contribute to the standard OCOS layer six - layer four circuit. Hence, we can describe
this as another FFP circuit. Finally, the output from layer six is forwarded into the thalamic
pathways using a standard OCOS circuit. This provides a way for layer six neurons to modulate
the thalamic outputs which influence the cortex. This is shown in Figure 31.5(b).

It is known that cortical outputs dynamically assemble into spatially and temporally localized
phase locked structures. A review of such functional connectivity is in (Fingelkurts et al. (7) 2005).
We have abstracted a typical snapshot of the synchronous activity of participating neurons from
Fingelkurts’ discussions which is shown in Figure 31.6.

(a) The On - Center, Off - Surround Control Structure

(b) The Folded Feedback Pathway Control Structure

Figure 31.4: Cortical Control Circuits


(a) The Layer Six - Four Connections to Layer Two - Three are another
FFP

(b) The Layer Six to Thalamic OCOS

Figure 31.5: Higher Level Cortical Control Circuits

Figure 31.6: Synchronous Cortical Activity

Each burst of this type of synchronous activity in the cortex is measured via skull-cap EEG
equipment and Fingelkurts presents a reasoned discussion of why such coordinated ensembles
are high level representations of activity. A cognitive model is thus inherently multi-scale in
nature. The cortex uses clusters of synchronous activity as shown in Figure 31.6, acting on a
sub-second to second time frame, to successively transform raw data representations to higher
level representations. Also, cortical transformations from different sensory
modalities are combined to fuse representations into new representations at the level of cortical
modules. These higher level representations both arise and interact on much longer time scales.
Further, monoamine inputs from the brain stem modulate the cortical modules as shown in Figure
30.13(b), Figure 30.14(b) and Figure 30.15(b). Additionally, reticular formation neurons mod-
ulate limbic and cortical modules. It is therefore clear that cognitive models require abstract


neurons whose output can be shaped by many modulatory inputs. To do this, we need a theoret-
ical model of the input/output (I/O) process of a single excitable cell that is as simple as possible
yet still computes the salient characteristics of our proposed system.


Part VIII

References

References

[1] M. Arbib, P. Erdi, and P. Szentagothal. Neural Organization: Structure, Function and
Dynamics. A Bradford Book. MIT Press, 1998.

[2] M. Braun. Differential Equations and Their Applications. Springer-Verlag, 1978.

[3] P. Brodal. The Central Nervous System: Structure and Function. Oxford University Press,
New York, 1992.

[4] M. Conkey. ”A History of the Interpretation of European ‘paleolithic art’: magic, mythogram,
and metaphors for modernity”. In A. Lock and C. Peters, editors, Handbook of Human
Symbolic Evolution. Blackwell Publishers, Massachusetts, 1999.

[5] M. Diamond, A. Scheibel, and L. Elson. The Human Brain Coloring Book. Barnes and Noble
Books, New York, 1985.

[6] E. Sanchez. ”Towards Evolvable Hardware: The Evolutionary Engineering Approach”. In
E. Sanchez and M. Tomassini, editors, Proceedings of the First International Workshop,
Lausanne, Switzerland, October 1995. Springer-Verlag, 1996. Volume 1259.

[7] Andrew Fingelkurts, Alexander Fingelkurts, and S. Kähkönen. ”Functional connectivity in


the brain – is it an elusive concept?”. Neuroscience and BioBehavioral Reviews, 28:827 –
836, 2005.

[8] S. Grossberg. ”How Does The Cerebral Cortex Work? Development, Learning, Attention and
3D Vision by Laminar Circuits of Visual Cortex”. Technical Report TR-2003-005, Boston
University, CAS/CS, 2003.

[9] S. Grossberg and A. Seitz. ”Laminar Development of Receptive Fields, Maps, and Columns
in Visual Cortex: The Coordinating Role of the Subplate”. Technical Report 02-006, Boston
University, CAS/CS, 2003.

[10] Z. Hall. An Introduction to Molecular Neurobiology. Sinauer Associates Inc., Sunderland,


MA, 1992.


[11] T. Higuchi, M. Iwata, and W. Liu, editors. ”Evolvable Systems: From Biology to Hardware”,
Tsukuba, Japan, 1997. Proceedings of the First International Conference, October 1996.
Lecture Notes in Computer Science Volume 1259.

[12] B. Hille. Ionic Channels of Excitable Membranes. Sinauer Associates Inc., 1992.

[13] R. Holloway. ”Evolution of the Human Brain”. In A. Lock and C. Peters, editors, Handbook
of Human Symbolic Evolution, pages 74 – 116. Blackwell Publishers, Massachusetts, 1999.

[14] D. Johnston and S. Miao-Sin Wu. Foundations of Cellular Neurophysiology. MIT Press,
1995.

[15] J. McClelland. ”Failures to Learn and Their Remediation”. In J. McClelland and R. Siegler,
editors, Mechanisms of Cognitive Development: Behavioral and Neural Perspectives, pages
97 – 122. Lawrence Erlbaum Associates, Publishers, 2001.

[16] M. Merzenich. ”Cortical Plasticity Contributing to Child Development”. In J. McClelland


and R. Siegler, editors, Mechanisms of Cognitive Development: Behavioral and Neural Per-
spectives, pages 67 – 96. Lawrence Erlbaum Associates, Publishers, 2001.

[17] J. Nolte. The Human Brain: An Introduction to Its Functional Anatomy. Mosby, A Division
of Elsevier Science, 2002.

[18] J. Peterson. Calculus For Biologists: A Beginning – Getting Ready For Models
and Analyzing Models. Gneural Gnome Press: www.lulu.com/GneuralGnome, 2008.

[19] R. Raizada and S. Grossberg. ”Towards a Theory of the Laminar Architecture of Cerebral
Cortex: Computational Clues from the Visual System”. Cerebral Cortex, pages 100 – 113,
2003.

[20] W. Rall. ”Core Conductor Theory and Cable Properties of Neurons”, chapter 3, pages 39 –
67. Unknown, 1977.

[21] M. Sipper. Evolution of Parallel Cellular Machines: The Cellular Programming Approach.
Number 1194 in Lecture Notes in Computer Science. Springer-Verlag, 1997.

[22] A. Toynbee and J. Caplan. A Study of History: A New Edition. Barnes and Noble Books,
1995. revised and abridged, Based on the Oxford University Press Edition 1972.

[23] B. Tuchman. Practicing History: Selected Essays. Ballantine Books, 1982.

[24] T. Weiss. Cellular Biophysics: Volume 1, Transport. MIT Press, 1996.

[25] T. Weiss. Cellular Biophysics: Volume 2, Electrical Properties. MIT Press, 1996.

[26] T. Weiss. Cellular Biophysics: Volume 2, Electrical Properties, chapter ”The Hodgkin -
Huxley Model”, pages 163 – 292. MIT Press, 1996.

Part IX

Code Examples

Code Examples

8.1 Euler Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 144


8.2 System Dynamics Function . . . . . . . . . . . . . . . . . . . 144
8.3 True Solution Function . . . . . . . . . . . . . . . . . . . . . . 145
8.4 Generating Tabular Results . . . . . . . . . . . . . . . . . . . 145
8.5 The Euler Method Test Script . . . . . . . . . . . . . . . . . . 145
8.6 The Euler Method Plot Script . . . . . . . . . . . . . . . . . . 146
8.7 The Euler MatLab Session . . . . . . . . . . . . . . . . . . . . 146
8.8 The Runge - Kutta Flowchart . . . . . . . . . . . . . . . . . . 153
8.9 Runge - Kutta Codes . . . . . . . . . . . . . . . . . . . . . . . 155
8.10 The Runge - Kutta Solution . . . . . . . . . . . . . . . . . . . 156
8.11 Generating Tabular Data For The Runge - Kutta Method . . 157
8.12 RK MatLab Session . . . . . . . . . . . . . . . . . . . . . . . 157
8.13 The Generating Plots Script . . . . . . . . . . . . . . . . . . . 158
8.14 The Runge - Kutta MatLab Session . . . . . . . . . . . . . . 158
8.15 System Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . 160
8.16 True Vector Solution . . . . . . . . . . . . . . . . . . . . . . . 161
8.17 Old ShowFixedRK Script . . . . . . . . . . . . . . . . . . . . 162
8.18 System Solution Matlab Session . . . . . . . . . . . . . . . . . 162
8.19 Original Plotting Script . . . . . . . . . . . . . . . . . . . . . 163
8.20 System Plotting Code . . . . . . . . . . . . . . . . . . . . . . 163
8.21 System MatLab Session: First Initial Condition . . . . . . . . 164
8.22 System Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . 167
8.23 Old ShowFixedRK Script . . . . . . . . . . . . . . . . . . . . 167
8.24 ShowSystemFixedRK Script . . . . . . . . . . . . . . . . . . . 168
8.25 Original Plotting Script . . . . . . . . . . . . . . . . . . . . . 170
8.26 System Plotting Code . . . . . . . . . . . . . . . . . . . . . . 171
8.27 Predator Prey System With Self Interaction On Both
Species . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176


8.28 Predator Prey System With Self Interaction On Both
Species . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
13.1 Computing The Nernst Voltage . . . . . . . . . . . . . . . . . 244
14.1 Computing The Membrane Voltage . . . . . . . . . . . . . . . 255
20.1 Implementing The Signum Function . . . . . . . . . . . . . . 341
20.2 Implementing The Unit Step Function . . . . . . . . . . . . . 342
20.3 The Normalized Inner Current . . . . . . . . . . . . . . . . . 342
20.4 A Naive Step Function Implementation . . . . . . . . . . . . 343
20.5 Plotting The Naive Step Function . . . . . . . . . . . . . . . 343
20.6 Plotting Cable Currents . . . . . . . . . . . . . . . . . . . . . 344
23.1 Lower Triangular Solver . . . . . . . . . . . . . . . . . . . . . 353
23.2 Lower Triangular Solver Matlab Session . . . . . . . . . . . . 354
23.3 Upper Triangular Solver . . . . . . . . . . . . . . . . . . . . . 355
23.4 Upper Triangular Solver Matlab Session . . . . . . . . . . . . 355
23.5 LU Decomposition of A Without Pivoting . . . . . . . . . . . 356
23.6 LU Decomposition No Pivoting MatLab Session . . . . . . . . 356
23.7 LU Decomposition of A With Pivoting . . . . . . . . . . . . . 358
23.8 LU Decomposition of A With Pivoting MatLab Session . . . 358
24.1 Bisection Code . . . . . . . . . . . . . . . . . . . . . . . . . . 361
24.2 Function Definition In MatLab . . . . . . . . . . . . . . . . . 362
24.3 Bisection MatLab Session . . . . . . . . . . . . . . . . . . . . 363
24.4 Should We Do A Newton Step? . . . . . . . . . . . . . . . . . 365
24.5 Global Newton Method . . . . . . . . . . . . . . . . . . . . . 365
24.6 Global Newton Function . . . . . . . . . . . . . . . . . . . . . 366
24.7 Global Newton Function Derivative . . . . . . . . . . . . . . 367
24.8 Global Newton MatLab Session . . . . . . . . . . . . . . . . . 367
24.9 Finite Difference Global Newton Method . . . . . . . . . . . . 368
24.10 Finite Difference Newton Method MatLab Session . . . . . 370
25.1 Computing The Infinite Cable Voltage . . . . . . . . . . . . . 385
25.2 Computing The Finite Cable Voltage . . . . . . . . . . . . . . 386
25.3 A Finite Membrane Voltage MatLab Session . . . . . . . . . . 386
26.1 Ball and Stick Eigenvalues . . . . . . . . . . . . . . . . . . . . 426
26.2 Original Newton Finite Difference Method . . . . . . . . . . . 427
26.3 Modified Finite Difference Newton Method . . . . . . . . . . 428
26.4 Finding Ball and Stick Eigenvalues . . . . . . . . . . . . . . . 429
26.5 Finding the Ball and Stick Coefficient Matrix . . . . . . . . . 431
26.6 Finding Q . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
26.7 Finding the Ball and Stick Data Vector . . . . . . . . . . . . 434
26.8 Finding The Approximate Solution . . . . . . . . . . . . . . . 434
26.9 Finding the Axon Hillock Voltage . . . . . . . . . . . . . . . . 435


26.10 Using more terms in the approximation . . . . . . . . . . . 436
26.11 Many Choices For Q! . . . . . . . . . . . . . . . . . . . . . . 438
26.12 Getting The Axon Hillock Voltage . . . . . . . . . . . . . . 442
29.1 Battery Voltage Calculation . . . . . . . . . . . . . . . . . . . 490
29.2 Managing The MatLab Hodgkin - Huxley Simulation . . . . . 491
29.3 Initializing The Simulation: rest.m . . . . . . . . . . . . . . . 493
29.4 The MatLab Hodgkin - Huxley Dynamics: simpleHH.m . . . 495
29.5 The MatLab Fixed Runge - Kutta Method . . . . . . . . . . 498
29.6 Adding Injection Current To The Runge - Kutta Code . . . 498
29.7 Injected Current . . . . . . . . . . . . . . . . . . . . . . . . . 499
29.8 Generate An Axon Hillock Voltage Curve . . . . . . . . . . . 501

Part X

Detailed Indices

Index

abstraction
    SWH (Software, Wetware and Hardware) Triangle, 3

Cable Equation
    Time Independent
        Limiting Current Densities, 317
Cable Model
    Notational Assumptions, 308
cognition
    computation
        top-down, 6
Cognitive
    Biological Information Processing, 3

Definition
    Eigenvalues and Eigenvectors of a 2 by 2 Matrix, 35
    Eigenvalues and Eigenvectors of an $n$ by $n$ Matrix, 41
    The Angle Between $n$ Dimensional Vectors, 32
    The Inner Product Of Two Vectors, 28

Linear ODE Systems
    $r$ can not satisfy $\det(rI - A) \ne 0$, 48
    $r$ satisfying $\det(rI - A) = 0$ are called eigenvalues of the system, 48
    $x' = 0$ lines for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 65
    $y' = 0$ lines for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 65
    Assume solution has form $\vec{V} e^{rt}$, 45
    Can trajectories cross for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 70
    Characteristic Equation, 49
    Combining the $x' = 0$ and $y' = 0$ lines for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 65
    Derivation of the characteristic equation, 45
    Dominant solution to $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -1 x(t) + 2 y(t)$, $x(0) = 2$, $y(0) = -4$, 58
    Drawing, 73
    eigenvalues for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -1 x(t) + 2 y(t)$, $x(0) = 2$, $y(0) = -4$, 54
    eigenvectors for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -1 x(t) + 2 y(t)$, $x(0) = 2$, $y(0) = -4$, 56
    Factored form matrix system $r$ and $\vec{V}$ must solve, 47
    Formal rules for drawing trajectories for $x'(t) = a x(t) + b y(t)$, $y'(t) = c x(t) + d y(t)$, $x(0) = x_0$, $y(0) = y_0$, 72
    General form, 43
    General graphical analysis for a linear ODE systems problem, 78
    General solution, 48
    General solution for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -1 x(t) + 2 y(t)$, $x(0) = 2$, $y(0) = -4$, 57
    Graphical analysis for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 65
    Graphing all trajectories for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 75
    Graphing Region I trajectories for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 67
    Graphing Region II trajectories for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 73
    Graphing Region III trajectories for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 73
    Graphing Region IV trajectories for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 73
    Graphing the eigenvector lines for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 65
    magnified Region II trajectories for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -x(t) + 2 y(t)$, $x(0) = x_0$, $y(0) = y_0$, 73
    Matrix - vector form for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -1 x(t) + 2 y(t)$, $x(0) = 2$, $y(0) = -4$, 54
    Matrix system $r$ and $\vec{V}$ must solve, 47
    Matrix system solution $\vec{V} e^{rt}$ must solve, 46
    Nonzero $\vec{V}$ satisfying $(rI - A) \vec{V} = \vec{0}$ are called eigenvectors corresponding to the eigenvalue $r$ of the system, 48
    nullclines, 65
    Rewriting in matrix - vector form, 44
    Solution to IVP for $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -1 x(t) + 2 y(t)$, $x(0) = 2$, $y(0) = -4$, 57
    The derivative of $\vec{V} e^{rt}$, 46
    The Eigenvalues Of A Linear System Of ODEs, 49
    The General Solution Of A Linear System Of ODEs, 50
    Two negative eigenvalues case, 77
    Vector form of the solution to $x'(t) = -3 x(t) + 4 y(t)$, $y'(t) = -1 x(t) + 2 y(t)$, $x(0) = 2$, $y(0) = -4$, 57

Matrix
    $A_{ij}$ Notation, 16
    $(A V)^T = V^T A^T$, 27
    Adding, 20
    Adding with $O$, 21
    Definition, 15
    Derivation of the characteristic equation for eigenvalues, 34
    Diagonal part, 18
    Eigenvalue equation, 33
    Identity, 18
    Lower Triangular Part, 17
    Multiplication Is Not Commutative, 21
    Multiplication with $I$, 21
    Multiplying, 20
    Nonzero vectors which solve the eigenvalue equation for an eigenvalue are called its eigenvectors, 35
    Roots of the characteristic equation are called eigenvalues, 35
    Scaling, 20
    Square, 17
    Subtracting, 20
    symmetric, 19
    Transpose, 19
    Upper Triangular Part, 17
    Zero, 16

Predator Prey Models
    $f$ and $g$ growth functions, 100
    A sample Predator - Prey analysis, 118
    Adding fishing rates, 118
    Distinct trajectories do not cross, 91
    Finding the period $T$ numerically, 168
    Food fish assumptions, 85
    Nonlinear conservation law in Region II, 93
    Nonlinear conservation law in Region IV, 95
    nullclines, 88
    Numerical solutions, 165
    Original Mediterranean Sea data, 84
    Phrasing the nonlinear conservation law in terms of $f$ and $g$, 101
    Plotting the Predator Prey trajectory, 172
    Positive $x$ axis trajectory, 91
    Positive $y$ axis trajectory, 91
    Predator assumptions, 85
    Quadrant I trajectory possibilities, 98
    Showing a trajectory can't spiral in, 113
    Showing a trajectory can't spiral out, 107
    Showing a trajectory is periodic, 108
    Showing the $x$ coordinate of a trajectory is bounded, 105
    Showing the $y$ coordinate of a trajectory is bounded, 107
    Showing Trajectories in Quadrant I are bounded, 103
    The average value of $x$ and $y$ on a trajectory, 115
    The Predator Prey model explains the Mediterranean Sea data, 120
Predator Prey Self Interaction Models
    $x' = 0$ line, 123
    $y' = 0$ line, 123
    Asymptotic values, 132
    Details of the analysis that trajectories must hit $a/e$, 133
    Intersection point of the nullclines, 127
    Limiting average values, 127
    Modeling food - food and predator - predator interaction, 122
    Solving numerically, 175
    The nullclines cross: $c/d < a/e$, 127
    The nullclines don't cross, 131
    Trajectory starting on positive $y$ axis, 124
    Trajectory starting on the positive $x$ axis, 126

Vector
    $n$ Dimensional, 25
    Adding, 23
    Angle Between A Pair Of Two Dimensional Vectors, 31
    Definition, 23
    Inner product of two vectors, 27
    $n$ Dimensional Cauchy Schwartz Theorem, 31
    Norm, 26
    Scaling, 24
    Subtracting, 23
    Three Dimensional, 25
    Transpose, 24
    Two Dimensional, 24
    Two Dimensional Cauchy Schwartz Theorem, 29
    Two Dimensional Collinear, 30
    What does the inner product mean?, 29

Worked Out Solutions
    Matrix - Vector Multiplication, 27
    Transpose Of A Matrix - Vector Multiplication, 27
    Adding fishing rates to the predator - prey model $x'(t) = 4 x(t) - 18 x(t) y(t) - 2 x(t)$, $y'(t) = -3 y(t) + 21 x(t) y(t) - 2 y(t)$, 121
    Adding Matrices, 20
    Adding Vectors, 23
    Convert $x'(t) = 6 x(t) + 9 y(t)$, $y'(t) = -10 x(t) + 15 y(t)$, $x(0) = 8$, $y(0) = 9$ to a matrix - vector system, 44
    Converting $x'(t) = 2 x(t) + 4 y(t)$, $y'(t) = -x(t) + 7 y(t)$, $x(0) = 2$, $y(0) = -3$ into a matrix - vector system, 45
    Deriving the characteristic equation for $x'(t) = -10 x(t) - 7 y(t)$, $y'(t) = 8 x(t) + 5 y(t)$, $x(0) = -1$, $y(0) = -4$, 52
    Deriving the characteristic equation for $x'(t) = 8 x(t) + 9 y(t)$, $y'(t) = 3 x(t) - 2 y(t)$, $x(0) = 12$, $y(0) = 4$, 50
    Eigenvalues and eigenvectors of $A = \begin{bmatrix} 4 & 9 \\ -1 & -6 \end{bmatrix}$, 37
    Eigenvalues and eigenvectors of $A = \begin{bmatrix} -3 & 4 \\ -1 & 2 \end{bmatrix}$, 35
    Multiplying Matrices, 21
    Partial Derivatives, 9
    Scaling Matrices, 20
    Solving the IVP $\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix} = \begin{bmatrix} -20 & 12 \\ -13 & 5 \end{bmatrix} \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}$, $\begin{bmatrix} x(0) \\ y(0) \end{bmatrix} = \begin{bmatrix} -1 \\ 2 \end{bmatrix}$ completely, 58
    Solving the IVP $\begin{bmatrix} x'(t) \\ y'(t) \end{bmatrix} = \begin{bmatrix} 4 & 9 \\ -1 & -6 \end{bmatrix} \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}$, $\begin{bmatrix} x(0) \\ y(0) \end{bmatrix} = \begin{bmatrix} 4 \\ -2 \end{bmatrix}$ completely, 61
    Solving the Predator - Prey model $x'(t) = 2 x(t) - 10 x(t) y(t)$, $y'(t) = -3 y(t) + 18 x(t) y(t)$, 118
    Subtracting Matrices, 20
    Subtracting Vectors, 23
    Transposing A Vector, 24
