
Contents

1 Introduction 9

1.1 What Is a System? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.2 Lumped Mechanical Components . . . . . . . . . . . . . . . . . . . . . . . . 12

1.3 Review of Ordinary Differential Equations . . . . . . . . . . . . . . . . . . . 16

1.3.1 Homogeneous Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . 18

1.3.2 Particular Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

1.3.3 Matching Initial Conditions . . . . . . . . . . . . . . . . . . . . . . . 24

1.4 Practice Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

2 First-Order Systems 33

2.1 Standard Form and Time Constant . . . . . . . . . . . . . . . . . . . . . . . 33

2.2 Forced Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.2.1 Step Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.2.2 Impulse Response and Delta Function . . . . . . . . . . . . . . . . . . 38


2.3 Practice Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

3 Second-Order Systems 51

3.1 Formulation of Second-Order Systems . . . . . . . . . . . . . . . . . . . . . . 51

3.2 Homogeneous Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

3.2.1 Underdamped Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.2.2 Overdamped Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.2.3 Critically Damped Systems . . . . . . . . . . . . . . . . . . . . . . . 64

3.3 Forced Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.3.1 Step Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.3.2 Impulse Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

3.4 Practice Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

4 State-Space Formulation 85

4.1 State Equation and Output Equation . . . . . . . . . . . . . . . . . . . . . . 86

4.2 The Big Picture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

4.3 Deriving State Equations from Ordinary Differential Equations . . . . . . . . 104

4.4 Deriving Ordinary Differential Equations from State Equations . . . . . . . . 108

4.4.1 Differential Operator d/dt . . . . . . . . . . . . . . . . . . . . . . . . 108

4.4.2 Cramer’s Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

4.4.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110



4.5 Practice Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

5 One-Port Elements 121

5.1 Multi-Domain Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

5.2 One-Port Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

5.2.1 Electrical Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

5.2.2 Translational Elements . . . . . . . . . . . . . . . . . . . . . . . . . . 127

5.2.3 Rotational Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

5.2.4 Fluid Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

5.2.5 Thermal Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

5.2.6 Lumped-Parameter Systems . . . . . . . . . . . . . . . . . . . . . . . 140

5.3 Through- and Across-Variables . . . . . . . . . . . . . . . . . . . . . . . . . 142

5.4 Classification of Element Types . . . . . . . . . . . . . . . . . . . . . . . . . 146

5.4.1 A-Type Elements: Generalized Capacitors . . . . . . . . . . . . . . . 147

5.4.2 T -Type Elements: Generalized Inductors . . . . . . . . . . . . . . . . 148

5.4.3 D-Type Elements: Generalized Resistors . . . . . . . . . . . . . . . . 149

5.5 Classification of Excitation Sources . . . . . . . . . . . . . . . . . . . . . . . 150

6 Linear Graphs 151

6.1 Notation of Linear Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

6.2 Linear Graphs: An Appetizer . . . . . . . . . . . . . . . . . . . . . . . . . . 155



6.3 Element Interconnect Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

6.4 Linear Graphs for Translational Systems . . . . . . . . . . . . . . . . . . . . 168

6.5 Linear Graphs for Rotational Systems . . . . . . . . . . . . . . . . . . . . . . 178

6.6 Physical Source Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184

7 Deriving State Equations from Linear Graphs 191

7.1 A Heuristic Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192

7.2 Trap 1: Which Loops and Nodes? . . . . . . . . . . . . . . . . . . . . . . . . 195

7.3 Trap 2: Which State Variables? . . . . . . . . . . . . . . . . . . . . . . . . . 197

7.4 Trap 3: Is A Singular? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

7.5 Concepts of Trees and Links . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

7.6 Primary and Secondary Variables . . . . . . . . . . . . . . . . . . . . . . . . 209

7.7 Normal Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214

7.8 Deriving State Equations from Linear Graphs . . . . . . . . . . . . . . . . . 221

7.9 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258

8 Deriving State Equations of Multi-Domain Systems 263

8.1 Two-Port Transducing Elements: An Introduction . . . . . . . . . . . . . . . 263

8.2 Mathematical Model of Two-Port Elements . . . . . . . . . . . . . . . . . . . 265

8.3 Linear Graph Models Involving Two-Port Elements . . . . . . . . . . . . . . 272

8.4 Primary and Secondary Variables for Two-Port Elements . . . . . . . . . . . 276



8.5 Normal Trees of Multi-Domain Systems . . . . . . . . . . . . . . . . . . . . . 279

8.6 State Equations for Multi-Domain Systems . . . . . . . . . . . . . . . . . . . 287

9 Nonlinear Systems 305

9.1 Exact Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306

9.2 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309

9.3 More Examples on Linearization . . . . . . . . . . . . . . . . . . . . . . . . . 313

10 Operational Block Diagrams 321

10.1 Basic Operations and Notations . . . . . . . . . . . . . . . . . . . . . . . . . 321

10.2 Block Diagrams for State Equations . . . . . . . . . . . . . . . . . . . . . . . 323

11 Solution of State Equations in Time Domain 329

11.1 A Review of Linear Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . 330

11.1.1 Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . . . . . 330

11.1.2 Modal Matrix and Similarity Transformation . . . . . . . . . . . . . . 341

11.1.3 Jordan Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

11.2 Solution Structures of State Equations . . . . . . . . . . . . . . . . . . . . . 346

11.3 Homogeneous Solution: Eigenvector Expansion . . . . . . . . . . . . . . . . . 347

11.4 Homogeneous Solution: State Transition Matrix . . . . . . . . . . . . . . . . 354

11.5 Particular Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361

11.6 Stability of a System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362



12 Transfer Functions 371

12.1 A Review of Complex Numbers . . . . . . . . . . . . . . . . . . . . . . . . . 372

12.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372

12.1.2 Polar Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

12.1.3 Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374

12.1.4 Reciprocals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

12.1.5 Complex Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

12.1.6 Complex Elementary Functions . . . . . . . . . . . . . . . . . . . . . 376

12.1.7 Polar Form Revisited . . . . . . . . . . . . . . . . . . . . . . . . . 377

12.1.8 Multiplication in Polar Form . . . . . . . . . . . . . . . . . . . . . . . 378

12.1.9 Reciprocals in Polar Form . . . . . . . . . . . . . . . . . . . . . . . . 379

12.2 What Is a Transfer Function? . . . . . . . . . . . . . . . . . . . . . . . . . . 380

12.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381

12.2.2 Heuristic Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 382

12.2.3 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387

12.3 Magnitude and Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390

12.3.1 Heuristic Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

12.3.2 Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395

12.4 Poles and Zeros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400

12.4.1 Heuristic Example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 401



12.4.2 Definition of Poles and Zeros . . . . . . . . . . . . . . . . . . . . . . . 404

12.5 Connection to Operational Block Diagrams . . . . . . . . . . . . . . . . . . . 416

12.5.1 Revisit of Block Diagrams . . . . . . . . . . . . . . . . . . . . . . . . 416

12.5.2 Interconnected Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 418

12.6 Transfer Functions in State Formulation . . . . . . . . . . . . . . . . . . . . 424

13 Frequency Response Functions 429

13.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429

13.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431

13.3 G(ω) as an Input-Output Relation . . . . . . . . . . . . . . . . . . . . . . . 435

13.4 First-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443

13.5 Second-Order Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451

13.6 Bode Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462

13.6.1 Commonly Used Bode Plots . . . . . . . . . . . . . . . . . . . . . . . 464

13.6.2 Bode Plots for Complex Systems . . . . . . . . . . . . . . . . . . . . 471

14 Fourier Analysis 475

14.1 Response under Periodic Excitations . . . . . . . . . . . . . . . . . . . . . . 475

14.1.1 Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476

14.1.2 Complex Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . 483

14.1.3 Frequency Response . . . . . . . . . . . . . . . . . . . . . . . . . . . 485



14.1.4 Concept of Spectra . . . . . . . . . . . . . . . . . . . . . . . . . . 496

14.2 Response under Non-Periodic Excitations . . . . . . . . . . . . . . . . . . . . 503

14.2.1 Fourier Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504

14.2.2 Finding Response using Fourier Transforms . . . . . . . . . . . . . . 513

14.2.3 Applications of Fourier Analysis . . . . . . . . . . . . . . . . . . . . . 516

15 The Method of Impedance 521

15.1 Impedance and Admittance . . . . . . . . . . . . . . . . . . . . . . . . . . . 521

15.2 Impedance Derived from Linear Graphs . . . . . . . . . . . . . . . . . . . . . 523

15.2.1 Impedance of One-Port Elements . . . . . . . . . . . . . . . . . . . . 524

15.2.2 Elements in Series or Parallel Connections . . . . . . . . . . . . . . . 525

15.2.3 Impedance of Two-Port Elements . . . . . . . . . . . . . . . . . . . . 529

15.3 Relation to Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 534


Chapter 1

Introduction

System dynamics is a subject that studies the response of a multi-component system subjected to external excitations. Modeling and analysis of system dynamics is often done through the use of ordinary differential equations. In Section 1.1, we will discuss the definition of a system and introduce the concepts of inputs and outputs. We will also explain the multi-domain nature of system dynamics. In Section 1.2, we will introduce the simplest way to model a mechanical system in terms of a point mass, a viscous damper, and a linear spring. Examples will be shown to demonstrate modeling of system dynamics through the use of ordinary differential equations. In Section 1.3, we will summarize and review important results in ordinary differential equations, such as homogeneous solutions, particular solutions, and matching the initial conditions.

1.1 What Is a System?

A system is a collection of things that we want to analyze. Therefore, we – the users – define the system to help us analyze situations and solve problems. Once a system is defined, things that are outside the system become the surrounding environment. The system and its environment communicate through inputs and outputs.


Figure 1.1: Input and output as a function of time for a child with fever

Let me use my children as an example to explain the concept of systems, inputs, and outputs. As a parent, I am very interested in my children’s growth. Therefore, I can define each child as an individual system. The input to each system (i.e., each child) can be a tangible quantity, such as food intake, or an intangible quantity, such as love. Similarly, the output can be tangible, such as weight and height, or intangible, such as attitude. When my kids are sick with a fever, I give them a lot of fluid and some medication. I also monitor their temperature constantly. In this case, the fluid and medication intakes become the inputs, and the temperature becomes the output that I am eager to know.

Through this simple example, there are three things worth noting. First, inputs and outputs can be single or multiple. A system can have a single input and multiple outputs, for example. Second, the inputs and outputs are chosen according to our needs. Third, inputs and outputs are usually functions of time. For example, if we plot the medication intake (input) and the temperature (output) in the previous example as functions of time, we get the results in Fig. 1.1. The medication dosage peaks every 6 hours. After the medication is given, the temperature comes down for several hours and then comes back up. Since the inputs and outputs vary with time, we often call such systems dynamic systems.

As a second example, let us define the US stock market as our system. We can consider the
amount of capital flowing in and out of the market as inputs. We can take the Dow Jones index as the output. Again, the inputs and outputs can be single or multiple. We define the inputs and outputs according to our needs. Finally, the inputs and outputs are functions of time.

Figure 1.2: Startup of a car engine

As a third example, let us analyze the startup of a car – a real situation with some engineering flavor; see Fig. 1.2. Let us define the engine as our system for analysis. When we get into a car, we turn the key. At this time, the battery gives an electrical input to the motor, which in turn gives the engine a mechanical input to start the engine. At the same time, we press the gas pedal, giving a mechanical input to the fuel pump, which in turn gives the engine a chemical input (i.e., the fuel). After the engine starts, the engine generates a torque (mechanical output) to drive the wheels, a current (electrical output) to drive the generator, and a lot of heat (thermal output) exhausted to the environment.

The car example shows that a real engineering system consists of multiple components that input, output, and transform various forms of energy, such as mechanical, electrical, thermal, and chemical. For the rest of this book, we will call each form of energy a domain.

Now we have defined systems, inputs, and outputs. What do we do in system analysis? The purpose of system analysis is twofold. The first purpose is to mathematically model a system that contains multiple domains. Often, the model appears in the form of ordinary differential equations. Unfortunately, ordinary differential equations are not very easy to handle numerically when the size of the system becomes large. Therefore, an alternative formulation called state equations becomes more appropriate. The second purpose is to predict the outputs as functions of time when the inputs are given (or prescribed) by solving the governing ordinary differential equations or state equations. Since the solutions can be very cumbersome and overwhelming, there are various techniques to help us better understand the physics instead of getting lost in number crunching.

1.2 Lumped Mechanical Components

As shown in Fig. 1.3, we now demonstrate modeling of a mechanical system through use
of three lumped components: point masses, viscous dampers, and linear springs. These are
the simplest ways to model inertia, damping, and stiffness effects of a mechanical system
for several reasons. First, these lumped components do not have dimensions and do not
occupy space. Second, these lumped components have linear governing equations. They are
explained as follows.

For the point mass, let the point mass have a constant mass m and experience a displacement x(t) relative to an inertial frame, where t is the time. When the point mass is subjected to an external load F(t), the position of the mass follows Newton’s second law

F = mẍ (1.1)

where the dots denote time derivatives. Note that the positive directions of F(t) and x(t) are the same in Fig. 1.3. The positive directions shown in Fig. 1.3 are usually called the sign convention. When modeling using lumped components, it is important to make sure that the derivation is consistent with the sign convention. Otherwise, a sign error will occur.

Figure 1.3: Lumped mechanical components: mass, damper, and spring

For the viscous damper, let the damping coefficient be c. In addition, the two ends of the damper have absolute displacements x1(t) and x2(t). Moreover, the viscous damper is subjected to a pair of forces F(t). The governing equation of the viscous damper is

F = c (ẋ2 − ẋ1) (1.2)

There are two things worth noting. First, the governing equation (1.2) is consistent with the sign convention in Fig. 1.3. If the force F is tensile, the damper will stretch, resulting in ẋ2 > ẋ1. Since the damping coefficient c is positive, both sides of (1.2) are positive and the signs are consistent. Second, the forces at the two ends of the viscous damper must be equal. This is because the damper has no mass; the forces acting on the damper must be in equilibrium. Otherwise, the damper would have infinitely large acceleration.

For the linear spring, let the spring constant be k. In addition, the two ends of the spring experience absolute displacements x1(t) and x2(t). Also, the linear spring is subjected to a pair of forces F(t). The governing equation of the linear spring is

F = k (x2 − x1) (1.3)

Figure 1.4: A very interesting spring-mass-damper system

Similarly, the governing equation (1.3) is consistent with the sign convention in Fig. 1.3, and the forces acting on the spring must be in equilibrium.

Although lumped masses, dampers, and springs are simple ways to model mechanical systems, it is sometimes difficult to visualize how a complex system, such as an airplane, can be modeled through these components. For a simple analysis, one can model the fuselage and turbine engines as point masses. Also, the wings can be modeled as linear springs. If the vibration of the fuselage is of interest, more advanced analysis, such as finite element analysis, can reduce a complicated continuous airplane model to a discrete model with many lumped masses and springs. It should also be noted that the lumped models in this section are linear. In many practical applications, nonlinear effects cannot be ignored. For example, the drag force from air is not proportional to the velocity of the moving object.

Example 1.1 Consider the spring-mass-damper system shown in Fig. 1.4. This is a very interesting and somewhat pathological system, because it presents many strange phenomena that we will revisit many times later in this book. In the future, we will refer to this system as the flagship model. The system consists of a point mass with mass m, a linear spring with spring constant k, and a viscous damper with damping coefficient B. Initially, the system is in equilibrium, i.e., the spring is unstretched and there is no motion. Then an external force Fs(t) is applied to the mass. Let x1(t) be the displacement of the spring-damper connecting point and x2(t) be the displacement of the point mass. Determine

Figure 1.5: Free-body diagram of this interesting problem

1. the ordinary differential equation governing x2 (t) with corresponding initial conditions,
and

2. the ordinary differential equation governing v2 (t) ≡ ẋ2 (t) with corresponding initial
conditions.

To obtain the governing equations, let us start with the two free-body diagrams in Fig. 1.5. The first free-body diagram is for the point mass. Through use of Newton’s second law, the sum of forces results in

Fs − k (x2 − x1) = mẍ2 (1.4)

The second free-body diagram is for the spring-damper connecting point. Note that the forces on both sides must be in equilibrium, resulting in

k (x2 − x1) = B ẋ1 (1.5)

Since we want to obtain the equation governing x2, we need to eliminate x1 from (1.4) and (1.5). Solving for k (x2 − x1) from (1.4), we obtain

k (x2 − x1) = Fs − mẍ2 (1.6)

and solving for x1 from (1.6), we obtain

x1 = x2 − (1/k) [Fs − mẍ2] (1.7)
Finally, substitution of (1.6) and (1.7) back into (1.5) gives

Fs − mẍ2 = B { ẋ2 − (1/k) [ Ḟs − m (d³x2/dt³) ] } (1.8)
16 CHAPTER 1. INTRODUCTION

Rearranging the terms in (1.8) results in

m (d³x2/dt³) + (mk/B) ẍ2 + k ẋ2 = (k/B) Fs(t) + Ḟs(t) (1.9)
This is a third-order differential equation in x2, and we need three initial conditions to solve for x2. They are x2(0), ẋ2(0), and ẍ2(0). Since the system is at rest in equilibrium initially, x2(0) = ẋ2(0) = 0. The initial condition ẍ2(0) is tricky. According to (1.4),

ẍ2(0) = (1/m) {Fs(t) − k [x2(t) − x1(t)]}t=0 = Fs(0)/m (1.10)

To obtain the equation governing v2(t), recall that v2(t) ≡ ẋ2(t). Then (1.9) becomes

m v̈2 + (mk/B) v̇2 + k v2 = (k/B) Fs(t) + Ḟs(t) (1.11)

with initial conditions v2(0) ≡ ẋ2(0) = 0 and v̇2(0) ≡ ẍ2(0) = Fs(0)/m.

This example has many implications. First, there may be more than one way to write down the governing equation for a given system. The governing equations for x2 and v2 are both correct. Second, the choice of the governing equation depends on the needs. If position information is needed, then equation (1.9) must be used. If velocity information is needed, then (1.11) will suffice. Third, when a system becomes more complicated, the ordinary differential equation approach becomes inefficient. Let us imagine that a system consists of 2000 lumped masses, springs, and dampers. For each of these 2000 components, we need to write down the governing equations. Worse yet, we need to eliminate 1999 variables and find initial conditions up to the 1999th order. This is an impossible task, and a better way to model complicated systems is highly desirable.
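For a single flagship model, the elimination above can also be sidestepped numerically. The sketch below integrates (1.4) and (1.5) directly as a set of first-order equations; the parameter values, the constant step force, and the hand-rolled Runge-Kutta integrator are all illustrative assumptions, not from the text.

```python
# Numerical sketch of the flagship model (Example 1.1): integrate the
# first-order equations behind (1.4) and (1.5) with an RK4 step.
# States: x1 (connecting point), x2 and v2 (mass position and velocity).
# Parameter values and the step force below are illustrative assumptions.
m, k, B = 1.0, 2.0, 0.5        # mass, spring constant, damping coefficient

def Fs(t):
    return 1.0                 # constant force switched on at t = 0

def deriv(t, s):
    x1, x2, v2 = s
    spring = k * (x2 - x1)               # spring force k(x2 - x1)
    return (spring / B,                  # (1.5): B x1' = k(x2 - x1)
            v2,                          # x2' = v2 by definition
            (Fs(t) - spring) / m)        # (1.4): m x2'' = Fs - k(x2 - x1)

def rk4_step(s, t, h):
    k1 = deriv(t, s)
    k2 = deriv(t + h/2, [si + h/2*ki for si, ki in zip(s, k1)])
    k3 = deriv(t + h/2, [si + h/2*ki for si, ki in zip(s, k2)])
    k4 = deriv(t + h, [si + h*ki for si, ki in zip(s, k3)])
    return [si + h/6*(a + 2*b + 2*c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

state, t, h = [0.0, 0.0, 0.0], 0.0, 0.001    # system initially at rest
while t < 40.0:
    state = rk4_step(state, t, h)
    t += h

# Long after the step, the spring force balances Fs, both ends move together,
# and v2 approaches the terminal velocity Fs/B.
print(round(state[2], 3))
```

The long-time limit gives a quick sanity check of the model: in steady motion the spring stretch is constant at Fs/k and the whole assembly drifts at the velocity Fs/B set by the damper.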

1.3 Review of Ordinary Differential Equations

This section quickly reviews linear ordinary differential equations and their solutions. In system dynamics, governing equations often appear in the form of linear ordinary differential equations with prescribed initial conditions. This type of problem is called an initial value problem. Mathematically, a linear ordinary differential equation takes the form of

L[x(t)] ≡ d^n x/dt^n + a1 d^{n−1}x/dt^{n−1} + . . . + an−1 dx/dt + an x = h(t) (1.12)

where t is the independent variable, x(t) is the dependent variable, h(t) is the forcing function, and a1, . . . , an are constant coefficients. The left side of the ordinary differential equation can be abbreviated using a linear operator L[•] operating on the dependent variable x(t). For initial value problems, the ordinary differential equation (1.12) is subjected to initial conditions prescribed at t = 0. More specifically,

x(0) = b0, dx(0)/dt = b1, . . . , d^{n−1}x(0)/dt^{n−1} = bn−1 (1.13)

where b0, b1, . . . , bn−1 are constants.

Example 1.2 As engineers, we need to understand the physical meaning of ordinary differential equations. Let me use the experience of buying a used car to explain the physical meaning of ordinary differential equations. One quick way to feel the suspension is to push down on the car and let go. Then the car will oscillate a few cycles before it stops. In this little experiment, the car is our system to be analyzed because we are interested in it. Note that each car is different; some cars have soft suspensions and others have hard suspensions. Mathematically, this means that each car has its own linear operator L[•] governing the vibration of the car.

If we define t = 0 as the moment immediately before the force is applied, the force applied to the car is h(t) and serves as an input to the system. The vibration (i.e., vertical displacement) of the car is x(t) and serves as the output of the system. The initial displacement and velocity of the car before the push are the initial conditions. (They turn out to be zero in this case.)

Now we are test-driving the used car on a freeway. We can feel that the car is vibrating, because the road surface is not perfectly flat and the wind is blowing. In this case, the wavy road surface and the forces from the surrounding air become the input excitation h(t). Again, the vibration of the car is x(t).

For the initial value problem formulated in (1.12) and (1.13), there are several things worth noting.

1. In practical applications, a1, a2, . . . , an often depend on a system’s physical properties that do not vary significantly with respect to time, such as inertia and stiffness. As a result, the coefficients a1, a2, . . . , an are constants. In this case, the system is called time-invariant.

2. The ordinary differential equation (1.12) is linear, because each term of the equation only involves x(t) or one of its derivatives.

3. The solution of the initial value problem (1.12) and (1.13) is unique. This makes a lot of sense in practical applications. In the used car example, if we give the system a prescribed input force h(t) and initial conditions, we expect that the car will have a unique response x(t).

4. The differential equation (1.12) involves time derivatives up to order n. Therefore, the equation needs to be integrated n times to obtain the solution x(t). Each integration will result in an integration constant. Hence, the solution of (1.12) will have n integration constants, which will be determined by the n initial conditions in (1.13).

5. Homogeneous solutions xh(t) satisfy L[xh(t)] = 0, and particular solutions xp(t) satisfy L[xp(t)] = h(t). The complete solution of the initial value problem is

x(t) = xh(t) + xp(t) (1.14)

They are explained in more detail in the following sections.
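As a toy illustration of item 5 (the equation below is an assumed example, not one from the text), consider ẋ + x = 1 with x(0) = 0: the homogeneous solution is C e^{−t}, a particular solution is xp = 1, and matching the initial condition fixes C = −1.

```python
# Toy illustration of x(t) = xh(t) + xp(t): solve x' + x = 1, x(0) = 0.
# Homogeneous: xh = C e^{-t} (root of lambda + 1 = 0); particular: xp = 1.
# Matching x(0) = 0 gives C = -1, so the complete solution is 1 - e^{-t}.
import math

def x(t):
    C = -1.0                        # integration constant from x(0) = 0
    return C * math.exp(-t) + 1.0   # xh + xp

def residual(t, dt=1e-6):
    """Check x' + x - 1 = 0 with a central-difference derivative."""
    xdot = (x(t + dt) - x(t - dt)) / (2 * dt)
    return xdot + x(t) - 1.0

assert abs(x(0.0)) < 1e-12          # initial condition is satisfied
assert abs(residual(1.0)) < 1e-6    # the ODE is satisfied
print(round(x(math.log(2)), 3))     # 1 - e^{-ln 2} = 0.5
```

The same three-step recipe – build xh from the characteristic roots, guess xp from the forcing, then fit the constants to the initial conditions – carries over unchanged to the higher-order equations treated next.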

1.3.1 Homogeneous Solutions

By definition, a homogeneous solution xh(t) satisfies the homogeneous equation

L[xh(t)] ≡ d^n xh/dt^n + a1 d^{n−1}xh/dt^{n−1} + . . . + an−1 dxh/dt + an xh = 0 (1.15)

To explain the physical meaning of the homogeneous solution, let us use the used car example. If we define t = 0 as the moment immediately after the push is complete (i.e., the moment immediately before the car is released), the car will have non-trivial initial displacement and velocity at t = 0. For t > 0, there will be no external force acting on the car, and h(t) = 0. In this case, the subsequent vibration of the car is the homogeneous solution xh(t). For many practical applications, the homogeneous solution will die out eventually (cf. the used car example). If so, the homogeneous solution is often called the transient response.

To determine xh(t), assume that

xh(t) = C e^{λt} (1.16)

where C and λ are both constants. Substitution of (1.16) into (1.15) gives

λ^n + a1 λ^{n−1} + . . . + an−1 λ + an = 0 (1.17)

Equation (1.17) is often called the characteristic equation. Let λ1, . . . , λn be the roots of (1.17). Then (1.17) can be rewritten as

(λ − λ1)(λ − λ2) . . . (λ − λn) = 0 (1.18)

and

λ = λ1, λ2, . . . , λn (1.19)

λ1, . . . , λn are also called characteristic values. Since each root λi in (1.19) gives a homogeneous solution e^{λi t}, the complete homogeneous solution is a linear combination of the form

xh(t) = C1 e^{λ1 t} + C2 e^{λ2 t} + . . . + Cn e^{λn t} (1.20)

where C1, C2, . . . , Cn are arbitrary constants to be determined from initial conditions.
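Because d/dt e^{λt} = λ e^{λt}, substituting e^{λt} into L[x] simply multiplies e^{λt} by the characteristic polynomial, so L[x] vanishes exactly when λ is a characteristic root. The snippet below checks this for an assumed sample equation (ẍ + 3ẋ + 2x = 0, not from the text), whose roots are −1 and −2.

```python
# Sketch: an exponential e^{lam t} solves the homogeneous equation exactly
# when lam is a root of the characteristic equation (1.17). The sample
# second-order equation is an assumption for illustration: x'' + 3x' + 2x = 0.
import math

a1, a2 = 3.0, 2.0                      # x'' + a1 x' + a2 x = 0

def L_of_exp(lam, t):
    """Evaluate x'' + a1 x' + a2 x for x(t) = e^{lam t}, using exact derivatives."""
    x = math.exp(lam * t)
    return (lam**2) * x + a1 * lam * x + a2 * x

for lam in (-1.0, -2.0):               # roots of lam^2 + 3 lam + 2 = 0
    assert abs(L_of_exp(lam, 1.3)) < 1e-12

print(abs(L_of_exp(-0.5, 1.0)) > 0.1)  # a non-root does not satisfy L[x] = 0
```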

The homogeneous solution in (1.20) can take various forms when the characteristic values are complex or repeated.

Complex Roots Consider the case when λ1 = a + jb and λ2 = a − jb, where a and b are both real and j ≡ √−1 is the pure imaginary number. Then

C1 e^{λ1 t} + C2 e^{λ2 t} = e^{at} (d1 cos bt + d2 sin bt) (1.21)

where d1 ≡ C1 + C2 and d2 ≡ j (C1 − C2).
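The identity (1.21) can be checked directly with Python's complex arithmetic; the particular numbers below are arbitrary. The key requirement, which makes the combination real, is that C2 be the complex conjugate of C1.

```python
# Check of (1.21): for conjugate roots a ± jb and conjugate constants C1, C2,
# C1 e^{(a+jb)t} + C2 e^{(a-jb)t} equals e^{at}(d1 cos bt + d2 sin bt).
# All numerical values below are arbitrary illustrations.
import cmath, math

a, b = -0.3, 2.0
C1 = 0.5 - 0.25j
C2 = C1.conjugate()
d1 = (C1 + C2).real                 # d1 = C1 + C2
d2 = (1j * (C1 - C2)).real          # d2 = j (C1 - C2)

for t in (0.0, 0.7, 3.1):
    complex_form = (C1 * cmath.exp((a + 1j*b) * t)
                    + C2 * cmath.exp((a - 1j*b) * t))
    real_form = math.exp(a*t) * (d1 * math.cos(b*t) + d2 * math.sin(b*t))
    assert abs(complex_form.imag) < 1e-12           # the sum is purely real
    assert abs(complex_form.real - real_form) < 1e-12

print(round(d1, 2), round(d2, 2))   # 1.0 0.5
```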

Repeated Real Roots If λ1 = λ2, the homogeneous solutions C1 e^{λ1 t} and C2 e^{λ2 t} are linearly dependent. Instead, the second linearly independent homogeneous solution takes the form t e^{λ1 t}. Therefore, the homogeneous solution becomes

xh(t) = C1 e^{λ1 t} + C2 t e^{λ1 t} = e^{λ1 t} (C1 + C2 t) (1.22)

If λ1 = λ2 = λ3, the process can be repeated to obtain the homogeneous solution

xh(t) = C1 e^{λ1 t} + C2 t e^{λ1 t} + C3 t² e^{λ1 t} = e^{λ1 t} (C1 + C2 t + C3 t²) (1.23)

If λ1 = λ3 = a + jb and λ2 = λ4 = a − jb, the homogeneous solution is

xh(t) = e^{at} [(d1 + d2 t) cos bt + (d3 + d4 t) sin bt] (1.24)

Example 1.3 Let us consider the flagship model in Example 1.1. The governing equation is

m (d³x2/dt³) + (mk/B) ẍ2 + k ẋ2 = (k/B) Fs(t) + Ḟs(t) (1.25)
Find the homogeneous solutions when

1. m = 1 kg, k = 2 N/m, B = 2/3 Ns/m;

2. m = 1 kg, k = 1 N/m, B = 0.5 Ns/m;

3. m = 1 kg, k = 2 N/m, B = 1 Ns/m;

The characteristic equation from (1.17) is

m λ³ + (mk/B) λ² + k λ = λ [ m λ² + (mk/B) λ + k ] = 0 (1.26)

1. For m = 1 kg, k = 2 N/m, and B = 2/3 Ns/m, (1.26) gives

λ (λ² + 3λ + 2) = 0 (1.27)

Hence λ = 0, −1, −2. According to (1.20), the homogeneous solution is

xh(t) = C1 + C2 e^{−t} + C3 e^{−2t} (1.28)

where C1 , C2 , and C3 are arbitrary constants.



2. For m = 1 kg, k = 1 N/m, and B = 0.5 Ns/m, (1.26) gives

λ (λ² + 2λ + 1) = 0 (1.29)

Hence λ = 0, −1, −1. According to (1.22), the homogeneous solution is

xh(t) = C1 + C2 e^{−t} + C3 t e^{−t} (1.30)

where C1 , C2 , and C3 are arbitrary constants.

3. For m = 1 kg, k = 2 N/m, and B = 1 Ns/m, (1.26) gives

λ (λ² + 2λ + 2) = 0 (1.31)

Hence λ = 0, −1 ± j. According to (1.21), the homogeneous solution is

xh(t) = C1 + e^{−t} (d1 cos t + d2 sin t) (1.32)

where C1 , d1 , and d2 are arbitrary constants.
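The three cases can be cross-checked with a few lines of Python (a verification sketch, not part of the text): each quoted root should make the characteristic polynomial of (1.26) vanish.

```python
# Verification sketch for Example 1.3: every root quoted in each case
# should zero the characteristic polynomial m*l^3 + (m*k/B)*l^2 + k*l of (1.26).
cases = [
    (1.0, 2.0, 2.0/3.0, (0.0, -1.0, -2.0)),    # case 1: distinct real roots
    (1.0, 1.0, 0.5,     (0.0, -1.0, -1.0)),    # case 2: repeated real root
    (1.0, 2.0, 1.0,     (0.0, -1+1j, -1-1j)),  # case 3: complex-conjugate pair
]

for m, k, B, roots in cases:
    for lam in roots:
        p = m*lam**3 + (m*k/B)*lam**2 + k*lam  # characteristic polynomial
        assert abs(p) < 1e-9, (m, k, B, lam)

print("all roots check out")
```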

1.3.2 Particular Solutions

By definition, a particular solution xp (t) satisfy the equation

d n xp dn−1 xp dxp
L[xp (t)] ≡ + a 1 + . . . + an−1 + an xp = h(t) (1.33)
dtn dtn−1 dt
To explain the physical meaning of the particular solution, let us use the used car example
again. When the used car is moving on a freeway, the car is subjected to excitations from
the road and the wind. These excitations form the forcing term h(t). The vibration of the
car is then a particular solution xp under these excitations h(t).

The particular solution can often be found through the method of undetermined coefficients as follows. If h(t) takes a form shown in Table 1.1, the particular solution xp(t) will take the corresponding form in Table 1.1 with undetermined coefficients. For example, if h(t) = c2 t², the particular solution will take the form xp = K0 + K1 t + K2 t², where K0, K1, and K2 are the undetermined coefficients. These coefficients are determined by substituting the particular solution back into (1.33).

Table 1.1: Table for the Method of Undetermined Coefficients

h(t)                              xp(t)
Constant c0                       K0
c1 t                              K0 + K1 t
c2 t²                             K0 + K1 t + K2 t²
c0 + c1 t + ... + cn t^n          K0 + K1 t + ... + Kn t^n
e^(pt)                            K0 e^(pt)
cos pt, sin pt                    K1 cos pt + K2 sin pt
e^(at) cos bt, e^(at) sin bt      e^(at) (K1 cos bt + K2 sin bt)

The forms of xp(t) in Table 1.1 only work when xp is not a homogeneous solution. If the form of xp(t) from Table 1.1 is indeed a homogeneous solution, it needs to be replaced by t xp(t). If t xp(t) is still a homogeneous solution, replace it by t² xp(t). One can repeat the process until a form different from the homogeneous solution is obtained.

Example 1.4 Let us consider the flagship model in Example 1.1. The governing equation
is
m d³x2/dt³ + (mk/B) ẍ2 + k ẋ2 = h(t)   (1.34)
where
h(t) ≡ (k/B) Fs(t) + Ḟs(t)   (1.35)
Consider the case when m = 1 kg, k = 2 N/m, and B = 2/3 Ns/m. Determine the particular
solution when

1. h(t) = et ,

2. h(t) = 1, and

3. h(t) = t.

For m = 1 kg, k = 2 N/m, and B = 2/3 Ns/m, the ordinary differential equation (1.34)
becomes
d³x2/dt³ + 3ẍ2 + 2ẋ2 = h(t)   (1.36)

and the homogeneous solution from (1.28) is

xh (t) = C1 + C2 e−t + C3 e−2t (1.37)

where C1 , C2 , and C3 are arbitrary constants.

1. For h(t) = et , Table 1.1 indicates that the particular solution should take the form of

xp = K0 et (1.38)

Substitution of (1.38) into (1.36) gives

d³xp/dt³ + 3ẍp + 2ẋp = K0 e^t + 3K0 e^t + 2K0 e^t = e^t   (1.39)
Hence
K0 = 1/6,   xp(t) = (1/6) e^t   (1.40)
2. For h(t) = 1, Table 1.1 indicates that the particular solution should take the form of

xp = K0   (1.41)

The constant K0, however, is a homogeneous solution according to (1.37). Therefore, the proper form of the particular solution should be

xp = K0 t   (1.42)

Substitution of (1.42) into (1.36) gives

d³xp/dt³ + 3ẍp + 2ẋp = 2K0 = 1   (1.43)
Hence
K0 = 1/2,   xp(t) = t/2   (1.44)
3. For h(t) = t, Table 1.1 indicates that the particular solution should take the form of

xp = K0 + K1 t   (1.45)

The constant K0, however, is a homogeneous solution according to (1.37). Therefore, K0 should be replaced by K0 t. Since K0 t is linearly dependent on K1 t in (1.45), K1 t should be replaced by K1 t². Therefore, the proper form of the particular solution should be

xp = K0 t + K1 t²   (1.46)

Substitution of (1.46) into (1.36) gives

d³xp/dt³ + 3ẍp + 2ẋp = (2K0 + 6K1) + 4K1 t = t   (1.47)
Equating the coefficients in (1.47) leads to a set of simultaneous equations

4K1 = 1,   2K0 + 6K1 = 0   (1.48)

Hence
K0 = −3/4,   K1 = 1/4   (1.49)
and the particular solution is
xp(t) = −(3/4) t + (1/4) t²   (1.50)
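The three particular solutions found in this example can be verified by substituting them back into (1.36); below is a symbolic check with sympy (our tooling choice, not the text's):

```python
import sympy as sp

t = sp.symbols('t')
L = lambda f: sp.diff(f, t, 3) + 3*sp.diff(f, t, 2) + 2*sp.diff(f, t)  # left side of (1.36)

# (particular solution, forcing term h(t)) pairs from Example 1.4
checks = [
    (sp.exp(t)/6, sp.exp(t)),      # (1.40)
    (t/2, sp.Integer(1)),          # (1.44)
    (-3*t/4 + t**2/4, t),          # (1.50)
]
for xp, h in checks:
    assert sp.simplify(L(xp) - h) == 0
print("all three particular solutions satisfy (1.36)")
```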

1.3.3 Matching Initial Conditions

After the complete solution x(t) = xh (t) + xp (t) is found, the initial conditions can then
be applied to determine the arbitrary constants in the homogeneous solution. A common
mistake is to apply the initial conditions only to the homogeneous solution xh (t). This will
lead to erroneous results when xp(0) ≠ 0.

Example 1.5 Let us consider the flagship model in Example 1.1. The governing equation
is
m d³x2/dt³ + (mk/B) ẍ2 + k ẋ2 = h(t) ≡ (k/B) Fs(t) + Ḟs(t)   (1.51)
Consider the case when m = 1 kg, k = 2 N/m, and B = 2/3 Ns/m. Determine the complete
solution when

1. h(t) = 0, x2(0) = 0, ẋ2(0) = 1, ẍ2(0) = 0, and

2. h(t) = 1 and x2(0) = ẋ2(0) = ẍ2(0) = 0.

For m = 1 kg, k = 2 N/m, and B = 2/3 Ns/m, the ordinary differential equation (1.51)
becomes
d³x2/dt³ + 3ẍ2 + 2ẋ2 = h(t)   (1.52)
and the homogeneous solution from (1.28) is

xh (t) = C1 + C2 e−t + C3 e−2t (1.53)

where C1 , C2 , and C3 are arbitrary constants.

1. For h(t) = 0, the particular solution xp = 0, and the complete solution is simply

x2 (t) = xh (t) + xp (t) = C1 + C2 e−t + C3 e−2t (1.54)

Derivatives of x2 (t) from (1.54) show that

ẋ2 (t) = −C2 e−t − 2C3 e−2t (1.55)

and
ẍ2 (t) = C2 e−t + 4C3 e−2t (1.56)
Application of the initial conditions to (1.54), (1.55), and (1.56) gives

x2(0) = C1 + C2 + C3 = 0
ẋ2(0) = −C2 − 2C3 = 1   (1.57)
ẍ2(0) = C2 + 4C3 = 0

Solution of (1.57) gives


C1 = 3/2,   C2 = −2,   C3 = 1/2   (1.58)
and the complete solution is
x2(t) = 3/2 − 2e^(−t) + (1/2) e^(−2t)   (1.59)

2. For h(t) = 1, the particular solution xp = t/2 according to (1.44). Therefore, the
complete solution is simply
x2(t) = xh(t) + xp(t) = C1 + C2 e^(−t) + C3 e^(−2t) + t/2   (1.60)
Derivatives of x2 (t) from (1.60) show that
ẋ2(t) = −C2 e^(−t) − 2C3 e^(−2t) + 1/2   (1.61)
and
ẍ2 (t) = C2 e−t + 4C3 e−2t (1.62)

Application of the initial conditions to (1.60), (1.61), and (1.62) gives

x2(0) = C1 + C2 + C3 = 0
ẋ2(0) = −C2 − 2C3 + 1/2 = 0   (1.63)
ẍ2(0) = C2 + 4C3 = 0

Solution of (1.63) gives


C1 = −3/4,   C2 = 1,   C3 = −1/4   (1.64)
and the complete solution is
x2(t) = −3/4 + e^(−t) − (1/4) e^(−2t) + t/2   (1.65)
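The complete solution (1.65) can be reproduced with a computer algebra system. Below is a sketch using sympy's dsolve with the stated zero initial conditions; this is an illustrative check, not part of the text:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
ode = sp.Eq(x(t).diff(t, 3) + 3*x(t).diff(t, 2) + 2*x(t).diff(t), 1)   # (1.52) with h(t) = 1
ics = {x(0): 0,
       x(t).diff(t).subs(t, 0): 0,
       x(t).diff(t, 2).subs(t, 0): 0}
sol = sp.dsolve(ode, x(t), ics=ics)

expected = sp.Rational(-3, 4) + sp.exp(-t) - sp.exp(-2*t)/4 + t/2      # (1.65)
assert sp.simplify(sol.rhs - expected) == 0
print(sol.rhs)
```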

1.4 Practice Problems


1. Let’s consider the Boeing Company as a system. List 5 possible inputs and 5 possible
outputs of this system. Explain why you think they are inputs and outputs.

2. In the mechanical system shown in Fig. 1.6, assume that the rod is massless, perfectly
rigid, and pivoted at point P . The mass of the small ball is m, the stiffness of the spring
is k, and the damping coefficient of the dashpot is c. The displacement x is measured

Figure 1.6: A massless mechanical lever

from the horizontal position, where the spring is undeformed. Initially, the system is
at rest and the spring is undeformed. Assuming that the displacement x is small, derive
the equation of motion governing the motion x and the initial conditions. Also, how
does the damping force from the dashpot depend on x?

3. Consider the system shown in Fig. 1.7, where the cylinder of radius r and mass m
is pulled through a massless spring with spring constant k and a massless dashpot
with damping coefficient c. Assume that the cylinder rotates freely about its axis and
that the input displacement u(t) is known. Initially, the spring is undeformed and the
cylinder is at rest. Derive the equation governing x that describes the motion of the
center of the cylinder. Also describe the initial conditions. Assume that the cylinder
undergoes pure rolling.

4. Figure 1.8 shows a rigid bar of length l, hinged at one end, free at the other, and
supported through a frictionless ring that is connected to a spring and dashpot system.
The spring has stiffness k and the dashpot has damping coefficient c. When θ = 0, the
spring and dashpot are undeformed. In addition, the spring and dashpot can only move
horizontally. The free end of the rod is loaded with a force P whose direction remains
vertical during the motion. Also, the moment of inertia of the rigid bar about the
pivot point is I, and the gravity can be ignored in this problem. Consider only small
angular position θ. Derive the equation of motion of the rigid rod. [Hint: sin θ ≈ θ
and cos θ ≈ 1.]

Figure 1.7: A cylinder pulled by a spring

Figure 1.8: Spring-loaded rigid bar with P



5. Find the complete solution of the following differential equation.


dy/dt + y = 6e^(−2t) cos 4t,   y(0) = 0.   (1.66)
6. Find the complete solution of the following differential equation.
d²y/dt² + 2 dy/dt + 5y = 3e^(−3t),   y(0) = 0,   dy(0)/dt = 0.   (1.67)
7. Find the complete solution of the following differential equation.
d²y/dt² + 6 dy/dt + 9y = 1,   y(0) = 0,   dy(0)/dt = 0.   (1.68)
8. A mechanical system is governed by the following differential equation.
d²y/dt² + 6 dy/dt + 9y = 4e^(−t)   (1.69)
(a) What is the homogeneous solution of this system?
(b) What is the particular solution of this system?

9. Solve the following ordinary differential equation


d³x/dt³ + 3 d²x/dt² + 2 dx/dt = 2   (1.70)
with initial conditions
x(0) = 1, ẋ(0) = 1, ẍ(0) = 0 (1.71)

10. Shakers are often used to excite structures in vibration testing; see Fig. 1.9. A shaker provides a prescribed displacement x(t) to the structure. Let's assume
that the structure can be simplified to a single degree-of-freedom system with mass m,
stiffness k, and damping coefficient c as shown in Fig. 1.9, where x(t) is the prescribed
displacement given by the shaker and y(t) is the displacement of the structure relative
to the shaker. (Therefore, the absolute displacement of the structure is x(t) + y(t).)
Of course, the shaker will deliver a force F (t) to the structure in order to achieve
the prescribed motion x(t). Note that the force F (t) is not prescribed and will vary
according to the structure’s response y(t). The purpose of this problem is to analyze
the motion of the structure and to determine the power needed by the shaker through
the following steps.

Figure 1.9: A shaker exciting a structure

(a) Draw a free-body diagram and formulate the equation of motion for y(t). Identify
the input.
(b) Derive the force F (t) in terms of x(t) and y(t).

11. Figure 1.10 shows a simplified model to simulate a recording head flying over a rough
disk surface in computer hard disk drives. The head has mass m and is supported by
a suspension with stiffness k1 . Moreover, the moving disk surface will generate an air
bearing lifting the head slightly above the disk surface (e.g., on the order of 20 nm).
The air bearing is simplified as a linear spring with stiffness k2 and damping coefficient
c. Let x(t) be the roughness of the disk surface and serve as the input excitation to
the head/suspension system. Moreover, y(t) is the relative displacement of the head
to the disk. In real hard disk drive applications, we want to keep y(t) almost constant,
so that the head can follow the disk surface to perform read/write operations. Derive
the equation of motion.

Figure 1.10: A recording head flying over a rough disk surface


Chapter 2

First-Order Systems

This chapter discusses the free and forced response of first-order systems. First-order systems
not only appear in many simplified system dynamics models, but also serve as building
blocks for more complicated system dynamics models. All first-order systems share one
common characteristic parameter known as the time constant. In Section 2.1, we
introduce the standard form of first-order equations, discuss their free response, and explain the
physical meaning of the time constant. In Section 2.2, we discuss forced responses of first-order
systems in terms of the unit step response function and the impulse response function. The impulse
response function is derived in two ways. One is to apply a physical law, such as the
impulse-momentum equation in particle dynamics. The other is to define the Dirac delta
function and solve the ordinary differential equation.

2.1 Standard Form and Time Constant

In this book, first-order equations take the following standard form

τ dx/dt + x = h(t)   (2.1)


Figure 2.1: A point mass sliding on viscous lubricant and its free-body diagram

where τ is the time constant and h(t) is the excitation. The first-order equation is also
subject to an initial condition

x(0) = x0   (2.2)
where x0 is a constant.

Example 2.1 There are many practical applications whose dynamics is governed by first-
order equations. Consider a rigid block with mass m sliding on a layer of viscous lubricant
with damping coefficient B as shown in Fig. 2.1(a). The rigid block is subjected to an
applied force F(t). Also, the velocity of the block is v(t). The arrows in Fig. 2.1(a) show
the positive directions of F (t) and v(t). Figure 2.1(b) shows the free-body diagram. The
mass is subjected to two forces: F (t) pointing to the right and the viscous damping force
Bv pointing to the left. Application of Newton’s second law gives
F(t) − Bv = m dv/dt   (2.3)
or
(m/B) dv/dt + v = (1/B) F(t)   (2.4)
Comparison of (2.4) with (2.1) shows that the time constant is τ = m/B.

Now let us consider the special case when h(t) = 0 in (2.1), i.e., we are looking for the
homogeneous solution or transient response. The solution of (2.1) and (2.2) is then
x(t) = x0 e^(−t/τ)   (2.5)

Figure 2.2: Free response of first-order systems

If τ > 0, the response x(t) decays exponentially. In contrast, if τ < 0, the response x(t)
grows exponentially. Let us focus on the case for τ > 0, because it is most often encountered
in practical applications.

Figure 2.2 shows the transient response (2.5) corresponding to three different time constants
τ. All three responses have the same initial condition x0. When t = τ, each response has
decayed to x0/e according to (2.5). Moreover, when τ is large, it takes more time to decay
to 1/e of the initial condition; when τ is small, it takes less time. Therefore, the time
constant τ dictates the decay rate and serves as a characteristic parameter of first-order systems.

Returning to Example 2.1, the time constant is τ = m/B. When the damping coefficient
B increases, the time constant decreases and it takes less time to decay to 1/e of the initial
velocity. This matches our physical intuition of damping.
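The role of the time constant is easy to confirm numerically. In the sketch below the values of x0 and τ are arbitrary samples, not from the text:

```python
import math

x0 = 5.0
for tau in (0.5, 2.0, 20.0):
    # From (2.5): at t = tau the free response has decayed to x0/e for any tau
    assert abs(x0 * math.exp(-tau / tau) - x0 / math.e) < 1e-12

# The time to decay to half the initial value is tau * ln 2
tau = 2.0
t_half = tau * math.log(2.0)
assert abs(x0 * math.exp(-t_half / tau) - x0 / 2) < 1e-12
print("decay checks passed")
```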

2.2 Forced Response

There are, in general, two ways to set a system into motion in practical applications. The
first way is to have non-zero initial conditions but no excitation h(t). Mathematically, the
response is the homogeneous solution of the governing ordinary differential equation of the
system. Since there is no external excitation h(t) involved, the response is often called free
response. In practical applications, systems often present some inherent damping causing
the free response to die out eventually. In this case, the free response is also called transient
response.

The second way is to have non-zero excitation h(t) regardless of the initial conditions.
Mathematically, the response is the sum of the homogeneous solution and the particular
solution. The homogeneous solution depends on the initial conditions to determine the
arbitrary coefficients. In contrast, the particular solution does not depend on the initial
conditions. In many applications, the homogeneous solution dies out eventually. In this
case, only the particular solution remains significant or dominant. Therefore, the particular
solution is often called forced response.

There are three types of forced response often used in system analysis: step response,
impulse response, and frequency response. Frequency response is extremely important, but
it also requires a lot more mathematical preparation. Therefore, we will discuss it in later
chapters of the book. For now, we will focus only on the step response and the impulse response.

2.2.1 Step Response

When the system is subjected to a unit excitation (i.e., h(t) = 1) and zero initial condition,
the response of the system is called unit step response function. In other words, unit step
response function satisfies
τ dx/dt + x = 1;   x(0) = 0   (2.6)

Solution of (2.6) is

xs(t) = 1 − e^(−t/τ)   (2.7)

Figure 2.3: Unit step response function of first-order systems

where the subscript s stands for step response. Note that the step response consists of two
parts. The first part is the particular solution 1, and the second part is the homogeneous
solution −e^(−t/τ). The homogeneous solution eventually dies out and the step response
approaches 1. Figure 2.3 shows a symbolic input-output relation of first-order systems via step
response. When the input is unity, i.e., h(t) = 1, the output of the system is the system’s
unit step response function.
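That (2.7) indeed satisfies (2.6) with zero initial condition can be confirmed symbolically; a minimal sketch with sympy (our tooling choice, not the text's):

```python
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)
xs = 1 - sp.exp(-t/tau)                  # unit step response (2.7)

# Substituting into (2.6): tau*xs' + xs should equal 1, and xs(0) should be 0
assert sp.simplify(tau*sp.diff(xs, t) + xs - 1) == 0
assert xs.subs(t, 0) == 0
print("step response verified")
```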

Example 2.2 A stone of mass m is initially at rest on the surface of the water pond as
shown in Fig. 2.4(a). The stone drops into the water, and accelerates downwards because of
the gravity. The viscous damping coefficient of the water is b. Determine the velocity of the
stone as a function of time.

Figure 2.4(b) shows the free-body diagram. The stone is subjected to two forces. One is
the gravity mg downwards. The other is the drag force from the water modeled as bv upward.
Let us assume that the positive directions of displacement x(t) and velocity v(t) ≡ ẋ(t) are
both downward. Application of Newton’s second law gives
mg − bv = m dv/dt   (2.8)
or
τ dv/dt + v = τg   (2.9)

Figure 2.4: A stone dropped into water

where τ ≡ m/b is the time constant. Moreover, the initial condition is v(0) = 0, because
the stone is initially at rest. Note that (2.9) is almost identical to (2.6), except that the
input excitation in (2.9) is τ g instead of unity. Since (2.9) and (2.6) are linear equations,
magnifying the input of (2.6) by a factor of τ g implies that the output of (2.6) will be
magnified by τ g as well. Therefore, the solution of (2.9) is

v(t) = τg xs(t) = τg (1 − e^(−t/τ))   (2.10)
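A quick numerical cross-check of (2.10): integrating (2.9) with a forward-Euler loop should reproduce the closed form. The mass, damping coefficient, and step size below are arbitrary sample values, not from the text:

```python
import math

m, b, g = 0.2, 0.5, 9.81       # sample parameters
tau = m / b
dt, v = 1e-5, 0.0
for _ in range(int(round(1.0 / dt))):   # integrate (2.9) to t = 1 s
    v += dt * (g - v / tau)             # dv/dt = g - v/tau
exact = tau * g * (1 - math.exp(-1.0 / tau))   # closed form (2.10) at t = 1 s
assert abs(v - exact) < 1e-3
print(v, exact)
```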

2.2.2 Impulse Response and Delta Function

When the input excitation h(t) takes the form of a unit impulse with zero initial conditions,
the response of the system is called an impulse response function. The impulse can be defined
through use of physical laws, such as impulse-momentum equation in particle dynamics. This
formulation is attractive because it has clear physical meaning. Alternatively, the impulse
can be defined mathematically in terms of the Dirac delta function. This formulation is mathematically
rigorous and is valid for a wide range of problems. Both are explained as follows.

Figure 2.5: A point mass sliding on viscous lubricant and its free-body diagram

Impulse Response via Physical Laws

Let us use an example in particle dynamics to demonstrate impulse response functions.

Example 2.3 Example 2.1 is revisited here for impulse response function; see Fig. 2.5(a).
The block has mass m = 2 kg, and the lubricant has viscous damping coefficient B = 0.1
Ns/m. The force F (t) takes the form of an impact with impulse I = 10 Ns. The system is
initially at rest. Calculate the velocity immediately after the impact, and the time required
for the block to reduce its speed to half.

Let t = 0− and t = 0+ be the moments immediately before and after the impact is applied,
respectively. The duration of the impact ∆t ≡ 0+ − 0− → 0 is infinitesimal. According
to the free-body diagram in Fig. 2.5(b), application of the impulse-momentum equation from
t = 0− to t = 0+ results in

∫_{0−}^{0+} F(t) dt − ∫_{0−}^{0+} Bv dt = m v(0+) − m v(0−)   (2.11)

The first term of (2.11) is the impulse generated by F(t), given by

∫_{0−}^{0+} F(t) dt = I   (2.12)

In addition,
∫_{0−}^{0+} Bv dt = 0   (2.13)

because the velocity v(t) is finite and the duration is infinitesimal. Finally,

v(0− ) = 0 (2.14)

because the initial velocity immediately before the impact is zero. Substitution of (2.12),
(2.13), and (2.14) into (2.11) results in

v(0+) = I/m = 10/2 = 5 m/s   (2.15)
After the impact is applied, F (t) = 0. According to (2.4), the equation of motion is

m dv/dt + Bv = 0   (2.16)
with initial condition v(0+ ) given in (2.15). According to (2.5), the solution of (2.16) with
initial condition (2.15) is
v(t) = (I/m) e^(−t/τ)   (2.17)
where the time constant τ is
τ = m/B = 2/0.1 = 20 s   (2.18)
The time required to reduce the speed to half satisfies

v(t) = (I/m) e^(−t/τ) = (1/2) · (I/m)   (2.19)
Hence
t = τ ln 2 = 20 · ln 2 = 13.9 s (2.20)
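The arithmetic in Example 2.3 is easy to verify directly:

```python
import math

m, B, I = 2.0, 0.1, 10.0       # values from Example 2.3
v0 = I / m                     # (2.15): velocity immediately after the impact
tau = m / B                    # (2.18): time constant
t_half = tau * math.log(2.0)   # (2.20): time to reduce the speed to half
assert v0 == 5.0 and tau == 20.0
print(round(t_half, 1), "s")   # 13.9 s
```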

This simple example, motivated by the impulse-momentum equation in particle dynamics,
demonstrates several important features of impulse response functions. First, the impulse
response undergoes a sudden change in magnitude; in this example, the velocity changes
suddenly. (In fact, this is the way initial conditions are imposed in many experiments.) The
impulse response then decays exponentially to zero. Second, when the impulse has unit
magnitude, the response is called an impulse response function.

Figure 2.6: Impulse response function of first-order systems

As shown in Fig. 2.6, the impulse I (input) results in an output response v(t) in (2.17).
Since the system is linear, a unit impulse input will give an impulse response function vδ(t)
given by

vδ(t) = (1/I) v(t) = (1/m) e^(−t/τ)   (2.21)
Also note from Fig. 2.6 that the unit impulse can be generated by a constant force of
magnitude 1/δ with a duration δ, where δ is infinitesimal.

Dirac Delta Function

A rigorous way to model an impulse mathematically is to use the Dirac delta function. Consider
the square pulse shown in Fig. 2.7. The pulse begins at t = a. Moreover, the duration of
the pulse is ε and the height of the pulse is 1/ε. Accordingly, the area is

∫_a^{a+ε} h(t) dt = ε · (1/ε) = 1   (2.22)

Figure 2.7: Definition of the Dirac delta function

When the duration ε approaches zero, the height of the pulse approaches infinity. In the
limit, the resulting pulse is defined as the Dirac delta function δ(t − a), where a defines the
location of the delta function. In fact, the delta function is not a "real" function in a
rigorous sense, because it looks like

δ(t − a) = ∞ if t = a, and 0 if t ≠ a   (2.23)

When a = 0, the delta function is often written as δ(t). The Dirac delta function has the
following two properties. The first property deals with the integral of the delta function.
∫_{t1}^{t2} δ(t − a) dt = 1 if t1 < a < t2, and 0 otherwise   (2.24)

This property indicates that integration of the delta function can either be 1 or zero de-
pending on the integration limits. If the integration limits enclose the delta function (i.e.,
t1 < a < t2 ), the integral is the area under the delta function, which is 1. Otherwise, the
integral is zero. The second property deals with the integral of the delta function δ(t − a)
with a continuous function g(t) as follows.
∫_{t1}^{t2} g(t) δ(t − a) dt = g(a) if t1 < a < t2, and 0 otherwise   (2.25)

Again, if the integration limits enclose the delta function (i.e., t1 < a < t2 ), the integral is
g(a). Otherwise, the integral is zero.
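The sifting property (2.25) can be illustrated numerically by replacing δ(t − a) with the finite square pulse of Fig. 2.7 and shrinking its width; g(t) = cos t and a = 0.7 below are arbitrary sample choices, not from the text:

```python
import math

def sift(g, a, eps, n=10_000):
    """Midpoint-rule integral of g(t) times a square pulse of width eps, height 1/eps."""
    dt = eps / n
    return sum(g(a + (i + 0.5)*dt) * (1.0/eps) * dt for i in range(n))

for eps in (1e-1, 1e-3, 1e-5):
    print(eps, sift(math.cos, 0.7, eps))   # approaches cos(0.7) as eps shrinks
```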

The Dirac delta function has wide applications in engineering. If h(t) in (2.22) represents
a force and t represents time, the Dirac delta function δ(t − a) models an impact force
with unit impulse occurring at t = a. If h(t) in (2.22) represents a force and t represents
a space variable, the Dirac delta function δ(t − a) models a concentrated force of unit
magnitude located at t = a.

Impulse Response Function via Dirac Delta Function

When the input takes the form of the Dirac delta function with zero initial condition, the out-
put is the impulse response function by definition. According to (2.1), the impulse response
function satisfies
τ dx/dt + x = δ(t)   (2.26)
where τ is the time constant. In addition, the initial condition is

x(0− ) = 0 (2.27)

where 0− denotes the instant immediately before the delta function δ(t) is applied. Since
the delta function appears at t = 0, the response of (2.26) needs to be found in a two-step
process. The first step is to find the response from t = 0− to t = 0+ , where t = 0+ is the
instant immediately after the delta function δ(t) is applied. An integration of (2.26) with
respect to t from t = 0− to t = 0+ results in
Z 0+ Z 0+

 + 
τ x(0 ) − x(0 ) + xdt = δ(t) = 1 (2.28)
0− 0−

Since the impulse response function of interest is bounded,


∫_{0−}^{0+} x dt = 0   (2.29)

Substitution of (2.27) and (2.29) into (2.28) leads to


x(0+) = 1/τ   (2.30)
The second step is to determine the response when t > 0+ . In this case, (2.26) becomes
τ dx/dt + x = 0   (2.31)

Figure 2.8: Energy balance of an engine

subjected to the initial condition in (2.30). Therefore, the impulse response function is
xδ(t) = (1/τ) e^(−t/τ)   (2.32)

2.3 Practice Problems


1. Consider an RC circuit whose voltage is governed by

RC dv/dt + v = V0(t)   (2.33)
where R is resistance in Ω (Ohm), C is capacitance in F (Farad), and V0 (t) is an
externally applied voltage in V (Volt).

(a) What is the time constant of the circuit?


(b) If V0(t) = 0 and the initial condition is v(0) = 1 V, how much time does it take
for the voltage to drop to 0.25 V?
(c) If R = 1 × 106 Ω, C = 1 × 10−6 F, and V0 = 2e−t , determine the response of v(t)
if the initial condition is v(0) = 1.

2. In this problem, we try to model how the temperature of an engine varies with the
surrounding environment. Figure 2.8 shows the energy balance of the engine. At any

instant t and an infinitesimal time duration dt, T is the temperature of the engine, T∞
is the constant temperature of the surrounding, and h(t) is the rate of heat rejected
by the engine, i.e., rate of heat generated by the fuel less the output work. Assume
that the heat transfer is dominated by convection, and the rate of heat dissipated to
the surrounding will be k(T − T∞ ), where k is a heat transfer coefficient. Therefore,
the thermal energy left in the engine will be h(t)dt − k(T − T∞ )dt during time dt.
This thermal energy is absorbed by the engine in the form of cdT , where c is the
heat capacity and dT is the temperature increase of the engine. Answer the following
questions.

(a) Introduce the relative temperature

θ = T − T∞ (2.34)

Show that the equation governing the engine temperature is



c dθ/dt + kθ = h(t)   (2.35)
What is the time constant of this engine?
(b) During a warmup process, in which h(t) is a constant that can be controlled by
the driver through the driver’s pedal. Initially, the temperature of the engine is
T∞ . Determine and plot the relative temperature θ(t) as a function of time by
solving (2.35). If you were to design this engine, what could you do to reduce the
time for warmup? If you were the driver, what could you do to reduce the time
for warmup?
(c) Suppose you are cruising on a freeway using this engine, with h0 being the heat rejected
by the engine. At steady state, what will be the temperature of the engine? If,
suddenly, the thermostat is out of work, and the heat transfer coefficient reduces
from k to 0.1k. Determine and plot the temperature as a function of time, and
what is the final temperature of the engine? If you were to design this engine,
what could you do to make the overheat condition less severe? If you were the
driver, what could you do?
(d) Finally, you get home and the engine has an initial temperature T0 . Find the
temperature of the engine as a function of time. If you were to design this engine,
what could you do to speed up the cooling process?

Figure 2.9: A first-order electrical system
Figure 2.10: Excitation source

Figure 2.11: RC circuit to model piezoelectric material

3. Consider the electrical system shown in Fig. 2.9. The resistance R = 10 kΩ, and the
capacitance C = 0.1 µF. The equation governing the voltage vC across the capacitor is

τ dvC/dt + vC = Vs(t)   (2.36)
where Vs (t) is the source voltage and τ = RC is the time constant. Consider the
square-wave excitation voltage Vs (t) shown in Fig. 2.10.

(a) If T = 10 ms, draw the response of vC . Please explain why you get the result.
(b) If the function generator is switched to T = 1 ms, how would this change the response
of vC for 0 < t < 2T ? Please explain the response qualitatively.

4. Figure 2.11 shows an RC circuit commonly used to model piezoelectric materials.


(Piezoelectric materials are materials that can transform mechanical deformation to
electric charge.) The circuit is driven by a current source i(t) modeling the mechanical
deformation of the piezoelectric material. The voltage v(t) generated by the material

Figure 2.12: A mechanical filter

satisfies the following equation


C dv/dt + v/R = i(t)   (2.37)
where C is capacitance of the piezoelectric material in F (Farad) and R is resistance
of the amplifier circuit in Ω (Ohm).

(a) What is the time constant of the circuit?


(b) Initially, the circuit has no current and v(0− ) = 0. At t = 0, the piezoelectric
material is pulled suddenly generating a current surge modeled as i(t) = δ(t),
where δ(t) is the Dirac delta function. What is the voltage v(0+ ) immediately
after the current surge? What is the subsequent voltage response v(t)?
(c) If R = 1 × 106 Ω, C = 1 × 10−6 F, and i(t) = 10−5 A, determine the response of
v(t) if the initial condition is v(0) = 0.

5. Consider the mechanical filter as shown in Fig. 2.12. The mechanical filter consists of
a cart of mass m, a spring of stiffness k, and a dashpot of damping c. The input of the
filter is the motion of the cart x(t) (not the force acting on the cart), and the output
is the displacement y(t) of the hinge connecting the spring and the dashpot.

(a) Apply equilibrium condition at the hinge to derive the input-output relation

T ẏ + y = x(t) (2.38)

Figure 2.13: A damped rotor under torque excitation T

where
T = c/k   (2.39)
(b) What is the physical meaning of T ?
(c) Find the impulse response function of the system by letting x(t) = δ(t) and solving
the differential equation (2.38).

6. Figure 2.13 shows a rotor with polar moment of inertia J. The rotor is supported by
two bearings with viscous damping coefficients B. In other words, the damping torque
produced by the bearings is −Bω, where ω is the angular velocity of the rotor. Answer
the following questions.

(a) A motor (not shown in Fig. 2.13) applies a constant torque T to the rotor at
t = 0. Determine the angular velocity of the rotor as a function of time. Is this
a step response or an impulse response? What is the steady-state velocity of the
rotor?
(b) After the rotor reaches the steady state, the motor is turned off. Determine the
angular velocity of the rotor as a function of time.
(c) Recall that dθ/dt = ω, where θ is the angular displacement. How many cycles
will the rotor rotate before the rotor stops?
(d) After the rotor stops, the motor goes through an electrical surge and produces an
angular impulse h. What is the angular velocity of the rotor immediately after
the surge? What is the time history of the angular velocity afterwards?

(e) If the polar moment of inertia is doubled, how would that change the solutions in
parts (a) and (c)?
Chapter 3

Second-Order Systems

This chapter discusses the free and forced response of second-order systems. Second-order
systems not only appear in many simplified system dynamics models, but also serve
as building blocks for more complicated system dynamics models. All second-order systems
share two common characteristic parameters known as the natural frequency and the damping
ratio. In Section 3.1, we will study the standard form of second-order systems with proper
initial conditions. Section 3.2 deals with the homogeneous solution of second-order systems.
Depending on the value of the damping ratio, one can classify a second-order system as
underdamped, critically damped, or overdamped. Section 3.3 will present the step and
impulse responses of second-order systems.

3.1 Formulation of Second-Order Systems

Consider a spring-mass-damper system as shown in Fig. 3.1(a). The rigid block has mass m,
the dashpot has viscous damping coefficient c, and the linear spring has spring constant k.
In addition, the system is subjected to an external load f (t). Let x(t) be the displacement
away from the unstretched position of the spring. At t = 0, the block is subjected to an
initial displacement x0 and initial velocity v0 .


Figure 3.1b shows the free-body diagram, when the block undergoes a positive displace-
ment, positive velocity, and positive acceleration. The spring is stretched, so a restoring force
kx is pointing to the left. The damping force cẋ is opposite to the direction of motion, so it
is pointing to the left. Sum of forces results in

−kx − cẋ + f (t) = mẍ (3.1)

Rearrangement of the terms results in

mẍ + cẋ + kx = f (t) (3.2)

with initial conditions


x(0) = x0 , ẋ(0) = v0 (3.3)

Again, there are two ways to excite a second-order system. One way is to have non-zero
initial conditions with f (t) = 0. The response in this case is the homogeneous solution. The
other way is to have zero initial conditions with f (t) ≠ 0. The response will then be the
particular solution. These solutions are explained in the following sections.


Figure 3.1: A spring-mass-damper system

3.2 Homogeneous Solutions

The homogeneous equation takes the form of

mẍ + cẋ + kx = 0 (3.4)

with initial conditions


x(0) = x0 , ẋ(0) = v0 (3.5)
The homogeneous solution takes the form of

x(t) = Ceλt (3.6)

where C is an arbitrary constant and λ is the characteristic value. Substitution of (3.6) into
(3.4) results in the characteristic equation

mλ2 + cλ + k = 0 (3.7)

The two characteristic roots are


λ1,2 = −c/(2m) ± √[(c/(2m))² − k/m]    (3.8)
Note that the characteristic values only depend on two parameters c/(2m) and k/m. So let
us first discuss these two parameters in detail as follows.

Physical Meaning of k/m

To figure out the physical meaning of k/m, let us first find the unit of k/m. The unit of
k is N/m or kg/s². The unit of m is kg. Therefore, the unit of k/m is 1/s², which is the
square of a frequency unit. Therefore, we can define
ωn ≡ √(k/m)    (3.9)

where ωn has the unit of frequency and is called the natural frequency. To understand the
physical meaning of ωn , let us consider the case when damping is absent, i.e., c = 0. From
(3.8), the two characteristic roots are

λ1,2 = ±jωn (3.10)

According to (1.21),
x(t) = d1 cos ωn t + d2 sin ωn t (3.11)

In addition, x(t) has to satisfy the initial conditions in (3.5), i.e.,

x(0) = d1 = x0 (3.12)

and
ẋ(0) = ωn [−d1 sin ωn t + d2 cos ωn t]t=0 = ωn d2 = v0    (3.13)

Hence
d2 = v0 /ωn    (3.14)
Substitution of (3.12) and (3.14) back to (3.11) results in
x(t) = x0 cos ωn t + (v0 /ωn ) sin ωn t    (3.15)

The response of x(t) is shown in Fig. 3.2. There are several things worth noting. First, the
system oscillates sinusoidally in response to initial displacement and velocity, when there is
no damping and no external excitation. Second, the sinusoidal response has frequency ωn .
This is why ωn is called natural frequency.
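As a numerical cross-check of (3.15) (not part of the original text), the closed-form solution can be compared against direct numerical integration of mẍ + kx = 0. The values m = 2 kg, k = 8 N/m, x0 = 0.5 m, and v0 = 1 m/s are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 2.0, 8.0           # illustrative values (assumed, not from the text)
x0, v0 = 0.5, 1.0         # initial displacement (m) and velocity (m/s)
wn = np.sqrt(k / m)       # natural frequency, Eq. (3.9); here 2 rad/s

t = np.linspace(0.0, 10.0, 1001)

# Closed-form undamped free response, Eq. (3.15)
x_exact = x0 * np.cos(wn * t) + (v0 / wn) * np.sin(wn * t)

# Direct numerical integration of m*x'' + k*x = 0 as a first-order system
sol = solve_ivp(lambda tt, y: [y[1], -(k / m) * y[0]],
                (0.0, 10.0), [x0, v0], t_eval=t, rtol=1e-10, atol=1e-12)

err = np.max(np.abs(sol.y[0] - x_exact))
print(err)  # the two solutions agree to within integration tolerance
```

The numerical solution tracks the pure sinusoid at frequency ωn with neither growth nor decay, as expected when c = 0.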
Figure 3.2: Undamped free response with natural frequency ωn

Physical Meaning of c/(2m)

The parameter c/2m is obviously a measure of damping, because its numerator results from
damping coefficient c. This parameter, however, is not the most convenient parameter to
describe damping, because it is not dimensionless. Note that the unit of c is Ns/m or
kg/s. The unit of m is kg. Therefore, the unit of c/(2m) is 1/s, which is the unit of frequency.
To obtain a dimensionless damping parameter, it is customary to define the viscous damping
factor ζ as
c/(2m) ≡ ζωn , or ζ ≡ c/(2mωn )    (3.16)
Also note that ζ is always positive, because c, m, and ωn are positive.

The advantage of defining ζ is twofold. First, the presence of ζ significantly simplifies


the characteristic roots. Substitution of (3.9) and (3.16) into (3.8) results in
λ1,2 = (−ζ ± √(ζ² − 1)) ωn    (3.17)

Figure 3.3 shows how the characteristic roots λ1,2 vary with respect to ζ. When ζ = 0, the
system is called undamped. The two characteristic roots are pure imaginary in the form of
complex conjugates as shown in (3.10). When 0 < ζ < 1, the system is called underdamped.
The two characteristic roots are complex conjugates. When ζ = 1, the system is called
critically damped. The two characteristic roots are real and repeated. When ζ > 1, the
system is called overdamped. The two characteristic roots are real and distinct. This shows
the second advantage of using ζ. The definition of ζ results in a very simple expression
Figure 3.3: Loci of the characteristic roots λ1,2 with respect to ζ

to distinguish the underdamped, critically damped, and overdamped systems. Of course,


the response of these three systems is very different, because their characteristic roots are
different. We shall study the response of these systems in detail in subsequent sections.
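The classification above can be verified numerically by computing the roots of the characteristic equation (3.7), written as λ² + 2ζωn λ + ωn² = 0 after dividing by m and using (3.9) and (3.16). The sketch below uses an assumed ωn = 2 rad/s:

```python
import numpy as np

wn = 2.0  # assumed natural frequency for illustration (rad/s)

def char_roots(zeta, wn):
    """Roots of lambda^2 + 2*zeta*wn*lambda + wn^2 = 0, cf. Eq. (3.17)."""
    return np.roots([1.0, 2.0 * zeta * wn, wn**2])

under = char_roots(0.5, wn)  # 0 < zeta < 1: complex conjugate pair
crit = char_roots(1.0, wn)   # zeta = 1: real, repeated root at -wn
over = char_roots(2.0, wn)   # zeta > 1: real, distinct roots

print(under, crit, over)
```

The underdamped roots have real part −ζωn and imaginary parts ±ωn √(1 − ζ²), matching the loci sketched in Fig. 3.3.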

Example 3.1 Figure 3.4(a) shows a rigid ball with mass m constrained by two taut elastic
strings. Each string has length l and is initially stretched with an internal tension T .
Assume that the motion is small. Also, the motion takes place in the horizontal plane, and
gravity need not be considered. Derive the equation of motion and find the natural frequency
of the system.

Figure 3.4(b) shows the free-body diagram of the ball. The ball undergoes a displace-
ment x(t) away from the equilibrium position. Moreover, x(t) is positive upwards. Since the
strings are taut, each string forms an angle θ with the equilibrium position, such that
tan θ = x(t)/l    (3.18)
When x(t) is small compared with l, θ is small resulting in
sin θ ≈ tan θ = x(t)/l, and cos θ ≈ 1    (3.19)

Figure 3.4: Motion of a mass with two taut strings

The summation of the forces in the upward direction and use of Newton’s second law leads
to
−2T sin θ = mẍ (3.20)
Substitution of (3.19) into (3.20) gives
mẍ + (2T /l) x = 0    (3.21)
Comparison of (3.21) with (3.4) shows that the mass coefficient is m and the stiffness coef-
ficient is 2T /l. According to (3.9), the natural frequency ωn is
ωn = √(2T /(ml))    (3.22)
In addition, ζ = 0 because no damping is present. Therefore, the free response of the system
will be an undamped sinusoidal vibration with natural frequency ωn given in (3.22).

Example 3.2 Figure 3.5(a) shows a uniform rigid bar with mass m and length l. In addition,
the rigid bar is hinged at O and is supported by a damper and a spring. The damper has
damping coefficient c and is distance a from the hinged point O. The linear spring has spring
constant k and is distance b from the hinged point O. The motion occurs in the horizontal
plane, so the gravity need not be considered. Assume that the motion is small, and the
spring and damper do not change their orientation during the motion. Derive the natural
frequency ωn and viscous damping factor ζ. Determine the condition on c so that the system
is underdamped.

Figure 3.5: Motion of a uniform rigid bar with a spring and a damper

Figure 3.5(b) shows the free-body diagram, when the bar experiences a positive angular
displacement θ. When θ is small, the elongation of the spring is approximately bθ and the
restoring force is approximately kbθ. Similarly, when θ is small, the velocity at the damper
is aθ̇ and the damping force is caθ̇. In addition, the reactions at the hinge point are Ox and
Oy . The sum of moments about the hinge point O results in Σ MO = IO θ̈, i.e.,

−(caθ̇) a − (kbθ) b = (ml²/3) θ̈    (3.23)
Rearranging terms in (3.23) leads to
(ml²/3) θ̈ + ca² θ̇ + kb² θ = 0    (3.24)
Comparison of (3.24) with (3.4) shows that the mass coefficient is ml2 /3, the damping
coefficient is ca2 , and the stiffness coefficient is kb2 . According to (3.9), the natural frequency
ωn is

ωn = √(kb²/(ml²/3)) = √(3kb²/(ml²))    (3.25)
According to (3.16), the viscous damping factor is
ζ = ca²/[2(ml²/3)ωn ] = 3ca²/(2ml²ωn )    (3.26)
Hence, the system is underdamped when ζ < 1 or
c < 2ml²ωn /(3a²)    (3.27)
Figure 3.6: Homogeneous solution of underdamped systems

3.2.1 Underdamped Systems

For underdamped systems, ζ < 1. Therefore, the characteristic roots in (3.17) become
λ1,2 = −ζωn ± jωn √(1 − ζ²) ≡ −ζωn ± jωd    (3.28)

where ωd is called damped natural frequency and is defined as


ωd ≡ ωn √(1 − ζ²)    (3.29)

According to (1.21), the homogeneous solution of underdamped second-order systems is

x(t) = e−ζωn t (d1 cos ωd t + d2 sin ωd t) (3.30)

where d1 and d2 are arbitrary coefficients. Figure 3.6 shows the homogeneous solution in
(3.30). Note that the response is sinusoidal with exponentially decaying amplitude. The
oscillation with damped natural frequency ωd results from d1 cos ωd t + d2 sin ωd t in (3.30).
The exponentially decaying envelope results from e−ζωn t in (3.30).

Example 3.3 Determine the homogeneous solution of underdamped systems subjected to


initial conditions x(0) = x0 and ẋ(0) = 0. In this case, the arbitrary coefficients d1 and d2

in (3.30) can be determined as follows. Substitution of t = 0 into (3.30) gives

x(0) = d1 = x0 (3.31)

To impose the zero initial velocity, we need to calculate the velocity as follows.

ẋ(t) = −ζωn e−ζωn t (d1 cos ωd t + d2 sin ωd t) (3.32)


+ e−ζωn t (−d1 sin ωd t + d2 cos ωd t)

Evaluating (3.32) at t = 0 gives


ẋ(0) = −ζωn d1 + d2 ωd = 0 (3.33)
Substitution of (3.31) and (3.29) into (3.33) gives
d2 = ζx0 /√(1 − ζ²)    (3.34)

Hence the homogeneous solution becomes


x(t) = x0 e−ζωn t [cos ωd t + (ζ/√(1 − ζ²)) sin ωd t]    (3.35)

Example 3.4 Figure 3.7 shows a door swinging about its hinged axis. The door has a
moment of inertia I = 4 kgm2 about the hinged axis. The door is subjected to a torsional
spring and a torsional damper at the hinge axis. The spring has a torsional stiffness k = 100
Nm, and the damper has a torsional damping coefficient b. With the angular displacement θ
in Fig. 3.7, the torsional spring gives a restoring moment kθ and the torsional damper gives
a damping moment bθ̇. When the door is subjected to an initial displacement with no initial
velocity, one design criterion requires that the door reduce its response to less than 30% of
its initial displacement in one cycle of oscillation. Determine the value of b to satisfy the
design criterion.

According to the free-body diagram in Fig. 3.7, use of Newton’s second law results in

−kθ − bθ̇ = I θ̈ (3.36)




Figure 3.7: An amazing door problem

or
I θ̈ + bθ̇ + kθ = 0 (3.37)
In addition, the initial conditions are

θ(0) = θ0 , θ̇(0) = 0 (3.38)

According to (3.37), the natural frequency is


ωn = √(k/I) = √(100/4) = 5 rad/s    (3.39)
and the viscous damping factor is
ζ = b/(2Iωn ) = b/(2 · 4 · 5) = b/40    (3.40)

According to (3.35), the solution of (3.37) with (3.38) is


θ(t) = θ0 e−ζωn t [cos ωd t + (ζ/√(1 − ζ²)) sin ωd t]    (3.41)

After one cycle of oscillation, t = 2π/ωd . From (3.41),


 
θ(2π/ωd ) = θ0 e−2πζωn /ωd = θ0 e−2πζ/√(1−ζ²) < 0.3θ0    (3.42)
Solving ζ from (3.42) gives
ζ > 0.188 (3.43)
Comparison of (3.40) and (3.43) gives

b > 40 · 0.1882 = 7.528 kgm²/s    (3.44)
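The boundary value of ζ in (3.43) can also be obtained in closed form instead of reading it off (3.42): taking the natural logarithm of e−2πζ/√(1−ζ²) = 0.3 and solving for ζ gives ζ = r/√(1 + r²), where r = ln(1/0.3)/(2π). A sketch of this computation with the example's values:

```python
import math

I_door, k = 4.0, 100.0        # kg m^2 and N m, from the example
wn = math.sqrt(k / I_door)    # 5 rad/s, Eq. (3.39)

# Boundary of the design criterion e^(-2*pi*zeta/sqrt(1 - zeta^2)) = 0.3
r = math.log(1.0 / 0.3) / (2.0 * math.pi)
zeta_min = r / math.sqrt(1.0 + r**2)

b_min = 2.0 * I_door * wn * zeta_min  # invert Eq. (3.40): b = 2*I*wn*zeta
print(zeta_min, b_min)                # about 0.188 and 7.53 kg m^2/s
```

This reproduces the bound b > 7.528 kgm²/s in (3.44).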

3.2.2 Overdamped Systems

For overdamped systems, ζ > 1. Therefore, the characteristic roots λ1 and λ2 in (3.17) are
real and distinct. The homogeneous solution takes the form of

x(t) = C1 eλ1 t + C2 eλ2 t (3.45)

where C1 and C2 are arbitrary constants to be determined from the initial conditions.

It is often mistakenly assumed that an overdamped system decays faster because it has
larger damping. This is not entirely true. Note that λ1 λ2 = ωn². When one characteristic
root λ2 is very negative, the other characteristic root λ1 will approach zero. As a result,
the response of the overdamped system depends severely on the initial conditions. If the
initial conditions are such that only λ1 is excited, the homogeneous solution of the
overdamped system will take a long time to die out, because λ1 is very close to zero.

Example 3.5 Consider the swinging door problem in Example 3.4. What would be the
range of damping coefficient b, if the door is overdamped and has at most a time constant of
5 seconds? In addition, what are the worst initial conditions to close the door (i.e., taking
the longest time to close the door)?
Since the door is overdamped, the two characteristic roots λ1 ≡ (−ζ + √(ζ² − 1))ωn
and λ2 ≡ (−ζ − √(ζ² − 1))ωn are real and negative; see Fig. 3.8. Therefore, the time

Figure 3.8: Characteristic roots for the overdamped door

constants for λ1 and λ2 are τ1 = −1/λ1 and τ2 = −1/λ2 , respectively. Note that λ2 is
more negative than λ1 ; therefore, τ1 > τ2 . In other words, the requirement of the time
constant being less than 5 seconds will be imposed on τ1 , i.e.,
τ1 ≡ −1/λ1 = 1/[(ζ − √(ζ² − 1))ωn ] < 5    (3.46)

Substitution of (3.39) into (3.46) gives


ζ − √(ζ² − 1) > 1/(5ωn ) = 0.04    (3.47)
To find the range of ζ satisfying (3.47), let us use a trial-and-error approach as follows.
Table 3.1 tabulates ζ − √(ζ² − 1) with respect to ζ. When ζ increases, ζ − √(ζ² − 1)
decreases. When ζ = 12.5, ζ − √(ζ² − 1) = 0.0401. Therefore, the solution of (3.47) gives

1 < ζ < 12.5 (3.48)


Note that ζ > 1 is part of the solution because the system is overdamped and √(ζ² − 1) is
real. Substitution of (3.40) into (3.48) results in

40 < b < 500 (3.49)

The unit for b is kgm2 /s.



Table 3.1: Trial-and-error evaluation of ζ − √(ζ² − 1)

ζ       ζ − √(ζ² − 1)
2.0     0.2679
3.0     0.1716
10.0    0.0561
12.0    0.0417
12.5    0.0401
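The trial-and-error search in Table 3.1 can also be bypassed. Since ζ − √(ζ² − 1) = 1/(ζ + √(ζ² − 1)), setting ζ − √(ζ² − 1) = x and inverting gives ζ = (x + 1/x)/2 exactly. A sketch of this shortcut for x = 0.04:

```python
import math

x = 0.04                        # required value of zeta - sqrt(zeta^2 - 1)
zeta_max = (x + 1.0 / x) / 2.0  # exact inversion: zeta = (x + 1/x)/2

# Substitute back as a sanity check
back = zeta_max - math.sqrt(zeta_max**2 - 1.0)
print(zeta_max, back)           # 12.52 and 0.04
```

The exact bound ζ < 12.52 agrees with the tabulated value of about 12.5.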

To identify the worst-case initial conditions, we note from (3.45) that the response
C2 eλ2 t will die out much faster than C1 eλ1 t , because λ1 > λ2 . Therefore, the worst case
occurs when C1 ≠ 0 but C2 = 0. In this case, the solution from (3.45) becomes

θ(t) = C1 eλ1 t (3.50)

Therefore, the corresponding initial conditions are

θ(0) = C1 (3.51)

and
θ̇(0) = [C1 λ1 eλ1 t ]t=0 = C1 λ1 = −C1 /τ1 = −C1 /5    (3.52)
According to (3.51) and (3.52), the worst case occurs when the initial conditions θ(0) and
θ̇(0) satisfy θ(0) + 5θ̇(0) = 0.

3.2.3 Critically Damped Systems

When ζ = 1, the two characteristic roots from (3.17) are

λ1 = λ2 = −ωn (3.53)

The solution is then


x(t) = (C1 + tC2 ) e−ωn t (3.54)
where C1 and C2 are arbitrary constants.
Figure 3.9: Response of a critically damped system

Unlike an overdamped system, a critically damped system has no slowly decaying
characteristic root: both roots equal −ωn . Therefore, a critically damped system tends to
return to its equilibrium position faster than an overdamped system, when the systems are
subjected to nontrivial initial conditions. Nevertheless, the critically damped system may
have a larger overshoot than the overdamped system.
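This comparison can be illustrated numerically from the closed-form free responses (3.45) and (3.54). The snippet below (illustrative, with an assumed ωn = 5 rad/s and initial conditions x(0) = 1, ẋ(0) = 0) measures how long each response takes to stay within 2% of equilibrium:

```python
import numpy as np

wn = 5.0  # assumed natural frequency (rad/s)
t = np.linspace(0.0, 8.0, 80001)

# Critically damped free response (zeta = 1), x(0) = 1, x'(0) = 0, Eq. (3.54)
x_crit = (1.0 + wn * t) * np.exp(-wn * t)

# Overdamped free response (zeta = 2.5), same initial conditions, Eq. (3.45)
zeta = 2.5
lam1 = (-zeta + np.sqrt(zeta**2 - 1.0)) * wn
lam2 = (-zeta - np.sqrt(zeta**2 - 1.0)) * wn
C1 = lam2 / (lam2 - lam1)          # from x(0) = 1, x'(0) = 0
C2 = 1.0 - C1
x_over = C1 * np.exp(lam1 * t) + C2 * np.exp(lam2 * t)

def settling_time(x, t, tol=0.02):
    """Last time at which |x| still exceeds tol."""
    return t[np.where(np.abs(x) > tol)[0][-1]]

t_crit = settling_time(x_crit, t)
t_over = settling_time(x_over, t)
print(t_crit, t_over)  # the critically damped response settles first
```

For these values the critically damped response settles in roughly 1.2 s, while the overdamped response needs nearly 3.8 s because of its slowly decaying root λ1.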

3.3 Forced Response

For forced response, we will only consider step response and impulse response in this chapter.
We will defer the frequency response in a later chapter.

3.3.1 Step Response

When the external excitations take the form of constant loadings (i.e., step loading), the
response is called step response. As before, the step response consists of a homogeneous
solution and a particular solution. The homogeneous solution will depend on whether the

Figure 3.10: A swinging door subjected to a constant load

system is overdamped, underdamped, or critically damped. In contrast, the particular
solution does not and will be a constant. Let us use the following example to demonstrate step
response of second-order systems.

Example 3.6 Figure 3.7 shows a door swinging about its hinged axis. The door has a
moment of inertia I = 4 kgm2 about the hinged axis. The door is supported by a torsional
spring and a torsional damper at the hinge axis. The spring has a torsional stiffness k = 100
Nm, and the damper has a torsional damping coefficient b. With the angular displacement θ
in Fig. 3.7, the torsional spring gives a restoring moment kθ and the torsional damper gives
a damping moment bθ̇. In addition, the door is subjected to a constant force F = 100 N
acting on the door knob. Also, the distance between the knob and the hinge axis is d = 1
m. Assume that the initial conditions of the door are all zero. Find the response of the door
and how wide the door will open in the end, when

1. b = 100 kgm2 /s, and

2. b = 4 kgm2 /s.

According to the free-body diagram in Fig. 3.10, use of Newton’s second law results in

−kθ − bθ̇ + F d = I θ̈ (3.55)

or
I θ̈ + bθ̇ + kθ = F d (3.56)
where F and d are constant. In addition, the initial conditions are

θ(0) = 0, θ̇(0) = 0 (3.57)

According to (3.37), the natural frequency is


ωn = √(k/I) = √(100/4) = 5 rad/s    (3.58)
Also note that both cases will have the same particular solution. To find the particular
solution, one can use the method of undetermined coefficients in Table 1.1 and assume that

θp (t) = K0 (3.59)

where K0 is the constant to be determined. Substitution of (3.59) into (3.56) gives

I · 0 + b · 0 + kK0 = F d (3.60)

Hence
θp (t) = K0 = F d/k    (3.61)

1. For this case, the viscous damping factor is


ζ = b/(2Iωn ) = 100/(2 · 4 · 5) = 2.5    (3.62)
Therefore, the door is overdamped. Substitution of (3.58) and (3.62) into (3.17) gives
the two characteristic roots

λ1 = −1.0436, λ2 = −23.9564 (3.63)



and the homogeneous solution is

θh (t) = C1 eλ1 t + C2 eλ2 t    (3.64)

With (3.64) and (3.61), the complete solution is

θ(t) = θh (t) + θp (t) = C1 eλ1 t + C2 eλ2 t + F d/k    (3.65)
The complete solution also needs to satisfy initial conditions (3.57), i.e.,

θ(0) = C1 + C2 + F d/k = 0    (3.66)
and
θ̇(0) = [C1 λ1 eλ1 t + C2 λ2 eλ2 t ]t=0 = C1 λ1 + C2 λ2 = 0    (3.67)

Solution of (3.66) and (3.67) results in

C1 = −[λ2 /(λ2 − λ1 )] · (F d/k)    (3.68)

and
C2 = [λ1 /(λ2 − λ1 )] · (F d/k)    (3.69)
Substitution of (3.68) and (3.69) into (3.65) yields
 
θ(t) = (F d/k) [−(λ2 /(λ2 − λ1 )) eλ1 t + (λ1 /(λ2 − λ1 )) eλ2 t + 1]    (3.70)

With the numerical values given, (3.70) becomes

θ(t) = −1.0455e−1.0436t + 0.0455e−23.9564t + 1 (3.71)

Figure 3.11 symbolically shows the response of θ(t) in (3.71). There are several issues
to note. First, the response approaches the steady-state value θmax exponentially, where
θmax is 1 rad (≈ 57.30°). When θ = θmax , the moment from the external load F and the
restoring moment kθ of the spring are equal in magnitude and opposite in direction.
Therefore, they are in equilibrium.
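The coefficients in (3.71) can be reproduced directly from (3.62), (3.17), and (3.68)-(3.69). A sketch of the computation:

```python
import numpy as np

I_door, k, b = 4.0, 100.0, 100.0   # kg m^2, N m, kg m^2/s (Case 1)
F, d = 100.0, 1.0                  # N and m
wn = np.sqrt(k / I_door)           # 5 rad/s
zeta = b / (2.0 * I_door * wn)     # 2.5, Eq. (3.62)

lam1 = (-zeta + np.sqrt(zeta**2 - 1.0)) * wn  # Eq. (3.17)
lam2 = (-zeta - np.sqrt(zeta**2 - 1.0)) * wn
theta_p = F * d / k                # particular solution, 1 rad

C1 = -lam2 / (lam2 - lam1) * theta_p  # Eq. (3.68)
C2 = lam1 / (lam2 - lam1) * theta_p   # Eq. (3.69)

t = np.linspace(0.0, 10.0, 1001)
theta = C1 * np.exp(lam1 * t) + C2 * np.exp(lam2 * t) + theta_p
print(C1, C2, theta[0], theta[-1])
```

The printed coefficients recover −1.0455 and 0.0455 from (3.71), the response starts from rest, and it approaches the steady state F d/k = 1 rad.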

Figure 3.11: Step response of an overdamped swinging door

2. For this case, the viscous damping factor is


ζ = b/(2Iωn ) = 4/(2 · 4 · 5) = 0.1    (3.72)
Hence, the door is underdamped and its damped natural frequency is
ωd = √(1 − (0.1)²) (5) = 4.9749 rad/s    (3.73)

According to (3.30) and (3.61), the complete solution is


θ(t) = e−ζωn t (d1 cos ωd t + d2 sin ωd t) + F d/k    (3.74)
Application of initial condition (3.57) gives
θ(0) = d1 + F d/k = 0    (3.75)
and

θ̇(0) = {−ζωn e−ζωn t [d1 cos ωd t + d2 sin ωd t]
+ e−ζωn t ωd [−d1 sin ωd t + d2 cos ωd t]}t=0
= −ζωn d1 + d2 ωd = 0    (3.76)

Figure 3.12: Step response of an underdamped swinging door

Solution of (3.75) and (3.76) gives

d1 = −F d/k    (3.77)
and
d2 = (ζωn /ωd ) d1 = −(F d/k) [ζ/√(1 − ζ²)]    (3.78)
where (3.29) has been used. Substitution of (3.77) and (3.78) into (3.74) results in
θ(t) = (F d/k) {1 − e−ζωn t [cos ωd t + (ζ/√(1 − ζ²)) sin ωd t]}    (3.79)

With the numerical values given for F , d, and k, (3.79) becomes

θ(t) = 1 − e−0.5t (cos 4.9749t + 0.1004 sin 4.9749t) (3.80)

Figure 3.12 symbolically shows the response of θ(t) in (3.80). There are several issues to
note. First, the steady-state response θmax is 1 rad (≈ 57.30◦ ). This is the same value
as in the case of the overdamped door. Second, the transient response is oscillatory
while the door is approaching the steady-state value.
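The oscillatory approach in Fig. 3.12 overshoots the steady-state value. Evaluating (3.79) at the first peak t = π/ωd (where cos ωd t = −1 and sin ωd t = 0) gives θpeak = (F d/k)(1 + e−ζπ/√(1−ζ²)), a standard overshoot result for underdamped step response. A numerical check with this example's values:

```python
import math

F, d, k = 100.0, 1.0, 100.0         # N, m, N m (from the example)
zeta, wn = 0.1, 5.0
wd = wn * math.sqrt(1.0 - zeta**2)  # 4.9749 rad/s, Eq. (3.73)

# Step response (3.79) evaluated at the first peak, t = pi/wd
t_pk = math.pi / wd
theta_pk = (F * d / k) * (1.0 - math.exp(-zeta * wn * t_pk)
                          * (math.cos(wd * t_pk)
                             + zeta / math.sqrt(1.0 - zeta**2)
                             * math.sin(wd * t_pk)))

# Closed-form overshoot for comparison
theta_peak_formula = (F * d / k) * (1.0 + math.exp(-zeta * math.pi
                                                  / math.sqrt(1.0 - zeta**2)))
print(theta_pk, theta_peak_formula)  # about 1.73 rad, a 73% overshoot
```

So this lightly damped door briefly swings about 73% past its final opening angle before settling, which an overdamped door never does.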

As a final remark, step response can also be considered as a free response around the
steady-state solution. Let us define

θ(t) = θp + η(t) (3.81)

where θp ≡ F d/k is the particular solution. Note that the physical meaning of η is a response
around the steady-state solution θp . Substitution of (3.81) into (3.56) and (3.57) results in

I η̈ + bη̇ + kη = 0 (3.82)

with initial conditions


η(0) = −θp = −F d/k, η̇(0) = 0    (3.83)
Note that η(t) satisfies the homogeneous equation with non-zero initial conditions. There-
fore, η(t) is indeed a homogeneous solution.

3.3.2 Impulse Response

Like first-order systems, impulse response of second-order systems can be determined through
use of physical laws or the Dirac delta function. We will demonstrate both methods as follows.

Example 3.7 Consider the Mexican game piñata shown in Fig. 3.13(a). The little paper
horse has mass 0.2 kg, and the string length is 1 m. An impact force F (t) is applied to the
paper horse, conveying an impulse I = 0.1 Ns. When the paper horse starts to swing, the
air drag is bv, where b is the damping coefficient and v is the velocity. Assume that b = 0.05
Ns/m and the paper horse is initially at rest in equilibrium. Determine

1. the velocity immediately after the impact, and

2. how high the paper horse can get.

Since an impulse is applied to the paper horse, the response of the paper horse is impulse
response. To solve for the impulse response, we need to break the problem into two stages.
The first stage is from t = 0− to t = 0+ , where t = 0− is the moment immediately before
the impact and t = 0+ is the moment immediately after the impact. The second stage is for
t > 0+ . The stages are explained in detail as follows.
Figure 3.13: Impulse response - An example on piñata



1. From t = 0− to t = 0+ , the impact force is present; see the free-body diagram in


Fig. 3.13(b). The paper horse is subjected to three forces: the tension T , the weight
mg, and the impact force F (t). The tension and the weight are both vertical, while
the impact force is horizontal. Application of the impulse-momentum equation in the
horizontal direction gives
∫[0−, 0+] F (t) dt = mv(0+ ) − mv(0− )    (3.84)

Note that the integral in (3.84) is the impulse I, and the velocity v(0− ) before the
impact is zero. Therefore, (3.84) becomes
v(0+ ) = I/m = 0.1/0.2 = 0.5 m/s    (3.85)

2. After the impact is over, the paper horse experiences a free response with an initial
velocity v(0+ ). The paper horse behaves like a pendulum, and we can model the paper
horse as a point mass. Figure 3.13(c) shows a free-body diagram when the paper horse
occupies an angular position θ. There are three forces acting on the paper horse: the
tension T , the weight mg, and the air drag bv. Application of Newton’s second law in
the tangential direction gives

−mg sin θ − bv = mat (3.86)

where at is the tangential acceleration. Since θ, v, and at are not expressed in terms of the
same variable, a conversion is needed. Recall that v = lθ̇ and at = lθ̈. Therefore,
(3.86) becomes
mlθ̈ + blθ̇ + mg sin θ = 0 (3.87)
Equation (3.87), however, is a nonlinear equation. Its solution is extremely compli-
cated. One way to approximate it is to assume that the angular displacement θ is
small. In this case, sin θ ≈ θ, and (3.87) becomes

mlθ̈ + blθ̇ + mgθ = 0 (3.88)

which is a linear equation. In addition, (3.88) satisfies initial conditions

v(0+ ) I 0.5
θ(0+ ) = 0, θ̇(0+ ) = = = = 0.5 rad/s (3.89)
l ml 1

To determine the response from (3.88) and (3.89), one needs to find out whether the
system is underdamped or not. Comparison of (3.88) with (3.4) gives
ωn = √(mg/(ml)) = √(g/l) = √(9.81/1) = 3.1321 rad/s    (3.90)
and
ζ = bl/(2mlωn ) = b/(2mωn ) = 0.05/[2(0.2)(3.1321)] = 0.0399    (3.91)
Therefore, the system is underdamped with a damped natural frequency
ωd = √(1 − ζ²) ωn = √(1 − (0.0399)²) (3.1321) = 3.1296 rad/s    (3.92)

and the solution to (3.88) is

θ(t) = e−ζωn t (d1 cos ωd t + d2 sin ωd t) (3.93)

Substitution of (3.93) into the initial conditions (3.89) results in

θ(0+ ) = d1 = 0 (3.94)

and

θ̇(0+ ) = [−ζωn e−ζωn t d2 sin ωd t + d2 ωd e−ζωn t cos ωd t]t=0+ = d2 ωd = v(0+ )/l    (3.95)
or
d2 = v(0+ )/(lωd ) = I/(mlωd ) = 0.5/[1(3.1296)] = 0.1598    (3.96)
Substitution of (3.94) and (3.96) into (3.93) gives
θ(t) = [I/(mlωd )] e−ζωn t sin ωd t = 0.1598e−0.125t sin 3.1296t    (3.97)
The maximum height that the paper horse can reach occurs roughly at t ≈ T /4, where
T ≡ 2π/ωd is the period. Hence

θ(π/(2ωd )) = 0.1598e−0.125π/(2·3.1296) = 0.15 rad = 8.597°    (3.98)
Note that θmax is very small so that the previous assumption sin θ ≈ θ is valid.
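The peak angle in (3.98) can be reproduced from the parameters of the example; as in the text, the response (3.97) is evaluated at t = T /4:

```python
import math

m, l, b, g = 0.2, 1.0, 0.05, 9.81  # kg, m, N s/m, m/s^2 (from the example)
I_imp = 0.1                        # applied impulse, N s

wn = math.sqrt(g / l)              # 3.1321 rad/s, Eq. (3.90)
zeta = b / (2.0 * m * wn)          # 0.0399, Eq. (3.91)
wd = wn * math.sqrt(1.0 - zeta**2) # 3.1296 rad/s, Eq. (3.92)

d2 = I_imp / (m * l * wd)          # 0.1598, Eq. (3.96)
t_pk = math.pi / (2.0 * wd)        # roughly a quarter period, T/4

theta_pk = d2 * math.exp(-zeta * wn * t_pk) * math.sin(wd * t_pk)
print(theta_pk, math.degrees(theta_pk))  # about 0.15 rad, 8.6 degrees
```

This confirms the value quoted in (3.98) and that the small-angle assumption sin θ ≈ θ is reasonable for this impulse.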

Figure 3.14: Impulse response of the paper horse

As shown in Example 3.7, finding impulse response is a two-step process. The first
step is to find the velocity immediately after the impulse is applied. The second step is free
response of the system with the initial velocity gained during the impulse. The first step can
be achieved by using physical laws, such as impulse-momentum equation. The first step can
also be achieved by using Dirac delta function as follows.

Example 3.8 Let us consider the piñata in Example 3.7 again using Dirac delta function.
The equation of motion can be generalized from (3.88) by adding the impact force Iδ(t) as

mlθ̈ + blθ̇ + mgθ = Iδ(t)    (3.99)

Since an impact force Iδ(t) is involved, the initial conditions are applied immediately before
the impact occurs, i.e., at t = 0− . The initial conditions are

θ(0− ) = 0, θ̇(0− ) = 0 (3.100)

Step 1. To determine the velocity immediately after the impact at t = 0+ , let us inte-
grate (3.99) from t = 0− to t = 0+ resulting in
ml ∫[0−,0+] θ̈ dt + bl ∫[0−,0+] θ̇ dt + mg ∫[0−,0+] θ dt = I ∫[0−,0+] δ(t) dt    (3.101)

Now, let us evaluate the integrals in (3.101) one by one. The first integral is
ml ∫[0−,0+] θ̈ dt = ml[θ̇(0+ ) − θ̇(0− )] = mlθ̇(0+ )    (3.102)

where (3.100) is used. The second integral is


bl ∫[0−,0+] θ̇ dt = bl[θ(0+ ) − θ(0− )] = 0    (3.103)

because the angular position of a point mass cannot change instantaneously. Therefore,
θ(0+ ) must be equal to θ(0− ). The third integral is
mg ∫[0−,0+] θ dt = 0    (3.104)

Note that the angular position θ is finite and the duration from t = 0− to t = 0+ is
infinitesimal. Therefore, the integral in (3.104) is zero. Finally, the fourth integral is
I ∫[0−,0+] δ(t) dt = I    (3.105)

where (2.24) is used. Substitution of (3.102)-(3.105) into (3.101) results in


mlθ̇(0+ ) = I, or θ̇(0+ ) = I/(ml)    (3.106)
which is the same result as shown in (3.89).

Step 2. For t > 0+ , the impact force is not present any more. Therefore, (3.99) becomes

mlθ̈ + blθ̇ + mgθ = 0    (3.107)

In addition, the initial displacement is

θ(0+ ) = θ(0− ) = 0 (3.108)

because of the continuity of the angular displacement. Moreover, the initial velocity is
θ̇(0+ ) = I/(ml)    (3.109)
Note that (3.107)-(3.109) are identical to (3.88) and (3.89) of Example 3.7.
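The result (3.106) can also be checked numerically by approximating the Dirac delta function with a short rectangular pulse of the same area, F = I/ε over 0 < t < ε, and integrating (3.99) over the pulse. This is an illustrative sketch, not part of the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, l, b, g = 0.2, 1.0, 0.05, 9.81  # parameters from Example 3.7
I_imp = 0.1                        # impulse, N s
eps = 1.0e-4                       # pulse duration approximating delta(t)
F = I_imp / eps                    # pulse amplitude, so the area equals I

def rhs(t, y):
    """State y = [theta, theta_dot] for m*l*th'' + b*l*th' + m*g*th = F."""
    th, om = y
    return [om, (F - b * l * om - m * g * th) / (m * l)]

sol = solve_ivp(rhs, (0.0, eps), [0.0, 0.0],
                max_step=eps / 100.0, rtol=1e-10, atol=1e-12)
omega_after = sol.y[1, -1]
print(omega_after)  # tends to I/(m*l) = 0.5 rad/s as eps -> 0
```

The angular velocity at the end of the pulse matches θ̇(0+ ) = I/(ml) = 0.5 rad/s, while the angular displacement remains essentially zero, consistent with (3.103) and (3.108).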

Figure 3.15: Water level of a tank-piping system

Readers might wonder why the method of Dirac delta function is useful. The method
seems to have rather long derivation as opposed to the impulse-momentum method. For
simple mechanical systems in Example 3.7, application of the impulse-momentum equation
is obvious and straightforward. For many other problems, the appropriate physical law might
not exist or might not be straightforward to apply. For example, Fig. 3.15 shows a hydraulic
system consisting of a tank and a long pipe. The gravitational potential of the fluid in
the tank gives a restoring force behaving like a spring. The fluid inertia in the long pipe
behaves like a rigid mass. Therefore, the fluid height h is governed by a second-order ordinary
differential equation. When the input pressure P (t) experiences a significant pressure surge
for a short period of time, the rate of change of fluid height ḣ(t) immediately after the surge
is not intuitively easy to find. In this case, a formulation through the Dirac delta function
is much more straightforward because of its mathematical rigor.

3.4 Practice Problems

1. Develop the equation of motion in terms of the variable x for the system shown in
Fig. 3.16. Determine an expression for the damping ratio ζ and undamped natural
frequency ωn in terms of the given system properties. How large must the damping coefficient
c be to make the system overdamped? Neglect the mass of the crank AB and
assume small oscillations about the equilibrium position shown.

Figure 3.16: A spring-mass-dashpot system involving a lever

2. A convenient way to measure damping in a single-degree-of-freedom system is the
logarithmic decrement of the free response. Consider the free response given by (3.35).
Let t1 and t2 denote the times corresponding to two consecutive displacements
x1 and x2 measured one cycle apart, i.e.,

t2 = t1 + T (3.110)

where

T = 2π/ωd    (3.111)
According to (3.35), show that the logarithmic decrement δ is

δ ≡ log(x1 /x2 ) = ζωn T    (3.112)
where log has base e. Combine (3.110), (3.111), and (3.112) to obtain
ζ = δ/√(4π² + δ²)    (3.113)
Will this method work for other initial conditions? Why?

3. In the mechanical system shown in Fig. 3.17, assume that the rod is massless, perfectly
rigid, and pivoted at point P . The mass of the small ball is m, the stiffness of the
spring is k, and the damping coefficient of the dashpot is c. The displacement x is
Figure 3.17: A massless mechanical lever

measured from the horizontal position, where the spring is undeformed. Initially, the
system is at rest and the spring is undeformed.

(a) Assuming that displacement x is small, derive the equation of motion governing
the motion x and the initial conditions.
(b) Determine the undamped natural frequency ωn of the system.
(c) If k = 100 N/m, c = 5 Ns/m, m = 1 kg, a = 1 m, and b = 2 m, determine if the
system is overdamped, underdamped, or critically damped. Justify your answer,
and roughly sketch the impulse response function. (Please don’t calculate the
impulse response function.)
(d) What is the final displacement x of the lever (i.e., after the transient response has
died out)? Does it depend on whether the system is overdamped or underdamped?
Why?

4. Consider the response shown in Fig. 3.18. Since the damping is small, the undamped
natural frequency ωn can be approximated by the damped natural frequency ωd .

(a) Estimate the natural frequency ωn from Fig. 3.18.


(b) Estimate the viscous damping factor ζ from Fig. 3.18.
(c) Does the response in Fig. 3.18 look like a step response? Explain why.

[Plot: displacement (mm), ranging over ±0.8 mm, versus time (second), from 0 to 2 s]

Figure 3.18: Response of a damped system


Figure 3.19: Measuring mass moment of inertia of a flywheel



Figure 3.20: Recoil mechanism in a cannon

5. The following method is used to determine the mass moment of inertia of a flywheel
shown in Fig. 3.19. The flywheel is suspended from its center by a wire from a fixed
support, and a period τ1 is measured for torsional oscillation of the flywheel about the
vertical axis. Two small weights, each of mass m, are next attached to the flywheel
in opposite positions at a distance r from the center. This additional mass results in
a slightly longer period τ2 . Write an expression for the moment of inertia I of the
flywheel in terms of the measured quantities.
Ans: I = 2mr²/[(τ2/τ1)² − 1]
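The stated answer can be sanity-checked numerically. For a torsional pendulum the period is τ = 2π√(I/kt), where kt is the (unmeasured) torsional stiffness of the wire; it cancels out of the final formula. A minimal sketch with made-up values:

```python
import math

# Assumed values for illustration only; k_t cancels out of the formula.
I_true = 0.5      # flywheel moment of inertia, kg m^2
k_t = 2.0         # torsional stiffness of the wire, N m/rad
m, r = 0.1, 0.2   # each added mass (kg) and its radius (m)

tau1 = 2 * math.pi * math.sqrt(I_true / k_t)                    # bare flywheel
tau2 = 2 * math.pi * math.sqrt((I_true + 2 * m * r**2) / k_t)   # with weights

# Recover I from the two "measured" periods using the stated answer
I_est = 2 * m * r**2 / ((tau2 / tau1)**2 - 1)
print(I_est)  # recovers I_true = 0.5
```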

6. Figure 3.20 shows a sketch of the recoil mechanism for a cannon. The gun barrel to-
gether with its seat slides on the carriage through a recoil mechanism. The barrel/seat
has mass m and the recoil mechanism has a spring constant k and damping coefficient
c. In addition, the recoil mechanism is undeformed when the barrel/seat is at the front
end of the recoil mechanism.

(a) Before a shell is fired, the barrel is in equilibrium with the recoil mechanism. Find

the equilibrium position x0 of the barrel measured from the front end.
(b) When a shell is fired, the shell applies an impulse I causing the barrel to recede.
Model the impulse through a Dirac delta function and use the position x as the
dependent variable to describe the motion. Write down the equation of motion
of the barrel and its initial condition in terms of x. Remember to include
gravity.
(c) Introduce the variable y defined as y = x − x0 . Therefore, y is the displacement
of the barrel from the equilibrium position. Write the equation of motion and
initial conditions in terms of y. Do you think this simple substitution simplifies
your equation of motion and initial conditions substantially?
(d) Describe how you would find the response of the barrel. You don’t have to solve
it.
(e) If you are designing this recoil system, would you use an underdamped system?
What would happen?

7. A mechanical system is governed by the following differential equation.


d²y/dt² + 6 dy/dt + 9y = 4e⁻ᵗ    (3.114)
(a) What are the natural frequency and viscous damping factor of this system?
(b) Is the system underdamped, critically damped, or overdamped?
(c) What is the homogeneous solution of this system?
(d) What is the particular solution of this system?
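After working the problem by hand, the classification and a candidate particular solution can be checked with a short script (using the standard second-order form from Chapter 3, ωn² from the y coefficient and 2ζωn from the dy/dt coefficient):

```python
import math

# Check for (3.114): in standard form wn^2 = 9 and 2*zeta*wn = 6.
wn = math.sqrt(9.0)
zeta = 6.0 / (2.0 * wn)
print(wn, zeta)                 # 3.0 1.0 -> critically damped

# Candidate particular solution y_p = e^(-t): its derivatives are -y_p and
# +y_p, so the left side is (1 - 6 + 9) y_p = 4 e^(-t), matching the forcing.
t = 0.7                         # arbitrary check time
yp = math.exp(-t)
lhs = yp + 6 * (-yp) + 9 * yp
print(math.isclose(lhs, 4 * math.exp(-t)))   # True
```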

8. Consider the system shown in Fig. 3.21, where the cylinder of radius r and mass m
is subjected to an external force excitation P (t). The cylinder undergoes pure rolling
and rotates freely about its axis. Also, the cylinder is constrained by a massless spring
with spring constant k and a massless dashpot with damping coefficient c. The motion
of the center of the cylinder is described by displacement x(t), which is measured with
respect to the undeformed position of the spring. Initially, the cylinder has an initial
angular velocity ω0 and the spring is undeformed. (Hint: The mass moment of inertia
of the cylinder about the center is (1/2)mr².) Given that x(t) is governed by

(3/2)mẍ + cẋ + kx = P    (3.115)

Figure 3.21: A rolling cylinder with a spring and a damper

with the initial conditions


x(0) = 0, ẋ(0) = rω0 (3.116)
If the input load P (t) is an impulse of magnitude I, i.e., P (t) = Iδ(t), where δ(t) is the
Dirac delta function, what will be the velocity of the center of the cylinder immediately
after the impulse is applied?

9. A stationary shaft is modeled as a second-order system

mẍ + kx = h(t) (3.117)

where m and k are the mass and stiffness of the shaft, respectively. Also, h(t) is the
external excitation driving the shaft. Answer the following questions.

(a) If m is 1 kg and the natural frequency of the shaft is 10 rad/s, what is the stiffness
k of the shaft?
(b) An engineer mounts the shaft onto a set of bearings to form a spindle. The
vibration of the spindle is then modeled through

mẍ + cb ẋ + (k + kb ) x = h(t) (3.118)

where cb and kb are the damping and stiffness of the bearings. If the bearings are
ball bearings, kb = 800 N/m and cb = 3 Ns/m. Calculate the natural frequency
ωn and viscous damping factor ζ of the spindle. Is the system overdamped or
underdamped?

Figure 3.22: A bob dropped on an elastic beam
(c) If the bearings are fluid bearings, kb = 20 N/m and cb = 33 Ns/m. Calculate the
natural frequency ωn and viscous damping factor ζ of the spindle. Is the system
overdamped or underdamped?
(d) If the spindle is subjected to an impulsive load with zero initial conditions, which
bearing should you use in order to minimize the vibration x(t)? Please explain
why.
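After working parts (b) and (c) by hand, the numbers can be checked quickly. Part (a) gives k = 100 N/m from ωn = √(k/m) = 10 rad/s with m = 1 kg; the spindle then has ωn = √((k + kb)/m) and, in standard form, ζ = cb/(2mωn):

```python
import math

# Quick check of parts (b) and (c) with m = 1 kg, k = 100 N/m from part (a).
m, k = 1.0, 100.0
for name, kb, cb in [("ball", 800.0, 3.0), ("fluid", 20.0, 33.0)]:
    wn = math.sqrt((k + kb) / m)     # rad/s
    zeta = cb / (2.0 * m * wn)       # viscous damping factor
    print(name, round(wn, 3), round(zeta, 3),
          "underdamped" if zeta < 1.0 else "overdamped")
```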

10. A bob of mass m is dropped from a height h to a simply supported beam as shown
in Fig. 3.22. After an impact between the mass and the beam, the bob rebounds to
a height h2 . Find the response of the midpoint of the beam after the impact. (Hint:
Approximate the beam as a single degree-of-freedom system with mass M . To calculate
the stiffness of this single degree-of-freedom system, notice that the static deflection
at the midpoint is δ = P l³/(48EI) under a concentrated load P at the midpoint, where EI
is the flexural rigidity of the beam. In addition, the momentum change of the small
mass is the impulse applied to the beam.)
Chapter 4

State-Space Formulation

For systems with only one or two components, the dynamics of the systems can be easily de-
scribed through use of ordinary differential equations. Usually, dynamics of each component
satisfies a first- or second-order ordinary differential equation. In Example 1.1, the system
has two lumped components. One is first-order and the other is second-order. After elim-
inating one variable, the resulting governing equation is a third-order ordinary differential
equation. As one can imagine, when the number of components increases, it takes a great
deal of effort to eliminate variables in order to derive the governing ordinary differential
equations and to assign the initial conditions. Therefore, a more efficient way is needed to
describe the dynamics of systems comprising multiple components. An alternative is to use the state-space
formulation. Instead of eliminating variables, additional variables are introduced in state-
space formulation so that each variable only satisfies a first-order differential equation. The
governing equation then becomes a first-order matrix equation as opposed to a high-order
scalar ordinary differential equation. In Section 4.1, examples are given to motivate the use of
the state-space formulation. Section 4.2 explains how the state-space formulation complements
the approach of ordinary differential equations. Section 4.3 demonstrates how a state-space
formulation can be derived from an ordinary differential equation. Section 4.4 demonstrates
how ordinary differential equations can be derived from state-space formulations.


Figure 4.1: A system with three components

4.1 State Equation and Output Equation

Let us use the following example to demonstrate how ordinary differential equations become
incredibly tedious when a system contains more than a few components. Consider the
example shown in Fig. 4.1. The system consists of two rigid blocks, two springs, and a
damper in serial connection. The rigid blocks have mass m1 and m2 . The springs have
spring constants k1 and k2 . The damper has damping coefficient c. In addition, mass
m1 is subjected to an external force f (t), and the gravitational acceleration is g. Let x1 ,
x2 , and x3 be the displacement away from the undeformed state of the springs and the
damper. Moreover, the directions shown in Fig. 4.1 define the positive directions of x1 ,
x2 , and x3 . The system is initially at rest with the springs and damper undeformed. The
goal is to derive the equation governing the dynamics of this system with appropriate initial
conditions. Moreover, the outputs of interest are the displacement x1 of the first block, the
acceleration a2 of the second block, and the damping force fc in the damper.

Figure 4.2 shows the free-body diagram of each component. First, let us consider the
connecting point between the damper c and spring k1 . The connecting point is moving
to the right. Therefore, the damping force cẋ3 is pointing to the left. In addition, the
spring k1 experiences a tension k1 (x1 − x3 ) pointing to the right. (Note that the spring
force will automatically become compressive when x1 < x3 because of the negative value
of k1 (x1 − x3 ).) Since the connecting point has no mass, the damping force and the spring
force must be in equilibrium resulting in

cẋ3 = k1 (x1 − x3 ) (4.1)

For the mass m1 , the spring k1 gives a tension k1 (x1 − x3 ) pointing to the left. Similarly, the
spring k2 gives a tension k2 (x2 − x1 ) pointing to the right. Application of Newton’s second
law gives
k2 (x2 − x1 ) − k1 (x1 − x3 ) + f (t) = m1 ẍ1 (4.2)
For the mass m2 , the spring k2 gives a tension k2 (x2 − x1 ) in the upward direction. The
weight m2 g is downward. Application of Newton’s second law gives

m2 g − k2 (x2 − x1 ) = m2 ẍ2 (4.3)

In addition, the initially at-rest, undeformed state of the system implies that

x1 (0) = x2 (0) = x3 (0) = 0 (4.4)

and
ẋ1 (0) = ẋ2 (0) = ẋ3 (0) = 0 (4.5)

Although the equations governing motion of each component have been derived, there
remain many difficulties to overcome in modeling this system in Fig. 4.1 should one decide
to use the approach of ordinary differential equations. First, it is tedious to obtain the
governing ordinary differential equations, because two of the three variables need to be
eliminated. Note that x1 and x2 have second-order derivatives, and x3 has a first-order
derivative. Therefore, the resulting governing equations for x1 , x2 , or x3 will be fifth-order.
If the equation governing x1 is needed, we must eliminate x2 and x3 from (4.1), (4.2), and
(4.3). In Example 1.1, we have demonstrated the cumbersome process to eliminate a single
variable for a third-order system. The level of tediousness increases dramatically when the
order of the governing equations increases.

Figure 4.2: Free-body diagram of Fig. 4.1

Second, it is very difficult to assign initial conditions. For example, let us assume
that we have obtained a fifth-order ordinary differential equation governing x1 . The initial
conditions will require quantities like d4 x1 (0)/dt4 and d3 x1 (0)/dt3 . These initial conditions
do not have physical meaning and can only be derived through elimination of variables again.
Moreover, the fifth-order governing equation will require five initial conditions. There are,
however, six initial conditions derived from physical intuition as shown in (4.4) and (4.5).
Obviously, one initial condition is superfluous.

These difficulties will get worse when the number of components increases. For a complicated system, such as an airplane or a car, it is easy to have hundreds of components
that need to be modeled. In this case, the order of the governing equations may be in the
thousands. It is impossible to derive these governing equations by elimination of variables.
Therefore, an alternative way of modeling must be pursued.

To search for an alternative modeling technique, an important observation is that the


dynamics of each component satisfies either a first-order or a second-order ordinary differen-
tial equation. When variables are eliminated, the order increases. If the governing equation
is 50th-order, it is of course a very long way to raise the order from 1 or 2. It is, however,
very easy to lower the order for each component. For a first-order component, the
order cannot be lowered any further. For a second-order component, the equation can easily be rewritten
into two first-order differential equations. Therefore, the dynamics of a system with many
components can always be described through a set of first-order differential equations
instead of one single ordinary differential equation of very high order. These first-order

differential equations, often coupled together, will be arranged in a neat matrix form called
state equations. Algorithms have been developed to solve the state equations analytically or
numerically.

Let us demonstrate the concept of state equation using the same example in Fig. 4.1.
The procedure is to rewrite all the component equations (4.1), (4.2), and (4.3) in terms of
first-order differential equations. For (4.1), it can be rewritten as
ẋ3 = (k1/c)(x1 − x3)    (4.6)
For (4.2), we can define velocity v1 ≡ ẋ1 or

ẋ1 = v1 (4.7)

Then (4.2) becomes


v̇1 ≡ ẍ1 = (k2/m1)(x2 − x1) − (k1/m1)(x1 − x3) + (1/m1)f(t)    (4.8)
For (4.3), we can define velocity v2 ≡ ẋ2 or

ẋ2 = v2 (4.9)

Then (4.3) becomes


v̇2 ≡ ẍ2 = −(k2/m2)(x2 − x1) + g    (4.10)
Note that in all the equations from (4.6) to (4.10), only first time derivatives appear, and
each shows up alone on the left side of the equality sign. Furthermore, (4.6) to (4.10) can be rewritten
into the following matrix equation.

     [x1]   [     0         1     0      0      0   ] [x1]   [  0    0 ]
     [v1]   [−(k1+k2)/m1    0   k2/m1    0    k1/m1 ] [v1]   [ 1/m1  0 ] [f(t)]
d/dt [x2] = [     0         0     0      1      0   ] [x2] + [  0    0 ] [ g  ]    (4.11)
     [v2]   [   k2/m2       0  −k2/m2    0      0   ] [v2]   [  0    1 ]
     [x3]   [   k1/c        0     0      0   −k1/c  ] [x3]   [  0    0 ]
Equation (4.11) can further be abbreviated as

ẋ(t) = Ax(t) + Bu(t) (4.12)



where

x ≡ [x1, v1, x2, v2, x3]^T,   A ≡ [     0         1     0      0      0
                                   −(k1+k2)/m1    0   k2/m1    0    k1/m1
                                        0         0     0      1      0
                                      k2/m2       0  −k2/m2    0      0
                                      k1/c        0     0      0   −k1/c ]    (4.13)

and

B ≡ [  0    0
      1/m1  0
       0    0
       0    1
       0    0 ],   u ≡ [f(t), g]^T    (4.14)

The equation shown in (4.12) is known as the state equation. In addition, x is called the state
variable vector. In this example, the state vector consists of five independent state variables
x1 (t), v1 (t), x2 (t), v2 (t), and x3 (t). The state variables govern the response of the system
and are the unknowns to be determined from (4.11). The matrix A is a square matrix called the state
matrix. The state matrix A governs characteristics of the system, such as time constants
and natural frequencies. Also in (4.12), u is called the input vector. In this example, it
consists of two input variables f (t) and g. Note that the size of the input vector depends
on the problem statement. If f (t) were absent, the only input variable would be g. If a system
has only one input variable, the system is called single-input. If a system has multiple input
variables, the system is called multi-input. Finally, the matrix B is called the input matrix.
Note that the dimension of the input matrix depends on the number of state variables and
the number of input variables. If B has m rows and n columns, m will be the number of
state variables and n will be the number of input variables.

Solving the state equation (4.11) requires initial conditions. Since the state variable

vector x has only a first derivative in (4.11), the initial conditions are

x(0) ≡ [x1 (0), v1 (0), x2 (0), v2 (0), x3 (0)]^T = [0, 0, 0, 0, 0]^T    (4.15)
Hence ẋ3 (0) = 0 in (4.5) is redundant.
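This pairing of (4.12) with (4.15) is exactly the form a numerical integrator consumes. A minimal sketch that assembles A and B of (4.11) for assumed parameter values and marches the state forward with the simple forward-Euler scheme (any standard ODE solver would serve as well):

```python
# Assemble A and B of (4.11) for assumed parameter values and march
# xdot = A x + B u forward with forward Euler.  The parameter values and
# the constant external force below are illustrative assumptions only.
m1, m2, k1, k2, c, g = 1.0, 2.0, 40.0, 60.0, 5.0, 9.81

A = [[0.0, 1.0, 0.0, 0.0, 0.0],
     [-(k1 + k2)/m1, 0.0, k2/m1, 0.0, k1/m1],
     [0.0, 0.0, 0.0, 1.0, 0.0],
     [k2/m2, 0.0, -k2/m2, 0.0, 0.0],
     [k1/c, 0.0, 0.0, 0.0, -k1/c]]
B = [[0.0, 0.0], [1.0/m1, 0.0], [0.0, 0.0], [0.0, 1.0], [0.0, 0.0]]

def f(t):                  # assumed external force: a constant pull
    return 2.0

x = [0.0]*5                # initial conditions (4.15)
dt = 1e-4
for step in range(10000):  # integrate to t = 1 s
    u = [f(step*dt), g]
    xdot = [sum(A[i][j]*x[j] for j in range(5)) +
            sum(B[i][j]*u[j] for j in range(2)) for i in range(5)]
    x = [x[i] + dt*xdot[i] for i in range(5)]
print(x)                   # state [x1, v1, x2, v2, x3] at t = 1 s
```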

Note that the state equation (4.12) and initial condition (4.15) now form a well-posed
initial value problem, whose solution is unique. This is very important. The state vari-
ables have to be chosen so that the state equation is well posed and has a unique solution.
Therefore, the state variables at time t can be used to determine the future states of the
system.

Now let us assume that the state equation is solved and the state variables are known.
To obtain the outputs of interest (i.e., x1 , a2 , and fc ), we must represent each output in
terms of the state variables and the input variables. (Why? Because the state variables and
the input variables are known quantities now.) For output x1 , it is a state variable itself.
Therefore, the representation is simply an identity
x1 = x1    (4.16)
where the x1 on the left is the output of interest and the x1 on the right is a state variable.
For output a2 , it can be obtained from (4.10)
a2 ≡ v̇2 = −(k2/m2)(x2 − x1) + g    (4.17)
For output fc , it can be found from (4.1)
fc ≡ cẋ3 = k1 (x1 − x3 ) (4.18)
Now we can put (4.16), (4.17), and (4.18) into a compact matrix form

[x1]   [  1      0     0      0     0 ]                          [ 0  0 ]
[a2] = [k2/m2    0  −k2/m2    0     0 ] [x1, v1, x2, v2, x3]^T + [ 0  1 ] [f(t), g]^T    (4.19)
[fc]   [ k1      0     0      0   −k1 ]                          [ 0  0 ]

Equation (4.19) can further be abbreviated as

y(t) = Cx(t) + Du(t) (4.20)

where

y ≡ [x1, a2, fc]^T,   C ≡ [  1      0     0      0     0
                            k2/m2   0  −k2/m2    0     0
                             k1     0     0      0   −k1 ]    (4.21)

and

D ≡ [ 0  0
      0  1
      0  0 ]    (4.22)
The matrix equation in (4.20) is called the output equation. Since y consists of three output
variables of interest (i.e., x1 , a2 , and fc ), y is called the output variable vector. Furthermore, C
and D in (4.20) are known as the output matrices.

As one can see, the dynamics of a system can now be formulated in terms of a state equation
in (4.12) and an output equation in (4.20). This approach of modeling is called state-space
formulation or state-space representation. The unique features of the state-space formulation
are its generality, simplicity, and modular structure. The state-space formulation is very
general and universal, because it can model any linear system as long as the inputs and
outputs are also linear. Therefore, numerical algorithms based on the state-space formulation
can have very wide applications. The state-space formulation is simple and concise, because
the system dynamics is condensed into four matrices A, B, C, and D with good physical
meaning. Matrix A is the signature of the system dynamics. Matrix B defines the input.
Matrices C and D define the output. The modular structure of state-space formulation makes
it very convenient and efficient for numerical simulations. For the example in Fig. 4.1, if a
different set of input variables is chosen, one only needs to redefine the matrix B and the
numerical simulation can resume. If a different set of output variables is chosen, one can
redefine matrices C and D to resume the simulation.
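This modularity is easy to see in code: with the state trajectory in hand, switching outputs is just a different C and D, with A and B untouched. A sketch with made-up numbers for a single state sample:

```python
# The output stage is modular: once the state is known, any set of outputs
# is y = C x + D u with a different C and D.  All numbers are made up for
# illustration; x is one state sample [x1, v1, x2, v2, x3].
k1, k2, m2 = 40.0, 60.0, 2.0
x = [0.01, 0.2, 0.03, -0.1, 0.005]
u = [2.0, 9.81]                      # [f(t), g]

def outputs(C, D):
    return [sum(C[i][j] * x[j] for j in range(5)) +
            sum(D[i][j] * u[j] for j in range(2)) for i in range(len(C))]

# Outputs of (4.19): x1, a2, fc
C = [[1.0, 0.0, 0.0, 0.0, 0.0],
     [k2/m2, 0.0, -k2/m2, 0.0, 0.0],
     [k1, 0.0, 0.0, 0.0, -k1]]
D = [[0.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
y1 = outputs(C, D)

# A different output (the spring-k2 force) needs only new C and D rows
C2 = [[-k2, 0.0, k2, 0.0, 0.0]]
D2 = [[0.0, 0.0]]
y2 = outputs(C2, D2)
print(y1, y2)
```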

For most beginners who are used to ordinary differential equations, the concepts of state
equations and output equations can seem foreign and difficult to accept. A good way to
overcome this is to study many examples and to practice extensively.
4.1. STATE EQUATION AND OUTPUT EQUATION 93

Figure 4.3: A door under pressure and a load on the knob

Example 4.1 Consider the door shown in Fig. 4.3(a). The door has width d and height h.
Also, the door has a mass moment of inertia I about the hinged axis. The angular position
of the door is θ(t). In addition, the door is subjected to a uniform pressure p, a concentrated
load F on the door knob, and a restoring moment kθ and damping moment bθ̇ at the hinge. The outputs
of interest are the angular position θ and the tangential component Ot of the reaction at the
hinge. Use the angular position θ and the angular velocity ω as two state variables. Derive
the state equation and the output equation.

According to the free-body diagram shown in Fig. 4.3(a), the sum of moments about the
hinged axis gives

F d − p(hd)(d/2) − kθ − bθ̇ = I θ̈    (4.23)

or

I θ̈ + bθ̇ + kθ = F d − p hd²/2    (4.24)
Note from (4.24) that there are two independent input quantities F and p. Since θ and ω
are chosen as state variables, the resulting state equation should look like

θ̇ = · · · (4.25)

and
ω̇ = · · · (4.26)

where ”· · · ” in (4.25) and (4.26) are linear functions of the state variables (θ, ω) and the
input variables (F, p). For (4.25), we complete the equation by using the definition

θ̇ = ω (4.27)

For (4.26), we complete the equation by using (4.27) and (4.24), i.e.,

ω̇ = θ̈ = −(b/I)ω − (k/I)θ + (d/I)F − (hd²/(2I))p    (4.28)
Rearranging (4.27) and (4.28) into the following matrix form

d/dt [θ]   [  0     1  ] [θ]   [  0        0      ] [F]
     [ω] = [−k/I  −b/I ] [ω] + [ d/I  −hd²/(2I)  ] [p]    (4.29)
Comparison of (4.29) with (4.12) shows that the state vector x(t) is

x(t) ≡ [θ, ω]^T    (4.30)

the state matrix A is

A ≡ [  0      1
     −k/I   −b/I ]    (4.31)

the input matrix B is

B ≡ [  0        0
      d/I  −hd²/(2I) ]    (4.32)
and the input vector u is

u(t) ≡ [F, p]^T    (4.33)

For the output equation, we need to express the output variables θ and Ot as

θ = ··· (4.34)

and
Ot = · · · (4.35)
where ”· · · ” in (4.34) and (4.35) are linear functions of the state variables (θ, ω) and the
input variables (F, p). For (4.34), it is simply an identity

θ=θ (4.36)

For (4.35), the sum of forces in the tangential direction as shown in Fig. 4.3(b) gives
 
Ot + F − phd = m(d/2)θ̈    (4.37)

or

Ot = −F + phd + m(d/2)θ̈    (4.38)
The right side of (4.38), however, is not a linear function of the state variables (θ, ω) and
the input variables (F, p), because θ̈ is present. To eliminate θ̈, we can substitute (4.28) into
(4.38) to obtain
Ot = (md²/(2I) − 1)F + hd(1 − md²/(4I))p − (kmd/(2I))θ − (bmd/(2I))ω    (4.39)
Rearranging (4.36) and (4.39) into the following matrix form

[θ ]   [    1          0      ] [θ]   [      0                0          ] [F]
[Ot] = [−kmd/(2I)  −bmd/(2I)  ] [ω] + [ md²/(2I) − 1   hd(1 − md²/(4I)) ] [p]    (4.40)
Comparison of (4.40) with (4.20) shows that the output vector y(t) is

y(t) ≡ [θ, Ot]^T    (4.41)
the output matrices C and D are

C ≡ [    1          0
     −kmd/(2I)  −bmd/(2I) ],   D ≡ [      0                0
                                    md²/(2I) − 1   hd(1 − md²/(4I)) ]    (4.42)
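A quick numerical consistency check of (4.39): evaluating Ot from (4.38), with θ̈ taken from (4.28), must agree with the linear expression for any values of the states and inputs. All numbers below are arbitrary:

```python
# Consistency check of (4.39): Ot computed from (4.38) with theta_ddot taken
# from (4.28) must equal the linear expression.  All values are arbitrary.
I, m, k, b, h, d = 3.0, 5.0, 80.0, 2.0, 2.0, 0.9
theta, omega, F, p = 0.1, -0.4, 10.0, 50.0

tdd = -(b/I)*omega - (k/I)*theta + (d/I)*F - (h*d*d/(2*I))*p   # (4.28)
Ot_direct = -F + p*h*d + m*(d/2)*tdd                            # (4.38)
Ot_linear = ((m*d*d/(2*I) - 1)*F + h*d*(1 - m*d*d/(4*I))*p
             - (k*m*d/(2*I))*theta - (b*m*d/(2*I))*omega)       # (4.39)
print(abs(Ot_direct - Ot_linear) < 1e-9)  # True
```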

For a given system, the choice of state variables is not unique. It depends on the needs
of the person modeling the system. Let us demonstrate this via the following example.

Example 4.2 Consider the door shown in Fig. 4.3. Let the state variables be the angular
position θ and the angular momentum H of the door about the hinged axis. Derive the state
equation.

Since the state variables are θ and H, the state equation will look like

θ̇ = · · · (4.43)

and
Ḣ = · · · (4.44)
where ”· · · ” in (4.43) and (4.44) are linear functions of the state variables (θ, H) and the
input variables (F, p). For (4.43), we complete the equation by using the definition of angular
momentum, H = I θ̇, or
θ̇ = (1/I)H    (4.45)
For (4.44), we complete the equation by using (4.45) and (4.24), i.e.,
Ḣ = I θ̈ = −bθ̇ − kθ + dF − (hd²/2)p
   = −(b/I)H − kθ + dF − (hd²/2)p    (4.46)
Rearranging (4.45) and (4.46) into the following matrix form

d/dt [θ]   [ 0    1/I ] [θ]   [ 0     0     ] [F]
     [H] = [−k   −b/I ] [H] + [ d  −hd²/2  ] [p]    (4.47)
Comparison of (4.47) with (4.12) shows that the state vector x(t) is

x(t) ≡ [θ, H]^T    (4.48)
the state matrix A is

A ≡ [ 0    1/I
     −k   −b/I ]    (4.49)

the input matrix B is

B ≡ [ 0     0
      d  −hd²/2 ]    (4.50)
and the input vector u is

u(t) ≡ [F, p]^T    (4.51)
as before.
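The two state choices describe the same door, so although (4.31) and (4.49) are different matrices, they must share eigenvalues; both reproduce the characteristic equation Iλ² + bλ + k = 0. A sketch with assumed values, computing the 2 × 2 eigenvalues from the characteristic polynomial directly:

```python
import cmath

# Two valid state choices for the same door: (theta, omega) of (4.29) and
# (theta, H) of (4.47).  Their state matrices differ but share eigenvalues.
# Parameter values are assumptions for illustration.
I, k, b = 3.0, 80.0, 2.0

def eig2(a):  # eigenvalues of a 2x2 matrix via its characteristic polynomial
    tr = a[0][0] + a[1][1]
    det = a[0][0]*a[1][1] - a[0][1]*a[1][0]
    disc = cmath.sqrt(tr*tr - 4*det)
    return sorted([(tr + disc)/2, (tr - disc)/2], key=lambda z: z.imag)

A1 = [[0.0, 1.0], [-k/I, -b/I]]   # states (theta, omega), eq. (4.31)
A2 = [[0.0, 1/I], [-k, -b/I]]     # states (theta, H), eq. (4.49)
print(eig2(A1))
print(eig2(A2))                   # the same pair, up to round-off
```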

Mathematically, the state variables do not have to be independent. The state equation
(4.12) can still be well-posed and have a unique solution. Dependent state variables increase
the size of the state matrix A and make the solutions numerically inefficient to obtain. In
practical applications, people tend to choose independent state variables to minimize the size
of the state matrix so that the solution can be efficient. The following example addresses
this point.

Example 4.3 Consider the door shown in Fig. 4.3. Let the state variables be the angular
position θ, the angular velocity ω, and the restoring moment Tk of the door about the hinged
axis. Derive the state equation.

The restoring moment Tk is related to θ through

Tk = kθ (4.52)

Therefore, Tk and θ are dependent on each other. The derivative of (4.52) becomes

Ṫk = k θ̇ = kω (4.53)

Combining (4.27), (4.28), (4.53) into a single matrix equation

d/dt [θ ]   [  0     1    0 ] [θ ]   [  0        0       ]
     [ω ] = [−k/I  −b/I  0 ] [ω ] + [ d/I  −hd²/(2I)   ] [F]    (4.54)
     [Tk]   [  0     k    0 ] [Tk]   [  0        0       ] [p]

Figure 4.4: A very interesting spring-mass-damper system

According to (4.54), the state matrix A is

A ≡ [  0     1    0
     −k/I  −b/I   0
       0     k    0 ]    (4.55)

Since the first and third state variables are linearly dependent, the first and third rows of the
state matrix A are proportional to each other. As a result,

det A = 0 (4.56)

This is a good criterion to check whether or not the state variables are independent. When
the state variables are independent, det A ≠ 0.

Although det A = 0, the state equation (4.54) remains a mathematically well-posed
problem. In other words, the solution will be unique if proper initial conditions are
given. The result will be that Tk always equals kθ. In this case, it is easier to use the
simpler state equation shown in (4.29).
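The determinant criterion is easy to apply directly to (4.55): the first and third rows are proportional, and the third column is identically zero because no equation depends on Tk, so det A = 0 regardless of parameter values. A sketch with assumed numbers:

```python
# det A = 0 for the dependent state choice (theta, omega, Tk) of (4.55).
# Parameter values are assumptions; the result holds for any of them.
I, k, b = 3.0, 80.0, 2.0
A = [[0.0, 1.0, 0.0],
     [-k/I, -b/I, 0.0],
     [0.0, k, 0.0]]

def det3(a):  # cofactor expansion along the first row
    return (a[0][0]*(a[1][1]*a[2][2] - a[1][2]*a[2][1])
            - a[0][1]*(a[1][0]*a[2][2] - a[1][2]*a[2][0])
            + a[0][2]*(a[1][0]*a[2][1] - a[1][1]*a[2][0]))

print(det3(A))  # 0.0
```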

Although the choice of independent state variables can reduce the size of the state
equation, it may or may not be a good practice depending on the system dynamics and the
output variables of interest. The following example shows this feature.

Figure 4.5: Free-body diagram of this interesting problem

Example 4.4 Consider the flagship model shown in Example 1.1. The system is redrawn
in Fig. 4.4. Derive the state equation in terms of state variables (x1 , x2 , v2 ).

Figure 4.5 shows the free-body diagram of the system. According to (1.4) and (1.5),
the equations for the mass and the spring/damper junction are

Fs − k (x2 − x1 ) = mẍ2 (4.57)

The second free-body diagram is for the spring-damper connecting point. Note that the
forces on both sides must be in equilibrium resulting in

k (x2 − x1 ) = B ẋ1 (4.58)

Since the state variables are (x1 , x2 , v2 ), the state equations will look like

ẋ1 = · · · , ẋ2 = · · · , v̇2 = · · · (4.59)

where ”· · · ” are linear functions of the state variables (x1 , x2 , v2 ) and the input variable Fs .
For ẋ1 , (4.58) gives
ẋ1 = (k/B)(x2 − x1)    (4.60)
For ẋ2 , it can be obtained through definition

ẋ2 = v2 (4.61)

For v̇2 , (4.57) and (4.61) give

v̇2 = ẍ2 = −(k/m)(x2 − x1) + (1/m)Fs    (4.62)

Combining (4.60), (4.61), and (4.62) gives the following state equation

d/dt [x1]   [−k/B   k/B   0 ] [x1]   [  0  ]
     [x2] = [  0     0    1 ] [x2] + [  0  ] Fs (t)    (4.63)
     [v2]   [ k/m  −k/m   0 ] [v2]   [ 1/m ]
subjected to initial conditions

[x1 (0), x2 (0), v2 (0)]^T = [0, 0, 0]^T    (4.64)
Note that the three state variables (x1 , x2 , v2 ) are not independent. This can be found from
det A = 0 in (4.63). Elimination of k (x2 − x1 ) from (4.57) and (4.58) results in

mv̇2 + B ẋ1 = Fs (4.65)

Integration of (4.65) from 0 to t and use of initial condition (4.64) give


mv2 (t) + Bx1 (t) = ∫₀ᵗ Fs (τ) dτ    (4.66)

Therefore, v2 (t) and x1 (t) are not independent.

Since x1 and v2 are not independent, one should be able to choose proper state variables
so that the state equation (4.63) is reduced to a 2 × 2 matrix differential equation. In this
case, let us choose v2 and the spring force Fk as the state variables, where Fk is

Fk ≡ k (x2 − x1 ) (4.67)

The state equations will take the form of

v̇2 = · · · , Ḟk = · · · (4.68)

where ”· · · ” are linear functions of the state variables (v2 , Fk ) and the input variable Fs . For
v̇2 , (4.62) gives

v̇2 = −(1/m)Fk + (1/m)Fs    (4.69)
For Ḟk , (4.67) gives
Ḟk = k (v2 − v1 ) (4.70)

Also note that (4.58) becomes


Fk = Bv1 (4.71)
Elimination of v1 from (4.70) and (4.71) results in
Ḟk = kv2 − (k/B)Fk    (4.72)
Finally, let us put together (4.69) and (4.72) and form the following state equation

d/dt [v2]   [ 0   −1/m ] [v2]   [ 1/m ]
     [Fk] = [ k   −k/B ] [Fk] + [  0  ] Fs (t)    (4.73)

Obviously, (4.73) has two state variables and is less computationally intensive than
(4.63) which has three state variables. What is not obvious is that the reduction of state
variables comes with a cost, that is, not all information of the system can now be retrieved
from the solution of (4.73). For example, let the output variables be x1 and a2 , where a2 is
the acceleration of the mass. For state variables x1 , x2 , and v2 , the output equations will
look like
x1 = · · · ,   a2 = · · ·    (4.74)
where ”· · · ” are linear functions of the state variables (x1 , x2 , v2 ), and the input variable Fs .
For x1 , it is a state variable. Hence, the output equation is a simple identity

x1 = x1 (4.75)

For a2 , use of (4.62) gives


a2 ≡ ẍ2 = −(k/m)(x2 − x1) + (1/m)Fs    (4.76)
Combination of (4.75) and (4.76) gives the matrix output equation

[x1]   [  1     0    0 ]                   [  0  ]
[a2] = [ k/m  −k/m   0 ] [x1, x2, v2]^T + [ 1/m ] Fs (t)    (4.77)

Now, if the state variables are v2 and Fk , one will find that the output variable x1
cannot be represented in terms of a linear combination of v2 , Fk , and Fs (t). One needs to
integrate v2 (t) with respect to t to get x2 (t). Then x1 (t) can be found from (4.67) as

x1 (t) = x2 (t) − (1/k)Fk (t)    (4.78)
Hence the moral is the following. It is the output that dictates which state equation to
use. If the outputs of interest can be expressed in terms of a linear combination of v2 and Fk ,
the state equation (4.73) suffices. If the outputs of interest cannot be expressed in terms of
a linear combination of v2 and Fk , it is easier to use the state equation (4.63) with three state
variables.
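The equivalence of the two formulations can be checked numerically: integrating (4.63) with three states and (4.73) with two states must produce the same v2 (t). A forward-Euler sketch with assumed values and a constant input force:

```python
# Both formulations model the same system, so integrating (4.63) (three
# states) and (4.73) (two states) must give the same v2(t).  Assumed values.
m, B, k = 2.0, 4.0, 50.0
Fs = 1.0                    # constant input force, an assumption
dt, n = 1e-5, 100000        # integrate to t = 1 s

x1 = x2 = v2 = 0.0          # states of (4.63)
w2, Fk = 0.0, 0.0           # states of (4.73), w2 playing the role of v2
for _ in range(n):
    dx1 = (k/B)*(x2 - x1)
    dv2 = (-k*(x2 - x1) + Fs)/m
    dw2 = (-Fk + Fs)/m
    dFk = k*w2 - (k/B)*Fk
    x1, x2, v2 = x1 + dt*dx1, x2 + dt*v2, v2 + dt*dv2
    w2, Fk = w2 + dt*dw2, Fk + dt*dFk
print(v2, w2)               # nearly identical
```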

As the last puzzling piece, Example 4.3 and Example 4.4 both have dependent state
variables with det A = 0. Why is Example 4.3 so simple, but Example 4.4 so complicated?
The answer is the subtle difference in the dependence of state variables. In Example 4.3, the
state variables θ and Tk are linearly dependent as indicated in (4.52). As a result, one can
choose θ, for example, as the independent variable and reduce the state equation to a 2 × 2
matrix differential equation. Once θ(t) is solved from the state equation, Tk can be found
from (4.52). It does not involve any further integration. In Example 4.4, the state variables
x1 and v2 are not linearly dependent. Instead, x1 and v2 satisfy a holonomic constraint in
(4.66) that results from an integration. In this case, even though a state equation with two
independent variables v2 and Fk can be obtained, finding certain quantities, such as x1 or
x2 , requires a further integration of the state variable v2 .

4.2 The Big Picture

Both state-space formulations and ordinary differential equations are very powerful tools
to analyze dynamics of a system. Nevertheless, each approach has its own advantages and
disadvantages. In this section, let us focus on the big picture and explain how state-space
formulations and ordinary differential equations complement each other in system analysis.

As shown in Section 4.1, it is easier and more efficient to model dynamics of a system
using state equations when the systems are complicated with multiple components. The
major reason is that the order of time derivatives for each component is lowered and more
variables are introduced in the state-space formulation. Also, initial conditions are obvious

and very easy to assign. Basically, they are the values of the state variables at t = 0. Finally, the state-
space formulation allows easy implementation of numerical solutions because of its modular
structure. In fact, a standard procedure to solve an ordinary differential equation numerically
is to rewrite the ordinary differential equation into a state equation.
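For instance, an nth-order constant-coefficient ODE can be turned into a state equation mechanically via the companion (controllable canonical) form, which is essentially what ODE solver front ends do. A sketch; the helper name is mine:

```python
# Companion ("controllable canonical") form: the nth-order equation
# y^(n) + a_{n-1} y^(n-1) + ... + a_0 y = u becomes xdot = A x + B u with
# states x = (y, y', ..., y^(n-1)).
def companion(coeffs):          # coeffs = [a_0, a_1, ..., a_{n-1}]
    n = len(coeffs)
    A = [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n - 1)]
    A.append([-a for a in coeffs])   # last row encodes y^(n) = -sum + u
    B = [[0.0] for _ in range(n - 1)] + [[1.0]]
    return A, B

# Example: y'' + y' + 2y = u  ->  states (y, y')
A, B = companion([2.0, 1.0])
print(A)   # [[0.0, 1.0], [-2.0, -1.0]]
print(B)   # [[0.0], [1.0]]
```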

For complicated systems, the response tends to be equally complicated. Consequently, numerical results obtained from a state-space formulation become more difficult
to interpret, and the physical behavior of the system becomes less obvious to understand.
Without correct theoretical guidance, one can run hundreds of simulations based on the
state-space formulation resulting in meaningless results. In this case, ordinary differential
equations can complement state-space formulations by offering clear and simple physical
meaning. For example, concepts with great physical meaning, such as time constant, nat-
ural frequency, and viscous damping factor, are natural results from ordinary differential
equations. Later in this book, we will explain frequency response of a system when the
system is subjected to harmonic excitations. Within a given frequency range, a complicated
system can always be approximated in terms of a very simple low-order ordinary differential
equation. Therefore, we can analyze the simple ordinary differential equation instead of a
full-scale state equation to understand the physics.

Although the physics of a system does not depend on the method of modeling, the
state-space formulation and the approach of ordinary differential equations often use different
mathematical formats to explain the same physical behavior. For example, we will show later
in the book that the eigenvalues of A are, in fact, the characteristic roots of the corresponding ordinary differential equation. In most cases, the number of state variables will be equal to the highest order of time derivative that appears should the approach of ordinary differential equations be used.

Since the state-space formulation and the approach of ordinary differential equations
are two different ways to model the same system, it is often necessary to obtain ordinary
differential equations from a state-space formulation, and vice versa. The following two
sections discuss the conversion in detail.

4.3 Deriving State Equations from Ordinary Differential Equations

Let us use the following examples to demonstrate how to derive state equations and output
equations from ordinary differential equations.

Example 4.5 Consider the ordinary differential equation

ÿ(t) + ẏ(t) + 2y(t) = u(t) (4.79)

where the input is u(t) and the output is y(t). Derive the state equation and the output
equation.

For a second-order ordinary differential equation, we can define two state variables x1
and x2 as
x1 ≡ y, x2 ≡ ẏ (4.80)
Hence the two state equations will take the form of

ẋ1 = · · · , ẋ2 = · · · (4.81)

where ”· · · ” are linear combinations of state variables x1 , x2 , and input variable u(t). For
ẋ1 , the definition in (4.80) implies
ẋ1 = ẏ = x2 (4.82)
For ẋ2 , (4.79) and (4.80) result in

ẋ2 = ÿ = −ẏ(t) − 2y(t) + u(t) = −x2 − 2x1 + u(t) (4.83)

Finally, (4.82) and (4.83) can be written as a matrix state equation


\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{bmatrix} 0 & 1 \\ -2 & -1 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)    (4.84)
The output equation will take the form of
y = x_1 = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{bmatrix} 0 \end{bmatrix} u(t)    (4.85)
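The state-space matrices of (4.84) and (4.85) can be written out directly in code; the sample state and input below are arbitrary illustrative values, used only to evaluate the two equations once.

```python
# The matrices of (4.84)-(4.85), so that the state equation dx/dt = A x + B u
# and the output equation y = C x + D u can be evaluated directly.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])        # output y = x1
D = np.array([[0.0]])

x = np.array([[0.3], [-0.7]])     # an arbitrary state, for illustration
u = 2.0

xdot = A @ x + B * u              # [x2, -2*x1 - x2 + u]
y = C @ x + D * u

print(xdot.ravel(), y.ravel())
```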

Here is a more complicated example that involves derivative of the input variable.

Example 4.6 Consider the ordinary differential equation

z̈(t) + ż(t) + 2z(t) = u̇(t) (4.86)

where the input is u(t) and the output is z(t). Derive the state equation and the output
equation.

There are two issues that need to be resolved. First, what is the output z(t)? Second, the standard form of the state equation (4.12) does not allow derivatives of the input variable, such as u̇(t). To find out z(t), we note that the differential operators on the left sides of (4.86) and (4.79) are the same, i.e.,

L[\bullet] = \frac{d^2}{dt^2} + \frac{d}{dt} + 2    (4.87)
This means that the system remains the same but the input has changed¹. Furthermore, taking one more time derivative of (4.79) results in

\frac{d^3 y}{dt^3} + \frac{d^2 y}{dt^2} + 2\frac{dy}{dt} = \dot{u}(t)    (4.88)
Comparison of (4.88) with (4.86) gives

z(t) = ẏ(t) (4.89)

The block diagram in Fig. 4.6 summarizes the finding above. The system remains the same and is governed by the differential operator L[•]. When the input is u(t), the output is y(t). When the input becomes u̇(t), the output will be ẏ(t). This makes a lot of sense, because the system is linear, governed by a linear differential operator.

Since u̇(t) does not appear in the standard form of state-space equation (4.12), we cannot
use (4.86) directly to develop the state-space formulation. Instead, we need to replace (4.86)
with (4.79) and (4.89) so that u̇(t) does not appear in the state-space formulation. With
the state variables x1 and x2 defined in (4.80), we have demonstrated in Example 4.5 that
(4.79) is equivalent to the state equation
\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{bmatrix} 0 & 1 \\ -2 & -1 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)    (4.90)
¹This is like driving the same car (the same system) but on different road conditions (different inputs).

[Figure 4.6: Block diagram for input/output relation. The system L[·] maps the input u(t) to the output y(t), and the input u̇(t) to the output z(t) = ẏ(t).]

Furthermore, note that (4.89) now becomes the output equation. With the state variables x1 and x2 defined in (4.80), the output equation (4.89) becomes

z = \dot{y}(t) = x_2 = \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{bmatrix} 0 \end{bmatrix} u(t)    (4.91)

To summarize, when the input takes the form of derivatives (cf. (4.86)), one should first derive the state equation using the associated response without the derivative (cf. (4.79)). Then one can assign the derivative using the output equation (cf. (4.89)).
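A quick numerical check of this conclusion, with the illustrative input u(t) = sin t: simulating the state equation (4.90) with the output (4.91) should agree with simulating z̈ + ż + 2z = u̇(t) directly, since both start from zero initial conditions for this input.

```python
# Compare the output z = x2 of the state-space model (4.90)-(4.91), driven by
# u(t) = sin(t), against a direct simulation of z'' + z' + 2z = cos(t).
import math

dt, steps = 1e-3, 10000          # integrate over 0 <= t <= 10 s

x1 = x2 = 0.0                    # state-space model: x1' = x2, x2' = -2x1 - x2 + u
z = w = 0.0                      # direct model: z' = w, w' = -2z - w + cos(t)

for k in range(steps):
    t = k * dt
    u, du = math.sin(t), math.cos(t)
    x1, x2 = x1 + dt * x2, x2 + dt * (-2.0 * x1 - x2 + u)
    z, w = z + dt * w, w + dt * (-2.0 * z - w + du)

print(abs(x2 - z))               # small: the two outputs agree
```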

Example 4.7 Consider the following ordinary differential equation


\frac{d^4 y}{dt^4} + 2\frac{d^3 y}{dt^3} + \frac{dy}{dt} + 10y = 15\frac{d^2 u}{dt^2} - 3\frac{du}{dt}    (4.92)
where u(t) is the input and y(t) is the output. Represent (4.92) in terms of the state-space
formulation.

According to (4.6), the first step is to derive the state equation governing the associated response x(t) with u(t) being the input variable. In other words, x(t) satisfies

\frac{d^4 x}{dt^4} + 2\frac{d^3 x}{dt^3} + \frac{dx}{dt} + 10x = u(t)    (4.93)
For (4.93), let us define the following state variables
x_1 \equiv x(t), \quad x_2 \equiv \dot{x}(t), \quad x_3 \equiv \ddot{x}(t), \quad x_4 \equiv \frac{d^3 x(t)}{dt^3}    (4.94)

[Figure 4.7: Block diagram to determine the output equation. The system L[·] maps u(t) to x(t), −3u̇(t) to −3ẋ(t), and 15ü(t) to 15ẍ(t); the combined output is y(t).]

For x1 (t), the state equation is


ẋ1 ≡ ẋ(t) = x2 (4.95)

For x2 (t), the state equation is


ẋ2 ≡ ẍ(t) = x3 (4.96)

For x3 (t), the state equation is


\dot{x}_3 \equiv \frac{d^3 x(t)}{dt^3} = x_4    (4.97)
For x4 (t), the state equation is

\dot{x}_4 \equiv \frac{d^4 x(t)}{dt^4} = -2\frac{d^3 x}{dt^3} - \frac{dx}{dt} - 10x + u(t) = -2x_4 - x_2 - 10x_1 + u(t)    (4.98)

Rearranging (4.95) to (4.98) into a compact matrix form gives

\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -10 & -1 & 0 & -2 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u(t)    (4.99)

The second step is to assign the input 15\frac{d^2 u}{dt^2} - 3\frac{du}{dt} of (4.92) to an output equation. Figure 4.7 shows a block diagram to identify the output equation. When the input is u(t), the output is x(t). Since the system is linear, when the input is −3u̇(t), the output should be −3ẋ(t). When the input is 15ü(t), the output is 15ẍ(t). Therefore, the output y(t) corresponding to the input 15\frac{d^2 u}{dt^2} - 3\frac{du}{dt} should be

y(t) = 15\frac{d^2 x}{dt^2} - 3\frac{dx}{dt}    (4.100)
With the state variables defined in (4.94),

y(t) = 15x_3 - 3x_2 = \begin{bmatrix} 0 & -3 & 15 & 0 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}    (4.101)
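The matrices of (4.99) and (4.101) can be checked numerically. As a sanity check, the characteristic polynomial of A should recover the coefficients 1, 2, 0, 1, 10 of the left side of (4.93).

```python
# The state-space matrices of Example 4.7, with a check that the
# characteristic polynomial of A reproduces the ODE coefficients.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [-10, -1, 0, -2]], dtype=float)
B = np.array([[0], [0], [0], [1]], dtype=float)
C = np.array([[0, -3, 15, 0]], dtype=float)   # y = 15*x3 - 3*x2, as in (4.101)
D = np.array([[0]], dtype=float)

coeffs = np.poly(A)    # characteristic polynomial of A, from its eigenvalues
print(np.round(coeffs, 6))
```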

4.4 Deriving Ordinary Differential Equations from State Equations

The easiest way to derive ordinary differential equations from a state equation is to treat the derivative as a multiplication operator and to use Cramer's rule. In the following sections, we first review the differential operator and Cramer's rule. Then we explain how to derive ordinary differential equations from state equations.

4.4.1 Differential Operator d/dt

Let us use the following simple example to demonstrate that the differential operator d/dt behaves like a multiplication operator. Consider the following ordinary differential equation.

\frac{d^2 y}{dt^2} + 3\frac{dy}{dt} + 2y = 0    (4.102)
It can be rewritten as
\frac{d^2 y}{dt^2} + 2\frac{dy}{dt} + \frac{dy}{dt} + 2y = 0    (4.103)

For the first two terms, d/dt can be factored out as

\frac{d}{dt}\left(\frac{dy}{dt} + 2y\right) + \left(\frac{dy}{dt} + 2y\right) = 0    (4.104)

Note that \left(\frac{dy}{dt} + 2y\right) in (4.104) can be factored out as

\left(\frac{d}{dt} + 1\right)\left(\frac{dy}{dt} + 2y\right) = 0    (4.105)

or

\left(\frac{d}{dt} + 1\right)\left(\frac{d}{dt} + 2\right) y = 0    (4.106)
Alternatively, (4.102) can be rewritten as
\frac{d^2 y}{dt^2} + \frac{dy}{dt} + 2\frac{dy}{dt} + 2y = 0    (4.107)

For the first two terms, d/dt can be factored out as

\frac{d}{dt}\left(\frac{dy}{dt} + y\right) + 2\left(\frac{dy}{dt} + y\right) = 0    (4.108)

Note that \left(\frac{dy}{dt} + y\right) in (4.108) can be factored out as

\left(\frac{d}{dt} + 2\right)\left(\frac{d}{dt} + 1\right) y = 0    (4.109)
 
Comparison of (4.106) and (4.109) shows that differential operators, such as \left(\frac{d}{dt} + 2\right) and \left(\frac{d}{dt} + 1\right), have the commutative, associative, and distributive properties. Therefore, differential operators with constant coefficients behave like multiplication operators.
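Because such operators multiply like polynomials in s = d/dt, the commutativity can be checked by ordinary polynomial multiplication:

```python
# (d/dt + 1)(d/dt + 2) and (d/dt + 2)(d/dt + 1) correspond to the polynomial
# products (s + 1)(s + 2) and (s + 2)(s + 1); both give s^2 + 3s + 2.
import numpy as np

p1 = np.polymul([1, 1], [1, 2])   # (s + 1)(s + 2)
p2 = np.polymul([1, 2], [1, 1])   # (s + 2)(s + 1)

print(p1, p2)                     # both equal s^2 + 3s + 2 -> [1 3 2]
```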

4.4.2 Cramer's Rule

Consider a set of n simultaneous linear algebraic equations

\begin{bmatrix} f_{11} & f_{12} & \cdots & f_{1n} \\ f_{21} & f_{22} & \cdots & f_{2n} \\ \vdots & \vdots & & \vdots \\ f_{n1} & f_{n2} & \cdots & f_{nn} \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}    (4.110)

or in matrix form
Fx = b (4.111)
To solve for xj (j = 1, 2, . . . , n) from F and b, one can define a matrix
 
\mathbf{F}^{(j)} \equiv \begin{bmatrix} f_{11} & f_{12} & \cdots & b_1 & \cdots & f_{1n} \\ f_{21} & f_{22} & \cdots & b_2 & \cdots & f_{2n} \\ \vdots & \vdots & & \vdots & & \vdots \\ f_{n1} & f_{n2} & \cdots & b_n & \cdots & f_{nn} \end{bmatrix}    (4.112)

Note that F(j) is obtained by replacing the j-th column of F by the vector b. Cramer's rule says that

x_j = \frac{\det \mathbf{F}^{(j)}}{\det \mathbf{F}}, \quad j = 1, 2, \ldots, n    (4.113)

To use Cramer's rule for deriving ordinary differential equations from the state equation, F(j) and det F will involve differential operators. Therefore, a better form will be

\left[\det \mathbf{F}\right] x_j = \det \mathbf{F}^{(j)}, \quad j = 1, 2, \ldots, n    (4.114)
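A small numerical illustration of Cramer's rule (4.113), using an arbitrary 2×2 system:

```python
# Solve F x = b by Cramer's rule: replace the j-th column of F with b and
# take the ratio of determinants, then compare with a direct solve.
import numpy as np

F = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.empty(2)
for j in range(2):
    Fj = F.copy()
    Fj[:, j] = b                      # F(j): j-th column replaced by b
    x[j] = np.linalg.det(Fj) / np.linalg.det(F)

print(x)
print(np.linalg.solve(F, b))          # same answer from a direct solve
```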

4.4.3 Examples

The following examples demonstrate how Cramer's rule (4.114) can be used to derive ordinary differential equations from state equations.

Example 4.8 Consider the following state equation


\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{bmatrix} 0 & 1 \\ -2 & -1 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)    (4.115)

Determine the ordinary differential equations governing x1 and x2 .

Since x1 and x2 show up on both sides of (4.115), they need to be combined. Hence, let us rewrite (4.115) as

\begin{bmatrix} \frac{d}{dt} & 0 \\ 0 & \frac{d}{dt} \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{bmatrix} 0 & 1 \\ -2 & -1 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u(t)    (4.116)

Moving the state vectors to the left side of the equation leads to
 
\begin{bmatrix} \frac{d}{dt} & -1 \\ 2 & \frac{d}{dt} + 1 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ u(t) \end{pmatrix}    (4.117)

Comparison of (4.117) with (4.110) and use of Cramer’s rule (4.114) for j = 1 results in

\det\begin{bmatrix} \frac{d}{dt} & -1 \\ 2 & \frac{d}{dt} + 1 \end{bmatrix} x_1 = \det\begin{bmatrix} 0 & -1 \\ u(t) & \frac{d}{dt} + 1 \end{bmatrix}    (4.118)

Recall that all the differential operators in (4.118) behave like multiplication operators.
Expansion of (4.118) gives

\left[\frac{d}{dt}\left(\frac{d}{dt} + 1\right) + 2\right] x_1 = u(t)    (4.119)
or
ẍ1 + ẋ1 + 2x1 = u(t) (4.120)

Similarly, comparison of (4.117) with (4.110) and use of Cramer’s rule (4.114) for j = 2
results in
\det\begin{bmatrix} \frac{d}{dt} & -1 \\ 2 & \frac{d}{dt} + 1 \end{bmatrix} x_2 = \det\begin{bmatrix} \frac{d}{dt} & 0 \\ 2 & u(t) \end{bmatrix}    (4.121)

Expansion of (4.121) gives


   
\left[\frac{d}{dt}\left(\frac{d}{dt} + 1\right) + 2\right] x_2 = \frac{du(t)}{dt}    (4.122)
or
ẍ2 + ẋ2 + 2x2 = u̇(t) (4.123)

Note that x1 and x2 represent two different states of the same system. Therefore, x1 and x2 satisfy the same linear operator \left(\frac{d^2}{dt^2} + \frac{d}{dt} + 2\right) on the left side of (4.120) and (4.123) but are subjected to different inputs in (4.120) and (4.123).
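This is also a good place to verify the claim from Section 4.2 that the eigenvalues of A are the characteristic roots of the corresponding ordinary differential equation: for Example 4.8, both are the roots of s² + s + 2 = 0.

```python
# Eigenvalues of the A matrix of (4.115) versus the roots of the
# characteristic polynomial s^2 + s + 2 from (4.120).
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])

eig = np.sort_complex(np.linalg.eigvals(A))      # eigenvalues of A
roots = np.sort_complex(np.roots([1, 1, 2]))     # roots of s^2 + s + 2

print(eig)
print(roots)
```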

[Figure 4.8: A very interesting spring-mass-damper system: a damper B (from the wall to the point x1) in series with a spring k (from x1 to the mass m at x2), with a force source Fs applied to the mass.]

Example 4.9 Let us reconsider the flagship example redrawn in Fig. 4.8. According to
(4.73), the state equation governing v2 and Fk is
 
\frac{d}{dt}\begin{pmatrix} v_2 \\ F_k \end{pmatrix} = \begin{bmatrix} 0 & -\frac{1}{m} \\ k & -\frac{k}{B} \end{bmatrix} \begin{pmatrix} v_2 \\ F_k \end{pmatrix} + \begin{pmatrix} \frac{1}{m} \\ 0 \end{pmatrix} F_s(t)    (4.124)
Derive the ordinary differential equations governing v2 and Fk.

First, let us rewrite (4.124) in an operator form

\begin{bmatrix} \frac{d}{dt} & 0 \\ 0 & \frac{d}{dt} \end{bmatrix} \begin{pmatrix} v_2 \\ F_k \end{pmatrix} = \begin{bmatrix} 0 & -\frac{1}{m} \\ k & -\frac{k}{B} \end{bmatrix} \begin{pmatrix} v_2 \\ F_k \end{pmatrix} + \begin{pmatrix} \frac{1}{m} \\ 0 \end{pmatrix} F_s(t)    (4.125)
After the state variables are moved to the left side, (4.125) becomes
 
\begin{bmatrix} \frac{d}{dt} & \frac{1}{m} \\ -k & \frac{d}{dt} + \frac{k}{B} \end{bmatrix} \begin{pmatrix} v_2 \\ F_k \end{pmatrix} = \begin{pmatrix} \frac{1}{m} F_s(t) \\ 0 \end{pmatrix}    (4.126)
Comparison of (4.126) with (4.110) and use of (4.114) for j = 1 result in

\det\begin{bmatrix} \frac{d}{dt} & \frac{1}{m} \\ -k & \frac{d}{dt} + \frac{k}{B} \end{bmatrix} v_2 = \det\begin{bmatrix} \frac{1}{m} F_s(t) & \frac{1}{m} \\ 0 & \frac{d}{dt} + \frac{k}{B} \end{bmatrix}    (4.127)


Expansion of (4.127) gives

\left[\frac{d}{dt}\left(\frac{d}{dt} + \frac{k}{B}\right) + \frac{k}{m}\right] v_2 = \left(\frac{d}{dt} + \frac{k}{B}\right) \frac{F_s(t)}{m}    (4.128)

or

\ddot{v}_2 + \frac{k}{B}\dot{v}_2 + \frac{k}{m} v_2 = \frac{1}{m}\dot{F}_s(t) + \frac{k}{Bm} F_s(t)    (4.129)
Similarly, comparison of (4.126) with (4.110) and use of (4.114) for j = 2 result in

\det\begin{bmatrix} \frac{d}{dt} & \frac{1}{m} \\ -k & \frac{d}{dt} + \frac{k}{B} \end{bmatrix} F_k = \det\begin{bmatrix} \frac{d}{dt} & \frac{1}{m} F_s(t) \\ -k & 0 \end{bmatrix}    (4.130)

Expansion of (4.130) gives


\ddot{F}_k + \frac{k}{B}\dot{F}_k + \frac{k}{m} F_k = \frac{k}{m} F_s(t)    (4.131)
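The result can be checked numerically for sample values of m, B, and k (the numbers below are illustrative): the characteristic polynomial of the A matrix in (4.124) should match s² + (k/B)s + k/m, the common left side of (4.129) and (4.131).

```python
# Characteristic polynomial of the A matrix of (4.124) versus the
# coefficients 1, k/B, k/m of the derived second-order ODEs.
import numpy as np

m, B, k = 2.0, 4.0, 8.0              # illustrative parameter values
A = np.array([[0.0, -1.0 / m],
              [k, -k / B]])

coeffs = np.poly(A)                  # monic characteristic polynomial of A
expected = [1.0, k / B, k / m]       # coefficients of s^2 + (k/B)s + k/m

print(np.round(coeffs, 6), expected)
```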
B m m

4.5 Practice Problems

1. Problem 2 of Chapter 1 is now considered again here; see Fig. 4.9. The equation of
motion is
m\ddot{x} + \frac{c}{4}\dot{x} + \frac{k}{4} x = mg    (4.132)
with initial conditions x(0) = ẋ(0) = 0.

(a) Use the displacement x and velocity v ≡ ẋ as state variables. Let g be the input,
and the damping force Fd of the damper and the reaction at the pivot point as
the outputs. Write down the state equation, the output equation, and the initial
conditions.
(b) Use the angular displacement θ and angular momentum H about the pivot point
as state variables. Let g be the input, and the damping force Fd of the damper as
the output. Write down the state equation, the output equation, and the initial
conditions.

[Figure 4.9: A massless mechanical lever, with a damper c and a spring k attached at distances a and 2a, a mass m with displacement x, and gravity g.]

[Figure 4.10: A cylinder pulled by a spring: a cylinder of radius r and mass m with displacement x(t), a spring k driven by the input u(t), a damper c, and gravity g.]



[Figure 4.11: A rolling cylinder with a spring and a damper: a cylinder of radius r and mass m with displacement x(t), a spring k, a damper c, an applied force P(t), and gravity g.]

2. Problem 3 of Chapter 1 is considered again here; see Fig. 4.10. The equation of motion
is
\frac{3}{2} m\ddot{x} + c\dot{x} + kx = c\dot{u} + ku    (4.133)
with initial conditions x(0) = u(0) and ẋ(0) = 0. Let u(t) be the input and the
displacement x be the output. Write down the state equation, the output equation,
and the initial conditions. Do you find it easy to assign the initial conditions?

3. Consider the system shown in Fig. 4.11, where the cylinder of radius r and mass m
is subjected to an external force excitation P (t). The cylinder undergoes pure rolling
and rotates freely about its axis. Also, the cylinder is constrained by a massless spring
with spring constant k and a massless dashpot with damping coefficient c. The motion
of the center of the cylinder is described by displacement x(t), which is measured with
respect to the undeformed position of the spring. Initially, the cylinder has an initial
angular velocity ω0 and the spring is undeformed. (Hint: The mass moment of inertia of the cylinder about the center is \frac{1}{2} mr^2.) Given that x(t) is governed by

\frac{3}{2} m\ddot{x} + c\dot{x} + kx = P    (4.134)
with the initial conditions
x(0) = 0, ẋ(0) = rω0 (4.135)
answer the following questions:

(a) Consider P (t) as the input, and choose x and v ≡ ẋ as state variables. Derive the
state equation describing the motion of the system.
(b) Let the angular velocity ω and the spring force Fk be the desired outputs. Derive
the output equation.

4. Consider the following ordinary differential equation

ẍ + ẋ + 4x = 2u̇ − 5u (4.136)

(a) Rewrite the equation in a state-space representation.


(b) Determine the undamped natural frequency ωn and viscous damping factor ζ.

5. Consider the state equation


\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(t)    (4.137)

where x1 and x2 are state variables, and u(t) is the input.

(a) Let the output vector be

\mathbf{y} = \begin{pmatrix} 2x_1 - x_2 \\ \dot{x}_2 \end{pmatrix}    (4.138)
Write down the output equation.
(b) Derive the ordinary differential equation governing x2 .
(c) What is the advantage of using state equations over ordinary differential equa-
tions? When should we use state equations?

6. Consider the following state-space representation


\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{bmatrix} -2 & -1 \\ 3 & 0 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 0 \end{pmatrix} u(t)    (4.139)

(a) Derive the equation of motion governing x1 .


(b) What are the initial conditions for part (a), if x1 (0) = 0 and x2 (0) = 1?
(c) Derive the equation of motion governing x2 .

[Figure 4.12: A simple mechanical system: a cart of mass m with position x(t) and velocity v(t), connected through a spring k and a damper B to an input end with displacement U(t) and velocity V(t).]

7. A mechanical system consists of a cart, a spring, and a damper as shown in Fig. 4.12.
The cart has mass m, the spring has stiffness k, and the damper has damping coefficient
B. The input at the spring can be a displacement input U (t) or a velocity input
V (t), where V (t) ≡ U̇ (t). Let x(t) and v(t) be the position and velocity of the mass,
respectively.

(a) Derive the equation of motion governing x(t) using U (t) as the input.
(b) Derive state and output equations using U (t) as input and x(t) as output.
(c) What would be the equation of motion for x(t) in part (a), if V (t) was the input?
Could we put it into a standard state representation? What difficulty do we
encounter?
(d) Take a time derivative of the equation of motion derived in part (a), and write
down the equation of motion governing v(t) using V (t) as the input.
(e) Derive state and output equations using V (t) as input and v(t) as output.
(f) What would be the equation of motion for v(t) if U (t) was the input? Could we
put it into a standard state representation? If yes, show the state and output
equations.
(g) What do we learn from this problem? Do you think it is wiser to model the input as a displacement input or a velocity input?

[Figure 4.13: A simple hydraulic system: a pump with prescribed pressure Ps(t), a pipe with fluid inertance I and resistance R1, and a tank with capacitance C2.]

8. The hydraulic system shown in Fig. 4.13 is driven by a pump with prescribed pressure
Ps (t). The state equation is known to take the following form
\frac{d}{dt}\begin{pmatrix} p_C \\ Q_I \end{pmatrix} = \begin{bmatrix} -1 & 1 \\ -4 & -4 \end{bmatrix} \begin{pmatrix} p_C \\ Q_I \end{pmatrix} + \begin{pmatrix} 0 \\ 4 \end{pmatrix} P_s(t)    (4.140)
where pC and QI are the pressure in the tank and the flow rate in the pipe, respectively.
Derive the ordinary differential equation governing QI .

9. Write the following differential equation in state-space form.


\frac{d^2 y}{dt^2} + 4\frac{dy}{dt} + 2y = u + 7\frac{du}{dt}    (4.141)

10. Figure 4.14 shows a mechanical model to simulate motion of a car wheel. The mass
of the wheel is m. The wheel is attached to the car body through a damper with
viscous damping coefficient c, and to the ground through a stiffness k (simulating the
tire stiffness). The body of the car is assumed to be fixed, and the wheel is subjected
to road excitation u(t). Also, the gravitational acceleration g is present.

(a) Show that the equation of motion is

mẍ + cẋ + kx = ku(t) − mg (4.142)

(b) Put the equation of motion (4.142) into a state equation (i.e., first-order matrix
ODE). Clearly explain your state variables and input variables.
(c) The output variables of interest are the spring force Fk and the acceleration of
the mass ẍ. Derive the output equation (matrix equation) in terms of the state
variable vector and input variable vector.

[Figure 4.14: Model of a car wheel: the wheel mass m with displacement x(t) is attached to the fixed car body through a damper c and to the road input u(t) through the tire stiffness k, with gravity g acting.]


Chapter 5

One-Port Elements

Many real engineering systems consist of components that input, output, and transform
various forms of energy. In this case, each form of energy is called a domain, and such
systems are called multi-domain systems. For example, the engine startup shown in Fig. 1.1
involves mechanical components, electrical components, and hydraulic components.

5.1 Multi-Domain Systems

When modeling multi-domain systems, one major difficulty is how the governing equation
can be derived systematically and efficiently. For example, Fig. 5.1 shows an electric motor
hoisting a mechanical system with two heavy masses and two springs. The system involves
both electrical and mechanical components. Should the electric component be absent (e.g.,
Fig. 4.1), one can follow the flowchart in Fig. 5.2 to draw a free-body diagram for each point
mass. Then one can apply Newton’s second law to each point mass to derive a governing
equation, which is a second-order ordinary differential equation. Finally, the two governing
equations can be combined into a fourth-order ordinary differential equation, whose solution
determines the response of the system.

This procedure, however, will not be successful, when other domains are present in the


[Figure 5.1: An electric motor hoisting a mechanical system: a motor driven by a voltage source Vs raises masses m1 and m2 through springs k1 and k2.]

system. Let us consider the electric domain of Fig. 5.1. Free-body diagrams are meaningless
for electrical domains. Other physical laws, instead of F = ma, should be used to describe
the unique physics associated with the electrical domain. Finally, when the system becomes
complicated, ordinary differential equations will be inefficient and awkward as indicated in
Chapter 4.

Figure 5.2 shows a possible solution to overcome these difficulties. The first step is to rewrite all the physical laws in various domains in a unified format called one-port elements. These one-port elements can take three forms: generalized resistance, generalized capacitance, and generalized inductance. For example, a damper can be treated as a generalized resistor, a linear spring can be treated as a generalized capacitor, and a point mass can be treated as a generalized inductor. Moreover, the governing equation of a one-port element has a first-order time derivative at most. The second step is to map out the relationship among all the one-port elements using a tool called linear graphs. The role of linear graphs in system analysis is equivalent to that of free-body diagrams in particle dynamics, except that linear graphs are valid for any domain because one-port elements have unified formats in all domains.
domains. The third step is to derive state equations based on the linear graphs and one-port
elements. Since one-port elements have at most one time derivative, they are very convenient
for derivation of state equations, which are the most efficient format to describe dynamics
of complicated systems.

[Figure 5.2: Flow chart to deal with multi-domain systems. Free-body diagrams do not work for multiple domains; one-port elements provide a unified description for all domains. F = ma requires other physics laws; linear graphs describe the system configuration systematically. Second-order ODEs cannot be solved systematically when the system becomes large; state-space formulations allow systematic solutions through computers, followed by solution by integration.]



5.2 One-Port Elements

In this book, we will focus on components in the following five domains: translational, rotational, electrical, hydraulic, and thermal. Translational components include point masses, linear springs, and linear dashpots. Rotational components include rotary inertias, torsional springs, and drag cups. Electrical components include inductors, resistors, and capacitors. Hydraulic components include tanks, fluid inertia in pipes, and fluid resistance in valves and orifices. Thermal components include heat conduction and specific heat.

Since each of these components is governed by a different physical law, how can we develop a universal way to model all these elements? We first note that all these components either store energy or dissipate energy. Therefore, energy can be a universal quantity spanning all these domains. Since the system is dynamic, it is better to start with power instead of energy. For example, the power P in a mechanical system is given by

(power) = (force) × (velocity), \quad P = F V    (5.1)

where F is the force and V is the velocity. In this case, we can choose F and V as variables to describe components in the mechanical domain. F and V are called power variables. As we will see later in the chapter, power variables lead to only first-order derivatives in the governing equation of each component. Therefore, power variables are ideal candidates to serve as state variables.

5.2.1 Electrical Elements

For electrical systems, power is calculated through

(power) = (voltage drop) × (current), \quad P = v i    (5.2)

where v is the voltage drop across a circuit element and i is the current through the element.
Therefore, v and i are the power variables for electrical elements. There are three types of
elements.

[Figure 5.3: Three linear electric one-port elements. A current i flows from node 1 (voltage V1) to node 2 (voltage V2). Linear inductor L: v = L di/dt. Linear capacitor C: i = C dv/dt. Linear resistor R: v = Ri.]



In Figure 5.3, the linear electric inductor has inductance L in Henry. A current flows
from node 1 to node 2 (i.e., the positive direction). The voltage at nodes 1 and 2 is V1 and
V2 , respectively. The voltage drop is

v ≡ V1 − V2 (5.3)

The law of inductance is

v = L \frac{di}{dt}    (5.4)
Similarly, the linear electric capacitor in Fig. 5.3(b) has capacitance C in Farad. The current
flows from node 1 to node 2 as the positive direction. Let the voltage drop be v ≡ V1 − V2
defined in (5.3). Then the charge stored Q in the capacitor is Q = Cv, and its first time
derivative gives
i = C \frac{dv}{dt}    (5.5)
Finally, the linear resistor in Fig. 5.3 has resistance R. The voltage drop and the current are
related through Ohm’s law, i.e.,
v = Ri (5.6)

The formulations above through power variables have several things worth noting. First,
the governing equations (5.4), (5.5), and (5.6) have only at most first-order time derivatives.
Therefore, the power variables, specifically the current in inductors and voltage in capacitors,
are natural candidates for state variables. Second, the formulations in (5.4), (5.5), and (5.6)
are linear, when L, C, and R are constants. Note that these circuit elements can be nonlinear
in nature (e.g., resistance depending on the current instead of being a constant). For this
book, we will only focus on linear circuit elements.
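As a tiny illustration of the capacitor law (5.5): driving a capacitor with a constant current source makes the voltage ramp linearly, v(t) = (i/C)t. The component values below are illustrative.

```python
# Integrate the capacitor law dv/dt = i/C for a constant source current.
C = 1e-6                 # capacitance in farads (illustrative)
i = 1e-3                 # prescribed source current in amperes (illustrative)

v, dt = 0.0, 1e-6
for _ in range(1000):    # integrate dv/dt = i/C over 1 ms
    v += dt * (i / C)

print(round(v, 6))       # (i/C) * 1 ms = 1.0 V
```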

Under the power variables v and i, there are two types of idealized power sources driving
the circuits: voltage sources and current sources. For voltage sources, the voltage is given
or prescribed. For example, a 9-volt battery is a voltage source. For current sources, the
current is given or prescribed. Many power supplies that are commercially available can
provide a prescribed current to some extent.

Ideally, a voltage source gives a prescribed voltage no matter how much the current
flows out of the source. In reality, this cannot be achieved. Nevertheless, the voltage source
is a simple way to model a realistic power source, such as a 9-volt battery. Similarly, an ideal

[Figure 5.4: Three linear translational one-port elements. Linear spring k between nodes at x1 and x2 carrying force F: dF/dt = kv. Viscous damper B: F = Bv. Mass m at position x: F = m dv/dt.]

current source gives a prescribed current no matter how much voltage develops across the source. In reality, there will always be a limit on how high the voltage can reach, because the source can only supply a finite amount of power.

5.2.2 Translational Elements

During translational motion, the power is given by

(power) = (force) × (velocity), \quad P = F v    (5.7)

where F is the force and v is the velocity. Therefore, F and v are the power variables for
translational motions.

There are three types of translational elements: linear springs, lumped masses, and
dashpots. As shown in Figure 5.4, a linear spring has spring constant k. Let us assume that
a tensile force is transmitted (or flows) from node 1 to node 2. In addition, we can define

the relative displacement x as


x ≡ x2 − x1 (5.8)
where x1 and x2 are the position of nodes 1 and 2, respectively. According to Hooke’s law

F = kx (5.9)

Note that the sign is consistent in (5.8) and (5.9), because a tensile force F will cause the spring to stretch, resulting in a positive relative displacement x. Note that the power variable
v does not appear in (5.9). To derive a governing equation with both power variables, one
can take the first time derivative of (5.9) to obtain

\frac{dF}{dt} = kv    (5.10)
where v is the relative velocity defined as

v ≡ v2 − v1 (5.11)

Similarly, the viscous damper in Fig. 5.4 has damping coefficient B. With the relative
velocity defined in (5.11), the governing equation of the viscous damper becomes

F = Bv (5.12)

Finally, the point mass in Fig. 5.4 has mass m. If the motion of the point mass takes place in an inertial frame with constant velocity vref, one defines the relative velocity as

v ≡ vmass − vref    (5.13)

where vmass is the velocity of the mass. Then Newton's second law becomes

F = m \frac{dv}{dt}    (5.14)

Like electrical elements, the formulations above for translational elements have several
things worth noting. First, the governing equations (5.10), (5.12), and (5.14) have only at
most first-order time derivatives. (If F and x were used as variables, the Newton’s law in
(5.14) would have a time derivative to the second order.) Therefore, the power variables,
specifically the velocity of the mass and the force in the spring, are natural candidates for

state variables. Second, the formulations in (5.10), (5.12), and (5.14) are linear. Note that
nonlinear translational elements appear very often in practice. For example, a bevel washer
presents nonlinear stiffness. The drag force from the air, often approximated by C_D v^2, is
nonlinear in nature. Again, we will only focus on linear translation elements in this book.
Third, one can see that the power-variable formulations come out naturally for electrical
elements. For translational elements, we need to do some fine tunings (e.g., velocity relative
to an inertial frame) in order to put the governing equation under the framework of power
variables. Nevertheless, the fine tunings are worthy because we have a uniform formulation
for both electrical and translational elements.
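As a sketch of how these power variables serve as state variables: for a mass attached to ground through a spring and a damper and pushed by a constant force Fs, the element laws (5.10), (5.12), and (5.14) combine into two first-order state equations. The parameter values below are illustrative.

```python
# Power-variable states: the mass velocity v and the spring force Fk obey
#   m dv/dt = Fs - Fk - B*v,   dFk/dt = k*v.
m, B, k, Fs = 1.0, 0.5, 4.0, 2.0   # illustrative parameter values

v, Fk = 0.0, 0.0                   # mass velocity and spring force states
dt = 1e-3
for _ in range(50000):             # forward-Euler for 50 s, enough to settle
    v, Fk = v + dt * (Fs - Fk - B * v) / m, Fk + dt * k * v

print(round(v, 4), round(Fk, 4))   # at rest the spring carries the whole force Fs
```

At steady state the mass stops (v = 0) and the spring force balances the applied force, which is exactly what the power variables report directly.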

Under the power variables F and v, there are two types of idealized power sources for
translational elements: force sources and velocity sources. For force sources, the force is
given or prescribed. For example, the thrust of an airplane engine is a force source. During
the takeoff, the engine gives a constant force to accelerate the airplane. For velocity sources,
the velocity is given or prescribed. The velocity source is more difficult to visualize. Here are
two examples. The first example is the landing of an airplane. During landing, an airplane
has to follow a certain velocity profile to descend. Especially landing in bad weather, one can
hear the pilots adjusting the thrust and flaps so that the airplane can follow the prescribed
velocity profile to land. Another example is shakers. Shakers are either electromagnetic or
hydraulic device to produce a certain velocity or acceleration profiles. Therefore, shakers can
simulate excitations from earthquake or during a flight. Test objects, such as a relay switch
in nuclear power plants, can be mounted on shakers to test whether they remain functional
during a simulated earthquake.

Ideally, a force source gives a prescribed force no matter how large the velocity becomes. Similarly, an ideal velocity source gives a prescribed velocity no matter how much force comes out of the source. In reality, this cannot be achieved, because all sources have a limit on their power.

5.2.3 Rotational Elements

For rotational motion, the power is given by

(power) = (torque) × (angular velocity), \quad P = T \omega    (5.15)

where T is the torque and ω is the angular velocity. Therefore, T and ω are the power
variables for rotational motions.

Figure 5.5: A torsional spring


Figure 5.6: A rotary inertia element

There are also three types of rotational elements: torsional springs, lumped rotary
inertial elements, and torsional dampers (also known as drag cups). Figure 5.5 shows a
torsional spring with spring constant kr . One notes that a torque T is transmitted (or flows)
from end 1 to end 2 of the torsional spring. In addition, we can define a relative angular
displacement θ as
θ ≡ θ2 − θ1 (5.16)

where θ1 and θ2 are the angular positions of ends 1 and 2, respectively. Note that the angles θ1 and θ2 are both measured in the counterclockwise sense relative to a reference. According to Hooke's law
T = kr (θ2 − θ1 ) = kr θ (5.17)

where θ ≡ θ2 − θ1 is the relative angular displacement between the two ends of the torsional
spring. Note that the sign is consistent in (5.16) and (5.17), because the torques T in Fig. 5.5
will cause the torsional spring to open resulting in a positive relative angular displacement
θ. Note that the power variable ω does not appear in (5.17). To derive a governing equation

Figure 5.7: A rotational damper (aka a drag cup)

with both power variables, one can take the first time derivative of (5.17) to obtain
dT/dt = kr ω    (5.18)
where ω is the relative angular velocity defined as

ω ≡ ω2 − ω1 (5.19)

where ω1 ≡ θ̇1 and ω2 ≡ θ̇2 .

Rotary inertial elements appear in rotational motion of a rigid body. For example,
Fig. 5.6 shows a rigid body in the form of a slender rod rotating about a hinge (or a fixed
point). The rod has a mass moment inertia I. Application of the Newton’s second law in
rotational motion leads to

T =I (5.20)
dt
where ω is the angular velocity of the rod. Another way to look at ω is to think about the
definition in (5.19) with ω1 = 0, i.e., referenced to an inertia frame with no rotation.
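As a minimal numerical sketch of (5.20), one can integrate the rotary inertia equation with a forward-Euler step; the inertia and torque values below are assumed for illustration, not taken from Fig. 5.6:

```python
# Forward-Euler integration of T = I * domega/dt for a rotary inertia.
# The inertia and torque values are illustrative assumptions.
I = 0.02       # mass moment of inertia, kg*m^2 (assumed)
T = 0.5        # constant applied torque, N*m (assumed)
dt = 1e-3      # time step, s
omega = 0.0    # initial angular velocity, rad/s

for _ in range(1000):        # integrate over 1 s
    omega += (T / I) * dt    # domega/dt = T / I, from (5.20)

print(omega)   # omega = (T/I)*t, about 25 rad/s at t = 1 s
```

With a constant torque, the angular velocity grows linearly, ω = (T/I)t, which the loop reproduces.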

Finally, Fig. 5.7 shows a torsional damper, also known as a drag cup in industry. The
torsional damper has damping coefficient Br . The torsional damper turns out to be counter-
intuitive especially for its sign convention. First of all, the two ends of the drag cup experience
angular velocities ω1 and ω2 , both referring to the same positive direction. (For example,
if you look at the drag cup from the right side, the positive direction is counter-clockwise.)
The torque at the right end is positive in the counter-clockwise direction as viewed from the
right end. The torque at the left end is positive in the counter-clockwise direction as viewed

from the left end. Under this sign convention, the governing equation of the torsional
damper becomes
T = Br ω (5.21)

where the relative velocity ω is defined in (5.19).

Like the translational elements, the formulations above for rotational elements have
several things worth noting. First, the governing equations (5.18), (5.20), and (5.21) have
at most first-order time derivatives. As one can see, if T and θ were used as variables,
the equation of motion in (5.20) would have a time derivative to the second order. Therefore,
the power variables, specifically the angular velocity of the rigid body and the torque in the
torsional spring, are natural candidates for state variables.

Second, the formulations in (5.18), (5.20), and (5.21) are linear. Note that nonlinear
rotational elements appear very often in practice. For example, gravity acting on a
pendulum provides a restoring torque proportional to sin θ, where θ is the angle between the
pendulum and the vertical downward direction; the effective stiffness is therefore nonlinear.
Again, we will only focus on linear rotational elements in this book.

Finally, we once again see that the power-variable formulations come out naturally
for electrical elements, but not so much for the rotational elements (e.g., the rotary inertia
element shown in (5.20)). Some fine-tuning (e.g., angular velocity relative to an inertial
frame) and efforts to overcome our intuition (e.g., the sign convention of the drag cup in
Fig. 5.7) are needed in order to put the governing equations under the framework of power
variables.

Under the power variables T and ω, there are two types of idealized power sources for
rotational elements: torque sources and angular velocity sources. For torque sources, the
torque is given or prescribed. For example, the torque out of an engine can be considered
as an ideal torque source. For angular velocity sources, the angular velocity is given or
prescribed. A good example of angular velocity sources is electric mixers used in kitchens.
They may have three different speeds: low, medium, and high. When you select a speed,
the mixer will provide a constant speed to mix food. Ideally, a torque source gives the same
prescribed torque no matter what the angular velocity is. Similarly, an ideal angular velocity
source gives a prescribed angular velocity no matter how much the torque needs to come

Figure 5.8: Pressure and velocity of a pipe flow

out of the source. In reality, these conditions cannot be achieved, because all sources have
limited power. If you try to mix a large amount of heavy dough using a small mixer
spinning at a high speed, the mixer may not be able to maintain that speed because it does
not have the power.

5.2.4 Fluid Elements

Fluid elements are primarily associated with one-dimensional, incompressible pipe flows.
Our first task is to figure out the two power variables. To do so, let us consider a pipe flow
shown in Fig. 5.8. The pipe has a cross-sectional area A. The fluid in the pipe flows with
velocity v from the left side (upstream) to the right side (downstream) resulting in a flow
rate Q, whose unit is volume (e.g., ft³) per second. Moreover, the flow rate is found as

Q=A·v (5.22)

Now let us focus on a segment of fluid in the pipe as shown in Fig. 5.8. The upstream
pressure is p1 and the downstream pressure is p2 . For the fluid segment identified, it is
subjected to a net force
F = p1 A − p2 A = pA (5.23)

where p is a pressure drop defined as

p ≡ p1 − p2 (5.24)

Figure 5.9: Fluid inertance element

The power P is then


P = F · v = p (Av) = p · Q (5.25)

or
P = p × Q    (5.26)

that is, (power) = (pressure) × (flow rate).
Hence the power variables are pressure drop p and flow rate Q. Note that the pressure drop
is the upstream pressure minus the downstream pressure.
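As a quick numerical check of (5.22)–(5.25), the power carried by the flow can be computed either as force times velocity or as pressure drop times flow rate; the pipe area, velocity, and pressures below are made-up illustrative values:

```python
# Power variables of a pipe flow; all numbers are illustrative assumptions.
A = 0.01                 # cross-sectional area, m^2 (assumed)
v = 2.0                  # flow velocity, m/s (assumed)
p1, p2 = 3.0e5, 2.8e5    # upstream/downstream pressures, Pa (assumed)

Q = A * v                # flow rate, (5.22)
p = p1 - p2              # pressure drop, (5.24)
F = p * A                # net force on the fluid segment, (5.23)

P_mech = F * v           # power as force times velocity
P_fluid = p * Q          # power as pressure drop times flow rate, (5.25)
print(P_mech, P_fluid)   # both ≈ 400 W
```

The two products agree, which is exactly why p and Q qualify as the power variables of the fluid domain.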

There are three types of fluid elements: fluid inertance, fluid capacitance, and fluid
resistance. Figure 5.9 shows the setup leading to fluid inertance. Basically, the fluid inertance
results from Newton's second law. Let us consider a column of fluid in a pipe (cf. the
right drawing of Fig. 5.9). The fluid column has length l and cross-sectional area A. The
fluid in the column has density ρ and flows with velocity v. At the same time, the upstream
and downstream pressures are p1 and p2. Application of Newton's second law to the fluid column
results in

(p1 − p2) A = (ρAl) dv/dt    (5.27)

or

p1 − p2 = (ρl/A) d(Av)/dt    (5.28)

Through use of the power variables, (5.28) is expressed as

p = I dQ/dt    (5.29)

Figure 5.10: Fluid capacitance element

where I is the fluid inertance defined as


I = ρl/A    (5.30)
The notation of fluid inertance is an inductance symbol (cf. the left drawing of Fig. 5.9)
with flow rate as well as upstream and downstream pressures annotated.
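For a feel of the numbers in (5.29)–(5.30), one can evaluate the inertance of a water-filled pipe; the pipe dimensions and the applied pressure drop are assumed values:

```python
# Fluid inertance of a water-filled pipe, (5.30); values are illustrative.
rho = 1000.0    # water density, kg/m^3
l = 5.0         # pipe length, m (assumed)
A = 0.01        # cross-sectional area, m^2 (assumed)

I = rho * l / A       # fluid inertance, (5.30)
p = 1.0e4             # step pressure drop, Pa (assumed)
dQ_dt = p / I         # from p = I * dQ/dt, (5.29)
print(I, dQ_dt)       # I ≈ 5.0e5 kg/m^4; dQ/dt ≈ 0.02 (m^3/s)/s
```

A long, thin pipe thus has a large inertance: the same pressure drop produces a slower change in flow rate.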

The fluid capacitance element results from the gravitational potential. Figure 5.10
shows an open storage tank with a cross-sectional area A. The fluid inside the tank has
density ρ. Moreover, the fluid height h varies with time because the flow rate into the tank
is Q. From the continuity of incompressible flows,
Q = d(Ah)/dt = dV/dt    (5.31)
where V is the volume of the fluid inside the tank. As a result of the gravity, a pressure p1
develops at the bottom of the fluid. The pressure p1 gives rise to a pressure difference with
the surrounding atmospheric pressure patm, i.e.,

p = p1 − patm (5.32)

where the pressure difference p is also known as the gage pressure. According to hydrostatics,
the gage pressure is given as
p = ρgh = (ρg/A)(Ah) = (ρg/A) V    (5.33)

Figure 5.11: Notation of a fluid capacitance element

By taking the time derivative of (5.33), one obtains

dp/dt = (ρg/A) dV/dt    (5.34)

Elimination of dV/dt between (5.31) and (5.34) results in

Q = Cf dp/dt    (5.35)

where Cf is the fluid capacitance defined as

Cf = A/(ρg)    (5.36)

Based on (5.35), the pressure increases when the fluid flows into the tank.

Since the fluid capacitance results from the gravitational potential, it has a datum plane
or a reference. Here, the bottom surface of the tank implicitly serves as the datum plane.
Figure 5.11 illustrates the notation of the fluid capacitance. Note that it shows a
pressure drop from p1 to patm. To be candid, the notation is somewhat confusing because the
direction of the flow makes it look as if the fluid were flowing out of the tank for a positive flow
rate Q.
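The elemental equation (5.35) can be exercised numerically. The sketch below fills a tank at a constant flow rate and integrates dp/dt = Q/Cf with a forward-Euler step; the tank area and inflow are assumed values:

```python
# Tank gage pressure under constant inflow, integrating Q = Cf * dp/dt (5.35).
# Tank geometry and inflow are illustrative assumptions.
rho, g = 1000.0, 9.81    # water density, kg/m^3, and gravity, m/s^2
A = 0.5                  # tank cross-sectional area, m^2 (assumed)
Cf = A / (rho * g)       # fluid capacitance, (5.36)

Q = 0.002                # constant inflow, m^3/s (assumed)
dt, p = 0.1, 0.0
for _ in range(600):     # 60 s of filling
    p += (Q / Cf) * dt   # dp/dt = Q / Cf

h = p / (rho * g)        # height recovered from p = rho*g*h, (5.33)
print(p, h)              # h ≈ Q*t/A = 0.24 m after 60 s
```

Because the inflow is constant, the gage pressure (and hence the fluid height h = p/ρg) rises linearly, matching h = Qt/A.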

Fluid damping elements are developed to describe head loss in fluid mechanics. The
loss may result from viscous friction, elbows, valves, orifices, and so on. When fluid
damping is present, it causes a pressure drop. The notation of the fluid damping element

Figure 5.12: Notation of a fluid damping element

is shown in Fig. 5.12; it is the same symbol used for resistance in electric circuits. The
governing equation of the fluid element is

p = Rf Q (5.37)

where p is the upstream-downstream pressure difference defined in (5.24) and Rf is a fluid
damping coefficient that can be found from handbooks. One should also note that (5.37) is
only an approximation because damping is intrinsically nonlinear in the real world.

For fluid systems, there are two types of excitation sources. One type prescribes the
pressure. For example, the water mains of single-family houses now have a
pressure-regulating valve, which controls the pressure into all household plumbing components
(e.g., hot-water tanks). In this case, the pressure is regulated, and the water main acts as a
prescribed pressure source. The other type prescribes the flow rate. For example, the IV pumps
in hospitals prescribe a flow rate: you can specify how fast the IV fluid is injected into the
body. Therefore, an IV pump is a prescribed flow rate source.

5.2.5 Thermal Elements

For the thermal domain, there is no explicit "motion" available. Therefore, the procedure of
using power to identify the two power variables no longer applies. In this case, what would be a
proper way to define variables that can describe the governing equations of thermal elements?
One clue comes from excitation sources. For all the one-port elements we have discussed
thus far, their sources are all related to power variables. For example, pressure and flow

rate are the power variables for fluid systems. At the same time, available sources for fluid
systems either prescribe pressure or flow rate.

By the same token, one can investigate sources of thermal systems to figure out the
corresponding power variables. One type of thermal sources is to prescribe temperature T .
A thermostat controlling a furnace or an air conditioning unit is an example to prescribe
temperature. The thermostat will turn on and off the furnace and air conditioning unit so
that the temperature is maintained at a set value. The other type of thermal sources is to
prescribe the heat flux or heat flow q. An example is an electric stove boiling a kettle of water.
The electric stove will deliver a constant heat flow to the kettle. When the kettle has a lot of
water, the temperature of the water will rise slowly. In contrast, if the kettle has little water,
the temperature of the water will rise quickly. By looking at the sources, one can use the
temperature T and the heat flow q as the power variables for thermal systems.

There are only two types of thermal elements: thermal resistors and thermal capacitors.
Thermal inductors have not been identified yet.

Figure 5.14: A thermal resistor

Figure 5.13: A thermal capacitor

Thermal capacitors result from media that store or release heat via specific heat. Con-
sider a cylinder containing air as shown in Fig. 5.13. Heat flows into the cylinder with a rate
q, and the heat absorbed in the air is H. Then q and H are related through
q = dH/dt    (5.38)

Moreover, the heat absorbed H is related to the temperature of the air via

H = mCp T (5.39)

where m and Cp are the mass and specific heat of the air, respectively, and T is the absolute
temperature. Substitution of (5.39) into (5.38) results in

q = Ct dT/dt    (5.40)
where Ct is a thermal capacitance defined as

Ct = mCp (5.41)

There is one thing that is worth noting here. In the thermal capacitor above, we have
been vague about the specific heat. As one might remember from thermodynamics, there
are two types of specific heat: Cp and Cv . Cp is specific heat at constant pressure, and is
related to enthalpy. It is used when the thermal capacitor undergoes a constant pressure
process. Cv is specific heat at constant volume, and is related to internal energy. It is used
when the thermal capacitor undergoes a constant volume process.
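Equations (5.40)–(5.41) make heating-time estimates easy. The sketch below assumes a kettle holding 1 kg of water (Cp ≈ 4186 J/(kg·K)) on a 1.5 kW stove; these numbers are illustrative, not from the text:

```python
# Heating-time estimate from q = Ct * dT/dt with constant q, (5.40)-(5.41).
# The kettle parameters are illustrative assumptions.
m = 1.0          # mass of water, kg (assumed)
Cp = 4186.0      # specific heat of water, J/(kg*K)
Ct = m * Cp      # thermal capacitance, (5.41)

q = 1500.0       # stove heat flow, W (assumed)
dT = 80.0        # temperature rise, K (20 C to 100 C)

t = Ct * dT / q  # integrating q = Ct * dT/dt with constant q gives q*t = Ct*dT
print(t / 60.0)  # roughly 3.7 minutes
```

Doubling the water mass doubles Ct and hence the heating time, which matches the qualitative discussion of the kettle above.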

Also, the notation of a thermal capacitance is illustrated in Fig. 5.13. To be honest, I
find it of little use. Nevertheless, I show it in Fig. 5.13 for the sake of completeness.

Thermal resistors appear when there is heat conduction or convection resulting from a
temperature difference. Fig. 5.14 illustrates a schematic drawing of a thermal resistor. In
general, heat flows from upstream (the hot side) to downstream (the cool side) with a rate
q. The temperature difference from the upstream to the downstream is

T ≡ T1 − T2 (5.42)

where T1 and T2 are the temperatures at the upstream and the downstream, respectively. For
a thermal resistor, the heat flow rate q and the temperature difference T are related through

T = Rt q (5.43)

where Rt is the thermal resistance.



Now let us consider Fourier's law of heat conduction. The law basically says that the
heat flow rate q is proportional to the temperature difference T1 − T2, i.e.,

q = CD (T1 − T2)    (5.44)

where CD is the heat conductance (whose unit is W/K). Comparison of (5.43) and (5.44)
shows that the thermal resistance of heat conduction is

Rt = 1/CD    (5.45)

For the case of convection heat transfer, the heat flow rate q and the temperature
difference T1 − T2 are related through

q = Ch A (T1 − T2 ) (5.46)

where Ch is the convection heat transfer coefficient (whose unit is W/(m²·K)). Comparison
of (5.43) and (5.46) shows that the thermal resistance of heat convection is

Rt = 1/(Ch A)    (5.47)
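Combining a thermal capacitor (5.40) with a thermal resistor (5.43) yields a first-order cooling model, Ct dT/dt = −(T − Tamb)/Rt. The sketch below integrates this with a forward-Euler step and compares against the analytic exponential; all parameter values are assumed:

```python
import math

# First-order cooling from combining a thermal capacitor (5.40) with a
# thermal resistor (5.43): Ct * dT/dt = -(T - T_amb) / Rt.
# All parameter values are illustrative assumptions.
Ct = 500.0      # thermal capacitance, J/K (assumed)
Rt = 2.0        # thermal resistance, K/W (assumed)
T_amb = 20.0    # ambient temperature, deg C
T = 90.0        # initial temperature, deg C (assumed)

dt = 0.1
for _ in range(10000):            # integrate over 1000 s
    q = (T - T_amb) / Rt          # heat flow out of the capacitor, (5.43)
    T -= (q / Ct) * dt            # q = Ct * dT/dt, (5.40)

tau = Rt * Ct                     # time constant Rt*Ct = 1000 s
T_exact = T_amb + 70.0 * math.exp(-1000.0 / tau)
print(T, T_exact)                 # both near 45.8 deg C
```

The product Rt·Ct plays the role of the time constant, exactly analogous to RC in a first-order electric circuit.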

5.2.6 Lumped-Parameter Systems

The one-port elements described above provide a simple way to model a real system. For
example, Fig. 5.15 shows a cantilever beam with an end mass that appears in one of the
experiments done in recitation sessions. This system has infinitely many degrees of freedom, because
the displacement of every point of the cantilever needs to be determined in order to describe
the motion of the entire cantilever. Since there are infinitely many points on the cantilever, the
system has infinitely many degrees of freedom. Finding the response of such a system
is not a trivial task. For a cantilever with an end mass, it is already challenging,
let alone a complex system that involves multiple domains (e.g., a car engine).

The one-port elements provide a vehicle to reduce a continuous system to a
finite-degrees-of-freedom model. For example, the elasticity of the cantilever can be modeled via
a linear spring with stiffness k and the inertia of the cantilever and the end mass can be

Figure 5.15: A cantilever beam with an end mass

modeled via a point mass m. In this case, the continuous system is reduced to a single-
degree-of-freedom model. Another example is a conductive wire with electric current. The
wire has resistance and produces stray capacitance with its surrounding environment at
every point along the wire. Therefore, the conductive wire is indeed a continuous system.
With the electrical one-port elements, we are able to reduce the actual system to a
resistor of resistance R and a capacitor of capacitance C, thus forming a first-order RC
circuit. Such finite-degrees-of-freedom models consisting of one-port elements are often called
lumped-parameter models or simply lumped models.

Lumped models are definitely easier to analyze, because their governing equations are
ordinary differential equations and can be cast into state-space representations. (In contrast,
continuous systems are governed by partial differential equations, which are much more
difficult to analyze.) If lumped models are so easy to use, what is the catch? By using
a lumped model, we are trading high-frequency accuracy for simplicity. Lumped-parameter
models are usually accurate for low-frequency response. When the desired response
is in a higher frequency range, lumped-parameter models start losing their accuracy. In

Figure 5.16: Through- and across-variables of a translational system

this case, a more refined lumped-parameter model or a continuous model may be needed.

Finally, all the one-port elements we discussed in this chapter are linear. The advantage
is simplicity because the governing equation in the end will be linear and its solution is readily
available. As explained earlier, real-world applications may involve nonlinearity, such as air
drag. In this case, a procedure called linearization might be used to approximate nonlinear
systems. We will discuss linearization in significant detail later in this book.

5.3 Through- and Across-Variables

Now we have learned power variables in several different domains. A natural question is,
"how do we remember them?" A more subtle question is, "do they have any similarity?"
For the pair of power variables in each domain, it turns out that one of the variables will "go
through" the system and is called a through-variable. The other variable will represent a
difference or drop across an element and is called an across-variable.

Let us use a tensile test as an example to demonstrate through- and across-variables.


In a tensile test, a force F is applied to pull a specimen axially. Under the axial force, the
two ends of the specimen move relative to each other, stretching the specimen until it breaks. This is a
translational system, and the specimen behaves like a spring when it is still in the linear

Figure 5.17: Through- and across-variables of an electrical system

range. The power variables are the force and the relative velocity.

The force in a tensile test is measured via a load cell. The load cell must be added to the
specimen in a serial combination. When the force is applied, the force will pass through the
load cell and the specimen. With this nature, the force variable is called a through-variable,
because it goes through the system and its measurement must be done in series with the
system.

In contrast, the relative velocity is measured by measuring the strain rate. First, one
marks two points on the specimen; the distance between the two points is known as the gage
length. The gage length is then monitored with respect to time to determine the strain and
strain rate. The change in gage length is basically the difference in displacement of the two marks
across the specimen. To measure the displacement difference, we will
need to use a measurement tool that is added to the system in a parallel combination (e.g., a
potentiometer). With this nature, the velocity variable is called an across-variable, because
it is derived from a "difference" or "drop" across the system. To measure an across-variable,
one must use an instrument that is in parallel combination with the system.

The through- and across-variables can also be demonstrated easily for an electrical system.
Figure 5.17 illustrates a resistor. When a current I flows through the resistor, it is accompanied
by a voltage drop V across the resistor. The current I is a through-variable, because it

Figure 5.18: Through- and across-variables of a fluid system

flows through the resistor. The voltage V is an across-variable, because it is a drop across the
resistor. If one wants to measure the current via an ammeter, the ammeter must be placed
with the resistor in a serial combination. In other words, the through-variable is related to
serial combination. If one wants to measure the voltage drop using a voltmeter, the voltmeter
must be placed next to the resistor in a parallel combination. That is, the across-variable
is related to parallel combination. These characteristics of through- and across-variables in
the electrical domain are identical to those of the mechanical domain.

Through- and across-variables also exist in fluid systems. Figure 5.18 shows a schematic
drawing of a fluid system involving a turbine. Working fluid enters and flows through the
turbine and generates power. Accompanying the power generation, the pressure of the working
fluid drops across the turbine. The flow rate Q is then the through-variable. If one wants to
measure the flow rate via a flowmeter, the flowmeter must be linked to the turbine in a serial
combination. In contrast, the pressure p is an across-variable because it is a drop across
the turbine. If one wants to measure the pressure drop, one needs to have an instrument
connected between the entrance and exit of the turbine in a parallel manner.

The same concept applies to rotational systems and thermal systems. For rotational
systems, the torque T is a through-variable, and the angular velocity ω is an across-variable. For
thermal systems, the heat flow rate q is a through-variable, and the temperature T is an
across-variable.

Figure 5.19: Occasions when the drop of an across-variable is not obvious

Although the through- and across-variables are cool concepts, they may seem awkward
sometimes. For example, Fig. 5.19 shows several one-port elements for which the drop of
the corresponding across-variable is not obvious. The first one is the lumped mass, for which a
velocity drop is not obvious. Its elemental equation can nevertheless be written as

F = m dv/dt = m d(v − vref)/dt    (5.48)
where vref is a constant velocity of the reference inertial frame. The velocity does have a drop,
though implicitly. The second case is a fluid capacitor in the form of a tank. The pressure p
at the bottom of the tank does not seem to have a drop. In fact, the pressure p at the bottom
of the tank is a gage pressure relative to the surrounding atmospheric pressure; see (5.32).
So the pressure p is indeed a drop. The third case is a thermal capacitor, which involves
only temperature T . The drop or difference of temperature T comes from the fact that the
temperature is defined relative to an absolute zero temperature, i.e., T ≡ T − 0. Since the
reference temperature is zero, the temperature T itself is a drop or relative temperature.

Another awkward feature of through-variables is that it is hard to visualize a force
or a torque flowing through certain elements, such as a lumped mass. (It is easier to see
the force flowing through a spring as shown in Fig. 5.16 though.) The concept of one-port
elements stems from electrical elements and is then generalized to other domains, such as
translational systems. The generalization is not 100% seamless and we need to bite the
bullet to adopt the concept. After all, the gain significantly exceeds the awkwardness.

There are several things worth noting at the end of this section. First, through- and

Figure 5.20: A table of through- and across-variables for various domains

across-variables cannot be independent simultaneously. One variable depends on the other
variable. For example, let us consider a lumped mass governed by (5.48). If the through-
variable F is known, the across-variable v can be determined via integration. If the across-
variable v is known, the through-variable F can be determined via differentiation. Second,
any elemental equation of a one-port element has a through-variable on one side and
an across-variable on the other side of the equation. This implies that the through-
variable and the across-variable cannot be prescribed simultaneously. Finally, we will use
the notation f to denote a through-variable and v to denote an across-variable for the rest
of the book. Figure 5.20 summarizes the through- and across-variables of the various domains we
have discussed thus far.
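The dependence between the two variables can be sketched numerically for the lumped mass of (5.48): given the through-variable F(t), integration yields the across-variable v(t), and differentiating v(t) recovers F(t). The mass value and force history below are assumed:

```python
# Lumped mass (5.48): given F, integrate to get v; given v, differentiate to get F.
# The mass value and force history are illustrative assumptions.
m = 2.0                  # lumped mass, kg (assumed)
dt = 1e-3                # time step, s
F = [1.0] * 1000         # constant 1 N through-variable over 1 s (assumed)

# Across-variable from the through-variable: v = (1/m) * integral of F dt
v = [0.0]
for f in F:
    v.append(v[-1] + (f / m) * dt)

# Through-variable recovered from the across-variable: F = m * dv/dt
F_rec = [m * (v[i + 1] - v[i]) / dt for i in range(len(F))]

print(v[-1], F_rec[0])   # v(1 s) = 0.5 m/s; recovered force = 1.0 N
```

Prescribing F fixes v and vice versa, which is why the two variables cannot be prescribed simultaneously.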

5.4 Classification of Element Types

Now we have learned so many different types of one-port elements. How do we remember
them? One simple way is to classify one-port elements that are alike into a group and

Figure 5.21: A table of A-type elements

identify their common features within the group. By doing so, we find that there are three types
of elements. They are described in detail as follows.

5.4.1 A-Type Elements: Generalized Capacitors

The terminology A-type refers to "across-variable" type. A-type elements are basically
generalized capacitors. The elemental equation of these generalized capacitors is characterized
as

f = c dv/dt    (5.49)

where c is the generalized capacitance. According to (5.49), A-type elements involve a time
derivative. Moreover, the time derivative is applied to the across-variable. The proportionality
constant is then the generalized capacitance. The reference value of the across-variable (e.g.,
the surrounding atmospheric pressure for a fluid capacitance) is not important because the time
derivative d/dt applies to the across-variable v.
Examples of A-type elements include electric capacitors, lumped masses, rotary inertias,
fluid tanks, and heat storage media (with specific heat). The table in Fig. 5.21 summarizes
the A-type one-port elements we have learned and lists their elemental equations and
generalized capacitances.

Figure 5.22: A table of T -type elements

5.4.2 T -Type Elements: Generalized Inductors

The terminology T-type refers to "through-variable" type. T-type elements are basically
generalized inductors. The elemental equation of these generalized inductors is characterized
as

v = L df/dt    (5.50)

where L is the generalized inductance. According to (5.50), T-type elements involve a time
derivative. Moreover, the time derivative is applied to the through-variable. The proportionality
constant is then the generalized inductance.

Examples of T-type elements include electric inductors, translational and rotational
springs, and fluid inertances. The table in Fig. 5.22 summarizes the T-type one-port
elements we have learned and lists their elemental equations and generalized
inductances.

Figure 5.23: A table of D-type elements

5.4.3 D-Type Elements: Generalized Resistors

The terminology D-type refers to "dissipative" type. D-type elements are basically
generalized resistors. The elemental equation of these generalized resistors is characterized as

v = Rf    (5.51)

where R is the generalized resistance. According to (5.51), D-type elements involve no time
derivative. The proportionality constant is then the generalized resistance.

Examples of D-type elements include dashpots, drag cups, electrical resistors, fluid
resistors, and thermal resistors. The table in Fig. 5.23 summarizes the D-type one-port
elements we have learned and lists their elemental equations and generalized resistances. For
translational and rotational systems, please note that the generalized resistance is in fact
the reciprocal of the damping coefficient.
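The A/T/D classification can be captured in a small lookup table; the sketch below echoes the groupings of Figs. 5.21–5.23 (element names are representative examples, not an exhaustive catalog):

```python
# A minimal sketch of the A/T/D classification (cf. Figs. 5.21-5.23).
# Element names are representative examples, not an exhaustive catalog.
ELEMENT_TYPES = {
    "A-type (generalized capacitor)": {
        "equation": "f = c * dv/dt",
        "examples": ["electric capacitor", "lumped mass", "rotary inertia",
                     "fluid tank", "heat storage medium"],
    },
    "T-type (generalized inductor)": {
        "equation": "v = L * df/dt",
        "examples": ["electric inductor", "translational spring",
                     "torsional spring", "fluid inertance"],
    },
    "D-type (generalized resistor)": {
        "equation": "v = R * f",
        "examples": ["dashpot", "drag cup", "electric resistor",
                     "fluid resistor", "thermal resistor"],
    },
}

for name, info in ELEMENT_TYPES.items():
    print(name, "->", info["equation"])
```

Note that the thermal domain deliberately contributes no T-type entry, consistent with the earlier remark that thermal inductors have not been identified.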

Figure 5.24: Summary of through- and across-variable sources

5.5 Classification of Excitation Sources

Under the framework of through- and across-variables, we can also classify excitation sources
into through-variable sources and across-variable sources. For through-variable sources, the
through-variable of the source is prescribed. A fluid excitation source with a prescribed flow
rate and an electrical excitation source with a prescribed current are examples of through-
variable sources. For across-variable sources, the across-variable of the source is prescribed.
A shaker providing a prescribed velocity and a pump delivering a prescribed pressure are
examples of across-variable sources. The table in Fig. 5.24 summarizes the excitation sources
we have learned thus far and classifies them in terms of through- and across-variable sources.
Chapter 6

Linear Graphs

In Chapter 5, I explained the challenges in modeling the dynamics of multi-domain systems.
The solution to these challenges consists of three parts: one-port elements, linear graphs, and state-space
formulations. Use of one-port elements provides a unified way to model physics from various
domains, as shown in Chapter 5. Use of linear graphs allows designers to describe the configuration
or topology of a multi-domain system systematically. Use of state-space formulations
allows designers to compute a system's response efficiently via computers.

In this chapter, I will focus on linear graphs. The notion of linear graphs is easy to
grasp for electrical circuits and fluid systems, but becomes quite perplexing for mechanical
systems (e.g., translational and rotational elements). I will first describe the notation used for linear
graphs and demonstrate the concept for electrical circuits. Next, I will demonstrate two laws
that all linear graphs need to obey. With the help of these two laws, I will explain linear
graphs for mechanical systems. The last part is modeling of excitation sources.

6.1 Notation of Linear Graphs

A linear graph usually consists of multiple one-port elements and at least one excitation
source connected together graphically. Let me focus on one-port elements first and use the


Figure 6.1: Notation of one-port elements

electrical domain as an example to introduce the notation of linear graphs. Once the notation
is established, it will work for other domains, such as fluid systems, thermal systems, and
mechanical systems.

Figure 6.1 illustrates notation for one-port elements. The top row of Fig. 6.1 shows
three electrical elements: an inductor (i.e., a T -type element), a capacitor (i.e., an A-type
element), and a resistor (i.e., a D-type element). In the electrical domain, each element is
defined at two end points, known as nodes, A and B. A is at the upstream and B is at
the downstream. Moreover, a voltage is registered at each node. An electric current flows
from node A to node B resulting in a voltage drop V . Once the electric current is known,
the voltage drop V can be determined by the elemental equations V = L di/dt, V = Ri, and
i = C dV/dt.
dt
The center row of Fig. 6.1 shows linear graph notations for these three electrical ele-
ments. First, we see that each notation consists of two end points A and B and a curved line
with an arrow. Each element is annotated with a letter L, C, or R representing an inductor,
a capacitor, or a resistor, respectively. The two end points A and B are called "nodes,"

where an across-variable (i.e., the voltage in the electrical domain) is defined. The symbol
"−→" in Fig. 6.1 defines the positive direction of the through-variable (i.e., the current in the
electrical domain). Associated with the positive direction of the through-variable, a drop
of the across-variable from the upstream node (i.e., node A) to the downstream node (i.e.,
node B) occurs. Moreover, the drop of the across-variable is determined via the elemental
equations, such as f = c dv/dt for a generalized capacitance, v = L df/dt for a generalized inductance,
and v = Rf for a generalized resistance.

Although the notations of linear graphs are demonstrated on electrical elements, they
work equally well for one-port elements from other domains. Under the concept of through-
and across-variables, we can generalize the notations to T -type elements, A-type elements,
and D-type elements; see the third row of Fig. 6.1. For example, if the notation were for
the fluid domain, the arrow would define the flow from node A to node B as having a
positive flow rate. A pressure drop from node A to node B will accompany the fluid flow
from node A to node B. Moreover, the pressure drop will be derived from the elemental
equations in the fluid domain if the flow rate is known.

It is important to know that the positive direction of the through-variable in a one-
port element is assigned arbitrarily. (So feel free to choose the positive direction you want.)
Once a positive direction is defined, state equations will be derived (details to be described
in later chapters) based on the positive direction of the through-variable. When the state
equations are finally solved, if you obtain a positive response of the through-variable, then
you know you assumed the right direction. If you obtain a negative response, you know that
the flow in the one-port element is opposite to the direction you assumed. This is similar
to solving statics problems with a free-body diagram. You assume a positive direction for
an unknown reaction in the free-body diagram. After you solve the problem, if the reaction
turns out to be positive, then you know that you have assumed the right direction for the
reaction. If not, you know that the reaction is actually in the direction opposite to what you
assumed in the free-body diagram.

For some one-port elements, a reference across-variable is needed. One such example is
a fluid tank with fluid capacitance shown in Fig. 6.2. For the pressure at the bottom of the
tank, it is not clear what the pressure drop is. In this case, the atmospheric pressure patm is
introduced. As explained in (5.32), the pressure at the bottom of the tank is indeed a gage

Figure 6.2: Notation of one-port elements requiring a reference

pressure indicating a drop from the real pressure at the bottom to the atmospheric pressure.
To indicate the existence of the reference, we distinguish the linear graph notation with a
half-solid-and-half-dashed line. Other such elements include lumped masses, rotary inertia,
and thermal capacitors.

Figure 6.3: Linear graph notation of through-variable sources

Now let me explain linear graph notations for excitation sources. Figure 6.3 shows the
notations for through-variable sources. A source is denoted by a circle. An arrow in the
circle represents that the source is a through-variable source, and the direction of the arrow
is the positive direction of the through-variable. For example, the left drawing of Fig. 6.3
shows that a force is applied serving as a through-variable source, while the right drawing
of Fig. 6.3 shows that electric current flows out of the through-variable source.

Figure 6.4: Linear graph notation of across-variable sources

In contrast, Figure 6.4 shows the notations for across-variable sources. A circle with a
positive sign on top and a negative sign at the bottom represents that the source is an across-
variable source. The side with the positive sign has a higher value of the across-variable than
the side with the negative sign. Since an across-variable source provides energy to drive a
system, the through-variable will flow from the positive side into the system. Moreover, the
through-variable will flow from the system back to the across-variable source via the negative
side. For example, the left drawing of Fig. 6.4 shows that a voltage source has a positive and
a negative side indicating that the source provides a voltage difference. Similarly, the right
drawing of Fig. 6.4 shows that a pressure source has a positive and a negative side driving
the system with a pressure difference.

6.2 Linear Graphs: An Appetizer

Linear graphs are easy to draw for some domains (e.g., electrical, thermal, and fluid domains),
but can get quite confusing for other domains (e.g., translational and rotational systems).
Let us focus on the three easy domains first. A very important rule to follow in drawing
linear graphs is

GO WITH THE FLOW!



Figure 6.5: Linear graph for electrical system 1

What I mean here is that we need to keep track of the through-variable when drawing
a linear graph. You start with an excitation source, and the through-variable "flows" out of
the source. Next, you follow the flow (i.e., the through-variable) to branch into a network of
one-port elements. After the flow passes through the one-port elements, the flow will come
back to the source. This is how it works in general in an electric network.

Example 6.1 This example demonstrates linear graphs of three electrical systems.

The left drawing of Fig. 6.5 shows electrical system 1. The right drawing of Fig. 6.5
shows the corresponding linear graph. To construct the linear graph, I have laid out the
steps in Fig. 6.6.

We first start from the source. The voltage of the source is prescribed, so we draw an
across-variable source; see Fig. 6.6(a). The bottom end of the source is connected to the
ground, while the top end of the source is connected to node A. Also, we understand that
a current will flow out of the source from the positive side (i.e., go with the flow). When
the current flows out of node A, it has no place to go except flowing through the resistor
R1 in order to reach node B; see Fig. 6.6(b). When the current flows past node B, the
current is split into two branches (Fig. 6.6(c)). One branch of the current enters capacitor
C1 , while the other branch of the current enters resistor R2 . The branch of the current
entering capacitor C1 passes node C and enters capacitor C2 ; see Fig. 6.6(d). Finally, the two

Figure 6.6: Steps to construct linear graph for electrical system 1



Figure 6.7: Linear graph for electrical system 2

branches of current in the capacitor C2 and resistor R2 merge, return to the ground, and
enter the ground end of the source as shown in Fig. 6.6(e).

Figure 6.7 and Fig. 6.8 show linear graphs of two more electrical systems for an il-
lustration. The linear graphs can be constructed following the same procedure described
above.

As the examples above show, the linear graph of an electrical system is really not
different from the original electric circuit diagram. This is not surprising, because the
concept of linear graphs is derived from electrical circuits.

Example 6.2 This example demonstrates linear graphs of two fluid systems.

The first fluid system, shown in Fig. 6.9, consists of a pump, two tanks, a long pipe,
and a valve. The pump provides a prescribed flow rate Qs (t). The two tanks have fluid
capacitance C1 and C2 . The long pipe presents inertance I1 and resistance R1 . The valve
presents a resistance R2 . The lower drawing of Fig. 6.9 shows the corresponding linear graph
of the fluid system.

To construct the linear graph, let us follow the flow as shown in Fig. 6.10. The first step
is to draw the source as shown in Fig. 6.10(a). Since the flow rate is prescribed, the source is a
through-variable source. Also, the bottom end of the pump is taking fluid under atmospheric
pressure; therefore, the bottom node of the through-variable source is grounded at patm . The

Figure 6.8: Linear graph for electrical system 3

Figure 6.9: Linear graph for fluid system 1



Figure 6.10: Steps to construct linear graph for fluid system 1

fluid flow out of the pump first reaches node A (cf. Fig. 6.9) and splits into two parts. One
part of the fluid enters the tank C1 and the other enters the long pipe.

The second step is shown in Fig. 6.10(b). The fluid entering the tank eventually reaches
atmospheric pressure; therefore, the linear graph branch of C1 goes down to the ground
patm . Since the ground has a reference value, part of the C1 branch is drawn as a dashed
line (cf. Fig. 6.2). At the same time, the fluid entering the long pipe experiences not only
inertance I1 but also resistance R1 , finally reaching node B. Note that the inertance I1 and
resistance R1 must be in series because the pressure drops from the two elements add up.

After the fluid passes node B, as shown in Fig. 6.10(c), part of the fluid flows into
the tank C2 and the rest goes through the valve R2 . Since the flow entering the tank C2
eventually reaches atmospheric pressure, the C2 branch in Fig. 6.10(d) goes to the ground

Figure 6.11: Linear graph for fluid system 2

patm via a dashed line. Similarly, since the fluid going through the valve R2 also reaches
atmospheric pressure, the R2 branch in Fig. 6.10(d) also goes to the ground with a pressure
patm .

Figure 6.11 shows a fluid system driven by a pump with prescribed pressure Ps (t). One
can follow the procedure laid out in Figure 6.10 to draw the corresponding linear graph
shown in Figure 6.11.

Example 6.3 This example demonstrates linear graphs of two thermal systems.

In the first example, let us consider heat transfer and dissipation of an engine cylinder
when the gas inside the cylinder is ignited. Figure 6.12 shows a thermal system consisting
of a cylinder with fins. The gas inside the cylinder has a thermal capacitance Ct . When
the gas is ignited, it generates heat at a rate of Qs (t). The cylinder has resistance R3 from
natural convection. The fins have resistance R2 via heat conduction and resistance R4 from
forced convection.

Figure 6.12: Linear graph for thermal system 1

To draw the linear graph of this thermal system, we need to analyze how the heat flows.
(Otherwise, we cannot go with the flow to draw the linear graph.) The analysis of the heat
flow is shown in the lower right drawing of Fig. 6.12. Basically, when the heat is generated
inside the cylinder, part of the heat is used to increase the temperature of the gas (i.e., Ct ),
while the rest goes to the cylinder. When the heat arrives at the cylinder, part of it goes
into the natural convection R3 , while the rest goes to the fin (i.e., R2 ) first and then to the
forced convection.

Based on the analysis of the heat flow, one can draw the linear graph shown on the
lower left drawing of Fig. 6.12. There are a couple of things worth noting. First, the source
is a through-variable source, because the heat flow rate Qs (t) is prescribed. Second, the
thermal capacitance Ct has a dashed line indicating that it is referenced to a reference level.
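
As a sanity check on the flow split at the cylinder wall, note that both dissipation paths (R3, and R2 followed by R4) connect the wall to the ambient temperature, so at steady state the heat flow divides like current in parallel resistors. A small Python sketch with made-up resistance values (my own assumptions, not from the text):

```python
# Sketch: steady-state split of the heat flow arriving at the cylinder wall
# in Fig. 6.12, assuming both paths end at the same ambient temperature.
# Resistance values are illustrative, not from the text.
R2, R3, R4 = 1.0, 4.0, 2.0     # fin conduction, natural convection, forced convection
Q_wall = 10.0                  # heat flow reaching the cylinder wall

R_fin_path = R2 + R4           # series: fin conduction then forced convection
Q_R3 = Q_wall * R_fin_path / (R3 + R_fin_path)   # flow-divider rule
Q_fin = Q_wall * R3 / (R3 + R_fin_path)

# Continuity at the wall node: the two branch flows add back to Q_wall.
assert abs(Q_R3 + Q_fin - Q_wall) < 1e-12
```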

In the second example, let us consider heat transfer of a pot of food on a cooking stove.
Figure 6.13 shows the setup of the thermal system of interest. The stove provides a heat
source with prescribed heat flow rate Qs (t). Moreover, the stove has radiation loss R2 . The

Figure 6.13: Linear graph for thermal system 2

pot has resistance R3 from conduction and R1 from natural convection. The food inside the
pot has thermal capacitance Ct .

To draw the linear graph of this thermal system, let us analyze how the heat flows
again. The analysis of the heat flow is shown in the upper right drawing of Fig. 6.13. When
the heat comes in from the stove, part of the heat is lost in radiation R2 while the remaining
heat goes to the pot. When the heat arrives at the pot, the heat spreads through the pot
(i.e., resistance R3 from heat conduction). After that, part of the heat warms up the food,
and the rest is dissipated to the ambient environment via natural convection R1 .

Based on the analysis of the heat flow, one can draw the linear graph shown on the
bottom drawing of Fig. 6.13.

So far the slogan "GO WITH THE FLOW!" works out very well for electrical, fluid,
and thermal systems. It works well because these systems all have a very clear notion of a
flow, i.e., current flow, fluid flow, and heat flow. For translational and rotational systems,
the concept of a "flow" is obscure. For example, Fig. 6.14 shows a translational system
subjected to an external force f (t). We know that the force constitutes a through-variable

Figure 6.14: Challenge for a translational system

source in the linear graph. But, in this case, how does the force flow through the system?
Even if we convince ourselves that a force indeed flows through a system, what is the positive
direction of a force flow?

It is not surprising at all that such an arcane situation occurs for translational and
rotational systems. The concept of linear graphs stems from electrical circuits, and its goal
is to achieve a "one-size-fits-all" or universal way to model dynamical systems from multiple
domains. During this process, the linear graph approach turns out to be quite natural for
some domains, but gets very strange for other domains.

The best way to accommodate the translational and rotational systems is to understand
some basic laws of linear graphs first. These basic laws will help us understand how the
through-variable ”flows” through a linear graph. After we have learned these laws, we will
revisit the translational and rotational systems for their linear graphs.

6.3 Element Interconnect Laws

Since linear graphs are networks, they follow basic laws that appear in many networks formed
by interconnected elements. Two basic laws are a compatibility law and a continuity law.
They are explained in detail as follows.

The compatibility law states that the sum of the across-variable drops around any closed
loop is identically zero, i.e.,

Σ vi = 0                                        (6.1)

where vi is the across-variable drop of element i and the index i is summed through all the
elements constituting the closed loop. The equation in (6.1) is usually called a loop equation
for convenience.

The loop equation (6.1) can be very misleading, so let us use a hypothetical network
in Fig. 6.15 to explain the compatibility law. In using the loop equation, one must establish
the positive flow direction first (very, very, very important!). In Fig. 6.15, there are
three nodes, 1, 2, and 3. The positive flow directions as we have assumed in drawing the
linear graph are as follows. The flow goes from node 1 to node 2 and from node 1 to node 3.
The flow also goes from node 2 to node 3 through two pathways a and b. Therefore, there
are three loops in Fig. 6.15: loop 1-2-a-3-1, loop 1-2-b-3-1, and loop 2-a-3-b-2. For the loop
1-2-a-3-1, the loop equation (6.1) is

v1→2 + v2→a→3 + v3→1 = 0 (6.2)

Note that v1→2 and v2→a→3 both follow the positive direction of the flow; therefore, v1→2
and v2→a→3 are positive across-variable drops. In contrast, v3→1 is against the positive flow
direction (which is from node 1 to node 3); therefore, v3→1 experiences an across-variable
rise. In other words, v3→1 is a negative across-variable drop. As a result, (6.2) is possible
because two of the three terms are positive and the third term is negative.

The negative across-variable drop is certainly mind-boggling. So one remedy is to move


v3→1 to the right side and (6.2) becomes

v1→2 + v2→a→3 = −v3→1 = v1→3 (6.3)

where the across-variable drop v1→3 is used instead. Now, (6.3) gives us a different way to
interpret the compatibility law. Within a closed loop, we will have two pathways to go from
one point to the other point. Moreover, the across-variable drop from these two pathways
must be equal. I personally like this formulation better.
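
The two-pathway reading of the compatibility law in (6.3) is easy to check numerically. Below is a minimal Python sketch for loop 1-2-a-3-1 of Fig. 6.15; the drop values are made up, only the bookkeeping matters:

```python
# Sketch of the compatibility law for loop 1-2-a-3-1 in Fig. 6.15.
# The drop values are hypothetical; only the sign bookkeeping matters.
v_1_2 = 3.0               # drop from node 1 to node 2 (along positive flow)
v_2_a_3 = 4.0             # drop from node 2 to node 3 via pathway a
v_1_3 = v_1_2 + v_2_a_3   # drop along the other pathway, per (6.3)

v_3_1 = -v_1_3            # going against the positive direction gives a rise

# Loop form (6.2): the drops around the closed loop sum to zero.
assert v_1_2 + v_2_a_3 + v_3_1 == 0.0
```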

The compatibility law has great physical meaning for each domain we have discussed
so far. For electrical systems, it is simply Kirchhoff's voltage law. For fluid systems, it

Figure 6.15: The compatibility law Figure 6.16: The continuity law

states that pressure is a scalar potential and is independent of path. For thermal systems,
it states that temperature is a scalar potential and is independent of path.

The continuity law states that the sum of the through-variables flowing into a closed
contour (or a node) is zero, i.e.,

Σ fi = 0                                        (6.4)

where fi is the through-variable of element i flowing into the contour and the index i is
summed through all the flows intersecting the closed contour. The equation in (6.4) is
usually called a node equation for convenience.

In using the node equation (6.4) one must watch out for the sign convention. Let us use
another hypothetical network in Fig. 6.16 to explain the continuity law. In Fig. 6.16, there
is one node and there are three elements 1, 2, and 3 connected to the node. Let us draw
a closed contour around the node. The through-variable in elements 1 and 2 flows into the
contour, but the through-variable of element 3 flows out of the contour. If we follow the positive
direction of the flow, the node equation in (6.4) is

f1 + f2 − f3 = 0 (6.5)

Please note that −f3 is needed, because the flow in element 3 is out of the contour. Similarly,
we can move f3 to the right side of the equation resulting in

f1 + f2 = f3 (6.6)

In this case, (6.6) basically states that the sum of the inflow through-variables must be equal
to the sum of the outflow through-variables.
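
The sign bookkeeping in (6.5) and (6.6) can likewise be sketched in a few lines of Python, with made-up flow values:

```python
# Sketch of the continuity law at the node of Fig. 6.16.
# Elements 1 and 2 flow into the contour, element 3 flows out.
f1, f2 = 2.0, 5.0
f3 = f1 + f2              # outflow balances inflow, per (6.6)

# Node form (6.4)-(6.5): flows counted into the contour sum to zero.
assert f1 + f2 - f3 == 0.0
```
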

Figure 6.17: Loop and node equations for a hydraulic system from Fig. 6.9

Finally, the continuity law also has great physical meaning for each domain we have
discussed so far. For electrical systems, it is simply Kirchhoff's current law. For fluid
systems, it reflects the continuity equation in fluid mechanics when no sources or sinks are
present. For thermal systems, it reflects the continuity of heat flow; no heat is created via a
source or dissipated through a sink.

Example 6.4 This example demonstrates loop and node equations for the fluid system
discussed earlier in Fig. 6.9.

Let us check out the following closed loops to apply the compatibility law. For loop
Qs − I1 − R1 − C2 , the loop equation is

Ps (t) = pI1 + pR1 + pC2 (6.7)



where Ps (t) is the pressure rise from the bottom to the top of the source. Although the
source is a through-variable source, it must be accompanied by a pressure rise when the
fluid flows in and out of the source. That pressure rise is Ps (t). Since the through-variable
Qs (t) is prescribed at the source, the pressure rise Ps (t) is unknown and will depend on the
pressure drops of the relevant elements shown in (6.7).

For loop C1 − I1 − R1 − R2 , there are two pathways to form the closed loop. One is via
element C1 , and the other is from I1 − R1 − R2 . The compatibility equation implies that the
pressure drop from node A to the ground is the same for these two pathways, i.e.,

pC1 = pI1 + pR1 + pR2 (6.8)

Now let us check out the node equation. Let us first choose a closed contour that
encloses only node A shown as a dashed circle around A. The flow into the contour is via
the source Qs , while the flow out of the contour includes I1 and C1 . Therefore, the node
equation (6.6) results in
Qs (t) = QI1 + QC1 (6.9)

Similarly, one can choose a contour that encloses both nodes A and B. In this case, the flow
into the contour is Qs , but the flow out of the contour includes C1 , C2 , and R2 . Therefore,
the node equation (6.6) becomes

Qs (t) = QC1 + QC2 + QR2 (6.10)
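
The node equations (6.9) and (6.10) are consistent with each other, because the flow QI1 entering the pipe is itself split at node B. A short Python sketch with hypothetical flow values:

```python
# Sketch checking that node equations (6.9) and (6.10) for Fig. 6.9 agree.
# All flow values are hypothetical.
Q_s = 5.0
Q_C1 = 2.0
Q_I1 = Q_s - Q_C1          # node A contour: Qs = Q_I1 + Q_C1, per (6.9)

Q_C2 = 1.2
Q_R2 = Q_I1 - Q_C2         # node B contour: Q_I1 = Q_C2 + Q_R2

# The contour enclosing both nodes A and B reproduces (6.10).
assert abs(Q_s - (Q_C1 + Q_C2 + Q_R2)) < 1e-12
```
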

6.4 Linear Graphs for Translational Systems

To successfully draw a linear graph for a translational system, I have two suggestions. The
first one is to draw a free-body diagram and apply Newton's second law. The use of
F = ma will become the node equation, which allows you to figure out how the forces flow
into and out of the system quite easily. The second suggestion is to use your free-body
diagram to define the positive direction of each through-variable in your linear graphs. This

will give each through-variable appropriate physical meaning (e.g., a spring force in tension
or in compression). Let us walk through the following example to demonstrate how one
could draw a linear graph for translational systems.

Figure 6.18: A translational system
Figure 6.19: The corresponding free-body diagram

The translational system consists of a mass m, a damper B1 , and two springs k1 and k2
as shown in Fig. 6.18. The displacement and velocity of the mass are x and v, respectively.
Moreover, the positive directions of x and v are to the right. Let us denote the mass as node
A in the linear graph formulation. The springs and the damper are all anchored to walls.
Since the walls are fixed in space, they correspond to the same ground node G in the linear
graph formulation. Finally, the mass is subjected to an external force Fs (t), which is positive
when pointing to the right.

To construct the linear graph, we should start from the free-body diagram of the mass
first; see Fig. 6.19. In drawing a free-body diagram, we will need to assume that the mass
experiences a positive displacement x and a positive velocity v. If the mass has a positive
displacement, it moves to the right stretching the spring k1 and compressing the spring k2 .
Therefore, the free-body diagram Fig. 6.19 shows that the spring force fk1 (from the spring
k1 ) is pulling the mass to the left, while the spring force fk2 (from the spring k2 ) is pushing
the mass to the left. Similarly, if the mass has a positive velocity, it moves to the right,
leaving the damper B1 in tension. Therefore, the tensile damping force fB1 (from the
damper B1 ) also pulls the mass to the left. Finally, the applied force Fs (t) points to the
right completing the free-body diagram.

According to Newton’s second law,

Fs (t) − fk1 − fB1 − fk2 = ma (6.11)



Figure 6.20: Linear graph at node A

or equivalently,
Fs (t) = fk1 + fB1 + fk2 + ma (6.12)

Note that (6.12) is indeed the node equation for node A. According to (6.12), we can
construct the linear graph at node A shown in Fig. 6.20.

As seen in Fig. 6.20, the applied force Fs (t) appears as a through-variable source pro-
viding a force on the mass (i.e., node A). Moreover, the bottom end of the through-variable
source is on the ground. (You can imagine that a person stands on the ground and pushes
the mass with a force Fs (t).) The first part of the applied force is used for fk1 , which is to
stretch the spring k1 . The second part of the applied force is used for fB1 , which is to stretch
the damper B1 . The third part of the applied force is used for fk2 , which is to compress the
spring k2 . The rest of the applied force is used to accelerate the mass m. One important
outcome from this approach is that we know exactly what the positive direction of each flow
means (e.g., to stretch the spring k1 and to compress the spring k2 ).

The last step is to complete the linear graph from Fig. 6.20. The spring k1 is connected
to both the mass and the wall; therefore, the flow fk1 in Fig. 6.20 representing the spring k1
should be connected to the ground; see Fig. 6.21. Similarly, the spring k2 and damper B1 are
also connected to mass (node A) and the wall (node G). Therefore, fk2 and fB1 should be

Figure 6.21: Linear graph of the translational system shown in Fig. 6.18

Figure 6.22: Physical meaning of positive flow direction shown in Fig. 6.21

connected to the ground too. The inertial force ma is also connected to the ground with a
dashed line (cf. Fig. 6.2 and its description). Finally, Fig. 6.22 tabulates the physical meaning
associated with the positive directions shown in the complete linear graph Fig. 6.21.
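
To see the node equation (6.12) in action, one can integrate Newton's law numerically. The sketch below uses illustrative parameter values and a constant applied force (both my own assumptions); the mass settles where the two springs together carry the whole applied force:

```python
# Numerical sketch of the mass in Fig. 6.18 under a constant force Fs = 1.
# Parameter values are illustrative assumptions, not from the text.
m, B1, k1, k2 = 1.0, 0.5, 3.0, 2.0
Fs = 1.0
dt, steps = 1e-3, 20_000          # simulate 20 seconds

x, v = 0.0, 0.0
for _ in range(steps):
    a = (Fs - k1 * x - k2 * x - B1 * v) / m   # Newton's second law (6.11)
    v += a * dt                               # semi-implicit Euler update
    x += v * dt

# At rest, fk1 + fk2 = Fs, so x approaches Fs / (k1 + k2) = 0.2.
assert abs(x - Fs / (k1 + k2)) < 0.01
```

At steady state the damper and inertial branches carry no force, which is exactly what the linear graph of Fig. 6.21 predicts.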

Linear graphs for translational systems can be very confusing. Below please find several
more examples.

Example 6.5 One extremely confusing situation often encountered in translational systems
is the presence of prescribed velocity sources. As explained before, prescribed velocity sources
are common, such as an airplane approaching for landing or a prescribed motion controlled
by a shaker or a linear motor. This example explains how such a source should be incorporated
in a linear graph.

Figure 6.23: A translational system subjected to a prescribed velocity
Figure 6.24: The corresponding free-body diagram

Figure 6.23 shows a translational system driven by a prescribed velocity Vs (t). The
translational system consists of a lumped mass m, two springs with stiffness coefficients k1
and k2 , and a damper (in the form of a lubricating film) with viscous damping coefficient
B. Also, the prescribed velocity Vs (t) has a positive direction pointing to the right. Moreover,
the prescribed velocity is applied on the spring k1 , while the spring k2 connects the lumped
mass to a wall.

To construct the linear graph, we need to keep thinking about the catch phrase "GO
WITH THE FLOW!" This is especially true when a prescribed velocity source is present.
When a prescribed velocity is present, that means some forces are present as well to make

the prescribed velocity possible. When an airplane approaches an airport and follows a
prescribed velocity profile for landing, the pilot will adjust the engine thrust to provide the
velocity profile. When a linear motor moves a pallet with a prescribed velocity, a motor
controller will adjust electric current to provide enough electromagnetic force to drive the
motor to achieve that prescribed velocity. Therefore, when the prescribed velocity Vs (t) is
present in Figure 6.23, that means there must be a spring force fk1 "FLOWING" through
the spring k1 to achieve the prescribed velocity Vs (t). We want to go with the force flow fk1
in drawing the linear graph.

Figure 6.24 shows the free-body diagrams of the translational system. First, the force
fk1 must be present on both sides of the spring k1 in order to keep the spring in equilibrium.
Moreover, let us assume that the spring force fk1 is tensile, simply to define a positive sign
convention for the spring force. Next, the force fk1 flows to the mass m. Before we draw the
free-body diagram of the mass, let us define that the displacement x and velocity v of the
mass are considered to be positive if they are pointing to the right. Again, this is nothing
but defining a sign convention so that we know what a positive displacement means.

Under the sign convention, the mass undergoes a positive displacement and velocity
moving to the right. Therefore, the spring k2 is stretched applying a force to the left, while
the damper applies a damping force fB to the left, opposite to the direction of motion. In the
meantime, the spring k1 is in tension, applying a force to the right. Application of Newton's
second law leads to
fk1 − fB − fk2 = ma (6.13)
or equivalently,
fk1 = fB + fk2 + ma (6.14)
Equation (6.14) is the node equation at the mass.

Now we are ready to draw the linear graph; see Fig. 6.25. First, the prescribed velocity
Vs (t) is an across-variable source. The bottom side of the source is connected to the ground,
while the upper side is connected to the spring k1 . More importantly, we know that there
must be a force flowing out of the source to make the prescribed velocity Vs (t) possible. (GO
WITH THE FLOW!) Moreover, that force subsequently flows into the spring k1 and that
force must be the spring force fk1 . After the force fk1 reaches the mass m, (6.14) indicates
that fk1 branches into three different pathways: the first one causes the spring k2 to stretch,

Figure 6.25: The corresponding linear graph
Figure 6.26: Physical meaning of the positive direction in the linear graph

the second one causes the damper to shear, and the last one accelerates the mass to the right.
All these branches eventually go back to the ground. Fig. 6.26 shows a table describing
physical meaning of the positive directions shown in the linear graph Fig. 6.25.

Example 6.6 Another really confusing thing is the sign or direction of the flows in linear
graphs. One needs to know that the directions of the flows assumed in a linear graph are just
positive directions chosen for setting up the problem. It does not matter whether an actual
flow is in the defined positive direction. The more important issue is what the defined positive
directions mean physically.

Figure 6.27 is the same system discussed in Example 6.5 except that the prescribed
velocity is reversed in its direction. In this case, the spring k1 will be pushed and also the
mass will be pushed to the left. So it is simpler to assume that the motion (e.g., velocity and
displacement) of the mass is positive pointing to the left. Moreover, the spring force fk1 is
positive in compression, so that the force will flow through the spring k1 to the mass under
the excitation source Vs (t).

Figure 6.28 shows the corresponding free-body diagram. Since the spring k1 is in
compression, the spring force fk1 is acting to the left. At the same time, the mass is in positive

Figure 6.27: A translational system subjected to a prescribed velocity
Figure 6.28: The corresponding free-body diagram

motion; therefore, the spring k2 is also in compression, applying a force to the right on the mass.
Similarly, the damping force is opposite to the positive velocity of the mass and points to
the right. Applying Newton’s second law and recalling that the positive direction is to the
left, one would obtain
fk1 − fB − fk2 = ma (6.15)

or equivalently,
fk1 = fB + fk2 + ma (6.16)

Equation (6.16) then serves the purpose of a node equation.

Figure 6.29: The corresponding linear graph
Figure 6.30: Physical meaning of the positive direction in the linear graph

Figure 6.29 now shows the linear graph. Basically, an across-variable source provides
the flow fk1 into the spring k1 . Then the force flow fk1 splits into three branches: one
to compress the spring k2 , one to overcome the damping B, and one to accelerate the mass m.
Moreover, the positive directions of Fig. 6.29 are based on the free-body diagram described in
Fig. 6.28. Figure 6.30 lists the physical meaning of all the positive directions in the linear graph
Fig. 6.29. Note that the linear graph in Fig. 6.29 is identical to Fig. 6.25 of Example 6.5.
The only difference is the definition of positive direction; see Fig. 6.30 vs. Fig. 6.26.
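
The point of Example 6.6 can be shown numerically: flipping the assumed positive direction only negates the answer, not the physics. A tiny Python sketch for a single damper-like element (values hypothetical):

```python
# Sketch: flipping the assumed positive direction of a flow only flips its sign.
# Single damper-like element with elemental law v = B * f; values are hypothetical.
B = 2.0
v_drop = 6.0                       # actual across-variable drop from A to B

f_forward = v_drop / B             # assume A -> B is the positive direction
f_backward = -v_drop / B           # assume B -> A is the positive direction instead

# Both describe the same physical flow of magnitude 3; a negative value
# just means the flow opposes the assumed positive direction.
assert f_forward == -f_backward
```
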

Example 6.7 In this example, we will demonstrate how to draw a linear graph of a two-
degrees-of-freedom system shown in Fig. 6.31. The system consists of two mass points
with masses m1 and m2 . The mass m1 moves on a horizontal plane and is subjected to a
prescribed horizontal force Fs (t). In contrast, the mass m2 moves vertically and is subjected
to gravity, i.e., its own weight m2 g. The mass m1 is connected to a wall via a spring k1 and
a damper B1 . Moreover, the two masses are connected together via a spring k2 .

Figure 6.31: A two-degrees-of-freedom translational system
Figure 6.32: Linear graph around the first mass

Based on our past experience, the first step is to draw a free-body diagram. Figure 6.31
also shows the free-body diagram of the mass m1 , which will serve as node A in the linear
graph to be drawn. Moreover, positive motion for the mass m1 is pointing to the right as
defined in Fig. 6.31. As a result, the spring k1 and the damper B1 are both in tension, thus
exerting forces fk1 and fB1 to the left. Moreover, there is a tension developed in the spring

k2 , and let us call it T . T is considered positive and is pointing to the right. Summing all
the forces acting on the mass m1 and using Newton’s second law, one obtains

Fs (t) − fk1 − fB1 + T = m1 aA (6.17)

where aA refers to the acceleration of the mass m1 (i.e., node A). Rearranging (6.17), we obtain

Fs (t) + T = fk1 + fB1 + m1 aA (6.18)

Equation (6.18) is then the continuity equation for node A.

Based on (6.18), two forces Fs (t) and T flow into node A, while three forces fk1 , fB1 ,
and m1 aA flow out of node A. Accordingly, one can draw the linear graph around node A as
shown in Fig. 6.32. A through-variable source Fs (t), with its bottom end grounded, provides
a flow into node A. The tension T flows from node B (where mass m2 is located) into node
A. These two forces merge and split into three branches: one to stretch the spring k1 , one
to stretch the damper B1 , and the rest to accelerate the mass m1 . These three forces fk1 ,
fB1 , and m1 aA then all return to the ground.

Figure 6.33: Free-body diagram of the second mass
Figure 6.34: Flows around the two masses

To complete the linear graph, let us draw the free-body diagram of mass m2 (i.e., node B
in linear graph); see Fig. 6.33. Let us define the positive direction of motion to be downward
for mass m2 . Note that the tension T is in the upward direction. Use of Newton’s second
law leads to
m2 g − T = m2 aB (6.19)

where aB refers to the acceleration of the mass m2 (i.e., node B). Rearrange (6.19) to obtain,
m2 g = T + m2 aB (6.20)
Equation (6.20) serves as the continuity equation for node B.

Based on (6.20), only one force m2 g flows into node B, while two forces T and m2 aB
flow out of node B. Accordingly, one can fill in the linear graph around node B as shown
in Fig. 6.34. A through-variable source m2 g, with its bottom end grounded, provides a flow
into node B. In return, two forces flow out of node B. First, the tension T flows out of node
B into node A. What is left accelerates the mass m2 relative to the ground.

Figure 6.35: Completed linear graph

Figure 6.36: Physical meaning of positive flows

Finally, we change the notation of the forces in Fig. 6.34 to the notation used in linear
graphs. For example, the force fk1 is now replaced by k1 representing the presence of a
spring k1 in that branch. The final linear graph is shown in Fig. 6.35. Moreover, Fig. 6.36
summarizes the physical meaning of the positive directions in the linear graph.
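As a sanity check of the continuity equations (6.18) and (6.20), they can be integrated numerically. The sketch below is a minimal simulation assuming, purely for illustration, that the spring k2 couples the two motions directly (e.g., through an ideal cable over a pulley), linear elemental laws fk1 = k1 x1 and fB1 = B1 v1 , and hypothetical parameter values.

```python
# Minimal simulation of (6.18) and (6.20); all parameter values hypothetical.
m1, m2, k1, k2, B1, g, Fs = 1.0, 1.0, 1.0, 1.0, 2.0, 10.0, 0.0

# States: x1, v1 (mass m1, positive to the right), v2 (mass m2, positive
# down), and the spring-k2 tension T, assumed to obey dT/dt = k2*(v2 - v1).
x1 = v1 = v2 = T = 0.0
dt = 0.001
for _ in range(int(60 / dt)):               # forward-Euler integration
    a1 = (Fs - k1 * x1 - B1 * v1 + T) / m1  # node A continuity, eq. (6.18)
    a2 = (m2 * g - T) / m2                  # node B continuity, eq. (6.20)
    x1 += v1 * dt
    v1 += a1 * dt
    v2 += a2 * dt
    T += k2 * (v2 - v1) * dt

# At rest the tension supports the hanging mass (T -> m2*g) and the
# spring k1 stretches until k1*x1 = Fs + m2*g.
print(T, x1, v2)
```

With these numbers the simulation settles at T ≈ 10 and x1 ≈ 10, consistent with a static force balance on each mass.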

6.5 Linear Graphs for Rotational Systems

Linear graphs for rotational systems can be drawn based on the same procedure described
in Section 6.4 for translational systems. In other words, one would start with a free-body

Figure 6.37: A flywheel driven by a motor via a drag cup

diagram, apply Newton’s second law, derive a continuity equation, and draw a linear graph
around the node. Here are some examples.

Example 6.8 Figure 6.37 illustrates a rotational system consisting of a flywheel, a drag cup, a
dissipative bearing, and a driving motor. The flywheel has a mass moment of inertia I. One
side of the flywheel rests on the bearing with viscous damping coefficient B1 . The other side
of the flywheel is connected to the motor via a drag cup, whose viscous damping coefficient
is B2 . Finally, the motor drives the drag cup with a prescribed angular velocity Ωs (t).

Let A and B represent two nodes on the left and right sides of the drag cup, respectively.
Node A corresponds to the motor, while node B corresponds to the flywheel. Let us start
from the source and go with the flow. The flow comes out of the motor (i.e., the source)
and goes into the drag cup. Therefore, let us draw the free-body diagram of the drag cup and
show it in Fig. 6.38. Since the motor prescribes the velocity, it will provide whatever torque
TB2 is needed to drive the drag cup, no matter how large that torque may be. The drag cup has no
inertia; therefore, the right end of the drag cup must present an equal and opposite torque
to balance out TB2 .

The torque TB2 then passes onto the flywheel; see the free-body diagram of the flywheel
in Fig. 6.39. For the flywheel, we need to set up a positive direction. As shown in Fig. 6.39,
the + sign shows the positive direction for angular displacement θ, angular velocity ω, and
angular acceleration α of the flywheel. Note that the + sign applies everywhere on
the flywheel. Based on Newton’s third law, the torque TB2 from the drag cup applies to

Figure 6.38: Free-body diagram of the drag cup

Figure 6.39: Free-body diagram of the flywheel

the left side of flywheel in the positive direction. In the meantime, the damping from the
bearing will apply a torque TB1 opposite to the direction of motion, thus appearing in the
negative direction as shown in Fig. 6.39.

Based on the free-body diagram in Fig. 6.39, one can apply Newton’s second law to
obtain
TB2 − TB1 = IαB (6.21)

where αB refers to the angular acceleration of the flywheel at node B. Rearrange (6.21) to
obtain,
TB2 = TB1 + IαB (6.22)

Equation (6.22) serves as the continuity equation for node B. (The continuity equation of node
A is simply that the torque from the motor is TB2 .)

Now we are ready to draw the linear graph; see Fig. 6.40. Let us start with the source
and go with the flow. The source is the motor, which is an across-variable source. One side
of the source is connected to the ground. The other side is connected to node A, which is the
left side of the drag cup. The torque flowing out of the source (i.e., the motor) entirely goes
into the drag cup (with damping B2 ) and arrives at node B located at the right side of the
drag cup. At node B (i.e., the flywheel), the torque TB2 applied on the flywheel follows the
continuity equation (6.22) and splits into two branches. One is to overcome the damping B1
from the bearing. The other is to accelerate the flywheel I. Figure 6.41 shows the physical
meaning of the flow directions specified in the linear graph Fig. 6.40.
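As a quick numerical check of (6.22), one can combine it with the (assumed linear) elemental laws TB2 = B2 (Ωs − ω) for the drag cup and TB1 = B1 ω for the bearing, where ω is the flywheel speed. The parameter values below are hypothetical.

```python
# Hypothetical values: I = 1 kg*m^2, B1 = B2 = 1 N*m*s/rad, Omega_s = 1 rad/s.
I, B1, B2, Omega_s = 1.0, 1.0, 1.0, 1.0

omega = 0.0                        # flywheel speed at node B
dt = 0.001
for _ in range(int(20 / dt)):      # forward-Euler integration
    T_B2 = B2 * (Omega_s - omega)  # drag-cup torque flowing into node B
    T_B1 = B1 * omega              # bearing torque flowing out of node B
    omega += (T_B2 - T_B1) / I * dt  # continuity eq. (6.22), rearranged

print(omega)  # steady state: B2*Omega_s/(B1 + B2) = 0.5
```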

Figure 6.40: Constructed linear graph

Figure 6.41: Physical meaning of the flows in the linear graph Fig. 6.40

Figure 6.42: A rotational system with a two-stage rotor assembly

Example 6.9 Consider a two-stage rotor assembly shown in Fig. 6.42. The first-stage rotor
has mass moment of inertia J1 and rests on a set of bearings with damping coefficient B2 .
The second-stage rotor has mass moment of inertia J2 and rests on a set of bearings with
damping coefficient B4 . A source with prescribed torque Ts (t) is connected to the left side
of the first-stage rotor via a drag cup B1 . The right side of the first-stage rotor is connected
to the left side of the second-stage rotor via another drag cup B3 . Moreover, the right side
of the second-stage rotor is connected to the wall via a spring with stiffness k. Finally, node
A refers to the driving motor, node B refers to the first-stage rotor, and node C refers to
the second-stage rotor.

Figure 6.43: Free-body diagrams of each rotational element

Figure 6.43 shows the free-body diagram of every component in the rotational system.
For the drag cup B1 , the torque Ts from the source appears on the left side and the reactive
torque T1 appears on the right side. According to Newton’s third law, the applied torque Ts
must be equal to the torque of the drag cup T1 .

Ts = T1 (6.23)

The first-stage rotor experiences the torque T1 from the drag cup B1 at the left
side. In the meantime, the dissipative torque TB2 from the bearing is present and is against
the motion. Also, a reactive torque TB3 from the drag cup B3 appears on the right side
of the rotor. Note that we do not know what the proper direction of the torque TB3 will be.
We simply assume a direction here, as in all free-body diagrams. Based on the free-body
diagram of the first-stage rotor, one can apply Newton’s second law to obtain

T1 − TB2 − TB3 = J1 α1 (6.24)

where α1 refers to the angular acceleration of the first-stage rotor. Rearrange (6.24) to
obtain,
T1 = TB2 + TB3 + J1 α1 (6.25)

Equation (6.25) serves as the continuity equation for node B. (Recall that node B is associated
with the first-stage rotor.)

For the drag cup B3 , the torque TB3 from the first-stage rotor appears on the left side
and the reactive torque TB3 from the second-stage rotor appears on the right side. The
statement above has already implicitly applied static equilibrium because the drag cup has
no inertia.

Figure 6.44: The resulting linear graph of the system shown in Fig. 6.42

The second-stage rotor experiences the torque TB3 from the drag cup B3 at the left
side. In the meantime, the dissipative torque TB4 from the bearing is present and is against
the motion. Also, a reactive torque Tk from the spring k appears on the right side of the
rotor. Since we know that the right side of the spring is connected to the wall, the proper
direction of the torque Tk is the direction shown in Fig. 6.42. Based on the free-body
diagram of the second-stage rotor, one can apply Newton’s second law to obtain

TB3 − Tk − TB4 = J2 α2 (6.26)

where α2 refers to the angular acceleration of the second-stage rotor. Rearrange (6.26) to
obtain,
TB3 = Tk + TB4 + J2 α2 (6.27)

Equation (6.27) serves as the continuity equation for node C. (Recall that node C is associated
with the second-stage rotor.)

Figure 6.44 shows the resulting linear graph. Let us start with the source and go with
the flow. The source is a through-variable source. One side of the source is grounded, and
the other side is connected to the drag cup B1 , specifically at node A. The flow of the source
goes entirely into the drag cup B1 and ends at node B where the first-stage rotor is; see
(6.23).

When the torque T1 arrives at node B, the torque splits into three branches: TB2 ,
TJ1 ≡ J1 α1 , and TB3 . Since the bearing B2 and the rotor inertia J1 are both connected to
the ground, the branches TB2 and TJ1 are both connected to the ground. In contrast, the
drag cup B3 is connected to the second stage-rotor (i.e., node C). Therefore, TB3 flows out
of node B and into node C. (Very important: do not mess up the sign; follow the
continuity equation in (6.25).)

When the torque TB3 arrives at node C, the torque splits into three branches again:
TB4 , TJ2 ≡ J2 α2 , and Tk . Since the bearing B4 , the rotor inertia J2 , and the spring k are
all connected to the ground, the three branches TB4 , TJ2 , and Tk all go to the ground. The
linear graph is now complete and in good order.
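The continuity equations (6.25) and (6.27) can be checked by direct integration, assuming linear elemental laws TB3 = B3 (ω1 − ω2 ) for the drag cup and dTk /dt = k ω2 for the spring. All parameter values below are hypothetical; note that with an ideal torque source, B1 merely transmits Ts (see (6.23)) and drops out of the dynamics.

```python
# Hypothetical values: unit inertias, dampings, stiffness; constant Ts = 1.
J1, J2, B2, B3, B4, k, Ts = 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0

w1 = w2 = Tk = 0.0                 # rotor speeds and spring torque
dt = 0.001
for _ in range(int(60 / dt)):      # forward-Euler integration
    T_B3 = B3 * (w1 - w2)          # drag-cup B3 elemental law
    w1 += (Ts - B2 * w1 - T_B3) / J1 * dt  # node B continuity, eq. (6.25)
    w2 += (T_B3 - B4 * w2 - Tk) / J2 * dt  # node C continuity, eq. (6.27)
    Tk += k * w2 * dt              # spring winds up as long as w2 != 0

# Steady state: the spring locks the second rotor (w2 -> 0), so the source
# torque is shared by B2 and B3: w1 -> Ts/(B2 + B3) and Tk -> B3*w1.
print(w1, w2, Tk)
```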

6.6 Physical Source Modeling

The models so far for across-variable and through-variable sources are ideal. For across-variable
sources, the assumption has been that the across-variable output is independent of the
through-variable. If this assumption held true for a 9-volt battery, the voltage
provided by the battery would always be 9 volts no matter what device the battery drives or
how much current the battery delivers. As shown in Fig. 6.45, this implies a
horizontal, constant line in the v-f diagram. This, of course, is not true. The battery
quickly reduces its voltage during a short-circuit condition, for which the current is very
large. The corresponding v-f diagram should have a drop when f becomes large, as shown
in Fig. 6.45.

Similarly, for through-variable sources, the assumption has been that the through-variable
output is independent of the across-variable. If this assumption held true for a car
engine, the torque provided by the engine would always be fixed no matter how heavy
the car is and how fast the car is moving. As shown in Fig. 6.45, this implies a horizontal,
constant line in the f-v diagram. This, of course, is not true. In fact, the torque of a car
engine is not constant; it drops at both high and low speeds. Many through-variable sources
have an f-v diagram that presents a drop when v becomes large, as shown in Fig. 6.45.

Figure 6.45: Ideal vs. real sources



Figure 6.46: A linear approximation of the v-f diagram

To accommodate the drop shown in Fig. 6.45, a simple way is to model the drop through
a linear approximation. Let us use the v-f diagram in Fig. 6.46 as an example. A flat zone
starts at an across-variable value Vs when the through-variable is zero. When the flat
zone extends near Fs , the across-variable quickly drops to zero at f = Fs . The flat zone
and the sudden drop in Fig. 6.46 are obviously very difficult to model. We end up with two
choices. The first choice is to ignore the drop and simply model the v-f diagram as a flat
zone. In this case, we totally discount possible effects of the through-variable on the across-variable
source. The other choice is to make a very crude approximation by drawing a straight
line connecting Vs and Fs to represent the v-f diagram. Of course, the straight line deviates
from the original v-f diagram, but it does provide some sense of how the through-variable might
affect the performance of an across-variable source.

Should the second choice be taken, the v-f diagram is approximated as

v = Vs − Rf (6.28)

where

R ≡ Vs /Fs (6.29)
is the slope of the straight line of the approximate v-f diagram. In (6.28), the expression
serves as a compatibility equation of the source. In other words, the physical source now
consists of an ideal across-variable source Vs and a resistor R in a series connection. The
ideal across-variable source Vs does not vary with the through-variable. In the meantime,

the effect of through-variable behaves like a resistor dissipating part of the ideal source Vs .
Based on this formulation, we can draw a linear graph to represent the ideal source Vs and
the resistor R as shown in the box of Fig. 6.47. This form of physical source modeling is
called the Thevenin equivalent source.

Figure 6.47: Thevenin equivalent source modeling

Figure 6.48: Norton equivalent source modeling

Alternatively, the v-f diagram can also be approximated as

f = Fs − v/R (6.30)
In this case, (6.30) serves as a continuity equation of the source. The physical source now
consists of an ideal through-variable source Fs and a resistor R in a parallel combination. The
ideal through-variable source Fs does not vary with the across-variable. In the meantime, the
effect of across-variable v behaves like a parallel resistor drawing part of the through-variable
away from the ideal source Fs . Based on this formulation, we can draw a linear graph to
represent the ideal through-variable source Fs and the resistor R as shown in Fig. 6.48. This
form of physical source modeling is called the Norton equivalent source.
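A quick numerical check that the two models describe the same physical source: for any resistive load attached to the terminals, the Thevenin form (6.28) and the Norton form (6.30) predict the same operating point (v, f ). The numbers below are hypothetical.

```python
# Hypothetical source data: open-circuit value Vs, blocked value Fs.
Vs, Fs = 9.0, 3.0
R = Vs / Fs                 # slope of the approximate v-f line, eq. (6.29)
R_load = 6.0                # hypothetical resistive load: v = R_load * f

# Thevenin model (6.28): v = Vs - R*f intersected with the load line.
f_th = Vs / (R + R_load)
v_th = R_load * f_th

# Norton model (6.30): f = Fs - v/R intersected with the load line.
v_no = Fs * R * R_load / (R + R_load)
f_no = v_no / R_load

print(v_th, f_th, v_no, f_no)  # both models give v = 6.0, f = 1.0
```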

Example 6.10 As shown in Fig. 6.49, the flywheel with mass moment of inertia J is sup-
ported at the left end by a drag cup with damping coefficient B1 and at the right end by
a bearing with damping coefficient B2 . The drag cup B1 is driven by a motor, which will

Figure 6.49: A flywheel driven by a motor via a drag cup, reproduced from Fig. 6.37

be modeled as a physical source in this example. This is the same flywheel system studied in
Fig. 6.37, except that the B1 and B2 notations are swapped, and also the source here is not
ideal.

Figure 6.50 shows the v-f diagram of the physical source. Basically, the angular velocity
reaches its maximum Ωs ≡ Ωmax when the torque Ts = 0. The angular velocity drops linearly
with respect to the torque Ts , and is reduced to zero when the torque reaches Ts = Tmax . In
this case, the physical source model is

Ωs = Ωmax − RTs (6.31)

where

R ≡ Ωmax /Tmax (6.32)
is the slope of the v-f diagram. Basically, (6.31) implies an ideal source Ωmax in series with a
resistor R; see the Thevenin equivalent source model in Fig. 6.47.

Figure 6.51 illustrates the free-body diagram of each component of the flywheel system.
The torque Ts from the physical source flows through the drag cup B1 . When the torque Ts
arrives at the flywheel, the torque splits into two branches. One is used to overcome the bearing
friction torque TB2 . The other is the torque TJ to accelerate the flywheel J.

Based on the physical source model (6.31) and the free-body diagram, we can construct
a linear graph with a Thevenin equivalent system shown in Fig. 6.52. The physical source

Figure 6.50: Motor as a physical source

Figure 6.51: Free-body diagrams of each component

consists of an ideal across-variable source Ωmax and a resistor R in a series connection. The
torque out of the resistor R is the torque Ts and should go into the drag cup B1 . After
the flow Ts arrives at the flywheel, the flow splits into a resistor B2 and a capacitor J to
complete the linear graph.

Figure 6.52: Thevenin equivalent source modeling of Fig. 6.49

Figure 6.53: Norton equivalent source modeling of Fig. 6.49

The v-f diagram of the physical source (6.31) can also be rewritten as

Ts = Tmax − Ωs /R (6.33)

Equation (6.33) implies that the physical source is equivalent to an ideal through-variable
source Tmax in parallel with a resistor R, which is shown in Fig. 6.48 as the Norton equivalent
source.

Based on the physical source model (6.33) and the free-body diagram, we can construct
a linear graph with a Norton equivalent system shown in Fig. 6.53. The physical source
consists of an ideal through-variable source Tmax and a resistor R in a parallel combination.
The flow out of the physical source (i.e., Tmax less the flow into the resistor R) is the torque
Ts and should go into the drag cup B1 . After the flow Ts arrives at the flywheel, the flow
splits into a resistor B2 and a capacitor J to complete the linear graph.
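As a numerical sanity check, combining the source model (6.31) with the drag-cup elemental law Ts = B1 (Ωs − ω), where ω is the flywheel speed and Ωs the motor-side speed, gives a first-order equation whose operating point can also be found in closed form. Parameter values are hypothetical.

```python
# Hypothetical values: J = 1, B1 = B2 = 1, R = 1, Omega_max = 1.
J, B1, B2, R, Omega_max = 1.0, 1.0, 1.0, 1.0, 1.0

omega = 0.0
dt = 0.001
for _ in range(int(20 / dt)):  # forward-Euler integration
    # Solve Omega_s = Omega_max - R*Ts together with Ts = B1*(Omega_s - omega)
    # for the torque actually delivered by the physical source:
    Ts = B1 * (Omega_max - omega) / (1.0 + B1 * R)
    omega += (Ts - B2 * omega) / J * dt   # flywheel continuity equation

omega_ss = B1 * Omega_max / (B1 + B2 * (1.0 + B1 * R))
print(omega, omega_ss)  # both approximately 1/3
```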
Chapter 7

Deriving State Equations from Linear Graphs

In Chapter 5, I mentioned that state equations would provide a better way to model systems
involving multiple domains. In Chapter 6, I further explained how linear graphs were con-
structed for each domain. One missing link thus far is how state equations can be derived
from linear graphs. In this chapter, our goal is to put back that missing link, i.e., we will
learn how to derive state equations from linear graphs for a single domain. Deriving state
equations for systems involving multiple domains is more challenging. We will defer that
to a later chapter.

I will start with a heuristic example to show, in general, that a state equation can
be derived from a linear graph if all continuity and compatibility equations are written
down. This approach, however, is neither very efficient nor foolproof, because there are many
"traps" that could cause the derivation to run in circles. Therefore, certain concepts are
developed, such as normal trees, branches, links, primary variables, and secondary variables,
to make the derivation a foolproof process. With the help of these concepts, we will be able
to come up with a "recipe" that one can faithfully follow to derive a state equation for a
given linear graph.


7.1 A Heuristic Example

Let us consider the electrical circuit shown in Fig. 7.1. The circuit consists of a voltage
source Vs (t), an inductor L, a resistor R, and a capacitor C. There are three nodes: A, B,
and G. The capacitor and the resistor are in a parallel combination forming an RC-circuit
between nodes B and G. The inductor is in a series connection with the RC-circuit between
nodes A and B. Finally, the voltage source is connected to nodes A and G to drive the
circuit. The corresponding linear graph of the circuit is illustrated in Fig. 7.2. Our goal is
to derive a state equation for the circuit of interest.

Figure 7.1: An electric circuit for the heuristic example

Figure 7.2: The corresponding linear graph of Fig. 7.1

Step 1: Write down all variables. The first step is to write down all possible
variables. A linear graph consists of many one-port elements as well as one or multiple
sources. Each element or source has a through-variable and an across-variable. All these
through- and across-variables are possible candidates to serve as state variables in the state
equation to be derived.

For the circuit in Fig. 7.2, let us define iR , iC , and iL as the current through the resistor
R, capacitor C, and inductor L, respectively. Similarly, let us define vR , vC , and vL as the
voltage drop across the resistor R, capacitor C, and inductor L, respectively. Therefore,
there are six viable candidates iR , iC , iL , vR , vC , and vL to serve as state variables. Note
that only some of these six variables will serve as independent state variables. Once those
state variables are chosen, other variables will depend on the state variables.

How about the across-variable Vs and through-variable is of the voltage source? Why
are they not considered? The answer is very simple. Since the voltage source is an across-variable
source, Vs is prescribed and considered known. Therefore, it is not independent. It
will provide driving terms showing up on the right side of the state equation. The through-
variable is will depend on the current iR , iC , and iL in the circuit and will not be independent
either.

Step 2: Write down all available equations. There are three types of equations
available: elemental equations, continuity equations, and compatibility equations. All these
equations need to be satisfied simultaneously. The state equation to be derived must satisfy
all these equations.

A. Elemental Equations. For each one-port element in the linear graph, its through-
variable and across-variable are related via an elemental equation. For the current example,
these elemental equations are
vL = L diL /dt (7.1)
vR = RiR (7.2)

and
iC = C dvC /dt (7.3)

B. Continuity Equations. For the circuit in Fig. 7.1, the continuity equations are derived
from Kirchhoff’s current law. For node A, the continuity equation is

is = iL (7.4)

which simply states that the current is out of the source equals the through-variable iL .
For node B, the continuity equation is

iL = iR + iC (7.5)

C. Compatibility Equations. For the circuit in Fig. 7.1, the compatibility equations are
derived from Kirchhoff’s voltage law. For loop ABG, the compatibility equation is

VS = vL + vR (7.6)

which basically states a constraint that the across-variables vL and vR must satisfy. For
loop BGB, the compatibility equation is

vR = vC (7.7)

Step 3: Select state variables. Now, we have six variables: iR , iC , iL , vR , vC , and


vL . How do I know which ones are the state variables? One clue is to look at the form of
the state equation, i.e.,

dx/dt = Ax + Bu (7.8)

where x is the state-variable vector and u is the input-variable vector. As one can see,
a state variable must carry a differential operator d/dt in front of it. With this clue, we can
go back and inspect equations (7.1) to (7.7). There are only two variables preceded by d/dt;
they are iL and vC . So iL and vC will be the state variables.

Step 4: Eliminate dependent variables. The other four variables iR , iC , vR , and


vL that are not chosen as state variables must be eliminated through use of the elemental
equations, continuity equations, and compatibility equations as follows. Let us start with
(7.1)
diL /dt = (1/L) vL = (1/L)(Vs − vR ) (7.9)
where (7.6) has been used. Since vR is not a state variable, it needs to be replaced further
as
diL /dt = (1/L)(Vs − vR ) = (1/L)(Vs − vC ) (7.10)
where (7.7) is used. Equation (7.10) is in good order, because every term involves either a
state variable or an input variable.

Similarly, let us start with (7.3)

dvC /dt = (1/C) iC = (1/C)(iL − iR ) (7.11)
where (7.5) is used. Since iR is not a state variable, it must be replaced as

dvC /dt = (1/C)(iL − iR ) = (1/C)(iL − vR /R) (7.12)

where (7.2) is used. Nevertheless, vR in (7.12) is still not a state variable. So it must be
replaced further, and (7.12) becomes

dvC /dt = (1/C)(iL − vR /R) = (1/C)(iL − vC /R) (7.13)
where (7.7) is used. Now every term in (7.13) involves either a state variable or an input
variable. Therefore, (7.13) is in good order and can serve as a state equation.

Finally, the two viable state equations (7.10) and (7.13) are written into the following
matrix form:

\[
\frac{d}{dt}\begin{bmatrix} i_L \\ v_C \end{bmatrix} =
\begin{bmatrix} 0 & -1/L \\ 1/C & -1/(RC) \end{bmatrix}
\begin{bmatrix} i_L \\ v_C \end{bmatrix} +
\begin{bmatrix} 1/L \\ 0 \end{bmatrix} V_s \qquad (7.14)
\]
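A direct numerical integration of (7.14) provides a sanity check: with a constant input Vs , setting dx/dt = 0 in (7.14) gives vC = Vs and iL = Vs /R at steady state. Element values below are hypothetical.

```python
import numpy as np

# Hypothetical element values R = L = C = 1 and a unit step input Vs.
R, L, C, Vs = 1.0, 1.0, 1.0, 1.0

A = np.array([[0.0, -1.0 / L],
              [1.0 / C, -1.0 / (R * C)]])
B = np.array([1.0 / L, 0.0])

x = np.zeros(2)                # state vector [iL, vC]
dt = 0.001
for _ in range(int(30 / dt)):  # forward-Euler integration of (7.14)
    x = x + (A @ x + B * Vs) * dt

print(x)  # approaches [Vs/R, Vs] = [1.0, 1.0]
```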

The derivation above seems to work out very nicely. Theoretically, it should work for
any system that has a linear graph representation. There are, however, some hidden traps.
Some of them are easy to find, but others are more subtle. For example, I only listed
continuity equations of nodes A and B above (cf. (7.4) and (7.5)). Why didn’t I list the
continuity equation of node G? If I did, how did I know I should use the continuity equations
associated with nodes A and B, not the one with node G? As another example, we were able
to pick the state variables iL and vC without much difficulty. Is it going to be as simple in
every system? All in all, this example shows that the derivation above is rather idealistic.
It is not foolproof. We need to develop a recipe that makes the process foolproof and at
the same time very efficient. To do so, we need to find out what possible traps are present
and how they can be eliminated. As we will see later, these traps will be removed by using
a concept called normal trees. The efficiency will be significantly improved by using the
concepts of branches and links.

7.2 Trap 1: Which Loops and Nodes?

The first trap in the heuristic example is the choice of compatibility and continuity equations.
For the linear graph in Fig. 7.2, the complete set of continuity equations should include

is = iL (7.15)

for node A,
iL = iR + iC (7.16)

for node B, and


is = iR + iC (7.17)

for node G. As one can see, these three equations are not independent; substitution of (7.16)
into (7.15) leads to (7.17). In this case, how would one know which two continuity equations
to use for the derivation of the state equation?

In a broader context, when the system of interest becomes more complex, the number of
continuity equations increases dramatically. If the linear graph had 10 nodes, continuity
equations would result not only from each single node, but also from closed contours
enclosing 2 nodes, 3 nodes, and so on. The number of available continuity equations would
quickly become unmanageable even for a moderately complex system.

The same scenario occurs for the compatibility equation. For the linear graph in Fig. 7.2,
the complete set of compatibility equations should include

VS = vL + vR (7.18)

for loop LR,


vR = vC (7.19)

for loop RC, and


VS = vL + vC (7.20)

for loop LC. As one can see, these three equations are not independent; substitution of
(7.19) into (7.18) leads to (7.20). In this case, how would one know which two compatibility
equations to use for the derivation of the state equation? Similarly, when the number of
nodes increases, there will be many more loops available to write down the compatibility
equations. Filtering out the correct compatibility equations will become a real challenge.
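The redundancy in both sets of equations can be made concrete by stacking their coefficients into matrices and computing ranks:

```python
import numpy as np

# Continuity equations (7.15)-(7.17), coefficients over (is, iL, iR, iC):
cont = np.array([[1, -1,  0,  0],   # node A: is - iL = 0
                 [0,  1, -1, -1],   # node B: iL - iR - iC = 0
                 [1,  0, -1, -1]])  # node G: is - iR - iC = 0

# Compatibility equations (7.18)-(7.20), coefficients over (Vs, vL, vR, vC):
comp = np.array([[1, -1, -1,  0],   # loop LR: Vs - vL - vR = 0
                 [0,  0,  1, -1],   # loop RC: vR - vC = 0
                 [1, -1,  0, -1]])  # loop LC: Vs - vL - vC = 0

# Each set of three equations has rank 2: only two are independent.
print(np.linalg.matrix_rank(cont), np.linalg.matrix_rank(comp))  # 2 2
```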

In conclusion, we need a systematic way to choose "appropriate" continuity and compatibility
equations. Otherwise, step 4 of eliminating dependent variables in Section 7.1
cannot be implemented in a simple and straightforward manner. We will explain the systematic
way to identify the needed continuity and compatibility equations in a later section.

Figure 7.3: An electric circuit that falls in trap 2

Figure 7.4: The corresponding linear graph of Fig. 7.3

7.3 Trap 2: Which State Variables?

In the heuristic example of Section 7.1, I hypothesized that every variable preceded by a
differential operator d/dt is a state variable. Is this statement generally valid for all
systems? The answer, unfortunately, is: not necessarily. Here is an example.

Figure 7.3 shows an electrical circuit consisting of two RC circuit components. The
resistor R1 and capacitor C1 are in a parallel combination forming the first RC circuit
component that is located between nodes A and B. The resistor R2 and capacitor C2 are
also in a parallel combination forming the second RC circuit component that is located
between nodes B and G. The two circuit components are in a series connection and driven
by a voltage source Vs . If we inspect each element in the circuit, we will obtain the following
elemental equations:
iC1 = C1 dvC1 /dt (7.21)
for capacitor C1 ,
iC2 = C2 dvC2 /dt (7.22)
for capacitor C2 ,
vR1 = R1 iR1 (7.23)

for resistor R1 , and


vR2 = R2 iR2 (7.24)

for resistor R2 . If we follow the hypothesis that variables preceded by d/dt are state
variables, we will conclude from (7.21) and (7.22) that vC1 and vC2 are state variables.

It turns out that this statement is wrong. vC1 and vC2 cannot be state variables at the
same time, because they are not independent. If we consider the loop formed by the source
Vs , the capacitor C1 , and the capacitor C2 , the corresponding compatibility equation is

vC1 + vC2 = Vs (t) (7.25)

Therefore, vC1 and vC2 cannot be independent. Once vC1 is varied independently, vC2 needs
to change accordingly to satisfy the compatibility equation (7.25), and vice versa.
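One way to see this concretely is to use (7.25) to eliminate vC2 = Vs − vC1 ; Kirchhoff's current law at node B then leaves a single first-order equation in vC1 alone, (C1 + C2 ) dvC1 /dt = (Vs − vC1 )/R2 − vC1 /R1 . The sketch below uses hypothetical unit element values and takes vC1 (0) = 0 purely for illustration.

```python
# Hypothetical element values and a unit step input Vs.
R1, R2, C1, C2, Vs = 1.0, 1.0, 1.0, 1.0, 1.0

vC1 = 0.0
dt = 0.001
for _ in range(int(20 / dt)):  # forward-Euler integration of the reduced ODE
    vC1 += ((Vs - vC1) / R2 - vC1 / R1) / (C1 + C2) * dt
vC2 = Vs - vC1                 # recovered from the constraint (7.25)

# Steady state is a resistive voltage divider: vC1 -> Vs*R1/(R1 + R2).
print(vC1, vC2)
```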

To avoid trap 2, we will need a smart way to find dependent variables that are
preceded by the operator d/dt. The way to find these dependent variables is to inspect
the compatibility and continuity equations. We will discuss detailed methods to rule out
these variables in a later section.

7.4 Trap 3: Is A Singular?

To be precise, this one is more a nuisance than a trap. In a state-space formulation, the
dynamics of a system are governed by a state equation

dx/dt = Ax + Bu (7.26)

where x is the state-variable vector, A is the state matrix, B is the input matrix, and u is
the input-variable vector. For some systems, A is singular, i.e., det A = 0 or, equivalently,
A−1 does not exist.1

A dynamical system with a singular A is not a new thing. In fact, we have encountered
systems with det A = 0 in Chapter 4. We learned from Example 4.3 and Example 4.4 that
det A = 0 implies that the state variables are "dependent" in some manner.

1 Our textbook calls such a system an uncontrollable system. Please be careful about the use of this
terminology, because "controllability" of a system means something else in control theory and is not
defined this way.

Basically, when

det A = 0, a linear combination of some state variables will satisfy a differential equation.
Solution of the differential equation (with suitable initial conditions) will give a constraint
on the linear combination of the state variables involved. Therefore, we know these state
variables are dependent. The form of dependence, however, is not known until the differential
equation is solved. In other words, the form of dependence will vary with the input as well as
the initial conditions. The form of dependence is not definite, unlike the one given by a
compatibility or continuity equation (cf. (7.25)).

To better understand the explanation above, let us revisit Example 4.3. From the
first and third rows of (4.54), one can conclude that the linear combination T − kθ satisfies
a differential equation d(T − kθ)/dt = 0, where T and θ are two of the three state variables
chosen in Example 4.3. Therefore, T and θ must depend on each other through the solution
of the differential equation (cf. (4.52)). Similarly, for the case of Example 4.4, one can
conclude from the first and third rows of (4.63) that the linear combination mv2 + Bx1 satisfies
a differential equation d(mv2 + Bx1 )/dt = Fs (t). Therefore, v2 and x1 will depend on each
other through the solution of the differential equation; see (4.66).

So here we face a difficult situation. Do we want to avoid the trap so that all derived
state equations will have det A ≠ 0? The advantage is that we can reduce the state equation
to a minimal order. But there is no free lunch; the price we pay is that we sweep the linear
combination under the carpet. If we want to know the response of the state variables involved
in the linear combination, additional integrations are needed. If we choose to do so, how do
we actually make sure det A ≠ 0 to begin with?

It turns out that linear graph formulations often lead to state equations with det A ≠ 0.
We will see later why this is the case. In linear graph formulations, there are two known
scenarios leading to det A = 0: (a) multiple capacitors in a series connection and (b) multiple
inductors in a parallel combination. Let me first demonstrate one of the scenarios through
the following example. Then I will explain what we could do to modify the formulation so
that det A ≠ 0.

Example 7.1 This example shows that two capacitors in a series connection will lead to two
linearly dependent state variables. Figure 7.5 shows an electric circuit consisting of one
resistor R and two capacitors C1 and C2 in a series connection. The resistor R is located between nodes
200 CHAPTER 7. DERIVING STATE EQUATIONS FROM LINEAR GRAPHS

Figure 7.5: An electric circuit that falls in trap 3
Figure 7.6: The corresponding linear graph of Fig. 7.5

A and B, while the two capacitors are located between nodes B and G. A voltage source
drives the circuit at nodes A and G with a prescribed voltage Vs (t). Figure 7.6 shows the
corresponding linear graph.

We can follow the heuristic approach to derive state equations as follows. The elemental
equations are
vR = R iR    (7.27)

iC1 = C1 dvC1/dt    (7.28)

and

iC2 = C2 dvC2/dt    (7.29)
Therefore, we can choose vC1 and vC2 as state variables. At this stage, there is no indication
that vC1 and vC2 will be linearly dependent.

The next step is to derive the following continuity equations

is = iR (7.30)

for node A,
iR = iC1 (7.31)
for node B, and
iC1 = iC2 (7.32)
7.4. TRAP 3: IS A SINGULAR? 201

for the unnamed node between C1 and C2. For the compatibility equation, there is only one
loop, resulting in
Vs = vR + vC1 + vC2 (7.33)

Finally, we can derive the state equations by eliminating variables as

dvC1/dt = iC1/C1 = iR/C1 = vR/(R C1) = (Vs − vC1 − vC2)/(R C1)    (7.34)

and

dvC2/dt = iC2/C2 = iR/C2 = vR/(R C2) = (Vs − vC1 − vC2)/(R C2)    (7.35)

In the derivation of (7.34) and (7.35), the first equality signs are from the elemental equations
(7.28) and (7.29), respectively. The second equality signs are from the continuity equations
(7.31) and (7.32). The third equality signs come from the elemental equation (7.27). The
fourth equality signs are from the compatibility equation (7.33).

Finally, the state equations (7.34) and (7.35) are arranged in the following matrix form

d/dt [vC1; vC2] = −(1/(R C1 C2)) [C2  C2; C1  C1] [vC1; vC2] + (1/(R C1 C2)) [C2; C1] Vs(t)    (7.36)

Note that the state matrix is singular with

det A = det( −(1/(R C1 C2)) [C2  C2; C1  C1] ) = 0    (7.37)

The zero determinant obviously indicates that vC1 and vC2 are dependent in some way. It is,
however, not clear how vC1 and vC2 are related. It is certainly not like the case in Section 7.3,
where vC1 and vC2 are linearly dependent via a compatibility equation (cf. (7.25)). It turns
out that one would not see vC1 and vC2 being dependent on each other until the state equation
(7.36) is solved.
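As a quick numerical check (not part of the original derivation), one can build the state matrix of (7.36) for arbitrary element values and confirm that its determinant vanishes; the values of R, C1, and C2 below are illustrative assumptions.

```python
import numpy as np

# Arbitrary illustrative values; any positive R, C1, C2 give the same conclusion.
R, C1, C2 = 2.0, 1.0, 3.0

# State matrix of (7.36): A = -(1/(R*C1*C2)) * [[C2, C2], [C1, C1]]
A = -1.0 / (R * C1 * C2) * np.array([[C2, C2],
                                     [C1, C1]])

print(np.linalg.det(A))  # vanishes (up to round-off), confirming (7.37)
```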

To find the solution of (7.36), one can add (7.34) and (7.35) together to obtain

d(vC1 + vC2)/dt + (1/R)(1/C1 + 1/C2)(vC1 + vC2) = (1/R)(1/C1 + 1/C2) Vs(t)    (7.38)
Therefore, one can define a combined voltage drop vC as

vC ≡ vC1 + vC2 (7.39)



and a combined time constant τ as

1/τ ≡ (1/R)(1/C1 + 1/C2)    (7.40)

Then (7.38) is reduced to

τ dvC/dt + vC = Vs(t)    (7.41)
Equation (7.41) has already implied that vC1 and vC2 will depend on each other, because
vC1 + vC2 = p(t), where p(t) is the solution of (7.41).

To demonstrate that vC1 (t) and vC2 (t) are dependent more explicitly, let us investigate
a special case for which Vs (t) is a constant and the initial conditions are zero, i.e.,

vC1 (0) = vC2 (0) = 0 (7.42)

In this case, the solution of (7.41) is

vC(t) = vC1(t) + vC2(t) = Vs (1 − e^(−t/τ))    (7.43)

Equation (7.43) implies that vC1 (t) and vC2 (t) depend on each other after (7.41) is solved via
an integration. This confirms the statement that det A = 0 implies some form of dependence
between the state variables.

To find vC1 (t) and vC2 (t) explicitly, one can substitute (7.43) back into (7.34) to obtain

dvC1/dt = (Vs/(R C1)) e^(−t/τ)    (7.44)

With the zero initial condition vC1 (0) = 0 in (7.42), the solution of (7.44) is

vC1(t) = (Vs τ/(R C1)) (1 − e^(−t/τ))    (7.45)

Substitution of (7.45) back into (7.43) yields

vC2(t) = (Vs τ/(R C2)) (1 − e^(−t/τ))    (7.46)
As one can see from (7.45) and (7.46), vC1 and vC2 are linearly dependent through

C1 vC1 − C2 vC2 = 0 (7.47)



Figure 7.7: Equivalent capacitor for multiple capacitors in a series connection

From the analysis above, we can conclude that two capacitors in a series connection
will cause det A to vanish. As a result, vC1 (t) and vC2 (t) will depend on each other. The
dependence, however, cannot be sorted out via a continuity equation or a compatibility
equation. The dependence is implicit as a result of det A = 0.
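The closed-form results (7.43)-(7.47) can also be cross-checked numerically. The sketch below integrates the state equations (7.34)-(7.35) with a simple forward-Euler scheme; the values of R, C1, C2, and Vs are illustrative assumptions, and the scheme is a verification aid rather than part of the formulation.

```python
import numpy as np

# Arbitrary illustrative values (any positive R, C1, C2, Vs behave the same way).
R, C1, C2, Vs = 1.0, 2.0, 3.0, 5.0
tau = R * C1 * C2 / (C1 + C2)        # combined time constant, from (7.40)

# Forward-Euler integration of the state equations (7.34)-(7.35)
dt = 1e-4
v1 = v2 = 0.0                        # zero initial conditions (7.42)
for _ in range(int(5 * tau / dt)):
    dv1 = (Vs - v1 - v2) / (R * C1)
    dv2 = (Vs - v1 - v2) / (R * C2)
    v1, v2 = v1 + dv1 * dt, v2 + dv2 * dt

t = 5 * tau
v1_exact = Vs * tau / (R * C1) * (1 - np.exp(-t / tau))  # (7.45)
v2_exact = Vs * tau / (R * C2) * (1 - np.exp(-t / tau))  # (7.46)
print(abs(v1 - v1_exact) < 1e-2)       # numerical solution matches (7.45)
print(abs(v2 - v2_exact) < 1e-2)       # numerical solution matches (7.46)
print(abs(C1 * v1 - C2 * v2) < 1e-8)   # the implicit dependence (7.47)
```

The last check illustrates the key point of this section: the dependence C1 vC1 − C2 vC2 = 0 never appears as an explicit constraint equation, yet the trajectories obey it at every instant.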

Now I have demonstrated that multiple capacitors arranged in a series connection could
lead to det A = 0. What can we do to avoid this scenario? One simple way is to combine all
the capacitors into an equivalent capacitor. As shown in Fig. 7.7, n capacitors (or A-type
elements in general) C1 , C2 , . . . , Cn are in a series connection. In this case, these n capacitors
can be combined into an equivalent capacitor Ceq defined as

1/Ceq ≡ 1/C1 + 1/C2 + ··· + 1/Cn    (7.48)

To understand where (7.48) comes from, let us define vC1 , vC2 , . . . , vCn as the voltage
drops across the capacitors C1 , C2 , . . . , Cn , respectively. Moreover, let iC1 , iC2 , . . . , iCn be the
currents through C1 , C2 , . . . , Cn , respectively. Since the capacitors are in a series connection,
the currents are the same implying that

iC1 = iC2 = · · · = iCn ≡ iC (7.49)

where iC is the current going through any of the capacitors. Recall that each capacitor

satisfies its elemental equation

dvC1/dt = iC1/C1 = iC/C1
dvC2/dt = iC2/C2 = iC/C2
············
dvCn/dt = iCn/Cn = iC/Cn    (7.50)

where (7.49) has been used. By summing up all the equations in (7.50), we obtain

d(vC1 + vC2 + ··· + vCn)/dt = (1/C1 + 1/C2 + ··· + 1/Cn) iC    (7.51)

By defining
vC ≡ vC1 + vC2 + · · · + vCn (7.52)

as the overall voltage drop across the serial capacitors, we can rewrite (7.51) as

dvC/dt = (1/Ceq) iC    (7.53)
dt Ceq

where Ceq is defined in (7.48).

The use of an equivalent capacitor means that we give up the idea of using vC1 , vC2 , . . . ,
vCn as state variables. Instead, we use the overall voltage drop vC as a state variable in
exchange for det A ≠ 0.

Similarly, if there are n inductors (or T -type elements) L1 , L2 , . . . , Ln in a parallel
combination (Fig. 7.8), the state matrix will have a vanishing determinant, i.e., det A = 0.
To avoid this scenario, we can define an equivalent inductor

1/Leq ≡ 1/L1 + 1/L2 + ··· + 1/Ln    (7.54)

To understand where (7.54) comes from, let us define iL1 , iL2 , . . . , iLn as the currents
through the inductors L1 , L2 , . . . , Ln , respectively. Moreover, let vL1 , vL2 , . . . , vLn be the

Figure 7.8: Equivalent inductor for multiple inductors in a parallel combination

voltage across the inductors L1 , L2 , . . . , Ln , respectively. Since the inductors are in a parallel
combination, the voltage drops are the same implying that

vL1 = vL2 = · · · = vLn ≡ vL (7.55)

where vL is the voltage drop for any of the inductors. Recall that each inductor satisfies its
elemental equation
diL1/dt = vL1/L1 = vL/L1
diL2/dt = vL2/L2 = vL/L2
············
diLn/dt = vLn/Ln = vL/Ln    (7.56)
where (7.55) has been used. By summing up all the equations in (7.56), we obtain

d(iL1 + iL2 + ··· + iLn)/dt = (1/L1 + 1/L2 + ··· + 1/Ln) vL    (7.57)
By defining
iL ≡ iL1 + iL2 + · · · + iLn (7.58)
as the overall current supplied to the parallel inductors, we can rewrite (7.57) as
diL/dt = (1/Leq) vL    (7.59)
dt Leq

Figure 7.9: An example to illustrate graph trees and links

where Leq is defined in (7.54).

The use of an equivalent inductor means that we give up the idea of using iL1 , iL2 , . . . ,
iLn as state variables. Instead, we use the overall current iL supplied to all the parallel
inductors as a state variable in exchange for det A ≠ 0.
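Both (7.48) and (7.54) are reciprocal sums, so a single helper covers series capacitors and parallel inductors alike. A minimal sketch (the element values are illustrative assumptions):

```python
def reciprocal_sum(values):
    """1 / (1/x1 + 1/x2 + ... + 1/xn), the pattern shared by (7.48) and (7.54)."""
    return 1.0 / sum(1.0 / x for x in values)

# Two 2-uF capacitors in series -> 1-uF equivalent, per (7.48)
print(reciprocal_sum([2e-6, 2e-6]))   # 1e-06
# 3-mH and 6-mH inductors in parallel -> ~2-mH equivalent, per (7.54)
print(reciprocal_sum([3e-3, 6e-3]))
```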

7.5 Concepts of Trees and Links

Now we have explained the three traps that we would like to avoid in deriving state equations
from linear graph models. What would be an effective way to avoid the traps then? One
way to avoid the traps is to use a concept called trees and links.

A graph tree is a subset of a linear graph satisfying the following two conditions:

1. the graph tree contains all nodes, and

2. the graph tree contains the maximal number of branches of the linear graph without
creating any closed loops.

A graph link is a branch in the linear graph that is not included in a tree.
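The two conditions above are easy to check mechanically. The sketch below uses a union-find pass to test whether a proposed branch set is a tree of a given node set; the node and branch lists are assumed from the linear graph in Fig. 7.9 for illustration.

```python
def is_tree(nodes, branches):
    """Check the two conditions: no closed loops, maximal branch count.

    branches: list of (node_a, node_b) pairs."""
    parent = {n: n for n in nodes}

    def find(n):                       # union-find root lookup
        while parent[n] != n:
            n = parent[n]
        return n

    for a, b in branches:
        ra, rb = find(a), find(b)
        if ra == rb:                   # this branch would close a loop
            return False
        parent[ra] = rb
    # a loop-free branch set spans all nodes iff it has len(nodes)-1 branches
    return len(branches) == len(nodes) - 1

nodes = "ABCDG"   # the 5 nodes of Fig. 7.9
print(is_tree(nodes, [("A","B"), ("B","G"), ("C","D"), ("D","G")]))  # True
print(is_tree(nodes, [("A","B"), ("B","C"), ("A","G"), ("B","G")]))  # False: loop A-B-G
```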

As an example, I will use the electrical system in Fig. 7.9 to illustrate the concept of

Figure 7.10: Some possible graph trees

Figure 7.11: Some possible graph links

graph trees and links. The system consists of three resistors, one inductor, one capacitor,
and an across-variable source. A linear graph model of the system is also shown in Fig. 7.9.
The linear graph model shows that there are 5 nodes: A, B, C, D, and G.

Figure 7.10 illustrates three possible graph trees from the linear graph. In each tree, all
nodes of the linear graph are included, and there are no closed loops formed. As an example,
the first tree on the left in Fig. 7.10 includes the following branches: AB, BG, CD, and
DG. In contrast, Fig. 7.11 illustrates the links corresponding to the trees in Fig. 7.10. For
example, the links for the first tree in Fig. 7.10 are branches AG and BC.

From now on, we will adopt the following notation: all branches in a tree are
denoted by solid lines, while all branches that are links are denoted by dashed lines.

Figure 7.12: Generating compatibility equations by adding links
Figure 7.13: Generating continuity equations by cutting only one branch

The concept of trees and links is there for a reason. When a link is put back to the
tree, it forms a closed loop. Once a closed loop is formed, a compatibility equation can be
written down. Therefore, adding links back to the tree one by one is a systematic way to
generate compatibility equations.

Example 7.2 Let us take the center tree in Fig. 7.10 as an example. The tree has branches
AB, BC, BG and DG. The corresponding links are AG and CD. When we put the link
AG back to the tree (see Fig. 7.12), the link AG forms a closed loop with the branches AB
and BG (see loop I in Fig. 7.12). As a result, we can quickly write down the corresponding
compatibility equation
vAG = vAB + vBG (7.60)
Similarly, when the link CD is put back to the tree (see Fig. 7.12), the link CD forms a
closed loop with the branches DG, BC and BG (see loop II in Fig. 7.12). As a result, we
can quickly write down the corresponding compatibility equation

vBG = vBC + vCD + vDG (7.61)
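The loop created by a link is the link plus the unique tree path between its endpoints, so compatibility equations can be generated mechanically. A minimal depth-first-search sketch, with the branch lists assumed from the center tree of Fig. 7.10:

```python
def tree_path(tree_branches, start, goal):
    """Walk the (unique) path between two nodes of a tree by depth-first search."""
    adj = {}
    for a, b in tree_branches:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)

    def dfs(node, target, visited):
        if node == target:
            return [node]
        visited.add(node)
        for nxt in adj.get(node, []):
            if nxt not in visited:
                tail = dfs(nxt, target, visited)
                if tail:
                    return [node] + tail
        return None

    return dfs(start, goal, set())

# Center tree of Fig. 7.10; putting the link AG back forms loop I, and the
# tree path A-B-G yields the compatibility equation (7.60): vAG = vAB + vBG.
tree = [("A","B"), ("B","C"), ("B","G"), ("D","G")]
print(tree_path(tree, "A", "G"))  # ['A', 'B', 'G']
```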

The concept of graph trees can also be used to generate continuity equations systematically.
To do so, we select those contours that intersect the tree at only one branch. Then
we can count the flows in and out of the chosen contours to generate continuity equations.

Example 7.3 Let us use the center tree in Fig. 7.10 again as an example. The tree has
branches AB, BC, BG and DG. The corresponding links are AG and CD. Figure 7.13
shows four closed contours that we can choose to generate continuity equations. The first
contour is around node A, intersecting only the branch AB. The outflow and inflow must be
equal for this contour resulting in a continuity equation

iAG = iAB (7.62)

The second contour is around node C cutting only the branch BC. The outflow and inflow
must be equal for this contour resulting in a continuity equation

iBC = iCD (7.63)

The third contour is around node D intersecting only the branch DG. The outflow and
inflow must be equal for this contour resulting in a continuity equation

iCD = iDG (7.64)

The last contour encloses nodes A, B, and C simultaneously. As a result, the closed contour
intersects the tree only at the branch BG. The outflow and inflow must be equal for this
contour, resulting in a continuity equation

iAG = iBG + iCD (7.65)
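Each continuity equation in this example corresponds to a cut: removing one tree branch splits the tree into two halves, and the links running between the halves carry the balancing flows. A small sketch of this cut-set idea, with the branch lists assumed from the center tree of Fig. 7.10:

```python
def cut_set(tree_branches, links, cut_branch):
    """Links crossing the cut made by removing one tree branch.

    Removing cut_branch splits the tree into two halves; every link with
    endpoints in different halves crosses the contour, which gives the
    continuity equation for cut_branch's through-variable."""
    remaining = [b for b in tree_branches if b != cut_branch]
    # grow the half of the tree containing cut_branch's first endpoint
    side = {cut_branch[0]}
    changed = True
    while changed:
        changed = False
        for a, b in remaining:
            if a in side and b not in side:
                side.add(b); changed = True
            elif b in side and a not in side:
                side.add(a); changed = True
    return [l for l in links if (l[0] in side) != (l[1] in side)]

# Center tree of Fig. 7.10 with links AG and CD: cutting BG reproduces (7.65),
# i.e., iBG is balanced by the link currents iAG and iCD.
tree = [("A","B"), ("B","C"), ("B","G"), ("D","G")]
print(cut_set(tree, [("A","G"), ("C","D")], ("B","G")))  # [('A', 'G'), ('C', 'D')]
```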

7.6 Primary and Secondary Variables

The concept of trees and links provides a systematic way to generate compatibility equations
and continuity equations. As shown in the heuristic example of Section 7.1, some variables
in the compatibility and continuity equations are eliminated while others are not. In this case,
what would be an effective way to separate the variables to be eliminated from those that
are not?

Concepts of primary and secondary variables are introduced to systematically identify
the variables to be eliminated. By definition, primary variables are across-variables of tree

Figure 7.14: Center tree in Fig. 7.10 reproduced

branches and through-variables of tree links. Secondary variables are through-variables of
tree branches and across-variables of tree links.

Example 7.4 Let us use the center tree in Fig. 7.10 again as an example, reproduced in
Fig. 7.14 as a reference. The tree has branches AB, BC, BG and DG. The corresponding
links are AG and CD. By definition, the primary variables are

primary variables : vAB , vBC , vBG , vDG , iAG , iCD (7.66)

where vAB , vBC , vBG , and vDG are across-variables of the tree branches, and iAG and iCD are
through-variables of the corresponding tree links. In contrast, the secondary variables are

secondary variables : iAB , iBC , iBG , iDG , vAG , vCD (7.67)

where iAB , iBC , iBG , and iDG are through-variables of the tree branches, and vAG and vCD are
across-variables of the corresponding tree links.
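The classification rule is purely mechanical, as the short sketch below illustrates for the center tree; the string naming convention (e.g. 'vAB' for an across-variable) is an assumption for illustration.

```python
def classify(tree_branches, links):
    """Primary/secondary split per the definition above: across-variables of
    tree branches and through-variables of links are primary; their duals
    are secondary. Branch names are strings such as 'AB'."""
    primary   = [f"v{b}" for b in tree_branches] + [f"i{l}" for l in links]
    secondary = [f"i{b}" for b in tree_branches] + [f"v{l}" for l in links]
    return primary, secondary

p, s = classify(["AB", "BC", "BG", "DG"], ["AG", "CD"])
print(p)  # ['vAB', 'vBC', 'vBG', 'vDG', 'iAG', 'iCD'] -> matches (7.66)
print(s)  # ['iAB', 'iBC', 'iBG', 'iDG', 'vAG', 'vCD'] -> matches (7.67)
```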

Now, why are primary and secondary variables defined this way? Why could primary
and secondary variables as defined serve as an effective way to eliminate variables? The
answer is the following. When compatibility and continuity equations are generated sys-
tematically using trees and links as shown in Section 7.5, each compatibility or continuity
equation will only contain one secondary variable. All other variables in the compatibility

and continuity equations will be primary variables. As a result, the secondary variables
can be represented in terms of the primary variables in the compatibility and continuity
equations. When the compatibility and continuity equations are used to eliminate variables
in the derivation of state equations, the secondary variables are eliminated. Let us revisit
Example 7.2 and Example 7.3 to elucidate this subtle point.

Example 7.5 Let us take the center tree in Fig. 7.10 to revisit Example 7.2. The tree has
branches AB, BC, BG and DG. The corresponding links are AG and CD. When we put
the link AG back to the tree (see Fig. 7.15 reproduced from Fig. 7.12), the link AG forms
a closed loop with the branches AB and BG (see loop I in Fig. 7.15). As a result, we can
quickly write down the corresponding compatibility equation

vAG = vAB + vBG (7.68)

where vAG is a secondary variable while vAB and vBG are primary variables. As one can see
from (7.68), there is only one secondary variable in the compatibility equation. Moreover, the
secondary variable vAG is represented in terms of the primary variables vAB and vBG . This
arrangement makes it easy for the secondary variable vAG to be eliminated in an effective
way.

Similarly, when the link CD is put back to the tree (see Fig. 7.15), the link CD forms
a closed loop with the branches DG, BC and BG (see loop II in Fig. 7.15). As a result, we
can quickly write down the corresponding compatibility equation

vCD = vBG − vBC − vDG (7.69)

where vCD is a secondary variable while vBG , vBC , and vDG are all primary variables. The
same situation occurs here. There is only one secondary variable in the compatibility equa-
tion (7.69). Moreover, the secondary variable vCD is represented in terms of the primary
variables vBG , vBC , and vDG . The secondary variable vCD is ready to be eliminated.

Now let us turn to continuity equations using the same tree in Fig. 7.16 (reproduced
from Fig. 7.13). Let us see what happens to primary and secondary variables in the continuity
equations. Figure 7.16 shows four closed contours that we can choose to generate continuity
equations. The first contour is around node A intersecting on the branch AB. The outflow

Figure 7.15: Generating compatibility equations by adding links
Figure 7.16: Generating continuity equations by cutting only one branch

and inflow must be equal for this contour resulting in a continuity equation

iAB = iAG (7.70)

where iAB is a secondary variable and iAG is a primary variable. Again, there is only one
secondary variable in the continuity equation (7.70). Since the secondary variable iAB is
designed to be eliminated, it is written on the left side of the continuity equation.

The second contour is around node C cutting only the branch BC. The outflow and
inflow must be equal for this contour resulting in a continuity equation

iBC = iCD (7.71)

where iBC is a secondary variable and iCD is a primary variable.

The third contour is around node D intersecting only the branch DG. The outflow and
inflow must be equal for this contour resulting in a continuity equation

iDG = iCD (7.72)

where iDG is a secondary variable and iCD is a primary variable.

The last contour encloses nodes A, B, and C simultaneously. As a result, the closed
contour intersects the tree only at the branch BG. The outflow and inflow must be equal for this

Figure 7.17: Summary of why primary and secondary variables work

contour resulting in a continuity equation

iBG = iAG − iCD (7.73)

where iBG is a secondary variable while iAG and iCD are primary variables. As one can see
from (7.73), there is only one secondary variable in the continuity equation. Moreover, the
secondary variable iBG is represented in terms of the primary variables iAG and iCD . This
arrangement makes it easy for the secondary variable iBG to be eliminated in an effective
way.

There are several other points worth noting about primary and secondary variables.
First, every one-port element will have exactly one primary variable and one secondary
variable. Therefore, only one of the two power variables of the one-port element can serve
as a primary variable. The other power variable must be a secondary variable. This point is
very subtle, because it makes sure that all elemental equations will be used in deriving
state equations from a linear graph model. Second, Fig. 7.17 summarizes why primary and
secondary variables work. For any compatibility equation, a loop must be formed. Therefore,
a link must be added back to a tree. Since compatibility equations govern across-variables,
the across-variable of the link must be defined as a secondary variable to ensure that there
is only one secondary variable in the compatibility equation. For any continuity equation,
a closed contour must be present to intersect only one branch. Since continuity equations
govern through-variables, the through-variable of the branch cut by the closed contour must
be defined as a secondary variable to ensure that there is only one secondary variable in the
continuity equation.

Figure 7.18: An example circuit to demonstrate construction of a normal tree

7.7 Normal Trees

In the previous two sections, methods were developed to systematically generate compatibility
and continuity equations (e.g., via trees and links) as well as to effectively eliminate variables (e.g.,
primary vs. secondary variables). There are, however, multiple trees associated with a linear
graph. In this case, which tree is the right one to use? How do we find such a tree? The
goal of this section is to answer those questions.

Among all the available trees, one is called a normal tree. The normal tree is constructed
via a sequence of rigorous steps. By using the normal tree, one can derive state equations
from a linear graph methodically. Let me use the following example to explain the steps
to construct a normal tree and why the steps are needed.

Figure 7.18 shows a circuit whose normal tree is going to be constructed and explained.
The circuit includes all three traps discussed earlier. For example, the system is complex
enough that it is not clear which compatibility or continuity equation should be used.
The system has dependent state variables, such as the capacitors C1 , C2A , and C2B forming a
closed loop with the voltage source Vs (t). The system also has a singular state matrix, because
the capacitors C2A and C2B are in a series connection and the inductors L1A and L1B are in a parallel
combination. It is a perfect system to demonstrate every possible detail in constructing a
normal tree.

Figure 7.19: Linear graph of the example circuit in Fig. 7.18

Figure 7.19 illustrates the linear graph of the example circuit in Fig. 7.18. Basically,
the left side of the linear graph is drawn based on currents out of the voltage source, while
the right side of the linear graph is based on currents out of the current source. It is not
clear what the direction of the current is for resistor R4 , so it is assumed that the current
flows from node D to node C.

Here are the steps to construct a normal tree.

Step 0. Replace A-type elements in series and T -type elements in parallel
by equivalent elements. This is a preliminary step to clean up the linear graph. Figure 7.20
shows the linear graph after this step is done.

Explanation. The linear graph in Fig. 7.19 has two capacitors C2A and C2B in series.
They need to be combined into an equivalent capacitor C2 using (7.48). Also, the linear
graph in Fig. 7.19 has two inductors L1A and L1B in parallel. They need to be combined
into an equivalent inductor L1 using (7.54).

Why? The purpose is to make sure that the state equation in the end will have det A ≠ 0.

Step 1. Draw all the nodes. This step is self-explanatory, because the definition of

Figure 7.20: Step 0 of constructing a normal tree

Figure 7.21: Step 1 of constructing a normal tree



a tree requires all the nodes. Figure 7.21 shows the normal tree in construction after this
step is done.

Explanation. There are seven nodes in the linear graph, i.e., nodes A, B, C, D, E, F ,
and the ground node. Do not forget the ground node.

Why? The purpose is to make sure that the tree will include all the nodes in the original
linear graph.

Figure 7.22: Step 2 of constructing a normal tree
Figure 7.23: Step 3 of constructing a normal tree

Step 2. Include all across-variable sources as tree branches. Figure 7.22 shows
the normal tree in construction after this step is done.

Explanation. There is only one across-variable source in the linear graph of Fig. 7.20, i.e., the
voltage source Vs (t). So Vs (t) is in the normal tree now.

Why? By doing so, the voltage source will be forced to appear in the compatibility
equations in the form of a primary variable. After the normal tree is constructed, if a link is put
back to the tree, it will form a loop to generate a compatibility equation. If the voltage source is in
the tree to start with, the voltage Vs (t) will appear in the compatibility equation. Moreover,
the across-variable of a tree branch is a primary variable. So Vs (t) will appear as a primary
variable and will not be eliminated.

Step 3. Include as many A-type elements as possible as tree branches.
Figure 7.23 shows the normal tree in construction after this step is done.

Figure 7.24: Presence of dependent energy storage elements via A-type elements

Explanation. A-type elements are capacitors. There are two capacitors C1 and C2 in
the linear graph of Fig. 7.20. These two capacitors C1 and C2 , however, cannot both appear in the
tree, because the capacitors C1 , C2 , and the voltage source Vs (t) would form a closed
loop, thus violating the definition of a tree. Therefore, at most one capacitor can be
included in the normal tree. It is completely arbitrary which capacitor is included in the
tree. In this case, we include capacitor C2 in the tree; see Fig. 7.23.

Why? This step is to ensure that an across-variable preceded by the differential operator
d/dt will serve as a primary variable as much as possible. Recall that a capacitor (i.e., an A-type
element) satisfies the elemental equation

i = C dv/dt    (7.74)

If the capacitor is in the tree, the across-variable of the capacitor, which bears the differential
operator d/dt, will be a primary variable and will not be eliminated later in the derivation.
Step 3(a). What happens when A-type elements cannot be included as tree
branches? For Step 3, the ideal case is to have all A-type elements included in the tree.
For some systems, it is not possible to include all A-type elements in the tree. An A-type
element that cannot be included in the tree is called a dependent energy storage element.

Explanation. For the current example, the capacitor C1 is not in the tree; see Fig. 7.24.

Figure 7.25: Step 4 of constructing a normal tree

Therefore, the capacitor C1 is a dependent energy storage element. Alternatively, if the
capacitor C1 had been chosen in Step 3 to be included in the tree, then the capacitor C2 would
be a link and thus a dependent energy storage element.

Why? When an A-type element is not included in the tree, it basically means that trap
2 in Section 7.3 is present. For the example here, vC1 depends on vC2 due to a compatibility
equation vC1 + vC2 = Vs (t) from the loop formed by C1 , C2 , and Vs (t) in Fig. 7.20. By
not allowing C1 in the tree, we avoid trap 2 and mandate that vC1 will be a secondary
variable. As a result, only vC2 will appear as a primary variable and thus a candidate for
state variables.

Step 4. Include as many D-type elements as possible to complete the tree.
Figure 7.25 shows the normal tree in construction after this step is done.

Explanation. D-type elements are resistors. According to the linear graph in Fig. 7.20,
three resistors R3 , R4 , and R5 can be included in the tree to connect nodes C, D, and E.
Note that node F is not yet included in the tree. Therefore, the tree shown in Fig. 7.25
is not a complete tree at this time.

Why? This step is to force all T -type elements (i.e., inductors) to become tree links. If
the remaining tree branches are filled by D-type elements, then all T -type elements will be
forced to be tree links. Recall that a T -type element satisfies the elemental equation

v = L di/dt    (7.75)

If the T -type element is among the links, its through-variable, which bears the differential
operator d/dt, will be a primary variable and will not be eliminated later in the derivation.
Therefore, this step is to ensure that a through-variable preceded by the differential operator
d/dt will serve as a primary variable as much as possible.

Figure 7.26: Presence of dependent energy storage elements via T -type elements

Step 5. Check if all nodes are included in the tree. Ideally, the tree should be
completed by now. If not, one can include T -type elements to complete the tree. In this case,
a T -type element that must appear to complete the tree is also a dependent energy storage
element.

Explanation. For the current example, node F is not in the tree yet at the end of Step
4. According to Fig. 7.20, node F is connected to the tree constructed thus far via two
inductors (i.e., T -type elements) L1 and L2 . One way is to connect nodes E and F via the
inductor L2 ; see Fig. 7.26. Then the tree is completed and all nodes are included. If this
tree is chosen, L2 is a dependent energy storage element. An alternative way is to connect
nodes D and F via the inductor L1 . Then the tree will be completed and all nodes will be
included. In this case, L1 will be a dependent energy storage element.

Why? When a T -type element (e.g., an inductor) is included in the tree, it basically

means that trap 2 in Section 7.3 is present. For the example here, iL1 and iL2 depend on
each other due to a continuity equation iL1 + iL2 = Is (t) at node F ; see Fig. 7.20. By forcing
L2 in the tree, we avoid trap 2 and mandate that iL2 will be a secondary variable. As a
result, only iL1 will appear as a primary variable and thus a candidate for state variables.

Step 6. Final Check. Check the tree one more time. The tree should be complete now
and all nodes should be included. There should be no need to include any through-variable
sources in the tree. Congratulations, now you have drawn your first normal tree!
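Steps 1-5 amount to a greedy spanning-tree construction with a priority order on element types. The sketch below illustrates that idea on a small assumed graph; it is a simplification of the procedure above (it does not track dependent energy storage elements or every special case), not the full method.

```python
def normal_tree(nodes, branches):
    """Greedy sketch of Steps 1-5: scan branches in priority order
    (across-variable sources, then A-type, then D-type, then T-type) and
    accept a branch only if it does not close a loop.

    branches: list of (kind, node_a, node_b) tuples."""
    priority = {"across_source": 0, "A": 1, "D": 2, "T": 3}
    parent = {n: n for n in nodes}

    def find(n):                   # union-find root lookup
        while parent[n] != n:
            n = parent[n]
        return n

    tree = []
    for kind, a, b in sorted(branches, key=lambda br: priority[br[0]]):
        ra, rb = find(a), find(b)
        if ra != rb:               # accepting this branch creates no loop
            parent[ra] = rb
            tree.append((kind, a, b))
    return tree

# Toy graph (assumed, for illustration): source A-G, capacitors A-B and B-G,
# resistor B-G. The second capacitor would close a loop with the source and
# the first capacitor, so it is left as a link (a dependent energy storage
# element, as in Step 3(a)); the resistor is likewise rejected here.
branches = [("across_source", "A", "G"), ("A", "A", "B"),
            ("A", "B", "G"), ("D", "B", "G")]
print(normal_tree("ABG", branches))  # [('across_source', 'A', 'G'), ('A', 'A', 'B')]
```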

7.8 Deriving State Equations from Linear Graphs

With the preparation in the last three chapters, we are finally ready to derive state equations
using linear graphs. To do so, we need to follow the steps in the recipe below.

Step 1. Draw the linear graph and develop the corresponding normal tree.

Step 2. Write down the primary variables and secondary variables from the normal tree.
Identify state variables from the primary variables.

Step 3. Write down elemental equations of all one-port elements; primary variables on the
left side and secondary variables on the right side of the equations.

Step 4. Use compatibility and continuity equations to eliminate secondary variables. If
dependent energy storage elements are present, their primary variables need to be
eliminated as well.

Step 5. Eliminate primary variables that are not state variables by starting with the
equations that bear the differential operator d/dt.

The best way to learn the steps in the recipe is to go through as many examples as possible.
Let me first walk you through an illustrative example below. The example is a circuit
consisting of a voltage source Vs (t), a resistor R1 , an inductor L1 , and three capacitors C1 ,
C2 , and C3 ; see Fig. 7.27. One goal is to derive the state equation governing the dynamics of

Figure 7.27: An electric circuit to illustrate derivation of state equation via linear graphs

the circuit. Moreover, the output variables of interest are the voltage vR1 across the resistor
R1 as well as the current iC3 going through the capacitor C3 . Another goal is to derive the
output equation governing vR1 and iC3 .

Step 1. Draw the linear graph and develop the corresponding normal tree.
The corresponding linear graph and normal tree are shown in Fig. 7.28 and Fig. 7.29,
respectively. In drawing the normal tree, one needs to follow the steps in Section 7.7.

Step 0. There are no capacitors in a series connection or inductors in a parallel connection.
Therefore, this step is not needed.

Step 1. Include all nodes A, B, C, and G. The resistor R1 is located between nodes A
and B. The inductor L1 is located between nodes B and C. Node G is the ground node.

Step 2. The voltage source Vs (t), which is an across-variable source, is included in the
normal tree.

Step 3. Include as many A-type elements as possible in the tree. There are three
capacitors C1 , C2 , and C3 in the linear graph of Fig. 7.28. The capacitor C1 must be in the
tree. We cannot, however, put both C2 and C3 in the tree, because they would form a loop. Therefore,
we only include C2 in the tree.

Step 3(a). Now the capacitor C3 is not in the tree. Therefore, the capacitor C3 will
serve as a dependent energy storage element.

Figure 7.28: Linear graph of the circuit shown in Fig. 7.27
Figure 7.29: Normal tree of the linear graph shown in Fig. 7.28

Step 4. Include as many D-type elements as possible in the tree. In this example, there
is only one resistor R1 , and it cannot be included in the tree. Otherwise, it would form a loop
with the source Vs (t) and the capacitor C1 . So the normal tree is not changed in this step.

Step 5. All the nodes A, B, C, and G are included in the tree. There is no need to
include the inductor L1 in the tree. So L1 is not a dependent energy storage element.

Step 6. The normal tree is complete and is in good order as shown in Fig. 7.29.

Step 2. Write down the primary variables and secondary variables from the
normal tree. Identify state variables from the primary variables. By definition,
primary variables are across-variables of the branches and through-variables of the links.
Therefore, primary variables from the normal tree are

Primary Variables : Vs (t), vC1 , vC2 , iR1 , iL1 , [iC3 ] (7.76)

where the known source Vs (t) also serves as a primary variable. Moreover, iC3 is included in a
pair of square brackets simply to remind us that C3 is a dependent energy storage element.

Similarly, secondary variables are through-variables of the branches and across-variables


of the links. Therefore, secondary variables from the normal tree are

Secondary Variables : is (t), iC1 , iC2 , vR1 , vL1 , [vC3 ] (7.77)



Note that the current is (t) out of the voltage source is not prescribed. Moreover vC3 is
included in a pair of square brackets simply to remind us that C3 is a dependent energy
storage element.

To identify state variables, let us go through the list of primary variables (cf. (7.76)) with a process of elimination. First, C3 is a dependent energy storage element; therefore, iC3 cannot serve as a state variable. Next, the resistor R1 is a D-type element, and its elemental equation does not bear the differential operator \frac{d}{dt}. Therefore, iR1 cannot serve as a state variable. Also, Vs (t) is a prescribed quantity and cannot serve as a state variable. Finally, the list of primary variables boils down to vC1 , vC2 , and iL1 . These primary variables come from capacitors or an inductor, whose elemental equations carry a differential operator \frac{d}{dt} preceding the primary variables. Therefore, the state variables are

State Variables : v_{C_1}, v_{C_2}, i_{L_1} \qquad (7.78)

Step 3. Write down elemental equations of all one-port elements, with primary variables on the left side and secondary variables on the right side of the equations. This step is straightforward. We simply go through each element and write down its elemental equation.
\frac{d}{dt} v_{C_1} = \frac{1}{C_1} i_{C_1}
\frac{d}{dt} v_{C_2} = \frac{1}{C_2} i_{C_2}
i_{R_1} = \frac{1}{R_1} v_{R_1} \qquad (7.79)
\frac{d}{dt} i_{L_1} = \frac{1}{L_1} v_{L_1}
i_{C_3} = C_3 \frac{d}{dt} v_{C_3}

Step 4. Use compatibility and continuity equations to eliminate secondary


variables. If dependent energy storage elements are present, their primary vari-
ables need to be eliminated as well. This is a critical step. To implement this step, we

need to add the links back to the normal tree one by one. When a link is put back into the tree, it
forms a loop with the tree and recovers a compatibility equation. When the compatibility
equation is recovered, it should only involve one secondary variable and the rest should be
all in primary variables. Therefore, we can represent the secondary variable in terms of the
primary variables for the compatibility equation. Then we can eliminate the secondary vari-
able from the elemental equation in Step 3. Let me demonstrate this through the illustrative
example.

First, let us bring the link R1 back to the normal tree; see Fig. 7.30. The resistor
R1 will form a loop with the source Vs (t) and the capacitor C1 . As a result, the following
compatibility equation is recovered.

Vs (t) = vR1 + vC1 (7.80)

As one can see from (7.80), there is only one secondary variable vR1 in this compatibility
equation. Therefore, the equation can be rewritten as

vR1 = Vs (t) − vC1 (7.81)

where the right side only includes the primary variables vC1 and Vs (t). Now, (7.81) is substituted
in the third elemental equation in (7.79) to eliminate the secondary variable vR1 resulting in
\frac{d}{dt} v_{C_1} = \frac{1}{C_1} i_{C_1}
\frac{d}{dt} v_{C_2} = \frac{1}{C_2} i_{C_2}
i_{R_1} = \frac{1}{R_1} v_{R_1} = \frac{1}{R_1} (V_s(t) - v_{C_1}) \qquad \text{Loop } V_s R_1 C_1 \qquad (7.82)
\frac{d}{dt} i_{L_1} = \frac{1}{L_1} v_{L_1}
i_{C_3} = C_3 \frac{d}{dt} v_{C_3}

Next, let us bring the link L1 back to the normal tree; see Fig. 7.31. The inductor L1
will form a loop with the capacitors C1 and C2 . As a result, the following compatibility

Figure 7.30: Normal tree after the link R1 is put back
Figure 7.31: Normal tree after the link L1 is put back

equation is recovered.
vC1 = vL1 + vC2 (7.83)
As one can see from (7.83), there is only one secondary variable vL1 in this compatibility
equation. Therefore, the equation can be rewritten as

vL1 = vC1 − vC2 (7.84)

where the right side only includes primary variables vC1 and vC2 . Now, (7.84) is substituted
in the fourth elemental equation in (7.82) to eliminate the secondary variable vL1 resulting
in
\frac{d}{dt} v_{C_1} = \frac{1}{C_1} i_{C_1}
\frac{d}{dt} v_{C_2} = \frac{1}{C_2} i_{C_2}
i_{R_1} = \frac{1}{R_1} v_{R_1} = \frac{1}{R_1} (V_s(t) - v_{C_1}) \qquad \text{Loop } V_s R_1 C_1
\frac{d}{dt} i_{L_1} = \frac{1}{L_1} v_{L_1} = \frac{1}{L_1} (v_{C_1} - v_{C_2}) \qquad \text{Loop } C_1 L_1 C_2 \qquad (7.85)
i_{C_3} = C_3 \frac{d}{dt} v_{C_3}

Figure 7.32: Normal tree after the link C3 is put back
Figure 7.33: Closed contour around node A

Finally, let us put the last link C3 back to the normal tree; see Fig. 7.32. The capacitor
C3 will form a loop with the capacitor C2 . As a result, the following compatibility equation
is recovered.
vC3 = vC2 (7.86)
As one can see from (7.86), there is only one secondary variable vC3 in this compatibility
equation, and it is already on the left side of (7.86). Now, (7.86) is substituted in the fifth
elemental equation in (7.85) to eliminate the secondary variable vC3 resulting in
\frac{d}{dt} v_{C_1} = \frac{1}{C_1} i_{C_1}
\frac{d}{dt} v_{C_2} = \frac{1}{C_2} i_{C_2}
i_{R_1} = \frac{1}{R_1} v_{R_1} = \frac{1}{R_1} (V_s(t) - v_{C_1}) \qquad \text{Loop } V_s R_1 C_1 \qquad (7.87)
\frac{d}{dt} i_{L_1} = \frac{1}{L_1} v_{L_1} = \frac{1}{L_1} (v_{C_1} - v_{C_2}) \qquad \text{Loop } C_1 L_1 C_2
i_{C_3} = C_3 \frac{d}{dt} v_{C_3} = C_3 \frac{d}{dt} v_{C_2} \qquad \text{Loop } C_2 C_3

After all the links are brought back to the tree, we want to select closed contours that cut
only one branch of the normal tree. When such a contour is selected, a continuity equation

will be derived from the inflow and outflow of the contour. The continuity equation should
contain one secondary variable, and the remainder of the continuity equation should only
involve primary variables. Therefore, the secondary variable can be represented in terms
of all the primary variables from the continuity equation. Then the secondary variable will
be eliminated from the elemental equations derived in Step 3. We will demonstrate this procedure by continuing to work on the illustrative example.

First, let us select a closed contour around node A; see Fig. 7.33. The selected contour
intersects only the tree branch Vs (t). The inflow into the contour is the current iA ≡ is (t)
from the voltage source. The outflow is the current iR1 of the resistor R1 . As a result, the
following continuity equation is recovered.

is = iR1 (7.88)

As one can see from (7.88), there is only one secondary variable is in this continuity equation,
and it is already on the left side of (7.88). Note that the elemental equations in (7.87) do
not involve is . Nevertheless, it is good to have (7.88) around because it may be useful for other purposes, such as deriving the output equation.

Next, let us select a closed contour around node B; see Fig. 7.34. The selected contour
intersects only one tree branch, i.e., the capacitor C1 . The inflow into the contour is the
current iR1 from the resistor R1 . The outflow includes the currents iC1 and iL1 of the
capacitor C1 and the inductor L1 , respectively. As a result, the following continuity equation
is recovered.

iR1 = iC1 + iL1 (7.89)

As one can see from (7.89), there is only one secondary variable iC1 in this continuity equation, while the other terms iR1 and iL1 are primary variables. Rearranging (7.89) so that the secondary variable iC1 appears on the left side gives

iC1 = iR1 − iL1 (7.90)

Now, (7.90) is substituted in the first elemental equation in (7.87) to eliminate the secondary

Figure 7.34: Closed contour around node B
Figure 7.35: Closed contour around node C

variable iC1 resulting in


\frac{d}{dt} v_{C_1} = \frac{1}{C_1} i_{C_1} = \frac{1}{C_1} (i_{R_1} - i_{L_1}) \qquad \text{Node } B
\frac{d}{dt} v_{C_2} = \frac{1}{C_2} i_{C_2}
i_{R_1} = \frac{1}{R_1} v_{R_1} = \frac{1}{R_1} (V_s(t) - v_{C_1}) \qquad \text{Loop } V_s R_1 C_1 \qquad (7.91)
\frac{d}{dt} i_{L_1} = \frac{1}{L_1} v_{L_1} = \frac{1}{L_1} (v_{C_1} - v_{C_2}) \qquad \text{Loop } C_1 L_1 C_2
i_{C_3} = C_3 \frac{d}{dt} v_{C_3} = C_3 \frac{d}{dt} v_{C_2} \qquad \text{Loop } C_2 C_3

Finally, let us select a closed contour around node C; see Fig. 7.35. The selected
contour intersects only one tree branch, i.e., the capacitor C2 . The inflow into the contour
is the current iL1 from the inductor L1 . The outflow includes the currents iC2 and iC3 of the capacitors C2 and C3 , respectively. As a result, the following continuity equation is
recovered.
iL1 = iC2 + iC3 (7.92)
As one can see from (7.92), there is only one secondary variable iC2 in this continuity equation,

while the other terms iC3 and iL1 are primary variables. Rearranging (7.92) so that the secondary variable iC2 appears on the left side gives

iC2 = iL1 − iC3 (7.93)

Now, (7.93) is substituted in the second elemental equation in (7.91) to eliminate the sec-
ondary variable iC2 resulting in

\frac{d}{dt} v_{C_1} = \frac{1}{C_1} i_{C_1} = \frac{1}{C_1} (i_{R_1} - i_{L_1}) \qquad \text{Node } B
\frac{d}{dt} v_{C_2} = \frac{1}{C_2} i_{C_2} = \frac{1}{C_2} (i_{L_1} - i_{C_3}) \qquad \text{Node } C
i_{R_1} = \frac{1}{R_1} v_{R_1} = \frac{1}{R_1} (V_s(t) - v_{C_1}) \qquad \text{Loop } V_s R_1 C_1 \qquad (7.94)
\frac{d}{dt} i_{L_1} = \frac{1}{L_1} v_{L_1} = \frac{1}{L_1} (v_{C_1} - v_{C_2}) \qquad \text{Loop } C_1 L_1 C_2
i_{C_3} = C_3 \frac{d}{dt} v_{C_3} = C_3 \frac{d}{dt} v_{C_2} \qquad \text{Loop } C_2 C_3

Step 5. Eliminate primary variables that are not state variables by starting with the \frac{d}{dt} equations. The elemental equations in (7.94) now only involve primary variables, because the secondary variables have all been eliminated. Among the five elemental equations in (7.94), three start with \frac{d}{dt} on the left side. These three equations have the right format of the state equations we want to derive. In contrast, the remaining two elemental equations in (7.94) (i.e., the third and the fifth) do not start with \frac{d}{dt} on the left side. Therefore, the primary variables in these two elemental equations (i.e., iR1 and iC3 ) need to be eliminated.

To do so, one can substitute the third elemental equation of (7.94) into the first elemental equation of (7.94) to eliminate iR1 , resulting in

\frac{d}{dt} v_{C_1} = \frac{1}{C_1} \left\{ \frac{1}{R_1} [V_s(t) - v_{C_1}] - i_{L_1} \right\} \qquad (7.95)

Note that every term in (7.95) has a state variable or an input variable. Similarly, one can substitute the fifth elemental equation of (7.94) into the second elemental equation of (7.94) to eliminate iC3 , resulting in

\frac{d}{dt} v_{C_2} = \frac{1}{C_2} \left( i_{L_1} - C_3 \frac{d}{dt} v_{C_2} \right) \qquad (7.96)

Note that \frac{d}{dt} v_{C_2} appears on both sides of (7.96). The two terms can be combined to obtain

\frac{d}{dt} v_{C_2} = \frac{1}{C_2 + C_3} i_{L_1} \qquad (7.97)

Now, every term in (7.97) has a state variable. Finally, the fourth elemental equation in (7.94) involves only state variables and requires no further simplification. It is listed here again for reference.

\frac{d}{dt} i_{L_1} = \frac{1}{L_1} (v_{C_1} - v_{C_2}) \qquad (7.98)

Equations (7.95), (7.97), and (7.98) are the state equations we want to derive. They can be arranged in a matrix form

\frac{d}{dt} \begin{bmatrix} v_{C_1} \\ v_{C_2} \\ i_{L_1} \end{bmatrix} = \begin{bmatrix} -\frac{1}{R_1 C_1} & 0 & -\frac{1}{C_1} \\ 0 & 0 & \frac{1}{C_2 + C_3} \\ \frac{1}{L_1} & -\frac{1}{L_1} & 0 \end{bmatrix} \begin{bmatrix} v_{C_1} \\ v_{C_2} \\ i_{L_1} \end{bmatrix} + \begin{bmatrix} \frac{1}{R_1 C_1} \\ 0 \\ 0 \end{bmatrix} V_s(t) \qquad (7.99)

Finally, let us wrap up the illustrative example by working out the output equation. At
the beginning of this example, we set a goal to find output variables vR1 and iC3 in the form
of an output equation. First, vR1 comes from the compatibility equation (7.81). It is reproduced here for reference.
vR1 = Vs (t) − vC1 (7.100)

Since every term on the right side of (7.100) involves a state variable or a source, (7.100) is
in good order and can serve as an output equation. In contrast, iC3 is found in an elemental equation (see the fifth equation in (7.94)), which is reproduced here for reference.

i_{C_3} = C_3 \frac{d}{dt} v_{C_2} \qquad (7.101)

Figure 7.36: Derivation of state equations of a hydraulic system

Although the right side involves a state variable, the derivative \frac{d}{dt} is not allowed. Therefore, the right side C_3 \frac{d}{dt} v_{C_2} should be replaced using the state equation (7.97), resulting in

i_{C_3} = \frac{C_3}{C_2 + C_3} i_{L_1} \qquad (7.102)
Now the right side of (7.102) has only state variables and no derivatives. Therefore, (7.102)
is in good order and can serve as an output equation. Finally, we can put (7.100) and (7.102)
into the following matrix form
 
\begin{bmatrix} v_{R_1} \\ i_{C_3} \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 0 & \frac{C_3}{C_2 + C_3} \end{bmatrix} \begin{bmatrix} v_{C_1} \\ v_{C_2} \\ i_{L_1} \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} V_s(t) \qquad (7.103)
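As a quick numerical sanity check on the state equation (7.99) and the output equation (7.103), one can integrate the model with a simple forward-Euler scheme. The component values below (R1 = C1 = C2 = C3 = L1 = 1, unit-step Vs) are illustrative assumptions for demonstration only, not values from the text.

```python
# Hedged sketch: forward-Euler integration of the state equation (7.99)
# and the output equation (7.103). All component values are illustrative
# assumptions (R1 = C1 = C2 = C3 = L1 = 1).
R1 = C1 = C2 = C3 = L1 = 1.0

def derivs(x, Vs):
    """Right side of the state equation (7.99)."""
    vC1, vC2, iL1 = x
    dvC1 = ((Vs - vC1) / R1 - iL1) / C1   # Eq. (7.95)
    dvC2 = iL1 / (C2 + C3)                # Eq. (7.97)
    diL1 = (vC1 - vC2) / L1               # Eq. (7.98)
    return dvC1, dvC2, diL1

def simulate(Vs=1.0, dt=1e-3, t_end=40.0):
    x = [0.0, 0.0, 0.0]                   # zero initial conditions
    for _ in range(int(t_end / dt)):
        x = [xi + dt * di for xi, di in zip(x, derivs(x, Vs))]
    vC1, vC2, iL1 = x
    # Output equation (7.103): vR1 = Vs - vC1, iC3 = C3/(C2+C3)*iL1
    vR1 = Vs - vC1
    iC3 = C3 / (C2 + C3) * iL1
    return x, (vR1, iC3)

states, outputs = simulate()
print("states:", [round(v, 3) for v in states])
print("outputs:", [round(v, 3) for v in outputs])
```

For a constant source the simulation should approach the steady state vC1 = vC2 = Vs with iL1 = 0, so both outputs vR1 and iC3 decay to zero, consistent with the capacitors blocking DC current.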

Below please find more examples. It is, however, too cumbersome to explain each example in such exhaustive detail as in the illustrative example. I will only present the major results, with some highlights of any special features encountered in each example.

Example 7.6 Figure 7.36 shows a hydraulic system consisting of a source with prescribed flow rate Qs (t), two tanks with capacitances C1 and C2 , a long pipe with inertance I1 and resistance R1 , and an exhaust valve with resistance R2 . The long pipe connects the tanks.
Moreover, nodes A and B are at the inlet of tank C1 and C2 , respectively. Output variables
of interest include the pressure at tank C1 (i.e., pressure at node A) as well as the flow rate
into tank C2 . Derive the state and output equations.

Figure 7.37: Linear graph of the hydraulic system in Fig. 7.36
Figure 7.38: Normal tree from the linear graph in Fig. 7.37

Figure 7.37 shows the corresponding linear graph. One thing to watch out for in this particular linear graph is the inertance I1 and resistance R1 . Since the long pipe has the inertance I1 and resistance R1 simultaneously, a pseudo-node C is introduced to make I1 and R1 two separate one-port elements. Also, the pump is a through-variable source because the flow rate is prescribed.

Figure 7.38 shows the corresponding normal tree. Since there is no across-variable source, the two capacitors C1 and C2 are included in the normal tree. Then the resistor R1 is added to complete the normal tree. The inductor I1 is not in the tree; therefore, there is no dependent energy storage element in this hydraulic system. The through-variable source is not included in the tree, as expected.

Based on the normal tree, primary variables (i.e., across-variables of the branches and
through-variables of the links) are
Primary Variables : pC1 , pC2 , pR1 , QI1 , QR2 , Qs (t) (7.104)
where the known source Qs (t) also serves as a primary variable. Similarly, secondary variables
(i.e., through-variables of the branches and across-variables of the links) are
Secondary Variables : QC1 , QC2 , QR1 , pI1 , pR2 , ps (t) (7.105)
Note that ps (t) is an unknown pressure needed to maintain the prescribed flow rate Qs (t). Among the list of primary variables (cf. (7.104)), pC1 , pC2 , and QI1 come from capacitors or an inductor, whose elemental equations carry a differential operator \frac{d}{dt} preceding the primary variables. Therefore, the state variables are

State Variables : p_{C_1}, p_{C_2}, Q_{I_1} \qquad (7.106)

Figure 7.39: Loops and contours to generate compatibility and continuity equations

Based on the linear graph in Fig. 7.37, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).
\frac{d}{dt} p_{C_1} = \frac{1}{C_1} Q_{C_1}
\frac{d}{dt} p_{C_2} = \frac{1}{C_2} Q_{C_2}
p_{R_1} = R_1 Q_{R_1} \qquad (7.107)
\frac{d}{dt} Q_{I_1} = \frac{1}{I_1} p_{I_1}
Q_{R_2} = \frac{1}{R_2} p_{R_2}

Figure 7.39 shows the loops and closed contours to generate the needed compatibility and continuity equations. When the source Qs is brought back to the tree, we obtain the compatibility equation
pA ≡ ps (t) = pC1 (7.108)
where pA is the pressure at node A. When the link I1 is brought back to the tree, we recover

p_{C_1} = p_{I_1} + p_{R_1} + p_{C_2} \qquad (7.109)

When the link R2 is brought back to the tree, we recover

pC2 = pR2 (7.110)

A closed contour around node A will cut only branch C1 resulting in the following continuity
equation.
Qs (t) = QC1 + QI1 (7.111)
A closed contour around node C will cut only branch R1 resulting in

QI1 = QR1 (7.112)

A closed contour around node B, however, would not work because it would cut two branches, R1 and C2 . Therefore, we must select a contour that encircles nodes C and B simultaneously. This contour will then cut only one branch, i.e., the capacitor C2 , leading to the following continuity equation.
QI1 = QC2 + QR2 (7.113)
Substituting (7.109) - (7.113) into (7.107) to eliminate the secondary variables leads to the following equations.

\frac{d}{dt} p_{C_1} = \frac{1}{C_1} Q_{C_1} = \frac{1}{C_1} (Q_s(t) - Q_{I_1}) \qquad \text{Node } A
\frac{d}{dt} p_{C_2} = \frac{1}{C_2} Q_{C_2} = \frac{1}{C_2} (Q_{I_1} - Q_{R_2}) \qquad \text{Nodes } B, C
p_{R_1} = R_1 Q_{R_1} = R_1 Q_{I_1} \qquad \text{Node } C \qquad (7.114)
\frac{d}{dt} Q_{I_1} = \frac{1}{I_1} p_{I_1} = \frac{1}{I_1} (p_{C_1} - p_{R_1} - p_{C_2}) \qquad \text{Loop } C_1 I_1 R_1 C_2
Q_{R_2} = \frac{1}{R_2} p_{R_2} = \frac{1}{R_2} p_{C_2} \qquad \text{Loop } C_2 R_2

To obtain the state equation, we must eliminate pR1 and QR2 in (7.114). The first equation of (7.114) requires no further reduction because every term has either a source or a state variable. The equation is reproduced here.

\frac{d}{dt} p_{C_1} = \frac{1}{C_1} (Q_s(t) - Q_{I_1}) \qquad (7.115)
Elimination of QR2 in the second and the fifth equations of (7.114) results in

\frac{d}{dt} p_{C_2} = \frac{1}{C_2} \left( Q_{I_1} - \frac{1}{R_2} p_{C_2} \right) \qquad (7.116)

Elimination of pR1 in the third and the fourth equations of (7.114) gives

\frac{d}{dt} Q_{I_1} = \frac{1}{I_1} (p_{C_1} - R_1 Q_{I_1} - p_{C_2}) \qquad (7.117)
Finally, (7.115), (7.116), and (7.117) can be rewritten as a matrix equation

\frac{d}{dt} \begin{bmatrix} p_{C_1} \\ p_{C_2} \\ Q_{I_1} \end{bmatrix} = \begin{bmatrix} 0 & 0 & -\frac{1}{C_1} \\ 0 & -\frac{1}{R_2 C_2} & \frac{1}{C_2} \\ \frac{1}{I_1} & -\frac{1}{I_1} & -\frac{R_1}{I_1} \end{bmatrix} \begin{bmatrix} p_{C_1} \\ p_{C_2} \\ Q_{I_1} \end{bmatrix} + \begin{bmatrix} \frac{1}{C_1} \\ 0 \\ 0 \end{bmatrix} Q_s(t) \qquad (7.118)

For the output equation, we first recall that the output variables of interest are the pressure at node A (i.e., pA ) and the flow rate into the capacitor C2 (i.e., QC2 ). First of all, pA already appears in (7.108), and it is reproduced here for reference.

pA = pC1 (7.119)

Since pC1 is a state variable, (7.119) is in good order and can serve as an output equation. For
QC2 , the best way is to use the elemental equation of capacitor C2 (see the second equation
in (7.107)), which is reproduced here for reference.
Q_{C_2} = C_2 \frac{d}{dt} p_{C_2} \qquad (7.120)
Although the right side involves a state variable, the derivative \frac{d}{dt} is not allowed. Therefore, the right side C_2 \frac{d}{dt} p_{C_2} should be replaced using the state equation (7.116), resulting in

Q_{C_2} = Q_{I_1} - \frac{1}{R_2} p_{C_2} \qquad (7.121)

Figure 7.40: A translational system driven by a prescribed velocity
Figure 7.41: Free-body diagram of the system in Fig. 7.40

Now the right side of (7.121) has only state variables and no derivatives. Therefore, (7.121)
is in good order and can serve as an output equation. Finally, we can put (7.119) and (7.121)
into the following matrix form
 
\begin{bmatrix} p_A \\ Q_{C_2} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & -\frac{1}{R_2} & 1 \end{bmatrix} \begin{bmatrix} p_{C_1} \\ p_{C_2} \\ Q_{I_1} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} Q_s(t) \qquad (7.122)
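The hydraulic model (7.118) and outputs (7.122) can be cross-checked numerically the same way. The unit parameter values and unit-step flow source below are illustrative assumptions. For a constant Qs , the levels should settle at QI1 = Qs , pC2 = R2 Qs , and pC1 = (R1 + R2) Qs , with the output QC2 vanishing once tank C2 stops filling.

```python
# Hedged sketch: forward-Euler integration of the hydraulic state equation
# (7.118) and output equation (7.122). Parameter values are illustrative
# assumptions only (R1 = R2 = C1 = C2 = I1 = 1).
R1 = R2 = C1 = C2 = I1 = 1.0

def derivs(x, Qs):
    pC1, pC2, QI1 = x
    dpC1 = (Qs - QI1) / C1                 # Eq. (7.115)
    dpC2 = (QI1 - pC2 / R2) / C2           # Eq. (7.116)
    dQI1 = (pC1 - R1 * QI1 - pC2) / I1     # Eq. (7.117)
    return dpC1, dpC2, dQI1

def simulate(Qs=1.0, dt=1e-3, t_end=60.0):
    x = [0.0, 0.0, 0.0]                    # tanks empty, no flow initially
    for _ in range(int(t_end / dt)):
        x = [xi + dt * di for xi, di in zip(x, derivs(x, Qs))]
    pC1, pC2, QI1 = x
    pA = pC1                               # output equation (7.119)
    QC2 = QI1 - pC2 / R2                   # output equation (7.121)
    return x, (pA, QC2)

states, outputs = simulate()
print("states:", [round(v, 3) for v in states])
print("outputs:", [round(v, 3) for v in outputs])
```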

Example 7.7 Figure 7.40 shows a translational system subjected to a prescribed velocity. A rigid block of mass m is connected to a wall via a spring of stiffness k1 and a dashpot with viscous damping coefficient B. The rigid block is driven by a prescribed velocity Vs (t) via a second spring with stiffness k2 . (Recall that a velocity can be prescribed through use of a shaker or a linear motor.) The output variables of interest are the force Fs (t) required to produce the prescribed motion Vs (t) as well as the force fB developed in the damper.

To construct a linear graph, we first analyze the free-body diagrams in Fig. 7.41. To maintain the prescribed motion Vs (t), there must exist a force Fs (t) acting on the right side of spring k2 and in equilibrium with the spring force fk2 . Similarly, the free-body diagram of the block indicates that

f_{k_2} - f_{k_1} - f_B = m \frac{d}{dt} v_m \equiv f_m \qquad (7.123)

Figure 7.42: Linear graph of the system in Fig. 7.40
Figure 7.43: Positive directions of the linear graph in Fig. 7.42

The free-body diagrams and (7.123) lead to the linear graph shown in Fig. 7.42. Note that the prescribed velocity Vs (t) serves as an across-variable source. Then, the force Fs (t) associated with the across-variable source flows through the spring k2 and splits into three branches: one overcomes the spring force fk1 , one overcomes the damping force fB , and one accelerates the mass m. The physical meaning of the positive directions in the linear graph is summarized in Fig. 7.43.

Figure 7.44 shows the corresponding normal tree. First, the across-variable source Vs (t) is included in the tree. Then the capacitor m is included in the tree. By now, all the nodes have been included and the normal tree is in good order. There is no dependent energy storage element in this system, as indicated by the normal tree.

Based on the normal tree, primary variables (i.e., across-variables of the branches and
through-variables of the links) are
Primary Variables : vm , Vs (t), fk1 , fk2 , fB (7.124)
where the known velocity source Vs (t) is a primary variable. Similarly, secondary variables
(i.e., through-variables of the branches and across-variables of the links) are
Secondary Variables : fm , Fs (t), vk1 , vk2 , vB (7.125)
Note that Fs (t) is an unknown force needed to maintain the prescribed velocity Vs (t). Among the list of primary variables (cf. (7.124)), vm , fk1 , and fk2 come from a capacitor or inductors, whose elemental equations carry a differential operator \frac{d}{dt} preceding the primary variables. Therefore, the state variables are

State Variables : v_m, f_{k_1}, f_{k_2} \qquad (7.126)

Figure 7.44: Normal tree from the linear graph in Fig. 7.42
Figure 7.45: Loops and closed contours to generate compatibility and continuity equations

Based on the linear graph in Fig. 7.42, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).

\frac{d}{dt} v_m = \frac{1}{m} f_m
\frac{d}{dt} f_{k_1} = k_1 v_{k_1} \qquad (7.127)
\frac{d}{dt} f_{k_2} = k_2 v_{k_2}
f_B = B v_B

Figure 7.45 shows the loops and closed contours to generate the needed compatibility and continuity equations. When the spring k2 is brought back to the tree, we obtain the

compatibility equation
Vs (t) = vk2 + vm (7.128)

When the link k1 is brought back to the tree, we recover

vk1 = vm (7.129)

When the link B is brought back to the tree, we recover

vB = vm (7.130)

A closed contour around the block mass node will cut only branch m resulting in the conti-
nuity equation (7.123), which is reproduced below.

fk2 − fk1 − fB = fm (7.131)

Substituting (7.128) - (7.131) into (7.127) to eliminate the secondary variables leads to the following equations.

\frac{d}{dt} v_m = \frac{1}{m} f_m = \frac{1}{m} (f_{k_2} - f_{k_1} - f_B) \qquad \text{Node block mass}
\frac{d}{dt} f_{k_1} = k_1 v_{k_1} = k_1 v_m \qquad \text{Loop } k_1 m \qquad (7.132)
\frac{d}{dt} f_{k_2} = k_2 v_{k_2} = k_2 [V_s(t) - v_m] \qquad \text{Loop } V_s k_2 m
f_B = B v_B = B v_m \qquad \text{Loop } B m

To obtain the state equation, we eliminate fB in the first and the fourth equations of
(7.132) to obtain
\frac{d}{dt} v_m = \frac{1}{m} (f_{k_2} - f_{k_1} - B v_m) \qquad (7.133)
The second and third equations of (7.132) require no further reduction because every term
has either a source or a state variable. They are reproduced here as a reference.

\frac{d}{dt} f_{k_1} = k_1 v_m \qquad (7.134)

\frac{d}{dt} f_{k_2} = k_2 [V_s(t) - v_m] \qquad (7.135)
Finally, (7.133), (7.134), and (7.135) can be rewritten in a matrix equation
      
\frac{d}{dt} \begin{bmatrix} v_m \\ f_{k_1} \\ f_{k_2} \end{bmatrix} = \begin{bmatrix} -\frac{B}{m} & -\frac{1}{m} & \frac{1}{m} \\ k_1 & 0 & 0 \\ -k_2 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_m \\ f_{k_1} \\ f_{k_2} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ k_2 \end{bmatrix} V_s(t) \qquad (7.136)

For the output equation, we first recall that the output variables of interest are the
forces Fs (t) and fB . Based on the free-body diagram

Fs (t) = fk2 (7.137)

Since fk2 is a state variable, (7.137) is in good order and can serve as an output equation.
For fB , the fourth equation in (7.132) indicates that

fB = Bvm (7.138)

Note that the right side of (7.138) involves only state variable vm . Therefore, (7.138) is in
good order and can serve as an output equation. Finally, we can put (7.137) and (7.138)
into the following matrix form
 
\begin{bmatrix} F_s(t) \\ f_B \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 \\ B & 0 & 0 \end{bmatrix} \begin{bmatrix} v_m \\ f_{k_1} \\ f_{k_2} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} V_s(t) \qquad (7.139)
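The state equation (7.136) and output equation (7.139) can also be exercised numerically. The sketch below simulates the free response with the shaker held still (Vs = 0) from an assumed initial velocity vm(0) = 1; the unit parameter values are illustrative assumptions. With Vs = 0 the quantity fk1/k1 + fk2/k2 is conserved at zero here, which forces every state, and hence both outputs, to decay to zero.

```python
# Hedged sketch: free response of the state equation (7.136) with the
# velocity source held at zero. Unit parameters and the initial velocity
# vm(0) = 1 are illustrative assumptions, not values from the text.
m = B = k1 = k2 = 1.0

def derivs(x, Vs):
    vm, fk1, fk2 = x
    dvm = (fk2 - fk1 - B * vm) / m   # Eq. (7.133)
    dfk1 = k1 * vm                   # Eq. (7.134)
    dfk2 = k2 * (Vs - vm)            # Eq. (7.135)
    return dvm, dfk1, dfk2

def simulate(Vs=0.0, x0=(1.0, 0.0, 0.0), dt=1e-3, t_end=40.0):
    x = list(x0)
    for _ in range(int(t_end / dt)):
        x = [xi + dt * di for xi, di in zip(x, derivs(x, Vs))]
    vm, fk1, fk2 = x
    Fs, fB = fk2, B * vm             # output equation (7.139)
    return x, (Fs, fB)

states, outputs = simulate()
print("states:", [round(v, 3) for v in states])
print("outputs:", [round(v, 3) for v in outputs])
```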

Example 7.8 Figure 7.46 shows a translational system subjected to a prescribed force. A rigid block of mass m is connected to a wall via a spring of stiffness k1 and a dashpot with viscous damping coefficient B. The rigid block is driven by a prescribed force Fs (t) via a second spring with stiffness k2 . Note that the system in Fig. 7.46 is identical to that shown in Fig. 7.40 except that the prescribed excitation is a force Fs (t) instead of a velocity Vs (t). So a very important goal of this example is to demonstrate how the difference in the prescribed source affects the resulting state equations. The output variables of interest are the velocity and displacement of node A, which is the right end of the spring k2 where the prescribed force Fs (t) is applied.

Figure 7.46: A translational system driven by a prescribed force
Figure 7.47: Free-body diagram of the system in Fig. 7.46

To construct a linear graph, we first analyze free-body diagrams in Fig. 7.47. The
prescribed force Fs (t) from the source acts directly on the right side of spring k2 and is in
equilibrium with the spring force fk2 . Similarly, the free-body diagram on the block indicates
that
f_{k_2} - f_{k_1} - f_B = m \frac{d}{dt} v_m \equiv f_m \qquad (7.140)
The free-body diagrams and (7.140) lead to the linear graph shown in Fig. 7.48. Physical
meaning of the positive directions in the linear graph is summarized in Fig. 7.49.

One should note that the free-body diagram in Fig. 7.47 is identical to that in Example 7.7. As a result, the continuity equation (7.140) from Newton's second law is identical to that shown in (7.123). Naturally, the linear graph in Fig. 7.48 is largely identical to that shown in Fig. 7.42, except that the source in Fig. 7.48 is a through-variable source Fs (t) while the source in Fig. 7.42 is an across-variable source. Since the source in Fig. 7.48 prescribes the force driving the system, its response Vs (t) is to be determined. (It was the other way around in Example 7.7.) Likewise, the physical meaning of the positive directions is the same as in Example 7.7; see Fig. 7.49 vs. Fig. 7.43.

Figure 7.50 shows the corresponding normal tree. This is where the derivation of state
equations deviates dramatically from that of Example 7.7. First, the through-variable source
Fs (t) cannot be included in the normal tree. Then the capacitor m appears in the tree. To
connect all the nodes, an inductor k2 must appear in the tree. As a result, the inductor k2
becomes a dependent energy storage element of this system.

Based on the normal tree, primary variables (i.e., across-variables of the branches and

Figure 7.48: Linear graph of the system in Fig. 7.46
Figure 7.49: Positive directions of the linear graph in Fig. 7.48

through-variables of the links) are

Primary Variables : vm , [vk2 ] , fk1 , fB , Fs (t) (7.141)

where the prescribed force Fs (t) is a primary variable. Also, vk2 is shown inside square brackets to denote that k2 is a dependent energy storage element. Similarly, secondary variables (i.e., through-variables of the branches and across-variables of the links) are

Secondary Variables : fm , fk2 , vk1 , vB , Vs (t) (7.142)

As stated earlier, Vs (t) is the unknown velocity at the source controlled by the prescribed force Fs (t). Among the list of primary variables (cf. (7.141)), vm and fk1 come from a capacitor or an inductor, whose elemental equations carry a differential operator \frac{d}{dt} preceding the primary variables. Therefore, they are qualified to serve as state variables. In contrast, the across-variable vk2 of the inductor k2 is disqualified from being a state variable. Although vk2 is in the list of primary variables, k2 is a dependent energy storage element and therefore a differential operator \frac{d}{dt} will not precede vk2 . Therefore, the state variables are

State Variables : v_m, f_{k_1} \qquad (7.143)

Based on the linear graph in Fig. 7.48, we write down the following elemental equations

Figure 7.50: Normal tree from the linear graph in Fig. 7.48
Figure 7.51: Loops and closed contours to generate compatibility and continuity equations

(with primary variables on the left and secondary variables on the right).
\frac{d}{dt} v_m = \frac{1}{m} f_m
v_{k_2} = \frac{1}{k_2} \frac{d}{dt} f_{k_2} \qquad (7.144)
\frac{d}{dt} f_{k_1} = k_1 v_{k_1}
f_B = B v_B

Figure 7.51 shows the loops and closed contours to generate needed compatibility and
continuity equations. When the source Fs (t) is brought back to the tree, we obtain the
compatibility equation
Vs (t) ≡ vA = vk2 + vm (7.145)
When the link k1 is brought back to the tree, we recover

vk1 = vm (7.146)

When the link B is brought back to the tree, we recover

vB = vm (7.147)
7.8. DERIVING STATE EQUATIONS FROM LINEAR GRAPHS 245

A closed contour around node A (i.e., the right end of the spring k2 ) will cut only branch k2
resulting in
Fs (t) = fk2 (7.148)

A closed contour around node B (i.e., the block mass), however, will cut two branches, m and k2 , simultaneously. Therefore, we choose a closed contour that encloses both nodes A and B, leading to the continuity equation

Fs (t) = fk1 + fB + fm (7.149)

Substituting (7.146) - (7.149) into (7.144) to eliminate the secondary variables leads to the following equations.

\frac{d}{dt} v_m = \frac{1}{m} f_m = \frac{1}{m} [F_s(t) - f_{k_1} - f_B] \qquad \text{Nodes } A, B
v_{k_2} = \frac{1}{k_2} \frac{d}{dt} f_{k_2} = \frac{1}{k_2} \frac{d}{dt} F_s(t) \qquad \text{Node } A \qquad (7.150)
\frac{d}{dt} f_{k_1} = k_1 v_{k_1} = k_1 v_m \qquad \text{Loop } k_1 m
f_B = B v_B = B v_m \qquad \text{Loop } B m

To obtain the state equation, we eliminate fB in the first and the fourth equations of
(7.150) to obtain
\frac{d}{dt} v_m = \frac{1}{m} [F_s(t) - f_{k_1} - B v_m] \qquad (7.151)
The third equation of (7.150) requires no further reduction because every term has a state
variable. It is reproduced here as a reference.

\frac{d}{dt} f_{k_1} = k_1 v_m \qquad (7.152)
Finally, (7.151) and (7.152) are rearranged in a matrix equation
\frac{d}{dt} \begin{bmatrix} v_m \\ f_{k_1} \end{bmatrix} = \begin{bmatrix} -\frac{B}{m} & -\frac{1}{m} \\ k_1 & 0 \end{bmatrix} \begin{bmatrix} v_m \\ f_{k_1} \end{bmatrix} + \begin{bmatrix} \frac{1}{m} \\ 0 \end{bmatrix} F_s(t) \qquad (7.153)

For the output equation, we first recall that the output variables of interest are the
velocity at the source Vs (t) and the displacement at the source xA . To find Vs (t), one can
combine (7.145) and the second equation of (7.150) to obtain
V_s(t) = v_m + \frac{1}{k_2} \frac{d}{dt} F_s(t) \qquad (7.154)
Note that (7.154) is not in a traditional form of an output equation, because a derivative of
the input variable appears. This is a natural consequence of using power variables, because
a time derivative is applied to Hooke’s law as shown in (5.9) and (5.10).

For xA , the compatibility equation in (7.145) implies that

x_A = x_{k_1} + x_{k_2} = \frac{1}{k_1} f_{k_1} + \frac{1}{k_2} F_s(t) \qquad (7.155)
where Hooke’s law for the spring k1 and the continuity equation (7.148) are used. Note that
the right side of (7.155) involves only the state variable fk1 and the input variable Fs (t).
Therefore, (7.155) is in good order and can serve as an output equation. We can now put
(7.155) into a matrix form
x_A = \begin{bmatrix} 0 & \frac{1}{k_1} \end{bmatrix} \begin{bmatrix} v_m \\ f_{k_1} \end{bmatrix} + \frac{1}{k_2} F_s(t) \qquad (7.156)
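As a final numerical check, the two-state model (7.153) with the displacement output (7.155) can be simulated. The unit parameters and unit-step force below are illustrative assumptions. For a constant Fs , the block comes to rest (vm → 0) with spring k1 carrying the full load (fk1 → Fs ), so the displacement output settles at xA = Fs (1/k1 + 1/k2 ).

```python
# Hedged sketch: step response of the two-state model (7.153) and the
# displacement output (7.155). Unit parameters and the unit step force
# are illustrative assumptions, not values from the text.
m = B = k1 = k2 = 1.0

def simulate(Fs=1.0, dt=1e-3, t_end=40.0):
    vm, fk1 = 0.0, 0.0                   # start at rest, spring relaxed
    for _ in range(int(t_end / dt)):
        dvm = (Fs - fk1 - B * vm) / m    # Eq. (7.151)
        dfk1 = k1 * vm                   # Eq. (7.152)
        vm, fk1 = vm + dt * dvm, fk1 + dt * dfk1
    xA = fk1 / k1 + Fs / k2              # output equation (7.155)
    return vm, fk1, xA

vm, fk1, xA = simulate()
print(round(vm, 3), round(fk1, 3), round(xA, 3))
```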

Example 7.9 Figure 7.52 shows a two-degrees-of-freedom system subjected to a gravitational force. The system consists of two blocks with masses m1 and m2 , two springs with stiffness coefficients k1 and k2 , and a dashpot with damping coefficient B1 . The first block m1 is connected to a wall via the spring k1 and the damper B1 . The second block m2 is connected to the first block m1 via the spring k2 . The first block m1 moves horizontally and is not subjected to the gravitational effect. In contrast, the second block m2 moves vertically and the gravitational acceleration applies. A prescribed force Fs (t) is applied to the first block m1 . The goal is to derive the state equation of the two-degrees-of-freedom system.

To construct a linear graph, we first analyze the free-body diagrams in Fig. 7.53. For the first block m1 , the sum of all forces results in

F_s(t) + f_{k_2} - f_{k_1} - f_{B_1} = m_1 \frac{d}{dt} v_{m_1} \equiv f_{m_1} \qquad (7.157)
7.8. DERIVING STATE EQUATIONS FROM LINEAR GRAPHS 247

Figure 7.52: A two-mass translational system driven by gravity
Figure 7.53: Free-body diagram of the system in Fig. 7.52

For the second block m2 , sum of all forces results in


m2 g − fk2 = m2 (d/dt) vm2 ≡ fm2    (7.158)
The free-body diagrams and (7.157) and (7.158) lead to the linear graph shown in Fig. 7.54.
From (7.157), Fs (t) and fk2 flow into node A. At the same time, fk1 , fB1 , and fm1 flow out
of node A. From (7.158), m2 g flows into node B, and fk2 and fm2 flow out of node B. Note
that the applied force Fs (t) and the weight m2 g serve as through-variable sources. Moreover,
the physical meaning of the positive directions in the linear graph is summarized in Fig. 7.55.

Figure 7.56 shows the corresponding normal tree. First, there are no across-variable
sources. Then the capacitors m1 and m2 appear in the tree. By now, all nodes have been
connected in the tree and the tree is complete. There is no dependent energy storage element
of this system.

Based on the normal tree, primary variables (i.e., across-variables of the branches and
through-variables of the links) are

Primary Variables : vm1 , vm2 , fk1 , fk2 , fB1 , Fs (t), m2 g (7.159)

where the prescribed forces Fs (t) and m2 g are primary variables. Similarly, secondary vari-
ables (i.e., through-variables of the branches and across-variables of the links) are

Secondary Variables : fm1 , fm2 , vk1 , vk2 , vB1 , vA , vB (7.160)



Figure 7.54: Linear graph of the system in Fig. 7.52
Figure 7.55: Positive directions of the linear graph in Fig. 7.54

where vA and vB are the unknown velocities at nodes A and B, respectively. Among the list
of primary variables (cf. (7.159)), vm1 , vm2 , fk1 , fk2 are from either capacitors or inductors,
whose elemental equations will carry a differential operator d/dt preceding the primary variables.
Therefore, the state variables are

State Variables : vm1 , vm2 , fk1 , fk2 (7.161)

Based on the linear graph in Fig. 7.54, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).

(d/dt) vm1 = (1/m1) fm1
(d/dt) vm2 = (1/m2) fm2
(d/dt) fk1 = k1 vk1    (7.162)
(d/dt) fk2 = k2 vk2
fB1 = B1 vB1

Figure 7.56: Normal tree from the linear graph in Fig. 7.54
Figure 7.57: Loops and closed contours to generate compatibility and continuity equations

Figure 7.57 shows the loops and closed contours to generate the needed compatibility and
continuity equations. When the source Fs (t) is brought back to the tree, we obtain the
compatibility equation
Vs (t) ≡ vA = vm1 (7.163)
When the link k1 is brought back to the tree, we recover

vk1 = vm1 (7.164)

When the link B1 is brought back to the tree, we recover

vB1 = vm1 (7.165)

When the source m2 g is brought back to the tree, we obtain the compatibility equation

vB = vm2    (7.166)

When the link k2 is brought back to the tree, we recover

vk2 = vm2 − vm1 (7.167)

Now let us turn to the continuity equations. A closed contour around node A will cut
only branch m1 , resulting in

Fs (t) + fk2 = fk1 + fB1 + fm1 (7.168)



which is the same result as in (7.157). Similarly, a closed contour around node B leads to the
continuity equation
m2 g = fk2 + fm2 (7.169)

Substitution of (7.163)–(7.169) into (7.162) to eliminate the secondary variables leads to
the following equations.

(d/dt) vm1 = (1/m1) fm1 = (1/m1) [Fs(t) + fk2 − fk1 − fB1]    Node A
(d/dt) vm2 = (1/m2) fm2 = (1/m2) (m2 g − fk2)    Node B
(d/dt) fk1 = k1 vk1 = k1 vm1    Loop k1 m1    (7.170)
(d/dt) fk2 = k2 vk2 = k2 (vm2 − vm1)    Loop k2 m1 m2
fB1 = B1 vB1 = B1 vm1    Loop B1 m1

To obtain the state equation, we use the fifth equation of (7.170) to eliminate fB1 from
the first equation, obtaining

(d/dt) vm1 = (1/m1) [Fs(t) + fk2 − fk1 − B1 vm1]    (7.171)
The second, third, and fourth equations of (7.170) require no further reduction because
every term has a state variable or a source. They are reproduced here as a reference.

(d/dt) vm2 = (1/m2) (m2 g − fk2)    (7.172)

(d/dt) fk1 = k1 vm1    (7.173)

(d/dt) fk2 = k2 (vm2 − vm1)    (7.174)

Finally, (7.171) to (7.174) are rearranged in a matrix equation

d/dt [ vm1 ; vm2 ; fk1 ; fk2 ] =
[ −B1/m1    0    −1/m1    1/m1 ;
     0      0      0     −1/m2 ;
    k1      0      0       0   ;
   −k2     k2      0       0   ] [ vm1 ; vm2 ; fk1 ; fk2 ]
+ [ 1/m1  0 ; 0  1 ; 0  0 ; 0  0 ] [ Fs(t) ; g ]    (7.175)
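As a numerical sanity check on (7.175), the static equilibrium x_eq = −A⁻¹ B u should give zero velocities, fk2 = m2 g (spring k2 carries the weight of m2), and fk1 = Fs + m2 g. A minimal sketch in Python, with illustrative parameter values of my own choosing (not from the text):

```python
import numpy as np

# Illustrative parameter values, not from the text
m1, m2, k1, k2, B1 = 2.0, 1.5, 40.0, 25.0, 3.0
Fs, g = 10.0, 9.81

# State vector x = [v_m1, v_m2, f_k1, f_k2] and inputs u = [Fs, g], per (7.175)
A = np.array([[-B1/m1, 0.0, -1.0/m1, 1.0/m1],
              [0.0,    0.0,  0.0,   -1.0/m2],
              [k1,     0.0,  0.0,    0.0],
              [-k2,    k2,   0.0,    0.0]])
B = np.array([[1.0/m1, 0.0],
              [0.0,    1.0],
              [0.0,    0.0],
              [0.0,    0.0]])
u = np.array([Fs, g])

# Equilibrium: 0 = A x + B u  =>  x_eq = -A^{-1} B u
x_eq = -np.linalg.solve(A, B @ u)
print(x_eq)  # velocities zero; f_k1 = Fs + m2*g, f_k2 = m2*g
```

Solving the linear system rather than inverting A explicitly is the numerically robust choice.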

Figure 7.58: A two-stage rotor system driven by a prescribed torque

Example 7.10 Figure 7.58 shows a two-stage rotor system driven by a prescribed torque
Ts (t). The system consists of two rotors with moments of inertia J1 and J2 , a torsional spring
with stiffness coefficient k, and two drag cups with damping coefficients B1 and B2 . The
rotors are supported by frictionless bearings. The prescribed torque Ts (t) drives the first-
stage rotor J1 via the drag cup B1 . The first-stage rotor J1 is connected to the second-stage
rotor J2 via the drag cup B2 . The second-stage rotor J2 is then connected to a wall via the
spring k. The goal is to derive the state equation of the two-stage rotor system.

Figure 7.59 shows the linear graph of the two-rotor system. Node A represents the left
end of the drag cup B1 where the prescribed torque Ts (t) is applied. Node B is the first-stage
rotor J1 and node C is the second-stage rotor J2 . When the torque Ts (t) is applied to node
A, the entire torque passes through the drag cup B1 to the rotor J1 (i.e., node B), thus
producing the branch AB in the linear graph. When the torque arrives at node B, part of
the torque accelerates the rotor J1 (producing the branch BG in the linear graph) and the
rest of the torque moves on to the drag cup B2 (thus producing the branch BC in the linear
graph). When the torque reaches the second-stage rotor (i.e., node C), part of the torque

Figure 7.59: Linear graph of the system in Fig. 7.58
Figure 7.60: Normal tree from the linear graph in Fig. 7.59

accelerates the rotor J2 (thus producing the branch CG through the capacitor J2 ) and the
rest of the torque deforms the torsional spring k (thus producing the branch CG through
the inductor k).

Figure 7.60 shows the corresponding normal tree. First, there are no across-variable
sources. Then the capacitors J1 and J2 appear in the tree. To include all the nodes, the
resistor B1 must appear in the tree. By now, all nodes have been connected in the tree and
the tree is complete. There is no dependent energy storage element of this system.

Based on the normal tree, primary variables (i.e., across-variables of the branches and
through-variables of the links) are

Primary Variables : ωJ1 , ωJ2 , ωB1 , TB2 , Tk , Ts (t) (7.176)

where the prescribed torque Ts (t) is a primary variable. Similarly, secondary variables (i.e.,
through-variables of the branches and across-variables of the links) are

Secondary Variables : TJ1 , TJ2 , TB1 , ωB2 , ωk , ωs (t) (7.177)

where ωs (t) is the unknown angular velocity at node A to enable the prescribed torque Ts (t).
Among the list of primary variables (cf. (7.176)), ωJ1 , ωJ2 , and Tk are from either capacitors
or inductors, whose elemental equations will carry a differential operator d/dt preceding the
primary variables. Therefore, the state variables are

State Variables : ωJ1 , ωJ2 , Tk (7.178)



Figure 7.61: Loops and closed contours to generate compatibility and continuity equations

Based on the linear graph in Fig. 7.59, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).

(d/dt) ωJ1 = (1/J1) TJ1
(d/dt) ωJ2 = (1/J2) TJ2
ωB1 = (1/B1) TB1    (7.179)
TB2 = B2 ωB2
(d/dt) Tk = k ωk

Figure 7.61 shows the loops and closed contours to generate the needed compatibility and
continuity equations. When the source Ts (t) is brought back to the tree, we obtain the
compatibility equation
ωs (t) ≡ ωB1 + ωJ1 (7.180)

When the link B2 is brought back to the tree, we recover

ωB2 = ωJ1 − ωJ2 (7.181)



When the link k is brought back to the tree, we recover

ωk = ωJ2 (7.182)

Now let us turn to the continuity equations. A closed contour around node A will cut
only branch B1 resulting in
TB1 = Ts (t) (7.183)
A closed contour around node B will cut into two branches B1 and J1 . Therefore, we need
to choose a closed contour enclosing nodes A and B simultaneously. This will only cut into
branch J1 leading to the continuity equation

TJ1 = Ts (t) − TB2 (7.184)

Finally, a closed contour around node C will cut only branch J2 resulting in

TJ2 = TB2 − Tk (7.185)

Substitution of (7.181)–(7.185) into (7.179) to eliminate the secondary variables leads to
the following equations.
(d/dt) ωJ1 = (1/J1) TJ1 = (1/J1) (Ts(t) − TB2)    Nodes A, B
(d/dt) ωJ2 = (1/J2) TJ2 = (1/J2) (TB2 − Tk)    Node C
ωB1 = (1/B1) TB1 = (1/B1) Ts(t)    Node A    (7.186)
TB2 = B2 ωB2 = B2 (ωJ1 − ωJ2)    Loop J1 B2 J2
(d/dt) Tk = k ωk = k ωJ2    Loop J2 k

To obtain the state equation, we use the fourth equation of (7.186) to eliminate TB2
from the first and the second equations, obtaining

(d/dt) ωJ1 = (1/J1) [Ts(t) − B2 (ωJ1 − ωJ2)]    (7.187)

Figure 7.62: A thermal system involving a wok on a cook stove
Figure 7.63: Linear graph of the system in Fig. 7.62

and
(d/dt) ωJ2 = (1/J2) [B2 (ωJ1 − ωJ2) − Tk]    (7.188)
The fifth equation of (7.186) requires no further reduction because every term has a state
variable or a source. It is reproduced here as a reference.
(d/dt) Tk = k ωJ2    (7.189)

Finally, (7.187) to (7.189) are rearranged in a matrix equation


d/dt [ ωJ1 ; ωJ2 ; Tk ] =
[ −B2/J1    B2/J1     0   ;
   B2/J2   −B2/J2   −1/J2 ;
     0        k       0   ] [ ωJ1 ; ωJ2 ; Tk ] + [ 1/J1 ; 0 ; 0 ] Ts(t)    (7.190)
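The elimination step leading to (7.190) can be cross-checked symbolically. Below is a minimal sketch assuming SymPy is available (the symbol names are my own):

```python
import sympy as sp

wJ1, wJ2, Tk, Ts = sp.symbols('omega_J1 omega_J2 T_k T_s')
J1, J2, B2, k = sp.symbols('J1 J2 B2 k', positive=True)

TB2 = B2*(wJ1 - wJ2)            # fourth equation of (7.186)
f = sp.Matrix([(Ts - TB2)/J1,   # (7.187)
               (TB2 - Tk)/J2,   # (7.188)
               k*wJ2])          # (7.189)

# Collect into x_dot = A x + B Ts with x = [omega_J1, omega_J2, T_k]
x = sp.Matrix([wJ1, wJ2, Tk])
A = f.jacobian(x)
Bm = f.jacobian(sp.Matrix([Ts]))
print(A, Bm)  # should reproduce the matrices in (7.190)
```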

Example 7.11 Figure 7.62 illustrates a thermal system modeling a wok with food cooking
on a cook stove. The cook stove provides a prescribed heat flow rate Qs (t) to the wok with
food. The wok has a conduction resistance R2 and dissipates heat to the ambient air via a
convection resistance R1 . The food inside the wok has thermal capacitance Ct . The goal is
to derive the state equation governing the dynamics of this thermal system.

Figure 7.63 shows the corresponding linear graph. Let the stove be node S and the wok
be node W . The ground has the ambient temperature T∞ . If we go with the heat flow, we
will find Qs (t) coming out of the stove and entering the wok via the resistance R2 . After the
heat arrives at the wok, part of it dissipates into air via resistance R1 and the rest will heat
up the food of capacitance Ct .

Figure 7.64 shows the normal tree of the thermal system. First, there are no across-
variable sources. Then the only capacitor Ct appears in the tree. To include all the nodes,
the resistor R2 must appear in the tree. By now, all nodes have been connected in the tree
and the tree is complete. There is no dependent energy storage element of this system.

Based on the normal tree, primary variables (i.e., across-variables of the branches and
through-variables of the links) are

Primary Variables : TCt , TR2 , qR1 , Qs (t) (7.191)

where the prescribed heat flow rate Qs (t) is a primary variable. Similarly, secondary variables (i.e.,
through-variables of the branches and across-variables of the links) are

Secondary Variables : qCt , qR2 , TR1 , Ts (t) (7.192)

where Ts (t) is the unknown temperature rise at the stove to enable the prescribed heat flow
rate Qs (t). Among the list of primary variables (cf. (7.191)), only TCt is from a capacitor
or an inductor, whose elemental equation will carry a differential operator d/dt preceding the
primary variable. Therefore, there is only one state variable

State Variables : TCt (7.193)

Based on the linear graph in Fig. 7.63, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).
(d/dt) TCt = (1/Ct) qCt
TR2 = R2 qR2    (7.194)
qR1 = (1/R1) TR1

Figure 7.64: Normal tree from the linear graph in Fig. 7.63
Figure 7.65: Loop and node equations from the normal tree in Fig. 7.64

Figure 7.65 shows the loops and closed contours to generate the needed compatibility and
continuity equations. When the source Qs (t) is brought back to the tree, we obtain the
compatibility equation
Ts(t) ≡ TR2 + TCt    (7.195)

When the link R1 is brought back to the tree, we recover

TR1 = TCt (7.196)

Now let us turn to the continuity equations. A closed contour around node S will cut
only branch R2 resulting in
qR2 = Qs (t) (7.197)

A closed contour around node W will cut into two branches R2 and Ct . Therefore, we need
to choose a closed contour enclosing nodes S and W simultaneously. This will only cut into
branch Ct leading to the continuity equation

qCt = Qs (t) − qR1 (7.198)

Substitution of (7.195)–(7.198) into (7.194) to eliminate the secondary variables leads to
the following equations.

(d/dt) TCt = (1/Ct) qCt = (1/Ct) [Qs(t) − qR1]    Nodes S, W
TR2 = R2 qR2 = R2 Qs(t)    Node S    (7.199)
qR1 = (1/R1) TR1 = (1/R1) TCt    Loop Ct R1

To obtain the state equation, we eliminate qR1 in the first equation of (7.199) to obtain

(d/dt) TCt = (1/Ct) [Qs(t) − (1/R1) TCt]    (7.200)
or

(d/dt) TCt = −(1/(R1 Ct)) TCt + (1/Ct) Qs(t)    (7.201)
Since there is only one state variable, (7.201) is indeed the state equation of the system.
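Because (7.201) is a first-order system with time constant R1 Ct, its step response from TCt(0) = 0 is TCt(t) = R1 Qs (1 − e^(−t/(R1 Ct))). A quick numerical sketch (the parameter values are mine, not from the text) confirms this against a forward-Euler integration of (7.201):

```python
import numpy as np

# Illustrative parameters, not from the text
R1, Ct, Qs = 2.0, 0.5, 3.0
tau = R1 * Ct                       # time constant of (7.201)

# Forward-Euler integration of dT/dt = -T/(R1*Ct) + Qs/Ct from T(0) = 0
dt, t_end = 1e-4, 5.0
T = 0.0
for _ in range(int(round(t_end / dt))):
    T += dt * (-T / (R1 * Ct) + Qs / Ct)

# Closed-form step response of the first-order system
T_exact = R1 * Qs * (1.0 - np.exp(-t_end / tau))
print(T, T_exact)  # the two agree to O(dt)
```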

7.9 Closing Remarks

In this chapter, we have introduced the concept of using linear graphs to obtain state equa-
tions. The advantage is that it is a universal approach that works for all kinds of domains.
This is a powerful tool especially for systems that involve multiple domains. Nevertheless,
there are two closing remarks that I want to bring to your attention.

First, one should note that linear graphs are not the only approach to obtaining state
equations. For example, we have used Newton’s second law to obtain state equations; see
(4.11). We have also used a heuristic approach to obtain state equations; see Section 7.1.
In those examples, the derivations are often less complicated and cumbersome than the
linear graph approach.

Second, one should note that the linear graph approach may not give you the model order
you need for a system. This arises from the choice of power variables. For example,

Figure 7.66: The flagship example revisited here
Figure 7.67: Linear graph of the flagship system shown in Fig. 7.66

Hooke’s law in the mechanical domain is F = kx as shown in (5.9). To fit into the framework of
power variables, a derivative of Hooke’s law, dF/dt = kv, is used instead; see (5.10). This
implies that the state equations derived through use of linear graphs will end up with power
variables f and v. Most of the time this works nicely. Occasionally, state equations from linear
graphs may not have enough order to take care of displacements, thus not revealing the entire
physics of the system.

The following example demonstrates the two important remarks I made above.

Example 7.12 This example revisits our flagship example shown in Fig. 1.4, which is
reproduced here in Fig. 7.66. I will first use the method of linear graphs to derive the state
equation. Then I will use Newton’s second law to derive the state equation again.

Figure 7.67 shows the linear graph of the flagship system. The force Fs (t) is a through-
variable source, and it is split into two branches. One branch is to overcome the spring force,
and the other branch is to accelerate the mass. The force going to the spring will continue to
pass to the damper. Figure 7.68 shows the corresponding normal tree. There is no across-variable
source. Therefore, the mass m (i.e., the capacitor) is included in the normal tree first. Then
a damper B is introduced to cover all the nodes. There is no dependent energy storage
element, because m is in the tree and k is in the link. Moreover, primary variables are

Primary Variables : vm , vB , fk , Fs (t) (7.202)

where the prescribed force Fs (t) is a primary variable. Similarly, secondary variables (i.e.,

Figure 7.68: Normal tree of the linear graph shown in Fig. 7.67
Figure 7.69: Loop and node equations from the normal tree in Fig. 7.68

through-variables of the branches and across-variables of the links) are

Secondary Variables : fm , fB , vk , Vs (t) (7.203)

where Vs (t) is the unknown velocity associated with the source Fs (t). The state variables
are
State Variables : vm , fk (7.204)

Following the established procedure of writing down elemental equations, loop equations,
and node equations, we obtain the following set of equations. (Refer to Fig. 7.69 for the
loops and nodes chosen.)
(d/dt) vm = (1/m) fm = (1/m) [Fs(t) − fk]    Node A
vB = (1/B) fB = (1/B) fk    Node B    (7.205)
(d/dt) fk = k vk = k (vm − vB)    Loop m k B

Finally, elimination of variable vB leads to the state equation

d/dt [ vm ; fk ] = [ 0  −1/m ; k  −k/B ] [ vm ; fk ] + [ 1/m ; 0 ] Fs(t)    (7.206)
Note that the state equation is second-order with state variables fk and vm . Should one
want to know the position x2 (t) of the mass m, it is necessary to integrate vm (t) with
respect to t. In other words, the state equation in (7.206) does not provide information on
x2 (t) directly.
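One remedy is to augment the state vector of (7.206) with x2 itself via the extra equation dx2/dt = vm. The sketch below (parameter values are mine, not from the text) shows that the augmentation leaves the original two eigenvalues untouched and merely appends an integrator pole at the origin:

```python
import numpy as np

# Illustrative parameters, not from the text
m, k, B = 1.0, 4.0, 2.0

# Second-order state equation (7.206) with x = [v_m, f_k]
A2 = np.array([[0.0, -1.0/m],
               [k,   -k/B]])

# Augmented third-order system with x = [v_m, f_k, x2] and dx2/dt = v_m
A3 = np.zeros((3, 3))
A3[:2, :2] = A2
A3[2, 0] = 1.0

# The added integrator contributes an eigenvalue at the origin;
# the two original eigenvalues are unchanged.
print(np.linalg.eigvals(A2), np.linalg.eigvals(A3))
```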

The state equation can also be derived through use of Newton’s second law. The
detailed derivation is already shown in Example 4.4; therefore, it is not repeated here. The
state equation derived this way has the position x2 as a state variable, resulting in a
third-order state equation; see (4.63).
Chapter 8

Deriving State Equations of Multi-Domain Systems

In this chapter, our goal is to generalize the linear graph approach developed in Chapter 7 to
multi-domain systems. Since energy is exchanged between various domains, there must be
elements that transform energy from one domain to another (e.g., an electric motor changes
energy from an electrical domain to a mechanical domain). Such elements are known as
two-port elements.

For the rest of the chapter, I will first explain two-port elements in detail. Then I will
develop a procedure to create linear graphs and normal trees involving multiple domains and
two-port elements. Finally, I will demonstrate a recipe through which state equations are
derived systematically based on the linear graphs and normal trees.

8.1 Two-Port Transducing Elements: An Introduction

Multi-domain systems are systems that consist of sub-systems from multiple domains, such as
electrical, translational, rotational, thermal, and hydraulic sub-systems. For example, Fig. 8.1
illustrates a multi-domain system consisting of an electrical motor, a rotor, and a gear


Figure 8.1: An electrical motor driving a rotor via a gear train

train connecting the motor and the rotor. The system is multi-domain because it involves
electric energy (characterized by power variables V and i) and rotational mechanical energy
(characterized by power variables T and ω).

Figure 8.2 shows power flow and energy transduction of this multi-domain system.
When the energy goes from the electrical domain to the rotational mechanical domain, the
energy is transduced twice. The first transduction is through the motor from power variables
V and i of the electrical domain to the power variables ω1 and T1 of the first gear. This energy
transduction occurs between two different domains. The second transduction is through the
gear train from the power variables ω1 and T1 of the first gear to the power variables ω2
and T2 of the second gear and hence the rotor. In contrast, this energy transduction occurs
in the same domain, i.e., the rotational mechanical domain.
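As a tiny numerical illustration of the second transduction (the numbers are mine, not from the text), a lossless gear train with ratio n scales the power variables in opposite directions, ω2 = ω1/n and T2 = n T1, so the transmitted power is unchanged:

```python
# Illustrative numbers, not from the text
n = 3.0                   # assumed gear ratio
omega1, T1 = 100.0, 2.0   # angular velocity [rad/s] and torque [N*m] at the first gear

# Lossless gear train: across-variable scales down, through-variable scales up
omega2, T2 = omega1 / n, n * T1

print(omega1 * T1, omega2 * T2)  # both approximately 200 W: power is conserved
```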

Figure 8.2: Power flow in the motor-gear-rotor system shown in Fig. 8.1

The mechanisms or devices (e.g., electrical motors and gears) that transduce energy
between two domains, whether they be different or the same, are called two-port transducing
elements. Our goal in this section is to model two-port elements mathematically and to
develop a linear graph representation of two-port elements.

To achieve this goal, let us make some more observations based on the example of the
motor-gear-rotor system shown in Fig. 8.1. First of all, energy flows out of a first domain
and flows into a second domain. Therefore, a two-port element behaves like a relay. One
port of the element receives energy from the first domain, and the other port of the element
gives the energy to the second domain. For the first domain, the two-port element behaves
like a load to be driven. For the second domain, the two-port element behaves like a source
to provide energy.

For example, the electrical motor receives energy from the electrical domain via power
variables V and i in one port; see Fig. 8.3. For the electrical domain, the motor behaves
like a load to be driven. Then the electrical motor outputs the energy to the other port
via power variables ω1 and T1 . In this case, the electric motor behaves like a source to the
rotational mechanical domain. Similarly, the gear train receives energy from power variables
ω1 and T1 via one port and behaves like a load to be driven. Then the gear train outputs the
energy to the rotor via power variables ω2 and T2 behaving like a source providing energy.
The two-port transduction nature is illustrated in Fig. 8.3 conceptually.

Figure 8.3: Power flow in the motor-gear-rotor system shown in Fig. 8.1

8.2 Mathematical Model of Two-Port Elements

Figure 8.4 is a diagram illustrating energy transfer of a two-port element. The port on the
left receives energy from the first domain with a through-variable f1 and an across-variable
v1 . Therefore, the power flowing into the two-port element is

Pin = f1 v1 (8.1)

Similarly, the port on the right outputs energy to the second domain with a through-variable
f2 and an across-variable v2 . Therefore, the power flowing out of the two-port element is

Pout = f2 v2 (8.2)

Figure 8.4: Power flows of a two-port transducing element

To derive a mathematical model of two-port elements, we make the following two as-
sumptions. First, there is no energy loss in a two-port element, i.e.,

Pin = Pout or f1 v1 = f2 v2 (8.3)

The second assumption is that two-port transducing elements are linear and time-invariant,
i.e.,

[ v1 ; f1 ] = [ c11  c12 ; c21  c22 ] [ v2 ; f2 ]    (8.4)
where c11 , c12 , c21 , and c22 are constants.

Under these two assumptions, the constant coefficients c11 , c12 , c21 , and c22 cannot be
arbitrarily chosen. Instead, they must satisfy certain conditions. To derive these conditions,
let us first expand (8.4) as

v1 = c11 v2 + c12 f2
f1 = c21 v2 + c22 f2    (8.5)
Then substitution of (8.5) into (8.3) leads to

(c11 v2 + c12 f2 ) (c21 v2 + c22 f2 ) − f2 v2 = 0 (8.6)

or
c11 c21 v2^2 + c12 c22 f2^2 + (c12 c21 + c11 c22 − 1) f2 v2 = 0    (8.7)

In order to have non-trivial f2 and v2 , every coefficient in (8.7) must vanish, resulting in

c11 c21 = 0 (8.8)

c12 c22 = 0 (8.9)


and
c12 c21 + c11 c22 = 1 (8.10)
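The coefficient matching in (8.7)–(8.10) can be reproduced symbolically. A minimal sketch assuming SymPy is available:

```python
import sympy as sp

c11, c12, c21, c22, v2, f2 = sp.symbols('c11 c12 c21 c22 v2 f2')

# Expand the power balance (8.6) and read off the coefficients of
# v2**2, f2**2, and v2*f2, reproducing conditions (8.8)-(8.10)
expr = sp.expand((c11*v2 + c12*f2)*(c21*v2 + c22*f2) - f2*v2)
cond_v2 = expr.coeff(v2, 2)                  # c11*c21
cond_f2 = expr.coeff(f2, 2)                  # c12*c22
cond_vf = expr.coeff(v2, 1).coeff(f2, 1)     # c12*c21 + c11*c22 - 1
print(cond_v2, cond_f2, cond_vf)
```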
Simultaneous solution of (8.8), (8.9), and (8.10) leads to the following two cases.

Case 1: Gyrators. For a gyrator,

c11 = 0 (8.11)

and (8.8) is automatically satisfied. As a result, (8.10) implies that c12 ≠ 0 and c21 ≠ 0.
Moreover,

c12 = 1/c21    (8.12)
Since c12 ≠ 0, equation (8.9) implies that

c22 = 0 (8.13)

Therefore, (8.4) takes the form of

[ v1 ; f1 ] = [ 0  GY ; (GY)^−1  0 ] [ v2 ; f2 ]    (8.14)

where GY is a gyrator constant.

Case 2: Transformers. For a transformer,

c21 = 0 (8.15)

and (8.8) is automatically satisfied. As a result, (8.10) implies that c11 ≠ 0 and c22 ≠ 0.
Moreover,

c22 = 1/c11    (8.16)
Since c22 ≠ 0, equation (8.9) implies that

c12 = 0 (8.17)

Therefore, (8.4) takes the form of

[ v1 ; f1 ] = [ TF  0 ; 0  (TF)^−1 ] [ v2 ; f2 ]    (8.18)

where TF is a transformer constant.
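Both families can be checked directly against the lossless assumption (8.3). The sketch below (assuming SymPy) verifies that the gyrator matrix (8.14) and the transformer matrix (8.18) preserve power, while a perturbed matrix violating (8.10) does not:

```python
import sympy as sp

v2, f2 = sp.symbols('v2 f2')
GY, TF = sp.symbols('GY TF', nonzero=True)

def power_preserved(M):
    """True if f1*v1 == f2*v2 for [v1, f1]^T = M [v2, f2]^T, i.e. (8.3) holds."""
    v1, f1 = M * sp.Matrix([v2, f2])
    return sp.simplify(v1*f1 - v2*f2) == 0

gyrator     = sp.Matrix([[0, GY], [1/GY, 0]])    # (8.14)
transformer = sp.Matrix([[TF, 0], [0, 1/TF]])    # (8.18)
lossy       = sp.Matrix([[TF, 0], [0, 2/TF]])    # c11*c22 = 2, violating (8.10)

print(power_preserved(gyrator), power_preserved(transformer), power_preserved(lossy))
```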

Figures 8.5 to 8.16 show a library of transformers and gyrators.

Figure 8.5: A rack and pinion system as a transformer
Figure 8.6: A vertical crank and slider system as a transformer

Figure 8.7: A rotary pump as a gyrator
Figure 8.8: A voice coil as a transformer

Figure 8.9: A hydraulic cylinder as a gyrator
Figure 8.10: An electrical motor (or generator) as a transformer

Figure 8.11: A pulley set as a transformer



Figure 8.12: A gear train as a transformer

Figure 8.13: A mechanical lever as a transformer



Figure 8.14: A belt drive as a transformer

Figure 8.15: An electric transformer as a transformer



Figure 8.16: A fluid transformer as a transformer

8.3 Linear Graph Models Involving Two-Port Elements

Using two-port transducing elements, one can connect multiple domains together in a linear
graph model. To do so, let us first define notation and sign convention for two-port elements
as shown in Fig. 8.17. The notation for transformers is a rectangular box, while the notation
of gyrators takes the form of a figure 8. Both notations show two branches reflecting their
two-port nature. The left branch represents a flow into the two-port element from the first
domain, while the right branch represents a flow out of the two-port element into the second
domain. In other words, the left branch is input and the right branch is output.

Figure 8.17: Notation and sign convention of transformers and gyrators



Figure 8.18: An engine starter system whose linear graph is to be drawn

In developing linear graphs involving multiple domains with two-port elements, the basic
principle remains the same, i.e., start with a source and go with the flow. Here are some
examples below.

Example 8.1 Figure 8.18 shows an engine starter system. A voltage source Vs (t) drives
an engine starter (i.e., an electric motor) with inductance L and resistance R. The starter
then drives a gear train, which subsequently drives the engine of rotary inertia J. Rotating
elements at the starter side have damping B1 , while rotating elements at the engine side
have damping B2 . What is the linear graph model?

Figure 8.19 shows the corresponding linear graph. In the electrical domain, we begin
with a source Vs (t). The inductance L and resistance R will both cause a voltage drop;
therefore, they are in series connection with the source Vs (t). The voltage left in the electrical
circuit then drives the electrical motor (i.e., the starter) modeled as a transformer; see
Fig. 8.10. The energy in the electrical domain is now transduced to the rotational mechanical
domain.

For the rotational motion on the starter side, the electric motor serves as a source
providing prescribed angular velocity ω. As a result, a torque T flows out of the electric
motor and splits into two parts. One part of the torque is to overcome damping B1 , while
the other part of the torque is to drive the gear train. The gear train, on the one hand,
behaves like a load to the electric motor and, on the other hand, serves as a transformer (cf.
Fig. 8.12).

Figure 8.19: Linear graph of the starter system shown in Fig. 8.18

For the rotational motion of the engine side, the gear train serves as a source providing
a torque to drive the engine. Part of the torque is used to overcome the damping B2 , and
the rest of the torque is to accelerate the engine inertia J.

Example 8.2 Figure 8.20 shows an electro-hydraulic system that lifts a car in a garage for an
oil change. A voltage source Vs (t) drives an electric motor with inductance L and resistance
R. The electric motor drives a gear train, which subsequently drives the rotor of a rotary
pump that has rotary inertia J. Rotating elements at the electric motor side have damping
B1 , while rotating elements at the rotary pump side have damping B2 . The rotary pump
pressurizes fluid from a reservoir through an orifice with damping B3 into a hydraulic cylinder.
In the meantime, the rotary pump has a small leak Rl that sends a small amount of the fluid back
to the reservoir. Finally, the hydraulic cylinder lifts the car with weight mg through an
isolation system with stiffness k and damping B4 . The goal is to find a linear graph model
of this electro-hydraulic system.

Figure 8.21 shows the corresponding linear graph. In the electrical domain, we begin
with a source Vs (t). The inductance L and resistance R will both cause a voltage drop;
therefore, they are in series connection with the source Vs (t). The voltage left in the electrical

Figure 8.20: A hydraulic jack system for car repair

circuit then drives the electrical motor that is modeled as a transformer (cf. Fig. 8.10). The
energy in the electrical domain is now transduced to the rotational mechanical domain.

For the rotational motion on the electrical motor side, the electric motor serves as a
source providing prescribed angular velocity ω to the rotational motion of the motor and the
first gear. As a result, a torque T flows out of the electric motor and splits into two parts.
One part of the torque is to overcome damping B1 , while the other part of the torque is to
drive the gear train. The gear train, on the one hand, behaves like a load to the electric
motor and, on the other hand, serves as a transformer (cf. Fig. 8.12) passing energy to the
rotary pump.

For the rotational motion of the pump, the gear train serves as a source providing a
torque to drive the pump. The torque is split into three parts. The first part of the torque is
used to overcome the damping B2 , the second part of the torque is to accelerate the pump
inertia J, and the third part of the torque is used to transduce energy from the rotational
motion into a fluid domain. The transduction mechanism of a pump is a gyrator; see Fig. 8.7.
The pump behaves like a load to the gear train, but serves as a source to the fluid domain.

When the fluid flow comes out of the pump, it splits into two parts. One part of the

Figure 8.21: Linear graph of the hydraulic system shown in Fig. 8.20

flow leaks back to the ground (i.e., the reservoir), while the other part goes through the
orifice with damping B3 . After the orifice, the flow enters the hydraulic cylinder modeled as a
gyrator; see Fig. 8.9. The hydraulic cylinder behaves like a load to the pump, but serves as
a source to the translational motion. Now, the energy is transduced to a mechanical domain
with translational motion.

Finally, the piston of the hydraulic cylinder serves as a source providing a force to enable
the translational motion. The force is split into four parts. The first part is to overcome
the stiffness k, the second part is to overcome the damping force B4 , the third part is to
overcome the weight mg, and the last part is to accelerate the car with mass m.

8.4 Primary and Secondary Variables for Two-Port Elements

To incorporate two-port transducing elements into linear graph models, the next step is to
define primary and secondary variables associated with two-port elements. Before we do
so, let us review the existing rules of primary and secondary variables for linear graphs in
a single domain. First of all, primary variables are across-variables of tree branches and
through-variables of links. Secondary variables are through-variables of tree branches and
across-variables of links. Finally, each elemental equation has a primary variable and a secondary variable.

Primary and secondary variables of two-port transducing elements must be compatible with
these existing rules.

Figure 8.22: A transformer model

Figure 8.23: Trees and branches of a transformer

Figure 8.22 shows a transformer model with a transformer constant TF. The input port
has through-variable f1 and across-variable v1 , while the output port has through-variable
f2 and across-variable v2 . For a transformer,

v1 = (TF) v2 (8.19)

and

f1 = (1/TF) f2 (8.20)
In each of (8.19) and (8.20) only one variable is primary and the other must be secondary.
As a result, there are only two possible scenarios.

Scenario 1: Input port is in a tree branch. In this case, v1 is a primary variable
because v1 is the across-variable of the input port (which is a tree branch). If v1 is a primary
variable, then v2 must be a secondary variable as a result of (8.19). Therefore, f2 of the
output port must be a primary variable, because f2 and v2 cannot serve as primary variables
at the same time. Since f2 is a primary variable and v2 is a secondary variable, the output
port must be a link based on the existing rule for single-domain systems.

Scenario 2: Input port is in a link. In this case, f1 is a primary variable because
f1 is the through-variable of the input port (which is a link). If f1 is a primary variable,

then f2 must be a secondary variable as a result of (8.20). Therefore, v2 of the output port
must be a primary variable, because f2 and v2 cannot serve as primary variables at the same
time. Since v2 is a primary variable and f2 is a secondary variable, the output port must be
a tree branch based on the existing rule for single-domain systems.

Based on the two scenarios above, one concludes that a transformer has exactly
one tree branch and one link; see Fig. 8.23. The tree branch can be at the input port
or at the output port.

Figure 8.24: A gyrator model

Figure 8.25: Trees and branches of a gyrator

Figure 8.24 shows a gyrator model with a gyrator constant GY. The input port has
through-variable f1 and across-variable v1 , while the output port has through-variable f2
and across-variable v2 . For a gyrator,

v1 = (GY) f2 (8.21)

and

f1 = (1/GY) v2 (8.22)
In each of (8.21) and (8.22) only one variable is primary and the other must be secondary.
As a result, there are only two possible scenarios.

Scenario 1: Input port is in a tree branch. In this case, v1 is a primary variable
because v1 is the across-variable of the input port (which is a tree branch). If v1 is a primary
variable, then f2 must be a secondary variable as a result of (8.21). Therefore, v2 of the
output port must be a primary variable, because f2 and v2 cannot serve as primary variables

at the same time. Since v2 is a primary variable and f2 is a secondary variable, the output
port must be a tree branch based on the existing rule for single-domain systems.

Scenario 2: Input port is in a link. In this case, f1 is a primary variable because
f1 is the through-variable of the input port (which is a link). If f1 is a primary variable,
then v2 must be a secondary variable as a result of (8.22). Therefore, f2 of the output port
must be a primary variable, because f2 and v2 cannot serve as primary variables at the same
time. Since f2 is a primary variable and v2 is a secondary variable, the output port must be
a link based on the existing rule for single-domain systems.

Based on the two scenarios above, one concludes that a gyrator must have two tree
branches or two links simultaneously; see Fig. 8.25.
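A useful sanity check on (8.19)-(8.22): with these sign conventions, an ideal transformer and an ideal gyrator both transmit power unchanged from one port to the other, so v1 f1 = v2 f2 in each case. A quick numerical verification (the constants and port values below are arbitrary, chosen only for illustration):

```python
# Numerical sanity check (values arbitrary): ideal two-port elements
# transmit power unchanged, so v1*f1 equals v2*f2 at every instant.

TF, GY = 3.0, 5.0            # arbitrary transformer/gyrator constants
v2, f2 = 2.0, 4.0            # arbitrary output-port variables

# Transformer, per (8.19) and (8.20): v1 = TF*v2, f1 = f2/TF
v1, f1 = TF * v2, f2 / TF
assert abs(v1 * f1 - v2 * f2) < 1e-12

# Gyrator, per (8.21) and (8.22): v1 = GY*f2, f1 = v2/GY
v1, f1 = GY * f2, v2 / GY
assert abs(v1 * f1 - v2 * f2) < 1e-12
```

This is why each two-port element contributes exactly one primary and one secondary variable per port: the element itself stores and dissipates no energy.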

8.5 Normal Trees of Multi-Domain Systems

With tree branches and links defined for transformers and gyrators, we can now develop a
procedure to draw normal trees for multi-domain systems. To achieve this end, let us use
the starter-engine system shown in Example 8.1 to illustrate. The starter-engine system is
reproduced in Fig. 8.26 along with its linear graph in Fig. 8.27 for reference. Note that the
linear graph in Fig. 8.27 now has labels for nodes A, B, C, D, and E. (Note that ground
nodes in each domain are not labeled.) Also, the electric motor (a transformer) has input
and output ports labeled as 1 and 2. The gear train (also a transformer) has input and
output ports labeled as 3 and 4.

Figure 8.26: An engine starter system



Figure 8.27: Linear graph of the engine starter system in Fig. 8.26

Here are the steps to construct a normal tree.

Step 0. Replace A-type elements in series and T-type elements in parallel by
equivalent elements. This is a preliminary step to clean up the linear graph. The
purpose is to avoid a singular state matrix in the end. Since the linear graph in Fig. 8.27
does not contain any such combinations, there is no need to perform this step.

Figure 8.28: Step 1 of constructing a normal tree

Step 1. Draw all the nodes. This step is self-explanatory, because the definition of
a tree requires all the nodes. Figure 8.28 shows the normal tree in construction after this
step is done.

Explanation. There are five nodes in the linear graph, i.e., nodes A, B, C, D, and E.
Do not forget the three ground nodes. Also note that the two transformers are set up and
are ready to receive tree branches and links.

Why? The purpose is to make sure that the tree will include all the nodes in the original
linear graph.

Figure 8.29: Step 2 of constructing a normal tree

Step 2. Include all across-variable sources as tree branches. Figure 8.29 shows
the normal tree in construction after this step is done.

Explanation. There is only one across-variable source in the linear graph in Fig. 8.27,
i.e., the voltage source Vs (t). So Vs (t) is in the normal tree now.

Why? By doing so, the voltage source will be forced to appear in a compatibility
equation in the form of a primary variable. After the normal tree is constructed, if a link is put
back to the tree, it will form a loop to generate a compatibility equation. If the voltage source is in
the tree to start with, the voltage Vs (t) will appear in the compatibility equation. Moreover,
the across-variable of a tree branch is a primary variable. So Vs (t) will appear as a primary
variable and will not be eliminated.

Step 3. Include as many A-type elements as possible as tree branches so
that (a) all transformers have only one branch, and (b) all gyrators have two
branches or none. Figure 8.30 shows the normal tree in construction after this step is
done.

Figure 8.30: Step 3 of constructing a normal tree

Explanation. A-type elements are capacitors. There is only one capacitor, J (from the
engine’s rotary inertia), in the linear graph in Fig. 8.27. Addition of A-type elements together
with the conditions on transformers and gyrators often results in a chain reaction. For
example, output port 4 must be a link in order not to form a closed loop. Since the gear
train is a transformer, input port 3 must be a tree branch. To avoid a closed loop, output
port 2 of the electric motor must be a link. Since the electric motor is a transformer, input
port 1 must be a tree branch.

Why? This step is to ensure that an across-variable preceded with a differential operator
d/dt will serve as a primary variable as much as possible. Recall that a capacitor (i.e., an A-type
element) satisfies the elemental equation

i = C dv/dt (8.23)

If the capacitor is in the tree, the across-variable of the capacitor, which bears a differential
operator d/dt, will be a primary variable and will not be eliminated later in the derivation.
Also, the requirements of the transformers and gyrators will ensure that each port has only
one primary variable and one secondary variable.

Step 3(a). What happens when A-type elements cannot be included as tree
branches? For Step 3, the ideal case is to have all A-type elements included in the tree.
For some systems, it is not possible to include all A-type elements in the tree. An A-type
element that cannot be included in the tree is called a dependent energy storage element.

Explanation. For the current example, there is only one capacitor, J, and it is included.
Therefore, this situation does not arise. For every problem involving multiple domains, one
needs to fill in branches for transformers and gyrators as shown in Step 3. In doing that
procedure, one needs to make sure that as many T-type elements as possible are kept out of
the tree and as many A-type elements as possible are kept in the tree.

Why? When an A-type element is not included in the tree, it basically means that trap
2 in Section 7.3 is present.

Figure 8.31: Step 4 of constructing a normal tree

Step 4. Include as many D-type elements as possible to complete the tree.
Figure 8.31 shows the normal tree in construction after this step is done.

Explanation. D-type elements are resistors. According to the linear graph in Fig. 8.27,
the resistor R should be included to connect nodes B and C. The tree has already been
completed because all nodes are included.

Why? This step is to force all T-type elements (i.e., inductors) to become links. If
the remaining tree branches are filled by D-type elements, then all T-type elements will be
forced to be links. Recall that a T-type element satisfies the elemental equation

v = L di/dt (8.24)

If the T-type element is in the links, its through-variable, which bears a differential operator
d/dt, will be a primary variable and will not be eliminated later in the derivation. Therefore,
this step is to ensure that a through-variable preceded with a differential operator d/dt will
serve as a primary variable as much as possible.

Step 5. Check if all nodes are included in the tree. Ideally, the tree should be
complete by now. If not, one can include T-type elements to complete the tree. In this case,
a T-type element that must appear to complete the tree is also a dependent energy storage
element.

Explanation. This situation does not arise in this example. There is no need to include
any T-type element in the tree.

Why? When a T -type element (e.g., an inductor) is included in the tree, it basically
means that trap 2 in Section 7.3 is present.

Step 6. Final Check. Check the tree one more time. The tree should be complete now
and all nodes should be included. There should be no need to include any through-variable
sources in the tree.
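The step ordering above can be viewed as a greedy spanning-tree construction: edges are considered in the priority order across-variable sources, then A-type, then D-type, then T-type elements, and an edge is skipped whenever it would close a loop. The sketch below is illustrative only (the function name and the single-domain example graph are ours, and the transformer/gyrator port constraints of Step 3 are not modeled):

```python
# Illustrative sketch (not from the text): Steps 1-5 as a greedy
# spanning-tree construction.  Edges are tried in the priority order
# across-variable sources -> A-type -> D-type -> T-type, and an edge is
# skipped whenever it would close a loop (detected with union-find).
# The transformer/gyrator constraints of Step 3 are NOT modeled here.

def build_normal_tree(nodes, edges):
    """edges: list of (name, etype, node_a, node_b) with etype in
    {'source', 'A', 'D', 'T'}; returns the names of the tree branches."""
    parent = {n: n for n in nodes}

    def find(n):                       # union-find with path compression
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    priority = {"source": 0, "A": 1, "D": 2, "T": 3}
    tree = []
    for name, etype, a, b in sorted(edges, key=lambda e: priority[e[1]]):
        ra, rb = find(a), find(b)
        if ra != rb:                   # adding this edge closes no loop
            parent[ra] = rb
            tree.append(name)
    return tree

# Hypothetical single-domain example: source Vs, capacitor C1,
# resistor R1, and inductor L1 on nodes {g, A, B}.
edges = [("Vs", "source", "g", "A"),
         ("R1", "D", "A", "B"),
         ("C1", "A", "B", "g"),
         ("L1", "T", "A", "B")]
tree = build_normal_tree(["g", "A", "B"], edges)
# tree == ["Vs", "C1"]: the source and the capacitor already span all
# three nodes, so the resistor and the inductor become links.
```

In this toy example the capacitor ends up in the tree and the inductor in the links, matching the intent of Steps 3 and 4.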

Example 8.3 In this example, let us revisit the hydraulic jack system shown in Fig. 8.20 and
in Example 8.2. The hydraulic jack system and its linear graph are reproduced in Fig. 8.32
and Fig. 8.33 for reference.

First of all, let us review the linear graph in Fig. 8.33. It has five domains, two trans-
formers, and two gyrators. In addition to the five ground nodes, there are eight additional
nodes, i.e., nodes A through H. For the electric motor (a transformer), its two ports are
denoted as 1 and 2. For the gear train (a transformer), its two ports are denoted as 3 and
4. For the rotary pump (a gyrator), its two ports are denoted as 5 and 6. For the hydraulic
cylinder (a gyrator), its two ports are denoted as 7 and 8.

Let us use the steps developed above to construct a normal tree shown in Fig. 8.34.

Step 0. Replace A-type elements in series and T-type elements in parallel by
equivalent elements. Since the linear graph in Fig. 8.33 does not contain any such
combinations, there is no need to perform this step.

Figure 8.32: A hydraulic jack system for car repair

Figure 8.33: Linear graph of the hydraulic jack in Fig. 8.32



Step 1. Draw all the nodes. In Figure 8.34, this step is shown in black. Note that
there are five ground nodes and eight additional nodes, i.e., nodes A through H. Moreover,
two transformers and two gyrators are set up for their ports to receive tree branches and
links.

Step 2. Include all across-variable sources as tree branches. In Figure 8.34,
this step is also shown in black. There is only one across-variable source in the linear
graph in Fig. 8.33, i.e., the voltage source Vs (t). So Vs (t) is in the normal tree now.

Figure 8.34: The corresponding normal tree of Fig. 8.33

Step 3. Include as many A-type elements as possible as tree branches so that
(a) all transformers have only one branch, and (b) all gyrators have two branches
or none. In Figure 8.34, this step is shown in red. There are two capacitors: J2 (from the rotary
pump) and m (from the car). The presence of m causes port 8 to be a link (in order not
to form a loop), which, in turn, causes port 7 to be a link because the hydraulic cylinder
is a gyrator. The presence of J2 causes port 5 to be a link (in order not to form a loop),
which, in turn, causes port 6 to be a link because the rotary pump is a gyrator. Moreover,
the presence of J2 causes port 4 to be a link (in order not to form a loop), which, in turn,
causes port 3 to be a tree branch because the gear train is a transformer. Furthermore, port
2 must be a link (in order not to form a loop), which, in turn, causes port 1 to be a tree
branch because the electric motor is a transformer.

Step 3(a). What happens when A-type elements cannot be included as
tree branches? For the current example, all capacitors have been included; therefore, this
situation does not arise.

Step 4. Include as many D-type elements as possible to complete the tree. In
Figure 8.34, this step is shown in blue, solid lines. D-type elements are resistors. Therefore,
the resistor R is added to connect nodes B and C. The resistor B3 is added to connect nodes
F and G. The resistor Rl is added to connect node F to the ground node.

Step 5. Check if all nodes are included in the tree. The tree is complete
because all nodes are included. There is no need to include T-type elements to complete the
tree. There is no dependent energy storage element for this system. To make them easy to see,
all the links not appearing in the normal tree are shown in blue, dashed lines for reference.

Step 6. Final Check. Check the tree one more time. The tree is complete and in
good order. All nodes are included, and the through-variable source mg is not in the tree.

8.6 State Equations for Multi-Domain Systems

Derivation of state equations for multi-domain systems basically follows the same procedure
as for single-domain systems, described in Section 7.8. The procedure is listed here again for
reference.

Step 1. Draw the linear graph and develop the corresponding normal tree.

Step 2. Write down the primary variables and secondary variables from the normal tree.
Identify state variables from the primary variables.

Step 3. Write down elemental equations of all one-port elements; primary variables on the
left side and secondary variables on the right side of the equations.

Step 4. Use compatibility and continuity equations to eliminate secondary variables. If
dependent energy storage elements are present, their primary variables need to be
eliminated as well.

Step 5. Eliminate primary variables that are not state variables by starting with d/dt equations.

Figure 8.35: An electric motor driving a hard disk drive

Here are some examples to demonstrate how state equations are derived for multi-
domain systems.

Example 8.4 Figure 8.35 shows an electric motor driving a hard disk drive system. The
motor coil has inductance L and resistance R, and is driven via a prescribed voltage Vs (t).
Moreover, the motor drives a hard disk drive system, which is modeled as a rigid rotor with
a polar moment of inertia J. Also, the bearings present a small amount of damping B and
the motor has a motor constant ka . What is the state equation governing the dynamics of
this electro-mechanical system?

Figure 8.36 shows the linear graph of the system. The linear graph shows an electric
domain and a rotational mechanical domain, separated by a transformer representing the
electric motor. The electric domain has a source Vs (t) in series connection with the induc-
tance L and resistance R. This arises because the inductance L and resistance R cause
voltage drops from the source Vs (t). The mechanical domain shows that the transformer,
the damping B, and the rotary inertia J are in a parallel connection. This arises because
the torque from the transformer is split in two parts: one to overcome the damping B and
the other to accelerate the rotary inertia J.

Figure 8.37 shows the corresponding normal tree. First of all, there is a ground node

Figure 8.36: Linear graph of the hard disk drive in Fig. 8.35

Figure 8.37: Corresponding normal tree of Fig. 8.36

for each domain. There are additional nodes A, B, and C for the electric domain as well
as node D for the mechanical domain. The input and output ports of the transformer are
labeled as 1 and 2. As indicated in Step 2, the voltage source Vs (t) is in the normal tree.
Next, the inertia J (i.e., a capacitor) is included in the tree. The presence of J causes port
2 to become a link because a closed loop cannot be formed. Consequently, port 1 must be a
tree branch, because the two ports of a transformer must have exactly one tree branch and
one link. Finally, the resistor R must be included in the normal tree to connect nodes A and
B. Now all the nodes are included and the normal tree is in good order.

Based on the normal tree, primary variables (i.e., across-variables of the branches and

through-variables of the links) are

Primary Variables : Vs (t), vR , v1 , ωJ , iL , T2 , TB (8.25)

where the known voltage source Vs (t) is a primary variable. Similarly, secondary variables
(i.e., through-variables of the branches and across-variables of the links) are

Secondary Variables : Is (t), iR , i1 , TJ , vL , ω2 , ωB (8.26)

Note that Is (t) is an unknown current needed to maintain the prescribed voltage Vs (t).
Among the list of primary variables (cf. (8.25)), ωJ and iL are from either a capacitor
or an inductor, whose elemental equations will carry a differential operator d/dt preceding the
primary variables. Therefore, the state variables are

State Variables : ωJ , iL (8.27)

Based on the linear graph in Fig. 8.36, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).

vR = R iR
v1 = (1/ka) ω2
d iL/dt = (1/L) vL
T2 = (1/ka) i1          (8.28)
TB = B ωB
d ωJ/dt = (1/J) TJ

Figure 8.38 shows the loops and closed contours to generate needed compatibility and
continuity equations. When the inductance L is brought back to the tree, we obtain the
compatibility equation

vL = Vs (t) − vR − v1 (8.29)

Figure 8.38: Loop and node equations from the normal tree in Fig. 8.37

When output port 2 is brought back to the tree, we recover

ω2 = ωJ (8.30)

When the damping B is brought back to the tree, we recover

ωB = ωJ (8.31)

A closed contour around node B cuts only branch R resulting in the following continuity
equation
iR = iL (8.32)

A closed contour around node C cuts only the branch of input port 1, resulting in

i1 = iL (8.33)

Finally, a closed contour around node D cuts only branch J resulting in

TJ = T2 − TB (8.34)

Substitution of (8.29) - (8.34) back into (8.28) to eliminate the secondary variables leads

to the following equations.

vR = R iR = R iL                              Node B
v1 = (1/ka) ω2 = (1/ka) ωJ                    Loop 2J
d iL/dt = (1/L) vL = (1/L) [Vs (t) − vR − v1 ]    Loop Vs RL1
T2 = (1/ka) i1 = (1/ka) iL                    Node C          (8.35)
TB = B ωB = B ωJ                              Loop BJ
d ωJ/dt = (1/J) TJ = (1/J) (T2 − TB )         Node D

To obtain the state equation, we first substitute vR and v1 from the first and second
equations of (8.35) into the third equation of (8.35)

d iL/dt = (1/L) [Vs (t) − R iL − (1/ka) ωJ ] (8.36)

We then substitute T2 and TB from the fourth and fifth equations of (8.35) into the last
equation of (8.35)

d ωJ/dt = (1/J) [(1/ka) iL − B ωJ ] (8.37)

Finally, (8.36) and (8.37) can be rewritten as a matrix equation

d/dt [iL ; ωJ ] = [ −R/L , −1/(L ka) ; 1/(J ka) , −B/J ] [iL ; ωJ ] + [ 1/L ; 0 ] Vs (t) (8.38)

where commas separate the entries within a row and semicolons separate the rows.
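As a quick numerical check of (8.38), one can integrate the state equation with forward Euler and verify that the response settles to the steady state obtained by setting both derivatives to zero. The parameter values below are arbitrary, chosen only for illustration; this is a sketch, not a tuned simulation:

```python
# Forward-Euler integration of (8.38); parameter values are arbitrary
# and purely illustrative.
R, L, ka, J, B = 1.0, 0.5, 2.0, 0.1, 0.05
Vs = 1.0                              # constant (step) input Vs(t) = 1

iL, wJ, dt = 0.0, 0.0, 1e-4           # start from rest
for _ in range(200000):               # integrate to t = 20 s
    diL = (Vs - R * iL - wJ / ka) / L      # right-hand side of (8.36)
    dwJ = (iL / ka - B * wJ) / J           # right-hand side of (8.37)
    iL += dt * diL
    wJ += dt * dwJ

# Setting both derivatives in (8.38) to zero gives the steady state
# wJ_ss = ka*Vs/(R*ka**2*B + 1) and iL_ss = ka*B*wJ_ss.
wJ_ss = ka * Vs / (R * ka**2 * B + 1)
assert abs(wJ - wJ_ss) < 1e-3
assert abs(iL - ka * B * wJ_ss) < 1e-3
```

Agreement between the simulated trajectory and the hand-computed steady state is a cheap way to catch sign errors in a derived state matrix.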

Example 8.5 Figure 8.39 shows a shaft-pump system. A prescribed torque Ts (t) drives a
shaft of inertia J with damping B from the shaft's bearing. The shaft then drives a pump to

Figure 8.39: A shaft-pump system subjected to a prescribed torque

pump water out of a reservoir and experiences fluid resistance R. Let us use the linear graph
approach to find the state equation.

Figure 8.40 shows the linear graph of the system. The linear graph shows a rotational
mechanical domain and a fluid domain, separated by a gyrator representing the pump. The
rotational mechanical domain has a through-variable source Ts (t) in parallel connection with
the mass moment of inertia J, the damping B, and the pump (branch 1). This arises because the
torque from the prescribed source Ts (t) is split in three parts: one to overcome the damping
B, one to accelerate the rotary inertia J, and the rest to drive the pump. The fluid domain
shows that the gyrator (branch 2) and the fluid damping R are in a parallel connection.
Basically, the fluid coming out of the pump flows through the resistor R. Therefore, the
gyrator only drives the fluid resistance R.

Figure 8.41 shows the corresponding normal tree. First of all, there is a ground node
for each domain. There is a node A for the rotational mechanical domain as well as a node
B for the fluid domain. The input and output ports of the gyrator are labeled as 1 and 2. As
indicated in Step 2, only across-variable sources can appear in the normal tree. Therefore,
Ts (t) does not appear in the normal tree. Next, the inertia J (i.e., a capacitor) is included in
the tree. The presence of J causes port 1 to become a link because a closed loop cannot be
formed. Consequently, port 2 must also be a link, because the two ports of a gyrator must
have either two tree branches or two links. Finally, the resistor R must be included in the
normal tree to connect node B and the ground node of the fluid domain. Now all the nodes

Figure 8.40: Linear graph of the shaft-pump system in Fig. 8.39

Figure 8.41: Normal tree corresponding to the linear graph in Fig. 8.40

are included and the normal tree is in good order.

Based on the normal tree, primary variables (i.e., across-variables of the branches and
through-variables of the links) are

Primary Variables : Ts (t), ωJ , pR , TB , T1 , q2 (8.39)

where the known torque Ts (t) is a primary variable. Similarly, secondary variables (i.e.,
through-variables of the branches and across-variables of the links) are

Secondary Variables : Ωs (t), TJ , qR , ωB , ω1 , p2 (8.40)

Note that Ωs (t) is an unknown angular velocity of the pump when the prescribed torque Ts (t)
is delivered to the pump. Among the list of primary variables (cf. (8.39)), only ωJ is from
either a capacitor or an inductor, whose elemental equation will carry a differential operator
d/dt preceding the primary variable. Therefore, the state variable is

State Variables : ωJ (8.41)

Based on the linear graph in Fig. 8.40, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).

d ωJ/dt = (1/J) TJ
pR = R qR
TB = B ωB          (8.42)
T1 = D p2
q2 = D ω1
where D is a pump constant.

Now, let us revisit Fig. 8.40 to generate needed compatibility and continuity equations.
When the source Ts (t) is brought back to the tree, we obtain the compatibility equation

Ωs (t) = ωJ (8.43)

When the damping B is brought back to the tree, we recover

ωB = ωJ (8.44)

When input port 1 of the gyrator is brought back to the tree, we recover

ω1 = ωJ (8.45)

When output port 2 of the gyrator is brought back to the tree, we recover

p2 = pR (8.46)

A closed contour around node A cuts only branch J resulting in the following continuity
equation
TJ = Ts (t) − TB − T1 (8.47)
A closed contour around node B cuts only branch R resulting in

q2 = qR (8.48)

Substitution of (8.43) - (8.48) back into (8.42) to eliminate the secondary variables leads
to the following equations.

d ωJ/dt = (1/J) TJ = (1/J) [Ts (t) − TB − T1 ]    Node A
pR = R qR = R q2            Node B
TB = B ωB = B ωJ            Loop BJ        (8.49)
T1 = D p2 = D pR            Loop 2R
q2 = D ω1 = D ωJ            Loop 1J

To obtain the state equation, we first substitute TB and T1 from the third and fourth
equations of (8.49) into the first equation of (8.49)

d ωJ/dt = (1/J) (Ts − B ωJ − D pR ) (8.50)

Substitution of the second equation of (8.49) into (8.50) to eliminate pR results in

d ωJ/dt = (1/J) (Ts − B ωJ − D R q2 ) (8.51)

Since q2 is not a state variable, it needs to be eliminated via the fifth equation of (8.49), giving

d ωJ/dt = −[(B + D²R)/J] ωJ + (1/J) Ts (8.52)
Equation (8.52) is the state equation of the shaft-pump system.
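Equation (8.52) describes a first-order system with time constant J/(B + D²R) and steady-state speed Ts/(B + D²R) under a constant torque. A small numerical check, with arbitrary illustrative parameter values, confirms that Euler integration of (8.52) matches the closed-form first-order step response:

```python
import math

# Step response of (8.52) with arbitrary illustrative parameter values.
# The system is first order: time constant tau = J/(B + D**2*R) and
# steady-state speed w_ss = Ts/(B + D**2*R) for a constant torque Ts.
J, B, D, R, Ts = 0.2, 0.1, 0.05, 40.0, 1.0
tau = J / (B + D**2 * R)              # = 1.0 s with these values
w_ss = Ts / (B + D**2 * R)            # = 5.0 with these values

w, dt = 0.0, 1e-4                     # forward Euler from rest
for _ in range(50000):                # integrate to t = 5 s
    w += dt * (-(B + D**2 * R) / J * w + Ts / J)

# Closed-form first-order step response at t = 5 s
w_exact = w_ss * (1.0 - math.exp(-5.0 / tau))
assert abs(w - w_exact) < 1e-3 * w_ss
```

Note how the pump loading D²R adds to the mechanical damping B: a larger pump constant or fluid resistance makes the shaft both slower in steady state and faster to settle.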

Example 8.6 This example is a long and complicated one, because it has a dependent
energy storage element. This example is an extension of Example 8.5. In Fig. 8.42, a
prescribed torque Ts drives a pump shaft that has mass moment of inertia J and damping
B1 . The pump, with a pump constant D, pumps fluid into a hydraulic cylinder through an
orifice. The orifice has fluid damping coefficient B2 and the cylinder has an area A. A piston
in the cylinder drives a translational spring-mass-damper system, with mass m, stiffness k,

Figure 8.42: A hydro-mechanical system with a dependent energy storage element

and damping coefficient B3 . Our goal is to derive the state equation governing the response
of this system.

Figure 8.43 shows the linear graph of the hydro-mechanical system. The linear graph
shows a rotational mechanical domain, a fluid domain, and a translational mechanical domain.
The rotational mechanical domain and the fluid domain are separated via a gyrator
representing the pump (i.e., branches 1 and 2). The fluid domain and the translational
mechanical domain are separated via a gyrator representing the hydraulic cylinder (i.e.,
branches 3 and 4).

The rotational mechanical domain has a through-variable source Ts (t) in parallel con-
nection with the mass moment of inertia J, resistance B1 , and the pump (branch 1) as a
load. This arises because the torque from the prescribed source Ts (t) is split in three parts:
one to overcome the damping B1 , one to accelerate the rotary inertia J, and the rest to
drive the pump as a load. The fluid domain shows that the gyrator (branch 2) from the
pump serves as a source driving the fluid damping B2 and the hydraulic cylinder (branch
3). The fluid damping B2 and the hydraulic cylinder (branch 3) are in series connection,
because the fluid going through the orifice (i.e., damping B2 ) is the same as the fluid entering the cylinder.
The translational mechanical domain has the cylinder (branch 4) as a source in parallel with
the mass m, the spring k, and damper B3 . This arises because the force from the hydraulic
cylinder (branch 4) is split in three parts: one to overcome the damping B3 , one to overcome
the spring force k, and the rest to accelerate the mass m.

Figure 8.43: Linear graph of the hydro-mechanical system in Fig. 8.42

Figure 8.44 shows the corresponding normal tree. First of all, there is a ground node
for each domain. There is a node A for the rotational mechanical domain, two nodes B and
C in the fluid domain, and a node D for the translational mechanical domain. The input
and output ports of the pump are labeled as 1 and 2. The input and output ports of the
hydraulic cylinder are labeled as 3 and 4.

The first step is to include across-variable sources in the normal tree. Since there is
none, this step is bypassed. Next, we need to include generalized capacitors in the tree. By
including the mass m in the tree, we find that spring k, damper B3 , and output port 4 will
all be links. Moreover, input port 3 will also be a link because ports 3 and 4 form a gyrator.

In the meantime, if we included the mass moment of inertia J in the tree, it would
make ports 1 and 2 into links. This is not allowed, because then ports 2 and 3 would be
links and nodes B and C could not connect to the ground node. Therefore, we conclude that
J cannot be included in the tree. As a result, the mass moment of inertia J is a dependent
energy storage element.

To complete the normal tree, we include the orifice B2 to connect nodes B and C.
Furthermore, we include output port 2 in the tree. As a result, the input port 1 is a tree
branch, and the source Ts (t), damping B1 , and mass moment of inertia J are links.

Based on the normal tree, primary variables (i.e., across-variables of the branches and

Figure 8.44: Normal tree of the hydro-mechanical system from Fig. 8.43

through-variables of the links) are

Primary Variables : ω1 , p2 , pB2 , vm , Ts (t), TB1 , TJ , Q3 , f4 , fB3 , fk (8.53)

where the known torque Ts (t) is a primary variable. Similarly, secondary variables (i.e.,
through-variables of the branches and across-variables of the links) are

Secondary Variables : T1 , Q2 , QB2 , fm , Ωs (t), ωB1 , ωJ , p3 , v4 , vB3 , vk (8.54)

Note that Ωs (t) is an unknown angular velocity of the pump when the prescribed torque
Ts (t) is delivered to the pump. Among the list of primary variables (cf. (8.53)), vm and fk
are from energy storage elements whose elemental equations carry a differential operator
d/dt preceding the primary variables. Therefore, the state variables are

State Variables : vm , fk (8.55)

Note that TJ is not a state variable. Although TJ results from an energy storage element
(the inertia J), there is no differential operator d/dt preceding TJ in its elemental equation.
dt
Based on the linear graph in Fig. 8.43, we write down the following elemental equations
(with primary variables on the left and secondary variables on the right).

TB1 = B1 ωB1
TJ = J dωJ/dt
ω1 = (1/D) Q2
p2 = (1/D) T1
pB2 = B2 QB2            (8.56)
Q3 = A v4
f4 = A p3
fB3 = B3 vB3
d fk/dt = k vk
d vm/dt = (1/m) fm
where D is a pump constant.

Now, let us use Fig. 8.45 to generate needed compatibility and continuity equations.
When the source Ts (t) is brought back to the tree, we obtain the compatibility equation

Ωs (t) = ω1 (8.57)

When the damping B1 is brought back to the tree, we recover

ωB1 = ω1 (8.58)

When the mass moment of inertia J is brought back to the tree, we recover

ωJ = ω1 (8.59)

Figure 8.45: Loop and node equations from the normal tree Fig. 8.44

When port 3 of the hydraulic cylinder is brought back to the tree, we recover

p3 = p2 − pB2 (8.60)

When port 4 of the hydraulic cylinder is brought back to the tree, we recover

v4 = vm (8.61)

When damping B3 is brought back to the tree, we recover

vB3 = vm (8.62)

When the stiffness k is brought back to the tree, we recover

vk = vm (8.63)

For the node equations, a closed contour around node A cuts only tree branch 1 (the
input port of the pump), resulting in the following continuity equation

T1 = Ts (t) − TB1 − TJ (8.64)

A closed contour around node B will not work, because the contour will cut two tree branches.
Therefore, we need a closed contour around nodes B and C to obtain

Q3 = Q2 (8.65)

A closed contour around node C cuts only branch B2 resulting in

Q3 = QB2 (8.66)

Finally, a closed contour around node D leads to

fm = f4 − fB3 − fk (8.67)

Substitution of (8.57) - (8.67) back into (8.56) to eliminate the secondary variables leads
to the following equations.

TB1 = B1 ωB1 = B1 ω1                                  Loop B1 , 1
TJ = J dωJ /dt = J dω1 /dt                            Loop J, 1
ω1 = (1/D) Q2 = (1/D) Q3                              Nodes B, C
p2 = (1/D) T1 = (1/D) [Ts (t) − TB1 − TJ ]            Node A
pB2 = B2 QB2 = B2 Q3                                  Node C
Q3 = A v4 = A vm                                      Loop m, 4        (8.68)
f4 = A p3 = A (p2 − pB2 )                             Loop 2, B2 , 3
fB3 = B3 vB3 = B3 vm                                  Loop B3 , m
dfk /dt = k vk = k vm                                 Loop k, m
dvm /dt = (1/m) fm = (1/m) (f4 − fB3 − fk )           Node D

To derive the state equation, let us start from the second-to-last equation of (8.68),
which states

dfk /dt = k vm (8.69)
8.6. STATE EQUATIONS FOR MULTI-DOMAIN SYSTEMS 303

This equation is already in good order, because the term on the right side is a state variable.
From the last equation of (8.68), we need to eliminate all variables that do not have a
derivative in (8.68) as follows.

dvm /dt = (1/m) [A (p2 − pB2 ) − B3 vm − fk ] (8.70)

where f4 and fB3 are eliminated via the seventh and eighth equations of (8.68). Since vm
and fk are state variables, we only need to eliminate p2 and pB2 via the fourth and fifth
equations of (8.68) to obtain

dvm /dt = (1/m) [(A/D) (Ts − TB1 − TJ ) − A B2 Q3 − B3 vm − fk ] (8.71)

In (8.71), TB1 , TJ , and Q3 are not state variables; they need to be eliminated via the first,
the second, and the sixth equations of (8.68), resulting in

dvm /dt = (1/m) [(A/D) (Ts − B1 ω1 − J dω1 /dt) − (A² B2 + B3 ) vm − fk ] (8.72)

Since ω1 is not a state variable, it needs to be eliminated. Based on the third and the sixth
equations of (8.68),

ω1 = (A/D) vm (8.73)

Substitution of (8.73) into (8.72) results in

dvm /dt = (1/m) [(A/D) Ts − (A² B1 /D²) vm − (A² J/D²) dvm /dt − (A² B2 + B3 ) vm − fk ] (8.74)

Note that both sides of (8.74) contain dvm /dt; therefore, they need to be combined to yield

dvm /dt = [1/(m + A² J/D²)] [(A/D) Ts − (A² B2 + B3 + A² B1 /D²) vm − fk ] (8.75)

Finally, (8.69) and (8.75) can be rewritten in a matrix equation

d/dt [ fk ]   [   0         k    ] [ fk ]   [    0     ]
     [ vm ] = [ −1/Z1    −Z2/Z1 ] [ vm ] + [ A/(D Z1) ] Ts (t)      (8.76)

where Z1 and Z2 are defined as follows:

Z1 ≡ m + A² J/D² (8.77)

and

Z2 ≡ A² B2 + B3 + A² B1 /D² (8.78)
Chapter 9

Nonlinear Systems

Many systems are intrinsically nonlinear. Typical examples include a pendulum with large
amplitude, central-force motion (e.g., the earth rotating around the sun), and aerodynamic
drag. For nonlinear systems, the response is not governed by a linear ordinary differential
equation of the form shown in (1.12). As a result, the response cannot be expressed as the sum of a
homogeneous solution yh (t) and a particular solution yp (t) as shown in (1.14).

For nonlinear systems, the response can usually be predicted in three different
ways: exact solutions, numerical solutions, and linearization. Exact solutions are
available only for simple problems. Nevertheless, they are extremely valuable because they provide
a benchmark to validate numerical solutions or approximate solutions (e.g., linearization).
Numerical solutions, such as direct integration, are generally good for low-order systems.
For high-order systems, direct integration can prove to be very computationally expensive. In
that case, approximate solutions such as linearization become very useful.

In this chapter, I will first briefly go over an example whose exact solution is available.
Then I will devote the rest of the chapter to linearization processes.


Figure 9.1: A hydraulic system with a linear tank and a nonlinear valve

9.1 Exact Solutions

Figure 9.1 shows a hydraulic system consisting of a tank, a valve, and a through-variable source
with a prescribed flow rate Qs (t), along with its linear graph. The tank is linear, satisfying

qc = C dpc /dt (9.1)

where C is the capacitance of the tank, pc is the gage pressure at the bottom of the tank,
and qc is the flow rate into the tank. Recalling that

pc = ρgh (9.2)

where ρ is the fluid density, g is the gravitational acceleration, and h is the fluid height, I can
rewrite (9.1) as

dh/dt = (1/A) qc (9.3)
where A is the base area of the tank and C = A/ρg has been used. In contrast, the valve
has a quadratic nonlinearity satisfying

pR = RqR2 (9.4)

where R is the resistance of the valve, pR is the pressure drop across the valve, and qR is the
flow rate through the valve. Basically, (9.3) and (9.4) serve as elemental equations of the
system.

According to the linear graph in Fig. 9.1, the compatibility equation requires that

pc = pR (9.5)

Moreover, the continuity equation requires that

Qs (t) = qR + qc (9.6)

To obtain the equation governing the response of this nonlinear hydraulic system, let
us substitute (9.2) and (9.4) into (9.5) to obtain

qR = √(ρg/R) √h (9.7)

Then substitution of (9.3) and (9.7) into (9.6) results in

dh/dt + (1/A) √(ρg/R) √h = (1/A) Qs (t) (9.8)

Equation (9.8) is the nonlinear differential equation governing the fluid height h in the tank.

With the nonlinear equation in (9.8), its solution can be found exactly by integration
because the equation is first-order. Let us investigate the following questions and see how a
nonlinear system behaves differently from its linear counterpart.

Question 1. If Qs (t) = 0 and h(0) = h0 , what is h(t)? This question is to find the
homogeneous solution (i.e., no forcing term with Qs = 0) subject to an initial height h0 . In
this case, (9.8) becomes

dh/dt + (b/A) √h = 0,   b ≡ √(ρg/R) (9.9)

with
h(0) = h0 (9.10)

By conducting a separation of variables, we can rearrange (9.9) into

∫_{h0}^{h} dh/√h = − ∫_0^t (b/A) dt (9.11)

By carrying out the integrations in (9.11), one gets

√h − √h0 = − (b/(2A)) t (9.12)

or
h(t) = (√h0 − (1/(2A)) √(ρg/R) t)² (9.13)

Let the time it takes to drain the tank be τ . Then

h(τ ) = (√h0 − (1/(2A)) √(ρg/R) τ )² = 0 (9.14)

or
τ = 2A √(R h0 /(ρg)) (9.15)

Note that the time it takes to drain the tank is finite. Moreover, the larger the resistance R
is, the longer it takes to drain the tank.

In contrast, for the system in Fig. 9.1, the linear counterpart is a first-order system with
a time constant RC. Its homogeneous solution is an exponential decay, so the system
takes an infinite time to reduce the water level h(t) to zero.
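The finite drain time in (9.15) is easy to verify numerically. The Python sketch below (with assumed parameter values) evaluates the exact homogeneous solution (9.13) and confirms that the tank is empty at t = τ, in contrast to the exponential decay of a linear tank, which never quite reaches zero.

```python
import math

# Check of (9.13)-(9.15): the nonlinear tank drains in FINITE time
# tau = 2A*sqrt(R*h0/(rho*g)). Parameter values are assumptions chosen
# only for demonstration.
rho, g = 1000.0, 9.8          # fluid density, gravitational acceleration
R, A, h0 = 4.0e5, 2.0, 1.0    # valve resistance, base area, initial height

b = math.sqrt(rho * g / R)
tau = 2 * A * math.sqrt(R * h0 / (rho * g))   # (9.15)

def h_exact(t):
    """Exact homogeneous solution (9.13), valid until the tank is empty."""
    root = math.sqrt(h0) - b * t / (2 * A)
    return max(root, 0.0) ** 2

print(round(tau, 2))        # finite drain time (seconds)
print(h_exact(tau))         # ~0 at t = tau: the tank is empty
print(h_exact(0.5 * tau))   # quarter height at half the drain time
```

Note the last line: because h depends quadratically on the remaining time, the level is at h0/4 (not h0/2) halfway through the draining process.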

Question 2. If Qs (t) = Q̄ (a constant), what is the height h̄ at steady state? This
question is to find the steady-state response under a step load. When a step load Q̄ is applied, the
system will initially change and gradually reach a steady state h̄.

At steady state, h(t) = h̄, which is a constant. Therefore, dh/dt = 0 and (9.8) becomes

(1/A) √(ρg/R) √h̄ = (1/A) Q̄ (9.16)

or
h̄ = (R/(ρg)) Q̄² (9.17)

If the input Q̄ is doubled, the output h̄ will quadruple according to (9.17). For a linear
system, if the input is doubled, the output will be doubled too. That the output and input
do not increase or decrease at the same pace is a characteristic of a nonlinear system.
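A one-line computation makes the quadratic input-output relation of (9.17) concrete; the parameter values are again illustrative.

```python
# The steady-state relation (9.17), h_bar = (R/(rho*g)) * Q_bar^2, is
# quadratic in the input: doubling the inflow quadruples the steady height,
# a hallmark of nonlinear behavior. Parameter values are illustrative.
rho, g, R = 1000.0, 9.8, 4.0e5

def steady_height(Q):
    return R * Q * Q / (rho * g)   # (9.17)

ratio = steady_height(0.02) / steady_height(0.01)
print(ratio)  # -> 4.0, where a linear system would give 2.0
```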

Question 3. If Qs (t) = Q̄ (a constant) and h(0) = h0 ≠ h̄, what will be the time response
of h(t)? This question is to find the step response. When a step load Q̄ is applied, the height
h(t) will not change if the initial height is h̄. If the initial height h(0) is not h̄, the height
h(t) will change, and its asymptotic value should reach the steady-state height h̄ as t → ∞.

In this case, (9.8) becomes

dh/dt + (1/A) √(ρg/R) √h = (1/A) Q̄ (9.18)

with an initial condition


h(0) = h0 (9.19)

By performing a separation of variables in (9.18), we obtain

A √(R/(ρg)) ∫_{h0}^{h} dh / (Q̄ √(R/(ρg)) − √h) = ∫_0^t dt (9.20)

Note that Q̄ √(R/(ρg)) is indeed √h̄ from (9.17). Therefore, (9.20) can be written as

A √(R/(ρg)) ∫_{h0}^{h} dh / (√h̄ − √h) = ∫_0^t dt (9.21)

and its solution is

t = 2A √(R/(ρg)) [ √h0 − √h(t) + √h̄ ln( (√h̄ − √h0 ) / (√h̄ − √h(t)) ) ] (9.22)

Of course, (9.22) is difficult to invert to find the fluid height h(t) as a function of time.
Nevertheless, the solution can be plotted for any h between h0 and h̄ to find the time t to
reach that particular height h. Figure 9.2 shows an example of such a plot.
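Since (9.22) gives the time t as a function of the height h rather than the other way around, it is natural to check it numerically. The Python sketch below (with assumed parameter values) evaluates (9.22) at a target height and cross-checks the result against a direct forward-Euler integration of (9.18).

```python
import math

# Cross-check of the implicit step response (9.22) against a numerical
# integration of (9.18). All parameter values are illustrative assumptions.
rho, g, R, A = 1000.0, 9.8, 4.0e5, 2.0
Qbar = 0.05
h_bar = R * Qbar**2 / (rho * g)   # steady-state height, (9.17)
h0 = 0.25 * h_bar                 # start below the steady state

def t_of_h(h):
    """Time to reach height h, from the exact implicit solution (9.22)."""
    c = 2 * A * math.sqrt(R / (rho * g))
    s0, s, sb = math.sqrt(h0), math.sqrt(h), math.sqrt(h_bar)
    return c * (s0 - s + sb * math.log((sb - s0) / (sb - s)))

# Forward-Euler integration of dh/dt = (Qbar - sqrt(rho*g/R)*sqrt(h)) / A
h, t, dt = h0, 0.0, 1e-3
target = 0.9 * h_bar
while h < target:
    h += dt * (Qbar - math.sqrt(rho * g / R * h)) / A
    t += dt

print(round(t_of_h(target), 2), round(t, 2))  # the two times should agree
```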

9.2 Linearization

From the discussions above, we learn that the dynamics of a nonlinear system can be
extremely complex. An analytical expression may not be easily obtained, because
integrations are required. The chance of getting an exact solution is very slim when the order
of the system goes up. Therefore, an approximate method that obtains solutions systematically
is highly desirable.

One such method is called linearization, and it consists of the following steps. The
first step is to choose an operating point. For example, the operating point could be an
equilibrium position or the initial condition. Next, we study only the response of the system
near the operating point. As a result, the change in the system is small, and the system
can be treated as linear. The nonlinear system is replaced by an equivalent linear system in
this small neighborhood. Then the solution can easily be found. This process of finding the
equivalent linear system is called linearization.

Figure 9.2: Height of fluid level as a function of time

Let us use the tank-valve system (cf. Fig. 9.1) as an example and revisit Question 3 of
Section 9.1 using linearization. If Qs (t) = Q̄ (a constant) and h(0) = h0 ≠ h̄, what will be
the time response of h(t)?

First of all, let us choose h̄ as the operating point. We further assume that the initial
height h0 is close to h̄ such that

(h0 − h̄)/h̄ ≪ 1 (9.23)

As a result, the fluid height h(t), at any time t, is close to the steady-state height h̄, such
that we can define

h(t) = h̄ + η(t) (9.24)
where η(t) is a small quantity describing the difference between the current fluid height h(t)
and the steady-state fluid height h̄. Moreover, the fluid height h(t) throughout the entire

dynamic process will satisfy

(h(t) − h̄)/h̄ = η(t)/h̄ ≪ 1 (9.25)

where (9.24) is used. In the meantime, the fluid height h(t) must satisfy the following
governing equation (reproduced from (9.18)).

dh/dt + (1/A) √(ρg/R) √h = (1/A) Q̄ (9.26)

A main step of linearization is to transform the nonlinear governing equation in h(t) (cf.
(9.26)) into a linear, approximate governing equation in η(t). To do so, we need to evaluate
the nonlinear governing equation (9.26) term by term. For the first term,

dh/dt = dη(t)/dt (9.27)

For the second term, there is a nonlinear term √h that can be approximated via a Taylor
expansion as follows. First, let us recall that the Taylor expansion takes the form

f (x) = f (x0 ) + f′(x0 ) (x − x0 ) + (1/2) f″(x0 ) (x − x0 )² + · · · (9.28)

where x is a variable, x0 is an operating point, and f (x) is the function to be expanded. In
the case of the tank-valve system, we can specify

x ≡ h,   x0 = h̄,   f (x) = √x,   x − x0 = h − h̄ = η (9.29)

where (9.24) has been used. Then a little algebra shows that

f′(x) = (1/2) x^(−1/2),   f″(x) = −(1/4) x^(−3/2),   f′(x0 ) = (1/2) h̄^(−1/2),   f″(x0 ) = −(1/4) h̄^(−3/2) (9.30)

Substitution of (9.29) and (9.30) back into (9.28) results in

√h = √h̄ [1 + (1/2)(η/h̄) − (1/8)(η/h̄)² + · · ·] (9.31)

Since η/h̄ ≪ 1 as assumed in (9.25),

(η/h̄)² ≪ η/h̄ (9.32)

Therefore, all the terms in (9.31) of second order and above can be neglected. Equation
(9.31) is reduced to

√h ≈ √h̄ [1 + (1/2)(η/h̄)] (9.33)

Now substitution of (9.27) and (9.33) back into (9.26) leads to

dη/dt + (1/A) √(ρg/R) √h̄ [1 + (1/2)(η/h̄)] = (1/A) Q̄ (9.34)

With the help of (9.16), equation (9.34) can be further reduced to

dη(t)/dt + (1/(2A)) √(ρg/(R h̄)) η(t) = 0 (9.35)

Since (9.35) is a differential equation, an initial condition η(0) must be assigned. According
to (9.24),

η(0) = h(0) − h̄ = h0 − h̄ (9.36)

Therefore, the solution of (9.35) is

η(t) = (h0 − h̄) e^(−t/τ) (9.37)

where the time constant τ is the reciprocal of the coefficient in (9.35), i.e.,

τ = 2A √(R h̄/(ρg)) (9.38)

To close this example, I want to make two remarks. First, if the operating point is
an equilibrium position or a steady-state response, the constant terms in the linearized
governing equation will cancel. If they do not, there must be an algebraic error. Second, the
linearization process is not restricted to a single operating point. If needed, the linearization
process can be applied iteratively. For example, one can use the equivalent linear system to
predict a response, and use the predicted response as the next operating point. Then the
linearization process is repeated, resulting in an iteration scheme to predict the approximate
solution of a nonlinear system.
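The quality of the linearized solution can be checked numerically. The Python sketch below (with assumed parameter values) integrates the full nonlinear equation (9.18) starting 5% above the steady state and compares the result with the exponential solution (9.37), using the time constant τ = 2A√(Rh̄/(ρg)), the reciprocal of the coefficient in (9.35).

```python
import math

# Compare the linearized response (9.37) with a numerical solution of the
# full nonlinear equation (9.18) for a small initial deviation from h_bar.
# All parameter values are illustrative assumptions.
rho, g, R, A = 1000.0, 9.8, 4.0e5, 2.0
Qbar = 0.05
h_bar = R * Qbar**2 / (rho * g)
h0 = 1.05 * h_bar                 # start 5% above the steady state

tau = 2 * A * math.sqrt(R * h_bar / (rho * g))   # time constant of (9.35)

def h_linear(t):
    """Linearized solution: h_bar plus the exponential decay (9.37)."""
    return h_bar + (h0 - h_bar) * math.exp(-t / tau)

# Forward-Euler integration of the nonlinear equation over one time constant
h, dt = h0, 1e-3
for _ in range(int(tau / dt)):
    h += dt * (Qbar - math.sqrt(rho * g / R * h)) / A

print(round(h, 5), round(h_linear(tau), 5))  # close agreement for small h0 - h_bar
```

Because the initial deviation is only 5% of h̄, the neglected quadratic term in (9.31) is tiny and the two curves are nearly indistinguishable.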

9.3 More Examples on Linearization

Example 9.1 Figure 9.3 shows an inverted pendulum in the gravitational field. The
pendulum consists of a particle supported at one end of a rigid rod. The other end of the
rod is supported at the hinge point with a torsional spring. The particle has mass m. The
rigid rod is massless and has a length l. The torsional spring has spring constant k. The
gravitational acceleration is g. The angular position of the pendulum is θ(t) in the clockwise
sense. By applying ΣM = Iα about the hinge point, one obtains the following equation of
motion.

ml² θ̈ + kθ − mgl sin θ = 0 (9.39)

For the special case of

k = (4/(π√2)) mgl (9.40)

determine a nonzero equilibrium angle θ0 ≠ 0 and the linearized equation around θ = θ0 .

An equilibrium position θ = θ0 requires that θ̈ = 0. Therefore, (9.39) becomes

kθ0 − mgl sin θ0 = mgl [(4/(π√2)) θ0 − sin θ0 ] = 0 (9.41)

where (9.40) was used. A non-zero solution of (9.41) is

θ0 = π/4 (9.42)
4

Figure 9.3: An inverted pendulum in the gravitational field

To linearize (9.39) around the equilibrium point θ0 , we choose θ0 as the operating point
and define
θ(t) = θ0 + η(t) (9.43)
where
η(t) ≪ θ0 (9.44)

Now we want to transform the nonlinear governing equation in θ(t) (cf. (9.39)) into a
linear, approximate governing equation in η(t). To do so, we need to evaluate the nonlinear
governing equation (9.39) term by term. For the first term,

θ̈(t) = η̈(t) (9.45)

For the second term,

kθ(t) = k [θ0 + η(t)] = (4/(π√2)) mgl [π/4 + η(t)] (9.46)

where (9.40) was used. For the third term, sin θ is nonlinear and needs to be expanded via
a Taylor expansion. Again, let us reproduce the Taylor expansion (9.28) here:

f (x) = f (x0 ) + f′(x0 ) (x − x0 ) + (1/2) f″(x0 ) (x − x0 )² + · · · (9.47)

where x is a variable, x0 is an operating point, and f (x) is the function to be expanded. In
this example, let us specify

x ≡ θ,   x0 ≡ θ0 = π/4,   f (x) ≡ sin x,   x − x0 = θ − θ0 = η(t) (9.48)

where (9.43) has been used. With a little algebra, one can show that

f′(x) = cos x,   f″(x) = − sin x,   f′(x0 ) = cos θ0 = 1/√2,   f″(x0 ) = − sin θ0 = −1/√2 (9.49)

where (9.42) is used. Substitution of (9.48) and (9.49) back into (9.47) results in

sin θ = (1/√2) [1 + η − η²/2 + · · ·] (9.50)

Since η ≪ 1, terms with η² and above can be ignored. Therefore, (9.50) is reduced to

sin θ ≈ (1/√2) [1 + η] (9.51)
2

To derive the linearized equation, let us substitute (9.45), (9.46), and (9.51) back into (9.39)
to obtain

ml² η̈ + (4/(π√2)) mgl [π/4 + η] − mgl (1/√2) [1 + η] = 0 (9.52)

Note that the constant terms independent of η(t) in (9.52) cancel out. Therefore, (9.52)
leads to

ml² η̈(t) + (mgl/√2) (4/π − 1) η(t) = 0 (9.53)

According to (9.53), the inverted pendulum will vibrate with a natural frequency

ωn = √( (g/(√2 l)) (4/π − 1) ) (9.54)

Example 9.2 This example demonstrates the equation of motion and linearization in a
system involving a quasi-equilibrium position.

Consider the system shown in Fig. 9.4. The system consists of two identical particles
with mass m and an inextensible string of length l. Mass 1 is allowed to move on a horizontal
table, and mass 2 is allowed to move only vertically under the table, pulled by gravity. The
inextensible string passes through a central hole in the table to connect the two masses.

Figure 9.4: Motion of two masses connected by an inextensible string

The motion of mass 1 can be described via polar coordinates r and θ. Moreover, let er
and eθ be unit vectors in the radial and transverse directions, respectively. The acceleration
of mass 1 is then

r̈1 = (r̈ − r θ̇²) er + (r θ̈ + 2 ṙ θ̇) eθ (9.55)

Since the string is inextensible, mass 2 will be located under the table at a position l − r.
Moreover, let ez be a unit vector in the vertical upward direction. The acceleration of mass
2 is then

r̈2 = r̈ ez (9.56)

To derive the equation of motion, the first step is to draw free-body diagrams. For
mass 1, the string’s action is a tension T pointing toward the central hole. For mass 2,
the forces include a tension T upward and a weight downward. Note that tension T is a
constraint force, because it is needed to ensure that the string is inextensible. It is very
important to choose polar coordinates to describe the motion of mass 1. With the help of
the polar coordinates, the constraint force T is confined only in the radial direction and
does not appear in the transverse direction. Since constraint force T needs to be eliminated
in deriving the equation of motion, coordinate systems should be chosen so that constraint

force T appears in as few coordinates as possible. Otherwise, the elimination process will be
a formidable task to perform.

Application of Newton's second law to mass 1 leads to

ΣF1 = −T er = m [(r̈ − r θ̇²) er + (r θ̈ + 2 ṙ θ̇) eθ ] (9.57)

or
−T = m (r̈ − r θ̇²) (9.58)
and
m (r θ̈ + 2 ṙ θ̇) = 0 (9.59)

Application of Newton's second law to mass 2 leads to

ΣF2 = (T − mg) ez = m r̈ ez (9.60)

or
T = mg + m r̈ (9.61)

Substitution of (9.61) into (9.58) results in

2m r̈ − m r θ̇² + mg = 0 (9.62)

Note that the constraint force T has now been eliminated. There are only two unknowns,
r(t) and θ(t), left with two equations, (9.59) and (9.62), in the picture.

These two equations can further be consolidated by eliminating θ as follows. First of
all, (9.59) is multiplied by r to obtain

m r (r θ̈ + 2 ṙ θ̇) = d/dt (m r² θ̇) = 0 (9.63)

In other words,
m r² θ̇ = h (constant) (9.64)

where h is a constant. There are two things worth noting here. First, the physical meaning
of m r² θ̇ is the angular momentum of the system. Therefore, (9.64) is simply a statement
of conservation of angular momentum. The angular momentum is expected to be conserved,
because the tension T creates no moment about the central hole. Therefore, angular momentum must be conserved about the vertical axis passing through the central hole. Second,

how is h determined? The answer is initial conditions. For this system, four initial conditions
will be prescribed, i.e., r(0), ṙ(0), θ(0), and θ̇(0). From these four initial conditions, one can
calculate h and it will be conserved throughout the subsequent motion.

The next step is to eliminate θ̇ from (9.62) and (9.64). Rewrite (9.64) as

θ̇ = h/(m r²) (9.65)

and substitute it into (9.62) to obtain

2m r̈ − m r (h/(m r²))² + mg = 0 (9.66)

Now there is only one equation left with one unknown r(t). This equation can be used for
standard vibration analysis, such as finding an equilibrium position and small oscillations
around the equilibrium.

Equilibrium Position. The equilibrium position from (9.66) is defined by r = r0
satisfying ṙ = r̈ = 0. Therefore, r0 satisfies

r0 = (h²/(m² g))^(1/3) (9.67)

Strictly speaking, this equilibrium is a quasi-equilibrium state because θ(t) is not constant.
According to (9.65),

θ̇ = h/(m r0²) ≡ θ̇0 = constant (9.68)

Based on (9.67) and (9.68), the quasi-equilibrium state corresponds to a circular motion with
radius r0 and angular velocity θ̇0 .

Small Oscillations. Now let us consider small oscillations around the quasi-equilibrium
state by assuming

r(t) = r0 + η(t),   η(t) ≪ r(t) (9.69)

where η(t) is the small oscillation away from the quasi-equilibrium state. The physical
meaning of η(t) is a small radial deviation away from the quasi-equilibrium circular motion.
As a result,
ṙ(t) = η̇(t),   r̈(t) = η̈(t) (9.70)

Substitution of (9.69) into the nonlinear term in (9.66) results in

m r (h/(m r²))² = (h²/m) (r0 + η(t))^(−3) = (h²/(m r0³)) [1 − 3 η(t)/r0 + · · ·] (9.71)

where a Taylor expansion is used to obtain the result. If r0 in (9.67) is used, (9.71) becomes

m r (h/(m r²))² = mg [1 − 3 η(t)/r0 + · · ·] ≈ mg [1 − 3 η(t)/r0 ] (9.72)

where only the constant and linear terms are retained. This process is known as linearization.
By substituting (9.70) and (9.72) back into (9.66), we obtain

η̈(t) + (3g/(2 r0 )) η(t) = 0 (9.73)

This is the linearized equation of motion governing small oscillations around the quasi-equilibrium state.

In the process of linearization, there are two issues worth noting. First, when deriving
the equation of motion, the constant term in the equation of motion (which is mg in (9.66)
in this example) will always be canceled by the constant term resulting from the Taylor
expansion. If the constant terms do not cancel, there must be an algebraic error in the
derivation. Second, (9.73) needs two initial conditions to solve. They can be found as

η(0) = r(0) − r0 ,   η̇(0) = ṙ(0) (9.74)

because the initial conditions r(0) and ṙ(0) have already been prescribed.
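As a check of (9.73), the following Python sketch integrates the full nonlinear radial equation (9.66) for a small perturbation of the circular motion and measures the oscillation frequency from successive zero crossings of η = r − r0. The parameter values are illustrative assumptions.

```python
import math

# Integrate the nonlinear radial equation (9.66), with theta-dot eliminated
# via (9.65), for a 2% radial perturbation of the circular motion. The
# measured frequency should be close to sqrt(3g/(2*r0)) from (9.73).
m, g = 1.0, 9.8
r0 = 0.4
h = math.sqrt(m**2 * g * r0**3)   # angular momentum giving this r0, cf. (9.67)

r, rdot = 1.02 * r0, 0.0          # 2% radial perturbation, released from rest
dt = 1e-4
crossings, prev = [], r - r0
for i in range(200_000):          # 20 s of motion, semi-implicit Euler
    rddot = (r * (h / (m * r * r))**2 - g) / 2.0   # from (9.66)
    rdot += dt * rddot
    r += dt * rdot
    cur = r - r0
    if prev < 0.0 <= cur:         # upward zero crossing of eta = r - r0
        crossings.append(i * dt)
    prev = cur

period = crossings[1] - crossings[0]        # one full oscillation
omega_lin = math.sqrt(3 * g / (2 * r0))     # linearized frequency, (9.73)
print(round(2 * math.pi / period, 3), round(omega_lin, 3))
```

The measured and linearized frequencies agree to within a fraction of a percent at this small amplitude, as the linearization predicts.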


Chapter 10

Operational Block Diagrams

Block diagrams have many uses in system dynamics. A block diagram can serve as a graphical
way to represent a set of state equations. It is also a handy tool to represent process flows in
a dynamical system or a control system. Therefore, a block diagram can serve as a graphical
interface for software tools.

In this short chapter, I will first go over notations of various operations used in block
diagrams. Then I will explain how to construct block diagrams from state equations.

10.1 Basic Operations and Notations

There are three basic operations in block diagrams: multiplication, differentiation, and in-
tegration. In working out a block diagram, a very important key is

READ THE EQUATION FROM RIGHT TO LEFT!

The reason is simple. The quantities on the right side of an equation serve as input,
and the quantity on the left side of the equation is the output. It is the same concept we see


in programming.

Consider a multiplication operation described by

f = B · v (10.1)

where v is an input, B is a constant, and f is an output. Figure 10.1 shows the notation
representing a multiplication process in a block diagram. If we read (10.1) from right to left,
we see the input v entering a block B that represents a multiplication process. The exit
end of the block B is the output f .

Figure 10.1: Multiplication operation

Figure 10.2: Differentiation operation

Now, let us consider a differentiation operation described by

f = m dv/dt (10.2)

where v is an input, m is a constant, and f is an output. Figure 10.2 shows the notation
representing the differentiation process shown in (10.2). The block diagram is cascaded with
two blocks. One block has the notation S to represent a differentiation process, and the other
has the notation m to represent a multiplication process. If we read (10.2) from right to left,
we see the input v entering the block S to go through a differentiation operation. Therefore,
the exit of the block S is dv/dt. Subsequently, dv/dt enters the block m to be multiplied by m.
According to (10.2), the exit of the block m is the output f .

To demonstrate the integration operation, we can integrate (10.2) to get

v(t) = v(0) + (1/m) ∫_0^t f (u) du (10.3)

Figure 10.3 shows the block diagram of equation (10.3). The heart of Fig. 10.3 is the
integration operator 1/s or S⁻¹. Since s or S represents a differentiation operation, 1/s or
S⁻¹ is the inverse operation of differentiation, which is integration.

Figure 10.3: Block diagram for an integration process

Now, in Fig. 10.3, the input f enters the integration operation and is followed by a
multiplication by 1/m to obtain (1/m) ∫_0^t f (u) du. The circle in Fig. 10.3 represents a
summation process adding v(0) and (1/m) ∫_0^t f (u) du together to produce the output v(t).

Among the operations above, the most important one is integration. Basically,
solving a state equation is a process of integration.
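The integration block of Fig. 10.3 can be mimicked numerically. The sketch below approximates (10.3) with the trapezoidal rule for an assumed input f(t) = sin t and compares the result with the exact integral.

```python
import math

# Minimal numerical realization of the integration block in Fig. 10.3:
# v(t) = v(0) + (1/m) * integral_0^t f(u) du. The input f(t) = sin(t) is an
# assumed test signal, so the exact answer is v(0) + (1 - cos(t))/m.
m, v0 = 2.0, 1.0
f = math.sin

def v(t, n=100_000):
    """Trapezoidal-rule approximation of the integral block."""
    dt = t / n
    interior = sum(f(i * dt) for i in range(1, n))
    area = (0.5 * (f(0.0) + f(t)) + interior) * dt
    return v0 + area / m

t = 2.0
exact = v0 + (1.0 - math.cos(t)) / m
print(round(v(t), 9), round(exact, 9))  # agree to roughly 1e-9
```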

10.2 Block Diagrams for State Equations

Very often, we need to convert a state equation into a set of block diagrams. That can be
done via the following steps.

Step 1. Unwind the state equation and output equation into a non-matrix form of simul-
taneous equations.

Step 2. Start from the input variables and read the state equation from right to left.

Step 3. Connect the states.

Step 4. Complete the output equation.

Let me use the following example to demonstrate the steps described above.

The state equation of interest is

d/dt [ x1 ]   [ 1    2 ] [ x1 ]   [ 1  −1 ] [ u1 ]
     [ x2 ] = [ 3   −4 ] [ x2 ] + [ 0   0 ] [ u2 ]        (10.4)

and the output equation of interest is

[ y1 ]   [ 1  −2 ] [ x1 ]   [  0   0 ] [ u1 ]
[ y2 ] = [ 0   1 ] [ x2 ] + [ −2   1 ] [ u2 ]             (10.5)

How do we convert the state equation (10.4) and the output equation (10.5) into block
diagrams?

Step 1. Unwind the state equation and output equation into a non-matrix
form of simultaneous equations.

Equations (10.4) and (10.5) can be rewritten as

ẋ1 = x1 + 2x2 + u1 − u2
ẋ2 = 3x1 − 4x2                          (10.6)

and

y1 = x1 − 2x2
y2 = x2 − 2u1 + u2                      (10.7)

Step 2. Start from the input variables and read the state equation from
right to left.

Figure 10.4 shows the block diagram constructed after this step. First of all, we need
to identify two points on the left denoting the two input variables u1 and u2 . Next, let us
consider the first line of the state equation in (10.6), whose right side is x1 + 2x2 + u1 − u2 .
This represents a summation process of four terms: x1 , 2x2 , u1 , and −u2 . So, we draw a
circle (the upper left circle in Fig. 10.4) to represent the summation of x1 , 2x2 , u1 , and −u2 .
For the term u1 , we draw an arrow from the input u1 directly into the summation operator.
For the term −u2 , we draw an arrow from the input u2 directly into the summation operator,
except that there is a negative sign to indicate that the operation for this part is indeed a
subtraction. For x1 and 2x2 , we do not know where they come from. Therefore, we just leave
them as two entries at the summation operator.

Figure 10.4: Step 2 of forming block diagrams from state equations

In the meantime, the output end of the summation operator is then ẋ1 according to the
first state equation of (10.6). After an integration operator S⁻¹ together with an initial
condition x1 (0), we will receive the first state variable x1 .

Similarly, consider the second line of the state equation in (10.6), whose right side is 3x1 − 4x2 .
This represents a summation process of two terms: 3x1 and −4x2 . So, we draw a circle (the
lower left circle in Fig. 10.4) to represent the summation of 3x1 and −4x2 . Since we do not
know where they come from, we have 3x1 and 4x2 as two entries at the summation operator,
with a negative sign for 4x2 to indicate that it is a subtraction operation. In the meantime,
the output end of the summation operator is then ẋ2 according to the second state equation
of (10.6). After an integration operator S⁻¹ together with an initial condition x2 (0), we will
receive the second state variable x2 .

Step 3. Connect the states. In Fig. 10.4, the first summation operator (the upper
left circle) has two entries, x1 and 2x2 , left unconnected. Now they can be connected to the
states x1 and x2 as shown in Fig. 10.5. For the entry x1 , it can be connected directly to the
state x1 . For the entry 2x2 , it can start from the state x2 and pass through a multiplication
block 2 to reach the first summation operator.

Figure 10.5: Step 3 of forming block diagrams from state equations

Similarly, the second summation operator (the lower left circle) has two entries, 3x1
and −4x2 , left unconnected. They can be connected to the states x1 and x2 in a similar way.
For the entry 3x1 , it can start from the state x1 and pass through a multiplication block 3
to reach the second summation operator. For the entry −4x2 , it carries a negative sign at
the second summation operator. Therefore, we can start from the state x2 and pass through
a multiplication block 4 to reach the second summation operator.

By now we have completely transformed the state equation (10.6) into a block diagram.

Step 4. Complete the output equation. For the output equation y1 = x1 − 2x2 , we use
a summation operator (i.e., the upper right circle in Fig. 10.6). The state x1 flows directly
into the summation operator. The state x2 first flows through a multiplication factor of 2 and
enters the summation operator with a negative sign. As a result, the exit of the summation
operator is y1 , which is x1 − 2x2 .

For the output equation y2 = x2 − 2u1 + u2 , we use a summation operator (i.e., the lower
right circle in Fig. 10.6). The state x2 flows directly into the summation operator. The input
u1 first flows through a multiplication factor of 2 and enters the summation operator with

Figure 10.6: Step 4 of forming block diagrams from state equations

a negative sign. The input u2 flows directly into the summation operator. As a result, the
exit of the summation operator is y2 , which is x2 − 2u1 + u2 .

The block diagram of the state equation (10.6) and the output equation (10.7) is now
completed.
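Because the block diagram is only a graphical form of (10.4)-(10.7), one simple sanity check of Step 1 is to evaluate the unwound scalar equations and the original matrix form at the same arbitrary state and input; the numeric values below are arbitrary test values.

```python
# Evaluate the scalar equations (10.6)-(10.7) and the matrix form
# (10.4)-(10.5) at the same state/input and confirm they agree.
x1, x2 = 0.7, -1.3        # arbitrary state values
u1, u2 = 0.5, 2.0         # arbitrary inputs

# Scalar form (10.6)-(10.7)
x1dot = x1 + 2 * x2 + u1 - u2
x2dot = 3 * x1 - 4 * x2
y1 = x1 - 2 * x2
y2 = x2 - 2 * u1 + u2

# Matrix form (10.4)-(10.5), written out without any matrix library
A = [[1, 2], [3, -4]]
B = [[1, -1], [0, 0]]
C = [[1, -2], [0, 1]]
D = [[0, 0], [-2, 1]]
x, u = [x1, x2], [u1, u2]
xdot = [A[i][0] * x[0] + A[i][1] * x[1] + B[i][0] * u[0] + B[i][1] * u[1] for i in range(2)]
y = [C[i][0] * x[0] + C[i][1] * x[1] + D[i][0] * u[0] + D[i][1] * u[1] for i in range(2)]

err = max(abs(xdot[0] - x1dot), abs(xdot[1] - x2dot), abs(y[0] - y1), abs(y[1] - y2))
print(err < 1e-12)  # -> True: both forms agree
```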
Chapter 11

Solution of State Equations in Time Domain

In previous chapters, a major goal was to derive the state equations of a system. Multiple
methods were introduced for that goal, including conversion from ordinary differential equations,
a heuristic approach, and a linear graph approach. In this chapter, the goal is to solve the
state equation of interest.

There are two ways to solve a state equation: a time-domain approach and a frequency-
domain approach. The time-domain approach often can be programmed so that numerical
solutions can be easily obtained. The frequency-domain approach often gives great physical
insights helping us understand the response of the state equation. In this chapter, I will focus
only on the time-domain solution. The frequency-domain solution will be deferred until the
concept of transfer functions is introduced.

Since state equations are matrix differential equations, their solutions naturally will
require significant use of linear algebra. Therefore, I will first review some basic linear algebra
in this chapter; specifically, eigenvalues, eigenvectors, and similarity transformations. There
are many ways to solve a state equation in the time domain, and I will explain two of them.
One is to use eigenvector representation and similarity transformation. The other is to use a
series solution leading to a notation known as state transition matrix. Finally, I will explain


the concept of stability of a system.

In this chapter, we will constantly face a dilemma in learning the time-domain solution.
As explained before, the purpose of state equations is to handle complicated systems with large
numbers of degrees of freedom (i.e., high orders). When the order is high, it becomes very
difficult to show examples that have exact solutions or comprehensible results. As a result,
you will see that many examples in this chapter involve only a second-order system, because
their solutions are easily obtained. One frequently asked question is, "Why are we solving
this simple problem using such a complicated method?" Now you know the answer. These
examples are only for demonstrative purposes, because we cannot afford to use a more
complex example to demonstrate a solution process that is already complicated by
itself.

11.1 A Review of Linear Algebra

This is a very long section. I will go over important concepts needed for the solution of
state equations. These concepts include eigenvalues, eigenvectors, modal matrix, similarity
transformation, diagonalization, and Jordan form.

11.1.1 Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors (also known as characteristic values and characteristic vectors, respectively) of a matrix are important quantities that appear in many engineering applications. For example, stresses at a point in a continuum (e.g., solid or fluid) are described via a 3 × 3 matrix (see Fig. 11.1):

        [ σxx  σxy  σxz ]
    T = [ σyx  σyy  σyz ]                                  (11.1)
        [ σzx  σzy  σzz ]

where σij , i, j = x, y, z, are stress components acting on surface i in the j direction. Principal
stresses and principal directions are eigenvalues and eigenvectors of the stress matrix T.
Figure 11.1: Stress states in a continuum

Figure 11.2: Area moment of inertia of a cross section

Another example is the area moment of inertia. In bending of a beam, the rigidity of the beam's cross section depends on the area moment of inertia. Consider the L-shaped cross section in Fig. 11.2; the area moments of inertia about the x and y axes form a matrix

        [ Ixx  Ixy ]
    I = [ Iyx  Iyy ]                                       (11.2)

where Iij, i, j = x, y, are the area moments of inertia about the i and j axes. The principal area moments of inertia are the eigenvalues of the matrix I. Moreover, the eigenvectors of I give the principal axes of the cross section. There are many other applications of eigenvalues and eigenvectors, such as buckling loads and buckling shapes of truss structures.

Mathematically, eigenvalue problems (also known as characteristic value problems) in linear algebra take the form of

    [ a11  a12  · · ·  a1n ] [ x1 ]     [ x1 ]
    [ a21  a22  · · ·  a2n ] [ x2 ]     [ x2 ]
    [  ⋮    ⋮     ⋱     ⋮  ] [  ⋮ ] = λ [  ⋮ ]             (11.3)
    [ an1  an2  · · ·  ann ] [ xn ]     [ xn ]

or, in a more compact form,

    Ax = λx                                                (11.4)

Note that (11.3) is a homogeneous equation. Therefore,

    x ≡ [x1, x2, . . . , xn]^T = 0                         (11.5)

is always a solution. Such a solution is called a trivial solution, because it is not good for anything. What we are interested in is nontrivial solutions, i.e.,

    x ≡ [x1, x2, . . . , xn]^T ≠ 0                         (11.6)

It turns out that nontrivial solutions x ≠ 0 do not exist all the time. They are only available when λ takes specific values. These λ values leading to nontrivial solutions x ≠ 0 from (11.3) or (11.4) are called eigenvalues or characteristic values. The corresponding nontrivial solutions x are called eigenvectors or characteristic vectors.

To find the eigenvalues and eigenvectors, one can rewrite (11.4) as

    (A − λI) x = 0                                         (11.7)

where I is an identity matrix. Since (11.7) is a homogeneous equation, its solutions fall into the following two cases.

Case 1. If |A − λI| ≠ 0, the only solution is x = 0. This leads to trivial solutions, which are irrelevant to eigenvalue problems.

Case 2. If |A − λI| = 0, a basic theorem in linear algebra indicates that nontrivial solutions x ≠ 0 exist. Therefore, the eigenvalues of a matrix A are governed by the following equation:

    |A − λI| = 0                                           (11.8)

Now let us take a closer look at (11.8). If A is an n × n matrix, |A − λI| will be a polynomial of order n in λ. Specifically, expansion of (11.8) will take the form of

    pn λ^n + pn−1 λ^(n−1) + · · · + p2 λ^2 + p1 λ + p0 = 0          (11.9)



Solution of (11.9) leads to n roots. Therefore, there are n eigenvalues for an n × n matrix A. Specifically, (11.9) can be rewritten as

    (λ − λ1)(λ − λ2) · · · (λ − λn) = 0                    (11.10)

where λ1, λ2, . . . , λn are the n eigenvalues.
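The relation between the characteristic polynomial (11.9) and the eigenvalues (11.10) is easy to check numerically. Here is a minimal sketch in Python with NumPy; the 3 × 3 matrix below is an arbitrary one chosen for illustration, not one taken from the text:

```python
import numpy as np

# An arbitrary 3x3 matrix used only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# np.poly(A) returns the coefficients p_n, ..., p_0 of the characteristic
# polynomial |lambda*I - A| in (11.9), leading coefficient first.
coeffs = np.poly(A)

# The roots of that polynomial, eq. (11.10), are exactly the eigenvalues of A.
roots = np.sort(np.roots(coeffs))
eigenvalues = np.sort(np.linalg.eigvals(A))

print(np.allclose(roots, eigenvalues))  # the two sets agree
```

This check works for any square matrix, although forming the characteristic polynomial explicitly is numerically poor practice for large n; eigensolvers work directly on the matrix instead.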

For eigenvectors, the most important thing to remember is that they do not have a fixed magnitude. If x is an eigenvector satisfying (11.7), then cx will also be an eigenvector. One can easily prove this statement by replacing x in (11.7) with cx, i.e.,

    (A − λI) cx = c (A − λI) x = 0                         (11.11)

So not having a fixed magnitude is a direct consequence of the fact that eigenvalue problems defined in (11.4) are indeed homogeneous equations.

Here are four examples, and each example has its own characteristics.

Example 11.1 The first example considers a symmetric matrix with distinct eigenvalues. Let us consider

    A = [ 5  2 ]
        [ 2  2 ]                                           (11.12)

Then the eigenvalues λ must satisfy

    |A − λI| = | 5−λ    2   | = λ^2 − 7λ + 6 = 0           (11.13)
               |  2    2−λ  |

So the eigenvalues are

    λ1 = 1,   λ2 = 6                                       (11.14)

For λ = λ1, the eigenvector

    u1 ≡ [ x1 ]
         [ y1 ]                                            (11.15)

satisfies

    [ 5−λ1    2   ] [ x1 ]   [ 0 ]
    [  2    2−λ1  ] [ y1 ] = [ 0 ]                         (11.16)

Since λ1 = 1, (11.16) becomes

    4x1 + 2y1 = 0
    2x1 +  y1 = 0                                          (11.17)

Note that the two equations in (11.17) are linearly dependent; therefore, the solution of x1 and y1 is not unique, as proved in (11.11). (When solving for an eigenvector, if you do not obtain simultaneous linear equations that are linearly dependent, you must have made an algebraic error somewhere!) The solution of (11.17) is then

    u1 = [ x1 ] = c1 [  1 ]
         [ y1 ]      [ −2 ]                                (11.18)

where c1 is an arbitrary constant. Very often, the coefficient c1 is omitted, because it is well known that eigenvectors have arbitrary magnitude.

For λ = λ2, the eigenvector

    u2 ≡ [ x2 ]
         [ y2 ]                                            (11.19)

satisfies

    [ 5−λ2    2   ] [ x2 ]   [ 0 ]
    [  2    2−λ2  ] [ y2 ] = [ 0 ]                         (11.20)

Since λ2 = 6, (11.20) becomes

    −x2 + 2y2 = 0
    2x2 − 4y2 = 0                                          (11.21)

The solution of (11.21) is then

    u2 = [ x2 ] = [ 2 ]
         [ y2 ]   [ 1 ]                                    (11.22)

The moral of this example is twofold. First, all symmetric matrices have real eigenvalues and real eigenvectors. This is a very distinct characteristic of symmetric matrices. Second, any two eigenvectors ui and uj of a symmetric matrix (corresponding to distinct eigenvalues) are orthogonal to each other, i.e.,

    ui · uj = 0                                            (11.23)
This can easily be verified in this example because

    u1 · u2 = [  1 ] · [ 2 ] = 1·2 − 2·1 = 0               (11.24)
              [ −2 ]   [ 1 ]
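The eigenpairs of Example 11.1 can be checked numerically. A sketch using NumPy follows; `numpy.linalg.eigh` is the eigensolver specialized for symmetric matrices, and it returns unit-length eigenvectors, so its output differs from u1 and u2 above only by scale:

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 2.0]])

# eigh is for symmetric (Hermitian) matrices: real eigenvalues, orthonormal
# eigenvectors, eigenvalues returned in ascending order.
lam, U = np.linalg.eigh(A)

print(lam)                 # close to [1, 6], matching (11.14)
print(U[:, 0] @ U[:, 1])   # close to 0: the orthogonality condition (11.23)
```

Each column of `U` is an eigenvector, so `A @ U[:, i]` equals `lam[i] * U[:, i]` up to rounding, which is the defining relation (11.4).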

These two properties are general properties associated with a Hermitian matrix. If a matrix A is Hermitian, it satisfies

    Ā^T = A                                                (11.25)

where Ā is the complex conjugate of A. When the matrix A is real, Ā = A and the condition (11.25) reduces to the condition of a symmetric matrix, i.e., A^T = A. These two properties can be proved as follows.

Let us assume that the eigenvalues λi and eigenvectors ui may be complex, i = 1, 2, . . . , n. Then λi and ui satisfy

    A ui = λi ui                                           (11.26)

Taking the transpose of (11.26) leads to

    ui^T A^T = λi ui^T                                     (11.27)

Post-multiplication of (11.27) by ūj results in

    ui^T A^T ūj = λi ui^T ūj                               (11.28)

where ūj is the complex conjugate of uj .

In the meantime, another eigenvalue λj and eigenvector uj satisfy

    A uj = λj uj                                           (11.29)

Complex conjugation of (11.29) gives

    Ā ūj = λ̄j ūj                                           (11.30)

Pre-multiplication of (11.30) by ui^T results in

    ui^T Ā ūj = λ̄j ui^T ūj                                 (11.31)


Note that the left sides of (11.28) and (11.31) are equal because the matrix A is Hermitian (cf. (11.25)). Therefore, the difference between (11.28) and (11.31) gives

    (λi − λ̄j) ui^T ūj = 0,   i, j = 1, 2, . . . , n        (11.32)

When i = j, (11.32) becomes

    (λi − λ̄i) ui^T ūi = 0                                  (11.33)

Since the eigenvector ui is nontrivial, ui^T ūi ≠ 0. Therefore

    λi = λ̄i                                               (11.34)

implying that λi is a real number.

When all eigenvalues are distinct, the condition i ≠ j in (11.32) implies that

    λi ≠ λ̄j                                               (11.35)

Therefore, (11.32) reduces to

    ui^T ūj = 0,   i ≠ j                                   (11.36)

The condition (11.36) is known as the orthogonality condition.

Example 11.2 The second example considers a symmetric matrix with repeated eigenvalues. Let us consider

        [ 25    0     0  ]
    A = [  0   34   −12  ]                                 (11.37)
        [  0  −12    41  ]

Then the eigenvalues λ must satisfy

               | 25−λ    0      0   |
    |A − λI| = |   0   34−λ   −12   | = (25 − λ)(λ^2 − 75λ + 1250) = 0     (11.38)
               |   0   −12   41−λ   |

or

    λ1 = λ2 = 25,   λ3 = 50                                (11.39)

Therefore, the matrix A has a pair of repeated eigenvalues λ1 = λ2 = 25.


For the repeated eigenvalue λ = 25, the corresponding eigenvector

        [ x1 ]
    u ≡ [ x2 ]                                             (11.40)
        [ x3 ]

satisfies

                 [ 0    0     0  ] [ x1 ]   [ 0 ]
    (A − λI) u = [ 0    9   −12  ] [ x2 ] = [ 0 ]          (11.41)
                 [ 0  −12    16  ] [ x3 ]   [ 0 ]

The solution of (11.41) is then

    x1 = c1,   x2 = c2,   x3 = (3/4) c2,
    or, equivalently,
    u = c1 [1, 0, 0]^T + c2 [0, 1, 3/4]^T                  (11.42)

where c1 and c2 are arbitrary constants. The results are not surprising, because the rank of the matrix in (11.41) is 1, implying that the solution of (11.41) has two arbitrary constants. Based on (11.42), one can conclude that the repeated eigenvalue λ1 = λ2 = 25 does have two corresponding eigenvectors

    u1 = [1, 0, 0]^T,   u2 = [0, 1, 3/4]^T                 (11.43)

Moreover, the two eigenvectors are orthogonal to each other, satisfying

    u1 · u2 = [1, 0, 0]^T · [0, 1, 3/4]^T = 0              (11.44)

So the takeaway of this example is the following. If a symmetric (or Hermitian) matrix has repeated eigenvalues, there will still be enough eigenvectors corresponding to the repeated eigenvalues. Moreover, the eigenvectors remain independent and orthogonal to each other. In fact, there is a theorem in linear algebra about this phenomenon.

Theorem. If a characteristic number, say λ1, of a symmetric matrix is a multiple root of multiplicity s, then to λ1 there correspond s linearly independent eigenvectors.
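The theorem can be observed numerically for the matrix of Example 11.2. A NumPy sketch (`eigh` returns orthonormal eigenvector columns):

```python
import numpy as np

A = np.array([[25.0,   0.0,   0.0],
              [ 0.0,  34.0, -12.0],
              [ 0.0, -12.0,  41.0]])

lam, U = np.linalg.eigh(A)          # eigenvalues in ascending order: 25, 25, 50

# Even with the repeated eigenvalue 25 (multiplicity 2), the matrix whose
# columns are the computed eigenvectors has full rank: there are three
# linearly independent (indeed orthogonal) eigenvectors, as the theorem says.
print(np.round(lam, 6))
print(np.linalg.matrix_rank(U))     # 3
```

The two columns belonging to λ = 25 need not equal u1 and u2 of (11.43); any orthonormal basis of the same two-dimensional eigenspace is equally valid.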

Example 11.3 The third example considers a non-symmetric matrix with distinct eigenvalues. Let us consider

    A = [ 1  −2 ]
        [ 1  −1 ]                                          (11.45)

Then the eigenvalues λ must satisfy

    |A − λI| = | 1−λ    −2   | = λ^2 + 1 = 0               (11.46)
               |  1    −1−λ  |

or

    λ1,2 = ±j                                              (11.47)

where j ≡ √−1 is the unit imaginary number. Note that the two eigenvalues are complex conjugates of each other because the matrix A is real.

For λ1 = j, the corresponding eigenvector

    u1 ≡ [ x1 ]
         [ y1 ]                                            (11.48)

satisfies

    (A − λ1 I) u1 = [ 1−j    −2   ] [ x1 ]   [ 0 ]
                    [  1    −1−j  ] [ y1 ] = [ 0 ]         (11.49)

The solution of (11.49) is then

    x1 = c1,  y1 = (c1/2)(1 − j),   or   u1 = [   2   ]
                                              [ 1 − j ]    (11.50)

For λ2 = −j, the corresponding eigenvector

    u2 ≡ [ x2 ]
         [ y2 ]                                            (11.51)

satisfies

    (A − λ2 I) u2 = [ 1+j    −2   ] [ x2 ]   [ 0 ]
                    [  1    −1+j  ] [ y2 ] = [ 0 ]         (11.52)

The solution of (11.52) is then

    x2 = c2,  y2 = (c2/2)(1 + j),   or   u2 = [   2   ]
                                              [ 1 + j ]    (11.53)

Note that the two eigenvectors u1 and u2 are complex conjugates of each other. This is a natural result, because the two eigenvalues are complex conjugates.

The moral of this example is the following. A non-symmetric matrix A may have complex eigenvalues and eigenvectors even if it is a real matrix.

Example 11.4 The fourth example considers a non-symmetric matrix with repeated eigenvalues. Let us consider

    A = [  1    1 ]
        [ −1   −1 ]                                        (11.54)

Then the eigenvalues λ must satisfy

    |A − λI| = | 1−λ     1   | = λ^2 = 0                   (11.55)
               | −1    −1−λ  |

or

    λ1 = λ2 = 0                                            (11.56)

Note that the two eigenvalues are repeated.

For λ = 0, the corresponding eigenvector

    u ≡ [ x1 ]
        [ x2 ]                                             (11.57)

satisfies

    (A − λI) u = [  1    1 ] [ x1 ]   [ 0 ]
                 [ −1   −1 ] [ x2 ] = [ 0 ]                (11.58)

or equivalently

     x1 + x2 = 0
    −x1 − x2 = 0                                           (11.59)

Figure 11.3: Summary of properties of eigenvalues and eigenvectors

The two equations in (11.59) are virtually the same. Therefore, the solution of (11.59) is then

    x1 = c1,  x2 = −c1,   or   u = [  1 ]
                                   [ −1 ]                  (11.60)

The result in (11.60) is indeed eye-opening, because there are two eigenvalues but they correspond to the same eigenvector. In other words, there are not enough independent eigenvectors to support all the eigenvalues. When a matrix is symmetric (as in Example 11.2), there are always enough independent eigenvectors even when eigenvalues are repeated.

So the takeaway of this example is the following. When a matrix A is non-symmetric, eigenvalues of multiplicity s may not have s independent eigenvectors. We will find that the lack of enough independent eigenvectors causes a big problem later on in the similarity transformation of non-symmetric matrices.
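The defect is visible numerically as well. In the sketch below, NumPy's general-purpose eigensolver is applied to the matrix of Example 11.4; for a defective matrix the solver can only return two numerically parallel eigenvector columns, so the matrix they form is singular:

```python
import numpy as np

A = np.array([[ 1.0,  1.0],
              [-1.0, -1.0]])

lam, U = np.linalg.eig(A)   # general (non-symmetric) eigensolver

# Both eigenvalues are (numerically) zero, as in (11.56)...
print(lam)
# ...but the two eigenvector columns are (numerically) parallel: the matrix
# is defective, so no second independent eigenvector exists and det(U) ~ 0.
print(np.linalg.det(U))
```

Contrast this with Example 11.2, where the eigenvector matrix of a symmetric A with repeated eigenvalues is always invertible.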

In Fig. 11.3, I summarize the major results we have seen in the four examples above. The columns show whether the matrix is symmetric or non-symmetric. The rows show whether the eigenvalues of interest are distinct or repeated. For symmetric matrices, the eigenvalues are always real and there are always orthogonal and independent eigenvectors. For non-symmetric matrices, the eigenvalues may be complex. When eigenvalues are repeated, there might not be enough eigenvectors.

11.1.2 Modal Matrix and Similarity Transformation

A modal matrix M is a matrix consisting of all the eigenvectors, i.e.,

    M = [u1, u2, . . . , un]                               (11.61)

Of course, the definition in (11.61) implicitly assumes that there are n independent eigenvectors available. When the original matrix A is non-symmetric with repeated eigenvalues, there might not be enough independent eigenvectors. We will discuss that special case later in Section 11.1.3.

There are several things worth noting about the modal matrix M defined in (11.61). First, the order and the magnitude of the eigenvectors in the modal matrix M do not matter. Second, the inverse of the modal matrix, M^−1, exists, because all the eigenvectors forming M are independent.

Now let us take a close look at the product AM. By the definition in (11.61),

    AM = A [u1, u2, . . . , un] = [Au1, Au2, . . . , Aun]        (11.62)

Since u1, u2, . . . , un are eigenvectors, use of (11.4) reduces (11.62) to

    AM = [λ1 u1, λ2 u2, . . . , λn un]
                                 [ λ1   0  · · ·   0 ]
       = [u1, u2, . . . , un]    [  0  λ2  · · ·   0 ]           (11.63)
                                 [  ⋮    ⋮    ⋱    ⋮ ]
                                 [  0   0  · · ·  λn ]

where the last equality follows from a direct expansion. Through the definition of the modal matrix M in (11.61), equation (11.63) becomes

             [ λ1   0  · · ·   0 ]
    AM = M   [  0  λ2  · · ·   0 ]                               (11.64)
             [  ⋮    ⋮    ⋱    ⋮ ]
             [  0   0  · · ·  λn ]

or, equivalently,

                 [ λ1   0  · · ·   0 ]
    M^−1 A M =   [  0  λ2  · · ·   0 ]                           (11.65)
                 [  ⋮    ⋮    ⋱    ⋮ ]
                 [  0   0  · · ·  λn ]

The product M^−1 A M is called a similarity transformation. As one can see in (11.65), the similarity transformation leads to a diagonal matrix with the eigenvalues appearing as the diagonal elements. The process is also called diagonalization. The similarity transformation is very important, because it is a critical step in solving state equations for their response.

Example 11.5 Let us use the matrix A in Example 11.1 to demonstrate the similarity transformation. Recall that

    A = [ 5  2 ]
        [ 2  2 ]                                           (11.66)

and the eigensolutions are

    λ1 = 1,  u1 = [  1 ] ;    λ2 = 6,  u2 = [ 2 ]
                  [ −2 ]                    [ 1 ]          (11.67)

Therefore, the modal matrix is

    M = [  1  2 ]
        [ −2  1 ]                                          (11.68)

and its inverse is

    M^−1 = (1/5) [ 1  −2 ]
                 [ 2   1 ]                                 (11.69)

The similarity transformation is then

    M^−1 A M = (1/5) [ 1  −2 ] [ 5  2 ] [  1  2 ]   [ 1  0 ]
                     [ 2   1 ] [ 2  2 ] [ −2  1 ] = [ 0  6 ]       (11.70)
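The computation in Example 11.5 can be reproduced numerically. A minimal NumPy sketch of the similarity transformation (11.65):

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 2.0]])
M = np.array([[ 1.0, 2.0],
              [-2.0, 1.0]])        # modal matrix from (11.68)

# Similarity transformation M^-1 A M, eq. (11.65): the result is diagonal
# with the eigenvalues 1 and 6 on the diagonal, as found in (11.70).
D = np.linalg.inv(M) @ A @ M
print(D)
```

Because the eigenvectors have arbitrary magnitude, scaling any column of M changes M and M^−1 but leaves the diagonal result unchanged.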

11.1.3 Jordan Form

When matrix A is non-symmetric with repeated eigenvalues, there may not be enough
independent eigenvectors. (Be extra careful here. It depends on the matrix A. For some non-
symmetric matrices with repeated eigenvalues, there are still enough eigenvectors. But for
others, there may not be.) If there are not enough eigenvectors, it is impossible to construct
a modal matrix M to perform a similarity transformation resulting in a diagonal matrix.
Since similarity transformation is very useful, it is desirable to come up with something to
fake it as much as possible. That philosophy leads to the Jordan form to be explained in
detail below.

As a demonstration, let us consider the case when λ1 = λ2 but there is only one
independent eigenvector u1 (cf. Example 11.4). For λ1 and u1 , they satisfy

Au1 = λ1 u1 (11.71)

Now, let us cook up u2 such that

Au2 = λ2 u2 + u1 (11.72)

Note that u2 is NOT an eigenvector, and it does not satisfy a homogeneous equation. Also
please keep in mind that λ1 = λ2 .

By borrowing the vector u2, we can form the modal matrix M following (11.61). We can also repeat the derivation of the similarity transformation for this new modal matrix M as follows. Let us start with

    AM = A [u1, u2, . . . , un] = [Au1, Au2, . . . , Aun]        (11.73)

Since u2 satisfies (11.72) instead, equation (11.73) reduces to

    AM = [λ1 u1, λ2 u2 + u1, . . . , λn un]
                                 [ λ1   1  · · ·   0 ]
       = [u1, u2, . . . , un]    [  0  λ2  · · ·   0 ]           (11.74)
                                 [  ⋮    ⋮    ⋱    ⋮ ]
                                 [  0   0  · · ·  λn ]

where λ1 = λ2 is assumed. Note that the off-diagonal part now has a "1" appearing in the 1-2 element, resulting from the additional u1 term in (11.74). Through the definition of the modal matrix M in (11.61), equation (11.74) becomes

             [ λ1   1  · · ·   0 ]
    AM = M   [  0  λ2  · · ·   0 ]                               (11.75)
             [  ⋮    ⋮    ⋱    ⋮ ]
             [  0   0  · · ·  λn ]

or, equivalently,

                 [ λ1   1  · · ·   0 ]
    M^−1 A M =   [  0  λ2  · · ·   0 ]                           (11.76)
                 [  ⋮    ⋮    ⋱    ⋮ ]
                 [  0   0  · · ·  λn ]

The expression in (11.76) is known as a Jordan form.

This process can be generalized to repeated roots of multiplicity s. For example, if λ1 = λ2 = λ3 and there is only one independent eigenvector, one would use (11.71) and (11.72) to generate u1 and u2 as before. In addition, one would use

    A u3 = λ3 u3 + u2                                      (11.77)

to generate u3. Going through the same derivation, one would find that the Jordan form appears as

                 [ λ1   1   0  · · ·   0 ]
                 [  0  λ2   1  · · ·   0 ]
    M^−1 A M =   [  0   0  λ3  · · ·   0 ]                 (11.78)
                 [  ⋮    ⋮    ⋮    ⋱    ⋮ ]
                 [  0   0   0  · · ·  λn ]

Example 11.6 Let us use the matrix A in Example 11.4 to demonstrate the Jordan form. Recall that

    A = [  1    1 ]
        [ −1   −1 ]                                        (11.79)

and the eigensolutions are

    λ1 = λ2 = 0,   u1 = [  1 ]
                        [ −1 ]                             (11.80)

Since there is only one independent eigenvector, we will select a second independent vector u2 such that

    A u2 = λ2 u2 + u1                                      (11.81)

Let us assume that u2 takes the form of

    u2 = [ x1 ]
         [ x2 ]                                            (11.82)

Then (11.81) reduces to

    [  1    1 ] [ x1 ]   [  1 ]
    [ −1   −1 ] [ x2 ] = [ −1 ]                            (11.83)

or

     x1 + x2 = 1
    −x1 − x2 = −1                                          (11.84)

Note that the two equations in (11.84) are the same, and there are infinitely many solutions to (11.84). Therefore, we pick a simple solution

    u2 = [ 1 ]
         [ 0 ]                                             (11.85)

With u1 from (11.80) and u2 from (11.85), the modal matrix is

    M = [  1  1 ]
        [ −1  0 ]                                          (11.86)

and its inverse is

    M^−1 = [ 0  −1 ]
           [ 1   1 ]                                       (11.87)

The similarity transformation is then

    M^−1 A M = [ 0  −1 ] [  1    1 ] [  1  1 ]   [ 0  1 ]
               [ 1   1 ] [ −1   −1 ] [ −1  0 ] = [ 0  0 ]        (11.88)

In this Jordan form, the diagonal elements are the eigenvalues (i.e., 0), and the number "1" appears in the 1-2 element.
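The construction in Example 11.6 is easy to verify numerically. A NumPy sketch, using M built from the eigenvector (11.80) and the generalized vector (11.85):

```python
import numpy as np

A = np.array([[ 1.0,  1.0],
              [-1.0, -1.0]])
M = np.array([[ 1.0, 1.0],
              [-1.0, 0.0]])        # [u1, u2] from (11.80) and (11.85)

# The similarity transformation now yields the Jordan form (11.88):
# eigenvalues (0, 0) on the diagonal and a "1" in the 1-2 element.
J = np.linalg.inv(M) @ A @ M
print(J)
```

For larger matrices, a computer algebra system can automate the construction of the generalized vectors; for instance, SymPy's `Matrix.jordan_form()` returns both the transformation matrix and the Jordan form.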

Figure 11.4: Summary of homogeneous and particular solutions

11.2 Solution Structures of State Equations

A well-posed state equation takes the following form:

    ẋ(t) = A x(t) + B u(t),   x(0) = x0                    (11.89)

Note that an initial condition x(0) = x0 is imposed. Since the state equation (11.89) is linear, the complete solution can be split into a homogeneous solution xh(t) and a particular solution xp(t), i.e.,

    x(t) = xh(t) + xp(t)                                   (11.90)

Specifically, the homogeneous solution xh(t) satisfies

    ẋh(t) = A xh(t),   xh(0) = x0                          (11.91)

and the particular solution xp(t) satisfies

    ẋp(t) = A xp(t) + B u(t),   xp(0) = 0                  (11.92)

The philosophy is to assign the initial conditions to the homogeneous solution xh(t) and leave the forcing term B u(t) to the particular solution xp(t). The table in Fig. 11.4 summarizes the philosophy of the homogeneous solution xh(t) and the particular solution xp(t). The homogeneous solution will be explained in Sections 11.3 and 11.4. The particular solution will be explained in Section 11.5.
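The split (11.90)-(11.92) can be demonstrated numerically by integrating the three initial value problems separately and checking that the pieces add up. A sketch using SciPy's `solve_ivp`; the second-order system, step input, and initial condition below are arbitrary choices for illustration, not taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# An illustrative 2-state system (arbitrary choice).
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
u = lambda t: np.array([1.0])      # unit-step input
x0 = np.array([1.0, -1.0])

t_eval = np.linspace(0.0, 5.0, 50)
kw = dict(t_eval=t_eval, rtol=1e-10, atol=1e-12)

# Complete solution (11.89), homogeneous part (11.91), particular part (11.92).
full  = solve_ivp(lambda t, x: A @ x + B @ u(t), (0, 5), x0,          **kw)
homog = solve_ivp(lambda t, x: A @ x,            (0, 5), x0,          **kw)
part  = solve_ivp(lambda t, x: A @ x + B @ u(t), (0, 5), np.zeros(2), **kw)

# By linearity, x(t) = xh(t) + xp(t), eq. (11.90).
print(np.allclose(full.y, homog.y + part.y, atol=1e-6))
```

The check relies only on linearity, so any A, B, input u(t), and initial condition would work equally well.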

11.3 Homogeneous Solution: Eigenvector Expansion

There are many ways to find the homogeneous solution xh(t) satisfying

    ẋh(t) = A xh(t),   xh(0) = x0                          (11.93)

One of them is the method of eigenvector expansion. In this method, the homogeneous solution is assumed to take the form of

    xh(t) = v e^{λt}                                       (11.94)

where v and λ are an unknown constant vector and an unknown scalar, respectively. Substitution of (11.94) into (11.93) results in

    λ v e^{λt} = A v e^{λt}                                (11.95)

or

    A v = λ v                                              (11.96)

Therefore, the unknown vector v and scalar λ in (11.94) turn out to be an eigenvector and an eigenvalue of A, respectively.

Let us assume for now that there are enough independent eigenvectors u1, u2, . . . , un to go with the eigenvalues λ1, λ2, . . . , λn. Then the homogeneous solution will be a linear combination of all possible solutions, i.e.,

    xh(t) = c1 u1 e^{λ1 t} + c2 u2 e^{λ2 t} + · · · + cn un e^{λn t}       (11.97)
where c1, c2, . . . , cn are unknown coefficients to be determined from the initial condition in (11.93). Equation (11.97) can further be arranged as

                                 [ c1 e^{λ1 t} ]
    xh(t) = [u1, u2, . . . , un] [ c2 e^{λ2 t} ]           (11.98)
                                 [      ⋮      ]
                                 [ cn e^{λn t} ]

or, equivalently,

              [ e^{λ1 t}     0      · · ·     0      ] [ c1 ]
    xh(t) = M [    0      e^{λ2 t}  · · ·     0      ] [ c2 ]      (11.99)
              [    ⋮          ⋮       ⋱       ⋮      ] [  ⋮ ]
              [    0          0     · · ·  e^{λn t}  ] [ cn ]

where the definition of the modal matrix in (11.61) has been used. To determine c1, c2, . . . , cn, let us impose the initial condition in (11.93) on (11.99) and obtain

                [ c1 ]
    xh(0) = M   [ c2 ] = x0                                (11.100)
                [  ⋮ ]
                [ cn ]

In other words,

    [ c1 ]
    [ c2 ] = M^−1 x0                                       (11.101)
    [  ⋮ ]
    [ cn ]

Finally, substitution of (11.101) back into (11.99) results in

               [ e^{λ1 t}     0      · · ·     0      ]
    xh(t) = M  [    0      e^{λ2 t}  · · ·     0      ]  M^−1 x0           (11.102)
               [    ⋮          ⋮       ⋱       ⋮      ]
               [    0          0     · · ·  e^{λn t}  ]

The explicit expression for xh(t) in (11.102) indicates that the homogeneous solution is entirely determined by the eigenvalues and eigenvectors of the state matrix A. Another thing to note is that the eigenvalues are indeed the characteristic roots of the ordinary differential equation. As we know, a system can be modeled via an ordinary differential equation or a state equation. If the system is modeled via an ordinary differential equation, the homogeneous solution takes the form of Σ_{i=1}^{n} ci e^{λi t}, where ci is a constant to be determined from initial conditions and λi is a root of the characteristic equation. If the system is modeled via a state equation, the homogeneous solution takes the form of Σ_{i=1}^{n} ci ui e^{λi t}, where ci is a constant to be determined from initial conditions and λi is an eigenvalue. Since the homogeneous solution with a prescribed initial condition is unique, the roots of the characteristic equation of the ordinary differential equation must be the eigenvalues of the state matrix.

Example 11.7 Figure 11.5 shows a simple harmonic oscillator consisting of a mass m and a spring k.

Figure 11.5: A harmonic oscillator

Let the displacement of the mass from the equilibrium position be x(t). Then the motion of the simple harmonic oscillator is governed by

    ẍ(t) + ωn^2 x(t) = 0,   ωn = √(k/m)                    (11.103)

By defining the state variables

    x1 ≡ x(t),  x2 ≡ ẋ(t),  x ≡ [ x1 ]
                                 [ x2 ]                    (11.104)

the ordinary differential equation of motion (11.103) can be cast into the following state equation:

    ẋ(t) = [    0     1 ] x(t)
           [ −ωn^2    0 ]                                  (11.105)

To determine the homogeneous solution xh(t), the first step is to find the eigenvalues and eigenvectors. The eigenvalues satisfy

    |  −λ       1  | = λ^2 + ωn^2 = 0                      (11.106)
    | −ωn^2    −λ  |

Note that the equation (11.106) satisfied by the eigenvalues is exactly the characteristic equation of the ordinary differential equation (11.103). Solution of (11.106) leads to two eigenvalues

    λ1,2 = ±jωn                                            (11.107)

Moreover, the two corresponding eigenvectors are

    u1 = [  1  ] ,   u2 = [   1   ]
         [ jωn ]          [ −jωn  ]                        (11.108)

Hence we can form the modal matrix as

    M ≡ [u1, u2] = [  1      1   ]
                   [ jωn   −jωn  ]                         (11.109)

and its inverse is

    M^−1 = −(1/(2jωn)) [ −jωn   −1 ]
                       [ −jωn    1 ]                       (11.110)

According to (11.102), the homogeneous solution xh(t) is

    xh(t) = −(1/(2jωn)) [  1      1   ] [ e^{jωn t}      0       ] [ −jωn   −1 ]
                        [ jωn   −jωn  ] [     0      e^{−jωn t}  ] [ −jωn    1 ] x0        (11.111)

By multiplying the second and third matrices, one obtains

    xh(t) = −(1/(2jωn)) [  1      1   ] [ −jωn e^{jωn t}    −e^{jωn t}   ]
                        [ jωn   −jωn  ] [ −jωn e^{−jωn t}    e^{−jωn t}  ] x0              (11.112)

By multiplying the two matrices in (11.112), one obtains

    xh(t) = −(1/(2jωn)) [ −jωn (e^{jωn t} + e^{−jωn t})      −e^{jωn t} + e^{−jωn t}       ]
                        [ ωn^2 (e^{jωn t} − e^{−jωn t})      −jωn (e^{jωn t} + e^{−jωn t}) ] x0    (11.113)

Recalling the definitions of cos θ and sin θ in terms of complex exponentials,

    cos θ ≡ (e^{jθ} + e^{−jθ})/2,   sin θ ≡ (e^{jθ} − e^{−jθ})/(2j)        (11.114)

one can reduce (11.113) to

    xh(t) = [    cos ωn t      (1/ωn) sin ωn t ]
            [ −ωn sin ωn t        cos ωn t     ] x0        (11.115)
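The closed form (11.115) can be checked against a general-purpose routine. A sketch using SciPy's `expm`, which computes the matrix exponential e^{At} (the connection between e^{At} and the homogeneous solution is made precise in the next section); the values of ωn and x0 below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

wn = 3.0                                  # natural frequency (arbitrary value)
A = np.array([[0.0,      1.0],
              [-wn**2,   0.0]])           # state matrix of (11.105)
x0 = np.array([0.5, 0.0])                 # initial displacement, zero velocity

for t in (0.1, 1.0, 2.5):
    # Closed-form homogeneous solution (11.115).
    closed_form = np.array([[np.cos(wn*t),      np.sin(wn*t)/wn],
                            [-wn*np.sin(wn*t),  np.cos(wn*t)]]) @ x0
    # Matrix-exponential solution e^{A t} x0 agrees at every t.
    print(np.allclose(closed_form, expm(A*t) @ x0))
```

Note that the closed form is purely real even though the derivation passed through complex eigenvalues and eigenvectors; the complex parts cancel exactly via (11.114).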

Now what happens when there are not enough eigenvectors, i.e., when A is non-symmetric with repeated eigenvalues? Again, let me use the case λ1 = λ2 to demonstrate the solution procedure below.

According to (11.96), u1 e^{λ1 t} is an independent solution to the homogeneous equation (11.93). A second independent homogeneous solution is not available, because a second independent eigenvector does not exist. In this case, let me guess a second independent solution in the form of

    u1 t e^{λ1 t} + u2 e^{λ2 t},   λ1 = λ2                 (11.116)

where u1 is the known independent eigenvector whereas u2 is an unknown vector to be determined. The solution guessed in (11.116) is a more elaborate form of the one often used for ordinary differential equations. In solving ordinary differential equations, we often throw in a t to a homogeneous solution when there are repeated roots. It is the same trick in (11.116), except that an unknown u2 is also kept in (11.116) to allow more flexibility.

Since (11.116) is a homogeneous solution we guess, it must satisfy the homogeneous equation (11.93). Substitution of (11.116) into (11.93) results in

    u1 e^{λ1 t} + λ1 u1 t e^{λ1 t} + λ2 u2 e^{λ2 t} = A (u1 t e^{λ1 t} + u2 e^{λ2 t}) = λ1 u1 t e^{λ1 t} + A u2 e^{λ2 t}      (11.117)

Comparing the left and right sides of (11.117) and recalling λ1 = λ2, one obtains

    A u2 = λ2 u2 + u1                                      (11.118)

Basically, (11.118) shows how the vector u2 should be chosen. Note that (11.118) is exactly the same equation as (11.72), used to find the second vector needed to form a modal matrix. Again, u2 in (11.118) is not an eigenvector.

So now the homogeneous solution xh(t) takes the form of

    xh(t) = c1 u1 e^{λ1 t} + c2 (u1 t e^{λ1 t} + u2 e^{λ2 t}) + · · · + cn un e^{λn t}        (11.119)

where u2 is the guessed vector satisfying (11.118) and c1, c2, . . . , cn are unknown coefficients to be determined from the initial condition in (11.93). Equation (11.119) can further be arranged as

                                 [ c1 e^{λ1 t} + c2 t e^{λ1 t} ]
    xh(t) = [u1, u2, . . . , un] [         c2 e^{λ2 t}         ]            (11.120)
                                 [              ⋮              ]
                                 [         cn e^{λn t}         ]

or, equivalently,

              [ e^{λ1 t}  t e^{λ1 t}  · · ·     0      ] [ c1 ]
    xh(t) = M [    0       e^{λ2 t}   · · ·     0      ] [ c2 ]            (11.121)
              [    ⋮           ⋮        ⋱       ⋮      ] [  ⋮ ]
              [    0           0      · · ·  e^{λn t}  ] [ cn ]

where the definition of the modal matrix in (11.61) has been used. To determine c1, c2, . . . , cn, let us impose the initial condition in (11.93) on (11.121) and obtain

                [ c1 ]
    xh(0) = M   [ c2 ] = x0                                (11.122)
                [  ⋮ ]
                [ cn ]

In other words,

    [ c1 ]
    [ c2 ] = M^−1 x0                                       (11.123)
    [  ⋮ ]
    [ cn ]

Finally, substitution of (11.123) back into (11.121) results in

               [ e^{λ1 t}  t e^{λ1 t}  · · ·     0      ]
    xh(t) = M  [    0       e^{λ2 t}   · · ·     0      ]  M^−1 x0         (11.124)
               [    ⋮           ⋮        ⋱       ⋮      ]
               [    0           0      · · ·  e^{λn t}  ]

What happens when λ1 = λ2 = λ3? One can prove that the homogeneous solution takes the form of

               [ e^{λ1 t}  t e^{λ1 t}  (1/2) t^2 e^{λ1 t}  · · ·     0      ]
               [    0       e^{λ1 t}       t e^{λ1 t}      · · ·     0      ]
    xh(t) = M  [    0           0           e^{λ1 t}       · · ·     0      ]  M^−1 x0
               [    ⋮           ⋮               ⋮             ⋱       ⋮      ]
               [    0           0               0          · · ·  e^{λn t}  ]

As one can see, the upper triangular elements above the diagonal fill up for the repeated roots.

Example 11.8 In this example, we build on Example 11.4 and Example 11.6 to find the homogeneous solution of

    ẋ(t) = [  1    1 ] x(t),   x(0) = x0
           [ −1   −1 ]                                     (11.125)

Since the state equation in (11.125) is homogeneous, its solution is indeed a homogeneous solution, i.e., x(t) = xh(t). To obtain the homogeneous solution, the first step is to find the eigenvalues and eigenvectors. According to Example 11.4 (cf. (11.80)),

    λ1 = λ2 = 0,   u1 = [  1 ]
                        [ −1 ]                             (11.126)

Since there are not enough eigenvectors, the process in Example 11.6 must be used, leading to the modal matrix

    M = [  1  1 ]
        [ −1  0 ]                                          (11.127)

and its inverse

    M^−1 = [ 0  −1 ]
           [ 1   1 ]                                       (11.128)

Substitution of (11.126), (11.127), and (11.128) into (11.124) gives the homogeneous solution xh(t):

    xh(t) = [  1  1 ] [ 1  t ] [ 0  −1 ]
            [ −1  0 ] [ 0  1 ] [ 1   1 ] x0                (11.129)

By multiplying the second and third matrices, one obtains

    xh(t) = [  1  1 ] [ t  −1 + t ]
            [ −1  0 ] [ 1    1    ] x0                     (11.130)

By multiplying the two matrices in (11.130), one obtains

    xh(t) = [ t + 1     t   ]
            [  −t    1 − t  ] x0                           (11.131)
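The result (11.131) has a neat interpretation in terms of the matrix exponential introduced in the next section: since A·A = 0 for this matrix, the exponential series truncates after the linear term, so e^{At} = I + At, which is exactly the matrix in (11.131). A short numerical check using SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 1.0,  1.0],
              [-1.0, -1.0]])

# A is nilpotent: A @ A = 0, so e^{At} = I + A t exactly, which matches the
# matrix [[1 + t, t], [-t, 1 - t]] found in (11.131).
print(np.allclose(A @ A, np.zeros((2, 2))))   # True
for t in (0.5, 2.0):
    print(np.allclose(expm(A*t), np.eye(2) + A*t))   # True
```

Nilpotency also explains why the solution grows only linearly in t rather than exponentially, even though the derivation used the generalized eigenvector machinery.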

As a closing remark, the method of eigenvectors for finding xh(t) has pros and cons. The biggest advantage is that the method is exact. Therefore, one can use it to calibrate other approximate methods. The downside is that the method may become difficult to implement. For example, if the system has order one million, we will need to find all one million eigenvalues and eigenvectors. Moreover, we need to invert a one-million-by-one-million modal matrix. The computational effort can be significant when the order is high. This motivates the method of the state transition matrix.

11.4 Homogeneous Solution: State Transition Matrix

The state transition matrix Φ(t) is defined through an infinite series

    Φ(t) ≡ I + At + A^2 t^2/2! + A^3 t^3/3! + · · ·        (11.132)

Note that each term in (11.132) is an n × n matrix. Therefore, Φ(t) defined in (11.132) is also an n × n matrix. Moreover, the series converges because the denominator takes the form of n! and grows very quickly with n. The infinite series may look a bit familiar, because the Taylor series of e^{at} takes the form of

    e^{at} = 1 + at + a^2 t^2/2! + a^3 t^3/3! + · · ·      (11.133)

Therefore, it is customary to denote the infinite series in (11.132) by e^{At}, i.e.,

    e^{At} ≡ I + At + A^2 t^2/2! + A^3 t^3/3! + · · ·      (11.134)

As a result, the state transition matrix is also known as

    Φ(t) = e^{At}                                          (11.135)
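The definition (11.132) suggests a direct, if naive, way to compute Φ(t): sum the series until the terms become negligible. A minimal sketch in Python with NumPy, using SciPy's `expm` as the reference (the matrix and time instant are arbitrary choices for illustration):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])       # an arbitrary 2x2 state matrix
t = 0.7

# Partial sums of the series (11.132): I + At + A^2 t^2/2! + ...
Phi = np.eye(2)
term = np.eye(2)
for k in range(1, 20):
    term = term @ (A * t) / k      # next term: A^k t^k / k!
    Phi = Phi + term

print(np.allclose(Phi, expm(A * t)))   # the truncated series matches e^{At}
```

In practice the truncated series is only reliable when the norm of At is modest; production routines such as `expm` use scaling-and-squaring with Padé approximation for robustness.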

The state transition matrix Φ(t) has many important properties. They are described one by one as follows.

The first property is

    Φ(0) = I                                               (11.136)

which can easily be seen from (11.132).

The second property is that Φ(t) satisfies

    (d/dt) Φ(t) = A Φ(t)                                   (11.137)

Equation (11.137) is very easy to prove. One can simply differentiate both sides of (11.132) to obtain

    (d/dt) Φ(t) = A + A^2 t/1! + A^3 t^2/2! + · · ·        (11.138)

Next, a matrix A can be factored out (on either side) of the right-hand side of (11.138), giving

    (d/dt) Φ(t) = A (I + At + A^2 t^2/2! + · · ·) = A Φ(t) = Φ(t) A        (11.139)

The third property is that each column of Φ(t) is a homogeneous solution of the state equation. Let

    Φ(t) ≡ [φ1(t), φ2(t), . . . , φn(t)]                   (11.140)

where φ1(t), φ2(t), . . . , φn(t) are the n columns of the state transition matrix Φ(t). Substitution of (11.140) into (11.137) results in

    [φ̇1, φ̇2, . . . , φ̇n] = A [φ1, φ2, . . . , φn] = [Aφ1, Aφ2, . . . , Aφn]      (11.141)

By comparing each column of the left and right sides of (11.141), one obtains

    φ̇i(t) = A φi(t),   i = 1, 2, . . . , n                 (11.142)

implying that each φi(t) satisfies the state equation and is a homogeneous solution thereof.

The fourth property is that all n columns of Φ(t) are linearly independent. If the columns were dependent, det Φ(t) would be identically zero. On the other hand, (11.136) indicates that det Φ(0) = 1. The clear contradiction proves that all n columns of Φ(t) are linearly independent.

These properties above lead to a very important result as follows. Since every column of
Φ(t) is a homogenous solution and all the columns are linearly independent, we can conclude
that the complete homogeneous solution xh is a linear combination of all the n columns of
Φ(t), i.e.,
x_h(t) = Σᵢ₌₁ⁿ cᵢ φᵢ(t)    (11.143)

where ci , i = 1, 2, . . . , n are constant coefficients to be determined from the initial condition.


Alternatively, (11.143) can be rewritten as

x_h(t) = [φ₁, φ₂, . . . , φₙ] [c₁, c₂, . . . , cₙ]ᵀ = Φ(t) [c₁, c₂, . . . , cₙ]ᵀ    (11.144)
356 CHAPTER 11. SOLUTION OF STATE EQUATIONS IN TIME DOMAIN

where (11.140) has been used. To determine cᵢ, i = 1, 2, . . . , n, let us impose the initial
condition

x_h(0) = Φ(0) [c₁, c₂, . . . , cₙ]ᵀ = x₀    (11.145)
Since Φ(0) = I (cf. (11.136)), we conclude from (11.145) that

[c₁, c₂, . . . , cₙ]ᵀ = x₀    (11.146)

Substitution of (11.146) back into (11.144) results in

xh (t) = Φ(t)x0 (11.147)

Equation (11.147) shows that Φ(t) serves as a vehicle to transition the system from an initial
state x0 to a present state xh (t). This is why Φ(t) is called the state transition matrix.

Note that the homogeneous solution x_h(t) is unique for an initial value problem.
Therefore, comparison of (11.147) with (11.102) leads to the conclusion that
 
           ⎡ e^{λ₁t}    0      ⋯     0     ⎤
Φ(t) = M ⎢    0     e^{λ₂t}   ⋯     0     ⎥ M⁻¹    (11.148)
           ⎢    ⋮        ⋮      ⋱     ⋮     ⎥
           ⎣    0        0      ⋯  e^{λₙt}  ⎦

when there are enough independent eigenvectors. Similarly, comparison of (11.147) with
(11.124) leads to the conclusion that
 
           ⎡ e^{λ₁t}  te^{λ₁t}  ⋯     0     ⎤
Φ(t) = M ⎢    0     e^{λ₂t}   ⋯     0     ⎥ M⁻¹    (11.149)
           ⎢    ⋮        ⋮      ⋱     ⋮     ⎥
           ⎣    0        0      ⋯  e^{λₙt}  ⎦

when there are not enough independent eigenvectors (e.g., when λ₁ = λ₂).



Although the state transition matrix Φ(t) is defined by an infinite series in (11.132),
there are many ways to calculate it. For example, (11.148) and (11.149) show how one could
use eigenvalues and eigenvectors to calculate Φ(t). The matrix exponential e^{A} is indeed
quite common in many software tools. In MATLAB, for example, e^{A} can be calculated
numerically via the command expm(A).
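As a quick sanity check (my own illustration, not part of the text), the truncated series (11.134) can be compared against a library matrix exponential in Python, where `scipy.linalg.expm` plays the role of MATLAB's expm:

```python
import numpy as np
from scipy.linalg import expm

# State matrix of the simple harmonic oscillator (cf. (11.151)); wn = 2 is
# an arbitrary illustrative value.
wn = 2.0
A = np.array([[0.0, 1.0], [-wn**2, 0.0]])
t = 0.7

# Truncated series definition of e^{At} from (11.134)
Phi_series = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (A * t) / k   # accumulates (At)^k / k!
    Phi_series += term

# Library evaluation of the matrix exponential
Phi_expm = expm(A * t)
print(np.allclose(Phi_series, Phi_expm))  # -> True
```

Thirty terms are far more than enough here; the series converges rapidly once k exceeds ‖At‖.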

Example 11.9 In this example, let us revisit the simple harmonic oscillator (cf. Fig. 11.5)
discussed in Example 11.7 by using the approach of the state transition matrix. The state
equation is reproduced here:

ẋ(t) = ⎡  0    1 ⎤ x(t)    (11.150)
        ⎣ −ωₙ²  0 ⎦

Specifically,

A ≡ ⎡  0    1 ⎤    (11.151)
     ⎣ −ωₙ²  0 ⎦

" #" # " # " #


0 1 0 1 −ωn2 0 1 0
A2 = 2 2
= = −ωn2 (11.152)
−ωn 0 −ωn 0 0 −ωn2 0 1

" #" # " #


1 0 0 1 0 1
A3 = −ωn2 2
= −ωn2 2
(11.153)
0 1 −ωn 0 −ωn 0

" #" # " #


0 1 0 1 1 0
A4 = −ωn2 = ωn4 (11.154)
−ωn2 0 −ωn2 0 0 1

" #" # " #


1 0 0 1 0 1
A5 = ωn4 = ωn4 (11.155)
0 1 −ωn2 0 −ωn2 0

············

By substituting (11.151) through (11.155) back into (11.132), one would obtain

Φ(t) = I + At − ωₙ² I t²/2! − ωₙ² A t³/3! + ωₙ⁴ I t⁴/4! + ωₙ⁴ A t⁵/5! + ⋯    (11.156)

where A is the matrix in (11.151).

or, equivalently,

        ⎡ 1 − ωₙ²t²/2! + ωₙ⁴t⁴/4! − ⋯       t − ωₙ²t³/3! + ωₙ⁴t⁵/5! − ⋯  ⎤
Φ(t) = ⎢                                                                  ⎥    (11.157)
        ⎣ −ωₙ²t + ωₙ⁴t³/3! − ωₙ⁶t⁵/5! + ⋯   1 − ωₙ²t²/2! + ωₙ⁴t⁴/4! − ⋯  ⎦
Recall that the Taylor expansions of sin θ and cos θ are

sin θ = θ − θ³/3! + θ⁵/5! − ⋯ ,   cos θ = 1 − θ²/2! + θ⁴/4! − ⋯    (11.158)
Through use of (11.158), the state transition matrix Φ(t) in (11.157) is reduced to
" #
1
cos ωn t ωn
sin ωn t
Φ(t) = (11.159)
−ωn sin ωn t cos ωn t

which is the same as (11.115).
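The closed form (11.159) is easy to verify numerically. The following Python sketch (an illustration, not part of the text; ωₙ = 3 is an arbitrary choice) checks it against scipy's matrix exponential at several times:

```python
import numpy as np
from scipy.linalg import expm

wn = 3.0  # natural frequency (arbitrary test value)
A = np.array([[0.0, 1.0], [-wn**2, 0.0]])

for t in (0.0, 0.25, 1.3):
    # Closed-form state transition matrix from (11.159)
    Phi_closed = np.array([
        [np.cos(wn * t),        np.sin(wn * t) / wn],
        [-wn * np.sin(wn * t),  np.cos(wn * t)],
    ])
    assert np.allclose(expm(A * t), Phi_closed)
print("closed form matches expm(A t)")
```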

Finally, it is worth noting that each column in Φ(t) corresponds to a homogeneous


solution subjected to a very specific initial condition. Consider an initial condition
 
x₀ = [0, ⋯ , 0, 1, 0, ⋯ , 0]ᵀ    (11.160)

where only the i-th element is 1 and all other elements are zero. Substitution of (11.160)
into (11.147) results in
 
x_h(t) = [φ₁(t), φ₂(t), . . . , φₙ(t)] [0, ⋯ , 0, 1, 0, ⋯ , 0]ᵀ = φᵢ(t)    (11.161)

This method is often used to generate Φ(t) one column at a time. As an example, one
type of instability is called parametric resonance. A common parametrically excited system is
governed by a linear differential equation with periodic stiffness of period T . Riding a child’s
swing is a classic example. When riding a swing, we raise the center of gravity periodically
thus changing the stiffness of the system. The stability of such a system depends on Φ(T )
where T is the period of the stiffness change. One can use the special initial condition in
(11.160) to generate Φ(T ) one column at a time.
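The column-at-a-time idea can be sketched numerically. The sketch below (a minimal illustration; the solver choice and tolerances are my own assumptions) integrates the state equation from each unit initial condition (11.160) and assembles Φ(T), then checks the result against the matrix exponential. For a genuinely time-periodic stiffness the expm comparison would not apply, but the column-by-column construction is identical.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

wn = 2.0
A = np.array([[0.0, 1.0], [-wn**2, 0.0]])
T = 1.0  # the time at which Phi is sampled

# Build Phi(T) one column at a time: integrate xdot = A x from each unit
# initial condition e_i, per (11.160)-(11.161).
n = A.shape[0]
Phi_T = np.zeros((n, n))
for i in range(n):
    x0 = np.zeros(n)
    x0[i] = 1.0
    sol = solve_ivp(lambda t, x: A @ x, (0.0, T), x0, rtol=1e-10, atol=1e-12)
    Phi_T[:, i] = sol.y[:, -1]  # final state is the i-th column of Phi(T)

print(np.allclose(Phi_T, expm(A * T), atol=1e-7))  # -> True
```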

Example 11.10 Let us revisit the simple harmonic oscillator in Example 11.7 (cf. Fig. 11.5).
We want to use the specific initial conditions (11.160) to obtain the state transition matrix.

First of all, let us reproduce the formulation below. The ordinary differential equation
is

ẍ(t) + ωₙ² x(t) = 0,   ωₙ = √(k/m)    (11.162)
By defining the following state variables

x₁ ≡ x(t),   x₂ ≡ ẋ(t),   x ≡ [x₁, x₂]ᵀ    (11.163)

the ordinary differential equation of motion (11.162) can be cast into the following state
equation:

ẋ(t) = ⎡  0    1 ⎤ x(t)    (11.164)
        ⎣ −ωₙ²  0 ⎦

Moreover, the state transition matrix is

Φ(t) = [φ1 (t), φ2 (t)] (11.165)

To obtain φ₁(t), let us specify

x₀ = [1, 0]ᵀ    (11.166)
By comparing (11.166) with (11.163), one would know that the initial conditions in (11.166)
correspond physically to the following initial displacement and velocity

x(0) = 1, ẋ(0) = 0 (11.167)

The homogeneous solution of (11.162) subjected to initial conditions (11.167) is easily
obtained as

x(t) = cos ωₙt,   ẋ(t) = −ωₙ sin ωₙt    (11.168)

With the definition of state variables in (11.163), we obtain

φ₁(t) = [cos ωₙt, −ωₙ sin ωₙt]ᵀ    (11.169)

To obtain φ₂(t), let us specify

x₀ = [0, 1]ᵀ    (11.170)
By comparing (11.170) with (11.163), one would know that the initial conditions in (11.170)
correspond physically to the following initial displacement and velocity

x(0) = 0, ẋ(0) = 1 (11.171)

The homogeneous solution of (11.162) subjected to initial conditions (11.171) is easily
obtained as

x(t) = (1/ωₙ) sin ωₙt,   ẋ(t) = cos ωₙt    (11.172)

With the definition of state variables in (11.163), we obtain

φ₂(t) = [(1/ωₙ) sin ωₙt, cos ωₙt]ᵀ    (11.173)

Finally, substitution of (11.169) and (11.173) back into (11.165) results in

        ⎡  cos ωₙt       (1/ωₙ) sin ωₙt ⎤
Φ(t) = ⎢                                ⎥    (11.174)
        ⎣ −ωₙ sin ωₙt    cos ωₙt        ⎦
which is the same as (11.159).

11.5 Particular Solution

The particular solution xp (t) satisfies the state equation

ẋp (t) = Axp (t) + Bu(t), xp (0) = 0 (11.175)

To obtain the solution x_p(t), let us premultiply (11.175) by e^{−At} to obtain

e^{−At} ẋ_p(t) = e^{−At} A x_p(t) + e^{−At} B u(t)    (11.176)

According to (11.135) and (11.139),


d(e^{−At})/dt = −A e^{−At} = −e^{−At} A    (11.177)

With (11.177), we can reduce (11.176) to

e^{−At} (d x_p(t)/dt) + (d(e^{−At})/dt) x_p(t) = e^{−At} B u(t)    (11.178)

or, equivalently,

d( e^{−At} x_p(t) )/dt = e^{−At} B u(t)    (11.179)

Integration of (11.179) from t = 0 to the present time t leads to

e^{−At} x_p(t) − e^{−A·0} x_p(0) = ∫₀ᵗ e^{−Aτ} B u(τ) dτ    (11.180)

where a dummy integration variable τ is used. With the zero initial condition in (11.175),
one obtains

x_p(t) = e^{At} ∫₀ᵗ e^{−Aτ} B u(τ) dτ = ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ    (11.181)

Although (11.181) provides an analytical solution for x_p(t), it is not often used. Most
particular solutions are obtained numerically. Instead, the analytical solution in (11.181)
often serves as a benchmark for numerical solutions.
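As one such benchmark (my own minimal example, not from the text), the convolution integral (11.181) can be evaluated numerically for a scalar system ẋ = −a x + u with a unit-step input, whose exact particular solution is (1 − e^{−at})/a:

```python
import numpy as np
from scipy.linalg import expm

# Scalar benchmark of (11.181): A = [[-a]], B = [[1]], u(t) = 1 (unit step).
a = 1.5
A = np.array([[-a]])
B = np.array([[1.0]])
t = 2.0

# Numerical evaluation of the convolution integral (trapezoidal rule)
taus = np.linspace(0.0, t, 2001)
integrand = np.array([(expm(A * (t - tau)) @ B)[0, 0] for tau in taus])
xp = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(taus)) / 2.0)

exact = (1.0 - np.exp(-a * t)) / a
print(abs(xp - exact) < 1e-6)  # -> True
```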

11.6 Stability of a System

Stability of a system is a very important consideration for many practical applications,
because we want to know whether the system will remain stable (consider, e.g., the collapse
of the Tacoma Narrows Bridge). When there are no external excitations (i.e., looking at
homogeneous solutions), a system is said to be stable if the response of the system remains
bounded for all
lutions), a system is said to be stable if the response of the system remains bounded for all
initial conditions that are bounded. This type of stability is generally known as Lyapunov
stability. Otherwise, we will consider the system to be unstable. Stability can also be
classified into many finer classes (e.g., asymptotically stable), and the classification often
depends on the definition of these classes.

Since stability of a system is determined by its homogeneous solutions, analytical
solutions via eigenvector expansion, such as (11.102) and (11.124) reproduced below, provide
an effective tool to analyze the stability.
          ⎧   ⎡ e^{λ₁t}    0      ⋯     0     ⎤
          ⎪ M ⎢    0    e^{λ₂t}   ⋯     0     ⎥ M⁻¹ x₀   (distinct eigenvalues)
          ⎪   ⎢    ⋮       ⋮      ⋱     ⋮     ⎥
          ⎪   ⎣    0       0      ⋯  e^{λₙt}  ⎦
x_h(t) = ⎨                                                                     (11.182)
          ⎪   ⎡ e^{λ₁t} te^{λ₁t}  ⋯     0     ⎤
          ⎪ M ⎢    0    e^{λ₂t}   ⋯     0     ⎥ M⁻¹ x₀   (repeated eigenvalues)
          ⎪   ⎢    ⋮       ⋮      ⋱     ⋮     ⎥
          ⎩   ⎣    0       0      ⋯  e^{λₙt}  ⎦

As one can see, the only time-dependent terms in (11.182) are e^{λᵢt}, i = 1, 2, . . . , n.
Therefore, the stability depends on the eigenvalues λᵢ.

Figure 11.6 shows how an eigenvalue λᵢ affects the response e^{λᵢt} and thus the stability.
First of all, the eigenvalue λᵢ can be a real number or a complex number. If λᵢ is a real
number, there are three possible scenarios: (a) λᵢ is negative, (b) λᵢ is zero, and (c) λᵢ is
positive. They are described in detail as follows.

(a) λᵢ < 0. In this case, e^{λᵢt} exhibits an exponential decay; see Fig. 11.6. As t → ∞,
the response e^{λᵢt} goes to zero as well. The system is certainly Lyapunov stable. Moreover,
the system is asymptotically stable, because the response vanishes as t → ∞.

Figure 11.6: Effects of eigenvalues on stability

(b) λᵢ = 0. In this case, e^{λᵢt} will simply be a constant; see Fig. 11.6. The system is
Lyapunov stable. The system is also known as neutrally stable because the response is a
constant.

(c) λᵢ > 0. In this case, e^{λᵢt} grows exponentially; see Fig. 11.6. The growth is
monotonic, and the system is unstable. This type of instability with monotonic growth in
the response is known as divergence.

If λᵢ is a complex number, there are also three possible scenarios: (a) ℜ[λᵢ] is negative,
(b) ℜ[λᵢ] is zero, and (c) ℜ[λᵢ] is positive. They are described in detail as follows.

(a) ℜ[λᵢ] < 0. In this case, let λᵢ = σ + jω, where σ < 0. Then

e^{λᵢt} = e^{σt} (cos ωt + j sin ωt)    (11.183)

In (11.183), the function e^{σt} decays exponentially. In the meantime, cos ωt and sin ωt
oscillate. Therefore, the response is oscillatory with an exponential decay; see Fig. 11.6.
The system is asymptotically stable.

(b) ℜ[λᵢ] = 0. According to (11.183), the response will be oscillatory; see Fig. 11.6.
The system remains bounded for all time; therefore, the system is Lyapunov stable.

(c) ℜ[λᵢ] > 0. In this case, e^{λᵢt} grows exponentially and oscillates simultaneously;
see Fig. 11.6. This type of instability is usually called flutter.

All six stability scenarios of λᵢ can be plotted on a complex plane; see Fig. 11.7.
The abscissa is the real part of λᵢ and the ordinate is the imaginary part of λᵢ. One can
see that the origin corresponds to a neutrally stable state. The imaginary axis (except the
origin) corresponds to Lyapunov stability with an oscillatory motion. Any point on the left
half plane corresponds to an asymptotically stable state. In contrast, any point on the right
half plane indicates some sort of instability (e.g., divergence on the positive real axis and
flutter for the rest of the right half plane).

Since a system usually has multiple eigenvalues, the system's stability will depend on
all of them. The system will be stable if all eigenvalues have a non-positive real part,
i.e., ℜ[λᵢ] ≤ 0, ∀ i = 1, 2, . . . , n. If there exist one or more eigenvalues with a positive real
part, i.e., ∃ i such that ℜ[λᵢ] > 0, the system will be unstable. The same criteria also apply
to the characteristic roots of an ordinary differential equation, because the characteristic
roots are the eigenvalues. Characteristic roots are easier to obtain when the dynamics of a
system is governed by an ordinary differential equation.

Figure 11.7: Eigenvalue λᵢ on a complex plane and its relation to stability

Note that the stability criterion above (i.e., ℜ[λᵢ] ≤ 0, ∀ i = 1, 2, . . . , n) only applies
when all the eigenvalues are distinct. The criterion was derived from the solution e^{λᵢt}
in (11.182) for distinct eigenvalues. When there are repeated eigenvalues, the homogeneous
solution depends on both e^{λᵢt} and te^{λᵢt}. Therefore, the stability will be governed not
only by e^{λᵢt} but also by te^{λᵢt}.

When repeated eigenvalues are present, the stability depends on where the repeated
eigenvalues are located. If the repeated eigenvalues appear on the right half plane, the system
will be unstable, as in the case of distinct eigenvalues. If the repeated eigenvalues appear
on the left half plane, the system will remain stable, also as in the case of distinct
eigenvalues. This is because e^{λt} decays much faster than t grows; as a result,
lim_{t→∞} t e^{λt} = 0 when λ < 0 or ℜ[λ] < 0. If the repeated eigenvalues appear on the
imaginary axis, however, they will indeed destabilize the system due to the presence of
te^{λt} in (11.182): e^{λt} remains finite (e.g., constant or oscillatory), so te^{λt} becomes
unbounded, resulting in instability.

Example 11.11 Figure 11.8 is a reproduction of the flagship example shown in
Example 4.4. What is the system's stability? The state equation of the system is
 d ⎡ x₁ ⎤   ⎡ −k/B   k/B    0 ⎤ ⎡ x₁ ⎤   ⎡  0  ⎤
 ─ ⎢ x₂ ⎥ = ⎢   0     0     1 ⎥ ⎢ x₂ ⎥ + ⎢  0  ⎥ Fₛ(t)    (11.184)
dt ⎣ v₂ ⎦   ⎣  k/m  −k/m    0 ⎦ ⎣ v₂ ⎦   ⎣ 1/m ⎦

Eigenvalues λ of the system satisfy

⎪ −k/B − λ    k/B     0  ⎪
⎪     0       −λ      1  ⎪ = 0    (11.185)
⎪    k/m     −k/m    −λ  ⎪

Figure 11.8: Stability of the flagship example

or

−λ ( λ² + (k/B)λ + k/m ) = 0    (11.186)

The eigenvalues are

λ = 0,   −(1/2) [ k/B ± √( (k/B)² − 4(k/m) ) ]    (11.187)

Since no eigenvalue has a positive real part, the system is stable.
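This eigenvalue test is easy to automate. The sketch below (the parameter values are my own illustrative choices, not from the text) builds the state matrix of (11.184) and confirms that no eigenvalue has a positive real part:

```python
import numpy as np

# Eigenvalue stability check for the flagship example of Example 11.11.
# Parameter values are illustrative.
k, B, m = 2.0, 3.0, 1.0
A = np.array([
    [-k / B,  k / B, 0.0],
    [0.0,     0.0,   1.0],
    [k / m,  -k / m, 0.0],
])
eigvals = np.linalg.eigvals(A)
print(np.all(eigvals.real <= 1e-12))  # no eigenvalue with positive real part
```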

Example 11.12 This example is to revisit the inverted pendulum in the gravitational field
considered in Example 9.1, which is reproduced in Figure 11.9. The inverted pendulum has
mass m and length l. Moreover, the pendulum is supported through a torsional spring with
stiffness k at the hinge point. The angular position is measured by the angle θ from the
vertical upward position. The equation of motion is derived in (9.39) and is reproduced
below

ml²θ̈ + kθ − mgl sin θ = 0    (11.188)
The system is obviously a nonlinear system because of the presence of sin θ. Also, θ = 0 is
always an equilibrium position, because θ = 0 satisfies

kθ − mgl sin θ = 0 (11.189)

What is the stability of the inverted pendulum around the equilibrium position θ = 0?

For motion around θ = 0, θ will be small and sin θ ≈ θ. Hence, (11.188) is reduced to

ml²θ̈ + (k − mgl)θ = 0    (11.190)

Let me use two different ways to analyze stability of the inverted pendulum as follows.

Figure 11.9: Stability of an inverted pendulum around the equilibrium position θ = 0

Method 1. Characteristic Roots. By substituting θ = Ce^{λt} into (11.190), one
would obtain the following equation governing the characteristic roots λ

ml²λ² + k − mgl = 0    (11.191)

When k > mgl, the characteristic roots are

λ = ± j √( (k − mgl)/(ml²) )    (11.192)
Since both roots are pure imaginary, the inverted pendulum is Lyapunov stable when k >
mgl. Now let us consider the case k < mgl. In this case, the characteristic roots are
λ = ± √( (mgl − k)/(ml²) )    (11.193)
Since one root is real and positive, the inverted pendulum is unstable around θ = 0 when
k < mgl.

Method 2. Eigenvalues. Now, let us first convert the equation of motion (11.190)
into a state equation. First, let us assume the following state variables

x1 ≡ θ, x2 ≡ θ̇ (11.194)

Then the equation of motion (11.190) is rewritten as

 d ⎡ x₁ ⎤   ⎡      0        1 ⎤ ⎡ x₁ ⎤
 ─ ⎢    ⎥ = ⎢                 ⎥ ⎢    ⎥    (11.195)
dt ⎣ x₂ ⎦   ⎣ (mgl−k)/ml²   0 ⎦ ⎣ x₂ ⎦

Eigenvalues λ then satisfy

⎪     −λ        1  ⎪
⎪                  ⎪ = λ² − (mgl − k)/(ml²) = 0    (11.196)
⎪ (mgl−k)/ml²  −λ  ⎪

Note that (11.196) is virtually the same as (11.191) leading to the same conclusions on
stability of the inverted pendulum.

One interesting case to look into is k = mgl. In this case, the two eigenvalues are

λ = 0, 0    (11.197)

They are repeated and lie at the origin of the complex plane. Because repeated eigenvalues
on the imaginary axis give rise to the unbounded term te^{λt}, the system is unstable.
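The two stiffness regimes of this example can be spot-checked numerically (a sketch with the illustrative values m = 1, l = 1, g = 9.81, which are my own choices):

```python
import numpy as np

# Eigenvalues of the linearized inverted pendulum, cf. (11.195).
m, l, g = 1.0, 1.0, 9.81

def eigs(k):
    A = np.array([[0.0, 1.0], [(m * g * l - k) / (m * l**2), 0.0]])
    return np.linalg.eigvals(A)

stiff = eigs(k=2.0 * m * g * l)   # k > mgl: pure imaginary roots -> Lyapunov stable
soft = eigs(k=0.5 * m * g * l)    # k < mgl: one positive real root -> unstable
print(np.all(np.abs(stiff.real) < 1e-9), np.any(soft.real > 0))
```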
Chapter 12

Transfer Functions

The approach of transfer functions is an extremely powerful tool to analyze input excitations,
system characteristics (e.g., stability), and forced response of a linear system. It also leads to
many important concepts, such as frequency response functions, frequency domain analyses,
and frequency spectrum analyses. The entire theory of classical control is built around the
concept of transfer functions.

There are two ways to explain the concept of transfer functions. The first way is to
use Laplace Transforms. The advantage is its mathematical rigor. Also, it will cover a
very large class of input functions. The downside, however, is that substantial mathematical
preparation is needed. Moreover, it is difficult to extract physical insight from this approach.

The second way is to limit the input functions to the class of e^{st}, where s is a complex
number. The advantage is its simplicity and ease. It is also easier to obtain physical insights
through this approach. The disadvantage is that it is not clear whether or not the method
of transfer functions will work for input functions other than the form of e^{st}.

In this chapter, I will follow the second way to explain the method of transfer functions
using input functions that take the form of e^{st}. Since transfer functions are generally complex
functions, I will first review some fundamentals of complex numbers. Then I will explain the
definition and implication of transfer functions.


Figure 12.1: A complex number on a complex plane

12.1 A Review of Complex Numbers

12.1.1 Definition

A complex number z is defined as

z = x + jy (12.1)


where j ≡ −1 is the imaginary unit, and x and y are the real and imaginary parts of the
complex number z, respectively. Alternatively, the real part and imaginary parts are written
as

x = <[z], y = =[z] (12.2)

Figure 12.1 shows a complex plane describing the collection of all complex numbers.
The abscissa represents the real part of a complex number while the ordinate represents the
imaginary part of a complex number. On the complex plane, a complex number z = x + jy
can then be represented by a point z whose abscissa is x and ordinate is y; see Fig. 12.1.
The same complex number can also be described via a vector from the origin to the point z.

12.1.2 Polar Form

The polar form is best described via the vector from the origin to point z. Let the vector
have the magnitude (i.e., the length) r and make an angle θ with the real axis; see Fig. 12.1.
Then basic trigonometry leads to

x = r cos θ, y = r sin θ (12.3)

As a result,
z = x + jy = r (cos θ + j sin θ) (12.4)
Moreover,

r = √(x² + y²) ≡ |z| ,   θ = tan⁻¹(y/x) ≡ ∠z    (12.5)
where r is the modulus and θ is the argument of the complex number. Physically, r is the
magnitude of the complex number, whereas θ is its phase angle. Note that the modulus is
the distance from the origin to the point z on the complex plane; therefore, r is always
non-negative. Another thing to note is that the argument is multi-valued, i.e., it takes the
form θ ± 2nπ, where n is an integer. Moreover, both the modulus and the argument are
real numbers.

Example 12.1 Consider the complex number z = 1 + j plotted on the complex plane; see
Fig. 12.2. The magnitude, or the modulus, of z is then
r = √(1² + 1²) = √2    (12.6)

The phase angle, or the argument, is then


 
−1 1 π
θ = tan = ± 2nπ (12.7)
1 4
Note that both the modulus r and the argument θ are real quantities. Also, the argument
θ is multi-valued.
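The modulus and argument can be computed directly with Python's built-in complex type (an aside, not part of the text); `abs` returns r and `cmath.phase` returns the principal value of θ:

```python
import cmath

# Modulus and argument of z = 1 + j, matching Example 12.1.
# Python writes the imaginary unit as 1j.
z = 1 + 1j
r, theta = abs(z), cmath.phase(z)
print(r, theta)  # -> 1.4142135623730951 0.7853981633974483, i.e. sqrt(2) and pi/4
```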

Example 12.2 Consider the real number z = −1 plotted on the complex plane; see Fig. 12.3.
(Note that all real numbers are special cases of the complex number system.) The complex

Figure 12.2: Polar form of z = 1 + j
Figure 12.3: Polar form of z = −1

number is located on the negative real axis. The distance from the origin to the complex
number is 1; therefore

r = |z| = √( (−1)² + 0² ) = 1    (12.8)

This is consistent with the notion that the magnitude of z must be positive. Since the
complex number is on the negative real axis, its angle relative to the positive real axis is π.
Since the complex number is on the negative real axis, the phase angle, or the argument, is

θ = ∠z = π    (12.9)

where the principal value of θ is shown.

12.1.3 Multiplication

Multiplication is carried out term by term. Use j 2 = −1 to simplify the calculations.

Example 12.3 Consider two complex numbers z1 = 1 + j and z2 = 2 − j. Then

z1 z2 = (1 + j)(2 − j) = 2 + 2j − j − j 2 = 2 + j − (−1) = 3 + j (12.10)



12.1.4 Reciprocals

Reciprocals of a complex number have the complex number appear in the denominator.
To simplify, we multiply both the denominator and the numerator by the complex conjugate
of the complex number. Then the denominator becomes a real number. See the example
below.

Example 12.4 Consider z = 1/(1 − 2j). One needs to multiply the numerator and the
denominator by 1 + 2j, the complex conjugate of 1 − 2j, as follows

z = 1/(1 − 2j) = 1/(1 − 2j) × (1 + 2j)/(1 + 2j) = (1 + 2j)/(1² − (2j)²) = (1 + 2j)/5    (12.11)
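Python's complex type performs the multiplication of Example 12.3 and the reciprocal of Example 12.4 directly (an aside, not part of the text; `1j` is Python's notation for j):

```python
# Built-in complex arithmetic reproduces Examples 12.3 and 12.4.
z1, z2 = 1 + 1j, 2 - 1j
print(z1 * z2)        # -> (3+1j), as in (12.10)
print(1 / (1 - 2j))   # -> (0.2+0.4j), i.e. (1 + 2j)/5 as in (12.11)
```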

12.1.5 Complex Functions

When the real and imaginary parts are functions of time t, the complex number turns into a
complex function. At every time t, the complex function returns a complex number. In this
case, every notion discussed thus far (e.g., magnitude, phase, multiplication, and reciprocals)
remains valid.

Example 12.5 Consider the following complex function

z(t) = e−t cos 2t + je−t sin 2t (12.12)

The real and imaginary parts are

<[z(t)] = e−t cos 2t, =[z(t)] = e−t sin 2t (12.13)

Therefore, the magnitude (or modulus) is


r(t) = √( (e⁻ᵗ cos 2t)² + (e⁻ᵗ sin 2t)² ) = √( e⁻²ᵗ (cos² 2t + sin² 2t) ) = e⁻ᵗ    (12.14)

and the phase (or argument) is

θ(t) = tan⁻¹( (e⁻ᵗ sin 2t)/(e⁻ᵗ cos 2t) ) = tan⁻¹(tan 2t) = 2t    (12.15)

The reciprocal is then

1/z(t) = 1/(e⁻ᵗ cos 2t + je⁻ᵗ sin 2t) = eᵗ · 1/(cos 2t + j sin 2t)
       = eᵗ · 1/(cos 2t + j sin 2t) × (cos 2t − j sin 2t)/(cos 2t − j sin 2t)
       = eᵗ (cos 2t − j sin 2t)    (12.16)

12.1.6 Complex Elementary Functions

Many elementary functions, such as exponential and trigonometric functions, can also
entertain a complex independent variable z. The most basic elementary function is the
exponential function e^z. It is defined via the infinite series

e^z ≡ 1 + z + z²/2! + z³/3! + ⋯    (12.17)
This is the same series used to define a real exponential function shown in (11.133). Based
on the exponential function, one can define sine and cosine functions as

sin z ≡ (e^{jz} − e^{−jz})/(2j) = z − z³/3! + z⁵/5! − ⋯    (12.18)

and

cos z ≡ (e^{jz} + e^{−jz})/2 = 1 − z²/2! + z⁴/4! − ⋯    (12.19)
From (12.18) and (12.19), we obtain

cos z + j sin z = (e^{jz} + e^{−jz})/2 + j (e^{jz} − e^{−jz})/(2j) = e^{jz}    (12.20)

Equation (12.20) is known as Euler's formula. It is one of the most important and useful
formulas involving complex numbers. An immediate consequence of (12.20) arises when z takes the value
of θ (i.e., the phase angle). In this case, (12.20) becomes

ejθ = cos θ + j sin θ (12.21)

Similarly, if z = −θ, (12.20) becomes

e−jθ = cos θ − j sin θ (12.22)

For the rest of the course, we will use (12.21) and (12.22) over and over again.

12.1.7 Polar Form Revisited

One important consequence of Euler's formula is the representation of a complex number in
polar form. Substitution of (12.21) into (12.4) results in

z = x + jy = r (cos θ + j sin θ) = rejθ (12.23)

The representation of a complex number in the form of rejθ is called the polar form. The
magnitude appears as the real number in front of the exponential function, while the phase
angle appears in the exponent multiplied by the imaginary unit j.

Example 12.6 Here are some examples of complex numbers or functions in polar form.

(a) Consider z = 1 + j.

z = 1 + j = √2 ( cos(π/4) + j sin(π/4) ) = √2 e^{j(π/4)}    (12.24)

where √2 is the magnitude and π/4 is the phase.

(b) Consider z(t) = e⁻ᵗ cos 2t + je⁻ᵗ sin 2t.

z(t) = e⁻ᵗ cos 2t + je⁻ᵗ sin 2t = e⁻ᵗ (cos 2t + j sin 2t) = e⁻ᵗ e^{j(2t)}    (12.25)

where e⁻ᵗ is the magnitude and 2t is the phase.



(c) Consider z = −1.

z = −1 = 1 · (cos π + j sin π) = 1 · e^{jπ}    (12.26)

where the magnitude is 1 and the phase is π. In the mathematical world, the expression

e^{jπ} = −1    (12.27)

from (12.26) is considered one of the most amazing identities, because it combines several
remarkable numbers, such as e, π, j, and 1, in a single expression.
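Euler's formula (12.21) and the identity (12.27) are easy to spot-check numerically (an aside, not part of the text):

```python
import cmath

# Check Euler's formula e^{j theta} = cos(theta) + j sin(theta) at one angle,
# and the identity e^{j pi} = -1 (up to floating-point rounding).
theta = 0.6
lhs = cmath.exp(1j * theta)
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
print(abs(lhs - rhs) < 1e-15)
print(cmath.exp(1j * cmath.pi))  # approximately -1 (tiny imaginary residue)
```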

12.1.8 Multiplication in Polar Form

As we all know, exponential functions handle multiplication very effectively. Also, the polar
form involves an exponential function; see (12.23). These two features make the polar form a
very desirable way to carry out multiplication operations of two complex numbers. Consider
two complex numbers z1 and z2 with their polar forms as

z1 = r1 ejθ1 , z2 = r2 ejθ2 (12.28)

Then the product of z₁ and z₂ is

z₁z₂ = r₁r₂ e^{j(θ₁+θ₂)}    (12.29)

The result in (12.29) is significant. The magnitude of a product of two complex numbers
is the product of the magnitudes of the two numbers, and the phase of the product is the
sum of their phase angles.

Example 12.7 Consider two complex numbers z₁ = 1 + j and z₂ = −1 + j. Find the
product z₁z₂.

(a) Method 1. Direct Multiplication:

z₁z₂ = (1 + j)(−1 + j) = −1 + j − j + j² = −2    (12.30)

(b) Method 2. Polar Form: First, z₁ and z₂ are represented in polar form (Fig. 12.4)

z₁ = √2 e^{j(π/4)},   z₂ = √2 e^{j(3π/4)}    (12.31)

Figure 12.4: Multiplication of complex number in the polar form

Therefore,

z₁z₂ = √2 · √2 · e^{j(π/4 + 3π/4)} = 2 e^{jπ} = −2    (12.32)

12.1.9 Reciprocals in Polar Form

Similarly, reciprocals can be handled via the polar form very effectively. Consider
z = re^{jθ}; its reciprocal will be

1/z = (1 · e^{j·0})/(r e^{jθ}) = (1/r) e^{j(0−θ)} = (1/r) e^{−jθ}    (12.33)
Therefore, the magnitude of the reciprocal is the reciprocal of the magnitude of the original
complex number. The phase of the reciprocal is the negative of the phase of the original
complex number.

Example 12.8 Find the reciprocal of z = 1 + j.

(a) Method 1. Direct Calculation:

1/z = 1/(1 + j) = 1/(1 + j) · (1 − j)/(1 − j) = (1 − j)/2    (12.34)

Figure 12.5: Reciprocal of a complex number in the polar form

(b) Method 2. Polar Form: First, z = 1 + j is represented in polar form (Fig. 12.5)

z = √2 e^{j(π/4)}    (12.35)

where the magnitude is √2 and the phase angle is π/4. Following (12.33), one obtains

1/z = (1/√2) e^{−j(π/4)} = (1/√2) ( cos(π/4) − j sin(π/4) ) = (1 − j)/2    (12.36)

12.2 What Is a Transfer Function?

In this section, I will first summarize the major results obtained for state equations and
motivate the need for an alternative way to understand or interpret particular solutions.
Then I will present a heuristic example to show how a transfer function can be defined to
help find and understand the particular solution of a dynamical system. Based on the
heuristic example, I will then formally introduce the definition of transfer functions.

12.2.1 Motivation

From Chapter 11, we know that the complete solution x(t) of a state equation consists of
two parts (cf. (11.90))

x(t) = xh (t) + xp (t) (12.37)

where x_h(t) is the homogeneous solution and x_p(t) is the particular solution. The
homogeneous solution x_h(t) is determined entirely by the eigenvalues and eigenvectors of
the state matrix A. Alternatively, x_h(t) can be obtained via the state transition matrix
e^{At}. The homogeneous solution x_h(t) also determines the stability of the system. The
particular solution x_p(t) can be obtained via an integral (cf. (11.181))

x_p(t) = ∫₀ᵗ e^{A(t−τ)} B u(τ) dτ    (12.38)

If one takes the statements above at face value, the problem is solved. In reality, that is
not the case. First of all, the integral in (12.38) is quite difficult to apply. Worse, the
integral in (12.38) does not provide any physical insight. We know that state-space
formulations are designed for systems of high order. Therefore, the solution of state
equations is often done numerically. In that case, how would one know whether the numerical
results obtained are correct and make good sense? How would one know whether or not it
is garbage in and garbage out? Moreover, a large number of parameter combinations must
be tried in a design process. That means the state equations will be solved over and over
again across a large parameter space, leading to a very time-consuming and expensive
process. If the designer has some physical insight into the response, the designer can quickly
narrow down the parameter space and make the numerical simulations significantly more
efficient.

For all these reasons, there is a strong need for an alternative way to visualize and
understand the input excitations, the system characteristics, and the particular solutions.
One alternative way is the method of transfer functions.

Figure 12.6: A spring-mass-damper system for Heuristic Example 1

12.2.2 Heuristic Example 1

In this example, I use a second-order system to demonstrate the concept of transfer functions.
The concept developed in this example is valid for systems of any order.

Figure 12.6 shows a second-order system consisting of a spring, a damper, and a point
mass. The spring has stiffness coefficient k, the damper has damping coefficient c, and the
point mass has mass m. The spring-mass-damper system is subjected to an external load
f (t). The displacement of the mass is x(t). In this example, f (t) serves as the input variable
and x(t) serves as the output variable. Moreover, let us consider the following parameters:
m = 1, c = 2, and k = 2. What is the response x(t) if the input excitation f (t) is


        ⎧ (a) 2
        ⎪ (b) 3e⁴ᵗ
f (t) = ⎨ (c) 2 cos 2t                    (12.39)
        ⎪ (d) e⁻²ᵗ sin 3t
        ⎩ (e) cos 2t − sin 2t
To obtain the response, one first derives the equation of motion as

mẍ + cẋ + kx = f (t) (12.40)

With m = 1, c = 2, and k = 2, the equation of motion is

ẍ + 2ẋ + 2x = f (t) (12.41)



As explained in Section 12.2.1, we are not interested in finding the particular solution using
the state equation approach (12.38). In this case, what alternative do we have?

Part I: Analyses of Input Excitations. The first clue is to note that the five input
functions in (12.39) can all be represented in the form F(s)e^{st}, where s is a complex number.
Here is why.

Case (a).
f (t) = 2 = 2e^{0·t}   ⇐⇒   f (t) = F(s)e^{st}    (12.42)
By comparing the left and right sides of ⇐⇒ in (12.42), one can easily conclude that

F (s) = 2, s=0 (12.43)

Case (b).
f (t) = 3e⁴ᵗ   ⇐⇒   f (t) = F(s)e^{st}    (12.44)
By comparing the left and right sides of ⇐⇒ in (12.44), one would obtain

F (s) = 3, s=4 (12.45)

Case (c). For the case of f (t) = 2 cos 2t, let us start with Euler's formula (12.21)
to obtain 2e^{j(2t)} = 2 cos 2t + j(2 sin 2t). Therefore

f (t) = 2 cos 2t = ℜ[2e^{j(2t)}]   ⇐⇒   f (t) = ℜ[F(s)e^{st}]    (12.46)

By comparing the left and right sides of ⇐⇒ in (12.46), one would obtain

F (s) = 2, s = 2j (12.47)

Please note that the parameter s is now a pure imaginary number.

Case (d). For the case of f (t) = e⁻²ᵗ sin 3t, let us start with

e^{a+jθ} = e^a · e^{jθ} = e^a (cos θ + j sin θ)    (12.48)

Therefore,

e^a sin θ = ℑ[e^{a+jθ}]    (12.49)

With a = −2t and θ = 3t, equation (12.49) is reduced to

f (t) = e^{−2t} sin 3t = Im[e^{−2t+j(3t)}] ⇐⇒ f (t) = Im[F (s)e^{st}]        (12.50)

By comparing the left and right sides of ⇐⇒ in (12.50), one would obtain

F (s) = 1, s = −2 + 3j (12.51)

Now the parameter s is a complex number.

Case (e). For the case of f (t) = cos 2t − sin 2t, let us start with

(1 + j) e^{j(2t)} = (1 + j)(cos 2t + j sin 2t) = (cos 2t − sin 2t) + j (cos 2t + sin 2t)        (12.52)

Therefore,

f (t) = cos 2t − sin 2t = Re[(1 + j) e^{j(2t)}] ⇐⇒ f (t) = Re[F (s)e^{st}]        (12.53)

By comparing the left and right sides of ⇐⇒ in (12.53), one would obtain

F (s) = 1 + j, s = 2j (12.54)

In this case, the parameter F (s) is a complex number and s is a pure imaginary number.

The moral of the analyses of the input functions is the following. The mathematical ex-
pression F (s)e^{st}, where F (s) and s are complex, describes a wide class of input excitations
often used in practical applications. If an input function does take the form F (s)e^{st}, one
can easily retrieve the complex parameters F (s) and s from the input function.
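As a numerical sanity check (not part of the original text), the identifications in cases (a) through (e) can be verified with a short script: each f (t) should equal the real or imaginary part of F (s)e^{st} at every sampled time. The dictionary layout and names below are illustrative choices.

```python
import cmath
import math

# Cases (a)-(e): (F, s, which part of F*e^{st} to take, reference f(t)).
# The pairings follow the identifications in (12.43), (12.45), (12.47),
# (12.51), and (12.54); the layout itself is just an illustration.
cases = {
    "a": (2,      0,       "re", lambda t: 2.0),
    "b": (3,      4,       "re", lambda t: 3 * math.exp(4 * t)),
    "c": (2,      2j,      "re", lambda t: 2 * math.cos(2 * t)),
    "d": (1,      -2 + 3j, "im", lambda t: math.exp(-2 * t) * math.sin(3 * t)),
    "e": (1 + 1j, 2j,      "re", lambda t: math.cos(2 * t) - math.sin(2 * t)),
}

def from_polar_form(F, s, part, t):
    """Rebuild f(t) as the real or imaginary part of F(s)e^{st}."""
    z = F * cmath.exp(s * t)
    return z.real if part == "re" else z.imag

# Largest mismatch between F(s)e^{st} and the original f(t) over a few times
max_err = max(
    abs(from_polar_form(F, s, part, t) - f(t))
    for (F, s, part, f) in cases.values()
    for t in (0.0, 0.3, 1.0, 2.5)
)
```

The mismatch is zero up to floating-point roundoff, confirming that all five inputs fit the F (s)e^{st} template.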

Part II: Analyses of the System. Since many input excitations take the form of
F (s)e^{st}, it does not make sense to solve (12.41) for its particular solution individually for
f (t) = 2, f (t) = 3e^{4t}, f (t) = 2 cos 2t, and so on. Instead, one should assume that the input
bears the general form

f (t) = F (s)e^{st}        (12.55)

where F (s) and s may be complex. Since the system is linear, the corresponding particular
solution will take the form

x(t) = X(s)e^{st}        (12.56)

where X(s) may be complex. Substitution of (12.55) and (12.56) into (12.40) results in

(ms² + cs + k) X(s)e^{st} = F (s)e^{st}        (12.57)

or

X(s) = 1/(ms² + cs + k) · F (s)        (12.58)

The ratio between X(s) and F (s), i.e.,

H(s) ≡ X(s)/F (s) = 1/(ms² + cs + k)        (12.59)

is known as the transfer function from input f (t) to output x(t).

The first thing to note is that H(s) involves the system parameters m, c, and k. Moreover,
the order of the derivatives in the equation of motion (12.40) (e.g., two time derivatives in
the mass term) is properly reflected in the order of the parameter s (e.g., s² for the mass
term). Therefore, the transfer function H(s) in (12.59) characterizes the dynamics of the
system.

Second, the transfer function H(s) in (12.59) may be used in multiple ways. During
the design stage, one could have a designed loading condition f (t) (and thus F (s) and s).
After the system is designed, the transfer function H(s) is known. Then (12.59) can help
a designer predict the response x(t). If a prototype is made, one can apply f (t) to the
prototype and measure the response x(t). Then equation (12.59) can be used to determine
the transfer function H(s) and to evaluate how closely the prototype reflects the designed
system.
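Evaluating H(s) at a complex s is easy to script. The sketch below (illustrative, not from the text) encodes (12.59) with the chapter's parameters m = 1, c = 2, k = 2 as defaults; Python's complex arithmetic handles s = 2j directly.

```python
def H(s, m=1.0, c=2.0, k=2.0):
    """Transfer function (12.59), H(s) = 1/(m s^2 + c s + k),
    evaluated at a real or complex value of s."""
    return 1.0 / (m * s**2 + c * s + k)

# The three evaluations used in cases (a), (b), and (c):
H0 = H(0)      # case (a), s = 0
H4 = H(4)      # case (b), s = 4
H2j = H(2j)    # case (c), s = 2j
```

Evaluating at s = 2j reproduces −(1 + 2j)/10 without any manual complex division.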

Part III: Analysis of Output Response. With the transfer function H(s) defined
in (12.59), one can predict the particular solution x(t) for the five cases of input loading
shown in (12.39). The process is explained as follows. Let us use m = 1, c = 2, and k = 2
as an example.

Case (a). f (t) = 2. According to (12.43), F (s) = 2 and s = 0. For this case,

H(s) = 1/(0² + 2 · 0 + 2) = 1/2        (12.60)

As a result,

X(s) = H(s)F (s) = (1/2) · 2 = 1        (12.61)

Finally, the particular solution is

x(t) = X(s)e^{st}|_{s=0} = 1 · e^{0·t} = 1        (12.62)

Case (b). f (t) = 3e^{4t}. According to (12.45), F (s) = 3 and s = 4. For this case,

H(s) = 1/(4² + 2 · 4 + 2) = 1/26        (12.63)

As a result,

X(s) = H(s)F (s) = (1/26) · 3 = 3/26        (12.64)

Finally, the particular solution is

x(t) = X(s)e^{st}|_{s=4} = (3/26) e^{4t}        (12.65)

Case (c). f (t) = 2 cos 2t = Re[F (s)e^{st}]. According to (12.47), F (s) = 2 and s = 2j. For
this case,

H(s) = 1/((2j)² + 2(2j) + 2) = 1/(−2 + 4j) = 1/(−2 + 4j) × (−2 − 4j)/(−2 − 4j) = −(1/10)(1 + 2j)        (12.66)

As a result,

X(s) = H(s)F (s) = −(1/10)(1 + 2j) · 2 = −(1/5)(1 + 2j)        (12.67)

Finally, the particular solution is

x(t) = Re[X(s)e^{st}]|_{s=2j} = Re[−(1/5)(1 + 2j)e^{j(2t)}]
     = Re[−(1/5)(1 + 2j)(cos 2t + j sin 2t)]
     = Re[−(1/5)(cos 2t − 2 sin 2t) − (j/5)(2 cos 2t + sin 2t)]
     = −(1/5)(cos 2t − 2 sin 2t)        (12.68)

The moral of this heuristic example is the following: the method of transfer functions
can be used to predict particular solutions of a system excited by a very wide class of input
functions. Moreover, transfer functions represent characteristics of a dynamical system.
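One can also confirm (12.68) directly: substituting the predicted particular solution back into ẍ + 2ẋ + 2x = 2 cos 2t should leave a vanishing residual. The finite-difference check below is a sketch; the step size and sample times are arbitrary choices, not from the text.

```python
import math

def x(t):
    """Particular solution (12.68) for f(t) = 2 cos 2t."""
    return -(math.cos(2 * t) - 2 * math.sin(2 * t)) / 5

def residual(t, h=1e-4):
    """xddot + 2*xdot + 2*x - 2 cos 2t, with derivatives by central differences."""
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    xddot = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return xddot + 2 * xdot + 2 * x(t) - 2 * math.cos(2 * t)

max_residual = max(abs(residual(t)) for t in (0.0, 0.5, 1.0, 2.0))
```

The residual stays at the finite-difference noise floor, so (12.68) indeed solves (12.41) for f (t) = 2 cos 2t.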

Figure 12.7: Physical meaning of s = σ + jω

12.2.3 Definition

Let us consider a single-input-single-output (SISO) system governed by the ordinary
differential equation

an d^n y/dt^n + an−1 d^{n−1}y/dt^{n−1} + · · · + a1 dy/dt + a0 y = bm d^m u/dt^m + bm−1 d^{m−1}u/dt^{m−1} + · · · + b1 du/dt + b0 u        (12.69)
where m ≤ n, u(t) is the input variable, and y(t) is the output variable. Moreover, the
coefficients a0 , . . . , an and b0 , . . . , bm are real. To make the best use of the formulation in
(12.69), let us allow the input u(t) and output y(t) to be complex. Since a0 , . . . , an
and b0 , . . . , bm are real, the real part of (12.69) implies that the real parts of u(t) and y(t)
will satisfy the same ordinary differential equation (12.69). Similarly, the imaginary part
of (12.69) implies that the imaginary parts of u(t) and y(t) will satisfy the same equation
(12.69). For instance, in case (c) of Heuristic Example 1, the input f (t) is the real part
of a complex input F (s)e^{st}, which plays the role of the complex input u(t) in (12.69). The
output x(t) is the real part of a complex output X(s)e^{st}, which plays the role of the complex
output y(t) in (12.69).

A convenient way to define transfer functions is to assume that the input variable u(t)
takes the form

u(t) = U (s)e^{st}        (12.70)

where s ≡ σ + jω and U (s) are complex. In particular, σ describes how fast the input u(t)

grows (σ > 0) or decays (σ < 0) in time; see Fig. 12.7. In contrast, ω describes how fast
the input u(t) oscillates. Moreover, U (s) adjusts the magnitude and phase of the input
function u(t). Therefore, the expression in (12.70) describes a fairly large class of input
functions. Figure 12.8 summarizes the functions described by e^{st}.

The exponential form specified in (12.70) has an important consequence. If (12.70) is
substituted into the governing equation (12.69), one will find that the right side of (12.69)
will also take the form of e^{st}. Since the governing equation (12.69) is linear, the particular
solution for the output y(t) will take the similar form

y(t) = Y (s)e^{st}        (12.71)

where Y (s) is complex. The exponential forms in (12.70) and (12.71) enable a quick solution
of (12.69). Substitution of (12.70) and (12.71) into (12.69) results in

(an s^n + an−1 s^{n−1} + · · · + a1 s + a0) Y (s)e^{st} = (bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0) U (s)e^{st}        (12.72)
or

H(s) ≡ Y (s)/U (s) = (bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0)/(an s^n + an−1 s^{n−1} + · · · + a1 s + a0)        (12.73)

In (12.73), the transfer function H(s) is defined as the ratio Y (s)/U (s). When U (s) and H(s)
are known, one can use (12.73) to find the response Y (s). Conversely, if U (s) and Y (s) are
known, one can use (12.73) to find the transfer function H(s).

The transfer function H(s) is a system property characterizing the input-output relationship,
as shown graphically in Fig. 12.9. The input U (s)e^{st} is fed into the system characterized
by the transfer function H(s). The output Y (s)e^{st} is related to the input via Y (s) =
H(s)U (s). As shown in (12.73), H(s) depends entirely on the coefficients ai and bi as well
as the orders m and n of the differential equation (12.69). Since ai , bi , m, and n govern how
a system behaves, the transfer function H(s) truthfully reflects the dynamics of a system.
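The general definition (12.73) translates directly into code: H(s) is a ratio of two polynomials evaluated at the same s. The helper below is a sketch; the coefficient lists are assumed to be ordered from highest power to lowest, a convention chosen here rather than stated in the text.

```python
def polyval(coeffs, s):
    """Evaluate a polynomial [c_n, ..., c_1, c_0] at s by Horner's rule."""
    result = 0j
    for c in coeffs:
        result = result * s + c
    return result

def transfer_function(a, b):
    """Return H(s) = N(s)/D(s) per (12.73): b holds the b_i, a holds the a_i."""
    return lambda s: polyval(b, s) / polyval(a, s)

# The spring-mass-damper of Heuristic Example 1: D(s) = s^2 + 2s + 2, N(s) = 1
H = transfer_function([1, 2, 2], [1])
```

The same helper then reproduces the earlier case-by-case evaluations, e.g. H(0) = 1/2 and H(2j) = −(1 + 2j)/10.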

Example 12.9 Consider the flagship example discussed in Example 1.4; see Fig. 12.10. The
ordinary differential equation governing x2 , as derived in (1.9), is

m d³x2/dt³ + (mk/B) ẍ2 + k ẋ2 = (k/B) Fs (t) + Ḟs (t)        (12.74)

Figure 12.8: The class of input functions covered by e^{st}



Figure 12.9: Transfer function H(s) describing input-output relationship

Figure 12.10: Transfer function H(s) of the flagship example

With the definition in (12.73), the transfer function from Fs to x2 is

Hx2 (s) = (k/B + s)/(ms³ + (mk/B)s² + ks)        (12.75)

Similarly, the differential equation governing v2 ≡ ẋ2 is

m d²v2/dt² + (mk/B) v̇2 + k v2 = (k/B) Fs (t) + Ḟs (t)        (12.76)

Then, the transfer function from Fs to v2 is

Hv2 (s) = (k/B + s)/(ms² + (mk/B)s + k)        (12.77)
Note that the transfer functions Hx2 in (12.75) and Hv2 in (12.77) have different expressions.
It is not surprising, because a transfer function is an input-output relationship. Therefore,
the transfer function will depend on the input and output variables chosen.

12.3 Magnitude and Phase

Since the transfer function approach, in general, uses complex numbers, there are two ways
to carry out the operations. One is to use real and imaginary parts, as shown in Heuristic
Example 1. The other is to use magnitude and phase. In this section, I will first work
out the same heuristic example using magnitude and phase. Then I will explain the
implications and benefits of this approach.

Figure 12.11: A spring-mass-damper system for Heuristic Example 2

12.3.1 Heuristic Example 2

Let us revisit Heuristic Example 1 discussed in Section 12.2.2 here. Figure 12.11 shows
the second-order system consisting of a spring, a damper, and a point mass as before. The
spring has stiffness coefficient k, the damper has damping coefficient c, and the point mass
has mass m. The spring-mass-damper system is subjected to an external load f (t). The
displacement of the mass is x(t). As before, f (t) serves as the input variable and x(t) serves as
the output variable. Moreover, let us consider the following parameters: m = 1, c = 2, and
k = 2. The input excitation f (t) considered includes


        ⎧ (a) 2
        ⎪ (b) 3e^{4t}
f (t) = ⎨ (c) 2 cos 2t        (12.78)
        ⎪ (d) e^{−2t} sin 3t
        ⎩ (e) cos 2t − sin 2t

The corresponding transfer function H(s) from f (t) to x(t) is

H(s) ≡ X(s)/F (s) = 1/(ms² + cs + k)        (12.79)
How do we find the response x(t) using magnitude and phase?

The basic concept is to represent a complex number z in its polar form z = re^{jθ}, where
r is the magnitude and θ is the phase; see (12.23). The input excitation F (s)e^{st} is a
complex number and can be represented in polar form. The transfer function H(s) is
also a complex number and can be represented in polar form. Therefore, the response
X(s)e^{st} = H(s)F (s)e^{st} is the product of the polar forms of H(s) and F (s)e^{st}. When
two polar forms are multiplied, their magnitudes are multiplied and their phase angles are
added together.

Let us use case (c) to demonstrate the concept. Case (c) is chosen because it is
complicated enough that the calculation is non-trivial. At the same time, the algebra is
not too complicated to follow.

Part I: Analysis of Input. For this case, f (t) = 2 cos 2t. According to (12.46), we
can identify F (s)e^{st} from f (t) as

f (t) = 2 cos 2t = Re[2e^{j(2t)}] ⇐⇒ f (t) = Re[F (s)e^{st}]        (12.80)

By comparing the left and right sides of ⇐⇒ in (12.80), we identify F (s) and s as

F (s) = 2,    s = 2j        (12.81)

At the same time, we can express F (s)e^{st} in the polar form, i.e.,

F (s)e^{st} = 2 · e^{j(2t)} ⇐⇒ F (s)e^{st} = r · e^{jθ}        (12.82)

By comparing the left and right sides of ⇐⇒ in (12.82), one would obtain the magnitude r
and the phase θ of F (s)e^{st} as

r = |F (s)e^{st}| = 2,    θ = ∠F (s)e^{st} = 2t        (12.83)

Part II: Transfer Function Analysis. For the transfer function analysis, let us use
the numerical values m = 1, c = 2, and k = 2. The goal of the analysis is to represent the
transfer function H(s) in polar form, i.e.,

H(s) = 1/(s² + 2s + 2) = |H(s)| e^{jφ(s)}        (12.84)

Figure 12.12: Polar form of z = 1 + 2j

where |H(s)| and φ(s) are the magnitude and phase of the transfer function H(s), respec-
tively. As indicated in (12.81), s = 2j. Also, we have already shown in (12.66) that

H(s) = −(1/10)(1 + 2j)        (12.85)

How does one convert (12.85) into polar form to find |H(s)| and φ(s)?

First of all, let us recall from Example 12.2 (cf. (12.8) and (12.9)) that

−1 = 1 · e^{jπ}        (12.86)

where the magnitude is 1 and the phase is π. Moreover, Fig. 12.12 shows that the magnitude
and phase of the complex number z = 1 + 2j are √5 and tan⁻¹ 2, respectively. Therefore,

1 + 2j = √5 · e^{j·tan⁻¹ 2}        (12.87)

Substitution of (12.86) and (12.87) into (12.85) results in

H(s) = −(1/10)(1 + 2j) = (√5/10) · e^{j(π+tan⁻¹ 2)} ⇐⇒ H(s) = |H(s)| · e^{jφ(s)}        (12.88)

By comparing the left and right sides of ⇐⇒ in (12.88), one would obtain the magnitude
|H(s)| and the phase φ(s) of the transfer function H(s) as

|H(s)| = √5/10,    φ(s) = π + tan⁻¹ 2        (12.89)

Part III: Response Analysis. According to (12.68), the response x(t) is

x(t) = Re[X(s)e^{st}]|_{s=2j} = Re[H(s)F (s)e^{st}]|_{s=2j}        (12.90)

Figure 12.13: Magnitude and phase relationship between input and output

where the definition of the transfer function H(s) from (12.59) is used. Substitution of (12.82)
and (12.88) into (12.90) results in

x(t) = Re[(√5/10) · e^{j(π+tan⁻¹ 2)} × 2 · e^{j(2t)}]
     = Re[(1/√5) · e^{j(2t+π+tan⁻¹ 2)}]
     = (1/√5) cos(2t + π + tan⁻¹ 2)        (12.91)

Figure 12.13 summarizes the magnitude and phase operations shown in this example.
Basically, the input excitation has a magnitude 2 and a phase angle 2t. The transfer function
has a magnitude √5/10 and a phase angle π + tan⁻¹ 2. The output magnitude is
(√5/10) × 2 = 1/√5, and the output phase angle is 2t + π + tan⁻¹ 2.

The moral from Heuristic Example 2 is the following. The output magnitude is the
product of the input magnitude and the transfer function magnitude. The output phase
angle is the sum of the input phase angle and the transfer function phase angle.
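Python's cmath.polar performs exactly the rectangular-to-polar conversion done by hand above, so (12.89) can be checked in a few lines. Note that cmath.polar reports the phase in (−π, π], so the angle π + tan⁻¹ 2 appears shifted by 2π; this wrapped-angle bookkeeping is the only subtlety. The snippet is illustrative.

```python
import cmath
import math

H = 1 / ((2j)**2 + 2 * (2j) + 2)   # H(s) at s = 2j, as in (12.66)
mag, phase = cmath.polar(H)        # polar form: H = mag * e^{j*phase}

expected_mag = math.sqrt(5) / 10                        # from (12.89)
expected_phase = math.pi + math.atan(2) - 2 * math.pi   # (12.89), wrapped into (-pi, pi]
```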

12.3.2 Implications

The moral from Heuristic Example 2 is indeed a general statement that is valid for all
linear systems. The input excitation F (s)e^{st} can always be represented in the polar form

F (s)e^{st} = |F (s)e^{st}| e^{j∠{F (s)e^{st}}}        (12.92)

where |F (s)e^{st}| and ∠{F (s)e^{st}} are the magnitude and phase of F (s)e^{st}, respectively. Simi-
larly, the transfer function H(s) can also be represented in the polar form

H(s) = |H(s)| e^{j∠H(s)}        (12.93)

where |H(s)| and ∠H(s) are the magnitude and phase of H(s), respectively. As a result, the
output response X(s)e^{st} takes the form

X(s)e^{st} = H(s)F (s)e^{st} = |H(s)| e^{j∠H(s)} × |F (s)e^{st}| e^{j∠{F (s)e^{st}}}
           = (|H(s)| × |F (s)e^{st}|) e^{j(∠H(s)+∠{F (s)e^{st}})}        (12.94)

According to (12.94), the magnitude of the output is the product of the magnitudes of
the input and the transfer function. The phase angle of the output is the sum of the phase
angles of the input and the transfer function. Therefore, the transfer function H(s) does two
things to an input excitation. First, the transfer function amplifies or attenuates the input
excitation by |H(s)|. Second, the transfer function shifts the phase of the input excitation
by ∠H(s). Of course, how much the transfer function H(s) will amplify or shift the phase
of the input excitation depends on the parameter s, which is, in turn, controlled by
the input excitation. The implication for practical applications is huge. For example, if one
wants a large response (e.g., designing a resonator), one would drive the input with a
parameter s such that H(s) has a large magnitude. If one wants very little response
(e.g., designing an isolation table), one would design the system such that H(s) has a small
magnitude for all input parameters s of interest.

One might also ask why it is beneficial to go through the magnitude and phase operations,
which are more complicated than the straightforward calculations using real and imaginary
parts shown in Heuristic Example 1. A straight answer is that magnitudes and phase angles
are simply more useful. In practical applications, the magnitude and phase angle are
quantities that one can see and measure. Therefore, they bear more physical meaning and
show more physical insight than the real and imaginary parts.

Figure 12.14: A simple model to simulate a car on a bumpy road

Example 12.10 Let us use a very primitive model to simulate the motion of a car on a bumpy
road; see Fig. 12.14. The model consists of a car body mass and a suspension spring, but
with no damping. The mass of the car is m = 1000 kg. The suspension spring stiffness is
k = 16 kN/m. The bumpy road is simulated via a sinusoidal force input

f (t) = 0.1 cos 10vt        (12.95)

where v is the cruising speed of the car. The displacement of the car body (i.e., the car mass)
from its equilibrium position is x(t). Determine how x(t) depends on the cruising speed v.
When will x(t) be large and small?

The equation of motion governing x(t) is

mẍ + kx = f (t)        (12.96)

With m = 1000, k = 16 × 10³, and (12.95), the equation of motion (12.96) becomes

1000ẍ + 16 × 10³ x = 0.1 cos 10vt        (12.97)

or

ẍ + 16x = 10⁻⁴ cos 10vt        (12.98)

Part I. Analysis of Input. The input is now

u(t) = 10⁻⁴ cos 10vt = Re[10⁻⁴ e^{j(10vt)}] = Re[U (s)e^{st}]        (12.99)

Therefore,

U (s) = 10⁻⁴,    s = (10v) j        (12.100)

Moreover, the magnitude and phase of U (s)e^{st} are

|U (s)e^{st}| = 10⁻⁴,    ∠{U (s)e^{st}} = 10vt        (12.101)

Part II: Transfer Function Analysis. Consider a differential equation with complex
variables u(t) and y(t) satisfying

ÿ + 16y = u(t)        (12.102)

Moreover,

u(t) = U (s)e^{st},    y(t) = Y (s)e^{st}        (12.103)

Substitute (12.103) into (12.102) to obtain

(s² + 16) Y (s)e^{st} = U (s)e^{st}        (12.104)

The transfer function H(s) is then

H(s) ≡ Y (s)/U (s) = 1/(s² + 16)        (12.105)

Since s = 10vj from (12.100), the transfer function takes the value

H(s) = 1/(16 − 100v²)        (12.106)

Note that the transfer function H(s) is a real number in this case. Furthermore, H(s) > 0
when v < 0.4, and H(s) < 0 when v > 0.4. Therefore, the magnitude and phase of H(s) are

⎧ |H(s)| = 1/(16 − 100v²),    ∠H(s) = 0,    v < 0.4
⎨                                                        (12.107)
⎩ |H(s)| = 1/(100v² − 16),    ∠H(s) = π,    v > 0.4

Figure 12.15: Magnitude and phase of the input, transfer function, and output for v < 0.4

where the magnitude |H(s)| is always positive.

Part III: Response Analysis. Since the equation of motion (12.98) is the real part
of the complex equation of motion (12.102),

x(t) = Re[Y (s)e^{st}] = Re[H(s)U (s)e^{st}]        (12.108)

The response x(t) depends on the value of v. There are two possibilities.

(a) Case 1: v < 0.4. Then (12.108) becomes

x(t) = Re[1/(16 − 100v²) · e^{j·0} × 10⁻⁴ · e^{j(10vt)}]
     = Re[10⁻⁴/(16 − 100v²) · e^{j(10vt)}] = 10⁻⁴/(16 − 100v²) cos 10vt        (12.109)

where (12.101), (12.107), and the polar form (12.23) have been used to represent H(s) and
U (s)e^{st}.

Figure 12.15 shows the magnitude and phase of the input, transfer function, and output.
One can see that the output magnitude is the input magnitude multiplied by the transfer
function magnitude. Also, the output phase is the input phase plus the phase angle of the
transfer function. Based on (12.109), the largest response occurs when v → 0.4.

Figure 12.16: Magnitude and phase of the input, transfer function, and output for v > 0.4

(b) Case 2: v > 0.4. Then (12.108) becomes

x(t) = Re[1/(100v² − 16) · e^{j·π} × 10⁻⁴ · e^{j(10vt)}]
     = Re[10⁻⁴/(100v² − 16) · e^{j(10vt+π)}] = −10⁻⁴/(100v² − 16) cos 10vt        (12.110)

where (12.101), (12.107), and the polar form (12.23) have been used to represent H(s) and
U (s)e^{st}.

Figure 12.16 shows the magnitude and phase of the input, transfer function, and output.
One can see that the output magnitude is the input magnitude multiplied by the transfer
function magnitude. Also, the output phase is the input phase plus the phase angle of the transfer
function. Based on (12.110), the large response around v ≈ 0.4 decreases to zero as
v → ∞. Also, the phase shift π causes the output response x(t) to be out of phase with the
input force f (t), as indicated by the negative sign in (12.110).

Moral. As explained before, the transfer function H(s) amplifies the input magnitude
and shifts the input phase. Therefore, one could analyze the transfer function H(s) to obtain
a very good understanding of the response x(t) without explicitly deriving it as in (12.109) and
(12.110). Based on (12.107), the magnitude and phase of H(s) are plotted in Fig. 12.17 with
respect to the cruising speed v. The cruising speed v = 0.4 m/s is a critical point. As the
speed v increases from 0 toward 0.4 m/s, the magnitude |H(s)| starts at 1/16 and approaches
∞. Once the speed passes 0.4 m/s, the magnitude |H(s)| starts to decrease and drops to
to ∞. Once the speed passes 0.4 m/s, the magnitude |H(s)| starts to decreases and drops to

Figure 12.17: Magnitude and phase of the transfer function H(s) with respect to v

zero as v → ∞. Since the magnitude of the transfer function amplifies (or attenuates) the
input, Fig. 12.17 predicts that the input will be amplified significantly near v = 0.4 m/s and
the car response will be large. For v ≫ 0.4 m/s, Fig. 12.17 shows that the transfer function
has a very small magnitude, resulting in a small response of the car. As for the phase, when
v < 0.4 m/s, the input and output are in phase. When v > 0.4 m/s, the input and
output are out of phase by 180°.
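As a sketch, the resonance behavior predicted in Fig. 12.17 can be checked by sweeping the cruising speed. The sample speeds below are arbitrary choices that avoid the singular point v = 0.4 m/s.

```python
def H_mag(v):
    """|H(s)| of the car model at s = 10vj, i.e. 1/|16 - 100 v^2| from (12.107)."""
    return abs(1.0 / (16.0 - 100.0 * v**2))

speeds = [0.0, 0.2, 0.39, 0.41, 1.0, 5.0]   # m/s, straddling v = 0.4
mags = {v: H_mag(v) for v in speeds}
```

The sweep shows |H(s)| rising from 1/16 at v = 0 to a large value just below v = 0.4 m/s, then falling toward zero as v grows, matching the discussion above.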

12.4 Poles and Zeros

In Section 12.3, it is shown that the magnitude of a transfer function |H(s)| amplifies or
attenuates the magnitude of the input to produce the magnitude of the output response.
This naturally leads to the following questions: (a) when will the input magnitude be amplified,
resulting in a significantly large output, and (b) when will the input magnitude be attenuated,
resulting in a suppressed output?

Figure 12.18: A spring-mass-damper system for Heuristic Example 3

In this section, I will first go through Heuristic Example 3 to demonstrate when an
input will be amplified or reduced significantly. These special occasions, denoted by specific
s values, are known as poles and zeros. Then I will formally introduce the definitions of
poles and zeros as well as their implications for system dynamics.

12.4.1 Heuristic Example 3

Let us revisit Heuristic Example 2 discussed in Section 12.3.1. Figure 12.18 shows
the second-order system consisting of a spring, a damper, and a point mass as before. The spring
has stiffness coefficient k, the damper has damping coefficient c, and the point mass has mass
m. The spring-mass-damper system is subjected to an external load f (t). The displacement
of the mass is x(t). As before, f (t) serves as the input variable and x(t) serves as the output
variable. Moreover, let us consider the following parameters: m = 1, c = 2, and k = 2. The
corresponding transfer function H(s) from f (t) to x(t) is

H(s) ≡ X(s)/F (s) = 1/(s² + 2s + 2)        (12.111)

Under what conditions will the transfer function H(s) significantly amplify the input exci-
tations?

When the denominator of H(s) vanishes, |H(s)| → ∞. As a result, the input excitation
F (s)e^{st} will be amplified infinitely. Such special s values are called system poles, or simply
poles. Moreover, they satisfy

s² + 2s + 2 = 0        (12.112)

Solving (12.112) leads to the following system poles p1 and p2 :

s = p1,2 = −1 ± j        (12.113)

The corresponding input excitations are

F (s)e^{st} = F (s)e^{(−1±j)t} = F (s)e^{−t}(cos t ± j sin t)        (12.114)

Therefore, an excitation f (t) taking the form of e^{−t} cos t, e^{−t} sin t, or a linear combination
thereof will be amplified infinitely by the system shown in Fig. 12.18 with m = 1, c = 2, and k = 2.

For the spring-mass-damper system shown in Fig. 12.18, the transfer function H(s)
takes the form

H(s) ≡ X(s)/F (s) = 1/(ms² + cs + k)        (12.115)

Therefore, the poles are determined by the following equation:

ms² + cs + k = 0        (12.116)

Note that this is exactly the characteristic equation of the spring-mass-damper system; see
(3.7). As a result, the poles obtained from (12.116) are

p1,2 = −c/(2m) ± √((c/(2m))² − k/m)        (12.117)

When the system is underdamped, one can define a natural frequency ωn = √(k/m) (cf. (3.9))
and a viscous damping factor ζ = c/(2mωn ) (cf. (3.16)). The poles in (12.117) then take the
form

p1,2 = −ζωn ± jωn √(1 − ζ²),    0 < ζ < 1        (12.118)

Figure 12.19 shows the magnitude of H(s) from (12.115) on the complex s-plane for
an underdamped system. The two volcano-shaped regions are neighborhoods of the two poles
in (12.118). When s approaches one of the poles, the denominator of H(s) becomes smaller
and smaller. As a result, the magnitude of H(s) becomes larger and larger, forming those
two volcano-shaped regions.

One also needs to note that each complex number s on the complex plane defines an
input excitation of the form F (s)e^{st}. Therefore, one can tell whether or not an input

Figure 12.19: Magnitude of H(s) of an underdamped spring-mass-damper system

excitation will lead to a large response by simply checking the distance between the poles
and the s parameter of the input. The closer the distance, the larger the response. The
following example shows how this works.

Example 12.11 Let us revisit the system in Fig. 12.18 with m = 1, c = 2, and k = 2.
Consider the following two input excitations

f1 (t) = sin 2t, f2 (t) = cos t (12.119)

Both input excitations have the same magnitude. Which input excitation will lead to a
larger response x(t)?

Figure 12.20 plots the poles and s parameters on the complex plane. For the excitation
f1 (t) = sin 2t, the s parameter is s1 = 2j. For the excitation f2 (t) = cos t, the s parameter is
s2 = j. In addition, the poles of the system are p1,2 = −1 ± j; see (12.113). One can clearly
see that s2 is closer to the pole p1 = −1 + j than s1 is. Therefore, f2 (t) = cos t will lead
to a larger transfer function magnitude |H(s)| and hence a larger response x(t).
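The geometric argument can be sketched numerically: the pole-distance comparison and the resulting transfer-function magnitudes agree. This snippet is illustrative, not from the text.

```python
# Poles of H(s) = 1/(s^2 + 2s + 2), cf. (12.113)
p1, p2 = -1 + 1j, -1 - 1j

# s parameters of the two candidate inputs: f1 = sin 2t, f2 = cos t
s1, s2 = 2j, 1j

def H_mag(s):
    return abs(1 / (s**2 + 2 * s + 2))

d1 = min(abs(s1 - p1), abs(s1 - p2))   # distance from s1 to the nearest pole
d2 = min(abs(s2 - p1), abs(s2 - p2))   # distance from s2 to the nearest pole
```

Since d2 < d1, the input with parameter s2 sits closer to a pole and indeed yields the larger |H(s)|.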

Figure 12.20: Relative positions of poles and s parameters

12.4.2 Definition of Poles and Zeros

Let us consider a single-input-single-output (SISO) system governed by the ordinary
differential equation

an d^n y/dt^n + an−1 d^{n−1}y/dt^{n−1} + · · · + a1 dy/dt + a0 y = bm d^m u/dt^m + bm−1 d^{m−1}u/dt^{m−1} + · · · + b1 du/dt + b0 u        (12.120)
where m ≤ n, u(t) is the input variable, and y(t) is the output variable. Moreover, the
coefficients a0 , . . . , an and b0 , . . . , bm are real. Again, let us allow the input u(t) and
output y(t) to be complex in (12.120). Moreover, the input and output variables take the form

u(t) = U (s)e^{st},    y(t) = Y (s)e^{st}        (12.121)

where s, U (s), and Y (s) are complex. Then the transfer function H(s) is

H(s) ≡ Y (s)/U (s) = (bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0)/(an s^n + an−1 s^{n−1} + · · · + a1 s + a0) ≡ N (s)/D(s)        (12.122)
where N (s) and D(s) are the numerator and the denominator of H(s), given by

N (s) = bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0        (12.123)

and

D(s) = an s^n + an−1 s^{n−1} + · · · + a1 s + a0        (12.124)

Figure 12.21: Transfer function H(s) describing input-output relationship

Since the transfer function H(s) plays the role of an input-output relation (Fig. 12.21),
two questions of interest are:

1. When will H(s) amplify U (s) significantly, ending up with a huge Y (s)?

2. When will H(s) suppress U (s) entirely, resulting in a vanishing Y (s)?

To amplify U (s) indefinitely, one must have a zero denominator. In this case, the
transfer function H(s) will blow up. This condition requires that

D(s) = an s^n + an−1 s^{n−1} + · · · + a1 s + a0 = an (s − p1 )(s − p2 ) · · · (s − pn ) = 0        (12.125)

Those complex numbers p1 , p2 , . . . , pn satisfying D(s) = 0 in (12.125) are called system
poles, or simply poles.

There are several things worth noting about the system poles. First, the equation
(12.125) governing the system poles is indeed the characteristic equation of the ordinary
differential equation (12.120). (For characteristic equation, please see (1.15) and (1.17).)
Therefore, system poles p1 , p2 , . . . , pn are the same as characteristic roots λ1 , λ2 , . . . , λn (cf.
(1.18)), because they are governed by the same equation, that is, (12.125).

The second thing to note is that system poles are also the same as eigenvalues of state
matrix A. From Chapter 11, eigenvalues of state matrix A are the same as characteristic
roots λ1 , λ2 , . . . , λn ; see the paragraph between equation (11.102) and Example 11.7. There-
fore, system poles, characteristic roots, and eigenvalues of the state matrix A are all the
same.
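The identity "poles = characteristic roots = eigenvalues of A" can be illustrated for the spring-mass-damper example with m = 1, c = 2, k = 2. The 2×2 state matrix A = [[0, 1], [−k/m, −c/m]] below is the standard phase-variable choice (an assumption of this sketch, not spelled out here); for a 2×2 matrix, the eigenvalues follow from its trace and determinant.

```python
import cmath

m, c, k = 1.0, 2.0, 2.0

# Poles: roots of the characteristic polynomial D(s) = m s^2 + c s + k
disc = cmath.sqrt(c * c - 4 * m * k)
poles = [(-c + disc) / (2 * m), (-c - disc) / (2 * m)]

# Eigenvalues of A = [[0, 1], [-k/m, -c/m]]: for a 2x2 matrix they are
# (tr +/- sqrt(tr^2 - 4 det)) / 2, with tr = -c/m and det = k/m here.
tr, det = -c / m, k / m
eig_disc = cmath.sqrt(tr * tr - 4 * det)
eigvals = [(tr + eig_disc) / 2, (tr - eig_disc) / 2]
```

Both computations return the pair −1 ± j, the poles found in (12.113).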

The final thing to note is that the input excitation that causes H(s) to blow up takes the
form e^{pᵢt}. Since the system poles pᵢ are the same as the characteristic roots λᵢ, e^{pᵢt} is
indeed a homogeneous solution. When the input excitation (i.e., the forcing term on the right
side of the ordinary differential equation) coincides with a homogeneous solution, the particular
solution takes the form Kte^{pᵢt}; see the paragraph immediately before Example 1.4.
Schematically, this particular input-output relationship at s = pᵢ is shown in Fig. 12.22. The
input side still takes the form U (s)e^{st}. The output side, however, no longer fits the form
Y (s)e^{st} because of the presence of t. This inability results from the fact that H(s) becomes
unbounded as s → pᵢ.

Figure 12.22: Input-output relationship when s = pᵢ

To suppress U (s) entirely, one must have a zero numerator. In this case, the transfer
function H(s) will vanish. This condition requires that

N (s) = bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0 = bm (s − z1 )(s − z2 ) · · · (s − zm ) = 0        (12.126)

Those complex numbers z1 , z2 , . . . , zm satisfying N (s) = 0 in (12.126) are called system
zeros, or simply zeros.

A transfer function always has poles, but it may not necessarily have zeros. A good
example is the spring-mass-damper system shown in Fig. 12.18. According to (12.115), the
transfer function Hx (s) from f (t) to x(t) is

Hx (s) ≡ X(s)/F (s) = 1/(ms² + cs + k)        (12.127)

As one can see, there are two poles satisfying D(s) = ms² + cs + k = 0 (cf. (12.116)).
There are, however, no zeros for the transfer function in (12.127), because N (s) = 1 and
N (s) can never vanish.

Let us now consider the same system, but the velocity v ≡ ẋ is the output. By taking
a time derivative on both sides of (12.40), one obtains the following governing equation for v:

mv̈ + cv̇ + kv = ḟ (t)        (12.128)

By assuming f (t) = F (s)e^{st} and v(t) = V (s)e^{st}, one would obtain the transfer function
Hv (s) from f (t) to v(t) as

Hv (s) = V (s)/F (s) = s/(ms² + cs + k)        (12.129)

The poles of Hv (s) still satisfy the same equation D(s) = ms² + cs + k = 0 shown in (12.116).
In other words, Hx (s) and Hv (s) have the same poles. But Hv (s) has a zero at s = 0, which
Hx (s) does not have.

So the moral is the following. System poles remain the same no matter which transfer
functions you analyze as long as they are from the same system. System zeros, however, will
depend on the transfer function you analyze. This is, in fact, very reasonable. System poles
are characteristic roots and determine stability of a system. Therefore, the stability of the
system should remain the same no matter which transfer function we analyze.

In contrast, zeros describe the relationship between a particular pair of input and output variables. By choosing different input and output variables, the zeros may change. For example, when a force f(t) is applied to the spring-mass-damper system in Fig. 12.18, there is always a displacement x(t) responding to the force f(t). Mathematically, it means that a non-zero particular solution always exists for x(t). As a result, Hx(s) does not have any zeros. The velocity v(t), however, may not respond to the applied force f(t). Consider the case when the applied force is constant, say f(t) = K, where K is a constant. In this case, the particular solution of the system is x(t) = K/k and v(t) = 0, i.e., a non-zero displacement x(t) and a zero velocity v(t). In terms of x(t), the particular solution is non-zero, which is consistent with the statement that Hx(s) does not have any zeros. In terms of v(t), the particular solution is zero, which is consistent with the statement that Hv(s) has a zero at s = 0, because s = 0 corresponds to a constant input excitation.
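This constant-force argument can be checked numerically. A minimal sketch, using illustrative parameter values m, c, k that are not taken from the text: evaluating Hx(s) and Hv(s) at s = 0 (a constant input) gives 1/k and 0, respectively.

```python
# Evaluate Hx(s) and Hv(s) at s = 0 (constant input) for assumed
# illustrative parameters; the values m, c, k are not from the text.
m, c, k = 2.0, 3.0, 50.0

def Hx(s):
    # X(s)/F(s) = 1/(m s^2 + c s + k), Eq. (12.127)
    return 1.0 / (m * s**2 + c * s + k)

def Hv(s):
    # V(s)/F(s) = s/(m s^2 + c s + k), Eq. (12.129)
    return s / (m * s**2 + c * s + k)

# A constant force f(t) = K corresponds to s = 0:
print(Hx(0.0))  # 1/k = 0.02, so the particular solution is x = K/k
print(Hv(0.0))  # 0.0: the zero of Hv(s) at s = 0 suppresses the input
```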

To wrap up system poles and zeros, I want to mention two more things. First, the locations of poles and zeros on the complex s-plane are important because they map out regions where the system will experience large and small responses. The poles and zeros are often denoted by × and ○, respectively, on the complex s-plane. Second, it is possible to have a pole and a zero located at the same spot on the complex s-plane. When this occurs, it is called pole-zero cancellation. In control theory, pole-zero cancellation is used to stabilize a system. If a system has a pole pi that destabilizes the system, one could design a controller such that the system will also have a zero at pi. In this case, the system response with the presence of the zero will be stabilized.

Figure 12.23: An LC circuit with a voltage source
Figure 12.24: Pole-zero plot on the complex s-plane

Example 12.12 Consider an LC circuit shown in Fig. 12.23. The LC circuit consists of an
inductor L, a capacitor C, and a voltage source Vs (t). The equation governing the current
i(t) is
i(t) is

L d²i/dt² + (1/C) i = dVs/dt    (12.130)
The transfer function from Vs (t) to i(t) is

H(s) = s/(Ls² + 1/C) ≡ N(s)/D(s)    (12.131)

System poles satisfy


D(s) = Ls² + 1/C = 0    (12.132)

resulting in

p1,2 = ± j/√(LC)    (12.133)
System zeros then satisfy
N (s) = s = 0 (12.134)

Figure 12.25: The flagship example revisited

leading to a single zero

z1 = 0    (12.135)

The poles and zeros are plotted on the complex s-plane as shown in Fig. 12.24. Note that the two poles lie on the pure imaginary axis and are denoted by ×. Moreover, the zero is at the origin and is denoted by ○.
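The pole and zero locations of this LC circuit are easy to check numerically. A minimal sketch, with assumed component values L = 1 H and C = 0.25 F (not from the text):

```python
import numpy as np

# Assumed illustrative values (not from the text): L = 1 H, C = 0.25 F.
L, C = 1.0, 0.25

# Poles: roots of D(s) = L s^2 + 1/C; zero: root of N(s) = s.
poles = np.roots([L, 0.0, 1.0 / C])
zeros = np.roots([1.0, 0.0])

# Expect p_{1,2} = ±j/sqrt(LC) = ±2j on the imaginary axis, and z1 = 0.
print(poles)
print(zeros)
```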

Example 12.13 Let us revisit the flagship example discussed in Example 12.9, which is reproduced in Fig. 12.25. The system consists of a damper with damping coefficient B, a spring of stiffness k, and a rigid cart with mass m. These three elements are in a series connection and subjected to an applied load Fs(t). The displacement of the spring-damper junction is x1, and the displacement of the rigid cart is x2. If Fs(t) and x2(t) are chosen as input and output, respectively, the governing ordinary differential equation (as shown in (12.74)) is

m d³x2/dt³ + (mk/B) ẍ2 + k ẋ2 = (k/B) Fs(t) + Ḟs(t)    (12.136)
dt B B
The transfer function from Fs (t) to x2 (t) is
Hx2(s) = (k/B + s)/(ms³ + (mk/B)s² + ks)    (12.137)

The system poles satisfy

D(s) = s (ms² + (mk/B)s + k) = 0    (12.138)
leading to the following three poles
p1 = 0 (12.139)

Figure 12.26: Pole-zero plot of the flagship example

and

p2,3 = (1/2) [ −k/B ± √((k/B)² − 4k/m) ]    (12.140)

which are complex when (k/B)² < 4k/m and real when (k/B)² > 4k/m.

Moreover, Hx2(s) has a zero determined by

N(s) = k/B + s = 0    (12.141)

so the zero is located at

z1 = −k/B    (12.142)
B
The poles and zeros are plotted in Fig. 12.26. There are a few things worth noting. First, the location of the poles differs depending on whether the system is overdamped or underdamped, but the zero is always at the same spot. Second, the decay of the complex poles p2,3 is characterized by their real part −k/(2B). It turns out that the larger the damping coefficient B, the smaller the magnitude of the real part −k/(2B), and therefore the slower the decay rate. This is very counter-intuitive. Finally, the pole at s = 0 means that a constant force (which corresponds to s = 0) will cause the transfer function Hx2(s) to amplify the input indefinitely. This can be easily seen in Fig. 12.25. When a constant force is applied, it will stretch the damper indefinitely, driving x2(t) to an infinite response.
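The three poles and the single zero can be computed numerically. A sketch with assumed values m = 1, k = 4, B = 2 (chosen so that (k/B)² < 4k/m, the complex case; these values are not from the text):

```python
import numpy as np

# Assumed illustrative values (not from the text): m = 1, k = 4, B = 2.
m, k, B = 1.0, 4.0, 2.0

# D(s) = s (m s^2 + (mk/B) s + k), Eq. (12.138)
poles = np.roots([m, m * k / B, k, 0.0])
zero = -k / B  # Eq. (12.142)

# Here (k/B)^2 = 4 < 4k/m = 16, so p_{2,3} are complex with real part
# -k/(2B) = -1; the third pole sits at the origin.
print(np.sort_complex(poles))
print(zero)
```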

As we learned in Example 12.9, the dynamics of the flagship example can also be described via the velocity v2 ≡ ẋ2. In this case, the governing equation is

m d²v2/dt² + (mk/B) v̇2 + k v2 = (k/B) Fs(t) + Ḟs(t)    (12.143)
Moreover, the transfer function from Fs to v2 is

Hv2(s) = (k/B + s)/(ms² + (mk/B)s + k)    (12.144)

The poles of Hv2 (s) satisfy


D(s) = ms² + (mk/B)s + k = 0    (12.145)
resulting in the following two poles
p1,2 = (1/2) [ −k/B ± √((k/B)² − 4k/m) ]    (12.146)

Moreover, the zeros of Hv2 (s) satisfy


N(s) = k/B + s = 0    (12.147)

resulting in one zero

z1 = −k/B    (12.148)

Comparison of Hx2 (s) and Hv2 (s) paints a very confusing picture. Both Hx2 (s) and
Hv2 (s) describe motion of the rigid cart, and both transfer functions have the same zero;
see (12.142) and (12.148). Yet, Hx2 (s) provides three poles (cf. (12.139) and (12.140)) while
Hv2 (s) only provides two poles (cf. (12.146)). As I explained on Page 407, the system poles
should remain the same no matter which transfer function we choose to examine. But now
Hx2 (s) provides three poles and Hv2 (s) provides only two. Is this a contradiction? What is
going on?

A short answer to what is happening here is pole-zero cancellation. To understand where the pole-zero cancellation comes from, one can differentiate the equation (12.143) governing v2 to obtain

m d³v2/dt³ + (mk/B) d²v2/dt² + k dv2/dt = (k/B) Ḟs(t) + F̈s(t)    (12.149)

Therefore, the transfer function from Fs(t) to v2(t) takes the form of

H̃v2(s) = s (k/B + s)/(ms³ + (mk/B)s² + ks)    (12.150)

where the tilde (˜) over Hv2(s) indicates this different form of Hv2(s). As one can see from (12.150), the denominator is the same as in (12.138), leading to the same three system poles in (12.139) and (12.140). Nevertheless, the numerator now provides a second zero

z2 = 0    (12.151)

in addition to the first zero in (12.148). The second zero z2 coincides with the pole p1 = 0, causing the pole-zero cancellation.

When pole-zero cancellation occurs in a transfer function, it means that one cannot see the entire dynamics of the system through that particular transfer function. In other words, that transfer function has blind spots. For the flagship example, the pole-zero cancellation occurs at s = 0, which corresponds to a constant load Fs(t). When a constant Fs(t) is applied and a steady state is achieved, the damper will stretch indefinitely at a constant speed to offset the constant applied force Fs(t). This infinite stretch, on one hand, will increase the displacement x2(t) to infinity, as reflected by the pole at s = 0 of Hx2(s). (Recall that the pole will amplify the excitation indefinitely.) The same infinite stretch, on the other hand, will keep the velocity v2(t) constant because a pole-zero cancellation occurs in Hv2(s). As a result of the pole-zero cancellation, one cannot tell from Hv2(s) that part of the system is indeed blowing up (i.e., the displacement x2).
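The cancellation is easy to exhibit numerically. With assumed values m = 1, k = 4, B = 2 (not from the text), the numerator and denominator polynomials of H̃v2(s) in (12.150) share a root at s = 0:

```python
import numpy as np

# Assumed illustrative values (not from the text): m = 1, k = 4, B = 2.
m, k, B = 1.0, 4.0, 2.0

num = np.polymul([1.0, 0.0], [1.0, k / B])  # s (k/B + s)
den = [m, m * k / B, k, 0.0]                # m s^3 + (mk/B) s^2 + k s

zeros = np.roots(num)
poles = np.roots(den)

# Both root lists contain s = 0: the cancelling pole-zero pair.
print(zeros)
print(poles)
```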

In Example 7.12, I derived a state equation of the flagship example using the linear
graph approach. In that state equation, the state variables are vm and fk ; see (7.206).
Therefore, the governing ordinary differential equation is second-order with only two poles.
Obviously, that state equation (7.206) has an embedded pole-zero cancellation and does not
describe the entire dynamics of the system. It is not surprising, because we already know
that we need to integrate vm in order to get the displacement xm .

Example 12.14 Let us revisit the primitive car model shown in Example 12.10. The car model is reproduced here in Fig. 12.27 for convenience. Again, the car has a mass m = 1000 kg and the suspension has stiffness k = 16 kN/m. The bumpy road produces an excitation

f(t) = 0.1 cos 10vt    (12.152)

Figure 12.27: A simple model to simulate a car on a bumpy road
Figure 12.28: Magnitude of H(s) with respect to v

where v is the constant speed of the car. Now I want to revisit this example through the
angle of poles and zeros.

From Example 12.10, we have the following information. First, the equation of motion
is derived in (12.98), reproduced below

ẍ + 16x = 10−4 cos 10vt (12.153)

The corresponding s parameter is


s = j(10v) (12.154)

and the transfer function H(s) is

H(s) ≡ Y(s)/U(s) = 1/(s² + 16)    (12.155)

Also, how the transfer function will amplify the input excitation as a function of v is shown
in Fig. 12.17, which is reproduced below in Fig. 12.28.

Figure 12.29: Pole-zero plot of the primitive car model
Figure 12.30: Upgrading the car model with a damper

These results shown in Fig. 12.28, such as the maximal response at the speed v = 0.4 m/s and the diminishing response as v → ∞, are very natural in view of the poles and zeros on the complex s-plane. From the transfer function H(s) in (12.155), there is no zero and the poles are

p1,2 = ±4j (12.156)

Therefore, one can construct a pole-zero plot on the complex s-plane as shown in Fig. 12.29.
Since the input is s = 10vj, the input s-parameter will be located on the pure imaginary
axis. By varying the cruise velocity v, the input s-parameter moves along the pure imaginary
axis. When s is close to one of the two poles, the transfer function will amplify the input
significantly. This condition occurs when

10vj = ±4j, v = 0.4 m/s (12.157)

That is why |H(s)| blows up at v = 0.4 m/s in Fig. 12.28. Also, when v → ∞, the s parameter moves far away from the poles. As a result, the denominator of the transfer function becomes large and |H(s)| → 0.
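The resonance can be reproduced by evaluating |H(s)| along the input s-parameter s = j(10v):

```python
# Magnitude of H(s) = 1/(s^2 + 16), Eq. (12.155), evaluated at the
# input s-parameter s = j*10*v while sweeping the cruise speed v.
def mag_H(v):
    s = 1j * 10.0 * v
    return abs(1.0 / (s**2 + 16.0))

# Near v = 0.4 m/s the s-parameter approaches the pole at 4j and the
# magnitude grows without bound; far away from the poles it dies off.
print(mag_H(0.399))
print(mag_H(0.401))
print(mag_H(20.0))
```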

Now let us upgrade our car model by adding a damper with a damping coefficient c;
see Fig. 12.30. The mass of the car remains at m = 1000 kg. The road excitation is still the
same with f (t) = 0.1 cos 10vt. The car will drive in a speed range 0 < v < 20 m/s. How
would one design the stiffness k and damping c of the suspension?

Figure 12.31: Pole-zero plot of the upgraded car model with a damper

To answer this question, let us first write down the new equation of motion as

1000ẍ + cẋ + kx = 0.1 cos 10vt (12.158)

Poles of the upgraded model then satisfy

1000s2 + cs + k = 0 (12.159)

and the poles are

p1,2 = 5 × 10⁻⁴ [ −c ± j √(4000k − c²) ]    (12.160)

(written for the underdamped case c² < 4000k).
The poles are plotted on the complex s-plane as shown in Fig. 12.31. Note that the real part of the poles is solely determined by the damping coefficient c, while the imaginary part is largely dominated by the stiffness coefficient k.
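Equation (12.160) can be cross-checked against a numerical root solver. A sketch with assumed trial values of c and k (not a design from the text):

```python
import numpy as np

# Assumed trial suspension values (not a design from the text).
m, c, k = 1000.0, 4000.0, 16000.0

poles = np.roots([m, c, k])  # 1000 s^2 + c s + k = 0, Eq. (12.159)

# Per (12.160): real part -c/2000, imaginary part ±5e-4*sqrt(4000k - c^2).
expected_real = -c / 2000.0
expected_imag = 5e-4 * (4000.0 * k - c**2) ** 0.5
print(poles)
print(expected_real, expected_imag)
```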

Since s = 10vj, the operating speed range 0 < v < 20 m/s translates to a range of input s-parameters −200j < s < 200j. This range of s is also shown in Fig. 12.31 as the solid bar. To avoid a large response of x(t), the poles need to be kept away from the solid bar representing the input s-parameter by a certain margin, indicated by the dashed line in Fig. 12.31. One can then adjust c and k to ensure that the poles lie outside the margin, away from the input s-parameter.

Figure 12.32: Differentiation process using transfer functions
Figure 12.33: Differentiation process in block diagrams

12.5 Connection to Operational Block Diagrams

Transfer functions and block diagrams together form a very useful toolset to analyze dynam-
ics of a complex system. Therefore, it is beneficial to revisit the block diagram in view of
transfer functions.

12.5.1 Revisit of Block Diagrams

Figure 12.32 shows a differentiation process in the spirit of transfer functions. The left side
of the d/dt block is a complex input U (s)est . After the differentiation process, the right side
results in the derivative U (s)sest . At the same time, the right side is also the output Y (s)est .
Therefore,
Y (s) = sU (s) (12.161)

and the transfer function of the differentiation process is

H(s) = Y(s)/U(s) = s    (12.162)

In Chapter 10, we learned that the block diagram of differentiation is denoted by an operator S; see Fig. 10.2, which is partially reproduced here in Fig. 12.33. By comparing the transfer function in (12.162) and the block diagram in Fig. 12.33, you probably understand now why S is used in Chapter 10 to represent a differentiation process. The block diagram is indeed a direct representation of the transfer function associated with the operation of the block. Therefore, a block with s or S represents a differentiation process.

Figure 12.34: Integration process using transfer functions
Figure 12.35: Integration process in block diagrams

Figure 12.34 shows an integration process in view of transfer functions. The left side of the ∫ dt block is a complex input U(s)e^{st}. After the integration process, the right side results in the integral (U(s)/s) e^{st}. At the same time, the right side is also the output Y(s)e^{st}. Therefore,

Y(s) = U(s)/s    (12.163)

and the transfer function of the integration process is

H(s) = Y(s)/U(s) = 1/s    (12.164)

In Chapter 10, we learned that the block diagram of an integration process is denoted by an operator S⁻¹; see Fig. 10.3, which is partially reproduced here in Fig. 12.35. Again, we see that the block diagram is a direct representation of the transfer function associated with the operation of the block. Therefore, a block of 1/s or S⁻¹ denotes an integration process.

The moral is the following. The S operator in block diagrams is the s parameter in the transfer function approach. Each block diagram is a direct representation of the transfer function associated with the operation of the block. Block diagrams are just a different way to represent transfer functions.

Figure 12.36: Block diagrams in a series connection
Figure 12.37: Synthesized block

12.5.2 Interconnected Systems

Block diagrams in combination with transfer functions are very powerful tools for analyzing complex systems. Often, a complex system can be disassembled into many smaller and simpler blocks whose transfer functions are derived and understood. Then the transfer functions of all the constituent blocks are synthesized to obtain the transfer function of the entire system. Two basic types of connections are series connections and parallel connections. They are explained as follows.

Series Connections. Figure 12.36 illustrates two blocks in a series connection. The first block has a transfer function H1(s) and the second block has a transfer function H2(s). The input to the first block is U(s)e^{st} and the output of the first block is X(s)e^{st}. Therefore, they are related via

H1(s) = X(s)/U(s)    (12.165)

Since the two blocks are in a series connection, the output X(s)e^{st} of the first block is the input of the second block. At the same time, the second block has an output Y(s)e^{st} satisfying

H2(s) = Y(s)/X(s)    (12.166)

Now the two blocks in the series connection can be treated or synthesized as a single block with input U(s)e^{st} and output Y(s)e^{st}; see Fig. 12.37. The corresponding transfer function is then

H(s) = Y(s)/U(s) = (Y(s)/X(s)) · (X(s)/U(s)) = H1(s)H2(s)    (12.167)
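A quick numerical check of the series rule (12.167), using two assumed first-order blocks that are not from the text:

```python
import numpy as np

# Two assumed first-order blocks: H1(s) = 1/(s+1), H2(s) = 2/(s+3).
s = 2.0 + 0.5j  # arbitrary test point on the s-plane

H1 = 1.0 / (s + 1.0)
H2 = 2.0 / (s + 3.0)
H_series = H1 * H2

# The synthesized block 2/((s+1)(s+3)) has the product denominator:
den = np.polymul([1.0, 1.0], [1.0, 3.0])  # (s+1)(s+3) = s^2 + 4s + 3
print(H_series)
print(2.0 / np.polyval(den, s))
```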

Figure 12.38: Block diagrams in a parallel connection
Figure 12.39: Synthesized block

Parallel Connections. Figure 12.38 illustrates two blocks in a parallel connection. The two blocks have transfer functions H1(s) and H2(s). The input U(s)e^{st} is the same for the two blocks. As a result, the output from the first block is Y1(s)e^{st} with

Y1 (s) = H1 (s)U (s) (12.168)

and the output from the second block is Y2 (s)est with

Y2 (s) = H2 (s)U (s) (12.169)

Since the two blocks are in a parallel connection, the overall output Y (s)est is the sum from
the two blocks, i.e.,
Y (s) = Y1 (s) + Y2 (s) (12.170)

Now the two blocks in the parallel connection are synthesized into a single block with input U(s)e^{st} and an output Y(s)e^{st}; see Fig. 12.39. The corresponding transfer function is then

H(s) = Y(s)/U(s) = Y1(s)/U(s) + Y2(s)/U(s) = H1(s) + H2(s)    (12.171)
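A matching check of the parallel rule (12.171), with the same two assumed blocks used above for the series case:

```python
import numpy as np

# The same two assumed blocks, now in parallel: H = H1 + H2.
s = -0.5 + 1.0j  # arbitrary test point

H1 = 1.0 / (s + 1.0)
H2 = 2.0 / (s + 3.0)
H_parallel = H1 + H2

# Over a common denominator: (s+3) + 2(s+1) = 3s + 5 over (s+1)(s+3).
num = np.polyval([3.0, 5.0], s)
den = np.polyval(np.polymul([1.0, 1.0], [1.0, 3.0]), s)
print(H_parallel)
print(num / den)
```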

Example 12.15 Consider the RLC circuit shown in Fig. 12.40. The circuit consists of a
resistor with resistance R, an inductor of inductance L, and a capacitor of capacitance C.
The three circuit elements are in a series connection and are driven by a voltage source Vs (t).
As a result, a current i(t) flows through the circuit inducing a voltage drop vL (t) across the
inductor and vC(t) across the capacitor. What are the transfer functions from Vs(t) to vL(t) and vC(t), respectively?

Figure 12.40: An RLC circuit

The governing equation of the circuit takes the form of

L d²i/dt² + R di/dt + (1/C) i = dVs/dt    (12.172)

The transfer function H(s) from Vs(t) to i(t) is then

H(s) = s/(Ls² + Rs + 1/C)    (12.173)

To obtain the transfer function HL(s) from Vs(t) to vL(t), one could use the block diagrams shown in Fig. 12.41. There are two blocks in Fig. 12.41. The first block represents the transfer function from Vs(t) to i(t). The second block represents the inductor, which satisfies an elemental equation

vL(t) = L di/dt    (12.174)

and has a transfer function Ls from an input i(t) to an output vL(t). Therefore, the series connection of the two blocks in Fig. 12.41 will generate a transfer function HL(s) that goes from Vs(t) to vL(t). Following (12.167), one would obtain

HL(s) = Ls · H(s) = Ls²/(Ls² + Rs + 1/C)    (12.175)

Note that both the numerator and the denominator are second order in HL(s).

Figure 12.41: Block diagram to find HL(s)
Figure 12.42: Block diagram to find HC(s)

To obtain the transfer function HC(s) from Vs(t) to vC(t), one could use the block diagrams shown in Fig. 12.42. There are also two blocks in Fig. 12.42. The first block represents the transfer function from Vs(t) to i(t). The second block represents the capacitor, which satisfies an elemental equation

i(t) = C dvC/dt    (12.176)

or

vC(t) = (1/C) ∫ i(t) dt    (12.177)

The corresponding transfer function from an input i(t) to an output vC(t) is 1/(Cs). Therefore, the series connection of the two blocks in Fig. 12.42 will generate a transfer function HC(s) that goes from Vs(t) to vC(t). Following (12.167), one would obtain

HC(s) = (1/(Cs)) · H(s) = (1/(Cs)) · s/(Ls² + Rs + 1/C) = 1/(LCs² + RCs + 1)    (12.178)
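The last equality in (12.178) can be verified at an arbitrary point of the s-plane, using assumed component values that are not from the text:

```python
# Assumed illustrative values (not from the text): R = 2, L = 1, C = 0.5.
R, L, C = 2.0, 1.0, 0.5
s = 1.0 + 2.0j  # arbitrary test point

H = s / (L * s**2 + R * s + 1.0 / C)  # Vs -> i, Eq. (12.173)
HC_series = (1.0 / (C * s)) * H       # capacitor block 1/(Cs) in series
HC_direct = 1.0 / (L * C * s**2 + R * C * s + 1.0)

print(HC_series)
print(HC_direct)
```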

One might ask whether a pole-zero cancellation occurs in (12.178). It does: the transfer function H(s) in (12.173) contributes a zero at s = 0, while the capacitor block 1/(Cs) contributes a pole at s = 0, and the two cancel in the product HC(s). The cancellation at s = 0 corresponds to a constant input excitation. In approaching a steady state under a constant Vs(t), the current i(t) gradually diminishes to zero (this is the zero of H(s) at s = 0) and the voltage vC(t) approaches a constant, so the charge q(t) = C vC(t) stored in the capacitor also settles to a finite constant. Because of the pole-zero cancellation, however, the s = 0 dynamics of the integrating capacitor block does not appear in HC(s) at all: responses in vC(t) or vL(t) cannot reveal that cancelled mode.

Example 12.16 It is not surprising that I am going to revisit the flagship example using
block diagrams in this example. Figure 12.43 shows the flagship example. According to

Figure 12.43: The flagship example revisited
Figure 12.44: Free-body diagram relating Fs(t) and fk

Example 12.13, the equation of motion is

m d³x2/dt³ + (mk/B) ẍ2 + k ẋ2 = (k/B) Fs(t) + Ḟs(t)    (12.179)

and the transfer function from Fs(t) to x2(t) is

H(s) = (k/B + s)/(ms³ + (mk/B)s² + ks)    (12.180)

What is the transfer function from Fs (t) to the force fk in the spring?

To find the transfer function from Fs (t) to fk , one needs to find out how these two
forces are related. Figure 12.44 shows the free-body diagram of the rigid cart, where Fs (t)
is acting at the right side and the spring force fk is acting at the left side. Application of
Newton’s second law leads to
Fs (t) − fk = mẍ2 (12.181)
or
fk = Fs (t) − mẍ2 (12.182)

Figure 12.45 shows the block diagram representing (12.182). Basically, two signals Fs and −mẍ2 enter a summation operator to produce fk. Figure 12.46 further replaces the term mẍ2 by a block ms², for the following reason. According to (12.162), a differentiation process produces a block of transfer function s. Therefore, the operation mẍ2 corresponds to a block ms² with input x2 and output mẍ2. Finally, the displacement x2 in Fig. 12.46 is related to Fs through the transfer function H(s) in (12.180), thus leading to the complete block diagram shown in Fig. 12.47.

Figure 12.45: Block diagram representing (12.182)
Figure 12.46: Replacing ẍ2 by a block diagram

Figure 12.47: The complete block diagram describing the relation from Fs (t) to fk

The interconnected system shown in Fig. 12.47 has a parallel connection of a unit block (i.e., "1") and a subsystem that consists of two blocks H(s) and ms² in a series connection. According to (12.167), the transfer function of the subsystem is ms²H(s). According to (12.170), the transfer function Hk(s) of the interconnected system from Fs to fk is

Hk(s) = 1 − ms²H(s)    (12.183)

Substitution of (12.180) into (12.183) results in

Hk(s) = 1 − ms² (k/B + s)/(ms³ + (mk/B)s² + ks)
      = ks/(ms³ + (mk/B)s² + ks) = k/(ms² + (mk/B)s + k)    (12.184)
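The simplification in (12.184) can be spot-checked numerically with assumed values not taken from the text:

```python
# Assumed illustrative values (not from the text): m = 1, k = 4, B = 2.
m, k, B = 1.0, 4.0, 2.0
s = 0.7 - 1.3j  # arbitrary test point

H = (k / B + s) / (m * s**3 + (m * k / B) * s**2 + k * s)  # Eq. (12.180)
Hk_block = 1.0 - m * s**2 * H                              # Eq. (12.183)
Hk_simplified = k / (m * s**2 + (m * k / B) * s + k)       # Eq. (12.184)

print(Hk_block)
print(Hk_simplified)
```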

Note that a pole-zero cancellation occurs at s = 0 in (12.184). Again, the explanation is the same as in Example 12.13. The condition s = 0 corresponds to a constant applied force Fs. In this case, the steady-state response leads to a constant stretch of the spring and a constant stretch rate of the damper. As a result, the displacement x2 grows indefinitely, but the spring force fk remains constant and finite. The infinite growth of the displacement x2 results from the pole at s = 0, while the finite fk results from the pole-zero cancellation. Therefore, an analysis of the transfer function Hk(s) cannot reveal the entire dynamics, because it will not indicate that x2 has drifted to infinity.

12.6 Transfer Functions in State Formulation

The concept of transfer functions is not limited to single-input-single-output (SISO) systems. It also works for multi-input-multi-output (MIMO) systems, especially in the form of state-space formulation. Consider the following state equation

ẋ = Ax + Bu (12.185)

and output equation


y = Cx + Du (12.186)

Let us consider a complex input excitation

u = U(s)est (12.187)

where U(s) is now a constant and complex vector. Since the system is linear, the state
variable vector x(t) and the output variable vector y(t) will take the form of

x = X(s)est (12.188)

and
y = Y(s)est (12.189)

Substitution of (12.187) and (12.188) into (12.185) leads to

sX(s) = AX(s) + BU(s) (12.190)



By collecting terms involving X(s) to one side, one obtains

(sI − A) X(s) = BU(s) (12.191)

where I is an identity matrix of the same dimensions as A. By inverting (12.191), one obtains

X(s) = (sI − A)−1 BU(s) (12.192)

Now let us focus on the output equation (12.186). Substitution of (12.187), (12.188),
and (12.189) into (12.186) results in

Y(s) = CX(s) + DU(s) (12.193)

Moreover, substitution of (12.192) into (12.193) leads to

Y(s) = [C (sI − A)⁻¹ B + D] U(s)    (12.194)

Based on (12.194), one can define a transfer matrix H(s) mapping the input vector U(s)
to the output vector Y(s), i.e.,
Y(s) = H(s)U(s) (12.195)

where
H(s) = C (sI − A)−1 B + D (12.196)

There are several things worth noting. First of all, the transfer function formulation now takes the form of a transfer matrix H(s). Since it is a matrix operation, the order becomes critical: the transfer matrix H(s) must be pre-multiplied to the input vector U(s) as shown in (12.195). The number of rows of H(s) is the same as that of the output variable vector y. The number of columns of H(s) is the same as the number of rows of the input variable vector u. For example, if there are two input variables u1(t) and u2(t) and three output variables y1(t), y2(t), and y3(t), then the transfer matrix H(s) is a 3 × 2 matrix of the form

H(s) = ⎡ H11(s)  H12(s) ⎤
       ⎢ H21(s)  H22(s) ⎥    (12.197)
       ⎣ H31(s)  H32(s) ⎦



and (12.195) will take the form of

⎡ Y1(s) ⎤   ⎡ H11(s)  H12(s) ⎤
⎢ Y2(s) ⎥ = ⎢ H21(s)  H22(s) ⎥ ⎛ U1(s) ⎞    (12.198)
⎣ Y3(s) ⎦   ⎣ H31(s)  H32(s) ⎦ ⎝ U2(s) ⎠

Second, system poles can be recovered from (12.196) by recalling

(sI − A)⁻¹ = adj(sI − A) / |sI − A|    (12.199)
where the numerator is the adjoint (or adjugate) matrix of (sI − A) and the denominator is
the determinant of (sI − A). System poles then satisfy

|sI − A| = 0 (12.200)

Note that (12.200) is exactly the equation determining eigenvalues of matrix A; see (11.8).
This is a direct proof that system poles are eigenvalues of state matrix A.

The expression in (12.199) also shows that all elements of the transfer matrix H(s) have the same poles, because the denominator is the same. The zeros, however, may differ from element to element in the transfer matrix H(s), because the adjoint matrix of (sI − A) may have different elements.
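This claim, that the roots of |sI − A| = 0 are exactly the eigenvalues of A, can be checked directly; the sketch below uses the state matrix of Example 12.17:

```python
import numpy as np

# State matrix from Example 12.17.
A = np.array([[0.0, 1.0], [-6.0, -7.0]])

char_poly = np.poly(A)        # coefficients of |sI - A| = s^2 + 7s + 6
poles = np.roots(char_poly)   # roots of |sI - A| = 0, Eq. (12.200)
eigs = np.linalg.eigvals(A)   # eigenvalues of A

print(char_poly)
print(np.sort(poles.real), np.sort(eigs.real))
```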

Example 12.17 Consider the following state equation

d/dt ⎡ x1 ⎤ = ⎡  0   1 ⎤ ⎡ x1 ⎤ + ⎡ 0 ⎤ f(t)    (12.201)
     ⎣ x2 ⎦   ⎣ −6  −7 ⎦ ⎣ x2 ⎦   ⎣ 1 ⎦

and output equation

⎡ y1 ⎤ = ⎡ 1  0 ⎤ ⎡ x1 ⎤ + ⎡ 0 ⎤ f(t)    (12.202)
⎣ y2 ⎦   ⎣ 0  1 ⎦ ⎣ x2 ⎦   ⎣ 0 ⎦

Find the transfer functions from f(t) to y1(t) and y2(t), respectively.

From the state equation (12.201) and output equation (12.202), one can identify the following four matrices

A = ⎡  0   1 ⎤,  B = ⎡ 0 ⎤,  C = ⎡ 1  0 ⎤,  D = ⎡ 0 ⎤    (12.203)
    ⎣ −6  −7 ⎦       ⎣ 1 ⎦      ⎣ 0  1 ⎦       ⎣ 0 ⎦

Moreover, (12.196) implies

H(s) = ⎡ H1(s) ⎤ = C (sI − A)⁻¹ B + D    (12.204)
       ⎣ H2(s) ⎦

First of all, let us calculate (sI − A)⁻¹. From (12.203),

(sI − A)⁻¹ = ⎡ s   −1  ⎤⁻¹ = 1/(s² + 7s + 6) ⎡ s+7  1 ⎤    (12.205)
             ⎣ 6   s+7 ⎦                      ⎣ −6   s ⎦

Substitution of (sI − A)⁻¹ from (12.205) and C, B, and D from (12.203) into (12.204) results in

H(s) = ⎡ H1(s) ⎤ = 1/(s² + 7s + 6) ⎡ s+7  1 ⎤ ⎡ 0 ⎤ = 1/(s² + 7s + 6) ⎡ 1 ⎤    (12.206)
       ⎣ H2(s) ⎦                   ⎣ −6   s ⎦ ⎣ 1 ⎦                   ⎣ s ⎦

Therefore,

H1(s) = 1/(s² + 7s + 6),   H2(s) = s/(s² + 7s + 6)    (12.207)
Note that the two transfer functions H1 (s) and H2 (s) have the same poles but different zeros.
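The closed forms in (12.207) can be cross-checked by evaluating C(sI − A)⁻¹B + D numerically at a test point:

```python
import numpy as np

# Matrices identified in (12.203).
A = np.array([[0.0, 1.0], [-6.0, -7.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
D = np.zeros((2, 1))

s = 1.5 + 0.5j  # arbitrary test point
H = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D  # Eq. (12.196)

print(H[0, 0], 1.0 / (s**2 + 7.0 * s + 6.0))  # H1(s)
print(H[1, 0], s / (s**2 + 7.0 * s + 6.0))    # H2(s)
```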
Chapter 13

Frequency Response Functions

In this chapter, I will discuss the concept of frequency response functions. Frequency response functions are basically transfer functions evaluated along the pure imaginary axis. If one has thoroughly understood the concept of transfer functions, this chapter will be very straightforward. If not, one might want to review Chapter 12 to brush up on the concept of transfer functions before starting this chapter.

In this chapter, I will first explain the motivation for using frequency response functions instead of transfer functions, and formally introduce the concept of frequency response functions. Then I will demonstrate frequency response functions of first- and second-order systems, and explain how characteristics of such systems (e.g., time constant, natural frequency, and viscous damping factor) affect the magnitude and phase of the frequency response functions. I will also explain Bode plots, which are a very common way to represent frequency response functions.

13.1 Motivation

In Chapter 12, we have learned the method of transfer functions to find particular solutions
or response of a system as illustrated in Fig. 13.1. The basic principle is the following. Many


Figure 13.1: Transfer function approach to find particular solution

input excitations take the form of a complex function U (s)est . Moreover, let us assume
that the system has a transfer function H(s). Then the response or the particular solution
corresponding to the complex input excitation is Y (s)est , where

Y (s) = H(s) · U (s) (13.1)

The method of transfer functions has many advantages. The biggest advantage is its mathematical rigor: when the dynamics of the system (e.g., an ordinary differential equation) and the input functions are known mathematically, the particular solution as output can be obtained accurately. Second, the method of transfer functions works for a large class of problems. It works for a very large class of input functions, significantly larger than the form U(s)e^{st}; in fact, the method will work as long as the Laplace transform of the input function exists. It also works for a very large class of systems, even unstable systems. The third advantage of the method of transfer functions is the physical insight it reveals. For example, the magnitude of a transfer function amplifies or attenuates the input magnitude to give the output magnitude, and the phase of a transfer function shifts the input phase to give the output phase. The concept of poles and zeros indicates when the input will be amplified indefinitely or suppressed entirely by the system.

The method of transfer functions, however, has its own challenges. The biggest challenge is that a transfer function cannot be measured experimentally for a general complex s. If one wanted to measure H(s), one would need to generate an input function U(s)e^{st} and measure the corresponding output function Y(s)e^{st}. This is very difficult to achieve, because U(s)e^{st} involves exponential functions that may grow or decay very quickly.

There is, however, one exception. When s is along the pure imaginary axis, i.e., s = jω, the input function takes the form of sin ωt, cos ωt, or linear combinations of them. These sinusoidal functions are easy to generate, easy to measure, and easy to operate on mathematically. As a result, one can obtain H(s) along the pure imaginary axis both mathematically
and experimentally. Consideration of transfer functions along the pure imaginary axis also
reduces mathematical complexity. A transfer function has a complex independent variable
s, which requires a real part and an imaginary part to define a meaningful transfer function
H(s). If a transfer function H(s) is evaluated along the pure imaginary axis, H(s) now only
depends on a frequency variable ω, which is the imaginary part of the complex s variable.

For all the reasons above, one often needs to strike a compromise between mathematical rigor and practicality by using a transfer function evaluated along the pure imaginary axis. A transfer function evaluated along the pure imaginary axis is known as a frequency response function.

13.2 Definition

For an asymptotically stable system, a frequency response function G(ω) is a transfer function
H(s) of the system evaluated along the pure imaginary axis s = jω, i.e.,

G(ω) = H(jω) (13.2)

where ω is a frequency parameter.

There are many things in this definition worthy of further discussion. The first natural question is: why is the condition of an asymptotically stable system imposed? As one knows, the complete solution y(t) of a linear differential equation is

y(t) = yh (t) + yp (t) (13.3)

where yh (t) is the homogeneous solution and yp (t) is the particular solution. When the
system is asymptotically stable, the homogeneous solution yh (t) dies out eventually, i.e.,

lim_{t→∞} yh(t) = 0    (13.4)

Therefore, the steady-state solution (or long-term solution) is indeed the particular solution yp(t) and can be predicted via the method of transfer functions. This implies that a frequency response function G(ω) is an input-output relationship under a steady-state condition. Specifically, it is a relationship between a sinusoidal input and its corresponding steady-state output.

The second point I want to mention is that the excitation for the frequency response function G(ω) is sinusoidal. When s = jω, the corresponding excitation U(s)e^{st} is sinusoidal. This point may seem trivial, because it has been made several times already, but it is very important: the rest of this chapter builds upon it. From now on, I will simply assume that the input is u(t) = A sin ωt, where A is the magnitude and ωt is the phase angle of the input excitation.

The third thing I want to point out is that the independent variable of G(ω) is the frequency variable ω, which is a real variable. The variable ω determines the location of the s variable along the pure imaginary axis. It should be noted that ω is real but the frequency response function G(ω) is in general complex. Therefore, G(ω) can be represented by a real part and an imaginary part. Or, equivalently, G(ω) can be represented in terms of a magnitude and a phase angle, i.e.,

G(ω) = |G(ω)| e^{jφ(ω)}, φ(ω) ≡ ∠G(ω) (13.5)

where |G(ω)| is the magnitude and φ(ω) ≡ ∠G(ω) is the phase angle of G(ω).

The simplest way to visualize G(ω) is via the transfer function H(s) from which G(ω)
is derived. One can plot the magnitude of H(s) over the entire complex s-plane and it will
appear like a surface. Then one can cut the |H(s)| surface along the pure imaginary axis
(as if you were cutting a cake) and the side view of the cut will generate |G(ω)|. A similar
procedure can be done to find the phase angle ∠G(ω).

For example, Fig. 13.2 and Fig. 13.3 show the magnitude |G(ω)| and the phase angle ∠G(ω) of the frequency response function G(ω) derived from the following first-order system

H(s) = 1/(s + 1) (13.6)

whose pole is located at

p1 = −1 (13.7)

Figure 13.2: Magnitude of G(ω) for a first-order system
Figure 13.3: Phase angle of G(ω) for a first-order system

In a magnitude plot, a pole can, in general, be identified via a big spike on the complex s-plane. In Fig. 13.2, the pole with a big spike occurs at s = −1. In a phase plot, a pole can usually be identified via the center of a "swirl" or "vortex." If one draws a circle around a pole, one will find that the phase angle either descends or ascends like a spiral staircase around a vertical post. For the phase plot in Fig. 13.3, one can see such a phenomenon around s = −1.

Now let us focus on the cross-sectional views in Fig. 13.2 and Fig. 13.3 along the pure imaginary axis s = jω. The magnitude |G(ω)| peaks at ω = 0 and gradually dies out as ω → ∞. The phase angle ∠G(ω) starts at zero at ω = 0 and gradually transitions to −π/2 as ω → ∞. These are typical behaviors of a first-order system, and we can use them to approximate the location of the pole. I will explain the details in Section 13.4.
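The cutting-the-cake picture above can also be checked numerically. The following Python sketch (my own illustrative check, not part of the text) evaluates H(s) = 1/(s + 1) along s = jω and confirms the magnitude peak at ω = 0 and the phase trend toward −90°:

```python
import numpy as np

# Evaluate H(s) = 1/(s + 1) along s = jw: the magnitude peaks at w = 0
# and the phase runs from 0 toward -90 degrees as w grows.
G = lambda w: 1.0 / (1.0 + 1j * w)

peak = abs(G(0.0))                      # |G(0)| = 1, the peak value
phase_far = np.angle(G(1e6))            # approaches -pi/2 for large w
print(peak, np.degrees(phase_far))
```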

As another example, Fig. 13.4 and Fig. 13.5 show the magnitude |G(ω)| and the phase angle ∠G(ω) of the frequency response function G(ω) derived from the following second-order system

H(s) = 1/(s² + s + 4.25) (13.8)

whose two poles are located at

p1,2 = −0.5 ± 2j (13.9)

Figure 13.4: Magnitude of G(ω) for a second-order system
Figure 13.5: Phase angle of G(ω) for a second-order system

Again, one can see in Fig. 13.4 that big spikes appear at the poles. Also, the phase angle in Fig. 13.5 descends like a spiral staircase around the poles.

Now let us focus on the cross-sectional views in Fig. 13.4 and Fig. 13.5 along the pure imaginary axis s = jω. The magnitude |G(ω)| has a finite value at ω = 0. Then it peaks at ω = 2, which is the imaginary part of the pole. Then it rolls off to zero as ω → ∞. The phase angle ∠G(ω) starts at zero at ω = 0. Then it experiences a sharp transition at ω = 2 and finally settles down to −π as ω → ∞. These are typical behaviors of a second-order system, and we can use them to locate the poles. I will explain the details in Section 13.5.

To wrap up this section, I want to reiterate that a frequency response function G(ω)
reveals a lot of information about the dynamics of a system, because G(ω) is the cross-
sectional view of H(s) along the pure imaginary axis s = jω. For an asymptotically stable
system, all system poles are on the left half of the complex s-plane. Some poles may be far away from the pure imaginary axis, while others may be close. Poles far away from the pure imaginary axis have very little influence on G(ω); they are usually not so important, because they cannot be "felt" in real applications. In contrast, poles close to the pure imaginary axis are much more critical, because they dominate the magnitude and phase of the frequency response function G(ω). They could also be "moved" to the right half of the complex s-plane when follower forces are present, causing the system to become unstable (e.g., the collapse of the Tacoma Narrows Bridge).

13.3 G(ω) as an Input-Output Relation

It is important to know how G(ω) relates a sinusoidal input excitation to the corresponding
output response. Once this input-output relation is known, one can use it to identify G(ω)
experimentally. For example, one can drive the system experimentally via a sinusoidal input
and measure the corresponding output to obtain an input-output relation, from which G(ω)
can be identified.

The procedure to find the response under a harmonic excitation has been explained in Section 12.3.1. Let us derive the input-output relation more rigorously here by using the concept of the frequency response function G(ω).

Step 1: Analyze the Input. The input excitation is sinusoidal in the form of

u(t) = A sin ωt = Im{Ae^{jωt}} ≡ Im{U(s)e^{st}} (13.10)

In other words, the magnitude and phase of the input U(s)e^{st} are

|U(s)e^{st}| = A, ∠U(s)e^{st} = ωt (13.11)

On the other hand, the last equality of (13.10) gives

U(s) = A, s = jω (13.12)

Therefore, s = jω in (13.12) confirms that the harmonic excitation sin ωt is equivalent to an s-parameter location on the pure imaginary axis.

Step 2: Analyze the Transfer Function. Let us assume that the system has a transfer function H(s). Since s = jω, the transfer function is evaluated at s = jω, resulting in

H(jω) ≡ G(ω) = |G(ω)| e^{j∠G(ω)} (13.13)

where |G(ω)| is the magnitude and ∠G(ω) is the phase of the frequency response function G(ω).

Step 3: Derive the Output. The output response y(t) will be

y(t) ≡ Im{Y(s)e^{st}}|_{s=jω} (13.14)

Figure 13.6: Input-output relation via G(ω), where φ is the phase angle of G(ω)

where Y(s) is obtained from

Y(s) = H(s)U(s) (13.15)

Substitution of (13.15) into (13.14) leads to

y(t) ≡ Im{H(s)U(s)e^{st}}|_{s=jω} (13.16)

Moreover, substitution of (13.10) and (13.13) into (13.16) yields

y(t) = Im{|G(ω)| e^{j∠G(ω)} · Ae^{jωt}} = Im{A |G(ω)| e^{j[ωt + ∠G(ω)]}}
     = A |G(ω)| sin[ωt + ∠G(ω)] (13.17)

According to (13.17), the magnitude of the output response y(t) is A |G(ω)|. Compared with the input magnitude A, the output magnitude is amplified by the magnitude of the frequency response function |G(ω)|. Moreover, the phase angle of the output response y(t) is ωt + ∠G(ω). Compared with the input phase angle ωt (cf. (13.11)), the output phase angle is shifted by the phase angle of the frequency response function ∠G(ω). Figure 13.6 shows the input-output relationship via G(ω), where the sinusoidal output y(t) has a magnitude Y and a phase shift φ relative to the input phase. Obviously from (13.17), Y = A |G(ω)| and φ = ∠G(ω).

Figure 13.6 and (13.17) suggest a practical way to determine the frequency response function G(ω). One can drive the system sinusoidally via

u(t) = A sin ωt (13.18)

which has an input magnitude A and phase angle ωt. Then one measures the output y(t) as

y(t) ≡ Y sin (ωt + φ) (13.19)

which has an output magnitude Y and phase angle ωt + φ. According to (13.17), the magnitude of the frequency response function |G(ω)| is obtained as

|G(ω)| = Y/A (13.20)

Figure 13.7: A flywheel subjected to viscous bearing damping

According to (13.17) again, the phase angle of the frequency response function ∠G(ω) is obtained as

∠G(ω) = φ = ∠output − ∠input (13.21)

The magnitude |G(ω)| and phase ∠G(ω) can then define the frequency response function G(ω) in a polar form through (13.13).
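As a sketch of this measurement procedure, the following Python snippet recovers |G(ω)| from the amplitude ratio (13.20) and ∠G(ω) from the input-output phase difference (13.21). It is my own illustrative example: the system H(s) = 1/(s + 2) is arbitrary, and the "measured" steady-state output is generated analytically in place of experimental data.

```python
import numpy as np

# Hypothetical experiment: drive H(s) = 1/(s + 2) with u(t) = A sin(w t)
# and "measure" the steady-state output, here synthesized analytically.
A, w = 1.0, 3.0
G_true = 1.0 / (2.0 + 1j * w)            # G(w) = H(jw)
t = np.linspace(0.0, 20.0, 20001)
y = A * abs(G_true) * np.sin(w * t + np.angle(G_true))  # steady state

# Magnitude from the amplitude ratio Y/A, per (13.20) ...
mag_est = y.max() / A
# ... and phase from the sin/cos correlation components of y(t), per (13.21)
a = 2.0 * np.mean(y * np.sin(w * t))
b = 2.0 * np.mean(y * np.cos(w * t))
phase_est = np.arctan2(b, a)
print(mag_est, phase_est)
```

In a real test the same arithmetic would be applied to recorded input and output signals, one driving frequency ω at a time.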

People have also used equations (13.18) to (13.21) as a way to define frequency response
functions G(ω) without referring to transfer functions H(s) as shown in (13.2). The definition
goes as follows. If a linear, asymptotically stable system is driven with a sinusoidal input
A sin ωt as shown in (13.18), the system will reach a steady-state response Y sin(ωt + φ) as
shown in (13.19). Then the amplification ratio Y /A and the phase shift φ in (13.20) and
(13.21) collectively define the frequency response function G(ω).

Example 13.1 Consider a flywheel driven by a torque as shown in Fig. 13.7. The flywheel
has rotary inertia J and is subjected to bearing damping with coefficient B. In addition,
the flywheel is driven by a sinusoidal torque

T (t) = T0 sin ωt (13.22)

where T0 is the amplitude and ω is the driving frequency of the applied torque. Determine
the steady-state response of the rotor’s angular velocity Ω(t) for the case of J = 1 and B = 2.

To find the steady-state response, one needs to derive the equation of motion first. Use of ΣM = Iα leads to

T(t) − BΩ = J dΩ/dt (13.23)

For J = 1 and B = 2, the equation of motion in (13.23) becomes

dΩ/dt + 2Ω = T0 sin ωt (13.24)
The complete solution of (13.24) is

Ω(t) = C1 e−2t + K1 sin ωt + K2 cos ωt (13.25)

In (13.25), C1 e−2t is the homogeneous solution Ωh (t) and K1 sin ωt+K2 cos ωt is the particular
solution Ωp (t). Let us look at these two solutions more closely as follows.

For the homogeneous solution Ωh(t), the coefficient C1 will be determined by an initial condition. More importantly, the homogeneous solution Ωh(t) decays exponentially and vanishes as t → ∞. Therefore, the system is asymptotically stable, and the steady-state response is the particular solution Ωp(t).

For the particular solution Ωp (t), it can take several forms. One form is from the method
of undetermined coefficients, i.e.,

Ωp (t) = K1 sin ωt + K2 cos ωt (13.26)

where K1 and K2 are the undetermined coefficients from Table 1.1. The other form is to use trigonometry to convert (13.26) into a polar form

Ωp(t) = Y sin(ωt + φ) (13.27)

where Y ≡ √(K1² + K2²) is the magnitude and φ ≡ tan⁻¹(K2/K1) is the phase shift. One can see
that (13.27) takes the same form as in (13.19). Therefore, the steady-state solution can be
obtained via the frequency response function G(ω) instead of the method of undetermined
coefficients. Using G(ω) is not only simpler algebraically, but also provides good physical
insights such as the magnification ratio and the phase shift.

To obtain the response via G(ω), we need to find H(s) first. Consider the governing equation

dΩ/dt + 2Ω = u(t) (13.28)

By assuming u(t) = U(s)e^{st} and Ω(t) = Y(s)e^{st} in (13.28), the transfer function is then

H(s) = Y(s)/U(s) = 1/(s + 2) (13.29)

Figure 13.8: Polar form of complex number 2 + jω

The frequency response function G(ω) is

G(ω) = H(jω) = 1/(2 + jω) (13.30)

To put G(ω) in a polar form, it is easier to put the denominator 2 + jω in a polar form first. (This is an important rule to remember. Do not expand the fraction in (13.30). Otherwise, you will end up with a mess.) Figure 13.8 shows the denominator 2 + jω on the complex plane, with 2 being the base and ω being the height of a triangle. Therefore, the hypotenuse √(4 + ω²) is the magnitude and the angle tan⁻¹(ω/2) is the phase, resulting in the following polar form

2 + jω = √(4 + ω²) e^{j·tan⁻¹(ω/2)} (13.31)
By substituting (13.31) back into (13.30) and recalling (12.33), one obtains

G(ω) = 1/(2 + jω) = (1/√(4 + ω²)) e^{−j·tan⁻¹(ω/2)} (13.32)

In other words,

|G(ω)| = 1/√(4 + ω²), ∠G(ω) = −tan⁻¹(ω/2) (13.33)

The final step to derive the steady-state response in (13.27) is to recall

Y = |G(ω)| · T0 = T0/√(4 + ω²) (13.34)

and

φ = ∠G(ω) = −tan⁻¹(ω/2) (13.35)

Figure 13.9: Plots of magnitude and phase of G(ω) as a function of ω

Hence

Ωp(t) = (T0/√(4 + ω²)) sin[ωt − tan⁻¹(ω/2)] (13.36)

The value of the frequency response function G(ω) lies not only in the simplicity with which it yields the steady-state solution Ωp(t), but also in its ability to provide physical insight through |G(ω)| and ∠G(ω). In particular, a plot of |G(ω)| and ∠G(ω) with respect to ω offers a quick glance at how the system will respond as a function of the driving frequency ω. Figure 13.9 plots |G(ω)| and ∠G(ω) in (13.33) as a function of ω. When ω is small, |G(ω)| ≈ 1/2 and ∠G(ω) ≈ 0. This means that the angular velocity is roughly in phase with the input torque and its amplitude is about half of the input amplitude. When the driving frequency increases from low to high, G(ω) experiences a sharp transition. The magnitude |G(ω)| drops very quickly and the phase angle ∠G(ω) shifts to −90°. When the driving frequency ω is large, the system barely responds and its response lags the input excitation phase by about 90°.
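The steady-state prediction (13.36) can be sanity-checked against brute-force time integration. The sketch below is my own verification (forward Euler with an arbitrarily chosen frequency and torque amplitude), not part of the original example:

```python
import numpy as np

# Integrate dOmega/dt + 2*Omega = T0*sin(w*t) long enough for the
# homogeneous solution C1*exp(-2t) to die out, then compare the result
# with the steady-state formula (13.36).
T0, w = 1.0, 3.0
dt = 1e-4
Om, t = 0.0, 0.0
for _ in range(200000):                    # integrate out to t = 20 s
    Om += dt * (T0 * np.sin(w * t) - 2.0 * Om)
    t += dt

Om_pred = T0 / np.sqrt(4.0 + w**2) * np.sin(w * t - np.arctan(w / 2.0))
print(Om, Om_pred)   # the two agree once the transient vanishes
```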

Example 13.2 Let us revisit the primitive car model studied in Example 12.10. The car model is reproduced in Fig. 13.10. The model consists of a car body mass and a suspension spring, but with no damping. The mass of the car is m = 1000 kg. The suspension spring stiffness is k = 16 kN/m. The bumpy road is simulated via a sinusoidal force input

u(t) = d sin ωt (13.37)



Figure 13.10: A simple car model used in Example 12.10 to simulate car motion on a bumpy
road

where d is a measure of the bumpiness of the road and ω is the driving frequency, which, in
turn, depends on the cruising speed v of the car and the waviness of the road. Displacement
of the car body (i.e., the car mass) from its equilibrium position is x(t). Determine the
steady-state response of x(t).

The equation of motion for the car model is

mẍ + kx = u(t) (13.38)

With m = 1000 kg, k = 16 kN/m, and u(t) = d sin ωt, the equation of motion (13.38) is reduced to

10³ẍ + 16 × 10³x = d sin ωt (13.39)

The complete solution of (13.39) is

x(t) = C1 sin 4t + C2 cos 4t + K1 sin ωt + K2 cos ωt (13.40)

In (13.40), C1 sin 4t + C2 cos 4t is the homogeneous solution xh (t) and K1 sin ωt + K2 cos ωt
is the particular solution xp (t). Let us look at these two solutions more closely as follows.

For the homogeneous solution xh(t), it oscillates sinusoidally and does not vanish as t → ∞. Technically speaking, the system is not asymptotically stable. A sinusoidal excitation A sin ωt will never produce a steady response Y sin(ωt + φ) as pictured in Fig. 13.6, because the homogeneous solution never dies out. Therefore, the concept of a frequency response function G(ω) is not valid. This view, of course, is rigorous, but it is also a bit pedantic. If the model

was a bit more realistic, it would have some damping. As a result, the damping would make the homogeneous solution go away as t → ∞ and a frequency response function G(ω) would exist. Hence, one can still use the concept of a frequency response function G(ω) for the equation of motion in (13.39), with the understanding that this is a limiting case of a more realistic model whose damping is nearly zero. With this understanding, we can still consider the steady-state response as the particular solution xp(t).

For the particular solution xp (t), it can take several forms as in Example 13.1. One
form is from the method of undetermined coefficients, i.e.,

xp (t) = K1 sin ωt + K2 cos ωt (13.41)

where K1 and K2 are the undetermined coefficients from Table 1.1. The other form is to use trigonometry to convert (13.41) into a polar form

xp(t) = Y sin(ωt + φ) (13.42)

where Y ≡ √(K1² + K2²) is the magnitude and φ ≡ tan⁻¹(K2/K1) is the phase shift. One can see
that (13.42) takes the same form as in (13.19). Therefore, the steady-state solution can be
obtained via the frequency response function G(ω) instead of the method of undetermined
coefficients.

To obtain the response via G(ω), we need to find H(s) first. By assuming u(t) = U(s)e^{st} and x(t) = Y(s)e^{st} in (13.39), the transfer function is then

H(s) = Y(s)/U(s) = 1/(10³s² + 16 × 10³) (13.43)
The frequency response function G(ω) is

G(ω) = H(jω) = 1/(16 × 10³ − 10³ω²) (13.44)

Note that G(ω) in (13.44) is a real number. For ω < 4, G(ω) > 0; therefore, |G(ω)| = G(ω) and ∠G(ω) = 0. For ω > 4, G(ω) < 0; therefore, |G(ω)| = −G(ω) and ∠G(ω) = −π (or π).
As a result, one concludes that the magnitude and phase are

|G(ω)| = 1/|16 × 10³ − 10³ω²|, ∠G(ω) = { 0 for ω < 4; −π for ω > 4 } (13.45)

Figure 13.11: Plots of magnitude and phase of G(ω) as a function of ω

Finally, the response magnitude Y in (13.42) is given by Y = d |G(ω)| and the phase shift φ
in (13.42) is given by φ = 6 G(ω).

Figure 13.11 shows the magnitude |G(ω)| and the phase shift ∠G(ω) given in (13.45). When ω is small, |G(ω)| ≈ 1/(16 × 10³) and ∠G(ω) = 0. Therefore, the car basically follows the excitation profile proportionally. As ω → 4, the system starts to experience a larger and larger amplitude in its response. For ω > 4, the response becomes out of phase with the excitation. As ω → ∞, the response amplitude decreases significantly.
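Because G(ω) in (13.44) is real, its phase can only jump between 0 and −π. A small numeric sketch (my own check, using the values of this example) makes the sign flip at the undamped natural frequency explicit:

```python
import numpy as np

# G(w) = 1/(16e3 - 1e3*w^2) is real: positive below wn = 4 rad/s
# (response in phase with excitation) and negative above (out of phase).
m, k = 1000.0, 16e3
wn = np.sqrt(k / m)                      # undamped natural frequency
G = lambda w: 1.0 / (k - m * w**2)
phase = lambda w: 0.0 if G(w) > 0 else -np.pi
print(wn, G(2.0), G(6.0), phase(2.0), phase(6.0))
```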

13.4 First-Order Systems

First-order systems often serve as filters. There are two types of filters: low-pass filters
and high-pass filters. As explained in Chapter 2, first-order systems have one characteristic
parameter, i.e., time constant τ . We will see in this section that the time constant plays a
critical role in defining the response of filters.

Let us consider the following first-order system governed by

τ dy/dt + y = u(t) (13.46)

By assuming u(t) = U(s)e^{st} and y(t) = Y(s)e^{st} in (13.46), one obtains the transfer function

H(s) = 1/(τs + 1) (13.47)

Figure 13.12: Plots of magnitude and phase of a low-pass filter

The frequency response function is then

G(ω) = H(jω) = 1/(1 + jωτ) (13.48)

To find the magnitude and phase of G(ω), it is easier to focus on the denominator first. By putting the denominator in a polar form, one obtains

1 + jωτ = √(1 + (ωτ)²) e^{j·tan⁻¹(ωτ)} (13.49)

Substitution of (13.49) into (13.48) results in

G(ω) = (1/√(1 + (ωτ)²)) e^{−j·tan⁻¹(ωτ)} (13.50)

In other words, the magnitude and phase of G(ω) are

|G(ω)| = 1/√(1 + (ωτ)²), ∠G(ω) = −tan⁻¹(ωτ) (13.51)

Figure 13.12 illustrates the magnitude and phase of G(ω) derived in (13.51). The magnitude |G(ω)| ≈ 1 when ω ≈ 0, and it gradually decreases as ω increases. |G(ω)|, however, starts to drop significantly when ω ≈ 1/τ. When ω > 1/τ, the magnitude |G(ω)| continues to drop and eventually approaches zero as ω → ∞.

The behavior shown in Fig. 13.12 is a typical response of low-pass filters. Basically, the low-frequency portion of the input excitation is retained, because |G(ω)| ≈ 1. In contrast, the high-frequency portion of the input excitation is attenuated, because |G(ω)| ≈ 0. Hence, the low-frequency portion of the input is "passed" to the output, and the high-frequency portion is "filtered out." One can see that the time constant τ plays a very critical role, because

ωc ≡ 1/τ (13.52)

marks a critical point that delineates the frequency beyond which the input excitation is filtered out. The frequency ωc is called a cutoff frequency or corner frequency. The frequency range ω < ωc is known as the bandwidth, defining the frequency band in which the input excitation is retained.
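A standard fact worth verifying here (my addition; the text defines ωc but not the magnitude level there) is that at ω = ωc the low-pass magnitude (13.51) equals 1/√2, the familiar −3 dB point, with a phase of −45°:

```python
import numpy as np

# Low-pass G(w) = 1/(1 + j*w*tau) from (13.48), with an arbitrary tau.
tau = 0.5
G = lambda w: 1.0 / (1.0 + 1j * w * tau)
wc = 1.0 / tau                           # cutoff frequency (13.52)

mag_c = abs(G(wc))                       # 1/sqrt(2) ~ 0.707, i.e., -3 dB
phase_c = np.degrees(np.angle(G(wc)))    # -45 degrees
print(mag_c, phase_c)
```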

The phase information ∠G(ω) in Fig. 13.12 is not as exciting. The phase angle ∠G(ω) drops linearly with ω when ω ≪ 1/τ. The phase angle ∠G(ω) reaches −45° when ω = 1/τ. When ω ≫ 1/τ, the phase angle ∠G(ω) approaches −90°.

Now, let us turn to the following first-order system governed by

τ dy/dt + y = τ u̇(t) (13.53)

By assuming u(t) = U(s)e^{st} and y(t) = Y(s)e^{st} in (13.53), one obtains the transfer function

H(s) = τs/(τs + 1) (13.54)
The frequency response function is then

G(ω) = H(jω) = jωτ/(1 + jωτ) (13.55)

To find the magnitude and phase of G(ω), one can substitute (13.49) into (13.55) to obtain

G(ω) = (ωτ/√(1 + (ωτ)²)) e^{j[π/2 − tan⁻¹(ωτ)]} (13.56)

In other words, the magnitude and phase of G(ω) are

|G(ω)| = ωτ/√(1 + (ωτ)²), ∠G(ω) = π/2 − tan⁻¹(ωτ) (13.57)

Figure 13.13: Plots of magnitude and phase of a high-pass filter

Figure 13.13 illustrates the magnitude and phase of G(ω) derived in (13.57). The magnitude |G(ω)| ≈ ωτ when ω ≈ 0, and it increases linearly as ω increases. |G(ω)|, however, starts to plateau when ω ≈ 1/τ. When ω > 1/τ, the magnitude |G(ω)| asymptotically approaches 1 as ω → ∞.

The behavior shown in Fig. 13.13 is a typical response of high-pass filters. Basically, the high-frequency portion of the input excitation is retained, because |G(ω)| ≈ 1. In contrast, the low-frequency portion of the input excitation is attenuated, because |G(ω)| ≈ 0. Hence, the high-frequency portion of the input is "passed" to the output, while the low-frequency portion is "filtered out." Again, the time constant τ plays a very critical role, because the cutoff frequency ωc defined in (13.52) delineates the frequency below which the input excitation is filtered out. In this case, the frequency range ω > ωc is known as the bandwidth.

The phase information ∠G(ω) in Fig. 13.13 is simply a 90° shift of the phase angle shown in Fig. 13.12.
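The mirrored behavior of the high-pass magnitude (13.57) can be checked the same way. This is again my own sketch with an arbitrary τ: at ω = ωc the magnitude is also 1/√2, the phase is +45°, and the magnitude approaches 1 at high frequency.

```python
import numpy as np

# High-pass G(w) = j*w*tau/(1 + j*w*tau) from (13.55).
tau = 0.5
G = lambda w: (1j * w * tau) / (1.0 + 1j * w * tau)
wc = 1.0 / tau

mag_c = abs(G(wc))                       # 1/sqrt(2), the same -3 dB point
phase_c = np.degrees(np.angle(G(wc)))    # +45 degrees
mag_hi = abs(G(1e6))                     # approaches 1 at high frequency
print(mag_c, phase_c, mag_hi)
```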

Example 13.3 Figure 13.14 shows an experimental device to test ride comfort of a sus-
pension system. The suspension system consists of a spring with stiffness k and a viscous
damper with damping coefficient b in a series connection. The lower end of the suspension
system is riding on a wavy road, while the upper end is connected to a block mass that is
constrained to move on a horizontal line. The suspension system moves horizontally with a

Figure 13.14: An experimental device to test ride comfort

constant velocity v. Let x(t) be the horizontal position of the suspension system and y(t) be
the vertical displacement of point A where the spring and damper are connected. Moreover,
the stiffness and damping of the suspension are k = 20 kN/m and b = 1 kN·s/m. The
waviness of the road is given as

R(t) = 0.1 cos 10x = 0.1 cos 10vt (13.58)

where the unit is meters. Answer the following two questions.

1. At what velocity v will the displacement y(t) at point A have an amplitude of 0.05 m?
What is the corresponding frequency?

2. What would be the force transmitted to the mass?

Figure 13.15: Analysis of road waviness
Figure 13.16: Free-body diagram of the suspension

First of all, let us derive the equation of motion. Figure 13.15 shows the vertical position
of the suspension’s lower end in contact with the road surface. Since the suspension moves

horizontally, the vertical position of the lower end is a function of time t. Moreover, it can
generally be described via a function R(t) as

R(t) = d cos px = d cos pvt = d cos ωt, ω ≡ pv (13.59)

When d = 0.1 and p = 10, (13.59) reduces to the condition described in (13.58).

Figure 13.16 shows the free-body diagram at point A. Let us assume that R(t) is positive
(i.e., upward) at a particular instant. Moreover, let us also assume that the displacement
y(t) is positive and y(t) > R(t) at the same instant. Therefore, the spring will be in tension
providing a downward force k(y − R) to point A. Since y(t) is positive upward and the mass
only moves horizontally, the damper is compressed providing a downward force bẏ to point
A. Note that there is no mass located at point A. Therefore, the two forces k(y − R) and
bẏ must be in equilibrium, i.e.,
k(y − R) + bẏ = 0 (13.60)
Substitution of (13.59) into (13.60) results in the governing equation

bẏ + ky = kd cos ωt (13.61)

or

τẏ + y = d cos ωt (13.62)

where

τ = b/k (13.63)

Note that the equation of motion described in (13.62) takes the standard form of a low-pass filter described in (13.46). According to (13.51),

|G(ω)| = 1/√(1 + (ωτ)²) (13.64)

For this G(ω), the magnification ratio between the output and input is

|G(ω)| = Y/d (13.65)
where Y is the amplitude of displacement y(t) and d is the amplitude of road excitation
R(t). When the displacement y(t) has an amplitude Y = 0.05 m with an input amplitude

Figure 13.17: Frequency response function of the suspension system

d = 0.1 m, that corresponds to a frequency satisfying

1/√(1 + (ωτ)²) = 0.05/0.1 = 0.5 (13.66)

Solution of (13.66) leads to

ω = √3/τ (13.67)
Figure 13.17 shows the magnitude of the frequency response function G(ω). One quickly concludes that y(t) will have an amplitude larger than 0.05 m when ω < √3/τ, because G(ω) behaves like a low-pass filter. Once the driving frequency ω exceeds √3/τ (i.e., the velocity v exceeds √3/(pτ)), the amplitude of y(t) will fall below 0.05 m.

Finally, let us put in the numbers to finish the first question.

τ = b/k = 1/20 s (13.68)

ω = √3/τ = 20√3 rad/s (13.69)

where the default unit for ω is always rad/s. According to (13.59),

v = ω/p = 20√3/10 = 2√3 m/s (13.70)
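The numbers above can be reproduced directly; the following is a plain numeric restatement of (13.68) through (13.70), with the half-amplitude condition (13.64) checked along the way:

```python
import numpy as np

# k = 20 kN/m, b = 1 kN*s/m, road wavenumber p = 10, road amplitude d = 0.1 m
k, b, p, d = 20e3, 1e3, 10.0, 0.1
tau = b / k                               # 0.05 s, per (13.63)
w = np.sqrt(3.0) / tau                    # 20*sqrt(3) rad/s, per (13.69)
v = w / p                                 # 2*sqrt(3) m/s, per (13.70)
Y = d / np.sqrt(1.0 + (w * tau)**2)       # amplitude of y(t), per (13.64)
print(tau, w, v, Y)                       # Y comes out to 0.05 m
```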

As for the force transmitted to the mass, the input remains the same excitation R(t), but the output is changed to the transmitted force. We can use block diagrams to find the transmitted force and its frequency response function as follows.

The force transmitted is the damping force given by

f (t) = bẏ (13.71)

Therefore, one can use the block diagrams in Fig. 13.18 to find the transmitted force f(t). In Fig. 13.18, the input excitation d cos ωt first goes through the low-pass filter (denoted by its transfer function H(s)) to get the displacement y(t). Then the operation b·d/dt described in (13.71) converts y(t) into the transmitted force f(t). Therefore, the transfer function Hf(s) from R(t) to f(t) is

Hf(s) = bs · H(s) = bs/(τs + 1) (13.72)

The corresponding frequency response function is

Gf(jω) = Hf(jω) = jbω/(1 + jτω) (13.73)

Figure 13.18: Block diagrams to find transmitted force
Figure 13.19: Frequency response function of the transmitted force
For the sinusoidal input R(t) = d cos ωt, the corresponding transmitted force is f(t) = F cos(ωt + φ), where F and φ are both functions of ω. Moreover, the ratio F/d is the magnitude of Gf(jω), i.e.,

F/d = |Gf(jω)| = bω/√(1 + (ωτ)²) (13.74)
Figure 13.19 shows |Gf(jω)| as a function of ω. The response indicates that the system is a high-pass filter. When ω > 1/τ, a lot of force is transmitted, and it saturates at

lim_{ω→∞} |Gf(jω)| = b/τ = k (13.75)

where (13.63) is used. Equation (13.75) shows that the force transmitted to the mass is ultimately controlled by the stiffness of the spring, not by the damping.
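The saturation value in (13.75) is easy to confirm numerically. This is my own check, reusing the numbers of this example:

```python
import numpy as np

# |Gf(jw)| = b*w / sqrt(1 + (w*tau)^2) from (13.74) saturates at b/tau = k.
k, b = 20e3, 1e3
tau = b / k
Gf_mag = lambda w: b * w / np.sqrt(1.0 + (w * tau)**2)
print(Gf_mag(1e6), b / tau)   # both close to k = 20000 N/m
```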

One might ask what role the damping plays in this example. The main use of damping is to control the cutoff frequency through the time constant τ. When b increases, the time constant τ increases and the cutoff frequency ωc decreases. Another thing I try to convey in this example is what I call the first law of engineering: there is no free lunch. It is almost impossible to control every performance parameter the way we want. For example, we cannot ask for a smooth ride (i.e., small vibration) and extreme ride comfort (i.e., a small transmitted force) simultaneously using this simple design. For this simple model, a small displacement at point A accompanies a large force transmitted to the mass, and vice versa.

13.5 Second-Order Systems

As shown in Chapter 3, a second-order system can be underdamped, critically damped, or overdamped. An overdamped system has two real roots and behaves simply like two first-order systems cascaded together. Therefore, I will only focus on an underdamped second-order system in this section. In particular, I want to use the spring-mass-damper system shown in Fig. 13.20 as an example. The results derived below will be valid for other second-order systems as well if the coefficients m, c, and k are replaced by appropriate coefficients.

The governing equation for the spring-mass-damper system is

m d²y/dt² + c dy/dt + ky = u(t) (13.76)

or, equivalently,

d²y/dt² + 2ζωn dy/dt + ωn²y = (1/m)u(t) (13.77)

where ζ is the viscous damping factor and ωn is the natural frequency given by

ωn ≡ √(k/m), ζ = c/(2mωn) (13.78)

Figure 13.20: A representative second-order system
Figure 13.21: Real and imaginary part of denominator D in (13.80)

By assuming u(t) = U(s)e^{st} and y(t) = Y(s)e^{st} in (13.76), one obtains the transfer function H(s) as

H(s) = Y(s)/U(s) = (1/m) · 1/(s² + 2ζωns + ωn²) (13.79)

The corresponding frequency response function G(ω) is

G(ω) = H(jω) = (1/m) · 1/[(ωn² − ω²) + 2jζωnω] = (1/k) · 1/{[1 − (ω/ωn)²] + 2jζ(ω/ωn)} (13.80)

The easiest way to derive the magnitude and phase of G(ω) in (13.80) is to find the polar form of its denominator

D ≡ [1 − (ω/ωn)²] + 2jζ(ω/ωn) = |D| e^{j∠D} (13.81)

Figure 13.21 shows its real and imaginary part on the complex plane, where the real part is the base of the triangle and the imaginary part is the height of the triangle. Hence the magnitude of D is the hypotenuse of the triangle given as

|D| = √{[1 − (ω/ωn)²]² + 4ζ²(ω/ωn)²} (13.82)

and the phase angle of D is

∠D = tan⁻¹{2ζ(ω/ωn) / [1 − (ω/ωn)²]} (13.83)

Substitution of (13.81) back into (13.80) gives

G(ω) = |G(ω)| e^{j∠G(ω)} (13.84)

with the magnitude |G(ω)| and phase ∠G(ω) being

|G(ω)| = (1/k) · 1/|D| = (1/k) · 1/√{[1 − (ω/ωn)²]² + 4ζ²(ω/ωn)²} (13.85)

and

∠G(ω) = −∠D = −tan⁻¹{2ζ(ω/ωn) / [1 − (ω/ωn)²]} (13.86)

Figure 13.22: Magnitude of frequency response function G(ω)
Figure 13.23: Resonant response when ζ = 0 and ω = ωn

Figure 13.22 shows the magnitude |G(ω)| as a function of the driving frequency ω. The magnitude is characterized by three very distinct regions: a peak region, a flat region, and a roll-off region. They are explained in detail as follows.

The peak region of |G(ω)| is called resonance. When damping is present (i.e., ζ ≠ 0), the magnitude |G(ω)| peaks around ω = ωn. To be precise, |G(ω)| reaches its maximum when ω = ωn√(1 − 2ζ²). This phenomenon of reaching a very large amplitude near the natural frequency is known as resonance. For many lightly damped systems (e.g., a metal bar), ζ ≪ 1; therefore, ωn√(1 − 2ζ²) ≈ ωn. Hence, one can literally consider that the maximal amplification occurs at ω = ωn for many practical applications. Moreover, the resonance amplitude can be estimated via (13.85) by replacing ω by ωn as

|G(ωn)| = (1/k) · (1/2ζ) (13.87)

According to (13.87), the resonance amplitude |G(ωn)| increases as damping decreases. Therefore, the resonance amplitude can become quite significant when the system has low damping. The resonance region is significantly controlled by the viscous damping factor ζ.
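These resonance formulas can be verified by scanning (13.85) on a fine frequency grid. The parameters below are my own illustrative choices:

```python
import numpy as np

# Scan |G(w)| from (13.85) and compare the numeric peak with the
# closed-form peak location wn*sqrt(1 - 2*zeta^2) and the estimate
# 1/(2*k*zeta) from (13.87).
k, wn, zeta = 1.0, 10.0, 0.05
w = np.linspace(0.01, 30.0, 300001)
r = w / wn
mag = (1.0 / k) / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)

w_peak = w[np.argmax(mag)]
print(w_peak, wn * np.sqrt(1.0 - 2.0 * zeta**2))  # peak near 9.975 rad/s
print(mag.max(), 1.0 / (2.0 * k * zeta))          # ~10, per (13.87)
```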

Resonance may or may not be desirable, depending on the application. In some applications (e.g., a resonator or AM radio channels), a large response is desired. In that case, resonance is the best way to achieve it. In other applications, resonance could cause unwanted vibration and result in failure of rotating machines, for example.

Theoretically speaking, the resonance amplitude approaches infinity as damping vanishes. When ζ = 0 and ω = ωₙ, the response of the system is governed by

$$\frac{d^2y}{dt^2} + \omega_n^2 y = \frac{A}{m}\cos\omega_n t \tag{13.88}$$

if u(t) = A cos ωₙt. The particular solution of (13.88) is

$$y_p(t) = \frac{A}{2m\omega_n}\, t \sin\omega_n t \tag{13.89}$$

which is plotted in Fig. 13.23. This, of course, does not appear in any real application for two reasons. First, every system has some damping, no matter how small. Second, the oscillation becomes nonlinear when the amplitude grows large enough, and the nonlinearity of the system automatically limits the response amplitude.
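One can verify (13.89) without solving the ODE: substitute the candidate yₚ(t) into (13.88) and check the residual numerically. The sketch below uses arbitrary parameter values chosen only for the check.

```python
import math

A, m, wn = 2.0, 1.5, 3.0        # arbitrary values, not from the text

def yp(t):
    # Candidate particular solution (13.89); amplitude grows linearly in t
    return A / (2 * m * wn) * t * math.sin(wn * t)

h = 1e-4
residuals = []
for t in (0.3, 1.7, 5.2):
    ypp = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2   # central-difference y''
    residuals.append(ypp + wn**2 * yp(t) - (A / m) * math.cos(wn * t))
```

The residual of (13.88) stays at the level of the finite-difference error at every sampled time, confirming that yₚ is a particular solution.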

The flat region occurs when ω ≪ ωₙ or ω → 0. In this case, ω/ωₙ ≪ 1 and (13.85) becomes

$$|G(\omega)| \approx \frac{1}{k} \tag{13.90}$$
13.5. SECOND-ORDER SYSTEMS 455

This means that the system behaves simply like a spring and its response is nothing but
static deformation. Inertia m plays no role in this region.

The roll-off region occurs when ω ≫ ωₙ. In this case, ω/ωₙ ≫ 1 and (13.85) becomes

$$|G(\omega)| \to 0 \tag{13.91}$$

This is, however, not a very useful expression because it does not reveal how fast |G(ω)| rolls off, so a more elaborate analysis is needed. Let us investigate the denominator of |G(ω)| in (13.85) more carefully:

$$1-\left(\frac{\omega}{\omega_n}\right)^2 \approx -\left(\frac{\omega}{\omega_n}\right)^2, \qquad \left[1-\left(\frac{\omega}{\omega_n}\right)^2\right]^2 \approx \left(\frac{\omega}{\omega_n}\right)^4 \tag{13.92}$$

Moreover,

$$\left(\frac{\omega}{\omega_n}\right)^4 \gg 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2 \tag{13.93}$$

Therefore,

$$\sqrt{\left[1-\left(\frac{\omega}{\omega_n}\right)^2\right]^2 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2} \approx \left(\frac{\omega}{\omega_n}\right)^2 \tag{13.94}$$

and

$$|G(\omega)| \approx \frac{1}{k}\cdot\left(\frac{\omega}{\omega_n}\right)^{-2} = \frac{1}{m\omega^2} \tag{13.95}$$

In other words, the magnitude |G(ω)| rolls off at the rate of 1/ω² when ω ≫ ωₙ. Also, the roll-off region is dominated by the mass effect.

Figure 13.24 shows the phase angle ∠G(ω) as a function of the driving frequency. When ζ = 0, the phase lag can only be 0 or −π, and a discontinuous transition occurs at ω = ωₙ. When ζ ≠ 0, the transition of ∠G(ω) from 0 to −π becomes continuous, although it can be quite steep near ω = ωₙ when ζ is small. As ζ increases, the transition becomes less abrupt and more gradual.

One important feature to note is that the phase angle ∠G(ω) always takes the value of −π/2 at ω = ωₙ, no matter what ζ is. This provides a useful alternative for identifying ωₙ in experiments.

Figure 13.24: Phase angle of frequency response function G(ω)

Figure 13.25: A more realistic model for a car on a bumpy road

In many practical applications, vibration tests are conducted to identify the natural frequency ωₙ of a structure. Of course, the easiest way to measure ωₙ is to use resonance, i.e., to drive the system at various frequencies and see which frequency generates the maximal displacement; that frequency is ωₙ. This approach is often very effective when damping is small, because the resonance amplitude is significant. For many systems with large damping (e.g., bio-systems with soft tissue, hydrodynamic bearings, or systems with friction), the resonance amplitude can be small and difficult to identify experimentally. In this case, one can instead look for the sharp phase transition through −π/2 to identify the natural frequency ωₙ.

Example 13.4 Figure 13.25 shows a more realistic model to analyze the response of a car moving on a bumpy road. The model consists of a car body and a suspension system. The car body is rigid with a mass m. The suspension system consists of a spring with stiffness k and a viscous damper with damping coefficient b in a parallel connection. The lower end of the suspension system rides on a wavy road, while the upper end is connected to the car body. The car moves horizontally with a constant velocity v. Let x(t) be the horizontal position of the car and y(t) be the vertical displacement of the car body with respect to its equilibrium position. The gravitational acceleration g also needs to be considered.

Physical parameters of the car and the road conditions are given as follows. The stiffness and damping of the suspension are k = 16 kN/m and b = 0.8 kN·s/m. The mass of the car body is m = 1000 kg. The waviness of the road is given by

$$R(t) = 0.1\sin 10x = 0.1\sin 10vt \tag{13.96}$$

where the units are in meters. Answer the following two questions.

1. What is the response of the car, i.e., y(t), in general?

2. When will the displacement y(t) be less than 0.05 m?

Figure 13.26: Analysis of road waviness

Figure 13.27: Free-body diagram of the car mass

First of all, let us derive the equation of motion. Figure 13.26 shows the vertical position
of the suspension’s lower end in contact with the road surface. Since the car and thus the
suspension move horizontally, the vertical position of the lower end is a function of time t.
Moreover, it can generally be described via a function R(t) as

R(t) = d sin px = d sin pvt = d sin ωt, ω ≡ pv (13.97)



When d = 0.1 and p = 10, (13.97) reduces to the condition described in (13.96).

Figure 13.27 shows the free-body diagram of the car mass. Let us assume that R(t) is positive (i.e., upward) at time t. Also, let z(t) be the absolute displacement of the car mass from the centerline of the wavy road surface. For the instant t, let us also assume that the absolute displacement z(t) is positive and z(t) > R(t). Therefore, the spring will be in tension, providing a downward force k(z − R) acting on the mass. At the same time, the damper will be stretched, providing a downward force $b\left(\dot z - \dot R\right)$ on the car mass. Also, the weight mg acts in the downward direction. The resultant of all these forces accelerates the mass following Newton's second law, i.e.,

$$-mg - k(z - R) - b\left(\dot z - \dot R\right) = m\ddot z \tag{13.98}$$

or

$$m\ddot z + b\dot z + kz = kR(t) + b\dot R(t) - mg \tag{13.99}$$

In (13.99), there are two input sources: one is R(t) and the other is mg. This arrangement is not preferable; therefore, we want to remove mg to reduce the number of input variables.

To do so, let us consider the case when the car is not moving (R = Ṙ = 0) and the car is in equilibrium (ż = z̈ = 0). In this case, z = z₀, where z₀ is a constant. Equation (13.99) is then reduced to

$$kz_0 = -mg \quad\Longrightarrow\quad z_0 = -\frac{mg}{k} \tag{13.100}$$

Therefore, one can define a displacement y(t) relative to the equilibrium position as

$$z(t) = -\frac{mg}{k} + y(t) \tag{13.101}$$

Substitution of (13.101) into (13.99) results in the equation of motion

$$m\ddot y + b\dot y + ky = kR(t) + b\dot R(t) \tag{13.102}$$

where (13.100) has been used to eliminate mg in (13.99). Now (13.102) is the preferred equation of motion because it only involves one input variable, R(t).

Before a formal analysis is laid out, let us first calculate the natural frequency ωₙ and viscous damping factor ζ:

$$\omega_n = \sqrt{\frac{k}{m}} = \sqrt{\frac{16\times 10^3}{1000}} = 4 \text{ rad/s} = 0.637 \text{ Hz} \tag{13.103}$$

and

$$\zeta = \frac{b}{2m\omega_n} = \frac{800}{2(1000)(4)} = 0.1 \tag{13.104}$$

Note that ζ < 1, and the car model is underdamped.

Since R(t) is sinusoidal, we can use the concept of frequency response function to find y(t). To do so, let us assume R(t) = ρ(s)eˢᵗ and y(t) = Y(s)eˢᵗ in (13.102) to obtain the transfer function H(s) from R(t) to y(t) as

$$H(s) = \frac{Y(s)}{\rho(s)} = \frac{bs + k}{ms^2 + bs + k} = \frac{2\zeta\omega_n s + \omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{13.105}$$

The corresponding frequency response function G(ω) is then

$$G(\omega) = H(j\omega) = \frac{\omega_n^2 + j\,(2\zeta\omega_n\omega)}{(\omega_n^2 - \omega^2) + j\,(2\zeta\omega_n\omega)} \tag{13.106}$$

The magnitude of G(ω) is

$$|G(\omega)| = \frac{\sqrt{\omega_n^4 + (2\zeta\omega_n\omega)^2}}{\sqrt{(\omega_n^2-\omega^2)^2 + (2\zeta\omega_n\omega)^2}} = \sqrt{\frac{1 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2}{\left[1-\left(\frac{\omega}{\omega_n}\right)^2\right]^2 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2}} \tag{13.107}$$

Figure 13.28 shows the magnitude |G(ω)| derived in (13.107). Again, there are three
very distinct regions in |G(ω)|: a resonance region, a flat region, and a roll-off region.

The resonance region occurs as ω → ωₙ, resulting in a peak. Moreover, the height of the resonance peak is roughly

$$|G(\omega_n)| = \sqrt{\frac{1+4\zeta^2}{4\zeta^2}} = \sqrt{1 + \frac{1}{4\zeta^2}} \approx \frac{1}{2\zeta} \tag{13.108}$$

According to (13.108), the resonance amplitude |G(ωₙ)| increases as damping decreases. With (13.97) and (13.103), the car speed causing the resonance is

$$v = \frac{\omega_n}{p} = \frac{4}{10} = 0.4 \text{ m/s} \tag{13.109}$$

Figure 13.28: Frequency response function |G(ω)| of the more realistic car model

The flat region occurs when ω ≪ ωₙ or ω → 0. In this case, ω/ωₙ ≪ 1 and (13.107) becomes

$$|G(\omega)| \approx 1 \tag{13.110}$$

This means that the road waviness R(t) is almost 100% transmitted to the response y(t). The car basically follows the contour of the wavy road surface when the car speed is low.

The roll-off region occurs when ω ≫ ωₙ or ω/ωₙ ≫ 1. In this case, the numerator of (13.107) becomes

$$N(\omega) \equiv 1 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2 \approx 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2 \tag{13.111}$$

The denominator of |G(ω)| in (13.107) becomes

$$D(\omega) \equiv \left[1-\left(\frac{\omega}{\omega_n}\right)^2\right]^2 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2 \approx \left(\frac{\omega}{\omega_n}\right)^4 \tag{13.112}$$

Therefore,

$$|G(\omega)| \approx \frac{2\zeta\omega_n}{\omega} \tag{13.113}$$

In other words, the magnitude |G(ω)| rolls off at the rate of 1/ω when ω ≫ ωₙ. When the speed of the car is high, the car only oscillates with a small response.

Table 13.1: Trial-and-error iterations

ω/ωₙ    |G(ω)|
1.5     0.812
1.7     0.550
1.8     0.468

To answer the second question (i.e., when the amplitude of the displacement y(t) will be less than 0.05 m), one can start by calculating the output-to-input ratio. The input magnitude d of R(t) is 0.1 m. The output magnitude is d·|G(ω)| and it must be less than 0.05 m. Therefore, y(t) being less than 0.05 m occurs when

$$|G(\omega)| < \frac{0.05}{0.1} = 0.5 \tag{13.114}$$

Substitution of (13.107) into (13.114) results in

$$|G(\omega)| = \sqrt{\frac{1 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2}{\left[1-\left(\frac{\omega}{\omega_n}\right)^2\right]^2 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2}} < 0.5 \tag{13.115}$$

Recalling that ζ = 0.1 from (13.104), one can further reduce the condition (13.115) to

$$|G(\omega)| = \sqrt{\frac{1 + 0.04\left(\frac{\omega}{\omega_n}\right)^2}{\left[1-\left(\frac{\omega}{\omega_n}\right)^2\right]^2 + 0.04\left(\frac{\omega}{\omega_n}\right)^2}} < 0.5 \tag{13.116}$$

Solution of (13.116) may seem formidable, but there are two clues to help. One is to use Fig. 13.28 to narrow down the solution. Since |G(ω)| < 1, the frequency range satisfying (13.116) must be in the roll-off region; therefore, the eligible frequency must exceed the natural frequency. The second clue is to use a trial-and-error approach like the one in Example 3.5. Table 13.1 tabulates |G(ω)| with respect to ω/ωₙ. Note that the trial-and-error starts in the roll-off region where ω/ωₙ > 1. When ω/ωₙ increases, |G(ω)| decreases. According to Table 13.1, |G(ω)| = 0.5 when 1.7 < ω/ωₙ < 1.8. A further iteration gives

$$\frac{\omega}{\omega_n} \approx 1.761 \tag{13.117}$$
ωn

Figure 13.29: Transmissibility of an isolation table

Therefore, displacement y(t) with amplitude less than 0.05 m occurs when

$$\frac{\omega}{\omega_n} = \frac{pv}{\omega_n} = \frac{10v}{4} > 1.761 \tag{13.118}$$

or

$$v > 0.704 \text{ m/s} \tag{13.119}$$
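Instead of trial and error, the threshold crossing |G(ω)| = 0.5 can be bracketed and bisected. The sketch below is illustrative; it uses the values ζ = 0.1, ωₙ = 4 rad/s, and p = 10 from this example and lands inside the bracket 1.7 < ω/ωₙ < 1.8 found in Table 13.1.

```python
import math

zeta, wn, p = 0.1, 4.0, 10.0     # from (13.104), (13.103), and (13.96)

def G_mag(r):
    # |G| of (13.107) as a function of the frequency ratio r = w/wn
    num = 1 + 4 * zeta**2 * r**2
    den = (1 - r**2)**2 + 4 * zeta**2 * r**2
    return math.sqrt(num / den)

lo, hi = 1.5, 2.5                # |G| decreases with r on the roll-off branch
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if G_mag(mid) > 0.5:
        lo = mid
    else:
        hi = mid
r_star = 0.5 * (lo + hi)         # frequency ratio at which |G| = 0.5
v_min = wn * r_star / p          # minimal car speed, per (13.118)
```

Bisection gives ω/ωₙ ≈ 1.76 and a minimal speed of about 0.70 m/s, consistent with (13.117) and (13.119).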

13.6 Bode Plots

The magnitude and phase of a frequency response function are often plotted on log scales, and this type of representation is called a Bode plot. For the magnitude, Bode plots are usually in the form of log₁₀|G(ω)| vs. log₁₀ω. For the phase angle, Bode plots are usually in the form of ∠G(ω) vs. log₁₀ω. For example, Fig. 13.29 shows the transmissibility of an isolation table in the form of Bode plots.

In reading a Bode plot, one needs to know some commonly used terminology. An octave is a frequency ratio of 2:1; for example, 100 Hz is 2 octaves above 25 Hz. A decade is a frequency ratio of 10:1; for example, 100 Hz is two decades below 10,000 Hz.

Figure 13.30: Bode plot of a microphone

The decibel level of a magnitude A is defined as


 
$$Q = 20\log_{10}\left(\frac{A}{A_{\text{ref}}}\right) \tag{13.120}$$

where A_ref is a reference level. The decibel level is usually denoted dB. A good example of decibel levels is hearing and noise. Noise level is often measured via sound pressure; normal human ears can sense sound when the sound pressure is around 20 µPa, so A_ref = 20 × 10⁻⁶ Pa. If a noise has a sound pressure of 1 Pa, its decibel level is then Q = 20 log₁₀[1/(20 × 10⁻⁶)] ≈ 94 dB. For Bode plots of a frequency response function, A = |G(ω)| and A_ref = 1.

There are many advantages to adopting Bode plots. First, the log scales allow Bode plots to cover a very large frequency and amplitude range. Second, the log scales make some features easier to see; for example, resonance peaks are much easier to spot in Bode plots. Third, the Bode plot of a complex system can be constructed in a systematic way. This was a very attractive feature of Bode plots 50 years ago, when computers were not so powerful. The way to do it is to construct Bode plots of first-order systems (e.g., low-pass or high-pass filters) and second-order systems (e.g., spring-mass-damper systems) as building blocks; the Bode plot of a complex system can then be synthesized from these building blocks. All this is possible because of the log scales, for they convert multiplication into addition. I will explain this more in Section 13.6.2.

Bode plots are often used to describe the performance of equipment. Figure 13.30 shows a Bode plot of a microphone's sensitivity (i.e., how much voltage the microphone outputs per unit pressure in Pa). As one can see, there is a low-frequency ramp below 30 Hz and a flat region from 30 Hz to about 2000 Hz. In this flat range, the microphone generates −46 dB (V/Pa). Beyond the flat region, there is a resonance around 12,000 Hz.

The working range of this microphone is the flat region from 30 Hz to 2000 Hz, for a reason that I will explain in great detail in Chapter 14. The short version is that a constant-gain region often implies proportional amplification without distortion. The frequency range in which a piece of equipment works is usually called its bandwidth. The end points of the frequency band are often defined by 3-dB points, where |G(ω)| decreases or increases from the constant gain by a factor of √2. According to (13.120), a factor of √2 in the gain corresponds to a dB level of

$$Q = 20\log_{10}\sqrt{2} = 10\log_{10} 2 \approx 3 \text{ dB} \tag{13.121}$$

So for this microphone, the bandwidth is the frequency range in which the gain stays between −49 dB and −43 dB. According to Fig. 13.30, the bandwidth is roughly from 15 Hz to 7,000 Hz.
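The decibel arithmetic in this section is short enough to check directly; the helper below is an illustrative sketch of (13.120).

```python
import math

def db(a, a_ref=1.0):
    # Decibel level per (13.120)
    return 20 * math.log10(a / a_ref)

q_noise = db(1.0, 20e-6)     # 1 Pa relative to the 20 uPa hearing threshold
q_root2 = db(math.sqrt(2))   # a factor of sqrt(2) in gain
q_decade = db(1.0 / 10.0)    # one decade down at -20 dB/decade
```

The three values reproduce the numbers used in the text: roughly 94 dB for a 1 Pa noise, 3 dB for a √2 change in gain, and −20 dB for one decade of a first-order roll-off.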

13.6.1 Commonly Used Bode Plots

Figure 13.31 lists four commonly used Bode plots to build frequency response functions of a complex system. The left column lists the transfer function H(s) from which the Bode plots are derived. The center column lists the magnitude |G(ω)| of the corresponding Bode plot in dB. The right column lists the phase ∠G(ω) of the corresponding Bode plot.

There are four basic elements: a constant gain, an integrator 1/s, a first-order low-pass filter, and a second-order underdamped system. Note that many other systems can easily be derived from these four elements. For example, the high-pass filter in (13.54) is a first-order low-pass filter multiplied by the reciprocal of an integrator. An overdamped second-order system is the product of two first-order low-pass filters. Now, let me go through each of the four basic elements one by one.

Figure 13.31: Commonly used Bode plots



Constant Gain

For the constant gain in Fig. 13.31,

$$H(s) = k, \qquad G(\omega) = k \tag{13.122}$$

Therefore,

$$|G(\omega)| = |k|, \qquad \angle G(\omega) = \begin{cases} 0^\circ, & k > 0 \\ 180^\circ, & k < 0 \end{cases} \tag{13.123}$$

For the case of k > 0, the Bode magnitude plot in Fig. 13.31 shows a dB value of 20 log₁₀ k and the Bode phase plot in Fig. 13.31 is 0°.

Integrator

For the integrator in Fig. 13.31,

$$H(s) = \frac{1}{s}, \qquad G(\omega) = \frac{1}{j\omega} \tag{13.124}$$

Therefore,

$$|G(\omega)| = \frac{1}{\omega}, \qquad \angle G(\omega) = -90^\circ \tag{13.125}$$

Since a Bode plot is made with respect to frequency on a log scale, it is beneficial to use a log axis

$$x \equiv \log_{10}\omega \tag{13.126}$$

When ω is advanced by one decade (e.g., from 10 to 100), the corresponding x is increased by 1 (e.g., from 1 to 2). Therefore, the dB level from (13.125) can be expressed as

$$\text{dB} \equiv 20\log_{10}|G(\omega)| = 20\log_{10}\left(\frac{1}{\omega}\right) = -20\log_{10}\omega = -20\,x \tag{13.127}$$

Therefore, the magnitude Bode plot of an integrator is a straight line with a slope of −20 dB/decade.

The integrator in Fig. 13.31 can also be "flipped" to get the Bode plot of a zero at the origin, i.e.,

$$H(s) = s, \qquad G(\omega) = j\omega \tag{13.128}$$

Figure 13.32: Bode plot of a zero at the origin

Therefore,

$$|G(\omega)| = \omega, \qquad \angle G(\omega) = 90^\circ \tag{13.129}$$

and

$$\text{dB} \equiv 20\log_{10}|G(\omega)| = 20\log_{10}\omega = 20\,x \tag{13.130}$$

Therefore, the magnitude Bode plot of a zero at the origin is a straight line with a slope of 20 dB/decade. The Bode plot of a zero is the mirror image of the Bode plot of an integrator about 0 dB (for the magnitude plot) and 0° (for the phase plot); see Fig. 13.32.

Low-Pass Filter

The low-pass filter is characterized by a first-order pole and is defined by

$$H(s) = \frac{1}{\tau s + 1}, \qquad G(\omega) = \frac{1}{j\omega\tau + 1} \tag{13.131}$$

Therefore,

$$|G(\omega)| = \frac{1}{\sqrt{1 + (\omega\tau)^2}}, \qquad \angle G(\omega) = -\tan^{-1}(\omega\tau) \tag{13.132}$$

Figure 13.33: Asymptotic analysis of a low-pass filter

For the magnitude plot, let me conduct a simple asymptotic analysis as follows. When ωτ ≪ 1, (13.132) indicates that |G(ω)| ∼ 1 and dB ≡ 20 log₁₀|G(ω)| ∼ 0. When ωτ ≫ 1, (13.132) indicates that |G(ω)| ∼ (ωτ)⁻¹ and dB ≡ 20 log₁₀|G(ω)| ∼ −20 log₁₀(ωτ). The results of the asymptotic analysis are tabulated in Fig. 13.33. Based on the results in Fig. 13.33, one can construct the Bode magnitude plot in Fig. 13.34. The basic logic is to draw the two asymptotes. The asymptote for ωτ ≪ 1 is a horizontal line at 0 dB. The asymptote for ωτ ≫ 1 is a straight line with a slope of −20 dB/decade. Moreover, these two asymptotes intersect at ωτ = 1, where the corner frequency is located. Finally, the magnitude plot is a smooth curve next to the two asymptotes.

Figure 13.34: Bode magnitude plot of a low-pass filter

Figure 13.35: Bode phase plot of a low-pass filter

Figure 13.35 shows the Bode phase plot of the low-pass filter. Basically, ∠G(ω) ∼ 0 when ωτ ≪ 1 and ∠G(ω) ∼ −90° when ωτ ≫ 1, with a sharp transition between these two frequency regions. Therefore, the Bode phase plot in Fig. 13.35 has two asymptotes at 0 and −90°. The transition from 0 to −90° is approximated via a straight line, which starts and ends the transition at ωτ = 0.1 and ωτ = 10, respectively. In other words, the transition spans two decades centered at the corner frequency.
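A quick numeric check of the low-pass expressions in (13.132), as an illustrative sketch: the gain is about −3 dB at the corner ωτ = 1, the deep roll-off slope is −20 dB per decade, and the phase passes −45° at the corner.

```python
import math

def lp_db(wt):
    # Gain in dB of the first-order low-pass filter (13.132); wt = w * tau
    return 20 * math.log10(1.0 / math.sqrt(1 + wt**2))

def lp_phase_deg(wt):
    # Phase of (13.132) in degrees
    return -math.degrees(math.atan(wt))

corner_db = lp_db(1.0)                  # about -3 dB at the corner
slope = lp_db(1000.0) - lp_db(100.0)    # per-decade drop, deep in roll-off
corner_phase = lp_phase_deg(1.0)        # -45 degrees at the corner
```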

Again, one can flip the magnitude and phase plots to get the Bode plots for a first-order zero of the form H(s) = τs + 1. The details will not be discussed here because they are straightforward.

Second-Order Underdamped System

A second-order underdamped system is characterized by two complex-conjugate poles and is described by

$$H(s) = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, \qquad G(\omega) = \frac{\omega_n^2}{(\omega_n^2-\omega^2) + 2j\zeta\omega_n\omega} \tag{13.133}$$

Therefore,

$$|G(\omega)| = \frac{1}{\sqrt{\left[1-\left(\frac{\omega}{\omega_n}\right)^2\right]^2 + 4\zeta^2\left(\frac{\omega}{\omega_n}\right)^2}}, \qquad \angle G(\omega) = -\tan^{-1}\frac{2\zeta\left(\frac{\omega}{\omega_n}\right)}{1-\left(\frac{\omega}{\omega_n}\right)^2} \tag{13.134}$$

For the magnitude plot, let me conduct a simple asymptotic analysis as follows. When ω/ωₙ ≪ 1, (13.134) indicates that |G(ω)| ∼ 1 and dB ≡ 20 log₁₀|G(ω)| ∼ 0; this asymptote is a horizontal line at 0 dB. When ω/ωₙ ≫ 1, (13.134) indicates that |G(ω)| ∼ (ω/ωₙ)⁻² and dB ≡ 20 log₁₀|G(ω)| ∼ −40 log₁₀(ω/ωₙ); this asymptote is a straight line with a slope of −40 dB/decade. When ω/ωₙ ≈ 1, (13.134) indicates that |G(ω)| ∼ 1/(2ζ) and dB ≡ 20 log₁₀|G(ω)| ∼ 20 log₁₀[1/(2ζ)], representing a resonance peak. The results of the asymptotic analysis are tabulated in Fig. 13.36.

Based on the results in Fig. 13.36, one can construct the Bode magnitude plot in Fig. 13.37. The basic logic is to draw the two asymptotes. The asymptote for ω/ωₙ ≪ 1 is a horizontal line at 0 dB. The asymptote for ω/ωₙ ≫ 1 is a straight line with a slope of −40 dB/decade. These two asymptotes intersect at ω/ωₙ = 1, where the natural frequency is located. Next, we mark at ω/ωₙ = 1 the magnitude 20 log₁₀[1/(2ζ)] as the resonance peak.

Figure 13.36: Asymptotic analysis of a second-order underdamped system

Finally, the magnitude plot is a smooth curve next to the two asymptotes encompassing the
flat region, the resonance peak, and the roll-off region.

Figure 13.38 shows the Bode phase plot of the second-order underdamped system. Basically, ∠G(ω) ∼ 0 when ω/ωₙ ≪ 1 and ∠G(ω) ∼ −180° when ω/ωₙ ≫ 1, with a sharp transition between these two frequency regions. Therefore, the Bode phase plot in Fig. 13.38 has two asymptotes at 0 and −180°. The transition from 0 to −180° is approximated via a vertical line, which represents the case of ζ = 0. The phase plot is then a smooth curve next to these asymptotes and passes −90° at ω/ωₙ = 1.

Figure 13.37: Bode magnitude plot of a second-order underdamped system

Figure 13.38: Bode phase plot of a second-order underdamped system

13.6.2 Bode Plots for Complex Systems

For complex systems, the elements in Fig. 13.31 can be used as building blocks to construct Bode plots of the complex systems. The log scale makes it very easy to construct Bode plots of a complex system from its individual building blocks. As an illustrative example, let us consider a system with the following transfer function

$$H(s) = \frac{4(s+1)}{s\,(s^2 + 2s + 2)} \tag{13.135}$$

The first step is to represent H(s) in (13.135) in the form of the transfer functions shown in Fig. 13.31, i.e.,

$$H(s) = 2\cdot\frac{1}{s}\cdot(s+1)\cdot\frac{2}{s^2+2s+2} \tag{13.136}$$
There are a couple of things to watch out for in this step. First, one must stick to the mathematical expressions used in Fig. 13.31. For example, the second-order underdamped system in Fig. 13.31 has a numerator ωₙ² instead of 1. Therefore, one must have

$$\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \Longleftrightarrow \frac{2}{s^2+2s+2} \tag{13.137}$$

Second, reciprocals of the expressions shown in Fig. 13.31 are allowed, because the Bode plots of the reciprocals are simply flips of the Bode plots in Fig. 13.31 about 0 dB (for the magnitude plots) and about 0° (for the phase plots). Therefore, the term s + 1 in (13.136) is allowed, because it is the reciprocal of the first-order pole $\frac{1}{\tau s + 1}$ with τ = 1.

The second step is to construct the magnitude plot. With s = jω in mind, the dB level of (13.136) is expressed as

$$\text{dB} = 20\log_{10}|H(s)| = 20\log_{10} 2 + 20\log_{10}\left|\frac{1}{s}\right| + 20\log_{10}|s+1| + 20\log_{10}\left|\frac{2}{s^2+2s+2}\right| \tag{13.138}$$

Therefore, one can read the dB magnitude of each term in (13.138) from the magnitude plots in Fig. 13.31 and subsequently add them together to construct the Bode magnitude plot of H(s).

Figure 13.39: Flagship example revisited for Bode plots

The last step is to construct the phase plot. With s = jω in mind, the phase angle of (13.136) is expressed as

$$\angle H(s) = \angle 2 + \angle\left(\frac{1}{s}\right) + \angle(s+1) + \angle\left(\frac{2}{s^2+2s+2}\right) \tag{13.139}$$

Therefore, one can read the phase of each term in (13.139) from the phase plots in Fig. 13.31 and subsequently add them together to construct the Bode phase plot of H(s).

So the moral is the following. Bode plots of a complex system can be constructed by
adding Bode plots (both magnitude and phase) of individual building blocks constituting
the complex system.
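This additivity is easy to verify numerically. The sketch below evaluates H(s) of (13.135) directly at s = jω and compares it, in both dB and phase, against the sum over the building-block factors of (13.136).

```python
import math
import cmath

def H(s):
    # Direct evaluation of (13.135)
    return 4 * (s + 1) / (s * (s**2 + 2 * s + 2))

def blocks(s):
    # The four building-block factors of (13.136)
    return [2, 1 / s, s + 1, 2 / (s**2 + 2 * s + 2)]

max_db_err = max_ph_err = 0.0
for w in (0.01, 0.1, 1.0, 10.0, 100.0):
    s = 1j * w
    db_direct = 20 * math.log10(abs(H(s)))
    db_sum = sum(20 * math.log10(abs(b)) for b in blocks(s))
    ph_direct = cmath.phase(H(s))
    ph_sum = sum(cmath.phase(b) for b in blocks(s))
    max_db_err = max(max_db_err, abs(db_direct - db_sum))
    max_ph_err = max(max_ph_err, abs(ph_direct - ph_sum))
```

At these frequencies the summed phase stays within (−π, 0], so no 2π unwrapping is needed for the comparison.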

Example 13.5 Let us revisit the flagship example shown in Fig. 13.39. The transfer function from Fs(t) to x₂(t) is

$$H_{x_2}(s) = \frac{s + \frac{k}{B}}{ms^3 + \frac{mk}{B}s^2 + ks} \tag{13.140}$$

Construct the Bode plot of $H_{x_2}(s)$ for the case of B = 1000 N·s/m, m = 1 kg, and k = 100 N/m.

The first step is to break (13.140) into the building blocks available in Fig. 13.31 as

$$H_{x_2}(s) = \frac{1}{B}\cdot\left(\frac{B}{k}s + 1\right)\cdot\frac{1}{s}\cdot\frac{\frac{k}{m}}{s^2 + \frac{k}{B}s + \frac{k}{m}} \tag{13.141}$$

Furthermore, let

$$\tau \equiv \frac{B}{k} = \frac{1000}{100} = 10 \text{ s} \tag{13.142}$$

Figure 13.40: Bode plot of the flagship example

$$\omega_n = \sqrt{\frac{k}{m}} = \sqrt{\frac{100}{1}} = 10 \text{ rad/s} \tag{13.143}$$

and

$$\zeta = \frac{k/B}{2\omega_n} = \frac{100/1000}{2(10)} = 0.005 \tag{13.144}$$

Then (13.141) becomes

$$H_{x_2}(s) = \frac{1}{B}\cdot(\tau s + 1)\cdot\frac{1}{s}\cdot\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{13.145}$$

where every factor is in the form of a building block from Fig. 13.31.

where every factor is in the form of a building block from Fig. 13.31.

Figure 13.40 illustrates how the Bode magnitude plot is constructed. The first thing to do is to draw the abscissa on a log scale. Note that the log scale does not have a zero per se; the divisions run through 1, 0.1, 0.01, 0.001, and so on, so an advance or decrease by one order of magnitude represents a decade. The vertical axis has ordinates in 20-dB divisions. With this setup of abscissa and ordinate, one can easily draw straight lines with slopes of 20 dB/decade or 40 dB/decade.

Next, one should map out the magnitude of each factor in (13.145) from Fig. 13.31; see the dashed lines in Fig. 13.40. They are explained in detail as follows.

1. $\frac{1}{B}$ is a horizontal line at −60 dB.

2. $\frac{1}{s}$ is an integrator; its magnitude is a straight line with a slope of −20 dB/decade passing through ω = 1 at 0 dB.

3. (τs + 1) is a first-order zero with a corner frequency at 1/τ = 1/10 = 0.1 rad/s (cf. (13.142)). When ω ≪ 0.1, |τs + 1| is a horizontal line at 0 dB. When ω ≫ 0.1, |τs + 1| is a straight line with a slope of 20 dB/decade passing the corner at ω = 0.1 and 0 dB.

4. $\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$ is a second-order underdamped system. Its Bode plot has three regions: a flat region, a resonance region, and a roll-off region. When ω ≪ ωₙ = 10 rad/s (cf. (13.143)), it is a horizontal line at 0 dB. When ω ≫ ωₙ = 10 rad/s, it is a straight line with a slope of −40 dB/decade passing ω = 10 rad/s at 0 dB. When ω = ωₙ = 10 rad/s, it peaks at 20 log₁₀[1/(2ζ)] = 20 log₁₀[1/(2 × 0.005)] = 40 dB (cf. (13.144)).

Adding these individual Bode plots together yields the system's Bode plot, shown as the solid line in Fig. 13.40. One can see that there is a flat region extending from ω = 0.1 to ω = 10 rad/s; that is the major portion of the bandwidth of the flagship example.
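As a sanity check, the factored form (13.145) should agree with the original transfer function (13.140) at every frequency. The sketch below compares the two numerically at a few sample frequencies.

```python
import math

B, m, k = 1000.0, 1.0, 100.0
tau = B / k                        # 10 s, per (13.142)
wn = math.sqrt(k / m)              # 10 rad/s, per (13.143)
zeta = (k / B) / (2 * wn)          # 0.005, per (13.144)

def H_direct(s):
    # (13.140): H = (s + k/B) / (m s^3 + (m k / B) s^2 + k s)
    return (s + k / B) / (m * s**3 + (m * k / B) * s**2 + k * s)

def H_factored(s):
    # (13.145): gain, first-order zero, integrator, underdamped block
    return (1 / B) * (tau * s + 1) * (1 / s) * wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)

errs = []
for w in (0.01, 0.1, 1.0, 10.0, 100.0):
    s = 1j * w
    errs.append(abs(H_direct(s) - H_factored(s)) / abs(H_direct(s)))
```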
Chapter 14

Fourier Analysis

The concept of frequency response functions is very powerful, because it has mathematical
rigor and is experimentally measurable. The framework of frequency response functions,
however, hinges on a very important assumption, that is, the input excitation must be
sinusoidal. Therefore, it is important to figure out ways to expand the framework so that it
can accommodate other types of input excitations.

In this chapter, I will first explain the concept of Fourier series, so that frequency response functions can be used to accommodate periodic excitations. I will also explain the concept of frequency spectrums: a new and different perspective that allows us to analyze system dynamics in the frequency domain instead of the time domain. Then I will explain Fourier transforms so that frequency response functions can be used for non-periodic signals. These tools are collectively known as Fourier analysis.

14.1 Response under Periodic Excitations

Dynamical systems subjected to periodic excitations appear very often in practical applications. For example, an electric motor will impart a periodic excitation to the transmission system it drives. To analyze the response of such a system, the most effective way is to use

Figure 14.1: A spring-mass-damper system subjected to periodic excitations

the concept of frequency response functions developed in Chapter 13.

In this section, I will focus on the use of frequency response functions for periodic excitations. Many concepts explained in this section will also be used in Section 14.2 to address non-periodic excitations. This section, unfortunately, is rather long. I will first introduce Fourier series, which convert a periodic excitation into a series of harmonic excitations. Both real and complex Fourier series will be discussed. Then I will derive the frequency and time response using the Fourier series and the frequency response function. Finally, I will introduce an important concept called spectrums to make the solution process more insightful.

14.1.1 Fourier Series

Figure 14.1 illustrates a spring-mass-damper system subjected to a periodic force excitation f(t). Specifically, in Fig. 14.1 the excitation takes the form of a square wave (i.e., a bang-bang excitation). For this case, how would one predict the response x(t) of the spring-mass-damper system, and what would x(t) look like? In fact, the model shown in Fig. 14.1 works for any dynamical system subjected to a periodic excitation. For example, Fig. 14.2 illustrates an RLC circuit subjected to a periodic voltage input Vs(t). For this case, how would one predict the response Vc(t) across the capacitor C?

To find the response, the tool we need is the Fourier series. Let me start with the square wave first; I will generalize the analysis for an arbitrary periodic excitation f(t) later. The square wave f(t) in Fig. 14.1 is a periodic function of period T. Since it is an odd function,

Figure 14.2: An electrical circuit subjected to periodic excitations

it can be approximated via a series of sinusoidal functions of the same period T, i.e.,

$$f(t) \approx B_1\sin\omega_0 t + B_2\sin 2\omega_0 t + B_3\sin 3\omega_0 t + \cdots = \sum_{n=1}^{\infty} B_n\sin n\omega_0 t \tag{14.1}$$

where

$$\omega_0 \equiv \frac{2\pi}{T} \tag{14.2}$$

is called the fundamental frequency. Each term in (14.1) is called a harmonic; for example, the sin 3ω₀t term is called the third harmonic. The series shown in (14.1) is called a Fourier sine series because all the harmonics are sine functions.

When the input excitation f(t) is a general periodic function, we need to add cosine harmonics to the series:

$$f(t) \approx A_0 + \sum_{n=1}^{\infty}\left(A_n\cos n\omega_0 t + B_n\sin n\omega_0 t\right) \tag{14.3}$$

Note that a constant term A₀ is present in (14.3) to represent the case of n = 0. The series in (14.3) is called a Fourier series.

In either the Fourier sine series (14.1) or the complete series (14.3), how does one choose the coefficients Aₙ and Bₙ to best approximate f(t)? In a Fourier series, the coefficients are chosen such that the squared area between the function f(t) and a partial sum $\sum_{n=1}^{N}\left(A_n\cos n\omega_0 t + B_n\sin n\omega_0 t\right)$ is minimized. Specifically, Aₙ and Bₙ are chosen such that

$$E_N \equiv \int_0^T\left\{f(t) - \left[A_0 + \sum_{n=1}^{N}\left(A_n\cos n\omega_0 t + B_n\sin n\omega_0 t\right)\right]\right\}^2 dt \tag{14.4}$$

is minimized. Therefore, Aₙ and Bₙ are chosen such that

$$\frac{\partial E_N}{\partial A_n} = 0, \qquad \frac{\partial E_N}{\partial B_n} = 0, \qquad n = 1, 2, \ldots, N \tag{14.5}$$

As a result,

$$A_0 = \frac{1}{T}\int_0^T f(t)\,dt \tag{14.6}$$

$$A_n = \frac{2}{T}\int_0^T f(t)\cos n\omega_0 t\,dt, \qquad n = 1, 2, \ldots, N \tag{14.7}$$

and

$$B_n = \frac{2}{T}\int_0^T f(t)\sin n\omega_0 t\,dt, \qquad n = 1, 2, \ldots, N \tag{14.8}$$

The coefficients A₀, Aₙ, and Bₙ, n = 1, 2, …, are known as Fourier coefficients.
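The integrals (14.6) through (14.8) can be approximated by a simple Riemann sum. The sketch below is illustrative; it assumes an odd square wave of period T with levels ±1, like the one in Fig. 14.1, and recovers the classic result Bₙ = 4/(nπ) for odd n and Bₙ = 0 for even n.

```python
import math

T = 2.0
w0 = 2 * math.pi / T

def f(t):
    # Odd square wave of period T with levels +-1 (assumed waveform)
    return 1.0 if (t % T) < T / 2 else -1.0

def fourier_B(n, steps=20000):
    # Midpoint-rule approximation of the integral in (14.8)
    dt = T / steps
    return (2 / T) * sum(
        f((i + 0.5) * dt) * math.sin(n * w0 * (i + 0.5) * dt) * dt
        for i in range(steps)
    )

B = [fourier_B(n) for n in range(1, 6)]   # B1 through B5
```

The computed coefficients are approximately [1.2732, 0, 0.4244, 0, 0.2546], matching 4/(nπ) for n = 1, 3, 5.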

Example 14.1 This example shows how the Fourier coefficients Bₙ, n = 1, 2, …, are derived from (14.5) for the Fourier sine series shown in (14.1). In this case, the error defined in (14.4) between f(t) and a partial sum of the Fourier sine series reduces to

$$E_N \equiv \int_0^T\left[f(t) - \sum_{n=1}^{N} B_n\sin n\omega_0 t\right]^2 dt \tag{14.9}$$

The term $\sum_{n=1}^{N} B_n\sin n\omega_0 t$ in (14.9) is known as the N-th partial sum.

Figure 14.3 shows the difference between f(t) and the N-th partial sum. One can see that the difference is the height of the shaded area. Since the difference can be positive or negative, it alone is not a good measure of error: a sum of positive and negative differences may cancel while the differences themselves remain large. Therefore, the difference between f(t) and the N-th partial sum is squared and integrated. This type of error measure is known as the L² norm.

Here I use a two-term partial sum to derive (14.8) for n = 1, 2. For N = 2, (14.9)

Figure 14.3: Difference between f (t) and its partial sum

becomes

$$\begin{aligned} E_2 &\equiv \int_0^T\left[f(t) - B_1\sin\omega_0 t - B_2\sin 2\omega_0 t\right]^2 dt \\ &= \int_0^T [f(t)]^2\,dt + B_1^2\int_0^T \sin^2\omega_0 t\,dt + B_2^2\int_0^T \sin^2(2\omega_0 t)\,dt \\ &\quad - 2B_1\int_0^T f(t)\sin\omega_0 t\,dt - 2B_2\int_0^T f(t)\sin(2\omega_0 t)\,dt \\ &\quad + 2B_1B_2\int_0^T \sin\omega_0 t\sin(2\omega_0 t)\,dt \end{aligned} \tag{14.10}$$

Note that

$$\int_0^T \sin^2\omega_0 t\,dt = \int_0^T \sin^2(2\omega_0 t)\,dt = \frac{T}{2} \tag{14.11}$$

and

$$\int_0^T \sin\omega_0 t\sin(2\omega_0 t)\,dt = 0 \tag{14.12}$$

Substitution of (14.11) and (14.12) into (14.10) reduces it to

$$E_2 = \int_0^T [f(t)]^2\,dt + \frac{T}{2}\left(B_1^2 + B_2^2\right) - 2B_1\int_0^T f(t)\sin\omega_0 t\,dt - 2B_2\int_0^T f(t)\sin(2\omega_0 t)\,dt \tag{14.13}$$

Minimization of E₂ requires that

$$\frac{\partial E_2}{\partial B_1} = TB_1 - 2\int_0^T f(t)\sin\omega_0 t\,dt = 0 \tag{14.14}$$

and

$$\frac{\partial E_2}{\partial B_2} = TB_2 - 2\int_0^T f(t)\sin(2\omega_0 t)\,dt = 0 \tag{14.15}$$

Solution of (14.14) and (14.15) leads to

$$B_1 = \frac{2}{T}\int_0^T f(t)\sin\omega_0 t\,dt, \qquad B_2 = \frac{2}{T}\int_0^T f(t)\sin(2\omega_0 t)\,dt \tag{14.16}$$

which are the same expressions as in (14.8) for n = 1, 2.

When N → ∞, the partial sum in (14.4) becomes the infinite series in (14.3), and the error EN → 0, i.e.,

lim_{N→∞} ∫_0^T { f(t) − [ A0 + Σ_{n=1}^N (An cos nω0t + Bn sin nω0t) ] }² dt = 0    (14.17)
In this case, the Fourier series defined in (14.3) is said to converge to f(t) in the L2 sense, because the norm used in (14.17) is the L2 norm.

Convergence of a series of functions is a profound subject in mathematical analysis. There are many types of convergence, such as uniform convergence, pointwise convergence, L1 convergence, and L2 convergence. Each type has its own definition, and a series of functions may converge under one definition but not under another. For example, uniform convergence is very strict, whereas pointwise convergence is not; L2 convergence is somewhere in between.
Figure 14.4 demonstrates the concept of L2 convergence. In Fig. 14.4, the function considered is given by

f(t) = { 1, 0 < t < π;  0, otherwise }    (14.18)

The four subplots in Fig. 14.4 show the Fourier partial sums Σ_{n=1}^N (An cos nω0t + Bn sin nω0t) that approximate f(t) for N = 5, 10, 15, and 20, respectively.

Figure 14.4: L2 convergence and the Gibbs phenomenon

According to (14.18), f(t) is not a continuous function; discontinuities occur at t = 0 and t = π. In contrast, the partial sum Σ_{n=1}^N (An cos nω0t + Bn sin nω0t) is continuous. As a result, the discrepancy between f(t) and the partial sum is most pronounced near the discontinuities at t = 0 and t = π, where it appears as overshoot and rapid oscillations. This behavior is known as the Gibbs phenomenon. Nevertheless, the error between f(t) and the partial sum measured in the L2 sense, i.e., the error EN defined in (14.4), decreases as N increases. Although the partial sums in Fig. 14.4 never look exactly like f(t) near the discontinuities, the Fourier series converges to f(t) as N → ∞, and f(t) and its Fourier series are therefore considered identical in the L2 sense.
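The shrinking L2 error can be reproduced directly for the pulse in (14.18). The sketch below evaluates the partial sums (A0 = 1/2, An = 0, Bn = 2/(nπ) for odd n, as follows from (14.8)) and computes EN by a midpoint Riemann sum; the step counts and the choice of N values are arbitrary.

```python
import math

T = 2 * math.pi          # period of the pulse in (14.18)
w0 = 2 * math.pi / T     # fundamental frequency (= 1 here)
f = lambda t: 1.0 if 0 < (t % T) < math.pi else 0.0

def partial_sum(t, N):
    """A0 plus the harmonics up to order N; only odd harmonics survive for this pulse."""
    s = 0.5  # A0
    for n in range(1, N + 1, 2):
        s += (2 / (n * math.pi)) * math.sin(n * w0 * t)
    return s

def l2_error(N, steps=20000):
    """E_N of (14.4) via a midpoint Riemann sum."""
    dt = T / steps
    return sum((f((k + 0.5) * dt) - partial_sum((k + 0.5) * dt, N)) ** 2 * dt
               for k in range(steps))

errors = [l2_error(N) for N in (5, 10, 15, 20)]   # decreases monotonically
```

The Gibbs overshoot itself does not shrink with N, yet the errors list decreases: the overshoot concentrates on an ever-narrower region, so its L2 contribution vanishes.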

Example 14.2 Figure 14.5 illustrates a clock signal applied to the RLC circuit shown in Fig. 14.2 via the voltage source Vs(t). The clock signal switches back and forth between 1 and 0 with a period of T. Therefore,

Vs(t) = { 1, 0 < t < T/2;  0, T/2 < t < T },    Vs(t + T) = Vs(t)    (14.19)

Expand Vs(t) into a Fourier series.

Figure 14.5: Clock signals of an RLC circuit

The Fourier series of Vs(t) takes the form of

Vs(t) = A0 + Σ_{n=1}^∞ (An cos nω0t + Bn sin nω0t)    (14.20)

where the fundamental frequency ω0 is

ω0 ≡ 2π/T    (14.21)
For coefficient A0,

A0 = (1/T) ∫_0^T Vs(t) dt = (1/T) ∫_0^{T/2} 1 · dt = 1/2    (14.22)

For coefficient An, n = 1, 2, . . .,

An = (2/T) ∫_0^T Vs(t) cos nω0t dt = (2/T) ∫_0^{T/2} cos nω0t dt
   = (2/T) · (1/(nω0)) sin(nω0T/2) = (2/(nω0T)) sin nπ = 0    (14.23)

where (14.21) is used. For coefficient Bn, n = 1, 2, . . .,

Bn = (2/T) ∫_0^T Vs(t) sin nω0t dt = (2/T) ∫_0^{T/2} sin nω0t dt
   = −(2/T) · (1/(nω0)) [cos nω0t]_0^{T/2} = (2/(nω0T)) (1 − cos(nω0T/2))
   = (1/(nπ)) (1 − cos nπ) = { 0, n even;  2/(nπ), n odd }    (14.24)

Therefore, the Fourier series (14.20) takes the form of

Vs(t) = 1/2 + Σ_{n=1,3,5,...} (2/(nπ)) sin nω0t
      = 1/2 + (2/π) sin ω0t + (2/(3π)) sin 3ω0t + (2/(5π)) sin 5ω0t + (2/(7π)) sin 7ω0t + · · ·    (14.25)

Note that the Fourier series in (14.25) converges because its coefficients decay as 1/n.
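The coefficients (14.22)-(14.24) are easy to confirm numerically. The sketch below integrates the clock signal against each harmonic with a midpoint Riemann sum; T = 1 s and the checked orders are arbitrary choices.

```python
import math

T = 1.0
w0 = 2 * math.pi / T
Vs = lambda t: 1.0 if (t % T) < T / 2 else 0.0

def coeff(kind, n, steps=50000):
    """A0 (n = 0), An (kind='cos'), or Bn (kind='sin') of Vs via a midpoint Riemann sum."""
    dt = T / steps
    scale = 1 / T if n == 0 else 2 / T
    trig = math.cos if kind == 'cos' else math.sin
    return scale * sum(Vs((k + 0.5) * dt)
                       * (1.0 if n == 0 else trig(n * w0 * (k + 0.5) * dt)) * dt
                       for k in range(steps))

A0 = coeff('cos', 0)   # expected 1/2 from (14.22)
A3 = coeff('cos', 3)   # expected 0 from (14.23)
B3 = coeff('sin', 3)   # expected 2/(3*pi) from (14.24)
B4 = coeff('sin', 4)   # expected 0 (even harmonic)
```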
14.1.2 Complex Fourier Series

The Fourier series in (14.3) makes very good physical sense in terms of convergence, but it is not compact: three different types of coefficients, A0, An, and Bn, need to be calculated. These coefficients have different mathematical expressions, which can be cumbersome in coding or in developing new algorithms (e.g., for signal processing). The complex Fourier series provides an effective remedy by giving a concise representation, as follows.

One can obtain the complex Fourier series by applying the definitions of sin nω0t and cos nω0t in (12.18) and (12.19), i.e.,

sin nω0t = (1/(2j)) (e^{jnω0t} − e^{−jnω0t})    (14.26)

and

cos nω0t = (1/2) (e^{jnω0t} + e^{−jnω0t})    (14.27)

Substitution of (14.26) and (14.27) into (14.3) results in

f(t) = A0 + Σ_{n=1}^∞ [ (An/2) (e^{jnω0t} + e^{−jnω0t}) + (Bn/(2j)) (e^{jnω0t} − e^{−jnω0t}) ]
     = A0 + Σ_{n=1}^∞ [ (An/2 + Bn/(2j)) e^{jnω0t} + (An/2 − Bn/(2j)) e^{−jnω0t} ]    (14.28)
Furthermore, one can define

ck ≡ (1/T) ∫_0^T f(t) e^{−jkω0t} dt,    k = 0, ±1, ±2, ±3, . . .    (14.29)

Then

A0 = (1/T) ∫_0^T f(t) dt = c0    (14.30)

Moreover,

An/2 + Bn/(2j) = (1/T) ∫_0^T f(t) [cos nω0t − j sin nω0t] dt = cn,    n = 1, 2, 3, . . .    (14.31)

and

An/2 − Bn/(2j) = (1/T) ∫_0^T f(t) [cos nω0t + j sin nω0t] dt = c−n,    n = 1, 2, 3, . . .    (14.32)
Hence, (14.28) is reduced to

f(t) = Σ_{k=−∞}^∞ ck e^{jkω0t}    (14.33)

Note that k is a dummy index. If n is used as the dummy index instead, the complex Fourier series (14.33) takes the form of

f(t) = Σ_{n=−∞}^∞ cn e^{jnω0t},    cn = (1/T) ∫_0^T f(t) e^{−jnω0t} dt    (14.34)
There are a few things to note here. First, the coefficients cn are generally complex; therefore, each cn has a magnitude and a phase angle. Second, the index n of cn runs through all integers, both positive and negative. Third, f(t) is usually a real function in applications. If f(t) is real, the coefficients cn and c−n are complex conjugates, i.e., c−n = cn*, where the superscript * represents the complex conjugate. This conjugate symmetry is needed so that the imaginary parts in (14.34) cancel out, leaving f(t) real.

Example 14.3 Let us revisit the clock signal whose Fourier series was derived in Example 14.2. Figure 14.6 again illustrates the clock signal Vs(t) that switches between 1 and 0 with a period of T. Moreover,

Vs(t) = { 1, 0 < t < T/2;  0, T/2 < t < T },    Vs(t + T) = Vs(t)    (14.35)

Expand Vs(t) into a complex Fourier series.

Figure 14.6: Clock signals of an RLC circuit

According to (14.34),

Vs(t) = Σ_{n=−∞}^∞ cn e^{jnω0t},    ω0 = 2π/T    (14.36)

For n = 0, the Fourier coefficient c0 can be found from (14.34) as

c0 = (1/T) ∫_0^T Vs(t) dt = (1/T) ∫_0^{T/2} dt = 1/2    (14.37)
For n ≠ 0, the Fourier coefficient cn can be found from (14.34) as

cn = (1/T) ∫_0^T Vs(t) e^{−jnω0t} dt = (1/T) ∫_0^{T/2} e^{−jn(2πt/T)} dt
   = (1/T) · (T/(−2πjn)) [e^{−jn(2πt/T)}]_0^{T/2} = (j/(2πn)) (e^{−jπn} − 1)
   = { 0, n even;  −j/(nπ), n odd }    (14.38)

Note that the expression of cn derived in (14.38) works for both positive and negative n.
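Both (14.38) and the conjugate symmetry c−n = cn* can be checked numerically. The sketch below evaluates (14.34) with a midpoint Riemann sum; T = 1 and the checked indices are arbitrary choices.

```python
import cmath, math

T = 1.0
w0 = 2 * math.pi / T
Vs = lambda t: 1.0 if (t % T) < T / 2 else 0.0

def c(n, steps=50000):
    """c_n = (1/T) * integral_0^T Vs(t) e^{-j n w0 t} dt, midpoint Riemann sum of (14.34)."""
    dt = T / steps
    return (1 / T) * sum(Vs((k + 0.5) * dt) * cmath.exp(-1j * n * w0 * (k + 0.5) * dt) * dt
                         for k in range(steps))

c1, c2, cm1 = c(1), c(2), c(-1)
# (14.38) predicts c1 = -j/pi and c2 = 0; since Vs is real, c_{-1} should equal conj(c1)
```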
14.1.3 Frequency Response

Since the equation of motion of the spring-mass-damper system in Fig. 14.1 is linear, the principle of superposition can be used to find the response under periodic excitations. Each term in the Fourier series (14.3) is a harmonic excitation. Therefore, the vibration response excited by each harmonic can be predicted via (13.17) by using the frequency response function G(ω). Table 14.1 lists the harmonics of the force excitation and the corresponding harmonic responses of x(t) based on (13.17).

Table 14.1: Input force harmonics and corresponding response

Harmonics of f(t)      Response of x(t)
A0                     A0 |G(0)|
An cos nω0t            An |G(nω0)| cos(nω0t + ∠G(nω0))
Bn sin nω0t            Bn |G(nω0)| sin(nω0t + ∠G(nω0))

According to Table 14.1, the response x(t) of the spring-mass-damper system is

x(t) = A0 |G(0)| + Σ_{n=1}^∞ An |G(nω0)| cos(nω0t + ∠G(nω0)) + Σ_{n=1}^∞ Bn |G(nω0)| sin(nω0t + ∠G(nω0))    (14.39)

Figure 14.7 recaps the approach of using Fourier series and the principle of superposition to predict the displacement x(t). First, the excitation force f(t), serving as the input to the spring-mass-damper system, is represented as a Fourier series. The Fourier series consists of infinitely many harmonics, each of which is characterized by a Fourier coefficient A0, An, or Bn as well as a driving frequency nω0. Second, the spring-mass-damper system is characterized via its frequency response function G(ω). Since each harmonic of f(t) serves as an independent harmonic excitation with driving frequency nω0, the frequency response function G(ω) magnifies each harmonic by a gain |G(nω0)| and shifts its phase by a lag ∠G(nω0). Finally, all these response components of x(t), shown in (14.39) and Table 14.1, are summed to give x(t). These response components are themselves harmonic and constitute the Fourier series of x(t). The same thinking process works if a complex Fourier series is used; see Fig. 14.8.

Example 14.4 This is a hypothetical example; its only purpose is to illustrate the process laid out in Fig. 14.7. A spring-mass-damper system has the following frequency response function from f(t) to x(t):

Figure 14.7: Recap of Fourier series and the principle of superposition

Figure 14.8: Complex Fourier series and the principle of superposition

G(ω) = jω / (m (ωn² − ω²))    (14.40)

where ωn = 2p and p is a constant parameter. If the excitation force is

f (t) = cos 1.5pt + 2 sin 2.5pt (14.41)

what is the response x(t)?

The knee-jerk reaction is to expand f(t) into a Fourier series, but that is not necessary because f(t) in (14.41) is already a Fourier series. If we choose ω0 = 0.5p, then (14.41) is simply

f(t) = cos 3ω0t + 2 sin 5ω0t    (14.42)

Now let us focus on the first component, cos 1.5pt. The driving frequency is ω = 3ω0 = 1.5p. The corresponding frequency response is obtained by setting ω = 1.5p and ωn = 2p in (14.40), i.e.,

G(3ω0) = G(1.5p) = j(1.5p) / (m [(2p)² − (1.5p)²]) = 0.857 j/(mp)    (14.43)

Hence, the magnitude and phase of G(1.5p) are

|G(1.5p)| = 0.857/(mp),    ∠G(1.5p) = 90°    (14.44)

The response component corresponding to the cos 1.5pt excitation is then

x3(t) = (0.857/(mp)) cos(1.5pt + 90°) = −(0.857/(mp)) sin 1.5pt    (14.45)
Now let us focus on the second component, 2 sin 2.5pt. The driving frequency is ω = 5ω0 = 2.5p. The corresponding frequency response is obtained by setting ω = 2.5p and ωn = 2p in (14.40), i.e.,

G(5ω0) = G(2.5p) = j(2.5p) / (m [(2p)² − (2.5p)²]) = −1.11 j/(mp)    (14.46)

Hence, the magnitude and phase of G(2.5p) are

|G(2.5p)| = 1.11/(mp),    ∠G(2.5p) = −90°    (14.47)

The response component corresponding to the 2 sin 2.5pt excitation is then

x5(t) = 2 · (1.11/(mp)) sin(2.5pt − 90°) = −(2.22/(mp)) cos 2.5pt    (14.48)

The total response x(t) is then

x(t) = x3(t) + x5(t) = −(0.857/(mp)) sin 1.5pt − (2.22/(mp)) cos 2.5pt    (14.49)
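The numbers in (14.43)-(14.48) can be double-checked by evaluating (14.40) directly. In the sketch below, m = 1 and p = 1 are arbitrary normalizations; only the ratios 0.857 and 1.11 and the ±90° phases matter.

```python
import cmath, math

m, p = 1.0, 1.0
wn = 2 * p

def G(w):
    """Frequency response function (14.40): G(w) = j*w / (m*(wn^2 - w^2))."""
    return 1j * w / (m * (wn ** 2 - w ** 2))

G3 = G(1.5 * p)   # first component, omega = 3*w0 = 1.5p
G5 = G(2.5 * p)   # second component, omega = 5*w0 = 2.5p

mag3, ph3 = abs(G3), math.degrees(cmath.phase(G3))   # expect 0.857 and +90 deg
mag5, ph5 = abs(G5), math.degrees(cmath.phase(G5))   # expect 1.111 and -90 deg
```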

Figure 14.9: A spring-mass system with base motion

Figure 14.10: Acceleration of the base motion

Figure 14.11: Free-body diagram

Example 14.5 Consider a spring-mass system experiencing a base motion; see Fig. 14.9. Let y(t) be the displacement of the base, and x(t) be the elongation of the spring; in other words, x(t) is the displacement of the mass relative to the base. Moreover, Fig. 14.10 shows the negative acceleration −ÿ(t) of the base motion, which is an alternating square wave with period T and amplitude A. Specifically,

−ÿ(t) = { A, 0 < t < T/2;  −A, T/2 < t < T }    (14.50)
Determine the response of the mass and when resonance would occur.

The first step is to derive the equation of motion. Figure 14.11 shows a free-body diagram of the mass. The only force acting on the mass is −kx, the restoring force from the spring; the negative sign means that the force opposes the elongation x(t). At the same time, the absolute acceleration of the mass is ẍ(t) + ÿ(t). Application of Newton's second law then results in

−kx = m(ẍ + ÿ)    (14.51)

Or

mẍ + kx = f(t)    (14.52)

where

f(t) ≡ −mÿ = { mA, 0 < t < T/2;  −mA, T/2 < t < T },    f(t + T) = f(t)    (14.53)

In other words, the spring-mass system is subjected to a periodic force excitation f(t), which is the inertia force of the mass as viewed from the moving base.

The second step is to expand the force excitation f(t) into a Fourier series

f(t) = A0 + Σ_{p=1}^∞ (Ap cos pω0t + Bp sin pω0t)    (14.54)

where

A0 = (1/T) ∫_0^T f(t) dt = 0    (14.55)

Ap = (2/T) ∫_0^T f(t) cos pω0t dt = 0    (14.56)

and

Bp = (2/T) ∫_0^T f(t) sin pω0t dt = (2/T) ∫_0^{T/2} mA sin pω0t dt + (2/T) ∫_{T/2}^T (−mA) sin pω0t dt
   = { 4mA/(pπ), p odd;  0, p even }    (14.57)

Substitution of (14.55), (14.56), and (14.57) back into (14.54) results in the Fourier series of f(t) as

f(t) = (4mA/π) Σ_{p=1,3,5,...} (sin pω0t)/p    (14.58)

The third step is to find the frequency response function of the spring-mass system. For (14.52),

G(ω) = 1/(k − mω²) = (1/m) · 1/(ωn² − ω²)    (14.59)

where ωn = √(k/m) is the natural frequency. Note that the frequency response function G(ω) in (14.59) is real; therefore, the phase of G(ω) can only be 0° or 180°. This has an important consequence: when the excitation f(t) is harmonic with driving frequency ω, the response x(t) can be obtained simply through x(t) = G(ω)f(t).

The last step is to find the response harmonics of x(t) using the principle of superposition. Consider a representative harmonic sin pω0t in (14.58). Since G(ω) is real and sin pω0t is harmonic with driving frequency pω0, the corresponding response component is

xp(t) = G(pω0) · (4mA/π) (sin pω0t)/p = (4A/(π ωn²)) · (sin pω0t) / ( p [1 − (pω0/ωn)²] )    (14.60)

The corresponding response x(t) is then found as

x(t) = Σ_{p=1,3,5,...} xp(t) = Σ_{p=1,3,5,...} (4A/(π ωn²)) · (sin pω0t) / ( p [1 − (pω0/ωn)²] )    (14.61)
As one can see from (14.60), the denominator of xp(t) vanishes at

pω0 = ωn,    p = 1, 3, 5, . . .    (14.62)

leading to a resonance condition. This is a very important consequence of periodic excitation. If the excitation is harmonic, resonance can only occur when the driving frequency is ωn. If the excitation is periodic, however, the higher harmonics can also drive the system into resonance. This is a real effect; I have seen rotary machines fail due to resonance caused by higher harmonics.
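The resonance condition (14.62) is easy to scan for in code: a periodic excitation resonates whenever any odd harmonic pω0 lands on ωn, not just the fundamental. In the sketch below, ωn = 5ω0 is a hypothetical choice that deliberately places the fifth harmonic on resonance.

```python
w0 = 1.0          # fundamental frequency of the square-wave excitation (arbitrary units)
wn = 5.0 * w0     # natural frequency: chosen equal to the 5th harmonic

def denominator(p):
    """The factor 1 - (p*w0/wn)^2 from (14.60); zero means resonance."""
    return 1 - (p * w0 / wn) ** 2

# Scan the odd harmonics present in (14.58)
resonant = [p for p in range(1, 20, 2) if abs(denominator(p)) < 1e-12]
```

Even though the fundamental ω0 is far from ωn here, the scan flags p = 5: the system would resonate under this periodic excitation.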

Example 14.6 Figure 14.12 shows a computer bus transmitting digital data. The computer
bus has resistance R and stray capacitance C. A periodic clock signal with period T switching

Figure 14.12: A computer bus subjected to a clock signal

between 1 and 0 (see Fig. 14.13) is sent to the left end of the computer bus. What is the
transmitted signal at the right end of the bus?

First, the computer bus is modeled as the RC circuit shown in Fig. 14.14, where R and C account for the resistance and stray capacitance of the bus, respectively. The source Vs of the RC circuit mimics the clock signal shown in Fig. 14.13, which is mathematically expressed as

Vs(t) = { 1, 0 < t < T/2;  0, T/2 < t < T },    Vs(t + T) = Vs(t)    (14.63)
The Fourier series of (14.63) has already been found in (14.25); it is reproduced here for reference:

Vs(t) = 1/2 + Σ_{n=1,3,5,...} (2/(nπ)) sin nω0t
      = 1/2 + (2/π) sin ω0t + (2/(3π)) sin 3ω0t + (2/(5π)) sin 5ω0t + (2/(7π)) sin 7ω0t + · · ·    (14.64)

For the RC circuit, the transmitted signal is the voltage vc(t) across the stray capacitance. Moreover, the equation governing vc is

τ v̇c + vc = Vs(t)    (14.65)

where the time constant τ is

τ ≡ RC    (14.66)

For (14.65), the frequency response function from Vs(t) to vc(t) is

G(ω) = 1/(1 + jωτ) = |G(ω)| e^{j∠G(ω)}    (14.67)

Figure 14.13: The periodic clock signal

Figure 14.14: A computer bus modeled via an RC circuit

where the magnitude |G(ω)| and the phase angle ∠G(ω) are

|G(ω)| = 1/√(1 + (ωτ)²),    ∠G(ω) = −tan⁻¹(ωτ)    (14.68)

Bode plots of |G(ω)| and ∠G(ω) for the RC circuit are shown in Fig. 14.15 and Fig. 14.16, respectively, for reference. When the input x(t) is harmonic, the output y(t) is related to the input x(t) through

x(t) = A sin ωt,    y(t) = A |G(ω)| sin[ωt + ∠G(ω)]    (14.69)

or, alternatively,

x(t) = A cos ωt,    y(t) = A |G(ω)| cos[ωt + ∠G(ω)]    (14.70)

Now let us focus on the constant term in (14.64). For this term, the driving frequency is ω = 0 and the magnitude is A = 1/2. According to (14.68), |G(0)| = 1 and ∠G(0) = 0. Therefore, the response vc0 corresponding to the constant-term excitation can be obtained from (14.70) with A = 1/2, |G(0)| = 1, and ∠G(0) = 0. Then one obtains

vc0(t) = 1/2    (14.71)
Now let us switch to a representative term (2/(nπ)) sin nω0t in (14.64). For this term, the driving frequency is ω = nω0 and the magnitude is A = 2/(nπ). According to (14.68),

|G(nω0)| = 1/√(1 + (nω0τ)²),    ∠G(nω0) = −tan⁻¹(nω0τ)    (14.72)

Figure 14.15: Bode magnitude plot for the RC circuit

Figure 14.16: Bode phase plot for the RC circuit

Therefore, the response vcn(t) corresponding to the representative-term excitation can be obtained from (14.69) with A = 2/(nπ) and with |G(nω0)| and ∠G(nω0) given in (14.72). Then one obtains

vcn(t) = (2/(nπ)) · (1/√(1 + (nω0RC)²)) sin[nω0t − tan⁻¹(nω0RC)]    (14.73)

where τ = RC has been used (cf. (14.66)).

Finally, the response voltage vc(t) is assembled from vc0 and vcn(t) as

vc(t) = vc0 + Σ_{n=1,3,5,...} vcn(t)
      = 1/2 + Σ_{n=1,3,5,...} (2/(nπ)) · (1/√(1 + (nω0RC)²)) sin[nω0t − tan⁻¹(nω0RC)]    (14.74)

The response vc(t) derived in (14.74) is elegant, but it does not look very friendly. Its physics is difficult to interpret without plotting (14.74) explicitly. For example, one would care whether or not the signal received at the right end is distorted. Under what condition would the received signal vc(t) resemble the transmitted clock signal? This is exactly the occasion when Bode plots come to the rescue.

To gain physical insight into (14.74), there are several key elements to consider. The first element is the Fourier coefficients of a Fourier series. The shape of a function, such as Vs(t), is controlled by its Fourier coefficients. If two Fourier series have very similar coefficients, the functions they represent will look very similar. For example, if the Fourier coefficients of vc(t) in (14.74) are almost proportional to the Fourier coefficients of Vs(t) in (14.64), the received signal vc(t) at the right end of the bus will resemble the transmitted clock signal Vs(t) at the left end.

The second element is the convergence of Fourier series. Since Fourier series are infinite series, terms involving higher-order harmonics have diminishing contributions; see Fig. 14.4. Very often, a partial sum of 20 or 30 terms gives a reasonably good approximation of the function. Therefore, the lower-harmonic terms are the dominant terms in a Fourier series. This feature has a significant consequence: if the dominant terms of two Fourier series are alike, the functions represented by the two series will also look alike. Even if the non-dominant terms of the two series are very different, they have very little influence on the represented functions. For the computer bus here, if the dominant terms of (14.64) and (14.74) are alike, the received signal vc(t) and the clock signal Vs(t) will look alike too.

The third element is the bandwidth of the frequency response function G(ω), i.e., the frequency range in which G(ω) is roughly constant. For the RC circuit, G(ω) is roughly constant for ω < 1/τ; see Fig. 14.15 and Fig. 14.16. If the dominant terms of Vs(t) fall within the bandwidth (i.e., the flat part of |G(ω)| in Fig. 14.15), the dominant terms of vc(t) will be roughly proportional to the dominant terms of Vs(t); see (14.69) and (14.70). Therefore, the received signal vc(t) will resemble the clock signal Vs(t). In essence, the more lower-harmonic terms fall within the bandwidth, the more the received signal vc(t) will resemble the clock signal Vs(t). If one wants to ensure that vc(t) resembles Vs(t), one should choose

ω0 ≪ 1/τ    (14.75)

or, alternatively,

2πRC ≪ T    (14.76)

In other words, the time constant τ of the RC circuit must be small so that the circuit has
fast transient response and is able to follow the excitation Vs (t). The small time constant,
of course, requires that the resistance R and stray capacitance C be small as well.
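The design rule (14.76) can be checked by evaluating a partial sum of (14.74) for two bus time constants. In the sketch below, T = 1 s is an arbitrary normalization; RC = T/100 satisfies 2πRC ≪ T while RC = T does not. The worst-case deviation of vc(t) from the ideal clock level 1, sampled over the middle of the "high" half-period, measures the distortion.

```python
import math

T = 1.0
w0 = 2 * math.pi / T

def vc(t, RC, n_max=49):
    """Partial sum of (14.74): steady-state bus voltage for the clock input."""
    s = 0.5
    for n in range(1, n_max + 1, 2):
        s += (2 / (n * math.pi)) / math.sqrt(1 + (n * w0 * RC) ** 2) \
             * math.sin(n * w0 * t - math.atan(n * w0 * RC))
    return s

def distortion(RC, samples=200):
    """Max deviation from the ideal level 1 over t in [T/8, 3T/8]."""
    ts = [T / 8 + k * (T / 4) / samples for k in range(samples)]
    return max(abs(vc(t, RC) - 1.0) for t in ts)

fast = distortion(T / 100)   # 2*pi*RC << T: received signal tracks the clock
slow = distortion(T)         # 2*pi*RC comparable to T: heavy distortion
```

With the small time constant the plateau is reproduced almost exactly; with the large one the fundamental is attenuated and lagged, and the "clock" collapses toward its mean of 1/2.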

Figure 14.17: Concept of Fourier sine spectrums

14.1.4 Concept of Spectrums

Although the general response under periodic excitations can be found via (14.39), the formula has only limited use. As demonstrated in Example 14.6, formula (14.39) is very effective for calculating the response, but not so much for forming a big-picture conception of it. For example, it is very hard to visualize the response from (14.39) unless the response is calculated and plotted. It is likewise hard to identify characteristic responses (e.g., the presence of a large resonance or the dominance of a single frequency) without calculating and plotting (14.39). The concept of spectrums is meant to provide a high-level, big-picture conception of the response without much calculation.

Let us start with an odd function f(t) that can be expanded into a Fourier sine series, say

f(t) ≈ A1 sin ω0t + A2 sin 2ω0t + A3 sin 3ω0t + · · · = Σ_{n=1}^∞ An sin nω0t    (14.77)

As shown in Fig. 14.17, each harmonic An sin nω0t, n = 1, 2, . . ., in the series (14.77) can be plotted individually as a function of time. Moreover, all these harmonics share the same time scale over one period T. From the front view, the summation of these individual harmonics gives the function f(t), as indicated in the Fourier series (14.77). This is known as the time-domain representation.

From the side view, however, one sees an amplitude A1 at frequency ω0, an amplitude A2 at frequency 2ω0, an amplitude A3 at frequency 3ω0, and so on. In other words, the amplitudes An form a discrete function at the frequencies nω0, n = 1, 2, 3, . . .. This discrete function, with frequency as the independent variable and the Fourier coefficients An as the dependent variable, is known as the Fourier sine spectrum. Through the concept of spectrums, one can describe how much each harmonic participates in the Fourier series, or which frequencies dominate the time-domain function f(t). This way of visualization is characterized via the frequencies nω0 and is known as the frequency-domain representation.

The visualization described above has a significant implication: the Fourier spectrum presents a different way to represent f(t). Instead of describing f(t) in the time domain, one can use the Fourier spectrum in the frequency domain to describe the same function. As a result, the spectrum plays the role of a fingerprint, or the DNA, of a time-domain function f(t). If two time-domain signals have the same spectrums, the two signals are identical in the L2 sense. If two time-domain signals have similar spectrums, the two signals will look similar in the time domain.

One advantage of the frequency-domain representation is its simplicity. In the time domain, one needs many data points to describe f(t) with high fidelity. In the frequency domain, the number of harmonics needed for a good representation of f(t) is usually much lower.

Similarly, the same idea can be generalized to any periodic function f(t) expanded in the form of the complex Fourier series (14.34). In this case, the spectrum obtained is called a complex Fourier spectrum and is illustrated in Fig. 14.18. Again, each harmonic is plotted individually; each harmonic now has a magnitude |cn| and also a phase ∠cn. The front view of these harmonics gives the Fourier expansion of f(t) in the time domain, while the side view gives the frequency-domain representation of f(t).

Complex spectrums are the most commonly used form of spectrums in practice. Note that the index n of cn runs from −∞ to ∞. Also, cn with a negative index is the complex conjugate of cn with a positive index, i.e., c−n = cn*, n = 1, 2, 3, . . ., where the superscript * represents a complex conjugate. As a result, only the cn with positive indices are shown in complex spectrums.

Figure 14.18: Concept of complex Fourier spectrums

To obtain spectrums in experiments, one can feed time-domain signals into a piece of equipment called a spectrum analyzer, which converts time-domain representations into frequency-domain representations. More will be said about this in later sections, when I introduce methods of vibration testing.

Figure 14.19: Magnitude of the complex spectrum in Example 14.7

Figure 14.20: Phase of the complex spectrum in Example 14.7

Example 14.7 Consider the spring-mass system subjected to the base motion discussed in Example 14.5. The spring-mass system in Fig. 14.9 is subjected to the base motion of Fig. 14.10, which is equivalent to an inertia force f(t):

f(t) ≡ −mÿ = { mA, 0 < t < T/2;  −mA, T/2 < t < T },    f(t + T) = f(t)    (14.78)

(The expression of f(t) in (14.78) was defined in (14.53) and is reproduced here for reference.) What is the complex spectrum of f(t)?

The first step is to expand f(t) into a complex Fourier series; see (14.34). With f(t) shown in (14.78), the complex Fourier coefficients cn become

cn = (1/T) ∫_0^T f(t) e^{−jnω0t} dt = (1/T) ∫_0^{T/2} mA e^{−jnω0t} dt + (1/T) ∫_{T/2}^T (−mA) e^{−jnω0t} dt
   = (mA/T) (1/(−jnω0)) [e^{−jnω0t}]_0^{T/2} + (mA/T) (1/(jnω0)) [e^{−jnω0t}]_{T/2}^T
   = (mA/T) (1/(jnω0)) (1 − 2e^{−jnω0T/2} + e^{−jnω0T})    (14.79)

Recall that ω0 = 2π/T; then (14.79) reduces to

cn = (mA/(nπj)) (1 + (−1)^{n+1}) = { 2mA/(nπj), n odd;  0, n even }    (14.80)
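A quick numerical check of (14.80): with the arbitrary normalizations m = A = T = 1, the odd coefficients should have magnitude 2/(nπ) and phase −90°, and the even ones should vanish. A sketch using a midpoint Riemann sum of (14.34):

```python
import cmath, math

m, A, T = 1.0, 1.0, 1.0
w0 = 2 * math.pi / T
f = lambda t: m * A if (t % T) < T / 2 else -m * A   # the square wave (14.78)

def c(n, steps=50000):
    """c_n of the alternating square wave via (14.34), midpoint Riemann sum."""
    dt = T / steps
    return (1 / T) * sum(f((k + 0.5) * dt) * cmath.exp(-1j * n * w0 * (k + 0.5) * dt) * dt
                         for k in range(steps))

c1, c2 = c(1), c(2)
# (14.80): c1 = 2mA/(pi*j), i.e., magnitude 2/pi and phase -90 deg; c2 = 0
```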
The magnitude and phase of cn are plotted in Fig. 14.19 and Fig. 14.20, respectively; together they constitute the spectrum of f(t).

Figure 14.21 illustrates how complex spectrums are used in the frequency domain to find the response. Basically, Fig. 14.21 is a re-interpretation of Fig. 14.8 in view of the concept of spectrums. The first step in Fig. 14.8 is to find the Fourier coefficients cn; this step is now interpreted as finding the spectrum of the input excitation f(t) in Fig. 14.21. Note that the input spectrum has both magnitude and phase. The second step in Fig. 14.8 is to evaluate the frequency response function G(ω) at ω = nω0; this step remains the same in Fig. 14.21. Again, the frequency response function G(ω) has magnitude and phase. The third step in Fig. 14.8 is to multiply cn by G(nω0): the magnitudes of cn and G(nω0) are multiplied, while their phases are added. In Fig. 14.21, this step is the multiplication of the input spectrum by G(nω0), resulting in the spectrum of the output displacement x(t). The output spectrum gives precisely the Fourier coefficients of x(t); therefore, one can use the Fourier series to find x(t).

Figure 14.21: Find frequency response via complex Fourier spectrums
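The three-step pipeline of Fig. 14.21 can be sketched directly: compute the input spectrum cn, multiply each coefficient by G(nω0) to get the output spectrum, and sum the complex exponentials. The first-order G(ω) below is a placeholder system (an RC-type response with τ = 0.05T, chosen only for illustration); the point is the mechanics, not the particular system. Because c−n = cn* and G(−ω) = G(ω)* for a real system, the reconstructed output comes out real.

```python
import cmath, math

T = 1.0
w0 = 2 * math.pi / T
tau = 0.05 * T                          # placeholder system time constant (assumption)
f = lambda t: 1.0 if (t % T) < T / 2 else 0.0   # clock-signal input

def c(n, steps=4000):
    """Step 1: input spectrum c_n via a midpoint Riemann sum of (14.34)."""
    dt = T / steps
    return (1 / T) * sum(f((k + 0.5) * dt) * cmath.exp(-1j * n * w0 * (k + 0.5) * dt) * dt
                         for k in range(steps))

G = lambda w: 1 / (1 + 1j * w * tau)    # placeholder frequency response function

N = 15
d = {n: c(n) * G(n * w0) for n in range(-N, N + 1)}   # Step 2-3: output spectrum

def x(t):
    """Output in the time domain, reconstructed from the output spectrum."""
    return sum(d[n] * cmath.exp(1j * n * w0 * t) for n in range(-N, N + 1))

x_mid = x(T / 4)    # sample the response in the middle of the 'high' half-period
```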

Example 14.8 Consider a spring-mass-damper system as shown in Fig. 14.21. The spring-mass-damper system has a natural frequency ωn = 400 Hz; moreover, k = 1 for the sake of simplicity. The input force f(t) is a square wave switching between 0.5 and −0.5 with a period T. This example shows the displacement response x(t) when T = 0.2 s, T = 0.02 s, T = 0.002 s, and T = 0.0002 s.

Figure 14.22, Fig. 14.23, Fig. 14.24, and Fig. 14.25 show time-domain and frequency-domain representations of these four cases. Each of these four figures consists of an upper sub-plot and a lower sub-plot. The upper sub-plots show a frequency response function (solid line), an input force spectrum (plus markers), and an output displacement spectrum (circle markers); only the magnitude is plotted due to limited space. The lower sub-plots show the input force f(t) (dashed line) and the output displacement x(t) (solid line). Note that the input force spectrum and the output displacement spectrum are the Fourier coefficients of f(t) and x(t), respectively.

Figure 14.22: Spectrums and response when ω0 = 5 Hz

Figure 14.23: Spectrums and response when ω0 = 50 Hz

Since the driving force f(t) is a square wave, it only has odd-order harmonics; see (14.58). Specifically, the magnitude of the n-th harmonic is 2/(nπ), n = 1, 3, 5, . . .. Since the magnitude decreases as the order n increases, a small number of harmonics is enough to produce an accurate representation of f(t) and x(t) in the time domain. In this example, the harmonics up to order 25 are retained, i.e., n = 1, 3, 5, . . . , 25, to construct the spectrums and the time functions f(t) and x(t) in Fig. 14.22, Fig. 14.23, Fig. 14.24, and Fig. 14.25.

Consider the case T = 0.2 s, which corresponds to a fundamental frequency ω0 = 5 Hz; see Fig. 14.22. Since the natural frequency is ωn = 400 Hz, the frequencies nω0 of the retained harmonics all lie below ωn. As a result, the input force spectrum (the plus markers) appears entirely below the natural frequency (the frequency where the peak is located). In this frequency range, the frequency response function is very flat (almost constant) and approaches 1. Therefore, the output spectrum (the circle markers), i.e., the product of the input spectrum and the frequency response function, is almost identical to the input spectrum. The plus and circle markers lie right on top of each other except for a few points from the higher harmonics. As a result, the time-domain representations of f(t) and x(t) are very similar to each other.

Figure 14.24: Spectrums and response when ω0 = 500 Hz

Figure 14.25: Spectrums and response when ω0 = 5000 Hz

Consider the case T = 0.02 s, which corresponds to a fundamental frequency ω0 = 50 Hz; see Fig. 14.23. In this case, the retained harmonics cover a frequency range up to 25ω0 = 1250 Hz, which extends beyond the natural frequency ωn = 400 Hz. As a result, the input force spectral points appear both below and above the natural frequency. In this frequency range, the frequency response function has a resonance peak. Therefore, the frequency response function greatly distorts the output spectrum relative to the input spectrum; the plus and circle markers in Fig. 14.23 show two distinct sets of spectrums. Consequently, the time-domain representations of f(t) and x(t) are very different from each other.

Consider the case T = 0.002 s, which corresponds to a fundamental frequency ω0 = 500 Hz; see Fig. 14.24. In this case, the fundamental frequency is already greater than the natural frequency ωn = 400 Hz. As a result, all the input force spectral points appear above the natural frequency. In this frequency range, two things occur. First, the phase is reversed to 180°. Second, the frequency response function has a 40-dB/decade rolloff, i.e., the higher the frequency, the smaller the frequency response function. Therefore, only the first harmonic of the output spectrum stays visible, because its frequency is close to the natural frequency; the higher harmonics of the output spectrum are strongly attenuated. That is, the circle markers in Fig. 14.24 lie significantly below the plus markers except at the first harmonic. Consequently, the time-domain representation of x(t) is primarily dominated by the first harmonic, which appears as a sinusoid out of phase with the input force f(t) because of the phase reversal above the natural frequency.

Consider the case T = 0.0002 s, which corresponds to a fundamental frequency ω0 = 5000 Hz; see Fig. 14.25. In this case, the fundamental frequency is far greater than the natural frequency ωn = 400 Hz. As a result, all the input force spectral points appear well above the natural frequency. The results are very similar to those in Fig. 14.24, except that now even the first harmonic of the output spectrum is significantly attenuated; all the circle markers in Fig. 14.25 lie significantly below the plus markers. Although the time-domain representation of x(t) is still dominated by the first harmonic, the response is very small because it is heavily attenuated by the roll-off portion of the frequency response function.

14.2 Response under Non-Periodic Excitations

Let us now consider arbitrary excitations to a linear dynamical system. Again, let us use
the spring-mass-damper system in Fig. 14.1 as an example. Having an arbitrary excitation
means that the excitation in Fig. 14.1 is no longer periodic. In Section 14.1, we were able to
extend the framework of frequency response functions to accommodate periodic excitations
and find the system’s response. It is certainly desirable to be able to use the framework of
frequency response functions for non-periodic excitations.

The goal of this section is to extend the framework of frequency response functions to
accommodate excitations that are non-periodic. I will first explain the mathematical tool
that allows us to do so. The mathematical tool is known as Fourier transforms. Next,
I will explain how one can obtain the response using the frequency response functions and
Fourier transforms. Finally, I will briefly introduce some practical applications of Fourier
analysis.

Figure 14.26: Continuous spectrum via a limiting process

14.2.1 Fourier Transforms

Figure 14.26 shows a schematic diagram illustrating the basic concept of extending the
framework to accommodate non-periodic excitations. When the excitation f (t) is periodic
with period T , one can use a complex spectrum to describe the periodic signal. In particular,
the spectrum consists of infinitely many spectral lines with a resolution of ω0 = 2π/T in the
frequency domain. Since the spectrum consists of individual spectral lines, the spectrum is called a
discrete spectrum. When the signal f (t) is non-periodic, what would be the counterpart
of the discrete spectrum? How would the frequency response function come into the picture
in predicting the response?

One way to visualize the result is to treat a non-periodic signal as a periodic signal with
an infinite period, i.e., T → ∞, as shown in Fig. 14.27. When the period T increases, the
fundamental frequency ω0 (and thus the resolution of the spectral lines) decreases. In the
limit when T → ∞, the spectral lines will cram together forming a continuous spectrum.

Mathematically, the limiting process is equivalent to a transition from a Fourier series to
a Fourier transform, as shown in Fig. 14.28. When f (t) is periodic, f (t) can be represented
in a Fourier series with cn playing the role of a discrete spectrum. Specifically, the spectrum
cn can be obtained from the time domain signal f (t) via an integral with respect to time
parameter t. This means a switch from a time-domain representation to a frequency-domain
representation.

Figure 14.27: Transition from a discrete spectrum to a continuous spectrum

Figure 14.28: Transition from Fourier series to Fourier transform

Moreover, the index n indicates a measure in the frequency domain (i.e., the
n-th harmonic occurring at frequency nω0 ). Conversely, the time domain signal f (t) can
be reconstructed from the spectrum cn via the Fourier series. Note that the reconstruction
occurs throughout the entire frequency domain by summing over all the frequency indices
n. This means a switch from the frequency-domain representation back to the time-domain
representation.

As T → ∞, f (t) is no longer periodic and the Fourier series evolves into an integral form
with F (ω) playing the role of a continuous spectrum. In the limiting process, the spectrum
F (ω) can be obtained from the time domain signal f (t) via an integral with respect to time
parameter t as follows

    F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt    (14.81)

Note the spectrum F (ω) now has a frequency parameter ω as opposed to a frequency index
n in the periodic case.

As T → ∞, the summation in the Fourier series evolves into an integral. Therefore,
the time domain signal f (t) can be reconstructed from the spectrum F (ω) via the following
integral

    f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{jωt} dω    (14.82)

Figure 14.29: Fourier transform pair
The reconstruction takes place in the frequency domain by integrating through the frequency
parameter ω, as opposed to summing up the frequency index n in the case of Fourier series.

The expression in (14.81) is called a Fourier transform, and its physical meaning
is a continuous spectrum of f (t). The expression in (14.82) is called an inverse Fourier
transform, and its physical meaning is the reconstruction of the time-domain signal f (t) from its
frequency spectrum F (ω). The Fourier transform in (14.81) and the inverse Fourier transform
in (14.82) are called a Fourier transform pair; see Fig. 14.29.

There are several issues to note about Fourier transforms. The first issue is existence
of Fourier transforms. As defined in (14.81), the Fourier transform of a function f (t) may
not exist due to the fact that the integration limits are ∞ and −∞. For example, f (t) = 1
does not have a Fourier transform, because the integral ∫_{−∞}^{∞} e^{−jωt} dt does not converge.

The second issue is the condition of existence. In other words, what condition needs to
be imposed on f (t) in order for the Fourier transform in (14.81) to exist? The answer is the
following. If the integral

    ∫_{−∞}^{∞} |f(t)| dt    exists and is finite,    (14.83)

the Fourier transform F (ω) defined in (14.81) will exist. Physically, the condition in (14.83)
implies that f (t) must vanish at a fast enough rate as t → ∞ and t → −∞. Therefore, a

Figure 14.30: f (t) being a square impulse

function such as f (t) = 1 will not satisfy (14.83) because it does not vanish as t → ∞ and
t → −∞.

The third issue is that F (ω) is a complex function, because the integrand
in (14.81) is complex. Therefore, F (ω) will have a magnitude |F (ω)| and a phase ∠F (ω).

Finally, the Fourier transform F (ω), as defined in (14.81), can be obtained analytically, nu-
merically, or experimentally. There are only a handful of functions whose Fourier transforms
can be derived exactly. These basic Fourier transforms, however, give wonderful insight into
Fourier spectra and provide a solid foundation for advanced theories that facilitate calcula-
tion of Fourier transforms. I will demonstrate some of those in the following examples.

Example 14.9 Consider the square impulse shown in Fig. 14.30. Mathematically, the
square impulse is defined as
    f(t) = { 1, 0 < t < T ; 0, otherwise }    (14.84)

where T is the duration of the pulse. (Note that T in (14.84) is not the period. The
function is not periodic and there is no period.) Derive the Fourier transform F (ω).

With the definition in (14.81),

    F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt = ∫_{0}^{T} e^{−jωt} dt
         = [e^{−jωt}/(−jω)]_{0}^{T} = (1/(−jω)) (e^{−jωT} − 1)    (14.85)
According to Euler's formula (12.21),

    e^{−jωT} = cos ωT − j sin ωT    (14.86)

Substitution of (14.86) into (14.85) leads to

    F(ω) = (1/ω) [sin ωT + j (cos ωT − 1)]    (14.87)

which is the Fourier transform of the pulse defined in (14.84).

As a result, the magnitude of F (ω) can be found as

    |F(ω)| = (1/ω) √( sin² ωT + (cos ωT − 1)² )
           = (1/ω) √( sin² ωT + cos² ωT − 2 cos ωT + 1 )
           = (1/ω) √( 2 (1 − cos ωT) ) = (2/ω) sin(ωT/2)    (14.88)

where the half-angle formula cos ωT = 1 − 2 sin²(ωT/2) has been used for the last equality.

Figure 14.31 plots |F (ω)| derived in (14.88) as a function of ω. As one can see, |F (ω)|
consists of many lobes. The central lobe around ω = 0 has the largest height

    lim_{ω→0} |F(ω)| = T    (14.89)

while the lobes farther away from ω = 0 are attenuated by the factor 1/ω in (14.88).
Moreover, |F (ω)| = 0 when

    ω = 2nπ/T ,    n = 1, 2, 3, . . .    (14.90)

These particular features in Fig. 14.31 have a lot of applications. One of them is
vibration testing. Before a vibration test, one must decide how high a frequency the test
should go.

Figure 14.31: Fourier transform of a square impulse; magnitude only

Then one often uses a hammer to excite a test structure to the desired frequency
range by adjusting the duration T . According to Fig. 14.31, the duration T must be short
enough so that the first zero at 2π/T exceeds the frequency range of the test. For example, if
the duration T = 0.5 ms, the frequency can go up to

    ω = 2π/T = 2π/(0.5 × 10⁻³) = 4π × 10³ rad/s, i.e., 2000 Hz    (14.91)
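These numbers are easy to check numerically. A minimal sketch (not from the text; the grid and test frequency are illustrative choices) that compares a trapezoidal-rule evaluation of (14.81) for the square pulse against the closed-form magnitude (14.88) and the zeros (14.90):

```python
import numpy as np

T = 0.5e-3                       # pulse duration in seconds (example value)
t = np.linspace(0.0, T, 20001)   # the pulse is nonzero only on 0 < t < T

def F_numeric(w):
    """Trapezoidal approximation of F(w) = integral of f(t) e^{-jwt} dt."""
    g = np.exp(-1j * w * t)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))

def F_magnitude_closed(w):
    """Closed-form |F(w)| = (2/w) sin(wT/2) from (14.88)."""
    return np.abs(2.0 / w * np.sin(w * T / 2.0))

w = 2 * np.pi * 1234.0           # an arbitrary test frequency (rad/s)
assert abs(abs(F_numeric(w)) - F_magnitude_closed(w)) < 1e-6

# |F(w)| vanishes at w = 2*n*pi/T, per (14.90)
for n in (1, 2, 3):
    assert abs(F_numeric(2 * n * np.pi / T)) < 1e-6
```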

Figure 14.32: A band-limited cosine function

Figure 14.33: Spectrum of the band-limited cosine function in Fig. 14.32

Example 14.10 Consider a band-limited cosine function f (t) given by


    f(t) = { cos(ω0 t), |t| ≤ T0 ; 0, otherwise }    (14.92)

Figure 14.34: A square impulse of width ε and height 1/ε

where ω0 has nothing to do with 2π/T0 . Figure 14.32 illustrates a conceptual drawing of the
band-limited cosine function. Basically, it is a cosine function within the band −T0 ≤ t ≤ T0 ,
and it is zero outside the band. Its Fourier transform is
    F(ω) = ∫_{−∞}^{∞} f(t) e^{−jωt} dt = ∫_{−T0}^{T0} cos(ω0 t) (cos ωt − j sin ωt) dt

         = 2 ∫_{0}^{T0} cos(ω0 t) cos(ωt) dt = ∫_{0}^{T0} [cos (ω + ω0) t + cos (ω − ω0) t] dt

         = [sin (ω + ω0) t / (ω + ω0)]_{0}^{T0} + [sin (ω − ω0) t / (ω − ω0)]_{0}^{T0}
         = sin (ω + ω0) T0 / (ω + ω0) + sin (ω − ω0) T0 / (ω − ω0)    (14.93)

where the j sin ωt term integrates to zero because its integrand is odd over the symmetric interval.
Figure 14.33 illustrates the Fourier transform in (14.93). Basically, it is the function sin(ωT0)/ω
shifted to ω = ω0 and ω = −ω0 first, and then summed up together.
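The closed form (14.93) can likewise be spot-checked by numerical integration; the values of ω0, T0, and the test frequencies below are illustrative choices, not from the text:

```python
import numpy as np

w0 = 2 * np.pi * 100.0           # carrier frequency (rad/s), example value
T0 = 0.05                        # half-width of the time window (s), example value
t = np.linspace(-T0, T0, 40001)

def F_numeric(w):
    """Trapezoidal approximation of F(w) for the band-limited cosine."""
    g = np.cos(w0 * t) * np.exp(-1j * w * t)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))

def F_closed(w):
    """Closed form (14.93); valid for w != +/- w0."""
    return (np.sin((w + w0) * T0) / (w + w0)
            + np.sin((w - w0) * T0) / (w - w0))

for w in (2 * np.pi * 60.0, 2 * np.pi * 140.0):
    assert abs(F_numeric(w) - F_closed(w)) < 1e-6
```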

Example 14.11 This example is to find the Fourier transform of a Delta function δ(t). As
shown in Fig. 2.7, a Delta function is a limiting case of a square pulse with width ε and
height 1/ε. In other words,

    δ(t) = lim_{ε→0} f_ε(t)    (14.94)

where f_ε(t) is shown in Fig. 14.34 and is expressed mathematically as

    f_ε(t) = { 1/ε, 0 < t < ε ; 0, otherwise }    (14.95)

One can easily find that the impulse in (14.95) is very similar to that in (14.84). If one
amplifies the impulse in (14.84) by a factor of 1/ε and lets T = ε, one will recover the impulse
in (14.95). According to (14.88),

    F{f_ε(t)} = (1/ε) F(ω)|_{T=ε} = (2/(εω)) sin(ωε/2)    (14.96)

where F{f_ε(t)} is the Fourier transform of f_ε(t). As ε → 0,

    F{δ(t)} = lim_{ε→0} F{f_ε(t)} = lim_{ε→0} (2/(εω)) sin(ωε/2) = 1    (14.97)
Therefore, the Fourier transform of the Delta function is 1.
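The limit in (14.97) can be observed numerically; the test frequency below is an arbitrary choice:

```python
import numpy as np

w = 2 * np.pi * 50.0             # an arbitrary test frequency (rad/s)

def F_eps(eps):
    """F{f_eps}(w) = (2/(eps*w)) sin(w*eps/2) from (14.96)."""
    return 2.0 / (eps * w) * np.sin(w * eps / 2.0)

# As eps shrinks, the spectrum of the narrowing pulse tends to 1
values = [F_eps(10.0 ** (-n)) for n in (2, 4, 6)]
assert abs(values[-1] - 1.0) < 1e-8
assert abs(values[-1] - 1.0) < abs(values[0] - 1.0)   # error shrinks with eps
```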

Obtaining Fourier Transforms Numerically

Since Fourier transforms are very useful in many disciplines but very seldom can they be
derived exactly, they are often calculated numerically using a process called discrete Fourier
transforms. One commonly used method to speed up discrete Fourier transforms is called
fast Fourier transforms. Since discrete Fourier transforms and fast Fourier transforms
are numerical methods to evaluate a Fourier transform, these numerical methods will only
approximate the Fourier transform within a certain frequency range.
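As a sketch of the idea (the scaling convention and parameter values here are my own, not the text's): sampling f(t) with spacing Δt and multiplying the discrete Fourier transform by Δt approximates the continuous transform (14.81) at the DFT bin frequencies, here applied to the square pulse of Example 14.9:

```python
import numpy as np

dt = 1e-6                        # sample spacing (s)
N = 4096                         # number of samples (record length N*dt >> T)
n_pulse = 500
T = n_pulse * dt                 # pulse duration, 0.5 ms
f = np.zeros(N)
f[:n_pulse] = 1.0                # sampled square pulse of Example 14.9

F_dft = np.fft.fft(f) * dt               # scaling by dt approximates (14.81)
w = 2 * np.pi * np.fft.fftfreq(N, dt)    # angular frequency of each DFT bin

# The DC bin reproduces the pulse area, i.e., the limit (14.89): F(0) = T
assert abs(F_dft[0] - T) < 1e-12

# A low-frequency bin matches the closed-form magnitude (14.88) to within ~2%
k = 3
closed = abs(2.0 / w[k] * np.sin(w[k] * T / 2.0))
assert abs(abs(F_dft[k]) - closed) < 0.02 * closed
```

Note that the agreement degrades near the Nyquist frequency π/Δt, which is the sense in which a discrete Fourier transform only approximates (14.81) within a certain frequency range.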

Obtaining Fourier Transforms Experimentally

In experiments, one can use a piece of equipment called a spectrum analyzer to implement
discrete Fourier transforms or fast Fourier transforms. Figure 14.35 describes the basic
setup of a spectrum analyzer. In experiments, measurements are in the form of analog data.
Therefore, they need to be digitized via an analog-to-digital (A/D) converter. After the data
are digitized, they are ready to be processed via a fast Fourier transform. At this stage, a very
important parameter one needs to define is the cut-off frequency. In practical applications,
one is often interested in a system’s response below certain frequencies. For example,
human hearing ranges from 10 Hz to roughly 12 kHz. In this case, anything below 12 kHz is
of interest, and a user can choose 12 kHz as the cut-off frequency. Since the capacity of the
spectrum analyzer is fixed, a higher cut-off frequency often means a poorer resolution in the
frequency domain. After the cut-off frequency is chosen, the fast Fourier transform will be
applied to the digitized data to obtain a digitized version of the Fourier transform.

Figure 14.35: Setup of a spectrum analyzer

Figure 14.36: Finding response using Fourier transforms



14.2.2 Finding Response using Fourier Transforms

The framework depicted in Fig. 14.21 can be adopted and extended to Fig. 14.36 when the
input excitation f (t) is no longer periodic. Basically, a Fourier transform is applied to f (t)
to obtain the input spectrum F (ω). It is then multiplied by the frequency response function
G(ω) to obtain the output response spectrum X(ω) ≡ F (ω)G(ω) . Then an inverse Fourier
transform is performed on X(ω) to obtain the time-domain response x(t). Of course, the
Fourier transform and its inverse are often difficult to calculate analytically, so this approach
to finding the response x(t) is not always practical. Nevertheless, it is tremendously valuable
to be able to find the output spectrum X(ω), which often gives very good physical insight
into the response x(t) in the time domain.
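Numerically, this pipeline maps directly onto a few lines of code. A minimal sketch, assuming a force-to-displacement frequency response G(ω) = 1/(k − mω² + jcω) (consistent with a spring-mass-damper equation mẍ + cẋ + kx = f) and illustrative parameter values:

```python
import numpy as np

m, c, k = 1.0, 150.0, 1.0e4      # illustrative mass, damping, stiffness (SI units)
dt, N = 1e-3, 4096
f = np.zeros(N)
f[:500] = 1.0                    # a 0.5 s force pulse as the non-periodic input

F = np.fft.fft(f)                            # input spectrum F(w)
w = 2 * np.pi * np.fft.fftfreq(N, dt)
G = 1.0 / (k - m * w**2 + 1j * c * w)        # assumed force-to-displacement FRF
x = np.real(np.fft.ifft(G * F))              # X(w) = G(w) F(w), then invert

# Well after the initial transient the response approaches the static
# deflection f/k, and long after the pulse it decays back to zero.
assert abs(x[400] - 1.0 / k) < 0.05 / k
assert abs(x[-1]) < 1e-6
```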

Figure 14.37: Spring-mass-damper system with a base motion

Figure 14.38: Frequency response function of the system shown in Fig. 14.37

Example 14.12 Consider the spring-mass-damper system subjected to a base motion as
shown in Fig. 14.37. The mass is m, the spring has stiffness k, and the damper has viscous
damping coefficient c. The base displacement y(t) is given by the band-limited cosine function
shown in Fig. 14.32. The motion of the mass relative to the base is x(t). What would the
relative motion x(t) look like in the time domain?

First of all, the equation of motion of the system satisfies

    mẍ + cẋ + kx = −mÿ    (14.98)



Substituting

    x(t) = X0 e^{jωt} ,    y(t) = Y0 e^{jωt}    (14.99)

into (14.98), one obtains the frequency response function

    G(ω) ≡ X0/Y0 = mω² / (k − mω² + jcω)    (14.100)

Figure 14.38 illustrates the magnitude of the frequency response function G(ω). Basically,
when ω ≪ ωn , G(ω) is dominated by the numerator mω². When ω ≈ ωn , the system presents
a resonance. When ω ≫ ωn , G(ω) approaches −1.

Note that the input spectrum Y (ω) in Fig. 14.37 has a major peak at ω = ω0 and
dies out very quickly when ω is far away from ω0 . By observing the input spectrum Y (ω) in
Fig. 14.37 and the frequency response function G(ω) in Fig. 14.38, one can pretty much
guess what the response x(t) will look like in the time domain. If ω0 ≈ ωn , the two peaks in
Y (ω) and G(ω) coincide, and the output spectrum will have a large peak around ω0 ≈ ωn .
Therefore, x(t) will have a large harmonic response with a frequency around ωn . If ω0 ≫ ωn ,
most of the input spectrum Y (ω) will appear in the flat range of G(ω). Therefore, x(t) will
be just like −y(t), because X(ω) is very similar to −Y (ω). If ω0 ≪ ωn , most of the input
spectrum Y (ω) will be attenuated by the mω² numerator of G(ω). In this case, the response
x(t) will be small, because X(ω) is small.
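These three regimes can be confirmed numerically from (14.100); the values of m, k, and c below are illustrative:

```python
import numpy as np

m, k, c = 1.0, 1.0e4, 4.0        # illustrative values; wn = sqrt(k/m) = 100 rad/s
wn = np.sqrt(k / m)

def G(w):
    """Relative-motion FRF (14.100): X0/Y0 = m w^2 / (k - m w^2 + j c w)."""
    return m * w**2 / (k - m * w**2 + 1j * c * w)

assert abs(G(0.01 * wn)) < 1e-3          # w << wn: response nearly vanishes
assert abs(G(wn)) > 10.0                 # w = wn: resonance peak
assert abs(G(100 * wn) - (-1.0)) < 1e-3  # w >> wn: G approaches -1
```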

Example 14.13 Prove that the frequency response function G(ω) is the Fourier transform
of the impulse response function h(t), i.e.,

    F{h(t)} = G(ω)    (14.101)

The proof is very straightforward. According to Fig. 14.36, the output displacement
spectrum X(ω) is related to input force spectrum F (ω) via

X(ω) = G(ω)F (ω) (14.102)

When f (t) = δ(t), F (ω) = 1 according to (14.97) and X(ω) = F {h(t)} by definition.
Therefore, (14.102) is reduced to (14.101).

Figure 14.39: Revisit of the experimental device to test ride comfort

As an example, let us revisit Example 13.3, i.e., the experimental device to test ride
comfort of a suspension system reproduced here in Fig. 14.39. The equation governing the
displacement at point A is
τ ẏ + y = d cos ωt (14.103)

where τ is the time constant given by the damping coefficient b and stiffness coefficient k as

    τ ≡ b/k    (14.104)

The corresponding frequency response function is

    G(ω) = 1/(1 + jωτ)    (14.105)

The impulse response function h(t) satisfies

    τ ḣ + h = δ(t),    h(0⁻) = 0    (14.106)

The same equation and initial condition have been described in (2.26) and (2.27). Moreover,
the impulse response function has been found in (2.32), i.e.,

    h(t) = { (1/τ) e^{−t/τ}, t > 0 ; 0, t < 0 }    (14.107)

The Fourier transform of h(t) is then

    F{h(t)} = ∫_{−∞}^{∞} e^{−jωt} h(t) dt = (1/τ) ∫_{0}^{∞} e^{−(1/τ + jω)t} dt

            = −(1/τ) [e^{−(1/τ + jω)t} / (1/τ + jω)]_{0}^{∞} = 1/(1 + jωτ)    (14.108)

which is the same as the frequency response function G(ω) shown in (14.105).
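The identity (14.108) can be spot-checked numerically; the time constant and test frequencies below are illustrative:

```python
import numpy as np

tau = 0.01                        # illustrative time constant (s)
dt = 1e-5
t = np.arange(0.0, 10 * tau, dt)  # h(t) has decayed to ~e^-10 by the end
h = np.exp(-t / tau) / tau        # impulse response (14.107) for t > 0

def H_numeric(w):
    """Trapezoidal approximation of the Fourier transform of h(t)."""
    g = h * np.exp(-1j * w * t)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))

for w in (0.0, 50.0, 500.0):
    G = 1.0 / (1.0 + 1j * w * tau)           # frequency response (14.105)
    assert abs(H_numeric(w) - G) < 1e-3
```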

14.2.3 Applications of Fourier Analysis

Fourier analysis finds applications in many engineering fields. Some of these applications
are briefly explained as follows.

System Identification

A goal of system identification is to measure the frequency response function G(ω), from which
system characteristics (e.g., natural frequencies) can be experimentally extracted. One such
system identification scheme is modal testing. Modal testing is an experimental method to
determine modal parameters such as natural frequencies, mode shapes, and modal damping.
Basically, natural frequencies and modal damping will determine the imaginary part and real
part of a pole, respectively. The mode shapes will determine the eigenfunctions associated
with the pole. The most common practice in industry is to use finite element codes to
predict natural frequencies and mode shapes of a structure in the design stage and to check
the designed results through modal testing.

In general, modal testing consists of two parts: obtaining frequency response func-
tions through experiments and extracting modal parameters from the frequency response
functions.

There are many methods of getting frequency response functions. For example, steady
sine sweep uses the definition of frequency response functions. The system is excited by a
sinusoidal force of frequency ω. When the system is in steady state, the output/input
ratio is the frequency response function at the frequency ω. The most popular method

Figure 14.40: Basic setup for modal testing using Fourier Transform

nowadays is the fast Fourier transform method. The idea is to find the Fourier transform
F (ω) of the input force f (t) and the Fourier transform X(ω) of the output displacement
x(t). Then the frequency response function will be

    G(ω) = X(ω)/F(ω)    (14.109)

The general setup for such an experiment is shown in Fig. 14.40.

In such experiments, the test object is excited by a hammer or a shaker, and the input
force is measured by a load cell. The output response is usually in the form of displacement
or acceleration, which are measured by displacement probes and accelerometers, respectively.
Then both the force and displacement are fed into a spectrum analyzer, where the Fourier
transforms of both signals are calculated to give the frequency response function. The
frequency response function is then processed by modal analysis software in the computer to
extract the modal parameters.

Figure 14.41 shows measured frequency response function G(ω) of a disk drive spindle
motor carrying multiple disks. Each trace in Fig. 14.41 shows G(ω) measured at a given
spin speed. For each G(ω), one can pretty much identify a system pole associated with
each resonance peak. By varying the speed, one can figure out how system poles vary with
respect to the spin speed. At the spin speed at which you want to operate your disk drives,
you want to make sure that there are no major resonances, so that the vibration response
of the disk drives remains low.

Figure 14.41: Frequency response function of a disk drive spindle motor at various speeds

The measured frequency response function G(ω) can also be checked against the the-
oretically predicted G(ω) to find out how closely a prototype mimics the design. No matter
how well one makes a prototype or a product, there will be uncertainties or things one can-
not control that well. Therefore, the performance of the prototype will be different from
that of the design. The comparison will help one understand how far off the performance of
the real product is. Figure 14.42 shows the comparison of theoretical and experimental frequency re-
sponse functions of a hard disk drive spindle motor with fluid bearings. One can see that all
the resonance peaks have been predicted, but the magnitude is about 50% off.

Functional/Operational Tests

To meet design specifications, many mechanical and electrical components must pass an
operational or functional test. In the test, the components must survive excitations with
specific spectra and remain functional.

Figure 14.42: Comparison of predicted and measured G(ω)

Figure 14.43: Functional test of a circuit breaker

For example, a circuit breaker of a power plant
may need to survive an earthquake of a certain magnitude, and a piece of avionic equipment
in an airplane must survive certain flight histories. So engineers will mount the component
(e.g., a circuit breaker) on a shaker, and feed the shaker with excitations whose spectrum
mimics a real earthquake. If the component remains functional during and after the test,
the component passes the operational test, indicating that it has met certain reliability
requirements.

Figure 14.43 shows the procedure in an operational test of a circuit breaker for a nu-
clear power plant. First of all, earthquake data are collected to find out the major frequency
components in a real earthquake. Then a desired excitation spectrum, under which the
circuit breaker must survive and remain functional, is created to mimic a potential earth-
quake. Then a vibration controller will convert the excitation spectrum into time-domain
excitations, which are delivered to the circuit breaker via a shaker during the functional test.
Similar tests are conducted for aircraft components, disk drives, and so on.
Chapter 15

The Method of Impedance

The method of transfer functions described in Chapter 12 is a very powerful tool to ana-
lyze forced response. The transfer functions are primarily characterized via system poles.
Therefore, it is desirable to have various methods to locate the system poles so that one
can characterize the system via various parameters, such as time constant and natural fre-
quencies. One such method is to use impedance. The method of impedance originates from
circuit theory, and it can be easily generalized to any system that has a linear graph model.

In this chapter, I will first introduce the concept of driving-point impedance and
explain its definition. Next, I will explain how one obtains impedance from individual one-
port elements or two-port elements in a linear graph model. Finally, I will discuss how
impedance is related to transfer functions.

15.1 Impedance and Admittance

Figure 15.1 shows a system driven by a source. The system can be a mechanical, electrical,
or fluid system. The source could be a through-variable source or an across-variable source.
In the context of transfer functions, if the input source provides a prescribed across-variable
Vin (s)e^{st} , there will be a corresponding through-variable Fin (s)e^{st} , and vice versa. The cor-


Figure 15.1: Driving-point impedance of a system

responding Fin (s), however, will depend on the makeup of the system the source is driving.
Therefore, by studying how Vin (s) varies with respect to Fin (s), one can get very useful clues
to characterize the system.

Let me use an electrical circuit driven by a 9-volt battery as an example. The voltage
provided by the source is always the same. The current out of the source, however, will
depend on what is in the circuit. If the circuit has only a resistor, one will find that
the current and voltage are always in phase. If the circuit has a capacitor or inductor, the
phase of the voltage will be 90◦ behind or ahead of the phase of the current, respectively.
Therefore, the phase difference between the voltage and current can help us identify the
components and characteristics of the circuit driven by the 9-volt battery.

With the understanding above, one can define a driving-point impedance Z(s) as

    Z(s) ≡ Vin(s)/Fin(s)    (15.1)

and a driving-point admittance as

    Y(s) ≡ Fin(s)/Vin(s) = 1/Z(s)    (15.2)
There are two things worth noting here. First, impedance is a quantity that you can probe
from outside right at the source. If the system is a circuit, one can simply measure the voltage
and current of the source. There is no need to probe into the circuit. Second, impedance
can be measured, for electrical systems, through a device called an impedance analyzer.

Figure 15.2: Driving-point impedance of the flagship example

Example 15.1 Consider the flagship example shown in Fig. 15.2. What are the driving-point
impedance and admittance of this system?

For the flagship example, the source is a through-variable source with prescribed Fs (t).
The corresponding across-variable at the source is v2 (t). According to (12.76), the differential
equation governing v2 (t) is

    m v̈2 + (mk/B) v̇2 + k v2 = (k/B) Fs(t) + Ḟs(t)    (15.3)
By substituting Fs (t) = Fin (s)e^{st} and v2 (t) = Vin (s)e^{st} into (15.3), one obtains the
following driving-point impedance

    Z(s) = Vin(s)/Fin(s) = (s + k/B) / (ms² + (mk/B)s + k) = Hv2(s)    (15.4)

which is the same as the transfer function Hv2 (s) from Fs (t) to v2 (t) derived in (12.77). The
admittance is the reciprocal of Z(s), i.e.,

    Y(s) = Fin(s)/Vin(s) = (ms² + (mk/B)s + k) / (s + k/B)    (15.5)

Note that the impedance shown in (15.4) is a result of pole-zero cancellation as explained
in (12.150).

15.2 Impedance Derived from Linear Graphs

Impedance can be quickly derived for any system that has a linear graph model. The basic
concept is to find the impedance of each one-port element. Then one can connect all one-port
elements through a network of series and parallel connections to derive the impedance of the
entire system. The following subsections describe the procedure in detail.

15.2.1 Impedance of One-Port Elements

In linear graph models, there are three types of one-port elements: capacitors, resistors, and
inductors. For a capacitor,

    C dv/dt = f    (15.6)

where v is the across-variable and f is the through-variable. By substituting f = Fin(s) e^{st}
and v = Vin (s)e^{st} into (15.6), one obtains the following driving-point impedance Z(s) and
admittance Y (s)

    Z(s) = Vin(s)/Fin(s) = 1/(Cs) ,    Y(s) = Fin(s)/Vin(s) = Cs    (15.7)

For a resistor,

    v = Rf    (15.8)

where v is the across-variable and f is the through-variable. By substituting f = Fin(s) e^{st}
and v = Vin (s)e^{st} into (15.8), one obtains the following driving-point impedance Z(s) and
admittance Y (s)

    Z(s) = Vin(s)/Fin(s) = R ,    Y(s) = Fin(s)/Vin(s) = 1/R    (15.9)

For an inductor,

    v = L df/dt    (15.10)

where v is the across-variable and f is the through-variable. By substituting f = Fin(s) e^{st}
and v = Vin (s)e^{st} into (15.10), one obtains the following driving-point impedance Z(s) and
admittance Y (s)

    Z(s) = Vin(s)/Fin(s) = Ls ,    Y(s) = Fin(s)/Vin(s) = 1/(Ls)    (15.11)

Figure 15.3 tabulates impedance of one-port elements that can be modeled via linear
graphs in various domains.

Figure 15.3: Impedance table for one-port elements

15.2.2 Elements in Series or Parallel Connections

Figure 15.4 shows a system consisting of three elements with impedance Z1 (s), Z2 (s), and
Z3 (s) in a series connection. The goal here is to derive the driving-point impedance Z(s) in
terms of the impedance of the elements.

Since the three elements are in a series connection, the node equations require that all
three elements have the same through-variable. Moreover, the loop equation requires that
the driving across-variable Vin (s) satisfy

    Vin (s) = V1 (s) + V2 (s) + V3 (s)    (15.12)

where V1 (s), V2 (s), and V3 (s) are across-variable drops of the three elements, respectively.
Through use of the definition of impedance, (15.12) leads to

Vin (s) = Z1 (s)Fin (s) + Z2 (s)Fin (s) + Z3 (s)Fin (s) (15.13)

Therefore, the driving-point impedance Z(s) is

    Z(s) = Vin(s)/Fin(s) = Z1(s) + Z2(s) + Z3(s)    (15.14)

Hence the driving-point admittance Y (s) satisfies

    1/Y(s) = 1/Y1(s) + 1/Y2(s) + 1/Y3(s)    (15.15)

Note that the results in (15.14) and (15.15) can be generalized to an arbitrary number of
elements.
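The two reduction rules are easy to encode as small helper functions; a minimal sketch, evaluated numerically at a given s (the electrical element values and the hand-derived cross-check are illustrative, not from the text):

```python
def series(*Z):
    """Driving-point impedance of elements in series: impedances add, per (15.14)."""
    return sum(Z)

def parallel(*Z):
    """Driving-point impedance of elements in parallel: admittances add, per (15.19)."""
    return 1.0 / sum(1.0 / z for z in Z)

# Electrical example: R in series with the parallel pair (L, C), evaluated at s = jw
R, L, C = 100.0, 0.5, 2e-6
s = 1j * 300.0
Z = series(R, parallel(L * s, 1.0 / (C * s)))

# Cross-check against the hand-derived expression R + Ls/(LCs^2 + 1)
assert abs(Z - (R + L * s / (L * C * s**2 + 1))) < 1e-9
```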

Figure 15.4: Elements in series connections

Figure 15.5: Elements in parallel connections

Figure 15.5 shows a system consisting of three elements with impedance Z1 (s), Z2 (s),
and Z3 (s) in a parallel connection. The goal here is to derive the driving-point impedance
Z(s) in terms of the impedance of the elements.

Since the three elements are in a parallel connection, the loop equations require that
all three elements have the same across-variable drop. Moreover, the node equation requires
that the driving through-variable Fin (s) satisfy

Fin (s) = F1 (s) + F2 (s) + F3 (s) (15.16)

where F1 (s), F2 (s), and F3 (s) are through-variable flows into the three elements, respectively.
Through use of the definition of admittance, (15.16) leads to

Fin (s) = Y1 (s)Vin (s) + Y2 (s)Vin (s) + Y3 (s)Vin (s) (15.17)

Therefore, the driving-point admittance Y (s) is

    Y(s) = Fin(s)/Vin(s) = Y1(s) + Y2(s) + Y3(s)    (15.18)

Hence the driving-point impedance Z(s) satisfies

    1/Z(s) = 1/Z1(s) + 1/Z2(s) + 1/Z3(s)    (15.19)

Figure 15.6: Driving-point impedance of the flagship example; revisited

Figure 15.7: Free-body diagram and linear graph of the flagship example

Example 15.2 In this example, let us revisit the flagship example worked out in
Example 15.1. The system is shown again in Fig. 15.6 for reference. The goal is to derive
the driving-point impedance using one-port elements and a network of series and parallel
connections.

Figure 15.7 shows a free-body diagram of the mass and the linear graph model of the
system labeled with the impedance of each element. Note that the linear graph model has been
constructed before (cf. Fig. 7.67). The linear graph model in Fig. 15.7 is derived from
Fig. 7.67 with each one-port element labeled by its impedance. Since the spring and the
damper are in a series connection, the combined impedance is s/k + 1/B, as shown in Fig. 15.8.
Now the spring-damper equivalent s/k + 1/B is in a parallel combination with the mass; see
Fig. 15.9. Therefore, the driving-point impedance satisfies

    1/Z(s) = 1/(1/(ms)) + 1/(s/k + 1/B) = ms + Bk/(Bs + k) = (mBs² + mks + Bk)/(Bs + k)    (15.20)

Figure 15.8: Linear graph after the first reduction

Figure 15.9: Linear graph after the second reduction

Figure 15.10: A fluid system

Hence

    Z(s) = (Bs + k)/(mBs² + mks + Bk) = (s + k/B)/(ms² + (mk/B)s + k)    (15.21)

which is the same as (15.4).
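The reduction can be verified numerically; a sketch with illustrative values of m, k, and B, checking that the series/parallel combination (15.20) reproduces the closed form (15.21):

```python
m, k, B = 2.0, 800.0, 10.0        # illustrative mass, stiffness, damping values

def Z_combined(s):
    """Mass 1/(ms) in parallel with the series spring-damper s/k + 1/B, per (15.20)."""
    Z_spring_damper = s / k + 1.0 / B
    return 1.0 / (m * s + 1.0 / Z_spring_damper)

def Z_closed(s):
    """Closed-form driving-point impedance (15.21)."""
    return (s + k / B) / (m * s**2 + (m * k / B) * s + k)

for s in (1j * 5.0, 2.0 + 1j * 30.0):
    assert abs(Z_combined(s) - Z_closed(s)) < 1e-12
```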

Example 15.3 Consider the fluid system shown in Fig. 15.10. Figure 15.11 shows the
linear graph model corresponding to the hydraulic system. What is the driving-point
impedance?

Since the capacitor C and resistor R2 are in parallel, they can be combined first as an
equivalent impedance Z1 (s); see Fig. 15.12, where

    1/Z1(s) = 1/(1/(Cs)) + 1/R2 = Cs + 1/R2 = (CR2 s + 1)/R2    (15.22)

Figure 15.11: Linear graph of the fluid system

Figure 15.12: Linear graph after the first reduction

or

    Z1(s) = R2/(CR2 s + 1)    (15.23)

Now the inductor I, resistor R1 , and the equivalent impedance Z1 (s) are in a series connec-
tion. Therefore, the driving-point impedance is

    Z(s) = sI + R1 + Z1(s) = [(sI + R1)(CR2 s + 1) + R2] / (CR2 s + 1)    (15.24)

15.2.3 Impedance of Two-Port Elements

When two-port elements appear in a linear graph, they affect the impedance in the
following ways.

For the transformer model shown in Fig. 15.13, the variables at the input and output
ports are related via
v1 = (TF) v2 (15.25)
and
f1 = (1/TF) f2    (15.26)
where TF is the transformer constant. Figure 15.14 illustrates a system carrying impedance
Z3(s) through a transformer. Moreover, let the input and output ports of the transformer be

Figure 15.13: A transformer model
Figure 15.14: An impedance carried by a transformer

1 and 2, respectively. As viewed by the transformer, the impedance Z3 (s) is simply


Z3(s) = V2(s)/F2(s)    (15.27)
In contrast, the impedance viewed from the driving point is
Z(s) = V1(s)/F1(s)    (15.28)
By substituting (15.25) and (15.26) into (15.28), one obtains
Z(s) = [(TF) · V2(s)] / [(1/TF) · F2(s)] = (TF)^2 Z3(s)    (15.29)

For the gyrator model shown in Fig. 15.15, the variables at the input and output ports
are related via
v1 = (GY) f2 (15.30)
and
f1 = (1/GY) v2    (15.31)
where GY is the gyrator constant. Figure 15.16 illustrates a system carrying impedance
Z3(s) through a gyrator. Moreover, let the input and output ports of the gyrator be 1 and 2,
respectively. As viewed by the gyrator, the impedance Z3(s) is simply
Z3(s) = V2(s)/F2(s)    (15.32)

Figure 15.15: A gyrator model
Figure 15.16: An impedance carried by a gyrator

Figure 15.17: A hard disk drive spindle motor
Figure 15.18: Linear graph of the spindle motor

In contrast, the impedance viewed from the driving point is

Z(s) = V1(s)/F1(s)    (15.33)

By substituting (15.30) and (15.31) into (15.33), one obtains

Z(s) = [(GY) · F2(s)] / [(1/GY) · V2(s)] = (GY)^2 / Z3(s)    (15.34)

Example 15.4 Now let us revisit the disk drive spindle motor in Example 8.4, which is
reproduced in Fig. 15.17. Its linear graph, shown in Fig. 15.18, is transformed from Fig. 8.36
by labeling each one-port element by its impedance.

Figure 15.19: Linear graph after the first reduction
Figure 15.20: Linear graph after the second reduction

Since the resistor R and inductor L are in a series connection, they are equivalent to
an impedance R + sL. Moreover, the damping B and rotary inertia J are in parallel. They
can be combined into an equivalent impedance Z3 (s) given by
1/Z3(s) = 1/(1/B) + 1/(1/(sJ)) = B + sJ    (15.35)
or
Z3(s) = 1/(B + sJ)    (15.36)
The equivalent impedance R + sL and Z3 are shown in Fig. 15.19. Finally, the effect of the
transformer is taken into account; see Fig. 15.20. The driving-point impedance is then

Z(s) = R + Ls + (TF)^2 · Z3(s)    (15.37)

where the transformer constant TF = 1/ka. Substitution of (15.36) into (15.37) results in

Z(s) = R + Ls + (1/ka^2) · 1/(B + sJ)    (15.38)
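The substitution can be checked symbolically; a sketch in sympy, with ka standing in for the motor constant k_a:

```python
import sympy as sp

s = sp.symbols('s')
R, L, B, J, ka = sp.symbols('R L B J k_a', positive=True)

Z3 = 1 / (B + s*J)        # parallel damper and inertia, (15.36)
TF = 1 / ka               # transformer constant from the example
Z = R + L*s + TF**2 * Z3  # driving-point impedance, (15.37)

# Compare with (15.38)
assert sp.simplify(Z - (R + L*s + (1/ka**2) / (B + s*J))) == 0
```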

Example 15.5 Figure 15.21 shows a hydraulic system consisting of a hydraulic cylinder
(area A), a long pipe (inertance I), a tank (capacitance C), and a drain (resistance R).
Figure 15.22 is the corresponding linear graph. Note that the hydraulic cylinder is a gyrator
with
1
GY = (15.39)
A

Figure 15.21: A hydraulic system


Figure 15.22: Linear graph of the hydraulic system

Figure 15.23: Linear graph after the first reduction
Figure 15.24: Linear graph after the second reduction

Since the resistor R and the capacitor C are in parallel, they can be combined (see
Fig. 15.23) to form an equivalent impedance Z2 with
1/Z2(s) = 1/R + 1/(1/(Cs)) = 1/R + Cs    (15.40)
or
Z2(s) = R/(1 + RCs)    (15.41)
Moreover, Z2 (s) and the inertance I are in a series connection (see Fig. 15.24). Therefore,
they form an equivalent impedance Z3 (s) given by
Z3(s) = sI + Z2(s) = sI + R/(1 + RCs) = (RCIs^2 + Is + R)/(1 + RCs)    (15.42)

Finally, the driving-point impedance (see Fig. 15.25) is


Z1(s) = (GY)^2 / Z3(s) = (1/A^2) · (1 + RCs)/(RCIs^2 + Is + R)    (15.43)

Figure 15.25: Linear graph for the driving-point impedance
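The whole chain of reductions in this example can be replayed in sympy; a sketch with symbol names mirroring the element parameters:

```python
import sympy as sp

s = sp.symbols('s')
A, I, C, R = sp.symbols('A I C R', positive=True)

Z2 = 1 / (1/R + C*s)      # parallel drain R and tank C, (15.41)
Z3 = s*I + Z2             # series with the pipe inertance, (15.42)
GY = 1 / A                # hydraulic cylinder as a gyrator, (15.39)
Z1 = GY**2 / Z3           # reflected through the gyrator, (15.43)

expected = (1/A**2) * (1 + R*C*s) / (R*C*I*s**2 + I*s + R)
assert sp.simplify(Z1 - expected) == 0
```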

15.3 Relation to Transfer Functions

As explained earlier, impedance can be used to characterize the system dynamics. To do
so, one must know how the impedance is related to the transfer function. Very often,
such a relationship can be found through the use of loop equations, node equations, or
elemental equations.

As an illustrative example, Fig. 15.26 shows a low-pass filter. The input voltage is u(t)
and the input current is i(t), while the output y(t) is the voltage across the capacitor. Since
the resistor R and capacitor C are in a series connection, the driving-point impedance is
Z(s) = U(s)/I(s) = R + 1/(Cs) = (RCs + 1)/(Cs)    (15.44)

The transfer function from u(t) to y(t) can be found through the loop equation
y(t) = u(t) − Ri(t) (15.45)
or, in the s-domain,
Y (s) = U (s) − RI(s) (15.46)
Therefore, the transfer function H(s) is
H(s) ≡ Y(s)/U(s) = 1 − R · I(s)/U(s) = 1 − R/Z(s)    (15.47)
15.3. RELATION TO TRANSFER FUNCTIONS 535

Figure 15.26: A low-pass filter
Figure 15.27: A high-pass filter

Note that (15.47) is an expression relating the impedance Z(s) to the transfer function H(s).
Another thing to note is that Z(s) appears in the denominator. Consequently, the zeros of
the impedance Z(s) will be the poles of the transfer function H(s). Substitution of (15.44)
into (15.47) shows that
H(s) = 1/(RCs + 1)    (15.48)
where the pole s = −1/RC of the transfer function H(s) is exactly the zero of the impedance
Z(s) in (15.44).
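This pole-zero relationship can be confirmed with sympy; a sketch, where sp.numer and sp.denom extract the numerator and denominator of the rational forms:

```python
import sympy as sp

s = sp.symbols('s')
R, C = sp.symbols('R C', positive=True)

Z = R + 1/(C*s)                      # driving-point impedance (15.44)
H = sp.simplify(1 - R/Z)             # transfer function (15.47)

assert sp.simplify(H - 1/(R*C*s + 1)) == 0

# The zero of Z(s) coincides with the pole of H(s): s = -1/(RC)
zero_of_Z = sp.solve(sp.numer(sp.together(Z)), s)
pole_of_H = sp.solve(sp.denom(sp.together(H)), s)
assert zero_of_Z == pole_of_H
assert sp.simplify(zero_of_Z[0] + 1/(R*C)) == 0
```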

A similar relationship between Z(s) and H(s) holds for the high-pass filter
shown in Fig. 15.27. The impedance Z(s) remains unchanged, i.e., (15.44). The output y(t)
is now the voltage across the resistor R. The transfer function H(s) can be found through
the elemental equation
y(t) = Ri(t) (15.49)
or, in the s-domain,
Y (s) = RI(s) (15.50)
Therefore, the transfer function H(s) is

H(s) ≡ Y(s)/U(s) = R · I(s)/U(s) = R/Z(s) = RCs/(RCs + 1)    (15.51)

Again, the pole of H(s) is the zero of Z(s).
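A matching sympy check for the high-pass case:

```python
import sympy as sp

s = sp.symbols('s')
R, C = sp.symbols('R C', positive=True)

Z = R + 1/(C*s)          # same driving-point impedance as the low-pass case
H = sp.simplify(R / Z)   # transfer function (15.51)

# Compare with RCs/(RCs + 1); the pole of H is again the zero of Z
assert sp.simplify(H - R*C*s/(R*C*s + 1)) == 0
```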
