This publication is copyright under the Berne Convention and the Universal Copyright
Convention. All rights reserved. Apart from any fair dealing for the purposes of research
or private study, or criticism or review, as permitted under the Copyright, Designs and
Patents Act, 1988, this publication may be reproduced, stored or transmitted, in any
form or by any means, only with the prior permission in writing of the publishers, or in
the case of reprographic reproduction in accordance with the terms of licences issued
by the Copyright Licensing Agency. Inquiries concerning reproduction outside those
terms should be sent to the publishers at the undermentioned address:
The Institution of Engineering and Technology
Michael Faraday House
Six Hills Way, Stevenage
Herts, SG1 2AY, United Kingdom
www.theiet.org
While the authors and the publishers believe that the information and guidance given in
this work are correct, all parties must rely upon their own skill and judgement when
making use of them. Neither the authors nor the publishers assume any liability to
anyone for any loss or damage caused by any error or omission in the work, whether
such error or omission is the result of negligence or any other cause. Any and all such
liability is disclaimed.
The moral rights of the authors to be identified as authors of this work have been
asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
ISBN 978-0-86341-745-0
System on chip (SoC) integrated circuits (ICs) for communications, multimedia and
computer applications are receiving considerable international attention. One exam-
ple of a SoC is a single-chip transceiver. Modern microelectronic design processes
adopt a mixed-signal approach since a SoC is a mixed-signal system that includes
both analogue and digital circuits. There are several IC technologies currently
available; however, the low-cost and readily available CMOS process is the mainstream
technology used in IC production for applications such as computer hard disk drive
systems, sensors and sensing systems for health care, video, image and display sys-
tems and cable modems for wired communications, radio frequency (RF) transceivers
for wireless communications and high-speed transceivers for optical communications.
Currently, microelectronic circuits and systems are mainly based on submicron and
deep-submicron CMOS technologies, although nano-CMOS technology has already
been used in computer, communication and multimedia chip design. While still push-
ing the limits of CMOS, preparation for the post-CMOS era is well under way with
many other potential alternatives being actively pursued.
There is an increasing interest in the testing of SoC devices as automatic testing
becomes crucially important to drive down the overall cost of SoC devices due to the
imperfect nature of the manufacturing process and its associated tolerances. Tradi-
tional external test has become more and more irrelevant for SoC devices, because
these devices have a very limited number of test nodes. Design for testability (DfT)
and built-in self-test (BIST) approaches have thus been the choice for many applica-
tions. The concept of on chip test systems including test generation, measurement and
processing has also been proposed for complex integrated systems. Test and fault
diagnosis of analogue and mixed-signal circuits, however, is much more difficult than
that of digital circuits due to tolerances, parasitics and non-linearities, and thus it remains
a bottleneck for automatic SoC test. Recently, the closely related tuning, calibration
and correction issues of analogue, mixed-signal and RF circuits have been intensively
studied. However, the papers on testing, diagnosis and tuning have been published
in a diverse range of journals and conferences, and thus they have been treated quite
separately by the associated communities. For example, work on tuning has been
mainly published in journals and conferences concerned with circuit design and has
xvi Test and diagnosis of analogue, mixed-signal and RF integrated circuits
not therefore come to the attention of the testing community. Similarly, analogue fault
diagnosis was mainly investigated by circuit theorists in the past, although it has now
become a serious topic in the testing community.
The scope of this book is to consider the whole range of automatic testing, diagno-
sis and tuning of analogue, mixed-signal and RF ICs and systems. It aims to provide
a comprehensive treatment of testing, diagnosis and tuning in a coherent way and
to report systematically the most recent developments in all these areas in a single
source for the first time. The book attempts to provide a balanced view of the three
important topics; however, stress has been put on the testing side. Motivated by recent
SoC test concepts, the diagnosis, testing and tuning issues of analogue, mixed-signal
and RF circuits are addressed, in particular, from the SoC perspective, which forms
another unique feature of this book.
The book contains 11 chapters written by leading international researchers in
the subject areas. It covers three theme topics: diagnosis, testing and tuning. The
first four chapters are concerned with fault diagnosis of analogue circuits. Chapter
1 systematically presents various circuit-theory-based diagnosis methodologies for
both linear and non-linear circuits including some material not previously available
in the public domain. This chapter also serves as an overview of fault diagnosis.
The following three chapters cover the three most popular diagnosis approaches:
the symbolic function, neural network and hierarchical decomposition techniques,
respectively. Then testing of analogue, mixed-signal and RF ICs is discussed extensively
in Chapters 5–10. Chapter 5 gives a general review of all aspects of testing with
emphasis on DfT and BIST. Chapters 6–10 focus in depth on recent advances in testing
analogue filters, data converters, sigma-delta modulators, phase-locked loops, RF
transceivers and components, respectively. Finally, Chapter 11 discusses auto-tuning
and calibration of analogue, mixed-signal and RF circuits including continuous-time
filters, voltage-controlled oscillators and phase-locked loop synthesizers, impedance
matching networks and antenna tuning units.
The book can be used as a text or reference for a broad range of readers from
both academia and industry. It is especially useful for those who wish to gain a
viewpoint from which to understand the relationship of diagnosis, testing and tuning.
An indispensable reference companion for researchers and engineers in electronic and
electrical engineering, the book is also intended to be a text for graduate and senior
undergraduate students, as may be appropriate.
I would like to thank staff members in the Publishing Department of the IET
for their support and assistance, especially the former Commissioning Editors Sarah
Kramer and Nick Canty and the current Commissioning Editor, Lisa Reading. I am
very grateful to the chapter authors for their considerable efforts in contributing these
high-quality chapters; their professionalism is highly appreciated. I must also thank
my wife Xiaohui, son Bo and daughter Lucy for their understanding and support;
without them behind me this book would not have been possible.
As a final note, it has been my long-held dream to write or edit something in the topic
area of this book. The first research paper published in my academic career was about
fault diagnosis in analogue circuits. This was over 20 years ago when I studied for
the MSc degree. The real motivation for doing this book, however, came along with
the proposal for a special issue on analogue and mixed-signal test for SoCs for IEE
Proceedings: Circuits, Devices and Systems (published in 2004). It has since been
a long journey for the book to come into being as you see it now; however, the book
has indeed been significantly improved over time during the editorial process.
I sincerely hope that the efforts of the editor and authors pay off as a truly useful
and long-lasting companion in your successful career.
Yichuang Sun
Contents
Preface xv
List of contributors xix
5 DFT and BIST techniques for analogue and mixed-signal test 141
Mona Safi-Harb and Gordon Roberts
5.1 Introduction 141
5.2 Background 142
5.3 Signal generation 146
5.3.1 Direct digital frequency synthesis 146
5.3.2 Oscillator-based approaches 147
5.3.3 Memory-based signal generation 148
Index 383
Chapter 1
Fault diagnosis of linear and non-linear
analogue circuits
Yichuang Sun
1.1 Introduction
The fault dictionary method is used in practice for the fault diagnosis of analogue and
mixed-signal circuits, especially for single hard-fault diagnosis. The drawback of the method
is the large number of simulation-before-test (SBT) computations that are needed for the
construction of a fault dictionary, especially for multiple-fault and soft-fault diagnosis
of a large circuit. Methods for effective fault simulation and large-change sensitivity
computation are thus needed [6–10]. Tolerance effects need to be considered, as
simulations are conducted at nominal values for fault-free components.
The parameter identification approach calculates all actual component values from
a set of linear or non-linear equations after test and compares them with their nominal
values to decide which components are faulty [13–17]. The method is useful for
circuit design modification and tuning. There is no restriction on the number of
faults, and tolerance is not a problem in this method because the method targets all
actual component values. However, the method normally assumes that all circuit
nodes are accessible and thus it is not practical for modern IC diagnosis [15, 16]. In
addition, some parameter identification requires solving non-linear equations [13, 14],
which is computationally demanding, especially for large-scale circuits. The parameter
identification method has thus become more of a topic of theoretical interest in circuit
diagnosis, in contrast to circuit analysis and circuit design. The only exception is
perhaps the optimization-based identification technique [17], which can use limited
tests for approximate, but optimized, component value calculation. The optimization-based
method will be discussed in the context of the neural network approach in
Chapter 3.
The fault verification method [18–39] is concerned with fault location of analogue
circuits with a small number of test nodes and a limited number of faults by using
linear diagnosis equations. Indeed, modern highly integrated systems have very limited
external accessibility and normally only a few components become faulty simultaneously.
Under the assumption that the number of faults is fewer than the number of
accessible nodes, the fault locations of a circuit can be determined by simply checking
the consistency of a set of linear equations. Thus, the simulation-after-test (SAT)
computation burden of the method is small. The fault verification method is suitable
for all types of fault, and component values can also be determined after fault location.
Tolerance effects are, however, of concern in this method because fault-free components
are assumed to take their nominal values. The fault verification method has attracted
considerable attention, with the k-fault diagnosis approach [18–39] being widely investigated.
This chapter systematically introduces k-fault diagnosis theory and methods for
both linear and non-linear circuits as well as the derivative class-fault diagnosis
approach. We also give a general overview of recent research in fault diagnosis of
analogue circuits. Throughout the chapter, a unified discussion is adopted based on
the fault incremental circuit concept. In Section 1.2, we introduce the fault incremental
circuit of linear circuits and discuss various k-fault diagnosis methods, including
branch-, node- and cutset-fault diagnoses, and various practical issues such as
component value determination and testability analysis and design. A class-fault diagnosis
theory without structural restrictions for fault location is introduced in Section 1.3,
which comprises both algebraic and topological classification methods. In Section 1.4,
the fault incremental circuit of non-linear circuits is constructed and a series of linear
methods and special considerations of non-linear circuit fault diagnosis are discussed.
We also introduce some of the latest advances in fault diagnosis of analogue circuits
in Section 1.5. A summary of the chapter is given in Section 1.6.
The k-fault diagnosis methods [18–39] have been widely investigated because of
various advantages such as the need for only a limited number of test nodes and use
of linear fault diagnosis equations. It is also practical to assume a limited number of
simultaneous faults. The k-fault diagnosis theory is very systematic and is based on
circuit analysis and circuit design methods.
On applying Kirchhoff's current law and Kirchhoff's voltage law to the fault incremental
circuit, we can derive numerous equations useful for fault diagnosis of analogue circuits.
For linear controlled sources and multi-terminal devices or subcircuits, we can also
derive the corresponding branch equations in the fault incremental circuit [32–35].
For example, for a VCCS with i1 = gm v2, it can be shown that Δi1 = gm Δv2 + x1
in the fault incremental circuit, where x1 = Δgm (v2 + Δv2). This remains a VCCS
with an incremental current in the controlled branch, an incremental voltage in the
controlling branch and a fault compensation current source in the controlled branch.
For a three-terminal linear device, y-parameters can be used to describe its terminal
characteristics, with one terminal being taken to be common:
i1 = y11 v1 + y12 v2
i2 = y21 v1 + y22 v2
We can derive the corresponding equations in the fault incremental circuit as
Δi1 = y11 Δv1 + y12 Δv2 + x1
Δi2 = y21 Δv1 + y22 Δv2 + x2
where
x1 = Δy11 (v1 + Δv1) + Δy12 (v2 + Δv2)
x2 = Δy21 (v1 + Δv1) + Δy22 (v2 + Δv2)
Although the device has four y-parameters, only two fault compensation current
sources are used, one for each branch of a T-type equivalent circuit in the fault
incremental circuit. If either x1 or x2 is non-zero, the three-terminal device is faulty;
it is fault-free only if both x1 and x2 are zero. Similarly, we can also develop a star
model for multi-terminal linear devices or subcircuits [35]. A Δ (delta) model is not
preferred owing to the existence of loops (this will become clear later), the use of
more branches and the possibility of additional internal nodes.
A fault incremental circuit will become a differential incremental circuit if Xb =
ΔYb (Vb + ΔVb) is replaced by Xb = ΔYb Vb. The differential incremental circuit
is useful for differential sensitivity and tolerance-effect analysis, whereas the fault
incremental circuit can be used for large-change sensitivity analysis, fault simulation
and fault diagnosis.
Consider a linear circuit with b branches and n nodes (excluding the ground
node), of which m are accessible and l inaccessible. Assume that the nominal circuit
and faulty circuit have the same current input, then the input current to an accessible
node in the fault incremental circuit is zero. On applying KCL to the fault incremental
circuit, that is, A ΔIb = 0 (where A is the node incidence matrix), noting that ΔVb =
A^T ΔVn (where ΔVn is the nodal voltage increment vector), and substituting into
Equation (1.4), we can derive:
where Znb = (A Yb A^T)^(-1) A and Xb = ΔYb (Vb + ΔVb) as given in Equation (1.5).
Partitioning Znb = [Zmb^T, Zlb^T]^T and ΔVn = [ΔVm^T, ΔVl^T]^T according to external
(accessible) and internal (inaccessible) nodes, the branch-fault diagnosis equation
can be derived as
and the formula for calculating the internal node voltages is given by
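The display equations referred to here, including Equations (1.7) and (1.8), appear to have been lost in extraction. Reconstructed from the derivation just given (KCL applied to the fault incremental circuit with branch relation ΔIb = Yb ΔVb + Xb), and hedging the overall sign, which depends on the reference-direction convention and may be absorbed into Xb, they would read approximately:

```latex
% Reconstructed from the surrounding derivation; numbering and the overall
% sign of X_b are inferred, not taken from the original text.
\begin{align*}
  \Delta V_n &= -Z_{nb} X_b, \qquad Z_{nb} = (A Y_b A^{T})^{-1} A \\
  \Delta V_m &= -Z_{mb} X_b \qquad \text{(branch-fault diagnosis equation, Eq.~(1.7))} \\
  \Delta V_l &= -Z_{lb} X_b \qquad \text{(internal node voltages, Eq.~(1.8))}
\end{align*}
```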
The non-zero elements of Xk in Equation (1.10) indicate the faulty branches. By
checking consistency of the equations of different k branches, we can determine the
k faulty branches. Because we do not know which k components are faulty, we have
to consider all possible combinations of k out of b branches in the CUT. If there is
more than one k-branch combination whose corresponding equations are consistent,
the k faulty branches cannot be uniquely determined, as they are not distinguishable
from the other consistent k-branch combinations.
More generally, the k-fault diagnosis problem is to find the solutions of Xb
from the underdetermined equation in Equation (1.7) that contain only k non-zero
elements. This further becomes a problem of checking the consistency of a
series of overdetermined equations similar to Equation (1.9), corresponding to all
k-branch combinations. A detailed discussion of the problems and methods can be
found in References 18 and 26.
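The combination-checking procedure above can be sketched numerically. A minimal illustration, assuming the linearized relation ΔVm = Zmk Xk with any overall sign absorbed into Xk; the function name, matrices and fault values are ours, purely synthetic, and not from the text:

```python
import itertools
import numpy as np

def locate_k_faults(Zmb, dVm, k, tol=1e-8):
    """Check consistency of the overdetermined system Zmk @ xk = dVm for
    every k-branch combination; consistent combinations are fault candidates."""
    m, b = Zmb.shape
    assert m > k, "need more accessible nodes than simultaneous faults"
    candidates = []
    for combo in itertools.combinations(range(b), k):
        Zmk = Zmb[:, combo]                       # m x k submatrix
        xk, *_ = np.linalg.lstsq(Zmk, dVm, rcond=None)
        if np.linalg.norm(Zmk @ xk - dVm) < tol:  # residual ~ 0 -> consistent
            candidates.append((combo, xk))
    return candidates

# Synthetic illustration: 4 accessible nodes, 6 branches, branches 1 and 4 faulty.
rng = np.random.default_rng(0)
Zmb = rng.standard_normal((4, 6))
x_true = np.zeros(6)
x_true[[1, 4]] = [0.7, -1.3]
dVm = Zmb @ x_true
candidates = locate_k_faults(Zmb, dVm, k=2)
```

For a generic (random) Zmb only the true pair of branches yields a consistent system, matching the uniqueness discussion above.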
After location of the k faulty branches, we can calculate ΔVl using Equation (1.8),
then ΔVb = A^T ΔVn, and further we can calculate ΔYb from Equation (1.5).
Theorem 1.1 The necessary and almost sufficient condition for k-branch faults to
be testable is that for all (k + 1)-branch combinations, the corresponding equation
coefficient matrices are of full rank, that is, rank [Zm(k+1)] = k + 1.
So there are two algebraic requirements that are important: rank [Zmk] = k and
rank [Zm(k+1)] = k + 1. The first one is for the equation to be solvable, which is
always assumed to be true, and the second is for a unique solution. In the following,
we will give the topological equivalents of both.
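On small circuits the two rank conditions can be checked by brute force. A sketch, with illustrative matrices; `k_fault_testable` is our name, and in practice one would use the topological conditions developed below rather than enumerating all submatrices:

```python
import itertools
import numpy as np

def k_fault_testable(Zmb, k):
    """rank[Zmk] = k for every k-column submatrix (equations solvable) and
    rank[Zm(k+1)] = k + 1 for every (k+1)-column submatrix (Theorem 1.1,
    unique k-fault location)."""
    m, b = Zmb.shape
    solvable = all(np.linalg.matrix_rank(Zmb[:, c]) == k
                   for c in itertools.combinations(range(b), k))
    unique = all(np.linalg.matrix_rank(Zmb[:, c]) == k + 1
                 for c in itertools.combinations(range(b), k + 1))
    return solvable and unique

rng = np.random.default_rng(1)
Z_ok = rng.standard_normal((4, 6))   # generic matrix: testable for k = 2
Z_bad = Z_ok.copy()
Z_bad[:, 3] = 2.0 * Z_bad[:, 0]      # two "parallel" branches: rank drops
```

The proportional columns in `Z_bad` mimic two branches in parallel, the k = 1 untestable case noted below.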
Definition 1.2 A cutset is said to be dependent if all accessible nodes and the
reference node are in one of the two parts into which the cutset divides the circuit.
A simple dependent cutset is one in which there is only one inaccessible node in
one part.
Theorem 1.2 The necessary and almost sufficient condition for rank [Zmk] = k for
all k-branch combinations is that the CUT does not have any loops or dependent
cutsets that contain k branches.
Theorem 1.3 The necessary and almost sufficient condition for k-branch faults to
be testable (rank [Zm(k+1)] = k + 1 for all (k + 1)-branch combinations) is that the
CUT does not have any loops or dependent cutsets that contain (k + 1) branches.
When k = 1, the necessary and almost sufficient condition for a single branch
fault to be testable becomes that the circuit does not have any two branches in parallel
or forming a dependent cutset.
A loop is called the minimum loop if it contains the fewest branches among all
loops. A dependent cutset is called the minimum dependent cutset if it contains
the fewest branches among all dependent cutsets. Denote by lmin and cmin the
number of branches in the minimum loop and the minimum dependent cutset,
respectively. Then we have the following theorems.
Theorem 1.4 The necessary and almost sufficient condition for k-branch faults to
be testable is k < lmin − 1 if lmin ≤ cmin, or k < cmin − 1 if cmin ≤ lmin.
It is necessary to find the loops and dependent cutsets in order to determine lmin and cmin.
Seeking loops is relatively easy and can be conducted in the CUT, N. However,
dependent cutsets are a little more difficult to find, especially in large circuits. The
following theorem provides a simple method for this purpose, namely, to find dependent
cutsets equivalently in N0 instead of N.
Theorem 1.5 Let N0 be the circuit obtained by connecting all accessible nodes to
the reference node in the original circuit N. Then all cutsets in N0 are dependent and
N0 contains all cutsets in N.
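Theorem 1.5, together with the later observation that all branches connected to an inaccessible node constitute a dependent cutset, suggests a quick bound on cmin. A sketch under an assumed node-pair representation of branches (node 0 as the reference); it enumerates only the simple dependent cutsets around single inaccessible nodes, so it yields an upper bound on cmin rather than cmin itself:

```python
def dependent_cutset_bound(branches, accessible, ground=0):
    """Build N0 by shorting every accessible node to the reference node
    (Theorem 1.5). The branches incident to each remaining inaccessible
    node form a simple dependent cutset, so the smallest such incidence
    count is an upper bound on cmin."""
    shorted = set(accessible) | {ground}
    canon = lambda n: ground if n in shorted else n
    degree = {}
    for a, b in branches:
        a, b = canon(a), canon(b)
        for n in (a, b):
            if n != ground:                  # self-loops at ground drop out
                degree[n] = degree.get(n, 0) + 1
    return min(degree.values()) if degree else None

# Small example: nodes 1..3; node 1 accessible, nodes 2 and 3 inaccessible.
branches = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
bound = dependent_cutset_bound(branches, accessible={1})
```

Here node 2 has only two incident branches after shorting, so cmin is at most 2, which matches the advice below to make low-degree nodes accessible.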
Note that k-branch-fault testability is dependent on both loops and dependent cut-
sets. Increasing the number of branches in the minimum loop and minimum dependent
cutset may allow more simultaneous branch faults to be testable. This is useful when
k is not known.
It is also noted that a non-dependent cutset does not pose any restriction on testa-
bility. Whether or not a cutset is dependent will depend on the number and position
of accessible nodes. Therefore, proper selection of accessible nodes can change the
dependency of a cutset and thus the testability of k-branch faults. The greater the
number of nodes accessible the smaller will be the number of dependent cutsets. If
all circuit nodes are accessible, there will be no dependent cutset. Testability will
then be completely decided by the condition on loops, that is, k < lmin − 1. Choosing
different accessible nodes may change a dependent cutset to a non-dependent
cutset, thus improving testability. However, selection of accessible nodes will not
change the conditions on loops. Therefore, if testability is decided by loop conditions
only, for example, when lmin ≤ cmin, changing accessible nodes will not improve the
testability. However, since dependency of cutsets is related to the number and posi-
tion of accessible nodes, when testability is decided by conditions on cutsets only,
we will want to select accessible nodes to eliminate dependent cutsets or increase
the number of branches in the minimum dependent cutset to improve the testability.
To increase the number of branches in the minimum dependent cutset, it is always
useful to choose those nodes containing a smaller number of branches as accessible
nodes, because all branches connected to an inaccessible node constitute a dependent
cutset. Generally, to choose some node in the part without the reference node of the
circuit that a dependent cutset divides as an accessible node can make the minimum
dependent cutset not dependent. Finally, the number of accessible nodes must be
larger than the number of faulty branches, as is always assumed. If possible, having
more accessible nodes is always useful as in many cases we do not know exactly how
many faults may happen and the number of dependent cutsets may also be reduced.
In summary, we need to meet m ≥ k + 1, lmin > k + 1 and cmin > k + 1.
For a more graph-theory-based discussion of testability, readers may refer to
Reference 23 where detailed testability analysis and design for testability procedures
are given and other equivalent testability conditions are proposed.
We can enhance testability of a circuit by using multiple excitations. For example,
by checking the invariance of the k component values under two different excitations,
we can identify the k faulty components. The CUT can now have (k + 1)-branch loops
and dependent cutsets, as in these cases we can still uniquely determine the faulty
branches. Using multiple excitations, all circuits will be single-fault diagnosable.
This will be detailed on the basis of a bilinear method in the next section.
After location of the k faulty branches, we can use the bilinear relation in Equation
(1.11) to determine the k faulty component values. More usefully, a multiple excitation
method can be developed based on checking the invariance of the corresponding k-
component values under different excitations for a unique identication of the faulty
branches and components.
If we use two independent current excitations with the same frequency and calculate
col(Yk) of all k-component combinations under each excitation, as col(Yk)1
and col(Yk)2, respectively, and by denoting rk = col(Yk)1 − col(Yk)2, we can
determine the k faulty branches by checking whether rk is equal to zero. This method
can realize the simultaneous determination of the faulty branches and the faulty component
values as calculated under either excitation. The multiple-excitation method can
enhance diagnosability. Equivalent faulty k-branch combinations can be eliminated
since for these k-branch combinations rk is not equal to zero (because the computed
component values of k-branch combinations that are actually fault-free change with
different excitations). Now, the only condition for the unique identification of k faulty
components is that rank [Zmk] = k, or that the k branches do not form loops or
dependent cutsets. Thus, during the checking of different combinations of k components,
once a k-component combination is found to have rk = 0, we can stop further
checking: this k-component combination is the faulty one. For a.c. circuits, multiple
test frequencies may also be used; however, component values R, L and C rather than
their admittances should be used, since admittances are frequency dependent [21].
A similar bilinear relation and two-excitation method for non-linear circuits [39] will
be discussed in Section 1.4.2.
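The stop-at-first-invariant-combination search can be sketched as follows. The bilinear relation (1.11) used to recover col(Yk) is not reproduced here, so `component_values` below is a hypothetical stand-in for it, and the mock data are purely illustrative:

```python
import numpy as np

def faulty_set_by_invariance(candidates, component_values, tol=1e-6):
    """component_values(combo, e) is a HYPOTHETICAL helper standing in for
    the bilinear relation (1.11): it returns col(Yk) for a k-branch combo
    under excitation e (1 or 2). The true faulty set is the one whose
    computed values are excitation-invariant, i.e. rk = col(Yk)1 - col(Yk)2
    vanishes; checking stops at the first such combination."""
    for combo in candidates:
        rk = component_values(combo, 1) - component_values(combo, 2)
        if np.linalg.norm(rk) < tol:
            return combo
    return None

# Mock values: combo (0, 2) is excitation-invariant, combo (1, 3) is not.
def mock_values(combo, e):
    table = {
        (0, 2): (np.array([1.0, 2.0]), np.array([1.0, 2.0])),
        (1, 3): (np.array([1.0, 2.0]), np.array([1.5, 2.0])),
    }
    return table[combo][e - 1]

found = faulty_set_by_invariance([(1, 3), (0, 2)], mock_values)
```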
solved twice. Once a Δyj is obtained from one faulty node, it can be used as
a known value for the other faulty node, which then has one equation fewer to solve.
Theoretically, the minimum number of equations needed is the number of branches
between faulty nodes or between the faulty nodes and ground.
A faulty node that has a grounded branch is said to be independent because it
contains a branch that is not owned by other faulty nodes. Otherwise, it is said to be
dependent. A dependent node may not need to be dealt with as its branch component
values can be obtained by solving other faulty node equations. In practice we can use
the following steps to make sure that we solve the minimum number of equations.
Supposing that the first hi branches are not connected to the faulty nodes that
have already been dealt with, then only hi independent excitations are needed for
node i. The new equations can be written as
(v1 + Δv1)^(1) Δy1 + ... + (vhi + Δvhi)^(1) Δyhi
= −(y1 Δv1^(1) + ... + ymi Δvmi^(1)) − [(vhi+1 + Δvhi+1)^(1) Δyhi+1 + ... + (vmi + Δvmi)^(1) Δymi]
(v1 + Δv1)^(2) Δy1 + ... + (vhi + Δvhi)^(2) Δyhi
= −(y1 Δv1^(2) + ... + ymi Δvmi^(2)) − [(vhi+1 + Δvhi+1)^(2) Δyhi+1 + ... + (vmi + Δvmi)^(2) Δymi]
...
(v1 + Δv1)^(hi) Δy1 + ... + (vhi + Δvhi)^(hi) Δyhi
= −(y1 Δv1^(hi) + ... + ymi Δvmi^(hi)) − [(vhi+1 + Δvhi+1)^(hi) Δyhi+1 + ... + (vmi + Δvmi)^(hi) Δymi]
where the superscript (j) denotes the quantity under the jth excitation.
The total number of equations from all faulty nodes that are dealt with in this
method is equal to the number of possible faulty branches, no matter in what order
we deal with the faulty nodes. Starting with the faulty node that contains the maximum
number of faulty branches will need the maximum number of excitations. Starting
with the faulty node that contains the fewest branches may result in the minimum
number of excitations as when coming to deal with the node that contains the most
faulty branches some of these branches may have already been solved, thus the number
of excitations needed for the node could be smaller than the number of its faulty
branches.
and the formula for calculating the internal tree branch voltages is given by ΔVq =
Zqb Xb. Equation (1.22) may benefit from the selectability of trees.
Definition 1.3 The k-branch set Sj is said to be dependent on the k-branch set Si if
Zmki (Zmki^T Zmki)^(-1) Zmki^T Zmkj = Zmkj.
Theorem 1.8 If for some (k + 1) branches, rank [Zm(k+1) ] = k, then all k-branch
sets formed by these (k + 1) branches belong to the same class.
If Zmki (Zmki^T Zmki)^(-1) Zmki^T ΔVm = ΔVm, the k-branch set Si is said to be
consistent. It can be proved that if Si and Sj are dependent, they are both consistent
or both inconsistent. It can also be proved that if Si and Sj are simultaneously
consistent, then Si and Sj are dependent.
A class Ci is said to be faulty if it contains the faulty branch set. A class Ci is said
to be consistent, if the k-branch sets in the class are consistent. If Si is faulty, it is
consistent and if Si is inconsistent, it is not faulty. Thus, the faulty class must be
the consistent class and an inconsistent class is not faulty. Owing to the equivalence
relation, if there is one consistent branch set, the class is consistent and if there is one
inconsistent branch set, the class is inconsistent. There is only one consistent class and
the faulty class can be uniquely determined. Clearly, we can identify the faulty class
by checking only one branch set in each class and once a consistent set/class is found,
we do not need to check any more as the remaining classes are not faulty. When the
number of classes is equal to the number of branch sets, that is, each class contains
only one branch set, the class-fault diagnosis method reduces to the k-branch-fault
diagnosis method.
In the above we assume that all k-branch sets are of full column rank. In the cases
that there are some branch sets which are not of full column rank, the method can also
be used with some generalization. This is possible by putting all k-branch sets
which are not of full rank together as a class, called the non-full-rank class. We can find
all branch sets with rank [Zmki] < k by checking whether det(Zmki^T Zmki) = 0.
For all full-rank branch sets we do classification and identification as normal. If a
normal full-rank class is found faulty by consistency checking, the fault diagnosis is
completed. If none of the full-rank classes is faulty, we judge the non-full-rank class
to be the faulty class.
Classification should be conducted from k = 1 to k = m − 1, unless we know
the k value. Classes are determined for each k value using the method given
above. A class table similar to a fault dictionary can be formed before test. Zmki of
one branch set (any one of the k-branch sets) in each class, computable before test,
can also be included in the class table for consistency checking to identify the faulty
class after test.
The class-fault diagnosis method may be suitable for relatively large-scale circuits
as it targets a faulty region and a class may actually be a subcircuit. In class-fault
diagnosis, the after-test computation level is small because classification can be
conducted and the class table can be constructed before test. The number of classes is
smaller than the number of branch sets and, owing to the unique identifiability, we can
stop checking once a class is found to be consistent; thus the number of consistency
checks is at worst equal to the number of classes. There is also no need for testability
design owing to the unique identifiability. The method has no restriction; the only
assumption is m > k. The method can also be used to classify k-node sets and k-cutset sets [28].
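The classification step can be sketched numerically, reading Definition 1.3 in its projector form (an assumption on our part): Sj is dependent on Si when the columns of Zmkj lie in the column space of Zmki. Names and matrices below are illustrative:

```python
import itertools
import numpy as np

def is_dependent(Zi, Zj, tol=1e-8):
    """Sj dependent on Si: projecting Zmkj onto the column space of Zmki
    leaves it unchanged (pseudo-inverse reading of Definition 1.3)."""
    P = Zi @ np.linalg.pinv(Zi)          # projector onto col(Zmki)
    return np.linalg.norm(P @ Zj - Zj) < tol

def classify_branch_sets(Zmb, k):
    """Group full-column-rank k-branch sets into classes by mutual
    dependence; the resulting class table can be built before test."""
    _, b = Zmb.shape
    full_rank = [c for c in itertools.combinations(range(b), k)
                 if np.linalg.matrix_rank(Zmb[:, c]) == k]
    classes = []
    for c in full_rank:
        for cls in classes:
            Zr, Zc = Zmb[:, cls[0]], Zmb[:, c]
            if is_dependent(Zr, Zc) and is_dependent(Zc, Zr):
                cls.append(c)
                break
        else:
            classes.append([c])
    return classes

# Columns 0 and 3 are proportional, so for k = 1 they fall into one class.
Zmb = np.array([[1.0, 0.0, 1.0, 2.0],
                [0.0, 1.0, 1.0, 0.0],
                [1.0, 1.0, 0.0, 2.0]])
classes = classify_branch_sets(Zmb, k=1)
```

Only one representative per class needs to be stored in the class table, which is where the before-test saving claimed above comes from.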
After the determination of the faulty class, we can further determine the k faulty
branches. If the faulty class contains only one k-branch set, that branch set is the
faulty one. Otherwise, the two-excitation method based on the invariance of the faulty
component values may be used to identify the faulty branch set from the others in the
faulty class.
The class-fault diagnosis technique is the best combination of the SBT and SAT
methods, retaining the advantages of both. It uses compatibility verification of linear
equations, but the class table is very similar to a fault dictionary. It can deal with
multiple soft faults and the after-test computation level is small. The topological
classification methods to be introduced in the next section make classification simpler
and the computation before test smaller.
Theorem 1.9 We can find the k-branch sets which are not of full rank topologically
as below [29, 30].
1. When branches i1, i2, ..., ik form a loop or dependent cutset, Si is not of full
rank.
2. The (k + 1), k-branch sets formed by the (k + 1) branches in a (k + 1)-branch
loop containing a dependent cutset are not of full rank. The (k + 1), k-branch
sets formed by the (k + 1) branches in a (k + 1)-branch-dependent cutset
containing a loop are not of full rank.
3. The k-branch sets formed by the (k + 1) branches in a (k + 1)-branch loop
or dependent cutset that shares k branches with another loop containing a
dependent cutset are not of full rank. The k-branch sets formed by the (k + 1)
branches in a (k + 1)-branch dependent cutset or loop that shares k branches
with another dependent cutset containing a loop are not of full rank.
Theorem 1.10 All normal full-rank k-branch sets can be classified
topologically as below [29, 30]:
1. The (k + 1) k-branch sets in a (k + 1)-branch loop belong to the same class.
The (k + 1) k-branch sets in a (k + 1)-branch dependent cutset belong to the
same class. As a special case of the latter, supposing that an inaccessible node
has (k + 1) branches, the (k + 1) k-branch sets formed by the (k + 1) branches
connected to the node belong to the same class.
2. When a (k + 1)-branch loop or dependent cutset shares k branches with another
(k + 1)-branch loop or dependent cutset, all k-branch sets formed by the
branches in both belong to the same class.
3. In a (k + 2)-branch loop containing a dependent cutset, all those k-branch sets
that do not form the dependent cutset belong to the same class. Similarly, in a
(k + 2)-branch dependent cutset containing a loop, all those k-branch sets that
do not form the loop belong to the same class.
Definition 1.4 An i-branch set and a j-branch set are said to be t-dependent if the
same ΔVm is caused when their branches are faulty.
For the two branch sets, we have Zmi ΔXi = Zmj ΔXj = ΔVm. Note that i and
j can be equal, for example, i = j = k, which is the case of class-fault diagnosis
in Sections 1.3.1 and 1.3.2; they can also be different, which is a new case. To
reflect the difference we use the phrase t-dependence to define the relation, where
the meaning of t will become clear later. It is evident that this dependence relationship is also an
equivalence relation, so we can classify branch sets in a circuit by combining all
dependent branch sets into a class. Obviously, each branch set concerned lies in
one and only one class. A branch set forms a class by itself only if it is not dependent on
any other branch set. A topological classification theorem is given below.
Theorem 1.11 [31] The (t + 1)-branch set and the (t + 1) t-branch sets formed by the
(t + 1) branches in a (t + 1)-branch loop belong to the same class. The (t + 1)-branch
set and the (t + 1) t-branch sets formed by the (t + 1) branches in a (t + 1)-branch
dependent cutset belong to the same class. As a special case of the latter, supposing
that an inaccessible node has (t + 1) branches connected to it, the (t + 1)-branch set
and the (t + 1) t-branch sets formed by the (t + 1) branches belong to the same class.
Note that ΔXb can be looked upon as a fault excitation current source vector.
Therefore, the above theorem can easily be proved by means of the theorems of
current source and voltage source shifting [62]. On the basis of the above theorem,
the following ordering conventions for the classification are adopted:
1. Branches in a set are arranged in order from small numbers to large ones.
2. Branch sets in a class are ranked according to the number of branches contained
in the sets, from small to large. If two sets have the same number of branches,
the set with the smaller branch numbers is put before the other one.
3. Class order numbers, beginning with 1, are determined by the branch order numbers
of the first set in each class, from small to large.
4. The whole classification process may start from t = 1 and end at t = m − 1, that
is, first find all classes whose smallest branch sets have only one branch,
then those whose smallest sets have two branches, and so on.
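As a concrete illustration, the four ordering rules can be applied mechanically to a small collection of classes. The branch-set data below are invented purely for the example.

```python
def order_classes(classes):
    """Apply the ordering rules: sort branches within each set (rule 1),
    rank sets in a class by size and then lexicographically by branch
    numbers (rule 2), and number classes by the branch order of their
    first, i.e. smallest, set (rules 3 and 4)."""
    ordered = []
    for cls in classes:
        sets_sorted = sorted((tuple(sorted(s)) for s in cls),
                             key=lambda s: (len(s), s))
        ordered.append(sets_sorted)
    # Class order follows the first set of each class: fewer branches
    # first, then smaller branch numbers first.
    ordered.sort(key=lambda cls: (len(cls[0]), cls[0]))
    return ordered


# Invented example: three classes given in arbitrary order.
raw = [[{4, 2}, {3}], [{1}], [{5, 1}, {2, 6, 7}]]
ranked = order_classes(raw)
```

After ordering, class 1 is the one whose smallest set is (1,), and within each class the smallest sets come first.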
Analogue circuit fault diagnosis has proved to be a very difficult problem. Fault
diagnosis of non-linear circuits is even more difficult owing to the challenge of fault
modelling and the complexity of non-linear circuits. There has been less work in the
literature on fault diagnosis of non-linear circuits than on that of linear circuits. As
practical circuits are always non-linear (devices such as diodes and transistors are
non-linear), the development of efficient methods for fault diagnosis of non-linear
circuits is particularly important. Sun and co-workers [32–39] have conducted
extensive research into fault diagnosis of non-linear circuits and in References 32–39
they have proposed a series of linear methods. This section summarizes some of the
results.
There is no direct single value that can represent the state of a non-linear component
as there is for a linear one. A non-linear component often contains several parameters, and any
parameter-based method may result in too many non-linear equations. Fortunately,
the fault compensation source method can easily be extended to non-linear circuits.
Any two-terminal non-linear component can be represented by a single compensation
source, no matter how many parameters appear in its characteristic function. The
resulting diagnosis equations are exactly (not approximately) linear, although the
circuit is non-linear, thus reducing computation time and memory. This is obviously
an attractive feature.
In the fault modelling of non-linear circuits [32–35], the key problem is that a
change in the operating point of a non-linear component could be caused either by a
fault in the component itself or by faults in other components. The fault model must be able
to tell the real fault from the fake ones. Whether or not a non-linear component is faulty
should be decided by whether the actual operating point of the component falls on
the nominal characteristic curve; this distinguishes a real fault from the fake fault due
to the operating-point movement of the non-linear component caused by faults in other
components.
Consider a nominal non-linear resistive component with the characteristic
i = g(v) (1.24)
If the non-linear component is not faulty, the actual branch current and voltage due
to faults in the circuit will satisfy
i + Δi = g(v + Δv) (1.25)
that is, the actual operating point remains on the nominal non-linear curve, although
moved from the nominal point (i, v); otherwise, the component is faulty, since the real
operating point shifts away from the nominal non-linear curve, which means that the
non-linear characteristic has changed.
Introducing
Δx = i + Δi − g(v + Δv) (1.26)
we can then use Δx to determine whether or not the non-linear component is faulty. If
Δx is not equal to zero, the component is faulty; otherwise it is not faulty, according to
Equation (1.25). Using Equations (1.26) and (1.24), we can write Δi = g(v + Δv) −
g(v) + Δx and further
Δi = yΔv + Δx (1.27)
where
y = [g(v + Δv) − g(v)]/Δv (1.28)
which is the incremental conductance at the nominal operating point and can be
calculated when Δv is known.
Equation (1.27) can be seen as a branch equation where Δi and Δv are the branch
current and voltage, respectively; y is the branch admittance and Δx is a current source.
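For a specific non-linear characteristic, Equations (1.24) to (1.28) reduce to simple arithmetic. The sketch below uses a diode-like exponential law for g(v) as a stand-alone example; the saturation current and thermal voltage values are illustrative only.

```python
import math

IS, VT = 1e-12, 0.025  # illustrative diode parameters


def g(v):
    """Nominal non-linear characteristic i = g(v)."""
    return IS * (math.exp(v / VT) - 1.0)


def fault_indicator(i, v, di, dv):
    """Delta-x of Equation (1.26): zero iff the actual operating
    point (i + di, v + dv) still lies on the nominal curve."""
    return (i + di) - g(v + dv)


def incremental_conductance(v, dv):
    """y of Equation (1.28): the exact incremental conductance,
    not the differential conductance dg/dv."""
    return (g(v + dv) - g(v)) / dv


# Fault-free component: operating point moves along the nominal curve.
v, dv = 0.6, 0.02
i = g(v)
di = g(v + dv) - i
dx = fault_indicator(i, v, di, dv)   # ~0, so the component is not faulty
y = incremental_conductance(v, dv)   # branch equation (1.27): di = y*dv + dx
```

If the measured current deviated from the nominal curve, `dx` would be non-zero by exactly that deviation, flagging the component as faulty.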
Fault diagnosis of linear and non-linear analogue circuits 23
Suppose that the circuit has c non-linear components and that all non-linear components
are two-terminal voltage-controlled non-linear resistors with the characteristic
i = g(v). For all non-linear components, we can write
ΔIc = Yc ΔVc + ΔXc (1.29)
Equation (1.29) can be treated as the branch equation corresponding to the non-linear
branches in the fault incremental circuit, where Yc is the branch admittance
matrix with individual elements given by Equation (1.28), ΔIc the branch current vector,
ΔVc the branch voltage vector and ΔXc the current source vector with individual
elements given by Equation (1.26).
Suppose that the CUT has b linear resistor branches. The branch equation of the
linear branches in the fault incremental circuit was derived in Section 1.2.1 and is
rewritten with new equation numbers for convenience:
ΔIb = Yb ΔVb + ΔXb (1.30)
ΔXb = ΔYb (Vb + ΔVb) (1.31)
Assume that the circuit to be considered has a branches, of which b branches are
linear and c non-linear, with a = b + c. The components are numbered in the order of
linear to non-linear elements. The branch equation of the fault incremental circuit can
be written by combining Equations (1.30) and (1.29) as
ΔIa = Ya ΔVa + ΔXa (1.32)
where ΔIa = [ΔIb^T, ΔIc^T]^T, ΔVa = [ΔVb^T, ΔVc^T]^T, ΔXa = [ΔXb^T, ΔXc^T]^T and Ya =
diag{Yb, Yc}.
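The composition of Equation (1.32) is plain block stacking. A minimal sketch, using nested lists as matrices and invented numeric blocks:

```python
def block_diag(Yb, Yc):
    """Ya = diag{Yb, Yc}: the linear-branch and non-linear-branch
    admittance matrices on the diagonal, zeros elsewhere."""
    b, c = len(Yb), len(Yc)
    Ya = [[0.0] * (b + c) for _ in range(b + c)]
    for i in range(b):
        for j in range(b):
            Ya[i][j] = Yb[i][j]
    for i in range(c):
        for j in range(c):
            Ya[b + i][b + j] = Yc[i][j]
    return Ya


def stack(xb, xc):
    """Xa = [Xb^T, Xc^T]^T for vectors stored as flat lists."""
    return list(xb) + list(xc)


# One linear branch (b = 1) and two non-linear branches (c = 2),
# with invented admittance values.
Ya = block_diag([[2.0]], [[0.5, 0.0], [0.0, 0.1]])
Xa = stack([0.0], [1e-3, 0.0])
```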
Note that the fault incremental circuit is linear, although the nominal and faulty
circuits are non-linear. This makes fault diagnosis of non-linear circuits much simpler,
with the same complexity as that of linear circuits. Also note that no approximation is
made during the derivation, so the linearization is exact. The traditional
linearization method uses the differential conductance at the nominal operating point,
dg(v)/dv, causing an inaccuracy in the calculated Δx and thus an inaccuracy in the
fault diagnosis. As for fault diagnosis of linear circuits based on the fault incremental
circuit, we can derive branch-, node- and cutset-fault diagnosis equations of
non-linear circuits based on the formulated fault incremental circuit.
Non-linear controlled sources and three-terminal devices can also be modelled.
For example, for a non-linear VCCS with i1 = gm(v2), we have Δi1 = ym Δv2 + Δx1
in the fault incremental circuit, where ym = [gm(v2 + Δv2) − gm(v2)]/Δv2 and Δx1 =
i1 + Δi1 − gm(v2 + Δv2). This remains a VCCS, with the incremental current of the
controlled branch, the incremental voltage of the controlling branch and a compensation
current source in the controlled branch.
Suppose that a three-terminal non-linear device, with one terminal as common,
has the characteristic functions
i1 = g1(v1, v2)
i2 = g2(v1, v2)
faulty nodes and cutsets by non-zero elements in ΔXn and ΔXt, respectively. Further
determination of the faulty branches (linear or non-linear) can be conducted based on
ΔXn = A ΔXa and ΔXt = D ΔXa for node- and cutset-fault diagnosis, respectively.
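The node-level computation above is just a matrix-vector product followed by a non-zero test. A toy sketch with an invented reduced incidence matrix for a 3-node, 4-branch circuit:

```python
def matvec(A, x):
    """Plain matrix-vector product over nested lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]


def faulty_nodes(A, dXa, tol=1e-12):
    """Nodes flagged by non-zero elements of delta-Xn = A * delta-Xa."""
    dXn = matvec(A, dXa)
    return [n for n, v in enumerate(dXn) if abs(v) > tol]


# Invented reduced incidence matrix: rows are nodes, columns branches.
A = [[1, -1, 0, 0],
     [0, 1, -1, 0],
     [0, 0, 1, -1]]
dXa = [0.0, 0.0, 2e-3, 0.0]   # branch 2 (0-indexed) carries a fault source
nodes = faulty_nodes(A, dXa)  # the two nodes incident to that branch
```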
In the above we have assumed that all non-linear branches are measurable; thus,
non-linear components are connected among accessible nodes and ground and are in
the chosen tree as measurable tree branches. This may limit the number of non-linear
components, and when choosing test nodes we need to select those nodes connected
by non-linear components. This may not be a serious problem, since in practical
electronic circuits and systems linear components are dominant; there are usually only
very few non-linear components in an analogue circuit. It is noted that the coefficients
of all diagnosis equations are determined after test, as Yc can only be calculated
after ΔVc is measured. However, this is a rather simple computation. Also,
partitioning the node admittance matrix or the cutset admittance matrix according to
accessible and inaccessible nodes or tree branches, only the block of dimension
m × m or p × p corresponding to the accessible nodes or tree branches is related to Yc.
The other three blocks can be obtained before test because they do not contain Yc.
Using block-based matrix manipulation, the contribution of the m × m or p × p block
can be moved to the right-hand side to join the incremental accessible node or tree
branch voltage vector for after-test computation, and thus the main coefficient matrix
on the left-hand side of the diagnosis equations can still be computed before test [35].
In the next section, we will further discuss other ways of dealing with non-linear
components.
col(ΔYk1) = {diag[Ak1^T (Vn + Tnm ΔVm)]}^(-1) Wk1×k Zmk^L ΔVm (1.35)
where Wk1×k = [Uk1×k1, Ok1×k2], Uk1×k1 is the identity matrix of dimension k1 × k1 and
Ok1×k2 is the k1 × k2 zero matrix. The meanings of the other symbols, including Zmk^L
and Tnm, are the same as those in Section 1.2.4.
1.4.3.2 Quasi-fault incremental circuit and fault diagnosis [33, 34, 36]
We consider two cases here. The first case is that we use the alternative model for
all non-linear branches, irrespective of whether or not a non-linear component is
measurable. On the basis of Equations (1.38) and (1.37), we can write the branch
equation for all non-linear branches as
The overall branch equation of the fault incremental circuit can be obtained by
combining Equations (1.30) and (1.39) as
We call the transformed fault incremental circuit of Equation (1.41) the quasi-fault
incremental circuit. We can derive branch-, node- and cutset-fault diagnosis equations
on the basis of the quasi-fault incremental circuit. Taking branch-fault diagnosis as an
example, after determination of ΔXa we cannot immediately judge the state of the
non-linear components. However, ΔVc can be calculated and thus Yc. Then we can
calculate ΔXc from Equation (1.40) and use it to decide if a non-linear component is
faulty.
Because Yc can be chosen before test, the diagnosis equation coefficients can be
obtained before test. This reduces computation after test and is good for online
test. Because all non-linear branches will be in the faulty branch set, and any node
or cutset that contains a non-linear component will behave as a faulty one whether
or not the non-linear component is faulty, the number of possible faulty branches,
nodes and cutsets for branch-, node- and cutset-fault diagnosis will increase. This
may require more test nodes for a circuit that contains more non-linear components.
In the search for faults, only the k-branch sets that contain all non-linear components,
the k-node sets containing all nodes connected by non-linear components and the
k-cutset sets containing all cutsets with non-linear components need to be considered.
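The restriction of the fault search can be sketched with `itertools`: only those k-branch sets that include every non-linear branch need to be enumerated. The branch numbering below is invented for illustration.

```python
from itertools import combinations


def candidate_k_branch_sets(branches, nonlinear, k):
    """Enumerate the k-branch sets that contain all non-linear
    branches, as required when alternative models are used for
    every non-linear component."""
    nl = set(nonlinear)
    if len(nl) > k:
        return []
    rest = [b for b in branches if b not in nl]
    return [tuple(sorted(nl | set(extra)))
            for extra in combinations(rest, k - len(nl))]


# 6 branches, branches 5 and 6 non-linear, searching for k = 3 faults:
# only 4 candidate sets remain, instead of all C(6, 3) = 20.
cands = candidate_k_branch_sets(range(1, 7), [5, 6], 3)
```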
The overall branch equation of the fault incremental circuit can be obtained by
combining Equations (1.30), (1.42) and (1.43) as
After determining ΔXa, for measurable non-linear branches we can use ΔXc1
to determine directly whether the measurable non-linear components are faulty, while
for unmeasurable non-linear components we can further calculate ΔXc2
from Equation (1.44) and then determine the real state of the unmeasurable non-linear
components. It is noted that after k-fault diagnosis, ΔVc2 (and any branch voltages)
can be calculated and thus Yc2 can be computed. The diagnosis equation coefficients
can thus be obtained after test. A fault-free non-linear component that is not measurable
makes the branches, nodes and cutsets that contain it look as if faulty. Therefore, in the
fault search, only the k-branch sets that contain all unmeasurable non-linear
components, the k-node sets containing all nodes connected by unmeasurable non-linear
components and the k-cutset sets containing all cutsets with unmeasurable non-linear
components need to be considered. The overall performance of this mixed method
should lie between that of using original models for all non-linear components and that
of using alternative models for all non-linear components.
The relations between the three types of incremental circuit are summarized in
Reference 39.
The following areas and methods of fault diagnosis of analogue circuits have received
particular attention in recent years, and some promising results have been achieved. We
briefly summarize them here, leaving the details to be covered in the following three
chapters.
integer-coded dictionary. The minimum test set is found by using the entropy index
of test points.
References 42 and 43 have studied the testability of analogue circuits in the
frequency domain using the fault observability concept. Steady-state frequency responses
are used. Methods for choosing input frequencies and test nodes to enhance the fault
observability of the CUT are proposed. The methods proposed in References 42 and 43 are
based on differential sensitivity and incremental sensitivity analysis, respectively. The
differential-sensitivity-based method is suitable for handling soft faults, while
for large-deviation and hard faults, accuracy increases with the use of incremental
sensitivity.
References 44 and 45 have investigated the testability of analogue circuits in
the time domain. Transient time responses are used. The test generation method
proposed in Reference 44 is targeted towards detecting specification violations caused
by parametric faults. The relationship between circuit parameters and circuit functions
is used for deriving optimum transient tests. An algorithm for generating the optimum
transient stimulus and for determining the time points at which the output needs to be
sampled is presented. The search for the optimum input stimulus and sampling points
is formulated as an optimization problem in which the parameters of the stimulus (the
amplitude and pulse widths for pulse trains, the slope for a ramp stimulus) are optimized.
The test approach is demonstrated by deriving the optimum piecewise linear (PWL)
input waveform for transient testing. The PWL input stimulus is used because any
general transient waveform can be approximated by PWL segments.
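A PWL stimulus is fully described by its breakpoints, and sampling it amounts to linear interpolation between them. A minimal sketch; the breakpoint values are invented:

```python
from bisect import bisect_right


def pwl(breakpoints, t):
    """Evaluate a piecewise-linear source defined by (time, value)
    breakpoints at time t; held constant outside the defined span."""
    times = [bp[0] for bp in breakpoints]
    if t <= times[0]:
        return breakpoints[0][1]
    if t >= times[-1]:
        return breakpoints[-1][1]
    i = bisect_right(times, t) - 1          # segment containing t
    (t0, v0), (t1, v1) = breakpoints[i], breakpoints[i + 1]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)


# Invented PWL stimulus starting at (0, 0): ramp up, hold, ramp down.
stim = [(0.0, 0.0), (1e-3, 5.0), (2e-3, 5.0), (3e-3, 0.0)]
samples = [pwl(stim, t) for t in (0.5e-3, 1.5e-3, 2.5e-3)]
```

An optimizer, as in Reference 44, would adjust the breakpoint values and the sampling instants; this sketch only shows the waveform representation itself.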
In Reference 45 a method of selecting transient and a.c. stimuli has been presented
based on genetic algorithms and wavelet packet decomposition. The method minimizes
the ambiguity of faults in the CUT. It also reduces memory and computation
costs because no matrix calculation is required in the optimization. The stimuli considered
are PWL transient and a.c. sources. A PWL source is defined by
the given time interval between two neighbouring inflexion points, the number of inflexion
points (the first point being (0, 0)) and the magnitude of each inflexion point,
which can be varied within a range. An a.c. source is defined by the
test frequency and the corresponding magnitude. The frequency can be changed within a
range, with the first test frequency equal to zero (d.c. test), and the total number
of test frequencies can be chosen. Using wavelet packet decomposition to formulate
the objective function and genetic algorithms to optimize it, we can obtain the
magnitudes of the inflexion points of transient PWL sources or the values of the test
frequencies of a.c. sources.
For fault diagnosis of large-scale analogue circuits, the decomposition and hierarchical
approach has attracted considerable attention in recent years [35, 57–61]. Some early work
on fault diagnosis of large-scale analogue circuits is based on the decomposition of circuits
and verification of certain KCL equations [58]. This method divides the CUT into a number
of subcircuits based on nodal decomposition and requires that the measurement nodes are the
decomposition nodes. Branch decomposition and branch-node mixed decomposition
methods can also be used. A simple method for cascaded analogue systems has also
been proposed [59]. This method first decomposes a large-scale circuit into a cascaded
structure and then verifies the invariance of simple voltage ratios of the different stages to
isolate the faulty stage(s). This method has the minimum computation cost both after
and before test, as it only needs to calculate the voltage ratios of accessible nodes and
does not need to solve any linear or non-linear equations. Another method is to first
divide the CUT into a number of subcircuits, then find the equivalent circuits of the
subcircuits and use k-fault diagnosis methods to locate the faults of the large-scale
circuit by diagnosing the equivalent circuits [35]. Here an m-terminal subcircuit is
equivalently described by (m − 1) branches, no matter how complex the inside of the
subcircuit. If any of the (m − 1) equivalent branches is faulty, then the subcircuit is
faulty. More recently, hierarchical methods based on component connection models
[57] have been proposed [60, 61]. The application of hierarchical techniques to the
fault diagnosis of large-scale analogue circuits will be reviewed in Chapter 4.
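The voltage-ratio check for cascaded systems [59] amounts to one comparison per stage. A sketch with invented stage gains and an invented tolerance value:

```python
def faulty_stages(nominal_v, measured_v, tol=0.05):
    """Compare the stage-to-stage voltage ratios of a cascaded system
    with their nominal values; a stage whose ratio deviates beyond
    the tolerance is flagged as faulty.  The voltage lists hold the
    accessible node voltages between stages, input node first."""
    flagged = []
    for stage in range(len(nominal_v) - 1):
        nom = nominal_v[stage + 1] / nominal_v[stage]
        meas = measured_v[stage + 1] / measured_v[stage]
        if abs(meas - nom) > tol * abs(nom):
            flagged.append(stage + 1)   # stages numbered from 1
    return flagged


# Three-stage cascade with nominal gains 2, 0.5 and 4 (invented values).
nominal = [1.0, 2.0, 1.0, 4.0]
measured = [1.0, 2.0, 0.6, 2.4]   # stage 2 gain has dropped to 0.3
stages = faulty_stages(nominal, measured)
```

Note that stage 3 is not flagged even though its output voltage is wrong: its own ratio (2.4/0.6 = 4) is still nominal, which is exactly the invariance the method exploits.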
1.6 Summary
A review of fault diagnosis techniques for analogue circuits, with a focus on fault
verification methods, has been presented. A systematic treatment of the k-fault
diagnosis theory and methods for both linear and non-linear circuits, as well as the
class-fault diagnosis technique, has been given. The fault incremental circuit for
both linear and non-linear circuits has been introduced, based on which a coherent
discussion of the different fault diagnosis methods has been achieved.
The k-fault diagnosis method involves only linear equations after test and requires
only a few accessible nodes. Both algebraic and topological methods have been presented
in detail for fault verification and testability analysis in branch-fault diagnosis.
A bilinear method for k-component value determination and a multiple-excitation
method for parameter identification in node-fault diagnosis have been described. The
cutset-fault diagnosis method has also been discussed; it is more flexible and less
restrictive than the branch- and node-fault diagnosis methods owing to the selectability
of trees in a circuit.
A class-fault diagnosis theory for fault location has been introduced, which comprises
both algebraic and topological classification methods. The class-fault diagnosis
method classifies branch sets, node sets or cutset sets according to an equivalence
relation. The faulty class can be uniquely identified by checking the consistency of any set
in a class. This method has no structural restriction and classification can be carried
out before test. Class-fault diagnosis can be viewed as a combination of the fault
dictionary and fault verification methods.
Linear methods and special considerations for fault diagnosis of non-linear circuits
have been discussed. Faults in non-linear circuits are accurately modelled by
compensation current sources and the linear fault incremental circuit has been
constructed. Linear equations for fault diagnosis of non-linear circuits can be derived
based on the fault incremental circuit. All the k-fault diagnosis and class-fault diagnosis
methods developed for linear circuits have been extended to non-linear circuits using
the fault incremental circuit.
Some of the latest advances in fault diagnosis of analogue circuits have been reviewed,
including the selection and design of test points and test signals. The next three chapters
will continue the discussion of fault diagnosis of analogue circuits with detailed
coverage of three topical fault diagnosis methods: the symbolic function, neural
network and hierarchical methods in Chapters 2, 3 and 4, respectively.
1.7 References
12 Lin, P.-M., Elcherif, Y.S.: Analog circuits fault dictionary: new approaches and implementation, International Journal of Circuit Theory and Applications, 1985;13 (2):149–72
13 Berkowitz, R.S.: Conditions for network-element-value solvability, IRE Transactions on Circuit Theory, 1962;6 (3):24–9
14 Navid, N., Wilson, A.N. Jr.: A theory and algorithm for analog circuit fault diagnosis, IEEE Transactions on Circuits and Systems, 1979;26 (7):440–57
15 Trick, T.N., Mayeda, W., Sakla, A.A.: Calculation of parameter values from node voltage measurements, IEEE Transactions on Circuits and Systems, 1979;26 (7):466–73
16 Roytman, L.M., Swamy, M.N.S.: One method of the circuit diagnosis, Proceedings of IEEE, 1981;69 (5):661–2
17 Bandler, J.W., Biernacki, R.M., Salama, A.E., Starzyk, J.A.: Fault isolation in linear analog circuits using the L1 norm, Proceedings of IEEE International Symposium on Circuits and Systems, 1982, pp. 1140–3
18 Biernacki, R.M., Bandler, J.W.: Multiple-fault location in analog circuits, IEEE Transactions on Circuits and Systems, 1981;28 (5):361–6
19 Starzyk, J.A., Bandler, J.W.: Multiport approach to multiple fault location in analog circuits, IEEE Transactions on Circuits and Systems, 1983;30 (10):762–5
20 Trick, T.N., Li, Y.: A sensitivity based algorithm for fault isolation in analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, 1983, pp. 1098–1101
21 Sun, Y.: Bilinear relations for fault diagnosis of linear circuits, Proceedings of CSEE and IEEE Beijing Section National Conference on CAA and CAD, Zhejiang, 1988
22 Sun, Y.: Determination of k-fault-element values and design of testability in analog circuits, Journal of Electronic Measurement and Instrument, 1988;2 (3):25–31
23 Sun, Y., He, Y.: Topological conditions, analysis and design for testability in analogue circuits, Journal of Hunan University, 2002;29 (1):85–92
24 Huang, Z.F., Lin, C., Liu, R.W.: Node-fault diagnosis and a design of testability, IEEE Transactions on Circuits and Systems, 1983;30 (5):257–65
25 Sun, Y.: Faulty-cutset diagnosis of analog circuits, Proceedings of CIE 3rd National Conference on CAD, Tianjin, 1988, pp. 3-14–3-18
26 Sun, Y.: Theory and algorithms of solving a class of linear algebraic equations, Proceedings of CSEE and IEEE Beijing Section National Conference on CAA and CAD, Zhejiang, 1988
27 Togawa, Y., Matsumato, T., Arai, H.: The TF-equivalence class approach to analog fault diagnosis problems, IEEE Transactions on Circuits and Systems, 1986;33 (10):992–1009
28 Sun, Y.: Class-fault diagnosis of analog circuits: theory and approaches, Journal of China Institute of Communications, 1990;11 (5):23–8
29 Sun, Y.: Faulty class identification of analog circuits, Proceedings of CIE 3rd National Conference on CAD, Tianjin, 1988, pp. 3-40–3-43
48 Starzyk, J.A., Pang, J., Manetti, S., Piccirilli, M.C., Fedi, G.: Finding ambiguity groups in low testability analog circuits, IEEE Transactions on Circuits and Systems, 2000;47 (8):1125–37
49 Stenbakken, G.N., Souders, T.M., Stewart, G.W.: Ambiguity groups and testability, IEEE Transactions on Circuits and Systems, 1989;38 (5):941–7
50 Spina, R., Upadhyaya, S.: Linear circuit fault diagnosis using neuromorphic analyzers, IEEE Transactions on Circuits and Systems-II, 1997;44 (3):188–96
51 Aminian, F., Aminian, M., Collins, H.W.: Analog fault diagnosis of actual circuits using neural networks, IEEE Transactions on Instrumentation and Measurement, 2002;51 (3):544–50
52 Aminian, M., Aminian, F.: Neural-network based analog circuit fault diagnosis using wavelet transform as preprocessor, IEEE Transactions on Circuits and Systems-II, 2000;47 (2):151–6
53 He, Y., Ding, Y., Sun, Y.: Fault diagnosis of analog circuits with tolerances using artificial neural networks, Proceedings of IEEE APCCAS, Tianjin, China, 2000, pp. 292–5
54 He, Y., Tan, Y., Sun, Y.: A neural network approach for fault diagnosis of large scale analog circuits, Proceedings of IEEE ISCAS, Arizona, USA, 2002, pp. 153–6
55 He, Y., Tan, Y., Sun, Y.: Wavelet neural network approach for fault diagnosis of analog circuits, IEE Proceedings Circuits, Devices and Systems, 2004;151 (4):379–84
56 He, Y., Sun, Y.: A neural-based L1-norm optimization approach for fault diagnosis of nonlinear circuits with tolerances, IEE Proceedings Circuits, Devices and Systems, 2001;148 (4):223–8
57 Wu, C.C., Nakazima, K., Wei, C.L., Saeks, R.: Analog fault diagnosis with failure bounds, IEEE Transactions on Circuits and Systems, 1982;29 (5):277–84
58 Salama, A.E., Starzyk, J.A., Bandler, J.W.: A unified decomposition approach for fault location in large scale analog circuits, IEEE Transactions on Circuits and Systems, 1984;31 (7):609–22
59 Sun, Y.: Fault diagnosis of large-scale linear networks, Journal of Dalian Maritime University, 1985;11 (3); also Proceedings of CIE National Conference on LSICAD, Huangshan, 1985, pp. 95–101
60 Ho, C.K., Shepherd, P.R., Eberhardt, F., Tenten, W.: Hierarchical fault diagnosis of analog integrated circuits, IEEE Transactions on Circuits and Systems, 2001;48 (8):921–9
61 Sheu, H.T., Chang, Y.H.: Robust fault diagnosis for large-scale analog circuits with measurement noises, IEEE Transactions on Circuits and Systems, 1997;44 (3):198–209
62 Sun, Y.: Some theorems on the shift of nonideal sources and circuit equivalence, Electronic Science and Technology, 1987;17 (5):18–20.
Chapter 2
Symbolic function approaches for analogue
fault diagnosis
Stefano Manetti and Maria Cristina Piccirilli
2.1 Introduction
[Figure: circuit schematic with symbolic component labels (Rk, Cj, Lm, Rj, gn) alongside a table of component values, e.g. Ri = 10 k, Cj = 5 pF, Rk = 15 k, Lm = 3.3 mH, gn = 3, ...]
........
approach is a natural choice, because an I/O relation, in which the component values
are the unknowns, is properly represented by a symbolic I/O relation.
The chapter is organized as follows. In Section 2.2 a brief review of symbolic
analysis is reported. Section 2.3 is dedicated to symbolic procedures for testability
analysis, that is, testability evaluation and ambiguity group determination. As will
be shown, the testability and ambiguity group concepts are of fundamental importance
for determining the solvability degree of the fault diagnosis problem at the global
level and at the component level respectively, once the test points have been selected.
Thus testability analysis is essential both to the designer, who must know which test
points to make accessible, and to the test engineer, who must know how many and which
parameters can be uniquely isolated by the planned tests.
In Section 2.4 fault diagnosis procedures based on the use of symbolic techniques
are reported.
Both Sections 2.3 and 2.4 refer to analogue linear or linearized circuits. This
is not so severe a restriction because the analogue part of modern complex systems
is almost entirely linear, while the non-linear functions are moved toward the digital
part [1]. However, in Section 2.5 a brief description of a possible use of symbolic
methods for testability analysis and fault diagnosis of non-linear analogue circuits is
reported.
F(s) = N(s, p1, ..., pm)/D(s, p1, ..., pm) = [Σi ai(p1, ..., pm) s^i] / [Σi bi(p1, ..., pm) s^i] (2.1)
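Equation (2.1) can be made concrete with a first-order RC low-pass filter, F(s) = 1/(1 + sRC): each coefficient a_i, b_i is a function of the component parameters p. A hand-rolled sketch, with no symbolic library; the circuit choice and component values are illustrative only.

```python
def rc_lowpass_coeffs(params):
    """Coefficients a_i(p) and b_i(p) of F(s) = 1 / (1 + s*R*C),
    indexed by the power of s, in the spirit of Equation (2.1)."""
    R, C = params["R"], params["C"]
    a = {0: 1.0}               # numerator  N(s, p) = 1
    b = {0: 1.0, 1: R * C}     # denominator D(s, p) = 1 + s*R*C
    return a, b


def evaluate(coeffs, s):
    """Evaluate a polynomial given as {power: coefficient} at s."""
    return sum(c * s ** i for i, c in coeffs.items())


p = {"R": 10e3, "C": 5e-12}    # illustrative component values
a, b = rc_lowpass_coeffs(p)
F = evaluate(a, 1j * 1e6) / evaluate(b, 1j * 1e6)   # F at s = j*10^6
```

A symbolic simulator such as SAPWIN produces exactly this parameter-to-coefficient mapping automatically for an arbitrary network, rather than from a hand-derived formula.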
1. Algebraic methods:
   - numerical interpolation methods
   - parameter extraction methods
   - determinant expansion methods.
2. Topological methods:
   - tree enumeration methods:
     - two-graph method
     - directed-tree enumeration method
   - flowgraph methods:
     - signal-flow-graph method
     - Coates-flow-graph method.
The algebraic methods are based on the idea of generating the symbolic circuit
equations using symbolic manipulation of algebraic expressions, and directly solving
the linear system that describes the circuit behaviour, obtained, for example, using
the modified nodal analysis (MNA) technique. Several computer programs have been
developed along these lines in the past. Interesting results have been obtained, in
particular, using determinant expansion methods.
The topological methods are based, essentially, on the enumeration of certain
subgraphs of the circuit graph. Among these methods, the two-graph method is
particularly efficient, mainly because it intrinsically does not generate cancelling
terms. The presence of cancelling terms can produce a severe overhead in computation
time, owing to the post-processing needed to eliminate the cancellations.
The basic two-graph method works only on circuits that contain resistors, capacitors,
inductors and voltage-controlled current sources, but all the other circuit elements
can be included using simple preliminary network transformations.
are available. The program can produce symbolic network functions in which each
component can appear with its symbolic name or with a numerical value. The graphical
post-processor is able to show the network function and to plot gain, phase, delay,
pole and zero positions, and time-domain step and impulse responses. The program can be
freely downloaded at the address http://cirlab.det.uni.it/SapWin.
The symbolic expressions generated by SAPWIN are also saved, in a particular
format, in a binary file, which can constitute an interface to other programs. Over
the past years, several applications have been developed using SAPWIN as a symbolic
simulator engine, such as symbolic sensitivity analysis, transient analysis of power
electronic circuits, testability evaluation and circuit fault diagnosis. All the programs
presented in this chapter are based on the use of SAPWIN.
In general, a method for locating a fault in an analogue circuit consists in measuring
all its internal parameters and comparing the measured values with their nominal
working ranges. This kind of measurement, as can be imagined, is not straightforward
and, often, it is not possible to characterize all the parameters. The possibility of
actually accessing this information depends on which kinds of measurements are made
on the circuit, as well as on the internal topology of the circuit itself. The selection
of the set of measurements, that is, of the test points, is therefore an essential problem
in fault diagnosis applications, because not all the possible test points can be reached
easily. For example, it is usually very difficult to measure currents without
breaking connections and, for complex circuits, a great number of measurements may
not be economically convenient. In other words, test point selection must take into
account practical measurement problems that are strictly tied to the technology used
and to the application field of the circuit under consideration. So, in order to perform
test point selection, it is necessary to have a quantitative index with which to compare
different possible choices. The testability measure concept meets this requirement.
Testability is strictly tied to the concept of network-element-value solvability,
which was first introduced by Berkowitz [8]. Subsequently, a very useful testability
measure was introduced by Saeks and co-workers [9-12]. Other definitions have been
presented in later years (see, for example, References 13-15); there is, then,
no universal definition of analogue testability. However, the Saeks definition has
been the most widely used [16-19], because it provides a well-defined quantitative
measure of testability. In fact, once a set of test points has been selected, by representing
the circuit under test (CUT) through a set of equations that are non-linear with respect to
the component parameters, the testability definition gives a measure of the solvability of
these equations and indicates the ambiguity resulting from an attempt to solve such
equations in a neighbourhood of almost any failure. Therefore, this testability measure
allows us to know a priori whether a unique solution of the fault diagnosis problem exists.
Furthermore, if this solution does not exist, it gives a quantitative measure of how far
we are from it, that is, how many components cannot be diagnosed with the given test
point set.
42 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
The fault diagnosis equations are obtained from Equation (2.2) by applying the
superposition principle and have the following form:

\[
h_i^{(j)}(p, s) = \frac{y_i^{(j)}(p, s)}{x_j(s)} = (-1)^{i+j}\,\frac{\det A_{ij}(p, s)}{\det A(p, s)},
\qquad i = 1, \ldots, n_y,\; j = 1, \ldots, n_x
\tag{2.3}
\]

with A_{ij}(p, s) the minor of the matrix A(p, s), and y_i^{(j)}(p, s) the ith output due to the
contribution of input x_j only. As can easily be noted, the total number of fault
diagnosis equations is equal to the product of the number of outputs and the number of inputs.
Let \Gamma(s) = (\gamma_{rk}(s)) be the Jacobian matrix associated with the algebraic diagnosis
Equation (2.3), evaluated at a generic frequency s and at the nominal value p_0 of
the parameters. From Equation (2.3) we obtain for \gamma_{rk}(s):

\[
\gamma_{rk}(s) = (-1)^{i+j} \left.\frac{\partial}{\partial p_k}\,
\frac{\det A_{ij}(p, s)}{\det A(p, s)}\right|_{p = p_0}
\tag{2.4}
\]
This method provides a valid means for the numerical computation of testability.
However, the numerical programs obtained in this way have a very high computational
complexity. First, the calculation of the coefficients of the polynomials \gamma_{rk}
requires knowledge of the values assumed by the polynomials at at least d + 1
points, where d is the degree of the polynomial; this degree must be estimated a priori,
on the basis of the types of components present in the CUT. Therefore, for large
circuits, the numerical calculation of a considerable number of circuit sensitivities is
required. Furthermore, the program must take into account the inevitable round-off
errors introduced by the algorithm used for sensitivity computation. This problem
was partially overcome by using two different polynomial expansions (see, for example,
Reference 23). Nevertheless, for large circuits these errors can have so large a magnitude
that the obtained testability values must be considered only as an estimate of
the true testability.
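The role of the d + 1 evaluation points can be sketched as follows (a hypothetical degree-3 polynomial; numpy's polyval/polyfit stand in for the sensitivity evaluations of the actual programs):

```python
import numpy as np

# A polynomial of degree d is uniquely determined by its values at d + 1
# distinct points, which is why the numerical approach must evaluate each
# sensitivity polynomial at (at least) d + 1 frequencies.
d = 3
coeffs_true = np.array([2.0, -1.0, 0.5, 4.0])   # 2s^3 - s^2 + 0.5s + 4 (hypothetical)
s_points = np.linspace(1.0, 4.0, d + 1)         # d + 1 sample frequencies
values = np.polyval(coeffs_true, s_points)      # values "measured" at those points
coeffs_rec = np.polyfit(s_points, values, d)    # coefficients recovered by fitting
print(np.allclose(coeffs_rec, coeffs_true))     # True
```

With fewer than d + 1 points the fit is underdetermined, which is why the degree must be estimated a priori from the component types.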
with p = [p_1, p_2, \ldots, p_R]^t the vector of potentially faulty parameters, and n and m
the degrees of the numerator and denominator, respectively. The matrix B_C, of order
(m + n + 1) \times R, constituted by the derivatives of the coefficients of h(s, p) in
Equation (2.9) with respect to the R unknown parameters, is the following:
\[
B_C = \begin{bmatrix}
\dfrac{\partial a_0}{\partial p_1} & \dfrac{\partial a_0}{\partial p_2} & \cdots & \dfrac{\partial a_0}{\partial p_R} \\
\dfrac{\partial a_1}{\partial p_1} & \dfrac{\partial a_1}{\partial p_2} & \cdots & \dfrac{\partial a_1}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial a_n}{\partial p_1} & \dfrac{\partial a_n}{\partial p_2} & \cdots & \dfrac{\partial a_n}{\partial p_R} \\
\dfrac{\partial b_0}{\partial p_1} & \dfrac{\partial b_0}{\partial p_2} & \cdots & \dfrac{\partial b_0}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial b_{m-1}}{\partial p_1} & \dfrac{\partial b_{m-1}}{\partial p_2} & \cdots & \dfrac{\partial b_{m-1}}{\partial p_R}
\end{bmatrix}
\tag{2.10}
\]
As shown in Reference 25, the matrix B_C has the same rank as the previously
defined matrix B, because the rows of B are linear combinations of the rows of
B_C. Then the testability value can be computed as the rank of B_C by assigning
arbitrary values to the parameters p_i and by applying classical triangularization
methods. If the CUT is a multiple-input multiple-output system, that is, if there is more
than one fault diagnosis equation, the same result can be easily obtained. This is a
noteworthy simplification from a computational point of view, because the derivatives of
the coefficients of the fault diagnosis equations are simpler to compute than the
derivatives of the fault diagnosis equations themselves.
The described procedure has been implemented in the program SYmbolic FAult
Diagnosis (SYFAD) [33, 34], based on the software package SAPWIN [35-37].
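As an illustration of the rank computation, consider a minimal hypothetical circuit (not one of the chapter's examples): a divider in which R1 feeds the parallel of R2 and C, so that H(s) = G1/(sC + G1 + G2). Normalizing by the highest denominator coefficient leaves the identifiable coefficients G1/C and (G1 + G2)/C, whose hand-computed derivatives give B_C:

```python
import numpy as np

# Hypothetical example: H(s) = G1 / (s*C + G1 + G2); the normalized
# coefficients are a0/b1 = G1/C and b0/b1 = (G1+G2)/C.  Their derivatives
# with respect to p = (G1, G2, C), taken by hand, form B_C.
def BC(G1, G2, C):
    return np.array([
        [1/C, 0.0, -G1/C**2],            # d(G1/C)/d(G1, G2, C)
        [1/C, 1/C, -(G1+G2)/C**2],       # d((G1+G2)/C)/d(G1, G2, C)
    ])

# Testability = rank of B_C at arbitrary (non-degenerate) parameter values
T = np.linalg.matrix_rank(BC(1.0, 2.0, 3.0))
print(T)   # 2: only two of the three parameters are independently solvable
```

The rank is 2 while R = 3, so one parameter cannot be diagnosed with this single test point, consistent with the ambiguity discussion that follows.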
It should be noted that, from this procedure, it is possible to derive some necessary
conditions for a testable circuit (that is, a circuit with maximum testability),
which are very simple to apply. These necessary conditions are based on the
consideration that, for maximum testability, the matrix B_C must have a rank
equal to the number of unknown parameters, that is, equal to the number of columns.
Then, for a circuit with a given set of test points, we have the following first necessary
condition:

A necessary condition for maximum testability is that the number of coefficients in the
fault diagnosis equations be equal to or greater than the number of unknown parameters.
Another interesting necessary condition follows from the consideration that the
number of coefficients depends on the order of the network. In fact, the maximum
number of coefficients of a network function is 2N + 1 if the network is of order
N. From this consideration and from the previous necessary condition, it is possible
to determine the minimum number of fault diagnosis equations, and then of test
points, necessary for maximum testability; or, given the number of test points, it is
possible to determine the maximum number of unknown parameters compatible with
maximum testability. For the single-test-point case, we have M_p = 2N + 1, where M_p
is the maximum number of unknown parameters, that is, the maximum number of
parameters that it is possible to determine with the given fault diagnosis equation. For
the multiple-test-point case, since all the fault diagnosis equations are characterized
by the same denominator, we have M_p = N + n(N + 1), where n is the number
of fault diagnosis equations. In summary, we have the following second necessary
condition:

For a circuit of order N, with n test points, a necessary condition for maximum
testability is that the number of potentially faulty parameters be equal to or lower than
N + n(N + 1).
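The counting condition amounts to simple arithmetic; a sketch:

```python
def max_faulty_params(N: int, n: int) -> int:
    """Second necessary condition: for a circuit of order N with n test
    points (fault diagnosis equations), at most N + n*(N + 1) potentially
    faulty parameters are compatible with maximum testability; this
    reduces to 2N + 1 for a single test point."""
    return N + n * (N + 1)

print(max_faulty_params(2, 1))   # 5, i.e. 2N + 1 for a single test point
print(max_faulty_params(2, 2))   # 8: a second test point adds N + 1 parameters
```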
In general it is necessary to divide all the coefficients of the rational functions by the
coefficient of the highest-order term of the denominator, with a consequent complication
in the evaluation of the derivatives (derivative of a rational function instead of a
polynomial function). In this case an increase in computing speed can be obtained by
applying the approach presented in References 26 and 27, where the testability
evaluation is performed starting from fault diagnosis equations in which the coefficient
of the highest-order term of the denominator is different from one.
where A_i^{(l)} and B_j (i = 0, \ldots, n_l; j = 0, \ldots, m-1) are the coefficients of the fault
diagnosis equations in expression (2.11), which have been calculated in the previous
phase. The Jacobian matrix of this system coincides with the matrix B_C,
reported in Equation (2.13) for the case of K fault diagnosis equations and b_m
different from one:
\[
B_C = \begin{bmatrix}
\dfrac{\partial (a_0^{(1)}/b_m)}{\partial p_1} & \dfrac{\partial (a_0^{(1)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_0^{(1)}/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (a_{n_1}^{(1)}/b_m)}{\partial p_1} & \dfrac{\partial (a_{n_1}^{(1)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_{n_1}^{(1)}/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (a_0^{(K)}/b_m)}{\partial p_1} & \dfrac{\partial (a_0^{(K)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_0^{(K)}/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (a_{n_K}^{(K)}/b_m)}{\partial p_1} & \dfrac{\partial (a_{n_K}^{(K)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_{n_K}^{(K)}/b_m)}{\partial p_R} \\
\dfrac{\partial (b_0/b_m)}{\partial p_1} & \dfrac{\partial (b_0/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (b_0/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (b_{m-1}/b_m)}{\partial p_1} & \dfrac{\partial (b_{m-1}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (b_{m-1}/b_m)}{\partial p_R}
\end{bmatrix}
\tag{2.13}
\]
Hence, all the information provided by a Jacobian matrix with respect to its
corresponding non-linear system can be obtained from the matrix B_C.
Summarizing, independent of the fault location method used, the testability value
T = rank B_C gives information on the solvability degree of the problem, as explained
by the following:
If T is equal to the number of unknown elements, the parameter values can in
theory be uniquely determined starting from a set of measurements carried out
on the test points.
If T is lower than the number R of unknown parameters, a locally unique solution
can be determined only if R - T components are considered not faulty.
Generally T is not maximum and the hypothesis of a bounded number k of faulty
elements is made (the k-fault hypothesis), where k <= T. Then, important information is
given by the testability value: the solvability degree of the fault diagnosis problem
and, consequently, the maximum possible fault hypothesis k.
In the case of low testability and the k-fault hypothesis, at most a number of faults
equal to the testability value can be considered. However, under this hypothesis,
whatever fault location method is used, it is necessary to be able to select as potentially
faulty parameters a set of elements that represents, as well as possible, all the circuit
components. To this end, the determination of both the canonical ambiguity groups
and the surely testable group is of fundamental importance. In order to understand this
statement better, some definitions and a theorem [20] are now reported.
The matrix B_C does not give information only about the global solvability degree
of the fault diagnosis problem. In fact, by noticing that each column is relevant to a
specific parameter of the circuit and by considering the linearly dependent columns of
B_C, other information can be obtained. For example, if a column is linearly dependent
on another one, this means that a variation of the corresponding parameter
Symbolic function approaches for analogue fault diagnosis 49
Obviously the number of surely testable parameters cannot be greater than the
testability value, that is, the rank of the matrix B_C.
In Reference 20, two important theorems have been demonstrated; they can be
consolidated into a single theorem by considering that a canonical ambiguity group
having null intersection with all the other canonical ambiguity groups can itself
be considered a global ambiguity group.

Theorem 2.1 A circuit is k-fault testable if all the global ambiguity groups have
been obtained by unifying canonical ambiguity groups of order at least equal
to (k + 2).
With this kind of selection, each element belonging to the surely testable group is
representative of itself, while the elements selected for each global ambiguity group
are representative of all the elements of the corresponding global ambiguity group.
When the number k of possible simultaneous faults is chosen a priori and an optimum
set of testable components does not exist (or when, for whatever value of k, the optimum
set does not exist, as in the case of the presence of canonical ambiguity groups of the
second order), only one component has to be selected as representative of the
global ambiguity groups obtained by unifying canonical ambiguity groups of order
less than or equal to (k + 1), while for the surely testable group and for the other
global ambiguity groups steps 6 and 7 of the procedure have to be applied. If a
unique solution does not exist, by proceeding in this way we are able to choose a set
of components which represents as well as possible all the circuit elements and, in
the fault location phase, it will eventually be possible to confine the presence of faults
to well-defined groups of components belonging to global ambiguity groups.
It is important to remark that this procedure of component selection is independent
of the method used in the fault location phase. Furthermore, once the elements
representative of all the circuit components have been chosen on the basis of the
previous procedure, the isolation of the faulty components is up to the chosen fault
location method. If the selected component set is optimum, the result given by the
fault location method used can in theory be unique. Otherwise, always on the basis
of the selected components, it is possible to interpret the obtained results in the best
way, as will be shown in the following example. Finally, as the set of components
to be selected is not unique, the choice of the most suitable one could be guided by
practical considerations (for example, the set containing the highest number of
components with lower reliability) or by the features of the subsequently chosen
fault location method (algorithms using a symbolic approach, neural networks, fuzzy
analysers, etc.).
As an example, let us consider the Sallen-Key band-pass filter shown in
Figure 2.2. V_o is the chosen test point. The program SYFAD is able to yield both
testability and canonical ambiguity groups, as will be shown in the next section. In
Figure 2.3 the program results are shown. As can be seen, there are two canonical
ambiguity groups without elements in common, which can also be considered global
ambiguity groups. The first group is of the second order and, then, it is not possible
to select a set of components giving a unique solution. The surely testable group is
constituted by G1 and C1. As the testability is equal to three, we can take into account
[Figure 2.2: Sallen-Key band-pass filter with components R1-R5, C1, C2; input Vi, test point Vo. Figure 2.3: SYFAD results for the filter; testability value: 3]
at most a three-fault hypothesis, that is, a possible solution can be obtained only if
three component values are considered as unknowns. On the basis of the previous
procedure, the elements to select as representative of the circuit components are the
surely testable group components and only one component belonging to one of the
two canonical ambiguity groups. Let us suppose, for example, the situation of a single
fault. Independent of the fault location method used, if the obtained solution gives
C1 or G1 as the faulty element, we can localize the fault with certainty, because both
C1 and G1 belong to the surely testable group. If we locate as the potentially faulty
element a component belonging to the second-order canonical ambiguity group, we
can only know that there is a fault in this ambiguity group, but we cannot locate
it exactly because there is not a unique solution. Instead, if we obtain as the faulty
element a component belonging to the third-order ambiguity group, we have a unique
solution and then we can localize the fault with certainty. In other words, a fault in a
component of this group can be counterbalanced only by simultaneous faults on all
the other components of the same group. However, under the hypothesis of a single
fault, this situation cannot occur.
Figure 2.4 Flowchart of the combinatorial algorithm for canonical ambiguity group determination
Owing to round-off errors, such numerical methods can give only an estimate of the
effective rank. These numerical problems are largely overcome by the use of the
singular-value decomposition (SVD) approach, which is a powerful technique in many
matrix computations and analyses and has the advantage of being more robust to
numerical errors. The SVD approach allows us to obtain the effective numerical rank
of the matrix, taking into account round-off errors [40]. So, by exploiting the great
numerical robustness of the SVD approach, an accurate evaluation of the testability
value and an efficient procedure for canonical ambiguity group determination can be
obtained, as will be shown in the following [28].
As is known [40], a matrix B_C with m rows and n columns can be written as follows
in terms of its SVD:

\[
B_C = U \Sigma V^T
\tag{2.14}
\]

where U and V are two square matrices of order m and n, respectively, and \Sigma is
a diagonal matrix of dimension m \times n. If B_C has rank k, the first k elements \sigma_i
on the diagonal of \Sigma, called singular values, are different from zero and are ordered
as \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_k > 0. The matrix \Sigma is unique; the matrices U and V are not
unique, but are unitary. This means that they have maximum rank and their rows and
columns are orthonormal. In our case B_C is the testability matrix, so n is equal to
the number of potentially faulty parameters. As is known, the testability value does not
depend on component values [10]. Then, by assigning, for example, arbitrary values
to the circuit parameters, the numerical value of the entries of B_C can be evaluated
and, by applying the SVD, the testability value T = rank B_C can be determined as the
number of non-zero singular values.
Now, V being a unitary matrix and rank B_C = T, by multiplying both members of
Equation (2.14) by V, the following expression can be obtained:

\[
B_C V = U \Sigma = [\,U_T \Sigma_T \mid 0\,]
\tag{2.15}
\]

where U_T indicates the matrix constituted by the first T columns of U, and \Sigma_T the
square submatrix of \Sigma containing the singular values. The matrix U_T \Sigma_T has dimension
m \times T; the null submatrix 0 has dimension m \times (n - T). At this point, the following
equations can be written:

\[
B_C V_T = U_T \Sigma_T, \qquad
B_C V_{n-T} = 0_{m \times (n-T)}
\tag{2.16}
\]
where V_T indicates the matrix of dimension n \times T constituted by the first T columns
of the matrix V, and V_{n-T} the matrix of dimension n \times (n - T) constituted by the
last n - T columns of the matrix V. Recalling that the kernel of a matrix B_C (ker
B_C) of dimension m \times n is the set of vectors v such that B_C v = 0, and that rank
B_C + dim(ker B_C) = n, the dimension of ker B_C is equal to n - T if rank B_C = T.
The columns of V_{n-T} are linearly independent, because V_{n-T} is a submatrix of V.
Then, the columns of V_{n-T} constitute a basis for ker B_C. Each column of V_{n-T}
gives a linear combination of columns of B_C and, then, we can associate
each column of V_{n-T} with an ambiguity group, but we do not know whether it is canonical,
global or, possibly, a union of disjoint canonical ambiguity groups. We know only
that the vectors of dimension n, representing linear combinations of columns of B_C,
that give canonical ambiguity groups belong to ker B_C. Since the columns of V_{n-T} are a
basis for ker B_C, these vectors can certainly be generated by the columns of V_{n-T}.
Unfortunately, we do not know the canonical ambiguity groups a priori, so we do not
know what kind of linear combination of the V_{n-T} columns gives the canonical ambiguity
groups.
Now, let us consider the symmetric matrix H = V_{n-T} V_{n-T}^T, of dimension n \times n
and with entries of magnitude lower than one. It has rank equal to n - T, being the product of
two matrices of rank n - T, and derives from the product of each row of V_{n-T} with
itself and with all the other rows. By multiplying B_C by H, the following expression is
obtained:

\[
B_C H = B_C V_{n-T} V_{n-T}^T = 0_{m \times n}
\tag{2.17}
\]
Equation (2.17) means that the columns of H belong to ker B_C. Furthermore, each
row and, consequently, each column of H refers to the corresponding column of B_C,
that is, in our case, to a specific circuit parameter. In Reference 28 the following
theorem and corollaries have been demonstrated.
Theorem 2.2 If in the matrix B_C there are only disjoint canonical ambiguity groups,
they are identified by the non-zero entries of the columns of the matrix H.

Corollary 1 If in the matrix B_C there are canonical ambiguity groups with non-null
intersection, that is, there are global ambiguity groups, the matrix H provides the
disjoint global ambiguity groups.

Corollary 2 If V_{n-T}, and then H, has a null row (and, for H, also a null column), it
corresponds to a surely testable element.
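Theorem 2.2 can be checked numerically on a small hypothetical testability matrix (the values are illustrative only, not taken from the chapter's circuits):

```python
import numpy as np

# Hypothetical 2x4 testability matrix: columns 0,1 are proportional and
# columns 2,3 are proportional, giving two disjoint canonical ambiguity
# groups {p1, p2} and {p3, p4}.
BC = np.array([[1.0, 2.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 3.0]])

U, s, Vt = np.linalg.svd(BC)
T = int(np.sum(s > 1e-9 * s[0]))    # testability = number of non-zero singular values
V_nT = Vt.T[:, T:]                  # orthonormal basis of ker B_C
H = V_nT @ V_nT.T                   # H = V_{n-T} V_{n-T}^T (independent of the basis)

# Theorem 2.2: the non-zero patterns of the columns of H identify the groups
groups = {frozenset(np.flatnonzero(np.abs(H[:, j]) > 1e-9))
          for j in range(BC.shape[1])}
print(T)                            # 2
print(sorted(map(sorted, groups)))  # [[0, 1], [2, 3]]
```

Because H is the orthogonal projector onto ker B_C, its block structure does not depend on which orthonormal kernel basis the SVD happens to return.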
Furthermore, in Reference 28 it has also been shown that, if the matrix H has all
its entries different from zero, this means that one of the following conditions
occurs: there are no surely testable elements, or there is a unique global ambiguity group.
In any case, since we do not know a priori which is the situation for a given circuit,
it is necessary to apply a procedure giving the canonical ambiguity groups. If
disjoint global ambiguity groups are located in H, it is again necessary to apply
a procedure giving the canonical ambiguity groups. In practice, the procedure of
canonical ambiguity group determination ends at the evaluation of H only if H is
constituted by blocks of order two, which locate second-order canonical ambiguity
groups; otherwise it must continue.
In order to determine a canonical ambiguity group starting from the basis constituted
by the columns of V_{n-T}, it is necessary to determine a suitable vector v of
dimension n - T which, multiplied by V_{n-T}, yields the vector of dimension n
representing the canonical ambiguity group. In Reference 28 it was demonstrated that
vectors belonging to ker B_C and giving canonical ambiguity groups can be obtained
by locating the submatrices S of V_{n-T}, of dimension (n - T - 1) \times (n - T), whose
rank is equal to n - T - 1. These submatrices have a kernel of dimension one,
whose basis x (a vector with n - T entries) can be easily obtained through the SVD of
these matrices. In fact, x corresponds to the last column of the matrix V of the SVD of
these matrices. By multiplying V_{n-T} by the basis x of the kernels of all the matrices
S, canonical ambiguity groups can be obtained [28]. Each vector y = V_{n-T} x has
null entries in correspondence with the rows of the matrix S relevant to x, because x is a
basis of ker S.
The program Testability and Ambiguity Group Analysis (TAGA) [29] permits us
to determine the testability and canonical ambiguity groups of a linear analogue circuit
on the basis of the theoretical treatment reported in Reference 28 and summarized
above. It exploits symbolic analysis techniques and is based on the software
package SAPWIN. Once the symbolic network functions have been determined, the
testability matrix B_C is built, initially in symbolic form and then in numerical form, by
assigning arbitrary values to the circuit parameters. At this point the following steps
are performed:
1. SVD of the testability matrix B_C and determination of the testability value T
and of the matrix V_{n-T}.
2. Determination of the matrix H = V_{n-T} V_{n-T}^T. If the matrix H is constituted only
by blocks of order two, second-order canonical ambiguity groups are located,
then stop; otherwise go to step 3.
3. Selection of a submatrix S of V_{n-T} with n - T - 1 rows and n - T columns.
4. SVD of S. If rank S < n - T - 1, go to step 3. If rank S = n - T - 1, go to
step 5.
5. Multiplication of V_{n-T} by the vector x, basis of ker S. If the obtained vector y,
of dimension n, has non-zero entries everywhere except those relevant to the rows of S,
a canonical ambiguity group of order T + 1 has been located; then go to step 8.
If the obtained vector y has other null entries, besides those relevant to the rows
of S, a canonical ambiguity group of order lower than or equal to T has been
located; then go to step 6.
6. Insertion of the obtained canonical ambiguity group into a matrix, called the
ambiguity matrix, whose number of rows is equal to n and whose number of columns
is equal to the total number of determined canonical ambiguity groups.
7. If all the possible submatrices S have been considered, stop. Otherwise, go to
step 3, discarding the submatrices S having null rows, because they certainly
have a rank less than n - T - 1.
8. If all the possible combinations of T elements relevant to the canonical
ambiguity group of order T + 1 give testable groups of components, then go to
step 7.
The proof of the statements in step 5 is in Reference 28. Furthermore, if there
are surely testable elements in the CUT, they correspond to null rows in the ambiguity
matrix, because each row of the ambiguity matrix corresponds to a specific potentially
faulty circuit element and surely testable elements cannot belong to any canonical
ambiguity group of order at most equal to T [28].
It is important to remark that the availability of network functions in symbolic
form strongly reduces the computational effort in the determination of the entries of the
matrix B_C, because they can simply be led back to derivatives of sums of products.
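The steps above can be sketched numerically; the helper below is a simplified illustration under stated assumptions (it is not the actual TAGA implementation, and it assumes n − T ≥ 2 and a well-separated numerical rank):

```python
import numpy as np
from itertools import combinations

def taga_sketch(BC, tol=1e-9):
    """Simplified sketch of steps 1-7 (hypothetical helper): locate
    canonical ambiguity groups from the kernels of (n-T-1) x (n-T)
    submatrices S of V_{n-T}.  Assumes n - T >= 2."""
    n = BC.shape[1]
    _, s, Vt = np.linalg.svd(BC)
    T = int(np.sum(s > tol * s[0]))            # step 1: testability
    V_nT = Vt.T[:, T:]                         # basis of ker B_C
    d = n - T
    groups = set()
    for rows in combinations(range(n), d - 1): # step 3: candidate submatrix S
        S = V_nT[list(rows), :]
        _, ss, Svt = np.linalg.svd(S)
        if np.sum(ss > tol) != d - 1:          # step 4: rank must equal n-T-1
            continue                           # (also discards null-row S, step 7)
        x = Svt.T[:, -1]                       # basis of ker S (last column of V)
        y = V_nT @ x                           # step 5: candidate group vector
        groups.add(frozenset(np.flatnonzero(np.abs(y) > tol)))  # step 6
    return T, groups

# Hypothetical matrix with two disjoint second-order groups
# (columns {0,1} proportional, columns {2,3} proportional):
BC = np.array([[1.0, 2.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 3.0]])
T, groups = taga_sketch(BC)
print(T, sorted(map(sorted, groups)))   # 2 [[0, 1], [2, 3]]
```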
Let us consider, as an example, the circuit shown in Figure 2.5. The output V0 has
been chosen as the test point. In Figure 2.6 the matrices V_{n-T} and H are shown. As
can be noted, the matrix H has columns whose entries are all different from zero. Then,
the whole procedure of canonical ambiguity group determination has to be applied
and, in a very short time, the results are obtained, as shown in Figure 2.6, where the
ambiguity matrix and the canonical ambiguity groups are reported. In the ambiguity
matrix it is possible to locate three surely testable components (C1, R1, R4) and ten
canonical ambiguity groups. The computational times are very short: on a Pentium
III 500 MHz, the symbolic analysis, performed by SAPWIN, requires 70 ms and the
canonical ambiguity group determination, performed by TAGA, requires 50 ms.
[Figure 2.5: example circuit with components R1-R6, C1, C2 and operational amplifiers; output V0]
[Figure 2.6: TAGA results for the circuit of Figure 2.5 - matrices V_{n-T} and H (entries omitted); component order C1 C2 R1 R2 R3 R4 R5 R6]

Testability value: 3

Ambiguity matrix:
C1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
C2 0.00 0.00 0.00 0.00 0.00 0.00 0.67 0.72 0.67 0.61
R1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
R2 0.00 0.00 0.00 0.76 0.81 0.77 0.00 0.00 0.00 0.79
R3 0.00 0.70 0.75 0.00 0.00 0.64 0.00 0.00 0.74 0.00
R4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
R5 0.65 0.00 0.66 0.00 -0.59 0.00 0.00 0.69 0.00 0.00
R6 0.76 0.71 0.00 0.65 0.00 0.00 0.75 0.00 0.00 0.00

In past years, a noteworthy number of techniques have been proposed for the
fault diagnosis of analogue linear and non-linear networks (excellent presentations of
the state of the art in this field can be found in References 16, 41 and 42). All these
techniques can be classified into two basic groups: SBT (simulation-before-test)
techniques and SAT (simulation-after-test) techniques. Both SBT and SAT techniques
share a combination of simulations and measurements, the difference depending on
the time sequence in which they are applied. In the
former case (SBT), the CUT is simulated under different faults and, after a set of
measurements, a comparison between the actual circuit response to a set of stimuli
and the presimulation gives an estimate of how probable a given fault is. There are
many different procedures, but they often rely on constructing a fault dictionary, that
is, a prestored data set corresponding to the values of some network variables when a
given fault exists in the circuit. These techniques are especially suited to the location
of hard or catastrophic faults, for two reasons: the first is that they are generally
based on the assumption that any fault influences the large-signal behaviour of the
network; the second is that the dictionary size becomes very large in multiple
soft-fault situations.
The SAT approaches are suitable for cases where the faults perturb the small-signal
behaviour; that is, they are especially suitable for diagnosing parametric faults (that is,
deviations of parameter values beyond a given tolerance). In these methods, starting from
the measurements carried out on the selected test points, the network parameters are
reconstructed and compared with those of the fault-free network to identify the fault.
The use of symbolic methods is particularly suited to SAT techniques and, in
particular, to those based on parameter identification. This is because
SAT approaches need more computational time than SBT approaches and, using a
symbolic approach, noteworthy advantages can be reached, not only in computational
terms, but also in terms of automatically including testability analysis in the fault
diagnosis procedure; as already specified in the previous section, this is a necessary
preliminary step for whatever method of fault diagnosis.
In this section, methods of fault diagnosis based on parameter identification are
considered. In these techniques the aim is the estimation of the effective values of
the circuit parameters. To this end it is necessary to know a series of measurements
carried out on a previously selected test point set, the circuit topology and the nominal
values of the components. Once these data are known, a set of equations representing
the circuit is determined. These equations are non-linear with respect to the parameter
values, which represent the unknowns. Their solution gives the effective values of the
circuit parameters. In both the determination and the solution of the non-linear equation
set, symbolic analysis can be advantageously used, as will be shown in this section.
In parametric fault diagnosis techniques, the measurements can be either in the
frequency domain or in the time domain. Generally, the procedures based on time-domain
measurements do not exploit symbolic techniques in the fault location phase.
Nevertheless, also for these procedures, if a symbolic approach is used for testability
analysis, a considerable improvement in the quality of the results can be obtained. An
example of this kind is reported in Reference 43, where a neural network approach is
used in the fault location phase and a symbolic testability analysis is used for sizing
and training the network.
By contrast, symbolic techniques are used in parametric fault diagnosis methods
based on frequency-domain measurements, so in the following only this kind
of procedure is considered. For all the techniques presented, the quite realistic k-fault
hypothesis is made, also taking into account the component tolerances. The use of
a symbolic approach gives noteworthy advantages, not only in the phases of testability
analysis and solution of the fault diagnosis equations, but also in the search for the
best frequencies at which the measurements have to be carried out.
Since the single-fault case is the most frequent, the double-fault case less frequent and
the case of all components faulty almost impossible, in the following the procedures for
locating and estimating the faulty components under the single-fault hypothesis will be
described.
The extension of the procedure to the double-fault case, the consideration of component
tolerances and a description of the system realized for the full automation of the
procedure are reported in References 44-46, respectively.
By substituting the first equation into the second one, the following expression can
be obtained, where M and Q are numerical terms:

\[
A_j(k) = m_j(k)\,\frac{A_i(l) - q_i(l)}{m_i(l)} + q_j(k) = M A_i(l) + Q
\tag{2.23}
\]

Equation (2.23) is verified, when A_i(l) and A_j(k) are replaced with the values obtained
from the measurements, only if the potentially faulty parameter is the faulty one and the
others are really not faulty. So, by repeating this procedure for each circuit parameter, the
faulty element can be located. Furthermore, the faulty element can also be estimated
by inverting one of the equations in Equation (2.22), as, for example:

\[
p = \frac{A_i(l) - q_i(l)}{m_i(l)}
\tag{2.24}
\]
If a parameter p appears in only one equation, this means that this
equation is independent of all the others and can be used in its bilinear form for
evaluating the value of p. If this value is out of its tolerance range, the parameter p is
faulty, because a parameter p appearing in only one coefficient of the system in
Equation (2.12) certainly does not belong to any canonical ambiguity group; that is,
it is surely testable and therefore distinguishable with respect to all the other parameters.
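The location-and-estimation test of Equations (2.23) and (2.24) can be sketched as follows. The bilinear coefficients m, q of each potentially faulty parameter are assumed to be already available from the symbolic analysis; all names and numerical values are illustrative.

```python
# Sketch of the single-fault location test of Equations (2.23)-(2.24). The
# bilinear coefficients m, q are assumed known from symbolic analysis.

def locate_single_fault(candidates, measured, tol=1e-6):
    """candidates: {name: (m_i, q_i, m_j, q_j)}; measured: {name: (A_i, A_j)}.
    Returns the parameters (with their estimated fault values) satisfying (2.23)."""
    located = []
    for name, (m_i, q_i, m_j, q_j) in candidates.items():
        A_i, A_j = measured[name]
        M = m_j / m_i                          # numerical terms of Eq. (2.23)
        Q = q_j - m_j * q_i / m_i
        if abs(A_j - (M * A_i + Q)) < tol:     # Equation (2.23) is verified
            p = (A_i - q_i) / m_i              # Equation (2.24): fault value
            located.append((name, p))
    return located
```

Only a parameter whose bilinear relation is verified by both measurements is reported as faulty, together with its estimated value.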
62 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
solution. To overcome the problem of not really knowing the solution positions, a
grid of initial points is chosen as reported in Reference 34.
It is worth pointing out that the components considered to be working well, due to
their tolerances, yield a deviation in the solution for the components considered faulty,
that is, the solution is affected by the tolerances of the components considered to be
working well. The more the testability decreases (that is, the number of parameters
that cannot be considered unknowns increases), the more the error grows. In extremely
unlucky cases the tolerance effect can completely change the solution results. The
magnitude of the error depends strongly on the circuit behaviour, that is, on the network
sensitivity with respect to the circuit components. In fact, if a component that gives
a high value of sensitivity has a small deviation with respect to the nominal value, it
could produce completely wrong solutions if it is not considered unknown. However,
it is possible to affirm that high-sensitivity circuit components are often realized with
smaller tolerance intervals, in the sense that even a small deviation with respect to the
nominal value must be considered as a parametric fault.
Each solution obtained with the NewtonRaphson method gives a possible set of
faulty components.
The flow diagram of the algorithm for fault solution determination is shown in
Figure 2.7.
Multiple solutions can be present for the following reasons:
1. By solving the system with respect to any possible group of k-testable components,
we obtain several solutions, each one indicating a different possible fault situation,
that is, a different parameter group whose values are out of tolerance.
2. Owing to the system non-linearity, multiple solutions can exist for each parameter
group.
It is worth pointing out that, very often, several of the solutions are equivalent.
In fact, let B be a set of n components (n < R), with values out of tolerance, which
constitute the faulty components of one of the solutions. Assuming n < k, all the
groups of k components that include the set B will have, among their solutions, the
solution in which the components of set B are out of tolerance and the remaining k − n
are within tolerance. So, the solution of the system with respect to each combination
of k components leads to multiple equivalent solutions, one for any combination of k
testable components that includes the set B. It is then useful to synthesize all these
solutions into a unique one. In practice, the solution list can be remarkably reduced by
applying the procedure shown in the flowchart of Figure 2.8. Referring to this figure,
once the whole set of N solutions (set 1) has been determined, a set (set 2) constituted
by all the possible faulty component groups has to be built. This set is obviously
empty in the first step of the algorithm. We consider each solution iteratively and, if
its related group of faulty components and their values are different from the already
stored ones (within the limits of given tolerances), we add it to set 2.
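The reduction procedure of Figure 2.8 can be sketched as follows. The representation of a solution as a dictionary of faulty components and the 5 per cent comparison tolerance are illustrative assumptions.

```python
# Illustrative sketch of the fault-list reduction of Figure 2.8: a solution is
# added to the reduced set ("set 2") only if no already-stored group has the
# same faulty components with the same values, within a given tolerance.

def reduce_solutions(solutions, tol=0.05):
    reduced = []                           # "set 2", empty at the first step
    for sol in solutions:                  # iterate over the N solutions of "set 1"
        duplicate = False
        for stored in reduced:
            if stored.keys() == sol.keys() and all(
                abs(sol[c] - stored[c]) <= tol * abs(stored[c]) for c in sol
            ):
                duplicate = True
                break
        if not duplicate:
            reduced.append(sol)
    return reduced
```

Two solutions naming the same components with values equal within the tolerance collapse into one entry of the reduced list.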
In the automation of the fault diagnosis procedure, the availability of the network
functions in completely symbolic form permits us to simplify not only the testability
analysis, but also the repeated solution of the non-linear system with different combi-
nations of potentially faulty parameters. In fact, the Jacobian matrices relevant to the
[Figure 2.8: Flowchart of the algorithm for the reduction of the fault list — each solution i = 1, …, N is compared with set 2; if no group in set 2 has the same faulty components and the same values, the faulty component group is added to set 2]
If k = 4, each group that includes G3 and G4 is not testable, and the group
G1, G2, C1, C2 is also not testable.
The faults have been simulated by substituting some components with others of
different value. The nominal values of the circuit components are the following:
G1 = G2 = G3 = 1 × 10⁻³ Ω⁻¹ (R1 = R2 = R3 = 1 kΩ)
C1 = C2 = 47 nF
O1: TL081
[Figure: schematic of the example circuit — input V1, conductances G1–G4, capacitors C1 and C2, operational amplifier O1, test points Va and Vo]
The circuit has been made using a simple wiring board and standard compo-
nents with 5 per cent tolerance. A double parametric fault has been simulated
by substituting the capacitor C2 with a capacitor of value equal to 20 nF and the
resistor R2 with a resistor of value equal to 1460 Ω.
The amplitude and phase responses related to the selected test points have been
measured using an acquisition board interfaced with a personal computer. Forty
measurements related to a sweep of frequencies between 250 and 10 000 Hz have
been acquired (input signal amplitude equal to 0.1 V). This range has been cho-
sen taking into account the frequency response of the circuit, in order to include
the high-sensitivity region. The collected results, related to the two selected test
points Vo and Va , have been used as inputs for the software program SYFAD, which
implements the procedure of fault location. Choosing to solve the fault diagnosis
equations with respect to the set of all the possible testable combinations of four
components, the program has selected the following solutions among all the possible
ones (the numbers in parentheses are the ratios between the obtained values and the
nominal ones):
G1 = 2.338 × 10⁻³ Ω⁻¹ (2.337 79)
G2 = 1.701 × 10⁻³ Ω⁻¹ (1.700 78)
C1 = 1.163 × 10⁻⁷ F (2.4747)
or
G2 = 6.873 × 10⁻⁴ Ω⁻¹ (0.687 267)
C2 = 1.899 × 10⁻⁸ F (0.404 09)
or
G1 = 1.375 × 10⁻³ Ω⁻¹ (1.374 54)
Symbolic function approaches for analogue fault diagnosis 67
where ‖A‖p is any matrix p-norm and A⁺ is the pseudo-inverse of A. The condition
number is always ≥ 1. A system whose associated matrix has a small condition number
is usually referred to as well conditioned, whereas a system with a large condition
number is referred to as ill conditioned. The condition number definition comes from
the calculation of the sensitivity of the solution of a system of linear equations with
respect to the possible perturbations in the known data vector and in the elements
of the matrix itself (the coefficients of the system equations). Let us consider a system
of linear equations A·x = b, where, for the sake of simplicity, A ∈ ℝⁿˣⁿ and
is non-singular. We want to evaluate how small perturbations Δb of the data in b and
ΔA in A could affect the solution vector x. Considering a simultaneous variation in the
vector b and in the matrix A, by means of suitable mathematical elaborations [48],
the following inequality can be obtained:

‖Δx‖/‖x‖ ≤ cond(A) (‖Δb‖/‖b‖ + ‖ΔA‖/‖A‖)    (2.26)
This inequality provides an upper bound on the error, that is, the worst-case error in
the solution of x. To obtain the condition number of the matrix, the SVD method can
be used.
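As an illustration, the condition number can be obtained from the singular values and the bound of Equation (2.26) evaluated numerically; the matrix A and the relative perturbation levels below are assumed values, not taken from the text.

```python
import numpy as np

# Condition number via the SVD: cond(A) = s_max / s_min, then the worst-case
# bound of Equation (2.26) for assumed relative perturbations of b and A.

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
s = np.linalg.svd(A, compute_uv=False)      # singular values, decreasing order
cond_A = s[0] / s[-1]                       # cond(A) = s_max / s_min

rel_db = 1e-3                               # assumed relative error on the data vector b
rel_dA = 1e-3                               # assumed relative error on the matrix A
worst_case_rel_dx = cond_A * (rel_db + rel_dA)   # upper bound of Eq. (2.26)
print(cond_A, worst_case_rel_dx)
```

A well-conditioned matrix (cond(A) near 1) keeps the relative error of the solution of the same order as the data errors; an ill-conditioned one amplifies them.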
In the case under analysis, the non-linear fault equations are solved by the Newton–
Raphson method, which performs a step-by-step linearization. Then, in order to relate
the previous terms, relative to the linear case, to the actual problem, let us note
that the following associations with the generic case exist: b is the vector of the gain
measurements, that is, each entry of b is a measurement of the amplitude in decibels
of a network response at a different frequency (the extension to the case of gain and
phase measurements is not difficult); consequently, the entries of Δb are the mea-
surement errors. The entries of the matrix A are the entries of the Jacobian matrix (a
generic entry has the form ∂[20 log |hk(jωi, p)|]/∂pj, with the subscript k indicating the
kth fault equation and the subscript i the corresponding ith measurement frequency),
and the entries of ΔA are given by the tolerances of the circuit components considered well
working (not faulty). The solution vector x is related to the values of the components
belonging to the testable group. Moreover, it should be highlighted that every column
j of the matrix A is related to a different component belonging to the testable group,
while each row i of the same matrix is related to a different frequency for each test
point; therefore, in order to get a square matrix, the number of performed measure-
ments has to be suitably chosen for each test point. At this point, the choice of a set
of frequencies in a zone where the condition number is minimum is suitable for mini-
mizing the deviation in the solution vector, that is, the error in the resulting component
values.
On the other hand, the condition number alone does not take into account the size
of derivatives in a given frequency range; the condition number could be good in a
frequency zone where high variations of component values with respect to nominal
values result in a small variation of network function amplitude. Then, in addition to
the condition number, it could be useful to take into account the norm of the matrix A,
which gives a measure of the sensitivity of the network functions with respect to the
component variations at a given set of frequencies. Taking into account the previous
observations, a Test Error Index (TEI) of the following form can be introduced:
TEI = cond(J)/‖J‖₂² = (1/σmin)(1/σmax)    (2.27)
where σmin and σmax represent the minimum and maximum singular values of the
Jacobian matrix. The TEI has been chosen in this way, because, in order to minimize
the worst-case error in the solution, the norm of the matrix must be as high as possible
and its condition number must be as low as possible, that is, as near as possible to one.
Consequently, by looking for the minimum of Equation (2.27), both requirements
are satisfied. In order to find the most suitable frequency set, that is, the set where the
previous index is minimum, two different procedures can be used. The first
one is based on a heuristic approach [48] and is suitable for the case of a single test
point; the second one is based on the use of genetic algorithms and is more general,
but requires more computational time [49].
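A minimal sketch of a TEI evaluation from the singular values of the Jacobian, assuming the index takes the form cond(J)/‖J‖₂² = 1/(σmin σmax); the Jacobian matrices below are purely illustrative.

```python
import numpy as np

# Sketch of a Test Error Index evaluation from the Jacobian singular values,
# under the assumption TEI = cond(J)/||J||_2**2 = 1/(s_min*s_max).

def tei(J):
    s = np.linalg.svd(J, compute_uv=False)
    return 1.0 / (s[-1] * s[0])     # small TEI: large norm and low condition number

J_good = np.array([[10.0, 0.0], [0.0, 8.0]])    # well conditioned, large norm
J_bad = np.array([[1.0, 0.99], [1.0, 1.01]])    # nearly singular
print(tei(J_good), tei(J_bad))                  # the first frequency set is preferable
```

A frequency set yielding a Jacobian with a large norm and a condition number near one gives a small TEI and, hence, a small worst-case error in the resulting component values.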
In the first procedure, under the hypothesis of only one test point, the logarithm of
the TEI is evaluated on different frequency sets, each constituted by a number of frequen-
cies, spaced by octaves, equal to the number of unknown parameters. The minimum
of the TEI is determined and the corresponding set of frequencies constitutes the opti-
mum set of frequencies. In Reference 48 an applicative example is reported, in which
a double-fault situation is considered. By using a new version of the program SYFAD,
performing parameter inversion through the Newton–Raphson algorithm, it is shown
that the double fault is correctly identified if the measurement frequencies are deter-
mined by minimizing the TEI value, while the Newton–Raphson algorithm does
not converge or gives completely incorrect results for other frequency sets.
In the second approach [49], an optimization procedure, based on a genetic algo-
rithm, performs the choice of both the testable parameter group and the frequency set
that best leads to locating parametric faults. In fact, the Jacobian matrix associated
with the fault equations depends not only on frequencies, but also on the selected
testable group. Then, even if all the possible testable groups are theoretically equiva-
lent, in the phase of TEI minimization, a testable group could be better than another
one, owing to the different sensitivities of the network functions to the parameters
of each testable groups. Consequently, the algorithm of TEI minimization also has
to take into account this aspect (note that, in the previous method, the testable group
is randomly chosen). A description of the genetic algorithm is reported in Reference
49. The steps of the fault diagnosis procedure exploiting this approach of frequency
selection are summarized below:
1. A list of all the possible testable groups is generated, through a combinatorial
procedure taking into account the canonical ambiguity groups determined in
the phase of testability analysis.
2. The genetic algorithm determines, starting from the nominal component values
p0, the testable group and the test frequencies.
3. The fault equations, relevant to the testable group determined in step 2, are
solved with the NewtonRaphson algorithm by using measurements carried
out on the frequencies determined in step 2.
4. The genetic algorithm determines new test frequencies, starting from the
solution p of the previous step (the testable group is unchanged).
5. With the Newton–Raphson algorithm a new solution p* is determined. If, for every i,
|(p*i − pi)/pi| · 100 ≤ ε, with ε fixed a priori, stop; otherwise go to step 4.
The test frequency set will be that used in the last application of the Newton–Raphson algorithm.
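The iterative loop of steps 2–5 can be sketched as follows, with the genetic-algorithm frequency selection and the Newton–Raphson solver left as caller-supplied functions (illustrative assumptions); only the stopping rule of step 5 is implemented faithfully.

```python
# Skeleton of steps 2-5 above. choose_frequencies stands in for the genetic
# algorithm and solve_fault_equations for the Newton-Raphson solution of the
# fault equations; both are supplied by the caller.

def diagnose(p_nominal, choose_frequencies, solve_fault_equations, eps=0.5, max_iter=20):
    p = list(p_nominal)
    freqs = choose_frequencies(p)             # step 2: frequencies from nominal values
    for _ in range(max_iter):
        p_new = solve_fault_equations(freqs)  # steps 3/5: solve the fault equations
        if all(abs((pn - po) / po) * 100 <= eps for pn, po in zip(p_new, p)):
            return p_new, freqs               # converged: keep the last frequency set
        p = p_new
        freqs = choose_frequencies(p)         # step 4: new frequencies from solution p
    return p, freqs
```

With stub functions that always return the same solution, the loop stops as soon as two successive solutions agree within ε per cent.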
In the automation of both the procedures of frequency selection, the use of
symbolic techniques gives great advantages. In fact, the availability of network
functions in symbolic form strongly reduces the computational effort in both the
testability analysis phase and the determination of the frequency-dependent Jacobian
matrix.
We conclude this subsection with an example relevant to the second described
procedure. In Figure 2.10 a two-stage common-emitter (CE) audio amplifier is shown.
For the transistors, a simplified model with the same parameter values is considered.
[Figure 2.10: two-stage common-emitter audio amplifier — supply +Vcc, transistors Q1 and Q2, resistors R1–R9, capacitors C, C1, C3 and C4, input Vin, output Vout]
The analysis of the results already suggests that R4 and hfe_Q1 are the faulty parameters.
Now, considering the testable group found in the first step, the calculation of the best
frequencies with these parameter values is performed again by the genetic algorithm.
The new set of frequencies, found in 12 iterations, is: f1 = 1.58 Hz, f2 = 3.70 Hz,
f3 = 9.36 Hz, f4 = 23.22 Hz, f5 = 37.72 Hz, f6 = 55.20 Hz, f7 = 78.53 Hz. By
repeating the diagnosis with the new set of frequencies, the following values of the
testable parameters are determined:
R2 = 9552 Ω, R4 = 152.3 Ω, R5 = 14 409 Ω, R8 = 330.03 Ω, R9 = 3915.6 Ω,
hfe_Q1 = 80.984, hie_Q2 = 1023 Ω
Comparing these values with the previous ones, the following percentage deviations
are obtained:

ΔR2 = |R2* − R2|/R2 × 100 = 0.228%,  ΔR4 = |R4* − R4|/R4 × 100 = 0.177%
ΔR5 = |R5* − R5|/R5 × 100 = 0.076%,  ΔR8 = |R8* − R8|/R8 × 100 = 0.148%
ΔR9 = |R9* − R9|/R9 × 100 = 0.061%
Δhfe_Q1 = |hfe_Q1* − hfe_Q1|/hfe_Q1 × 100 = 0.473%
Δhie_Q2 = |hie_Q2* − hie_Q2|/hie_Q2 × 100 = 1.81%
When all the percentage deviations are less than ε, the procedure is completed
in only one cycle. By comparing the obtained values with the nominal ones, we have:
ΔR2 = |R2 − R2,nom|/R2,nom × 100 = 4.48%,  ΔR4 = |R4 − R4,nom|/R4,nom × 100 = 52.3%
ΔR5 = |R5 − R5,nom|/R5,nom × 100 = 3.94%,  ΔR8 = |R8 − R8,nom|/R8,nom × 100 = 0.009%
ΔR9 = |R9 − R9,nom|/R9,nom × 100 = 2.11%
Δhfe_Q1 = |hfe_Q1 − hfe_Q1,nom|/hfe_Q1,nom × 100 = 19.02%
Δhie_Q2 = |hie_Q2 − hie_Q2,nom|/hie_Q2,nom × 100 = 2.3%
Considering the tolerance of every parameter, the faulty parameters are R4 and hfe_Q1,
with the following fault values:

R4 = 152.3 Ω, hfe_Q1 = 80.984

Comparing these values with the actual fault values, we have errors of 1.53 and
1.23 per cent, respectively.
The previously presented fault diagnosis methodologies are applicable only to linear
circuits or to linearized models of the CUT. They are not applicable to circuits in
which the non-linear behaviour is structural, that is, essential to the required
electrical behaviour. However, the symbolic approach can also be usefully applied
in these cases. The aim of this section is to present an example of this kind of
application [50].
A field in which the symbolic approach gives advantages with respect to
numerical techniques is constituted by those applications requiring the repetition
of a high number of simulations performed on the same circuit topology with
varying component values and/or input signal values. In this kind of application
the symbolic approach can be used to generate the requested network functions of
the analysed circuit in parametric form. In this way, circuit analysis is performed
only once and, during the simulation phase, only a parameter substitution and an
expression evaluation are required to obtain numerical results. This approach can
be used to generate autonomous programs devoted to the numerical simulation of a
particular circuit. Furthermore, for a complex circuit, these simulators can be devoted
to parts of the circuit in order to obtain a simulator library.
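The idea can be illustrated in miniature: here the "symbolic analysis" of a first-order RC low-pass is performed once (by hand), yielding a parametric network function that every subsequent simulation evaluates by mere substitution of parameter and frequency values. The circuit and values are illustrative.

```python
import math

# Parametric network function derived once: H(jw) = 1/(1 + jwRC) for a
# first-order RC low-pass. Repeated simulations reduce to substituting
# component values and frequencies into this closed form.

def make_devoted_simulator():
    # "analysis" performed only once: the network function in parametric form
    def H(R, C, w):
        return 1.0 / (1.0 + 1j * w * R * C)
    return H

H = make_devoted_simulator()
# many cheap evaluations with different component values (e.g. fault scenarios)
gains_db = [20 * math.log10(abs(H(R, 47e-9, 2 * math.pi * 1000.0)))
            for R in (1e3, 2e3, 5e3)]
```

No re-analysis of the topology is needed between evaluations, which is exactly what makes such devoted simulators efficient for repeated fault-scenario runs.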
In this section a program package, developed by the authors following the out-
lined approach, is presented. The program, named Symbolic Analysis Program for
Diagnosis of Electronic Circuits (SAPDEC), is able to produce devoted simulators
for non-linear analogue circuits and is aimed at fault diagnosis applications. The
output of the program package is an autonomous executable program, a simulator
devoted to a given circuit structure, instead of a network function in symbolic form.
The generated simulators work with inputs and outputs in numerical form; never-
theless, they are very efficient because they strongly exploit the symbolic approach;
in fact:
1. They use, for numerical simulation, the closed symbolic form of the requested
network functions.
2. They are devoted to a given circuit structure.
3. They are independent of both the component values and the input values, which
must be indicated only at run time, before numerical simulation.
The generated symbolic simulators produce time domain simulations and are able
to work on non-linear circuits. To this end the following methods have been used:
1. Non-linear components are replaced by suitable PWL models.
2. Reactive elements are simulated by their backward-difference models.
3. A Katznelson-type algorithm is used for time domain response calculation.
[Figure: backward-difference companion model of a capacitor — a conductance G = C/ΔT in parallel with a current source I = (C/ΔT)Vk, relating the terminal quantities Vk+1 and Ik+1]
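A minimal sketch of method 2 above: a capacitor replaced by its backward-difference companion model, a conductance C/ΔT in parallel with a current source (C/ΔT)·vk, used here to step a simple RC low-pass driven by a DC source. Component values are illustrative.

```python
# Backward-difference (backward-Euler) time stepping of an RC low-pass:
# DC source Vs feeds the capacitor node through resistance R; the capacitor
# is replaced by its companion model Gc = C/dT in parallel with I = Gc*v_k.

def simulate_rc(Vs, R, C, dT, steps):
    Gc = C / dT                  # companion conductance of the capacitor
    G = 1.0 / R
    v = 0.0                      # capacitor voltage at t = 0
    out = []
    for _ in range(steps):
        # nodal equation at the capacitor node: (G + Gc)*v_{k+1} = G*Vs + Gc*v_k
        v = (G * Vs + Gc * v) / (G + Gc)
        out.append(v)
    return out
```

With a time step one tenth of the time constant, the waveform rises monotonically towards the source value, as the exact exponential response does.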
It is also worth pointing out that, by exploiting PWL models for non-linear devices,
the testability analysis of non-linear circuits can be performed through the methods pre-
sented in Section 2.3. Since the testability value is independent of the circuit parameter
values, testability evaluation and ambiguity group determination can be performed
starting from the symbolic network functions obtained by replacing the non-linear
devices with their PWL models and assigning arbitrary values both to the parameters
corresponding to linear components and to the ones corresponding to the PWL models of
non-linear components [52] in the methods of Section 2.3.
[Figure: structure of the SAPDEC package — a SPICE-like circuit description is processed by SAPDEC to generate the symbolic network functions; these, together with the simulation algorithm, are compiled by a C compiler into a devoted simulator, which reads a component-values file and an input file and produces an output file]
2.6 Conclusions
2.7 References
25 Liberatore, A., Manetti, S., Piccirilli, M.C.: 'A new efficient method for analog circuit testability measurement', Proceedings of IEEE Instrumentation and Measurement Technology Conference, Hamamatsu, Japan, 1994, pp. 193–6
26 Catelani, M., Fedi, G., Luchetta, A., Manetti, S., Marini, M., Piccirilli, M.C.: 'A new symbolic approach for testability measurement of analog networks', Proceedings of MELECON'96, Bari, Italy, 1996, pp. 517–20
27 Fedi, G., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'A new symbolic method for analog circuit testability evaluation', IEEE Transactions on Instrumentation and Measurement, 1998;47:554–65
28 Manetti, S., Piccirilli, M.C.: 'A singular-value decomposition approach for ambiguity group determination in analog circuits', IEEE Transactions on Circuits and Systems I, 2003;50:477–87
29 Grasso, F., Manetti, S., Piccirilli, M.C.: 'A program for ambiguity group determination in analog circuits using singular-value decomposition', Proceedings of ECCTD'03, Cracow, Poland, 2003, pp. 57–60
30 Liberatore, A., Manetti, S.: 'SAPEC – a personal computer program for the symbolic analysis of electric circuits', Proceedings of IEEE International Symposium on Circuits and Systems, Helsinki, Finland, 1988, pp. 897–900
31 Manetti, S.: 'A new approach to automatic symbolic analysis of electric circuits', IEE Proceedings – Circuits, Devices and Systems, 1991;138:22–8
32 Liberatore, A., Manetti, S.: 'Network sensitivity analysis via symbolic formulation', Proceedings of IEEE International Symposium on Circuits and Systems, Portland, OR, 1989, pp. 705–8
33 Fedi, G., Giomi, R., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'Symbolic algorithm for ambiguity group determination in analog fault diagnosis', Proceedings of ECCTD'97, Budapest, Hungary, 1997, pp. 1286–91
34 Fedi, G., Giomi, R., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'On the application of symbolic techniques to the multiple fault location in low testability analog circuits', IEEE Transactions on Circuits and Systems II, 1998;45:1383–8
35 Liberatore, A., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'A new symbolic program package for the interactive design of analog circuits', Proceedings of IEEE International Symposium on Circuits and Systems, Seattle, WA, 1995, pp. 2209–12
36 Luchetta, A., Manetti, S., Piccirilli, M.C.: 'A Windows package for symbolic and numerical simulation of analog circuits', Proceedings of Electrosoft'96, San Miniato, Italy, 1996, pp. 115–23
37 Luchetta, A., Manetti, S., Reatti, A.: 'SAPWIN – a symbolic simulator as a support in electrical engineering education', IEEE Transactions on Education, 2001;44:9 and CD-ROM support
38 Starzyk, J., Pang, J., Fedi, G., Giomi, R., Manetti, S., Piccirilli, M.C.: 'A software program for ambiguity group determination in low testability analog circuits', Proceedings of ECCTD'99, Stresa, Italy, 1999, pp. 603–6
39 Starzyk, J., Pang, J., Manetti, S., Piccirilli, M.C., Fedi, G.: 'Finding ambiguity groups in low testability analog circuits', IEEE Transactions on Circuits and Systems I, 2000;47:1125–37
40 Golub, G.H., van Loan, C.F.: Matrix Computations (Johns Hopkins University Press, Baltimore, MD, 1983)
41 Liu, R.: Testing and Diagnosis of Analog Circuits and Systems (Van Nostrand Reinhold, New York, 1991)
42 Huertas, J.L.: 'Test and design for testability of analog and mixed-signal integrated circuits: theoretical basis and pragmatical approaches', Proceedings of ECCTD'93, Davos, Switzerland, 1993, pp. 75–151
43 Cannas, B., Fanni, A., Manetti, S., Montisci, A., Piccirilli, M.C.: 'Neural network-based analog fault diagnosis using testability analysis', Neural Computing and Applications, 2004;13:288–98
44 Fedi, G., Liberatore, A., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'A symbolic approach to the fault location in analog circuits', Proceedings of IEEE International Symposium on Circuits and Systems, Atlanta, GA, 1996, pp. 810–3
45 Catelani, M., Fedi, G., Giraldi, S., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'A new symbolic approach to the fault diagnosis of analog circuits', Proceedings of IEEE Instrumentation and Measurement Technology Conference, Brussels, Belgium, 1996, pp. 118–25
46 Catelani, M., Fedi, G., Giraldi, S., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'A fully automated measurement system for the fault diagnosis of analog electronic circuits', Proceedings of XIV IMEKO World Congress, Tampere, Finland, 1997, pp. 52–7
47 Fedi, G., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'Multiple fault diagnosis of analog circuits using a new symbolic approach', Proceedings of 6th International Workshop on Symbolic Methods and Application in Circuit Design, Lisbon, Portugal, 2000, pp. 139–43
48 Grasso, F., Luchetta, A., Manetti, S., Piccirilli, M.C.: 'Symbolic techniques for the selection of test frequencies in analog fault diagnosis', Analog Integrated Circuits and Signal Processing, 2004;40:205–13
49 Grasso, F., Manetti, S., Piccirilli, M.C.: 'An approach to analog fault diagnosis using genetic algorithms', Proceedings of MELECON'04, Dubrovnik, Croatia, 2004, pp. 111–14
50 Manetti, S., Piccirilli, M.C.: 'Symbolic simulators for the fault diagnosis of nonlinear analog circuits', Analog Integrated Circuits and Signal Processing, 1993;3:59–72
51 Vlach, J., Singhal, K.: Computer Methods for Circuit Analysis and Design, 2nd edn (Van Nostrand Reinhold, New York, 1994)
52 Fedi, G., Giomi, R., Manetti, S., Piccirilli, M.C.: 'A symbolic approach for testability evaluation in fault diagnosis of nonlinear analog circuits', Proceedings of IEEE International Symposium on Circuits and Systems, Monterey, CA, 1998, pp. 9–12
53 Katznelson, J.: 'An algorithm for solving nonlinear resistor networks', Bell System Technical Journal, 1965;44:1605–20
54 Konczykowska, A., Starzyk, J.: 'Computer analysis of large signal flowgraphs by hierarchical decomposition methods', Proceedings of ECCTD'80, Warsaw, Poland, 1980, pp. 408–13
3.1 Introduction
Fault diagnosis of analogue circuits has been an active research area since the 1970s.
Various useful techniques have been proposed in the literature, such as the fault dic-
tionary technique, the parameter identification technique and the fault verification
method [1–11]. The fault dictionary technique is widely used in practical engineering
applications because of its simplicity and effectiveness. However, the traditional fault
dictionary technique can only detect hard faults and its application is largely limited
to small and medium-sized analogue circuits [5]. To solve these problems, several arti-
ficial neural network (ANN)-based approaches have been proposed for analogue fault
diagnosis and they have proved to be very promising [12–25]. The neural-network-
based fault dictionary technique [20–23] can locate and identify not only hard faults
but also soft faults, because neural networks are capable of robust classification even
in noisy environments. Furthermore, in the neural-network-based fault dictionary
technique, looking up the dictionary to locate and identify faults is actually carried out
at the same time as setting up the dictionary. It thus reduces the computational effort
and has better real-time features. The method is also suitable for large-scale analogue
circuits.
More recently, wavelet-based techniques have been proposed for fault diagno-
sis and testing of analogue circuits [18, 19, 24, 25]. References 18 and 19 develop a
neural-network-based fault diagnosis method using the wavelet transform as a preproces-
sor to reduce the number of input features to the neural network. However, selecting
the approximation coefficients as the features from the output node of the circuit,
while treating the details as noise and setting them to zero, may lead to the loss of valid
information, thus resulting in a high probability of ambiguous solutions and low diag-
nosability. Also, additional processors are needed to decompose the details, resulting
Component tolerances, non-linearity and a poor fault model make analogue fault
location particularly challenging. Generally, tolerance effects make the parameter
values of circuit components uncertain and the computational equations of tradi-
tional methods complex. The non-linear characteristic of the relation between the
circuit performance and its constituent components makes it even more difcult to
diagnose faults online and may lead to a false diagnosis. To overcome these prob-
lems, a robust and fast fault diagnosis method taking tolerances into account is
thus needed. ANNs have the advantages of large-scale parallel processing, paral-
lel storing, robust adaptive learning and online computation. They are therefore ideal
for fault diagnosis of analogue circuits with tolerances. The process of creating a
fault dictionary, memorizing the dictionary and verifying it can be simultaneously
completed by ANNs, thus the computation time can be reduced enormously. The
robustness of ANNs can effectively deal with tolerance effects and measurement noise
as well.
Neural-network-based approaches for analogue circuit fault diagnosis 85
This section discusses methods for analogue fault diagnosis using neural networks
[20, 21]. The primary focus is to provide a robust diagnosis using a mechanism to
deal with the problem of component tolerances and reduce testing time. The approach
is based on the k-fault diagnosis method and backward propagation neural networks
(BPNNs). Section 3.2.1 describes ANNs (especially BPNNs). Section 3.2.2 discusses
the theoretical basis and framework of fault diagnosis of analogue circuits. The neural-
network-based diagnosis method is described in Section 3.2.3. Section 3.2.4 addresses
fault location of large-scale analogue circuits using ANNs. Simulation results of two
examples are presented in Section 3.2.5.
Ii^(s) = Σ_{j=1}^{B} Wij^(s) Oj^(s−1),  i = 1, 2, . . . , A    (3.1)

Oi^(s) = fs(Ii^(s))    (3.2)

where A and B are the numbers of neurons of the sth and the (s−1)th layer, respectively.
Wij^(s) represents the weight connecting the jth neuron of the (s−1)th layer and the ith
neuron of the sth layer. The function fs(·) is the limiting function through which Ii^(s) is
passed, and it must be non-decreasing and differentiable everywhere. A common
limiting function is the sigmoid in the following form:

fs(I) = 1/(1 + exp(−I))    (3.3)
The generalized delta rule, which performs a gradient descent over an error surface, is
utilized to adapt the weights. The initial values of the weights are assumed to be
random numbers evenly distributed between −0.5 and 0.5.
For an input pattern P of the BPNN, the output error at the output layer can be
calculated as

EP = (1/2) Σ_i (yi − di)²

where yi and di are the actual and expected outputs of the ith output node in the output layer.
The error signal at the jth node in the sth layer is generally given by

δjP^(s) = −∂EP/∂IjP^(s)

which, for the output layer, becomes

δjP^(s) = (dP − OjP) f′(IjP^(s))

where dP and OjP are the target and actual output values, respectively.
For the hidden layer, the error signal is

δjP^(s) = [Σ_{i=1}^{C} δiP^(s+1) WijP^(s+1)] f′(IjP^(s))
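The feedforward pass of Equations (3.1)–(3.3) and the delta-rule updates above can be sketched as a minimal one-hidden-layer network. The XOR data set, layer sizes and learning rate are illustrative assumptions, and bias inputs are added explicitly (the equations leave them implicit).

```python
import numpy as np

# Minimal BPNN: sigmoid units, weights initialised uniformly in [-0.5, 0.5],
# trained with the generalized delta rule on an illustrative XOR data set.

rng = np.random.default_rng(0)
f = lambda I: 1.0 / (1.0 + np.exp(-I))          # sigmoid limiting function, Eq. (3.3)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Xb = np.hstack([X, np.ones((4, 1))])            # bias input appended
d = np.array([[0.], [1.], [1.], [0.]])          # target outputs

W1 = rng.uniform(-0.5, 0.5, (3, 4))             # input -> hidden weights
W2 = rng.uniform(-0.5, 0.5, (5, 1))             # hidden -> output weights
eta = 0.5

def forward():
    O1 = f(Xb @ W1)                             # hidden outputs, Eqs (3.1)-(3.2)
    O1b = np.hstack([O1, np.ones((4, 1))])      # bias unit for the output layer
    y = f(O1b @ W2)
    return O1, O1b, y

_, _, y0 = forward()
E_init = 0.5 * np.sum((y0 - d) ** 2)            # initial output error E_P

for _ in range(5000):
    O1, O1b, y = forward()
    delta2 = (d - y) * y * (1 - y)                  # output-layer error signal
    delta1 = (delta2 @ W2[:-1].T) * O1 * (1 - O1)   # hidden-layer error signal
    W2 += eta * O1b.T @ delta2                      # generalized delta rule updates
    W1 += eta * Xb.T @ delta1

_, _, y = forward()
E_final = 0.5 * np.sum((y - d) ** 2)
```

Gradient descent on the error surface drives the output error down from its initial value; convergence to a perfect XOR classification is not guaranteed for every initialisation.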
This equation is compatible if rank [Zmk Vm] = k and can be solved to give the
solution

Jk = (Zmk^T Zmk)⁻¹ Zmk^T Vm

For a single fault occurring in the circuit (k = 1), Zmk becomes a single column vector
Zmf and Jk a single variable Jf, where f is the number of the faulty branch.
Denoting Zmf = [z1f z2f · · · zmf]^T and Vm = [v1 v2 · · · vm]^T, it can be
derived that:

vi = D zif,  i = 1, 2, . . . , m    (3.7)
where D is any non-zero constant.
Equation (3.7) can be further written as

vi / √(Σ_{j=1}^{m} vj²) = zif / √(Σ_{j=1}^{m} zjf²)    (3.8)

where i = 1, 2, . . . , m.
Thus, the single-fault diagnosis becomes the checking of Equation (3.8) for all b
branches (f = 1, 2, . . . , b).
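The single-fault check of Equation (3.8) can be sketched as follows. The sensitivity matrix and deviation vector in the usage are illustrative, and the comparison also accepts a negative constant D.

```python
import numpy as np

# Sketch of the single-fault test of Equation (3.8): the measured deviation
# vector Vm is normalised and compared with each normalised column Zmf of
# the sensitivity matrix; both signs are accepted since D may be negative.

def locate_fault(Z, Vm, tol=1e-6):
    v = Vm / np.linalg.norm(Vm)
    faulty = []
    for f in range(Z.shape[1]):                 # check every branch f = 1, ..., b
        z = Z[:, f] / np.linalg.norm(Z[:, f])
        if np.allclose(v, z, atol=tol) or np.allclose(v, -z, atol=tol):
            faulty.append(f)
    return faulty
```

A column whose normalised direction matches the normalised measurement vector identifies the faulty branch; with tolerances, the exact equality would be relaxed, which is the motivation for the neural-network approach of the next section.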
The k-branch-fault diagnosis method can effectively locate faults in circuits with-
out tolerances. However, for circuits with tolerances, the values of Vm and the fault
features are influenced by the tolerances, which makes the contributions of faults
to these two values ambiguous and slows down the testing process. In this situation,
fault location results may not be accurate and sometimes a false diagnosis may result.
Fortunately, the memorizing and associating capabilities of ANNs can make up for
this. In order to improve the online characteristics and achieve robustness of the diagno-
sis, we present a method that combines the k-fault diagnosis method with the highly
parallel processing BPNN in the next section.
complexity of computation and the effectiveness of the neural network for a specific
problem. To date, there is no absolute rule for designing the structure of a BPNN and
results are very much empirical in nature.
can be measured and are used to search for the faulty element among all branches; they
are utilized as the fault feature values of the BPNN and as its inputs, with a dimension
of m. The outputs of the BPNN correspond to the circuit elements to
be diagnosed, with a dimension of b.
single faults. The groups of actual feature values are thus formed.
[Figure: example resistive circuit with resistors R1-R8 and test nodes 1-4]
[Table: columns R3 (ohms); feature values X0, X1, X2; value of output node 3 (R3); maximum value in the other output nodes]
has occurred in it. According to the topology of the circuit, three testing nodes are
selected, which are numbered nodes 1, 3 and 4. Thus, the BPNN should have three
input nodes in the input layer and eight output nodes in the output layer. In addition,
two hidden layers with eight hidden nodes each are designed. The BPNN algorithm is
simulated by computer in the C language. Also, PSpice is used to simulate the circuit
to obtain Vm . Because the diagnosis principle is the same for every resistor in the
circuit, we arbitrarily select R3 as an example to demonstrate the method described.
The sample feature values of R3 (X0 , X1 , X2 ) are calculated and shown in Table 3.1.
These sample feature values of R3 are input to the BPNN in order that the BPNN is
trained and can memorize the information learned before. After more than 5000
training iterations, when the overall error falls below 0.03, the training of the BPNN
is complete and the knowledge of the sample features is stored in it.
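The training loop just described can be sketched as follows. This is a toy stand-in (a single hidden layer instead of two, and invented feature vectors and fault classes), meant only to show the "train until the overall error is below 0.03" criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

# Invented stand-ins for sample feature values (X0, X1, X2) of four fault
# conditions, each mapped to a one-hot fault class
X = np.array([[0.9, 0.1, 0.2],
              [0.2, 0.8, 0.1],
              [0.1, 0.3, 0.9],
              [0.7, 0.7, 0.1]])
T = np.eye(4)

W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 4)); b2 = np.zeros(4)

lr = 1.0
for epoch in range(50000):
    H = sig(X @ W1 + b1)              # hidden-layer outputs
    O = sig(H @ W2 + b2)              # output-layer outputs
    err = 0.5 * np.sum((T - O) ** 2)  # overall (sum-square) error
    if err < 0.03:                    # stopping criterion from the text
        break
    dO = (T - O) * O * (1 - O)        # output-layer error signal
    dH = (dO @ W2.T) * H * (1 - H)    # hidden-layer error signal
    W2 += lr * H.T @ dO; b2 += lr * dO.sum(0)
    W1 += lr * X.T @ dH; b1 += lr * dH.sum(0)
```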
Now suppose R3 is faulty, taking the values 0.2 Ω, 0.9 Ω, 1.2 Ω and 2.5 Ω respectively,
while the values of the other resistors are within the tolerance range of 5 per cent
(here the values of the seven resistors are selected arbitrarily as R1 = 1.04 Ω,
R2 = 0.99 Ω, R4 = 1.02 Ω, R5 = 0.98 Ω, R6 = 1.01 Ω, R7 = 0.987 Ω, R8 = 0.964 Ω). With
the excitation of a 1 A current to testing node 1, the actual feature values of the
three testing nodes are obtained by getting Vm and calculating the left-hand part
of Equation (3.8). Then, the actual feature values (X0 , X1 , X2 ) of the four situations
are input to the input nodes of the BPNN, respectively, to classify and locate the
corresponding faulty element. The results are shown in Table 3.2.
From Table 3.2, it can be seen that the diagnosis result is correct. For output
node 3 the value of the output layer is more than 0.5 and those of the other output
nodes are less than 0.5, which shows that R3 is the faulty element. Also, when R3
92 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
is 0.9 Ω, which is the case where the fault is very small and comparatively difficult
to detect, the BPNN-based k-fault diagnosis method can still successfully locate it.
Furthermore, for the other seven resistors, the method has also been proven to be
effective by simulation. In addition, once the BPNN is trained, the diagnosis process
becomes very simple and fast.
Compared with the traditional k-fault diagnosis method, the BPNN-based method
has clear advantages. The BPNN-based method requires less computation and is very
fast. Computation is needed only once to obtain sufcient sample and actual feature
values of testing nodes for a particular circuit. Also, the problem of component
tolerance can be successfully handled by the robustness of the BPNN. Hence, the
neural-network-based diagnosis method is more robust and faster and can be used in
real-time testing.
3.2.5.2 Example 2
A second circuit is shown in Figure 3.3. This is a large-scale analogue circuit. It is
decomposed into four subcircuits (marked in dashed lines), denoted by x1, x2, x3,
x4, according to the nature of the circuit. Assume that R14 and Q3 are faulty; the
value of R14 is changed to 450 Ω and the base of Q3 is open. Following the steps in
Section 3.2.4, Table 3.3 is produced, containing accessible node voltages.
[Figure 3.3: large-scale analogue circuit decomposed into four subcircuits x1-x4 (dashed boxes), containing transistors Q1-Q11, resistors R1-R18, inputs vi1, vi2, outputs Vo1, Vo2, and supplies +VCC = +6 V and -VEE = -6 V]
The feature vectors are passed through the corresponding BPNNs and the following
results are obtained from their outputs: subcircuit 1 (x1) and subcircuit 4 (x4) were
fault free; subcircuit 2 (x2) and subcircuit 3 (x3) were faulty. The faulty elements
were identified as Q3 and R14, which are the same as originally assumed.
$$V_0 = V_N \oplus \bigoplus_{k=1}^{N} W_k$$
and any $f(t) \in L^2(\mathbb{R})$ can be expressed as $f = f_N + \sum_{k=1}^{N} g_k$.
Thus, the orthogonal projections of f(t) in the various frequency bands are obtained. The
projection $p_{j-1}f$ of f(t) on $V_{j-1}$ is given by
$$p_{j-1}f = \sum_{k \in \mathbb{Z}} c_{jk}\,\varphi_{j,k} + \sum_{k \in \mathbb{Z}} d_{jk}\,\psi_{j,k}$$
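The split of f(t) into an approximation plus details can be illustrated with a one-level Haar transform (a hand-rolled sketch with invented signal values, not the wavelet used in the text):

```python
import numpy as np

# One level of a Haar analysis/synthesis: f = approximation + detail,
# mirroring f = f_N + sum of g_k for N = 1 (signal values are invented)
f = np.array([4.0, 2.0, 5.0, 7.0, 1.0, 3.0, 6.0, 2.0])

avg = (f[0::2] + f[1::2]) / 2.0      # coarse (approximation) coefficients
det = (f[0::2] - f[1::2]) / 2.0      # detail coefficients

f1 = np.repeat(avg, 2)               # f_1: piecewise-constant approximation
g1 = np.repeat(det, 2) * np.tile([1.0, -1.0], f.size // 2)  # g_1: detail part
print(np.allclose(f, f1 + g1))       # perfect reconstruction
```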
3.3.3 WNNs
Neural networks have many features suitable for fault diagnosis. We now combine
wavelet transforms and neural networks for the fault diagnosis of analogue circuits. The
WNN is shown in Figure 3.4. This is a multi-layer feedback architecture with wavelets,
allowing convergence to the global minimum of the error in minimum time. The WNN
employs a wavelet basis rather than a sigmoid function, which distinguishes it from
general BPNNs. The mapping function from Rm to Rn can be expressed as
$$y_i(t) = \psi_1\!\left( \sum_{j=1}^{p} \sum_{k=1}^{m} w_{ij}\, \psi\!\left( \frac{t - b_j}{a_j} \right) x_k(t) \right) \tag{3.9}$$
In Equation (3.9), $\psi(\cdot)$ and $\psi_1(\cdot)$ are the wavelet bases; $x_k(t)$ and $y_i(t)$ are the kth
input and the ith output, respectively. The weight functions in the hidden layer and the
output layer are wavelet functions.

[Figure 3.4: WNN architecture — inputs x1(t), ..., xk(t), ..., xm(t) pass through wavelet units ψ((t − bj)/aj), are weighted by wij, and combined through ψ1(·) to give the outputs yi(t)]

The sum-square-error performance function is expected
to reach a minimum by feeding information forward and feeding error backward, thus
updating the weights and bias parameters according to the learning algorithms. A
momentum and adaptive learning rule is adopted to reduce the sensitivity to local
details of error surfaces and shorten the learning time.
Definition 3.1 Test points of a circuit are those accessible nodes in a circuit whose
sensitivities to component parameters are not equal to zero.
Definition 3.2 Pattern extraction nodes (PEN) of a circuit are those test points in a
circuit which are distributed uniformly.
From Definition 3.2, using PENs can reduce the probability of missing a fault, thus
increasing the diagnosability. A sketch of the WNN algorithm for fault diagnosis
is given in Figure 3.4. It contains three main steps, described below.
1. Extraction of candidate patterns and feature vectors.
To extract candidate features from the PENs of a circuit, 500 Monte Carlo
analyses are conducted for every fault pattern of the sampled circuits with tol-
erances; 350 of them are used to train the WNNs and the other 150 are adopted for
simulation. Then optimal features for training the neural networks are obtained by
first selecting candidate sets from the wavelet coefficients using PCA and normal-
ization (according to steps 1-5 as mentioned in Section 3.3.2). Assuming that
the feature vector of the ith PEN is TVi = [A1, A2, . . . , An], then for all PENs
we have TV = [TV1, TV2, . . . , TVq], where q is the number of PENs.
2. Design and training of WNNs.
A multi-layer feedback neural network whose output number is equal to
the number of fault classes is used. The error performance function is given by
$$E = \frac{1}{2} \sum_{l=1}^{N} \sum_{i=1}^{q} \left( y_{d,i}^{l}(t) - y_i^{l}(t) \right)^2 \tag{3.10}$$
where N is the total number of training patterns and $y_{d,i}^{l}(t)$ and $y_i^{l}(t)$ are
the desired and real outputs associated with feature TVi for the lth neuron,
respectively. Also, $y_i^{l}(t)$ is given in Equation (3.9).
To minimize the sum square error function in Equation (3.10) the weights
and coefcients in Equation (3.9) or Figure 3.4 can be updated using the
following formulas
$$w_{ij}(k+1) = w_{ij}(k) + \eta_w D w_{ij}(k) + \alpha_w \left[ w_{ij}(k) - w_{ij}(k-1) \right]$$
$$a_j(k+1) = a_j(k) + \eta_a D a_j(k) + \alpha_a \left[ a_j(k) - a_j(k-1) \right]$$
$$b_j(k+1) = b_j(k) + \eta_b D b_j(k) + \alpha_b \left[ b_j(k) - b_j(k-1) \right]$$
$$D w_{ij} = -\frac{\partial E}{\partial w_{ij}}, \quad D a_j = -\frac{\partial E}{\partial a_j}, \quad D b_j = -\frac{\partial E}{\partial b_j} \tag{3.11}$$
[Figure 3.5: Sallen-Key bandpass filter (component values shown include resistors of 1 kΩ, 2 kΩ, 3 kΩ and 4 kΩ, and capacitors C1 = C2 = 5 nF)]
[Figure 3.6: four-opamp biquad high-pass filter (component values shown include C1 = C2 = 5 nF and resistors of 1600 Ω, 5100 Ω, 6200 Ω and 10 kΩ)]
C2. The notations ↑ and ↓ stand for high and low, respectively. In order to generate
training data for the different fault classes, we set faulty components in the circuit and
vary resistors and capacitors within their tolerances.
Figure 3.6 shows a four-opamp biquad high-pass filter with a cut-off frequency of
10 kHz. Its nominal component values are given in the figure. Faulty component values
for this circuit are set to be the same as those in Reference 19 for convenience of
comparison. Tolerances of 5 and 10 per cent are used for resistors and capacitors
to make the example practical.
The impulse responses of the filter circuit are simulated to train the WNN, with
the filter input being a single pulse of height 5 V and duration 10 μs. We adopt a
WNN architecture of N1-38-N2, where N1 is the number of input patterns and N2
is the number of fault patterns. For the fault diagnosis of the Sallen-Key filter in
Figure 3.5, the method presented in Reference 17 requires a three-layer BPNN. This
network has 49 inputs, 10 first-layer and 10 second-layer neurons, resulting in about
700 adjustable parameters. During the training phase, an error function of these
parameters must be minimized to obtain the optimal weight and bias values. The
trained network was able to properly classify 95 per cent of the test patterns. Reference
19, for diagnosing nine fault classes (eight faulty components plus the no-fault class)
in the same Sallen-Key bandpass filter, requires a neural network with four inputs, six
first-layer and eight output-layer neurons. Their results show that the neural network
cannot distinguish between the NF (no-fault) and R2 fault classes. If
these two classes are combined into one ambiguity group and we use eight output
neurons accordingly, the neural network can correctly classify 97 per cent of the test
patterns. Using the method described above, the trained WNN is capable of 100 per
cent correct classification of the test data, although the WNN used is somewhat more
complicated.
Using the method in Reference 19 to diagnose the 13 single faults assumed in Table
I of Reference 19 for the four-opamp biquad high-pass filter of Figure 3.6 requires
a neural network with five inputs, 16 first-layer and 13 output-layer neurons. Their
trained network was able to properly classify 95 per cent of the test patterns. In this
example, using the WNN presented above, 3250 training patterns are obtained from
6500 Monte Carlo PSpice simulations to train the neural network. The measurement
data associated with the fault features, as well as the other 1950 Monte Carlo simulation
specimens, are applied to simulate the fault set. Using the technique presented above,
the measured fault features and the feature vectors due to the 1950 Monte Carlo
PSpice simulations are selected to determine the faults. The method cannot
identify all the faults, because some features overlap to some extent when
component tolerances are near 10 per cent; however, even in the presence of 10 per
cent component tolerance, the WNN method correctly classifies 99.56 per cent of
the test data. For example, the WNN fault diagnosis system can distinguish the fault
classes C2, R1 and R4, which are misclassified in Reference 19. Besides, the
fault classes of NF and R1 that cannot be distinguished in Reference 19 have
been identified correctly using the method. Figures 3.7 and 3.8 respectively show the
waveforms associated with NF and R1, sampled from node 2 (one of the PENs).
In each case, the noisy and de-noised signals and their multi-resolution coefficients
are all given. In these figures, ca5 and cdj, j = 1, 2, . . . , 5, are the coefficients a5 and
dj, respectively. Also note that these waveforms are distinct from one another.
Analogue circuit fault location has proved to be an extremely difficult problem. This
is mainly because of component tolerances and the non-linear nature of the prob-
lem. Among the many fault diagnosis methods, the L1 optimization technique is
a very important parameter identification approach [28, 29], which is insensitive to
tolerances. This method has been successfully used to isolate the most likely faulty ele-
ments in linear analogue circuits and, when combined with neural networks, real-time
testing becomes possible for linear circuits with tolerances. Some fault verification
methods have been proposed for non-linear circuit fault diagnosis [30-34]. On the
basis of these linearization principles, parameter identification methods can be devel-
oped for non-linear circuits. In particular, the L1 optimization method can be extended
and modified for the fault diagnosis of non-linear circuits with tolerances. Neural net-
works can also be used to make the method more effective and faster for non-linear
circuit fault location.
This section deals with fault diagnosis in non-linear analogue circuits with tol-
erances under an insufficient number of independent voltage measurements. The
L1-norm optimization problem for different scenarios of non-linear fault diagnosis is
formulated. A neural-network-based approach for solving the non-linear constrained
L1-norm optimization problem is presented and utilized in locating the most likely
faulty elements in non-linear circuits. The validity of the method is verified and
simulation examples are presented.
Neural-network-based approaches for analogue circuit fault diagnosis 101
[Figure 3.7: noisy and de-noised signals and their level 1-5 multi-resolution coefficients (ca5, cd1-cd5) associated with NF]
[Figure 3.8: noisy and de-noised signals and their level 1-5 multi-resolution coefficients associated with the R1 fault]
$$\text{minimize} \quad \sum_{i=1}^{b} |e_i| \tag{3.14a}$$
subject to
$$V_m = H_{mb} E_b \tag{3.14b}$$
The result of the optimization problem in Equation (3.14) provides us with Eb.
Then, the network can be simulated using the external excitation source and the obtained Eb
to determine whether or not the element is faulty, in other words, whether or not the
actual working point remains on the normal characteristic curve within the tolerance
limits [30-34]. If Equation (3.15) holds within its tolerance, the non-linear element is
fault free, and the change Δy, the result of working point Q0 moving along its characteristic
curve, is caused by other faulty elements. If Equation (3.15) does not hold,
the non-linear element is faulty.
Equation (3.14) is restricted to a single excitation. In fact, multiple excitations
can be used to enhance diagnosability and provide better results. For k excitations
applied to the faulty network, the L1 -norm problem is formulated as
$$\text{minimize} \quad \sum_{i=1}^{b} \left| \frac{\Delta y_i}{y_{i0}} \right| \tag{3.16a}$$
subject to
$$\begin{bmatrix} V_m^1 \\ V_m^2 \\ \vdots \\ V_m^k \end{bmatrix} = \begin{bmatrix} H_{mb}^1(V_b^1) \\ H_{mb}^2(V_b^2) \\ \vdots \\ H_{mb}^k(V_b^k) \end{bmatrix} \Delta Y \tag{3.16b}$$
$$\text{minimize} \quad \sum_{i=1}^{b} \left( \left| \frac{\Delta y_i}{y_{i0}} \right| + \left| \frac{\Delta v_i}{v_{i0}} \right| \right) \tag{3.17a}$$
subject to
$$\begin{bmatrix} V_m^1 \\ V_m^2 \\ \vdots \\ V_m^k \end{bmatrix} = \begin{bmatrix} H_{mb}^1(V_b^1) \\ H_{mb}^2(V_b^2) \\ \vdots \\ H_{mb}^k(V_b^k) \end{bmatrix} \Delta Y \tag{3.17b}$$
where $v_{i0}$ represents the nominal branch voltage, $\Delta v_i$ is the change in the voltage due
to faults and tolerance, and $v_{i0} + \Delta v_i$ is the actual voltage $v_i$.
From Equation (3.17) we can obtain ΔY. For a linear element, if Δy/y0 significantly
exceeds its allowed tolerance, we can consider it to be faulty. However, for
a non-linear resistive element, we cannot simply draw such a conclusion. For analogue
circuits with tolerances, the voltage-current relation of a non-linear resistive
element is represented by a zone of curves instead of a single curve, with the nominal
voltage-to-current characteristic of the non-linear element at the centre of the zone.
Therefore, for a non-linear component, after determining Δy/y0, we need to simulate
the non-linear circuit again to judge whether or not the component is faulty. If the
actual V-I curve of the non-linear element significantly deviates from the tolerance
zone of curves, the non-linear element can be considered faulty. Otherwise, if the
actual curve falls within the zone, the non-linear element is fault free.
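This zone test can be sketched as follows (the cubic characteristic and the ±10 per cent band are invented for illustration; the text's zone is determined by circuit tolerances):

```python
import numpy as np

def in_tolerance_zone(v_meas, i_meas, i_nominal, tol=0.1):
    """Check whether measured (V, I) points of a non-linear resistive
    element stay within a +/- tol band around its nominal characteristic."""
    i_nom = i_nominal(v_meas)
    return np.all(np.abs(i_meas - i_nom) <= tol * np.abs(i_nom))

# Illustrative nominal characteristic i = v^3 (not from the text)
nominal = lambda v: v ** 3
v = np.linspace(0.5, 2.0, 20)

print(in_tolerance_zone(v, 1.05 * nominal(v), nominal))  # within zone: fault free
print(in_tolerance_zone(v, 1.50 * nominal(v), nominal))  # deviates: faulty
```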
$$x_i = e_i, \quad x_{b+i} = 0, \quad e_i \ge 0$$
$$x_i = 0, \quad x_{b+i} = -e_i, \quad e_i < 0 \tag{3.18}$$
$$\text{minimize} \quad C^T X \tag{3.19a}$$
subject to
$$AX = B, \quad X \ge 0 \tag{3.19b}$$
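A small numerical check of the splitting in Equations (3.18)-(3.19), with an invented error vector e and constraint matrix H:

```python
import numpy as np

# Hypothetical error vector e (b = 3)
e = np.array([1.5, 0.0, -2.0])
b = e.size

# Variable splitting of Equation (3.18): e_i = x_i - x_{b+i}, with X >= 0
X = np.concatenate([np.where(e >= 0, e, 0.0), np.where(e < 0, -e, 0.0)])
C = np.ones(2 * b)                      # cost vector of Equation (3.19a)

# The LP objective C^T X equals the original L1 objective sum |e_i|
print(C @ X, np.abs(e).sum())

# And any linear constraint H e = v becomes [H, -H] X = v
H = np.array([[1.0, 2.0, 0.5]])
A = np.hstack([H, -H])
print(np.allclose(A @ X, H @ e))
```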
The L1 -norm problem in Equation (3.16) can also be transformed into a standard
linear programming problem in the same way as the above and can be solved using
the LPNN.
The L1 -norm problem in Equation (3.17) belongs to the non-linear constrained
optimization problem. The solution of a non-linear constrained optimization problem
generally constitutes a difficult and often frustrating task. The search for new insights
and more effective solutions remains an active research endeavour. To solve a non-
linear constrained optimization problem by using neural networks, the key step is
to derive a computational energy function (Lyapunov function) E so that the lowest
energy state will correspond to the desired solution. There have been various neural-
network-based optimization techniques such as the exterior penalty function method,
the augmented Lagrange multiplier method, and so on. However, existing references
discuss only the unconstrained L1 -norm optimization problem. Here, an effective
method for solving the non-linear constrained L1 -norm optimization problem such as
Equation (3.17) is presented.
Although aimed at solving the problem in Equation (3.17), the approach is devel-
oped in a general way. The symbols used below may have different meanings from
(and should not be confused with) those above, though some may look the same
by convention. Applying the general formulation to the problem in Equation (3.17)
is straightforward.
A general non-linear constrained L1 -norm optimization problem can be
described as
$$\text{minimize} \quad \sum_{j=1}^{m} |f_j(X)| \tag{3.20a}$$
subject to
$$C_i(X) = 0 \quad (i = 1, 2, \ldots, l) \tag{3.20b}$$
The exact-penalty energy function is
$$E(X, R) = \sum_{j=1}^{m} |f_j(X)| + \sum_{i=1}^{l} r_i |C_i(X)| \tag{3.21}$$
For those ri satisfying the conditions of the theorem, the optimal solution X* of
Equation (3.21) is the solution of Equation (3.20). Using a neural-network method
to solve the problem in Equation (3.21), E(X, R) can be considered as the computational
energy function of the neural network. Implementing the continuous-time steepest
descent method, the minimization of the energy function E(X, R) in Equation (3.21)
can be mapped into a system of differential equations, given by
$$\frac{dx_j}{dt} = -\mu_j \left[ \sum_{i=1}^{m} (\xi_i + s_i \zeta_i) \frac{\partial f_i(X)}{\partial x_j} + \sum_{i=1}^{l} (\xi_{ci} + s_{ci} \zeta_{ci})\, r_i \frac{\partial C_i(X)}{\partial x_j} \right] \tag{3.23a}$$
$$\frac{d\zeta_i}{dt} = -\beta_i s_i f_i(X), \quad \zeta_i(0) = \zeta_{i0}, \quad i = 1, \ldots, m \tag{3.23b}$$
$$\frac{d\zeta_{ci}}{dt} = -\beta_{ci} s_{ci} C_i(X), \quad \zeta_{ci}(0) = \zeta_{ci0}, \quad i = 1, \ldots, l \tag{3.23c}$$
where
$$\xi_i = \operatorname{sgn}(f_i(X)) = \begin{cases} 1 & f_i(X) > 0 \\ -1 & f_i(X) < 0 \end{cases} \quad (i = 1, \ldots, m)$$
$$\xi_{ci} = \operatorname{sgn}(C_i(X)) = \begin{cases} 1 & C_i(X) > 0 \\ -1 & C_i(X) < 0 \end{cases} \quad (i = 1, \ldots, l)$$
$$s_i = \begin{cases} 0 & f_i(X) \ne 0 \\ 1 & f_i(X) = 0 \end{cases} \quad (i = 1, \ldots, m)$$
$$s_{ci} = \begin{cases} 0 & C_i(X) \ne 0 \\ 1 & C_i(X) = 0 \end{cases} \quad (i = 1, \ldots, l)$$
$$|\zeta_i| \le 1, \quad |\zeta_{ci}| \le 1, \quad \mu_j, \beta_i, \beta_{ci} > 0$$
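The idea behind this dynamical system can be sketched by Euler-integrating a plain steepest-descent (subgradient) flow on the exact-penalty energy of Equation (3.21), omitting the adaptive control variables. The example problem, penalty value, step size and starting point are all invented for illustration:

```python
import numpy as np

# Illustrative problem (not from the text): minimize |x0 - 1|
# subject to x0 + x1 - 3 = 0; the constrained minimizer is X* = (1, 2).
r = 2.0                                    # penalty parameter r_1

def subgrad(X):
    """Subgradient of E(X) = |x0 - 1| + r * |x0 + x1 - 3|."""
    g_f = np.array([np.sign(X[0] - 1.0), 0.0])          # from |f_1(X)|
    g_c = r * np.sign(X[0] + X[1] - 3.0) * np.ones(2)   # from r_1 |C_1(X)|
    return g_f + g_c

X = np.array([5.0, -4.0])                  # initial state X0
alpha = 1e-3                               # Euler step size
for _ in range(30000):
    X = X - alpha * subgrad(X)             # discretized gradient flow
print(X)   # settles near the constrained minimizer (1, 2)
```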
Equation (3.23) can be implemented directly by the ANN depicted in Figure 3.9.
This ANN may be considered a Hopfield-model neural network, which is a
gradient-like system. It consists of adders, amplifiers, hard limiters and two integrators
(a lossy unlimited integrator and a lossless limited integrator with saturation). The
[Figure 3.9: architecture of the artificial neural network for solving the L1-norm optimization problem, built from adders, amplifiers, hard limiters, integrators and a control network]
neural network will move from any initial state X0 that lies in the neighbourhood
$N(X^*) = \{ X_0 \mid \|X_0 - X^*\| < \varepsilon,\; \varepsilon > 0 \}$ of $X^*$ in a direction that tends to decrease
the cost function being minimized. Eventually, a stable state will be reached in the
network that corresponds to a local minimum of the cost function.
It can be proved that the stable state X* satisfies the necessary conditions for
optimality of the function in Equation (3.21). Obviously, dE(X, R)/dt ≤ 0. It should
be noted that dE(X, R)/dt = 0 if and only if dX/dt = 0, that is, the neural network
is in the steady state, and
$$\sum_{i \notin A} \xi_i \nabla \bar{f}_i(X^*) + \sum_{i \in A} V_i \nabla \bar{f}_i(X^*) = 0 \tag{3.24}$$
where
$$\bar{f}_i(X) = f_i(X) \quad (i = 1, \ldots, m)$$
$$\bar{f}_i(X) = r_i C_i(X) \quad (i = m+1, \ldots, m+l)$$
$$I = \{1, \ldots, m, m+1, \ldots, m+l\}$$
$$A = A(X^*) = \{ i \mid \bar{f}_i(X^*) = 0,\; i \in I \}$$
$$|V_i| \le 1, \quad i \in A; \qquad \xi_i = \operatorname{sgn}(\bar{f}_i(X^*)), \quad i \notin A$$
This means that the stable equilibrium point X* of the neural network satisfies the neces-
sary conditions for optimality of Equation (3.21). According to theorem 3.1, the stable
[Figure: example non-linear resistive circuit with branch elements G1-G8 and nodes 1-4]
state X* of the neural network corresponds to the solution of the L1-norm problem in
Equation (3.20).
Note that $\zeta_i$ and $\zeta_{ci}$ are used as adaptive control parameters to accelerate the min-
imization process. From the ANN depicted in Figure 3.9, it follows that the variables
$\zeta_i$ and $\zeta_{ci}$ are controlled by the inner state of the neural network, which gives the
neural network a smooth dynamic process; as a result, the neural
network can quickly converge to a stable state that is a local minimum of E(X, R).
Because the neural-network computation energy function in Equation (3.21) is derived
from the exact penalty approach, on applying the ANN in Figure 3.9 (or the neural-
network algorithm of Equation (3.23)), more accurate results can be obtained with
appropriate penalty parameters ri that need not be large. The main advantage
of the neural-network-based method for solving the L1-norm problem, in comparison
with other known methods, is that it avoids the error caused by approximating
the absolute value function, thus providing a high-precision solution without the use of
large penalty parameters. The effectiveness and performance of the neural-network
architecture and algorithm have been verified by simulation. One example is given
below.
Δy3/y30 = 0.037, Δy7/y70 = 0.430 65, Δy8/y80 = 0.4802, and Δyi/yi0 = 0 for
i = 2, 4, 5, 6. Linear elements 2, 4, 5 and 6 are normal, as their conductance change is
zero. The conductance change in linear element 1 significantly exceeds its allowed
tolerance, so we can judge it to be faulty. The change in linear element 3 slightly
exceeds its allowed tolerance, but we can still consider it to be non-faulty. The changes
in the static conductances of the non-linear elements significantly exceed their allowed toler-
ances. We simulate the faulty non-linear circuit again and find that only the V-I curve
of non-linear element 7 significantly deviates from its tolerance characteristic zone;
hence element 7 is faulty and element 8 is fault free. In fact, in our original set-up,
linear element 1 and non-linear element 7 are assumed faulty. It can thus be seen that
the method can correctly locate the faults. Meanwhile, the validity of the presented
neural-network algorithm for solving the non-linear constrained L1-norm optimization
problem is also confirmed.
3.5 Summary
This chapter addressed the application of neural networks to the fault diagnosis of ana-
logue circuits. A fault dictionary method based on neural networks has been presented.
This method is robust to element tolerances and requires little after-test computation.
The diagnosis of soft faults has been shown, while the method is also suitable for hard
faults. Significant diagnosis precision has been reached by training a large number
of samples in the BPNN. While the faulty samples trained can be easily identified,
the BPNN can also detect untrained faulty samples. Therefore, the fault diagnosis
method presented can not only quickly detect the faults in the traditional dictionary
but can also detect faults not in the dictionary. As has been demonstrated, the
method is also suitable for large-scale circuit fault diagnosis.
A method for fault diagnosis of noisy analogue circuits using WNNs has also
been described. In this technique, candidate features are extracted from the energy
in every frequency band of the signals sampled from the PENs in a circuit de-noised
by wavelet analysis, and optimal feature vectors are then selected by PCA, data
normalization and wavelet multi-resolution decomposition. The optimal feature sets
are then used to train the WNN. The method is characterized by its high diagnosability.
It can distinguish the ambiguity sets or misclassified faults that other methods
cannot identify, and it is robust to noise. However, some overlapped ranges appear as
the component tolerances increase to 10 per cent.
Fault diagnosis of non-linear circuits taking tolerances into account is the most
challenging topic in analogue fault diagnosis. A neural-network-based L1-norm opti-
mization approach has been introduced for the fault diagnosis of non-linear resistive
circuits with tolerances. The neural-network-based L1-norm method can solve vari-
ous linear and non-linear equations. A fault diagnosis example has been presented,
which shows that the method can effectively locate faults in non-linear circuits. The
method is robust to tolerance levels and suitable for online fault diagnosis of non-
linear circuits, as it requires fewer steps in the L1 optimization, and the use of neural
networks further speeds up the diagnosis process.
3.6 References
19 Aminian, F., Aminian, M., Collins, H.W.: 'Analog fault diagnosis of actual circuits using neural networks', IEEE Transactions on Instrumentation and Measurement, June 2002;51(3):544-50
20 He, Y., Ding, Y., Sun, Y.: 'Fault diagnosis of analog circuits with tolerances using artificial neural networks', Proceedings of IEEE APCCAS, Tianjin, China, 2000, pp. 292-5
21 Deng, Y., He, Y., Sun, Y.: 'Fault diagnosis of analog circuits with tolerances using back-propagation neural networks', Journal of Hunan University, 2000;27(2):56-64
22 He, Y., Tan, Y., Sun, Y.: 'A neural network approach for fault diagnosis of large-scale analog circuits', Proceedings of IEEE ISCAS, Arizona, USA, 2002, pp. 153-6
23 He, Y., Tan, Y., Sun, Y.: 'Class-based neural network method for fault location of large-scale analogue circuits', Proceedings of IEEE ISCAS, Bangkok, Thailand, 2003, pp. 733-6
24 He, Y., Tan, Y., Sun, Y.: 'Wavelet neural network approach for fault diagnosis of analog circuits', IEE Proceedings - Circuits, Devices and Systems, 2004;151(4):379-84
25 He, Y., Tan, Y., Sun, Y.: 'Fault diagnosis of analog circuits based on wavelet packets', Proceedings of IEEE TENCON, Chiang Mai, Thailand, 2004, pp. 267-70
26 He, Y., Sun, Y.: 'A neural-based L1-norm optimization approach for fault diagnosis of nonlinear circuits with tolerances', IEE Proceedings - Circuits, Devices and Systems, 2001;148(4):223-8
27 He, Y., Sun, Y.: 'Fault isolation in nonlinear analog circuits with tolerance using the neural network based L1-norm', Proceedings of IEEE ISCAS, Sydney, Australia, 2001, pp. 854-7
28 Bandler, J.W., Biernacki, R.M., Salama, A.E., Starzyk, J.A.: 'Fault isolation in linear analog circuits using the L1 norm', Proceedings of IEEE ISCAS, 1982, pp. 1140-3
29 Bandler, J.W., Kellermann, W., Madsen, K.: 'Nonlinear L1-optimisation algorithm for design, modeling and diagnosis of networks', IEEE Transactions on Circuits and Systems, 1987;34(2):174-81
30 Sun, Y., Lin, Z.X.: 'Fault diagnosis of nonlinear circuits', Journal of Dalian Maritime University, 1986;12(1):73-83
31 Sun, Y., Lin, Z.X.: 'Quasi-fault incremental circuit approach for nonlinear circuit fault diagnosis', Acta Electronica Sinica, 1987;15(5):82-8
32 Sun, Y.: 'A method for the diagnosis of faulty nodes in nonlinear circuits', Journal of China Institute of Communications, 1987;8(5):92-6
33 Sun, Y.: 'Faulty-cut diagnosis in nonlinear circuits', Acta Electronica Sinica, 1990;18(4):30-4
34 Sun, Y.: 'Bilinear relation and fault diagnosis of nonlinear circuits', Microelectronics and Computer, 1990;7(6):32-5
Chapter 4
Hierarchical/decomposition techniques for
large-scale analogue diagnosis
Peter Shepherd
4.1 Introduction
The size and complexity of integrated circuits (ICs) and related systems has con-
tinued to grow at a remarkable pace during recent years. This has included much
larger-scale analogue circuits and the development of complex analogue/mixed-signal
(AMS) circuits. Whereas the testing and diagnosis techniques for digital circuits are
well developed and have largely kept pace with the growth in complexity of the ICs,
analogue test and diagnosis methods have always been less mature than their digital
counterparts. There are a number of reasons for this fact. First, the stuck-at fault
modelling and a structured approach to testing has been widely exploited on the digital
side, whereas there is no real equivalent in the analogue world for translating phys-
ical faults into a simple electrical model. The second major problem with analogue
circuits is the continuous nature of the signals, giving rise to an almost infinite
number of possible faults within the circuit. Third, there is the problem of the tolerance
associated with component and signal parameters, resulting in the definition of faulty
and fault-free conditions being somewhat blurred. Other problems inherent in large-
scale analogue circuit evaluation include the non-linearities of certain components
and feedback systems within the circuits.
However, even though circuits have grown in size and complexity, the design
tools to realise these circuits have matched this development. This means that com-
plex circuits are becoming increasingly available as custom design items for a growing
number of engineers. Unfortunately, the test and diagnosis tools have not matched
this rate of development; so although the cost of IC production has seen a steady
decrease in terms of cost per component, the test and maintenance cost has increased
proportionally. While some standard analogue test and diagnosis procedures have
been developed over the years, many of these are only applicable to relatively small
circuits, often requiring access to a number of nodes internal to the circuit. Therefore,
with the increasing size and complexity of circuits, new approaches must be adopted.
One such approach that has attracted attention in recent years is the concept of a
hierarchical approach to the test and diagnosis problem whereby the circuit is viewed
at a number of different levels of circuit abstraction, from the lowest level of basic
component (transistors, resistors, capacitors, etc.) through higher levels of function-
ality (e.g., op-amps, comparators and reference circuits) to much larger functional
blocks (amplifiers, filters, phase-locked loops, etc.). By considering the circuit in this
hierarchical way, the problem can be reduced to a tractable size.
This chapter looks at some of the techniques for fault diagnosis in analogue
circuits in which this hierarchical approach is exploited. Related issues are those
of hierarchical tolerance and sensitivity analysis, which share many of the same
problems and some of the same solutions as fault diagnosis. Although a full treatment
of tolerance and sensitivity analysis is beyond the scope of this chapter, some mention
will be made where it impinges directly on the diagnosis techniques.
4.1.1.1 FD or identification
FD simply determines whether the circuit is good or faulty, that is, whether it
is working within its specification, in effect go/no-go testing. As analogue testing is
based on a functional approach, strictly this level of diagnosis should test the circuit
under all operating conditions and all possible input conditions. It is therefore not
necessarily as straightforward as it seems at first sight.
can then be used, for example, to adjust the design to make the overall performance
less sensitive to critical components.
Figure 4.1 Simple example circuit (voltage source V1, resistors R1, R2 and R4, capacitor C3) and its associated graph
Once the CUT graph has been derived, it is possible to define a tree for the
graph. A tree is a subset of the graph edges which connects all the nodes without
completing any closed loops. The co-tree is the complement subset of the tree. Given
a particular graph, there may be many different ways of defining tree/co-tree pairs.
Once a particular tree has been defined, the component connection model (CCM) [7]
can be used to separate the CUT model into component behaviour and topological
description. The behaviour is modelled using the matrix equation:
b = Za (4.1)
where
a = [i_tree ; v_cotree] and b = [v_tree ; i_cotree]
are the input and output vectors, respectively. Z is called the component transfer
matrix and describes the linear voltage-current relationships of the components in the
CUT. The topology of the CUT is described by the connection equation:
a = L11 b + L12 u (4.2)
where u is a stimulus vector. The results from the measurement of the CUT are
described by the measurement equation:
y = L21 b + L22 u (4.3)
where y is the test point vector containing the measurement results. The Lij are the
connection matrices which are derived from the node incidence matrices referring
to the tree/co-tree partition. From the simple example circuit of Figure 4.1, we can
derive a tree such that V1 , R1 and C3 form the tree and R2 and R4 form the co-tree. We
consider V1 to be the stimulus component, and so we can derive the various vector
equations
a = [i_R1 ; i_C3 ; v_R2 ; v_R4], b = [v_R1 ; v_C3 ; i_R2 ; i_R4] and u = (u_V1) (4.4)
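As a sketch of how Equations (4.1)-(4.3) combine, the following Python fragment (using numpy, with placeholder matrices rather than values derived from Figure 4.1) substitutes the connection equation into the behaviour equation and solves for the output vector b:

```python
import numpy as np

def solve_ccm(Z, L11, L12, L21, L22, u):
    """Solve the CCM equations b = Z a, a = L11 b + L12 u, y = L21 b + L22 u.

    Substituting the connection equation into the behaviour equation gives
    (I - Z @ L11) b = Z @ L12 @ u, a linear system for the output vector b.
    """
    n = Z.shape[0]
    b = np.linalg.solve(np.eye(n) - Z @ L11, Z @ (L12 @ u))
    a = L11 @ b + L12 @ u      # connection equation (4.2)
    y = L21 @ b + L22 @ u      # measurement equation (4.3)
    return a, b, y
```

Once b is known, the measurement equation is a simple matrix-vector product, which is why the SAT approaches below concentrate their effort on forming and inverting the connection matrices.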
[a^1 ; a^2] = [L_11^11 L_11^12 ; L_11^21 L_11^22] [b^1 ; b^2] + [L_12^1 ; L_12^2] u (4.9)

y = [L_21^1 L_21^2] [b^1 ; b^2] + L_22 u (4.10)

where the matrices L_ij^kl and L_ij^k are obtained by appropriately picking out the rows
and columns of the connection matrices L_ij (superscript 1 referring to the tester
partition and 2 to the testee partition). Solving these equations for the testee quantities
yields the so-called pseudo-circuit equation [5]:

[a^1 ; y_p] = [K_11 K_12 ; K_21 K_22] [b^1 ; u_p] (4.11)

where

y_p = [a^2 ; b^2], u_p = [u ; y],

K_11 = L_11^11 - L_11^12 (L_21^2)^-1 L_21^1,

K_12 = ( L_12^1 - L_11^12 (L_21^2)^-1 L_22    L_11^12 (L_21^2)^-1 ),

K_21 = [ L_11^21 - L_11^22 (L_21^2)^-1 L_21^1 ; -(L_21^2)^-1 L_21^1 ],

K_22 = [ L_12^2 - L_11^22 (L_21^2)^-1 L_22    L_11^22 (L_21^2)^-1 ; -(L_21^2)^-1 L_22    (L_21^2)^-1 ]
This equation is solved to obtain the testee quantities a2 and b2 based on the
knowledge of the test stimuli u and the measured results y. Whether a particular
testee component is fault free or not is determined by whether the results obtained from
solving the pseudo-circuit equations agree with the expected behaviour described by
Z2 . For ideal fault-free behaviour, the two values of b2 should be identical. However,
there will be an allowable tolerance, so the test is whether the difference between
the two vectors is within a certain tolerance band. Remember that the tester/testee
partition was done without knowledge of whether the components were faulty or fault
free and the algorithm operates on the assumption that all the components in the tester
group are fault free. Therefore, it is unlikely that this first pass of the ST algorithm
will provide a reliable diagnosis result. However, there will be some components in
the testee group which can be reliably said to be fault free. These can therefore be
moved into the tester partition group (being swapped with other circuit components)
and the ST algorithm re-run. Further testee components will be identified as fault free
and the iterative process continues until it is known that all the components in the
tester group are indeed fault free, at which point the diagnosis is known to be reliable
and the process is complete.
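The iterative tester/testee loop just described might be sketched as follows; `run_st_pass` is a hypothetical callback standing in for one solution of the pseudo-circuit equations, returning the testee components it finds consistent with their nominal models:

```python
def st_diagnose(components, run_st_pass, initial_tester):
    """Iterative ST loop: testees verified fault-free are promoted into the
    tester group and the pass is re-run until no further progress is made.
    Returns the remaining (suspect) testee components."""
    tester = set(initial_tester)
    testee = set(components) - tester
    while testee:
        verified = run_st_pass(tester, testee)
        if not verified:           # no new fault-free components found
            break
        tester |= verified         # promote verified components to testers
        testee -= verified
    return testee
```

The termination condition is the point noted in the text: the loop stops once every component left in the testee group has failed to be verified, at which point those components are the fault candidates.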
It should be noted that strictly this algorithm is only valid for parametric faults.
If catastrophic faults (short- or open-circuit) are present then this will change the
topology of the original circuit, and the original graph and tree/co-tree definitions will
be in error. However, it will be seen in the next section that a hierarchical extension
to this process can indeed diagnose catastrophic faults provided they are completely
within a grouped subcircuit.
where T1 to Tn are the nominal values of the n output parameters, ΔT1 etc. are
the measured deviations of the output parameters, S_x1^T1 etc. are the computed
sensitivities of the output parameters with respect to the k component variations,
S_x1^D1 etc. are the computed sensitivities of the denominators of the output
parameters with respect to the k component variations, x_i are the nominal values
of the k components and Δx_i are the deviations of the k components, which are
to be calculated. As there is a set of n linearly independent equations to solve for
the k values of Δx_i, the number of measured parameters, n, must be greater than
or equal to the number of components, k.
6. Determine the solutions (Δx_i/x_i) of the equation system of step 5. The algorithm
may have to be iterated from step 3 if sufficient precision is not achieved.
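Step 6 amounts to solving an overdetermined linear system; a minimal numpy sketch (the sensitivity values in the test are placeholders, not taken from a real circuit):

```python
import numpy as np

def solve_deviations(S, dT_rel):
    """Least-squares solution of the step-5 equation system.

    S is the (n x k) matrix of computed sensitivities and dT_rel the n
    measured relative output deviations; n >= k is required. Returns the
    k relative component deviations dx_i / x_i.
    """
    dx_rel, *_ = np.linalg.lstsq(S, dT_rel, rcond=None)
    return dx_rel
```

With n strictly greater than k, the least-squares fit also absorbs some measurement noise, which is one practical reason for taking more measurements than components.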
7. Having determined the deviations of the parameter values, these can be compared
with the nominal values and the acceptable tolerances, as well as a further
Hierarchical/decomposition techniques for large-scale analogue diagnosis 121
possible definition of the boundary between soft and hard fault values, in order
to classify each component value as: (i) within acceptable tolerance range;
(ii) out of tolerance range (parametric fault); or (iii) catastrophic fault.
Therefore, the algorithm is potentially very powerful in terms of achieving all three
levels of circuit diagnosis, but at the cost of requiring a large number of independent
parameter measurements and a very high computation cost for large-scale circuits.
It can be seen from the descriptions in the previous section that for both SBT and SAT
diagnosis approaches, the computational effort increases enormously with increasing
circuit size. In the SBT case, as the number of components increases, so does the
number of possible faults and therefore the number of simulations required in order
to derive a sufficiently extensive and reliable fault dictionary. In SAT approaches, as
these are often matrix-based calculations, the size of the vectors and matrices grows
proportionally with the complexity of the CUT, but the processing increases at a
greater rate, particularly when matrix inversions are required.
In both cases, the computational burden can be made more tractable by the use of
hierarchical techniques. This basically takes the form of grouping components into
subcircuits and treating these subcircuits as entities in their own right, thus effectively
reducing the total number of components. This can be extended to a number of
different levels, with larger groupings being made, perhaps combining the subcircuits
into larger blocks. The diagnosis can be implemented at different levels and, if required,
the blocks can be expanded back out to pinpoint a fault that had been identified with
a particular block.
A number of different approaches to hierarchical diagnosis have been proposed,
for both SBT and SAT techniques (and also combinations of the two). Some of these
are described in the remainder of this chapter. Although not an exhaustive treatment,
it highlights some of the more important procedures described in the literature in
recent years.
Figure 4.2 Operational amplifier (inputs Vip and Vin, output Vo, supplies VDD and VSS) and its associated hierarchical graph representation [10]
The hierarchical component transfer matrix Zhier is part of the overall transfer
matrix Z.
Clearly, by adopting this approach, the size of the overall circuit graph can be
considerably reduced from the flat circuit representation and the matrix solution
problem becomes more tractable. However, there are a number of complications
arising from this approach. The next step of the CCM algorithm is to partition the
graph into a tree/co-tree pair. Recalling the definition of the a and b vectors
from Equation (4.1), when the hierarchical component is considered, its constituent
edge currents and voltages must form part of the a and b vectors in the
same way. An optimal tree-generation algorithm was proposed in Reference 8 for
non-hierarchical circuits. This consists of ordering the various components by their
type (voltage sources, capacitors, resistors, inductors and current sources). The earlier
components in the list are preferred tree components, the latter are preferred co-tree.
This preference list has to be adapted to take account of the hierarchical components as
well. As described in Reference 11, this includes the hierarchical tree edges between
the voltage sources and capacitors and the hierarchical co-tree edges between the
inductors and current sources in the preferred listing. In order to prioritise components
in the same class, the edge weight of each component is calculated as the sum of
other edges connected to the edge under consideration. For components with equal
weight, they are further prioritised by the parameter value.
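This preference ordering might be sketched as a sort key; the exact tie-breaking direction for weight and parameter value is an assumption here, not taken from the references:

```python
# Preference order from the text: voltage sources first (preferred tree
# edges), then hierarchical tree edges, capacitors, resistors, inductors,
# hierarchical co-tree edges and finally current sources (preferred co-tree).
TYPE_PRIORITY = {
    'vsource': 0, 'hier_tree': 1, 'capacitor': 2, 'resistor': 3,
    'inductor': 4, 'hier_cotree': 5, 'isource': 6,
}

def order_edges(edges, adjacency):
    """edges: list of (name, type, value) tuples; adjacency: name -> list of
    adjacent edge names. The edge weight is the number of connected edges;
    ties within a class are broken by weight, then by parameter value."""
    def key(edge):
        name, etype, value = edge
        weight = len(adjacency.get(name, ()))
        return (TYPE_PRIORITY[etype], -weight, -value)
    return sorted(edges, key=key)
```

A tree-growing routine would then walk this ordered list, accepting each edge as a tree edge unless it closes a loop with the edges already accepted.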
Figure 4.3 Example hierarchical circuit and its associated BPT [13]
H1 = f(s, X)
H2 = f(s, X, H1)
... (4.14)
Hk = f(s, X, H1, ..., Hk-1)
H = f(s, X, H1, ..., Hk)
The final expression gives the transfer function H of the overall circuit where s is the
Laplace variable and X the set of component parameters.
A fault diagnosis approach using this sort of symbolic method was described
in Reference 16, in which single and double parametric faults were
simulated. The simulation time for a large-scale circuit was shown to be 15 times
faster using SOE rather than a traditional numerical simulation. Further improvements
in speed are described in Reference 13, which include: (i) only re-evaluating the part
of the SOE that is influenced by a parameter variation; and (ii) optimum choice of
the BPT so that the number of expressions influenced by a parameter is minimized.
These two methods are now described in detail. For method (i), this is derived from
Reference 17 in which the SOE approach was applied to sensitivity analysis. In order
to identify the equations of the SOE that are influenced by a particular parameter x,
use is made of a graphical technique. Here, the dependencies implied in the SOE are
represented by connecting arrows. As an example, consider the SOE equation system
given in Equation (4.15) and the corresponding expression graph given in Figure 4.4.
H = Hk/Hk-2
Hk = Hk-1 + Hk-2
Hk-1 = 3Hk-5 + 3
Hk-2 = 2Hk-4
... (4.15)
H4 = H3/H1
H3 = 5H2
H2 = x2
H1 = x1
Figure 4.4 Expression graph (DAG) for the SOE of Equation (4.15), with nodes H, Hk, Hk-1, Hk-2, ..., H4, H3, H2 = x2 and H1 = x1

Figure 4.5 DAG corresponding to the BPT of Figure 4.3, with nodes H = H11, H12, H21, ... [13]
node to the root node. In this way, only the expressions that are inuenced by x are
re-evaluated, potentially leading to great savings in computation time.
For method (ii), additional acceleration of computation can be made by reducing
the number of expressions influenced by a parameter. The aim is to minimize the average
number of influenced expressions per parameter, resulting in an average reduction
in the computation cost. The number of expressions influenced by a parameter is the
sum of the lengths of the paths in the DAG that connect from the root node to the leaf
node of the parameter under consideration. Therefore, the aim of this method is to
minimize the average length of the DAG paths from root to leaves. Clearly the SOE
represents the functionality of the circuit, so this cannot be altered itself. However, the
way in which the circuit is hierarchically partitioned can be altered and it is through
this choice that the optimization is achieved. A heuristic solution to this problem was
introduced in Reference 18 for sensitivity analysis. It relies on the fact that the DAG
and the BPT are strongly related. As an example, consider the DAG representation
that is related to the BPT illustrated in Figure 4.3 as shown in Figure 4.5.
For each node in the BPT there is a sample set of equations from the SOE and
therefore a corresponding set of nodes from the DAG. Similarly, as the BPT represents
the dependency between different hierarchical blocks through the connecting directed
edges, there is a close similarity between the paths in the BPT and paths in the DAG.
Therefore, for most typical circuit structures, the length, lpt , of the path in the BPT
Figure 4.6 Balanced (lower) and unbalanced (upper) binary partition trees [13]
and the lengths, lDAG, of corresponding paths in the DAG are proportional to each
other, lDAG ∝ lpt. Therefore, minimizing lpt minimizes lDAG. We have now reduced
the problem to determining the BPT that will minimize the average value of lpt. The
solution to this comes from graph theory, where it is known that the solution is a
maximally balanced tree structure, as illustrated in Figure 4.6.
Here the two extremes of balance for a tree structure are illustrated, a totally
unbalanced tree and a maximally balanced partition tree. The respective average
lengths of the paths are given by

lpt(unbalanced) = (n + 1)/2 - 1/n (4.16)

lpt(balanced) = log2 n (4.17)

where n is the number of leaves of the tree (number of circuit parameters). Therefore, a
maximum improvement of O(n)/O(log2 n) can be achieved by choosing a maximally
balanced BPT as the basis for the SOE analysis.
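Equations (4.16) and (4.17) can be checked directly by enumerating leaf depths: in a totally unbalanced (caterpillar) tree with n leaves the leaf depths are 1, 2, ..., n-1, n-1, while a maximally balanced tree with n = 2^m leaves has every leaf at depth log2 n. A small numerical check:

```python
import math

def avg_depth_unbalanced(n):
    """Average leaf depth of a totally unbalanced binary tree with n leaves."""
    depths = list(range(1, n)) + [n - 1]   # depths 1..n-1 plus the deepest twin
    return sum(depths) / n

def avg_depth_balanced(n):
    """Average leaf depth of a maximally balanced tree, exact for n = 2^m."""
    return math.log2(n)
```

For n = 44 parameters (the filter example discussed later) the unbalanced average is about 22.5 against log2(44) of roughly 5.5, illustrating the O(n)/O(log2 n) gain.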
Having established the advantages of the hierarchical approach to
tolerance/sensitivity analysis and fault simulation yielded by symbolic analysis, both SAT
and SBT applications can now be envisioned. Although SBT approaches are further
detailed in the subsequent section of this chapter, both applications of symbolic
analysis will be outlined here.
In respect of SAT algorithms, the symbolic approach to the calculation of sensi-
tivities is directly applicable to the method of diagnosis described in Section 4.2.2.2.
One of the main difficulties with this approach, apart from requiring a large enough
number of measurements, is the iterative calculation of sensitivities, due to the linear
nature of the equation system. This may require many sensitivity calculations, espe-
cially for large-scale circuits. Therefore, the post-measurement computation burden
can be excessively high. However, employing the SOE approach to multi-parameter
sensitivity analysis can greatly ease this computation task.
In Reference 17 a method has been described for the calculation of the sensitivity
with respect to a parameter, x, which follows the DAG paths in a bottom-up fash-
ion, starting with the leaf node representing the parameter x and ending at the root
node representing H. For example, considering the SOE and DAG associated with
Figure 4.4, the sensitivity of H with respect to x1 is calculated in a successive fashion,
represented by the following equation system:
∂H1/∂x1 = 1

∂H4/∂x1 = (∂H4/∂H1)(∂H1/∂x1)

... (4.18)

∂Hi/∂x1 = Σ_(Hi,Hj)∈DAG (∂Hi/∂Hj)(∂Hj/∂x1)

...

∂H/∂x1 = Σ_(H,Hj)∈DAG (∂H/∂Hj)(∂Hj/∂x1)
the DAG of Figure 4.4, the following equation set can be derived:
∂H/∂H = 1

∂H/∂Hk = (∂H/∂H)(∂H/∂Hk)

∂H/∂Hk-1 = (∂H/∂Hk)(∂Hk/∂Hk-1)

... (4.20)

∂H/∂Hj = Σ_(Hi,Hj)∈DAG (∂H/∂Hi)(∂Hi/∂Hj)

...

∂H/∂H1 = Σ_(Hi,H1)∈DAG (∂H/∂Hi)(∂Hi/∂H1)
As the leaf expressions Hn of the SOE correspond to circuit parameters, the
sensitivities of the network function are given by the partial derivatives of H with
respect to the leaf expressions:

sen(H, xn) = ∂H/∂xn = ∂H/∂Hn (4.21)
Therefore, evaluating Equation (4.20) generates all the sensitivities in parallel. Further
details of the method are given in Reference 6. As one sensitivity term is generated
for each leaf node of the DAG and as the number of leaf nodes is expected to increase
linearly with increasing circuit complexity [14, 17], this indicates that the computa-
tional expense of this parallel system is expected to grow only linearly with circuit
complexity.
In respect of the SBT approach, the symbolic approach can be applied to the
generation of a fault dictionary through fault simulation. The process follows the
following steps:
1. The circuit is divided into a hierarchical system employing a maximally
balanced BPT.
2. The SOE for the system is established.
3. Using the nominal circuit parameters (leaf nodes), the nominal SOE is
evaluated to yield the nominal value of the transfer function H.
4. For each fault simulation, the parameter under consideration is changed to its
faulty value and also the corresponding leaf node is allocated a token.
5. Proceeding bottom-up through the graph, each node that has a token, passes the
token on to all the predecessor DAG nodes and all the respective expressions
are re-evaluated.
6. The process is continued until the root node, yielding the fault simulated
transfer function Hf .
The process can be run for both single fault conditions, where only a single leaf is
allocated a token and fault parameter value, and multiple-fault conditions, where all
the relevant leaf nodes are allocated tokens and fault values. Any nodes on the DAG
that are not passed tokens during the process, and are therefore not re-evaluated, remain
with their nominal values determined in step 3. Once all the required fault simulations
have been completed, the fault dictionary can be built in the traditional manner.
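Steps 4-6 can be sketched as a reachability pass followed by selective re-evaluation; the tiny SOE used in the test is the H1 = x1, H2 = x2, H3 = 5H2, H4 = H3/H1 fragment of Equation (4.15):

```python
def fault_simulate(exprs, order, parents, nominal, fault_node, fault_value):
    """Token-passing fault simulation (steps 4-6) on an SOE DAG.

    exprs maps a node to a function of the value dictionary, order is the
    bottom-up evaluation order, parents maps a node to the nodes that use
    it. Only nodes reachable from the faulted leaf receive a token and are
    re-evaluated; all others keep their nominal values.
    """
    values = dict(nominal)
    values[fault_node] = fault_value
    tokens, stack = {fault_node}, [fault_node]
    while stack:                              # step 5: pass tokens upwards
        for p in parents.get(stack.pop(), ()):
            if p not in tokens:
                tokens.add(p)
                stack.append(p)
    for node in order:                        # re-evaluate tokenised nodes only
        if node in tokens and node in exprs:
            values[node] = exprs[node](values)
    return values
```

The saving comes from the second loop: for a well-balanced BPT only a short root-to-leaf path carries tokens, so most of the SOE keeps its step-3 nominal values untouched.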
In Reference 13 the method is applied to the fault simulation of an active bandpass
filter circuit with 44 circuit parameters. Here an additional increase in simulation speed
by a factor of five was observed when a fully balanced BPT and token passing were
implemented, above the factor of 15 observed in Reference 14 which represented the
gain from using the SOE symbolic analysis compared to traditional fault simulation
techniques.
method is to take one example from a cluster as being representative of that cluster
and use this, via the generated macromodel, for the fault simulation at the next level.
The process is repeated upwards to the highest level of the hierarchy.
Once the response of the CUT has been measured, the process is reversed, using
the trained networks to operate on the signature to trace down through the hierarchical
levels and locate the fault. However, this technique does require that in moving
down through the hierarchical levels, the block in which the fault has been identified
must then be isolated and the terminal nodes made accessible. Additionally, there
is the problem of the fault clustering, and the transference of one representative
fault signature through the macromodelling, which was performed in the bottom-up
process of neural network training. In order to track back through the hierarchical
layers to provide detailed diagnosis results, the database has to be extended to separate
the faults in all the clusters. This is achieved by training the neural network with a
new set of signatures for the faults in each cluster. This can be achieved by either
measuring the data at a new node or performing a different type of analysis (e.g., a
time domain measurement or a phase analysis, etc.). The reasoning behind this is that
a different type of analysis of the response from the different faults, which provided
similar signatures in one form of analysis, may well provide sufficiently different
responses when analysed under different conditions.
Figure 4.7 Specification hierarchy: spec(1), ..., spec(n) at the top Level N, further specifications for Module A and Module B at intermediate levels, down to spec(1), ..., spec(n) at Level 1
is that they represent the lowest point in the hierarchy to which the diagnosis will be
performed. This is a practical consideration from the point of view of being able to
repair a faulty circuit or system.
The specifications for each level of the hierarchy are indicated in Figure 4.7. At
the top level there are n specifications; at the other levels there are varying numbers
of specifications appropriate to the different levels of abstraction. The key to this
approach, though, is the relationship between the specifications at various levels and,
in particular, how faults are propagated from lower to higher levels. This is performed
through behavioural modelling and is the pivotal aspect of the approach. A fault at the
top level of the hierarchy can be represented by one or more out-of-range values for
spec(1), ..., spec(n) or may occur because of a structural fault in the interconnections
between the modules at the N-1 level. Therefore, the faulty values of the spec()
parameters and the structural interconnection faults must be derived from the
basic faults introduced in order to form the fault dictionary. The fault simulation
process must therefore be a bottom-up approach.
Starting with the leaf cells, which are the basic building blocks of the circuit,
the specifications for these cells are known in terms of nominal values and tolerance
ranges. A value outside of the accepted range, whether it represents a parametric
deviation (soft fault) or a deviation to a range extremity (hard fault), can be introduced
into the fault modelling process. By simulation, the effect of the introduced faults
can be quantified. These effects now have to be translated into the next level of the
hierarchy via behavioural modelling of the module they affect. However, during this
process, the concept of fault clustering can be introduced. Suppose two different faults
at the leaf cell level give rise to substantially the same simulated response (within a
specified bound). There is no need to translate behavioural models of the individual
faults: a single model will suffice for both. There is no loss of diagnostic
power here (in terms of fault location) as the two (or more) faults that give rise to the
characteristic response originate in the same cell.
During the fault simulation process, there are two possible approaches to fault
propagation. First, injection of a chosen fault into the leaf cell and propagation of
the results of the fault into the higher levels of the hierarchy or, second, computation
of the effect of the fault on the behaviour of the leaf cell, repeating this for all the
specied faults, and construction of an overall behavioural model that can then be
applied in a one-pass way to the next level of the hierarchy. The first option involves
more simulation, the second option requires additional modelling effort but is faster
from the diagnostic simulation point of view. The MiST PROFIT software supports
both approaches.
Given that there are n specifications for a particular module, then this represents an
n-dimensional space. Hard faults represent a point in this space, whereas soft
faults represent a trajectory or vector within this space. These vectors are represented
by non-linear, real functions and while the functions are computed for one particular
parametric fault, all the other circuit parameters are held at their nominal values. In
the MiST PROFIT approach, component tolerances can also be taken into account.
These are computed separately as described shortly. The fault points and vectors have
to be derived through simulation.
Tolerance values are often computed via Monte Carlo approaches, but these suffer
from a huge computation cost, often requiring many thousands of circuit simulation
runs. This makes them unsuitable for large-scale circuits. So the MiST PROFIT
software takes an alternative route to derive the component tolerance effects, termed
the tolerance band approach. A method of propagating the tolerance effects through
the different levels of the hierarchy is also described. A tolerance band approach was
introduced in Reference 20, but this depends on the invariance of the signs of the
sensitivities under fault conditions. This condition is not generally met as the output
values are often non-linear and non-monotonic with respect to parameter variations.
Therefore, to obtain a more accurate bound of the tolerance effects, the signs of the
sensitivities have to be re-evaluated for each fault value, and therefore the following
algorithm has been implemented:
For each hard fault:
1. Compute the sensitivity of the measurement to the parameter, p.
2. For each parameter, calculate upper and lower bounds to the parameter, the
upper bound is given by the nominal value of p plus the sign of the sensitivity
multiplied by the tolerance value of p whereas the lower bound is given by
the nominal value minus the sign of the sensitivity multiplied by the tolerance
value.
3. The circuit is simulated using these two bound values of p in order to establish
the upper and lower tolerance bounds.
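For a given hard fault, the three steps might look as follows; `simulate` and `sensitivity` are hypothetical callbacks standing in for the circuit simulator, and treating the bounds as a single worst-case corner per direction is an interpretation of the algorithm above:

```python
def tolerance_bounds(params, tols, simulate, sensitivity):
    """Tolerance-band bounds for one hard-fault condition.

    For each parameter the sign of its sensitivity (step 1) decides which
    tolerance extreme raises the measurement; the two corner parameter sets
    (step 2) are then simulated to give the bounds (step 3).
    """
    upper, lower = dict(params), dict(params)
    for name, p in params.items():
        sign = 1.0 if sensitivity(name, params) >= 0 else -1.0
        upper[name] = p + sign * tols[name]
        lower[name] = p - sign * tols[name]
    return simulate(upper), simulate(lower)
```

Only two extra simulations per fault value are needed, which is precisely the cost that makes the method impractical along an entire soft-fault trajectory and motivates the heuristic that follows.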
As two additional simulations have to be made for each fault value, it is not
practical to implement this for the soft-fault case, as a number of tolerance calculations
have to be performed along the fault vector space. Therefore, an alternative approach
is adopted based on the following heuristic. Consider a specification as a function of a
particular parameter p, that is, spec(i) = fi(p) for any i between one and n, for a model
with n specifications and some non-linear function f. If the sign of the sensitivity of
spec(i) to any one circuit parameter for two different values of p are different, then it is
highly likely that the signs of the sensitivities to all the circuit parameters for the two
different values of p are also different. The relationship spec(i) = fi (p) is a mapping
For hard fault simulation and diagnosis, the diagnosis procedure is based on a
comparison of the positions in the n-dimensional space (taking account also of the
tolerance boxes). With parametric faults, there is an associated trajectory in the space
which, for convenience, is constructed in a PWL (piecewise linear) fashion. The diagnosis search is
therefore more complicated as the search must be made for a best-fit positional point
on a trajectory, again taking account of the tolerance regions around the trajectory.
However, if successful, this can yield both fault location and fault value diagnosis
information.
are translated into the filter specification space, which consists of the transfer function
magnitude at three specified frequencies. By only operating on this critical set of fault
syndromes, the number of entries in the fault dictionary and the number of higher
level fault simulations is minimized, thus leading to the most compact form of fault
dictionary that stores only those faults that contain diagnostic information. Once the
fault dictionary has been built, the next stage is to perform the diagnostic work based
on the measurements of the CUT.
For the fault location diagnosis, this is based purely on the SBT data from the
fault dictionary. Measurements of the three transfer function magnitudes are made
and these represent a point in the three-dimensional specification space. If a fault
is present, this will be different from the point representing the fault-free condition
(with some associated tolerance box). This is termed in Reference 21 the observed
fault syndrome. The faulty block at the next lowest hierarchical level is determined
via a nearest neighbour calculation between the observed fault syndrome and the
critical fault syndromes in the fault dictionary. In this example, this would identify
one (or possibly more if multiple fault simulations were performed) of the op-amps.
Measurements can then be made on the voltage gain and gain-bandwidth product of
the op-amp(s) and using critical fault syndromes at this level of hierarchy, the faulty
transistor(s) can be identified.
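The nearest-neighbour step might be sketched as follows (the syndrome values in the test are made up, not taken from Reference 21):

```python
import numpy as np

def locate_fault(observed, dictionary):
    """Return the dictionary entry whose critical fault syndrome lies
    closest (Euclidean distance) to the observed fault syndrome."""
    return min(dictionary, key=lambda blk: np.linalg.norm(
        np.asarray(dictionary[blk], dtype=float)
        - np.asarray(observed, dtype=float)))
```

In practice the distance test would also respect the tolerance box around each stored syndrome, rejecting matches that fall outside every box rather than always returning the closest entry.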
Once the faulty component has been identied, the diagnosis can then move to
the second stage, fault value identification. This requires additional SAT procedures
as outlined here. The basis for this routine in Reference 21 is the use of non-linear
regression models to approximate the functional relationship between the values of the
measurements made on the CUT and the parameters of the circuit under each particular
fault. In this way, the simulation burden can be eased as circuit-level simulation is
replaced by a simple function evaluation. The regression tool used is the Multi-variate
Adaptive Regression Splines (MARS) technique [22]. In addition, as this approach
explicitly solves for the parameters of the CUT, the technique does not suffer from the
fault masking effects that can arise due to component tolerances. In fact, this technique
requires a two-stage process: first, construction of the non-linear regression model via
circuit simulation and, second, the post-test stage whereby an iterative algorithm solves
for the parameter values of the CUT.
The regression model is built on the assumption that a single fault has occurred:
this parameter is varied between its two extreme values while the remaining parameters
of the CUT are allowed to vary within their tolerance limits. Simulations are based on
these variations (in Reference 21 the allied Cadence simulation and extraction tools
were used). The MARS technique is then used to build up the required regression
models, based on this simulation data. This provides a model of the variation of
measurements made on the CUT with the variation in parameters of the CUT.
The second stage is the fault value evaluation diagnosis procedure, which is an
iterative procedure, using a modified Newton-Raphson algorithm. This comprises the
following stages. As input, the algorithm takes the measurements made on the CUT,
the knowledge of the faulty component and the MARS regression models. A coarse
search for a solution is made to provide an initial value. The Jacobian of the regression
matrix is then computed and a check is made of the convergence to see if the solution
to the regression model matches the measured values within a certain threshold value,
if not, the process is iterated. There are two issues in using this approach. First, the
set of measurements may not uniquely identify the parameters of the CUT. If there
are dependent equations in the system, the Jacobian will become ill-conditioned and
there will be an infinite set of solutions. In test terms, this means that there exist
ambiguity sets and these must be eliminated for the process to provide a solution.
The presence of ambiguity sets is identified using the procedure of Reference 23 and
they are eliminated by ensuring that the parameters in the ambiguity groups are held
constant. The second issue is the convergence of the Newton-Raphson method, which
can fail to converge if the initial point is a long way from the solution. Hence the
initial coarse search; the authors also use a damping factor in the iteration process to
further improve the convergence.
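For a single faulty parameter the damped iteration reduces to the following sketch, with `model` and `jacobian` as hypothetical stand-ins for the MARS regression model and its derivative:

```python
def solve_fault_value(model, jacobian, measured, p0, damping=0.5, tol=1e-9):
    """Damped Newton-Raphson solution of model(p) = measured for a scalar
    parameter p, starting from the coarse-search estimate p0."""
    p = p0
    for _ in range(200):
        residual = model(p) - measured
        if abs(residual) < tol:
            break
        p -= damping * residual / jacobian(p)
    return p
```

The damping factor trades convergence speed for robustness: a full Newton step (damping = 1) converges quadratically near the solution but can diverge from a poor starting point, which is exactly the situation the coarse search and damping are there to guard against.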
In Reference 21 the method is illustrated by using a slew-rate filter as an example
circuit and the authors demonstrate 100 per cent diagnostic accuracy over a range of
levels of introduced faults.
The same authors have further refined this method to include the generation of
optimized test waveforms in the time domain [24]. These transient waveform stimuli
are generated automatically, based on the CUT, its constituent models, a list of the
faults to be diagnosed and, in particular, the list of observable nodes in the circuit
that can be probed at the measurement stage. The waveforms can consist of either
steps or ramps in voltage. There is also a time-to-observe parameter for the test,
which specifies the period of time after the application of the test waveform when the
signal at the test point should be observed. Both of these aspects are optimized by
minimizing a cost function based on diagnostic success of the test signal. The ramps,
which is the method implemented in Reference 24, are generated by starting with a
random slope for the waveform section, applying this to the CUT and then calculating
the first-order sensitivities of the faults to the slope of the ramp. Depending on the
values of these sensitivities, the slope is adjusted and an iterative procedure is applied
to derive the optimum values of the slope and the time-to-observe parameter.
4.4 Conclusions
concentrates on the simulation effort after the measurements have been performed in
order to trace the faulty component(s).
There are drawbacks to both of these approaches. In the SBT approach, the
number of faults simulated, and therefore contributing to the dictionary, must be
a finite subset of the infinite set of possible faults (assuming parametric faults are
considered). However, a good representative subset is usually achievable, given such
fault modelling approaches as IFA. The SAT approach generally requires access to
a high proportion (or indeed all) of the internal circuit nodes in order to perform
measurements and achieve a full diagnostic capability. These techniques were often
developed for circuits that were breadboarded or on single-layer printed circuit boards
where access to all the nodes (for voltage measurement, at least) was possible. With
the implementation of these circuits on multilayer boards, multi-chip modules and
monolithic ICs, this access to internal nodes becomes even more difficult and may
require inclusion of test overhead circuitry to enable observability of internal nodes.
While many of the techniques described in this chapter are relatively mature
and quite powerful, there is still a lot of work to be done in this field to
achieve diagnosis approaches that are readily integrated into the available modern IC
technologies.
4.5 References
10 Ho, C.K., Shepherd, P.R., Eberhardt, F., Tenten, W.: Hierarchical fault diagnosis of analog integrated circuits, IEEE Transactions on Circuits and Systems I, 2001;48(8):921–9
11 Ho, C.K., Shepherd, P.R., Tenten, W., Eberhardt, F.: Hierarchical approach to analogue fault diagnosis, Proceedings of 3rd IEEE International Mixed-Signal Test Workshop, Seattle, USA, 3–6 June 1997, pp. 25–30
12 Sheu, H.-T., Chang, Y.-H.: Hierarchical frequency domain robust component failure detection scheme for large scale analogue circuits with component tolerances, IEE Proceedings Circuits, Devices and Systems, February 1996;143(1):53–60
13 Eberhardt, F., Tenten, W., Shepherd, P.R.: Symbolic parametric fault simulation and diagnosis of large scale linear analogue circuits, Proceedings of 5th IEEE International Mixed-Signal Test Workshop, Whistler, B.C., Canada, June 1999, pp. 221–8
14 Hassoun, M.H., Lin, P.M.: A hierarchical network approach to symbolic analysis of large scale networks, IEEE Transactions on Circuits and Systems, April 1995;42(4):201–11
15 Ho, C., Ruehli, A.E., Brennan, P.A.: The modified nodal approach to network analysis, IEEE Transactions on Circuits and Systems, June 1975;25:504–9
16 Wei, T.W., Wong, M.W.T., Lee, Y.S.: Fault diagnosis of large scale analog circuits based on symbolic method, Proceedings of 3rd IEEE International Mixed-Signal Test Workshop, Seattle, USA, 3–6 June 1997, pp. 3–8
17 Echtenkamp, J., Hassoun, M.H.: Implementation issues for symbolic sensitivity analysis, Proceedings of the 39th Midwest Symposium on Circuits and Systems, 1996, Ch. 319, pp. 429–32
18 Eberhardt, F., Tenten, W., Shepherd, P.R.: Improvements in hierarchical symbolic sensitivity analysis, Electronics Letters, February 1999;35(4):261–3
19 Somayajula, S.S.: A neural network approach to hierarchical analog fault diagnosis, Proceedings of IEEE Systems Readiness Technology Conference, 20–23 September 1993, pp. 699–706
20 Pahwa, A., Rohrer, R.: Band faults: efficient approximation of fault bands for the simulation before diagnosis of linear circuits, IEEE Transactions on Circuits and Systems, February 1982;29(2):81–8
21 Chakrabarti, S., Cherubal, S., Chatterjee, A.: Fault diagnosis for mixed-signal electronic systems, Proceedings of IEEE Aerospace Conference, 6–13 March 1999;3:169–79
22 Friedman, J.H.: Multivariate adaptive regression splines, The Annals of Statistics, 1991;19(1):1–141
23 Liu, E., Kao, W., Felt, E., Sangiovanni-Vincentelli, A.: Analog testability analysis and fault diagnosis using behavioural modelling, Proceedings of IEEE Custom Integrated Circuits Conference, 1994, pp. 413–6
24 Chakrabarti, S., Chatterjee, A.: Diagnostic test pattern generation for analog circuits using hierarchical models, Proceedings of 12th VLSI Design Conference, 7–10 January 1999, pp. 518–23
Chapter 5
DFT and BIST techniques for analogue
and mixed-signal test
Mona Safi-Harb and Gordon Roberts
5.1 Introduction
The continuous decrease in the cost to manufacture a transistor, mainly due to the
exponential decrease in the CMOS technology minimum feature length, has enabled
higher levels of integration and the creation of extremely sophisticated and complex
designs and systems on chip (SoCs). This increase in packing density has been coupled
with a cost-of-test function that has remained fairly constant over the past two decades.
In fact, the Semiconductor Industry Association (SIA) predicts that by the year 2014,
testing a transistor with a projected minimum length of 35 nm might cost more than
its manufacture [1].
Many reasons have contributed to a fairly flat cost-of-test function over the past
years. Although transistor dimensions have been shrinking, the same cannot be
said about the number of input/output (I/O) operations needed. In fact, the increased
packing density and operational speeds have been inevitably linked to an increased
pin count. First, maintaining a constant pin count–bandwidth ratio can be achieved
through parallelism. Second, the increased power consumption implies an increased
number of dedicated supply and ground pins for reliability reasons. Third, the
increased complexity and the multiple functionalities implemented in today's SoCs
entail the need for an increased number of probing pins for debugging and testing pur-
poses. All the above-mentioned reasons, among others, have resulted in an increased
test cost.
Testing high-speed analogue and mixed-signal designs, in particular, is becoming
a more difficult task, and observing critical nodes in a system is becoming increasingly
challenging. As the technology keeps scaling, especially past the 90 nm node,
metal layers and packing densities are increasing, while signal bandwidths and
rise times extend beyond the gigahertz range. Viewing tools such as wafer or
on-chip probing are no longer feasible since the large parasitic capacitance loading
of a contacting probe would dramatically disturb the normal operation of the circuit.
On the other hand, the automatic test equipment (ATE) interface has become a major
bottleneck to deliver signals with high fidelity, due to the significant distances the
signals have to travel at such operational speeds. In addition, the ATE cost is exploding
to keep up with the ability to test complex integrated SoCs. In fact, a $20 000 000
ATE system, capable of testing such complicated systems, was forecast by the
SIA roadmap. Embedded test techniques, benefitting from electrical proximity, area
overhead scaling and bandwidth improvements, hence leading to at-speed testing,
therefore constitute the key to an economically viable test platform.
Test solutions can be placed on the chip, and are then known as a structural test or
built-in self-test (BIST), on the board level, or as part of the requirements of the ATE.
Each solution will entail verification of signal fidelity and responsibility to different
people (the designer, the test engineer or the ATE manufacturer), different calibration
techniques and different test instruments, all of which directly impact the test cost,
and therefore the overall part cost to the consumer. It is the purpose of this chapter to
highlight some of the work and latest developments on embedded mixed-signal testing
(BIST), and the work that has been accomplished so far on this topic, specically for
the purpose of design validation and characterization. Nonetheless, it is important to
point out that there is a lot of effort on placing more components on the board, as well
as trying to combat the exploding costs of big ATE systems through low-cost ones,
specifically to address the volume or production testing of semiconductor devices,
but that discussion is beyond the scope of this chapter. Some of the recent trends in
the testing industry will also be briefly highlighted.
5.2 Background
The standard test methodologies for testing digital circuits are simple, consisting
largely of scan chains and automatic test pattern generators, and are usually used to test
for catastrophic and processing/manufacturing errors. In fact, digital testing, including
digital BIST, has become quite mature and is now cost effective [2, 3]. The same
cannot be said about analogue tests, which are performed for a totally different reason:
meeting the design specifications under process variations, mismatches and device
loading effects. While digital circuits are either good or bad, analogue circuits are
tested for their functionality within acceptable upper and lower performance limits as
shown in Figure 5.1. They have a nominal behaviour and an uncertainty range. The
acceptable uncertainty range and the error or deviation from the nominal behaviour is
heavily dependent on the application. In today's high-resolution systems, it could well
be within 0.1 per cent or lower. This makes the requirements extremely demanding
on the precision of the test equipment and methods used to perform those tests.
Added to this problem is the increased test cost when testing is performed after the
integration of the component to be tested into a bigger system. As a rule of thumb, it
costs ten times more to locate and repair a problem at the next stage when compared
to the previous one [4]. Testing at early design stages is therefore economically
Figure 5.1 Probability density functions of (a) a digital functional quantity, for which circuits are simply good or bad, and (b) an analogue functional quantity, for which good circuits must lie between lower and upper limits
beneficial. This paradigm where, early on in the design stages, trade-offs between
functionality, performance and feasibility/ease of test are considered has come to be
known as design for testability (DfT).
Ultimately, one would want to reduce, if not eliminate, the test challenges as
semiconductor devices exhibit better performance and higher level of integration. The
most basic test set-up for analogue circuits consists of exciting the device under test
(DUT) with a known analogue signal such as a d.c., sine, ramp, or arbitrary wave-
form, and then extracting the output information for further analysis. Commonly,
the input stimulus is periodic to allow for mathematically averaging the test results,
through long observation time intervals, to reduce the effect of noise [5]. Generally,
the stimulus is generated using a signal generator and the output instrument is a root
mean square (RMS) meter that measures the amount of RMS power over a narrow but
variable frequency band. A preferred test set-up is the digital signal processing (DSP)-
based measurement for both signal generation and capture. Most, if not all, modern
test instruments rely on powerful DSP techniques for ease of automation [6] and
increased accuracy and repeatability. Most mixed-signal circuits rely on the presence
of some components such as a digital-to-analogue converter (DAC) and an analogue-
to-digital converter (ADC). In some cases, it is those components themselves that
constitute the DUT. Testing converters can be achieved by gaining access to inter-
nal nodes through some analogue switches (usually CMOS transmission gates). The
major drawback of such a method is the increased I/O pin count and the degradation
due to the non-idealities in the switches, especially at high speed, even though some
techniques have been proposed to correct for some of these degradation effects [7].
Nonetheless, researchers looked to define a mixed-signal test bus standard compatible
with the existing IEEE 1149.1 boundary scan standard [8] to facilitate the testing of
mixed-signal components. One of the earliest BIST to be devised was as a go/no-go
test for an ADC [9]. The technique relies on the generation of an analogue ramp signal,
and a digital finite-state machine is used to compare the measured voltage to the
expected one. A decision is then made about whether or not the ADC passes the test.
While not a major drawback on the functionality of the devised BIST, the proposed
test technique in Reference 9 relies on an untested analogue ramp generation that
constitutes a drawback on the overall popularity of the method. An alternative approach
would therefore be to devise signal generation schemes that can be controlled, tuned
and easily transferred to and from the chip in a digital format. Several techniques have
been proposed for on-chip signal generation and they are the subject of Section 5.3.
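The go/no-go ramp test of Reference 9 can be sketched behaviourally as below. This is an illustrative sketch, not the published implementation: the ADC models, the bit width and the 1-LSB tolerance are assumptions for the example.

```python
def go_no_go_ramp_test(adc, n_bits, v_ref, tol_lsb=1, n_steps=256):
    """Apply an analogue ramp and compare each output code against the
    expected one, as the digital finite-state machine in the BIST would."""
    full_scale = 2 ** n_bits
    for k in range(n_steps):
        v = k * v_ref / n_steps                   # one step of the ramp
        expected = int(v / v_ref * full_scale)    # ideal output code
        if abs(adc(v) - expected) > tol_lsb:
            return False                          # no-go
    return True                                   # go

# Hypothetical 8-bit ADC behavioural models (v_ref = 1.0 V assumed).
ideal_adc = lambda v: int(v * 256)                # ideal transfer curve
offset_adc = lambda v: int(v * 256) + 5           # 5-LSB offset fault
```

The decision is a single pass/fail bit, which is precisely what makes the scheme attractive as a go/no-go BIST.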
Figure 5.2 The MADBIST scheme: a DSP engine and digital signal generator drive the DAC and ADC on chip, with an analogue multiplexer and anti-aliasing filter between the analogue output and input
Here, it suffices to mention that with the use of sigma-delta (ΣΔ)-based schemes,
it is possible to overcome the drawback of the analogue ramp, as was proposed by
Toner and Roberts [5] in another BIST scheme, referred to as mixed-analogue-digital
BIST (MADBIST). The method relies on the presence of a DAC and an ADC on a
single integrated circuit (IC), as is the case in a coder/decoder (CODEC), for example.
Figure 5.2 illustrates such a scheme.
In the MADBIST scheme, first the ADC is tested alone using a digital-based
oscillator excitation. Once the ADC passes the test, the DAC is then tested using either
the DSP engine or the signal generator. The analogue response of the DAC is then
looped back and digitized using the ADC. Once the ADC and then the DAC pass their
respective tests, they can be used to characterize other circuit behaviours. In fact, this
technique was used to successfully test circuits with bandpass responses, as in wireless
communications. In Reference 10, MADBIST was extended to a superheterodyne
transceiver architecture by employing a bandpass oscillator for the stimulus
source which was then mixed down using a local oscillator and digitized using the
ADC. Once tested, the DAC and transmit path are then characterized using the
loop-back configuration explained above.
To further extend the capabilities of on-chip testing, a complete on-chip mixed-
signal tester was then proposed in Reference 11, which is capable of a multitude
of on-chip testing functions, all the while relying on transferring the information
to/from the IC core in a purely digital format. The architecture format is generic and
is shown in Figure 5.3. The functional diagram is identical to that of a generic DSP-
based test system. Its unified clock guarantees coherence between the generation and
measurement subsystems, which is important from a repeatability and reproducibility
point of view, especially in a production testing environment. This architecture in
particular is the simplest among all those presented above and is versatile enough to
perform many testing functions as will be shown in Section 5.7.
Of particular interest to the architecture proposed in Reference 11, besides its
simplicity and its digital interfacing, is its potential to achieve a more economical
test platform in an SoC environment. SoC developers are moving towards integration
of third-party intellectual properties (IPs) and embedding the various IP cores in an
Figure 5.3 Generic on-chip tester architecture: under program control and a single clock source, an arbitrary waveform generator stimulates the DUT and a waveform digitizer passes the response to a DSP
architecture to provide functionality and performance. The SoC developers also have
the responsibility of testing each IP individually. While it is attractive to maintain the
integration trend, the resultant test time and cost have inevitably increased as well.
Parallel testing can be used to combat this difficulty, avoiding therefore sequential
testing, where a significant amount of DUT, DUT interface and ATE resources remain
idle for a significant amount of time. However, incorporating more of the specialized
analogue instruments (arbitrary waveform generators and digitizers) within the same
test system is one of the cost drivers for mixed-signal ATEs, placing a bound on the
upper limit of parallelism that can be achieved. In fact, modest parallelism is already
in use today by industry to test devices on different wafers, using external probe cards
[12]. However, reliable operation of a high pin-count probe is difficult, placing an
upper constraint on parallel testing; this constraint does not appear able to keep up
with the integration level, and therefore with the increased IC pin count, I/O
bandwidth and the complexity and variation in the nature of the integrated IPs, that
the semiconductor industry has been facing.
Concurrent testing, which relies on devising an optimum strategy for the DUT
and/or ATE resource utilization to maintain a high tester throughput, can help offset
some of the test time cost that is due to idle test resources. The shared-resource
architecture available in today's tester equipment cannot support an on-the-fly
reconfiguration of the pins, periods, timing, levels, patterns and sequencing of the ATE. On
the other hand, embedded or BIST techniques can improve the degree of concurrency
significantly. Embedded techniques, such as the one proposed in Reference 11, benefit
from an increased level of integration due to the mere fact of technology scaling, which
allows multiple embedded test cores to be integrated in critical locations. This, together
with the manufacturing cost, bandwidth limitation and area overhead, all scale favourably
with the technology evolution. This allows parallelism and at-speed tests to be
exploited to a level that could potentially track the trend in technology/manufacturing
evolution.
Before presenting the architecture and its measurement capability in more
detail, a description of some of the most important building blocks that led to the
implementation of such an architecture is given first.
[Figures: direct digital frequency synthesis, in which a w-bit phase accumulator addresses a D-bit sine ROM whose output drives a DAC and an analogue smoothing filter, and a digital resonator alternative whose coefficient multiplication is typically realized by a DAC]
The frequency of the synthesized sine wave is given by
f_DDFS = M fs / 2^w (5.1)
where w is the number of bits at the output of the phase accumulator, M is the number of
complete sine-wave cycles and fs is the sampling frequency. The amplitude precision
that is a function of D, the ROM word width, is then given according to
ΔA_DDFS = A_max / 2^(D+1) (5.2)
The above method requires the use of a DAC, which needs to be tested and
characterized if it is to be used in a BIST. The number of bits required from the DAC is
dictated by the resolution required for the analogue stimulus, which is often multi-bit.
This, in turn, entails a large silicon area, sophisticated design and increased test time,
all of which are not desirable.
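A minimal behavioural sketch of the phase-accumulator structure described above follows; the ideal sine ROM and the parameter values are illustrative assumptions, not a circuit implementation.

```python
import math

def ddfs(M, w, D, n_samples):
    """w-bit phase accumulator incremented by M each clock; the accumulator
    value addresses an ideal D-bit sine ROM. The output completes M cycles
    every 2**w clocks, i.e. f_out = M * fs / 2**w."""
    acc, out = 0, []
    for _ in range(n_samples):
        acc = (acc + M) % (1 << w)                   # phase accumulator
        phase = 2 * math.pi * acc / (1 << w)
        out.append(round(math.sin(phase) * (2 ** (D - 1) - 1)))  # D-bit word
    return out

# M = 4, w = 10: one output period every 2**10 / 4 = 256 samples.
samples = ddfs(M=4, w=10, D=8, n_samples=512)
```

Raising M increases the output frequency for a fixed sampling rate, while D sets the amplitude quantization of the ROM words.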
Figure 5.7 Improved digital resonator with the multiplier replaced with a 1-bit
multiplexer
Figure 5.9 Conceptual multi-tone generation: N sine (or d.c.) sources, each with its own frequency, amplitude and phase, are summed by a multi-bit digital adder and reconstructed by a DAC and low-pass analogue filter
requirement [21]. N is chosen given a certain maximum memory length. The bitstream
is then generated according to a set of criteria such as the signal-to-noise ratio, dynamic
range, amplitude precision and so on.
The practicality of choosing the appropriate bitstream using the minimum hardware
needed, while maintaining a required resolution in terms of amplitude, phase and
spurious-free dynamic range, was analysed in detail in Reference 22. Small changes
in the bitstream can lead to changes as large as 10–40 dB in the quality or resolution
of the signal. As a result, an optimization can be run to achieve the best resolution
for a given number of bits and a given hardware availability.
5.3.4 Multi-tones
Multi-tone signal generation is particularly important for characterizing blocks such
as filters. Multi-tones can reduce test time by stimulating the DUT (also referred to as
circuit under test or CUT) only once with a multitude of signals and then relying on
DSP techniques such as the fast Fourier transform algorithm to extract the magnitude and
phase responses at each individual frequency. More details on analogue filter testing
can be found in Chapter 6. Another important application of multi-tone signals is
in the testing of inter-modulation distortion. This is particularly important in radio
frequency (RF) testing where measures such as the third-order input inter-modulation
product (IIP3), 1-dB compression point and so on, require the use of a minimum of two
tones. The repeatability and accuracy of the results is usually at its best if coherency,
also known as the M/N sampling principle, is maintained as it is under this condition
that maximum frequency resolution per bin is obtained.
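Coherent (M/N) multi-tone generation and FFT-based extraction can be sketched numerically; the record length, bin choices and amplitudes are arbitrary values for the example.

```python
import numpy as np

N, fs = 1024, 1.0e6                      # record length and sampling rate
bins, amps = [11, 23], [0.5, 0.25]       # coherent tones: f_i = M_i * fs / N

t = np.arange(N) / fs
x = sum(a * np.sin(2 * np.pi * (m * fs / N) * t) for m, a in zip(bins, amps))

# With coherent sampling each tone falls exactly in one FFT bin, so its
# magnitude (and phase) is read off directly, with no leakage or windowing.
spectrum = np.abs(np.fft.rfft(x)) / (N / 2)
```

A single stimulus thus yields the response at every tone frequency in one capture, which is the test-time advantage mentioned above.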
Multi-tone signal generation is conceptually illustrated in Figure 5.9, where a
multi-bit adder and a multi-bit DAC are needed for analogue signal reconstruction,
increasing therefore the hardware complexity. However, the bitstream signal
generation method presented in the previous subsection is readily extendible to the
multi-tone case by simply storing a new sequence of bits in the ROM, with the new
bits now corresponding to a software-generated multi-tone rather than a single-tone
signal. No additional hardware (such as multi-bit adders and DACs for analogue
signal reconstruction) is needed; this is another testimony to the advantages
of this signal generation method for BIST.
Figure 5.10 Area overhead and partitioning of the bitstream signal generation
method with (a) analogue stimulus, (b) analogue stimulus using DSP-based
techniques and an explicit filtering operation and (c) digital test
stimulus, relying on the DUT built-in (implicit) filtering operation
Testing in general comprises first sending a known stimulus and then capturing
the resultant waveform of the CUT for further analysis. As discussed previously,
the interface to/from the CUT is preferably in digital form to ease the transfer of
information. The previous sections discussed the reliable generation of on-chip test
stimulus. Signal generation constitutes just one aspect of the testing of analogue and
mixed-signal circuits. This section discusses the other aspect of testing; the analogue
signal capture.
The signal capture of on-chip analogue waveforms underwent an evolution. First,
a simple analogue bus was used to transport this information directly off chip through
analogue pads [23]. Later, an analogue buffer was included on chip to efficiently
drive the pads and interconnect paths external to the chip. This evolution is illustrated
graphically in Figure 5.11. In both the cases above, the information is exported off
chip in analogue form and is then digitized using external equipment. Perhaps a better
way to export analogue information is by digitizing it first. This led to the modification
shown in Figure 5.12, whereby the analogue buffer is replaced with a 1-bit digitizer,
or a simple comparator. Here, too, the digitization is achieved externally, shown in one
possible implementation using the successive approximation register (SAR), and with
external reference voltages feeding the comparator, usually and commonly generated
using an external DAC.
Figure 5.11 Evolution of analogue signal capture: from a simple analogue test bus driven directly off chip to a buffered analogue output
Figure 5.12 Signal capture with focus on the comparator and the digitization
process
Another important evolution to the front-end sampling process is the use of under-
sampling. This becomes essential when the analogue waveform to be captured is very
fast. In general, capturing an analogue signal comprises sampling and holding the
analogue information rst, and then converting this analogue signal into a digital
representation using an ADC. There exist many classes of ADCs, each suitable for
a given application. Whatever the class of choice might be, the front-end sampling
in an ADC has to obey the Nyquist criterion; that is, the sampling of information
having a bandwidth BW has to be done at a high enough sampling rate fs, given
by fs ≥ 2BW. As the input information occupies a higher bandwidth, the sampling rate
has to increase correspondingly, making the design of the ADC equivalently harder,
of an important property of the signal to be captured and that is its periodicity. Any
signal that needs to be captured can be made periodic by repeating the triggering of
the event that causes such an output signal to exist using an externally generated,
Figure 5.14 Multi-pass capture: nodes of the CUT are sampled-and-held and compared against a programmable reference under a digital multiple-pass ADC controller; the quantization level rises with the pass number
accurate and arbitrarily slow clock. Each time the external clock is triggered, it is also
slightly delayed. This periodicity feature in the signal to be captured and incremental
delay in the external trigger give rise to an interesting capture method known as
undersampling, illustrated in Figure 5.13. For that, a slowly running clock (slower
than the minimum required by the Nyquist criterion) is used to capture a waveform,
so that the clock period is slightly offset with respect to the input signal period.
That is, if the clock period is T + ΔT (with ΔT ≪ T) and the input period
is T, then the signal can be captured using a multi-pass approach with an effective
time resolution of ΔT. This method has been demonstrated to be an efficient way of cap-
turing high frequency and broadband signals where the input information bandwidth
can be brought down in frequency, making the transport of this information off chip
easier and less challenging, as was first demonstrated in the implementation of the
integrated on-chip sampler in Reference 24.
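The equivalent-time (undersampling) principle can be sketched numerically; the 1 GHz test signal and the 1 ps clock offset are illustrative values, not those of the cited implementation.

```python
import math

def equivalent_time_capture(signal, T, dT, n_samples):
    """Sample a T-periodic signal with a slow clock of period T + dT: each
    sample lands dT further into the signal period, so the waveform is
    reconstructed with an effective time resolution of dT."""
    return [signal((k * (T + dT)) % T) for k in range(n_samples)]

T, dT = 1e-9, 1e-12                                 # 1 GHz signal, 1 ps step
sig = lambda t: math.sin(2 * math.pi * t / T)
trace = equivalent_time_capture(sig, T, dT, n_samples=1000)
# trace[k] equals sig(k * dT): the 1 ns period unfolds across 1000 slow samples
```

The capture clock here runs at roughly 1 GHz/1.001, yet the reconstructed trace resolves the waveform on a 1 ps grid.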
In order to include the digitization on chip as well, a multi-pass approach
was first introduced in Reference 25, whereby the undersampling approach is still
maintained in the front-end sample-and-hold stage, and then further demonstrated and
improved in Reference 11 with the inclusion of the comparator and reference level
generator on chip. The top-level diagram of the circuit that performs such a function
with the corresponding timing and voltage diagram are shown in Figure 5.14 and
operates as described next.
The programmable reference is first used to generate one d.c. level. The sampled-
and-held voltage of the CUT is then compared to this reference level and quantized
using a 1-bit ADC (or simply a comparator). On the next run through, the d.c. reference
voltage is maintained constant and the clock edge for the sampling operation is moved
by ΔT. The new sampled-and-held information of the CUT is then compared to the
same reference voltage. This sequence is then repeated until a complete cycle of
the input to be captured is covered. Once this is done, the programmable reference
voltage is then incremented to the next step, one least-signicant-bit (LSB) away from
the previous reference level and the whole previous cycle of incrementing T is then
repeated. The above is then repeated until all d.c. reference voltages are covered. This
is referred to as the multi-pass approach. This implies that a time resolution of T
and a voltage resolution of an LSB can be achieved in the time and voltage domains,
respectively. Undersampling, together with a multi-pass approach, combined with an
embedded and reliable d.c. signal generation scheme is now used as a complete and
as an on-chip oscilloscope tool.
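The two nested sweeps, reference level (outer) and sampling instant (inner), can be sketched as follows; the sinusoidal CUT output and the 16 reference levels are assumptions for the example, not the cited hardware.

```python
import math

def multi_pass_capture(signal, T, dT, v_min, v_max, n_levels, n_samples):
    """Multi-pass digitizer: the outer loop steps the d.c. reference one LSB
    at a time; the inner loop sweeps the sampling instant by dT across one
    signal period and records the 1-bit comparator decision. The code at
    each instant is the number of reference levels the sample exceeds."""
    codes = [0] * n_samples
    lsb = (v_max - v_min) / n_levels
    for level in range(n_levels):                   # reference-level sweep
        v_ref = v_min + level * lsb
        for k in range(n_samples):                  # time sweep in dT steps
            if signal((k * dT) % T) > v_ref:
                codes[k] += 1                       # comparator said 'above'
    return codes

sig = lambda t: math.sin(2 * math.pi * t / 1e-6)    # assumed 1 MHz CUT output
codes = multi_pass_capture(sig, T=1e-6, dT=1e-8, v_min=-1.0, v_max=1.0,
                           n_levels=16, n_samples=100)
```

Only a comparator and a programmable d.c. reference are required on chip; the full waveform emerges from repeating the measurement over many passes.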
Figure 5.15 A simple counter method for the measurement of a time interval
Figure 5.16 Pulse-width measurement by charging a capacitor C with a current I and digitizing the resulting voltage with an ADC
Figure 5.17 Pulse stretching: the capacitor is discharged by a scaled current I/n, and a comparator followed by a TMU digitizes the stretched pulse
ADCs; nonetheless, this ADC can be power hungry, and its design could be a tedious
and time-consuming task.
A better approach, which is insensitive to the absolute value of the capacitor, relies
on the concept of charging and then discharging the same capacitor by currents that
are scaled versions of each other. The system [27] is shown in Figure 5.17 and
accomplishes two advantages: (i) it does not rely on the actual capacitor value, since
the capacitor is now only used as a means of storing charge and then discharging it at a
slower rate; and (ii) it performs pulse stretching, which makes the original pulse to
be measured much larger, making the task of quantizing it a lot easier. In this case, a
single threshold comparator (1-bit ADC) can be used to detect the threshold crossing
times. A relatively low-resolution time measurement unit (TMU) can then be used to
digitize the time difference. The TMU can be a simple counter, as explained above
in Section 5.5.1 or one of the other potential TMUs that will be discussed next.
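The capacitor-value independence of the charge/discharge scheme can be checked with a two-line behavioural model; the component values below are arbitrary assumptions for the example.

```python
def stretched_width(t_in, I, n, C):
    """Charge C with current I for t_in (dv/dt = I/C), then discharge with
    I/n: the stretched width (1 + n) * t_in is independent of I and C."""
    v_peak = I * t_in / C                   # voltage reached during the pulse
    t_discharge = v_peak / (I / n / C)      # time to fall back to the threshold
    return t_in + t_discharge

# A 100 ps pulse with n = 1000 stretches to ~100.1 ns: easy for a coarse
# counter-based TMU, regardless of the actual on-chip capacitor value.
w1 = stretched_width(100e-12, 1e-3, 1000, 1e-12)
w2 = stretched_width(100e-12, 1e-3, 1000, 7e-12)    # very different capacitor
```

Algebraically the capacitor and current cancel, which is exactly advantage (i) above; the ratio n alone sets the stretch factor.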
The techniques in References 26 and 27 can become power hungry if a narrow
pulse is to be measured. Trade-offs exist in the choice of the biasing current, I, and
the bit-resolution of the ADC (for a given integration capacitor, C); the larger I, the
lower the ADC resolution required. However, as the pulse width decreases and in
order to maintain the same resolution requirement on the ADC while using the same
capacitor, C, the biasing current and therefore the power dissipation has to increase.
In fact, for very small pulse widths, the differential pair might even fail to respond fast
enough to the changes in the pulse. For that, digital phase-interpolation techniques
offer an alternative to the analogue-based interpolation schemes.
Figure 5.18 A delay line used to generate multi-phase clock edges; it can also be
used to measure clock jitter with a time resolution set by the minimum gate
delay offered by the technology
The operation of such a TDC is analogous to a flash ADC, where the analogue
quantity to be converted into a digital word is a time interval. TDCs operate by
comparing a signal edge to various reference edges, all displaced in time. Typically, these
devices measure the time difference between two edges, often denoted as the START
and STOP edges. The START signal usually initiates the measurement while the
STOP edge terminates it. Given that the delay through each stage is known a priori
(which will require a calibration step), the final state of the delay lines can be read
through a set of DFFs, and this state is directly related to the time interval to be measured.
Usually the use of such delay lines has a limited time dynamic range. Some TDCs
employ time range extension techniques, which rely on counters for a coarse time
measurement and the delay lines for fine-interval digitization. This is identical to the
coarse/fine quantizers in ADCs. Other techniques include pulse stretching [29], pulse
shrinking and time interpolation.
The use of the above devices extends to applications such as laser ranging and
high-energy physics experiments. With the addition of a counter at the output, this
simple circuit can be used to measure the accumulated jitter of a data signal (Data)
with respect to a master clock (CLK), as shown in Figure 5.18.
The above circuit can achieve time resolutions down to the gate delay of the
technology in which it is implemented. To overcome this limitation,
a Vernier delay line (VDL) can be used.
Figure 5.19 A VDL achieving sub-gate time resolution
In the Vernier case, with each stage contributing a delay difference of (τ2 − τ1) and
a total of N delay stages, the time range that can be captured is given
by range = N(τ2 − τ1).
Usually these delays can be implemented using identical gates that are purposely
slightly mismatched. A timing resolution of a few picoseconds can be achieved with this
method, equivalent to a deep sub-gate-delay sampling resolution. VDL samplers
have previously been used to perform time interval measurements [30] and data
recovery [31].
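A behavioural sketch makes the two headline numbers concrete: the sub-gate resolution (τ2 − τ1) and the range N(τ2 − τ1). The stage delays below are arbitrary round numbers of ours; only their 18 ps difference was chosen to match the resolution quoted below for the 0.35-µm prototype.

```python
def vernier_tdc(interval, tau1, tau2, n_stages):
    """Vernier TDC: one edge travels a line with tau1 delay per stage,
    the other a line with tau2 > tau1 per stage, so their gap closes by
    (tau2 - tau1) at every stage.  The DFF outputs count how many
    stages the gap survives, giving sub-gate resolution."""
    res = tau2 - tau1                  # per-stage resolution
    return sum(1 for k in range(1, n_stages + 1) if interval - k * res > 0)

# 100 ps and 118 ps stages -> 18 ps resolution; 8 stages -> 144 ps range
code = vernier_tdc(47e-12, 100e-12, 118e-12, 8)   # 47 ps input
full = vernier_tdc(200e-12, 100e-12, 118e-12, 8)  # beyond range: saturates
```

An input beyond N(τ2 − τ1) simply saturates the code at N, which is why the range-extension counters mentioned earlier are needed.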
When Vernier samplers are used, data is latched at different moments in time,
leading to synchronization issues that must be considered when interfacing the block
with other units. Read-out structures exist, though, allowing for continuous operation
and synchronization of the outgoing data [32]. For the purpose of jitter measurement,
this synchronization block is not needed.
The circuit was indeed used for jitter measurement and implemented [33] in a
standard 0.35-µm CMOS technology, achieving a jitter measurement resolution of
τres = 18 ps. The RMS jitter was measured to be 27 ps and the peak-to-peak jitter was
324 ps. For jitter measurements, the same circuit can be configured with the addition
of the appropriate counters, as shown in Figure 5.19.
Note that, in general, these delay stages are voltage controlled to allow for tuning,
and, when placed in a negative-feedback arrangement known as a delay-locked loop
(DLL), the delay stages become far more robust to noise and jitter thanks to the
feedback nature of the implementation. It is worth mentioning that DLLs now rely
almost exclusively on the linear voltage-controlled delay (VCD) cell introduced by
Maneatis [34]. The linearity of the cell stems from the use of a diode-connected load
in parallel with the traditional load, which extends the linearity range of the delay
cell. The biasing of these cells is also made more robust to supply noise and variations
by the use of a uniform biasing circuit that generates both the N- and P-side biases.
The same biasing is also used for all blocks, so variations affecting one block affect
the others in a uniform manner.
DFT and BIST techniques for analogue and mixed-signal test 159
Figure 5.22 An example of the received eye diagram of a high-speed link, with accept-
able openings shown, both in the voltage and time scales, defining the
mask. Violations of those limits are detected by the EOM suggested in
Reference 41
162 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
ΔV = αΔt e^{t/τ} (5.3)
where α is the conversion factor from the input time difference to the initial voltage at
nodes V1 and V2, Δt is the time difference between the rising edges of the signal and
reference and
τ is the device time constant. By measuring the time t between the moment the
inputs switch and the moment the OR gate switches, Δt can be found.
The previously proposed circuit is compact area-wise, but its use is limited to an
input time range of only a few picoseconds. Its gain is also at the single-digit level.
Cascading might get around the latter problem.
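Equation (5.3) can be inverted to recover Δt from the measured switching time. In the numerical check below, α, τ and the gate threshold are arbitrary illustrative values of ours, not parameters from Reference 42:

```python
import math

ALPHA = 1e9     # V per second: time-to-initial-voltage conversion (assumed)
TAU   = 50e-12  # regeneration time constant (assumed)
V_TH  = 0.5     # OR-gate switching threshold in volts (assumed)

def switch_time(dt):
    """Solve equation (5.3), ALPHA*dt*exp(t/TAU) = V_TH, for the time t
    at which the regenerating voltage difference trips the OR gate."""
    return TAU * math.log(V_TH / (ALPHA * dt))

def recover_dt(t):
    """Invert the relation: dt = (V_TH/ALPHA)*exp(-t/TAU)."""
    return (V_TH / ALPHA) * math.exp(-t / TAU)

t_sw = switch_time(3e-12)   # switching time for a 3 ps input difference
dt   = recover_dt(t_sw)     # round-trips back to the 3 ps input
```

Note the logarithmic relationship: a smaller Δt gives a smaller initial imbalance and therefore a longer time to trip the gate, which is exactly what the measurement exploits.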
A second method proposed for time amplification [43] is shown in Figure 5.24.
The circuit consists of two cross-coupled differential pairs with passive RC loads
attached. Upon arrival of the rising edges of Φ1 and Φ2, the amplifier bias current
is steered around the differential pairs and into the passive loads. This causes the
voltages at the drains of transistors M1 and M2 to be equal at a certain time and those
of M3 and M4 to coincide a short time later. This effectively produces a time interval
proportional to the input time difference, which can then be detected by a voltage
comparator.
The second time amplification method proposed in Reference 43, while consuming
more area and power, works for very large input ranges, extending therefore the input
time dynamic range. Its gain can also be at least an order of magnitude higher, using
only a single stage. The circuit was built in a 0.18-µm CMOS technology and
was experimentally shown to achieve a gain of 200 s/s for an input range of 5300 ps,
giving therefore an output time difference of 160 ns.
Time amplification, when viewed as analogous to the use of PGAs in ADCs,
is the perfect block to precede a TDC: with a front-end time-amplification
stage, a low-resolution TDC can be used to obtain an overall high-resolution
TMU.
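The gain arithmetic behind this pairing is simple enough to verify with an idealized model (our illustration, assuming a perfectly linear amplifier): an 18 ps TDC behind a gain-of-200 front end has an input-referred resolution of 90 fs, so any input is resolved to within 45 fs.

```python
def measure_with_ta(dt, ta_gain, tdc_res):
    """Amplify the input time difference, digitize it with a coarse
    TDC, then refer the code back to the input.  The input-referred
    resolution is tdc_res / ta_gain."""
    amplified = dt * ta_gain              # ideal linear time amplifier
    code = round(amplified / tdc_res)     # low-resolution TDC
    return code * tdc_res / ta_gain       # input-referred estimate

est = measure_with_ta(0.73e-12, 200, 18e-12)   # a 0.73 ps input
# quantization error is bounded by 18 ps / 200 / 2 = 45 fs
```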
Important PLL characteristics include the ability to lock onto or track the reference
clock quickly (hence the tracking or locking time characteristic), as well as their
phase or jitter noise. Testing for these is of paramount importance in today's SoCs.
An embedded technique for the measurement of the jitter transfer function of a
PLL was suggested in Reference 44. The technique relies on one of three methods
where the PLL is excited by a controlled amount of phase jitter from which the loop
dynamics can be measured. These techniques, shown in Figure 5.25, include phase
modulating the input reference, Φi, sinusoidal injection (using a bitstream or a
PDM representation of the input signal) at the input of the low-pass filter, or varying
the divide-by-N counter between N and N + 1.
All three techniques have been verified experimentally and tested on commer-
cial PLLs, allowing one to easily implement these testing techniques for on-chip
characterization of jitter in PLLs.
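What such a measurement extracts can be previewed with a single-pole model (an approximation of ours; real charge-pump PLLs are at least second order): injected phase jitter passes through nearly unattenuated below the loop bandwidth and is strongly rejected above it.

```python
import math

def jitter_transfer_mag(f_mod, f_3db):
    """|H(jf)| of a single-pole PLL jitter-transfer model: a phase
    jitter tone at f_mod is passed below the loop bandwidth f_3db
    and attenuated above it."""
    return 1.0 / math.sqrt(1.0 + (f_mod / f_3db) ** 2)

# sweep a jitter tone across a loop with a 1 MHz bandwidth (assumed)
in_band  = jitter_transfer_mag(10e3, 1e6)    # close to 1.0
out_band = jitter_transfer_mag(100e6, 1e6)   # close to 0.01
```

Sweeping the injected tone frequency and recording the output-to-input jitter amplitude ratio traces out exactly this curve, from which the loop bandwidth and peaking can be read off.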
Given the PLL testing technique presented in Reference 44, it is beneficial to
draw some analogies with the voltage measurement and stimulus-generation
schemes presented earlier in Section 5.3.5. A pulse-density-modulated signal is
injected into the PLL. Owing to the low-pass filter inherent in PLLs, the testing
or stimulating of such systems, as with the testing of ADCs, can be achieved in
a purely digital manner without the need for an additional low-pass filter. The
resulting silicon area savings and reduced circuit complexity are an added bonus
of the proposed PLL BIST; stimulating the PLL is, here too, done using only
a digital interface [45]. Another analogy can be drawn with respect to voltage-
domain testing: while in analogue stimulus generation it is the amplitude that is
modulated, in the case of a PLL it is the phases or clock edges, as shown
in Figure 5.26.
Figure 5.26 Analogy between stimulating an ADC and a PLL with a bitstream,
for testing purposes
Calibrating time measurement circuits requires the generation of edges with known
time intervals. However, as the desired calibration resolution
becomes smaller than a few picoseconds, such a task becomes more difficult: on chip,
mismatches and jitter put a lower bound on reliably achievable timing generation,
while off chip, edge and pulse generators can produce such intervals accurately but at
additional cost. Calibration methods and their associated trade-offs are therefore
important and will be the subject of Chapter 11. Here, we restrict the discussion to
the calibration of time measurement instruments and, in particular, to the flip-flop
calibration of what is known as the sampling-offset TDC, or SOTDC for short [46]. A
sampling-offset TDC is a type of flash converter that relies solely on flip-flop transistor
mismatch, instead of separate delay buffers, to obtain fine temporal resolution. While
a rather specific type of TDC, it is probably one of the more challenging types to
calibrate because of the very fine temporal resolutions it can achieve, which makes
measuring and calibrating such small time differences a difficult task. In fact, it was
shown in Reference 47 that mismatches due to process variation can produce temporal
offsets from 30 ps down to 2 ps, depending on the implementation technology and the
architecture chosen for the flip-flop. Those flip-flops therefore need to be calibrated
first before they can be used as TMUs.
In Reference 47, an indirect calibration technique was proposed that involves
the use of two uncorrelated signals (practically, two square waves running at
slightly offset frequencies) to find the relative offsets of the flip-flops used in the
SOTDC.
Finding the absolute value of the offset, which is statistically the mean of a
distribution of offsets, requires a direct calibration technique. Such a technique
was introduced in Reference 48. It involves sending two edges, with a tightly
controlled time difference ΔT, to the flip-flop to be calibrated, and repeating
the measurement many times to obtain a (normal, or Gaussian) distribution. ΔT is then
changed and the same experiment is repeated. A counter or accumulator is then used to
find the cumulative density function (CDF) of the distributions. The point on the CDF
that corresponds to a probability of exactly 50 per cent is the mean of the distribution,
which is the actual absolute offset of the flip-flop. Although experimentally verified,
this scheme was later improved in Reference 48 to get around
the problem of having to generate ΔT tightly (which is more often done off chip for
resolution purposes, at the expense of increased cost, as discussed earlier). The basic
idea involves intentionally blowing up the noise distribution by externally injecting
temporal noise into the flip-flop with a standard deviation an order of magnitude
(or more) larger than the offset standard deviation to be measured.
The standard deviation proper to the flip-flop alone is somewhat lost in
the new distribution, but the mean value becomes much easier to measure as
the need for generating fine ΔT steps is eliminated. With this method, temporal
offsets on the order of 10 ps were successfully measured in a prototype
implemented in a 0.18-µm CMOS technology.
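The whole procedure, including the noise-boosting trick, can be simulated behaviourally. Everything below uses assumed numbers of ours (a 10 ps offset, 100 ps of injected noise, a coarse 25 ps ΔT grid); the point it illustrates is that the 50 per cent crossing of the measured CDF still lands on the offset even though the ΔT steps are far coarser than the offset itself.

```python
import random

def p_late(delta_t, offset, sigma, trials, rng):
    """Fraction of trials in which the flip-flop resolves 'late': it
    outputs 1 when delta_t plus Gaussian timing noise exceeds its
    temporal offset.  Swept over delta_t, this traces out the CDF."""
    hits = sum(1 for _ in range(trials)
               if delta_t + rng.gauss(0.0, sigma) > offset)
    return hits / trials

rng = random.Random(1)
OFFSET, SIGMA = 10e-12, 100e-12      # injected noise >> offset to measure
grid = [k * 25e-12 for k in range(-8, 9)]           # coarse dT steps
probs = [p_late(dt, OFFSET, SIGMA, 4000, rng) for dt in grid]

# linear interpolation of the CDF at P = 0.5 estimates the mean offset
i = next(k for k in range(1, len(grid)) if probs[k - 1] < 0.5 <= probs[k])
est = grid[i - 1] + (grid[i] - grid[i - 1]) \
      * (0.5 - probs[i - 1]) / (probs[i] - probs[i - 1])
```

With 4000 trials per ΔT point the interpolated estimate lands within a few picoseconds of the true 10 ps offset, despite the 25 ps step size.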
Some of the BIST techniques highlighted in previous sections have been
incorporated into a single system that was used to perform a full set of tests,
thereby emulating the function of a mixed-signal tester on chip. The advantages of
the system proposed in Reference 11 include fully digital input/output access,
a coherent system for signal generation and capture, fully programmable d.c. and
a.c. subsystems, and a single-comparator, or 1-bit, ADC which, with an on-chip DLL,
can perform multi-pass capture digitization. The proposed system [11] was shown
earlier in Figure 5.3. This section is dedicated to showing some of its versatile
applications that were indeed built, tested and characterized.
Figure 5.28 Emphasis on the digital-in digital-out interface of the proposed BIST
From a signal-integrity perspective, a digital interface, at both the input and output
terminals, is far more immune to noise and signal degradation caused by the
interconnect paths.
Last but not least, its flexibility from an area-overhead perspective is what adds to
its BIST value. As highlighted in Figure 5.29, the proposed test core can be greatly
reduced if area is of paramount importance. The a.c. and d.c. memory scan chains
Figure 5.29 Possibility of a reduced on-chip core, and therefore reduced area, while
maintaining a digital-in digital-out interface
can be moved off chip using external equipment. Similarly, the memory that holds the
digital logic and performs the back-end DSP capabilities can also be external, both while
still maintaining a digital-in digital-out interface. In this case, the abbreviated mixed-signal
test core consists of simple digital buffers (to restore the rise and fall times of the
digital bitstream), the crude low-order d.c. low-pass filter, and the single comparator
performing the digitization in a multi-pass approach.
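One way to picture the multi-pass, single-comparator capture (a behavioural reading of ours, not the exact circuit of Reference 11): because the stimulus is coherent and repetitive, the same signal point can be revisited on every pass, so a binary search on the comparator threshold recovers an n-bit sample from n passes of a 1-bit ADC.

```python
def multipass_sample(compare, n_bits, v_min=0.0, v_max=1.0):
    """Digitize one point of a repetitive signal with a single
    comparator: each pass halves the threshold search interval
    (successive approximation spread over n_bits passes)."""
    lo, hi = v_min, v_max
    for _ in range(n_bits):
        mid = 0.5 * (lo + hi)
        if compare(mid):          # comparator: is the signal above mid?
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)        # midpoint of the final interval

v_point = 0.618                   # the (unknown) value being captured
est = multipass_sample(lambda th: v_point > th, 10)   # 10 passes -> 10 bits
```

The cost is test time (one pass per bit) rather than silicon, which is exactly the trade the reduced core of Figure 5.29 is making.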
5.7.5 Crosstalk
Another application for digital communication in deep-submicron tech-
nologies concerns crosstalk, which is becoming more pronounced as technologies scale
down, speeds go up and interconnect traces become longer and noisier. The increased
packing density inevitably introduces lines in close proximity to one
another, where quiet lines near aggressor lines are transformed into what
are known as victim lines. This crosstalk effect was indeed captured using the versatile
system proposed above [49].
An earlier version was also implemented in Reference 50. The embedded circuit
was also used to measure digital crosstalk on a victim line due to aggressor lines
switching. In this implementation, only the sample-and-hold function was placed on
chip, together with a VCD line that was externally controlled with a varying d.c.
voltage to generate the delayed clock system. Buffers were then used to export the
d.c. analogue sampled-and-held voltage, and the signal was reconstructed externally.
The circuit relies on external equipment for the most part (which is not always
undesirable; in fact, it is often preferred in a testing environment for more control
and tuning). Nonetheless, the system was among the earliest to measure interconnect
crosstalk in an embedded fashion and therefore deserves attention and credit.
The sampled-and-held supply voltage is used to control the frequency of oscillation
of the VCO. This frequency is then measured
using a high-frequency counter and exported off chip in a digital manner. Calibration
is necessary in this implementation in order to capture the voltage–frequency–digital
bitstream relationship.
The system in Reference 51 was implemented in a 0.13-µm CMOS technol-
ogy and experimentally verified to capture both the deterministic nature of the noise
(largely captured using undersampling) and the stationary noise in a 4-Gb/s
serial-link system. The stationary noise was captured using the autocorrelation func-
tion and was largely due, and correlated, to the clock in the system. The power
spectral density revealed the highest noise contribution at 200 MHz, agreeing with the
system clock. Other noise contributions in the PSD occurred at frequencies
directly related to switching activity in the system. The proposed system
in Reference 51 was thus capable of capturing both the deterministic (also referred to
as periodic) and stationary properties of the supply noise in a Gb/s serial-link system.
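The autocorrelation/PSD step can be reproduced in miniature. The sketch below uses illustrative numbers of ours (a 200 MHz tone mimicking the system clock, sampled at 1.6 GS/s with small white noise) and locates the dominant supply-noise tone exactly as described:

```python
import cmath, math, random

FS, N = 1.6e9, 64                       # sample rate and record length
rng = random.Random(0)
# synthetic supply noise: 200 MHz clock tone plus white noise
x = [math.sin(2 * math.pi * 200e6 * k / FS) + 0.1 * rng.gauss(0, 1)
     for k in range(N)]

def periodogram(samples):
    """PSD estimate |X[b]|^2 / N -- by the Wiener-Khinchin theorem,
    equivalently the DFT of the autocorrelation function."""
    n = len(samples)
    return [abs(sum(samples[k] * cmath.exp(-2j * math.pi * b * k / n)
                    for k in range(n))) ** 2 / n
            for b in range(n // 2)]

psd = periodogram(x)
peak_hz = max(range(len(psd)), key=psd.__getitem__) * FS / N
```

The PSD peak falls in the 200 MHz bin, the miniature analogue of the system-clock contribution reported in Reference 51.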
Also recently, an on-chip system to characterize substrate integrity beyond 1 GHz
was implemented in a 0.13-µm CMOS technology [52] and successfully tested. The
relevance of this paper is, on one hand, its circuit implementation for measuring sub-
strate integrity, which confirms the need for embedded approaches. On the other hand,
the paper's conclusion confirms that, in an SoC, integrity issues have to be studied and
cannot be ignored, especially beyond 1 GHz of operational speed.
d.c. bias of the diode. Calibration is also made possible to characterize and verify the
correct functionality of the board level test path.
Additional block diagrams for other RF testing functions can be found in more
detail in Reference 53. They all fall into the category of RF-to-d.c. or RF-to-analogue
testing, whereby the high-frequency signals are converted to low-frequency or d.c.
signals, which are then captured with more ease and higher accuracy.
If the cost of a component is to be brought down to track Moore's law, its testing cost
has to go down as well. While most recent tools are aimed mainly at characterization
and device functional testing, more needs to be done about production testing. One
important criterion in production testing is the ability to calibrate all devices using
simple calibration techniques, with as little test-time overhead as possible, to be
a production-worthy solution. It is therefore important to highlight some of the latest
test concerns and techniques that have emerged in recent years, mainly to reduce
overall test time and cost.
Adaptive test control and collection and test-floor statistical process control are
now emerging topics that are believed to decrease the overall test time by inves-
tigating the effect of gathering statistical parameters about the die, wafer and lot, and
feeding them back to a test-control section through some interactive interface. As
more parts are tested, the variations in the parts become better under-
stood, allowing the test controller to enable or disable tests and to re-order them,
for example, allowing the tests that catch the defects to be run first [54]. This has the
potential effect of centring the distribution of the devices' performance more tightly
around its mean; in other words, obtaining test results with less variance or standard
deviation. Once this is achieved, the remaining devices in the production line can be
scanned and binned more quickly. However, this solution does not address the issue
of mean shifting that could happen if there is a sudden change in the environmental
set-up. Also, the time it takes to gather a statistically valid set of data that works more
or less globally is not yet defined. This is an important criterion, since having a set that
works for only a small percentage of the devices to be tested is not an economically
feasible solution. In other words, the time overhead introduced by the proposed method
should not have a detrimental effect on the overall test time; otherwise the
method is not justified.
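A minimal sketch of the fail-fast reordering idea (test names and counts below are invented for illustration): rank tests by their observed fail rate, gathered from the die/wafer/lot statistics fed back to the test controller, so the tests most likely to catch defects run first.

```python
def reorder_tests(history):
    """Adaptive test ordering: sort tests by observed fail rate,
    highest first, so failing parts are rejected as early as
    possible and average test time per failing part drops."""
    rate = lambda name: history[name]['fails'] / history[name]['runs']
    return sorted(history, key=rate, reverse=True)

history = {                      # fictitious gathered statistics
    'gain':    {'runs': 1000, 'fails': 5},
    'leakage': {'runs': 1000, 'fails': 40},
    'jitter':  {'runs': 1000, 'fails': 90},
}
order = reorder_tests(history)   # ['jitter', 'leakage', 'gain']
```

In a real flow the history would be updated continuously, so the ordering adapts as the process drifts; that updating is exactly where the mean-shift concern raised above applies.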
A design-for-manufacturability technique based on a manufacturable-by-
construction design was also recently proposed in Reference 55. The idea
is specifically intended for the nanometre era and puts forward the concept of incor-
porating accurate physical and layout models of a particular process as part of the
computer-aided design tool used to simulate the system. Such models are then con-
tinuously and dynamically updated based on the yield losses. The concept was
experimentally verified on five different SoCs implemented in a 0.13-µm CMOS
process, including a baseband cell phone, a micro-controller and a graphics chip.
Experimental results show a yield improvement varying between 4 and 12 per cent,
depending on the nature of the system implemented on the chip. The yield improve-
ment was measured with respect to previous revisions of the same ICs implemented
using traditional methods.
Recent efforts in the ATE manufacturing industry also entail the consideration of
what is known as an open architecture with modular instruments, intended to
standardize test platforms and increase their lifetime; this resulted in the
Semiconductor Test Consortium formed between Intel and the Japanese Advantest
Corp. [56].
Finally, the testing of multiple Gb/s serial links and buses has been the focus
of recent panel discussions [57]. Some of the questions that have been addressed
include the appropriateness of DfT/BIST for such tests, whether such measures
are, or will be, the bottleneck for analogue tests rather than the RF front-end in
mobile/wireless computing, and, finally, whether it is even necessary to consider test-
ing for jitter, noise and BER from a cost and economics perspective in a production
environment.
5.9 Conclusions
In summary, it is clear that test solutions and design-for-test techniques are important,
but where the test solutions are implemented and how they are partitioned, especially
in an SoC era, have an effect on the overall test cost. Devising the optimum test
strategy that is affordable, achieves a high yield and minimizes the time to market is
a difficult task.
Test solutions and platforms can be partitioned anywhere on the chip, the board
or as part of the requirements of the ATE. Each solution will entail responsibility to
different people (designer, test engineer or ATE manufacturer), different calibration
techniques and different test instruments, all of which directly impact the test cost
and, therefore, the overall part cost to the consumer. This chapter focused mainly on
the latest developments in DfT and BIST techniques and the embedded test struc-
tures of analogue and mixed-signal communication systems for the purpose of design
validation and characterization.
Emerging ideas and the latest efforts to decrease the cost of test include adaptive
testing, where environmental factors are accounted for and fed back to the testing
algorithm. This could potentially result in more economical long-term production
testing, but is yet to be verified and justified. At the ATE level, ideas such as concurrent
test and open architecture are also being considered. Despite the differences
in views and the abundance of suggested test solutions, testing contin-
ues to be an active research area. A great number of mixed-signal test solutions will
have to continue to emerge to respond to the constantly pressing need to ship
better, faster and more economically feasible (cheaper) devices to electronics
consumers.
5.10 References
11 Hafed, M.M., Abaskharoun, N., Roberts, G.W.: A 4 GHz effective sample rate
integrated test core for analog and mixed-signal circuits, IEEE Journal of Solid
State Circuits, 2002; 37 (4): 499–514
12 Zimmermann, K.F.: SiPROBE – a new technology for wafer probing,
Proceedings of IEEE International Test Conference, Washington, DC, 1995,
pp. 106–12
13 Tierney, J., Rader, C.M., Gold, B.: A digital frequency synthesizer, IEEE
Transactions on Audio and Electroacoustics, 1971; 19: 48–57
14 Bruton, L.: Low sensitivity digital ladder filters, IEEE Transactions on Circuits
and Systems, 1975; 22 (3): 168–76
15 Lu, A.K., Roberts, G.W., Johns, D.A.: High-quality analog oscillator using
oversampling D/A conversion techniques, IEEE Transactions on Circuits and
Systems II: Analog and Digital Signal Processing, 1994; 41 (7): 437–44
16 Toner, M.F., Roberts, G.W.: Towards built-in-self-test for SNR testing of a
mixed-signal IC, Proceedings of IEEE International Symposium on Circuits and
Systems, Chicago, IL, 1993, pp. 1599–602
17 Lu, A.K., Roberts, G.W.: An analog multi-tone signal generation for built-
in-self-test applications, Proceedings of IEEE International Test Conference,
Washington, DC, 1994, pp. 650–9
18 Haurie, X., Roberts, G.W.: Arbitrary precision signal generation for bandlim-
ited mixed-signal testing, Proceedings of IEEE International Test Conference,
Washington, DC, 1995, pp. 78–86
19 Veillette, B., Roberts, G.W.: High-frequency signal generation using delta-
sigma modulation techniques, Proceedings of IEEE International Symposium
on Circuits and Systems, Seattle, Washington, 1995, pp. 637–40
20 Hawrysh, E.M., Roberts, G.W.: An integration of memory-based analog signal
generation into current DFT architectures, Proceedings of IEEE International
Test Conference, Washington, DC, 1996, pp. 528–37
21 Burns, M., Roberts, G.W.: An Introduction to Mixed-Signal IC Test and
Measurement (Oxford University Press, New York, 2001)
22 Dufort, B., Roberts, G.W.: On-chip signal generation for mixed-signal built-in
self test, IEEE Journal of Solid State Circuits, 1999; 34 (3): 318–30
23 Parker, K.P., McDermid, J.E., Oresjo, S.: Structure and metrology for an analog
testability bus, Proceedings of IEEE International Test Conference, Baltimore,
MD, 1993, pp. 309–22
24 Larsson, P., Svensson, S.: Measuring high-bandwidth signals in CMOS circuits,
Electronics Letters, 1993; 29 (20): 1761–2
25 Hajjar, A., Roberts, G.W.: A high speed and area efficient on-chip analog wave-
form extractor, Proceedings of IEEE International Test Conference, Washington,
DC, 1998, pp. 688–97
26 Stevens, A.E., van Berg, R., van der Spiegel, J., Williams, H.H.: A time-to-
voltage converter and analog memory for colliding beam detectors, IEEE Journal
of Solid State Circuits, 1989; 24 (6): 1748–52
27 Sumner, R.L.: Apparatus and Method for Measuring Time Intervals With Very
High Resolution, US Patent 6,137,749, 2000
28 Rahkonen, T.E., Kostamovaara, J.T.: The use of stabilized CMOS delay lines
for the digitization of short time intervals, IEEE Journal of Solid State Circuits,
1994; 28 (8): 887–94
29 Chen, P., Liu, S.: A cyclic CMOS time-to-digital converter with deep
sub-nanosecond resolution, Proceedings of IEEE Custom Integrated Circuits
Conference, San Diego, CA, 1999, pp. 605–8
30 Dudek, P., Szczepanski, S., Hatfield, J.: A CMOS high resolution time-to-digital
converter utilising a Vernier delay line, IEEE Journal of Solid State Circuits,
2000; 35 (2): 240–7
31 Kang, J., Liu, W., Cavin III, R.K.: A CMOS high-speed data recovery cir-
cuit using the matched delay sampling technique, IEEE Journal of Solid State
Circuits, 1997; 32 (10): 1588–96
32 Andreani, P., Bigongiari, F., Roncella, R., Saletti, R., Terreni, P., Bigongiari,
A., Lippi, M.: Multihit multichannel time-to-digital conversion with ±1%
differential nonlinearity and near optimal time resolution, IEEE Journal of Solid
State Circuits, 1998; 33 (4): 650–6
33 Abaskharoun, N., Roberts, G.W.: Circuits for on-chip sub-nanosecond signal
capture and characterization, Proceedings of IEEE Custom Integrated Circuits
Conference, San Diego, CA, 2001, pp. 251–4
34 Maneatis, J.G.: Low-jitter process independent DLL and PLL based on self-
biased techniques, IEEE Journal of Solid State Circuits, 1996; 31 (11):
1723–32
35 Chan, A.H., Roberts, G.W.: A deep sub-micron timing measurement circuit
using a single-stage Vernier delay line, Proceedings of IEEE Custom Integrated
Circuits Conference, Orlando, FL, 2002, pp. 77–80
36 Takamiya, M., Inohara, H., Mizuno, M.: On-chip jitter-spectrum-analyzer
for high-speed digital designs, Proceedings of IEEE International Solid State
Circuits Conference, San Francisco, CA, 2004, pp. 350–532
37 Yamaguchi, T., Ishida, M., Soma, M., Ichiyama, K., Christian, K., Oshawa,
K., Sugai, M.: A real time jitter measurement board for high-performance
computer and communication systems, Proceedings of IEEE International Test
Conference, Charlotte, NC, 2004, pp. 77–84
38 Lin, H., Taylor, K., Chong, A., Chan, E., Soma, M., Haggag, H., Huard, J.,
Braat, J.: CMOS built-in test architecture for high-speed jitter measurement
technique, Proceedings of IEEE International Test Conference, Charlotte, NC,
2003, pp. 67–76
39 Taylor, K., Nelson, B., Chong, A., Nguyen, H., Lin, H., Soma, M., Haggag,
H., Huard, J., Braatz, J.: Experimental results for high-speed jitter measurement
technique, Proceedings of IEEE International Test Conference, Charlotte, NC,
2004, pp. 85–94
40 Ishida, M., Ichiyama, K., Yamaguchi, T., Soma, M., Suda, M., Okayasu,
T., Watanabe, D., Yamamoto, K.: Programmable on-chip picosecond jitter-
measurement circuit without a reference-clock input, Proceedings of IEEE
International Solid-State Circuits Conference, San Francisco, CA, 2005,
pp. 512–4
41 Analui, B., Rylyakov, A., Rylov, S., Hajimiri, A.: A 10 Gb/s eye-opening mon-
itor in 0.13-µm CMOS, Proceedings of IEEE International Solid-State Circuits
Conference, San Francisco, CA, 2005, pp. 332–4
42 Abas, A.M., Bystrov, A., Kinniment, D.J., Maevsky, O.V., Russell, G.,
Yakovlev, A.V.: Time difference amplifier, Electronics Letters, 2002; 38 (23):
1437–8
43 Oulmane, M., Roberts, G.W.: A CMOS time-amplifier for femto-second reso-
lution timing measurement, Proceedings of IEEE International Symposium on
Circuits and Systems, London, 2004, pp. 509–12
44 Veillette, B., Roberts, G.W.: On-chip measurement of the jitter transfer function
of charge pump phase-locked loops, IEEE Journal of Solid State Circuits, 1998;
33 (3): 483–91
45 Veillette, B., Roberts, G.W.: Stimulus generation for built-in-self-test of charge-
pump phase-locked-loops, Proceedings of IEEE International Test Conference,
Washington, DC, 1997, pp. 397–400
46 Gutnik, V.: Analysis and Characterization of Random Skew and Jitter in a Novel
Clock Network, Ph.D. dissertation, Massachusetts Institute of Technology, USA,
2000
47 Gutnik, V., Chandrakasan, A.: On-chip time measurement, Proceedings of IEEE
Symposium on VLSI Circuits, Orlando, FL, 2000, pp. 52–3
48 Levine, P., Roberts, G.W.: A high-resolution flash time-to-digital converter
and calibration scheme, Proceedings of IEEE International Test Conference,
Charlotte, NC, 2004, pp. 1148–57
49 Hafed, M., Roberts, G.W.: A 5-channel, variable resolution, 10-GHz sam-
pling rate coherent tester/oscilloscope IC and associated test vehicles, Pro-
ceedings of IEEE Custom Integrated Circuits Conference, San Jose, CA, 2003,
pp. 621–4
50 Delmas-Bendhia, S., Caignet, F., Sicard, E., Roca, M.: On-chip sampling in
CMOS integrated circuits, IEEE Transactions on Electromagnetic Compatibility,
1999; 41 (4): 403–6
51 Alon, E., Stojanovic, V., Horowitz, M.: Circuits and techniques for high-
resolution measurement of on-chip power supply noise, IEEE Journal of Solid
State Circuits, 2005; 40 (4): 820–8
52 Nagata, M., Fukazawa, M., Hamanishi, N., Shiochi, M., Iida, T., Watanabe, J.,
Murasaka, M., Iwata, A.: Substrate integrity beyond 1 GHz, Proceedings of
IEEE International Solid-State Circuits Conference, San Francisco, CA, 2005,
pp. 266–8
53 Ferrario, J., Wolf, R., Moss, S., Slamani, M.: A low-cost test solution for wireless
phone RFICs, IEEE Communications Magazine, 2003; 41 (9): 82–8
54 Rehani, M., Abercrombie, D., Madge, R., Teisher, J., Saw, J.: ATE data collection
– a comprehensive requirements proposal to maximize ROI of test, Proceedings
of IEEE International Test Conference, Charlotte, NC, 2004, pp. 181–9
55 Strojwas, A., Kibarian, J.: Design for manufacturability in the nanometer era:
system implementation and silicon results, Proceedings of IEEE International
Solid-State Circuits Conference, San Francisco, CA, 2005, pp. 268–9
6.1 Introduction
Test and diagnosis techniques for digital systems have been developed for over three
decades. Advances in technology, increasing integration and mixed-signal designs
demand similar techniques for testing analogue circuitry. Design for testability (DfT)
for analogue circuits is one of the most challenging jobs in mixed-signal system on
chip design owing to the sensitivity of circuit performance with respect to component
variations and process technologies. A large portion of test development time and
total test time is spent on analogue circuits because of the broad specifications and
the strong dependency of circuit performance on circuit components. To ensure that a
design is testable is an even more formidable task, since testability is not well defined
within the context of analogue circuits. Testing of analogue circuits based on circuit
functionality and specication under typical operational conditions may result in poor
fault coverage, long testing times and the requirement for dedicated test equipment.
Furthermore, the small number of input/output (I/O) pins of an analogue integrated
circuit compared with that of digital circuits, the complexity due to continuous signal
values in the time domain and the inherent interaction between various circuit param-
eters make it almost impossible to design an efcient DfT for functional verication
and diagnosis. Therefore, an efcient DfT procedure is required that uses a single
signal as input or self-generated input signal, has access to several internal nodes,
and has an output that contains sufcient information about the circuit under test.
A number of test methods can be found in the literature and various corresponding
DfT techniques have been proposed [1–15]. DfT methods can generally be divided
into two categories. The first seeks to enhance the controllability and observability
of the internal nodes of a circuit under test in order to utilize only the normal circuit
input and output nodes to test the circuit. The second is to convert the function of the
circuit under test in order to generate an output signal that reflects the performance
of the circuit and reveals any malfunction. The most promising DfT methods are
bypassing [2–5], multiplexing [6–9] and oscillation-based test (OBT) [10–15]. These
methods, though quite general, are particularly useful for the testing of analogue
filters.
Analogue filters are necessary and possibly among the most crucial components
of mixed-signal system designs and have been widely used in many
important areas such as video signal processing, communications systems, computer
systems, telephone circuitry, broadcasting systems, and control and instrumentation
systems. Research has been very active in developing new high-performance
integrated analogue filters [16–34]. Indeed, several types of filter have been
proposed [16, 17]. However, the most popular analogue filters in practice are
continuous-time active-RC filters [18–23], OTA-C filters [23–30] and sampled-data
switched-capacitor (SC) filters [31–34]. Active-RC and SC filters are well
known and have been around for a long time; OTA-C filters are a newer type of
filter. They use only an operational transconductance amplifier (OTA) and capacitor
(C) and are very suitable for high-frequency applications. OTA-C filters were proposed
in the mid-1980s and have become the most popular filters in many practical
applications.
In this chapter we are concerned with DfT of analogue filters. Three popular
DfT techniques are introduced, namely, bypassing, multiplexing and OBT. Applications
of these DfT techniques in active-RC, OTA-C and SC filters are discussed
in detail. Different DfT and built-in self-test (BIST) methods for low- and high-order
active-RC, OTA-C and SC filters are presented. Throughout the chapter, many
DfT design choices are given for particular types of filter. Although many typical
filter structures are illustrated, in most cases these DfT methods are also applicable
to other filter architectures. Two things are worth noting here. One is that
the popular MOSFET-C filters [17, 22, 23] in the literature may be treated in the
same way as active-RC filters from the DfT viewpoint, since they derive directly
from active-RC filters with the resistors replaced by tuneable MOSFETs,
and as a result we do not treat them separately. The other is that single-ended filter
structures are used for ease of understanding in the chapter. The reader should,
however, realize that the methods are also suitable for fully differential/balanced
structures [5, 16, 17, 23].
This chapter is organized in the following way. Section 6.2 is concerned with the
bypassing DfT method, including bandwidth broadening and switched-operational-amplifier
(opamp) techniques as well as their application to active-RC and SC filters. The
multiplexing test approach is discussed in Section 6.3, with examples of active-RC
and OTA-C filters being given. Section 6.4 addresses the OBT strategy for active-RC,
OTA-C and SC filters, with many test cases being presented. Section 6.5 discusses
the testing of high-order analogue filters using the bypassing, multiplexing and
oscillation-based DfT methods; particular attention is given to high-order OTA-C filters.
Finally, a summary of the chapter is given in Section 6.6.
Design-for-testability of analogue filters 181
Two basic design approaches are commonly used in DfT methodologies for fault
detection and diagnosis in analogue integrated filters. The first approach is based
on splitting the filter under test (FUT) into a few isolated parts, injecting external
test stimuli and taking outputs by multiplexing. The second approach is an I/O DfT
technique based on the partitioning of the FUT into its filter stages. Each filter stage is
then separately tested by bypassing the other stages. Bypassing a stage can be realized
either by bypassing the capacitors (bandwidth broadening) of the stage using MOS
switches or by using a duplicate opamp structure at the interface between two stages.
The multiplexing approach will be discussed in Section 6.3. This section addresses
the bypassing approach.
[Figure 6.1 Transformation of a single capacitor branch using PMOS and NMOS switches]

[Figure 6.2 Transformations of the series RC branch: (a) switch in parallel with the capacitor; (b) switch in series with the capacitor]
An ideal resistor has an unlimited bandwidth and does not need to be modified in
test mode.
The single capacitor branch transformation requires two MOS switches as shown
in Figure 6.1. The impedance in the normal mode ZN is approximately the same as
the original impedance without MOS switches only if the on-resistance of the NMOS
switch RS is small enough so that the zero created does not affect the frequency
response of the stage. The size of the PMOS switch does not matter since its on-
resistance only affects the gain in the test mode.
Two possible transformations of the series RC branch are as shown in Figure 6.2.
A switch in parallel with the capacitor makes the branch resistive in the test mode
or a switch in series with the capacitor disconnects the branch in the test mode as
shown in Figures 6.2(a) and (b) respectively. To avoid significant perturbations of the
original pole-zero locations in the series switch configuration, the on-resistance of
the NMOS switch must be much less than the series resistance of the branch.
The parallel RC branch may be considered as a combination of a single resistor
branch and a single capacitor branch. The parallel RC branch requires only one switch.
The switch is either in series with the capacitor in order to disconnect it or in parallel
with the capacitor to short it out in test mode, as shown in Figures 6.3(a) and (b),
respectively. To reduce the effect on normal filter performance, the on-resistance of the
NMOS switch must be small and the off-resistance of the PMOS switch must be large.
The modified three-opamp, second-order active-RC filter is shown in Figure 6.4.
The modification requires only three extra MOS switches to put each stage of the
FUT into expanded-bandwidth operation in test mode.
[Figure 6.3 Transformations of the parallel RC branch: (a) switch in series with the capacitor; (b) switch in parallel with the capacitor]

[Figure 6.4 Modified three-opamp second-order active-RC filter with test control switches T1 and T2]
The test methodology is very simple. The FUT is first tested in normal mode by
setting the control switches T1 = T2 = high level. If the FUT fails, the test mode is
activated and all stages except the stage under test are transformed to simple gain
stages, with all capacitors disconnected by setting the control switches at a low level.
Thus, the input signal can pass through preceding inverting amplifier stages to the
input of the stage under test and the output signal of the stage under test can pass
through succeeding inverting amplifier stages to the output of the filter, so that any
individual stage can be tested from the input and output of the filter.
To isolate the faulty stage(s), one stage is tested at a time until all stages are
tested. The input test waveforms depend upon the overall filter topology and the transfer
functions of the stages. A circuit simulator provides the expected output waveforms, gain
and phase. Given a filter of n stages, n + 2 simulations are required per fault. The
simulated and measured data are interpreted to identify and isolate the faults. These
data should include signals as functions of time, magnitude and phase responses,
Fourier spectra and d.c. bias conditions.
6.2.1.2 SC filters
The bandwidth broadening DfT methodology can be extended to SC filters using a
timing strategy [3]. The timing waveform will convert each stage of the filter into
a simple gain stage without any extra MOS switches.

[Figure 6.5 Basic lowpass single-stage SC filter with capacitors C1, C2 and C4 and two-phase clocks φ1, φ2]

MOS switches are already
included in the basic SC resistor realizations. The requirement for test signal propagation
through the SC structure is thus established with the ON/OFF sequence of
these built-in switches. The output signal may not be an exact duplicate of the input
but still contains most of the information in the input. The output voltage will be
scaled by the inexact gain of the stages or subsystems of the filter. Hence, the timing
strategy permits full control and observation of I/O signals from the input of the first
stage to the output of the last stage of the filter. A timing methodology for signal
propagation not only accounts for the test requirements, but also considers the
proper operation of SC filters. The following combinations are needed to produce
the test timing signals:
1. The master clock, which is the OR combination of two non-overlapping clocks,
φ1 and φ2.
2. The test enable control signal for each stage.
3. The phase combinations of the master clock and test enable signal.
The clock distribution into the SC structures is needed to permit the selection of
normal mode or test mode of operation.
The basic lowpass single-stage SC filter [31] is given in Figure 6.5.
In the test mode, the path for the input signal to the output can be created such
that the switches in series with capacitors are closed and the switches used to ground
any capacitor are opened. Let T be the test control signal, which remains high during
the test mode and low in the normal mode. The proper switch control waveforms can
be defined as
for signal switches:

    φ1S = T + φ1
    φ2S = T + φ2        (6.1)

and for grounding switches:

    φ1O = T̄ · φ1
    φ2O = T̄ · φ2        (6.2)

where T̄ denotes the logical complement of T.
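Equations (6.1) and (6.2) are simple Boolean gating functions, and they can be sanity-checked with a short behavioural sketch (the function names below are invented for the illustration, not from the text):

```python
# Boolean model of the SC switch-control waveforms of Equations (6.1)-(6.2).
# T is the test-enable signal; phi is one of the non-overlapping clock phases.

def signal_switch(T, phi):
    """phi_S = T + phi: signal switches stay closed throughout test mode."""
    return T or phi

def grounding_switch(T, phi):
    """phi_O = not(T) . phi: grounding switches are forced open in test mode."""
    return (not T) and phi

# Normal mode (T = 0): both switch types simply follow the clock phase.
assert signal_switch(False, True) and not signal_switch(False, False)
assert grounding_switch(False, True) and not grounding_switch(False, False)

# Test mode (T = 1): signal switches closed and grounding switches open,
# regardless of the clock phase, so the input propagates continuously.
for phi in (False, True):
    assert signal_switch(True, phi) is True
    assert grounding_switch(True, phi) is False
```

The two assertions in the loop capture exactly the property the text relies on: in test mode the stage degenerates into a continuous gain path.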
[Figure 6.6 Third-order lowpass SC filter, with separate signal and grounding switch clock phases for each stage]
The subscripts, S and O, are added to the clock phases to stress the functions of these
signals with respect to the switches.
During the test mode, the filter operates in a continuous fashion and its transfer
function is given by
    Vout = ( C1 / (C2 + C4) ) Vin        (6.3)
Equation (6.3) shows that the input signal Vin is transmitted through the circuit with
its amplitude scaled by the capacitor ratio.
Now we apply the same technique to a third-order lowpass SC filter [32],
shown in Figure 6.6. Assume that we are interested in testing stage 2 and
that the only accessible points of the circuit are the input of stage 1 and the output of
stage 3.
The functional testing of stage 2 requires two signal propagation conditions:
1. Establishing a path through stage 1 to control the input of stage 2.
2. Establishing a path through stage 3 to observe the output of stage 2.
Therefore the switches can be divided into three distinct groups:
1. The grounding switches in stages 1 and 3 remain open during the testing of
stage 2.
2. The signal switches in stages 1 and 3 remain closed during the testing of
stage 2.
3. The switches in the inter-stage feedback circuits remain open to ensure
controllability and observability and to avoid possible instability.
In test mode, stage 2 should be in normal operation, that is, the switches in stage
2 are controlled by the normal two-phase clock. Three test control lines are required
to enable testing of each of the three stages. These lines are designated as T1, T2 and
T3. The clock waveforms for both normal and test operation are defined as follows:
1. For grounding and inter-stage feedback switches:

    φiO = T̄1 · T̄2 · T̄3 · φi        (6.4)

where φi denotes clock phase i, i = 1, 2.
[Figure 6.7 Switched opamp: a digital mode control input (test/filter) selects between normal opamp operation and a buffer path from input to output]
2. For the signal switches of stage j:

    φij = φi + Σ Tk    (k = 1, 2, 3; k ≠ j)        (6.5)
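Equations (6.4) and (6.5) can likewise be checked with a short Boolean model (a behavioural sketch only; the function names are invented for the illustration):

```python
# Clock gating per Equations (6.4)-(6.5) for the three-stage SC filter:
# grounding/feedback switches open whenever any test line is active, and the
# signal switches of every stage except the one under test are held closed.

def grounding_switch(phi, T):
    """phi_iO = not(T1 + T2 + T3) . phi_i  -- Equation (6.4)."""
    return (not any(T)) and phi

def signal_switch(phi, T, j):
    """phi_ij = phi_i + sum of T_k over k != j  -- Equation (6.5)."""
    return phi or any(Tk for k, Tk in enumerate(T, start=1) if k != j)

T = [False, True, False]          # test stage 2 (T2 high)

# Stage 2 keeps its normal two-phase clocking...
assert signal_switch(False, T, 2) is False
assert signal_switch(True, T, 2) is True
# ...while the signal switches of stages 1 and 3 are held closed and all
# grounding/feedback switches are forced open, as the text requires.
assert signal_switch(False, T, 1) and signal_switch(False, T, 3)
assert grounding_switch(True, T) is False
```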
[Figure 6.8 Three-opamp second-order active-RC filter modified with switched opamps (mode control T/F)]
From the above equations it can be seen that every stage can be tested from the filter
input and output due to the use of the switched opamp.
[Figure 6.9 Third-order lowpass SC filter modified with switched opamps (mode control T/F)]
The multiplexing DfT technique has been proposed to increase access to internal
circuit nodes. Through a demultiplexer, a test input signal can be applied to internal
nodes, while a multiplexer can be used to take outputs from internal nodes. The
controllability and observability of the filter are thus enhanced. When using the
multiplexing DfT technique, the FUT may be divided into a number of functional blocks
or stages. The input demultiplexer routes the input signal to the inputs of different
stages and the outputs of the stages are loaded to the primary output by the output
multiplexer [6]. Testing and diagnosis of embedded blocks or internal stages thus
become much easier.
For an integrator-based test, for example, the FUT is divided into separate test stages
using MOS switches such that each stage represents a basic integrator function [9].
Individual integrators are tested separately against their expected performances to
isolate the faulty stages. The diagnosis procedure then further identifies the specific
faults in the faulty stages. Normally, the filter can be divided into two possible types of
integrator: the lossy integrator and the ideal integrator. Time, amplitude and phase responses
may be tested for these integrators. The implementation of the multiplexing-based
DfT requires only a few MOS switches. The value of the MOS switch on-resistance is
chosen such that it does not affect the performance of the filter in normal mode.
[Figure 6.10 Testable three-opamp active-RC filter with input demultiplexer, output multiplexer and control switches S1 (address lines A0, A1)]

Table 6.1 Operation of the testable filter in Figure 6.10

S1   A1   A0   Mode     Operation
1    0    0    Normal   Filter
0    0    1    Test     Lossy integrator
0    1    0    Test     Ideal integrator
0    1    1    Test     Amplifier
[Figure 6.11 Testable KHN filter with input demultiplexer, output multiplexer and control switches S1 (address lines A0, A1)]

Table 6.2 Operation of the testable KHN filter in Figure 6.11

S1   A1   A0   Mode     Operation
1    0    0    Normal   Filter
0    0    1    Test     Amplifier
0    1    0    Test     Ideal integrator
0    1    1    Test     Ideal integrator
The switch resistances are chosen such that the pole frequency movement is negligible
and the new zeros introduced by the switches are as far outside the original filter
bandwidth as possible. The operation of the testable KHN filter in Figure 6.11 is
given in Table 6.2. In normal mode operation, all control switches designated as S1
are closed with address pins A0 and A1 at zero level and the circuit performs the
same function as the original filter. The fault diagnosis method involves the following
procedure:
1. Set the KHN filter in test mode by placing all switches (S1) in the open position.
2. Observe the output waveforms of the stage under test in the KHN filter by
assigning the respective address of the stage, as given in Table 6.2.
Each stage is investigated step by step to locate the fault. The multiplexing technique
can be used to observe the fault in any stage. The function of each stage is
simply an ideal integrator or an amplifier.
[Figure 6.12 Testable TT OTA-C filter (gm1, gm2, gm3, C1, C2) with control switches S1, S2, demultiplexer and multiplexer (address lines A0, A1)]

Table 6.3 Operation of the modified TT filter in Figure 6.12

S1   S2   A1   A0   Mode     Operation
0    1    0    0    Normal   Filter
1    0    0    1    Test     Ideal integrator
1    0    1    0    Test     Lossy integrator
TT OTA-C filters have excellent low sensitivity to parasitic input capacitances and are
suitable for cascade synthesis of active filters at high frequencies. Multiplexing-based
DfT is directly applicable to the TT filter using only MOS switches, as shown in Figure 6.12.
The values of the switch resistances are chosen so that the modified filter performs
the same function as the original filter. The optimum selection of the aspect ratio between
length and width of the control switches will produce negligible phase perturbation
and an insignificant increase in the total harmonic distortion due to the non-linearity of
the MOS switch.
The modified TT filter is first tested in normal mode. In normal mode operation,
control switches designated as S2 are closed and S1 are opened, as shown in Table 6.3.
In the case that failure occurs, the test mode will be activated, with switches S2 open
and S1 closed, and the individual stages will be tested sequentially to isolate the faulty
stage. During testing, the TT filter in Figure 6.12 becomes two individual stages:
an ideal integrator (stage 1) and a lossy integrator (stage 2). The transfer function of
stage 1 can be derived as

    Vout = ( gm1 / (sC1) ) Vin        (6.10)

The DfT design has very little impact on the circuit performance of the filter since only
switches S2 are within the integrator loop. All switches are opened and closed
simultaneously; therefore a combination of n-type and p-type switches is used, which
requires very low pin overhead.
The multiplexing technique is general and also suitable for SC filter testing.
Similar to the applications to active-RC and OTA-C filters described above, the
multiplexer-based DfT is easily implemented and requires only minor modification to the
original multi-stage SC filter without degrading the performance of the circuit. The modified
filter structure provides controllability and observability of the internal nodes of each
filter stage. The technique sequentially tests every stage of the filter and hence reduces
test time, and it has greatly reduced area overhead.
OBT procedures for analogue filters, based on transformation of the FUT into an
oscillator, have recently been introduced [11–13]. The oscillation-based DfT structure
uses vectorless output frequency comparison between fault-free and faulty circuits
and consequently reduces test time, test cost, test complexity and area overhead.
Furthermore, the testing of high-frequency filter circuits becomes easier because no
external test signal is required for this test method. OBT shows greatly improved
detection and diagnostic capabilities for a number of catastrophic and
parametric faults. Application of the oscillation-based DfT scheme to low-order
analogue filters of different types is discussed, because these structures are commonly
used individually as filters and also as building blocks for high-order filters.
In OBT, the circuit that we want to test is transformed into an oscillating circuit
and the frequency of oscillation is measured. The frequency of the fault-free circuit
is taken as a reference value. Discrepancy between the oscillation frequency and the
reference value indicates possible faults. Fault detection can be performed as a BIST
or in the frame of an external tester. In BIST, the original circuit is modified by
inserting some test control logic that provides for oscillation during test mode. In the
external tester, the oscillation is achieved by an external feedback loop network that
is normally implemented as part of a dedicated tester.
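The pass/fail decision just described (compare the measured oscillation frequency against a fault-free reference) can be sketched as follows; the zero-crossing estimator and the 5 per cent tolerance window are illustrative choices, not taken from the text:

```python
import math

def oscillation_frequency(samples, fs):
    """Estimate frequency from positive-going zero crossings of a sampled output."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    if len(crossings) < 2:
        return 0.0
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return fs / period

def obt_pass(f_meas, f_ref, tol=0.05):
    """Fault-free verdict if the measured frequency is within tol of the reference."""
    return abs(f_meas - f_ref) <= tol * f_ref

fs, f0 = 1_000_000.0, 10_000.0    # 1 MHz sampling, 10 kHz reference oscillation
wave = [math.sin(2 * math.pi * f0 * n / fs) for n in range(2000)]
f_meas = oscillation_frequency(wave, fs)

assert obt_pass(f_meas, f0)             # nominal circuit passes
assert not obt_pass(f_meas, 1.5 * f0)   # a 50 per cent shift flags a fault
```

No stimulus generator appears anywhere in this sketch, which mirrors the main attraction of OBT: the circuit supplies its own test signal.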
An ideal quadrature oscillator consists of two lossless integrators (inverting and
non-inverting) cascaded in a loop, resulting in a characteristic equation with a pair of
roots lying on the imaginary axis of the complex frequency plane. In practice, how-
ever, parasitics may cause the roots to be inside the left half of the complex frequency
plane, hence preventing the oscillation from starting. Any practical oscillator must
be designed to have its poles initially located inside the right-half complex frequency
plane in order to assure self-starting oscillation. Most of the existing theory for sinu-
soidal oscillator analysis [26] models the oscillator structure with a basic feedback
loop. The feedback loop may be positive, negative or a combination of both. The
quadrature oscillator model can ideally be described by a second-order characteristic
equation:
    (s² - bs + ω0²) V0(s) = 0        (6.12)
[Figure 6.13 Three-opamp active-RC filter with oscillation-test switch S1]
and the frequency of the pole and the quality factor are given by

    ω0 = √[ R4 / (R3 R1 R2 C1 C2) ]
    1/Q = K1 (R5/R6) √[ R3 R2 C2 / (R4 R1 C1) ]        (6.16)

    V2/Vin = [ 1/(R2 R4 C1 C2) ] / [ s² + (1/(R1 C1)) s + R6/(R2 R3 R5 C1 C2) ]        (6.17)
[Figure 6.14 Three-opamp active-RC filter with oscillation-test switch S1 in series with R1]
The frequency of the pole and the quality factor are given by the expressions:

    ω0 = √[ R6 / (R2 R3 R5 C1 C2) ]
    1/Q = (1/R1) √[ R2 R3 R5 C2 / (R6 C1) ]        (6.18)

It is clear from Equation (6.18) that both the Q factor and the pole frequency ω0 can be
adjusted independently. From the above expressions we can see that the condition for
oscillation, Q → ∞ without affecting ω0, will be satisfied if R1 → ∞. This is realized
by inserting switch S1 to disconnect R1 from the circuit. In the test mode the filter
will oscillate at the resonance frequency ω0. Deviations of the oscillation frequency from
the resonance frequency indicate faulty behaviour of the circuit. The amount
of frequency deviation will determine the possible type of fault, either catastrophic
or parametric, as well as the specific location where the fault has occurred.
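As a numeric illustration of this screening, the sketch below evaluates ω0 of Equation (6.18) for a nominal and a faulty component set; all component values are invented for the example:

```python
import math

# Oscillation-frequency screening for the filter of Equation (6.18).
# Component values below are illustrative, not taken from the chapter.

def omega0(R2, R3, R5, R6, C1, C2):
    """w0 = sqrt(R6 / (R2 R3 R5 C1 C2)), Equation (6.18), in rad/s."""
    return math.sqrt(R6 / (R2 * R3 * R5 * C1 * C2))

nominal = dict(R2=10e3, R3=10e3, R5=10e3, R6=10e3, C1=1e-9, C2=1e-9)
w_ref = omega0(**nominal)                  # 1e5 rad/s for these values

faulty = dict(nominal, R5=20e3)            # parametric fault: R5 doubled
w_faulty = omega0(**faulty)                # drops by a factor sqrt(2)

deviation = abs(w_faulty - w_ref) / w_ref  # about 29 per cent
assert deviation > 0.05                    # well outside a 5 per cent window
assert abs(w_ref - 1e5) < 0.1
```

A catastrophic fault (an open or short) would typically shift the frequency much further, or kill the oscillation outright, which is how the deviation magnitude hints at the fault type.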
[Figure 6.15 Sallen-Key lowpass filter with amplifier gain set by RA and RB]
where the amplifier gain K is equal to 1 + (RB/RA). We can put the Sallen-Key
filter into oscillation by substituting 1/Q = 0 in Equation (6.20). As a result, we
get the amplifier gain K = (R2C2 + R1C2 + R1C1)/(R1C1). Some external control of the
value of RB/RA must be provided to obtain the required value of K in test mode.
Note, however, that even when the passive elements are in perfect adjustment, the
finite bandwidth of a real amplifier causes dissimilar effects on the pole and zero
positions. We can also put the Sallen-Key filter into oscillation by adding a feedback
loop containing a high-gain inverter [12].
[Figure 6.16 Testable two-integrator loop OTA-C filter incorporating the OBT method]
[Figure 6.17 Testable TT OTA-C filter with oscillation-test switch S1 and MOS devices M1, M2]
    Q = (1/gm3) √[ gm1 gm2 C2 / C1 ]        (6.28)

To put the TT filter into oscillation with constant amplitude, the quality factor must be
infinite. The network will then oscillate at the resonant frequency ω0 if the quality factor
Q → ∞. By closing the switch S1, M1 is short-circuited and M2 open-circuited; the
filter network will be converted into an oscillator and the poles are given by

    s1, s2 = ±j √[ gm1 gm2 / (C1 C2) ]        (6.29)
From Equation (6.28) we can see that the condition for oscillation will be satisfied if
gm3 = 0, without affecting the resonant frequency. In Figure 6.17 this can be realized
by switching off the gm3 OTA.
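A quick numeric check of Equation (6.29), with illustrative transconductance and capacitance values (not taken from the chapter):

```python
import math

# Pole locations of Equation (6.29) after gm3 is switched off: the testable
# OTA-C loop oscillates at w = sqrt(gm1*gm2 / (C1*C2)).

gm1 = gm2 = 100e-6        # 100 uA/V transconductances (illustrative)
C1 = C2 = 10e-12          # 10 pF integration capacitors (illustrative)

w_osc = math.sqrt(gm1 * gm2 / (C1 * C2))   # oscillation frequency, rad/s
f_osc = w_osc / (2 * math.pi)              # in Hz

assert abs(w_osc - 1e7) < 1.0              # 1e7 rad/s for these values
assert 1.59e6 < f_osc < 1.60e6             # about 1.59 MHz
```

The megahertz-range result for picofarad-level capacitors illustrates why OTA-C structures, and hence OBT on them, suit high-frequency filters.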
[Figure 6.18 Testable KHN OTA-C filter (outputs VHP, VBP and VLP) with oscillation-test switch S1 and MOS devices M1, M2]
Equations (6.31) and (6.32) show that we can change the cut-off frequency ω0
and the quality factor Q of the filter independently. The KHN filter will oscillate at the
resonant frequency ω0 if the quality factor Q → ∞. The condition of oscillation will
be satisfied by substituting gm3 = 0 in Equation (6.32). By closing the switch S1,
M1 is short-circuited and M2 open-circuited; the filter network is converted into an
oscillator and the oscillation frequency is the resonant frequency of the filter.
[Figure 6.19 SC biquadratic filter with capacitors C01-C09 and clock phases clk1, clk2]

[Figure 6.20 Filter to oscillator conversion using a non-linear block N(A) in the feedback loop of the filter H(z)]
To convert the biquadratic filter into an oscillator requires a circuit to force a
displacement of a pair of poles to the unit circle. A non-linear block in the filter feedback
loop [26, 34] can generate self-sustained robust oscillations. The oscillation condition
and approximations for the frequency and amplitude of the resulting oscillator for the
system in Figure 6.20 are determined by the roots of

    1 - N(A) H(z) = 0        (6.39)

where N(A) represents the transfer function of the non-linear block as a function of
the amplitude A of the first harmonic of its input. We consider the non-linear function
formed by an analogue comparator providing one of the two voltages ±V.
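For an ideal comparator switching between +V and -V, the classical describing function is N(A) = 4V/(πA); combining this with Equation (6.39) at the filter's resonance predicts the oscillation amplitude. The sketch below assumes an illustrative resonance gain |H| = 2.5 (an invented value, not from the text):

```python
import math

# Describing-function estimate for the loop of Figure 6.20: an ideal
# comparator switching between +V and -V has N(A) = 4V/(pi*A), so the
# oscillation condition 1 - N(A)*H = 0 at the filter's resonance gives
# A = 4*V*|H|/pi.  The gain and comparator levels are illustrative.

V = 1.0                    # comparator output levels: +/- 1 V
H_res = 2.5                # assumed filter magnitude at resonance

def relay_describing_function(A, V):
    """First-harmonic gain of an ideal +/-V comparator for input amplitude A."""
    return 4 * V / (math.pi * A)

# Solve N(A) * H_res = 1 for the predicted oscillation amplitude.
A_pred = 4 * V * H_res / math.pi

assert abs(relay_describing_function(A_pred, V) * H_res - 1.0) < 1e-12
```

Because N(A) falls as A grows, the loop is self-limiting: any amplitude above A_pred sees less than unity loop gain and decays back, which is what makes the oscillation robust.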
Two main approaches are found in the literature [16, 23] for the realization of
high-order filters. The first is to cascade second-order stages, either without feedback
(cascade filter) or with negative feedback applied through multiple loop feedback
(MLF). The second is the use of combinations of active and passive components in
order to simulate the operation of high-order LC ladders. The main problem related to
the testing of both types of high-order filter is the controllability and observability of
deeply embedded internal nodes. Controllability and observability can be increased by
partitioning the system into accessible blocks. These blocks are first-order or second-order
sections representing integrators or biquadratic transfer functions respectively.
The problems encountered in partitioning and testing of any high-order filter can be
summarized as
1. How can the FUT be efficiently decomposed into basic functional blocks?
2. How can each stage be tested?
3. How can two or more consecutive stages be tested to check the effects of loading
and impedance mismatches?
4. How can the faulty stage be isolated and the faulty parts diagnosed?
5. What type of test waveforms should be applied to diagnose the faulty
components?
It is clear from these questions that DfT techniques will strongly depend upon
the configuration and type of the high-order filter. The application of the multiplexing,
bypass and OBT DfT techniques to high-order filters is discussed below.
[Figure 6.21 Testable nth-order filter using the bypass strategy based on switched opamps]
The basic block for a switched opamp is illustrated in Figure 6.7. It has
two operation modes defined by a digital mode control input T/F. At logic zero, the
opamp operates normally and the circuit under test behaves as a filter with very small
performance degradation. When T/F has a value of one, the analogue block acts as a
buffer, passing the input signal to the output of the block. The implementation of the
bypass scheme using switched opamps for the nth-order filter is shown in Figure 6.21.
The testable nth-order filter based on switched opamps is easily divided into
separate analogue blocks, each block being a first- or second-order filter. To test
the ith block, all blocks except the block under test (BUT) are put into test mode,
operating as buffers. The test signal at the input of the system enters the input node
of the BUT via the preceding buffer stages and the output node of the BUT is then
observed at the primary output of the system through the subsequent buffer stages. The
only block operating as a first- or second-order filter is the BUT. Therefore, the input
to the BUT is equal to the primary input of the filter, that is

    Vi-1 = Vi-2 = ··· = Vin        (6.46)

where i = 1, 2, 3, …, n. The output of the BUT is equal to the primary output of the filter:

    Vi = Vi+1 = ··· = Vout        (6.47)

In Figure 6.21, although we do not show the coupling and feedback between different
stages, the test method is also suitable for MLF structures.
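Behaviourally, Equations (6.46) and (6.47) say that buffered blocks are transparent. A minimal model (with arbitrary stand-in gains for the first- and second-order blocks) makes this concrete:

```python
# Behavioural sketch of the bypass scheme of Figure 6.21: each block is either
# its filter function or a unity-gain buffer, so the block under test (BUT)
# sees the primary input and drives the primary output.  The gain values are
# arbitrary stand-ins for first-/second-order stages, not from the chapter.

def chain_output(vin, gains, but_index):
    """Propagate vin through n blocks, bypassing all but block `but_index`."""
    v = vin
    for i, g in enumerate(gains):
        v = g * v if i == but_index else v   # buffers pass the signal through
    return v

gains = [2.0, -0.5, 3.0, 1.5]                # four cascaded blocks
vin = 0.25

for i, g in enumerate(gains):
    # With every other block buffered, the chain realizes only block i,
    # which is Equations (6.46)-(6.47) in behavioural form.
    assert chain_output(vin, gains, i) == g * vin
```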
[Figure 6.22 Block diagram of programmable biquad multiplexing test structure for cascade filter]
The control logic will programme the programmable biquad with the same transfer
function as the biquad under test. A programmable biquad is a universal biquadratic
section that can implement any of the basic filter types by electrical programming.
The comparator circuit compares the responses of the biquads to generate an error
signal. The system biquad will be considered fault free if the error signal lies inside
the comparison window.
The testable cascade filter structure in Figure 6.22 consists of the FUT, the
programmable biquad, the comparator and the control logic. The input multiplexer, Si1 to Sin,
connects the different biquad inputs to the programmable biquad input and the output
multiplexer, So1 to Son, connects the output node of each biquad to the comparator.
A set of switches, Sc2 to Scn, connects and disconnects each biquad output to the next
biquad input. An additional set of switches, St2 to Stn, acts as a demultiplexer able to
distribute the input signal to the different biquad input nodes. The control logic is a
finite sequential machine that controls the operational modes as well as configuring
the programmable biquad according to the requirements of the biquad under test.
The DfT procedure has two operating modes, normal/filter mode and test mode. The
test mode of operation is further divided into two sub-modes, online test and offline
test. In online test mode, testing of the selected biquad is carried out during normal
operation of the filter, using normal signals rather than signals generated specifically
for testing. When working in online test mode, the control logic can connect the input
of any biquad in the cascade to the programmable biquad's input as well as the same
biquad's output to the comparator input. The control logic also programmes the
programmable biquad to implement the same transfer function as the selected biquad.
We can then perform a functional comparison between the outputs of the selected biquad
and the programmed biquad within a margin range. If the selected biquad is fault free
then the comparator output will lie between the given tolerance limits, since the same
input signal is applied to the input of both biquads.
When the offline test mode is invoked, switches Scj split the filter into biquad stages
and the input is selectively applied to one of them and to the programmable biquad.
The control logic connects the output of the biquad under test to the comparator and
the comparator compares this output to the programmable biquad output for the same
input signal. The error signal from the comparator output will indicate the faulty
behaviour of the biquad under test, since the programmable biquad is programmed to
implement the same transfer function. The control logic selects the biquad
stages of the filter one by one and performs the comparison with the programmed biquad.
The above method is also suitable for the testing of ladder-based filters [8]. For testing,
a ladder filter needs to be partitioned into second-order sections, which may not be as
straightforward as for the cascade structure due to coupling. The programmable biquad
also needs modification to solve the stability problem due to the infinite Q of some
sections.
The value of the MOS switch ON resistance is chosen such that it does not affect
the performance of the original filter. The test methodology is straightforward. The
modified circuit is first tested in normal mode. If any malfunction or failure occurs,
the test mode is activated and all the individual stages are tested one by one to isolate
the faulty stage. Then the faulty stage must be further investigated to locate the fault.
The general MLF OTA-C filter is shown in Figure 6.23. The MLF OTA-C filter
is composed of a feed-forward network of integrators connected in cascade
and a feedback network that, for canonical realizations, contains pure wire connections
only [28].
The feedback network may be described as

    Vfi = Σ fij Voj    (j = i, …, n)        (6.48)
[Figure 6.23 General MLF OTA-C filter: cascaded OTA-C integrators gm1 ... gmn with capacitors C1 ... Cn and a feedback network]

[Figure 6.24 Modified MLF OTA-C filter with demultiplexer, multiplexer, control switches S1, S2 and test control lines T1 ... Tn (address lines A0 ... Am)]
where $f_{ij}$ is the voltage feedback coefficient from the output of integrator j to the
input of integrator i. The feedback coefficient $f_{ij}$ can have zero or non-zero values
depending upon the feedback. Equation (6.48) can be written in matrix form as

$$[V_f] = [F][V_o] \qquad (6.49)$$

where $[V_o] = [V_{o1}\; V_{o2}\; \cdots\; V_{on}]^t$ is the vector of integrator output voltages,
$[V_f] = [V_{f1}\; V_{f2}\; \cdots\; V_{fn}]^t$ is the vector of feedback voltages to the inverting
input terminals of the integrators and $[F] = [f_{ij}]_{n \times n}$ is the feedback coefficient
matrix. Different feedback coefficients will result in different filter structures. Thus,
the feedback network classifies the filter structures.
The modified MLF OTA-C filter using the multiplexing DfT technique is shown
in Figure 6.24. The operation of the modified MLF filter in normal and test modes is
given in Table 6.4.
In normal operating mode, control switches S2 are closed and switches S1 are open, while
the address pins A0 to Am are at level 0.
[Table 6.4: Mode and operation of the modified MLF filter for the switch settings S1, S2 and address inputs Am to A0]
The fault-free circuit will perform the same function as the original filter. In cases
where the filter performance does not meet specification, the OTA-C stages must be
investigated separately. The fault diagnosis method involves the following steps:
1. Activate the test mode of operation by closing switches S1 and opening switches
S2, and set the multiplexer address inputs A0, . . . , Am to select an OTA-C stage
for testing.
2. Apply the input test signal to the selected OTA-C stage through the analogue
demultiplexer.
3. Observe the output of the selected OTA-C stage at the output terminal of the
AMUX. The function of each individual OTA-C stage is an ideal integrator.
The voltage transfer function of the stage can be defined as

$$H(s) = \frac{g_{mi}}{sC_i} \qquad (6.50)$$

where i is the number of the OTA-C stage and $g_{mi}$ and $C_i$ are the transconductance
and capacitance of the related OTA and capacitor. Therefore, the output of the
fault-free OTA-C stage will be a triangular wave in response to a square-wave input.
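As a quick plausibility check, the integrator behaviour described above can be simulated numerically. The transconductance, capacitance and signal values below are illustrative only, not taken from the text.

```python
import numpy as np

def integrator_response(vin, dt, gm=50e-6, C=10e-12):
    """Ideal OTA-C stage H(s) = gm/(sC): vout(t) = (gm/C) * integral of vin dt,
    approximated here by a cumulative sum."""
    return (gm / C) * np.cumsum(vin) * dt

# 1 kHz square wave of +/-1 mV, sampled at 1 MHz for two periods
fs, f_in = 1e6, 1e3
t = np.arange(0, 2e-3, 1 / fs)
vin = np.where(np.sin(2 * np.pi * f_in * t) >= 0, 1e-3, -1e-3)
vout = integrator_response(vin, 1 / fs)
# A fault-free stage ramps linearly up during the positive half-cycle and
# down during the negative one, i.e. it produces a triangular wave.
```

A faulty stage (wrong gm/C ratio, stuck node) would distort the slope or the shape of this triangle, which is what step 3 of the procedure looks for.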
Figure 6.25 Testable high-order OTA-C filter structures based on OBT: (a) cascade
structure, (b) IFLF structure and (c) LF structure
Commonly used design approaches for high-order OTA-C filters are based on
cascade and MLF structures. The choice of the feedback network can result in the cascade,
inverse follow-the-leader feedback (IFLF) and leap-frog (LF) configurations [29].
These types of multi-stage (high-order) OTA-C filter structures can be easily modified
to implement the oscillation-based DfT technique, as shown in Figure 6.25, where Sn
and Sp are NMOS and PMOS transistor switches respectively.
Implementation of the oscillation-based DfT method requires a number of
modifications to the original filter. All these modifications can be carried out by
inserting MOS transistor switches into the original filter circuits. Therefore, the MOS
transistors are the key components in OBT testing and provide the testable structure of
the high-order OTA-C filter. The accuracy of these transistors directly affects the
accuracy and functionality of the FUT.
The most important characteristics of a transistor are the ON resistance, the OFF
resistance and the values of the parasitic capacitors. The effective resistance of a MOS
transistor operating in the linear region may be expressed as

$$R_{DS} = \frac{L}{kW}(V_{GS} - V_T)^{-1} \qquad (6.51)$$

where W and L are the channel width and length respectively, $V_{GS}$ and $V_T$ are the
gate-source bias voltage and threshold voltage respectively, and $k = \mu_n C_{ox}$, where
$\mu_n$ is the electron mobility and $C_{ox}$ is the oxide capacitance per unit area.
A larger aspect ratio will reduce the series resistance. However, the parasitic
capacitance is approximately proportional to the product of width and length. Therefore,
choosing an optimum aspect ratio and a sensible point in the signal paths for
switch insertion will ensure a minimal impact on the performance of the filter. The
modified filter circuits shown in Figure 6.25 require two types of switches: switches
in the signal path to divide the filter into biquadratic blocks, and switches in the feedback
path to establish the oscillation conditions. The switches in the signal path must be
realized using MOS transistors with minimum ON resistance, whereas
the other switches can be designed for minimum size.
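The resistance/capacitance trade-off of Equation (6.51) can be sketched in a few lines. The threshold voltage, process transconductance and device dimensions below are typical illustrative figures, not values from the text.

```python
def r_on(W, L, VGS, VT=0.7, k=100e-6):
    """Equation (6.51): ON resistance of a MOS switch in the linear region,
    R_DS = (L / (k * W)) * (VGS - VT)^-1, with k = mu_n * Cox.
    All default parameter values are illustrative assumptions."""
    return (L / (k * W)) / (VGS - VT)

# Widening the device lowers the series resistance in the signal path,
# but the parasitic capacitance (~ W * L) grows with the device area.
r_narrow = r_on(W=1e-6, L=0.35e-6, VGS=3.3)    # ~1.35 kOhm
r_wide   = r_on(W=10e-6, L=0.35e-6, VGS=3.3)   # 10x wider -> 10x lower R
```

This is why the text realizes signal-path switches with large (low-resistance) devices while the feedback switches, which only need to close an oscillation loop, can be minimum size.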
The modified filter circuit has two modes of operation: normal mode and test mode. In
normal mode all switches designated Sp are closed whereas the switches
designated Sn are open, and the circuit will perform the original filter functions. When
the test mode is invoked, the Sp switches are opened and the Sn switches closed. Switches
Sp split the filter into biquad stages and switches Sn convert these biquad stages into
oscillators. The oscillation frequency of each oscillator is then

$$\omega_{0i} = \sqrt{\frac{g_i\, g_{i+1}}{C_i\, C_{i+1}}}, \qquad i \ \text{odd},\ i = 1, 3, 5, \ldots, n-1 \qquad (6.52)$$
where n is the order of the filter and is even. When n is odd, the last integrator can
be combined with the (n−1)th integrator to form an oscillator, although the (n−1)th
integrator has already been tested. The condition of oscillation for two-integrator-loop
biquadratic filters is discussed in Section 6.4.
The test and diagnosis procedure of OBT is straightforward. The FUT is first
tested in normal mode and the cut-off frequency of the FUT is measured. The test mode
is activated if the cut-off frequency deviates beyond the given tolerance band. In
test mode, the high-order filter is decomposed into individual biquad oscillators and
the individual oscillator frequencies are measured to isolate the faulty stage. Comparison
between the frequency evaluated from Equation (6.52) and the measured frequency of
the corresponding oscillator stage identifies the faulty stage of the FUT. The deviation
from the fault-free frequency will identify the fault, catastrophic or parametric, and
the location of the fault in the stage.
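The comparison step can be sketched as follows. The transconductance and capacitance values, the 5 % tolerance band and the injected 20 % frequency shift are all invented for illustration; Equation (6.52) itself provides the expected frequencies.

```python
import math

def osc_freq(g_i, g_i1, C_i, C_i1):
    """Equation (6.52): omega_0i = sqrt(g_i * g_{i+1} / (C_i * C_{i+1}))."""
    return math.sqrt((g_i * g_i1) / (C_i * C_i1))

def isolate_faulty_stage(measured, expected, tol=0.05):
    """Flag oscillator stages whose measured frequency deviates from the
    Equation (6.52) value by more than the tolerance band (5% assumed)."""
    return [i for i, (m, e) in enumerate(zip(measured, expected))
            if abs(m - e) / e > tol]

# Illustrative 6th-order filter -> three biquad oscillators, identical design
expected = [osc_freq(50e-6, 50e-6, 10e-12, 10e-12)] * 3   # 5 Mrad/s each
measured = [expected[0], expected[1] * 0.8, expected[2]]  # stage 1 shifted 20%
faulty = isolate_faulty_stage(measured, expected)
```

A large shift (or no oscillation at all) points to a catastrophic fault in the flagged stage, while a small out-of-band shift suggests a parametric deviation, matching the distinction made above.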
The proposed design can be implemented for any type of high-order OTA-C filter
with little impact on the performance of the original filter. The area overhead depends
upon the type and order of the filter. Implementation for n stages adds [nSn + (n−1)Sp]
extra MOS transistors for the cascade-type structure and [(n+1)Sn + nSp] for the
IFLF-type structure respectively. The area overhead of the LF-type structure is the
same as that of the cascade type, since only one feedback loop requires changing in both
cases. The percentage area overheads for the cascade, LF and IFLF
structures are as follows:
$$\text{Overhead for LF and cascade} = \frac{nA_n + (n-1)A_p}{A} \times 100\% \qquad (6.53)$$

$$\text{Overhead for IFLF} = \frac{(n+1)A_n + nA_p}{A} \times 100\% \qquad (6.54)$$
where A is the original circuit area, An is the area of switch Sn and Ap is the area of
switch Sp .
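Equations (6.53) and (6.54) can be evaluated with a small helper; the filter order and area figures below are invented purely for illustration.

```python
def overhead_pct(n, A, An, Ap, iflf=False):
    """Switch-area overhead as a percentage of the original filter area A.
    Cascade and LF structures add n Sn + (n-1) Sp switches (Equation (6.53));
    IFLF adds (n+1) Sn + n Sp switches (Equation (6.54))."""
    if iflf:
        return ((n + 1) * An + n * Ap) / A * 100.0
    return (n * An + (n - 1) * Ap) / A * 100.0

# Hypothetical 6-stage filter: original area 10000 units, switch areas 10 and 4
pct_lf = overhead_pct(n=6, A=10000.0, An=10.0, Ap=4.0)
pct_iflf = overhead_pct(n=6, A=10000.0, An=10.0, Ap=4.0, iflf=True)
```

With these assumed numbers the overhead stays below one per cent, consistent with the claim that the DfT modification has little impact on the original filter.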
6.6 Summary
This chapter has been concerned with DfT of, and test techniques for, analogue
integrated filters. Many different testable filter structures have been presented. Typical
DfT techniques, such as bypassing, multiplexing and OBT, have been discussed. The most
popular filters, such as active-RC, OTA-C and SC filters, have been covered. Testing of
low-order and high-order filters has been addressed. DfT of OTA-C filters has
been investigated in particular, because this topic has not been as well studied as the
testing of active-RC and SC filters. Many of the test concepts, structures and methods
described in the chapter are also suitable for other analogue circuits, although they
may be most useful for analogue filters, as demonstrated in the chapter.
6.7 References
1 Wey, C.L.: 'Built-in self-test structure for analogue circuit fault diagnosis', IEEE Transactions on Instrumentation and Measurement, 1990;39(3):517–21
2 Soma, M.: 'A design-for-test methodology for active analogue filters', Proceedings of IEEE International Test Conference, Washington, DC, September 1990, pp. 183–92
3 Soma, M., Kolarik, V.: 'A design-for-test technique for switched-capacitor filters', Proceedings of IEEE VLSI Test Symposium, Princeton, NJ, April 1994, pp. 42–7
4 Vazquez, D., Rueda, A., Huertas, J.L., Richardson, A.M.D.: 'Practical DfT strategy for fault diagnosis in active analogue filters', Electronics Letters, July 1995;31(15):1221–2
5 Vazquez, D., Rueda, A., Huertas, J.L., Peralias, E.: 'A high-Q bandpass fully differential SC filter with enhanced testability', IEEE Journal of Solid State Circuits, 1998;33(7):976–86
6 Wagner, K.D., Williams, T.W.: 'Design for testability of mixed signal integrated circuits', Proceedings of IEEE International Test Conference, Washington, DC, September 1988, pp. 823–9
7 Huertas, J.L., Rueda, A., Vazquez, D.: 'Testable switched-capacitor filters', IEEE Journal of Solid State Circuits, 1993;28(7):719–24
8 Vazquez, D., Rueda, A., Huertas, J.L.: 'A solution for the on-line test of analogue ladder filters', Proceedings of IEEE VLSI Test Symposium, Princeton, NJ, April 1995, pp. 48–53
9 Hsu, C.-C., Feng, W.-S.: 'Testable design of multiple-stage OTA-C filters', IEEE Transactions on Instrumentation and Measurement, 2000;49(5):929–34
10 Arabi, K., Kaminska, B.L.: 'Testing analogue and mixed-signal integrated circuits using oscillation-test method', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1997;16:745–53
11 Arabi, K., Kaminska, B.L.: 'Oscillation-test methodology for low-cost testing of active analogue filters', IEEE Transactions on Instrumentation and Measurement, 1999;48(4):798–806
12 Zarnik, M.S., Novak, F., Macek, S.: 'Design of oscillation-based test structure for active RC filters', IEE Proceedings Circuits, Devices and Systems, 2000;147(5):297–302
13 Hasan, M., Sun, Y.: 'Oscillation-based test structure and method of continuous-time OTA-C filters', Proceedings of the IEEE International Conference on Electronics, Circuits and Systems, Nice, France, 2006, pp. 98–101
14 Hasan, M., Sun, Y.: 'Design for testability of KHN OTA-C filters using oscillation-based test', Proceedings of IEEE Asia Pacific Conference on Circuits and Systems, Singapore, 2006, pp. 904–7
15 Huertas, G., Vazquez, D., Rueda, A., Huertas, J.L.: 'Effective oscillation-based test for application to a DTMF filter bank', IEEE International Test Conference, Atlantic City, NJ, September 1999, pp. 549–55
16 Sun, Y. (ed.): Design of High-Frequency Integrated Analogue Filters (IEE Press, UK, 2002)
17 Moritz, J., Sun, Y.: 'Design and tuning of continuous-time integrated filters' in Sun, Y. (ed.), Wireless Communication Circuits and Systems (IEE Press, UK, 2004), ch. 6
18 Sallen, R.P., Key, E.L.: 'A practical method of designing RC active filters', IRE Transactions on Circuit Theory, 1955;2:74–85
19 Kerwin, W.J., Huelsman, L.P., Newcomb, R.W.: 'State-variable synthesis for insensitive integrated circuit transfer functions', IEEE Journal of Solid State Circuits, 1967;2(3):87–92
20 Tow, J.: 'Active RC filters – a state-space realization', Proceedings of the IEEE, 1968;3:1137–9
21 Thomas, L.C.: 'The biquad: part I – some practical design considerations', IEEE Transactions on Circuit Theory, 1971;18:350–7
22 Banu, M., Tsividis, Y.: 'MOSFET-C filters' in Sun, Y. (ed.), Design of High-Frequency Integrated Analogue Filters (IEE Press, UK, 2002), ch. 2
23 Deliyannis, T., Sun, Y., Fidler, J.K.: Continuous-Time Active Filter Design (CRC Press, Boca Raton, FL, 1999)
24 Sun, Y.: 'Architecture and design of OTA/gm-C filters' in Sun, Y. (ed.), Design of High-Frequency Integrated Analogue Filters (IEE Press, UK, 2002), ch. 1
25 Sanchez-Sinencio, E., Geiger, R.L., Nevarez-Lozano, H.: 'Generation of continuous-time two integrator loop OTA filter structures', IEEE Transactions on Circuits and Systems, 1988;35(8):936–46
26 Rodriguez-Vazquez, A., Linares-Barranco, B., Huertas, J.L., Sanchez-Sinencio, E.: 'On the design of voltage-controlled sinusoidal oscillators using OTAs', IEEE Transactions on Circuits and Systems, 1990;37(2):198–211
27 Sun, Y., Fidler, J.K.: 'Structure generation and design of multiple loop feedback OTA-grounded capacitor filters', IEEE Transactions on Circuits and Systems I, 1997;44(1):1–11
28 Sun, Y., Fidler, J.K.: 'OTA-C realization of general high-order transfer functions', Electronics Letters, 1993;29(12):1057–8
29 Sun, Y., Fidler, J.K.: 'Synthesis and performance analysis of universal minimum component integrator-based IFLF OTA-grounded capacitor filter', IEE Proceedings Circuits, Devices and Systems, 1996;143(2):107–14
30 Sun, Y.: 'Synthesis of leap-frog multiple loop feedback OTA-C filters', IEEE Transactions on Circuits and Systems II, 2006;53(9):961–5
31 Unbehauen, R., Cichocki, A.: MOS Switched-Capacitor and Continuous-Time Integrated Circuits and Systems (Springer-Verlag, Berlin, 1989)
32 Allen, P.E., Sanchez-Sinencio, E.: Switched-Capacitor Circuits (Van Nostrand Reinhold, New York, 1984)
33 Fleischer, P.E., Laker, K.R.: 'A family of switched capacitor biquad building blocks', Bell System Technical Journal, 1979;58:2235–69
34 Fleischer, P.E., Ganesan, A., Laker, K.R.: 'A switched capacitor oscillator with precision amplitude control and guaranteed start-up', IEEE Journal of Solid State Circuits, 1985;20(2):641–7
Chapter 7
Test of A/D converters
From converter characteristics to built-in self-test proposals
Andreas Lechner and Andrew Richardson
7.1 Introduction
these measurements must be carried out for two or more gain settings and possible
input signal amplitudes, the test time can rapidly grow to several seconds. These
estimates should be considered in the context of total test time for a system-on-a-chip
(SoC) or mixed-signal integrated circuit (IC) that ideally needs to be below a second.
Note also that the converter will generally occupy only a small area of the device.
These issues illustrate the importance of utilizing the best possible methods available
for converter testing. This chapter will present not only the conventional techniques
for testing converters but also a selection of new ideas and test implementations that
target both test time reduction and the reduction of demand for high-cost analogue
test equipment.
[Figure 7.1: Ideal A/D converter transfer function: output codes 0 to 2^N − 1 versus analogue input from Vmin to Vmax, showing the code transition levels T[k], code bin widths W[k], code bin centres and the representational ideal straight line]
[Figure 7.2: Quantization error over the full-scale input range, bounded between −Q/2 and +Q/2]
$$Q = \frac{FS}{2^N} = \frac{V_{max} - V_{min}}{2^N} = 1\ \text{LSB} \qquad (7.1)$$
The ideal code bin width Q, usually given in volts, may also be given as a percentage
of the full-scale range. By standard convention, the first code bin starts at voltage
Vmin and is numbered 0, followed by the first code transition level T[1] to code
bin 1, up to the last code transition level T[2^N − 1] to the highest code bin [2^N − 1],
which reaches the maximum converter input voltage Vmax [3]. In the ideal case, all
code bin centres fall onto a straight line with equidistant code transition levels, as
illustrated in Figure 7.1. The analogue equivalent of a digital A/D converter output
code k corresponds to the location of the particular ideal code bin centre Vk on the
horizontal axis.
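The code-bin convention just described can be captured in a few lines. The helper below is an idealized model with hypothetical names, useful as a reference against which measured transition levels can later be compared.

```python
import numpy as np

def ideal_transitions(N, vmin, vmax):
    """Ideal code transition levels T[1] .. T[2^N - 1] for an N-bit ADC:
    the first bin starts at vmin and every bin is Q = FS / 2^N wide."""
    Q = (vmax - vmin) / 2**N
    return vmin + Q * np.arange(1, 2**N)

def quantize(v, N, vmin, vmax):
    """Map an input voltage to its output code, i.e. the bin it falls into."""
    T = ideal_transitions(N, vmin, vmax)
    return int(np.searchsorted(T, v, side='right'))

# 3-bit converter over 0..1 V: Q = 0.125 V, so 0.30 V lands in code bin 2
code = quantize(0.30, N=3, vmin=0.0, vmax=1.0)
```

Note that, as in the text, only the transitions T[1] to T[2^N − 1] exist; the extreme codes 0 and 2^N − 1 extend to the ends of the input range.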
The quantization process itself introduces an error corresponding to the difference
between the A/D converter's analogue input and the equivalent analogue value of its
output, which is depicted over the full-scale range in Figure 7.2. With a root mean
square (RMS) value of $Q/\sqrt{12}$ for a uniform probability distribution between $-Q/2$
and $+Q/2$, and an RMS value of $FS/(2\sqrt{2})$ for a full-scale input sine wave, the
ideal or theoretical signal-to-noise ratio (SNR) for an N-bit converter can be given in
decibels as

$$\text{SNR}_{ideal} = 10\log_{10}\frac{(FS/2\sqrt{2})^2}{(Q/\sqrt{12})^2} = 10\log_{10}\left[2^{2N}\,\frac{12}{8}\right] = 20\log_{10}[2^N] + 20\log_{10}\sqrt{\frac{12}{8}} = 6.02N + 1.76 \qquad (7.2)$$
For real A/D converters, further errors affect the conversion accuracy and converter
performance. The following sections will introduce the main static and dynamic
performance parameters that are usually verified to meet the specifications in
production testing. Standardized performance parameters associated with the A/D
converter transient response and frequency response can be found in Reference 3.
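Equation (7.2) can be cross-checked numerically by quantizing a coherently sampled full-scale sine wave with an idealized mid-tread quantizer; the resolution, record length and cycle count below are illustrative choices.

```python
import numpy as np

def snr_ideal_db(N):
    """Equation (7.2): ideal SNR of an N-bit converter in decibels."""
    return 6.02 * N + 1.76

# Quantize a full-scale sine (amplitude FS/2 = 0.5) with a 10-bit
# idealized mid-tread quantizer and measure the SNR directly.
N, n_samples = 10, 2**16
t = np.arange(n_samples)
x = 0.5 * np.sin(2 * np.pi * 173 * t / n_samples)  # 173 coherent cycles
Q = 1.0 / 2**N
xq = np.round(x / Q) * Q                           # quantized signal
err = xq - x                                       # quantization error
snr_measured = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
# snr_measured comes out close to snr_ideal_db(10) = 61.96 dB
```

The agreement confirms that the quantization error of a busy input behaves like the uniform noise assumed in the derivation.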
Figure 7.3 Terminal-based DNL and INL in the A/D converter transfer function
are then determined through matching this real straight line with the ideal straight
line, which again can deviate slightly from the optimization process results [3].
Differential non-linearity (DNL) is a measure of the deviation of the gain- and
offset-corrected real code widths from the ideal value. DNL values are given in LSBs for
the codes 1 to (2^N − 2) as a function of k as

$$\text{DNL}[k] = \frac{W[k] - Q}{Q} \qquad \text{for } 1 \le k \le 2^N - 2 \qquad (7.5)$$

where W[k] is the width of code k determined from the gain- and offset-corrected code
transition levels as given in Equation (7.3) and Q is the ideal code bin width. Note
that neither the real code bin widths nor the ideal value is defined at either end of the
transfer function. As an example, a DNL of approximately +1/4 LSB in code m is
included in Figure 7.3. The absolute or maximum DNL corresponds to the maximum
value of |DNL[k]| over the range of k given in Equation (7.5). A value of −1 for
DNL[k] corresponds to a missing code.
Integral non-linearity (INL) quantifies the absolute deviation of a gain- and
offset-compensated transfer curve from the ideal case. INL values are given in LSBs at the code
transition levels as a function of k by

$$\text{INL}[k] = \frac{\varepsilon[k]}{Q} \qquad \text{for } 1 \le k \le 2^N - 2 \qquad (7.6)$$
where $\varepsilon[k]$ is the matching error from Equation (7.4) and Q is the ideal code bin
width, both given in volts. Alternatively, INL[k] values can also be given as a
percentage of the full-scale input range. As an example, Figure 7.3 also depicts an INL
of approximately +2/3 LSB in code n. Plots of INL[k] values over k provide useful
information on the converter performance, as the overall shape of the INL[k] curve
enables some initial conclusions on the predominance of even- or odd-order
harmonics [5]. However, the exact values for the INL do depend on the type of gain and
offset correction methodology applied, which should be documented. The absolute
or maximum INL, usually provided as an A/D converter specification, corresponds to
the maximum value of |INL[k]| over the range of k given in Equation (7.6).
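A minimal sketch of Equations (7.5) and (7.6), assuming gain- and offset-corrected transition levels are already available and anchoring the ideal line at T[1]; the 3-bit "measurement" values below are invented.

```python
import numpy as np

def dnl_inl(T, Q):
    """DNL and INL in LSBs from corrected code transition levels
    T[1] .. T[2^N - 1] (Equations (7.5) and (7.6)).  The deviation of
    each transition from its ideal position serves as epsilon[k]."""
    T = np.asarray(T, dtype=float)
    W = np.diff(T)                         # widths of codes 1 .. 2^N - 2
    dnl = (W - Q) / Q
    ideal_T = T[0] + Q * np.arange(len(T)) # ideal line anchored at T[1]
    inl = (T - ideal_T) / Q
    return dnl, inl

# Hypothetical 3-bit measurement: one transition shifted by +0.25 LSB,
# which widens one code bin and narrows its neighbour
Q = 0.125
T = [0.125, 0.25, 0.375, 0.53125, 0.625, 0.75, 0.875]
dnl, inl = dnl_inl(T, Q)
```

As the example shows, a single shifted transition produces a +0.25/−0.25 LSB DNL pair but only a single +0.25 LSB INL point, illustrating why both measures are reported.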
Some additional performance characteristics are defined for A/D converters. An
A/D converter is said to be monotonic if the output is either consistently increasing
or consistently decreasing for a consistently increasing or decreasing input. If the
changes at the input and output are in the same direction, the converter is non-inverting;
when the changes at the input and output are in opposite directions, the converter is
inverting. An A/D converter can also be affected by hysteresis. This is a condition
where the computation of the transfer function yields different results for an increasing
and a decreasing input stimulus, with differences beyond normal measurement
uncertainties. For more details see Reference 3.
[Figure 7.4: Output spectrum of an A/D converter for a sine-wave input, showing the fundamental A1, harmonic components AH2 to AHk and a spurious component ASi]
from a spectrum analyser or a discrete Fourier transform (DFT) [6] through analysis
of the A/D converter response to a spectrally pure sine-wave input of frequency fi. The
original input signal can be identified as the fundamental of amplitude A1. The second
to kth harmonic distortion components, AH2 to AHk, occur at non-aliased frequencies
that are integer multiples of fi. In addition, non-harmonic or spurious components,
such as ASi in Figure 7.4, can be seen at frequencies other than the input signal or
harmonic frequencies. The main dynamic performance parameters given below can
be extracted from the output spectrum in the form of ratios of RMS amplitudes of
particular spectral components, which also relate to signal power ratios. The
calculation of these from the results of a DFT is outlined in Section 7.3.4.1. Note that
the input signal frequency and amplitude, and in some cases the sampling frequency,
have an impact on the actual performance parameter value and have to be provided
with test results and performance specifications.
The SINAD relates the input signal to the noise including harmonics. The SINAD
can be determined from RMS values for the input signal and the total noise (including
harmonics), which also relate to the power, P, carried in the corresponding signal
components. The SINAD is given in decibels as

$$\text{SINAD} = 20\log_{10}\frac{\text{RMS(signal)}}{\text{RMS(total noise)}} = 10\log_{10}\frac{P_{signal}}{P_{total\,noise}} \qquad (7.7)$$
The effective number of bits (ENOB) compares the performance of a real A/D
converter to the ideal case with regard to noise [7]. The ENOB is determined through

$$\text{ENOB} = N - \log_{2}\frac{\text{RMS(total noise)}}{\text{RMS(ideal noise)}} \qquad (7.8)$$

where N is the number of bits of the real converter. In other words, an ideal A/D
converter with a resolution equal to the ENOB determined for a real A/D converter
will have the same RMS noise level for the specified input signal amplitude and
frequency. The ENOB and SINAD performance parameters can be correlated with
each other, as analysed in Reference 3.
THD is a measure of the total output signal power contained in the second to kth
harmonic components, where k is usually in the range from five to ten (depending on
the ratio of the particular harmonic distortion power to the random noise power) [8].
The THD can be determined from RMS values of the input signal and the harmonic
components and is commonly expressed as the ratio of the powers in decibels:

$$\text{THD} = 20\log_{10}\frac{\sqrt{\sum_{i=2}^{k} A_{Hi(rms)}^{2}}}{A_{1(rms)}} = 10\log_{10}\frac{P_{harmonic}}{P_{input}} \qquad (7.9)$$

where $A_{1(rms)}$ is the RMS amplitude of the signal and $A_{Hi(rms)}$ the RMS amplitude of
the ith harmonic. THD is given in decibels and usually with respect to a full-scale
input (dBFS). Where the THD is given in dBc, the unit is decibels with respect to a
carrier signal of specified amplitude.
The spurious-free dynamic range (SFDR) quantifies the available dynamic range
as a ratio of the fundamental amplitude to the amplitude of the largest harmonic or
spurious component and is given in decibels:

$$\text{SFDR} = 20\log_{10}\frac{A_1}{\max\{A_{H(max)}, A_{S(max)}\}} \qquad (7.10)$$

where $A_{H(max)}$ and $A_{S(max)}$ are the amplitudes of the largest harmonic component and
spurious component, respectively.
While the dynamic performance parameters introduced above are essential for an
understanding of A/D converter test methodologies (Section 7.3), an entire range of
further performance parameters is included in the IEEE standard 1241 [3], such as
various SNRs specified for particular bandwidths or for particular noise components.
Furthermore, some performance parameters are defined to assess inter-modulation
distortion in A/D converters with a two-tone or multiple-tone sine-wave input.
This section introduces A/D converter test methodologies for static and dynamic
performance parameter testing. The basic test set-up and other prerequisites are
briefly described in the next section. For further reference, an introduction to
production test of ICs, ranging from test methodologies and design-for-test basics to
aspects relating to automatic test equipment (ATE), can be found in Reference 9.
Details on DSP-based testing of analogue and mixed-signal circuits are provided in
References 10 and 11.
In a conventional A/D converter test set-up, test source and test sink are part of
the external ATE and are centrally controlled. The ATE interfaces with the IC via a
device interface board; functional IC input/output pins and IC-internal interconnects
may be facilitated as a test access mechanism. However, in the majority of cases,
some other means of test access has to be incorporated in the early stages of IC
design due to access restrictions, such as limited pin count or converters being deeply
embedded in a complex SoC. Systematic design methodologies that increase test
access, referred to as design-for-testability (DfT), are standardized at various system
levels. The IEEE standard 1149.1, also known as boundary-scan, supports digital IC
and board level tests [12]. Its extension to analogue and mixed-signal systems, IEEE
standard 1149.4, adds an analogue test bus to increase access to analogue IC pins and
internal nodes [13]. For SoC implementations, an IEEE standard for interfacing to
IC-internal subsystems, so-called embedded cores, and documentation of their test
requirements is expected to be approved in the near future [14].
Figure 7.7 (a) Beat frequency testing and (b) envelope testing
halves of the input waveform phases, as illustrated in Figure 7.7(b). While the latter
sampling schemes allow a quick visualization of the waveform's shape, the sampling
techniques introduced can be employed in the test methodologies described in the
next sections.
mean output code over the corresponding input voltage. The input voltage step size
and the number of output codes averaged for each input voltage step depend on
the ideal code bin width, the level of random noise and the required measurement
accuracy, which can be assessed through computation of the standard deviation of
the output.
In static performance production test, however, the use of continuous signals is
more desirable. The following sections introduce two A/D converter test methodologies
widely in use today which measure code transition levels, namely feedback-loop
testing and histogram testing [3, 10, 11, 15]. From those values, static performance
parameters are determined as described in Section 7.2.1, and can then be compared
to the test thresholds.
[Figure 7.8: Feedback-loop test set-ups for code transition level measurement]
Figure 7.9 Histogram generation: (a) linear and (b) sine wave
For ramp histograms, where the ideal values for H[2] to H[2^N − 2] are equal, code
transition levels can be given as in the first part of Equation (7.12), where C is an
offset component and A a gain factor that is multiplied by the accumulated code
count up to the transition to code k [3]. As the widths of the extreme codes, 0 and 2^N − 1,
cannot be defined, their code counts are usually set to zero (H[0] = H[2^N − 1] = 0).
In these cases, C and A can be determined as shown in Equation (7.12), where the
first code transition level, T[1], is interpreted as the offset component. The gain
factor defines the proportion of the full-scale input range for a single sample in the
histogram, where H_tot is the total number of samples in the entire histogram:

$$T[k] = C + A\sum_{i=0}^{k-1} H[i] = T[1] + \frac{T[2^N - 1] - T[1]}{H_{tot}} \sum_{i=0}^{k-1} H[i] \qquad (7.12)$$
where offset component C and gain factor G correspond to the input sine wave's offset
and amplitude.
For either type of histogram, high stimulus accuracy is essential, as most deviations
(ramp non-linearity, sine-wave distortion) have a direct impact on the test result. For
high-frequency stimuli, tests may also detect dynamic converter failures. The choice of
sine-wave histograms can be advantageous, as stimulus verification and high-frequency
signal generation are easier to achieve [11]. An advantage of ramp histograms is that
generally a lower number of samples is required due to the constant ideal code counts.
The number of code counts is an important test parameter, as it is directly proportional
to test application time and depends on the required test accuracy and confidence level. In
Reference 23, an equation is derived for the number of samples required for random
sampling of a full-scale sine-wave histogram. This number can be reduced through
controlled sampling and overdriving of the converter; a relationship is derived in
Reference 4.
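A sketch of Equation (7.12) for a ramp histogram, using an idealized 3-bit histogram with equal code counts; the function and variable names are hypothetical.

```python
import numpy as np

def transitions_from_ramp_histogram(H, T1, T_last):
    """Equation (7.12): estimate code transition levels T[1] .. T[2^N - 1]
    from a linear-ramp histogram H[0 .. 2^N - 1] with H[0] = H[2^N-1] = 0.
    T1 and T_last anchor the offset (C) and gain (A) terms."""
    H = np.asarray(H, dtype=float)
    A = (T_last - T1) / H.sum()        # input range covered per sample
    csum = np.cumsum(H)                # csum[k-1] = sum of H[0] .. H[k-1]
    return T1 + A * csum[:-1]          # T[k] for k = 1 .. 2^N - 1

# Idealized 3-bit ramp histogram: equal counts in codes 1..6, zero at the ends
H = [0, 100, 100, 100, 100, 100, 100, 0]
T = transitions_from_ramp_histogram(H, T1=0.125, T_last=0.875)
```

For this fault-free example the recovered transitions are equidistant; a widened code bin would collect extra counts and push all subsequent transitions upward, which is how DNL shows up in the histogram.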
A shortcoming of histogram testing in general is the loss of information associated
with accumulating only the code counts and not their order of occurrence. Imagine
a situation where code bins were swapped, leading to a non-monotonic transfer
function: there would be no effect on a ramp histogram, and detection with a sine-wave
histogram depends on the code locations. A more realistic converter failure escaping
histogram testing is the occurrence of code sparkles. Usually this is a dynamic effect
in which an output code of unexpected difference to its neighbours occurs. However,
such effects can become detectable via accumulation of each code's indices (locations)
in a so-called weight array, which can be computed in addition to the histogram
accumulated in a tally array [10].
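The tally/weight idea can be sketched as follows: for an ordered (e.g. ramp) stimulus, the mean sample index of each code should increase monotonically with the code, so a sparkle or swapped bin disturbs the ordering even though the plain histogram is unchanged. The names and toy code sequence are invented.

```python
import numpy as np

def tally_and_weight(codes, n_codes):
    """Accumulate a histogram (tally array) plus the sum of the sample
    indices observed for each code (weight array)."""
    tally = np.zeros(n_codes, dtype=int)
    weight = np.zeros(n_codes, dtype=float)
    for idx, c in enumerate(codes):
        tally[c] += 1
        weight[c] += idx
    return tally, weight

# Clean ramp response: each code occupies a contiguous run of samples
codes = [0, 0, 1, 1, 2, 2, 3, 3]
tally, weight = tally_and_weight(codes, 4)
mean_idx = weight / np.maximum(tally, 1)   # mean position of each code
```

Checking that `mean_idx` is monotonic adds an ordering test on top of the pure count-based histogram analysis.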
[Figure 7.10: FFT plot of an A/D converter output spectrum (amplitude in dB), showing the fundamental A1, harmonic components AH2 to AH8, a spurious component ASi and the noise floor]
domain being a power of two, the discrete Fourier transformation can be computed
more efficiently through FFT algorithms. If coherent sampling of all signal components
cannot be guaranteed, a periodic repetition of the sampled waveform section
can lead to discontinuities at either end of the sampling interval, causing spectral
leakage. In such cases, windowing has to be applied, a processing step in which
the sampled waveform section is mathematically manipulated to converge to zero
amplitude towards the interval boundaries, effectively removing discontinuities [24].
In either case, the A/D converter output signal is decomposed into its individual
frequency components for performance analysis. The frequency range covered by
the spectrum analysis depends on the rate of A/D converter output code sampling,
fs. The number of discrete frequency points, also referred to as frequency bins, is
determined by the number of samples, N, processed in the FFT. While accounting
for aliasing, signal and sampling frequencies have to be chosen to allow sufficient
spacing between the harmonics and the fundamental component. The graphical
presentation of the spectrum obtained from the analysis, frequently referred to as an FFT
plot, illustrates the particular signal component amplitudes with frequency on the
x-axis (Figure 7.10). The number of frequency bins is equal to N/2 and their widths
are equal to fs/N.
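One common way to guarantee coherent sampling (a standard recipe, not an equation from the text) is to place the input frequency exactly on a frequency bin, with the number of captured cycles coprime to the record length so that every sample lands on a distinct phase of the sine wave:

```python
from math import gcd

def coherent_input_freq(fs, n_samples, target_f):
    """Pick an input frequency f_i = J * fs / N that lies exactly on a
    frequency bin, with the cycle count J coprime to the record length N;
    no windowing is then needed.  Names and values are illustrative."""
    J = max(1, round(target_f * n_samples / fs))
    while gcd(J, n_samples) != 1:   # for N a power of two this makes J odd
        J += 1
    return J * fs / n_samples, J

# Target ~10 kHz input with fs = 1 MHz and a 4096-point FFT
fi, J = coherent_input_freq(fs=1e6, n_samples=4096, target_f=10e3)
# fi snaps onto bin J; the bin width is fs/N = 244.14 Hz
```

If the input cannot be locked to the sampling clock in this way, the windowing step described above becomes necessary.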
The following spectrum features can be identified in Figure 7.10: first, the fundamental
component, A1, corresponding to the input signal; second, the harmonic
distortion components, AH2 to AH8; third, large spurious components such as ASi;
and finally the remaining noise floor representing quantization and random noise.
Dynamic performance parameters, such as SINAD, THD and SFDR, can be calculated
from the particular signal components' real amplitudes (not in decibels) or the
power contained in them, as given in Section 7.2.2 and described in Reference 25,
including multi-tone testing.
Built-in self-test (BIST) for analogue and mixed-signal components has been
identified as one of the major requirements for future economic deep sub-micron IC
test [27, 28]. The main advantage of BIST is to reduce test access requirements.
At the same time, the growing performance gap between the circuit under test and
the external tester is addressed by the migration of tester functions onto the chip. In
addition, parasitics induced by the external tester and the demands on the tester
can be reduced. Finally, analogue BIST is expected to eventually enable the use of
[Figure 7.11: Histogram-based BIST for an ADC: the converter output is compressed and encoded into a signature under a test clock]
cheaper, digital-only or so-called DfT testers that will help with the integration of
analogue virtual components including BIST for digital SoC applications. Here
the aim is to enable the SoC integrator to avoid the use of expensive mixed-signal
test equipment. Also, for multi-chip modules, on-chip test support hardware helps
to migrate the test of analogue circuitry to the wafer level. It is expected that the
reuse of BIST structures will significantly reduce escalating test generation costs, test
time and time-to-market for a range of devices. Full BIST has to include circuitry to
implement both TSG and ORA. This section briefly summarizes BIST solutions that
have been proposed for performance parameter testing of A/D converters, some of
which have been commercialized.
Most BIST approaches for A/D converter testing aim to implement one of the
converter testing techniques described in Section 7.3. In Reference 29 it is proposed
to accumulate a converter histogram in an on-chip RAM while the test stimulus is
generated externally. The accumulated code counts can be compared against test
thresholds on chip to test for DNL; further test analysis has to be performed off chip.
This test solution can be extended towards a full BIST by including an on-chip triangular
waveform generator [30]. In a similar approach, the histogram-based analogue
BIST (HABIST), additional memory and ORA circuitry can be integrated to store
a reference histogram on chip for more complete static performance parameter testing
of A/D converters [31]. This commercialized approach [32] also allows the use
of the tested A/D converter (ADC) with the BIST circuitry to apply histogram-based
testing to other analogue blocks included in the same IC. As illustrated in Figure 7.11,
the on-chip integration of a sine-wave or sawtooth TSG is optional. The histogram
is accumulated in a RAM where the converter output provides the address and a
read-modify-write cycle updates the corresponding code count. The response analysis
is performed after test data accumulation and subtraction of a golden reference
histogram. As for the TSG, on-chip implementation of the full ORA is optional.
Also the feedback-loop test methodology has been considered for a straightforward
BIST implementation [33]. The oscillating input signal is generated through
the charging or discharging of a capacitor with a positive or a negative reference
current I, generated on chip (Figure 7.12). Testing for DNL and INL is based on the
measurement of the oscillation frequency on the switch control line (ctrl) similar to
feedback-loop testing (Section 7.3.3.1).
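A behavioural sketch of this oscillation loop (all numeric values are illustrative, not taken from Reference 33): the capacitor voltage is charged with +I while the ADC output is below the code under test and discharged with −I above it, so it oscillates around that code transition; the ctrl toggle count gives the oscillation frequency and the mean voltage locates the transition.

```python
import numpy as np

bits, target_code = 8, 37
lsb = 2.0 / 2**bits
slope = 0.02 * lsb                       # voltage step per clock, i.e. I/(C*fclk)
v, history, toggles, prev = 0.0, [], 0, None
for _ in range(5_000):
    code = round(v / lsb)                # idealized ADC under test
    ctrl = code < target_code            # switch control line
    v += slope if ctrl else -slope       # charge or discharge the capacitor
    if prev is not None and ctrl != prev:
        toggles += 1
    prev = ctrl
    history.append(v)
est = np.mean(history[-1000:]) / lsb     # steady-state mean locates the transition
print(f"transition located near {est:.2f} LSB after {toggles} ctrl toggles")
```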
[Figure 7.12: oscillation BIST set-up. The capacitor C at the ADC input is charged towards VDD or discharged towards VSS by the reference current I through a switch; the BIST block drives the switch control line (ctrl) from the ADC output.]
[Figure 7.13: polynomial-fitting BIST. A lowpass filter driven from the test clock applies steps S0 to S3 to the ADC input over time, and the response is fitted to y = b0 + b1x + b2x^2 + b3x^3.]
filter in turn to generate a rising and a falling step for each quarter of the
converter's input range. The integration process is conducted during the rising/falling
edges of the exponential (approximately 17 per cent of the step width, as illustrated
by the shaded regions in Figure 7.13(c)) to achieve a relatively linear output code
distribution.
While the advantages of analogue and mixed-signal BIST solutions are clear,
drawbacks due to limited test sets, excessive area overhead or low confidence
in test results have hampered wide industrial use. The BIST techniques summarized
above are mostly limited to particular ADC architectures. Histogram-based BIST,
for example, may result in excessive area overhead for high-resolution converters.
The polynomial-fitting algorithm BIST scheme is aimed at high-resolution converter
testing, but relies on the assumption that a third-order polynomial accurately fits the
test response.
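The assumption can be illustrated numerically: if the converter's response really is a third-order polynomial y = b0 + b1x + b2x^2 + b3x^3 plus noise, an ordinary least-squares fit recovers offset (b0), gain (b1) and the curvature terms that determine a smooth INL estimate. All coefficients and noise levels below are invented for the sketch:

```python
import numpy as np

x = np.linspace(-1, 1, 2001)                         # normalized input sweep
true_y = 0.01 + 0.98 * x + 0.004 * x**2 - 0.03 * x**3
rng = np.random.default_rng(0)
y = true_y + rng.normal(0.0, 1e-3, x.size)           # measured response plus noise

b3, b2, b1, b0 = np.polyfit(x, y, 3)                 # highest power first
inl_est = b2 * x**2 + b3 * x**3                      # deviation from offset+gain line
print(f"offset={b0:+.4f}  gain={b1:.4f}  peak curvature={np.abs(inl_est).max():.4f}")
```

If the real response contains higher-order kinks, for example a localized DNL fault, the fit silently averages them away, which is exactly the limitation noted above.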
More work may be required to identify converter performance parameters crucial
for testing. Test requirements and realistic failure modes will depend on particular
converter architectures. An example study can be found in Reference 38.
This chapter discussed the key parameters and specifications normally targeted in
ADC testing, methods for extracting these performance parameters and potential
solutions for either implementing full self-test or migrating test resources from external
test equipment to the device under test. Table 7.1 provides a summary of the
advantages and limitations of five of the main test methods used in A/D converter
testing.
The field now faces major new challenges, as the demand for higher-resolution
devices becomes the norm. The concept of design reuse in the form of integrat-
ing third-party designs is also having a major impact on the test requirements, as
in many cases system integrators wishing to utilize high-performance converter
functions will not normally have the engineering or production test equipment
required to test these devices. The concept of being able to supply an ADC with
an embedded test solution that requires only digital external test equipment is hence a
major goal.
In the case of on-chip test solutions, proposed or available commercially, limitations
need to be understood before investing design effort. Histogram testing, for
example, will require a large amount of data to be stored and evaluated on chip while
requiring long test times. For servo-loop-based solutions, the oscillation around a single
transition level may be difficult to achieve under realistic noise levels. Sine-wave
fitting will require some significant area overhead for the on-chip computation, as do
FFT-based solutions, and may still not achieve satisfactory measurement accuracy and
resolution. Further work is therefore required to quantify test times, associated cost
and measurement accuracies and to generate test quality metrics.
7.6 References
4 Blair, J.: Histogram measurement of ADC nonlinearities using sine waves, IEEE
Transactions on Instrumentation and Measurement, 1994;43 (3):373–83
5 Maxim Integrated Products, Maxim Application Note A177: INL/DNL Measurements
for High-Speed Analog-to-Digital Converters (ADCs), September 2000
6 Oppenheim, A.V., Schafer, R.W.: Discrete-Time Signal Processing (Prentice Hall,
Englewood Cliffs, NJ, 1989)
7 Linnenbrink, T.: Effective bits: is that all there is?, IEEE Transactions on
Instrumentation and Measurement, 1984;33 (3):184–7
8 Hofner, T.C.: Dynamic ADC testing part I. Defining and testing dynamic ADC
parameters, Microwaves and RF, 2000;39 (11):75–84
9 Grochowski, A., Bhattacharya, D., Viswanathan, T.R., Laker, K.: Integrated circuit
testing, IEEE Transactions on Circuits and Systems II: Analog and Digital
Signal Processing, 1997;44 (8):610–33
10 Mahoney, M.: DSP-based Testing of Analog and Mixed-Signal Circuits (IEEE
Computer Society, Washington, DC, 1987)
11 Burns, M., Roberts, G.W.: An Introduction to Mixed-Signal IC Test and
Measurement (Oxford University Press, New York, 2001)
12 IEEE Standard 1149.1-2001: IEEE Standard Test Access Port and Boundary-Scan
Architecture 2001 (Institute of Electrical and Electronics Engineers, New York,
2001)
13 IEEE Standard 1149.4-1999: IEEE Standard for a Mixed-Signal Test Bus (Institute
of Electrical and Electronics Engineers, New York, 1999)
14 IEEE P1500 Working Group on a Standard for Embedded Core Test (SECT).
Available from http://grouper.ieee.org/groups/1500/2003 [Accessed Jan 2008]
15 Max, S.: Testing high speed high accuracy analog to digital converters embedded
in systems on a chip, Proceedings of IEEE International Test Conference,
Atlantic City, NJ, USA, 28–30 September 1999, pp. 763–71
16 Corcoran, J.J., Hornak, T., Skov, P.B.: A high resolution error plotter for analog-to-digital
converters, IEEE Transactions on Instrumentation and Measurement,
1975;24 (4):370–4
17 IEEE Standard 1057-1994 (R2001): IEEE Standard for Digitizing Waveform
Recorders (Institute of Electrical and Electronics Engineers, New York, 2001)
18 Max, S.: Optimum measurement of ADC transitions using a feedback loop,
Proceedings of 16th IEEE Instrumentation and Measurement Technology Conference,
Venice, Italy, 24–26 May 1999, pp. 1415–20
19 Souders, T.M., Flach, R.D.: An NBS calibration service for A/D and D/A converters,
Proceedings of IEEE International Test Conference, Philadelphia, PA,
USA, 27–29 October 1981, pp. 290–303
20 Pretzl, G.: Dynamic testing of high speed a/d converters, IEEE Journal of Solid
State Circuits, 1978;13:368–71
21 Downing, O.J., Johnson, P.T.: A method for assessment of the performance of
high speed analog/digital converters, Electronics Letters, 1978;14 (8):238–40
22 van den Bossche, M., Schoukens, J., Renneboog, J.: Dynamic testing and diagnosis
of A/D converters, IEEE Transactions on Circuits and Systems, 1986;33
(8):775–85
23 Doernberg, J., Lee, H.S., Hodges, D.A.: Full-speed testing of A/D converters,
IEEE Journal of Solid State Circuits, 1984;19 (6):820–7
24 Harris, F.J.: On the use of windows for harmonic analysis with the discrete
Fourier transform, Proceedings of the IEEE, 1978;66 (1):51–83
25 Hofner, T.C.: Dynamic ADC testing part 2. Measuring and evaluating dynamic
line parameters, Microwaves and RF, 2000;39 (13):78–94
26 Peetz, B.E.: Dynamic testing of waveform recorders, IEEE Transactions on
Instrumentation and Measurement, 1983;32 (1):12–17
27 Semiconductor Industry Association: International Technology Roadmap for
Semiconductors, 2001 edn. Available from http://www.sia-online.org [Accessed
Jan 2008]
28 Sunter, S.: Mini tutorial: mixed signal test. Presented at 7th IEEE International
Mixed-Signal Testing Workshop, Atlanta, GA, USA, 13–15 June 2001
29 Bobba, R., Stevens, B.: Fast embedded A/D converter testing using the
microcontroller's resources, Proceedings of IEEE International Test Conference,
Washington, DC, USA, 10–14 September 1990, pp. 598–604
30 Raczkowycz, J., Allott, S.: Embedded ADC characterization techniques using a
BIST structure, an ADC model and histogram data, Microelectronics Journal,
1996;27 (6):539–49
31 Frisch, A., Almy, T.: HABIST: histogram-based analog built in self test,
Proceedings of IEEE International Test Conference, Washington, DC, USA, 3–5
November 1997, pp. 760–7
32 Fluence Technology Incorporated: BISTMaxx™ product catalog, 2000. Available
from http://www.fluence.com
33 Arabi, K., Kaminska, B.: Oscillation built-in self test (OBIST) scheme for functional
and structural testing of analog and mixed-signal circuits, Proceedings
of IEEE International Test Conference, Washington, DC, USA, 3–5 November
1997, pp. 786–95
34 Toner, M.F., Roberts, G.W.: A BIST scheme for a SNR, gain tracking, and frequency
response test of a sigma-delta ADC, IEEE Transactions on Circuits and
Systems II: Analog and Digital Signal Processing, 1995;42 (1):1–15
35 Teraoka, E., Kengaku, T., Yasui, I., Ishikawa, K., Matsuo, T., Wakada, H.,
Sakashita, N., Shimazu, Y., Tokada, T.: A built-in self-test for ADC and DAC in a
single chip speech CODEC, Proceedings of IEEE International Test Conference,
Baltimore, MD, USA, 17–21 October 1993, pp. 791–6
36 Sunter, S.K., Nagi, N.: A simplified polynomial-fitting algorithm for DAC and
ADC BIST, Proceedings of IEEE International Test Conference, Washington,
DC, USA, 3–5 November 1997, pp. 389–95
37 Roy, A., Sunter, S., Fudoli, A., Appello, D.: High accuracy stimulus generation
for A/D converter BIST, Proceedings of IEEE International Test Conference,
Baltimore, MD, USA, 8–10 October 2002, pp. 1031–9
38 Lechner, A., Richardson, A., Hermes, B.: Short circuit faults in state-of-the-art
ADCs: are they hard or soft?, Proceedings of 10th Asian Test Conference, Kyoto,
Japan, 19–21 November 2001, pp. 417–22
Chapter 8
Test of ΣΔ converters
Gildas Leger and Adoración Rueda
8.1 Introduction
Back in the 1960s, Cutler introduced the concept of ΣΔ modulation [1]. Some years
later, Inose et al. applied this concept to analogue-to-digital converters (ADCs) [2].
Sigma-delta (ΣΔ) converters attracted little interest at that time because they required
extensive digital processing. However, with newer processes and their ever-decreasing
feature size, what was first considered to be a drawback is now a powerful advantage:
a significant part of the conversion is realized by digital filters, allowing for a
reduced number of analogue parts, built of simple blocks. Nevertheless, the simplicity
of the hardware has been traded off against behavioural complexity. ΣΔ modulators
are very difficult to study and raise a number of behavioural peculiarities (limit
cycles, chaotic behaviour, etc.) that represent an exciting challenge to the ingenuity
of researchers and are also an important concern for industry.
Owing to this inherent complexity, it is quite difficult to relate defects and, in
general, non-idealities to performance degradation. In linear time-invariant circuits, it
is usually possible to extract the impact of a defect on performance by considering
that the defect acts as a perturbation of the nominal situation. This operation is known
as defect-to-fault mapping. For instance, in a flash ADC, a defect in a comparator can
be modelled as a stuck-at fault or an unwanted offset that can be directly related to
the differential non-linearity. However, in the case of ΣΔ converters, a given defect
can manifest itself only under given circumstances. For instance, it is known that
the appearance of limit cycles is of great concern, particularly in audio applications.
Indeed, the human ear can perceive these pseudo-periodic effects as deep as 20 dB
below the noise floor. A defect in an amplifier can affect its d.c. gain and cause
unwanted integrator leakage and limit cycles. Such a defect can be quite difficult to
detect with a simple functional test. Performing a good and accurate test of a ΣΔ
modulator is, thus, far from straightforward. A stand-alone ΣΔ modulator in its own
package can be tested with success in a laboratory with appropriate bench equipment.
It can also be tested in a production environment with much more limited resources,
but the cost related to such tests is becoming prohibitive as the resolution of the
modulators increases. In the context of a system on chip (SoC), the test problems
multiply and the test of an embedded ΣΔ modulator can very well become impossible.
Research has thus been done, and is still necessary, to facilitate ΣΔ modulator testing
and to decrease the overall product cost.
This chapter will address the issue of ΣΔ converter tests in a general manner,
keeping in mind the particular needs of SoCs. A brief description of the basic principles
of ΣΔ modulation is first included, mainly as a guide for those readers with
little or no knowledge of ΣΔ modulators and the peculiarities that influence their testing.
The following section deals with characterization, which is closely related to
test. It intends to give valuable and specific information for the case of ΣΔ modulators.
For more general information on ADC testing, the reader should refer to
Chapter 7. Section 8.4 addresses more specifically the test of ΣΔ modulators in
an SoC context, explaining the particular issues and reviewing some solutions proposed
in the literature. Finally, Section 8.5 gives some details about a very promising
approach that has the potential to overcome most of the issues described in the
previous sections.
[Figure 8.1: ΣΔ modulation turns the analogue input into a digital output bit-stream (e.g. 1011101001010001 ...); the spectra from 0 to fs/2 show the quantization noise shaped away from the baseband.]
[Figure 8.3: the quantizer embedded in a discrete-time feedback loop with a delay z⁻¹ and a controller.]
system. Let us imagine that only a low-resolution converter is available: if the input
signal is directly fed into that converter, the output will be a coarse approximation of
the input. However, the input signal is slow with respect to the maximum sampling
frequency of the low-resolution converter. In control theory, the simplest approach
to improve the behaviour of a system is to use a proportional controller: a quantity
proportional to the quantization error is subtracted from the input signal. This is
depicted in Figure 8.3. When considering a discrete-time situation, a delay has to
be introduced into the feedback loop. If the quantity subtracted from the input is
the entire quantization error, an architecture known as error-predicting is obtained.
Modelling the low-resolution converter as an additive noise source [3], the transfer
function of the device can be resolved in the z-domain.
[Figure 8.4: generic ΣΔ modulator. The input X and the output Y are combined in a loop filter with branches L0 = G/H and L1 = (H − 1)/H; the filter output U is quantized by a coarse ADC, and a DAC closes the feedback loop.]
The output is equal to the input signal plus a quantization noise that is shaped at
high frequencies by the function (1 − z⁻¹). In control theory, the system performance
can be improved by taking an adequate controller (proportional, integral, differential
or any combination of them). In the same way, ΣΔ modulation can be presented in a
generic way as in Figure 8.4: the input signal and the modulator output are combined
in a loop filter whose output is quantized by a coarse converter. The characteristics
of the loop filter further define the architecture of the modulator and the order of the
noise shaping.
The modulator state equation can be written as

U(z) = L0(z)X(z) + L1(z)Y(z)    (8.1)

and the input–output equation as

Y(z) = G(z)X(z) + H(z)(Y(z) − U(z))    (8.2)

The function G(z) is known as the signal-transfer function. Similarly, H(z) is the
noise-transfer function (NTF). The term in parentheses represents the quantization
noise.
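With the loop-filter branches of Figure 8.4 (L0 = G/H and L1 = (H − 1)/H) and writing E(z) = Y(z) − U(z) for the quantization error, the two equations combine as follows:

```latex
\begin{align*}
Y(z) &= U(z) + E(z) = L_0(z)X(z) + L_1(z)Y(z) + E(z)\\
\bigl(1 - L_1(z)\bigr)\,Y(z) &= L_0(z)X(z) + E(z)\\
Y(z) &= \frac{L_0(z)}{1 - L_1(z)}\,X(z) + \frac{E(z)}{1 - L_1(z)}
      = G(z)\,X(z) + H(z)\,E(z),
\end{align*}
% since 1 - L_1 = 1/H and L_0 H = G.
```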
[Figure 8.5: decimation chain. The bit-stream is filtered by (1 − z⁻N)/(1 − z⁻¹) and then downsampled from fs to fd = fs/N.]
where fc is the cut-off frequency of the filter and fs the sampling frequency of the
modulator. It is easy to show that the number N defined above for the decimation
operation is actually equal to the OSR.
Both the filtering and decimation operations have to be carried out with care. The
decimation cannot be performed directly on the bit-stream, as the high-frequency
noise would alias into the baseband. On the other hand, performing the whole decimation
at the filter output may not be optimum in terms of power efficiency. Indeed, it
would force the entire filter to run at the maximum frequency. It may be more convenient
to split the filter into several stages with decimators at intermediate frequencies.
Hence, finding an adequate decimation and filtering strategy for a given ΣΔ modulator
is an interesting optimization problem. One widely used structure, however, is that
presented in Figure 8.6.
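A minimal numeric sketch of the filter-then-decimate rule (a first-order modulator, the accumulate-and-dump filter (1 − z⁻N)/(1 − z⁻¹) realized as an N-sample moving average, and N = 64 are all illustrative choices):

```python
import numpy as np

N = 64                                    # decimation factor, equal to the OSR
x_dc, integ, bits = 0.3, 0.0, []
for _ in range(64 * N):                   # first-order modulator, d.c. input 0.3
    y = 1.0 if integ >= 0 else -1.0       # 1-bit quantizer
    integ += x_dc - y                     # delaying integrator with feedback
    bits.append(y)

avg = np.convolve(bits, np.ones(N) / N, mode="valid")   # moving-average filter
decimated = avg[::N]                      # keep every N-th output: fd = fs/N
print(f"{len(bits)} bits -> {decimated.size} words, mean {decimated.mean():.3f}")
```

Decimating the raw bit-stream directly would fold the shaped high-frequency noise into the baseband; filtering first is what makes the decimated mean settle at the 0.3 input.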
[Figure: z-domain model of a first-order ΣΔ modulator with input X, output Y, integrator z⁻¹/(1 − z⁻¹) and additive quantization error E.]
The modulator output is thus equal to the delayed modulator input plus the quantizer
error shaped by the function (1 − z⁻¹). Considering the quantizer error as a white
noise that respects Bennett's conditions [3] and assuming a large OSR (that is, a
baseband frequency much lower than the sampling frequency), the quantization
noise power in the modulator baseband can be calculated as

PQ = (π²/3) · (Δ²/12) · 1/OSR³    (8.5)

where Δ is the quantization step.
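A quick numeric reading of this expression (taking Δ = 2 for a two-level ±1 quantizer): each doubling of the OSR lowers the in-band quantization noise of a first-order modulator by about 9 dB, i.e. gains roughly 1.5 bits.

```python
import math

Delta = 2.0                                   # quantization step of a +/-1 quantizer
def pq(osr: int) -> float:
    """In-band quantization noise power of a first-order modulator, eq. (8.5)."""
    return (math.pi**2 / 3) * (Delta**2 / 12) / osr**3

for osr in (32, 64, 128):
    print(f"OSR={osr:3d}  PQ={10 * math.log10(pq(osr)):6.1f} dB")
octave_gain = 10 * math.log10(pq(32) / pq(64))    # expect about 9.03 dB per octave
```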
[Figure 8.8: z-domain diagram of a general second-order ΣΔ modulator. Two delaying integrators z⁻¹/(1 − z⁻¹) with input branch coefficients a1, a2 and feedback branch coefficients b1, b2; the quantizer is modelled as an effective gain k plus additive error E.]
Such a noise shaping provides one more bit per octave of OSR with respect to the
first-order modulator. Figure 8.8 shows a z-domain diagram of a general second-order
modulator.
In order to properly choose the branch coefficients (ai, bi), a z-domain analysis has
to be performed. For that purpose, Ardalan and Paulos [7] showed that the quantizer
has to be considered as having an effective gain k. They demonstrated that such a gain
is actually signal dependent, but its d.c. value adjusts to the value that makes the loop
gain one. For instance, Boser and Wooley [8] proposed to use a1 = b1 = 0.5
and a2 = b2 = 0.5. This solution has two main advantages. It allows the use
of single-branch integrators with a 0.5 gain. Another advantage is that the voltage
level density at the output of the two integrators is similar, which allows using the
same amplifier for both, without adjusting its output range. For such a modulator, the
effective gain settles on average to k = 4.
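The behaviour of such a modulator is easy to reproduce with a short simulation. The fragment below is an illustrative behavioural model (ideal blocks, a 1-bit quantizer and the 0.5 coefficients above; the tone frequency, amplitude and OSR = 64 are arbitrary choices) that checks the in-band SNR of the shaped bit-stream:

```python
import numpy as np

N, OSR, sig_bin = 8192, 64, 7
x = 0.5 * np.sin(2 * np.pi * sig_bin * np.arange(N) / N)   # coherent test tone
i1 = i2 = 0.0
y = np.empty(N)
for k in range(N):
    v = 1.0 if i2 >= 0 else -1.0          # 1-bit quantizer
    y[k] = v
    i2 += 0.5 * i1 - 0.5 * v              # second delaying integrator (uses old i1)
    i1 += 0.5 * (x[k] - v)                # first delaying integrator

P = np.abs(np.fft.rfft(y * np.hanning(N)))**2
sig = P[sig_bin - 1:sig_bin + 2].sum()    # tone power (Hann spreads over 3 bins)
noise = P[1:N // (2 * OSR)].sum() - sig   # remaining in-band power
snr = 10 * np.log10(sig / noise)
print(f"in-band SNR = {snr:.1f} dB")
```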
[Figure 8.9: single-loop high-order modulator with feedforward branches A0 to AL and feedback branches B1 to BL.]
[Figure 8.10: cascaded ΣΔ modulator. The stage outputs Y1 to YN are combined in a digital reconstruction filter to produce the overall output Y.]
The second technique consists in cascading several low-order stages [12], as shown
in Figure 8.10. The quantization error of stage i is digitized by stage i + 1, somewhat
similar to pipeline converters, where the residue of one stage's conversion (i.e., the
quantization error) is the input of the next stage. A proper reconstruction filter has to
be designed that combines the output bit-streams of all the stages. As a result, all but
the last stage's quantization errors are cancelled. Such structures benefit from a greater
simplicity than single-loop modulators and their design flow is better controlled. A
drawback is that noise cancellation within the reconstruction filter depends on the
characteristics of the different stages (integrator gain, amplifier d.c. gain and branch
coefficients). In other words, the digital reconstruction filter has to match the analogue
characteristics of the stages. These requirements put more stress on the design of the
analogue blocks, as the overall modulator is more sensitive to integrator leakage and
branch mismatches than single-loop modulators.
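A toy 1-1 cascade shows the principle (an illustrative behavioural model with ideal first-order stages; the input amplitude, OSR = 32 and the stage-2 polarity are choices for the sketch). Stage 2 converts the quantization error e1 of stage 1 with opposite sign, and the reconstruction y = z⁻¹y1 + (1 − z⁻¹)y2 cancels e1, leaving only second-order-shaped noise from stage 2:

```python
import numpy as np

N = 4096
x = 0.4 * np.sin(2 * np.pi * 5 * np.arange(N) / N)
s1 = s2 = 0.0
y1, y2 = np.empty(N), np.empty(N)
for k in range(N):
    q1 = 1.0 if s1 >= 0 else -1.0
    e1 = q1 - s1                          # stage-1 quantization error (|e1| <= 1)
    q2 = 1.0 if s2 >= 0 else -1.0
    y1[k], y2[k] = q1, q2
    s1 += x[k] - q1                       # first-order stage 1
    s2 += -e1 - q2                        # first-order stage 2 digitizes -e1
y = np.roll(y1, 1) + y2 - np.roll(y2, 1)  # z^-1*y1 + (1 - z^-1)*y2

def inband_noise(s, osr=32, sig_bins=slice(4, 7)):
    P = np.abs(np.fft.rfft(s * np.hanning(N)))**2
    return P[1:N // (2 * osr)].sum() - P[sig_bins].sum()

gain_db = 10 * np.log10(inband_noise(y1) / inband_noise(y))
print(f"in-band noise drops by {gain_db:.1f} dB versus stage 1 alone")
```

If the analogue integrator gains drift, e1 no longer matches what the digital filter expects and the cancellation degrades, which is the matching sensitivity discussed above.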
ΣΔ converters are ADCs. For this reason, their performance can be described
by standard metrics, defined for any ADC. Similarly, there exist standard techniques
to measure these standard metrics. All this information about ADC testing is actually
gathered in the IEEE 1241-2000 standard [13]. Characterization of state-of-the-art
ΣΔ converters is challenging in itself from a metrological viewpoint. Some ΣΔ
converters claim a precision of up to 24 bits. For such levels of accuracy, no
detail of the test set-up can be overlooked. However, these concerns are not specific
to ΣΔ converters but are simply a consequence of their overwhelming capability
to reach high resolutions. What is intended in this section is to contemplate ADC
characterization from the viewpoint of ΣΔ modulation. For more general information,
the reader can refer to Chapter 7.
The performance specifications of ADCs are usually divided into two categories:
static and dynamic. These two words tend to reflect the field of application
of the converter. As the first ΣΔ modulators were of low order, they required a
high OSR to reach a good precision. Their baseband was thus limited to low
frequencies. Then, evolutions in the modulator architecture allowed reducing the OSR
while maintaining or even increasing the resolution. For this reason, the market for
ΣΔ converters has evolved from the low-frequency end of the spectrum to the highest
one. In the lowest range of frequency, ΣΔ modulators are used for instrumentation
(medical, seismology, d.c. meters, and so on). At low frequencies, state-of-the-art
ΣΔ converters claim a resolution of up to 24 bits. In that case, the most important
ADC performance parameters seem to be the static ones: gain, offset, linearity.
However, the noise figures are also of great interest for those metering applications
that require the detection of very small signals. The most important market for ΣΔ
converters can be found in the audio range. Indeed, most CODECs use ΣΔ modulators.
In that case, dynamic specifications are of interest. Moving forward in the
frequency spectrum, ΣΔ modulators can be found in communication and video
applications. The first target was ADSL and ADSL+, but now ΣΔ converters can be
found that are designed for GSM, CDMA and AMPS receivers.
not deviate from the simulations. On the other hand, the ΣΔ modulator is made
of analogue blocks whose performance is sensitive to small process variations and
unexpected drifts. As a result, much of the characterization burden concentrates on
the modulator.
In most Nyquist-rate ADCs, the conversion is performed on a sample-to-sample
basis. The input signal is sampled at a given instant and that sample is in some way
compared to a voltage reference. The digital output code determines to what fraction
of the full scale the input sample corresponds. In flash converters, the input sample
is compared to a bank of references evenly distributed over the full-scale range. In
dual-slope converters, the time necessary to discharge a capacitor previously charged
at the value of the input sample is measured by a master clock. There exist a variety
of solutions to derive the digital output code, but in all cases a given output code
can be associated to a given input sample. For ΣΔ converters, however, that is not
the case. Indeed, the output of a ΣΔ converter is provided at a low rate, but the
input is sampled at a high rate. How can a digital output code be associated with a
given input sample? This absence of direct correspondence between a given input
sample and a given output code is even more significant considering a stand-alone
ΣΔ modulator. The adaptive loop of the ΣΔ modulator continuously processes the
input signal, and the modulator output at a given instant depends not only on the input
sample at that instant but also on its internal state. The internal state depends on the
whole history of the conversion. Actually, if the same input signal is sent twice to a
ΣΔ modulator in identical operating conditions, the two output bit-streams obtained
will be different. The low-frequency components may be identical and the output
of the decimation filter may be identical, but the actual modulator output would be
different.
The simplicity of ΣΔ modulators largely relies on the use of low-resolution quantizers
(even only 1 bit). The drawback is that the quantization error remains strongly
correlated to the quantizer input signal. The study of that correlation is overwhelmingly
complicated by the modulator feedback loop. The non-linear dynamics of the
modulators induce some effects that require particular attention. For instance, the
response of a first-order modulator to some d.c. input levels can be a periodic sequence.
Moreover, it has been shown that integrator leakage stabilizes those sequences, known
as limit cycles, over a non-null range of d.c. inputs that thus form a dead zone [14].
The shape of the d.c. transfer function of such a modulator is known as the devil's
staircase [15]. Similarly, pseudo-periodic behaviours can be seen in the time domain
but do not appear in the frequency domain.
The characterization of ΣΔ converters thus requires some knowledge of the ADC
internal structure. In particular, it may be interesting to characterize the modulator
without the decimation filter. Otherwise, one may be led to erroneous interpretations
of the characterization results, in particular for static parameters.
The ideal transfer function should be a perfect staircase. The static performance
metrics describe the deviations of the actual converter transfer function from the
perfect staircase. The first performance specifications may be those that affect only
absolute measurements, such as gain and offset errors. Apart from its staircase-like
appearance, which is due to the quantization process, the transfer function of an ADC
is expected to be a straight line of gain one with no offset. However, that objective is
hardly achieved in reality, and gain and offset errors have to be measured (or calibrated)
for applications that require a good absolute precision. For that purpose, a simple
linear regression over a set of known d.c. input levels is sufficient. The number of
d.c. input levels determines the confidence interval associated with the measurement.
The value of the residues of the linear regression can also give a first insight into the
resolution of the converter.
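The regression step is a one-liner in practice. A sketch with invented numbers (gain error +2 per cent, offset −13 mV, a little measurement noise):

```python
import numpy as np

vin = np.linspace(-0.9, 0.9, 25)                  # known d.c. input levels
rng = np.random.default_rng(3)
vout = 1.02 * vin - 0.013 + rng.normal(0.0, 1e-4, vin.size)   # averaged outputs

gain, offset = np.polyfit(vin, vout, 1)           # straight-line fit
residues = vout - (gain * vin + offset)           # what is left hints at linearity
print(f"gain error = {gain - 1:+.4f}, offset = {offset:+.4f} V, "
      f"rms residue = {residues.std():.1e} V")
```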
Non-linearity arises when the transfer function cannot be represented by a straight
line any more. The static parameters that represent how the actual transfer function
deviates from the straight staircase are the integral non-linearity (INL) and the differential
non-linearity (DNL). The former is a representation of the distance of a given point of the
actual transfer function to the ideal transfer function. The latter is a representation of the
actual size of a transition width (that is, the voltage difference between two consecutive
code transitions) with respect to the ideal size. This is illustrated in Figure 8.11. These
two metrics are closely related, as the INL at code i is the sum of the DNL from code
0 to code i.
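The INL/DNL relation can be checked numerically with toy DNL values:

```python
import numpy as np

dnl = np.array([0.10, -0.20, 0.05, 0.30, -0.15])   # per-code DNL, in LSB
inl = np.cumsum(dnl)                               # INL(i) = sum of DNL(0..i)
print(inl)
```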
An important concept related to the static representation is that of monotonicity.
It is not a metric but a quality of the converter that is implicit in the INL and DNL.
The monotonicity of ΣΔ converters is ensured by design. Indeed, ΣΔ modulators
[Figure 8.11: static errors of an ADC transfer function (output code versus d.c. input). Gain and offset errors are defined from the best-fit line; INL and DNL are defined with respect to the ideal code width.]
can be seen as control loops: if they were not monotonic, they would be unstable.
Another important static aspect of ADCs is whether there are missing codes. The
output code of ΣΔ converters is built by the decimation filter from the modulator
output bit-stream. Provided that the decimation filter is well designed and no rounding
operation limits the resolution, there should be no missing codes.
As described in the previous subsection, ΣΔ modulation breaks the traditional
representation of the A/D conversion as a sample-to-code mapping. This does
not mean that the static metrics are useless, but their interpretation has to be done
with care. The DNL does not provide much information, but the INL could describe general
trends in the transfer function, like polynomial approximations. Anyway, measuring
the INL for all output codes does not make much sense either. As a result, the standard
techniques used to measure the INL and the DNL of ADCs are not well adapted
to ΣΔ converters. One example of these techniques is the servo-loop method that
is used to locate code transitions. Apart from the previously discussed concerns
about the concept of code transitions, a drawback of that technique is that the algorithm
should take into account the potentially large latency of the decimation filter. It
should also be revised to take into account the influence of noise at high resolution.
Indeed, a ΣΔ converter with a nominal resolution of 24 bits may have an effective
resolution of around 20 bits. Trying to locate transitions at a 24-bit resolution would
imply finding an oscillation buried in noise 16 times larger. Also, an exhaustive
test of all transitions would require an impractical amount of time: for resolutions
above 14 bits, the number of codes is very large. Furthermore, in order to obtain the
static transfer function, the increase and decrease rate of the input voltage should be
very slow.
Another technique, histogram testing, requires the acquisition of several samples
per output code. The converter code density (that is, the number of occurrences of
each code) is compared to the input signal density to determine the DNL and INL with
precision. The main advantage over the servo-loop method is that the histogram can be
computed using sine-wave inputs or slow ramps. There is thus no reason to wonder
how the ΣΔ modulator processes the input signal, as in the servo-loop method. However,
a histogram test is still difficult to perform for the highest resolutions.
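For a sine-wave stimulus the expected code density follows the arcsine distribution, so a DNL estimate compares each measured count against its expected count. A sketch for an assumed ideal 8-bit converter with a slightly over-ranged sine (all values illustrative):

```python
import numpy as np

bits, M, A = 8, 2**20, 1.2                         # resolution, samples, amplitude
rng = np.random.default_rng(7)
x = A * np.sin(rng.uniform(0.0, 2 * np.pi, M))     # sine sampled at random phases
codes = np.clip(np.round(x * 2**(bits - 1)), -2**(bits - 1), 2**(bits - 1) - 1)
counts = np.bincount((codes + 2**(bits - 1)).astype(int), minlength=2**bits)

# Expected counts from the arcsine cumulative distribution of the sine input:
edges = (np.arange(2**bits + 1) - 2**(bits - 1) - 0.5) / 2**(bits - 1) / A
cdf = 0.5 + np.arcsin(np.clip(edges, -1.0, 1.0)) / np.pi
expected = M * np.diff(cdf)

inner = slice(1, 2**bits - 1)                      # skip the unbounded end bins
dnl = counts[inner] / expected[inner] - 1.0
print("max |DNL| estimate:", np.abs(dnl).max())
```

Even here the statistical noise on each code count limits the DNL resolution, which is why the sample count M has to grow rapidly with converter resolution.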
Hence, it can be concluded that the techniques to measure the static performance
metrics are not well adapted to ΣΔ modulators. Even the metrics in themselves
suffer some conceptual limitations, apart from the gain and offset measurements.
The quantization noise is strongly correlated to the input signal. For most ADC archi-
tectures, the relatively high number of bits tends to de-correlate the quantization noise
from the input signal so that the former behaves as a random white noise. Apart from
quantization noise, other noise sources can occur and reduce the effective resolution.
Stochastic processes such as thermal noise and icker noise are of importance. Clock
jitter should also be taken into account. Finally, cross-talk within the circuit or a
poor power-supply rejection ratio and common-mode rejection ratio may also cause
the appearance of unwanted tones in the modulator baseband and consequently
in the converter output. The performance parameters related to noise effects are the
signal-to-noise-ratio (SNR) and spurious free dynamic range (SFDR). Depending on
the application, the metrics are tailored to isolate a given effect. Spurious tones may
be considered and accounted for as a distortion (even if they are not harmonics of the
input signal), while the noise would in that case account only for random-like effects.
The sum of all non-ideal effects (apart from gain and offset errors) is often condensed into a single performance metric called the effective number of bits (ENOB). The ENOB is the number of bits that an ideal converter would have if its quantization error power were equal to the overall error and noise power of the converter under study.
As said before, ΣΔ modulators raise a number of particular concerns about their dynamic characteristics. Quantization noise in ΣΔ modulators usually appears as a spectrally shaped random noise, but in some conditions spurious tones can appear due to internal coupling with the input signal or even idle tones. Similarly, the quantization noise power in the baseband can vary with the frequency and amplitude of the signal. ΣΔ modulators are also prone to exhibit pseudo-periodic outputs when they are excited by d.c. levels. This phenomenon is known as idle tones and is difficult to identify through spectral analysis. However, examination of the modulator time response makes possible the detection of such effects. In order to cope with the poor controllability of the ΣΔ modulators' non-linear dynamics, the metrics associated with the dynamic characteristics (THD, SNR, SFDR, etc.) are usually measured and plotted
over a broad range of input conditions. As most dynamic characterization techniques
rely on sine-wave input, the metrics of interest are measured for several amplitudes
and frequencies. A periodic test signal is thus sent to the converter input and a register
of N data points is acquired at the converter output. Two main analysis techniques
exist.
One is called sine-fit. It consists in finding the sine wave that best fits the converter output in a least-square sense. There are two important variants of the technique. The first and simplest one can be applied if the input sine-wave frequency is perfectly controlled. That can be done if it comes from a digital-to-analogue converter (DAC) driven by the same clock as the ADC. In that case, a linear-in-the-parameters regression can be applied to retrieve the sine-wave amplitude, offset and phase. The second one has to be applied if the signal frequency is not known. In that case, non-linear solvers have to be used, which greatly increases the computational effort. Once the best-fit sine wave has been found, the residue of the fit operation gives the overall converter error power. Refinements can be included in the method to take into account possible harmonics or other tones at known frequencies in the fitting algorithm.
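As a sketch of the known-frequency variant, the linear-in-the-parameters regression reduces to a few lines of NumPy (the function name below is illustrative, not from the text):

```python
import numpy as np

def sine_fit(y, f_over_fs):
    """Least-squares fit of y[n] ~ A*cos(w*n) + B*sin(w*n) + C at a known
    normalized frequency f_over_fs = f_test/f_acq. Returns the amplitude,
    offset, phase and the residue (the overall error signal)."""
    n = np.arange(len(y))
    w = 2 * np.pi * f_over_fs * n
    M = np.column_stack([np.cos(w), np.sin(w), np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(M, y, rcond=None)
    A, B, C = coef
    return np.hypot(A, B), C, np.arctan2(-B, A), y - M @ coef
```

The converter error power is then simply the mean square of the returned residue.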
248 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
The other and almost ubiquitous analysis technique is spectral analysis based on the fast Fourier transform (FFT). The FFT differs from the discrete Fourier transform (DFT) in that N, the number of acquired points (that is, the length of the acquisition register), has to be a power of two. In that case, the DFT algorithm can be simplified to a great extent. Apart from that peculiarity the concepts are identical. The Fourier transform (FT) of a signal is its exact representation in the frequency domain. Actually, a signal can be represented as the linear combination of an infinite number of base functions that are complex exponentials of the form e^(j2πft). The FT of a signal is nothing more than the coefficient set of the linear combination, that is, the values of the projection of the signal over each one of the complex exponentials. That information gives a clear representation of how the signal power is spectrally distributed.
For ΣΔ modulators, which rely on quantization noise shaping to obtain their high resolution, the FFT is almost unavoidable. However, FFTs should be applied with care because the simplicity of the result interpretation contrasts with the subtlety of the underlying concepts. For that reason, the next section is devoted to studying the main issues for the correct application of an FFT to ΣΔ modulators.
Figure 8.12 Spectrum of a rectangular window (a) for a coherent tone and (b) for a non-coherent tone
Coherent sampling is the first technique to limit these undesirable effects. It consists in choosing the test frequencies such that they correspond exactly to FFT bins. In practice, this can be implemented if the test signal generator can be synchronized with the ADC. It can be shown [16] that the test frequencies should be set to a fraction of the acquisition frequency:
ftest = (J/N) facq (8.7)
where N is the number of samples in the acquisition register and J is an integer, coprime with N, that represents the number of test signal periods contained in the register. This choice also ensures that all the samples are evenly distributed over the test signal period and that no sample is repeated.
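Equation (8.7) can be applied mechanically: pick the integer J closest to the desired frequency and nudge it until it is coprime with N. A minimal sketch (the function name is illustrative):

```python
from math import gcd

def coherent_test_frequency(f_desired, f_acq, N):
    """Return (J, f_test) with f_test = (J/N)*f_acq, J coprime with N and
    f_test as close as possible to f_desired (J kept below Nyquist)."""
    J0 = max(1, round(f_desired * N / f_acq))
    for d in range(N):
        for J in (J0 + d, J0 - d):
            if 0 < J < N // 2 and gcd(J, N) == 1:
                return J, J * f_acq / N
    raise ValueError("no suitable J below Nyquist")
```

For example, with N = 1024 and facq = 1 MHz, a desired 10 kHz tone maps to J0 = 10, which shares a factor with 1024; the search settles on J = 11, i.e. ftest ≈ 10.742 kHz, giving exactly 11 periods in the register.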
However, it is not always possible to control the test frequencies with a sufficient accuracy. Similarly, there may be spurious tones in the converter output spectrum at uncontrolled frequencies. In those cases, a window different from the rectangular one is required. Spectral leakage occurs because the analysed signal is not periodic with a period N/facq. The idea behind windowing is to force the acquired signal to respect the periodicity condition. For that to be done, the signal has to be multiplied by a function that continuously tends to zero at its edges. As a result, the power contained in the sidelobes of the window spectrum can be greatly reduced. The window has to be chosen such that the leakage of all components present in the ADC output signal falls below the noise floor and thus does not corrupt the spectrum observation. The drawback of such an operation is that the tones present in the output spectrum are no longer represented by a sharp spectral line at one FFT bin. Indeed, the main lobe of the window is always sampled by a number of adjacent FFT bins greater than one. As a result, frequency resolution is lost. There is a trade-off between
frequency resolution and sidelobe attenuation. Figure 8.13 represents the spectrum
of several windows sampled for a 1024-point FFT. Figure 8.13(a) shows how the
window would be sampled for a non-coherent tone that would fall exactly between
Figure 8.13 (a) A 1024-point FFT of four windows (rectangular, Hanning, Blackman-Harris and Rife-Vincent type II) in the worst case of non-coherent sampling (signal between two FFT bins) and (b) main lobes of the window spectra (window power in dB versus normalized frequency)
two FFT bins. Figure 8.13(b) shows a close-up of the main lobes of the window
spectra for a coherent tone. Notice that for Figure 8.13(b), there is one marker per
FFT bin.
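The trade-off can be reproduced numerically by zero-padding the windows and inspecting their spectra. The sketch below contrasts the Hanning window with a 4-term Blackman-Harris window (coefficients hard-coded from the standard 4-term definition, an assumption about which variant the figure uses): the Hanning sidelobes start much higher but decay far faster with frequency, which is the behaviour discussed later in this section.

```python
import numpy as np

def window_spectrum_db(w, pad=64):
    """Magnitude spectrum of window w in dB, zero-padded pad-fold so the
    sidelobe structure is finely sampled; 0 dB at the main-lobe peak."""
    W = np.abs(np.fft.rfft(w, pad * len(w)))
    return 20 * np.log10(np.maximum(W / W[0], 1e-12))

N, pad = 1024, 64
hann = np.hanning(N)
k = np.arange(N)
a = [0.35875, 0.48829, 0.14128, 0.01168]   # 4-term Blackman-Harris
bh = (a[0] - a[1] * np.cos(2 * np.pi * k / N)
      + a[2] * np.cos(4 * np.pi * k / N)
      - a[3] * np.cos(6 * np.pi * k / N))

H = window_spectrum_db(hann, pad)
B = window_spectrum_db(bh, pad)
peak_hann = H[int(2.2 * pad):].max()   # highest sidelobe, ~ -31.5 dB
peak_bh = B[int(4.2 * pad):].max()     # highest sidelobe, ~ -92 dB
```

Far from the main lobe (e.g. around a normalized frequency of 0.25) the Hanning spectrum is tens of decibels below the Blackman-Harris one, despite its much higher first sidelobe.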
In order to limit spectral leakage, the authors in Reference 17 proposed to combine sine-fit and FFT. A sine-fit is performed on the acquired register in order to evaluate the gain and offset of the modulator. Then, an FFT is performed on the residue of the sine-fit. As the high-power spectral line has been subtracted from the register, the residue mainly contains noise, spurious components and harmonics. In most cases, these components do not exhibit high-power tones, so a simple window or even a rectangular window can be used: the spectral leakage of these components should be buried below the noise floor. The overall spectrum (what the authors call the pseudo-spectrum) can be reconstituted by manually adding the spectral line corresponding to the input signal. The main drawback of this technique is obviously that it requires the computational effort of both sine-fit and FFT.
The proper application of an FFT requires that three parameters be determined: the number of samples in a register, the number of averaged registers and the window to be applied. The window type is too qualitative a parameter, and it is useful to characterize it by four quantities: the main lobe width (for instance, 13 FFT bins for the Rife-Vincent window of Figure 8.13(a)), the window energy, the maximum sidelobe power and the asymptotic sidelobe power evolution. Figure 8.14 shows how these parameters relate to the measurement objectives and to the set-up constraints through a number of central concepts.
The required frequency resolution is defined by the need for tone discrimination and affected by set-up limitations such as the frequency precision of the signal generator. For a given type of test, a number of tones are expected in the output spectrum. For instance, in an inter-modulation test, the user has to calculate, as a function of the input tone frequency, the expected frequency of the inter-modulation and distortion
Figure 8.14 Relating FFT parameters to test objectives (tone discrimination, lowest measurable tone power) and set-up constraints
tones. Similarly, expected spurious tones can be taken into account, such as 50 Hz (or 60 Hz) tones. All those components should be correctly discriminated by the FFT in order to perform correct measurements. Frequency resolution is primarily driven by the number of samples in the acquired register, but the window type is also of great importance. Indeed, the main lobe width for an efficient window (from a leakage viewpoint) such as the Rife-Vincent window shown in Figure 8.13(a) is as large as 13 FFT bins. This means that the frequency resolution is reduced by a factor of 13 with respect to a rectangular window whose main lobe is only one bin wide. In many cases though, few tones are expected in the output spectrum and the frequency resolution issue can easily be solved by a judicious choice of the test frequency.
The noise floor is the concept of most importance. The power of a random signal spreads over a given frequency range. For a white noise, it spreads uniformly from d.c. to half the acquisition frequency (facq/2). What the FFT measures is actually the amount of noise power in a small bandwidth centred on each FFT bin. Obviously, the larger the number of samples acquired, the smaller the bandwidth and the lower the amount of noise falling in that bandwidth. The expected value for a noise bin is
Xk = σnoise √(2/(N Ewin)) (8.8)
where σnoise is the standard deviation of the white noise, N is the number of samples in the acquisition register and Ewin is the energy of the applied window. Indeed, the window is applied to the whole output data, including the noise, and influences the effective noise bandwidth. The energy of the window is simply calculated as the mean of the squared window samples (unity for the rectangular window).
On the other hand, the noise floor is related to the set-up constraints by the actual noise power in the output data, which should be estimated a priori. The noise floor has to be set to a value that enables the observation of the lowest expected tone power. In other words, if a tone 90 dB below full scale has to be detectable, the number of samples and the window energy have to be chosen such that the noise floor of the resulting FFT falls below −90 dBFS.
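The dependence of the noise floor on the register length is easy to check by simulation: quadrupling N lowers the average noise-bin magnitude by about 6 dB, while a coherent full-scale tone would stay at 0 dBFS. A minimal sketch (the amplitude normalization is a chosen convention, not necessarily the one used in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_floor_dbfs(N, sigma=1e-3):
    """Average noise-bin magnitude, in dB relative to a full-scale tone
    of amplitude 1 (which would read 0 dB with this normalization)."""
    x = rng.normal(0.0, sigma, N)
    X = 2 * np.abs(np.fft.rfft(x)) / N   # amplitude-normalized spectrum
    return 20 * np.log10(X[1:-1].mean())

f1 = noise_floor_dbfs(4096)
f2 = noise_floor_dbfs(4 * 4096)
# f2 is roughly 6 dB lower than f1: more samples, smaller bin bandwidth
```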
The noise dispersion should also be taken into account. It can be shown that the random variable that corresponds to an FFT bin, and whose mean value is expressed in Equation (8.8), has a standard deviation of the same order as its mean value. As a result, in the representation of the spectrum in decibels of the full scale, random noise appears as a large band that goes from 12 dB above the expected power level down to tens of decibels below. Averaging the magnitude of the FFT bins for K acquisition registers helps to reduce the standard deviation of the noise FFT bins by a factor of √K. For a significant number of averages, the noise floor tends to a continuous line, which would be its ideal representation. Actually, the following equation can be used to derive the FFT parameters from the requirement on the lowest detectable tone:
−10 log10(1/(2 Pnoise)) − 10 log10(N Ewin/2) + 20 log10(1 + 3/√K) = Pspur (8.10)
where Pnoise is the expected noise power of the converter, K is the number of averaged
registers and Pspur is the power of the minimum spur that has to be detected. Notice that
a full-scale tone is taken as the power reference in Equation (8.10). The last logarithmic
term in Equation (8.10) stands for the dispersion of the noise oor. Figure 8.15 intends
to facilitate comprehension of Equation (8.10).
The dispersion term should be maintained below the variations of the noise spectral density that have to be detected. For instance, if an increase of 6 dB in the noise density due to flicker noise has to be detected, the noise dispersion term should be lower than 6 dB, which implies averaging K = 10 FFT registers. Note that if the actual noise power is higher than expected, the noise floor of the obtained FFT is increased. As a result, the minimum detectable tone is higher. To compensate for this effect, the number of points in the register should be increased to decrease the noise floor. An extra term may be introduced into Equation (8.10) in order to account for unexpected noise increases.
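The 1/√K reduction of the noise-bin dispersion is straightforward to verify by simulation. In the sketch below, averaging the bin magnitudes of 16 white-noise registers shrinks their standard deviation by a factor close to √16 = 4:

```python
import numpy as np

rng = np.random.default_rng(1)

def averaged_noise_bins(N, K):
    """Bin-wise average of the FFT magnitudes of K white-noise registers."""
    mags = np.zeros(N // 2 - 1)
    for _ in range(K):
        x = rng.normal(0.0, 1.0, N)
        mags += np.abs(np.fft.rfft(x))[1:-1]
    return mags / K

N = 4096
spread_1 = np.std(averaged_noise_bins(N, 1))
spread_16 = np.std(averaged_noise_bins(N, 16))
ratio = spread_1 / spread_16   # expected close to sqrt(16) = 4
```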
Returning to Figure 8.14, the concept of signal leakage has already been explained.
Considering the maximum input tone power and the frequency precision of the signal
generator available, the proper window should be selected such that the sidelobe power
falls below the noise oor. Notice that if the frequency precision of the generator is
better than half the FFT bin bandwidth, facq /(2N), the sidelobe power requirements
Figure 8.15 Output power in dBFS, showing a full-scale tone, the noise floor at −10 log10(1/(2 Pnoise)) − 10 log10(N Ewin/2), a tone buried in the noise and the minimum detectable tone
may be relaxed as the window spectra would not be worst-case sampled. Taking that
case to an extreme, if coherent sampling is available to the test set-up, no signal
leakage occurs.
For ΣΔ converters, however, another leakage concept may have to be taken into account: noise leakage. As was said in Section 8.3.1, ΣΔ converter non-idealities are located mainly in the analogue part, which is the modulator. In that sense, performing the FFT on the modulator bit-stream gives more insight into the functioning of the modulator because it is possible to check the correctness of the noise shaping at high frequencies (beyond the cut-off frequency of the decimation filter). If the FFT is performed on the output of the decimation filter, a number of samples N has to be acquired at the filter output frequency (facq) in a high-resolution format (for instance, the output of a 24-bit precision filter can be in a 32-bit format). If it is performed on the modulator bit-stream, a number of samples, N′, has to be acquired at the sampling frequency of the modulator (which is equal to the filter output frequency multiplied by the OSR) in a low-precision format (typically 1 bit). Taking into account that the same non-idealities have to be detected in the baseband, the same frequency resolution has to be selected in both cases. Hence, the FFT of the modulator output bit-stream requires OSR times more points than the acquisition at the filter output. The acquisition time is thus the same in both cases, and the memory requirements should be of the same order thanks to the difference in sample formats.
The drawback of performing an FFT on the modulator bit-stream is that it puts more stress on the choice of the window that has to be applied to the data register. Indeed, in most ADCs the noise spectral distribution is almost flat and its power is far lower than the full-scale signal power. As a result, noise leakage has little or no impact on the output spectrum. This reasoning is also valid for a ΣΔ converter if data is acquired at the output of the decimation filter. However, if the FFT is performed directly at the modulator output, the spectral density of the modulator quantization noise is not flat at all, and the leakage of high-frequency noise into the modulator baseband could severely corrupt the FFT analysis. The window has to be chosen not only on the basis of test signal leakage but also on the basis of spectrally shaped noise leakage. In other words, the performance of the window depends on the attenuation of the first sidelobes for signal (or tone) leakage, and on the asymptotic attenuation for noise leakage.
It can be seen in Figure 8.13 how the Blackman-Harris and Rife-Vincent windows achieve a low sidelobe power. On the other hand, Hanning's window induces more signal leakage, but with the advantage that its sidelobe power greatly decreases with frequency. Beyond relatively low frequency offsets, this window outperforms the Blackman-Harris window. It may thus be more suitable for preventing the high-frequency noise of the modulator output bit-stream from leaking into the baseband. This may be particularly true if a combination of sine-fit and FFT is performed, as the fundamental component that is most likely to exhibit visible leakage is removed. In that case, noise leakage becomes the dominant component, unless there are high-power spurious tones. In order to properly choose the window, it could be useful to simulate a white noise filtered by the theoretical NTF of the modulator and perform an FFT with the candidate windows. That allows checking whether the noise floor in the baseband is higher than expected.
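This check is straightforward to script. The sketch below shapes white noise with a second-order NTF, (1 − z⁻¹)², and compares the baseband floor seen through a rectangular window with that seen through a Hanning window; with the rectangular window, leakage from the high-frequency shaped noise swamps the true baseband floor by several orders of magnitude:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8192
e = rng.normal(0.0, 1.0, N + 2)
shaped = np.diff(e, 2)        # white noise filtered by NTF = (1 - z^-1)^2

def baseband_floor(x, w):
    """Average PSD estimate over a few baseband bins, window w applied."""
    X = np.abs(np.fft.rfft(w * x)) ** 2 / np.sum(w ** 2)
    return X[2:16].mean()

rect = baseband_floor(shaped, np.ones(N))
hann = baseband_floor(shaped, np.hanning(N))
# rect is dominated by leakage of the high-frequency shaped noise;
# hann tracks the true, very low, baseband density of the NTF
```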
Summarizing the conclusions on the characterization of ΣΔ converters, it can be said that it requires an extensive use of spectral analysis (i.e., FFT) over a range of input conditions (signal amplitude and frequency). Furthermore, the FFT has to be carried out with care and the test engineer should know precisely what the limitations of the test set-up are and what has to be measured. Concerning the static metrics, the effective gain and offset also have to be included. Odd effects, specific to the non-linear dynamics of ΣΔ modulators, such as dead-zones or limit cycles, may also be found in the time domain.
The term test is commonly used for characterization. Indeed, functional test is the most-used practice in the field of mixed-signal circuit production test and is very similar to characterization. It consists in measuring a given set of datasheet parameters and verifying that they lie within the specified limits. Nevertheless, the correct definition of test is broader than that of characterization. Test should represent any task that ensures, directly or not, within a given confidence interval, that the circuit is (and not just performs) as expected. For instance, if it were possible to check that the geometry of all process layers is the expected one and that the electrical process parameters are within specification over the whole wafer, circuit performance would be ensured by design. As was said before, production test for mixed-signal circuits, and in particular for ΣΔ modulators, is usually functional: a subset of key specifications is measured and the rest of the datasheet parameters are assumed to be correlated with those measured parameters. It should be clear that functional test is not the perfect test, as it does not guarantee that the circuit is defect free. There exist other alternatives,
none of which is perfect either, but they may have some added value with respect to characterization-like tests.
(Figure: digital ΣΔ modulator bit-stream generator built from chains of z−1 delay stages with coefficients K1 … KN, followed by a low-pass filter)
signal. However, most reported solutions lack some of these functions and hence they are actually partial BISTs.
(Figure: software ΣΔ modulator followed by a low-pass filter)
concern the trade-off between precision and extra area. Indeed, the wider the register, the more precise the encoded signal and the larger the required silicon area. Nevertheless, they also proposed to reuse the boundary-scan register as the generator shift register. This would provide a potentially large register with a low overhead. Alternatively, a RAM available on chip could also be reused. Notice that it is important to optimize the recorded bit-stream to obtain the best results. The bit-stream recorded in the shift register is a portion of the output bit-stream from a software ΣΔ modulator encoding the wanted test stimulus. Optimization consists in choosing the best-performing bit-stream portion over the total bit-stream and in slightly varying the input signal parameters of the software ΣΔ modulator to get the best results. Indeed, the SFDR results of a ΣΔ modulator can vary significantly with the input signal amplitude. These proposals are well developed, and alternative generation methods for the bit-stream have been shown to improve the obtained signal precision.
In Reference 29 the authors took the idea of Gordon Roberts' team and built a fourth-order oscillator in a field-programmable gate array to demonstrate the validity of the approach. Their oscillator was designed to avoid the need for multipliers and required around 6000 gates. They achieved a dynamic range of more than 110 dB for a tone at 4 kHz (the modulator master clock being set to 2.5 MHz).
For the output data analysis, Roberts' team also proposed a solution to extract some dynamic parameters in association with their sine-wave generation mechanism. In Reference 30 they compare three possible techniques. The first one is straightforward: it consists in the implementation of an FFT engine. Although it provides a good precision, it is not affordable in the majority of cases. The second one consists in using a standard linear regression to do a sine-fit on the acquired data. The same master clock is used for the sampling process and the test stimulus generation, so the input frequency is precisely known, which avoids the necessity of using a non-linear four-parameter search. The precision of the SNR calculation is similar to the FFT, but less hardware is necessary. However, some multiplications need to be done in real time and some memory is also required to tabulate the values of the sine and cosine at the test stimulus frequency. The third and last proposed solution is to use a digital notch filter to remove the test signal frequency component and calculate the noise power, and a selective bandpass filter to calculate the signal power. The required hardware
to implement this method is less than for the other two solutions, as no memory is needed to tabulate cosine values and no real-time multiplication is required. The price to be paid is a small reduction in SNR precision; the test time is also slightly increased to account for the filter settling time. Actually, the more selective the filter, the better the SNR precision but the longer the settling time. Extensions of this work [31] also showed that it was possible to extract harmonic distortion and IMD with similar digital filtering.
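The notch-based SNR estimate can be prototyped in software before committing it to hardware. The sketch below uses a textbook second-order IIR notch (not the specific filter of Reference 30) whose zeros sit exactly on the unit circle at the test frequency; after discarding the settling transient, the residue power approximates the noise power:

```python
import numpy as np

def notch(x, f0_over_fs, r=0.99):
    """Second-order IIR notch at normalized frequency f0_over_fs.
    The pole radius r sets both the notch bandwidth and settling time."""
    c = np.cos(2 * np.pi * f0_over_fs)
    b0, b1, b2 = 1.0, -2 * c, 1.0          # zeros on the unit circle
    a1, a2 = -2 * r * c, r * r             # poles just inside it
    y = np.zeros_like(x)
    for i in range(len(x)):
        y[i] = (b0 * x[i]
                + (b1 * x[i - 1] - a1 * y[i - 1] if i >= 1 else 0.0)
                + (b2 * x[i - 2] - a2 * y[i - 2] if i >= 2 else 0.0))
    return y

rng = np.random.default_rng(3)
n = np.arange(1 << 16)
f0 = 1 / 64                                      # test tone frequency / fs
x = 0.5 * np.sin(2 * np.pi * f0 * n) + rng.normal(0.0, 1e-3, len(n))

res = notch(x, f0)[4096:]                        # drop the settling transient
p_noise = np.mean(res ** 2)
p_total = np.mean(x[4096:] ** 2)
snr_db = 10 * np.log10((p_total - p_noise) / p_noise)
```

The trade-off described above is visible here: a larger r narrows the notch (better noise-power accuracy) but lengthens the transient that must be discarded.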
Finally, Ong and Cheng [32] proposed to partially solve the problem of test stimulus generation by duplicating the modulator feedback DAC to input digital sequences. Similar to the scheme proposed by Gordon Roberts, a digital ΣΔ-like bit-stream can be used as a test stimulus. The authors argue that sine waves encoded in that way can be used to functionally test the modulator, performing an FFT on its output. However, a potential limitation of the technique resides in the fact that the digital test sequence does not pass through an anti-aliasing filter and thus contains a large amount of high-frequency noise. This high-frequency noise may interact with the modulator non-linear dynamics up to the point of causing instability. In any case, it still has to be demonstrated that the functional metrics measured with such a test stimulus match those measured in a conventional way.
to cover the scope of all possible faults, from catastrophic to parametric deviations. This solution is thus limited by the possibility of realizing realistic fault simulations. Another drawback is that the non-linear dynamics of ΣΔ modulators may alter the oscillation results expected from analytical calculations.
De Venuto and Richardson [35] proposed to use an alternative input point to test ΣΔ modulators. Their intention is to inject a test signal at the input of the modulator quantizer. This test signal is processed by the modulator just like the quantization noise. In that sense, the authors argue that they can determine the modulator NTF accurately. Although it is true that many defects or non-idealities can affect the modulator NTF and should thus be detected, others are intrinsically related to the input signal. The best example is given by those defects that cause harmonic distortion, such as non-linear settling of the first integrator. Such a defect would not be detected by the proposed method. Similarly, it is worth wondering whether the injection of a test signal at that point significantly alters the non-linear dynamics of the modulator under test. In particular, much care should be taken for high-order modulators to ensure that they are not driven into instability. Nevertheless, the main advantage of the approach is that it is applicable, in principle, to any modulator architecture.
Ong and Cheng [36] also proposed another solution, based on the use of pseudo-random sequences, to detect integrator leakage in second-order ΣΔ modulators. Their method relies on a heuristic search to find the region of the output spectrum that exhibits sensitivity to integrator leakage. The fact that the proposed solution was validated only through high-level simulation of a simple z-domain model makes its reliability questionable.
s = A x + s0 (8.11)
Owing to noise considerations and the limited precision of the performance measurements, it may be necessary to perform measurements over a set of specifications wider than P and retrieve the model parameters in a least-square approach. The selection of the number of specifications to be measured is an optimization problem that greatly depends on the model and the way it was built.
One of the ways to derive an efficient model consists in identifying the mechanisms that can potentially impact the specifications that have to be tested. Obviously, this requires knowledge of the exact circuit architecture and implementation, together with a deep understanding of its functioning. Even more, statistics on the process variations should also be available, as most parametric failures are due to abnormal drift of some of these parameters. A systematic approach for the model derivation would then consist in performing a sensitivity analysis around the normal operating point. All parameters that significantly impact the specifications would be selected to form the final matrix A and the operating-point performance s0. The main shortcoming of such an approach is that the sensitivity analysis can only be applied to the parameters selected by the designer. It thus requires a deep understanding of the architecture and its non-idealities in order to avoid test escapes. In turn, a fully systematic approach is unfeasible, as all the process and device parameters would have to be considered in the sensitivity analysis. This would undoubtedly provide a number of parameters much higher than the number of specification figures that should be measured.
Alternatively, the model can also be derived in an empirical manner [38]. That operation is often named blind modelling. From a statistically significant set of N devices, an empirical model is derived. The number of devices that have to be
fully characterized to generate the model puts a fundamental limit on the maximum achievable model order. The complete specification vectors s of the N devices are concatenated to form an M × N matrix. The average (over the N devices) specification vector s0 is subtracted from each column of that matrix. The singular value decomposition of the obtained matrix allows identification of the model. This can be easily understood since singular value decomposition defines the mapping of a vectorial space defined by arbitrary vectors (the specification vectors) to a vectorial space defined by orthogonal vectors (the model parameter space). A possible optimization of the model order would consist of selecting only those parameters whose singular value is above the variations related to measurement noise. The model quality is determined by a lack-of-fit figure of merit, which is similar to the standard deviation of the residues in a least-square fit. The main advantage of such an approach is that it can be easily generalized, as the methodology does not require any particular knowledge of the circuit under test. However, the modelling method still assumes that the variations are small around an ideal operating point. One drawback of the blind modelling approach with respect to the sensitivity analysis approach is that it provides no insight into the validity range of this linear assumption.
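The blind-modelling procedure can be illustrated on synthetic data. In the sketch below (all numbers are invented for illustration), five specifications of 200 devices are driven by two hidden process parameters plus measurement noise; the singular values of the mean-centred M × N matrix reveal that a model of order two suffices:

```python
import numpy as np

rng = np.random.default_rng(4)

M, Ndev = 5, 200                       # specifications x devices
A_true = rng.normal(size=(M, 2))       # two hidden process parameters
x = rng.normal(size=(2, Ndev))
S = A_true @ x + 1e-3 * rng.normal(size=(M, Ndev))   # specification matrix

s0 = S.mean(axis=1, keepdims=True)     # average specification vector
U, sv, Vt = np.linalg.svd(S - s0)      # blind model identification

# keep only the singular values clearly above the measurement-noise level
order = int(np.sum(sv > 0.5))
```

The columns of U associated with the retained singular values then play the role of the matrix A in Equation (8.11), and the corresponding rows of Vt give the orthogonal model parameters of each device.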
Examples of model-based test applied to ADCs can be found in the literature [39, 40]. However, in these cases the model is used primarily to reduce the number of measurements for static test. In Reference 39, a linear model is used to measure the INL and DNL for only a reduced number of output codes and extrapolate the maximum INL and DNL. In Reference 40, a linear model is also applied to the histogram test. All the output codes are measured, but the number of samples per code is relaxed. The model is used to draw the INL and DNL information out of the noisy code density obtained. Anyway, neither of the proposals makes much sense for ΣΔ converters, as they only consider static performance.
Despite its potential for test time (and thus test cost) reduction, model-based testing does not relax the test requirements. The measurements to be performed are the same measurements as for characterization; only their number is reduced. However, model-based testing has a great potential for design-for-test (DfT). The brute-force approach described above has the advantage of generality: the methodology can be applied to any circuit and thus to any ΣΔ converter. However, taking into account the structure of the circuit under test may allow a more efficient use of the test resources. On the one hand, the test stimuli requirements could be relaxed by adding specific and easy-to-perform measurements to the initial measurement space. The additional measurements could be used to retrieve the model parameters in the same manner as above. On the other hand, the data analysis could also be simplified to some extent. Test is not the same as characterization, and the objective of a test is not to explicitly provide performance figures. In a functional test, which most resembles characterization, performance measurements are compared to a tolerance window and the test outcome is a Pass/Fail result. In order to simplify model-based test for DfT purposes, it would be possible to map the specification tolerance windows onto the parameter space. The Pass/Fail result could thus be obtained a priori, without the need of calculating back the whole set of specifications. The operation described in the second line of Equation (8.12) would then be unnecessary.
(Figure: ideal response to the ramp test stimulus; output code axis from 0 to 2^Nb − 1)
Performance parameters such as the offset, gain, and second and third harmonics (denoted A2 and A3 below) can be retrieved from the equation. After inverting the matrices in Equation (8.14) and combining the result with Equation (8.15), the proposed test can be formalized according to Equation (8.12), obtaining:
S0        n/4  −3n²/32  7n³/192  −15n⁴/1024
S1        n/4   −n²/32   n³/192    −n⁴/1024
S2        n/4    n²/32   n³/192     n⁴/1024      b0
S3     =  n/4   3n²/32  7n³/192   15n⁴/1024   ·  b1
offset     1     0        n²/8      0            b2
gain       0     n/2      0         3n³/32       b3
A2         0     0        n²/8      0
A3         0     0        0         n³/32
(8.16)
The reader should notice that this matrix inversion is unnecessary as Equation (8.14)
allows us to retrieve the polynomial coefcients directly from the syndromes. It has
been done only to illustrate that the proposed test can be seen as a model-based test.
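The syndrome idea can be sketched numerically without the closed-form matrix: accumulate the response to a full-scale ramp over its four quarters, build the 4 × 4 matrix of quarter-sums of the polynomial basis, and solve for the coefficients (the numbers below are invented for illustration):

```python
import numpy as np

n = 4096
x = np.linspace(-1.0, 1.0, n, endpoint=False)    # full-scale ramp stimulus
b_true = np.array([0.01, 0.98, 0.005, -0.02])    # b0..b3 of the transfer function
y = np.polyval(b_true[::-1], x)                  # third-order polynomial response

# syndromes: the output accumulated over each quarter of the ramp
S = y.reshape(4, n // 4).sum(axis=1)

# quarter-sums of the basis {1, x, x^2, x^3} give the 4x4 model matrix
basis = np.vstack([x ** k for k in range(4)])     # shape (4, n)
Mmat = basis.reshape(4, 4, n // 4).sum(axis=2).T  # rows: quarters, cols: b_k
b_est = np.linalg.solve(Mmat, S)

# harmonic amplitudes for a full-scale sine input x = cos(theta)
A2 = b_est[2] / 2      # x^2 = (1 + cos 2*theta)/2
A3 = b_est[3] / 4      # x^3 = (3 cos theta + cos 3*theta)/4
```

With the ramp spanning ±1, the harmonic weights b2/2 and b3/4 are the unit-amplitude counterparts of the n²/8 and n³/32 entries in Equation (8.16), which assume a ramp of amplitude n/2.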
The application of the scheme described above to ΣΔ modulators is particularly
appealing. First of all, the four syndromes are acquired by accumulating the converter
output. For a ΣΔ converter, the operation can be performed directly on the modulator
bit-stream and thus only requires up-down counters. Moreover, it can be shown that
some non-idealities, such as amplifier settling error, map onto the transfer function in
a manner that is quite accurately approximated by a low-order polynomial. In some
way, the use of a third-order polynomial seems justified for ΣΔ modulators. The
authors pointed out in their original paper that the accuracy of the method could
be compromised if the actual transfer function presents significant components of
degrees higher than three. That could be the case if clipping occurs. In those cases,
however, they state that the THD can be accurately estimated by taking the squared
sum of the second and third harmonics calculated according to the proposed test.
The validity range of the model is thus questionable, particularly in the context of
a defective device. The issue is not directly addressed in Reference 41, but some
empirical insight into the reliability of the model is actually provided. The proposed test
is performed on a group of commercial ΣΔ modulators and it is shown that the obtained
results are in accordance with standard tests. Another fundamental limitation of the
method is obviously that it only addresses effects that can impact the modulator
distortion. In terms of structural test, the fault coverage is thus intrinsically reduced.
For instance, a defect could greatly alter the d.c. gain of an amplifier. A modulator
with such a defect would exhibit integrator leakage: part of the quantization noise
would leak into the modulator baseband. The modulator could be strongly out of
specifications in terms of SNR, but the test proposed in Reference 41 would not
detect it.
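The leakage mechanism can be illustrated with a short computation. For a first-order loop, the noise transfer function is approximately NTF(z) = 1 - p*z^(-1); a pole error (1 - p) therefore flattens the in-band noise floor at roughly 20*log10(1 - p) dB instead of letting it roll off towards zero. The pole error below is an assumed value, used only to show the effect:

```python
import cmath
import math

# First-order loop: ideal NTF(z) = 1 - z^(-1); with a leaky integrator the
# zero moves to p < 1, so NTF(z) ~ 1 - p*z^(-1) and |NTF| flattens at (1 - p)
# in the baseband. A pole error of 1e-3 (about 60 dB of d.c. gain) is assumed.
p = 1 - 1e-3
for f in (1e-5, 1e-4, 1e-3, 1e-2):                     # frequency relative to fs
    leaky = abs(1 - p * cmath.exp(-2j * math.pi * f))
    ideal = abs(1 - cmath.exp(-2j * math.pi * f))
    print(f, 20 * math.log10(leaky), 20 * math.log10(ideal))
```

At the lowest frequencies the leaky noise floor sits near -60 dB, while the ideal NTF keeps rolling off; the in-band quantization noise, and hence the SNR, degrades accordingly.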
Later work performed by Roy and Sunter [42] extends the solution to an exponential
staircase that can partially be generated on chip. This solution requires a precise
passive filtering that has to be realized off chip. Actually, the authors speak of a
built-out self-test, and this may be an issue in the context of SoC. Indeed, the proposed
scheme faces the same problem of signal integrity as a functional test.
(Figure: top-down ΣΔ modulator design flow. Performance specifications (SNR, THD,
SFDR, PSRR) are mapped by designer expertise, heuristic search and high-level
simulation onto behavioural-model parameters (amplifier d.c. gain, slew rate and
bandwidth; capacitor matching and linearity; comparator hysteresis). Macro-block
division, architecture choice and transistor sizing then lead to the electrical model,
the layout and fabrication, with verification by electrical simulation at each step; the
figure associates behavioural model-based test, electrical model-based test and
characterization/functional test with the corresponding abstraction levels.)
and so on. Behavioural model-based test can thus be considered as hierarchical test-
ing, and from that viewpoint, the approach is not so new [50]. Actually it has been
claimed [51] that inductive fault analysis for mixed-signal circuits should consider
macro performance degradations as fault classes. In other words, a behavioural model
level of abstraction is adequate for defect-oriented tests. In that sense, behavioural
model-based test offers valuable advantages for device debugging.
As was said before, the application of model-based test to DfT has to be focused on
relaxing the measurement requirements. This means that the behavioural parameters
have to be retrieved with simple tests. It has been shown in recent works that some
behavioural parameters, such as amplifier d.c. gain and settling errors (which are
related to slew rate and gain bandwidth), can be tested using digital stimuli that are
easily generated on chip [45]. The proposed tests can be roughly gathered in the set-up
of Figure 8.20.
The test stimuli are digital and can be generated on-chip at the cost of a linear
feedback shift register (LFSR) of only 6 bits. Those digital stimuli are then sent to the
modulator under test through the feedback DAC during the sampling phase. During
the integrating phase, the feedback DAC is driven by the modulator output, as usual.
That time-multiplexed use of the DAC is symbolized in Figure 8.20 by an extra input.
For the analysis of the test output, test signatures are computed by accumulating the
modulator output bit-stream minus the input sequence. This only requires a couple
(Figure 8.20: generic set-up of the digital tests proposed in References 43-45. A chain
of six z^-1 delay elements (the LFSR) generates the digital test input, which is applied
through the feedback DAC while the nominal ADC input is disabled in test mode; XOR
and AND gates steer an up-down counter that accumulates the signature.)
of logic gates and an up-down counter. They can be simply related to the modulator
behavioural parameters. However, the reader should notice that the test decision has to
be taken in the model parameter space. Indeed, the calculation of explicit performance
figures would require the simulation of the behavioural model. For device-debugging
purposes, the behavioural signatures should be shifted off chip. However, for test
purposes, tolerance windows have to be designed for each behavioural signature. The
performance specifications cannot be mapped onto the behavioural parameter space.
Hence, it is not possible to set the tolerance windows so as to obtain an equivalent-
functional test. However, behavioural parameters are closely related to the modulator
design flow. They correspond to performance specifications of the different macros.
When choosing one point of the design space at the behavioural model level, some
margins are also taken on those parameters, according to the variations of the process
parameters. Those margins can be used to establish the required tolerance window. For
instance, if an amplifier with a d.c. gain of 80 dB is considered necessary to meet
specifications, an amplifier with a nominal 90 dB d.c. gain will possibly be designed
such that in the worst-case process corner the d.c. gain is ensured to be higher than
80 dB. For test purposes, that 80-dB limit could serve as the parameter tolerance
window.
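The accumulation of the output bit-stream minus the input sequence, performed in Figure 8.20 by the XOR/AND gates and the up-down counter, can be sketched in a few lines; the function and variable names are illustrative.

```python
# Behavioural sketch of the signature analyser of Figure 8.20: the signature
# is the accumulated difference between the output bit-stream and the input
# sequence, which in hardware needs only an XOR (mismatch detection), AND
# gating and an up-down counter.
def signature(out_bits, in_bits):
    count = 0
    for o, i in zip(out_bits, in_bits):
        if o != i:                       # XOR: the two bit-streams differ
            count += 1 if o else -1      # AND gating: up for 1/0, down for 0/1
    return count

# Equivalent to sum(out - in) over the acquisition window:
assert signature([1, 0, 1, 1], [1, 1, 0, 1]) == 0
assert signature([1, 1, 1, 0], [0, 0, 0, 0]) == 3
```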
It is worth mentioning that this behavioural model-based solution is very attractive
in an SoC context as the different tests could be easily interfaced to a digital test bus
such as the IEEE 1149.1. Research still has to be done to cover more behavioural
parameters and extend the methodology to generic high-order architectures. The
digital tests proposed in References 43-45 apply to first- and second-order ΣΔ modulators
and their cascade combinations, and the results seem quite promising.
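As an aside, the 6-bit maximal-length LFSR mentioned earlier as the stimulus generator can be modelled in a few lines. The tap positions below (characteristic polynomial x^6 + x + 1, one primitive choice) are an assumption, since the actual taps are not specified in the text.

```python
# One step of a 6-bit Fibonacci LFSR. The feedback taps below implement the
# recurrence a[n] = a[n-5] XOR a[n-6] (characteristic polynomial x^6 + x + 1,
# which is primitive), so any non-zero seed cycles through all 63 states.
def lfsr6_step(state):
    fb = ((state >> 5) ^ (state >> 4)) & 1       # XOR of the two top bits
    return ((state << 1) | fb) & 0x3F            # shift left, insert feedback bit

states, s = [], 0b000001
for _ in range(63):
    states.append(s)
    s = lfsr6_step(s)
assert s == 0b000001 and len(set(states)) == 63  # maximal period of 2^6 - 1
```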
Using the set-up sketched in Figure 8.20, the first integrator leakage of a second-
order ΣΔ modulator can be measured using a periodic digital sequence with a non-null
mean value. The mean value of the test sequence has to be calculated considering
that a digital 1 corresponds to the DAC positive level and a digital 0 to the DAC
negative level, which together define the modulator full scale. Leger and Rueda [43]
propose the use of a [1 1 1 1 1 0] sequence, whose mean value is 2/3 for a (-1, 1)
normalized full scale. The signature is also built according to the test set-up of
Figure 8.20. The differences between the modulator output bit-stream and the input
sequence are accumulated over a given number N of samples.
Test of ΣΔ converters 267
The signature simply senses how the modulator output deviates from the input on
average (i.e., at d.c.). Two acquisitions
with opposite sequences are actually necessary to get rid of input-referred offset. The
signature is shown to be
signature = 4NQ(1 - p_1),  Q = 2/3    (8.17)
The term (1 - p_1) is the first integrator pole error. This pole error can be directly
related to the d.c. gain of the integrator amplifier [6]. It can be seen that the error
term is independent of the number of acquired samples N. This implies that the
correct determination of the pole error requires a number of samples that is inversely
proportional to the pole error, and thus, to a first approximation, proportional to the
amplifier nominal d.c. gain.
A very similar test is provided in Reference 43 to test integrator leakage in first-
order ΣΔ modulators. It has been demonstrated in Reference 52 that a first-order ΣΔ
modulator is transparent to digital sequences: the output bit-stream strictly follows
the input sequence. This effect is even stabilized by integrator leakage. The authors
propose to add an extra delay in the digital feedback path (a simple D-latch on the
DAC control switches) during test mode. With this modification, it is shown that the
test set-up of Figure 8.20 provides a signature proportional to the integrator pole error.
An additional condition is also set on the digital sequence: it has to be composed of L
ones and a single zero, with L greater than five. For hardware simplification, the same
sequence as above ([1 1 1 1 1 0]) can be used:
signature = 4N(1 - p) . 4/ln((3L - 5)/(L - 5)),  L = 6    (8.18)
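The transparency property invoked above is easy to reproduce with a small behavioural model. The sketch below (a simple sign quantizer, with the digital 0 mapped to the -1 DAC level) shows that the output bit-stream is the input sequence delayed by one sample, with or without leakage, so a plain accumulated difference carries no leakage information; that is precisely what motivates the extra feedback delay.

```python
import numpy as np

# Behavioural check of the transparency property (Reference 52): a first-order
# modulator driven by a +/-1 digital sequence outputs that sequence delayed by
# one sample, even with a leaky integrator, so the plain accumulated
# difference between output and input is blind to the pole error.
def first_order_sdm(x, p):
    """Leaky discrete-time integrator u <- p*u + x - y with a 1-bit quantizer."""
    u, y = 0.0, np.empty_like(x)
    for k, xk in enumerate(x):
        y[k] = 1.0 if u >= 0 else -1.0           # quantizer decides from the state
        u = p * u + xk - y[k]                    # leaky integration (pole p)
    return y

seq = np.tile([1, 1, 1, 1, 1, -1], 600).astype(float)   # the [1 1 1 1 1 0] sequence
for p in (1.0, 0.999):                                  # ideal and leaky integrator
    out = first_order_sdm(seq, p)
    assert np.array_equal(out[1:], seq[:-1])            # output = input, 1-sample delay
    print(p, np.sum(out - seq))                         # plain signature: 2 in both cases
```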
The d.c. gain non-linearity of the first amplifier of a ΣΔ modulator can cause harmonic
distortion. The error associated with d.c. gain non-linearity in amplifiers located further
in the loop is usually negligible because it is partially shaped by the loop filter.
In a second-order ΣΔ modulator, it can be shown that the first integrator output
mean value is proportional to the input mean value. The output of the integrator is the
output of the amplifier, so it can be expected that the effective d.c. gain of the amplifier
varies with the input mean value. As a result, the integrator pole error also depends
on the input mean value. The test of d.c. gain non-linearity for the first amplifier in a
second-order modulator simply relies on repeating the leakage test with two different
sequence mean values: typically a small one (denoted Qs) and a large one (denoted Ql).
In the ideal case, if the amplifier d.c. gain is linear, the obtained signatures should
follow the ratio:
signature_l / signature_s = Ql / Qs    (8.19)
(Figure 8.21: input-dependent clocking. For the input sequence 0 0 1 0 1 1 1 0, the
sampling and integrating phases derived from the master clock have doubled duration
for one-valued input samples.)
In the presence of non-linearity, the effective d.c. gain for the sequence of large mean
value, Al, should be lower than the effective gain for the sequence of small mean value,
As. As a result, the pole error for Ql should be greater than for Qs. Thus, it can be
written as
signature_l / signature_s = (Ql / Qs) . (As / Al)    (8.20)
Notice that the signature has to be acquired over a large enough number of points
that the deviation of the effective gain from the actual gain is sensed. Typically, if a
variation of 1 per cent of the effective gain from the nominal gain has to be sensed,
it could be necessary to acquire 100 times the number of points acquired to test
the nominal gain (i.e., the leakage). Fortunately, the distortion induced by the non-
linearity of the amplifier d.c. gain is also inversely proportional to the nominal d.c. gain.
In other words, if the d.c. gain is non-linear but very high, then the non-linearity will induce a
distortion that will fall below the noise floor of the converter. Only the non-linearities
associated with a low nominal d.c. gain will have a significant impact. Translating
this information to the test, it means that acquiring a very large number of points to
detect d.c. gain non-linearity makes little sense, as it corresponds to a deviation that
has no impact on the modulator precision.
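The ratio test can be illustrated numerically. Following the signature expression of Equation (8.17), signature is proportional to N*Q*(1 - p), and assuming, to first order, that the pole error (1 - p) is the reciprocal of the effective amplifier d.c. gain A(Q); all numbers below are hypothetical.

```python
# Numerical sketch of the d.c. gain non-linearity test, Equations (8.19)-(8.20).
# Assumptions: signature ~ 4*N*Q*(1 - p) as in Equation (8.17), and pole error
# 1 - p ~ 1/A for an effective d.c. gain A. All values are hypothetical.
N = 2**16
Qs, Ql = 1 / 3, 2 / 3            # small and large sequence mean values
As, Al = 1000.0, 800.0           # effective gains: Al < As because of non-linearity

def sig(Q, A):
    return 4 * N * Q * (1 / A)   # signature with pole error 1 - p = 1/A

ratio = sig(Ql, Al) / sig(Qs, As)
print(ratio, Ql / Qs)            # the ratio exceeds Ql/Qs, as in Equation (8.20)
assert abs(ratio - (Ql / Qs) * (As / Al)) < 1e-9
assert ratio > Ql / Qs           # non-linearity detected
```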
The test for integrator settling errors (which are related to amplifier slew rate
and gain-bandwidth product) introduced in Reference 44 is the same for both first-
and second-order ΣΔ modulators, but requires modification of the modulator clocking
phases. The test sequence is a pseudo-random sequence that can be generated with
a 6-bit LFSR, as shown in Figure 8.20. For a one-valued input sample, the clock
phases are doubled (their duration is two master clock periods), and for a zero-valued
input sample they remain unchanged (their duration is one master clock period). The
clocking modification is illustrated in Figure 8.21.
This input-dependent clocking unbalances the integrator settling error: for a
one-valued input sample the integrator has time to settle fully, but for a
zero-valued input sample it does not. The unbalanced input-referred difference is sensed by the
signature analyser and accumulated over N samples. To get rid of any offset, another
acquisition has to be done inverting the clocking rule: the phases are doubled for a
zero-valued input sample and remain the same for a one-valued input sample. The
results of the two acquisitions are combined to give the offset-insensitive signature:
signature = 4 e_r (N/2) + 3 e_r^2 (N/2)    (8.21)
(Figure 8.22: switched-capacitor modulator front-end modified for digital test input.
Switches S1-S3 form a duplicated DAC that applies the test sequence through the
reference voltages +/-Vref during the sampling phase; the nominal input is disabled
in test mode, S3 is disabled in normal operation, and phases 1 and 2 denote the
sampling and integrating clocks. The output is the modulator bit-stream.)
The term er corresponds to the settling error committed by the integrator for a one-
valued input sample and a zero-valued feedback. This corresponds to the largest step
that can be input to the integrator.
The clocking modification can be implemented on chip at low cost. A simple
finite-state machine is required that consists of an input-dependent frequency divider
(a 2-bit counter). The obtained digital signal is then converted to usable clock phases
by a standard non-overlapping clock generator.
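The clocking rule itself can be sketched in a few lines; this toy model (illustrative only) simply expands each test bit into sampling/integrating phase labels of the appropriate duration.

```python
# Toy model of the input-dependent clocking of Figure 8.21: every test bit is
# expanded into a sampling ('S') and an integrating ('I') phase whose duration
# is two master-clock periods for a one-valued sample and one period otherwise.
def stretch_clock(bits):
    phases = []
    for b in bits:
        half = 2 if b else 1                 # doubled phase duration for a '1'
        phases += ['S'] * half + ['I'] * half
    return phases

print(stretch_clock([0, 0, 1, 0]))
```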
In order to perform all the above-explained digital tests, the schematic of the
modulator should be modified, basically to allow digital test inputs [43-45]. There
exist two straightforward solutions. The first one consists of disabling the nominal
input of the integrators and duplicating the DAC to send the test sequence during the
sampling phase. This is illustrated in Figure 8.22, where switches S1-S3 form the
duplicated DAC.
This approach has the advantage of being very easy to implement. However, a
drawback is that it adds extra switch parasitics to the input node. To avoid this issue,
Leger and Rueda [43] propose to reuse the feedback DAC during the sampling phase
to input the test sequence. The nominal input is disconnected and the feedback switch
is kept closed during both sampling and integrating phases. Only the DAC control
has to be modified to accommodate the double-sampling regime. This is illustrated
in Figure 8.23.
This second solution does not alter the analogue signal path, but does put more
emphasis on the timing control of the DAC. Figure 8.23 shows a possible implementation
of the DAC control with transmission gates for the sake of clarity, but other
solutions may give better results. Notice that in both cases, the modifications can
easily be introduced in the modulator design flow and do not assume the addition of
complex elements such as buffers.
(Figure 8.23: reuse of the feedback DAC for test input. The nominal input is
disconnected, an extra switch closed in test mode routes the test sequence to the DAC
control, and the feedback path stays closed during both clock phases; +/-Vref set the
DAC levels and the output is the modulator bit-stream.)
It should be noticed that the two modifications described above can be realized
on any integrator, which means that a digital test sequence can be input at any
feedback node. In the case of a second-order modulator, by disabling the nominal
input of the second integrator and enabling the digital test input, an equivalent first-
order modulator is obtained. This is symbolized in the diagram of Figure 8.24, where
coefficients a2 and b2 are duplicated to represent the additional test-mode input. Thus,
the tests developed for first-order modulators can be used to test defects in the second
integrator of the reconfigured second-order modulator, without significant impact.
The proposed tests have been validated extensively by simulation. These simulations
have been carried out in MATLAB using a behavioural model that implemented
most of the non-idealities described in Reference 48. The test signatures were shown
to be accurate for the isolated effects, only varying the parameter of interest and
maintaining the others at their nominal values [43, 44]. Simulations varying all the
parameters at the same time have also been realized [45]. In that case, the whole set
of proposed tests were performed. It was shown that the whole set of tests provided
high fault coverage if the test limits were set to the expected values of the signatures,
according to the nominal values of the behavioural parameters. Actually, 100 per cent
of the faults that affected the behavioural parameters involved in the proposed tests
were detected.
As a test methodology, behavioural model-based test for ΣΔ modulators has
great potential, in particular for converters embedded in SoCs. Indeed, it opens
the door to structural tests that can relax hardware requirements but maintain a
close relation to the circuit functionality, and also offer device-debugging capabilities.
It can be considered as a trade-off between functional and defect-oriented tests. In
its current development state, digital tests have been proposed to simply evaluate
integrator leakage and settling errors. These digital tests do not alter the modulator
topology and rely on proper ΣΔ modulation. As a result, they have the ability,
(Figure 8.24: block diagrams of a second-order ΣΔ modulator built from discrete-time
integrators z^-1/(1 - z^-1) with coefficients a1, b1 and a2, b2. In test mode, the first
integrator and its coefficients form the disconnected part, while the duplicated
coefficients a2 and b2 inject the test input, leaving an equivalent first-order modulator
whose output bit-stream is observed.)
beyond the behavioural parameter evaluation, to detect any catastrophic error in the
modulator signal path. Research has to be done to detect more behavioural parameters,
such as branch coefficient mismatches or non-linear switch on-resistance, for
example. Similarly, the digital test methodology should be extended to higher-order
architectures.
8.6 Conclusions
In this chapter, we have tried to provide insights into ΣΔ modulator tests. It has
been shown that the ever-increasing levels of functionality integration, the ultimate
expression of which is SoC, raise new problems on how to test embedded components
such as ΣΔ modulators. These issues may even compromise the test feasibility, or at
least they may displace test time from its prominent position in the list of factors that
determine the overall test cost.
Table 8.1 summarizes the information contained in the chapter. It is clear that
considerable research is still necessary to produce a satisfying solution, but the first
steps are encouraging. In particular, we believe that solutions based on behavioural
model-based BIST may greatly simplify the test requirements.
Table 8.1 Pros and cons of the ΣΔ modulator test approaches discussed in this chapter

Characterization
- Static parameters (histogram, servo-loop). Pros: gives access to gain and offset
errors, INL and DNL. Cons: exhaustive characterization requires a large amount of
time; INL and DNL should be related to transitions in ΣΔ modulators; requires the
input of a precise stimulus.
- Dynamic parameters (sine-fit, FFT). Pros: provide important datasheet specifications
(SNR, THD, ENOB, etc.). Cons: requires complex DSP; requires the input of a
precise stimulus.

Functional test
- References 25-31. Pros: provides a solution for precise on-chip stimulus generation;
digital filter solution to relax data analysis. Cons: requires a DSP on chip; the area
overhead associated with stimulus generation may be large; the reuse of on-chip
resources makes concurrent test of other SoC parts difficult.
- Reference 32. Pros: the test stimulus is an unfiltered digital sequence. Cons: the
test signature aspect is not solved; the potential effect of unfiltered high-frequency
noise in the input on the result validity is not addressed.

Defect-oriented test
- Reconfiguration [33]. Pros: no extra hardware is required; any input signal can be
used. Cons: requires a strong reconfiguration of the modulator; may be difficult to
generalize to other architectures; the calculation of test coverage for any input is
difficult to perform.
- OBT [34]. Pros: no test stimulus is required; the data analysis is generic, as the
signatures are always amplitude and frequency; a good methodology has been
developed to apply the solution to any architecture with relative effort. Cons: the
calculation of test coverage for faults other than capacitor ratio errors is difficult.
- NTF [35]. Pros: the test stimulus does not have to be as precise as the modulator;
the methodology is applicable to any modulator. Cons: the defect coverage may be
low, as some non-idealities do not impact the NTF; the validity of the approach
should be better demonstrated.
- Pseudo-random [36]. Pros: the input is digital and relatively cheap to produce on
chip. Cons: the validity of the heuristic model obtained through simulation is
questionable; potential fault masking; only integrator leakage is addressed.

Model-based test
- Standard approach [37-40]. Pros: relaxes the number of required measurements; a
good methodology exists. Cons: the model is linear and performs well for small
variations but may be limited for other defects; the methodology is based on
standard specification measurements: it requires a precise stimulus and a DSP.
- Ad-hoc model-based BIST [41, 42]. Pros: the test stimulus can be partially generated
on chip; the data analysis is very simple and can be performed on chip; important
specifications can be derived. Cons: the model is taken a priori and not justified, so
its validity range may be questionable; only four parameters are obtained (gain,
offset, second-order distortion and third-order distortion); the test stimulus
generation still requires off-chip components.
- Behavioural model-based BIST [43-45]. Pros: the test stimulus is digital; the test
requires few resources and simple modulator modifications; the test signatures can
be used for device debugging; the test strategy can easily be integrated in the design
flow; the model is the behavioural model validated by design; the validity has been
proven for cascaded modulators of first- and second-order sections. Cons: research
is still necessary to test more behavioural parameters; the extension of the approach
to other ΣΔ modulators would require further research.
8.7 References
22 Azais, F., Bernard, S., Bertrand, Y., Renovell, M.: Implementation of a linear
histogram BIST for ADCs, Proceedings of the Design, Automation and Test in
Europe Conference, Munich, Germany, March 2001, pp. 590-5
23 Azais, F., Bernard, S., Bertrand, Y., Renovell, M.: Hardware resource minimization
for histogram-based ADC BIST, Proceedings of the VLSI Test Symposium,
Montreal, Canada, April/May 2000, pp. 247-52
24 Arabi, K., Kaminska, B.: Efficient and accurate testing of ADCs using oscillation
test method, Proceedings of the European Design and Test Conference, Paris,
France, March 1997, pp. 348-52
25 Lu, A., Roberts, G.W., Johns, D.A.: A high-quality analog oscillator using
oversampling D/A conversion techniques, IEEE Transactions on Circuits and Systems
II, 1994;41:437-44
26 Lin, W., Liu, B.: Multitone signal generator using noise-shaping technique, IEE
Proceedings Circuits, Devices and Systems, 2004;151 (1):25-30
27 Dufort, B., Roberts, G.W.: On-chip analog signal generation for mixed-signal
built-in self-test, IEEE Journal of Solid-State Circuits, 1999;34 (3):318-30
28 Hafed, M., Roberts, G.: Sigma-delta techniques for integrated test and measurement,
Proceedings of the Instrumentation and Measurement Technology Conference,
Budapest, Hungary, May 2001, pp. 1571-6
29 Rebai, C., Dallet, D., Marchegay, P.: Signal generation using single-bit sigma-
delta techniques, IEEE Transactions on Instrumentation and Measurement,
2004;53 (4):1240-4
30 Toner, M.F., Roberts, G.W.: A BIST scheme for an SNR test of a sigma-delta
ADC, Proceedings of the International Test Conference, Baltimore, MD, October
1993, pp. 805-14
31 Toner, M.F., Roberts, G.W.: A frequency response, harmonic distortion and
intermodulation distortion test for BIST of a sigma-delta ADC, IEEE Transactions
on Circuits and Systems II, 1996;43 (8):608-13
32 Ong, C.K., Cheng, K.T.: Self-testing second-order delta-sigma modulators using
digital stimulus, Proceedings of the VLSI Test Symposium, Monterey, CA, April/May
2002, pp. 123-8
33 Mir, S., Rueda, A., Huertas, J.L., Liberali, V.: A BIST technique for sigma-delta
modulators based on circuit reconfiguration, Proceedings of the International Mixed
Signal Testing Workshop, Seattle, WA, June 1997, pp. 179-84
34 Huertas, G., Vazquez, D., Rueda, A., Huertas, J.L.: Oscillation-based test in
oversampling A/D converters, Microelectronics Journal, 2003;34 (10):927-36
35 de Venuto, D., Richardson, A.: Testing high-resolution ADCs by using
the noise transfer function, Proceedings of the European Test Symposium, Corsica,
France, May 2004, pp. 164-9
36 Ong, C.K., Cheng, K.T.: Testing second-order delta-sigma modulators using
pseudo-random patterns, Microelectronics Journal, 2002;33:807-14
37 Stenbakken, G.N., Souders, T.M.: Linear error modeling of analog and mixed-
signal devices, Proceedings of the International Test Conference, Nashville, TN,
October 1991, pp. 573-81
38 Stenbakken, G.N., Liu, H.: Empirical modeling methods using partial data,
IEEE Transactions on Instrumentation and Measurement, 2004;53 (2):271-6
39 Capofreddi, P.D., Wooley, B.A.: The use of linear models in ADC testing,
IEEE Transactions on Circuits and Systems I, 1997;44 (12):1105-13
40 Wegener, C., Kennedy, M.P.: Model-based testing of high-resolution ADCs,
Proceedings of the International Symposium on Circuits and Systems, 2000;1:335-8
41 Sunter, S.K., Nagi, N.: A simplified polynomial-fitting algorithm for DAC and
ADC BIST, Proceedings of the International Test Conference, Washington, DC,
November 1997, pp. 389-95
42 Roy, A., Sunter, S.: High accuracy stimulus generation for ADC BIST, Proceedings
of the International Test Conference, Baltimore, MD, October 2002, pp. 1031-9
43 Leger, G., Rueda, A.: Digital test for the extraction of integrator leakage in 1st-
and 2nd-order ΣΔ modulators, IEE Proceedings Circuits, Devices and Systems,
2004;151 (4):349-58
44 Leger, G., Rueda, A.: Digital BIST for settling errors in 1st- and 2nd-order ΣΔ
modulators, Proceedings of the IEEE IC Test Workshop, Madeira Island, Portugal,
July 2004, pp. 3-8
45 Leger, G., Rueda, A.: Digital BIST for amplifier parametric faults in ΣΔ modulators,
Proceedings of the International Mixed Signals Testing Workshop, Cannes,
France, June 2005, pp. 22-8
46 Malcovati, P., Brigati, S., Francesconi, F., Maloberti, F., Cuinato, P., Baschirotto,
A.: Behavioral modeling of switched-capacitor sigma-delta modulators, IEEE
Transactions on Circuits and Systems I, 2003;50 (3):352-64
47 Schreier, R., Temes, G.C.: Understanding Delta-Sigma Data Converters (IEEE
Press, New York, 2005)
48 Castro, R., et al.: Accurate VHDL-based simulation of ΣΔ modulators,
Proceedings of the International Symposium on Circuits and Systems, 2003;IV:632-5
49 Medeiro, F., Pérez-Verdú, B., Rodríguez-Vázquez, A.: Top-Down Design of High-
Performance Sigma-Delta Modulators (Kluwer, Amsterdam, The Netherlands,
1999)
50 Vinnakota, B.: Analog and Mixed-Signal Test (Prentice Hall, Englewood Cliffs,
NJ, 1998)
51 Soma, M.: Fault models for analog-to-digital converters, Proceedings of the IEEE
Pacific Rim Conference on Communications, Computers and Signal Processing,
Victoria, Canada, May 1991, pp. 503-5
52 Schreier, R., Snelgrove, W.M.: ΣΔ modulation is a mapping, Proceedings of the
International Symposium on Circuits and Systems, Singapore, June 1991, pp.
2415-18
Chapter 9
Phase-locked loop test methodologies
Current characterization and production test practices
Martin John Burbidge and Andrew Richardson
Phase-locked loops (PLLs) are incorporated into almost every large-scale mixed-
signal and digital system on chip (SoC). Various types of PLL architectures exist
including fully analogue, fully digital, semi-digital and software based. Currently,
the most commonly used PLL architecture for SoC environments and chipset appli-
cations is the charge-pump (CP) semi-digital type. This architecture is commonly
used for clock-synthesis applications, such as the supply of a high-frequency on-chip
clock, which is derived from a low-frequency board-level clock. In addition, CP-
PLL architectures are now frequently used for demanding radio-frequency synthesis
and data synchronization applications. On-chip system blocks that rely on correct
PLL operation may include third-party intellectual property cores, analogue-to-digital
conversions (ADCs), digital-to-analogue conversions (DACs) and user-dened logic.
Basically, any on-chip function that requires a stable clock will be reliant on cor-
rect PLL operation. As a direct consequence it is essential that the PLL function
is reliably veried during both the design and debug phase and through production
testing.
This chapter focuses on test approaches related to embedded CP-PLLs used for
the purpose of clock generation for SoC. However, methods discussed will generally
apply to CP-PLLs used for other applications.
(Figure 9.1: charge-pump PLL block diagram. The phase detector (PFD, gain KPD)
compares PLLREF with the feedback clock and drives UP/DN charge-pump currents
+/-ICH into the loop filter F(s); the resulting control voltage Vc sets the
voltage-controlled oscillator (VCO, gain KVCO), whose output is divided back to
the PFD.)
The PFD compares the edges of the reference clock and the VCO clock (feedback
clock) and applies charge-up or charge-down pulses to the CP that are proportional
to the timing difference. The pulses are most commonly used to switch current sources,
which charge or discharge a capacitor in the LF. The voltage at the output of the LF
is applied to the input of the VCO, which changes oscillation frequency as a function
of its input voltage. Note that ideally, when the feedback and reference clocks are
equal, that is, they are both phase and frequency aligned, the CP transistors will
operate in such a way as to maintain the LF voltage at a constant value. In this
condition the PLL is locked, which implies that the output signal phase and frequency
is aligned to the input within a certain limit. Note that the division block in the
feedback path makes the PLL up-convert: the VCO output frequency settles at an
integer multiple of the frequency present on the reference input (PLLREF). It follows
that when the PLL is in its locked state:

F_VCO = N . F_PLLREF    (9.1)
In Figure 9.1, the following conversion gains are used for the respective blocks:
KPD = phase detector gain = Ich/2π (A rad^-1)
F(s) = LF transfer function
KVCO = VCO gain (rad s^-1 V^-1)
Using feedback theory, the generalized transfer equation in the Laplace domain
for the system depicted in Figure 9.1 is

H(s) = θo(s)/θi(s) = N.KPD.KVCO.F(s) / (sN + KPD.KVCO.F(s))    (9.2)

Note that by substituting suitable values for N and F(s), Equation (9.2) will generally
apply to a PLL system of any order [1]. Specific transfer equations are provided as part
of the LF description.
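As a worked example of Equation (9.2), inserting the common passive loop filter F(s) = R + 1/(sC) yields a second-order loop with natural frequency wn = sqrt(K/(N*C)) and damping zeta = R*C*wn/2, where K = KPD*KVCO. The sketch below evaluates these for hypothetical component values:

```python
import math

# Second-order CP-PLL parameters, obtained by inserting the passive filter
# F(s) = R + 1/(sC) into Equation (9.2). All component values are hypothetical.
Ich = 100e-6                     # charge-pump current (A)
Kpd = Ich / (2 * math.pi)        # phase-detector gain (A/rad)
Kvco = 2 * math.pi * 50e6        # VCO gain (rad/s/V)
N = 10                           # feedback division ratio
R, C = 10e3, 100e-12             # loop-filter components

K = Kpd * Kvco                   # forward gain
wn = math.sqrt(K / (N * C))      # natural frequency (rad/s)
zeta = R * C * wn / 2            # damping factor
print(wn / (2 * math.pi), zeta)  # natural frequency in Hz, and damping
```

With these values the loop has a natural frequency of roughly 356 kHz and a damping factor slightly above one.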
(Figure 9.2: phase-frequency detector. Two D-type flip-flops are clocked by PLLREF
and PLLFB; their outputs PFDUP and PFDDN drive the CP, and a reset path R clears
both flip-flops.)
It must be noted that, even for the case of a CP-PLL, the implementation details
for the blocks may vary widely; however, in many applications, designers attempt
to design the PLL to exhibit the response of a second-order system. This is owing
to the fact that second-order systems can be characterized using well-established
techniques. The response of a second-order CP-PLL will be generally considered in
this chapter [2-4].
A brief description of each of the blocks now follows. Further basic principles of
CP-PLL operation are given in References 1, 3, 5 and 6.
1. θFB(t) leads θi(t): the LF voltage falls and the VCO frequency falls to try and reduce
the difference between θi(t) and θFB(t).
2. θi(t) leads θFB(t): the LF voltage rises and the VCO frequency rises to try and reduce
the difference between θi(t) and θFB(t).
3. θi(t) coincident with θFB(t): the PLL is locked and in its stable state.
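The three cases above can be captured in a toy abstraction in which the sign of the phase difference selects which charge-pump command is asserted, with a pulse width proportional to the timing difference. This is purely illustrative, not a circuit-level model:

```python
# Toy abstraction of the three PFD cases: the sign of the phase difference
# selects whether UP or DN is asserted, with a width proportional to the
# timing difference. Illustrative only.
def pfd(theta_i, theta_fb):
    dphi = theta_i - theta_fb
    return (dphi, 0.0) if dphi >= 0 else (0.0, -dphi)   # (UP, DN) widths

assert pfd(1.5, 1.0) == (0.5, 0.0)   # input leads: UP pulses raise the VCO frequency
assert pfd(1.0, 1.5) == (0.0, 0.5)   # feedback leads: DN pulses lower it
assert pfd(2.0, 2.0) == (0.0, 0.0)   # aligned: locked, no net correction
```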
(Figure 9.3: Operation of the PFD and associated increase in VCO output frequency.
Waveforms of θi(t) and θFB(t) are shown with the lead and lag conditions, the resulting
PFDUP and PFDDN pulses, and the VCO output frequency.)
(Figure: charge pump and loop filter. Transistors M2 and M1, switched by UP and DN,
steer the currents I1 and I2 from Vdd onto the filter formed by R1 and C1 to develop
the control voltage VCTRL.)
alteration of the VCO output frequency, this action will be observed as PLL output
jitter. Consequently, PLL designers usually spend a great deal of design effort in
screening this node. In addition, correct LF operation is essential if the PLL is to
function properly over all desired operational ranges. Embedded LFs usually include
one or more large area MOSFET capacitors. These structures may be sensitive to spot
defects, such as gate oxide shorts [7].
Matching of the CP currents is also a critical part of PLL design. Leakage and
mismatch in the CP will lead to deterministic jitter on the PLL output.
[Figure: three-stage ring VCO: stages A1 to A3 built from transistor pairs M1/M2, M3/M4 and M5/M6, each node loaded by a 5 fF capacitor (C1 to C3), with output Fout]
The transfer gain of the VCO is found from the ratio of output frequency deviation
to a corresponding change in control voltage. That is
KVCO = (F2 − F1) / (V2 − V1)   (MHz/V or rad/s/V)   (9.7)
where F2 is the output frequency corresponding to V2 and F1 is the output frequency
corresponding to V1. An example of experimental measurement of the VCO gain is
given in Section 9.2.1.1.
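As a numerical illustration of Equation (9.7), the gain is simply the slope between two measured points on the VCO transfer characteristic. A short sketch in Python (the measurement values are hypothetical, not taken from the chapter):

```python
def vco_gain(f1_hz, f2_hz, v1, v2):
    """Two-point VCO transfer gain, K_VCO = (F2 - F1) / (V2 - V1)."""
    return (f2_hz - f1_hz) / (v2 - v1)

# Hypothetical bench measurement: 100 MHz at 1.0 V and 140 MHz at 1.8 V
k_vco = vco_gain(100e6, 140e6, 1.0, 1.8)  # about 50 MHz/V
```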
* MOS transistor catastrophic faults: gate-to-drain shorts; gate-to-source shorts; drain opens;
source opens.
overshoot
loop bandwidth (3 dB)
output jitter.
All of the above parameters are interrelated to a certain extent. For example, the
loop bandwidth will have an effect on the PLL output jitter. However, loop bandwidth,
lock time, overshoot and step response time are also directly related to the natural
frequency and damping of the system. It must be mentioned that certain non-idealities
or faults may contribute to further jitter on the PLL output or increased lock time.
Examples of typical measurements for these parameters are provided in later sections.
Table 9.1 provides an initial analysis of testing issues for the PLL sub-blocks.
Fault models are suggested for use in fault coverage calculations for each of the
blocks. Further research and justification for the use of fault models in the key PLL
sub-blocks are given in References 7–13.
Note that the fault models suggested in Table 9.1 can also be used to assess
the fault coverage of built-in self-test (BIST) techniques. It should be noted, however,
that many fault types are related to the structural realization of the PLL; hence these
guidelines should be used with care. Faults that may be implementation-dependent
include:
[Figure: edge jitter shown as increasing negative and positive deviation about the nominal timing]
lead to excessive jitter in the PLL output. Jitter may be divided into two main classes
as follows:
In both of the above equations, N represents the total number of samples taken and
ΔTi represents the time dispersion of each individual sample.
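Assuming the conventional definitions of the two measures, RMS jitter as the root mean square of the N dispersions ΔTi and peak-to-peak jitter as their total spread (an assumption about the exact form of the equations referred to above), the calculations can be sketched as:

```python
import math

def rms_jitter(dt_samples):
    """Root mean square of the per-sample time dispersions delta-T_i
    (assumed conventional form of the RMS jitter equation)."""
    return math.sqrt(sum(dt * dt for dt in dt_samples) / len(dt_samples))

def peak_to_peak_jitter(dt_samples):
    """Total spread of the dispersions: max minus min."""
    return max(dt_samples) - min(dt_samples)
```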
For clock signals, jitter measurements are often classified in terms of short-term
jitter and long-term jitter. These terms are further described below:
Short-term jitter: This covers short-term variations in the clock signal output period.
Commonly used terms include:
Period jitter: This is defined as the maximum or minimum deviation (whichever
is the greater) of the output period from the ideal period.
Cycle-to-cycle jitter: This is defined as the period difference between consecutive
clock cycles, that is cycle-to-cycle jitter = [period(n) − period(n − 1)]. It must
be noted that cycle-to-cycle jitter represents the upper bound for the period jitter.
Duty cycle distortion jitter: This is the change in the duty cycle relative to the
ideal duty cycle. The relationship often quoted for duty cycle is
Duty_cycle = Highperiod / (Highperiod + Lowperiod) × 100 (%)   (9.10)
where Highperiod is the time duration when the signal is high during one cycle of
the waveform and Lowperiod is the time duration when the signal is low over one
period of the measured waveform. In an ideal situation the duty cycle will be 50
per cent; the duty cycle distortion jitter measures the deviation of the output
waveform duty cycle from this ideal position. A typical requirement for duty cycle
jitter is that it should lie within 45 to 55 per cent [14, 15].
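The short-term metrics above reduce to simple arithmetic on a record of measured periods and high/low times. A minimal sketch (Python; function names and sample values are illustrative, not from the chapter):

```python
def period_jitter(periods, ideal_period):
    """Period jitter: largest absolute deviation of any measured period
    from the ideal period."""
    return max(abs(p - ideal_period) for p in periods)

def cycle_to_cycle_jitter(periods):
    """Cycle-to-cycle jitter: largest magnitude of
    period(n) - period(n - 1) over consecutive cycles."""
    return max(abs(b - a) for a, b in zip(periods, periods[1:]))

def duty_cycle_percent(high_period, low_period):
    """Duty cycle per Equation (9.10): high time over the whole period."""
    return 100.0 * high_period / (high_period + low_period)

def within_duty_spec(high_period, low_period, lo=45.0, hi=55.0):
    """Typical 45-55 per cent acceptance window [14, 15]."""
    return lo <= duty_cycle_percent(high_period, low_period) <= hi
```

For example, periods of 1.00, 1.02 and 0.97 ns against a 1 ns ideal give a period jitter of 30 ps and a cycle-to-cycle jitter of 50 ps.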
The above jitter parameters are often quoted as being measured in terms of degrees
deviation with respect to an ideal waveform. Another metric often encountered is that
of a unit interval (UI), where one UI is equivalent to 360°. A graphical representation
of a UI is given in Figure 9.7.
Long-term jitter: Provides a measure of the long-term stability of the PLL output;
that is, it represents the drift of the clock signal over time. It is usually specified
over a certain time interval (usually a second) and expressed in parts per million. For
example, a long-term jitter specification of 1 ppm would mean that a signal edge is
allowed to drift by 1 µs from the ideal position in 1 s.
[Figure 9.7: the unit interval: 1 UI = 360°, 3/4 UI = 270°, 1/2 UI = 180°, 1/4 UI = 90°]
This section will explain traditional or commonly employed CP-PLL test techniques
that are used for the evaluation of PLL operation. Many of the techniques will be
applicable for an analogue or semi-digital type of PLL or CP-PLL, however, it must
be recognized that although the basic principles may hold, the test stimuli may have
to undergo slight modification for the fully analogue case. The section is subdivided
into two subsections, focusing on characterization and production test techniques,
respectively.
Further tests are often carried out that are based upon structural decompo-
sition of the PLL into its separate building blocks. These techniques are often
used to enhance production test techniques [19, 20]. Typical tests are
CP current monitoring is used to ascertain the CP gain and hence the phase
detector transfer characteristic. It is also used to monitor for excessive CP
mismatch.
Direct control of the VCO is used to allow estimation of the VCO transfer
characteristic.
The decomposition tests are also often coupled with some form of noise immu-
nity test that allows the designer to ascertain the sensitivity of the VCO or CP
structures to noise on the supply rails of the PLL. As the CP currents and
VCO control inputs are critical controlling nodes of the PLL, and determine
the instantaneous output frequency of the PLL, any coupling of noise onto
these nodes will cause jitter on the PLL output. Thus, noise immunity tests are
particularly important in the initial characterization phases. A more detailed
discussion of the tests now follows.
[Figure 9.9: frequency lock test: reference clock applied to the PLL, with the output observed from test start T0 until lock is achieved at Tlock]
stable locked condition for a given operational configuration. Stability criteria will
be determined by the application and may consist of an allowable phase or frequency
error at the time of measurement. Typically, this test is carried out in conjunction
with a maximum specified time criterion; that is, if the PLL has failed to achieve lock
after a specified time, then the PLL is faulty. The start of test initiation for the FLT
is usually taken from system startup. It is common for this test to be carried out for
various PLL settings, such as, maximum and minimum divider ratios, different LF
settings, and so on. Owing to its simplicity and the fact that it will uncover many hard
faults and some soft faults in the PLL, this test is often used in many production test
applications. A graphical description of the FLT is given in Figure 9.9.
In the above diagram, T0 represents the start of the test and Tlock indicates the
time taken to achieve lock.
In many applications, the output frequency is simply measured after a predeter-
mined time; this is often the case in automated-test-equipment-based test schemes,
where the tester master clock would be used to determine the time duration. Alter-
natively, in some situations, the PLL itself is fitted with LD (lock detect) circuitry
that produces a logic signal when the PLL has attained lock [20]. In this situation, a
digital counter is started at T0 and stopped by the LD signal, thus enabling accurate
lock time calculations to be made. Note that LD circuitry is not test specific, as it
is often included in PLL circuits to inform other system components when a stable
clock signal is available. However, sometimes an LD connection is fitted solely
for design-for-testability (DfT) purposes. It must also be mentioned that, in certain
PLL applications, it may be acceptable to access the LF node. If this is the case, the
approximate settling time of the PLL can be monitored from this node. This technique
is sometimes used for characterization of chipset PLLs; however, due to problems
Figure 9.10 Basic equipment setup for PLL step response test
outlined in the previous sections, it appears to be less commonly used for test of fully
embedded PLLs.
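The counter-based scheme described earlier (a digital counter started at T0 and stopped by the LD signal) can be sketched as follows; the signal names and once-per-reference-cycle sampling are illustrative assumptions, not details from the chapter:

```python
def lock_time_cycles(ld_samples):
    """Reference-clock cycles from test start T0 until the lock-detect (LD)
    signal first asserts; ld_samples holds LD sampled once per reference
    cycle. Returns None if lock is never observed (a failing part)."""
    for n, ld in enumerate(ld_samples):
        if ld:
            return n
    return None

def lock_time_seconds(ld_samples, f_ref):
    """Convert the cycle count to seconds using the reference frequency."""
    cycles = lock_time_cycles(ld_samples)
    return None if cycles is None else cycles / f_ref
```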
Step response test. The step response monitoring of PLLs is a commonly used bench
characterization technique [2]; the basic hardware set-up is shown in Figure 9.10.
Further details relating to Figure 9.10 are given below:
The input signal step is applied by using a signal generator set-up capable of
producing a frequency shift keying (FSK) signal. The signal is toggled periodically
between F1 and F2 . Note that a suitable toggling frequency will allow the system
to reach the steady-state condition after each step transition.
If an external LF is used, it is sometimes possible to measure the output response
from the LF node. The signal measured at this node will be directly proportional
to the variation in output frequency that would be observed at the PLL's output.
Also note that as the VCO output frequency is directly proportional to the LF volt-
age, the step response can also be measured at the VCO output. In fact, this is the
technique that must be employed when LF access is prohibited. However, this tech-
nique can only be carried out if test equipment with frequency trajectory (FT) probing
capabilities is available. This type of equipment allows a plot or oscilloscope trace of
instantaneous frequency against time to be made, thus providing a correct indication
of the transient step characteristics. Many high-specification bench-test equipment
products incorporate FT functions, but it is often hard to incorporate the technique
into a high-volume production test plan.
An alternative method of introducing a frequency step to the system involves
switching the feedback divider between N and N + 1. This method will produce
an output response from the PLL that is equivalent to the response that would be
observed for application of an FSK input frequency step equal to the PLL's reference
frequency. The technique can be easily verified with reference to Equation (9.1).
[Figure 9.11: second-order step response: input stepped by ΔV (ΔF) from Vstart (Fstart) to Vstop (Fstop); the output settles to VoutSS (FoutSS) with peak overshoot A1, undershoot A2, peak spacing T and the settling time marked; frequency (MHz) against time (s)]
The step response can be used to make estimates of the parameters outlined
in Section 9.1. To further illustrate the technique, a graphical representation for a
second-order system step response is provided in Figure 9.11.
In Figure 9.11, the dashed line indicates the application of the input step parameter,
and the solid line indicates the output response. Note that the parameters of interest
are shown as V-parameters and F-parameters to indicate the similarity between a
common second-order system response and a second-order PLL system response. An
explanation of the parameters is now given.
Vstart (Fstart): the voltage or frequency before the input step is applied.
Vstop (Fstop): the final value of the input stimulus signal.
ΔV (ΔF): the amount by which the input signal is changed.
VoutSS (FoutSS): the final steady-state output value of the system.
Settling time: the amount of time taken, after the application of the input step, for the system to reach its steady-state value.
A1: peak overshoot of the signal.
A2: peak undershoot of the signal.
T: time difference between consecutive peaks of the transient response.
Direct measurement of these parameters can be used to extract ωn and ζ. Estimation
of the parameters is carried out using the following formulas, which are taken from
Reference 2 and are also found in many control texts [4]. The formulas are valid
only for an underdamped system, that is, one in which A1, A2 and hence T can be
measured. If this is not the case, other parameters, such as delay time or rise time,
can be used to assess the system performance. This is true for many applications,
when what is really desired is the overall knowledge of the transient shape of the step
response.
The damping factor ζ can be found as follows:

ζ = ln(A1/A2) / [π² + (ln(A1/A2))²]^(1/2)   (9.13)
The natural frequency ωn can be found as follows:

ωn = 2π / (T · √(1 − ζ²))   (9.14)
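Equations (9.13) and (9.14) apply directly to the measured A1, A2 and T; a small sketch (the sample values are illustrative):

```python
import math

def damping_factor(a1, a2):
    """Equation (9.13): zeta from the peak overshoot A1 and undershoot A2."""
    d = math.log(a1 / a2)
    return d / math.sqrt(math.pi ** 2 + d ** 2)

def natural_frequency(a1, a2, t_peaks):
    """Equation (9.14): omega_n in rad/s, with t_peaks the time T between
    consecutive peaks of the transient response."""
    z = damping_factor(a1, a2)
    return 2.0 * math.pi / (t_peaks * math.sqrt(1.0 - z ** 2))
```

With A1 twice A2 and peaks 1 ms apart, this gives ζ of roughly 0.215 and ωn of roughly 6.4 krad/s.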
Furthermore, PLL system theory and control system theory texts [2–4] also con-
tain normalized frequency and phase step response plots, where the amplitude and
time axis are normalized to the natural frequency of the system. Design engineers
commonly employ these types of plots in the initial system design phase.
of the gain will tend to one (0 dB) as the frequency of the excitation signal is reduced.
The slope of this decrease will be determined by the damping of the system. In a
similar manner, the relative phase lag between the input and output of the system will
tend to 0°. Note: for a PLL, the magnitude of the response within the loop bandwidth
can be assumed to be unity [8]. As explained in later sections, this is an important
observation when considering PLL test.
ωp. This is the frequency where the magnitude of the system response is at its maxi-
mum. It is directly analogous to the natural frequency (ωn) of the system. In addition,
the relative magnitude of the peak (above the unity gain value) can be used to determine
the damping factor (ζ) of the system. Relationships between ζ, the decibel magnitude
and the normalized radian frequency are available in many texts concerning control
or PLL theory [2–4].
ω3dB. Following Reference 1, this defines the one-sided loop bandwidth of the
PLL. The PLL will generally be able to track frequency variations of the input signal
that are within this bandwidth and reject variations that are above this bandwidth.
For normally encountered second-order systems, that is, ones with voltage,
current or force inputs and corresponding voltage, current or displacement outputs,
the Bode plot is constructed by application of a sinusoidally varying input signal at
different frequencies; the output of the system is then compared to the input signal to
produce magnitude and phase response information. However, the situation is slightly
different for the PLL systems we are considering. In this situation, the normal input
and output signals are considered to be continuous square wave signals. The PLL's
function is to phase align the input and output signals of the PLL system.
It can be seen from Equation (9.2) and Figure 9.1 that to experimentally measure
the PLL transfer function we need to apply a sinusoidal variation of phase about the
nominal phase of the input signal θi(t), that is, we sinusoidally phase modulate the
normal input signal. The frequency of the phase change is then increased and the
output response is measured. The block diagram for an experimental bench type test
set-up is shown in Figure 9.13.
For the above test, the output response can be measured at the LF node or the VCO
output. The output of the LF node will be a sinusoidally varying voltage. The output of
the VCO will be a frequency (or phase) modulated signal. Magnitude measurements
taken at a sufficiently low point below ωp can be approximated to unity gain, i.e. the
PLL exhibits 100 per cent feedback within the loop bandwidth. Additionally, the
phase lag can be approximated to 0°. This means that all measurements taken from
the PLL output can be referenced to the first measurement. Figure 9.14 shows the
method for phase measurement calculation between input and output.
T-cycle represents the time measured for one complete cycle of the input wave-
form. ΔT represents the measured time difference between the input and output
waveforms.
Using T-cycle and ΔT, the phase difference (Δφ) between the input and output
signal can be estimated using the following relationship:

Δφ(jω) = (ΔT / T-cycle) × 360°   (9.15)
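Equation (9.15) in code form (the sample values are illustrative):

```python
def phase_difference_degrees(delta_t, t_cycle):
    """Equation (9.15): phase difference from the measured time offset
    delta_t between input and output and the input cycle time t_cycle."""
    return 360.0 * delta_t / t_cycle

# A quarter-cycle offset corresponds to 90 degrees
lag = phase_difference_degrees(0.25e-6, 1.0e-6)
```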
[Figure 9.13: bench set-up for transfer function measurement: a phase modulation generator drives the PFD, LF and VCO chain with a phase-modulated input signal at one modulation frequency]
[Figure 9.14: input and output waveforms against a phase reference, showing one input cycle (T-cycle) and the input-to-output offset ΔT]
For measurement of the magnitude response it must be recalled that, well within
the loop bandwidth, the PLL response can be approximated as unity. It follows that
an initial output signal resulting from an input signal, whose modulation frequency
is sufficiently low, can be taken as a datum measurement. Thus, all subsequent mea-
surements can be referenced to this initial output measurement, and knowledge of the
input signal is not required. For example, if an initial measurement was taken for
a modulation of 100 Hz, subsequent measurements could be carried out using the
following relationship:

|H(jω)| (dB) = 20 log10(Vm100Hz / VmN)   (9.16)
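Equation (9.16) referenced to the low-frequency datum can be sketched as follows (as printed, each measurement Vm_N is referenced to the 100 Hz datum; the variable names are illustrative):

```python
import math

def magnitude_db(vm_datum, vm_n):
    """Equation (9.16): response magnitude in dB, with the measurement
    Vm_N referenced to the low-frequency datum (e.g. taken at 100 Hz)."""
    return 20.0 * math.log10(vm_datum / vm_n)
```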
absolute CP current
CP mismatch
VCO gain
VCO linearity.
If direct access to the LF control node is permitted, all of these tests can be
enabled using relatively simple methods. Also, to allow these tests to be carried out,
extensive design effort goes into construction of access structures that will place
minimal loading on the LF node. However, injection of noise into the loop is still a
possibility and the technique seems to be less commonly used. A brief explanation
of common test methods is now provided.
CP measurements:
A typical test set-up for measuring the CP current is shown in Figure 9.15.
Here, CPU is the up current control input, CPD is the down current control input,
TEST is the test initiation signal that couples the LF node to the external pin via a
transmission gate network, Rref is an external reference resistor and Vref is the voltage
generated across Rref due to the CP current. The tester senses Vref and thus the CP
current can be ascertained. A typical test sequence for the CP circuitry may contain
[Figure 9.15: CP test set-up: under TEST, a transmission gate network couples the CP/LF node of the PLL system to an external pin, where the tester measures Vref across the reference resistor Rref; CPU and CPD control the up and down current sources]
5. Activate the down current source by disabling CPU and enabling CPD.
6. Wait a sufficient time for the network to settle.
7. Measure the resultant down current in the CP using the relationship:

ICPD = Vrefdn / Rref   (9.18)
An estimate of the CP current mismatch can be found by subtracting the results of
Equations (9.17) and (9.18). Also, the CPU and CPD inputs can often be indirectly
controlled via the PFD inputs, thus removing the necessity of direct access to these
points and additionally providing some indication of correct PFD functionality.
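Equations (9.17) and (9.18) and the mismatch estimate amount to Ohm's law on the sensed voltage; a sketch with hypothetical values:

```python
def cp_current(v_ref, r_ref):
    """CP current sensed as the voltage Vref developed across the external
    reference resistor Rref (Equations (9.17)/(9.18)): I = Vref / Rref."""
    return v_ref / r_ref

def cp_mismatch(v_ref_up, v_ref_dn, r_ref):
    """Mismatch estimate: up current minus down current."""
    return cp_current(v_ref_up, r_ref) - cp_current(v_ref_dn, r_ref)

# Hypothetical: 0.50 V (up) and 0.48 V (down) across a 10 kOhm reference
i_mismatch = cp_mismatch(0.50, 0.48, 10e3)  # about 2 uA
```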
Note that in the previous description, the test access point is connected to an
inherently capacitive node consisting of the VCO input transistor and the LF capaci-
tors, respectively. In consequence, if no faults are present in these components, there
should be negligible current flow through their associated networks. It follows that
this type of test will give some indication of the LF structure and interconnect faults.
[Figure 9.16: VCO test set-up and transfer characteristic: with the CP current sources isolated (CPU, CPD), forcing V1 and V2 on the control node gives output frequencies F1 and F2; ideal and non-ideal transfer functions are contrasted]
VCO measurements:
A typical test set-up to facilitate the measurement of the VCO gain and linearity is
shown in Figure 9.16, where CPU is the CP up current control input, CPD is the CP
down current control input and TEST is the test initiation control input.
A typical test sequence would be carried out as follows:
1. Initially, both CP control inputs are set to open the associated CP switch
transistors. This step is carried out to isolate the current sources from the
external control pin.
2. A voltage V1 is forced onto the external pin.
3. The external pin is connected to the LF node by activation of the TEST signal.
4. After settling, the corresponding output frequency, F1, of the VCO is measured.
5. A higher voltage, V2, is then forced onto the external pin.
6. After settling, the corresponding output frequency, F2, of the VCO is measured.
In the above sequence of events, the values chosen for the forcing voltages will be
dependent on the application.
After taking the above measurements, the VCO gain can be determined using the
following relationship:

KVCO = (F2 − F1) / (V2 − V1)   (Hz/V)   (9.19)
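Extending Equation (9.19) across a sweep of forced voltages also yields a simple linearity figure: the worst deviation of any measured point from the straight line joining the sweep endpoints. A sketch (this particular linearity metric is a common choice, not one prescribed by the chapter; data values are hypothetical):

```python
def vco_gain_and_linearity(voltages, freqs):
    """Endpoint gain per Equation (9.19) plus a linearity figure: worst
    deviation of any measured point from the endpoint straight line."""
    k = (freqs[-1] - freqs[0]) / (voltages[-1] - voltages[0])
    worst = max(abs(f - (freqs[0] + k * (v - voltages[0])))
                for v, f in zip(voltages, freqs))
    return k, worst

# Hypothetical sweep: 1 MHz bow at the mid-point of a 50 MHz/V characteristic
k, dev = vco_gain_and_linearity([1.0, 1.5, 2.0], [100e6, 126e6, 150e6])
```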
[Figure 9.18: gated frequency counting: a main gate (start/stop) derived from N cycles of the conditioned input frames the output signal; successive gate intervals here capture counts of 7 and 6 transitions]
This technique essentially carries out a frequency counting operation on the PLL
output signal and will measure or count the number of PLL output transitions in
a predetermined time interval (gate time) determined by the reference signal. The
difference between successive counts will be related to the average period jitter of the
PLL waveform. Obviously, this method cannot be used to carry out short-term
cycle-to-cycle jitter measurements. Accuracy of this technique requires that the
PLL output signal frequency is much higher than that of the gate signal.
The signals would be gated as shown in Figure 9.18 and would be used with fast
measurement circuitry to initiate and end the measurements.
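The counting operation can be modelled directly: timestamps of the output transitions are binned into consecutive gate windows and successive bin counts compared (a sketch; the data representation is an illustrative assumption):

```python
def gated_counts(edge_times, gate_period):
    """Bin output-transition timestamps into consecutive gate windows of
    width gate_period; the spread of successive counts reflects the
    average period jitter of the counted waveform."""
    if not edge_times:
        return []
    n_gates = int(max(edge_times) // gate_period) + 1
    counts = [0] * n_gates
    for t in edge_times:
        counts[int(t // gate_period)] += 1
    return counts
```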
Histogram analysis
Histogram-based analysis is often carried out using a strobe-based comparison
method. In this method, the clean reference signal is offset by multiples of equally
spaced time intervals; that is, the reference signal edge can accurately be offset from
its nominal position in known increments.
[Figure: reference clock edge positioned against the jitter distribution of the measured signal]
Table 9.2 (comparison counts recorded at each strobe position):
St1: 4, 96
St2: 15, 85
St3: 35, 65
St4: 50, 52
St5: 65, 35
St6: 85, 15
St7: 90, 10
[Histogram bars with heights 4, 15, 35, 50, 35, 15 and 10]
Figure 9.20 Jitter histogram constructed from the values in Table 9.2
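One reading of how Figure 9.20 follows from Table 9.2 is that each histogram bar takes the smaller of the two comparison counts at that strobe position; this reproduces the printed bar heights, although the chapter does not spell the construction out, so treat it as an assumption:

```python
def jitter_histogram(strobe_counts):
    """Histogram bin heights from per-strobe (count_a, count_b) comparison
    results, taking the smaller of the two counts at each strobe position.
    This matches the Figure 9.20 bar heights for the Table 9.2 values; the
    chapter's exact construction may differ."""
    return [min(a, b) for a, b in strobe_counts]
```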
Although the primary function of a PLL is relatively simple, previous sections have
shown that there is a wide range of specifications that are critical to the stability
and performance of PLL functions and that need to be verified during engineering and
production test. These specifications range from lock time and capture range to key
parameters encoded in the phase transfer function, such as damping and natural fre-
quency. Parameters such as jitter are also becoming more critical as performance
specifications become more aggressive.
The challenge therefore associated with self-testing PLLs is to find solutions that
can be added to the design with minimal impact on the primary PLL function,
have minimal impact on the power consumption,
[Figure: test stimulus circuitry built around the divide-by-N block, providing outputs at Fref and 2Fref and a phase delay circuit activated by the 'delay next cycle' signal]
from the divide by N block in the feedback loop to generate an output that is 180°
out of phase relative to the input for one cycle of the reference clock only. This phase
shift is activated on receipt of a logic 1 on the 'delay next cycle' signal.
This phase-shifted signal is applied to the phase detector via the input multiplexer.
The strategy used eliminates problems in measuring very fast frequency changes
on the VCO output that would result if a constant phase shift was applied. In this
architecture, the VCO frequency will change only during the cycle where the phase
is shifted. Following this cycle, the VCO will lock again (zero phase shift on the
input) hence it is relatively easy to measure the initial and nal VCO frequencies and
calculate the difference. The relationship between the change in VCO frequency as a
function of the phase shift on the input and the reference clock is
ΔωVCO = (Kv · ICP · Δφ) / (2π · fref · C)

and the open-loop gain is given by

GOL = ΔφFB / Δφ = (Kv · ICP) / (2π · N · C · fref²)
So in summary, a digital circuit can be added to the divide-by-N block to generate a
known phase-shifted input for one cycle (stimulus generator), and a multiplexer added
to the PLL input to allow the loop to be broken and the phase-shifted signal from the
stimulus generator to be applied. All that remains is the implementation of an on-chip
solution to measure the output frequency change. This can be achieved digitally using
a gated binary counter with a known reference frequency input.
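Assuming Kv is expressed in rad/s per volt, the two relationships above reduce to simple arithmetic; a sketch (the parameter names and their interpretation are my assumptions):

```python
import math

def vco_freq_shift(kv, i_cp, delta_phi, f_ref, c_lf):
    """Change in VCO output (rad/s) for a one-cycle input phase shift
    delta_phi (rad): Kv*Icp*delta_phi / (2*pi*f_ref*C)."""
    return kv * i_cp * delta_phi / (2.0 * math.pi * f_ref * c_lf)

def open_loop_gain(kv, i_cp, n_div, f_ref, c_lf):
    """Open-loop gain G_OL = Kv*Icp / (2*pi*N*C*f_ref**2)."""
    return kv * i_cp / (2.0 * math.pi * n_div * c_lf * f_ref ** 2)
```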
Capture and lock range measurements are also possible using this architecture by
measuring the maximum and minimum frequencies of the lock/capture range. This
is achieved by continuously applying a frequency error to force the output frequency
to its maximum or minimum value at a controlled rate. The implementation involves
connecting the PLL input to an output derived from the divide-by-N block with a
frequency equal to, double of or half of the VCO output frequency. The VCO output
frequency is continually monitored until the rate of change approaches a defined
small value. This last frequency measurement is recorded. Lock time can also
be measured in this testing phase by closing the PLL loop after this maximum or
minimum frequency has been achieved and counting the number of reference clock
cycles to the point at which the PLL locks.
An alternative architecture for on-chip measurement of the phase transfer function
is described by Burbidge et al. [25] and utilizes an input multiplexer as above, a digital
control function and a PFD with a single D-type flip-flop added to its output as shown
in Figure 9.22.
The purpose of this modified phase detector is to detect the peak output that
corresponds to the response of the PLL to the peak value of the input phase. If a
strobe signal is generated by the input stimulus generator when the input phase is
at its peak, measurement of the time delay between the input strobe and the output
change on the Q output of the D-type can generate the phase response at a fixed input
frequency. In addition, the point at which this D-type output changes corresponds to
[Figure 9.22: modified phase detector: the existing (or one additional) PFD, built from resettable D-types clocked by PLLREF and PLLFB (outputs PFDUP and PFDDN), with an extra D-type flip-flop added at its output to flag PLLREF leading/lagging and provide MFREQ]
[Figure 9.23: BIST architecture: multiplexer M1 selects between EXTREF and the input modulator; multiplexer M2 (inputs A, B; outputs C, D) feeds the PLL forward path and the divide-by-N feedback path; a phase counter (start/stop) and a gated frequency counter capture the measurements under TEST and gate control]
the PLL being locked; hence, measurement of the output frequency at this point will
allow the magnitude response of the PLL to be calculated at the reference frequency
of the input stimuli. Repeating this process for different values of input frequency will
allow the phase transfer function to be constructed. This modified phase detector and
the methodology described above are used within the overall BIST architecture shown
in Figure 9.23. The input multiplexer M2 is used to connect or break the feedback
loop and apply identical inputs to the PLL forward path to artificially lock the PLL.
Table 9.3:
(1) Set phase counter | 0 | A=C, B=D | Start phase counter at peak of input modulation.
(2) Monitor peak frequency | 0 | A=C, B=D | Monitor for peak output signal.
(3) Peak occurred; lock PLL, stop phase counter | X | A=C, A=D | Holds the output frequency constant.
(4) Measure frequency and phase | X | A=C, A=D | Count output frequency and store; store the result of the phase counter.
(5) Increase modulation frequency FN and repeat steps 1 to 4 until all frequencies of interest have been monitored.
The algorithm used to construct the phase transfer function is as in Table 9.3.
Note that this technique requires an input stimulus generator that provides either a
frequency or phase-modulated input with a strobe signal generated at the peaks. Either
frequency modulation using a digitally controlled oscillator or phase modulation using
multiplexed delay lines can be used.
A third method of achieving a structural self-test of a PLL structure was proposed
by Kim et al. [9] and involves injecting a constant current into the PLL forward path
and monitoring the LF output, which is usually a multiple of the input current injected
and a function of the impedance of the forward path.
In this approach, additional circuitry is placed between the PFD and CP with the
primary objective of applying known control signals directly to the CP transistors.
In the test, the PLL feedback path is broken and control signals referenced to a
common time base are applied to the CP control inputs. The oscillator frequency will
be proportional to the voltage present at the LF node, which is in turn dependent on
the current applied from the CP. Thus, if the output frequency can be determined,
information can be obtained about the forward path PLL blocks. The test proposal
suggests that the loop divider is recongured as a frequency counter. The test basically
comprises three steps as follows. Initially closing both of the CP transistors performs
a d.c. reference count. If the CP currents are matched, the voltage of the LF node
should be at approximately half the supply voltage. The measurement from this test
phase is used as a datum for all subsequent measurements. In the second stage of the
test, the LF is discharged for a known time. Finally, the LF is charged for a known
time. For all of the test stages the output is measured using the oscillator and the
frequency counter and is stored in a digital format. In all of the tests, the output
response is compared against acceptable limits and pass or fail criteria are evaluated.
This BIST strategy covers most if not all analogue faults and can be extended to
test the phase detector by increasing the complexity of the timing and control of the
input vectors to the PLL.
Finally, it should be noted that methods to measure jitter either directly or indi-
rectly are currently being addressed. In Reference 10 a method is proposed that has
the additional advantage of utilizing minimal additional digital functions for imple-
mentation. To date, however, there are few other credible on-chip solutions. This is
an important goal as frequencies continue to rise.
This chapter has summarized the types of PLL used within electronic systems, the
primary function of the core blocks and the key specifications. Typical test strategies
and test parameters have been described, and a number of DfT and BIST solutions
presented.
It is clear that, as circuit speeds increase and electronic systems rely more heavily
on accurate and stable clock control and synchronization, the integrity and stability
requirements of PLL functions will become more aggressive, increasing test time
and test complexity. Methods of designing PLLs to be more accurately and more
easily tested will hence become more important as the SoC industry grows.
9.5 References
1 Gardner, F.M.: Phase Lock Techniques, 2nd edn (Wiley Interscience, New York,
1979)
2 Best, R.: Phase Locked Loops, Design Simulation and Applications, 4th edn
(McGraw-Hill, New York, 2003)
3 Gardner, F.M.: Charge-pump phase-lock loops, IEEE Transactions on Commu-
nications, 1980;28:1849–58
4 Gayakwad, R., Sokoloff, L.: Analog and Digital Control Systems (Prentice Hall,
Englewood Cliffs, NJ, 1998)
5 Lee, T.H.: The Design of CMOS Radio-Frequency Integrated Circuits (Cambridge
University Press, Cambridge, 1998), pp. 438–549
6 Johns, D.A., Martin, K.: Analog Integrated Circuit Design (John Wiley & Sons,
New York, 1997), pp. 648–95
7 Sachdev, M.: Defect Oriented Testing for CMOS Analog and Digital Circuits
(Kluwer, Boston, MA, 1998), pp. 37–38 and 79–81
8 Kim, S., Soma, M.: Programmable self-checking BIST scheme for deep
sub-micron PLL applications, Technical Report, Department of Electrical
Engineering, University of Washington, Seattle, WA
9 Kim, S., Soma, M., Risbud, D.: An effective defect-oriented BIST architecture for
high-speed phase-locked loops, Proceedings of 18th IEEE VLSI Test Symposium,
San Diego, CA, 30 April 1999, pp. 231–7
10 Sunter, S., Roy, A.: BIST for phase-locked loops in digital applications,
Proceedings of 1999 IEEE International Test Conference, Atlantic City, NJ,
September 1999, pp. 532–40
11 Goteti, P., Devarayanadurg, G., Soma, M.: DFT for embedded charge-pump PLL
systems incorporating IEEE 1149.1, Proceedings of IEEE Custom Integrated
Circuits Conference, Santa Clara, CA, August 1997, pp. 210–13
12 Azais, F., Renovell, M., Bertrand, Y., Ivanov, A., Tabatabaei, S.: A unied
digital test technique for PLLs; catastrophic faults covered, Proceedings of 5th
IEEE International Mixed Signal Testing Workshop, Edinburgh, UK, June 2006
13 Vinnakota, B.: Analog and Mixed-Signal Test (Prentice Hall, Englewood Cliffs,
NJ, 1998)
14 Philips Semiconductors, Data Sheet PCK857 66150MHz Phase Locked Loop
Differential 1: 10 SDRAM Clock Driver, 1 December 1998
15 Texas Instruments, Data Sheet TLC2933, High-performance Phase-locked loop,
April 1996 Revised June 1997
16 MAXIM Semiconductors, Data Sheet 19-1537; Rev 2, 622Mbps, 3.3V Clock-
Recovery and Data-Retiming IC with Limiting Amplier, 19-1537; Rev 2; 12/01
17 MAXIM Semiconductors, Application Note: HFAN-4.3.0Jitter Specications
Made Easy: A Heuristic Discussion of Fibre Channel and Gigabit Ethernet
Methods, Rev 0; 02/01
18 Burns, M., Roberts, G.: An Introduction to Mixed Signal IC Test and Measurement
(Oxford University Press, New York, 2002)
19 O K I A S I C PRODUCTS, Data Sheet, Phase-Locked Loop 0.35m, 0.5 m,
and 0.8 m Technology Macrofunction Family, September 1998
20 Chip Express, Application Notes, APLL005 CX2001, CX2002, CX3001,
CX3002 Application Notes, May, 1999
21 Lecroy, Application Brief, PLL Loop Bandwidth: Measuring Jitter Transfer
Function In Phase Locked Loops, 2000
22 Veillette, B.R., Roberts, G.W.: On-chip measurement of the jitter transfer func-
tion of charge pump phase locked loops, Proceedings of IEEE International Test
Conference, Washington, DC, November 1997, pp. 77685
23 Goldberg, B.G.: Digital Frequency Synthesis Demystied (LLH Technology
Publishing, Eagle Rocks, VA, 1999)
24 Goldberg, B.G.: PLL synthesizers: a switching speed tutorial, Microwave
Journal, 2001;41
25 Burbidge, M., Richardson, A., Tijou, J.: Techniques for automatic on-chip closed
loop transfer function monitoring for embedded charge pump phase locked loops.
Presented at DATE (Design Automation and Test in Europe); Munich, Germany,
2003
Chapter 10
On-chip testing techniques for RF wireless
transceiver systems and components
Alberto Valdes-Garcia, Jose Silva-Martinez,
Edgar Sanchez-Sinencio
10.1 Introduction
devices communicate with the ATE through an interface of low-rate digital data
and d.c. voltages. From the extracted information on the transceiver performance at
different intermediate stages, catastrophic and parametric faults can be detected and
located.
Throughout the chapter, special emphasis is placed on the description of transistor-level design techniques to implement embedded test devices that attain robustness, transparency to the CUT operation and minimum area overhead.
To address the problem of testing a system with a high degree of complexity
such as a modern transceiver, three separate tasks are defined: (i) test of the analogue
baseband components which involve frequencies in the range of megahertz; (ii) test
of the RF front-end section at frequencies in the range of gigahertz; and (iii) test of
the transceiver as a full system.
Section 10.2 deals with the first task. A robust method for built-in magnitude and
phase-response measurements based on an analogue multiplier is discussed. Based on
this technique, a complete frequency-response characterization system (FRCS) [11]
for analogue baseband components is described. Its performance is demonstrated
through an integrated prototype in which the gain and phase shift of two analogue
filters are measured at different frequencies up to 130 MHz.
One of the most difficult challenges in the implementation of BIST techniques for RF integrated circuits (ICs) is to observe high-frequency signal paths without affecting the performance of the RF CUT. As a solution to this problem, a very compact CMOS RF amplitude detector [12] and a methodology for its use in the built-in measurement of the gain and 1 dB compression point of RF circuits are described in Section 10.3. Measurement results for an integrated detector operating in the range from 900 MHz to 2.4 GHz are discussed, including its application in the on-chip test of a 1.6 GHz low-noise amplifier (LNA).
Finally, to address the third task, Section 10.4 presents an overall testing strategy
for an integrated wireless transceiver that combines the use of the two above-mentioned techniques with a switched loop-back architecture [10]. The capabilities of
this synergetic testing scheme are illustrated through its application on a 2.4 GHz
transceiver macromodel.
On-chip testing techniques for RF wireless transceiver systems and components 311
[Figure: a building block with frequency response H(ω). For an input tone of amplitude A at frequency ω0 producing an output of amplitude B, |H(ω0)| = B/A and ∠H(ω0) is the phase shift between output and input.]
A general analogue system, such as a line driver, equalizer or the baseband chain in a transceiver, consists of a cascade of building blocks or stages. At a given frequency ω0 each stage is expected to show a gain or loss and a delay (phase shift) within certain specifications; these characteristics can be described by a frequency-response function H(ω0). An effective way to detect and locate catastrophic and parametric faults in these analogue systems is to test the frequency response H(ω) of each of their building blocks. A few BIST implementations for frequency-response characterization of analogue circuits have been developed recently using sigma-delta [13], switched-capacitor [14] and direct-digital-synthesis [15] techniques. These test systems show different trade-offs in terms of complexity and performance. Even though their frequency-response test capabilities have been demonstrated only in the range from kilohertz to a few megahertz, implementations in current deep-submicron technologies may extend their frequency of operation.
This section describes an integrated FRCS that enables the test of the magnitude and phase-frequency responses of a CUT through d.c. measurements. The system is implemented with robust analogue circuits and attains a frequency-response measurement range of hundreds of megahertz, suitable for contemporary wireless analogue baseband circuits.
[Figure: block diagram of the phase and amplitude detector. The multiplier and low-pass filters (corner frequency ωC << 2ω0) deliver three d.c. outputs to the ADC: X = K(AB/2)cos θ, Y = KA²/2 and Z = KB²/2.]
Moreover, any static d.c. offset that the multiplier may have can be measured when
no signal is present and then cancelled before the computations. In summary, this
technique for the measurement of magnitude and phase responses is inherently robust
to the effect that process variations can have on the main performance characteristics
of the building blocks, which makes it suitable for BIST applications.
The effect of the spectral content of the test signal is now analysed. Let HDi be the relative voltage amplitude of the ith harmonic component (i = 2, 3, …, n) with respect to the amplitude A of the fundamental test tone. Under the pessimistic
assumption that the CUT does not introduce any attenuation or phase shift to either
of these frequency components, the d.c. error voltage (E) introduced by the harmonic
distortion components to each of the voltages X, Y and Z is given by
E = K (A²/2) Σ_{i=2}^{n} (HDi)² = K (A²/2) THD² = X · THD²    (10.6)

where THD is the total harmonic distortion of the signal generator, defined so that THD² is the ratio of the total power of the harmonics to the power of the fundamental tone. If THD is as high as 0.1 (10 per cent), even in this pessimistic scenario, E would be equivalent to only 0.01 (1 per cent) of X. This tolerance to harmonic components is an important advantage since it eliminates the need for a low-distortion sinusoidal signal generator.
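As a numerical check of Equation (10.6), the short sketch below (an illustration, not from the text; the harmonic amplitudes are made up) computes the worst-case d.c. error for a hypothetical signal generator:

```python
import math

def dc_error_bound(A, K, harmonic_amplitudes):
    """Worst-case d.c. error from harmonics, per Equation (10.6).

    harmonic_amplitudes holds the HDi values: the amplitude of each
    harmonic (i = 2, 3, ..., n) relative to the fundamental amplitude A.
    """
    X = K * A ** 2 / 2  # ideal d.c. output when B = A and cos(theta) = 1
    thd = math.sqrt(sum(hd ** 2 for hd in harmonic_amplitudes))
    E = K * (A ** 2 / 2) * thd ** 2  # pessimistic error contribution
    return X, thd, E

# Hypothetical generator with three harmonics around -23 to -30 dBc
X, thd, E = dc_error_bound(A=0.5, K=1.0, harmonic_amplitudes=[0.07, 0.05, 0.03])
print(thd)    # about 0.09
print(E / X)  # relative error equals THD squared, below 1 per cent
```

With a THD near 9 per cent, the relative d.c. error stays below 1 per cent, matching the tolerance argued in the text.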
[Figure 10.4: functional test flow. Starting at i = 1, each test vector is applied and the output vector [MAGi PHIi] is checked against the specification; if any vector is out of bounds the CUT fails, otherwise i is incremented until all vectors pass.]
previous section, MAG and PHI can be computed from the outputs of the phase and
amplitude detector (d.c.1, d.c.2 and d.c.3). From the expected magnitude and phase
responses of the CUT, each test vector [Fi Ai ] is associated with acceptable boundaries
for the output vector [MAGi, PHIi]. Using the described test parameters, the algorithm
shown in Figure 10.4 can be employed for the efficient functional verification of the CUT. Note that the measurement of d.c.1i also serves as a self-verification of the
entire system at the ith frequency, since it involves all of the FRCS components but
not the CUT.
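A minimal sketch of this pass/fail loop, assuming the detector relations X = K(AB/2)cos θ, Y = KA²/2 and Z = KB²/2 described in this section (function and variable names here are illustrative, not from the text):

```python
import math

def mag_phase_from_dc(X, Y, Z):
    """Recover gain and phase shift from the three detector d.c. outputs.

    Assumes X = K*(A*B/2)*cos(theta), Y = K*A**2/2 and Z = K*B**2/2, so
    the constant K and the test-tone amplitude cancel in both results.
    """
    mag = math.sqrt(Z / Y)  # |H| = B/A
    phi = math.degrees(math.acos(X / math.sqrt(Y * Z)))  # phase shift, degrees
    return mag, phi

def functional_test(vectors, measure, limits):
    """Figure 10.4-style loop: fail as soon as one vector is out of bounds."""
    for i, (freq, amp) in enumerate(vectors):
        X, Y, Z = measure(freq, amp)  # d.c.1, d.c.2 and d.c.3 at this tone
        mag, phi = mag_phase_from_dc(X, Y, Z)
        (mag_lo, mag_hi), (phi_lo, phi_hi) = limits[i]
        if not (mag_lo <= mag <= mag_hi and phi_lo <= phi <= phi_hi):
            return False  # CUT FAIL
    return True  # CUT meets spec.

# A CUT with gain 2 and a 60 degree shift (K = 1, A = 1) gives these d.c. values
print(mag_phase_from_dc(0.5, 0.5, 2.0))  # gain 2, phase close to 60 degrees
```

Because both results are ratios of the three d.c. voltages, neither K nor the stimulus amplitude needs to be known, which is the robustness property exploited by the FRCS.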
voltage needs to be digitized, the ADC design can be robust and compact, and some
sample implementations are presented in References 11 and 17.
Figure 10.6(a) shows a block diagram of the analogue multiplier employed for
the APD. The complete transistor-level schematic is depicted in Figure 10.6(b). The
core of the four-quadrant multiplier (transistors M1 and M2 ) is based on the one in
Figure 7(c) in Reference 18. The inputs are the differential voltages VA and VB and
the output is the d.c. voltage VOUT . Transistors M1 operate in the triode region; the
multiplication operation takes place between their gate-to-source and drain-to-source
voltages and the result is the current IOUT at the drain of M2 .
Transistors M2 act as source followers. Ideally, the voltage at the source of tran-
sistors M2 should be just a d.c.-shifted version of the voltage signal applied to their
gates (B+ and B). However, the drain current of transistors M1 and M2 is the result
of the multiplication and its variations affect the operation of the source followers.
This results in an undesired phase shift on the voltage signals applied to the drain
of transistors M1 , which significantly degrades the phase detection accuracy of the multiplier. To overcome this problem, transistors M3 (which operate in the saturation region) are added to the multiplier core. These additional transistors provide a fixed d.c. current to the source followers, improving their transconductance and reducing their sensitivity to the a.c. current variations. Simulation results show that this design feature reduces the error in phase detection from more than 10° to less than 1°.
The output currents from four single-ended multiplication branches are combined
to form a four-quadrant multiplier that is followed by an LPF. C1 and M4 (diode-
connected transistor) implement the dominant pole of the LPF. M6 and M7 perform a
differential to single-ended conversion. The second pole of the LPF is implemented
by the capacitor C2 and the passive resistor R1 . The d.c. operating point of VOUT can
be set through VBO and hence, no other active circuitry is required to set this voltage.
An important component of this system is the interface between the CUT and the
APD. As shown in Figure 10.5, through a demultiplexer, the frequency response at
316 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
Figure 10.6 Analogue multiplier for the APD: (a) block diagram and (b) circuit
schematic
different stages of the CUT can be characterized. In addition, the multiplexer should
present a high input impedance (so that the performance of the CUT is not affected)
and provide the appropriate d.c. bias voltages to the phase and amplitude detector. A
circuit that complies with these functions is depicted in Figure 10.7.
The differential pair with active load composed of transistors M11 forms a buffer with unity gain. The outputs of the buffers (differential voltages VA and VB ) are connected to the corresponding inputs of the APD. The d.c. operating point of the output is easily set through the voltages VBA and VBB . The input capacitance as seen from the input of the multiplexer's switches is approximately 50 fF in a 0.35 µm CMOS implementation. This is an insignificant loading in the range of hundreds of megahertz.
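The "insignificant loading" claim follows directly from the impedance of a 50 fF capacitance, |Z| = 1/(2πfC). A quick check (illustrative arithmetic, not from the text):

```python
import math

def cap_impedance_ohms(c_farads, f_hertz):
    """|Z| = 1 / (2*pi*f*C) for an ideal capacitor."""
    return 1.0 / (2 * math.pi * f_hertz * c_farads)

# 50 fF of switch input capacitance across the band of interest
for f_mhz in (100, 200, 500):
    z_kohm = cap_impedance_ohms(50e-15, f_mhz * 1e6) / 1e3
    print(f_mhz, round(z_kohm, 1))  # several kilohms even at 500 MHz
```

Since the driving nodes present impedances orders of magnitude lower, the buffer inputs barely load the CUT.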
[Figure 10.7: input multiplexer and unity-gain buffer circuit schematic.]
Figure 10.8 PLL-based frequency synthesizer for the FRCS: (a) implemented circuit
(b) alternate implementation
The frequency synthesizer for the generation of the input signal to the CUT is
designed as a type-II phase-locked loop (PLL) with a 7-bit programmable counter,
spanning a range of 128 MHz in steps of 1 MHz. The block diagram is shown in
Figure 10.8(a).
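The channel arithmetic of such an integer-N synthesizer, f_OUT = N · f_REF, can be sketched as below; the mapping of the 7-bit control word to the divider modulus N is an assumption for illustration, chosen to reproduce the 1-128 MHz range in 1 MHz steps described here:

```python
F_REF_HZ = 1_000_000  # 1 MHz reference

def synth_output_hz(div_word):
    """Integer-N PLL: f_OUT = N * f_REF, with N set by the 7-bit counter.

    The control-word mapping (word 0 -> N = 1) is a hypothetical choice
    that yields the 1-128 MHz range in 1 MHz steps.
    """
    if not 0 <= div_word <= 127:
        raise ValueError("divider word must fit in 7 bits")
    return (div_word + 1) * F_REF_HZ

print(synth_output_hz(0))    # lowest channel: 1 MHz
print(synth_output_hz(127))  # highest channel: 128 MHz
```

The point of the loop is that only the 1 MHz reference must be supplied externally; all higher test frequencies are synthesized on-chip.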
One of the main advantages of using a PLL in this application is that to generate
the internal stimulus, only a relatively low-frequency signal (fREF = 1 MHz in this
[Figure 10.9: VCO circuit schematic.]
[Figure: prototype test set-up. The frequency synthesizer drives CUT 1 (11 MHz BPF) and CUT 2 (20 MHz LPF), whose outputs feed an algorithmic ADC.]
LPF are tuned simultaneously through VC to keep the oscillation amplitude relatively
constant over the entire frequency tuning range (within 3 dB of variation) and a total harmonic distortion (THD) less than 10 per cent.
[Plot: error in phase measurement (deg), from 0 to about 4.5°, versus phase (deg) from −180° to 180°.]
Figure 10.11 Measured error performance of the phase detector as compared with
an external measurement using a digital oscilloscope at 80 MHz
Experimental results for the VCO of Figure 10.9 are shown in Figure 10.12. The output frequency varies from 0.5 to 140 MHz and the amplitude variations in this range are within 3.5 dB, which is in good agreement with the design goals. Figure 10.13 presents the output spectrum of the VCO towards the low end of the tuning range (at around 16 MHz), where a higher THD is observed. Throughout the complete tuning range, the harmonic components are always below −20 dBc. According to the analysis presented in Section 10.2.1, this harmonic distortion would cause relative errors in the magnitude and phase measurements of less than 1 per cent.
The complete frequency synthesizer is operated with a reference frequency of 1 MHz and, through the 7-bit programmable counter, covers a range from 1 to 128 MHz in steps of 1 MHz. The measured reference spurs are below −36 dBc. The area of the entire synthesizer is 380 µm × 390 µm and the current consumption changes from 1.5 to 4 mA as the output frequency increases.
Figure 10.14 describes the experimental set-up for the evaluation of the entire system in the test of the integrated CUTs. Each fourth-order filter consists of two OTA-C biquads and each biquad has two nodes of interest, namely a bandpass (BP) node and a lowpass (LP) node. Nodes 2 and 4 (biquad outputs) are BP nodes in the 11 MHz BPF (CUT 1) and LP nodes in the 20 MHz LPF (CUT 2). Buffers are added to the output node of each biquad so that their frequency response can be measured with an external network analyser.
The results of the operation of the entire FRCS in the magnitude response charac-
terization of the 11 MHz BPF at its two BP outputs are shown in Figure 10.15. These
[Plots: (a) output frequency versus control voltage (1.4-2.2 V); (b) peak amplitude (dBV) versus frequency (0-140 MHz).]
Figure 10.12 VCO measurement results. (a) Tuning range (b) amplitude versus
frequency
results are compared against the characterization performed with a commercial net-
work analyser. In this measurement, the dynamic range of the system is limited to
about 21 dB due to the 7-bit resolution of the ADC. The phase response of the filter as measured by the FRCS is shown in Figure 10.16.
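One plausible reading of the 21 dB figure: a 7-bit converter resolves a d.c. voltage range of 20·log10(2⁷) ≈ 42 dB, and since the detector's d.c. outputs scale with the square of the signal amplitude (Section 10.2), this maps to roughly half that range in measured magnitude. A quick check of the arithmetic (illustrative, not from the text):

```python
import math

ADC_BITS = 7
dc_range_db = 20 * math.log10(2 ** ADC_BITS)  # d.c. ratio the ADC resolves
mag_range_db = dc_range_db / 2                # d.c. outputs go as amplitude squared
print(round(dc_range_db, 1), round(mag_range_db, 1))  # roughly 42 and 21 dB
```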
The corresponding results for the characterization of the 20 MHz LPF are pre-
sented in Figures 10.17 and 10.18. In this case the d.c. output of the APD is measured
through a data acquisition card with an accuracy of 10 bits. As can be observed,
[Figure 10.13: measured output spectrum of the VCO near the low end of the tuning range (spectrum analyser screenshot).]
[Figure 10.14: experimental set-up. The fourth-order Gm-C filter CUT (BiQuad 1 and BiQuad 2, nodes 1-4 with on-chip buffers) is driven by the PLL output through the MUX inputs; the response is measured both by the integrated frequency-response characterization system and, through baluns on channels CH1 and CH2, by a commercial network analyser.]
Figure 10.15 Magnitude response test of the 11 MHz BPF. (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter
the APD is able to track the frequency response of the filter and perform phase measurements in a dynamic range of 30 dB up to 130 MHz.
On average, in the test of both CUTs, the magnitude response measured by the off-
chip equipment is about 2 dB below the estimation of the FRCS. This discrepancy
is in good agreement with the simulated loss of the employed buffers and baluns.
Table 10.2 presents the performance summary of this integrated test system.
Figure 10.16 Phase response test of the 11 MHz BPF. (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter
Figure 10.17 Magnitude response test of the 20 MHz LPF. (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter
to the extra circuitry and output pads would be unaffordable. Therefore, it is desirable
to have an on-chip RF amplitude detector (RFD) to monitor the voltage magnitude
of RF signals through d.c. measurements. Different implementations of RFDs using
bipolar transistors on a SiGe process technology have been reported recently [9, 20].
The desired characteristics of a practical RFD are: (i) a high input impedance
at the testing frequency to prevent loading and performance degradation of the RF
Figure 10.18 Phase response test of the 20 MHz LPF. (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter
CUT; (ii) a minimum area overhead; and (iii) a dynamic range suitable for the target
building blocks. In addition, the measurement method should be robust to the effect
that process variations may have on the detectors response. Other gures of merit
such as power consumption and temperature stability are not a priority since the RFD
would not be used during the normal operation of the system under test.
[Figure: on-chip gain-measurement concept. RF detector 1 monitors RF IN and RF detector 2 monitors RF OUT of the RF CUT.]
[Figure: RF amplitude detector circuit schematic. An input stage feeds a class-AB rectifier followed by an LPF that delivers the d.c. output DCOUT.]
[Figure: die photograph. LNA and buffer with RF amplitude detectors placed at the LNA input and output.]
both currents that is important. The resultant rectified current is converted to voltage by R3 . Finally, the passive LPF formed by R4 and C3 extracts the d.c. component. This passive pole also sets the settling time of the detector, which is designed to be of the order of tens of nanoseconds. AGND is set to 1.65 V (VDD = 3.3 V) to define the d.c. operating point of the output of the detector (d.c.OUT ) as well as the d.c. voltage at the source of M10 and M11 . It is important to note that all of the signal amplification and rectification in the detector is done in current mode and all of the high-frequency internal nodes are at low impedance. These characteristics prevent the occurrence of large voltage swings and minimize the injection of substrate noise.
From simulation and experimental results, the ratio of the maximum and minimum signal amplitudes that can be detected (dynamic range) by the rectifying circuit is 30 dB. The sensitivity of the detector is mainly controlled by IB4 . As this current is reduced, the rectifier is sensitive to smaller signal amplitudes. On the other hand, if IB4 is increased, the minimum detectable signal becomes larger but the compression point of the rectifier is also moved to higher amplitudes. In this way, the useful range of the detector can be set to higher or lower signal levels according to the expected conditions of the RF node to be observed.
[Figure 10.23: d.c. output voltage (0-1.8 V) versus input power (−40 to 10 dBm) at 0.9, 1.2, 1.6, 1.9 and 2.4 GHz.]
signal generator was employed and input match to 50 Ω was assured through off-chip components. In a range of 1.5 GHz (from 0.9 to 2.4 GHz) the detector shows a conversion gain of approximately 50 mV/dBm in a dynamic range of 30 dB. The minimum detectable signal is around −35 dBm. The wideband nature of the detector's response is an important advantage for test purposes; the device can be used to monitor signal amplitudes at different points of an RF system even if they have different frequency content (e.g., in a multi-standard or a dual-conversion transceiver architecture) without any further tuning in the design. At 400 MHz and 2.8 GHz the measured dynamic range is still greater than 20 dB.
Table 10.3 presents the performance summary for the RF amplitude detector. It is
worth mentioning that the fast settling time of the detector allows performing tens of measurements (e.g., varying the input power or frequency) in just a few microseconds.
The response of the proposed detector is not perfectly linear with respect to the input amplitude in the entire dynamic range. However, as discussed previously, this is not a limitation for on-chip test purposes. To evaluate the effectiveness of the proposed RF test device to measure the gain and 1 dB compression point of an RF CUT, an LNA is integrated in the prototype IC. The LNA is a standard single-ended inductively degenerated cascode amplifier [23] that has a gain of 10 dB at 1.6 GHz. The degeneration and load inductors are implemented on-chip while the gate inductance is the bonding wire that connects the LNA input to the package pad. A buffer is included at the output of the LNA to measure its performance with off-chip equipment. The buffer is a simple common-source stage. Resistive source degeneration is employed in
Figure 10.24 Experimental set-up for the on-chip characterization of the LNA
the buffer to attain a 1 dB compression point higher than that of the LNA. Post-layout
simulation results show that the buffer has a loss of 10 dB at 1.6 GHz while driving the 50 Ω load of a spectrum analyser through the package parasitics.
The test set-up employed for the characterization of the LNA is shown in
Figure 10.24. The gain of the LNA is measured with the RFDs and also with exter-
nal instrumentation for different power levels and at different frequencies. Some
significant examples of the performed measurements are presented next.
Figure 10.25 shows the measured d.c. voltage at the output of each detector for
different input power levels at 1.6 GHz. Employing the discussed technique, the LNA gain is measured as 9.5 dB and the input 1 dB compression point as −1 dBm. It is worth mentioning that for the on-chip test of the LNA, the rectifier current (IB4 )
Figure 10.25 Measured response of the RFDs at the input and output of the on-chip
LNA at 1.6 GHz
used in the RF detectors is higher than the one used in the measurements presented in
Figure 10.23. This shows how the useful range of the rectifier can be adjusted to test
an RF CUT at the signal levels of interest (e.g., around its 1 dB compression point).
Figure 10.26 presents a comparison of the LNA gain measured at 1.7 GHz with the integrated detectors against the gain roll-off measured with an external RF spectrum analyser. At low input power levels (below −2 dBm) the estimated gain appears to be lower due to the reduced gain of the RFD at the input in this range. From the obtained experimental results at different frequencies and power levels it is estimated that the practical accuracy of the method is ±1 dB, which is adequate for multiple wafer-level and production test purposes.
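The extraction described above can be sketched as follows; the roughly 50 mV/dBm conversion gain is taken from the detector measurements reported earlier, while the sweep data and all names are made up for illustration:

```python
def estimated_gain_db(v_out_detector, v_in_detector, conv_gain_v_per_db=0.050):
    """Gain from the two RFD d.c. outputs, assuming matched detectors with
    the roughly 50 mV/dBm conversion gain reported for this design."""
    return (v_out_detector - v_in_detector) / conv_gain_v_per_db

def input_p1db_dbm(p_in_dbm, gains_db):
    """First swept input power at which the gain has compressed by 1 dB
    from its small-signal value (sweep assumed monotonic)."""
    small_signal = gains_db[0]
    for p, g in zip(p_in_dbm, gains_db):
        if g <= small_signal - 1.0:
            return p
    return None

# Made-up sweep around compression
p_in = [-10, -8, -6, -4, -2, 0, 2]
gains = [9.5, 9.5, 9.4, 9.3, 9.0, 8.4, 7.3]
print(estimated_gain_db(1.175, 0.700))  # about 9.5 dB for a 475 mV difference
print(input_p1db_dbm(p_in, gains))
```

Because the gain estimate depends only on the difference of the two detector outputs, a common shift in both detectors (e.g., from process variations) cancels out, which is what makes the method robust.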
This section presents an integral testing strategy for integrated wireless transceivers
based on the BIST techniques discussed in Sections 10.2 and 10.3, in combination
with a switched loop-back architecture.
Figure 10.26 Gain compression of the on-chip LNA at 1.7 GHz as measured by
external equipment and the embedded detectors
systems [24, 25]. It does not require an external stimulus and is effective in detecting catastrophic faults in the complete signal path. Figure 10.27(a) depicts this testing scheme for a transceiver architecture with direct up-conversion. In a complete realization the baseband sections include in-phase (I) and quadrature (Q) paths, but in this block diagram only one path is shown for simplicity.
In the loop-back configuration, the baseband section of the transmitter generates a tone or a modulated signal with a centre frequency fB . With the input from the local oscillator (LO) at a frequency fRF , the up-converter generates a tone at fB + fRF . The loop-back connection must attenuate the output of the power amplifier (PA) to make it suitable for the dynamic range of the LNA. After the down-conversion with the same tone from the LO, the resultant signal at the receiver baseband is centred at fB . The characteristics of the demodulated or digitized signal can be analysed by the ATE to evaluate the performance of the transceiver. In this configuration, the range of values that fB can take is limited by the transmitter baseband.
Recent radio implementations use transmitter architectures in which the modu-
lation of the transmitted signal is directly performed on the VCO [26, 27], avoiding
the up-conversion. As shown in Figure 10.27(b), the direct application of loop-back test is not practical in this kind of transceiver. However, introducing a switch in the loop-back path can overcome this limitation. Figure 10.27(c) illustrates
the principle of operation of a switched loop-back technique applied to a transceiver
with direct VCO modulation.
[Figure 10.27: loop-back test configurations. (a) Direct up-conversion transceiver with a loop-back connection between PA and LNA, showing the spectra M(f), U(f), D(f) and B(f) at fB, fRF + fB and fB, with LO, d.c. offset cancellation, DAC/modulator and ADC/demodulator around a digital signal processor. (b) Transceiver with direct VCO modulation, where the looped-back signal down-converts to d.c. (c) Switched loop-back with an attenuator (ATTN): switching at fSW places tones at fRF ± fSW that down-convert to fSW.]
If the signal in the loop-back path is switched at a frequency of fSW , two additional tones are created at frequencies fRF ± fSW . After the mixing with fRF in the receiver, both tones are down-converted to fSW . In this way, the frequency of the signal that controls the switch determines the frequency of the signal at the baseband chain. Conceptually, this is equivalent to introducing a mixer in the loop-back path; however, a simple switch is a suitable frequency-translation device in this application. As shown in Figure 10.27(c), an important practical consideration is that in the off state, the switch must connect the input of the LNA to a 50 Ω resistor and not directly to ground to preserve LNA stability.
The operation of the switch on the signal from the PA can be modelled as a multiplication between the RF signal and a square wave that toggles between zero and one. Such a train of pulses can be described in the time domain as

P(t) = Σ_{n=0}^{∞} Kn cos(nωSW t + φn) = K0 + K1 cos(ωSW t + φ1) + K2 cos(2ωSW t + φ2) + ⋯    (10.7)

where ωSW = 2π fSW and Kn, φn are constants that define the amplitude and phase of each frequency component, respectively. The product of P(t) and the RF signal with amplitude A and frequency fRF results in the switched signal S(t):

S(t) = [Σ_{n=0}^{∞} Kn cos(nωSW t + φn)] · A cos(ωRF t)
= (A/2)[2K0 cos(ωRF t) + K1 cos((ωRF − ωSW)t + θ1) + K1 cos((ωRF + ωSW)t + θ2) + ⋯]

where θn are phase constants. The final amplitude of each frequency component (Cn , En ) depends on the amplitude B of the LO as well as on the conversion gain of the mixer. The d.c. component C0 is blocked by the d.c. offset cancellation circuitry and the frequency components located around 2fRF will have a negligible amplitude since the output of a down-conversion mixer shows a LP characteristic.
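The sideband prediction of Equation (10.7) can be checked numerically. The sketch below (an illustration with scaled-down frequencies, kHz instead of GHz, so the arithmetic stays light) chops a tone with a 0/1 square wave and measures the resulting spectral lines; for an ideal 50 per cent duty cycle, K0 = 1/2 and K1 = 2/π:

```python
import cmath
import math

def tone_amplitude(signal, fs, f_target):
    """Normalized single-bin DFT magnitude at f_target."""
    n = len(signal)
    acc = sum(s * cmath.exp(-2j * math.pi * f_target * i / fs)
              for i, s in enumerate(signal))
    return 2 * abs(acc) / n

# Scaled-down illustration: a 200 kHz "RF" tone chopped at 10 kHz
fs, f_rf, f_sw, n = 4_000_000, 200_000, 10_000, 4000
rf = [math.cos(2 * math.pi * f_rf * i / fs) for i in range(n)]
gate = [1.0 if math.sin(2 * math.pi * f_sw * i / fs) >= 0 else 0.0
        for i in range(n)]
s = [a * b for a, b in zip(rf, gate)]

# Sidebands appear at f_RF +/- f_SW with amplitude A*K1/2 (about A/pi),
# while the carrier keeps amplitude A*K0 (about A/2)
for f in (f_rf, f_rf - f_sw, f_rf + f_sw):
    print(f, round(tone_amplitude(s, fs, f), 3))
```

After down-conversion with fRF, the two sidebands land on fSW, which is exactly the mechanism the switched loop-back exploits to steer a test tone into the receiver baseband.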
[Figure: proposed test architecture for the transceiver die. RFDs sit at RF observation points 1, 2, 7, 8 and 9 (I and Q paths), with baseband observation points 3-6; a frequency synthesizer and the switched loop-back path (PA, ATTN switched at fSW, LNA) provide the stimulus, while the APD, a d.c. multiplexer and an analogue-to-digital converter carry the d.c. observations from the RF detectors and baseband points 1, 2, 7, 8 and 9 (I and Q) to the ATE.]
Table 10.4 (excerpt): LNA gain and 1 dB compression point, measured with the RFDs (observation points 5, 6). The input power to the LNA is swept by changing the transmitter output power or the loop-back attenuator loss.
signal at the input of the transmitter either from the ATE or from an on-chip signal generator like the one proposed for the FRCS. The switched loop-back connection guarantees the flow of a test stimulus throughout the transceiver path that can be used by the embedded testing devices to perform measurements at different intermediate points of the system.
By providing independent control of the frequency of the signals across the trans-
mitter and receiver chains, and providing access to internal points in the RF and
baseband sections, the testability of the receiver is improved. Table 10.4 describes
the different tests that can be performed in this architecture. A complete testing solu-
tion for a given transceiver may not have to perform all of the possible tests. This
RF 2.4 GHz
Transmitter architecture Direct conversion
Transmitter power 0 dBm
PA gain 15 dB
Receiver architecture Low-IF; IF = 4 MHz
Sensitivity −82 dBm
RF front-end IIP3 −4 dBm
RF front-end gain 30 dB (LNA 15 dB + Mixer 15 dB)
Baseband filter Fifth-order bandpass polyphase
Table 10.6 summarizes the specific characteristics of the modelled architecture, which are taken from the transceiver reported in Reference 28. An IEEE 802.15.4 implementation is chosen for the example because this standard is targeted at very low-cost applications. The attenuator in the loop-back connection has a loss of 25 dB to bring the 0 dBm output of the PA within the linear range of the receiver. The RFDs are modelled according to the device described in Section 10.3.
Figure 10.30 shows the simulation results for a transceiver meeting specifications. The frequency for the loop-back switching is 4 MHz, since this is the centre frequency of the baseband filter. Figures 10.30(a) and (b) show the switched signal at the input of the LNA in the time and frequency domains, respectively. Observe that the frequency components of interest (2400 ± 4 MHz) are at least 10 dB above other tones. Figures 10.30(c), (d) and (e) show the outputs of the RF RMS detectors at the output of the PA, the LNA input and the LNA output, respectively. Finally, the expected 4 MHz signal at the output of the baseband filter is shown in Figure 10.30(f).
Even though the output of the RFDs placed after the switch is intermittent, the gain of the LNA can still be estimated provided that the ADC samples their output at the appropriate rate. In the presented model, the ADC has around 100 ns to sample the output of each detector. In a given scenario where the ADC is slower, fSW can be set first to a lower value (so that the RFDs hold their output for a longer
time) to test the LNA and then shift to a higher value to test the rest of the receiver
chain.

Figure 10.30 Simulation results for a transceiver meeting specifications. (a) Input
of the LNA in the time domain, (b) input of the LNA in the frequency domain,
(c) output of the RFD at the output of the PA, (d) output of the RFD at the input
of the LNA, (e) output of the RFD at the output of the LNA and (f) output of the
baseband filter.
Figure 10.31 shows the simulation results for a transceiver in which some of the
individual building blocks do not meet the target specifications. The PA has a 3 dB
lower gain (12 dB total), the LNA has 5 dB less gain (10 dB total) and the channel
selection filter is not centred at 4 MHz but at 4.5 MHz. Figures 10.31(a) and (b)
show the output of the RFDs at the outputs of the PA and LNA, respectively. It can
be readily noticed that these final values are different from the ones in the case of
342 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
Figure 10.30. Figures 10.31(c) and (d) show the output of the channel selection filter
for fSW = 4 MHz and fSW = 4.5 MHz.

Figure 10.31 Results for a transceiver not meeting specifications. (a) Output of the
RFD at the output of the PA, (b) output of the RFD at the output of the LNA,
(c) output of the baseband filter for fSW = 4 MHz and (d) output of the baseband
filter for fSW = 4.5 MHz.
Note that with a stand-alone end-to-end test, it would not be possible to determine
the cause of a reduced amplitude at the end of the receiver baseband. Moreover,
if both the PA and the LNA exhibit a higher gain, the output of the receiver could show
the expected amplitude even if the filter has a deviated centre frequency. If this
transceiver were tested with a conventional loop-back test without the switch, changing
the input frequency to the transmitter could establish that a fault lies in the
baseband, but not whether it is on the transmitter or receiver side.
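The diagnosis argument above amounts to differencing detector readings to obtain per-block gains. The following is a hypothetical sketch of that bookkeeping: the function name, detector readings and 1 dB tolerance are assumptions, with only the nominal 15 dB PA and LNA gains and the 25 dB loop-back attenuator taken from the modelled transceiver.

```python
# Hypothetical per-block gain extraction from on-chip RMS detector
# readings (all levels in dBm, gains in dB).
NOMINAL = {"pa_gain_db": 15.0, "lna_gain_db": 15.0}
ATTEN_DB = 25.0   # loop-back attenuator loss
TOL_DB = 1.0      # assumed pass/fail tolerance per block

def diagnose(tx_in_dbm, pa_out_dbm, lna_in_dbm, lna_out_dbm):
    """Estimate block gains from detector readings and flag deviations."""
    measured = {
        "pa_gain_db": pa_out_dbm - tx_in_dbm,
        "lna_gain_db": lna_out_dbm - lna_in_dbm,
    }
    faults = [blk for blk, nom in NOMINAL.items()
              if abs(measured[blk] - nom) > TOL_DB]
    return measured, faults

# Illustrative faulty case: PA gain 2 dB high, LNA gain 5 dB low.
measured, faults = diagnose(tx_in_dbm=-15.0, pa_out_dbm=2.0,
                            lna_in_dbm=2.0 - ATTEN_DB, lna_out_dbm=-13.0)
```

Because each block is bracketed by two detectors, both deviations are localized, which an end-to-end amplitude measurement alone could not do.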
The combination of a switched loop-back architecture with the use of the recently
developed on-chip testing devices demonstrated in integrated implementations
significantly enhances the testability of an RF transceiver. The on-chip testing devices
show that direct, on-chip observation of analogue and RF building blocks at
megahertz and gigahertz frequencies can be performed in a CMOS process, and with
a minimum area and parasitic-loading overhead. The presented strategy enables the
test of the entire wireless system and its individual building blocks at the wafer level
through digital information. The use of external analogue/RF equipment or components
is avoided, allowing the implementation of a practical and cost-effective
test solution. Extending the proposed concepts to implementations in current deep-submicron
technologies opens significant opportunities for improved performance as
well as solutions to new challenges.
10.6 References
1 Ozev, S., Orailoglu, A., Olgaard, C.V.: 'Multilevel testability analysis and solutions for integrated Bluetooth transceivers', IEEE Design and Test of Computers, 2002;19(5):82-91
2 Ferrario, J., Wolf, R., Moss, S.: 'Architecting millisecond test solutions for wireless phone RFICs', Proceedings of the IEEE International Test Conference, Charlotte, NC, September 2003, pp. 1325-32
3 Akbay, S.S., Halder, A., Chatterjee, A., Keezer, D.: 'Low-cost test of embedded RF/analog/mixed-signal circuits in SOPs', IEEE Transactions on Advanced Packaging, 2004;27(2):352-63
4 Acar, E., Ozev, S.: 'Defect-based RF testing using a new catastrophic fault model', Proceedings of the IEEE International Test Conference, Austin, TX, November 2005, pp. 421-9
5 Bhattacharya, S., Halder, A., Srinivasan, G., Chatterjee, A.: 'Alternate testing of RF transceivers using optimized test stimulus for accurate prediction of system specifications', Journal of Electronic Testing: Theory and Applications, 2005;21(3):323-39
6 Silva, E., de Gyvez, J.P., Gronthoud, G.: 'Functional vs. multi-VDD testing of RF circuits', Proceedings of the IEEE International Test Conference, Austin, TX, November 2005, pp. 412-20
7 Ozev, S., Olgaard, C.: 'Wafer-level RF test and DFT for VCO modulating transceiver architectures', Proceedings of the 22nd IEEE VLSI Test Symposium, Napa Valley, CA, April 2004, pp. 217-22
8 Bhattacharya, S., Chatterjee, A.: 'Use of embedded sensors for built-in-test of RF circuits', Proceedings of the IEEE International Test Conference, Charlotte, NC, September 2004, pp. 801-9
9 Ryu, J.-Y., Kim, B.C., Sylla, I.: 'A new low-cost RF built-in self-test measurement for system-on-chip transceivers', IEEE Transactions on Instrumentation and Measurement, 2006;55(2):381-8
10 Valdes-Garcia, A., Silva-Martinez, J., Sánchez-Sinencio, E.: 'On-chip testing techniques for wireless RF transceivers', IEEE Design and Test of Computers, 2006;23:268-77
11 Valdes-Garcia, A., Hussein, F., Silva-Martinez, J., Sánchez-Sinencio, E.: 'An integrated transfer function characterization system with a digital interface for analog testing', IEEE Journal of Solid State Circuits, 2006;41(10):2301-13
11.1 Introduction
The mixed-signal system-on-a-chip (SoC) has become one of the main drivers for
electronic circuit design. It has become normal to integrate complex systems with
both digital and analogue functions in a single chip, to produce systems such as
wireless transceivers, broadband modems, mobile phone handsets, digital broadcast
receivers and many other application devices. A major motivation for producing such
complex systems as an SoC is cost. With modern submicron semiconductor technology,
the available complexity of an SoC continues to increase rapidly, with
relatively little increase in the associated cost of fabrication. Eliminating many of
the external components formerly required also drastically reduces
the manufacturing costs of products incorporating such highly integrated SoCs, as
well as bringing technical advantages such as reduced size and power consumption.
From the viewpoint of the analogue and mixed-signal circuit designer, mixed-signal
SoC design brings many challenges. The vast majority of circuitry in the
SoC is digital; economic requirements dictate that digital CMOS integrated circuit
(IC) processes are used to fabricate the SoC. However, such processes do not yield
optimized analogue components. A major source of difficulty in circuit design is the
variability of integrated components [1]. Each process step has a degree of variability
associated with it, leading to loose component tolerances. However, because components
of the same type integrated on the same die are subject to nearly identical processing,
close matching of component value ratios is possible even though absolute tolerances
are large. The ability to produce components with well-matched ratios has been
heavily exploited by circuit designers. However, as device geometries shrink with
each increment in technology feature size, statistical variations between components
integrated on the same die increase, leading to a deterioration in ratio matching.
Different component parameters are dependent on different process steps; thus, while
capacitance might primarily depend on oxide thickness, transconductance is also a
strong function of dopant concentration. Therefore, there will be little correlation
between variations in different types of component parameter, even within the same
die. Most components integrated in silicon also have strongly temperature- and bias-
dependent parameters, and are also subject to ageing effects that lead to changes in
component parameters over time.
The variability of integrated components and the need for precise analogue functions
in numerous important mixed-signal circuits have led to the development of
on-chip tuning and calibration techniques. Important examples of these techniques
are described in the following sections:

• Section 11.2 On-chip automatic filter tuning. Filters are important building
blocks in virtually all applications where analogue signals are processed and,
for the vast majority of continuous-time filter designs, on-chip automatic tuning
schemes are an essential requirement in order to achieve the performance goals
demanded by the application.
• Section 11.3 Self-calibration techniques for frequency synthesizers. Precise
frequency generation is also an essential requirement in a very wide range of
applications; for frequencies in the RF range, phase-locked loop (PLL) frequency
synthesis is widely used. The critical analogue circuit block in a PLL is the
voltage-controlled oscillator (VCO). VCO performance parameters are highly
process dependent, and on-chip VCO calibration techniques can be employed to
reduce the effects of process variations on VCO parameters, yielding improved
performance for the overall PLL system.
• Section 11.4 On-chip antenna impedance matching. To achieve efficient operation
of RF power amplifiers, the impedance of the load must be matched to that
required by the amplifier. Normally, the load is actually an antenna, which has a
highly variable impedance depending on the exact operating frequency and the
operating environment of the antenna. Automatic impedance matching maximizes
output and efficiency under varying load conditions.

The chapter is concluded in Section 11.5.
(Figure 11.1: OTA-C biquad filter with input Vin, transconductors g0 to g3, capacitors C1 and C2, tuning voltage Vtune and bandpass output VBP.)
The circuit of Figure 11.1 generates simultaneous lowpass and bandpass outputs; here we focus on the bandpass
transfer function, but the lowpass case is very similar. The transconductances in the
filter are made tuneable by varying their bias currents. The capacitors are fixed.
In the case where all components are ideal, the transfer function of Figure 11.1 is

H_{BP}(s) = \frac{V_{BP}}{V_{in}} = \frac{g_0}{g_3} \cdot \frac{(g_3/C_2)\,s}{s^2 + (g_3/C_2)\,s + g_1 g_2/(C_1 C_2)}   (11.1)

By equating coefficients with the standard form of the second-order bandpass transfer
function

H_{BP}(s) = K_{BP} \cdot \frac{(\omega_0/Q)\,s}{s^2 + (\omega_0/Q)\,s + \omega_0^2}   (11.2)

we have

\omega_0 = \sqrt{\frac{g_1 g_2}{C_1 C_2}}, \qquad Q = \frac{1}{g_3}\sqrt{\frac{g_1 g_2 C_2}{C_1}}, \qquad K_{BP} = \frac{g_0}{g_3}   (11.3)
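Equation (11.3) can be checked numerically. The component values below are illustrative assumptions, chosen to give a pole frequency near 4 MHz:

```python
import math

# Numeric check of Equation (11.3); transconductances in siemens,
# capacitances in farads (illustrative values).
g0, g1, g2, g3 = 50e-6, 50e-6, 50e-6, 10e-6
C1, C2 = 2e-12, 2e-12

w0 = math.sqrt(g1 * g2 / (C1 * C2))           # pole frequency, rad/s
Q = (1 / g3) * math.sqrt(g1 * g2 * C2 / C1)   # quality factor
K_bp = g0 / g3                                # passband gain

# At s = j*w0 the s^2 and w0^2 terms of Eq. (11.1) cancel, so the
# magnitude of the bandpass response equals K_bp:
num = (g0 / g3) * (g3 / C2) * w0
den = abs(complex(g1 * g2 / (C1 * C2) - w0**2, (g3 / C2) * w0))
gain_at_w0 = num / den
```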
Suppose processing variations cause the transconductances of the OTAs to vary.
Because of the inherent matching between the components making up the OTAs,
all transconductances will change by the same factor kg. Similarly, process variations
will cause all the capacitor values to change by a factor kc, which in general will be
different from, and unrelated to, kg. Including the factors kg and kc in Equation (11.1)
gives a new transfer function
H_{BP}(s) = \frac{V_{BP}}{V_{in}} = \frac{g_0}{g_3} \cdot \frac{(k_g g_3/k_c C_2)\,s}{s^2 + (k_g g_3/k_c C_2)\,s + k_g^2 g_1 g_2/(k_c^2 C_1 C_2)}   (11.4)
ω0 has been changed by a factor kg/kc. In order to restore the design value of ω0, the
transconductances of the four OTAs are simultaneously tuned until kg = kc, in which
case Equation (11.4) reduces to Equation (11.1). In practice, this is achieved by tuning
the transconductances until ω0 is equal to the design value. It is also possible to tune
Q independently of ω0 by varying g3 alone. However, because Q is determined by
Tuning and calibration of analogue, mixed-signal and RF circuits 351
(Figure 11.2: (a) the OTA-C biquad of Figure 11.1 including parasitic shunt conductances Gp1, Gp2 and capacitance Cp2; (b) transconductance magnitude |g| and excess phase against frequency.)
In the circuit of Figure 11.2(a), the most significant influence of excess phase occurs
in the two integrators made up of g1, C1 and g2, C2. Substituting this frequency-dependent
transconductance for the ideal transconductors in Equation (11.1) and
making appropriate approximations for ω0 ≪ ωp gives a new value of Q for the
circuit when excess phase is included:

Q' = \frac{Q}{1 - 2Q\,\omega_0/\omega_p}   (11.6)
Q is significantly affected by quite small values of excess phase. For example, if the
design value of Q is 10 and ωp = 100 ω0, giving rise to an excess phase of about 0.6°,
Q' from Equation (11.6) is 12.5, an increase of 25 per cent. A design Q of 50 will
result in Q' approaching infinity and instability.
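The worked example can be verified directly from Equation (11.6); this is a small sketch, and the function name is ours:

```python
import math

# Q enhancement due to excess phase, Equation (11.6):
# Q' = Q / (1 - 2*Q*w0/wp)
def q_enhanced(q_design, w0_over_wp):
    return q_design / (1 - 2 * q_design * w0_over_wp)

# wp = 100*w0 corresponds to an excess phase of atan(w0/wp),
# about 0.57 degrees
excess_phase_deg = math.degrees(math.atan(1 / 100))

q_10 = q_enhanced(10, 1 / 100)   # design Q = 10 is enhanced to 12.5
# As the design Q approaches 50, the denominator 1 - 2*Q*w0/wp tends
# to zero and Q' grows without bound (instability):
q_49 = q_enhanced(49, 1 / 100)
```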
Another parasitic effect that modifies Q is the finite output conductance of the OTAs.
The output of an ideal OTA behaves as a current source, but real OTAs have significant
output resistance. This can be modelled by conductances Gp1 and Gp2 shunting each
internal circuit node to ground, as shown in Figure 11.2(a). The transfer function of
this modified circuit is given by
H(s) = \frac{(g_0 G_{p1}/C_1 C_2)\,(1 + (C_1/G_{p1})\,s)}{s^2 + (G_{p2}/C_2 + G_{p1}/C_1 + g_3/C_2)\,s + (G_{p1} G_{p2}/C_1 C_2 + G_{p1} g_3/C_1 C_2 + g_1 g_2/C_1 C_2)}   (11.7)
The Q of the modified circuit is approximated by

Q' = \frac{Q}{1 + (Q/\omega_0)\,(G_{p1}/C_1 + G_{p2}/C_2)}   (11.8)
Thus, increasing the output conductance of the OTAs reduces Q. This sets an upper
limit to the Q that can be achieved for a given set of transconductors and capacitors:
as the design Q in Equation (11.3) tends to infinity, the maximum achievable Q is

Q_{max} = \frac{\omega_0}{G_{p1}/C_1 + G_{p2}/C_2}   (11.9)
In summary, the large tolerances of integrated components give rise to large frequency
errors in the filter response, so integrated filters almost always require on-chip tuning.
Frequency tuning will often be sufficient for filters operating at modest values
of Q and frequency, typically lowpass and bandpass filters where the bandwidth
is of the same order as the centre frequency. However, in high-Q, high-frequency
filters, circuit parasitics, principally the excess phase and finite d.c. gain of the
active circuits, profoundly affect the Q of the filter response, so Q must also
be tuned.
(Figure: master-slave tuning scheme. A reference signal drives a master filter H(s); frequency-tuning and Q-tuning control loops tune the master, and the resulting frequency- and Q-tuning signals are also applied to the slave filter, which processes the signal input. The master and slave amplitude responses track as the master is tuned.)
The essential assumption made in the master-slave scheme is that the ratios of
components in the master and slave sections can be accurately realized, and will track
each other precisely as the master section is tuned. If this is the case, the cut-off
frequencies and Q of all the filter sections in the slave will exactly track those of
the master, and if the master section Q is maintained at the correct value the slave
filter response shape will remain correct as the filter frequency is tuned. There are
practical limitations on how closely this can be achieved, and a substantial amount
of design and layout effort must be expended to ensure that the master accurately
models the tuning behaviour of the slave. Since parasitic effects can substantially
alter the performance of the filter, these must also be accurately modelled in the
master section. These requirements can usually best be met by designing the slave
filter with circuits that are as near identical to the master as possible, and by making the
tuning reference signal frequency close to the signal frequency. This allows the
best matching between sections, and also ensures that frequency-dependent parasitic
effects are similar in master and slave. Synthesis techniques are required that result
in filter circuits using the minimum possible spread of component values.
(Figure: VCO formed by closing a feedback loop around a bandpass biquad filter section with a limiting amplifier; the tuning inputs of the biquad set the oscillation frequency.)
and the VCO, which effectively operates at infinite Q and inherently requires a non-linear
amplitude-limiting mechanism to achieve a stable signal amplitude. To ensure
that the frequency-determining elements operate within their linear range, the VCO
is usually implemented by adding a limiting amplifier to provide feedback around a
bandpass biquad filter section.
Many successful frequency tuning systems using frequency-locked loops or PLLs as described
have been implemented in practical designs, for example, in References 7 and 8.
These methods are well suited to master-slave designs, where the tuning loop can
operate continuously. This yields an extremely simple control system and is often
capable of frequency tuning accuracy within 1 per cent. These techniques become
increasingly difficult to apply at the highest frequencies, owing to the increasingly severe
errors caused by excess phase, both in the filter or VCO and in the phase detector
itself.
Frequency tuning techniques can also utilize the time-domain response of the filter.
The response of a high-Q bandpass filter to a step or impulse is a damped
sinusoid at the filter output, whose frequency is approximately equal to the
resonant frequency of the filter. The filter output waveform is squared using a limiting
amplifier and the period measured using digital counter techniques. A tuning signal is
derived by comparing the measured period with the desired value. In order to achieve
good accuracy, high resolution in the period measurement is necessary. This requires
that the transient response has a long duration. The duration increases with Q and
filter order, and owing to this and the iterative nature of the measurement technique,
it is most appropriate for offline tuning of high-order, high-Q bandpass filters [9, 10].
This tuning control method is digital in nature, so it is easily combined with switched-array
tuning schemes.
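The period-measurement idea can be sketched behaviourally: generate the damped sinusoid, hard-limit it, and count samples between rising zero crossings. All numbers here are illustrative and scaled down for simulation:

```python
import numpy as np

# Behavioural sketch of time-domain frequency tuning: the filter's
# impulse response is a damped sinusoid; after limiting, its period is
# measured by counting samples between rising zero crossings.
fs = 1_000_000           # sample rate of the "counter" clock, Hz
f0, Q = 10_000.0, 50.0   # assumed filter resonance and quality factor
t = np.arange(0, 0.005, 1 / fs)
w0 = 2 * np.pi * f0
ring = np.exp(-w0 * t / (2 * Q)) * np.sin(w0 * t)   # damped sinusoid

limited = (ring > 0).astype(int)            # limiting amplifier output
rising = np.where(np.diff(limited) > 0)[0]  # rising-edge sample indices
period = np.mean(np.diff(rising)) / fs      # measured period, s
f_est = 1 / period       # compared with the desired value by the tuner
```

Higher Q lengthens the usable ring-down and so improves the measurement resolution, which is why the technique suits high-Q filters.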
A related technique is to measure the time constant of an integrator using a d.c.
charging current. An example of this technique using an OTA-C integrator is shown
in Figure 11.8.
(Figure 11.8: time-constant measurement of an OTA-C integrator. A clock-controlled switch applies Vref to the gm input for a period Δt; the integrator output Vout ramps to Vout(max), which a peak detector captures and an error amplifier compares with Vref to drive the gm control input.)
An accurate clock signal is used to open the switch for a period Δt. During Δt, the
integrator output voltage is a linear ramp that reaches a maximum value of

V_{out(max)} = \frac{g_m}{C}\,V_{ref}\,\Delta t   (11.10)

This maximum voltage is stored by the peak detector, and compared with the
reference voltage by the error amplifier. The resulting lowpass-filtered error signal is
applied to the OTA transconductance control input and causes the capacitor charging
current, and therefore Vout(max), to vary. Over a large number of clock cycles, this
feedback loop causes Vout(max) to become equal to Vref:

\frac{g_m}{C}\,V_{ref}\,\Delta t = V_{ref}, \qquad \frac{g_m}{C} = \frac{1}{\Delta t}   (11.11)
Since Δt is accurately defined by the clock signal, and the resonant frequency of
the filter is accurately proportional to gm/C owing to the well-defined ratios between all
transconductances and capacitances on the chip, the resonant frequency is set to the
correct value.
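The feedback behaviour of Equations (11.10) and (11.11) can be sketched as a simple iteration. The loop gain, starting gm and component values are illustrative assumptions:

```python
# Iterative sketch of the time-constant tuning loop of Figure 11.8:
# each clock cycle the integrator ramps for dt, the peak Vout(max) is
# compared with Vref, and gm is adjusted until (gm/C)*Vref*dt = Vref,
# i.e. gm/C = 1/dt.
Vref = 1.0
dt = 100e-9       # ramp period defined by the clock, s
C = 5e-12         # integrating capacitor, F
gm = 30e-6        # initial (mistuned) transconductance, S

for _ in range(200):
    v_peak = (gm / C) * Vref * dt    # Equation (11.10)
    error = Vref - v_peak            # error amplifier input
    gm += 0.2 * (C / dt) * error     # loop nudges the gm control input

ratio = (gm / C) * dt                # settles to 1, i.e. gm/C = 1/dt
```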
In order to avoid problems caused by unwanted phase shifts, frequency tuning
methods have been devised based on amplitude measurements. A second-order
response with Q greater than 1/√2 contains a peak in its amplitude response plotted
against frequency. For high Q values, the frequency of the amplitude peak closely approximates the
resonant frequency. Tuning the resonant frequency of the filter with a fixed input
reference frequency will produce a peak in the output response when the two
frequencies coincide. The tuning system only needs to detect when the maximum
output signal is achieved; the amplitude detector therefore needs neither high
accuracy nor linearity, provided it has a monotonic response.
(Figure 11.9: amplitude-peak frequency tuning scheme. A reference frequency drives the master filter H(s); an envelope detector produces Venv, a peak detector stores Vpk, and a comparator with control logic, ramp generator and hold switch captures the tuning voltage Vtune, which is also applied to the slave filter.)
A tuning scheme using this principle is shown in Figure 11.9. The reference signal
is applied to the biquad input, and the envelope detector produces a d.c. level, Venv,
proportional to the amplitude of the filter output. In the first phase of the tuning cycle,
the filter tuning voltage Vtune is swept through its range by the ramp generator. At the
point where the resonant frequency of the filter coincides with the reference frequency,
the filter output amplitude, and thus Venv, reaches a maximum, and this value is stored
by the peak detector as Vpk. In the second tuning phase, Vtune is swept again and Venv
is compared with Vpk by the comparator. At the point where the resonant frequency
and reference frequency coincide, Venv is equal to Vpk and the control logic opens the
switch. Thus, the value of Vtune giving the correct filter resonant frequency is stored
on the hold capacitor until the next tuning cycle begins.
In practice, the circuit of Figure 11.9 will suffer tuning errors due to parasitic
charge injection and loss from the tuning-voltage holding capacitor, and offsets in the
comparator and peak detector. However, a more sophisticated implementation of this
technique has been described [11] in which these errors are largely eliminated.
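The two-phase peak-search cycle can be modelled behaviourally. Everything in the sketch below is an illustrative assumption: the Vtune-to-frequency law, the Q, the sweep resolution and the 0.1 per cent comparator margin.

```python
import numpy as np

f_ref = 4e6          # reference frequency, Hz
Q = 20.0             # assumed biquad quality factor

def f_res(vtune):
    """Assumed tuning law: 3-5 MHz over a 0-1 V sweep."""
    return 3e6 + 2e6 * vtune

def envelope(vtune):
    """Second-order bandpass amplitude response evaluated at f_ref."""
    x = f_ref / f_res(vtune)
    return 1 / np.hypot(1 - x**2, x / Q)

vsweep = np.linspace(0, 1, 1001)
env = envelope(vsweep)

# Phase 1: sweep Vtune; the peak detector stores the maximum envelope.
v_pk = env.max()

# Phase 2: sweep again; the control logic opens the switch (storing
# Vtune on the hold capacitor) when the envelope reaches the stored
# peak, to within a small comparator margin.
v_hold = vsweep[np.argmax(env >= 0.999 * v_pk)]
f_tuned = f_res(v_hold)
```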
(Figure 11.10: Q tuning system. A reference signal of amplitude Vref is attenuated by 1/KQ and applied to the master filter H(s); matched envelope detectors, an error amplifier and a loop filter close an amplitude-locked loop on the filter's Q tuning input, and the tuning signal is also sent to the slave.)
The Q tuning system of Figure 11.10 is an amplitude-locked loop which operates
using this proportionality between gain and Q. It is assumed that separate frequency
tuning circuits maintain ω0 of the filter exactly equal to the desired value, and that the
gain of the filter at ω0 is equal to Q. The reference signal is attenuated by a factor
1/KQ by a potential divider and is applied to the filter input. The output amplitude
of the filter is therefore Vref·Q/KQ. A pair of matched envelope detectors generate
d.c. levels proportional to Vref and the filter output, which are compared by an error
amplifier. The resulting feedback signal varies the Q of the filter so that the filter
output is equal to Vref, in which case Q = KQ. Since KQ is determined by component
ratios, which can be made accurate, Q is also accurately defined.
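A minimal iteration illustrates the amplitude-locked loop. The filter is reduced to its assumed gain-equals-Q behaviour at ω0, and KQ, Vref, the loop gain and the starting Q are illustrative assumptions:

```python
# Behavioural sketch of the amplitude-locked Q tuning loop of
# Figure 11.10.
K_Q = 10.0     # attenuation ratio set by component ratios: target Q
Vref = 0.5     # reference signal amplitude, V
Q = 6.0        # initial (mistuned) filter Q

for _ in range(500):
    v_out = (Vref / K_Q) * Q   # filter output amplitude at w0 (gain = Q)
    error = Vref - v_out       # matched envelope detectors + error amp
    Q += 2.0 * error           # feedback signal varies the filter Q

# the loop settles when v_out == Vref, i.e. Q == K_Q
```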
(Figure 11.11: LC ladder bandpass filter driven by source Vs with load RL. Each series and shunt branch is a resonator Li, Ci with an associated switch S1 to Sn; an amplitude detector monitors the voltage V1 at the input node.)
between all filter sections. Thus, tuning any one section of the filter affects all poles
and zeros in the filter transfer function, modifying the filter transfer function in a
complex way. This makes the design of a tuning algorithm capable of realizing the
desired response extremely difficult. This section describes a tuning method based
on Dishal's technique [14, 15] which overcomes this problem, and is applicable to
the leapfrog (LF) form of MLF filter and other types of filter based on LC ladder
simulation. This method can be illustrated using the LC ladder bandpass filter shown
in Figure 11.11.
Synthesis of this ladder filter with centre frequency ω0 results in the inductor and
capacitor values in each branch of the ladder having the same resonant frequency,
1/(Li Ci) = ω0². To tune the filter, initially all switches in the series arms are opened and
all those in the shunt arms are closed. A signal is applied to the input at frequency
ω0, and V1 is monitored by the amplitude detector. S1 is opened and C1/L1 is tuned to
parallel resonance, that is, maximum amplitude of V1. Since S2 is open, the resonator
C1/L1 is isolated from the rest of the circuit, which therefore does not alter the resonant
frequency. Next, S2 is closed and C2/L2 is tuned to series resonance and minimum
V1. Since S3 is closed, C2/L2 are also isolated from succeeding stages of the filter.
Each successive branch is then adjusted in turn, the shunt branches for maximum V1
and the series branches for minimum V1, with the associated switch being opened or
closed. Since all preceding branches are already resonant, the reactive component of
their net series or shunt impedance is zero, and they are transparent at frequency ω0.
When Ln/Cn have been adjusted, the tuning process is complete. In tuning schemes for
second-order cascade filters, it is normally necessary to provide a Q tuning capability.
This is not done when tuning using Dishal's method as described, and so the tuning
process does not completely define the transfer function of the filter. The bandwidth
and ripple in the response are defined by ratios between component values in different
branches of the circuit, whilst the method described above only tunes the inductor
and capacitor in each individual branch in isolation. However, because all branches
(Figure 11.12: leapfrog (LF) simulation of the ladder of Figure 11.11. Integrators 1/sC1, 1/sL2, ..., 1/sCn replace the ladder branches, with source resistance Rs and load RL; switches S2 to Sn sit in the feedback paths, and the amplitude detector monitors V1.)
are resonant at ω0, the passband is symmetrical, insertion loss is minimized and gross
distortion of the frequency response does not occur.
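The branch-by-branch criterion at the heart of Dishal's method can be demonstrated numerically: at ω0, adjusting a shunt branch for maximum impedance and a series branch for minimum impedance both land on 1/(LC) = ω0². Component values below are illustrative assumptions:

```python
import numpy as np

w0 = 2 * np.pi * 10e6        # tuning frequency (illustrative)
L = 1e-6                     # fixed branch inductor, H
c = np.linspace(1e-12, 1e-9, 200001)   # swept capacitor values

# Shunt branch: parallel resonance = maximum |Z| = minimum |Y|.
y_shunt = np.abs(1j * w0 * c + 1 / (1j * w0 * L))
c_shunt = c[np.argmin(y_shunt)]

# Series branch: series resonance = minimum |Z|.
z_series = np.abs(1j * w0 * L + 1 / (1j * w0 * c))
c_series = c[np.argmin(z_series)]

c_target = 1 / (w0**2 * L)   # both criteria give 1/(L*C) = w0**2
```

Because each tuned branch then presents zero net reactance at ω0, it is transparent while the next branch is adjusted, which is what makes the sequential procedure possible.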
Each LC resonator in the prototype is replaced by a two-integrator-loop biquad
with the same ω0. In the LC filter, coupling between resonators occurs because they
are connected together; in the LF filter, this coupling occurs via the feedback paths.
Therefore, the switches in Figures 11.11 and 11.12 perform an equivalent function.
As in the LC prototype, it is only necessary to monitor the test-signal amplitude at
one point in the LF circuit, the output of the first integrator, V1.
A single biquad making up part of the filter in Figure 11.12 is shown in
Figure 11.13(a). This could be implemented as the OTA-C biquad circuit of
Figure 11.13(b). The transfer function of Figure 11.13(a) is
H(s) = \frac{R_S}{R} \cdot \frac{(1/R_S C_1)\,s}{s^2 + (1/R_S C_1)\,s + 1/(L_1 C_1)}

\omega_0 = \frac{1}{\sqrt{L_1 C_1}}, \qquad \frac{\omega_0}{Q} = \frac{1}{R_S C_1}, \qquad K_{BP} = \frac{R_S}{R}   (11.13)

where R is a scaling resistance. Thus, for Figure 11.13(b) we can write:

H(s) = \frac{g_0}{g_3} \cdot \frac{(g_3/C_1)\,s}{s^2 + (g_3/C_1)\,s + g_1 g_2/(C_1 C_2)}

\omega_0 = \sqrt{\frac{g_1 g_2}{C_1 C_2}}, \qquad \frac{\omega_0}{Q} = \frac{g_3}{C_1}, \qquad K_{BP} = \frac{g_0}{g_3}   (11.14)
By equating coefficients in Equations (11.13) and (11.14), the circuit of
Figure 11.13(b) can be used to generate the transfer function of Figure 11.13(a), with
L1 = C2/(g1 g2), RS = 1/g3 and R = 1/g0. The complete LF filter can be implemented by
cascading a number of these biquads and providing the feedback path connections.
Only the first and final biquads have finite Q (corresponding to the terminating resistors
of the prototype ladder network), so transconductor g3 is only required for these
stages. Suitable gain and impedance scaling will yield practical component values.
(Figure 11.13: (a) a single resonator section of the LF filter, with elements 1/Rs, 1/R and integrator 1/sC1; (b) its OTA-C biquad implementation with transconductors g0 to g3, capacitors C1 and C2 and tuning input Vtune.)
In order to implement the tuning method, g0 to g3 and the bias current sources
are dimensioned so that the ratio between the transconductances remains constant
as Vtune is varied. Similarly, the ratio C1/C2 will be preserved under variations in
absolute capacitance. Suppose process variations change all transconductances by a
factor kg and all capacitances by a factor kc. The transfer function of Figure 11.13(b)
then becomes:
H'(s) = \frac{g_0}{g_3} \cdot \frac{(k_g g_3/k_c C_1)\,s}{s^2 + (k_g g_3/k_c C_1)\,s + k_g^2 g_1 g_2/(k_c^2 C_1 C_2)}, \qquad \omega_0' = \frac{k_g}{k_c}\sqrt{\frac{g_1 g_2}{C_1 C_2}}   (11.15)
ω0′ is altered from ω0 by a factor kg/kc. The effect of tuning the circuit to resonance
using Dishal's method is to force ω0′ to the design value ω0 by changing Vtune,
and hence the transconductances kg·g0 to kg·g3. This is achieved when kg = kc. Substituting kg = kc into
Equation (11.15) gives the original transfer function. Thus, tuning only the pole
frequencies of the biquad also restores Q to the original value.
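The restoration argument can be checked numerically: forcing ω0′ back to ω0 forces kg to kc, which removes every kg/kc factor from Equation (11.15). Component values and the deviation factors are illustrative assumptions:

```python
import numpy as np

g0, g1, g2, g3 = 50e-6, 50e-6, 50e-6, 10e-6
C1, C2 = 2e-12, 2e-12
kg, kc = 0.85, 1.10                   # assumed process deviations

def H(s, kg, kc):
    """Deviated biquad transfer function of Equation (11.15)."""
    a = (kg * g3) / (kc * C1)
    b = (kg**2 * g1 * g2) / (kc**2 * C1 * C2)
    return (g0 / g3) * a * s / (s**2 + a * s + b)

w0 = np.sqrt(g1 * g2 / (C1 * C2))     # design pole frequency
w0_dev = (kg / kc) * w0               # deviated pole frequency
kg_tuned = kg * (w0 / w0_dev)         # frequency tuning drives kg -> kc

s = 1j * np.logspace(6, 8, 201)       # evaluation frequencies, rad/s
err = np.max(np.abs(H(s, kg_tuned, kc) - H(s, 1.0, 1.0)))
```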
An on-chip tuning system which tunes the pole frequency of a single biquad by
detecting the peak of its amplitude response is described in detail in Reference 11.
This system is shown in elementary form in Figure 11.14. A test signal at ω0 is
applied to the biquad input and Vo is rectified. The rectified signal Venv is applied to
a peak detector. In the first tuning phase, Vtune is swept through its range by a ramp
generator. At the point where the pole frequency of the biquad is equal to ω0, Venv is
a maximum, and this value is stored as the peak detector output Vpk. In the second
tuning phase, Vtune is swept again, and Venv is compared with Vpk. At the point where
both are equal, the control logic opens the switch, causing the current value of the tuning
voltage to be stored on Chold; this is again the peak of the amplitude response.
Reference 11 describes a more sophisticated implementation in which the effects of
delays and offsets are cancelled.
This scheme may be extended as in Figure 11.15 to sequentially tune a number of
biquads making up the bandpass LF filter. Initially, Vtune1 is adjusted for peak output at
(Figure 11.14: elementary form of the peak-detection tuning system. A test signal at ω0 drives the biquad; the rectified output Venv feeds a peak detector storing Vpk, and a comparator, ramp generator and control logic capture the tuning voltage Vtune on Chold.)
(Figure 11.15: sequential tuning of the biquads of the bandpass LF filter. Tuning voltages Vtune1 to VtuneN are set in turn by the ramp generator and control logic, with a minimum detector producing Vmin from V1.)
V1. To isolate the first biquad from the rest of the filter, Vtune2 to VtuneN are initialized
to zero, de-biasing the other biquads. After Vtune1 has been adjusted, Vtune2 is tuned
for minimum V1. The minimum detector is a peak detector with inverted polarity.
The process is repeated with Vtune3 to VtuneN until all biquads have been tuned.
The tuning scheme described above has a number of benefits:

• The test signal is connected to the input node, and the filter response is measured at
the input resonator node of the filter throughout the tuning process. This minimizes
the number of signal paths that must be added to the filter, minimizing additional
circuit parasitics.
• A single test-signal frequency is required, equal to the filter centre frequency.
Often a suitable signal will already be available in the system as a carrier signal.
• The tuning system need only detect amplitude maxima and minima; it is not necessary
to measure accurate amplitude ratios or phase, reducing possible sources
of error in high-frequency applications.

(Figure: basic PLL frequency synthesizer. A phase/frequency comparator compares fref with the output of a 1/N divider; the loop filter produces Vtune, which drives the VCO generating fout.)
(Figure 11.17: VCO output frequency versus tuning voltage. The tuning range must cover the required output frequency range plus the VCO frequency tolerance on either side.)
There are usually stringent requirements on the spectral purity of the output signal,
especially the phase noise sidebands around the output frequency or time-domain
jitter in digital clock applications.
The VCO gain KVCO is the gradient of the VCO output frequency versus tuning
voltage function: as illustrated in Figure 11.17, KVCO often varies over the tuning
range of the VCO. The tuning range of a VCO must be large enough to cover the
range of output frequencies required for the synthesizer application, and also to cover
the tolerance on operating frequency resulting from the effect of process variations
on the frequency-determining component values. In many applications, the tolerance
on operating frequency is much larger than the required operating frequency range,
requiring a VCO with a wide tuning range compared to the actual operating frequency
range. Shrinking CMOS geometries results in lower supply voltages, which in turn
reduce the tuning voltage range that is feasible. The combination of small tuning volt-
age and large output frequency range results in a large value of KVCO being required.
Unfortunately, a VCO with high gain is inherently more noisy than one with low
gain, because a given noise level at the tuning voltage input will result in greater
phase noise in the output signal. LC resonator-based VCOs generally have superior
noise characteristics to relaxation oscillators, but have more restricted tuning ranges,
since the capacitance variation possible with low tuning voltages is restricted. A fur-
ther problem with varactor-based tuning is that KVCO varies with the tuning voltage,
due to the non-linear relationship between voltage, capacitance and frequency. The
changing VCO gain makes it difficult to optimize the dynamic response of the PLL
feedback loop over the whole tuning range.
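The variation of VCO gain over the tuning range can be illustrated numerically. The sketch below uses an abrupt-junction varactor model with purely illustrative component values (none of the names or numbers are from the text) and estimates KVCO as df/dV by central difference:

```python
import math

def lc_vco_freq(v_tune, L=2e-9, c_fixed=1e-12, c_var0=3e-12, phi=0.7, n=0.5):
    """Resonant frequency of an LC tank tuned by a junction varactor.

    C_var(V) = C_var0 / (1 + V/phi)^n  (abrupt-junction model, illustrative values).
    """
    c_tank = c_fixed + c_var0 / (1.0 + v_tune / phi) ** n
    return 1.0 / (2.0 * math.pi * math.sqrt(L * c_tank))

def kvco(v_tune, dv=1e-3):
    """VCO gain K_VCO = df/dV in Hz/V, estimated by central difference."""
    return (lc_vco_freq(v_tune + dv) - lc_vco_freq(v_tune - dv)) / (2.0 * dv)

# K_VCO falls markedly as the tuning voltage rises across a 0.3-1.5 V range:
for v in (0.3, 0.9, 1.5):
    print(f"V = {v:.1f} V: f = {lc_vco_freq(v)/1e9:.3f} GHz, "
          f"K_VCO = {kvco(v)/1e6:.1f} MHz/V")
```

With these assumed values the gain at the low end of the range is roughly twice that at the high end, which is the kind of variation that complicates loop optimization.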
[Figure 11.18: band-switched LC VCO, with supply Vdd, tank inductor Ltank, tuning voltage Vtune and digital band-select inputs. Figure 11.19: overlapping sub-bands, band 0 to band n, together covering the required output frequency range and the VCO frequency tolerance]
Switching tank capacitance with the digital band-select inputs provides discrete coarse tuning steps, as illustrated in Figure 11.19. In this way, a
wide tuning range is provided as a series of overlapping narrow bands.
As well as reducing the required VCO gain, VCO tuning linearity can also be
improved, since by providing sufficient overlap between sub-bands, a relatively linear
portion of the voltage tuning transfer function can be used.
In order to implement this band-switching scheme, the frequency control system
must be able to select the correct sub-band in order to generate the required output
368 Test and diagnosis of analogue, mixed-signal and RF integrated circuits
[Figures: PLL synthesizer with VCO band calibration. The loop comprises a phase/frequency comparator driven by fref, a loop filter, a VCO with coarse-tune digital inputs producing fout, and a 1/N divider. In calibrate mode Vtune is switched from the loop filter to a Vmax/Vmin voltage reference while calibration control logic selects the coarse-tune word and N; an accompanying flowchart runs from "start calibration" through "connect Vtune to voltage reference" to "calibration complete". A further diagram shows calibration logic continuously comparing Vtune against Vmax and Vmin]
reference frequency; if it is lower than the reference frequency, the lower limit of the
required output frequency range is inside the tuning range of the VCO. If the divider
output frequency is higher than the reference frequency, the lowest required output
frequency is beyond the lower limit of the VCO tuning range, and a lower frequency
sub-band must be selected in order for the VCO to tune the required frequency range.
A similar procedure can be applied to the upper end of the VCO tuning range; the VCO
now has the maximum tuning voltage applied, and the divide-by-N is programmed
with the value of N corresponding to the maximum required output frequency. A
frequency at the divider output greater than the reference frequency indicates that
the maximum required output frequency is inside the VCO tuning range, while a
frequency below the reference frequency indicates that a higher-frequency sub-band
must be selected. This algorithm is run iteratively until a sub-band is selected that
satisfies both minimum and maximum tuning range requirements.
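A minimal sketch of this iterative sub-band search, assuming a hypothetical monotonic band model `vco_freq(band, v_tune)` and illustrative frequencies (the function names, band model and numbers are all assumptions, not from the text):

```python
def select_sub_band(vco_freq, n_bands, f_ref, n_min, n_max, v_min, v_max):
    """Iterative coarse-band search following the algorithm in the text:
    test the band's lower edge (minimum tuning voltage, divider programmed
    with N for the minimum required output) and its upper edge (maximum
    tuning voltage, N for the maximum required output) against f_ref,
    stepping to an adjacent band until both tests pass.
    Sketch only: range clamping of `band` is omitted."""
    band = n_bands // 2                                   # start mid-range
    for _ in range(n_bands):
        low_ok = vco_freq(band, v_min) / n_min <= f_ref   # reaches low enough
        high_ok = vco_freq(band, v_max) / n_max >= f_ref  # reaches high enough
        if low_ok and high_ok:
            return band
        band += 1 if not high_ok else -1                  # step toward failing end
    raise ValueError("no sub-band covers the required range")

# Hypothetical VCO: 2.0 GHz base, 50 MHz band steps, 40 MHz fine-tune span
model = lambda band, v: 2.0e9 + band * 50e6 + (v / 1.5) * 40e6
# Required outputs 2.10-2.13 GHz with a 10 MHz reference -> N = 210..213
print(select_sub_band(model, 8, 10e6, 210, 213, 0.0, 1.5))  # selects band 2
```

Starting mid-range and stepping one band at a time converges for any monotonic band model; a binary search over the coarse word (as in reference [18]) would converge faster for large band counts.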
A PLL synthesizer including a closed-loop calibration scheme is shown in
Figure 11.22. The VCO tuning voltage is continuously compared with Vmax and
Vmin , representing the limits of allowable tuning voltage excursion. When Vtune moves
outside the maximum or minimum limit, the coarse tuning word is incremented or
decremented appropriately.
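The comparator action of this closed-loop scheme can be sketched as a one-step update rule (the signal polarity and all names below are assumptions for illustration):

```python
def update_coarse_tune(word, v_tune, v_min, v_max, word_max):
    """One closed-loop calibration decision: when V_tune escapes the
    [v_min, v_max] window the fine control has run out of range, so the
    coarse tuning word is stepped (polarity assumed: a higher word selects
    a higher-frequency band; the opposite convention also occurs)."""
    if v_tune > v_max and word < word_max:
        return word + 1
    if v_tune < v_min and word > 0:
        return word - 1
    return word  # inside the window: leave the coarse setting alone
```

Because the update runs continuously during normal operation, the synthesizer can track slow drift (temperature, supply) without a dedicated calibration phase, at the cost of a frequency transient whenever the coarse word changes.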
the antenna terminals must match the requirements of the transmitter power amplifier and receiver low-noise amplifier. The degree of impedance mismatch is usually represented by the reflection coefficient, Γ:

Γ = (Zant − Z0)/(Zant + Z0)     (11.16)

where Zant is the impedance of the antenna and Z0 is the load impedance required
by the power amplifier. Since both Zant and Z0 may be complex, Γ is also a complex
number, with magnitude between zero and one. When Zant and Z0 are equal, that is,
perfectly matched, Γ is zero. Antenna design is a compromise between many factors;
achieving a desirable impedance must be traded off against radiation pattern,
bandwidth, efficiency and other factors. This is especially so when electrically small
antennas are required (that is, the dimensions of the antenna are small compared to the
operating wavelength), as is the case for many integrated wireless transceiver appli-
cations operating in the ultra-high frequency (UHF) range. Typically, the impedance
of such antennas varies rapidly with frequency and also due to environmental effects,
so it is not practical to obtain precise impedance matching either through antenna
design or using fixed impedance-matching networks.
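Equation (11.16) applies directly to complex impedances; a quick numerical check (the antenna impedance value below is purely illustrative):

```python
def reflection_coefficient(z_ant, z0):
    """Gamma = (Z_ant - Z0) / (Z_ant + Z0), as in equation (11.16)."""
    return (z_ant - z0) / (z_ant + z0)

# Perfect match gives Gamma = 0
assert reflection_coefficient(50 + 0j, 50 + 0j) == 0

# A mismatched electrically small antenna, e.g. Z_ant = 20 - 35j ohms against
# Z0 = 50 ohms, gives a complex Gamma with magnitude between zero and one:
gamma = reflection_coefficient(20 - 35j, 50 + 0j)
print(abs(gamma))
```

Python's built-in complex arithmetic handles the complex Zant and Z0 without any extra machinery, which makes this convenient for quick mismatch estimates.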
A possible solution to the antenna-impedance-matching problem is to incorpo-
rate a tunable impedance-matching network between the transmitter/receiver and the
antenna. This has long been widespread practice in the medium frequency, high
frequency and very high frequency ranges where automatic antenna tuners with
discrete-component LC networks are used in order to achieve the relatively large
inductances and capacitances required at lower frequencies [28]. Recently, inte-
grated on-chip matching networks have been investigated for UHF and microwave
transceiver antennas, since it is feasible to produce the smaller components required
in integrated form. An on-chip antenna tuner consists of three major components (Figure 11.23): a tunable matching network, an impedance sensor and a control system.
Thus, the automatic antenna tuner functions as a feedback system, adjusting the
matching network components to optimize the transformed impedance at the matching
network input. The system can thus respond to changes in antenna impedance that
occur over time; in mobile and hand-held applications, large impedance changes
occur due to relative movements between the antenna and surrounding conducting
or dielectric objects, especially the user's body. The issues involved in the design of
the major components of the automatic antenna tuner are described in the following
sections.
Tuning and calibration of analogue, mixed-signal and RF circuits 373
[Figure 11.23: automatic antenna tuner — the TX power amplifier drives the antenna through a matching network, with an impedance sensor and control system closing the feedback loop. Further figures show digitally tuned matching networks between the power amplifier and the antenna, built from binary-weighted component arrays (L, 2L, 4L, ..., 2^n L; C, 2C, 4C, ..., 2^n C) with digital tuning inputs and impedance-inverting networks]
Active devices used as switches also introduce additional loss and circuit parasitics
and, since they are non-linear, generate harmonics and inter-modulation products.
They also place constraints on the power-handling capability of the matching network
due to their limited breakdown voltages. Active devices perform best when configured
as shunt switches, since the full supply voltage can be applied as bias between the
gate and source electrodes and, since the source is grounded, VGS is not modulated
by the signal voltage, as would be the case for a series switch. This minimizes the
switch-on resistance and reduces production of inter-modulation products due to
switch non-linearity. Selection of switching transistor dimensions is a compromise
between increased resistive losses in switches with small widths and increased shunt
capacitance in larger widths. MEMS have also been proposed as low-loss matching
network switches [34, 36].
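The binary-weighted arrays shown in the matching-network figures give component steps proportional to a digital code; a minimal sketch for the capacitor array, with an assumed unit value and switch parasitics ignored:

```python
def array_capacitance(code, c_unit=0.25e-12, n_bits=5):
    """Total capacitance of a binary-weighted switched array (C, 2C, 4C, ...):
    each set bit k of the code switches in a capacitor of weight 2^k * C, so
    the total is simply code * C. Switch parasitics are ignored in this sketch."""
    if not 0 <= code < 2 ** n_bits:
        raise ValueError("code out of range")
    return sum(c_unit * (1 << k) for k in range(n_bits) if code & (1 << k))

# 5-bit array: 32 uniform steps from 0 up to 31 * 0.25 pF = 7.75 pF
print(array_capacitance(31))
```

Binary weighting keeps the number of switches (and hence loss and parasitic contributions) logarithmic in the number of tuning steps, which is why it appears in the networks above.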
the control system minimizes the detector output, which optimizes the match to the
impedance level for which the coupler is designed. In one application, the tuning net-
work itself has been utilized as a six-port coupler, with the non-linear characteristic
of the switching devices also performing an amplitude detection function, providing
the impedance data for the control system [35].
11.5 Conclusions
11.6 References
1 Nimmo, R.: Analogue electronics, the poor relation?, Proceedings of IEE Symposium on Analogue Signal Processing, Oxford, 1 November 2000, pp. 1/1–1/5
2 Deliyannis, T., Sun, Y., Fidler, J.K.: Continuous-Time Active Filter Design (CRC Press, Boca Raton, FL, 1999)
3 Sun, Y., Fidler, J.K.: Structure generation and design of multiple loop feedback OTA-grounded capacitor filters, IEEE Transactions on Circuits and Systems-I, 1997;44 (1):1–11
4 Sun, Y.: Design of High Frequency Integrated Analogue Filters (The Institution of Electrical Engineers, London, 2002)
5 Banu, M., Tsividis, Y.: Fully integrated active RC filters in MOS technology, IEEE Journal of Solid State Circuits, 1983;18 (6):644–51
6 Kuhn, W.B., Elshabini-Riad, A., Stephenson, F.W.: A new tuning technique for implementing very high Q, continuous-time, bandpass filters in radio receiver applications, Proceedings of IEEE ISCAS 94, 30 May–5 June 1994, Vol. 5, pp. 257–60
7 Krummenacher, F., Joehl, N.: A 4 MHz CMOS continuous-time filter with on-chip automatic tuning, IEEE Journal of Solid State Circuits, 1988;23 (3):750–8
8 Shi, B., Shan, W., Andreani, P.: A 57 dB image band rejection CMOS Gm-C polyphase filter with automatic frequency tuning for Bluetooth, Proceedings of IEEE ISCAS 2002, 2002, Vol. 5, pp. 169–72
9 Yamazaki, H., Oishi, K., Gotoh, K.: An accurate center frequency tuning scheme for 450 kHz CMOS Gm-C bandpass filters, IEEE Journal of Solid State Circuits, 1999;34 (12):1691–7
10 Pham, T.K., Allen, P.E.: A design of a low-power, high accuracy, constant-Q-tuning continuous-time bandpass filter, Proceedings of IEEE ISCAS 2002, 2002, Vol. 4, pp. 639–42
11 Karsilayan, A.I., Schaumann, R.: Mixed-mode automatic tuning scheme for high-Q continuous-time filters, IEE Proceedings Circuits, Devices and Systems, 2000;147 (1):57–64
12 Linares-Barranco, B., Serrano-Gotarredona, T.: A loss control feedback loop for VCO stable amplitude tuning of RF integrated filters, Proceedings of IEEE ISCAS 2002, 2002, Vol. 1, pp. 521–4
13 Li, D., Tsividis, Y.: Design techniques for automatically tuned gigahertz range active LC filters, IEEE Journal of Solid State Circuits, 2002;37 (8):967–77
14 Moritz, J.R., Sun, Y.: 100 MHz, 6th order leapfrog Gm-C high Q bandpass filter and on-chip tuning scheme, Proceedings of IEEE ISCAS 2006, Kos, Greece, 21–24 May 2006, pp. 2381–4
15 Dishal, M.: Alignment and adjustment of synchronously tuned multiple resonant circuit filters, Electrical Communication, 1952;154–64
16 Kroupa, V.F.: Phase Lock Loops and Frequency Synthesis (Wiley, Chichester, 2003)
17 Wilson, W.B., Moon, U.-K., Lakshmikumar, K.R., Dai, L.: A CMOS self-calibrating frequency synthesiser, IEEE Journal of Solid State Circuits, 2000;35 (10):1437–44
18 Lee, K.-S., Sung, E.-Y., Hwang, I.-C., Park, B.-H.: Fast AFC technique using a code estimation and binary search algorithm for wideband frequency synthesis, Proceedings of IEEE ESSCIRC 2005, Grenoble, France, September 2005, pp. 181–4
19 Lin, T.-H., Lai, Y.-J.: An agile VCO frequency calibration technique for a 10-GHz CMOS PLL, IEEE Journal of Solid State Circuits, 2007;42 (2):340–9
20 Lee, S.T., Fang, S.J., Allstot, D.J., Bellaouar, A., Fridi, A.R., Fontaine, P.A.: A quad-band GSM-GPRS transmitter with digital auto-calibration, IEEE Journal of Solid State Circuits, 2004;39 (12):2200–14
21 Aktas, A., Ismail, M.: CMOS PLL calibration techniques, IEEE Circuits and Devices Magazine, 2004;20:6–11
22 Lin, T.-H., Kaiser, W.J.: A 900-MHz 2.5-mA CMOS frequency synthesiser with an automatic SC tuning loop, IEEE Journal of Solid State Circuits, March 2001;36 (3):424–31
23 Razavi, B.: Challenges in portable RF transceiver design, IEEE Circuits and Devices Magazine, 1996;12 (5):12–25
24 Park, C.-H., Kim, O., Kim, B.: A 1.8 GHz self-calibrated phase-locked loop with precise I/Q matching, IEEE Journal of Solid State Circuits, May 2001;36 (5):777–83
25 Ahn, H.K., Park, I.-C., Kim, B.: A 5-GHz self-calibrated I/Q clock generator using a quadrature LC-VCO, Proceedings of IEEE ISCAS 2003, Bangkok, Thailand, 25–28 May 2003, pp. I-797–I-800
26 Ali, S., Margala, M.: A 2.4-GHz auto-calibration frequency synthesiser with on-chip built-in-self-test solution, Proceedings of IEEE ISCAS 2006, Kos, Greece, May 2006, pp. 4651–4
27 Wu, T., Mayaram, K., Moon, U.-K.: An on-chip calibration technique for reducing supply voltage sensitivity in ring oscillators, Digest of Technical Papers, IEEE 2006 Symposium on VLSI Circuits, Hawaii, June 2006, pp. 102–3
28 Moritz, J.R., Sun, Y.: Frequency agile antenna tuning and matching, Proceedings of 8th International IEE Conference on HF Radio Systems and Techniques, 2000 (IEE Conf. Publ. no. 474), Bath, UK, 10–13 July 2000, pp. 169–74
29 Sun, Y., Fidler, J.K.: Design method for impedance matching networks, IEE Proceedings Circuits, Devices and Systems, 1996;143 (4):186–94
30 Sun, Y., Fidler, J.K.: Component value ranges of tuneable impedance matching networks in RF communications systems, Proceedings of IEE Conference on HF Radio Systems and Techniques, Leicester, UK, 7–10 July 1997, Conference publication no. 411, pp. 185–9
31 Sun, Y., Fidler, J.K.: Determination of the impedance matching domain of passive LC ladder networks: theory and implementation, Journal of the Franklin Institute, 1996;333(B) (2):141–55
32 Chamseddine, A., Haslett, J.W., Okoniewski, M.: CMOS silicon-on-sapphire tunable matching networks, EURASIP Journal on Wireless Communications and Networking, Vol. 2006, pp. 1–11
33 Sjoblom, P., Sjoland, H.: An adaptive impedance tuning CMOS circuit for ISM 2.4-GHz band, IEEE Transactions on Circuits and Systems I, June 2005;52 (6):1115–24
34 Deve, N., Kouki, A.B., Nerguizian, V.: A compact size reconfigurable 1–3 GHz impedance tuner suitable for RF MEMS applications, Proceedings of the 16th IEEE International Conference on Microelectronics, Nis, Serbia and Montenegro, 6–8 December 2004, pp. 101–4
35 de Lima, R.N., Huyart, B., Bergeault, E., Jallet, L.: MMIC impedance matching system, Electronics Letters, 2000;36 (16):1393–4
36 Lange, K.L., Papapolymerou, J., Goldsmith, C.L., Malczewski, A., Kleber, J.: A reconfigurable double stub tuner using MEMS devices, Proceedings of IEEE MTT-S, 2001, Vol. 1, pp. 337–40
37 McIntosh, C.E., Pollard, R.D., Miles, R.E.: Novel MMIC source-impedance tuners for on-wafer microwave noise-parameter measurements, IEEE Transactions on Microwave Theory and Techniques, February 1999;47 (2):125–31
38 Zolomy, A., Mernyei, F., Erdelyi, J., Pardoen, M., Toth, G.: Automatic antenna tuning for RF transmitter IC applying high Q antenna, Proceedings of IEEE Radio Frequency Integrated Circuits Symposium, Fort Worth, TX, June 2004, pp. 501–4
39 Sun, Y., Lau, W.K.: Automatic impedance matching using genetic algorithms, Proceedings of IEE Conference on Antennas and Propagation, York, UK, August 1999
INDEX
antennas
impedance matching 371
impedance sensors 376
matching network 373
tuning algorithms 377
artificial neural network (ANN)-based
approaches: see neural-network-based
approaches
automatic test equipment (ATE) interface 142 145 174
calibration techniques
phase-locked loops (PLL) frequency
synthesizers 365
time measurement unit (TMU) 164
time-to-digital converter (TDC) 164
genetic algorithms 30
global ambiguity groups 49 52
HABIST 229
hierarchical techniques 121
extensions using the self-test algorithm 121
large-scale circuit fault diagnosis 31
mixed SBT/SAT approaches 135
neural-network-based approaches 130
Newton–Raphson-based approach 136
simulation-after-test (SAT) 121
simulation-before-test (SBT) 129 131
symbolic analysis 124
jitter measurement
analogue-based device 160
phase-locked loops (PLLs) 164 283 295 298
306
Vernier delay line 158 159
Katznelson-type algorithm 73
k-branch-fault diagnosis method 4
bilinear function 8
design for testability 6
multiple excitation method 8
testability analysis 6
k-cutset-fault diagnosis 12
branch-fault diagnosis equations 14
loop- and mesh-fault diagnosis 14
tree selection 14
Kerwin–Huelsman–Newcomb (KHN) biquad filter
multiplexing technique 189
oscillation-based test (OBT) 198
state-variable filter
oscillation-based test (OBT) 193
k-fault diagnosis methods 3
class-fault diagnosis 15
fault incremental circuit 3
non-linear circuits 22
recent advances 29
relation of branch-, node- and cutset-fault
diagnosis 14
test node selection 29
tolerance effects and treatment 15
see also k-branch-fault diagnosis method;
k-cutset-fault diagnosis; k-node-fault
diagnosis
KHN: see Kerwin–Huelsman–Newcomb (KHN)
k-node-fault diagnosis 9
parameter identification 10
SC filters
bypassing method
bypassing by bandwidth broadening 183
bypassing using duplicated/switched
opamp 187
oscillation-based test (OBT) 199
self-test (ST) algorithm 116
hierarchical/decomposition techniques 121
see also built-in self test (BIST)
sensitivity analysis 30 120
hierarchical techniques 124 127
sequence of expressions (SOE) 124 128
sigma-delta (ΣΔ) converters 235
architecture 239
behavioural model 264
built-in self test (BIST) 255 262
defect-oriented testing 258
design for testability (DfT) 261
digital filtering and decimation 238
dynamic performance
parameters 246
first-order modulators 240 241
functional testing 254 256
high-order modulators 241
histogram testing 246
model-based testing 259
performance characterization 243
polynomial model 262
principle of operation 236
quantization noise 247