
E&CE 327: Digital Systems Engineering

Course Notes
(with Solutions)
Mark Aagaard
2011t1 (Winter)
University of Waterloo
Dept of Electrical and Computer Engineering
Contents
I Course Notes 1
1 VHDL 3
1.1 Introduction to VHDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Levels of Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.2 VHDL Origins and History . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.1.4 Synthesis of a Simulation-Based Language . . . . . . . . . . . . . . . . . 7
1.1.5 Solution to Synthesis Sanity . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.6 Standard Logic 1164 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2 Comparison of VHDL to Other Hardware Description Languages . . . . . . . . . 9
1.2.1 VHDL Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 VHDL Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.3 VHDL and Other Languages . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.3.1 VHDL vs Verilog . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.3.2 VHDL vs System Verilog . . . . . . . . . . . . . . . . . . . . . 10
1.2.3.3 VHDL vs SystemC . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.3.4 Summary of VHDL Evaluation . . . . . . . . . . . . . . . . . . 11
1.3 Overview of Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.1 Syntactic Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.2 Library Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.3 Entities and Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.4 Concurrent Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.5 Component Declaration and Instantiations . . . . . . . . . . . . . . . . . . 16
1.3.6 Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.7 Sequential Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.8 A Few More Miscellaneous VHDL Features . . . . . . . . . . . . . . . . 18
1.4 Concurrent vs Sequential Statements . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.1 Concurrent Assignment vs Process . . . . . . . . . . . . . . . . . . . . . . 18
1.4.2 Conditional Assignment vs If Statements . . . . . . . . . . . . . . . . . . 18
1.4.3 Selected Assignment vs Case Statement . . . . . . . . . . . . . . . . . . . 19
1.4.4 Coding Style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.5 Overview of Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.5.1 Combinational Process vs Clocked Process . . . . . . . . . . . . . . . . . 22
1.5.2 Latch Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.5.3 Combinational vs Flopped Signals . . . . . . . . . . . . . . . . . . . . . . 25
1.6 Details of Process Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6.1 Simple Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.6.2 Temporal Granularities of Simulation . . . . . . . . . . . . . . . . . . . . 26
1.6.3 Intuition Behind Delta-Cycle Simulation . . . . . . . . . . . . . . . . . . 27
1.6.4 Definitions and Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.6.4.1 Process Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.6.4.2 Simulation Algorithm . . . . . . . . . . . . . . . . . . . . . . . 28
1.6.4.3 Delta-Cycle Definitions . . . . . . . . . . . . . . . . . . . . . . 30
1.6.5 Example 1: Process Execution (Bamboozle) . . . . . . . . . . . . . . . . . 31
1.6.6 Example 2: Process Execution (Flummox) . . . . . . . . . . . . . . . . . 40
1.6.7 Example: Need for Provisional Assignments . . . . . . . . . . . . . . . . 42
1.6.8 Delta-Cycle Simulations of Flip-Flops . . . . . . . . . . . . . . . . . . . . 44
1.7 Register-Transfer-Level Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.7.2 Technique for Register-Transfer Level Simulation . . . . . . . . . . . . . . 52
1.7.3 Examples of RTL Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 53
1.7.3.1 RTL Simulation Example 1 . . . . . . . . . . . . . . . . . . . . 53
1.8 VHDL and Hardware Building Blocks . . . . . . . . . . . . . . . . . . . . . . . . 58
1.8.1 Basic Building Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
1.8.2 Deprecated Building Blocks for RTL . . . . . . . . . . . . . . . . . . . . 59
1.8.2.1 An Aside on Flip-Flops and Latches . . . . . . . . . . . . . . . 59
1.8.2.2 Deprecated Hardware . . . . . . . . . . . . . . . . . . . . . . . 59
1.8.3 Hardware and Code for Flops . . . . . . . . . . . . . . . . . . . . . . . . 60
1.8.3.1 Flops with Waits and Ifs . . . . . . . . . . . . . . . . . . . . . . 60
1.8.3.2 Flops with Synchronous Reset . . . . . . . . . . . . . . . . . . 60
1.8.3.3 Flops with Chip-Enable . . . . . . . . . . . . . . . . . . . . . . 61
1.8.3.4 Flop with Chip-Enable and Mux on Input . . . . . . . . . . . . . 61
1.8.3.5 Flops with Chip-Enable, Muxes, and Reset . . . . . . . . . . . . 62
1.8.4 An Example Sequential Circuit . . . . . . . . . . . . . . . . . . . . . . . 62
1.9 Arrays and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
1.10 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
1.10.1 Arithmetic Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.2 Shift and Rotate Operations . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.3 Overloading of Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . 68
1.10.4 Different Widths and Arithmetic . . . . . . . . . . . . . . . . . . . . . . . 69
1.10.5 Overloading of Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . 69
1.10.6 Different Widths and Comparisons . . . . . . . . . . . . . . . . . . . . . . 69
1.10.7 Type Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
1.11 Synthesizable vs Non-Synthesizable Code . . . . . . . . . . . . . . . . . . . . . . 71
1.11.1 Unsynthesizable Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.11.1.1 Initial Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.11.1.2 Wait For . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
1.11.1.3 Different Wait Conditions . . . . . . . . . . . . . . . . . . . . . 72
1.11.1.4 Multiple if rising_edge in Process . . . . . . . . . . . . . . . . 73
1.11.1.5 if rising_edge and wait in Same Process . . . . . . . . . . . 73
1.11.1.6 if rising_edge with else Clause . . . . . . . . . . . . . . . . 74
1.11.1.7 if rising_edge Inside a for Loop . . . . . . . . . . . . . . . . 74
1.11.1.8 wait Inside a for Loop . . . . . . . . . . . . . . . . . . . . . 75
1.11.2 Synthesizable, but Bad Coding Practices . . . . . . . . . . . . . . . . . . . 76
1.11.2.1 Asynchronous Reset . . . . . . . . . . . . . . . . . . . . . . . . 76
1.11.2.2 Combinational if-then Without else . . . . . . . . . . . . . 77
1.11.2.3 Bad Form of Nested Ifs . . . . . . . . . . . . . . . . . . . . . . 77
1.11.2.4 Deeply Nested Ifs . . . . . . . . . . . . . . . . . . . . . . . . . 77
1.11.3 Synthesizable, but Unpredictable Hardware . . . . . . . . . . . . . . . . . 78
1.12 Synthesizable VHDL Coding Guidelines . . . . . . . . . . . . . . . . . . . . . . . 78
1.12.1 Signal Declarations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
1.12.2 Flip-Flops and Latches . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
1.12.3 Inputs and Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
1.12.4 Multiplexors and Tri-State Signals . . . . . . . . . . . . . . . . . . . . . . 79
1.12.5 Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
1.12.6 State Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
1.12.7 Reset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
1.13 VHDL Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
P1.1 IEEE 1164 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
P1.2 VHDL Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
P1.3 Flops, Latches, and Combinational Circuitry . . . . . . . . . . . . . . . . 85
P1.4 Counting Clock Cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
P1.5 Arithmetic Overflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
P1.6 Delta-Cycle Simulation: Pong . . . . . . . . . . . . . . . . . . . . . . . . 89
P1.7 Delta-Cycle Simulation: Baku . . . . . . . . . . . . . . . . . . . . . . . . 89
P1.8 Clock-Cycle Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
P1.9 VHDL → VHDL Behavioural Comparison: Teradactyl . . . . . . . . . . . . 92
P1.10 VHDL → VHDL Behavioural Comparison: Ichtyostega . . . . . . . . . . 93
P1.11 Waveform → VHDL Behavioural Comparison . . . . . . . . . . . . . . . 95
P1.12 Hardware → VHDL Comparison . . . . . . . . . . . . . . . . . . . . . . 97
P1.13 8-Bit Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
P1.13.1 Asynchronous Reset . . . . . . . . . . . . . . . . . . . . . . . . 98
P1.13.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
P1.13.3 Testbench for Register . . . . . . . . . . . . . . . . . . . . . . . 98
P1.14 Synthesizable VHDL and Hardware . . . . . . . . . . . . . . . . . . . . . 99
P1.15 Datapath Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
P1.15.1 Correct Implementation? . . . . . . . . . . . . . . . . . . . . . 101
P1.15.2 Smallest Area . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
P1.15.3 Shortest Clock Period . . . . . . . . . . . . . . . . . . . . . . . 104
2 RTL Design with VHDL 105
2.1 Prelude to Chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
2.1.1 A Note on EDA for FPGAs and ASICs . . . . . . . . . . . . . . . . . . . 105
2.2 FPGA Background and Coding Guidelines . . . . . . . . . . . . . . . . . . . . . . 106
2.2.1 Generic FPGA Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
2.2.1.1 Generic FPGA Cell . . . . . . . . . . . . . . . . . . . . . . . . 106
2.2.2 Area Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
2.2.2.1 Interconnect for Generic FPGA . . . . . . . . . . . . . . . . . . 112
2.2.2.2 Blocks of Cells for Generic FPGA . . . . . . . . . . . . . . . . 112
2.2.2.3 Clocks for Generic FPGAs . . . . . . . . . . . . . . . . . . . . 114
2.2.2.4 Special Circuitry in FPGAs . . . . . . . . . . . . . . . . . . . . 114
2.2.3 Generic-FPGA Coding Guidelines . . . . . . . . . . . . . . . . . . . . . . 115
2.3 Design Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.3.1 Generic Design Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
2.3.2 Implementation Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
2.3.3 Design Flow: Datapath vs Control vs Storage . . . . . . . . . . . . . . . . 118
2.3.3.1 Classes of Hardware . . . . . . . . . . . . . . . . . . . . . . . . 118
2.3.3.2 Datapath-Centric Design Flow . . . . . . . . . . . . . . . . . . 119
2.3.3.3 Control-Centric Design Flow . . . . . . . . . . . . . . . . . . . 120
2.3.3.4 Storage-Centric Design Flow . . . . . . . . . . . . . . . . . . . 120
2.4 Algorithms and High-Level Models . . . . . . . . . . . . . . . . . . . . . . . . . 120
2.4.1 Flow Charts and State Machines . . . . . . . . . . . . . . . . . . . . . . . 121
2.4.2 Data-Dependency Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 121
2.4.3 High-Level Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
2.5 Finite State Machines in VHDL . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
2.5.1 Introduction to State-Machine Design . . . . . . . . . . . . . . . . . . . . 123
2.5.1.1 Mealy vs Moore State Machines . . . . . . . . . . . . . . . . . 123
2.5.1.2 Introduction to State Machines and VHDL . . . . . . . . . . . . 123
2.5.1.3 Explicit vs Implicit State Machines . . . . . . . . . . . . . . . . 124
2.5.2 Implementing a Simple Moore Machine . . . . . . . . . . . . . . . . . . . 125
2.5.2.1 Implicit Moore State Machine . . . . . . . . . . . . . . . . . . . 126
2.5.2.2 Explicit Moore with Flopped Output . . . . . . . . . . . . . . . 127
2.5.2.3 Explicit Moore with Combinational Outputs . . . . . . . . . . . 128
2.5.2.4 Explicit-Current+Next Moore with Concurrent Assignment . . . 129
2.5.2.5 Explicit-Current+Next Moore with Combinational Process . . . 130
2.5.3 Implementing a Simple Mealy Machine . . . . . . . . . . . . . . . . . . . 131
2.5.3.1 Implicit Mealy State Machine . . . . . . . . . . . . . . . . . . . 132
2.5.3.2 Explicit Mealy State Machine . . . . . . . . . . . . . . . . . . . 133
2.5.3.3 Explicit-Current+Next Mealy . . . . . . . . . . . . . . . . . . . 134
2.5.4 Reset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
2.5.5 State Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
2.5.5.1 Constants vs Enumerated Type . . . . . . . . . . . . . . . . . . 137
2.5.5.2 Encoding Schemes . . . . . . . . . . . . . . . . . . . . . . . . . 138
2.6 Dataflow Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
2.6.1 Dataflow Diagrams Overview . . . . . . . . . . . . . . . . . . . . . . . . 139
2.6.2 Dataflow Diagrams, Hardware, and Behaviour . . . . . . . . . . . . . . . 142
2.6.3 Dataflow Diagram Execution . . . . . . . . . . . . . . . . . . . . . . . . . 143
2.6.4 Performance Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
2.6.5 Area Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
2.6.6 Design Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
2.6.7 Area / Performance Tradeoffs . . . . . . . . . . . . . . . . . . . . . . . . 145
2.7 Design Example: Massey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.7.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
2.7.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
2.7.3 Initial Dataflow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
2.7.4 Dataflow Diagram Scheduling . . . . . . . . . . . . . . . . . . . . . . . . 150
2.7.5 Optimize Inputs and Outputs . . . . . . . . . . . . . . . . . . . . . . . . . 152
2.7.6 Input/Output Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
2.7.7 Register Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
2.7.8 Datapath Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
2.7.9 Datapath for DP+Ctrl Model . . . . . . . . . . . . . . . . . . . . . . . . . 158
2.7.10 Peephole Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
2.8 Design Example: Vanier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
2.8.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
2.8.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
2.8.3 Initial Dataflow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
2.8.4 Reschedule to Meet Requirements . . . . . . . . . . . . . . . . . . . . . . 164
2.8.5 Optimize Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
2.8.6 Assign Names to Registered Values . . . . . . . . . . . . . . . . . . . . . 167
2.8.7 Input/Output Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
2.8.8 Tangent: Combinational Outputs . . . . . . . . . . . . . . . . . . . . . . . 170
2.8.9 Register Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
2.8.10 Datapath Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
2.8.11 Hardware Block Diagram and State Machine . . . . . . . . . . . . . . . . 173
2.8.11.1 Control for Registers . . . . . . . . . . . . . . . . . . . . . . . 173
2.8.11.2 Control for Datapath Components . . . . . . . . . . . . . . . . . 174
2.8.11.3 Control for State . . . . . . . . . . . . . . . . . . . . . . . . . . 175
2.8.11.4 Complete State Machine Table . . . . . . . . . . . . . . . . . . 175
2.8.12 VHDL Code with Explicit State Machine . . . . . . . . . . . . . . . . . . 176
2.8.13 Peephole Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
2.8.14 Notes and Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
2.9 Pipelining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
2.9.1 Introduction to Pipelining . . . . . . . . . . . . . . . . . . . . . . . . . . 183
2.9.2 Partially Pipelined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
2.9.3 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
2.10 Design Example: Pipelined Massey . . . . . . . . . . . . . . . . . . . . . . . . . 188
2.11 Memory Arrays and RTL Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
2.11.1 Memory Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
2.11.2 Memory Arrays in VHDL . . . . . . . . . . . . . . . . . . . . . . . . . . 193
2.11.2.1 Using a Two-Dimensional Array for Memory . . . . . . . . . . 193
2.11.2.2 Memory Arrays in Hardware . . . . . . . . . . . . . . . . . . . 194
2.11.2.3 VHDL Code for Single-Port Memory Array . . . . . . . . . . . 195
2.11.2.4 Using Library Components for Memory . . . . . . . . . . . . . 196
2.11.2.5 Build Memory from Slices . . . . . . . . . . . . . . . . . . . . 197
2.11.2.6 Dual-Ported Memory . . . . . . . . . . . . . . . . . . . . . . . 199
2.11.3 Data Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
2.11.4 Memory Arrays and Dataflow Diagrams . . . . . . . . . . . . . . . . . . . 201
2.11.5 Example: Memory Array and Dataflow Diagram . . . . . . . . . . . . . . 204
2.12 Input / Output Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
2.13 Example: Moving Average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
2.13.1 Requirements and Environmental Assumptions . . . . . . . . . . . . . . . 207
2.13.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
2.13.3 Pseudocode and Dataflow Diagrams . . . . . . . . . . . . . . . . . . . . . 210
2.13.4 Control Tables and State Machine . . . . . . . . . . . . . . . . . . . . . . 216
2.13.5 VHDL Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
2.14 Design Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
P2.1 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
P2.1.1 Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . 221
P2.1.2 Own Code vs Libraries . . . . . . . . . . . . . . . . . . . . . . 221
P2.2 Design Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
P2.3 Dataflow Diagram Optimization . . . . . . . . . . . . . . . . . . . . . . . . 222
P2.3.1 Resource Usage . . . . . . . . . . . . . . . . . . . . . . . . . . 222
P2.3.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
P2.4 Dataflow Diagram Design . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
P2.4.1 Maximum Performance . . . . . . . . . . . . . . . . . . . . . . 223
P2.4.2 Minimum Area . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
P2.5 Michener: Design and Optimization . . . . . . . . . . . . . . . . . . . . . 224
P2.6 Dataflow Diagrams with Memory Arrays . . . . . . . . . . . . . . . . . . . 224
P2.6.1 Algorithm 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
P2.6.2 Algorithm 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
P2.7 2-Bit Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
P2.7.1 Generic Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
P2.7.2 FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
P2.8 Sketches of Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
3 Performance Analysis and Optimization 227
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
3.2 Defining Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
3.3 Comparing Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.3.1 General Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
3.3.2 Example: Performance of Printers . . . . . . . . . . . . . . . . . . . . . . 229
3.4 Clock Speed, CPI, Program Length, and Performance . . . . . . . . . . . . . . . . 233
3.4.1 Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
3.4.2 Example: CISC vs RISC and CPI . . . . . . . . . . . . . . . . . . . . . . 233
3.4.3 Effect of Instruction Set on Performance . . . . . . . . . . . . . . . . . . . 235
3.4.4 Effect of Time to Market on Relative Performance . . . . . . . . . . . . . 237
3.4.5 Summary of Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
3.5 Performance Analysis and Dataflow Diagrams . . . . . . . . . . . . . . . . . . . . 239
3.5.1 Dataflow Diagrams, CPI, and Clock Speed . . . . . . . . . . . . . . . . . 239
3.5.2 Examples of Dataflow Diagrams for Two Instructions . . . . . . . . . . . . 240
3.5.2.1 Scheduling of Operations for Different Clock Periods . . . . . . 241
3.5.2.2 Performance Computation for Different Clock Periods . . . . . . 241
3.5.2.3 Example: Two Instructions Taking Similar Time . . . . . . . . . 242
3.5.2.4 Example: Same Total Time, Different Order for A . . . . . . . . 243
3.5.3 Example: From Algorithm to Optimized Dataflow . . . . . . . . . . . . . 244
3.6 General Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
3.6.1 Strength Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
3.6.1.1 Arithmetic Strength Reduction . . . . . . . . . . . . . . . . . . 252
3.6.1.2 Boolean Strength Reduction . . . . . . . . . . . . . . . . . . . . 252
3.6.2 Replication and Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
3.6.2.1 Mux-Pushing . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
3.6.2.2 Common Subexpression Elimination . . . . . . . . . . . . . . . 253
3.6.2.3 Computation Replication . . . . . . . . . . . . . . . . . . . . . 253
3.6.3 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
3.7 Retiming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
3.8 Performance Analysis and Optimization Problems . . . . . . . . . . . . . . . . . . 256
P3.1 Farmer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
P3.2 Network and Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
P3.2.1 Maximum Throughput . . . . . . . . . . . . . . . . . . . . . . . 257
P3.2.2 Packet Size and Performance . . . . . . . . . . . . . . . . . . . 257
P3.3 Performance Short Answer . . . . . . . . . . . . . . . . . . . . . . . . . . 257
P3.4 Microprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
P3.4.1 Average CPI . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
P3.4.2 Why not you too? . . . . . . . . . . . . . . . . . . . . . . . . . 258
P3.4.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
P3.5 Dataflow Diagram Optimization . . . . . . . . . . . . . . . . . . . . . . . . 258
P3.6 Performance Optimization with Memory Arrays . . . . . . . . . . . . . . 259
P3.7 Multiply Instruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
P3.7.1 Highest Performance . . . . . . . . . . . . . . . . . . . . . . . 260
P3.7.2 Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . 261
4 Functional Verification 263
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
4.1.1 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
4.2 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
4.2.1 Terminology: Validation / Verification / Testing . . . . . . . . . . . . . . . 264
4.2.2 The Difficulty of Designing Correct Chips . . . . . . . . . . . . . . . . . . 265
4.2.2.1 Notes from Kenn Heinrich (UW E&CE grad) . . . . . . . . . . 265
4.2.2.2 Notes from Aart de Geus (Chairman and CEO of Synopsys) . . . 265
4.3 Test Cases and Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.3.1 Test Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
4.3.2 Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
4.3.3 Floating Point Divider Example . . . . . . . . . . . . . . . . . . . . . . . 268
4.4 Testbenches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
4.4.1 Overview of Test Benches . . . . . . . . . . . . . . . . . . . . . . . . . . 271
4.4.2 Reference Model Style Testbench . . . . . . . . . . . . . . . . . . . . . . 272
4.4.3 Relational Style Testbench . . . . . . . . . . . . . . . . . . . . . . . . . . 272
4.4.4 Coding Structure of a Testbench . . . . . . . . . . . . . . . . . . . . . . . 273
4.4.5 Datapath vs Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
4.4.6 Verification Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
4.5 Functional Verification for Datapath Circuits . . . . . . . . . . . . . . . . . . . . . 274
4.5.1 A Spec-Less Testbench . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
4.5.2 Use an Array for Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . 276
4.5.3 Build Spec into Stimulus . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
4.5.4 Have Separate Specification Entity . . . . . . . . . . . . . . . . . . . . . . 278
4.5.5 Generate Test Vectors Automatically . . . . . . . . . . . . . . . . . . . . . 280
4.5.6 Relational Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
4.6 Functional Verification of Control Circuits . . . . . . . . . . . . . . . . . . . . . . 281
4.6.1 Overview of Queues in Hardware . . . . . . . . . . . . . . . . . . . . . . 281
4.6.2 VHDL Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
4.6.2.1 Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
4.6.2.2 Other VHDL Coding . . . . . . . . . . . . . . . . . . . . . . . 283
4.6.3 Code Structure for Verification . . . . . . . . . . . . . . . . . . . . . . . . 283
4.6.4 Instrumentation Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
4.6.5 Coverage Monitors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
4.6.6 Assertions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
4.6.7 VHDL Coding Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
4.6.8 Queue Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
4.6.9 Queue Testbench . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
4.7 Example: Microwave Oven . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
4.8 Functional Verification Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
P4.1 Carry Save Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
P4.2 Traffic Light Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
P4.2.1 Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
P4.2.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . 296
P4.2.3 Assertions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
P4.3 State Machines and Verification . . . . . . . . . . . . . . . . . . . . . . . . 297
P4.3.1 Three Different State Machines . . . . . . . . . . . . . . . . . . 297
P4.3.2 State Machines in General . . . . . . . . . . . . . . . . . . . . . 298
P4.4 Test Plan Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
P4.4.1 Early Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
P4.4.2 Corner Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
P4.5 Sketches of Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
5 Timing Analysis 301
5.1 Delays and Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5.1.1 Background Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5.1.2 Clock-Related Timing Definitions . . . . . . . . . . . . . . . . . . . . . . 302
5.1.2.1 Clock Skew . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
5.1.2.2 Clock Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
5.1.2.3 Clock Jitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
5.1.3 Storage-Related Timing Definitions . . . . . . . . . . . . . . . . . . . . . 304
5.1.3.1 Flops and Latches . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.1.3.2 Timing Parameters for a Flop . . . . . . . . . . . . . . . . . . . 305
5.1.3.3 Hold Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
5.1.3.4 Clock-to-Q Time . . . . . . . . . . . . . . . . . . . . . . . . . . 305
5.1.4 Propagation Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
5.1.4.1 Load Delays . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
5.1.4.2 Interconnect Delays . . . . . . . . . . . . . . . . . . . . . . . . 306
5.1.5 Summary of Delay Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 307
5.1.6 Timing Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
5.1.6.1 Minimum Clock Period . . . . . . . . . . . . . . . . . . . . . . 308
5.1.6.2 Hold Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . 309
5.1.6.3 Example Timing Violations . . . . . . . . . . . . . . . . . . . . 309
5.2 Timing Analysis of Latches and Flip Flops . . . . . . . . . . . . . . . . . . . . . . 311
5.2.1 Simple Multiplexer Latch . . . . . . . . . . . . . . . . . . . . . . . . . . 311
5.2.1.1 Structure and Behaviour of Multiplexer Latch . . . . . . . . . . 311
5.2.1.2 Strategy for Timing Analysis of Storage Devices . . . . . . . . . 313
5.2.1.3 Clock-to-Q Time of a Multiplexer Latch . . . . . . . . . . . . . 314
5.2.1.4 Setup Timing of a Multiplexer Latch . . . . . . . . . . . . . . . 315
5.2.1.5 Hold Time of a Multiplexer Latch . . . . . . . . . . . . . . . . . 323
5.2.1.6 Example of a Bad Latch . . . . . . . . . . . . . . . . . . . . . . 326
5.2.2 Timing Analysis of Transmission-Gate Latch . . . . . . . . . . . . . . . . 326
5.2.2.1 Structure and Behaviour of a Transmission Gate . . . . . . . . . 327
5.2.2.2 Structure and Behaviour of Transmission-Gate Latch . . . . . . 327
5.2.2.3 Clock-to-Q Delay for Transmission-Gate Latch . . . . . . . . . 328
5.2.2.4 Setup and Hold Times for Transmission-Gate Latch . . . . . . . 328
5.2.3 Falling Edge Flip Flop . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
5.2.3.1 Structure and Behaviour of Flip-Flop . . . . . . . . . . . . . . . 329
5.2.3.2 Clock-to-Q of Flip-Flop . . . . . . . . . . . . . . . . . . . . . . 330
5.2.3.3 Setup of Flip-Flop . . . . . . . . . . . . . . . . . . . . . . . . . 331
5.2.3.4 Hold of Flip-Flop . . . . . . . . . . . . . . . . . . . . . . . . . 332
5.2.4 Timing Analysis of FPGA Cells . . . . . . . . . . . . . . . . . . . . . . . 332
5.2.4.1 Standard Timing Equations . . . . . . . . . . . . . . . . . . . . 333
5.2.4.2 Hierarchical Timing Equations . . . . . . . . . . . . . . . . . . 333
5.2.4.3 Actel Act 2 Logic Cell . . . . . . . . . . . . . . . . . . . . . . . 333
5.2.4.4 Timing Analysis of Actel Sequential Module . . . . . . . . . . . 335
5.2.5 Exotic Flop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
5.3 Critical Paths and False Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
5.3.1 Introduction to Critical and False Paths . . . . . . . . . . . . . . . . . . . 336
5.3.1.1 Example of Critical Path in Full Adder . . . . . . . . . . . . . . 338
5.3.1.2 Preliminaries for Critical Paths . . . . . . . . . . . . . . . . . . 340
5.3.1.3 Longest Path and Critical Path . . . . . . . . . . . . . . . . . . 340
5.3.1.4 Timing Simulation vs Static Timing Analysis . . . . . . . . . . . 343
5.3.2 Longest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
5.3.3 Detecting a False Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
5.3.3.1 Preliminaries for Detecting a False Path . . . . . . . . . . . . . 345
5.3.3.2 Almost-Correct Algorithm to Detect a False Path . . . . . . . . . 349
5.3.3.3 Examples of Detecting False Paths . . . . . . . . . . . . . . . . 349
5.3.4 Finding the Next Candidate Path . . . . . . . . . . . . . . . . . . . . . . . 354
5.3.4.1 Algorithm to Find Next Candidate Path . . . . . . . . . . . . . . 354
5.3.4.2 Examples of Finding Next Candidate Path . . . . . . . . . . . . 355
5.3.5 Correct Algorithm to Find Critical Path . . . . . . . . . . . . . . . . . . . 362
5.3.5.1 Rules for Late Side Inputs . . . . . . . . . . . . . . . . . . . . . 362
5.3.5.2 Monotone Speedup . . . . . . . . . . . . . . . . . . . . . . . . 364
5.3.5.3 Analysis of Side-Input-Causes-Glitch Situation . . . . . . . . . 365
5.3.5.4 Complete Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 366
5.3.5.5 Complete Examples . . . . . . . . . . . . . . . . . . . . . . . . 367
5.3.6 Further Extensions to Critical Path Analysis . . . . . . . . . . . . . . . . . 374
5.3.7 Increasing the Accuracy of Critical Path Analysis . . . . . . . . . . . . . . 375
5.4 Elmore Timing Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
5.4.1 RC-Networks for Timing Analysis . . . . . . . . . . . . . . . . . . . . . . 375
5.4.2 Derivation of Analog Timing Model . . . . . . . . . . . . . . . . . . . . . 380
5.4.2.1 Example Derivation: Equation for Voltage at Node 3 . . . . . . . 382
5.4.2.2 General Derivation . . . . . . . . . . . . . . . . . . . . . . . . . 383
5.4.3 Elmore Timing Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
5.4.4 Examples of Using Elmore Delay . . . . . . . . . . . . . . . . . . . . . . 387
5.4.4.1 Interconnect with Single Fanout . . . . . . . . . . . . . . . . . . 387
5.4.4.2 Interconnect with Multiple Gates in Fanout . . . . . . . . . . . . 389
5.5 Practical Usage of Timing Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 392
5.5.1 Speed Binning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
5.5.1.1 FPGAs, Interconnect, and Synthesis . . . . . . . . . . . . . . . 394
5.5.2 Worst Case Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
5.5.2.1 Fanout Delay . . . . . . . . . . . . . . . . . . . . . . . . 394
5.5.2.2 Derating Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 394
5.6 Timing Analysis Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
P5.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
P5.2 Hold Time Violations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
P5.2.1 Cause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
P5.2.2 Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
P5.2.3 Rectification . . . . . . . . . . . . . . . . . . . . . . . . 397
P5.3 Latch Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
P5.4 Critical Path and False Path . . . . . . . . . . . . . . . . . . . . . . . . . 398
P5.5 Critical Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
P5.5.1 Longest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
P5.5.2 Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
P5.5.3 Missing Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 399
P5.5.4 Critical Path or False Path? . . . . . . . . . . . . . . . . . . . . 399
P5.6 YACP: Yet Another Critical Path . . . . . . . . . . . . . . . . . . . . . . . 400
P5.7 Timing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
P5.8 Short Answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
P5.8.1 Wires in FPGAs . . . . . . . . . . . . . . . . . . . . . . . . . . 402
P5.8.2 Age and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
P5.8.3 Temperature and Delay . . . . . . . . . . . . . . . . . . . . . . 402
P5.9 Worst Case Conditions and Derating Factor . . . . . . . . . . . . . . . . . 402
P5.9.1 Worst-Case Commercial . . . . . . . . . . . . . . . . . . . . . . 402
P5.9.2 Worst-Case Industrial . . . . . . . . . . . . . . . . . . . . . . . 402
P5.9.3 Worst-Case Industrial, Non-Ambient Junction Temperature . . . 402
6 Power Analysis and Power-Aware Design 403
6.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
6.1.1 Importance of Power and Energy . . . . . . . . . . . . . . . . . . . . . . . 403
6.1.2 Industrial Names and Products . . . . . . . . . . . . . . . . . . . . . . . . 403
6.1.3 Power vs Energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
6.1.4 Batteries, Power and Energy . . . . . . . . . . . . . . . . . . . . . . . . . 405
6.1.4.1 Do Batteries Store Energy or Power? . . . . . . . . . . . . . . . 405
6.1.4.2 Battery Life and Efficiency . . . . . . . . . . . . . . . . 405
6.1.4.3 Battery Life and Power . . . . . . . . . . . . . . . . . . . . . . 406
6.2 Power Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
6.2.1 Switching Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
6.2.2 Short-Circuited Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
6.2.3 Leakage Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
6.2.4 Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
6.2.5 Note on Power Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
6.3 Overview of Power Reduction Techniques . . . . . . . . . . . . . . . . . . . . . . 414
6.4 Voltage Reduction for Power Reduction . . . . . . . . . . . . . . . . . . . . . . . 415
6.5 Data Encoding for Power Reduction . . . . . . . . . . . . . . . . . . . . . . . . . 416
6.5.1 How Data Encoding Can Reduce Power . . . . . . . . . . . . . . . . . . . 416
6.5.2 Example Problem: Sixteen Pulser . . . . . . . . . . . . . . . . . . . . . . 419
6.5.2.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . 419
6.5.2.2 Additional Information . . . . . . . . . . . . . . . . . . . . . . 420
6.5.2.3 Answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
6.6 Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
6.6.1 Introduction to Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . 424
6.6.2 Implementing Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . 425
6.6.3 Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
6.6.4 Effectiveness of Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . 427
6.6.5 Example: Reduced Activity Factor with Clock Gating . . . . . . . . . . . 429
6.6.6 Clock Gating with Valid-Bit Protocol . . . . . . . . . . . . . . . . . . . . 431
6.6.6.1 Valid-Bit Protocol . . . . . . . . . . . . . . . . . . . . . . . . . 431
6.6.6.2 How Many Clock Cycles for Module? . . . . . . . . . . . . . . 433
6.6.6.3 Adding Clock-Gating Circuitry . . . . . . . . . . . . . . . . . . 434
6.6.7 Example: Pipelined Circuit with Clock-Gating . . . . . . . . . . . . . . . 437
6.7 Power Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
P6.1 Short Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
P6.1.1 Power and Temperature . . . . . . . . . . . . . . . . . . . . . . 439
P6.1.2 Leakage Power . . . . . . . . . . . . . . . . . . . . . . . . . . 439
P6.1.3 Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
P6.1.4 Gray Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
P6.2 VLSI Gurus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
P6.2.1 Effect on Power . . . . . . . . . . . . . . . . . . . . . . . . . . 439
P6.2.2 Critique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
P6.3 Advertising Ratios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
P6.4 Vary Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
P6.5 Clock Speed Increase Without Power Increase . . . . . . . . . . . . . . . 441
P6.5.1 Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
P6.5.2 Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
P6.6 Power Reduction Strategies . . . . . . . . . . . . . . . . . . . . . . . . . 441
P6.6.1 Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
P6.6.2 Transistor Sizing . . . . . . . . . . . . . . . . . . . . . . . . . . 441
P6.6.3 Adding Registers to Inputs . . . . . . . . . . . . . . . . . . . . 441
P6.6.4 Gray Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
P6.7 Power Consumption on New Chip . . . . . . . . . . . . . . . . . . . . . . 442
P6.7.1 Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
P6.7.2 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
P6.7.3 Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
7 Fault Testing and Testability 443
7.1 Faults and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
7.1.1 Overview of Faults and Testing . . . . . . . . . . . . . . . . . . . . . . . 443
7.1.1.1 Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
7.1.1.2 Causes of Faults . . . . . . . . . . . . . . . . . . . . . . . . . . 443
7.1.1.3 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
7.1.1.4 Burn In . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
7.1.1.5 Bin Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
7.1.1.6 Testing Techniques . . . . . . . . . . . . . . . . . . . . . . . . 445
7.1.1.7 Design for Testability (DFT) . . . . . . . . . . . . . . . . . . . 445
7.1.2 Example Problem: Economics of Testing . . . . . . . . . . . . . . . . . . 446
7.1.3 Physical Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
7.1.3.1 Types of Physical Faults . . . . . . . . . . . . . . . . . . . . . . 447
7.1.3.2 Locations of Faults . . . . . . . . . . . . . . . . . . . . . . . . 447
7.1.3.3 Layout Affects Locations . . . . . . . . . . . . . . . . . . . . . 448
7.1.3.4 Naming Fault Locations . . . . . . . . . . . . . . . . . . . . . . 448
7.1.4 Detecting a Fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
7.1.4.1 Which Test Vectors will Detect a Fault? . . . . . . . . . . . . . 449
7.1.5 Mathematical Models of Faults . . . . . . . . . . . . . . . . . . . . . . . 450
7.1.5.1 Single Stuck-At Fault Model . . . . . . . . . . . . . . . . . . . 450
7.1.6 Generate Test Vector to Find a Mathematical Fault . . . . . . . . . . . . . 451
7.1.6.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
7.1.6.2 Example of Finding a Test Vector . . . . . . . . . . . . . . . . . 452
7.1.7 Undetectable Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
7.1.7.1 Redundant Circuitry . . . . . . . . . . . . . . . . . . . . . . . . 452
7.1.7.2 Curious Circuitry and Fault Detection . . . . . . . . . . . . . . 454
7.2 Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
7.2.1 A Small Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
7.2.2 Choosing Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
7.2.2.1 Fault Domination . . . . . . . . . . . . . . . . . . . . . . . . . 456
7.2.2.2 Fault Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . 457
7.2.2.3 Gate Collapsing . . . . . . . . . . . . . . . . . . . . . . . . . . 457
7.2.2.4 Node Collapsing . . . . . . . . . . . . . . . . . . . . . . . . . . 458
7.2.2.5 Fault Collapsing Summary . . . . . . . . . . . . . . . . . . . . 458
7.2.3 Fault Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
7.2.4 Test Vector Generation and Fault Detection . . . . . . . . . . . . . . . . . 459
7.2.5 Generate Test Vectors for 100% Coverage . . . . . . . . . . . . . . . . . . 459
7.2.5.1 Collapse the Faults . . . . . . . . . . . . . . . . . . . . . . . . 460
7.2.5.2 Check for Fault Domination . . . . . . . . . . . . . . . . . . . . 462
7.2.5.3 Required Test Vectors . . . . . . . . . . . . . . . . . . . . . . . 463
7.2.5.4 Faults Not Covered by Required Test Vectors . . . . . . . . . . . 463
7.2.5.5 Order to Run Test Vectors . . . . . . . . . . . . . . . . . . . . . 464
7.2.5.6 Summary of Technique to Find and Order Test Vectors . . . . . 465
7.2.5.7 Complete Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 466
7.2.6 One Fault Hiding Another . . . . . . . . . . . . . . . . . . . . . . . . . . 467
7.3 Scan Testing in General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
7.3.1 Structure and Behaviour of Scan Testing . . . . . . . . . . . . . . . . . . . 468
7.3.2 Scan Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
7.3.2.1 Circuitry in Normal and Scan Mode . . . . . . . . . . . . . . . 468
7.3.2.2 Scan in Operation . . . . . . . . . . . . . . . . . . . . . . . . . 469
7.3.2.3 Scan in Operation with Example Circuit . . . . . . . . . . . . . 470
7.3.3 Summary of Scan Testing . . . . . . . . . . . . . . . . . . . . . . . . . . 475
7.3.4 Time to Test a Chip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
7.3.4.1 Example: Time to Test a Chip . . . . . . . . . . . . . . . . . . . 476
7.4 Boundary Scan and JTAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
7.4.1 Boundary Scan History . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
7.4.2 JTAG Scan Pins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
7.4.3 Scan Registers and Cells . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
7.4.4 Scan Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
7.4.5 TAP Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
7.4.6 Other descriptions of JTAG/IEEE 1149.1 . . . . . . . . . . . . . . . . . . 480
7.5 Built In Self Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
7.5.1 Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
7.5.1.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
7.5.1.2 Linear Feedback Shift Register (LFSR) . . . . . . . . . . . . . . 483
7.5.1.3 Maximal-Length LFSR . . . . . . . . . . . . . . . . . . . . . . 484
7.5.2 Test Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
7.5.3 Signature Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
7.5.4 Result Checker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
7.5.5 Arithmetic over Binary Fields . . . . . . . . . . . . . . . . . . . . . . . . 487
7.5.6 Shift Registers and Characteristic Polynomials . . . . . . . . . . . . . . . 487
7.5.6.1 Circuit Multiplication . . . . . . . . . . . . . . . . . . . . . . . 489
7.5.7 Bit Streams and Characteristic Polynomials . . . . . . . . . . . . . . . . . 489
7.5.8 Division . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489
7.5.9 Signature Analysis: Math and Circuits . . . . . . . . . . . . . . . . . . . . 490
7.5.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
7.6 Scan vs Self Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
7.7 Problems on Faults, Testing, and Testability . . . . . . . . . . . . . . . . . . . . . 497
P7.1 Based on Smith q14.9: Testing Cost . . . . . . . . . . . . . . . . . . . . . 497
P7.2 Testing Cost and Total Cost . . . . . . . . . . . . . . . . . . . . . . . . . 497
P7.3 Minimum Number of Faults . . . . . . . . . . . . . . . . . . . . . . . . . 498
P7.4 Smith q14.10: Fault Collapsing . . . . . . . . . . . . . . . . . . . . . . . 498
P7.5 Mathematical Models and Reality . . . . . . . . . . . . . . . . . . . . . . 498
P7.6 Undetectable Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
P7.7 Test Vector Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
P7.7.1 Choice of Test Vectors . . . . . . . . . . . . . . . . . . . . . . . 499
P7.7.2 Number of Test Vectors . . . . . . . . . . . . . . . . . . . . . . 499
P7.8 Time to do a Scan Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
P7.9 BIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
P7.9.1 Characteristic Polynomials . . . . . . . . . . . . . . . . . . . . 499
P7.9.2 Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . 500
P7.9.3 Signature Analyzer . . . . . . . . . . . . . . . . . . . . . . . . 500
P7.9.4 Probability of Catching a Fault . . . . . . . . . . . . . . . 500
P7.9.5 Probability of Catching a Fault . . . . . . . . . . . . . . . 500
P7.9.6 Detecting a Specific Fault . . . . . . . . . . . . . . . . . 500
P7.9.7 Time to Run Test . . . . . . . . . . . . . . . . . . . . . . . . . 500
P7.10 Power and BIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
P7.11 Timing Hazards and Testability . . . . . . . . . . . . . . . . . . . . . . . 501
P7.12 Testing Short Answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
P7.12.1 Are there any physical faults that are detectable by scan testing
but not by built-in self testing? . . . . . . . . . . . . . . . . . . 501
P7.12.2 Are there any physical faults that are detectable by built-in self
testing but not by scan testing? . . . . . . . . . . . . . . . . . . 501
P7.13 Fault Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
P7.13.1 Design test generator . . . . . . . . . . . . . . . . . . . . . . . 502
P7.13.2 Design signature analyzer . . . . . . . . . . . . . . . . . . . . . 502
P7.13.3 Determine if a fault is detectable . . . . . . . . . . . . . . . . . 502
P7.13.4 Testing time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
8 Review 503
8.1 Overview of the Term . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
8.2 VHDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
8.2.1 VHDL Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
8.2.2 VHDL Example Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 504
8.3 RTL Design Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
8.3.1 Design Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
8.3.2 Design Example Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 505
8.4 Functional Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
8.4.1 Verification Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
8.4.2 Verification Example Problems . . . . . . . . . . . . . . . . . . . . . . . . 506
8.5 Performance Analysis and Optimization . . . . . . . . . . . . . . . . . . . . . . . 507
8.5.1 Performance Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
8.5.2 Performance Example Problems . . . . . . . . . . . . . . . . . . . . . . . 507
8.6 Timing Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
8.6.1 Timing Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
8.6.2 Timing Example Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 508
8.7 Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
8.7.1 Power Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
8.7.2 Power Example Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 509
8.8 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
8.8.1 Testing Topics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
8.8.2 Testing Example Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 510
8.9 Formulas to be Given on Final Exam . . . . . . . . . . . . . . . . . . . . . . . . . 511
II Solutions to Assignment Problems 1
1 VHDL Problems 3
P1.1 IEEE 1164 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
P1.2 VHDL Syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
P1.3 Flops, Latches, and Combinational Circuitry . . . . . . . . . . . . . . . . . . . . . 7
P1.4 Counting Clock Cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
P1.5 Arithmetic Overflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
P1.6 Delta-Cycle Simulation: Pong . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
P1.7 Delta-Cycle Simulation: Baku . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
P1.8 Clock-Cycle Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
P1.9 VHDL VHDL Behavioural Comparison: Teradactyl . . . . . . . . . . . . . . . 20
P1.10 VHDL VHDL Behavioural Comparison: Ichtyostega . . . . . . . . . . . . . . . 21
P1.11 Waveform VHDL Behavioural Comparison . . . . . . . . . . . . . . . . . . . . 23
P1.12 Hardware VHDL Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . 25
P1.13 8-Bit Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
P1.13.1 Asynchronous Reset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
P1.13.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
P1.13.3 Testbench for Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
P1.14 Synthesizable VHDL and Hardware . . . . . . . . . . . . . . . . . . . . . . . . 30
P1.15 Datapath Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
P1.15.1 Correct Implementation? . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
P1.15.2 Smallest Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
P1.15.3 Shortest Clock Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2 Design Problems 39
P2.1 Synthesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
P2.1.1 Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
P2.1.2 Own Code vs Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
P2.2 Design Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
P2.3 Dataflow Diagram Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
P2.3.1 Resource Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
P2.3.2 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
P2.4 Dataflow Diagram Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
P2.4.1 Maximum Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
P2.4.2 Minimum area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
P2.5 Michener: Design and Optimization . . . . . . . . . . . . . . . . . . . . . . . . . 47
P2.6 Dataflow Diagrams with Memory Arrays . . . . . . . . . . . . . . . . . . . . . 48
P2.6.1 Algorithm 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
P2.6.2 Algorithm 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
P2.7 2-bit adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
P2.7.1 Generic Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
P2.7.2 FPGA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
P2.8 Sketches of Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3 Functional Verification Problems 55
P3.1 Carry Save Adder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
P3.2 Traffic Light Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
P3.2.1 Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
P3.2.2 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
P3.2.3 Assertions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
P3.3 State Machines and Verification . . . . . . . . . . . . . . . . . . . . . . . . . . 57
P3.3.1 Three Different State Machines . . . . . . . . . . . . . . . . . . . . . . . 57
P3.3.1.1 Number of Test Scenarios . . . . . . . . . . . . . . . . . . . . . 57
P3.3.1.2 Length of Test Scenario . . . . . . . . . . . . . . . . . . . . . . 58
P3.3.1.3 Number of Flip Flops . . . . . . . . . . . . . . . . . . . . . . . 58
P3.3.2 State Machines in General . . . . . . . . . . . . . . . . . . . . . . . . . . 59
P3.4 Test Plan Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
P3.4.1 Early Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
P3.4.2 Corner Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
P3.5 Sketches of Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4 Performance Analysis and Optimization Problems 63
P4.1 Farmer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
P4.2 Network and Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
P4.2.1 Maximum Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
P4.2.2 Packet Size and Performance . . . . . . . . . . . . . . . . . . . . . . . . . 66
P4.3 Performance Short Answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
P4.4 Microprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
P4.4.1 Average CPI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
P4.4.2 Why not you too? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
P4.4.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
P4.5 Dataflow Diagram Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . 70
P4.6 Performance Optimization with Memory Arrays . . . . . . . . . . . . . . . . . . . 70
P4.7 Multiply Instruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
P4.7.1 Highest Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
P4.7.2 Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5 Timing Analysis Problems 79
P5.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
P5.2 Hold Time Violations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
P5.2.1 Cause . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
P5.2.2 Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
P5.2.3 Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
P5.3 Latch Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
P5.4 Critical Path and False Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
P5.5 Critical Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
P5.5.1 Longest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
P5.5.2 Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
P5.5.3 Missing Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
P5.5.4 Critical Path or False Path? . . . . . . . . . . . . . . . . . . . . . . . . . . 85
P5.6 YACP: Yet Another Critical Path . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
P5.7 Timing Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
P5.8 Short Answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
P5.8.1 Wires in FPGAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
P5.8.2 Age and Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
P5.8.3 Temperature and Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
P5.9 Worst Case Conditions and Derating Factor . . . . . . . . . . . . . . . . . . . . . 90
P5.9.1 Worst-Case Commercial . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
P5.9.2 Worst-Case Industrial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
P5.9.3 Worst-Case Industrial, Non-Ambient Junction Temperature . . . . . . . . . 90
6 Power Problems 91
P6.1 Short Answers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
P6.1.1 Power and Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
P6.1.2 Leakage Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
P6.1.3 Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
P6.1.4 Gray Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
P6.2 VLSI Gurus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
P6.2.1 Effect on Power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
P6.2.2 Critique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
P6.3 Advertising Ratios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
P6.4 Vary Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
P6.5 Clock Speed Increase Without Power Increase . . . . . . . . . . . . . . . . . . . . 95
P6.5.1 Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
P6.5.2 Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
P6.6 Power Reduction Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
P6.6.1 Supply Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
P6.6.2 Transistor Sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
P6.6.3 Adding Registers to Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . 97
P6.6.4 Gray Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
P6.7 Power Consumption on New Chip . . . . . . . . . . . . . . . . . . . . . . . . . . 98
P6.7.1 Hypothesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
P6.7.2 Experiment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
P6.7.3 Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
7 Problems on Faults, Testing, and Testability 101
P7.1 Based on Smith q14.9: Testing Cost . . . . . . . . . . . . . . . . . . . . . . . . . 101
P7.2 Testing Cost and Total Cost . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
P7.3 Minimum Number of Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
P7.4 Smith q14.10: Fault Collapsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
P7.5 Mathematical Models and Reality . . . . . . . . . . . . . . . . . . . . . . . . . . 105
P7.6 Undetectable Faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
P7.7 Test Vector Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
P7.7.1 Choice of Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
P7.7.2 Number of Test Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
P7.8 Time to do a Scan Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
P7.9 BIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
P7.9.1 Characteristic Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . 107
P7.9.2 Test Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
P7.9.3 Signature Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
P7.9.4 Probability of Catching a Fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
P7.9.5 Probability of Catching a Fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
P7.9.6 Detecting a Specific Fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
P7.9.7 Time to Run Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
P7.10 Power and BIST . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
P7.11 Timing Hazards and Testability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
P7.12 Testing Short Answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
P7.12.1 Are there any physical faults that are detectable by scan testing but not by
built-in self testing? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
P7.12.2 Are there any physical faults that are detectable by built-in self testing but
not by scan testing? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
P7.13 Fault Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
P7.13.1 Design test generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
P7.13.2 Design signature analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
P7.13.3 Determine if a fault is detectable . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
P7.13.4 Testing time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Part I
Course Notes
Chapter 1
VHDL: The Language
1.1 Introduction to VHDL
1.1.1 Levels of Abstraction
There are many different levels of abstraction for working with hardware:
Quantum: Schrödinger's equations describe movement of electrons and holes through material.
Energy band: 2-dimensional diagrams that capture essential features of Schrödinger's equations. Energy-band diagrams are commonly used in nano-scale engineering.
Transistor: Signal values and time are continuous (analog). Each transistor is modeled by a
resistor-capacitor network. Overall behaviour is defined by differential equations in terms of
the resistors and capacitors. Spice is a typical simulation tool.
Switch: Time is continuous, but voltage may be either continuous or discrete. Linear equations
are used, rather than differential equations. A rising edge may be modeled as a linear
rise over some range of time, or the time between a definite low value and a definite high
value may be modeled as having an undefined or rising value.
Gate: Transistors are grouped together into gates (e.g. AND, OR, NOT). Voltages are discrete
values such as pure Boolean (0 or 1) or IEEE Standard Logic 1164, which has representations
for different types of unknown or undefined values. Time may be continuous or may be
discrete. If discrete, a common unit is the delay through a single inverter (e.g. a NOT gate
has a delay of 1 and an AND gate has a delay of 2).
Register transfer level: The essential characteristic of the register transfer level is that the
behaviour of hardware is modeled as assignments to registers and combinational signals.
Equations are written where a register signal is a function of other signals (e.g. c <= a
and b;). The assignments may be either combinational or registered. Combinational assignments
happen instantaneously and registered assignments take exactly one clock cycle.
There are variations on the pure register-transfer level. For example, time may be measured
in clock phases rather than clock cycles, so as to allow assignments on either the rising or
falling edge of a clock. Another variation is to have multiple clocks that run at different
speeds: a clock on a bus might run at half the speed of the primary clock for the chip.
Transaction level: The basic unit of computation is a transaction, such as executing an instruction
on a microprocessor, transferring data across a bus, or accessing memory. Time
is usually measured as an estimate (e.g. a memory write requires 15 clock cycles, or a
bus transfer requires 250 ns). The building blocks of the transaction level are processors,
controllers, memory arrays, busses, and intellectual property (IP) blocks (e.g. UARTs). The
behaviour of the building blocks is described with software-like models, often written in
behavioural VHDL, SystemC, or SystemVerilog. The transaction level has many similarities
to a software model of a distributed system.
Electronic-system level: Looks at an entire electronic system, with both hardware and soft-
ware.
In this course, we will focus on the register-transfer level. In the second half of the course, we will
look at how analog phenomena, such as timing and power, affect the register-transfer level. In
these chapters we will occasionally dip down into the transistor, switch, and gate levels.
1.1.2 VHDL Origins and History
VHDL = VHSIC Hardware Description Language
VHSIC = Very High Speed Integrated Circuit
The VHSIC Hardware Description Language (VHDL) is a formal notation intended
for use in all phases of the creation of electronic systems. Because it is both machine
readable and human readable, it supports the development, verification, synthesis and
testing of hardware designs, the communication of hardware design data, and the
maintenance, modification, and procurement of hardware.
Language Reference Manual (IEEE Design Automation Standards Committee,
1993a)
development
verification
synthesis
testing
hardware designs
communication
maintenance
modification
procurement
VHDL is a lot more than synthesis of digital hardware
VHDL History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Developed by the United States Department of Defense as part of the very high speed integrated
circuit (VHSIC) program in the early 1980s.
The Department of Defense intended VHDL to be used for the documentation, simulation and
verification of electronic systems.
Goals:
improve design process over schematic entry
standardize design descriptions amongst multiple vendors
portable and extensible
Inspired by the ADA programming language
large: 97 keywords, 94 syntactic rules
verbose (designed by committee)
static type checking, overloading
complicated syntax: parentheses are used for both expression grouping and array indexing
Example:
a <= b * (3 + c); -- integer
a <= (3 + c);     -- 1-element array of integers
Standardized by IEEE in 1987 (IEEE 1076-1987), revised in 1993, 2000.
In 1993 the IEEE standard VHDL package for model interoperability, STD_LOGIC_1164
(IEEE Standard 1164-1993), was developed.
std_logic_1164 defines 9 different values for signals
In 1997 the IEEE standard packages for arithmetic over std_logic and bit signals were
defined (IEEE Standard 1076.3-1997).
numeric_std defines arithmetic over std_logic vectors and integers.
Note: This is the package that you should use for arithmetic. Don't
use std_logic_arith: it has less uniform support for mixed integer/signal
arithmetic and has a greater tendency for differences between
tools.
numeric_bit defines arithmetic over bit vectors and integers. We won't use bit
signals in this course, so you don't need to worry about this package.
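As a minimal sketch of the recommended style (the entity name incr and its ports are made up for illustration), arithmetic on std_logic_vector signals goes through the unsigned type from numeric_std:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;       -- use this package, not std_logic_arith

entity incr is
  port (
    a : in  std_logic_vector(7 downto 0);
    z : out std_logic_vector(7 downto 0)
  );
end incr;

architecture main of incr is
begin
  -- convert to unsigned, add the integer 1, convert back
  z <= std_logic_vector(unsigned(a) + 1);
end main;
```

The explicit conversions are the price of numeric_std's uniform treatment of mixed integer/signal arithmetic.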
1.1.3 Semantics
The original goal of VHDL was to simulate circuits. The semantics of the language define circuit
behaviour.
(figure: simulation waveforms for a, b, and c produced by c <= a AND b;)
But now, VHDL is used in simulation and synthesis. Synthesis is concerned with the structure of
the circuit.
Synthesis: converts one type of description (behavioural) into another, lower level, description
(usually a netlist).
(figure: c <= a AND b; synthesized into an AND gate with inputs a, b and output c)
Synthesis is a computer-aided design (CAD) technique that transforms a designer's concise, high-level
description of a circuit into a structural description of a circuit.
CAD Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
CAD Tools allow designers to automate lower-level design processes in implementing the desired
functionality of a system.
NOTE: EDA = Electronic Design Automation. In digital hardware design EDA = CAD.
Synthesis vs Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
For synthesis, we want the code we write to define the structure of the hardware that is generated.
(figure: c <= a AND b; synthesized into an AND gate with inputs a, b and output c)
The VHDL semantics define the behaviour of the hardware that is generated, not the structure
of the hardware. The scenario below complies with the semantics of VHDL, because the two
synthesized circuits produce the same behaviour. If the two synthesized circuits had different
behaviour, then the scenario would not comply with the VHDL Standard.
(figure: the single statement c <= a AND b; is synthesized into two circuits with different structure; simulation of both circuits produces the same behaviour)
1.1.4 Synthesis of a Simulation-Based Language
Not all of VHDL is synthesizable
c <= a AND b; (synthesizable)
c <= a AND b AFTER 2 ns; (NOT synthesizable)
how do you build a circuit with exactly 2 ns of delay through an AND gate?
more examples of non-synthesizable code are in section 1.11
Different synthesis tools support different subsets of VHDL
Some tools generate erroneous hardware for some code
behaviour of hardware differs from VHDL semantics
Some tools generate unpredictable hardware (Hardware that has the correct behaviour, but un-
desirable or weird structure).
There is an IEEE standard (1076.6) for a synthesizable subset of VHDL, but tool vendors don't
yet conform to it. (Most vendors still don't have full support for the 1993 extensions to VHDL!)
For more info, see http://www.vhdl.org/siwg/.
1.1.5 Solution to Synthesis Sanity
Pick a high-quality synthesis tool and study its documentation thoroughly
Learn the idioms of the tool
Different VHDL code with same behaviour can result in very different circuits
Be careful if you have to port VHDL code from one tool to another
KISS: Keep It Simple Stupid
VHDL examples will illustrate reliable coding techniques for the synthesis tools from Synop-
sys, Mentor Graphics, Altera, Xilinx, and most other companies as well.
Follow the coding guidelines and examples from lecture
As you write VHDL, think about the hardware you expect to get.
Note: If you can't predict the hardware, then the hardware probably
won't be very good (small, fast, correct, etc.)
1.1.6 Standard Logic 1164
At the core of VHDL is a package named STANDARD that defines a type named bit with values
of '0' and '1'. For simulation, it is helpful to have additional values, such as "undefined" and
"high impedance". Many companies created their own (incompatible) definitions of signal types
for simulation. To regain compatibility amongst packages from different companies, the IEEE
defined std_logic_1164 to be the standard type for signal values in VHDL simulation.
'U' uninitialized
'X' strong unknown
'0' strong 0
'1' strong 1
'Z' high impedance
'W' weak unknown
'L' weak 0
'H' weak 1
'-' don't care
The most common values are: 'U', 'X', '0', '1'.
If you see X in a simulation, it usually means that there is a mistake in your code.
Every VHDL file that you write should begin with:
library ieee;
use ieee.std_logic_1164.all;
Note: std_logic vs boolean. The std_logic values '1' and '0' are not
the same as the boolean values true and false. For example, you must
write if a = '1' then .... The code if a then ... will not type-check
if a is of type std_logic.
From a VLSI perspective, a weak value will come from a smaller gate. One aspect of VHDL that
we don't touch on in ece327 is resolution, which describes how to determine the value of a signal
if the signal is driven by more than one process. (In ece327, we restrict ourselves to having
each signal be driven by (be the target of) exactly one process.) The std_logic_1164 library provides
a resolution function to deal with situations where different processes drive the same signal with
different values. In this situation, a strong value (e.g. '1') will overpower a weak value (e.g. 'L').
If two processes drive the signal with different strong values (e.g. '1' and '0') the signal resolves
to a strong unknown ('X'). If a signal is driven with two different weak values (e.g. 'H' and 'L'),
the signal resolves to a weak unknown ('W').
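For illustration only (in ece327 each signal must have exactly one driver), the resolution rules above can be seen by giving a hypothetical signal r two concurrent drivers:

```vhdl
-- two concurrent assignments create two drivers for r;
-- the std_logic_1164 resolution function combines them
r <= '1';   -- strong 1
r <= 'L';   -- weak 0
-- r resolves to '1': a strong value overpowers a weak value
-- drivers '1' and '0' would resolve to 'X' (strong unknown)
-- drivers 'H' and 'L' would resolve to 'W' (weak unknown)
```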
1.2 Comparison of VHDL to Other Hardware Description Languages
1.2.1 VHDL Disadvantages
Some VHDL programs cannot be synthesized
Different tools support different subsets of VHDL.
Different tools generate different circuits for same code
VHDL is verbose
Many characters to say something simple
VHDL is complicated and confusing
Many different ways of saying the same thing
Constructs that have similar purpose have very different syntax (case vs. select)
Constructs that have similar syntax have very different semantics (variables vs signals)
Hardware that is synthesized is not always obvious (when is a signal a flip-flop vs latch vs
combinational)
The infamous latch-inference problem (See section 1.5.2 for more information)
1.2.2 VHDL Advantages
VHDL supports unsynthesizable constructs that are useful in writing high-level models, test-
benches and other non-hardware or non-synthesizable artifacts that we need in hardware design.
VHDL can be used throughout a large portion of the design process in different capacities, from
specication to implementation to verication.
VHDL has static typechecking: many errors can be caught before synthesis and/or simulation.
(In this respect, it is more similar to Java than to C.)
VHDL has a rich collection of datatypes
VHDL is a full-featured language with a good module system (libraries and packages).
VHDL has a well-defined standard.
1.2.3 VHDL and Other Languages
1.2.3.1 VHDL vs Verilog
Verilog is a simpler language: smaller language, simple circuits are easier to write
VHDL has more features than Verilog
richer set of data types and strong type checking
VHDL offers more exibility and expressivity for constructing large systems.
The VHDL Standard is more standard than the Verilog Standard
VHDL and Verilog have simulation-based semantics
Simulation vendors generally conform to VHDL standard
Some Verilog constructs give different behaviours in simulation and synthesis
VHDL is used more than Verilog in Europe and Japan
Verilog is used more than VHDL in North America
VHDL is used more in FPGAs than in ASICs
South-East Asia, India, South America: ?????
1.2.3.2 VHDL vs System Verilog
System Verilog is a superset of Verilog. It extends Verilog to make it a full object-oriented
hardware modelling language
Syntax is based on Verilog and C++.
As of 2007, System Verilog is used almost exclusively for test benches and simulation. Very
few people are trying to use it to do hardware design.
System Verilog grew out of Superlog, a proposed language that was based on Verilog and C.
Basic core came from Verilog. C-like extensions included to make language more expressive and
powerful. Developed originally by the company Co-Design Automation and then standardized
by Accellera, an organization aimed at standardizing EDA languages. Co-Design was purchased
by Synopsys and now Synopsys is the leading proponent of System Verilog.
1.2.3.3 VHDL vs SystemC
SystemC looks like C: familiar syntax
C is often used in algorithmic descriptions of circuits, so why not try to use it for synthesizable
code as well?
If you think VHDL is hard to synthesize, try C....
SystemC simulation is slower than advertised
1.2.3.4 Summary of VHDL Evaluation
VHDL is far from perfect and has lots of annoying characteristics
VHDL is a better language for education than Verilog because the static typechecking enforces
good software engineering practices
The richness of VHDL will be useful in creating concise high-level models and powerful
testbenches
1.3 Overview of Syntax
This section is just a brief overview of the syntax of VHDL, focusing on the constructs that are
most commonly used. For more information, read a book on VHDL and use online resources.
(Look for VHDL under the Documentation tab in the E&CE 327 web pages.)
1.3.1 Syntactic Categories
There are five major categories of syntactic constructs.
(There are many, many minor categories and subcategories of constructs.)
Library units (section 1.3.2)
Top-level constructs (packages, entities, architectures)
Concurrent statements (section 1.3.4)
Statements executed at the same time (in parallel)
Sequential statements (section 1.3.7)
Statements executed in series (one after the other)
Expressions
Arithmetic (section 1.10), Boolean, Vectors, etc.
Declarations
Components, signals, variables, types, functions, ...
1.3.2 Library Units
Library units are the top-level syntactic constructs in VHDL. They are used to define and include
libraries, declare and implement interfaces, define packages of declarations and otherwise bind
together VHDL code.
Package body
define the contents of a library
Packages
determine which parts of the library are externally visible
Use clause
use a library in an entity/architecture or another package
technically, use clauses are part of entities and packages, but they precede the entity/package
keyword, so we list them as top-level constructs
Entity (section 1.3.3)
define interface to circuit
Architecture (section 1.3.3)
define internal signals and gates of circuit
1.3.3 Entities and Architecture
Each hardware module is described with an Entity/Architecture pair
(figure: two hardware modules, each drawn as an entity box wrapping an architecture)
Figure 1.1: Entity and Architecture
Entity: interface
names, modes (in / out), types of externally visible signals of circuit
Architecture: internals
structure and behaviour of module
library ieee;
use ieee.std_logic_1164.all;
entity and_or is
port (
a, b, c : in std_logic ;
z : out std_logic
);
end and_or;
Figure 1.2: Example of an entity
The syntax of VHDL is defined using a variation on Backus-Naur forms (BNF).
[ use_clause ]
entity ENTITYID is
[ port (
SIGNALID : (in | out) TYPEID [ := expr ] ;
);
]
[ declaration ]
[ begin
concurrent_statement ]
end [ entity ] ENTITYID ;
Figure 1.3: Simplied grammar of entity
architecture main of and_or is
signal x : std_logic;
begin
x <= a AND b;
z <= x OR (a AND c);
end main;
Figure 1.4: Example of architecture
[ use_clause ]
architecture ARCHID of ENTITYID is
[ declaration ]
begin
[ concurrent_statement ]
end [ architecture ] ARCHID ;
Figure 1.5: Simplied grammar of architecture
1.3.4 Concurrent Statements
Architectures contain concurrent statements
Concurrent statements execute in parallel (Figure 1.6)
Concurrent statements make VHDL fundamentally different from most software languages.
Hardware (gates) naturally execute in parallel; VHDL mimics the behaviour of real hardware.
At each infinitesimally small moment of time, each gate:
1. samples its inputs
2. computes the value of its output
3. drives the output
architecture main of bowser is
begin
x1 <= a AND b;
x2 <= NOT x1;
z <= NOT x2;
end main;
architecture main of bowser is
begin
z <= NOT x2;
x2 <= NOT x1;
x1 <= a AND b;
end main;
(figure: the synthesized circuit with inputs a, b, internal signals x1, x2, and output z)
Figure 1.6: The order of concurrent statements doesn't matter
conditional assignment . . . <= . . . when . . . else . . .;
normal assignment (. . . <= . . .)
if-then-else style (uses when)
c <= a+b when sel='1' else a+c when sel='0' else "0000";
selected assignment with . . . select
. . . <= . . . when . . . | . . . ,
. . . when . . . | . . . ,
. . .
. . . when . . . | . . . ;
case/switch style assignment
with color select d <= "00" when red , "01" when . . .;
component instantiation . . . : . . . port map ( . . . => . . . , . . . );
use an existing circuit
section 1.3.5
add1 : adder port map( a => f, b => g, s => h, co => i);
for-generate . . . : for . . . in . . . generate
. . .
end generate;
replicate some hardware
bgen: for i in 1 to 7 generate b(i)<=a(7-i); end generate;
if-generate . . . : if . . . generate
. . .
end generate;
conditionally create some hardware
okgen : if optgoal /= fast generate
result <= ((a and b) or (d and not e)) or g;
end generate;
fastgen : if optgoal = fast generate
result <= '1';
end generate;
process process . . . begin
. . .
end process;
the body of a process is executed sequentially
Sections 1.3.6, 1.6
Figure 1.7: The most commonly used concurrent statements
1.3.5 Component Declaration and Instantiations
There are two different syntaxes for component declaration and instantiation. The VHDL-93
syntax is much more concise than the VHDL-87 syntax.
Not all tools support the VHDL-93 syntax. For E&CE 327, some of the tools that we use do not
support the VHDL-93 syntax, so we are stuck with the VHDL-87 syntax.
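As a sketch of the difference (reusing the hypothetical adder and the signals f, g, h, i from the instantiation example in Figure 1.7; signal and entity declarations are omitted), VHDL-87 requires a component declaration in the declarative part of the architecture, while VHDL-93 also allows direct entity instantiation:

```vhdl
-- VHDL-87 style: declare the component, then instantiate it
architecture main of top is
  component adder
    port (a, b  : in  std_logic;
          s, co : out std_logic);
  end component;
begin
  add1 : adder port map (a => f, b => g, s => h, co => i);
end main;

-- VHDL-93 style: direct instantiation, no component declaration needed
--   add1 : entity work.adder port map (a => f, b => g, s => h, co => i);
```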
1.3.6 Processes
Processes are used to describe complex and potentially unsynthesizable behaviour
A process is a concurrent statement (Section 1.3.4).
The body of a process contains sequential statements (Section 1.3.7)
Processes are the most complex and difficult to understand part of VHDL (Sections 1.5 and 1.6)
process (a, b, c)
begin
y <= a AND b;
if (a = '1') then
z1 <= b AND c;
z2 <= NOT c;
else
z1 <= b OR c;
z2 <= c;
end if;
end process;
process
begin
y <= a AND b;
z <= '0';
wait until rising_edge(clk);
if (a = '1') then
z <= '1';
y <= '0';
wait until rising_edge(clk);
else
y <= a OR b;
end if;
end process;
Figure 1.8: Examples of processes
Processes must have either a sensitivity list or at least one wait statement on each execution path
through the process.
Processes cannot have both a sensitivity list and a wait statement.
Sensitivity List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The sensitivity list contains the signals that are read in the process.
A process is executed when a signal in its sensitivity list changes value.
An important coding guideline to ensure consistent synthesis and simulation results is to include
all signals that are read in the sensitivity list. If you forget some signals, you will either end up
with unpredictable hardware and simulation results (different results from different programs) or
undesirable hardware (latches where you expected purely combinational hardware). For more on
this topic, see sections 1.5.2 and 1.6.
There is one exception to this rule: for a process that implements a flip-flop with an if rising_edge
statement, it is acceptable to include only the clock signal in the sensitivity list; other signals
may be included, but are not needed.
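A sketch of the guideline (the signal names are hypothetical): the combinational process lists every signal it reads, while the flip-flop process needs only the clock in its sensitivity list:

```vhdl
-- combinational: a, b, and sel are all read, so all are listed
process (a, b, sel)
begin
  if sel = '1' then
    z <= a;
  else
    z <= b;
  end if;
end process;

-- flip-flop with "if rising_edge": only clk is needed
process (clk)
begin
  if rising_edge(clk) then
    q <= d;
  end if;
end process;
```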
[ PROCLAB : ] process ( sensitivity_list )
[ declaration ]
begin
sequential_statement
end process [ PROCLAB ] ;
Figure 1.9: Simplied grammar of process
1.3.7 Sequential Statements
Used inside processes and functions.
wait wait until . . . ;
signal assignment . . . <= . . . ;
if-then-else if . . . then . . . elsif . . . end if;
case case . . . is
when . . . | . . . => . . . ;
when . . . => . . . ;
end case;
loop loop . . . end loop;
while loop while . . . loop . . . end loop;
for loop for . . . in . . . loop . . . end loop;
next next . . . ;
Figure 1.10: The most commonly used sequential statements
1.3.8 A Few More Miscellaneous VHDL Features
Some constructs that are useful and will be described in later chapters and sections:
report : print a message on stderr while simulating
assert : assertions about behaviour of signals, very useful with report statements.
generics : parameters to an entity that are defined at elaboration time.
attributes : predefined functions for different datatypes. For example: high and low indices of a
vector.
1.4 Concurrent vs Sequential Statements
All concurrent assignments can be translated into sequential statements. But, not all sequential
statements can be translated into concurrent statements.
1.4.1 Concurrent Assignment vs Process
The two code fragments below have identical behaviour:
architecture main of tiny is
begin
b <= a;
end main;
architecture main of tiny is
begin
process (a) begin
b <= a;
end process;
end main;
1.4.2 Conditional Assignment vs If Statements
The two code fragments below have identical behaviour:
Concurrent Statements
t <= <val1> when <cond>
else <val2>;
Sequential Statements
if <cond> then
t <= <val1>;
else
t <= <val2>;
end if;
1.4.3 Selected Assignment vs Case Statement
The two code fragments below have identical behaviour:
Concurrent Statements
with <expr> select
t <= <val1> when <choices1>,
<val2> when <choices2>,
<val3> when <choices3>;
Sequential Statements
case <expr> is
when <choices1> =>
t <= <val1>;
when <choices2> =>
t <= <val2>;
when <choices3> =>
t <= <val3>;
end case;
1.4.4 Coding Style
Code that's easy to write with sequential statements, but difficult with concurrent:
Sequential Statements
case <expr> is
when <choice1> =>
if <cond> then
o <= <expr1>;
else
o <= <expr2>;
end if;
when <choice2> =>
. . .
end case;
Concurrent Statements
Overall structure:
with <expr> select
t <= ... when <choice1>,
... when <choice2>;
Failed attempt:
with <expr> select
t <= -- want to write:
-- <val1> when <cond>
-- else <val2>
-- but conditional assignment
-- is illegal here
when c1,
. . .
when c2;
Concurrent statement with correct behaviour, but messy:
t <= <expr1> when (expr = <choice1> AND <cond>)
else <expr2> when (expr = <choice1> AND NOT <cond>)
else . . .
;
1.5 Overview of Processes
Processes are the most difficult VHDL construct to understand. This section gives an overview of
processes. Section 1.6 gives the details of the semantics of processes.
Within a process, statements are executed almost sequentially
Among processes, execution is done in parallel
Remember: a process is a concurrent statement!
entity ENTITYID is
interface declarations
end ENTITYID;
architecture ARCHID of ENTITYID is
begin
concurrent statements
process begin
sequential statements
end process;
concurrent statements
end ARCHID;
Figure 1.11: Sequential statements in a process
Key concepts in VHDL semantics for processes:
VHDL mimics hardware
Hardware (gates) execute in parallel
Processes execute in parallel with each other
All possible orders of executing processes must produce the same simulation results (waveforms)
If a signal is not assigned a value, then it holds its previous value
All orders of executing concurrent statements must produce the same waveforms
It doesn't matter whether you are running on a single-threaded operating system, on a multi-threaded
operating system, on a massively parallel supercomputer, or on a special hardware emulator
with one FPGA chip per VHDL process: all simulations must be the same.
These concepts are the motivation for the semantics of executing processes in VHDL (Section 1.6)
and lead to the phenomenon of latch-inference (Section 1.5.2).
architecture
procA: process
stmtA1;
stmtA2;
stmtA3;
end process;
procB: process
stmtB1;
stmtB2;
end process;
(figure: three execution sequences of the statements stmtA1, stmtA2, stmtA3 from procA and stmtB1, stmtB2 from procB: single threaded with procA before procB, single threaded with procB before procA, and multithreaded with procA and procB in parallel)
Figure 1.12: Different process execution sequences
Figure 1.13: All execution orders must have same behaviour
Sections 1.5.1–1.5.3 discuss the hardware generated by processes.
Sections 1.6–1.6.7 discuss the behaviour and execution of processes.
1.5.1 Combinational Process vs Clocked Process
Each well-written synthesizable process is either combinational or clocked. Some synthesizable
processes that do not conform to our coding guidelines are both combinational and clocked. For
example, in a flip-flop with an asynchronous reset, the output is a combinational function of the
reset signal and a clocked function of the data input signal. We will deal only with processes
that follow our coding conventions, and so we will continue to say that each process is either
combinational xor clocked.
Combinational process: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Executing the process takes part of one clock cycle
Target signals are outputs of combinational circuitry
A combinational process must have a sensitivity list
A combinational process must not have any wait statements
A combinational process must not have any rising_edges or falling_edges
The hardware for a combinational process is just combinational circuitry
Clocked process: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Executing the process takes one (or more) clock cycles
Target signals are outputs of flops
Process contains one or more wait or if rising_edge statements
Hardware contains combinational circuitry and flip-flops
Note: Clocked processes are sometimes called sequential processes,
but this can be easily confused with sequential statements, so in E&CE 327
we'll refer to synthesizable processes as either combinational or clocked.
Example Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Combinational Process
process (a,b,c)
begin
p1 <= a;
if (b = c) then
p2 <= b;
else
p2 <= a;
end if;
end process;
Clocked Processes
process
begin
wait until rising_edge(clk);
b <= a;
end process;
process (clk)
begin
if rising_edge(clk) then
b <= a;
end if;
end process;
1.5.2 Latch Inference
The semantics of VHDL require that if a signal is assigned a value on some passes through a
process and not on other passes, then on a pass through the process when the signal is not assigned
a value, it must maintain its value from the previous pass.
process (a, b, c)
begin
if (a = '1') then
z1 <= b;
z2 <= b;
else
z1 <= c;
end if;
end process;
(figure: the synthesized circuit; z1 is combinational, z2 is the output of a latch enabled by a)
Figure 1.14: Example of latch inference
When a signal's value must be stored, VHDL infers a latch or a flip-flop in the hardware to store
the value.
If you want a latch or a flip-flop for the signal, then latch inference is good.
If you want combinational circuitry, then latch inference is bad.
Loop, Latch, Flop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
(figures: a combinational loop where z feeds back through a mux selected by a; a latch with data b, enable a, and output z; a flip-flop with data b, clock a, and output z)
Question: Write VHDL code for each of the above circuits
Answer:
combinational loop
if a = '1' then
z <= b;
else
z <= z;
end if;
latch
if a = '1' then
z <= b;
end if;
flop
if rising_edge(a) then
z <= b;
end if;
Causes of Latch Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Usually, latch inference refers to the unintentional creation of latches.
The most common cause of unintended latch inference is missing assignments to signals in if-then-
else and case statements.
Latch inference happens during elaboration. When using the Synopsys tools, look for:
Inferred memory devices
in the output or log files.
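To make the storage behaviour concrete, here is a sketch in Python (not VHDL; the function name and the '1'/'0' string encoding are our own for illustration) of the process in Figure 1.14: on a pass where z2 is not assigned, z2 must keep its value from the previous pass.

```python
def latch_process(a, b, c, prev_z2):
    """One pass through the process of Figure 1.14.

    z2 is assigned only on the a = '1' branch, so on the other branch
    it must keep its value from the previous pass: a latch."""
    if a == '1':
        return b, b           # z1 <= b; z2 <= b
    else:
        return c, prev_z2     # z1 <= c; z2 holds its old value

z1, z2 = latch_process('1', '1', '0', 'U')   # a = '1': z2 follows b
z1, z2 = latch_process('0', '0', '1', z2)    # a = '0': z2 still '1' from before
print(z1, z2)                                # -> 1 1
```

The second call shows the inferred storage: z2 reports '1' even though b is now '0', because nothing assigned z2 on that pass.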
1.5.3 Combinational vs Flopped Signals
Signals assigned to in combinational processes are combinational.
Signals assigned to in clocked processes are outputs of flip-flops.
1.6 Details of Process Execution
In this section we go through the detailed semantics of how processes execute. These semantics
form the foundation for the simulation and synthesis of VHDL. The semantics define the simulation
behaviour, and the duty of synthesis is to produce hardware that has the same behaviour as the
simulation of the original VHDL code.
1.6.1 Simple Simulation
Before diving into the details of processes, we briey review gate-level simulation with a simple
example, which we will then explore in excruciating detail through the semantics of VHDL.
With knowledge of just basic gate-level behaviour, we simulate the circuit below with waveforms
for a and b and calculate the behaviour for c, d, and e.
[Circuit: c <= a AND b; d <= NOT c; e <= b AND d. Waveforms for a, b, c, d, and e with events at 0ns, 10ns, 12ns, and 15ns]
Different Programs, Same Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
There are many different VHDL programs that will synthesize to this circuit. Three examples are:
process (a,b)
begin
c <= a and b;
end process;
process (b,c,d)
begin
d <= not c;
e <= b and d;
end process;
process (a,b,c,d)
begin
c <= a and b;
d <= not c;
e <= b and d;
end process;
process (a,b)
begin
c <= a and b;
end process;
process (c)
begin
d <= not c;
end process;
process (b,d)
begin
e <= b and d;
end process;
The goal of the VHDL semantics is that all of these programs will have the same behaviour.
The two main challenges to make this happen are: a value change on a signal must propagate
instantaneously, and all gates must operate in parallel. We will return to these points in section 1.6.3.
1.6.2 Temporal Granularities of Simulation
There are several different granularities of time to analyze VHDL behaviour. In this course, we
will discuss three major granularities: clock cycles, timing simulation, and delta cycles.
clock-cycle
smallest unit of time is a clock cycle
combinational logic has zero delay
flip-flops have a delay of one clock cycle
used for simulation early in the design cycle
fastest simulation run times
timing simulation
smallest unit of time is a nano, pico, or femto second
combinational logic and wires have delay as computed by timing analysis tools
flip-flops have setup, hold, and clock-to-Q timing parameters
used for simulation when fine-tuning a design and confirming that timing constraints are
satisfied
slow simulation times for large circuits
delta cycles
units of time are artifacts of VHDL semantics and simulation software
simulation cycles, delta cycles, and simulation steps are infinitesimally small amounts of
time
VHDL semantics are defined in terms of these concepts
In assignments and exams, you will need to be able to simulate VHDL code at each of the three
different levels of temporal granularity. In the laboratories and project, you will use simulation
programs for both clock-cycle simulation and timing simulation. We don't have access to a
program that will produce delta-cycle waveforms, but if anyone is looking for a challenging co-op job
or fourth-year design project....
For the remainder of section 1.6, we'll look at only the delta-cycle view of the world.
1.6.3 Intuition Behind Delta-Cycle Simulation
Zero-delay simulation might appear to be simpler than simulation with delays through gates
(timing simulation), but in reality, zero-delay simulation algorithms are more complicated than
algorithms for timing simulation. The reason is that in zero-delay simulation, a sequence of
dependent events must appear to happen instantaneously (in zero time). In particular, the effect of an
event must propagate instantaneously through the combinational circuitry.
Two fundamental rules for zero-delay simulation:
1. events appear to propagate through combinational circuitry instantaneously.
2. all of the gates appear to operate in parallel.
To make it appear that events propagate instantaneously, VHDL introduces an artificial unit of time,
the delta cycle, to represent an infinitesimally small amount of time. In each delta cycle, every gate
in the circuit will sample its inputs, compute its result, and drive its output signal with the result.
Because software executes serially, a simulator cannot run/simulate multiple gates in parallel.
Instead, the simulator must simulate the gates one at a time, but make the waveforms appear as
if all of the gates were simulated in parallel. In each delta cycle, the simulator will simulate any
gate whose input changed in the previous delta cycle. To preserve the illusion that the gates ran in
parallel, the effect of simulating a gate remains invisible until the end of the delta cycle.
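The two rules can be sketched as a small zero-delay simulator in Python (an illustration only, not the actual VHDL algorithm; the gate functions model the circuit c <= a AND b, d <= NOT c, e <= b AND d from section 1.6.1, using integer 0/1 values for simplicity). Each pass of the loop is one delta cycle: every gate with a changed input is re-evaluated, but the results are committed only at the end of the cycle, so the gates appear to run in parallel.

```python
def delta_simulate(sig):
    """Zero-delay simulation of c <= a AND b; d <= NOT c; e <= b AND d."""
    gates = {'c': lambda s: s['a'] & s['b'],
             'd': lambda s: 1 - s['c'],
             'e': lambda s: s['b'] & s['d']}
    deps = {'c': {'a', 'b'}, 'd': {'c'}, 'e': {'b', 'd'}}
    changed, deltas = set(sig), 0
    while changed:
        # evaluate every gate with a changed input; results stay provisional
        provisional = {out: f(sig) for out, f in gates.items()
                       if deps[out] & changed}
        # end of delta cycle: commit, and note which signals really changed
        changed = {out for out, v in provisional.items() if sig[out] != v}
        sig.update(provisional)
        deltas += 1
    return sig, deltas

sig, deltas = delta_simulate({'a': 1, 'b': 1, 'c': 0, 'd': 0, 'e': 0})
print(sig['c'], sig['d'], sig['e'], deltas)   # -> 1 0 0 4
```

Starting from all-0 signals with a = b = 1, the circuit settles in a few delta cycles; along the way e rises and falls again inside the round, a zero-time glitch of the kind we will see in the Flummox example.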
1.6.4 Definitions and Algorithm
1.6.4.1 Process Modes
An architecture contains a set of processes. Each process is in one of the following modes: active,
suspended, or postponed.
Note: "postponed" This use of the word "postponed" differs from that in
the VHDL Standard. We won't be using postponed processes as defined in the
Standard.
Note: "postponed" "Postponed" in VHDL terminology is a synonym for
some operating-systems usage of "ready" to describe a process that is ready
to execute.
[State diagram: a process moves among the modes active, suspended, and postponed via suspend, resume, and activate transitions]
Suspended
Nothing to currently execute.
A process stays suspended until the event that it is waiting for occurs: either a
change in a signal on its sensitivity list or the condition in a wait statement.
Postponed
Wants to execute, but not currently active.
A process stays postponed until the simulator chooses it from the pool of
postponed processes.
Active
Currently executing.
A process stays active until it hits a wait statement or sensitivity list, at which
point it suspends.
Figure 1.15: Process modes
1.6.4.2 Simulation Algorithm
The algorithm presented here is a simplification of the actual algorithm in Section 12.6 of the
VHDL Standard. The most significant simplification is that this algorithm does not support
delayed assignments. To support delayed assignments, each signal's provisional value would be
generalized to an event wheel, which is a list containing the times and values for multiple provisional
assignments in the future.
A somewhat ironic note: only six of the two hundred pages in the VHDL Standard are devoted to
the semantics of executing processes.
The Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Simulations start at step 1 with all processes postponed and all signals with a default value (e.g.,
'U' for std_logic).
1. While there are postponed processes:
(a) Pick one or more postponed processes to execute (become active).
(b) As a process executes, assignments to signals are provisional: new values do
not become visible until step 3.
(c) A process executes until it hits its sensitivity list or a wait statement, at which point
it suspends. At a wait statement, the process will suspend even if the condition is
true during the current simulation cycle.
(d) Processes that become suspended stay suspended until there are no more post-
poned or active processes.
2. Each process looks at signals that changed value (provisional value differs from visible
value) and at the simulation time. If a signal in a process's sensitivity list changed value,
or if the wait condition on which a process is suspended became true, then the process
resumes (becomes postponed).
3. Each signal that changed value is updated with its provisional value (the provisional
value becomes visible).
4. If there are no postponed processes, then increment simulation time to the next sched-
uled event.
Note: Parallel execution In n-threaded execution, at most n processes are
active at a time.
1.6.4.3 Delta-Cycle Definitions
Definition simulation step: Executing one sequential assignment or process mode
change.
Definition simulation cycle: The operations that occur in one iteration of the simulation
algorithm.
Definition delta cycle: A simulation cycle that does not advance simulation time.
Equivalently: A simulation cycle with zero-delay assignments where the assignment
causes a process to resume.
Definition simulation round: A sequence of simulation cycles that all have the same
simulation time. Equivalently: a contiguous sequence of zero or more delta cycles
followed by a simulation cycle that increments time (i.e., the final simulation cycle is not a
delta cycle).
Note: Official and unofficial terminology "Simulation cycle" and "delta cycle"
are official definitions in the VHDL Standard. "Simulation step" and "simulation
round" are not standard definitions. They are used in E&CE 327 because we
need words to associate with the concepts that they describe.
1.6.5 Example 1: Process Execution (Bamboozle)
This example (Bamboozle) and the next example (Flummox, section 1.6.6) are very similar. The
VHDL code for the circuit is slightly different, but the hardware that is generated is the same. The
stimulus for signals a and b also differ.
entity bamboozle is
begin
end bamboozle;
architecture main of bamboozle is
  signal a, b, c, d, e : std_logic;
begin
  procA : process (a, b) begin
    c <= a AND b;
  end process;
  procB : process (b, c, d)
  begin
    d <= NOT c;
    e <= b AND d;
  end process;
  procC : process
  begin
    a <= '0';
    b <= '1';
    wait for 10 ns;
    a <= '1';
    wait for 2 ns;
    b <= '0';
    wait for 3 ns;
    a <= '0';
    wait for 20 ns;
  end process;
end main;
Figure 1.16: Example bamboozle circuit for process execution
Initial conditions (Shown in slides, not in notes)
Step 1(a): Activate procA (Shown in slides, not in notes)
[Slide: Step 1(a): Activate procA; circuit, process modes, and waveforms at 0ns]
Step 1(c): Suspend procA (Shown in slides, not in notes)
Step 1(a): Activate procC (Shown in slides, not in notes)
Step 1(b): Provisional assignment to a (Shown in slides, not in notes)
Step 1(b): Provisional assignment to b (Shown in slides, not in notes)
[Slide: Step 1(b): Provisional assignment to b; circuit, process modes, and waveforms at 0ns]
1.6.5 Example 1: Process Execution (Bamboozle) 33
Step 1(a): Activate procB (Shown in slides, not in notes)
Step 1(b): Provisional assignment to d (Shown in slides, not in notes)
Step 1(b): Provisional assignment to e (Shown in slides, not in notes)
Step 1(c): Suspend procB (Shown in slides, not in notes)
[Slide: Step 1(c): Suspend procB]
[Slide: All processes suspended]
Step 3: Update signal values (Shown in slides, not in notes)
[Slide: Step 3: Update signal values]
[Slide: Step 4: Simulation time remains at 0 ns (delta cycle)]
Step 1(a): Activate procA (Shown in slides, not in notes)
Step 1(b): Provisional assignment to c (Shown in slides, not in notes)
Step 1(c): Suspend procA (Shown in slides, not in notes)
Step 1(a): Activate procB (Shown in slides, not in notes)
Step 1(b): Provisional assignment to d (Shown in slides, not in notes)
Step 1(b): Provisional assignment to e (Shown in slides, not in notes)
Step 1(c): Suspend procB (Shown in slides, not in notes)
[Slide: All processes suspended]
Step 3: Update signal values (Shown in slides, not in notes)
Step 4: Simulation time remains at 0ns delta cycle (Shown in slides, not in notes)
Compact simulation cycle (Shown in slides, not in notes)
Begin next simulation cycle (Shown in slides, not in notes)
Step 1(a): Activate procB (Shown in slides, not in notes)
Step 1(b): Provisional assignment to d (Shown in slides, not in notes)
Step 1(b): Provisional assignment to e (Shown in slides, not in notes)
Step 1(c): Suspend procB (Shown in slides, not in notes)
All processes suspended (Shown in slides, not in notes)
[Slide: All processes suspended]
Step 3: Update signal values (Shown in slides, not in notes)
[Slide: Step 3: Update signal values]
Compact simulation cycle (Shown in slides, not in notes)
Begin next simulation cycle (Shown in slides, not in notes)
Step 1(a): Activate procB (Shown in slides, not in notes)
Step 1(b): Provisional assignment to d (Shown in slides, not in notes)
Step 1(b): Provisional assignment to e (Shown in slides, not in notes)
Step 1(c): Suspend procB (Shown in slides, not in notes)
[Slide: Step 1(c): Suspend procB]
Step 3: Update signal values (Shown in slides, not in notes)
[Slide: Step 3: Update signal values]
Compact simulation cycle (Shown in slides, not in notes)
Begin next simulation cycle (Shown in slides, not in notes)
Step 1: No postponed processes (Shown in slides, not in notes)
[Slide: Step 1: no postponed processes; simulation time advances to 10ns]
Compact simulation cycle (Shown in slides, not in notes)
Begin next simulation cycle (Shown in slides, not in notes)
Step 1(a): Activate procC (Shown in slides, not in notes)
Step 1(b): Provisional assignment to a (Shown in slides, not in notes)
Step 1(c): Suspend procC (Shown in slides, not in notes)
Step 2: Check sensitivity list; resume processes (Shown in slides, not in notes)
Step 3: Update signal values (Shown in slides, not in notes)
[Slide: Step 3: Update signal values]
Compact simulation cycle (Shown in slides, not in notes)
1.6.6 Example 2: Process Execution (Flummox)
This example is a variation of the Bamboozle example from section 1.6.5.
entity flummox is
begin
end flummox;
architecture main of flummox is
  signal a, b, c, d, e : std_logic;
begin
  proc1 : process (a, b, c) begin
    c <= a AND b;
    d <= NOT c;
  end process;
  proc2 : process (b, d)
  begin
    e <= b AND d;
  end process;
  proc3 : process
  begin
    a <= '1';
    b <= '0';
    wait for 3 ns;
    b <= '1';
    wait for 99 ns;
  end process;
end main;
Figure 1.17: Example flummox circuit for process execution
[Figure: delta-cycle simulation of flummox: waveforms for a, b, c, d, e and modes for proc1, proc2, proc3 across simulation rounds at 0ns, 3ns, and 102ns]
To get a more natural view of the behaviour of the signals, we draw just the waveforms and use a
timescale of nanoseconds plus delta cycles:
[Waveforms for a, b, c, d, and e on a timescale of nanoseconds plus delta cycles: 0ns +1 +2 +3, 3ns +1 +2 +3, 102ns]
Finally, we draw the behaviour of the signals using the standard time scale of nanoseconds. Notice
that the delta-cycles within a simulation round all collapse to the left, so the signals change value
exactly at the nanosecond boundaries. Also, the glitch on e disappears.
[Waveforms for a, b, c, d, and e at the nanosecond timescale, 0ns to 102ns]
Note and Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Note: If a signal is updated with the same value it had in the previous sim-
ulation cycle, then it does not change, and therefore does not trigger processes
to resume.
Question: What are the different granularities of time that occur when doing
delta-cycle simulation?
Answer:
simulation step, delta cycle, simulation cycle, simulation round
Question: What is the order of granularity, from finest to coarsest, amongst the
different granularities related to delta-cycle simulation?
Answer:
Same order as listed just above. Note: delta cycles have a finer granularity
than simulation cycles, because delta cycles do not advance time, while
simulation cycles that are not delta cycles do advance time.
1.6.7 Example: Need for Provisional Assignments
This is an example of processes where updating signals during a simulation cycle leads to different
results for different process execution orderings.
architecture main of swindle is
  signal a, b, c, d : std_logic;
begin
  p_c: process (a, b) begin
    c <= a AND b;
  end process;
  p_d: process (a, c) begin
    d <= a XOR c;
  end process;
end main;
[Circuit: c <= a AND b; d <= a XOR c]
Figure 1.18: Circuit to illustrate need for provisional assignments
1. Start with all signals at '0'.
2. Simultaneously change to a = '1' and b = '1'.
If assignments are not visible within the same simulation cycle (correct: i.e., provisional
assignments are used):
[Waveforms] If p_c is scheduled before p_d, then d will have a '1' pulse.
[Waveforms] If p_d is scheduled before p_c, then d will have a '1' pulse.
If assignments are visible within the same simulation cycle (incorrect):
[Waveforms] If p_c is scheduled before p_d, then d will stay constant '0'.
[Waveforms] If p_d is scheduled before p_c, then d will have a '1' pulse.
With provisional assignments, both orders of scheduling processes result in the same behaviour
on all signals. Without provisional assignments, different scheduling orders result in different
behaviour.
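The argument can be checked with a short Python sketch (the names p_c and p_d are from Figure 1.18; everything else here is our own illustration, using integer 0/1 values). One simulation cycle runs both processes in a chosen order; with provisional assignments each process reads the values from the start of the cycle, and without them each process sees updates immediately.

```python
def simulate_cycle(sig, order, provisional=True):
    """One simulation cycle of p_c (c <= a AND b) and p_d (d <= a XOR c)."""
    snapshot = dict(sig)                 # visible values at the start of the cycle
    for proc in order:
        src = snapshot if provisional else sig
        if proc == 'p_c':
            sig['c'] = src['a'] & src['b']
        else:                            # p_d
            sig['d'] = src['a'] ^ src['c']
    return sig

start = {'a': 1, 'b': 1, 'c': 0, 'd': 0}   # just after a and b change to 1
with_prov = [simulate_cycle(dict(start), order)
             for order in (['p_c', 'p_d'], ['p_d', 'p_c'])]
without = [simulate_cycle(dict(start), order, provisional=False)
           for order in (['p_c', 'p_d'], ['p_d', 'p_c'])]
print(with_prov[0] == with_prov[1])   # -> True: order does not matter
print(without[0] == without[1])       # -> False: the final value of d depends on the order
```

With provisional assignments both orders leave c = 1 and d = 1 (the start of the '1' pulse); without them, d ends at 0 or 1 depending on which process ran first.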
1.6.8 Delta-Cycle Simulations of Flip-Flops
This example illustrates the delta-cycle simulation of a flip-flop. Notice how the delta-cycle simu-
lation captures the expected behaviour of the flip-flop: the signal q changes at the same time (10ns)
as the rising edge on the clock.
p_a : process begin
  a <= '0';
  wait for 15 ns;
  a <= '1';
  wait for 20 ns;
end process;
p_clk : process begin
  clk <= '0';
  wait for 10 ns;
  clk <= '1';
  wait for 10 ns;
end process;
flop : process ( clk ) begin
  if rising_edge( clk ) then
    q <= a;
  end if;
end process;
[Figure: delta-cycle simulation of the flip-flop: waveforms for a, clk, q and modes for p_a, p_clk, flop from 0ns to 35ns]
Redraw with Normal Time Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
To clarify the behaviour, we redraw the same simulation using a normal time scale.
[Waveforms for a, clk, and q at the normal time scale, 0ns to 35ns]
Back-to-Back Flops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
In the previous simulation, the input to the flip-flop (a) changed several nanoseconds before the
rising edge on the clock. In zero-delay simulation, the output of a flip-flop changes exactly on
the rising edge of the clock. This means that the input to the next flip-flop will change at exactly
the same time as a rising edge. This example illustrates how delta-cycle simulation handles the
situation correctly.
p_a : process begin
  a <= '0';
  wait for 15 ns;
  a <= '1';
  wait for 20 ns;
end process;
p_clk : process begin
  clk <= '0';
  wait for 10 ns;
  clk <= '1';
  wait for 10 ns;
end process;
flops : process ( clk ) begin
  if rising_edge( clk ) then
    q1 <= a;
    q2 <= q1;
  end if;
end process;
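Why q2 lags q1 can be seen in a Python sketch of the two-phase update (an illustration only, not course code, with integer 0/1 values): on a rising edge both flops sample their inputs from before the edge, and the outputs are committed together, just like the provisional assignments in the delta-cycle algorithm.

```python
def rising_edge(sig):
    """One rising clock edge: both flops sample the pre-edge values,
    then update together (two-phase, like provisional assignments)."""
    q1_next = sig['a']     # flop 1 samples a
    q2_next = sig['q1']    # flop 2 samples the OLD q1, not q1_next
    sig['q1'], sig['q2'] = q1_next, q2_next
    return sig

s = {'a': 1, 'q1': 0, 'q2': 0}
rising_edge(s)
print(s['q1'], s['q2'])   # -> 1 0 : q2 is one clock edge behind q1
rising_edge(s)
print(s['q1'], s['q2'])   # -> 1 1
```

If q2_next were computed from the new q1 instead, both flops would update in the same edge and the shift-register behaviour would be lost.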
[Figure: delta-cycle simulation of back-to-back flops: waveforms for a, clk, q1, q2 and modes for p_a, p_clk, flops from 10ns to 35ns]
Redraw with Normal Time Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
To clarify the behaviour, we redraw the same simulation using a normal time scale.
[Waveforms for a, clk, q1, and q2 at the normal time scale, 0ns to 35ns]
External Inputs and Flops . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
In our work so far with delta-cycle simulation, we have worked through the mechanics of simula-
tion. This example applies knowledge of delta-cycle simulation at a conceptual level. We could
answer the question by thinking about the semantics of delta-cycle simulation or by mechanically
doing the simulation.
Question: Do the signals b1 and b2 have the same behaviour from 20–30 ns?
architecture mathilde of sauve is
  signal clk, a1, a2, b1, b2 : std_logic;
begin
  process begin
    clk <= '1';
    wait for 10 ns;
    clk <= '0';
    wait for 10 ns;
  end process;
  process begin
    wait for 20 ns;
    a1 <= '1';
  end process;
  process begin
    wait until rising_edge(clk);
    a2 <= '1';
  end process;
  process begin
    wait until rising_edge( clk );
    b1 <= a1;
    b2 <= a2;
  end process;
end architecture;
Answer:
The signals b1 and b2 will have the same behaviour if a1 and a2 have the
same behaviour. The difference in the code between a1 and a2 is that a1 is
waiting for 20ns and a2 is waiting until a rising edge of the clock. There is a
rising edge of the clock at 20ns, so we might be tempted to conclude
(incorrectly) that both a1 and a2 transition from 'U' to '1' at exactly 20ns and
therefore have exactly the same behaviour.
The difference between the behaviour of a1 and a2 is that in the first
simulation cycle for 20 ns, the process for a1 becomes postponed, while the
process for a2 becomes postponed only after the rising edge of the clock.
The signal a1 is waiting for 20ns, so in the first simulation cycle for 20ns, the
process for a1 becomes postponed. In the second simulation cycle for 20ns,
the clock toggles from '0' to '1' and a1 toggles from 'U' to '1'. The rising edge
on the clock causes the processes for a2, b1, and b2 to become postponed.
In the third simulation cycle for 20ns:
a2 toggles from 'U' to '1'.
b1 sees the value of '1' for a1, because a1 became '1' at the end of the second
simulation cycle.
b2 sees the old value of 'U' for a2, because the process for a2 did not run
in the second simulation cycle.
[Figure: delta-cycle simulation: waveforms for clk, a1, a2, b1, b2 and modes for proc_clk, proc_a1, proc_a2, proc_b at 0ns, 10ns, 20ns, 20ns+1, 20ns+2, 30ns]
Testbenches and Clock Phases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
env : process begin
  a <= '1';
  clk <= '0';
  wait for 10 ns;
  a <= '0';
  clk <= '1';
  wait for 10 ns;
end process;
flop : process ( clk ) begin
  if rising_edge( clk ) then
    q1 <= a;
  end if;
end process;
[Figure: delta-cycle simulation: waveforms for a, clk, q1 and modes for env and flop at 0ns, 0ns+1, 10ns, 20ns]
Redraw with Normal Time Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Waveforms for a, clk, and q1 at the normal time scale, 0ns to 20ns]
Note: Testbench signals For consistent results across different simulators,
simulation scripts vs test benches, and timing simulation vs zero-delay simula-
tion: do not change signals in your testbench or script at the same time as the
clock changes.
[Waveforms for a, clk, q1, 0ns to 60ns] a is output of clocked or combinational process.
[Waveforms for a, clk, q1, 0ns to 60ns] a is output of timed process (testbench or environment): POOR DESIGN.
[Waveforms for a, clk, q1, 0ns to 60ns] a is output of timed process (testbench or environment): GOOD DESIGN.
1.7 Register-Transfer-Level Simulation
Delta-cycle simulation is very tedious for both humans and computers. For many circuits, the
complexity of delta-cycle simulation is not needed and register-transfer-level simulation, which is
much simpler, can be used instead.
The major complexities of delta-cycle simulation come from running a process multiple times
within a single simulation round and keeping track of the modes of the processes. Register-transfer-
level simulation avoids both of these complexities. By evaluating each signal only once per sim-
ulation round, an entire simulation round can be reduced to a single column in a timing diagram.
The disadvantage of register-transfer-level simulation is that it does not work for all VHDL pro-
grams; in particular, it does not support combinational loops.
[Figure: the same flummox simulation at two temporal granularities: delta-cycle simulation (one column per simulation cycle, 0ns+1, 0ns+2, 3ns+1, 3ns+2, 3ns+3, 102ns) and RTL simulation (one column per simulation round, 0ns to 102ns)]
1.7.1 Overview
In delta-cycle simulations, we often simulated the same process multiple times within the same
simulation round. Looking at the circuit, though, we can mentally calculate the output value
by evaluating each gate only once per simulation round. For both humans and computers (or the
humans waiting for results from computers), it is desirable to avoid the wasted work of simulating
a gate when the output will remain at 'U' or will change again later in the same simulation round.
In register-transfer-level simulation, we evaluate each gate only once per simulation round. Register-
transfer-level simulation is simpler and faster than delta-cycle simulation, because it avoids delta
cycles and provisional assignments.
In delta-cycle simulation, we evaluate a gate multiple times in a single simulation round if the
process that drives the gate is active in multiple simulation cycles, which happens when the process
is triggered in multiple simulation cycles. To avoid this, we must evaluate a signal only after all of
the signals that it depends on have stable values, that is, the signals will not change value later in
the simulation round.
A combinational loop is a circuit that contains a cyclic path through the circuit that includes only
combinational gates. Combinational loops can cause signals to oscillate, which in delta-cycle
simulation with zero-delay assignments, corresponds to an infinite sequence of delta cycles. We
immediately see that when doing zero-delay simulation of a combinational loop such as
a <= not(a);, the change on a will trigger the process to re-run and re-evaluate a an infinite
number of times. Hence, register-transfer-level simulation does not support combinational loops.
To make register-transfer simulation work, we preprocess the VHDL program and transform it so
that each process is dependent upon only those processes that appear before it. This dependency
ordering is called topological ordering. If a circuit has combinational loops, we cannot sort the
processes into a topological order.
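The preprocessing step can be sketched as a depth-first topological sort in Python (an illustration only; the process names are our own). Each process maps to the set of processes whose outputs it reads; a cycle in this graph is a combinational loop, and the sort fails on it.

```python
def topological_order(deps):
    """deps maps each process to the set of processes it reads from.
    Returns an evaluation order, or raises on a combinational loop."""
    order, done, visiting = [], set(), set()

    def visit(p):
        if p in done:
            return
        if p in visiting:                 # back edge: a cyclic dependency
            raise ValueError("combinational loop through " + p)
        visiting.add(p)
        for q in deps.get(p, ()):         # evaluate dependencies first
            visit(q)
        visiting.discard(p)
        done.add(p)
        order.append(p)

    for p in deps:
        visit(p)
    return order

# c <= a AND b; d <= NOT c; e <= b AND d: proc_e reads proc_d reads proc_c
print(topological_order({'proc_c': set(),
                         'proc_d': {'proc_c'},
                         'proc_e': {'proc_d'}}))
# -> ['proc_c', 'proc_d', 'proc_e']
```

Evaluating the processes in the returned order guarantees that every signal a process reads already has its final, stable value for that simulation round.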
The register-transfer level is a coarser level of temporal abstraction than the delta-cycle level.
In delta-cycle simulation, many delta-cycles can elapse without an increment in real time (e.g.
nanoseconds). In register-transfer-level simulation, all of the events that take place in the same
moment of real time take place at the same moment in the simulation. In other words, all of the events
that take place at the same time are drawn in the same column of the waveform diagram.
Register-transfer-level simulation can be done for legal VHDL code, either synthesizable or unsynthesizable, so long as the code does not contain combinational loops. For any piece of VHDL code without combinational loops, the register-transfer-level simulation and the delta-cycle simulation will have the same value for each signal at the end of each simulation round.
By sorting the processes in topological order, when we execute a process, all of the signals that the
process depends on will have already been evaluated, and so we know that we are reading the final,
stable values that each signal will have for that moment in time. This is good, because for most
processes, we want to read the most recent values of signals. The exceptions are timed processes
that are dependent upon other timed processes running at the same moment in time and clocked
processes that are dependent upon other clocked processes.
process begin
  a <= '0';
  wait for 10 ns;
  a <= '1';
  ...
end process;

process begin
  b <= '0';
  wait for 10 ns;
  b <= a;
  ...
end process;
Question: In this code, what value should b have at 10 ns?
Answer: Both processes will execute in the same simulation cycle at 10 ns. The statement b <= a will see the value of a from the previous simulation cycle, which is before a <= '1'; is evaluated. The signal b will be '0' at 10 ns.
As the above example illustrates, if a clocked process reads the values of signals from processes that resume at the same time, it must read the previous value of those signals. Similarly, if a clocked process reads the values of signals from processes that are sensitive to the same clock, those processes will all resume in the same simulation cycle: the cycle immediately after the rising edge of the clock (assuming that the processes use if rising_edge or wait until rising_edge statements). Because the processes run in the same simulation cycle, they all read the previous values of the signals that they depend on. If this were not the case, then the VHDL code for a pair of back-to-back flip-flops would not operate correctly, because the output of the first flip-flop would appear immediately at the output of the second flip-flop.
Simulation rounds begin with incrementing time, which triggers timed processes. Therefore, the
first processes in the topological order are the timed processes. Timed processes may be run in any
order, and they read the previous values of signals that they depend on. This gives the same effect
as in delta-cycle simulation, where the timed processes would run in the same simulation cycle and
read the values that signals had before the simulation cycle began.
We then sort the clocked and combinational processes based on their dependencies, so that each
process appears (is run) after all of the processes on which it depends.
Although a clocked process may read many signals, we say that a clocked process is dependent
upon only its clock signal. It is the change in the clock signal that causes the process to resume.
So, as long as the process is run after the clock signal is stable, we can be sure that it will not need
to be run again at this time step. Clocked processes may be run in any order. They read the current
value of their clock signal and the previous value of the other signals that they depend on. As
with timed processes, this gives the same effect as in delta-cycle simulation, where the clock edge
would trigger the clocked processes to run in the same simulation cycle and the processes would
read the values that signals had before the simulation cycle began.
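The back-to-back flip-flop case described above can be sketched as two clocked processes; because both resume in the same simulation cycle, the second one reads the previous value of q1, giving a proper two-stage delay (signal names d, q1, q2 are illustrative):

```vhdl
process begin
  wait until rising_edge(clk);
  q1 <= d;        -- first flop captures the input
end process;

process begin
  wait until rising_edge(clk);
  q2 <= q1;       -- reads the previous value of q1: a two-stage shift register
end process;
```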
1.7.2 Technique for Register-Transfer Level Simulation
1. Pre-processing
(a) Separate processes into combinational and non-combinational (clocked and timed)
(b) Decompose each combinational process into separate processes with one target signal
per process
(c) Sort processes into topological order based on dependencies
2. For each clock cycle or unit of time:
(a) Run non-combinational processes in any order. Non-combinational assignments read
from earlier clock cycle / time step, except that clocked processes read the current value
of the clock signal.
(b) Run combinational processes in topological order. Combinational assignments read
from current clock cycle / time step.
Combinational Process Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Original code:

process (a, b, c)
begin
  if a = '1' then
    d <= b;
    e <= c;
  else
    d <= not b;
    e <= b and c;
  end if;
end process;

After decomposition:

process (a, b, c)
begin
  if a = '1' then
    d <= b;
  else
    d <= not b;
  end if;
end process;

process (a, b, c)
begin
  if a = '1' then
    e <= c;
  else
    e <= b and c;
  end if;
end process;
1.7.3 Examples of RTL Simulation
1.7.3.1 RTL Simulation Example 1
We revisit an earlier example from delta-cycle simulation, but change the code slightly and do
register-transfer-level simulation.
1. Original code:
proc1: process (a, b, c) begin
d <= NOT c;
c <= a AND b;
end process;
proc2: process (b, d) begin
e <= b AND d;
end process;
proc3: process begin
a <= '1';
b <= '0';
wait for 3 ns;
b <= '1';
wait for 99 ns;
end process;
2. Decompose combinational processes into single-target processes:
Decomposed:

proc1d: process (c) begin
  d <= NOT c;
end process;

proc1c: process (a, b) begin
  c <= a AND b;
end process;

proc2: process (b, d) begin
  e <= b AND d;
end process;

Sorted:

proc1c: process (a, b) begin
  c <= a AND b;
end process;

proc1d: process (c) begin
  d <= NOT c;
end process;

proc2: process (b, d) begin
  e <= b AND d;
end process;
3. To sort combinational processes into topological order, move proc1d after proc1c, because d depends on c.
4. Run timed process (proc3) until it suspends at wait for 3 ns;.
The signal a gets '1' from 0 to 3 ns.
The signal b gets '0' from 0 to 3 ns.
5. Run proc1c.
The signal c gets a AND b ('1' AND '0' = '0') from 0 to 3 ns.
6. Run proc1d.
The signal d gets NOT c (NOT '0' = '1') from 0 to 3 ns.
7. Run proc2.
The signal e gets b AND d ('0' AND '1' = '0') from 0 to 3 ns.
8. Run the timed process until it suspends at wait for 99 ns;, which takes us from 3 ns to 102 ns.
9. Run combinational processes in topological order to calculate values on c, d, e from 3 ns to 102 ns.
Waveforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Waveform diagram for signals a, b, c, d, e from 0 ns to 102 ns: all signals start at U; from 0 to 3 ns: a = 1, b = 0, c = 0, d = 1, e = 0; from 3 ns to 102 ns: b = 1, c = 1, d = 0, e = 0.]
Question: Draw the RTL waveforms that correspond to the delta-cycle waveform
below.
[Delta-cycle waveform diagram showing signals a, b, c, d, e and the process states (B, P, A, S, E) of proc1, proc2, proc3 across the simulation cycles 0ns, 0ns+1, 0ns+2, 3ns, 3ns+1, 3ns+2, 3ns+3, and 102ns.]
Answer:
[RTL waveform for signals a, b, c, d, e from 0 ns to 102 ns: all signals start at U; from 0 to 3 ns: a = 1, b = 0, c = 0, d = 1, e = 0; from 3 ns onward: b = 1, c = 1, d = 0, e = 0.]
Example: Communicating State Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
huey: process
begin
  clk <= '1';
  wait for 10 ns;
  clk <= '0';
  wait for 10 ns;
end process;

dewey: process
begin
  a <= to_unsigned(0,4);
  wait until re(clk);
  while (a < 4) loop
    a <= a + 1;
    wait until re(clk);
  end loop;
end process;

louie: process
begin
  wait until re(clk);
  d <= '1';
  if (a >= 2) then
    d <= '0';
    wait until re(clk);
  end if;
end process;
[Waveform diagram for clk, a, and d from 0 to 120 ns: clk toggles every 10 ns; a steps through 0, 1, 2, 3, 4 on successive rising edges and then restarts at 0; d is set to 1 on each rising edge and dropped to 0 for an extra cycle whenever a >= 2.]
A Related Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Small changes to the code can cause significant changes to the behaviour.
riri: process
begin
  clk <= '1';
  wait for 10 ns;
  clk <= '0';
  wait for 10 ns;
end process;

fifi: process
begin
  a <= to_unsigned(0,4);
  wait until re(clk);
  while (a < 4) loop
    a <= a + 1;
    wait until re(clk);
  end loop;
end process;

loulou: process
begin
  wait until re(clk);
  d <= '1';
  if (a < 2) then
    d <= '0';
    wait until re(clk);
  end if;
end process;
[Blank waveform grid for clk, a, and d from 0 to 120 ns, to be filled in.]
1.8 VHDL and Hardware Building Blocks
This section outlines the building blocks for register transfer level design and how to write VHDL
code for the building blocks.
1.8.1 Basic Building Blocks
[Schematic symbols: a 2:1 mux (also: n-to-1 muxes); a memory array with write-enable (WE), address (A0, A1), data-in (DI0), and data-out (DO0, DO1) ports; and a D flip-flop with clock-enable (CE), set (S), and reset (R).]

Hardware                          VHDL
------------------------------------------------------------------
AND, OR, NAND, NOR, XOR, XNOR     and, or, nand, nor, xor, xnor
multiplexer                       if-then-else, case statement,
                                  selected assignment,
                                  conditional assignment
adder, subtracter, negater        +, -, -
shifter, rotater                  sll, srl, sla, sra, rol, ror
flip-flop                         wait until, if-then-else,
                                  rising_edge
memory array, register file,      2-d array or library component
queue

Figure 1.19: RTL Building Blocks
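As a sketch of the multiplexer row of the table, the same 2:1 mux can be written as a conditional assignment or a selected assignment (signal names are illustrative):

```vhdl
-- conditional assignment
z <= a when (sel = '1') else b;

-- selected assignment
with sel select
  z <= a when '1',
       b when others;
```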
1.8.2 Deprecated Building Blocks for RTL
Some of the common gates you have encountered in previous courses should be avoided when synthesizing register-transfer-level hardware, particularly if FPGAs are the implementation technology.
1.8.2.1 An Aside on Flip-Flops and Latches
flip-flop  Edge sensitive: output only changes on rising (or falling) edge of clock
latch      Level sensitive: output changes whenever clock is high (or low)
A common implementation of a flip-flop is a pair of latches (master/slave flop).
Latches are sometimes called transparent latches, because they are transparent (input directly connected to output) when the clock is high.
The clock to a latch is sometimes called the enable line.
There is more information in the course notes on timing analysis for storage devices (Section 5.2).
1.8.2.2 Deprecated Hardware
Latches
Use flops, not latches
Latch-based designs are susceptible to timing problems
The transparent phase of a latch can let a signal leak through a latch, causing the signal to affect the output one clock cycle too early
It's possible for a latch-based circuit to simulate correctly, but not work in real hardware, because the timing delays on the real hardware don't match those predicted in synthesis
T, JK, SR, etc. flip-flops
Limit yourself to D-type flip-flops
Some FPGA and ASIC cell libraries include only D-type flip-flops. Others, such as Altera's APEX FPGAs, can be configured as D, T, JK, or SR flip-flops.
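Where only D-type flip-flops are available, T-type behaviour can be described around a D flop and left to the synthesis tool; a minimal sketch (signal names t and q are illustrative):

```vhdl
process (clk)
begin
  if rising_edge(clk) then
    if (t = '1') then
      q <= NOT q;   -- toggle when t is asserted: T behaviour from a D flop
    end if;
  end if;
end process;
```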
Tri-State Buffers
Use multiplexers, not tri-state buffers
Tri-state designs are susceptible to stability and signal integrity problems
Getting tri-state designs to simulate correctly is difficult; some library components don't support tri-state signals
Tri-state designs rely on the code never letting two signals drive the bus at the same time
It can be difficult to check that bus arbitration will always work correctly
Manufacturing and environmental variability can make real hardware not work correctly even if it simulates correctly
Typical industrial practice is to avoid use of tri-state signals on a chip, but allow tri-state signals at the board level
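A sketch of the multiplexer alternative: instead of two drivers sharing a tri-state bus, a select signal chooses the driver explicitly (signal names are illustrative):

```vhdl
-- uni-directional "bus" built from a mux: never has two drivers
bus_out <= a when (grant_a = '1') else b;
```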
Note: Unfortunately and surprisingly, PalmChip has been awarded a US patent for using uni-directional busses (i.e. multiplexers) for system-on-chip designs. The patent was filed in 2000, so all fourth-year design projects since 2000 that use muxes on FPGAs will need to pay royalties to PalmChip.
1.8.3 Hardware and Code for Flops
1.8.3.1 Flops with Waits and Ifs
The two code fragments below synthesize to identical hardware (flops).
If
process (clk)
begin
if rising_edge(clk) then
q <= d;
end if;
end process;
Wait
process
begin
wait until rising_edge(clk);
q <= d;
end process;
1.8.3.2 Flops with Synchronous Reset
The two code fragments below synthesize to identical hardware (flops with synchronous reset). Notice that the synchronous reset is really nothing more than an AND gate on the input.
If

process (clk)
begin
  if rising_edge(clk) then
    if (reset = '1') then
      q <= '0';
    else
      q <= d;
    end if;
  end if;
end process;

Wait

process
begin
  wait until rising_edge(clk);
  if (reset = '1') then
    q <= '0';
  else
    q <= d;
  end if;
end process;
1.8.3.3 Flops with Chip-Enable
The two code fragments below synthesize to identical hardware (flops with chip-enable lines).
If

process (clk)
begin
  if rising_edge(clk) then
    if (ce = '1') then
      q <= d;
    end if;
  end if;
end process;

Wait

process
begin
  wait until rising_edge(clk);
  if (ce = '1') then
    q <= d;
  end if;
end process;
1.8.3.4 Flop with Chip-Enable and Mux on Input
The two code fragments below synthesize to identical hardware (flops with chip-enable lines and muxes on inputs).
If

process (clk)
begin
  if rising_edge(clk) then
    if (ce = '1') then
      if (sel = '1') then
        q <= d1;
      else
        q <= d0;
      end if;
    end if;
  end if;
end process;

Wait

process
begin
  wait until rising_edge(clk);
  if (ce = '1') then
    if (sel = '1') then
      q <= d1;
    else
      q <= d0;
    end if;
  end if;
end process;
1.8.3.5 Flops with Chip-Enable, Muxes, and Reset
The two code fragments below synthesize to identical hardware (flops with chip-enable lines, muxes on inputs, and synchronous reset). Notice that the synchronous reset is really nothing more than a mux, or an AND gate on the input.
Note: The specific combination and order of tests is important to guarantee that the circuit synthesizes to a flop with a chip enable, as opposed to a level-sensitive latch testing the chip enable and/or reset followed by a flop.
Note: The chip-enable pin on the flop is connected to both ce and reset. If the chip-enable pin were not connected to reset, then the flop would ignore reset unless chip-enable was asserted.
If

process (clk)
begin
  if rising_edge(clk) then
    if (ce = '1' or reset = '1') then
      if (reset = '1') then
        q <= '0';
      elsif (sel = '1') then
        q <= d1;
      else
        q <= d0;
      end if;
    end if;
  end if;
end process;

Wait

process
begin
  wait until rising_edge(clk);
  if (ce = '1' or reset = '1') then
    if (reset = '1') then
      q <= '0';
    elsif (sel = '1') then
      q <= d1;
    else
      q <= d0;
    end if;
  end if;
end process;
1.8.4 An Example Sequential Circuit
There are many ways to write VHDL code that synthesizes to the schematic in figure 1.20. The major choices are:
1. Categories of signals
(a) All signals are outputs of flip-flops or inputs (no combinational signals)
(b) Signals include both flopped and combinational
2. Number of flopped signals per process
(a) All flopped signals in a single process
(b) Some processes with multiple flopped signals
(c) Each flopped signal in its own process
3. Style of flop code
(a) Flops use if statements
(b) Flops use wait statements
Some examples of these different options are shown in figures 1.21-1.24.
[Schematic: inputs reset, sel, and clk; a flip-flop with synchronous reset holding internal signal a, whose input is a mux selecting between NOT a and a according to sel; a second flip-flop producing output c = NOT a; both flops clocked by clk.]

entity and_not_reg is
port (
  reset,
  clk,
  sel : in std_logic;
  c : out std_logic
);
end;

Schematic and entity for examples of different code organizations in Figures 1.21-1.24.
Figure 1.20: Schematic and entity for and_not_reg
One Process, Flops, Wait . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
architecture one_proc of and_not_reg is
  signal a : std_logic;
begin
  process begin
    wait until rising_edge(clk);
    if (reset = '1') then
      a <= '0';
    elsif (sel = '1') then
      a <= NOT a;
    else
      a <= a;
    end if;
    c <= NOT a;
  end process;
end one_proc;
Figure 1.21: Implementation of Figure 1.20: all signals are flops, all flops in one process, flops use waits
Two Processes, Flops, Wait . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
architecture two_proc_wait of and_not_reg is
  signal a : std_logic;
begin
  process begin
    wait until rising_edge(clk);
    if (reset = '1') then
      a <= '0';
    elsif (sel = '1') then
      a <= NOT a;
    else
      a <= a;
    end if;
  end process;

  process begin
    wait until rising_edge(clk);
    c <= NOT a;
  end process;
end two_proc_wait;
Figure 1.22: Implementation of Figure 1.20: all signals are flops, one flop per process, flops use waits
Two Processes with If-Then-Else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
architecture two_proc_if of and_not_reg is
  signal a : std_logic;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if (reset = '1') then
        a <= '0';
      elsif (sel = '1') then
        a <= NOT a;
      else
        a <= a;
      end if;
    end if;
  end process;

  process (clk)
  begin
    if rising_edge(clk) then
      c <= NOT a;
    end if;
  end process;
end two_proc_if;
Figure 1.23: Implementation of Figure 1.20: all signals are flops, one flop per process, flops use if-then-else
Concurrent Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
architecture comb of and_not_reg is
  signal a, b, d : std_logic;
begin
  process (clk) begin
    if rising_edge(clk) then
      if (reset = '1') then
        a <= '0';
      else
        a <= d;
      end if;
    end if;
  end process;

  process (clk) begin
    if rising_edge(clk) then
      c <= NOT a;
    end if;
  end process;

  d <= b when (sel = '1') else a;
  b <= NOT a;
end comb;
Figure 1.24: Implementation of Figure 1.20: flopped and combinational signals, one flop per process, flops use if-then-else
1.9 Arrays and Vectors
VHDL supports multidimensional arrays over elements of any type. The most common array is an array of std_logic signals, which has a predefined type: std_logic_vector. Throughout the rest of this section, we will discuss only std_logic_vector, but the rules apply to arrays of any type.
VHDL supports reading from and assigning to slices (aka discrete subranges) of vectors. The rules for working with slices of vectors are listed below and illustrated in figure 1.25.
1. The slices on both sides of the assignment must be the same width.
2. The direction (downto or to) of each slice must match the direction of the signal declaration.
3. The direction of the target and expression may be different.
Declarations
----------------------------------------------------
a, b : in std_logic_vector(15 downto 0);
c, d, e : out std_logic_vector(15 downto 0);
----------------------------------------------------
ax, bx : in std_logic_vector(0 to 15);
cx, dx, ex : out std_logic_vector(0 to 15);
----------------------------------------------------
m, n : in unsigned(15 downto 0);
p, q, r : out unsigned(15 downto 0);
----------------------------------------------------
w, x : in signed(15 downto 0);
y, z : out signed(15 downto 0)
----------------------------------------------------
Legal code
c(3 downto 0) <= a(15 downto 12);
cx(0 to 3) <= a(15 downto 12);
(e(3), e(4)) <= bx(12 to 13);
(e(5), e(6)) <= b(13 downto 12);
Illegal code
d(0 to 3) <= a(15 to 12); -- slice dirs must be same as decl
e(3) & e(2) <= b(12 to 13); -- syntax error on &
p(3 downto 0) <= (m + n)( 3 downto 0); -- syntax error on )(
z(3 downto 0) <= m(15 downto 12); -- types on lhs and rhs must match
Figure 1.25: Illustration of Rules for Slices of Vectors
1.10 Arithmetic
VHDL includes all of the common arithmetic and logical operators.
Use the VHDL arithmetic operators and let the synthesis tool choose the better implementation for
you. It is almost impossible for a hand-coded implementation to beat vendor-supplied arithmetic
libraries.
To use the operators, you must choose which arithmetic package you wish to use (section 1.10.1).
The arithmetic operators are overloaded, and you can usually use any mixture of constants and sig-
nals of different types that you need (Section 1.10.3). However, you might need to convert a signal
from one type (e.g. std logic vector) to another type (e.g. integer) (Section 1.10.7).
1.10.1 Arithmetic Packages
Rushton Ch-7 covers arithmetic packages. Rushton Appendix A.5 has the code listing for the numeric_std package.
To do arithmetic with signals, use the numeric_std package. This package defines the types signed and unsigned, which are std_logic vectors on which you can do signed or unsigned arithmetic.
numeric_std supersedes earlier arithmetic packages, such as std_logic_arith.
Use only one arithmetic package, otherwise the different definitions will clash and you can get strange error messages.
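A typical context clause therefore names std_logic_1164 and exactly one arithmetic package:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;  -- the only arithmetic package in use
-- do NOT also use std_logic_arith; its definitions clash with numeric_std
```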
1.10.2 Shift and Rotate Operations
Shift and rotate operations are described with three-character acronyms:
(shift/rotate) (left/right) (arithmetic/logical)
The shift right arithmetic (sra) operation preserves the sign of the operand by copying the most significant bit into lower bit positions.
The shift left arithmetic (sla) does the analogous operation, except that the least significant bit is copied.
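As a worked sketch of the differences, on 8-bit operands (bit_vector operands assumed; for numeric_std signed and unsigned signals, the shift_left and shift_right functions are the portable equivalents):

```vhdl
-- "10011000" srl 2  =  "00100110"   -- logical: vacated bits filled with '0'
-- "10011000" sra 2  =  "11100110"   -- arithmetic: MSB (sign bit) copied in
-- "10011000" sla 2  =  "01100000"   -- arithmetic: LSB copied in
-- "10011000" rol 2  =  "01100010"   -- rotate: shifted-out bits wrap around
```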
1.10.3 Overloading of Arithmetic
The arithmetic operators +, -, and * are overloaded on signed vectors, unsigned vectors, and integers. Tables 1.1-1.4 show the different combinations of target and source types and widths that can be used.

Table 1.1: Overloading of Arithmetic Operations (+, -)
target     src1/2     src2/1
unsigned   unsigned   integer   OK
unsigned   signed     -         fails in analysis

In these tables, "-" means don't care. Also, src1/2 and src2/1 mean first or second operand, and respectively second or first operand. The first line of the table means that either the first operand is unsigned and the second is an integer, or the second operand is unsigned and the first is an integer. Or, more concisely: one of the operands is unsigned and the other is integer.
1.10.4 Different Widths and Arithmetic
Table 1.2: Different Vector Widths and Arithmetic Operations (+, -)
target   src1/2   src2/1
narrow   wide     -        fails in elaboration
wide     narrow   int      fails in elaboration
wide     wide     -        OK
narrow   narrow   narrow   OK
narrow   narrow   int      OK

Example vectors:
wide     unsigned(7 downto 0)
narrow   unsigned(4 downto 0)
1.10.5 Overloading of Comparisons
Table 1.3: Overloading of Comparison Operations (=, /=, >=, >, <)
src1/2     src2/1
unsigned   integer   OK
signed     integer   OK
unsigned   signed    fails in analysis
1.10.6 Different Widths and Comparisons
Table 1.4: Different Vector Widths and Comparison Operations (=, /=, >=, >, <)
src1/2   src2/1
wide     -        OK
narrow   -        OK
1.10.7 Type Conversion
The functions unsigned, signed, to_integer, to_unsigned, and to_signed are used to convert between integers, std_logic vectors, signed vectors, and unsigned vectors.
If you convert between two types of the same width, then no additional hardware will be generated.
The listing below summarizes the types of these functions.
unsigned( val : std_logic_vector ) return unsigned;
signed( val : std_logic_vector ) return signed;
to_integer( val : signed ) return integer;
to_integer( val : unsigned ) return integer;
to_unsigned( val : integer; width : natural) return unsigned;
to_signed( val : integer; width : natural) return signed;
The most common need to convert between two types arises when using a signal as an index into an array. To use a signal as an index into an array, you must convert the signal into an integer using the function to_integer (Figure 1.26).

signal i : unsigned( 3 downto 0);
signal a : std_logic_vector(15 downto 0);
...
... a(i) ...               -- BAD: won't typecheck
... a( to_integer(i) ) ... -- OK
Avoid (or at least take care when) converting a signal into an integer and then performing arithmetic on the signal. The default size for integers is 32 bits, so sometimes when a signal is converted into an integer, the resulting signals will be 32 bits wide.
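A sketch of the pitfall and a width-preserving alternative (signal names are illustrative):

```vhdl
signal u   : unsigned(7 downto 0);
signal sum : unsigned(7 downto 0);
...
-- risky: the intermediate integer defaults to 32 bits
sum <= to_unsigned( to_integer(u) + 1, 8 );
-- better: stay in unsigned, so the addition is 8 bits wide
sum <= u + 1;
```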
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
...
signal bit_sig : std_logic;
signal uns_sig : unsigned(7 downto 0);
signal vec_sig : std_logic_vector(255 downto 0);
...
bit_sig <= vec_sig( to_integer(uns_sig) );
...
Figure 1.26: Using an unsigned signal as an index to array
To convert a std_logic_vector signal into an integer, you must first say whether the signal should be interpreted as signed or unsigned. As illustrated in figure 1.27, this is done by:
1. Convert the std_logic_vector signal to signed or unsigned, using the function signed or unsigned
2. Convert the signed or unsigned signal into an integer, using to_integer
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
...
signal bit_sig : std_logic;
signal std_sig : std_logic_vector(7 downto 0);
signal vec_sig : std_logic_vector(255 downto 0);
...
bit_sig <= vec_sig( to_integer( unsigned( std_sig ) ) );
...
Figure 1.27: Using a std_logic_vector as an index to array
1.11 Synthesizable vs Non-Synthesizable Code
Synthesis is done by matching VHDL code against templates or patterns. It's important to use idioms that your synthesis tool recognizes. If you aren't careful, you could write code that has the same behaviour as one of the idioms, but which results in inefficient or incorrect hardware. Section 1.8 described common idioms and the resulting hardware.
Most synthesis tools agree on a large set of idioms, and will reliably generate hardware for these
idioms. This section is based on the idioms that Synopsys, Xilinx, Altera, and Mentor Graphics are
able to synthesize. One exception is that Altera's Quartus does not support implicit state machines (as of v5.0).
Section 1.11.1 gives rules for unsynthesizable VHDL code. Section 1.11.2 gives rules for code that is synthesizable, but violates the ece327 guidelines for good practices. The ece327 coding guidelines are designed to produce circuits suitable for FPGAs. Bad code for FPGAs produces circuits with the following features:
latches
asynchronous resets
combinational loops
multiple drivers for a signal
tri-state buffers
We limit our definition of bad practice to code that produces undesirable hardware. Coding styles that lead to inefficient hardware might be useful in the early stages of the design process, when the focus is on functionality and not optimality. As such, inefficient code is not considered bad practice. Poor coding styles that do not affect the hardware, for example, including extraneous signals in a sensitivity list, should certainly be avoided, but fall into the general realm of programming guidelines and will not be discussed.
1.11.1 Unsynthesizable Code
1.11.1.1 Initial Values
Initial values on signals (UNSYNTHESIZABLE)
signal bad_signal : std_logic := '0';
Reason: In most implementation technologies, when a circuit powers up, the values on signals are completely random. Some FPGAs are an exception to this. For some FPGAs, when a chip is powered up, all flip-flops will be '0'. For other FPGAs, the initial values can be programmed.
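A portable alternative, assuming a reset input exists, is to establish the value with an explicit synchronous reset (signal names are illustrative):

```vhdl
process (clk)
begin
  if rising_edge(clk) then
    if (reset = '1') then
      good_signal <= '0';   -- value established by reset, not by power-up
    else
      good_signal <= d;
    end if;
  end if;
end process;
```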
1.11.1.2 Wait For
Wait for length of time (UNSYNTHESIZABLE)
wait for 10 ns;
Reason: Delays through circuits are dependent upon both the circuit and its operating environment,
particularly supply voltage and temperature.
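When a fixed delay is needed in synthesizable code, the usual alternative is to count clock cycles instead; a sketch assuming the delay is a whole number of clock periods (this mirrors the synthesizable while-loop idiom shown later in this section):

```vhdl
-- delay of 10 clock cycles, counted explicitly instead of "wait for ...;"
count <= to_unsigned(0, 4);
wait until rising_edge(clk);
while (count < 9) loop
  count <= count + 1;
  wait until rising_edge(clk);
end loop;
```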
1.11.1.3 Different Wait Conditions
wait statements with different conditions in a process (UNSYNTHESIZABLE)
-- different clock signals
process
begin
wait until rising_edge(clk1);
x <= a;
wait until rising_edge(clk2);
x <= a;
end process;
-- different clock edges
process
begin
wait until rising_edge(clk);
x <= a;
wait until falling_edge(clk);
x <= a;
end process;
Reason: processes with multiple wait statements are turned into finite state machines. The wait statements denote transitions between states. The target signals in the process are outputs of flip-flops. Using different wait conditions would require the flip-flops to use different clock signals at different times. Multiple clock signals for a single flip-flop would be difficult to synthesize, inefficient to build, and fragile to operate.
1.11.1.4 Multiple if rising edge in Process
Multiple if rising edge statements in a process (UNSYNTHESIZABLE)
process (clk)
begin
if rising_edge(clk) then
q0 <= d0;
end if;
if rising_edge(clk) then
q1 <= d1;
end if;
end process;
Reason: The idioms for synthesis tools generally expect just a single if rising_edge statement in each process. The simpler the VHDL code is, the easier it is to synthesize hardware. Programmers of synthesis tools make idiomatic restrictions to make their jobs simpler.
1.11.1.5 if rising edge and wait in Same Process
An if rising edge statement and a wait statement in the same process (UNSYNTHESIZABLE)
process (clk)
begin
if rising_edge(clk) then
q0 <= d0;
end if;
wait until rising_edge(clk);
q0 <= d1;
end process;
Reason: The idioms for synthesis tools generally expect just a single type of flop-generating statement in each process.
1.11.1.6 if rising edge with else Clause
The if statement has a rising edge condition and an else clause (UNSYNTHESIZABLE).
process (clk)
begin
if rising_edge(clk) then
q0 <= d0;
else
q0 <= d1;
end if;
end process;
Reason: Generally, an if-then-else statement synthesizes to a multiplexer. The condition that is tested in the if-then-else becomes the select signal for the multiplexer. In an if rising_edge with else, the select signal would need to detect a rising edge on clk, which isn't feasible to synthesize.
1.11.1.7 if rising edge Inside a for Loop
An if rising edge statement in a for-loop (UNSYNTHESIZABLE-Synopsys)
process (clk) begin
for i in 0 to 7 loop
if rising_edge(clk) then
q(i) <= d;
end if;
end loop;
end process;
Reason: just an idiom of the synthesis tool.
Some loop statements are synthesizable (Rushton Section 8.7). For-loops in general are described in Ashenden. Examples of for-loops in E&CE 327 will appear when describing testbenches for functional verification (Chapter 4).
Synthesizable Alternative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A synthesizable alternative to an if rising edge statement in a for-loop is to put the if-rising-edge outside of the for-loop.
process (clk) begin
if rising_edge(clk) then
for i in 0 to 7 loop
q(i) <= d;
end loop;
end if;
end process;
1.11.1.8 wait Inside of a for loop
wait statements in a for loop (UNSYNTHESIZABLE)
process
begin
for i in 0 to 7 loop
wait until rising_edge(clk);
x <= to_unsigned(i,4);
end loop;
end process;
Reason: Unknown. while-loops with the same behaviour are synthesizable.
Note: Combinational for-loops Combinational for-loops are usually
synthesizable. They are often used to build a combinational circuit for each
element of an array.
Note: Clocked for-loops Clocked for-loops are not synthesizable, but are very useful in simulation, particularly to generate test vectors for test benches.
Synthesizable Alternative to Wait-Inside-For . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
while loop (synthesizable)
This is the synthesizable alternative to the wait statement in a for-loop above.
process
begin
-- output values from 0 to 4 on i
-- sending one value out each clock cycle
i <= to_unsigned(0,4);
wait until rising_edge(clk);
while (4 > i) loop
i <= i + 1;
wait until rising_edge(clk);
end loop;
end process;
1.11.2 Synthesizable, but Bad Coding Practices
Note: Some of the results in this section are highly dependent upon the synthesis tool that you use and the target technology library.
1.11.2.1 Asynchronous Reset
In an asynchronous reset, the test for reset occurs outside of the test for the clock edge.
process (reset, clk)
begin
  if (reset = '1') then
    q <= '0';
  elsif rising_edge(clk) then
    q <= d1;
  end if;
end process;
Asynchronous resets are bad, because if a reset occurs very close to a clock edge, some parts of
the circuit might be reset in one clock cycle and some in the subsequent clock cycle. This can lead
the circuit to be out of sync as it goes through the reset sequence, potentially causing erroneous
internal state and output values.
1.11.2.2 Combinational if-then Without else
process (a, b)
begin
  if (a = '1') then
    c <= b;
  end if;
end process;
Reason: This code synthesizes c to be a latch, and latches are undesirable.
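A latch-free alternative is to make the assignment complete, so that c is driven on every path through the process (the '0' default value is illustrative):

```vhdl
process (a, b)
begin
  if (a = '1') then
    c <= b;
  else
    c <= '0';   -- every path assigns c: combinational logic, no latch
  end if;
end process;
```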
1.11.2.3 Bad Form of Nested Ifs
if rising_edge statement inside another if (BAD HARDWARE)
In Synopsys, with some target libraries, this design results in a level-sensitive latch whose input is
a flip-flop.
process (ce, clk)
begin
if (ce = '1') then
if rising_edge(clk) then
q <= d1;
end if;
end if;
end process;
1.11.2.4 Deeply Nested Ifs
Deeply chained if-then-else statements can lead to long chains of dependent gates, rather
than checking different cases in parallel.
Slow (maybe)
if cond1 then
stmts1
elsif cond2 then
stmts2
elsif cond3 then
stmts3
elsif cond4 then
stmts4
end if;
Fast (hopefully)
if only one of the conditions can be true at a
time, then try using a case statement or some
other technique that allows the conditions to
be evaluated in parallel.
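As a sketch of the parallel alternative, assuming the four conditions are mutually exclusive and can be re-encoded as a 2-bit selector signal sel (a name invented here for illustration), the same choice can be written as a case statement so that all alternatives are checked in parallel:

```vhdl
-- the four mutually exclusive conditions encoded on sel;
-- each alternative is selected in parallel rather than in a chain
case sel is
  when "00"   => stmts1
  when "01"   => stmts2
  when "10"   => stmts3
  when others => stmts4
end case;
```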
1.11.3 Synthesizable, but Unpredictable Hardware
Some coding styles are synthesizable and might produce desirable hardware with a particular syn-
thesis tool, but either be unsynthesizable or produce undesirable hardware with another tool.
variables
level-sensitive wait statements
missing signals in sensitivity lists
If you are using a single synthesis tool for an extended period of time, and want to get the full
power of the tool, then it can be advantageous to write your code in a way that works for your tool,
but might produce undesirable results with other tools.
1.12 Synthesizable VHDL Coding Guidelines
This section gives guidelines for building robust, portable, and synthesizable VHDL code. Porta-
bility is both for different simulation and synthesis tools and for different implementation tech-
nologies.
Remember, there is a world of difference between getting a design to work in simulation and
getting it to work on a real FPGA. And there is also a huge difference between getting a design
to work in an FPGA for a few minutes of testing and getting thousands of products to work for
months at a time in thousands of different environments around the world.
The coding guidelines here are designed both for helping you to get your E&CE 327 project to
work as well as all of the subsequent industrial designs.
Finally, note that there are exceptions to every rule. You might find yourself in a circumstance
where your particular situation (e.g. choice of tool, target technology, etc.) would benefit from
bending or breaking a guideline here. Within E&CE 327, of course, there won't be any such
circumstances.
1.12.1 Signal Declarations
Use signals, do not use variables
reason The intention of the creators of VHDL was for signals to be wires and variables to be
just for simulation. Some synthesis tools allow some uses of variables, but when using
variables, it is easy to create a design that works in simulation but not in real hardware.
Use std_logic signals, do not use bit or Boolean
reason std_logic is the most commonly used signal type across synthesis tools, simulation
tools, and cell libraries
Use in or out, do not use inout
reason inout signals are tri-state.
note If you have an output signal that you also want to read from, you might be tempted to
declare the mode of the signal to be inout. A better solution is to create a new, internal,
signal that you both read from and write to. Then, your output signal can just read from
the internal signal.
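A minimal sketch of this pattern (the entity and signal names are invented for illustration): the port y keeps mode out, and an internal copy y_int is both read and written.

```vhdl
-- y is an out port; y_int is an internal signal we can also read back
signal y_int : std_logic;
...
y_int <= a AND b;    -- drive the internal signal
z     <= NOT y_int;  -- read it back inside the architecture
y     <= y_int;      -- the out port simply mirrors the internal signal
```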
Declare the primary inputs and outputs of chips as either std_logic or std_logic_vector.
Do not use signed or unsigned for primary inputs or outputs.
reason Both the Altera tool Quartus and the Xilinx tool ngd2vhdl convert signed and unsigned
vectors in entities into std_logic_vectors. If you want your same testbench to work for both
functional simulation and timing simulation, you must not use signed or unsigned signals
in the top-level entity of your chip.
note Signed and unsigned signals are fine inside testbenches, for non-top-level entities, and
inside architectures. It is only the top-level entity that should not use signed or unsigned
signals.
1.12.2 Flip-Flops and Latches
Use flops, not latches (see section 1.8.2).
Use D-flops, not T, JK, etc. (see section 1.8.2).
For every signal in your design, know whether it should be a flip-flop or combinational. Before
simulating your design, examine the log file (e.g. LOG/dc_shell.log) to see if the flip-flops
in your circuit match your expectations, and to check that you don't have any latches in
your design.
Do not assign a signal to itself (e.g. a <= a; is bad). If the signal is a flop, use a chip enable
to cause the signal to hold its value. If the signal is combinational, then assigning a signal to
itself will cause combinational loops, which are bad.
1.12.3 Inputs and Outputs
Put flip-flops on primary inputs and outputs of a chip
reason Creates more robust implementations. Signal delays between chips are unpredictable.
Signal integrity can be a problem (remember transmission lines from E&CE 324?). Putting
flip-flops on the inputs and outputs of a chip provides clean boundaries between circuits.
note This only applies to primary inputs and outputs of a chip (the signals in the top-level
entity). Within a chip, you should adopt a standard of putting flip-flops on either the inputs
or the outputs of modules. Within a chip, you do not need to put flip-flops on both inputs
and outputs.
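A minimal sketch of registering chip I/O (the port and signal names i_a, a_reg, y_core, and o_y are invented for illustration): primary inputs are captured into internal registers, and primary outputs are driven directly from registers.

```vhdl
-- register the primary input i_a and the primary output o_y
process (clk)
begin
  if rising_edge(clk) then
    a_reg <= i_a;     -- registered copy of the primary input
    o_y   <= y_core;  -- primary output driven by a flip-flop
  end if;
end process;
```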
1.12.4 Multiplexors and Tri-State Signals
Use multiplexors, not tri-state buffers (see section 1.8.2).
1.12.5 Processes
For a combinational process, the sensitivity list should contain all of the signals that are read in
the process.
reason Gives consistent results across different tools. Many synthesis tools will implicitly
include all signals that a process reads in its sensitivity list. This differs from the VHDL
Standard. A tool that adheres to the standard will introduce latches if not all signals that
are read from are included in the sensitivity list.
exception In a clocked process using an if rising_edge, it is acceptable to have only the
clock in the sensitivity list
For a combinational process, every signal that is assigned to, must be assigned to in every branch
of if-then and case statements.
reason If a signal is not assigned a value in a path through a combinational process, then that
signal will be a latch.
note For a clocked process, if a signal is not assigned a value in a clock cycle, then the flip-flop
for that signal will have a chip-enable pin. Chip-enable pins are fine; they are available on
flip-flops in essentially every cell library.
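For example (the signal names ce, d, and q are assumptions), in the following clocked process q is not assigned when ce = '0', so q holds its value and synthesis infers a flip-flop with a chip-enable pin, which is fine:

```vhdl
-- when ce = '0', q is unassigned in this clock cycle:
-- q becomes a flip-flop with a chip-enable pin, not a latch
process (clk)
begin
  if rising_edge(clk) then
    if (ce = '1') then
      q <= d;
    end if;
  end if;
end process;
```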
Each signal should be assigned to in only one process.
reason Multiple processes driving the same signal is the same as having multiple gates driving
the same wire. This can cause contention, short circuits, and other bad things.
exception Multiple drivers are acceptable for tri-state busses or if your implementation
technology has wired-ANDs or wired-ORs. FPGAs don't have wired-ANDs or wired-ORs.
Separate unrelated signals into different processes
reason Grouping assignments to unrelated signals into a single process can complicate the
control circuitry for that process. Each branch in a case statement or if-then-else adds a
multiplexor or chip-enable circuitry.
reason Synthesis tools generally optimize each process individually, the larger a process is, the
longer it will take the synthesis program to optimize the process. Also, larger processes
tend to be more complicated and can cause synthesis programs to miss helpful optimiza-
tions that they would notice in smaller processes.
1.12.6 State Machines
In a state machine, illegal and unreachable states should transition to the reset state
reason Creates more robust implementations. In the field, your circuit will be subjected to
illegal inputs, voltage spikes, temperature fluctuations, clock speed variations, etc. At
some point in time, something weird will happen that will cause it to jump into an illegal
state. Having a system reset and reboot is much better than having it generate incorrect
outputs that aren't detected.
If your state machine has fewer than 16 states, use a one-hot encoding.
reason For n states, a one-hot encoding uses n flip-flops, while a binary encoding uses
⌈log2 n⌉ flip-flops. One-hot signals are simpler to decode, because only one bit must be checked to
determine if the circuit is in a particular state. For small values of n, a one-hot encoding results
in a smaller and faster circuit. For large values of n, the number of signals required for a
one-hot design is too great a penalty to compensate for the simplicity of the decoding
circuitry.
note Using an enumerated type for states allows the synthesis tool to choose state encodings
that it thinks will work well to balance area and clock speed. Quartus uses a modified
one-hot encoding, where the bit that denotes the reset state is inverted. That is, when the
reset bit is '0', the system is in the reset state, and when the reset bit is '1', the system
is not in the reset state. The other bits have the normal polarity. The result is that when the
system is in the reset state, all bits are '0', and when the system is in a non-reset state, two
bits are '1'.
note Using your own encoding allows you to leverage knowledge about your design that the
synthesis tool might not be able to deduce.
1.12.7 Reset
Include a reset signal in all clocked circuits.
reason For most implementation technologies, when you power-up the circuit, you do not
know what state it will start in. You need a reset signal to get the circuit into a known state.
reason If something goes wrong while the circuit is running, you need a way to get it into a
known state.
For implicit state machines (section 2.5.1.3), check for reset after every wait statement.
reason Missing a reset check after a wait statement means that your circuit might not notice a
reset signal, or different signals could reset in different clock cycles, causing your circuit to
get out of sync.
Connect reset to the important control signals in the design, such as the state signal. Do not reset
every flip-flop.
reason Using reset adds area and delay to a circuit. The fewer signals that need reset, the
faster and smaller your design will be.
note Connect the reset signal to critical flip-flops, such as the state signal. Datapath signals
rarely need to be reset. You do not need to reset every signal.
Use synchronous, not asynchronous, reset
reason Creates more robust implementations. Signal propagation delays mean that
asynchronous resets cause different parts of the circuit to be reset at different times. This can
lead to glitches, which then might cause the circuit to move to an illegal state.
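A sketch of the recommended synchronous style (the state names S0 and state_next are placeholders, not from the notes): the reset test sits inside the clock-edge test, and only the critical control signal is reset.

```vhdl
-- synchronous reset: reset is tested only at the clock edge,
-- and only the control state is reset, not the datapath
process (clk)
begin
  if rising_edge(clk) then
    if (reset = '1') then
      state <= S0;          -- known reset state
    else
      state <= state_next;  -- normal operation
    end if;
  end if;
end process;
```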
Covering All Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
When writing case statements or selected assignments that test the value of std_logic signals,
you will get an error unless you include a provision for non-'1'/'0' signals.
For example:
signal t : std_logic;
...
case t is
when '1' => ...
when '0' => ...
end case;
will result in an error message about missing cases. You must provide for t being 'H', 'U', etc.
The simplest thing to do is to make the last test when others.
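For example, the fragment above becomes legal when the last alternative is when others, which covers all of the remaining std_logic values:

```vhdl
case t is
  when '1'    => ...
  when '0'    => ...
  when others => ...  -- covers 'U', 'X', 'Z', 'W', 'H', 'L', '-'
end case;
```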
1.13 VHDL Problems
P1.1 IEEE 1164
For each of the values in the list below, answer whether or not it is defined in the ieee.std_logic_1164
library. If it is part of the library, write a 2-3 word description of the value.
Values: '-', '#', '0', '1', 'A', 'h', 'H', 'L', 'Q', 'X', 'Z'.
P1.2 VHDL Syntax
Answer whether each of the VHDL code fragments q2a through q2f is legal VHDL code.
NOTES: 1) ... represents a fragment of legal VHDL code.
2) For full marks, if the code is illegal, you must explain why.
3) The code has been written so that, if it is illegal, then it is illegal for both
simulation and synthesis.
q2a architecture main of anchiceratops is
signal a, b, c : std_logic;
begin
process begin
wait until rising_edge(c);
a <= if (b = '1') then
...
else
...
end if;
end process;
end main;
q2b architecture main of tulerpeton is
begin
lab: for i in 15 downto 0 loop
...
end loop;
end main;
q2c architecture main of metaxygnathus is
signal a : std_logic;
begin
lab: if (a = '1') generate
...
end generate;
end main;
q2d architecture main of temnospondyl is
component compa
port (
a : in std_logic;
b : out std_logic
);
end component;
signal p, q : std_logic;
begin
coma_1 : compa
port map (a => p, b => q);
...
end main;
q2e architecture main of pachyderm is
function inv(a : std_logic)
return std_logic is
begin
return(NOT a);
end inv;
signal p, b : std_logic;
begin
p <= inv(b => a);
...
end main;
q2f architecture main of apatosaurus is
type state_ty is (S0, S1, S2);
signal st : state_ty;
signal p : std_logic;
begin
case st is
when S0 | S1 => p <= '0';
when others => p <= '1';
end case;
end main;
P1.3 Flops, Latches, and Combinational Circuitry
For each of the signals p...z in the architecture main of montevido, answer whether the signal
is a latch, combinational gate, or flip-flop.
entity montevido is
port (
a, b0, b1, c0, c1, d0, d1, e0, e1 : in std_logic;
l : in std_logic_vector (1 downto 0);
p, q, r, s, t, u, v, w, x, y, z : out std_logic
);
end montevido;
architecture main of montevido is
signal i, j : std_logic;
begin
i <= c0 XOR c1;
j <= c0 XOR c1;
process (a, i, j) begin
if (a = '1') then
p <= i AND j;
else
p <= NOT i;
end if;
end process;
process (a, b0, b1) begin
if rising_edge(a) then
q <= b0 AND b1;
end if;
end process;
process
(a, c0, c1, d0, d1, e0, e1)
begin
if (a = '1') then
r <= c0 OR c1;
s <= d0 AND d1;
else
r <= e0 XOR e1;
end if;
end process;
process begin
wait until rising_edge(a);
t <= b0 XOR b1;
u <= NOT t;
v <= NOT x;
end process;
process begin
case l is
when "00" =>
wait until rising_edge(a);
w <= b0 AND b1;
x <= '0';
when "01" =>
wait until rising_edge(a);
w <= '-';
x <= '1';
when "1-" =>
wait until rising_edge(a);
w <= c0 XOR c1;
x <= '-';
end case;
end process;
y <= c0 XOR c1;
z <= x XOR w;
end main;
P1.4 Counting Clock Cycles
This question refers to the VHDL code shown below.
NOTES:
1. ... represents a legal fragment of VHDL code
2. assume all signals are properly declared
3. the VHDL code is intended to be legal, synthesizable code
4. all signals are initially 'U'
entity bigckt is
port (
a, b : in std_logic;
c : out std_logic
);
end bigckt;
architecture main of bigckt is
begin
process (a, b)
begin
if (a = '0') then
c <= '0';
else
if (b = '1') then
c <= '1';
else
c <= '0';
end if;
end if;
end process;
end main;
entity tinyckt is
port (
clk : in std_logic;
i : in std_logic;
o : out std_logic
);
end tinyckt;
architecture main of tinyckt is
component bigckt ( ... );
signal ... : std_logic;
begin
p0 : process begin
wait until rising_edge(clk);
p0_a <= i;
wait until rising_edge(clk);
end process;
p1 : process begin
wait until rising_edge(clk);
p1_b <= p1_d;
p1_c <= p1_b;
p1_d <= s2_k;
end process;
p2 : process (p1_c, p3_h, p4_i, clk) begin
if rising_edge(clk) then
p2_e <= p3_h;
p2_f <= p1_c = p4_i;
end if;
end process;
p3 : process (i, s4_m) begin
p3_g <= i;
p3_h <= s4_m;
end process;
p4 : process (clk, i) begin
if (clk = '1') then
p4_i <= i;
else
p4_i <= '0';
end if;
end process;
huge : bigckt
port map (a => p2_e, b => p1_d, c => h_y);
s1_j <= s3_l;
s2_k <= p1_b XOR i;
s3_l <= p2_f;
s4_m <= p2_f;
end main;
For each of the pairs of signals below, what is the minimum length of time between when a change
occurs on the source signal and when that change affects the destination signal?
src     dst     Num clock cycles
i       p0_a
i       p1_b
i       p1_b
i       p1_c
i       p2_e
i       p3_g
i       p4_i
s4_m    h_y
p1_b    p1_d
p2_f    s1_j
p2_f    s2_k
P1.5 Arithmetic Overflow
Implement a circuit to detect overflow in 8-bit signed addition.
An overflow in addition happens when the carry into the most significant bit is different from the
carry out of the most significant bit.
When performing addition, for overflow to happen, both operands must have the same sign. Positive
overflow occurs when adding two positive operands results in a negative sum. Negative
overflow occurs when adding two negative operands results in a positive sum.
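One way to express the sign-based rule above in VHDL (a sketch, not the official solution; the signal names a, b, sum, and overflow are assumptions, not part of the problem statement):

```vhdl
-- a, b, sum : signed(7 downto 0); overflow : std_logic
-- overflow occurs when the operands share a sign
-- but the sum's sign differs from it
sum      <= a + b;
overflow <= '1' when (a(7) = b(7)) and (sum(7) /= a(7))
            else '0';
```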
P1.6 Delta-Cycle Simulation: Pong
Perform a delta-cycle simulation of the following VHDL code by drawing a waveform diagram.
INSTRUCTIONS:
1. The simulation is to be done at the granularity of simulation-steps.
2. Show all changes to process modes and signal values.
3. Each column of the timing diagram corresponds to a simulation step that changes a signal or
process.
4. Clearly show the beginning and end of each simulation cycle, delta cycle, and simulation
round by writing in the appropriate row a B at the beginning and an E at the end of the cycle
or round.
5. End your simulation just before 20 ns.
architecture main of pong_machine is
signal ping_i, ping_n, pong_i, pong_n : std_logic;
begin
reset_proc: process
begin
reset <= '1';
wait for 10 ns;
reset <= '0';
wait for 100 ns;
end process;
clk_proc: process
clk <= '0';
wait for 10 ns;
clk <= '1';
wait for 10 ns;
end process;
next_proc: process (clk)
begin
if rising_edge(clk) then
ping_n <= ping_i;
pong_n <= pong_i;
end if;
end process;
comb_proc: process (pong_n, ping_n, reset)
begin
if (reset = '1') then
ping_i <= '1';
pong_i <= '0';
else
ping_i <= pong_n;
pong_i <= ping_n;
end if;
end process;
end main;
P1.7 Delta-Cycle Simulation: Baku
Perform a delta-cycle simulation of the following VHDL code by drawing a waveform diagram.
INSTRUCTIONS:
1. The simulation is to be done at the granularity of simulation-steps.
2. Show all changes to process modes and signal values.
3. Each column of the timing diagram corresponds to a simulation step.
4. Clearly show the beginning and end of each simulation cycle, delta cycle, and simulation
round by writing in the appropriate row a B at the beginning and an E at the end of the cycle
or round.
5. Write t=5ns and t=10ns at the top of columns where time advances to 5 ns and 10 ns.
6. Begin your simulation at 5 ns (i.e. after the initial simulation cycles that initialize the signals
have completed).
7. End your simulation just before 15 ns.
entity baku is
port (
clk, a, b : in std_logic;
f : out std_logic
);
end baku;
architecture main of baku is
signal c, d, e : std_logic;
begin
proc_clk: process
begin
clk <= '0';
wait for 10 ns;
clk <= '1';
wait for 10 ns;
end process;
proc_extern : process
begin
a <= '0';
b <= '0';
wait for 5 ns;
a <= '1';
b <= '1';
wait for 15 ns;
end process;
proc_1 : process (a, b, c)
begin
c <= a and b;
d <= a xor c;
end process;
proc_2 : process
begin
e <= d;
wait until rising_edge(clk);
end process;
proc_3 : process (c, e) begin
f <= c xor e;
end process;
end main;
P1.8 Clock-Cycle Simulation
Given the VHDL code for anapurna and waveform diagram below, answer what the values of
the signals y, z, and p will be at the given times.
entity anapurna is
port (
clk, reset, sel : in std_logic;
a, b : in unsigned(15 downto 0);
p : out unsigned(15 downto 0)
);
end anapurna;
architecture main of anapurna is
type state_ty is (mango, guava, durian, papaya);
signal y, z : unsigned(15 downto 0);
signal state : state_ty;
begin
proc_herzog: process
begin
top_loop: loop
wait until (rising_edge(clk));
next top_loop when (reset = '1');
state <= durian;
wait until (rising_edge(clk));
state <= papaya;
while y < z loop
wait until (rising_edge(clk));
if sel = '1' then
wait until (rising_edge(clk));
next top_loop when (reset = '1');
state <= mango;
end if;
state <= papaya;
end loop;
end loop;
end process;
proc_hillary: process (clk)
begin
if rising_edge(clk) then
if (state = durian) then
z <= a;
else
z <= z + 2;
end if;
end if;
end process;
y <= b;
p <= y + z;
end main;
P1.9 VHDL VHDL Behavioural Comparison: Teradactyl
For each of the VHDL architectures q3a through q3c, does the signal v have the same behaviour
as it does in the main architecture of teradactyl?
NOTES: 1) For full marks, if the code has different behaviour, you must explain
why.
2) Ignore any differences in behaviour in the first few clock cycles that are
caused by initialization of flip-flops, latches, and registers.
3) All code fragments in this question are legal, synthesizable VHDL code.
entity teradactyl is
port (
a : in std_logic;
v : out std_logic
);
end teradactyl;
architecture main of teradactyl is
signal m : std_logic;
begin
m <= a;
v <= m;
end main;
architecture q3a of teradactyl is
signal b, c, d : std_logic;
begin
b <= a;
c <= b;
d <= c;
v <= d;
end q3a;
architecture q3b of teradactyl is
signal m : std_logic;
begin
process (a, m) begin
v <= m;
m <= a;
end process;
end q3b;
architecture q3c of teradactyl is
signal m : std_logic;
begin
process (a) begin
m <= a;
end process;
process (m) begin
v <= m;
end process;
end q3c;
P1.10 VHDL VHDL Behavioural Comparison: Ichtyostega
For each of the VHDL architectures q4a through q4c, does the signal v have the same behaviour
as it does in the main architecture of ichthyostega?
NOTES: 1) For full marks, if the code has different behaviour, you must explain
why.
2) Ignore any differences in behaviour in the first few clock cycles that are
caused by initialization of flip-flops, latches, and registers.
3) All code fragments in this question are legal, synthesizable VHDL code.
entity ichthyostega is
port (
clk : in std_logic;
b, c : in signed(3 downto 0);
v : out signed(3 downto 0)
);
end ichthyostega;
architecture main of ichthyostega is
signal bx, cx : signed(3 downto 0);
begin
process begin
wait until (rising_edge(clk));
bx <= b;
cx <= c;
end process;
process begin
wait until (rising_edge(clk));
if (cx > 0) then
v <= bx;
else
v <= to_signed(-1, 4);
end if;
end process;
end main;
architecture q4a of ichthyostega is
signal bx, cx : signed(3 downto 0);
begin
process begin
wait until (rising_edge(clk));
bx <= b;
cx <= c;
end process;
process begin
if (cx > 0) then
wait until (rising_edge(clk));
v <= bx;
else
wait until (rising_edge(clk));
v <= to_signed(-1, 4);
end if;
end process;
end q4a;
architecture q4b of ichthyostega is
signal bx, cx : signed(3 downto 0);
begin
process begin
wait until (rising_edge(clk));
bx <= b;
cx <= c;
wait until (rising_edge(clk));
if (cx > 0) then
v <= bx;
else
v <= to_signed(-1, 4);
end if;
end process;
end q4b;
architecture q4c of ichthyostega is
signal bx, cx, dx : signed(3 downto 0);
begin
process begin
wait until (rising_edge(clk));
bx <= b;
cx <= c;
end process;
process begin
wait until (rising_edge(clk));
v <= dx;
end process;
dx <= bx when (cx > 0)
else to_signed(-1, 4);
end q4c;
P1.11 Waveform VHDL Behavioural Comparison
Answer whether each of the VHDL code fragments q3a through q3d has the same behaviour as
the timing diagram.
NOTES: 1) Same behaviour means that the signals a, b, and c have the same values at
the end of each clock cycle in steady-state simulation (ignore any irregularities
in the first few clock cycles).
2) For full marks, if the code does not match, you must explain why.
3) Assume that all signals, constants, variables, types, etc. are properly defined
and declared.
4) All of the code fragments are legal, synthesizable VHDL code.
[Timing diagram showing waveforms for clk, a, b, and c]
q3a
architecture q3a of q3 is
begin
process begin
a <= '1';
loop
wait until rising_edge(clk);
a <= NOT a;
end loop;
end process;
b <= NOT a;
c <= NOT b;
end q3a;
q3b
architecture q3b of q3 is
begin
process begin
b <= '0';
a <= '1';
wait until rising_edge(clk);
a <= b;
b <= a;
wait until rising_edge(clk);
end process;
c <= a;
end q3b;
q3c
architecture q3c of q3 is
begin
process begin
a <= '0';
b <= '1';
wait until rising_edge(clk);
b <= a;
a <= b;
wait until rising_edge(clk);
end process;
c <= NOT b;
end q3c;
q3d
architecture q3d of q3 is
begin
process (b, clk) begin
a <= NOT b;
end process;
process (a, clk) begin
b <= NOT a;
end process;
c <= NOT b;
end q3d;
q3e
architecture q3e of q3 is
begin
process
begin
b <= '0';
a <= '1';
wait until rising_edge(clk);
a <= c;
b <= a;
wait until rising_edge(clk);
end process;
c <= not b;
end q3e;
q3f
architecture q3f of q3 is
begin
process begin
a <= '1';
b <= '0';
c <= '1';
wait until rising_edge(clk);
a <= c;
b <= a;
c <= NOT b;
wait until rising_edge(clk);
end process;
end q3f;
P1.12 Hardware VHDL Comparison
For each of the circuits q2aq2d, answer
whether the signal d has the same behaviour
as it does in the main architecture of q2.
entity q2 is
port (
a, clk, reset : in std_logic;
d : out std_logic
);
end q2;
architecture main of q2 is
signal b, c : std_logic;
begin
b <= '0' when (reset = '1')
else a;
process (clk) begin
if rising_edge(clk) then
c <= b;
d <= c;
end if;
end process;
end main;
q2a [circuit diagram: inputs a, '0', reset, clk; output d]
q2b [circuit diagram: inputs a, '0', reset, clk; output d]
q2c [circuit diagram: inputs a, '0', reset, clk; output d]
q2d [circuit diagram: inputs a, '0', reset, clk; output d]
P1.13 8-Bit Register
Implement an 8-bit register that has:
clock signal clk
input data vector d
output data vector q
synchronous active-high input reset
synchronous active-high input enable
P1.13.1 Asynchronous Reset
Modify your design so that the reset signal is asynchronous, rather than synchronous.
P1.13.2 Discussion
Describe the tradeoffs in using synchronous versus asynchronous reset in a circuit implemented on
an FPGA.
P1.13.3 Testbench for Register
Write a test bench to validate the functionality of the 8-bit register with synchronous reset.
P1.14 Synthesizable VHDL and Hardware
For each of the fragments of VHDL q4a...q4f, answer whether the code is synthesizable. If the
code is synthesizable, draw the circuit most likely to be generated by synthesizing the datapath of
the code. If the code is not synthesizable, explain why.
q4a
process begin
wait until rising_edge(a);
e <= d;
wait until rising_edge(b);
e <= NOT d;
end process;
q4b
process begin
while (c /= '1') loop
if (b = '1') then
wait until rising_edge(a);
e <= d;
else
e <= NOT d;
end if;
end loop;
e <= b;
end process;
q4c
process (a, d) begin
e <= d;
end process;
process (a, e) begin
if rising_edge(a) then
f <= NOT e;
end if;
end process;
q4d
process (a) begin
if rising_edge(a) then
if b = '1' then
e <= '0';
else
e <= d;
end if;
end if;
end process;
q4e
process (a,b,c,d) begin
if rising_edge(a) then
e <= c;
else
if (b = '1') then
e <= d;
end if;
end if;
end process;
q4f
process (a,b,c) begin
if (b = '1') then
e <= '0';
else
if rising_edge(a) then
e <= c;
end if;
end if;
end process;
P1.15 Datapath Design
Each of the three VHDL fragments q4a-q4c is intended to be the datapath for the same circuit.
The circuit is intended to perform the following sequence of operations (not all operations are
required to use a clock cycle):
read in source and destination addresses from i_src1,
i_src2, i_dst
read operands op1 and op2 from memory
compute sum of operands sum
write sum to memory at destination address dst
write sum to output o_result
[Block diagram: inputs i_src1, i_src2, i_dst, clk; output o_result]
P1.15.1 Correct Implementation?
For each of the three fragments of VHDL q4a-q4c, answer whether it is a correct implementation
of the datapath. If the datapath is not correct, explain why. If the datapath is correct, answer in
which cycle you need load = '1'.
NOTES:
1. You may choose the number of clock cycles required to execute the sequence of operations.
2. The cycle in which the addresses are on i_src1, i_src2, and i_dst is cycle #0.
3. The control circuitry that controls the datapath will output a signal load, which will be '1'
when the sum is to be written into memory.
4. The code fragment with the signal declarations, connections for inputs and outputs, and the
instantiation of memory is to be used for all three code fragments q4a-q4c.
5. The memory has registered inputs and combinational (unregistered) outputs.
6. All of the VHDL is legal, synthesizable code.
-- This code is to be used for
-- all three code fragments q4a--q4c.
signal state : std_logic_vector(3 downto 0);
signal src1, src2, dst, op1, op2, sum,
mem_in_a, mem_out_a, mem_out_b,
mem_addr_a, mem_addr_b
: unsigned(7 downto 0);
...
process (clk)
begin
if rising_edge(clk) then
src1 <= i_src1;
src2 <= i_src2;
dst <= i_dst;
o_result <= sum;
end if;
end process;
mem : ram256x16d
port map (clk => clk,
i_addr_a => mem_addr_a,
i_addr_b => mem_addr_b,
i_we_a => mem_we,
i_data_a => mem_in_a,
o_data_a => mem_out_a,
o_data_b => mem_out_b);
q4a
op1 <= mem_out_a when state = "0010"
else (others => '0');
op2 <= mem_out_b when state = "0010"
else (others => '0');
sum <= op1 + op2 when state = "0100"
else (others => '0');
mem_in_a <= sum when state = "1000"
else (others => '0');
mem_addr_a <= dst when state = "1000"
else src1;
mem_we <= '1' when state = "1000"
else '0';
mem_addr_b <= src2;
process (clk)
begin
if rising_edge(clk) then
if (load = '1') then
state <= "1000";
else
-- rotate state vector one bit to left
state <= state(2 downto 0) & state(3);
end if;
end if;
end process;
q4b
process (clk) begin
if rising_edge(clk) then
op1 <= mem_out_a;
op2 <= mem_out_b;
end if;
end process;
sum <= op1 + op2;
mem_in_a <= sum;
mem_we <= load;
mem_addr_a <= dst when load = '1'
else src1;
mem_addr_b <= src2;
q4c
process
begin
wait until rising_edge(clk);
op1 <= mem_out_a;
op2 <= mem_out_b;
sum <= op1 + op2;
mem_in_a <= sum;
end process;
process (load, dst, src1) begin
if load = '1' then
mem_addr_a <= dst;
else
mem_addr_a <= src1;
end if;
end process;
mem_addr_b <= src2;
P1.15.2 Smallest Area
Of all of the circuits (q4a-q4c), including both correct and incorrect circuits, predict which will
have the smallest area.
If you don't have sufficient information to predict the relative areas, explain what additional
information you would need to predict the area prior to synthesizing the designs.
P1.15.3 Shortest Clock Period
Of all of the circuits (q4a-q4c), including both correct and incorrect circuits, predict which will
have the shortest clock period.
If you don't have sufficient information to predict the relative periods, explain what additional
information you would need to predict the period prior to performing any synthesis or timing
analysis of the designs.
Chapter 2
RTL Design with VHDL: From
Requirements to Optimized Code
2.1 Prelude to Chapter
2.1.1 A Note on EDA for FPGAs and ASICs
The following is from John Cooley's column The Industry Gadfly from 2003/04/30. The title of
this article is: The FPGA EDA Slums.
For 2001, Dataquest reported that the ASIC market was US$16.6 billion while the
FPGA market was US$2.6 billion.
What's more interesting is that the 2001 ASIC EDA market was US$2.2 billion while
the FPGA EDA market was US$91.1 million. Nope, that's not a mistake. It's ASIC
EDA and billion versus FPGA EDA and million. Do the math and you'll see that for
every dollar spent on an ASIC project, roughly 12 cents of it goes to an EDA vendor.
For every dollar spent on an FPGA project, roughly 3.4 cents goes to an EDA vendor.
Not good.
It's the old "free milk and a cow" story, according to Gary Smith, the Senior EDA
Analyst at Dataquest. "Altera and Xilinx have fouled their own nest. Their free tools
spoil the FPGA EDA market," says Gary. EDA vendors know that there's no money
to be made in FPGA tools.
2.2 FPGA Background and Coding Guidelines
2.2.1 Generic FPGA Hardware
2.2.1.1 Generic FPGA Cell
Cell = Logic Element (LE) in Altera
= Configurable Logic Block (CLB) in Xilinx
[Figure: generic FPGA cell. A lookup table (comb) feeds a D flip-flop with CE, S, and R pins.
Ports: comb_data_in, ctrl_in, carry_in, carry_out, flop_data_in, comb_data_out, flop_data_out]
2.2.2 Area Estimation
To estimate the number of FPGA cells that will be required to implement a circuit, recall that an
FPGA lookup-table can implement any function with up to four inputs and one output.
We will describe two methods to estimate the area (number of FPGA cells) required to implement
a gate-level circuit:
1. Rough estimate based simply upon the number of flip-flops and primary inputs that are in
the fanin of each flip-flop.
2. A more accurate estimate, based upon greedily including as many gates as possible into each
FPGA cell.
Allocating gates to FPGA cells is a form of technology mapping: moving from the implementation
technology of generic gates to the implementation technology of FPGA cells.
As with almost all other design tasks, allocating gates to cells is an NP-complete problem: the only
way to ensure that we get the smallest design possible is to try all possible designs. To deal with
NP-complete problems, design tools use heuristics or search techniques to efficiently explore a
subset of the options and hopefully produce a design that is close to the absolute smallest. Because
different synthesis tools use different heuristics and search algorithms, different tools will give
different results.
The circuitry for any flip-flop signal with up to four source flip-flops can be implemented in a
single FPGA cell. If a flip-flop signal is dependent upon five source flip-flops, then two FPGA
cells are required.
Source flops/inputs    Minimum cells
        1                    1
        2                    1
        3                    1
        4                    1
        5                    2
        6                    2
        7                    2
        8                    3
        9                    3
       10                    3
       11                    4
For a single target signal, this technique gives a lower bound on the number of cells needed. For
example, some functions of seven inputs require more than two cells. As a particular example, a
four-to-one multiplexer has six inputs and requires three cells.
When dealing with multiple target signals, this technique might give an overestimate, because a
single cell can drive several other cells (common subexpression elimination).
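The table above is consistent with a closed form: for a single-output function of n > 1 inputs, the minimum number of cells is ceil((n-1)/3), because the first 4-input lookup table absorbs four inputs, and each additional table absorbs three new inputs plus the previous table's output. As a concrete sketch (the entity name and the particular functions below are invented for illustration, not taken from the notes):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity cell_count_demo is
  port (
    a, b, c, d, e, f : in  std_logic;
    y, z             : out std_logic
  );
end cell_count_demo;

architecture rtl of cell_count_demo is
begin
  -- y depends on 4 inputs: fits in a single 4-input lookup table (1 cell)
  y <= (a and b) or (c and d);
  -- z depends on 6 inputs: at least ceil((6-1)/3) = 2 cells
  z <= (a and b) or (c and d) or (e and f);
end rtl;
```

The 6-input assignment matches the table's row for six source signals: two cells, with the first cell computing, say, (a and b) or (c and d), and the second cell combining that result with e and f.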
PLA and Flop for Different Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: the generic FPGA cell, with the combinational block and the flip-flop used for different functions]
PLA and Flop for Same Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: the generic FPGA cell, with the combinational block and the flip-flop used for the same function]
Estimate Area for Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
To have a more accurate estimate of the area of a circuit, we begin with each flip-flop and output,
then traverse backward through the fanin, gathering as much combinational circuitry as possible
into the FPGA cell. Usually, this means that we continue as long as we have four or fewer inputs
to the cell. However, when traversing through some circuits, we will temporarily have five or
more signals as input; then, further back in the fanin, the circuit will collapse back to fewer than
five signals.
Once we can no longer include more circuitry into an FPGA cell, we start with a fresh FPGA cell
and continue to traverse backward through the fanin.
Many signals have more than one target, so many FPGA cells will be connected to multiple
destinations. When choosing whether to include a gate in an FPGA cell, consider whether the gate
drives multiple targets. There are two options: include the gate in an FPGA cell that drives both
targets, or duplicate the gate and incorporate it into two FPGA cells. The choice of which option
will lead to the smaller circuit is dependent on the details of the design.
Question: Map the combinational circuits below onto generic FPGA cells.
[Figures: two gate-level circuits with inputs a through i and flip-flop outputs w, x, y, and z, to be mapped onto generic FPGA cells]
2.2.2.1 Interconnect for Generic FPGA
Note: In these slides, the space between tightly grouped wires sometimes
disappears, making a group of wires appear to be a single large wire.
There are two types of wires that connect a cell to the rest of the chip:
General-purpose interconnect (configurable, slow)
Carry chains and cascade chains (vertically adjacent cells, fast)
2.2.2.2 Blocks of Cells for Generic FPGA
Cells are organized into blocks. There is a great deal of interconnect (wires) between cells within
a single block. In large FPGAs, the blocks are organized into larger blocks. These large blocks
might themselves be organized into even larger blocks. Think of an FPGA as a bunch of nested
for-generate statements that replicate a single component (cell) hundreds of thousands of
times.
Cells not used for computation can be used as wires to shorten the length of a path between cells.
2.2.2.3 Clocks for Generic FPGAs
Characteristics of clock signals:
High fanout (drive many gates)
Long wires (destination gates scattered all over chip)
Characteristics of FPGAs:
Very few gates that are large (strong) enough to support a high fanout.
Very few wires that traverse the entire chip and can be connected to every flip-flop.
2.2.2.4 Special Circuitry in FPGAs
Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
For more than five years, FPGAs have had special circuits for RAM and ROM. In Altera FPGAs,
these circuits are called ESBs (Embedded System Blocks). These special circuits are possible
because many FPGAs are fabricated on the same processes as SRAM chips, so the FPGAs simply
contain small chunks of SRAM.
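As a hedged sketch of how such a block is used from RTL, most FPGA synthesis tools infer an embedded RAM from a clocked array template like the one below (the entity name, width, and depth are invented for illustration; exact inference rules vary by tool):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram32x8 is
  port (
    clk  : in  std_logic;
    we   : in  std_logic;
    addr : in  unsigned(4 downto 0);
    din  : in  std_logic_vector(7 downto 0);
    dout : out std_logic_vector(7 downto 0)
  );
end ram32x8;

architecture rtl of ram32x8 is
  -- an array signal like this is the usual trigger for RAM inference
  type mem_ty is array (0 to 31) of std_logic_vector(7 downto 0);
  signal mem : mem_ty;
begin
  process (clk) begin
    if rising_edge(clk) then
      if we = '1' then
        mem(to_integer(addr)) <= din;  -- synchronous write
      end if;
      dout <= mem(to_integer(addr));   -- synchronous (registered) read
    end if;
  end process;
end rtl;
```

The synchronous read is what lets the tool map the array onto an embedded memory block rather than building it out of FPGA cells.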
Microprocessors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A new feature to appear in FPGAs in 2001 and 2002 is hardwired microprocessors on the same
chip as the programmable hardware.

                          Hard                             Soft
Altera                    ARM 922T with 200 MIPS           Nios with ?? MIPS
Xilinx (Virtex-II Pro)    PowerPC 405 with 420 D-MIPS      MicroBlaze with 100 D-MIPS

The Virtex-II Pro has 4 PowerPCs and enough programmable hardware to implement the first-
generation Intel Pentium microprocessor.
Arithmetic Circuitry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A new feature to appear in FPGAs in 2001 and 2002 is hardwired circuits for multipliers and
adders.
Altera: Mercury, 16x16 at 130 MHz
Xilinx: Virtex-II Pro, 18x18 at ??? MHz
Using these resources can significantly improve both the area and performance of a design.
Input / Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Recently, high-end FPGAs have started to include special circuits to increase the bandwidth of
communication with the outside world.
Altera: True-LVDS (1 Gbps)
Xilinx: Rocket I/O (3 Gbps)
2.2.3 Generic-FPGA Coding Guidelines
Flip-flops are almost free in FPGAs
reason In FPGAs, the area consumed by a design is usually determined by the amount of
combinational circuitry, not by the number of flip-flops.
Aim for using 80–90% of the cells on a chip.
reason If you use more than 90% of the cells on a chip, then the place-and-route program
might not be able to route the wires to connect the cells.
reason If you use less than 80% of the cells, then probably:
there are optimizations that will increase performance and still allow the design to fit
on the chip;
or you spent too much human effort on optimizing for low area;
or you could use a smaller (cheaper!) chip.
exception In E&CE 327 (unlike in real life), the mark is based on the actual number of cells
used.
Use just one clock signal
reason If all flip-flops use the same clock, then the clock does not impose any constraints on
where the place-and-route tool puts flip-flops and gates. If different flip-flops used different
clocks, then flip-flops that are near each other would probably be required to use the same
clock.
Use only one edge of the clock signal
reason There are two ways to use both rising and falling edges of a clock signal: have rising-
edge and falling-edge flip-flops, or have two different clock signals that are inverses of
each other. Most FPGAs have only rising-edge flip-flops. Thus, using both edges of a
clock signal is equivalent to having two different clock signals, which is deprecated by the
preceding guideline.
2.3 Design Flow
2.3.1 Generic Design Flow
Most people agree on the general terminology and process for a digital hardware design flow.
However, each book and course has its own particular way of presenting the ideas. Here we will
lay out a consistent set of definitions that we will use in E&CE 327. This might be different from
what you have seen in other courses or on a work term. Focus on the ideas and you will be fine
both now and in the future.
The design flow presented here focuses on the artifacts that we work with, rather than the
operations that are performed on the artifacts. This is because the same operations can be performed at
different points in the design flow, while the artifacts each have a unique purpose.
[Figure: the design artifacts, from Requirements through Algorithm, High-Level Model, DP+Ctrl Code, Opt. RTL Code, and Implementation to Hardware; each step is refined by an analyze/modify loop, and the step from High-Level Model to DP+Ctrl Code is marked as dp/ctrl specific]
Figure 2.1: Generic Design Flow
Table 2.1: Artifacts in the Design Flow

Requirements: Description of what the customer wants.

Algorithm: Functional description of the computation. Probably not synthesizable. Could be a
flowchart, software, diagram, mathematical equation, etc.

High-Level Model: HDL code that is not necessarily synthesizable, but divides the algorithm into
signals and clock cycles. Possibly mixes datapath and control. In VHDL, could be a single
process that captures the behaviour of the algorithm. Usually synthesizable; the resulting
hardware is usually big and slow compared to optimized RTL code.

Dataflow Diagram: A picture that depicts the datapath computation over time, clock cycle by
clock cycle (Section 2.6).

Hardware Block Diagram: A picture that depicts the structure of the datapath: the components
and the connections between the components (e.g., netlist or schematic).

State Machine: A picture that depicts the behaviour of the control circuitry over time
(Section 2.5).

DP+Ctrl RTL Code: Synthesizable HDL code that separates the datapath and control into
separate processes and assignments.

Optimized RTL Code: HDL code that has been written to meet design goals (high performance,
low power, small area, etc.).

Implementation Code: A collection of files that includes all of the information needed to build
the circuit: HDL program targeted for a particular implementation technology (e.g., a specific
FPGA chip), constraint files, script files, etc.
Note: Recommendation. Spend the time up front to plan a good design on
paper. Use dataflow diagrams and state machines to predict performance and
area. The E&CE 327 project might appear to be sufficiently small and simple
that you can go straight to RTL code. However, you will probably produce
a better design with less effort if you explore high-level optimizations
with dataflow diagrams and state machines.
2.3.2 Implementation Flows
Synopsys Design Compiler and FPGA Compiler are general-purpose synthesis programs. They
have very few, if any, technology-specific algorithms. Instead, they rely on libraries to describe
technology-specific parameters of the primitive building blocks (e.g., the delay and area of
individual gates, PLAs, CLBs, flops, memory arrays).
Mentor Graphics' product Leonardo Spectrum, Cadence's product BuildGates, and Synplicity's
product Synplify are similar. In comparison, Avant! (now owned by Synopsys) and Cadence sell
separate tools that do place-and-route and other low-level (physical design) tasks.
These general-purpose synthesis tools do not (generally) do the final stages of the design, such as
place-and-route and timing analysis, which are very specific to a given implementation technology.
The implementation-technology-specific tools generally also produce a VHDL file that accurately
models the chip. We will refer to this file as the implementation VHDL code.
With Synopsys and the Altera tool Quartus, we compile the VHDL code into an EDIF file for
the netlist and a Tcl file for the commands to Quartus. Quartus then generates a .sof (SRAM
Object File), which can be downloaded to an Altera SRAM-based FPGA. The extension of the
implementation VHDL file is often .vho, for VHDL output.
With the Synopsys and Xilinx tools, we compile VHDL code into a Xilinx-specific design file
(.xnf, Xilinx netlist file). We then use the Xilinx tools to generate a .bit file, which can be
downloaded to a Xilinx FPGA. The name of the implementation VHDL file is often suffixed with
routed.vhd.
Terminology: Behavioural and Structural . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Note: behavioural and structural models. The phrases "behavioural model"
and "structural model" are commonly used for what we'll call high-level
models and synthesizable models. In most cases, what people call structural
code contains both structural and behavioural code. The technically correct
definition of a structural model is an HDL program that contains only
component instantiations and generate statements. Thus, even a program with
c <= a AND b; is, strictly speaking, behavioural.
2.3.3 Design Flow: Datapath vs Control vs Storage
2.3.3.1 Classes of Hardware
Each circuit tends to be dominated by either its datapath, control (state machine), or storage
(memory).
Datapath
Purpose: compute output data based on input data
Each parcel of input produces one parcel of output
Examples: arithmetic, decoders
Storage
Purpose: hold data for future use
Data is not modified while stored
Examples: register files, FIFO queues
Control
Purpose: modify internal state based on inputs, compute outputs from state and inputs
Mostly individual signals, few data (vectors)
Examples: bus arbiters, memory-controllers
All three classes of circuits (datapath, control, and storage) follow the same generic design flow
(Figure 2.1) and use dataflow diagrams, hardware block diagrams, and state machines. The
differences in the design flows appear in the relative amount of effort spent on each type of description
and the order in which the different descriptions are used. The differences are most pronounced
in the transition from the high-level model to the model that separates the datapath and control
circuitry.
2.3.3.2 Datapath-Centric Design Flow
[Figure: analyze/modify loops over the High-Level Model, Dataflow diagram, Block Diagram, State Machine, and DP+Ctrl RTL Code]
Figure 2.2: Datapath-Centric Design Flow
2.3.3.3 Control-Centric Design Flow
[Figure: analyze/modify loops over the High-Level Model, State Machine, Dataflow Diagram, Block Diagram, and DP+Ctrl RTL Code]
Figure 2.3: Control-Centric Design Flow
2.3.3.4 Storage-Centric Design Flow
In E&CE 327, we won't be discussing storage-centric design. Storage-centric design differs from
datapath- and control-centric design in that storage-centric design focuses on building many
replicated copies of small cells.
Storage-centric designs include a wide range of circuits, from simple memory arrays to
complicated circuits such as register files, translation lookaside buffers, and caches. The complicated
circuits can contain large and very intricate state machines, which would benefit from some of the
techniques for control-centric circuits.
2.4 Algorithms and High-Level Models
For designs with significant control flow, algorithms can be described in software languages,
flowcharts, abstract state machines, algorithmic state machines, etc.
For designs with trivial control flow (e.g., every parcel of input data undergoes the same
computation), data-dependency graphs (Section 2.4.2) are a good way to describe the algorithm.
For designs with a small amount of control flow (e.g., a microprocessor, where a single decision is
made based upon the opcode), a set of data-dependency graphs is often a good choice.
Software executes in series;
hardware executes in parallel
When creating an algorithmic description of your hardware design, think about how you can
represent parallelism in the algorithmic notation that you are using, and how you can exploit
parallelism to improve the performance of your design.
2.4.1 Flow Charts and State Machines
Flow charts and various flavours of state machines are covered well in many courses. Generally,
everything that you've learned about these forms of description is also applicable in hardware
design.
In addition, you can exploit parallelism in state-machine design to create communicating finite state
machines. A single complex state machine can be factored into multiple simple state machines that
operate in parallel and communicate with each other.
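As a hedged sketch of this factoring (all entity, state, and signal names, and the req/ack handshake itself, are invented for illustration), two small machines can cooperate instead of being written as one larger machine:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity comm_fsm_sketch is
  port (clk, go : in  std_logic;
        done    : out std_logic);
end comm_fsm_sketch;

architecture rtl of comm_fsm_sketch is
  type m_ty is (m_idle, m_req);    -- master's states
  type s_ty is (s_idle, s_busy);   -- servant's states
  signal m_state  : m_ty := m_idle;
  signal s_state  : s_ty := s_idle;
  signal req, ack : std_logic := '0';
begin
  -- master machine: raises req when go is asserted, then waits for ack
  master : process (clk) begin
    if rising_edge(clk) then
      case m_state is
        when m_idle =>
          if go = '1' then req <= '1'; m_state <= m_req; end if;
        when m_req =>
          if ack = '1' then req <= '0'; m_state <= m_idle; end if;
      end case;
    end if;
  end process;

  -- servant machine: acknowledges a request one cycle later
  servant : process (clk) begin
    if rising_edge(clk) then
      case s_state is
        when s_idle =>
          if req = '1' then ack <= '1'; s_state <= s_busy; end if;
        when s_busy =>
          ack <= '0'; s_state <= s_idle;
      end case;
    end if;
  end process;

  done <= ack;
end rtl;
```

Each machine stays simple (two states each), and the req/ack signals carry all of the coordination between them.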
2.4.2 Data-Dependency Graphs
In software, the expression (((((a + b) + c) + d) + e) + f) takes the same amount
of time to execute as (a + b) + (c + d) + (e + f).
But remember: hardware runs in parallel. In algorithmic descriptions, parentheses can guide
parallel vs serial execution.
Data-dependency graphs capture the algorithms of datapath-centric designs.
Datapath-centric designs have few, if any, control decisions: every parcel of input data undergoes
the same computation.
Serial                                  Parallel
(((((a+b)+c)+d)+e)+f)                   (a+b)+(c+d)+(e+f)
[Figure: chain of five adders]          [Figure: tree of five adders]
5 adders on longest path (slower)       3 adders on longest path (faster)
5 adders used (equal area)              5 adders used (equal area)
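The same comparison can be written directly in VHDL as two concurrent assignments (a minimal sketch; the entity name and the 8-bit widths are made up, and a synthesis tool may rebalance the expressions on its own):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sum6 is
  port (
    a, b, c, d, e, f : in  unsigned(7 downto 0);
    z_serial, z_tree : out unsigned(7 downto 0)
  );
end sum6;

architecture rtl of sum6 is
begin
  -- serial chain: 5 adders, all 5 on the longest path
  z_serial <= (((((a + b) + c) + d) + e) + f);
  -- balanced grouping: still 5 adders, but only 3 on the longest path
  z_tree   <= (a + b) + ((c + d) + (e + f));
end rtl;
```

Both assignments use the same number of adders; the parenthesization changes only the depth of the adder tree, and hence the delay.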
2.4.3 High-Level Models
There are many different types of high-level models, depending upon the purpose of the model
and the characteristics of the design that the model describes. Some models may capture power
consumption, others performance, others data functionality.
High-level models are used to estimate the most important design metrics very early in the design
cycle. If power consumption is more important than performance, then you might write high-
level models that can predict the power consumption of different design choices, but which have
no information about the number of clock cycles that a computation takes, or which predict the
latency inaccurately. Conversely, if performance is important, you might write clock-cycle-accurate
high-level models that do not contain any information about power consumption.
Conventionally, performance has been the primary design metric. Hence, high-level models that
predict performance are more prevalent and better understood than other types of high-level
models. There are many research and entrepreneurial opportunities for people who can develop
tools and/or languages for high-level models for estimating power, area, maximum clock speed,
etc.
In E&CE 327 we will limit ourselves to the well-understood area of high-level models for
performance prediction.
2.5 Finite State Machines in VHDL
2.5.1 Introduction to State-Machine Design
2.5.1.1 Mealy vs Moore State Machines
Moore Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Outputs are dependent upon only the state
No combinational paths from inputs to outputs
[State diagram: a Moore machine; each state is labelled state/output, e.g. s0/0, s1/1, s2/0, s3/0]
Mealy Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Outputs are dependent upon both the state and the inputs
Combinational paths from inputs to outputs
[State diagram: a Mealy machine; each transition is labelled input/output, e.g. a/1, !a/0]
2.5.1.2 Introduction to State Machines and VHDL
A state machine is generally written as a single clocked process, or as a pair of processes, where
one is clocked and one is combinational.
Design Decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Moore vs Mealy (Sections 2.5.2 and 2.5.3)
Implicit vs Explicit (Section 2.5.1.3)
State values in explicit state machines: Enumerated type vs constants (Section 2.5.5.1)
State values for constants: encoding scheme (binary, gray, one-hot, ...) (Section 2.5.5)
VHDL Constructs for State Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The following VHDL control constructs are useful to steer the transition from state to state:
if ... then ... else
case
for ... loop
while ... loop
loop
next
exit
2.5.1.3 Explicit vs Implicit State Machines
There are two broad styles of writing state machines in VHDL: explicit and implicit. Explicit
and implicit refer to whether there is an explicit state signal in the VHDL code. Explicit state
machines have a state signal in the VHDL code. Implicit state machines do not contain a state
signal. Instead, they use VHDL processes with multiple wait statements to control the execution.
In the explicit style of writing state machines, each process has at most one wait statement. For
the explicit style of writing state machines, there are two sub-categories: current state and
current+next state.
In the explicit-current style of writing state machines, the state signal represents the current state
of the machine and the signal is assigned its next value in a clocked process.
In the explicit-current+next style, there is a signal for the current state and another signal for the
next state. The next-state signal is assigned its value in a combinational process or concurrent
statement and is dependent upon the current state and the inputs. The current-state signal is assigned
its value in a clocked process and is just a flopped copy of the next-state signal.
For the implicit style of writing state machines, the synthesis program adds an implicit register to
hold the state signal and combinational circuitry to update the state signal. In Synopsys synthesis
tools, the state signal defined by the synthesizer is named multiple_wait_state_reg.
In Mentor Graphics tools, the state signal is named STATE_VAR.
We can think of the VHDL code for implicit state machines as having zero state signals,
explicit-current state machines as having one state signal (state), and explicit-current+next state
machines as having two state signals (state and state_nxt).
As with all topics in E&CE 327, there are tradeoffs between these different styles of writing state
machines. Most books teach only the explicit-current+next style. This style is the closest to
the hardware, which means that it is more amenable to optimization through human intervention,
rather than relying on a synthesis tool for optimization. The advantage of the implicit style is
that it is concise and readable for control flows consisting of nested loops and branches (e.g.,
the type of control flow that appears in software). For control flows that have less structure, it
can be difficult to write an implicit state machine. Very few books or synthesis manuals describe
multiple-wait-statement processes, but they are relatively well supported among synthesis tools.
Because implicit state machines are written with loops, if-then-elses, cases, etc., it is difficult to
write some state machines with complicated control flows in an implicit style. The following
example illustrates the point.
[State diagram: a four-state Moore machine (s0/0, s1/1, s2/0, s3/0) whose transitions on a and !a do not form simple nested loops and branches]
Note: The terminology of "explicit" and "implicit" is somewhat standard,
in that some descriptions of processes with multiple wait statements describe
the processes as having implicit state machines.
There is no standard terminology to distinguish between the two explicit styles:
explicit-current+next and explicit-current.
2.5.2 Implementing a Simple Moore Machine
[State diagram: s0/0 goes to s1/1 when a is asserted and to s2/0 otherwise; s1 and s2 both go to s3/0; s3 returns to s0]
entity simple is
port (
a, clk : in std_logic;
z : out std_logic
);
end simple;
2.5.2.1 Implicit Moore State Machine
architecture moore_implicit_v1a of simple is
begin
  process
  begin
    z <= '0';
    wait until rising_edge(clk);
    if (a = '1') then
      z <= '1';
    else
      z <= '0';
    end if;
    wait until rising_edge(clk);
    z <= '0';
    wait until rising_edge(clk);
  end process;
end moore_implicit_v1a;
Flops 3
Gates 2
Delay 1 gate
2.5.2.2 Explicit Moore with Flopped Output
architecture moore_explicit_v1 of simple is
  type state_ty is (s0, s1, s2, s3);
  signal state : state_ty;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      case state is
        when s0 =>
          if (a = '1') then
            state <= s1;
            z <= '1';
          else
            state <= s2;
            z <= '0';
          end if;
        when s1 | s2 =>
          state <= s3;
          z <= '0';
        when s3 =>
          state <= s0;
          z <= '0';
      end case;
    end if;
  end process;
end moore_explicit_v1;
Flops 3
Gates 10
Delay 3 gates
2.5.2.3 Explicit Moore with Combinational Outputs
architecture moore_explicit_v2 of simple is
  type state_ty is (s0, s1, s2, s3);
  signal state : state_ty;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      case state is
        when s0 =>
          if (a = '1') then
            state <= s1;
          else
            state <= s2;
          end if;
        when s1 | s2 =>
          state <= s3;
        when s3 =>
          state <= s0;
      end case;
    end if;
  end process;
  z <= '1' when (state = s1)
       else '0';
end moore_explicit_v2;
Flops 2
Gates 7
Delay 4 gates
2.5.2.4 Explicit-Current+Next Moore with Concurrent Assignment
architecture moore_explicit_v3 of simple is
  type state_ty is (s0, s1, s2, s3);
  signal state, state_nxt : state_ty;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      state <= state_nxt;
    end if;
  end process;
  state_nxt <= s1 when (state = s0) and (a = '1')
          else s2 when (state = s0) and (a = '0')
          else s3 when (state = s1) or (state = s2)
          else s0;
  z <= '1' when (state = s1)
       else '0';
end moore_explicit_v3;
Flops 2
Gates 7
Delay 4
The hardware synthesized from this architecture is the same as that synthesized from
moore_explicit_v2, which is written in the explicit-current style.
2.5.2.5 Explicit-Current+Next Moore with Combinational Process
architecture moore_explicit_v4 of simple is
  type state_ty is (s0, s1, s2, s3);
  signal state, state_nxt : state_ty;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      state <= state_nxt;
    end if;
  end process;
  process (state, a)
  begin
    case state is
      when s0 =>
        if (a = '1') then
          state_nxt <= s1;
        else
          state_nxt <= s2;
        end if;
      when s1 | s2 =>
        state_nxt <= s3;
      when s3 =>
        state_nxt <= s0;
    end case;
  end process;
  z <= '1' when (state = s1)
       else '0';
end moore_explicit_v4;
For this architecture, we change the selected assignment to state into a combinational process
using a case statement.
Flops 2
Gates 7
Delay 4
The hardware synthesized from this architecture is the same as that synthesized from
moore_explicit_v2 and moore_explicit_v3.
2.5.3 Implementing a Simple Mealy Machine
Mealy machines have a combinational path from inputs to outputs, which often violates good
coding guidelines for hardware. Thus, Moore machines are much more common. You should
know how to write a Mealy machine if needed, but most of the state machines that you design will
be Moore machines.
This is the same entity as for the simple Moore state machine. The behaviour of the Mealy machine
is the same as the Moore machine, except for the timing relationship between the output (z) and
the input (a).
[State diagram: the Mealy version of the simple machine; s0 goes to s1 on a/1 and to s2 on !a/0; s1 and s2 go to s3 with output 0; s3 returns to s0]
entity simple is
port (
a, clk : in std_logic;
z : out std_logic
);
end simple;
2.5.3.1 Implicit Mealy State Machine
Note: An implicit Mealy state machine is nonsensical.
In an implicit state machine, we do not have a state signal. But, as the example below illustrates,
to create a Mealy state machine we must have a state signal.
An implicit style is a nonsensical choice for Mealy state machines. Because the output is
dependent upon the input in the current clock cycle, the output cannot be a flop. For the output to be
combinational and dependent upon both the current state and the current input, we must create a
state signal that we can read in the assignment to the output. Creating a state signal obviates the
advantages of using an implicit style of state machine.
architecture implicit_mealy of simple is
  type state_ty is (s0, s1, s2, s3);
  signal state : state_ty;
begin
  process
  begin
    state <= s0;
    wait until rising_edge(clk);
    if (a = '1') then
      state <= s1;
    else
      state <= s2;
    end if;
    wait until rising_edge(clk);
    state <= s3;
    wait until rising_edge(clk);
  end process;
  z <= '1' when (state = s0) and a = '1'
       else '0';
end implicit_mealy;
Flops 4
Gates 8
Delay 2 gates
2.5.3.2 Explicit Mealy State Machine
architecture mealy_explicit of simple is
  type state_ty is (s0, s1, s2, s3);
  signal state : state_ty;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      case state is
        when s0 =>
          if (a = '1') then
            state <= s1;
          else
            state <= s2;
          end if;
        when s1 | s2 =>
          state <= s3;
        when others =>
          state <= s0;
      end case;
    end if;
  end process;
  z <= '1' when (state = s0) and a = '1'
       else '0';
end mealy_explicit;
Flops 2
Gates 7
Delay 3
2.5.3.3 Explicit-Current+Next Mealy
architecture mealy_explicit_v2 of simple is
  type state_ty is (s0, s1, s2, s3);
  signal state, state_nxt : state_ty;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      state <= state_nxt;
    end if;
  end process;
  state_nxt <= s1 when (state = s0) and a = '1'
          else s2 when (state = s0) and a = '0'
          else s3 when (state = s1) or (state = s2)
          else s0;
  z <= '1' when (state = s0) and a = '1'
       else '0';
end mealy_explicit_v2;
Flops 2
Gates 4
Delay 3
For the Mealy machine, the explicit-current+next style is smaller than the explicit-current style.
In contrast, for the Moore machine, the two styles produce exactly the same hardware.
2.5.4 Reset
All circuits should have a reset signal that puts the circuit back into a good initial state. However,
not all flip-flops within the circuit need to be reset. In a circuit that has a datapath and a state
machine, the state machine will probably need to be reset, but the datapath may not need to be reset.
There are standard ways to add a reset signal to both explicit and implicit state machines.
It is important that reset is tested on every clock cycle; otherwise a reset might not be noticed, or
your circuit will be slow to react to reset and could generate illegal outputs after reset is asserted.
Reset with Implicit State Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
With an implicit state machine, we need to insert a loop in the process and test for reset after each
wait statement.
Here is the implicit Moore machine from Section 2.5.2.1 with the reset code added (marked with
"test for reset" comments).
architecture moore_implicit of simple is
begin
  process
  begin
    init : loop                      -- outermost loop
      z <= '0';
      wait until rising_edge(clk);
      next init when (reset = '1');  -- test for reset
      if (a = '1') then
        z <= '1';
      else
        z <= '0';
      end if;
      wait until rising_edge(clk);
      next init when (reset = '1');  -- test for reset
      z <= '0';
      wait until rising_edge(clk);
      next init when (reset = '1');  -- test for reset
    end loop init;
  end process;
end moore_implicit;
Reset with Explicit State Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Reset is often easier to include in an explicit state machine, because we need only put a test for
reset = '1' in the clocked process for the state.
The pattern for an explicit-current style of machine is:
process (clk) begin
  if rising_edge(clk) then
    if reset = '1' then
      state <= s0;
    else
      if ... then
        state <= ...;
      elsif ... then
        ...  -- more tests and assignments to state
      end if;
    end if;
  end if;
end process;
Applying this pattern to the explicit Moore machine from section 2.5.2.3 produces:
architecture moore_explicit_v2 of simple is
type state_ty is (s0, s1, s2, s3);
signal state : state_ty;
begin
process (clk)
begin
if rising_edge(clk) then
if (reset = '1') then
state <= s0;
else
case state is
when s0 =>
if (a = '1') then
state <= s1;
else
state <= s2;
end if;
when s1 | s2 =>
state <= s3;
when s3 =>
state <= s0;
end case;
end if;
end if;
end process;
z <= '1' when (state = s1)
else '0';
end moore_explicit_v2;
The pattern for an explicit-current+next style is:
process (clk) begin
if rising_edge(clk) then
if reset = '1' then
state_cur <= S0; -- the reset state
else
state_cur <= state_nxt;
end if;
end if;
end process;
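The clocked process in this pattern pairs with a combinational process that computes state_nxt from state_cur and the inputs. A minimal sketch is below; the states S0 and S1 and the input a are illustrative placeholders, not part of the pattern above.

```vhdl
-- Combinational next-state logic for the explicit-current+next style.
-- The process must be sensitive to the current state and every input
-- it reads, and must assign state_nxt on every path so that no latch
-- is inferred.
process (state_cur, a) begin
  case state_cur is
    when S0 =>
      if (a = '1') then
        state_nxt <= S1;
      else
        state_nxt <= S0;
      end if;
    when others =>
      state_nxt <= S0;
  end case;
end process;
```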
2.5.5 State Encoding
When working with explicit state machines, we must address the issue of state encoding: what
bit-vector value to associate with each state?
With implicit state machines, we do not need to worry about state encoding. The synthesis program
determines the number of states and the encoding for each state.
2.5.5.1 Constants vs Enumerated Type
Using an enumerated type, the synthesis tool chooses the encoding:
type state_ty is (s0, s1, s2, s3);
signal state : state_ty;
Using constants, we choose the encoding:
subtype state_ty is std_logic_vector(1 downto 0);
constant s0 : state_ty := "11";
constant s1 : state_ty := "10";
constant s2 : state_ty := "00";
constant s3 : state_ty := "01";
signal state : state_ty;
Providing Encodings for Enumerated Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Many synthesizers allow the user to provide hints on how to encode the states, or allow the user to
provide explicitly the desired encoding. These hints are given either through VHDL attributes
or special comments in the code.
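For example, many tools recognize the non-standard enum_encoding attribute. This is a sketch, not a portable construct: the attribute name and the accepted values are tool-dependent, so check your synthesizer's documentation.

```vhdl
type state_ty is (s0, s1, s2, s3);

-- Tool-specific hint requesting a one-hot encoding for state_ty.
-- enum_encoding is a common convention (e.g. Synopsys-style tools),
-- not part of the VHDL standard.
attribute enum_encoding : string;
attribute enum_encoding of state_ty : type is "0001 0010 0100 1000";

signal state : state_ty;
```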
Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
When doing functional simulation with enumerated types, simulators often display waveforms
with pretty-printed values rather than bits (e.g. s0 and s1 rather than "11" and "10"). However,
when simulating a design that has been mapped to gates, the enumerated type disappears and you
are left with just bits. If you don't know the encoding that the synthesis tool chose, it can be very
difficult to debug the design.
A common defensive practice is to include a when others branch in case statements over the
state. However, this opens you up to potential bugs if the enumerated type you are testing grows to
include more values, which then end up unintentionally executing your when others branch,
rather than having a special branch of their own in the case statement.
Unused Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
If the number of values you have in your datatype is not a power of two, then you will have some
unused values that are representable.
For example:
subtype state_ty is std_logic_vector(2 downto 0);
constant s0 : state_ty := "011";
constant s1 : state_ty := "000";
constant s2 : state_ty := "001";
constant s3 : state_ty := "010";
constant s4 : state_ty := "101";
signal state : state_ty;
This type only needs five unique values, but can represent eight different values. What should we
do with the three representable values that we don't need? The safest thing to do is to code your
design so that if an illegal value is encountered, the machine resets or enters an error state.
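As a sketch of this defensive style (the transitions shown are illustrative, not from a specific design in these notes), a when others branch catches any of the unused encodings and forces the machine back to a known state:

```vhdl
-- Next-state logic that recovers from illegal state encodings.
-- Any value other than the five legal constants sends the machine
-- back to s0; an explicit error state would work the same way.
process (clk) begin
  if rising_edge(clk) then
    case state is
      when s0     => state <= s1;
      when s1     => state <= s2;
      when s2     => state <= s3;
      when s3     => state <= s4;
      when s4     => state <= s0;
      when others => state <= s0;  -- unused encoding: recover
    end case;
  end if;
end process;
```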
2.5.5.2 Encoding Schemes
Binary: Conventional binary counter.
One-hot: Exactly one bit is asserted at any time.
Modified one-hot: Altera's Quartus synthesizer generates an almost-one-hot encoding where the
bit representing the reset state is inverted. This means that the reset state is all 0s and all other
states have two 1s: one for the reset-state bit and one for the current state.
Gray: Transition between adjacent values requires exactly one bit flip.
Custom: Choose encoding to simplify combinational logic for a specific task.
Tradeoffs in Encoding Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Gray is good for low-power applications where consecutive data values typically differ by 1 (e.g.
no random jumps).
One-hot usually has less combinational logic and runs faster than binary for machines with up
to a dozen or so states. With more than a dozen states, the extra flip-flops required by one-hot
encoding become too expensive.
Custom is great if you have lots of time and are incredibly intelligent, or have deep insight into
the guts of your design.
Note: Don't-care values When we don't care what the value of a signal is, we
assign the signal '-', which is "don't care" in VHDL. This should allow the
synthesis tool to use whatever value is most helpful in simplifying the Boolean
equations for the signal (e.g. Karnaugh maps). In the past, some groups in
E&CE 327 have used '-' quite successfully to decrease the area of their design.
However, a few groups found that using '-' increased the size of their design,
when they were expecting it to decrease the size. So, if you are tweaking your
design to squeeze out the last few unneeded FPGA cells, pay close attention as
to whether using '-' hurts or helps.
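As a sketch of the idea (the signal and state names are illustrative), a mux-select signal might be driven to '-' in the states where its value does not matter:

```vhdl
-- sel matters only in states S1 and S2; elsewhere assign don't-care
-- ('-') so the synthesizer may choose whatever value simplifies the
-- Boolean equations. Signal and state names are illustrative.
sel <= '1' when (state = S1)
  else '0' when (state = S2)
  else '-';
```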
2.6 Dataflow Diagrams
2.6.1 Dataflow Diagrams Overview
Dataflow diagrams are data-dependency graphs where the computation is divided into clock
cycles.
Purpose:
Provide a disciplined approach for designing datapath-centric circuits
Guide the design from algorithm, through high-level models, and finally to register transfer
level code for the datapath and control circuitry.
Estimate area and performance
Make tradeoffs between different design options
Background
Based on techniques from high-level synthesis tools
Some similarity between high-level synthesis and software compilation
Each dataflow diagram corresponds to a basic block in software compiler terminology.
[Figure: data-dependency graph for z = a + b + c + d + e + f, with intermediate results x1 through x4]
[Figure: dataflow diagram for z = a + b + c + d + e + f]
[Figure: dataflow diagram for z = a + b + c + d + e + f; horizontal lines mark clock cycle boundaries]
The use of memory arrays in dataflow diagrams is described in section 2.11.4.
2.6.2 Dataflow Diagrams, Hardware, and Behaviour
Primary Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: dataflow diagram, hardware, and behaviour waveform for a primary input i driving x]
Register Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: dataflow diagram, hardware, and behaviour waveform for a registered input i driving x]
Register Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: dataflow diagram, hardware, and behaviour waveform for a registered signal x = i1 + i2]
Combinational-Component Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: dataflow diagram, hardware, and behaviour waveform for a combinational-component output x = i1 + i2]
2.6.3 Dataflow Diagram Execution
Execution with Registers on Both Inputs and Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: dataflow diagram for z = a + b + c + d + e + f and execution waveform, with registers on both inputs and outputs, over clock cycles 0 to 6]
Execution Without Output Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: dataflow diagram for z = a + b + c + d + e + f and execution waveform without output registers, over clock cycles 0 to 6]
2.6.4 Performance Estimation
Performance Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Performance = 1 / TimeExec
TimeExec = Latency × ClockPeriod
Definition Latency: Number of clock cycles from inputs to outputs. A combinational
circuit has a latency of zero. A single register has a latency of one. A chain of n
registers has a latency of n.
For example, a latency of 6 clock cycles with a 10 ns clock period gives TimeExec = 60 ns.
There is much more information on performance in chapter 3, which is devoted to performance.
Performance of Dataflow Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Latency: count horizontal lines in diagram
Min clock period (Max clock speed) limited by longest path in a clock cycle
2.6.5 Area Estimation
The maximum number of blocks of a given component in any clock cycle is the total number of
that component that is needed.
The maximum number of signals that cross a clock-cycle boundary is the total number of
registers that are needed.
The maximum number of unconnected signal tails in a clock cycle is the total number of inputs
that are needed.
The maximum number of unconnected signal heads in a clock cycle is the total number of
outputs that are needed.
The information above is only for estimating the number of components that are needed. In fact,
these estimates give lower bounds. There might be constraints on your design that will force you
to use more components (e.g., you might need to read all of your inputs at the same time).
Implementation-technology factors, such as the relative size of registers, multiplexers, and datapath
components, might force you to make tradeoffs that increase the number of datapath components
to decrease the overall area of the circuit.
Of particular relevance to FPGAs:
With some FPGA chips, a 2:1 multiplexer has the same area as an adder.
With some FPGA chips, a 2:1 multiplexer can be combined with an adder into one FPGA cell
per bit.
In FPGAs, registers are usually "free", in that the area consumed by a circuit is limited by the
amount of combinational logic, not the number of flip-flops.
In comparison, with ASICs and custom VLSI, 2:1 multiplexers are much smaller than adders, and
registers are quite expensive in area.
2.6.6 Design Analysis
[Figure: one-add-per-cycle dataflow diagram for z = a + b + c + d + e + f]
num inputs         6
num outputs        1
num registers      6
num adders         1
min clock period   delay through flop and one adder
latency            6 clock cycles
2.6.7 Area / Performance Tradeoffs
[Figure: dataflow diagrams for one add per clock cycle (left, clock cycles 0 to 6) and two adds per clock cycle (right, clock cycles 0 to 4)]
Note: In the Two-add design, half of the last clock cycle is wasted.
Two Adds per Clock Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
146 CHAPTER 2. RTL DESIGN WITH VHDL
[Figure: two-adds-per-clock-cycle dataflow diagram and execution waveform]
Design Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: one-add-per-clock-cycle (left) and two-adds-per-clock-cycle (right) dataflow diagrams]

               One add per cycle   Two adds per cycle
inputs         6                   6
outputs        1                   1
registers      6                   6
adders         1                   2
clock period   flop + 1 add        flop + 2 adds
latency        6                   4
Question: Under what circumstances would each design option be fastest?
Answer:
time = latency * clock period
Compare execution times for both options, where Tf is the flop delay and Ta is the adder delay:
T1 = 6(Tf + Ta)
T2 = 4(Tf + 2Ta)
One-add will be faster when T1 < T2:
6(Tf + Ta) < 4(Tf + 2Ta)
6Tf + 6Ta < 4Tf + 8Ta
2Tf < 2Ta
Tf < Ta
Sanity check: If an add is slower than a flop, then we want to minimize the number of
adds per clock cycle. One-add has fewer adds per cycle, so one-add will be faster when an add
is slower than a flop.
2.7 Design Example: Massey
We'll go through the following artifacts:
1. requirements
2. algorithm
3. dataflow diagram
4. high-level models
5. hardware block diagram
6. RTL code for datapath
7. state machine
8. RTL code for control
Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. Scheduling (allocate operations to clock cycles)
2. I/O allocation
3. First high-level model
4. Register allocation
5. Datapath allocation
6. Connect datapath components, insert muxes where needed
7. Design implicit state machine
8. Optimize
9. Design explicit-current state machine
10. Optimize
2.7.1 Requirements
Functional requirements:
Compute the sum of six 8-bit numbers: output = a + b + c + d + e + f
Use registers on both inputs and outputs
Performance requirements:
Maximum clock period: unlimited
Maximum latency: four
Cost requirements:
Maximum of two adders
Small miscellaneous hardware (e.g. muxes) is unlimited
Maximum of three inputs and one output
Design effort is unlimited
Note: In reality multiplexers are not free. In FPGAs, a 2:1 mux is more expensive
than a full-adder. A 2:1 mux has three inputs while an adder has only
two inputs (the carry-in and carry-out signals usually use the special vertical
connections on the FPGA cell). In FPGAs, sharing an adder between two
signals can be more expensive than having two adders. In a generic-gate
technology, a multiplexer contains three two-input gates, while a full-adder
contains fourteen two-input gates.
2.7.2 Algorithm
We'll use parentheses to group operations so as to maximize our opportunities to perform the work
in parallel:
z = (a + b) + (c + d) + (e + f)
This results in the following data-dependency graph:
[Figure: data-dependency graph for z = (a + b) + (c + d) + (e + f)]
2.7.3 Initial Dataflow Diagram
[Figure: initial dataflow diagram, reading all six inputs in the first clock cycle]
This dataflow diagram violates the requirement to use at most three inputs.
2.7.4 Dataflow Diagram Scheduling
We can potentially optimize the inputs, outputs, area, and performance of a dataflow diagram by
rescheduling the operations, that is, allocating the operations to different clock cycles.
Parallel algorithms have higher performance and greater scheduling flexibility than serial
algorithms.
Serial algorithms tend to have less area than parallel algorithms.
Serial:   (((((a + b) + c) + d) + e) + f)
Parallel: (a + b) + (c + d) + (e + f)
[Figure: data-dependency graphs for the serial (left) and parallel (right) algorithms]
Scheduling to Optimize Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: original parallel dataflow diagram (left) and parallel dataflow diagram after scheduling (right)]

               Original parallel   After scheduling
inputs         6                   4
outputs        1                   1
registers      6                   4
adders         3                   2
clock period   flop + 1 add        flop + 1 add
latency        3                   3
Scheduling to Optimize Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Rescheduling the dataflow diagram from the parallel algorithm reduced the area from
three adders to two. However, it still violates the restriction of a maximum of three
inputs. We can reschedule the operations to keep the same area, but reduce the number
of inputs.
The tradeoff is that reducing the number of inputs causes an increase in the latency from
four to five.
[Figure: parallel dataflow diagram rescheduled to reduce the number of inputs, with latency five]
A latency of five violates the design requirement of a maximum latency of four clock cycles. In
comparing the dataflow diagram above with the design requirements, we notice that the
requirements allow a clock cycle that includes two additions and three inputs.
It appears that the parallel algorithm will not lead us to a design that satisfies the
requirements.
We revisit the algorithm and try a serial algorithm:
z = ((((a + b) + c) + d) + e) + f
The corresponding dataflow diagram is:
[Figure: serial dataflow diagram with intermediate results x1, x2, x3, and x4]
2.7.5 Optimize Inputs and Outputs
When we rescheduled the parallel algorithm, we rescheduled the input values. This requires
renegotiating the schedule of input values with our environment. Sometimes the environment of our
circuit will be willing to reschedule the inputs, but in other situations the environment will impose
a non-negotiable schedule upon us.
If you are currently storing all inputs and can change the environment's behaviour to delay sending
some inputs, then you can reduce the number of inputs and registers.
We will illustrate this on both the one-add and the two-add designs.
[Figure: one-add dataflow diagram before I/O optimization (left) and after (right)]

          Before   After
inputs    6        2
regs      6        2
[Figure: two-add dataflow diagram before I/O optimization (left) and after (right)]

          Before   After
inputs    6        3
regs      6        3
Design Comparison Between One and Two Add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: one-add (left) and two-add (right) dataflow diagrams after I/O optimization]

               One-add        Two-add
inputs         2              3
outputs        1              1
registers      2              3
adders         1              2
clock period   flop + 1 add   flop + 2 adds
latency        6              4
Hardware Recipe for Two-Add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
We return now to the two-add design, with the dataflow diagram:
[Figure: two-add dataflow diagram with three inputs and intermediate results x1 through x4]
Based on the dataflow diagram, we can determine the hardware resources required for
the datapath.
Table 2.2: Hardware Recipe for Two-Add
inputs                                3
adders                                2
registers                             3
outputs                               1
registered inputs                     YES
registered outputs                    YES
clock cycles from inputs to outputs   4
2.7.6 Input/Output Allocation
Our first step after settling on a hardware recipe is I/O allocation, because that determines the
interface between our circuit and the outside world.
From the hardware recipe, we know that we need only three inputs and one output. However, we
have six different input values. We need to allocate these input values to input signals before we
can write a high-level model that performs the computation of our design.
Based on the input and output information in the hardware recipe, we can define our entity:
entity massey is
port (
clk : in std_logic;
i1, i2, i3 : in unsigned(7 downto 0);
o1 : out unsigned(7 downto 0)
);
end massey;
[Figure 2.4: Dataflow diagram and hardware block diagram with I/O port allocation]
Based upon the dataflow diagram after I/O allocation, we can write our first high-level
model (hlm_v1).
In the high-level model the entire circuit will be implemented in a single process. For
larger circuits it may be beneficial to have separate processes for different groups of
signals.
In the high-level model, the code between wait statements describes the work that is
done in a clock cycle.
The hlm_v1 architecture uses an implicit state machine.
Because the process is clocked, all of the signals that are assigned to in the process are
registers. Combinational signals would need to be done using concurrent assignments or
combinational processes.
architecture hlm_v1 of massey is
...internal signal decls...
begin
process begin
wait until rising_edge(clk);
a <= i1;
b <= i2;
c <= i3;
wait until rising_edge(clk);
x2 <= (a + b) + c;
d <= i2;
e <= i3;
wait until rising_edge(clk);
x4 <= (x2 + d) + e;
f <= i2;
wait until rising_edge(clk);
z <= (x4 + f);
end process;
o1 <= z;
end hlm_v1;
2.7.7 Register Allocation
The next step after I/O allocation could be either register allocation or datapath allocation. The
benefit of doing register allocation first is that it is possible to write VHDL code after register
allocation is done but before datapath allocation is done, while the inverse (datapath done but
register allocation not done) does not make sense if written in a hardware description language.
In this example, we will do register allocation before datapath allocation, and show the resulting
VHDL code.
[Figure: dataflow diagram and hardware block diagram after I/O and register allocation]

I/O Allocation        Register Allocation
i1: a                 r1: a, x2, x4
i2: b, d, f           r2: b, d, f
i3: c, e              r3: c, e
o1: z
architecture hlm_v2 of massey is
...internal signal decls...
begin
process begin
wait until rising_edge(clk);
r1 <= i1;
r2 <= i2;
r3 <= i3;
wait until rising_edge(clk);
r1 <= (r1 + r2) + r3;
r2 <= i2;
r3 <= i3;
wait until rising_edge(clk);
r1 <= (r1 + r2) + r3;
r2 <= i2;
wait until rising_edge(clk);
r3 <= (r1 + r2);
end process;
o1 <= r3;
end hlm_v2;
Figure 2.5: Block diagram after I/O and register allocation
2.7.8 Datapath Allocation
In datapath allocation, we allocate each of the data operations in the dataow diagram to one of
the datapath components in the hardware block diagram.
[Figure: dataflow diagram and hardware block diagram after I/O, register, and datapath allocation]

I/O Allocation     Register Allocation   Datapath Allocation
i1: a              r1: a, x2, x4         a1: x1, x3, z
i2: b, d, f        r2: b, d, f           a2: x2, x4
i3: c, e           r3: c, e
o1: z
architecture hlm_dp of massey is
...internal signal decls...
begin
process begin
wait until rising_edge(clk);
r1 <= i1;
r2 <= i2;
r3 <= i3;
wait until rising_edge(clk);
r1 <= a2;
r2 <= i2;
r3 <= i3;
wait until rising_edge(clk);
r1 <= a2;
r2 <= i2;
wait until rising_edge(clk);
r3 <= a1;
end process;
a1 <= r1 + r2;
a2 <= a1 + r3;
o1 <= r3;
end hlm_dp;
Figure 2.6: Block diagram after I/O, register, and datapath allocation
2.7.9 Datapath for DP+Ctrl Model
We will now evolve from an implicit state machine to an explicit state machine. The first step is to
label the states in the dataflow diagram and then construct tables to find the values for chip-enable
and mux-select signals.
[Figure: dataflow diagram labelled with states S0 through S3, showing register (r1, r2, r3) and adder (a1, a2) allocations]
Datapath for DP+Ctrl Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
     r1           r2           r3
S0   ce=1, d=i1   ce=1, d=i2   ce=1, d=i3
S1   ce=1, d=a2   ce=1, d=i2   ce=1, d=i3
S2   ce=1, d=a2   ce=1, d=i2   ce=-, d=-
S3   ce=-, d=-    ce=-, d=-    ce=1, d=a1

     a1                 a2
S0   src1=-, src2=-     src1=-, src2=-
S1   src1=r1, src2=r2   src1=a1, src2=r3
S2   src1=r1, src2=r2   src1=a1, src2=r3
S3   src1=r1, src2=r2   src1=-, src2=-
Choose Don't-Care Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
     r1           r2           r3
S0   ce=1, d=i1   ce=1, d=i2   ce=1, d=i3
S1   ce=1, d=a2   ce=1, d=i2   ce=1, d=i3
S2   ce=1, d=a2   ce=1, d=i2   ce=1, d=i3
S3   ce=1, d=a2   ce=1, d=i2   ce=1, d=a1

     a1                 a2
S0   src1=r1, src2=r2   src1=a1, src2=r3
S1   src1=r1, src2=r2   src1=a1, src2=r3
S2   src1=r1, src2=r2   src1=a1, src2=r3
S3   src1=r1, src2=r2   src1=a1, src2=r3
Simplify . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
     r1     r2 = i2   r3
S0   d=i1             d=i3
S1   d=a2             d=i3
S2   d=a2             d=i3
S3   d=a2             d=a1

a1: src1=r1, src2=r2    a2: src1=a1, src2=r3
VHDL Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
architecture explicit_v1 of massey is
signal r_1, r_2, r_3, a_1, a_2 : unsigned(7 downto 0);
subtype state_ty is std_logic_vector(3 downto 0);
constant s0 : state_ty := "0001"; constant s1 : state_ty := "0010";
constant s2 : state_ty := "0100"; constant s3 : state_ty := "1000";
signal state : state_ty;
begin
----------------------
-- r1
process (clk) begin
if rising_edge(clk) then
if state = S0 then
r_1 <= i_1;
else
r_1 <= a_2;
end if;
end if;
end process;
----------------------
-- r_2
process (clk) begin
if rising_edge(clk) then
r_2 <= i_2;
end if;
end process;
----------------------
-- r_3
process (clk) begin
if rising_edge(clk) then
if state = S3 then
r_3 <= a_1;
else
r_3 <= i_3;
end if;
end if;
end process;
----------------------
-- combinational datapath
a_1 <= r_1 + r_2;
a_2 <= a_1 + r_3;
o_1 <= r_3;
----------------------
-- state machine
process (clk) begin
if rising_edge(clk) then
if reset = '1' then
state <= S0;
else
case state is
when S0 => state <= S1;
when S1 => state <= S2;
when S2 => state <= S3;
when S3 => state <= S0;
when others => state <= S0;
end case;
end if;
end if;
end process;
end explicit_v1;
2.7.10 Peephole Optimizations
Peephole optimizations are localized optimizations to code, in that they affect only a few lines of
code. In hardware design, peephole optimizations are usually done to decrease the clock period,
although some optimizations might also decrease area. There are many different types of
optimizations, and many optimizations that designers do by hand are things that you might expect a
synthesis tool to do automatically.
In a comparison such as state = S0, when we use a one-hot state encoding, we need
compare only one of the bits of the state. The comparison can be simplified to: state(0) = '1'.
Without this optimization, many synthesis tools will produce hardware that tests all of the bits of
the state signal. This increases the area, because more bits are required as inputs to the
comparison, and increases the clock period because the wider comparison leads to a tree-like structure of
combinational logic, or an increased number of FPGA cells.
In this example, we will take advantage of our state encoding to optimize the code for r_1, r_3,
and the state machine.
-- r_1
process (clk) begin
if rising_edge(clk) then
if state = S0 then
r_1 <= i_1;
else
r_1 <= a_2;
end if;
end if;
end process;
-- r_1 (optimized)
process (clk) begin
if rising_edge(clk) then
if state(0) = '1' then
r_1 <= i_1;
else
r_1 <= a_2;
end if;
end if;
end process;
The code for r_2 remains unchanged.
-- r_3
process (clk) begin
if rising_edge(clk) then
if state = S3 then
r_3 <= a_1;
else
r_3 <= i_3;
end if;
end if;
end process;
-- r_3 (optimized)
process (clk) begin
if rising_edge(clk) then
if state(3) = '1' then
r_3 <= a_1;
else
r_3 <= i_3;
end if;
end if;
end process;
-- state machine
process (clk) begin
if rising_edge(clk) then
if reset = '1' then
state <= S0;
else
case state is
when S0 => state <= S1;
when S1 => state <= S2;
when S2 => state <= S3;
when S3 => state <= S0;
when others => state <= S0;
end case;
end if;
end if;
end process;
-- state machine (optimized)
-- NOTE: "st" = "state"
process (clk) begin
if rising_edge(clk) then
if reset = '1' then
st <= S0;
else
for i in 0 to 3 loop
st((i+1) mod 4) <= st(i);
end loop;
end if;
end if;
end process;
The hardware-block diagram that corresponds to the tables and VHDL code is:
[Figure: hardware block diagram; one-hot state bits State(0) through State(3) drive the register enables and mux selects, with reset forcing State(0)]
2.8 Design Example: Vanier
We'll go through the following artifacts:
1. requirements
2. algorithm
3. dataflow diagram
4. high-level models
5. hardware block diagram
6. RTL code for datapath
7. state machine
8. RTL code for control
Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. Scheduling (allocate operations to clock cycles)
2. I/O allocation
3. First high-level model
4. Register allocation
5. Datapath allocation
6. Connect datapath components, insert muxes where needed
7. Design implicit state machine
8. Optimize
9. Design explicit-current state machine
10. Optimize
2.8.1 Requirements
Functional requirements: compute the following formula:
output = (a × d) + c + (d × b) + b
Performance requirement:
Max clock period: flop plus (2 adds or 1 multiply)
Max latency: 4
Cost requirements
Maximum of two adders
Maximum of two multipliers
Unlimited registers
Maximum of three inputs and one output
Maximum of 5000 student-minutes of design effort
Registered inputs and outputs
2.8.2 Algorithm
Create a data-dependency graph for the algorithm.
Note: if we draw the data-dependency graph in alphabetical order, it is ugly. The lesson
is to think about layout, and possibly redo the layout to make it simple
and easy to understand, before proceeding.
[Figure: data-dependency graph for z = (a × d) + c + (d × b) + b]
2.8.3 Initial Dataflow Diagram
Schedule operations into clock cycles. Use an
as-soon-as-possible schedule, obeying the performance
requirement of a maximum clock period
of one multiply or two additions. In this initial
diagram, we ignore the resource requirements.
This allows us to establish a lower bound on
the latency, which gives us the maximum performance
that we can hope to achieve.
[Figure: initial dataflow diagram, reading all four inputs in the first clock cycle]
2.8.4 Reschedule to Meet Requirements
We have four inputs, but the requirements allow a maximum of three. We need to move one input
into the second clock cycle. We want to choose an input that can be delayed by one clock cycle
without violating a requirement and with minimal degradation of performance (clock period and
latency).
If delaying an input by a clock cycle causes a requirement to be violated, we can often reschedule
the operations to remove the violation. So, we sometimes create an intermediate dataow diagram
that violates a requirement, then reschedule the operations to bring the dataow diagram back into
compliance.
The critical path is from d and b, through a multiplier, the middle adder, the final adder, and then
out through z. Because the inputs d and b are on the critical path, it would be preferable to choose
another input (either a or c) as the input to move into the second clock cycle.
If we move c, we will move the first addition into the second clock cycle, which will force us to use
three adders, which violates our resource requirement of a maximum of two adders.
By process of elimination, we have settled on a
as our input to be delayed. This causes one of
the multiply operations to be moved into the second
clock cycle, which is good because it reduces
our resources from two multipliers to just one.
[Figure: dataflow diagram with input a delayed to the second clock cycle]
Moving a into the second clock cycle has caused
a clock period violation, because our clock period
is now a register, a multiply, and an add.
This forces us to add an additional clock cycle,
which gives us a latency of four.
[Figure: dataflow diagram rescheduled over four clock cycles]
2.8.5 Optimize Resources
We can exploit the additional clock cycle to
reschedule our operations to reduce the number
of inputs from three to two. The disadvantage is
that we have increased the number of registers
from four to five.
[Figure: dataflow diagram rescheduled to use two inputs per clock cycle]
Two side comments:
Moving the second addition from the third clock cycle to the second will not improve the
performance or the area. The number of adders will remain at two, the number of registers will
remain at five, and the clock period will remain at the maximum of a multiply or two additions.
In hindsight, if we had chosen originally to move c, rather than a, into the second clock cycle,
we would likely have produced this same dataflow diagram. After moving c, we would see
the resource violation of three adders in the second clock cycle. This violation would cause us
to add a third clock cycle, and give us an opportunity to move a into the second clock cycle.
The lesson is that there are usually several different ways to approach a design problem, and it
is infeasible to predict which approach will result in the best design. At best, we have many
heuristics, or rules of thumb, that give us guidelines for techniques that usually work well.
Having finalized our input/output scheduling, we can write our entity. Note: we will add a reset
signal later, when we design the state machine to control the datapath.
entity vanier is
port (
clk : in std_logic;
i_1, i_2 : in std_logic_vector(15 downto 0);
o_1 : out std_logic_vector(15 downto 0)
);
end vanier;
2.8.6 Assign Names to Registered Values
We must assign a name to each registered value. Optionally, we may also assign names to
combinational values. Registers require names, because in VHDL each register (except implicit state
registers) is associated with a named signal. Combinational signals do not require names,
because VHDL allows anonymous (unnamed) combinational signals. For example, in the expression
(a+b)+c we do not need to provide a name for the sum of a and b.
If a single value spans multiple clock cycles, it
only needs to be named once. In our example,
x_1, x_2, and x_4 each cross two boundaries.
[Figure: dataflow diagram with registered values named x1 through x8]
2.8.7 Input/Output Allocation
Now that we have names for all of our registered signals, we can allocate input and output ports to
signals.
After the input and output ports have been allocated to signals, we can write our rst model. We
use an implicit state machine and dene only the registered values. In each state, we dene the
values of the registered values that are computed in that state.
[Figure: the dataflow diagram with input ports i1 and i2 and output port o1 allocated to the registered values x1...x8]
architecture hlm_v1 of vanier is
signal x_1, x_2, x_3, x_4, x_5, x_6,
x_7, x_8 : unsigned(15 downto 0);
begin
process begin
------------------------------
wait until rising_edge(clk);
------------------------------
x_1 <= unsigned(i_1);
x_2 <= unsigned(i_2);
------------------------------
wait until rising_edge(clk);
------------------------------
x_3 <= unsigned(i_1);
x_4 <= x_1(7 downto 0) * x_2(7 downto 0);
x_5 <= unsigned(i_2);
------------------------------
wait until rising_edge(clk);
------------------------------
x_6 <= x_3(7 downto 0) * x_1(7 downto 0);
x_7 <= x_2 + x_5;
------------------------------
wait until rising_edge(clk);
------------------------------
x_8 <= x_6 + (x_4 + x_7);
end process;
o_1 <= std_logic_vector(x_8);
end hlm_v1;
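As a sanity check on the four-cycle schedule, the dataflow can be mimicked in software. The Python sketch below is not part of the VHDL design; the function name is ours, and the binding of a, b, c, d to the input schedule (d and b read in the first cycle, a and c in the second) is inferred from the dataflow diagram.

```python
# Cycle-accurate sketch of hlm_v1, using the registered-value names x_1..x_8.
MASK = 0xFFFF  # registers are 16 bits wide

def vanier_hlm_v1(a, b, c, d):
    # cycle 1: read d and b
    x_1, x_2 = d & MASK, b & MASK
    # cycle 2: read a and c; the first multiply uses 8-bit slices
    x_3, x_5 = a & MASK, c & MASK
    x_4 = ((x_1 & 0xFF) * (x_2 & 0xFF)) & MASK
    # cycle 3: second multiply and first addition
    x_6 = ((x_3 & 0xFF) * (x_1 & 0xFF)) & MASK
    x_7 = (x_2 + x_5) & MASK
    # cycle 4: final sum
    x_8 = (x_6 + (x_4 + x_7)) & MASK
    return x_8

# (a*d) + (d*b) + b + c with a=3, b=5, c=7, d=11 is 33 + 55 + 12 = 100
assert vanier_hlm_v1(3, 5, 7, 11) == 100
```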
[Figure: the dataflow diagram with input/output allocation (i1, i2, o1), together with lifetime charts over clock cycles 0-5 for the registered values x1...x8 and for the registers r1...r5]
The model hlm_v1 is synthesizable. If we are happy with the clock speed and area, we can stop
now! The remaining steps of the design process seek to optimize the design by reducing the area
and clock period. For area, we will reduce the number of registers, datapath components, and
multiplexers. Reducing the clock period will occur as we reduce the number of multiplexers and
potentially perform peephole (localized) optimizations, such as Boolean simplification.
2.8.8 Tangent: Combinational Outputs
To demonstrate a high-level model where the output is combinational, we modify hlm v1 so that
the output is combinational, rather than a register (see hlm v1c). To make the output (x 8) com-
binational, we move the assignment to x 8 out of the main clocked process and into a concurrent
statement.
architecture hlm_v1c of vanier is
signal x_1, x_2, x_3, x_4, x_5, x_6, x_7
: unsigned(15 downto 0);
begin
process begin
------------------------------
wait until rising_edge(clk);
------------------------------
x_1 <= unsigned(i_1);
x_2 <= unsigned(i_2);
------------------------------
wait until rising_edge(clk);
------------------------------
x_3 <= unsigned(i_1);
x_4 <= x_1(7 downto 0) * x_2(7 downto 0);
x_5 <= unsigned(i_2);
------------------------------
wait until rising_edge(clk);
------------------------------
x_6 <= x_3(7 downto 0) * x_1(7 downto 0);
x_7 <= x_2 + x_5;
end process;
o_1 <= std_logic_vector(x_6 + (x_4 + x_7));
end hlm_v1c;
[Figure: dataflow diagram for hlm_v1c with registered values x1...x7; the final sum is combinational and drives o1 directly]
2.8.9 Register Allocation
Our previous model (hlm_v1) uses eight registers (x_1...x_8). However, our analysis of the
dataflow diagrams says that we can implement the diagram with just five registers. Also, the code
for hlm_v1 contains two occurrences of the multiplication symbol (*) and three occurrences of the
addition symbol (+). Our analysis of the dataflow diagram showed that we need only one multiplier
and two adders. In hlm_v1 we are relying on the synthesis tool to recognize that even though the
code contains two multiplies and three adds, the hardware needs only one multiplier and two adders.
Register allocation is the task of assigning each of our registered values to a register signal.
Datapath allocation is the task of assigning each datapath operation to a datapath component. Only
high-level synthesis tools (and software compilers) do register allocation. So, as hardware designers,
we are stuck with the task of doing register allocation ourselves if we want to further optimize
our design. Some register-transfer-level synthesis tools do datapath allocation. If your synthesis
tool does datapath allocation, it is important to learn the idioms and limitations of the tool so that
you can write your code in a style that allows the tool to do a good job of allocation and
optimization. In most cases where area or clock speed are important design metrics, design engineers do
datapath allocation by hand or with ad-hoc software and spreadsheets.
We will now step through the tasks of register allocation and datapath allocation. In our eight-
register model, each register holds a unique value; we do not reuse registers. To reduce the
number of registers from eight to five, we will need to reuse registers, so that a register potentially
holds different values in different clock cycles.
When doing register allocation, we assign a register to each signal that crosses a clock-cycle
boundary. When creating the hardware block diagram, we will need to add multiplexers to the inputs of
modules that are connected to multiple registers. To reduce the number of multiplexers, we try to
allocate the same registers to the same inputs of the same type of module. For example, because
x_7 is an input to an adder, we allocate r_5 to x_7: r_5 was also an input to an adder in another
clock cycle. Also, in the third clock cycle, we allocate r_2 to x_6, because in the second clock
cycle the inputs to an adder were r_2 and r_5. In the last clock cycle, we allocate r_5 to x_8,
because previously r_5 was used as the output of r_2 + r_5.
We update our model to reflect register allocation by replacing the signals for registered values
(x_1...x_8) with the registers r_1...r_5.
[Figure: the dataflow diagram annotated with register allocation: the registered values x1...x8 mapped onto the registers r1...r5, with r2 and r5 reused in later clock cycles]
architecture hlm_v2 of vanier is
signal r_1, r_2, r_3, r_4, r_5
: unsigned(15 downto 0);
begin
process begin
------------------------------
wait until rising_edge(clk);
------------------------------
r_1 <= unsigned(i_1);
r_2 <= unsigned(i_2);
------------------------------
wait until rising_edge(clk);
------------------------------
r_3 <= unsigned(i_1);
r_4 <= r_1(7 downto 0) * r_2(7 downto 0);
r_5 <= unsigned(i_2);
------------------------------
wait until rising_edge(clk);
------------------------------
r_2 <= r_3(7 downto 0) * r_1(7 downto 0);
r_5 <= r_2 + r_5;
------------------------------
wait until rising_edge(clk);
------------------------------
r_5 <= r_2 + (r_4 + r_5);
end process;
o_1 <= std_logic_vector(r_5);
end hlm_v2;
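The register reuse can be checked with a small software model. The Python sketch below is ours (not from the notes); the tuple assignments evaluate all right-hand sides before updating, which mimics VHDL's read-old-value signal semantics within a clock cycle.

```python
# Register-reuse sketch of hlm_v2: five registers r1..r5 instead of x1..x8.
MASK = 0xFFFF  # 16-bit registers

def vanier_hlm_v2(a, b, c, d):
    # cycle 1: read d and b
    r1, r2 = d & MASK, b & MASK
    # cycle 2: read a and c; multiply uses 8-bit slices of old r1, r2
    r3, r4, r5 = a & MASK, ((r1 & 0xFF) * (r2 & 0xFF)) & MASK, c & MASK
    # cycle 3: r2 and r5 are overwritten; the right-hand sides use the
    # *old* values, like VHDL signal assignment in a clocked process
    r2, r5 = ((r3 & 0xFF) * (r1 & 0xFF)) & MASK, (r2 + r5) & MASK
    # cycle 4: final sum lands in r5, which drives o_1
    r5 = (r2 + (r4 + r5)) & MASK
    return r5

assert vanier_hlm_v2(3, 5, 7, 11) == 100   # same result as hlm_v1
```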
Both of our models so far (hlm_v1 and hlm_v2) have used implicit state machines. The
optimization from hlm_v1 to hlm_v2 was done to reduce the number of registers by performing register
allocation. Most of the remaining optimizations require an explicit state machine. We will
construct an explicit state machine using a methodical procedure that gradually adds more information
to the dataflow diagram. The first step in this procedure is datapath allocation, which is similar
to register allocation, except that we allocate datapath components to datapath operations, rather
than registers to names.
To control the datapath, we need to provide the following signals for registers and datapath
components:
registers: chip-enable and mux-select signals
datapath components: instruction (e.g., add, sub, etc. for ALUs) and mux-select signals
After we determine the chip-enable, mux-select, and instruction signals, and then calculate what
value each signal needs in each clock cycle, we can build the explicit state machine to control the
datapath.
After we build the state machine, we will add a reset to the design.
2.8.10 Datapath Allocation
In datapath allocation, we allocate an adder (either a1 or a2) to each addition operation and a
multiplier (either m1 or m2) to each multiplication operation. As with register allocation, we
attempt to reduce the number of multiplexers that will be required by connecting the same
datapath component to the same register in multiple clock cycles.
[Figure: the dataflow diagram annotated with datapath allocation: both multiplications mapped onto m1, and the additions mapped onto a1 and a2]
2.8.11 Hardware Block Diagram and State Machine
To build an explicit state machine, we first determine what states we need. In this circuit, we need
four states, one for each clock cycle in the dataflow diagram. If our algorithmic description had
included control flow, such as loops and branches, then it would be more difficult to determine the
states that are needed.
We will use four states, S0..S3, where S0 corresponds to the first clock cycle (during which the
input is read) and S3 corresponds to the last clock cycle.
2.8.11.1 Control for Registers
To determine the chip-enable and mux-select signals for the registers, we build a table where each
state corresponds to a row and each register corresponds to a column.
For each register and each state, we note whether the register loads in a new value (ce) and what
signal is the source of the loaded data (d).
          r1        r2        r3        r4        r5
         ce  d     ce  d     ce  d     ce  d     ce  d
    S0    1  i1     1  i2
    S1    0         0         1  i1     1  m1     1  i2
    S2              1  m1               0         1  a1
    S3                                            1  a1
Eliminate unnecessary chip enables and muxes.
A chip enable is needed if a register must hold a single value for multiple clock cycles (ce=0).
A multiplexer is needed if a register loads in values from different sources in different clock
cycles.
The register simplifications are as follows:
r1: Chip-enable, because S1 has ce=0. No multiplexer, because i1 is the only input.
r2: Chip-enable, because S1 has ce=0. Multiplexer to choose between i2 and m1.
r3: No chip enable, no multiplexer. The register r3 simplifies to just r3=i1, without a
multiplexer or chip-enable, because there is only one state where we care about its behaviour
(S1); all of the other states are don't-cares for both the chip enable and the mux.
r4: Chip-enable, because S2 has ce=0. No multiplexer, because m1 is the only input.
r5: No chip-enable, because we do not have any states with ce=0. Multiplexer to choose between i2 and a1.
The simplified register table is shown below. For registers that do not have multiplexers, we show
their input on the top row. For registers that need neither a chip enable nor a mux (e.g. r3), we
write the assignment in the first row and leave the other rows blank.
         r1=i1    r2       r3=i1   r4=m1    r5
          ce     ce  d              ce       d
    S0     1      1  i2
    S1     0      0                  1       i2
    S2            1  m1              0       a1
    S3                                       a1
The chip-enable and mux-select signals that are needed for the registers are: r1_ce, r2_ce,
r2_sel, r4_ce, and r5_sel.
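The simplification rules above are mechanical enough to automate. The Python sketch below (our own illustration, not part of the notes) encodes the unsimplified control table and derives which registers need a chip-enable (some state has ce=0) and which need a mux (more than one data source):

```python
# Each state maps registers to (ce, source); absent registers are don't-cares.
table = {
    "S0": {"r1": (1, "i1"), "r2": (1, "i2")},
    "S1": {"r1": (0, None), "r2": (0, None), "r3": (1, "i1"),
           "r4": (1, "m1"), "r5": (1, "i2")},
    "S2": {"r2": (1, "m1"), "r4": (0, None), "r5": (1, "a1")},
    "S3": {"r5": (1, "a1")},
}

def needs(reg):
    ces  = [row[reg][0] for row in table.values() if reg in row]
    srcs = {row[reg][1] for row in table.values()
            if reg in row and row[reg][1] is not None}
    # ce needed iff the register must ever hold; mux needed iff >1 source
    return {"ce": 0 in ces, "mux": len(srcs) > 1}

assert needs("r1") == {"ce": True,  "mux": False}
assert needs("r3") == {"ce": False, "mux": False}
assert needs("r5") == {"ce": False, "mux": True}
```

The results match the case-by-case analysis: only r2 needs both a chip-enable and a mux, and r3 needs neither.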
2.8.11.2 Control for Datapath Components
Analogous to the table for registers, we build a table for the datapath components. Each of our
components has two inputs (src1 and src2). Each component performs a single operation (either
addition or multiplication), so we do not need to define operation or instruction signals for the
datapath components.
           a1             a2             m1
       src1  src2     src1  src2     src1  src2
    S0
    S1                                 r1    r2
    S2   r2    r5                      r3    r1
    S3   r2    a2       r4    r5
Based on the table above, the adder a1 will need a multiplexer for src2. The multiplier m1 will
need two multiplexers: one for each input.
Note that addition and multiplication are commutative, so we can choose which
signal goes to src1 and which to src2 so as to minimize the need for multiplexers.
We notice that for m1, we can reduce the number of multiplexers from two to one by swapping the
operands in the second clock cycle. This makes r1 the only source of operands for the src1 input.
This optimization is reflected in the table below.
           a1             a2             m1
       src1  src2     src1  src2     src1  src2
    S0
    S1                                 r1    r2
    S2   r2    r5                      r1    r3
    S3   r2    a2       r4    r5
The mux-select signals for the datapath components are: a1_src2_sel and m1_src2_sel.
2.8.11.3 Control for State
We need to control the transition from one state to the next. For this example, the transitions are
very simple: each state transitions to its successor: S0, S1, S2, S3, S0, ....
2.8.11.4 Complete State Machine Table
The state machine table is shown below. Note that the state signal is a register; the table shows the
next value of the signal.
        r1_ce  r2_ce  r2_sel  r4_ce  r5_sel  a1_src2_sel  m1_src2_sel  state
    S0    1      1      i2                                              S1
    S1    0      0              1      i2                     r2        S2
    S2           1      m1      0      a1        r5           r3        S3
    S3                                 a1        a2                     S0
We now choose instantiations for the don't-care values so as to simplify the circuitry. Different
state encodings will lead to different simplifications. For fully-encoded states, Karnaugh maps are
helpful in doing simplifications. For a one-hot state encoding, it is usually better to create situations
where conditions are based upon a single state. The reason for this heuristic with one-hot encodings
will become clear when we get to explicit_v2.
r1_ce: We first choose 0 as the don't-care instantiation, because that leaves just one state where
we need to load. Additionally, it is conceptually cleaner to do an assignment in just the one
clock cycle where we care about the value, rather than not do an assignment in the one clock
cycle where we must hold the value. (At the end of the don't-care allocation, we'll revisit
this decision and change our mind.)
r2_ce: We choose 1 for S3, so that we have just one state where we do not do a load. If we
had chosen 0 for r2_ce in S3, we would have two states where we do a load and two where
we do not load. If we were using fully-encoded states, this even separation might have left
us with a very nice Karnaugh map; or it might have left us with a Karnaugh map that has a
checkerboard pattern, which would not simplify. This helps illustrate why state encoding is
a difficult problem.
r2_sel: We choose m1 arbitrarily. The choice of i2 would have also resulted in three assignments
from one signal and one assignment from the other signal.
r4_ce: We choose 0, as we did for r1_ce.
r5_sel: Choose a1, so that we have three assignments from the same signal and just one
assignment from the other signal.
a1_src2_sel: Choose a2 arbitrarily.
m1_src2_sel: Choose r3 arbitrarily.
r1_ce (again): We examine r1_ce and r2_ce and see that if we choose 1 for the don't-care
instantiation of r1_ce, we will have the same choices for both chip enables. This will
simplify our state machine. Also, r4_ce is the negation of r2_ce, so we can use just an
inverter to control r4_ce.
        r1_ce  r2_ce  r2_sel  r4_ce  r5_sel  a1_src2_sel  m1_src2_sel  state
    S0    1      1      i2      0      a1        a2           r3        S1
    S1    0      0      m1      1      i2        a2           r2        S2
    S2    1      1      m1      0      a1        r5           r3        S3
    S3    1      1      m1      0      a1        a2           r3        S0
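Before committing the table to VHDL, it is worth executing it in software. The Python sketch below (ours; the input schedule of d, b in S0 and a, c in S1 is inferred from the dataflow diagram, and a1 computes r2 + a1_src2 per the datapath tables) drives a behavioural datapath from the completed table for one pass:

```python
# Execute one pass of the completed state-machine table.
MASK = 0xFFFF
ROWS = {  # state: (r1_ce, r2_ce, r2_sel, r4_ce, r5_sel, a1_src2_sel, m1_src2_sel, next)
    "S0": (1, 1, "i2", 0, "a1", "a2", "r3", "S1"),
    "S1": (0, 0, "m1", 1, "i2", "a2", "r2", "S2"),
    "S2": (1, 1, "m1", 0, "a1", "r5", "r3", "S3"),
    "S3": (1, 1, "m1", 0, "a1", "a2", "r3", "S0"),
}

def run(a, b, c, d):
    r = {"r1": 0, "r2": 0, "r3": 0, "r4": 0, "r5": 0}
    inputs = {"S0": (d, b), "S1": (a, c), "S2": (0, 0), "S3": (0, 0)}
    state = "S0"
    for _ in range(4):
        i1, i2 = inputs[state]
        r1_ce, r2_ce, r2_sel, r4_ce, r5_sel, a1_sel, m1_sel, nxt = ROWS[state]
        # combinational datapath, computed from the old register values
        a2 = (r["r4"] + r["r5"]) & MASK
        a1 = (r["r2"] + (r["r5"] if a1_sel == "r5" else a2)) & MASK
        m1 = ((r["r1"] & 0xFF) *
              ((r["r2"] if m1_sel == "r2" else r["r3"]) & 0xFF)) & MASK
        # register updates at the clock edge
        nr = dict(r)
        if r1_ce: nr["r1"] = i1 & MASK
        if r2_ce: nr["r2"] = i2 & MASK if r2_sel == "i2" else m1
        nr["r3"] = i1 & MASK                 # r3 always loads i1
        if r4_ce: nr["r4"] = m1
        nr["r5"] = i2 & MASK if r5_sel == "i2" else a1
        r, state = nr, nxt
    return r["r5"]

assert run(3, 5, 7, 11) == 100   # (a*d) + (d*b) + b + c
```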
2.8.12 VHDL Code with Explicit State Machine
VHDL code can be written directly from the tables and the dataflow diagram that shows register
allocation, input allocation, and datapath allocation. As a simplification, rather than write explicit
signals for the chip-enable and mux-select signals, we use selected and conditional assignment
statements that test the state in the condition.
We chose a one-hot encoding of the state, which usually results in small and fast hardware for state
machines with sixteen or fewer states.
architecture explicit_v1 of vanier is
  signal r_1, r_2, r_3, r_4, r_5 : unsigned(15 downto 0);
  signal a_1, a_2, m_1, a1_src2, m1_src2 : unsigned(15 downto 0);
  subtype state_ty is std_logic_vector(3 downto 0);
  constant s0 : state_ty := "0001";
  constant s1 : state_ty := "0010";
  constant s2 : state_ty := "0100";
  constant s3 : state_ty := "1000";
  signal state : state_ty;
begin
  ----------------------
  -- r_1
  process (clk) begin
    if rising_edge(clk) then
      if state /= s1 then
        r_1 <= unsigned(i_1);
      end if;
    end if;
  end process;
  ----------------------
  -- r_2
  process (clk) begin
    if rising_edge(clk) then
      if state /= s1 then
        if state = s0 then
          r_2 <= unsigned(i_2);
        else
          r_2 <= m_1;
        end if;
      end if;
    end if;
  end process;
  ----------------------
  -- r_3
  process (clk) begin
    if rising_edge(clk) then
      r_3 <= unsigned(i_1);
    end if;
  end process;
  ----------------------
  -- r_4
  process (clk) begin
    if rising_edge(clk) then
      if state = s1 then
        r_4 <= m_1;
      end if;
    end if;
  end process;
  ----------------------
  -- r_5
  process (clk) begin
    if rising_edge(clk) then
      if state = s1 then
        r_5 <= unsigned(i_2);
      else
        r_5 <= a_1;
      end if;
    end if;
  end process;
  ----------------------
  -- combinational datapath
  with state select
    a1_src2 <= r_5 when s2,
               a_2 when others;
  with state select
    m1_src2 <= r_2 when s1,
               r_3 when others;
  a_1 <= r_2 + a1_src2;
  a_2 <= r_4 + r_5;
  m_1 <= r_1(7 downto 0) * m1_src2(7 downto 0);
  o_1 <= std_logic_vector(r_5);
  ----------------------
  -- state machine
  process (clk) begin
    if rising_edge(clk) then
      if reset = '1' then
        state <= s0;
      else
        case state is
          when s0     => state <= s1;
          when s1     => state <= s2;
          when s2     => state <= s3;
          when others => state <= s0;
        end case;
      end if;
    end if;
  end process;
  ----------------------
end explicit_v1;
The hardware-block diagram that corresponds to the tables and VHDL code is:
[Figure: hardware block diagram with registers r1...r5, multiplier m1, adders a1 and a2, input multiplexers, inputs i1 and i2, and the one-hot state machine S0...S3]
2.8.13 Peephole Optimizations
We will illustrate several peephole optimizations that take advantage of our state encoding.
-- r_1
process (clk) begin
  if rising_edge(clk) then
    if state /= s1 then
      r_1 <= unsigned(i_1);
    end if;
  end if;
end process;
-- r_1 (optimized)
process (clk) begin
  if rising_edge(clk) then
    if state(1) = '0' then
      r_1 <= unsigned(i_1);
    end if;
  end if;
end process;
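The optimization is sound only because the encoding is one-hot: testing a single bit is then equivalent to a full equality comparison. A quick Python check (our own sketch, with states modelled as 4-bit integers) confirms the equivalence for every legal state:

```python
# With s0="0001" .. s3="1000", "state /= S1" equals "state(1) = '0'".
STATES = {"S0": 0b0001, "S1": 0b0010, "S2": 0b0100, "S3": 0b1000}

def full_compare(state):           # state /= S1
    return state != STATES["S1"]

def bit_test(state):               # state(1) = '0'
    return ((state >> 1) & 1) == 0

for s in STATES.values():
    assert full_compare(s) == bit_test(s)
```

Note that the equivalence does not hold for arbitrary (non-one-hot) 4-bit values, which is why illegal state encodings must be unreachable.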
Analogous optimizations can be used when comparing against multiple states:
-- r_2
process (clk) begin
  if rising_edge(clk) then
    if state /= s1 then
      if state = s0 then
        r_2 <= unsigned(i_2);
      else
        r_2 <= m_1;
      end if;
    end if;
  end if;
end process;
-- r_2 (optimized)
process (clk) begin
  if rising_edge(clk) then
    if state(1) = '0' then
      if state(0) = '1' then
        r_2 <= unsigned(i_2);
      else
        r_2 <= m_1;
      end if;
    end if;
  end if;
end process;
Next-state assignment for a one-hot state machine can be done with a simple shift register:
-- state machine
process (clk) begin
  if rising_edge(clk) then
    if reset = '1' then
      state <= s0;
    else
      case state is
        when s0     => state <= s1;
        when s1     => state <= s2;
        when s2     => state <= s3;
        when others => state <= s0;
      end case;
    end if;
  end if;
end process;
-- state machine (optimized)
-- NOTE: "st" = "state"
process (clk) begin
  if rising_edge(clk) then
    if reset = '1' then
      st <= s0;
    else
      for i in 0 to 3 loop
        st( (i+1) mod 4 ) <= st( i );
      end loop;
    end if;
  end if;
end process;
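The loop is just a rotate of the one-hot vector. The Python sketch below (ours; the state is modelled as a 4-bit integer) shows that repeatedly rotating walks through S0, S1, S2, S3 and wraps back to S0:

```python
# One-hot next-state logic as a rotate: st((i+1) mod 4) <= st(i).
def next_state(st):
    # rotate the 4-bit one-hot value left by one, wrapping bit 3 into bit 0
    return ((st << 1) | (st >> 3)) & 0b1111

st, seq = 0b0001, []          # start in S0
for _ in range(5):
    seq.append(st)
    st = next_state(st)
assert seq == [0b0001, 0b0010, 0b0100, 0b1000, 0b0001]
```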
The resulting optimized code is shown below.
architecture explicit_v2 of vanier is
  signal r_1, r_2, r_3, r_4, r_5 : unsigned(15 downto 0);
  signal a_1, a_2, m_1, a1_src2, m1_src2 : unsigned(15 downto 0);
  subtype state_ty is std_logic_vector(3 downto 0);
  constant s0 : state_ty := "0001"; constant s1 : state_ty := "0010";
  constant s2 : state_ty := "0100"; constant s3 : state_ty := "1000";
  signal state : state_ty;
begin
  ----------------------
  -- r_1
  process (clk) begin
    if rising_edge(clk) then
      if state(1) = '0' then
        r_1 <= unsigned(i_1);
      end if;
    end if;
  end process;
  ----------------------
  -- r_2
  process (clk) begin
    if rising_edge(clk) then
      if state(1) = '0' then
        if state(0) = '1' then
          r_2 <= unsigned(i_2);
        else
          r_2 <= m_1;
        end if;
      end if;
    end if;
  end process;
  ----------------------
  -- r_3
  process (clk) begin
    if rising_edge(clk) then
      r_3 <= unsigned(i_1);
    end if;
  end process;
  ----------------------
  -- r_4
  process (clk) begin
    if rising_edge(clk) then
      if state(1) = '1' then
        r_4 <= m_1;
      end if;
    end if;
  end process;
  ----------------------
  -- r_5
  process (clk) begin
    if rising_edge(clk) then
      if state(1) = '1' then
        r_5 <= unsigned(i_2);
      else
        r_5 <= a_1;
      end if;
    end if;
  end process;
  ----------------------
  -- combinational datapath
  a1_src2 <= r_5 when state(2) = '1'
             else a_2;
  m1_src2 <= r_2 when state(1) = '1'
             else r_3;
  a_1 <= r_2 + a1_src2;
  a_2 <= r_4 + r_5;
  m_1 <= r_1(7 downto 0) * m1_src2(7 downto 0);
  o_1 <= std_logic_vector(r_5);
  ----------------------
  -- state machine
  process (clk) begin
    if rising_edge(clk) then
      if reset = '1' then
        state <= s0;
      else
        for i in 0 to 3 loop
          state( (i+1) mod 4 ) <= state( i );
        end loop;
      end if;
    end if;
  end process;
  ----------------------
end explicit_v2;
2.8.14 Notes and Observations
Our functional requirements were written as:
output = (a d) + (d b) + b + c
Alternatively, we could have achieved exactly the same functionality with the functional require-
ments written as (the two statements are mathematically equivalent):
output = (a d) + b + (d b) + c
The naive data dependency graph for the alternative formulation is much messier than the data
dependency graph for the original formulation:
[Figure: two data dependency graphs. Original: (a&#183;d) + (d&#183;b) + b + c. Alternative: (a&#183;d) + c + (d&#183;b) + b]
An observation: it can be helpful to explore several equivalent formulations of the mathematical
equations while constructing the data dependency graph. A mathematical formulation that places
occurrences of the same identifier close to each other often results in a simpler data dependency
graph. The simpler the data dependency graph, the easier it will be to identify helpful optimizations
and efficient schedules.
2.9 Pipelining
Pipelining is one of the most common and most effective performance optimizations in hardware.
Pipelining is used in systems ranging from simple signal-processing filters to high-performance
microprocessors. Pipelining increases performance by overlapping the execution of multiple
instructions or parcels of data, analogous to the way that multiple cars flow through an automobile
assembly line.
Pipelines are difficult to design and verify, because subtle bugs can arise from the interactions
between instructions flowing through the pipeline. There are intended interactions, which must
happen correctly, and there might be unintended interactions, which constitute bugs. Computer
architects categorize the interactions between instructions according to three principles:
structural hazards, control hazards, and data hazards. Our examples will all be pure datapath pipelines
without any data or control dependencies between parcels of data. This eliminates most of the
complexities of implementing pipelines correctly.
2.9.1 Introduction to Pipelining
Review of unpipelined dataflow diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
As a quick review of an unpipelined (also called sequential) dataflow diagram, we revisit the
one-add example from section 2.6.3.
[Figure: unpipelined dataflow diagram summing a, b, c, d, e, f by reusing the single adder add1 and registers r1, r2 over clock cycles 0-5, with a waveform (clk, a, r1, z over cycles 0-11) showing two parcels processed back-to-back]
The key feature to notice, in comparison to a pipelined dataflow diagram, is that the second parcel
begins execution only after the first parcel has finished executing.
Pipelined dataflow diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
In a pipeline, each stage is a separate circuit, in that we cannot reuse the same component in
multiple stages. When drawing a pipelined dataflow diagram, we effectively have multiple dataflow
diagrams: one for each stage. As a notational shorthand to avoid drawing multiple dataflow
diagrams, we introduce a new bit of notation: a double line denotes a boundary between stages.
We perform scheduling, resource allocation, and all of the other design steps individually for each
stage.
Our first example of a pipelined dataflow diagram is a fully pipelined version of the previous
example. In a fully pipelined dataflow diagram, each clock cycle becomes a separate stage. Notationally,
we simply replace the single-line clock-cycle boundaries with double-line stage boundaries.
[Figure: fully pipelined dataflow diagram with five stages (add1...add5) and registers r1...r10, plus a waveform (clk, a, the stage registers r1, r3, r5, r7, r9, and z over cycles 0-11) showing a new parcel entering every clock cycle]
Sequential (Unpipelined) Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The hardware for the unpipelined dataflow diagram contains two registers, one adder, a multiplexer,
and a state machine to control the multiplexer. When the data is produced by the adder at the end
of each clock cycle, it is fed back to the multiplexer as a value for the next clock cycle.
[Figure: unpipelined hardware: registers r1 and r2, adder add1, an input multiplexer on i1/i2, and a state machine (State(0)...State(4)) with reset]
Pipelined Hardware and VHDL Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The hardware for the pipelined dataflow diagram contains two registers and one adder for each
stage. The registers and adders do the same thing in each clock cycle, so there is no need for
chip-enables, multiplexers, or a state machine.
[Figure: fully pipelined hardware: five stages, each with two registers and one adder (add1...add5, registers r1...r10), with inputs i1...i6 and output o1]
-- stage 1
process begin
wait until rising_edge(clk);
r1 <= i1; r2 <= i2;
end process;
-- stage 2
process begin
wait until rising_edge(clk);
r3 <= r1 + r2; r4 <= i3;
end process;
-- stage 3
process begin
wait until rising_edge(clk);
r5 <= r3 + r4; r6 <= i4;
end process;
-- stage 4
process begin
wait until rising_edge(clk);
r7 <= r5 + r6; r8 <= i5;
end process;
-- stage 5
process begin
wait until rising_edge(clk);
r9 <= r7 + r8; r10 <= i6;
end process;
-- output
o1 <= r9 + r10;
The VHDL code above is designed to be easy to read by matching the structure of the hardware.
An alternative style is to be more concise by grouping all of the registered assignments in a single
clocked process as shown below. The two styles are equivalent with respect to simulation and
synthesis.
-- group all registered assignments into a single process
process begin
wait until rising_edge(clk);
r1 <= i1; r2 <= i2;
r3 <= r1 + r2; r4 <= i3;
r5 <= r3 + r4; r6 <= i4;
r7 <= r5 + r6; r8 <= i5;
r9 <= r7 + r8; r10 <= i6;
end process;
o1 <= r9 + r10;
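A software model makes the fill behaviour of the pipeline concrete. The Python sketch below (our own; it feeds the same parcel every cycle for simplicity) steps the ten registers exactly as the clocked process does, and shows the combinational output settling once the five stages have filled:

```python
# Cycle-accurate sketch of the fully pipelined adder chain r1..r10.
def step(regs, i1, i2, i3, i4, i5, i6):
    r1, r2, r3, r4, r5, r6, r7, r8, r9, r10 = regs
    # every register loads every cycle; sums use the *old* register values
    return (i1, i2, r1 + r2, i3, r3 + r4, i4, r5 + r6, i5, r7 + r8, i6)

regs, outs = (0,) * 10, []
for _ in range(8):                 # feed parcel (1,2,3,4,5,6) every cycle
    outs.append(regs[8] + regs[9]) # o1 is combinational: r9 + r10
    regs = step(regs, 1, 2, 3, 4, 5, 6)

# after the five register stages fill, o1 settles at 1+2+3+4+5+6 = 21
assert outs[5:] == [21, 21, 21]
```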
2.9.2 Partially Pipelined
The previous section illustrated a fully pipelined circuit, which means that the circuit could accept
a new parcel every clock cycle. Sometimes we want to sacrice performance (throughput) in order
to reduce area. We can do this by having a throughput that is less than one parcel per clock-cycle
and reusing some hardware. A pipeline that has a throughput of less than one is said to be partially
pipelined.
If a pipeline is essentially two pipelines running in parallel, then it is said to be superscalar and
will usually have a throughput that is more than one parcel per clock cycle. A superscalar pipeline
that has n pipelines in parallel is said to be n-way superscalar and has a maximum throughput of n
parcels per clock cycle.
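The throughput trade-off can be quantified with a back-of-the-envelope formula. The Python sketch below is ours; it borrows the term "initiation interval" (II, the number of cycles between successive parcels, so II = 1 is fully pipelined and II = 2 gives throughput 1/2), which is not terminology used in these notes:

```python
# Total cycles to push n parcels through a pipeline of a given depth,
# starting a new parcel every `ii` cycles.
def cycles(n_parcels, depth, ii):
    # the first parcel takes `depth` cycles; each later parcel
    # finishes `ii` cycles after the previous one
    return depth + ii * (n_parcels - 1)

assert cycles(1, 5, 1) == 5       # latency of a single parcel
assert cycles(10, 5, 1) == 14     # fully pipelined: about 1 parcel/cycle
assert cycles(10, 3, 2) == 21     # partially pipelined: throughput 1/2
```

As n grows, the per-parcel cost approaches II, so a partially pipelined design trades steady-state throughput for the area saved by reusing components within a stage.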
[Figure: partially pipelined dataflow diagram with three stages; each stage reuses its adder for two clock cycles (add1, add2, add3; registers r1...r6), with a waveform (clk, a, stage registers r1, r3, r5, and z over cycles 0-11)]
Hardware for Partially Pipelined . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: partially pipelined hardware: three stages with adders add1, add2, add3 and registers r1...r6, input multiplexers on i2, and a two-state controller (State(0), State(1)) with reset]
2.9.3 Terminology
Denition Depth: The depth of a pipeline is the number of stages on the longest path
through the pipeline.
Denition Latency: The latency of a pipeline is measured the same as for an
unpipelined circuit: the number of clock cycles from inputs to outputs.
Denition Throughput: The number of parcels consumed or produced per clock cycle.
Denition Upstream/downstream: Because parcels ow through the pipeline
analogously to water in a stream, the terms upstream and downstream are used
respectively to refer to earlier and later stages in the pipeline. For example, stage1 is
upstream from stage2.
Denition Bubble: When a pipe stage is empty (contains invalid data), it is said to
contain a bubble.
Question: How do we know whether the output of the pipeline is a bubble or is valid
data?
Answer:
Add one register per stage to hold a valid bit. If valid=0, then the pipe stage
contains a bubble.
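The valid bits simply ride along with the data, one register per stage. The Python sketch below (ours; a three-stage pipe matching the pipelined examples) shows a bubble injected at the input popping out at the output after the pipeline's latency:

```python
# Valid-bit propagation: one valid register per stage; a cycle with
# i_valid=0 injects a bubble that emerges at o_valid `depth` cycles later.
def simulate(i_valid_stream, depth=3):
    v = [0] * depth                # one valid bit per stage
    o_valid = []
    for iv in i_valid_stream:
        o_valid.append(v[-1])      # the last stage's valid bit is o_valid
        v = [iv] + v[:-1]          # shift valid bits down the pipe
    return o_valid

# parcels enter on cycles 0 and 2, with a bubble on cycle 1
assert simulate([1, 0, 1, 0, 0, 0]) == [0, 0, 0, 1, 0, 1]
```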
2.10 Design Example: Pipelined Massey
In this section, we revisit the Massey example from section 2.7, but now do it with a pipelined
implementation. To allow us to implement a pipelined design, we need to relax our resource
requirements. Originally, we were allowed two adders and three inputs. For the pipeline, we will
allow ourselves six inputs and five adders. There are six input values and five additions in the
dataflow diagram, so these requirements will enable us to build a fully pipelined implementation.
If we were forced to reuse a component (e.g., a maximum of two adders), then we would need to
build a partially pipelined circuit.
To stay within the normal design rules for pipelines, we will register our inputs but not our outputs.
In summary, the requirements are:
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Functional requirements:
Compute the sum of six 8-bit numbers: output = a + b + c + d + e + f
Registered inputs, combinational outputs
Performance requirements:
Maximum clock period: unlimited
Maximum latency: four
Cost requirements:
Maximum of five adders
Small miscellaneous hardware (e.g. muxes) is unlimited
Maximum of six inputs and one output
Design effort is unlimited
Initial Dataow Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Our goal is to rst maximize performance and then minimize area with the bounds of the require-
ments. To maximize performance, we want a throughput of one and a minimum clock period.
Revisiting the dataow diagrams from the unpipelined Massey, we nd the two diagrams below as
promising candidates for the pipelined Massey.
[Figure: two candidate dataflow diagrams. "Original dataflow": a, b, c, d read in the first cycle and e, f later. "Final unpipelined dataflow": inputs spread over the clock cycles to meet the three-input limit]
For the unpipelined design, we rejected the original dataflow diagram because it violated the
resource requirement of a maximum of three inputs. If we fully pipeline the design, both dataflow
diagrams will use six inputs and five adders. The first diagram uses ten registers, while the second
uses eight (remember, there is no reuse of components in a fully pipelined design). However, the
first dataflow diagram has a shorter clock period, and so will lead to higher performance. Because
our primary goal is to maximize performance, we will pursue the first dataflow diagram.
Dataflow Diagram Exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
As a variation of the first dataflow diagram, we reschedule all of the inputs to be read in the first clock
cycle.
[Figure: "Variation on original dataflow": all six inputs a, b, c, d, e, f read in the first clock cycle, feeding the five adders]
The variation has the disadvantage of using one additional register. However, it has the potential
advantage of a simpler interface to the upstream environment, because all of the inputs are
provided at the same time. Conversely, this rescheduling would be a disadvantage if the upstream
environment was optimized to take advantage of the fact that e and f are produced one clock cycle
later than the other values. We do not know anything about the upstream environment, and so will
reject this variation, because it increases the number of registers that we need.
As we said before, to maximize performance, we will fully pipeline the design, so every clock cycle
boundary becomes a stage boundary. At this time, we also add a valid bit to keep track of whether
a stage has a bubble or a valid parcel.
Pipelined dataflow diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: pipelined dataflow diagram for the Massey example, with i_valid propagating through each stage boundary to o_valid]
VHDL Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
For this simple example, there are no further optimizations, and we can write the VHDL code directly
from the dataflow diagram.
-- stage 1
process begin
wait until rising_edge(clk);
r1 <= i1; r2 <= i2; r3 <= i3; r4 <= i4; v1 <= i_valid;
end process;
a1 <= r1 + r2; a2 <= r3 + r4;
-- stage 2
process begin
wait until rising_edge(clk);
r5 <= a1; r6 <= a2; r7 <= i5; r8 <= i6; v2 <= v1;
end process;
a3 <= r5 + r6; a4 <= r7 + r8;
-- stage 3
process begin
wait until rising_edge(clk);
r9 <= a3; r10 <= a4; v3 <= v2;
end process;
a5 <= r9 + r10;
-- outputs
z <= a5;
o_valid <= v3;
2.11 Memory Arrays and RTL Design
2.11.1 Memory Operations
Read of Memory with Registered Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: read of memory with registered inputs — schematic of memory M (ports WE, A, DI, DO; inputs clk, we, a; output do) and a timing diagram showing do producing M(a) in the clock cycle after the address a is registered]
Write to Memory with Registered Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: write to memory with registered inputs — schematic of memory M (inputs clk, we, a, di; output do) and a timing diagram showing M(a) updated with the registered data in the clock cycle after we and the inputs are registered, while do is unknown (U)]
Dual-Port Memory with Registered Inputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: dual-port memory with registered inputs — schematic of memory M with a write port (we, a0, di0 driving WE, A0, DI0; DO0 driving do0) and a read port (a1 driving A1; DO1 driving do1), and a timing diagram of a write through port 0 with a simultaneous read through port 1]
Sequence of Memory Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: timing diagram for a sequence of memory operations — successive writes and reads on both ports, showing the clock cycle in which each update to M(a) becomes visible on do0 and do1; a read in the same clock cycle as a write to the same address produces an unpredictable value (?)]
2.11.2 Memory Arrays in VHDL
2.11.2.1 Using a Two-Dimensional Array for Memory
A memory array can be written in VHDL as a two-dimensional array:
subtype data is std_logic_vector(7 downto 0);
type data_vector is array( natural range <> ) of data;
signal mem : data_vector(31 downto 0);
These two-dimensional arrays can be useful in high-level models and in specifications. However,
it is possible to write code using a two-dimensional array that cannot be synthesized. Also, some
synthesis tools (including Synopsys Design Compiler and FPGA Compiler) will synthesize two-dimensional
arrays very inefficiently.
The example below illustrates several features that prevent synthesis as a memory block: lack of an
interface protocol, a combinational write, multiple write ports, and multiple read ports.
architecture main of mem_not_hw is
subtype data is std_logic_vector(7 downto 0);
type data_vector is array( natural range <> ) of data;
signal mem : data_vector(31 downto 0);
begin
y <= mem( a ); -- combinational read (read port #1)
mem( a ) <= b; -- combinational write
process (clk) begin
if rising_edge(clk) then
mem( c ) <= w; -- write port #1
end if;
end process;
process (clk) begin
if rising_edge(clk) then
mem( d ) <= v; -- write port #2
end if;
end process;
u <= mem( e ); -- read port #2
end main;
2.11.2.2 Memory Arrays in Hardware
Most simple memory arrays are single- or dual-ported, support just one write operation at a time,
and have an interface protocol using a clock and write-enable.
[Figure: port diagrams of a single-port memory (WE, A, DI, DO) and a dual-port memory (WE, A0, DI0, DO0 plus A1, DO1)]
2.11.2.3 VHDL Code for Single-Port Memory Array
package mem_pkg is
subtype data is std_logic_vector(7 downto 0);
type data_vector is array( natural range <> ) of data;
end;
entity mem is
port (
clk : in std_logic;
we : in std_logic; -- write enable
a : in unsigned(4 downto 0); -- address
di : in data; -- data_in
do : out data -- data_out
);
end mem;
architecture main of mem is
signal mem : data_vector(31 downto 0);
begin
do <= mem( to_integer( a ) );
process (clk) begin
if rising_edge(clk) then
if we = '1' then
mem( to_integer( a ) ) <= di;
end if;
end if;
end process;
end main;
The VHDL code above is accurate in its behaviour and interface, but might be synthesized as
distributed memory (a large number of flip-flops in FPGA cells), which will be very large and very
slow in comparison to a block of memory.
Synopsys synthesis tools implement each bit in a two-dimensional array as a flip-flop.
Each FPGA and ASIC vendor supplies libraries of memory arrays that are smaller and faster than
a two-dimensional array of flip-flops. These libraries exploit specialized hardware on the chips to
implement the memory.
Note: To synthesize a reasonable implementation of a memory array with
Synopsys, you must instantiate a vendor-supplied memory component.
Some other synthesis tools, such as Xilinx XST, can infer memory arrays from two-dimensional
arrays and synthesize efficient implementations.
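For tools that do support inference, the usual trigger is a synchronous (registered) read. As a hedged sketch (coding-style requirements vary by tool; check your synthesis documentation), the architecture below reuses the mem entity and data_vector type from the previous section, but moves the read into the clocked process, which such tools typically map onto a block RAM:

```vhdl
-- Sketch of an inferable memory: same interface as the mem entity
-- above, but with a registered (synchronous) read.
architecture infer of mem is
  signal mem : data_vector(31 downto 0);
begin
  process (clk) begin
    if rising_edge(clk) then
      if we = '1' then
        mem( to_integer( a ) ) <= di;
      end if;
      do <= mem( to_integer( a ) );  -- read is registered
    end if;
  end process;
end infer;
```

The behavioural difference from the earlier code is that do now appears one clock cycle after the address, which matches the registered-read timing of the dedicated memory blocks.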
Recommended Design Process with Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. high-level model with two-dimensional array
2. two-dimensional array packaged inside memory entity/architecture
3. vendor-supplied component
2.11.2.4 Using Library Components for Memory
Altera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Altera uses MegaFunctions to implement RAM in VHDL. A MegaFunction is a black-box description
of hardware on the FPGA. There are tools in Quartus to generate VHDL code for RAM
components of different sizes. In E&CE 327 we will provide you with the VHDL code for the
RAM components that you will need in Lab-3 and the Project.
The APEX20KE chips that we are using have dedicated SRAM blocks called Embedded System
Blocks (ESB). Each ESB can store 2048 bits and can be configured in any of the following sizes:
Number of Elements Word Size (bits)
2048 1
1024 2
512 4
256 8
128 16
Xilinx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Use component instantiation to get these components:
ram16x1s — 16×1 single-ported memory
ram16x1d — 16×1 dual-ported memory
Other sizes are also available, consult the datasheet for your chip.
2.11.2.5 Build Memory from Slices
If the vendor's libraries of memory components do not include one that is the correct size for your
needs, you can construct your own component from smaller ones.
[Figure: two N×W components with WriteEn, Addr, and Clk shared; DataIn[W-1..0] and DataOut[W-1..0] connect to one component, DataIn[2W-1..W] and DataOut[2W-1..W] to the other]
Figure 2.7: An N×2W memory from N×W components
[Figure: two N×W components with DataIn and Clk shared; Addr[logN-1..0] addresses both components, while Addr[logN] gates WriteEn to the selected component and controls a 2:1 multiplexer on DataOut]
Figure 2.8: A 2N×W memory from N×W components
A 16×4 Memory from 16×1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity ram16x4s is
port (
clk, we : in std_logic;
data_in : in std_logic_vector(3 downto 0);
addr : in unsigned(3 downto 0);
data_out : out std_logic_vector(3 downto 0)
);
end ram16x4s;
architecture main of ram16x4s is
component ram16x1s
port (d : in std_logic; -- data in
a3, a2, a1, a0 : in std_logic; -- address
we : in std_logic; -- write enable
wclk : in std_logic; -- write clock
o : out std_logic -- data out
);
end component;
begin
mem_gen:
for i in 0 to 3 generate
ram : ram16x1s
port map (
we => we,
wclk => clk,
a3 => addr(3), a2 => addr(2),
a1 => addr(1), a0 => addr(0),
----------------------------------------------
-- d and o are dependent on i
d => data_in(i),
o => data_out(i)
----------------------------------------------
);
end generate;
end main;
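The code above implements the width extension of Figure 2.7 (a wider word from 1-bit slices sharing one address). As a sketch of the depth extension of Figure 2.8 (the entity ram32x1s and its signal names are our own, hypothetical naming; ram16x1s is the Xilinx component declared above), the extra address bit addr(4) gates the write enable and selects the output:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram32x1s is
  port (
    clk, we  : in  std_logic;
    data_in  : in  std_logic;
    addr     : in  unsigned(4 downto 0);
    data_out : out std_logic
  );
end ram32x1s;

architecture main of ram32x1s is
  component ram16x1s
    port (d : in std_logic;
          a3, a2, a1, a0 : in std_logic;
          we : in std_logic;
          wclk : in std_logic;
          o : out std_logic
         );
  end component;
  signal we_lo, we_hi : std_logic;
  signal o_lo,  o_hi  : std_logic;
begin
  -- addr(4) gates the write enable to the selected half
  we_lo <= we and not addr(4);
  we_hi <= we and     addr(4);
  lo : ram16x1s
    port map (we => we_lo, wclk => clk,
              a3 => addr(3), a2 => addr(2),
              a1 => addr(1), a0 => addr(0),
              d  => data_in, o  => o_lo);
  hi : ram16x1s
    port map (we => we_hi, wclk => clk,
              a3 => addr(3), a2 => addr(2),
              a1 => addr(1), a0 => addr(0),
              d  => data_in, o  => o_hi);
  -- addr(4) selects which half drives the output (the 2:1 mux)
  data_out <= o_lo when addr(4) = '0' else o_hi;
end main;
```

Both halves see the same low address bits and data; only the write enable and the output mux depend on addr(4).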
2.11.2.6 Dual-Ported Memory
Dual-ported memory is similar to single-ported memory, except that it allows two simultaneous
reads, or a simultaneous read and write.
When doing a simultaneous read and write to the same address, the read will usually not see the
data currently being written.
Question: Why do dual-ported memories usually not support writes on both ports?
Answer:
What should your memory do if you write different values to the same
address in the same clock cycle?
2.11.3 Data Dependencies
Definition of Three Types of Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
There are three types of data dependencies. The names come from pipeline terminology in computer
architecture.
Read after Write (true dependency):
    M[i] := ...
    ...  := M[i]
Write after Write (output dependency):
    M[i] := ...
    M[i] := ...
Write after Read (anti-dependency):
    ...  := M[i]
    M[i] := ...
Instructions in a program can be reordered, so long as the data dependencies are preserved.
Purpose of Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: producer-consumer relationships for register R3 — three writes W0, W1, W2 (each R3 := ...) and a read R1 (... := ... R3 ...); W1 is the producer and R1 the consumer]
WAW ordering prevents W0 from happening after W1.
WAR ordering prevents W2 from happening before R1.
RAW ordering prevents R1 from happening before W1.
Each of the three types of memory dependencies (RAW, WAW, and WAR) serves a specific purpose
in ensuring that producer-consumer relationships are preserved.
Ordering of Memory Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: the program below annotated with its RAW, WAW, and WAR dependency arrows among the reads and writes of M[0], M[2], and M[3]; initial memory contents M[3..0] = 30, 20, 10, 0]
Initial Program with Dependencies
M[2] := 21
M[3] := 31
A := M[2]
B := M[0]
M[3] := 32
M[0] := 01
C := M[3]
[Figure: two reorderings of the program — a Valid Modification that preserves all dependencies, and a second modification to evaluate: Valid (or Bad?)]
Answer:
Bad modification: M[3] := 32 must happen before C := M[3].
2.11.4 Memory Arrays and Dataflow Diagrams
Legend for Dataflow Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Legend: five node shapes — input port, output port, state signal, array read "name(rd)", and array write "name(wr)"]
Basic Memory Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: basic memory operations as dataflow nodes.
Memory Read — data := mem[addr]: node mem(rd) consumes mem and addr, produces data, and also produces mem via an anti-dependency arrow.
Memory Write — mem[addr] := data: node mem(wr) consumes mem, addr, and data, and produces the updated mem]
Dataflow diagrams show the dependencies between operations. The basic memory operations are
similar, in that each arrow represents a data dependency.
There are a few aspects of the basic memory operations that are potentially surprising:
- The anti-dependency arrow producing mem on a read.
- Reads and writes are dependent upon the entire previous value of the memory array.
- The write operation appears to produce an entire memory array, rather than just updating an
individual element of an existing array.
Normally, we think of a memory array as stationary. To do a read, an address is given to the array
and the corresponding data is produced. In dataflow diagrams, it may be somewhat surprising to
see the read and write operations consuming and producing memory arrays.
Our goal is to support memory operations in dataflow diagrams. We want to model memory operations
similarly to datapath operations. When we do a read, the data that is produced is dependent
upon the contents of the memory array and the address. For write operations, the apparent dependency
on, and production of, an entire memory array is because we do not know which address
in the array will be read from or written to. The anti-dependency for memory reads is related to
Write-after-Read dependencies, as discussed in Section 2.11.3. There are optimizations that can
be performed when we know the address (Section 2.11.4).
Dataflow Diagrams and Data Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Algo: mem[wr_addr] := data_in;
      data_out := mem[rd_addr];
[Dataflow diagram: mem(wr) consumes mem, data_in, and wr_addr; the memory it produces feeds mem(rd), which consumes rd_addr and produces data_out and the final mem]
Read after Write

Algo: mem[wr_addr] := data_in;
      data_out := mem[rd_addr];
[Dataflow diagram: mem(wr) and mem(rd) both consume the original mem, so the two operations can be scheduled in parallel]
Optimization when rd_addr ≠ wr_addr

Algo: mem[wr1_addr] := data1;
      mem[wr2_addr] := data2;
[Dataflow diagram: the first mem(wr) consumes mem, data1, and wr1_addr; the memory it produces feeds the second mem(wr), which consumes data2 and wr2_addr and produces the final mem]
Write after Write
Algo: mem[wr1_addr] := data1;
      mem[wr2_addr] := data2;
[Dataflow diagram: both mem(wr) operations consume the original mem, so the two writes can be scheduled in parallel]
Scheduling option when wr1_addr ≠ wr2_addr

Algo: rd_data := mem[rd_addr];
      mem[wr_addr] := wr_data;
[Dataflow diagram: mem(rd) consumes mem and rd_addr, producing rd_data and, via an anti-dependency arrow, the memory that feeds mem(wr) with wr_addr and wr_data]
Write after Read

Algo: rd_data := mem[rd_addr];
      mem[wr_addr] := wr_data;
[Dataflow diagram: the read and the write both consume the original mem, so they can be scheduled in parallel]
Optimization when rd_addr ≠ wr_addr
2.11.5 Example: Memory Array and Dataflow Diagram
[Figure: the program below drawn as a dataflow diagram — a chain of M(wr) and M(rd) operations (numbered 1–7) connected by dependency and anti-dependency arrows, beginning and ending with the memory array M]
M[2] := 21
M[3] := 31
A := M[2]
B := M[0]
M[3] := 32
M[0] := 01
C := M[3]
Figure 2.9: Memory array example code and initial dataflow diagram
The dependency and anti-dependency arrows in the dataflow diagram in Figure 2.9 are based solely
upon whether an operation is a read or a write. The arrows do not take into account the address
that is read from or written to.
In Figure 2.10, we have used knowledge about which addresses we are accessing to remove unneeded
dependencies. These are the real dependencies and match those shown in the code fragment
for Figure 2.9. In Figure 2.11 we have placed an ordering on the read operations and an ordering on
the write operations. The ordering is derived by obeying data dependencies and then rearranging
the operations to perform as many operations in parallel as possible.
[Figure: the dataflow diagram of Figure 2.9 with address knowledge applied — only the real dependencies among the operations on M[0], M[2], and M[3] remain]
Figure 2.10: Memory array with minimal dependencies
[Figure: the diagram of Figure 2.10 with an ordering imposed on the read operations (1–4) and an ordering on the write operations (1–3)]
Figure 2.11: Memory array with orderings
[Figure: the ordered operations scheduled into clock cycles]
Figure 2.12: Final version of Figure 2.9
Put as many parallel operations into the same clock cycle as allowed by resources. Preserve dependencies
by putting dependent operations in separate clock cycles.
2.12 Input / Output Protocols
An important aspect of hardware design is choosing a input/output protocol that is easy to im-
plement and suits both your circuit and your environment. Here are a few simple and common
protocols.
[Figure: waveforms of rdy, data, and ack]
Figure 2.13: Four-phase handshaking protocol
Used when the timing of communication between producer and consumer is unpredictable. The
disadvantage is that it is cumbersome to implement and slow to execute.
[Figure: waveforms of clk, data, and valid]
Figure 2.14: Valid-bit protocol
A low overhead (both in area and performance) protocol. Consumer must always be able to accept
incoming data. Often used in pipelined circuits. More complicated versions of the protocol can
handle pipeline stalls.
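As a sketch of one such extension (the signal names stall, data1, and valid1 are hypothetical), a stall input can freeze both the data register and the valid bit of a stage:

```vhdl
-- Hypothetical valid-bit stage with a stall input: when stall is
-- asserted, the parcel and its valid bit are held unchanged.
process begin
  wait until rising_edge(clk);
  if stall = '0' then
    data1  <= i_data;
    valid1 <= i_valid;
  end if;
end process;
```

When every stage of a pipeline shares the same stall signal, all parcels freeze in place together, preserving the valid-bit bookkeeping.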
[Figure: waveforms of clk, data_in, start, done, and data_out]
Figure 2.15: Start/Done protocol
A low overhead (both in area and performance) protocol. Useful when a circuit works on one piece
of data at a time and the time to compute the result is unpredictable.
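A sketch of a circuit skeleton using this protocol (all names are hypothetical; the datapath that computes result and asserts finished is elided):

```vhdl
-- Hypothetical start/done skeleton: 'busy' is an internal state bit;
-- done is pulsed for exactly one clock cycle when the result is ready.
process begin
  wait until rising_edge(clk);
  done <= '0';                -- default: done is low
  if start = '1' then
    parcel <= data_in;        -- capture the input
    busy   <= '1';
  elsif busy = '1' and finished = '1' then
    data_out <= result;       -- publish the result
    done     <= '1';          -- one-cycle pulse
    busy     <= '0';
  end if;
end process;
```

The environment asserts start for one cycle, then waits for the one-cycle done pulse before sending the next piece of data.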
2.13 Example: Moving Average
In this section we will design a circuit that performs a moving average as it receives a stream of
data. When each new data item is received, the output is the average of the four most recently
received data.
Time     0   1   2   3   4   5   6   7   8   9   10
i_data   2   3   5   6   6   0   2   2   5   3   1
o_avg                4   5   4   3
2.13.1 Requirements and Environmental Assumptions
1. Input data is sent sporadically, with at least 2 clock cycles of bubbles (invalid data) between
valid data.
2. When the input data is valid, the signal i_valid is asserted for exactly one clock cycle.
3. Input data will be 8-bit signed numbers.
4. When output data is ready, o_valid shall be asserted.
5. The output data (o_avg) shall be the average of the four most recently received input data.
Output numbers shall be truncated to integer values.
2.13.2 Algorithm
We begin by exploring the mathematical behaviour of the system. To simplify the analysis at this
abstract level, we ignore bubbles and time. We focus only on the valid data. If we had an input stream
of data x_i (e.g., x_i is the value of the i-th valid data of i_data), the equation for the output would be:
    avg_i = (x_{i-3} + x_{i-2} + x_{i-1} + x_i) / 4
To simplify our analysis of the equation, we decompose the computation into computing the sum
of the four most recent data and dividing the sum by four:
    sum_i = x_{i-3} + x_{i-2} + x_{i-1} + x_i
    avg_i = sum_i / 4
We look at the equation of sum over several iterations to try to identify patterns that we can use to
optimize our design:
    sum_5 = x_2 + x_3 + x_4 + x_5
    sum_6 = x_3 + x_4 + x_5 + x_6
    sum_7 = x_4 + x_5 + x_6 + x_7
We see that part of the calculations that are done for index i are the same as those for i + 1:
    sum_5 = x_2 + (x_3 + x_4 + x_5)
    sum_6 = (x_3 + x_4 + x_5) + x_6
          = sum_5 - x_2 + x_6
We check a few more samples and conclude that we can generalize the above for index i as:
    sum_i = sum_{i-1} - x_{i-4} + x_i
    avg_i = sum_i / 4
The equation for sum_i is dependent on x_i and x_{i-4}, therefore we need the current input value and we
need to store the four most recent input data. These four most recent data form a sliding window:
each time we receive valid data, we remove the oldest data value (x_{i-4}) and insert the new data (x_i).
Summary of system behaviour deduced from exploring requirements and algorithm:
1. Define a signal new for the value of i_data each time that i_valid is 1.
2. Define a memory array M to store a sliding window of the four most recent values of i_data.
3. Define a signal old for the oldest data value from the sliding window.
4. Update sum_i with sum_{i-1} - old_i + new_i.
Sliding Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
There are two principal ways to implement a sliding window:
shift-register Each time new data is loaded, all of the registers are loaded with the data in the
register to their right or left, and the leftmost or rightmost register is loaded with the new data:
R[0] = new and R[i] = R[i-1].
circular buffer Once a data value is loaded into the buffer, the data remains in the same location
until it is overwritten. When a new value is loaded, the new value overwrites the oldest value
in the buffer. None of the other elements in the buffer change. A state machine keeps track
of the position (address) of the oldest piece of data. The state machine increments to point
to the next register, which now holds the oldest piece of data.

[Figure: shift register — new data shifts in through M[0] while the oldest value M[3] shifts out; circular buffer — M[0..3] stay in place while a pointer tracks the old and new positions]
The circular buffer design is usually preferable, because only one element changes value per clock
cycle. This allows the buffer to be implemented with a memory array rather than a set of registers.
Also, by having only one element change value, power consumption is reduced (fewer capacitors
charging and discharging).
We have only four items to store, so we will use registers, rather than a memory array. For less than
sixteen items, registers are generally cheaper. For sixteen items, the choice between registers and
a memory array is highly dependent on the design goals (e.g., speed vs. area) and implementation
technology.
Now that we have designed the storage module, we see that rather than a write-enable and address
signal, the actual signals we need are four chip-enable signals. This suggests that we should use a
one-hot encoding for the index of the oldest element in the circular buffer.
Because we have a one-hot encoding for the index, we do not use normal multiplexers to select
which register to read from. Normal multiplexers take a binary-encoded select signal. Instead, we
will use a 4:1 decoded mux, which is just four AND gates followed by a 4-input OR gate. Because
the data is 8 bits wide, each of the AND gates and the OR gate is 8 bits wide.
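A sketch of such a decoded mux in VHDL (the entity mux4_decoded and its port names are hypothetical; in the actual design the data inputs come from the registers M[0]–M[3] and the one-hot select from idx):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity mux4_decoded is
  port (
    idx            : in  std_logic_vector(3 downto 0); -- one-hot select
    m0, m1, m2, m3 : in  std_logic_vector(7 downto 0);
    q              : out std_logic_vector(7 downto 0)
  );
end mux4_decoded;

architecture main of mux4_decoded is
  signal s0, s1, s2, s3 : std_logic_vector(7 downto 0);
begin
  -- replicate each one-hot select bit across the 8-bit data width
  s0 <= (others => idx(0));
  s1 <= (others => idx(1));
  s2 <= (others => idx(2));
  s3 <= (others => idx(3));
  -- four 8-bit AND gates followed by an 8-bit 4-input OR gate
  q <= (m0 and s0) or (m1 and s1) or (m2 and s2) or (m3 and s3);
end main;
```

Exactly one select bit is '1', so exactly one AND mask passes its data through and the OR gate merges the four masks into the selected value.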
[Figure: four 8-bit registers M[0]–M[3] with chip enables ce[0]–ce[3], a shared 8-bit data input d, and a decoded multiplexer producing q under one-hot select idx[0]–idx[3]; we and addr drive the chip-enable decoding]
Register array with chip-enables and decoded multiplexer
2.13.3 Pseudocode and Dataflow Diagrams
There are three different notations that we use to describe the behaviour of hardware systems
abstractly: mathematical equations (for datapath-centric designs), state machines (for control-dominated
designs), and pseudocode (for algorithms or designs with memory). Our pseudocode is
similar to three-address assembly code: each line of code has a target variable, an operation, and
one or two operand variables (e.g., C = A + B). The name "three address" comes from the fact
that there are three addresses, or variables, in each line.
We use the three-address style of pseudocode, because each line of pseudocode then corresponds
to a single datapath operation in the dataflow diagram. This gives us greater flexibility to optimize
the pseudocode by rescheduling operations.
From the three-address pseudocode, we will construct dataflow diagrams.
As an aside, in contrast to three-address languages, some assembly languages for extremely small
processors are limited to two addresses. The target must be the same as one of the operands (e.g.,
A = A + B).
First Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
For the first pseudocode, we do not restrict ourselves to three addresses. In the second version of
the code, we decompose the first line into two separate lines that obey the three-address restriction.
Pseudo pseudocode
new = i_data
old = M[idx]
sum = sum - old + new
M[idx] = new
idx = idx rol 1
o_avg = sum/4
Real 3-address pseudocode
new = i_data
old = M[idx]
tmp = sum - old
sum = tmp + new
M[idx] = new
idx = idx rol 1
o_avg = sum/4
Data-Dependency Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
To begin to understand what the hardware might be, we draw a data-dependency graph for the
pseudocode.
[Figure: data-dependency graph — old is read (Rd) from M at idx; the subtracter computes tmp from sum and old; the adder computes the new sum from tmp and new (i_data); new is written (Wr) to M at idx; idx is rotated by 1; o_avg is sum/4, a wired shift]
Optimizing the Data-Dependency Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
In our design work so far, we have ignored bubbles and time. As we evolve from the pseudocode
to a data-dependency graph and then to a dataflow graph, we will include the effect of the bubbles
in our analysis.
In the data-dependency graph we observe that we have two arithmetic operations: subtraction and
addition. The requirements guarantee that there are at least two clock cycles of bubbles between
each parcel of valid data, so we have the ability to reuse hardware.
In contrast, we would not be able to reuse hardware if either we had to accept new data in each
clock cycle or we needed a fully pipelined circuit. If we had to accept new data in each clock
cycle, and were not pipelined, then the work would need to be completed in a single clock cycle. If
the design was to be fully pipelined, then each parcel of data would stay in each stage for exactly
one clock cycle: there would be no opportunity for a parcel to visit a stage twice, and hence no
opportunity for reuse.
For our design, where we are attempting to reuse hardware, we hypothesize that a single adder/subtracter
is cheaper than a separate adder and subtracter. We would like to combine the two lines:
tmp = sum - old
sum = tmp + new
Looking at the data-dependency graph, we see that old is coming from memory and new is
coming from either a register or combinational logic. We cannot allocate new and old to the
same hardware, because new and old do not come from the same type of hardware: old comes
from an array of registers and new is a single register. So, we will need a multiplexer for the second
operand, to choose between reading from old or new. A multiplexer might also be required for the
first operand, to choose between sum and tmp. But both of these signals are regular signals, so
we might be able to allocate both sum and tmp to the same register or datapath output, and hence
avoid a multiplexer for the first operand. We will decide how to deal with the first operand when we
do register and datapath allocation.
We remove the need for a multiplexer for the second operand by reading new from memory. To
accomplish this, we re-write the pseudocode so that we first write i_data to memory, and then
read new from memory. The three versions of the pseudocode below show the transformations.
The data-dependency graph is for the third version of the pseudocode.
Remove intermediate signal old
new = i_data
tmp = sum - M[idx]
sum = tmp + new
M[idx] = new
idx = idx rol 1
o_avg = sum/4
Optimize code by reading new from memory
tmp = sum - M[idx]
M[idx] = i_data
new = M[idx]
sum = tmp + new
idx = idx rol 1
o_avg = sum/4
Remove intermediate signal new
tmp = sum - M[idx]
M[idx] = i_data
sum = tmp + M[idx]
idx = idx rol 1
o_avg = sum/4
Data-dependency graph after removing new
[Figure: the first Rd from M supplies the subtracter (sum - M[idx] = tmp); i_data is written (Wr) to M; a second Rd from M supplies the adder (tmp + M[idx] = sum); idx is rotated by 1; o_avg is sum/4, a wired shift]
Dataflow Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
To construct a dataflow diagram, we divide the data-dependency graph into clock cycles. Because
we are using registers rather than a memory array, we can schedule the first read and first write
operations in the same clock cycle, even though they use the same address. In contrast, with
memory arrays it generally is risky to rely on the value of the output data in the clock cycle in
which we are doing a write (Section 2.11.1).
We need a second clock cycle for the second read from memory.
We now explore two options: with and without a third clock cycle; both are shown below. The
difference between the two options is whether the signals idx and sum refer to the output of
registers or the combinational datapath units (sum being the output of the adder/subtracter and
idx being the output of a rotation). With a latency of three clock cycles, idx is a registered
signal. With a latency of two clock cycles, idx and sum are combinational.
It is a bit misleading to describe the rotate-left unit for idx as combinational, because it is simply
a wire connecting one flip-flop to another. However, conceptually and for correct behaviour, it
is helpful to think of the rotation unit as a block of combinational circuitry. This allows us to
distinguish between the output of the idx register and the input to the register (which is the output
of the rotation unit). Without this distinction, we might read the wrong value of idx and be
out-of-sync by one clock cycle.
Latency of three clock cycles
[Figure: dataflow diagram — the operations scheduled across stages S0, S1, and S2, with sum and idx as registered signals]
Latency of two clock cycles
[Figure: dataflow diagram — the same operations scheduled across stages S0 and S1, with sum and idx as combinational signals]
From a performance point of view, a latency of two is somewhat preferable. By keeping our latency
low, there may be another module that will benefit by having an additional clock cycle in which to
do its work. The counter-argument is that we have two clock cycles of bubbles, which means that
we can tolerate a latency of up to three without a need to pipeline. We'll be efficient engineers and
try to achieve a latency of two.
The two dataflow diagrams appear to be very similar, but in the dataflow diagram with a latency of
two, a multiplexer will be needed for the address signal of the circular buffer. In S0, the address
input to the circular buffer is the output of the rotator. In S1, the address is the output of a register.
To eliminate the need for a multiplexer on the address input to the circular buffer, we move the
rotation from S0 to S1, so that the address is always a registered signal.
Latency of two clock cycles with registered address
[Figure: dataflow diagram — the rotation of idx is moved from S0 to S1, so the address input to the circular buffer is always a registered signal]
Register and Datapath Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Register allocation is simple: idx and sum are each allocated to registers with their same names
(e.g., idx and sum) on the first clock cycle boundary. For the second boundary, we similarly
allocate idx to the register idx. This leaves us with the register sum for the output of the
adder/subtracter.
Datapath Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Datapath allocation is even simpler than register allocation: we have one adder/subtracter (as1)
and one rotate-left (rol).
[Figure: the dataflow diagram annotated with register and datapath allocation — both the subtraction and the addition are allocated to as1, the rotation to rol, and the registers idx and sum hold their values at the stage boundaries]
2.13.4 Control Tables and State Machine
From the dataflow diagram, we construct a control table. For the memory (M) we need: write-enable,
address, and data-input columns. For registers (idx, sum) we need chip-enable and data-input
columns. For datapath components we need data inputs, plus a control signal to determine whether
as1 does addition or subtraction. We name the signal as1.sub, where a value of true means to
do a subtraction and false means do an addition.
We proceed in two steps, first ignoring bubbles, then extending our design to handle bubbles.
Register control table
         M              idx        sum
     we  addr  d     ce  d      ce  d
S0    1  idx   x      0          1  as1
S1    0  idx          1  rol     1  as1

Datapath control table
         as1                rol
     sub  src1  src2    src1  src2
S0    0   sum   M
S1    1   sum   M       idx   1

Optimized control table
       M    idx   as1
       we   ce    sub
S0     1    0     0
S1     0    1     1
Static assignments in control table
M.addr = idx
M.d = x
idx.d = rol
sum.d = as1
as1.src1 = sum
as1.src2 = M
Control Table and Bubbles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
If the circuit always had valid parcels arriving in every other clock cycle, then we could proceed
directly from our dataflow diagram and optimized control table to VHDL code. However, the
indeterminate number of bubbles complicates the design of our state machine.
We add an idle mode to our state machine. The circuit is in idle mode when there is not a valid
parcel in the circuit. By idle, we mean that all write-enable signals are turned off, chip-enable
signals are turned off, and the state machine does not change state. The state machine for the
control table must resume in state S0 when i_valid becomes true.
In the optimized control table, sum does not need a chip enable, but with the addition of idle mode,
we will need to use a chip enable with sum.
The multiplexers for the datapath components are unaffected by the addition of idle mode. When
the circuit is in idle mode, the registers do not load new data, and so the behaviour of the datapath
components is unconstrained.
The final control table is below.
Almost final control table
       M    idx  sum  as1
       we   ce   ce   sub
S0     1    0    1    0
S1     0    1    1    1
idle   0    0    0

Final control table
       M    idx  sum  as1
       we   ce   ce   sub
S0     1    0    1    0
S1     0    1    1    1
idle   0    0    0    0
Static assignments
M.addr = idx
M.d = x
idx.d = rol
sum.d = as1
as1.src1 = sum
as1.src2 = M
State Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The state machine starts in idle, transitions to S0 when i_valid is true, then goes to S1 in the next
clock cycle, and then goes to idle.
We will use a modified one-hot encoding and use the valid-bit signals to hold the state. From the
dataflow diagram we see that the latency through the circuit is two clock cycles. We need two valid-bit
registers and will have three valid-bit signals: i_valid (input, no register needed), valid1
(register), o_valid (register). For the state encoding, we will use i_valid and valid1.
       i_valid  valid1
S0     1        0
S1     0        1
idle   0        0
Updating the control table to show the state encoding gives us:
Final control table with state encoding

         i_valid   valid1   M.we   idx.ce   sum.ce   as1.sub
  S0        1         0      1       0        1         0
  S1        0         1      0       1        1         1
  idle      0         0      0       0        0         0
Using the state encoding and the final control table, we write equations for the write-enable signals,
chip-enable signals, and the adder/subtracter control signal.
M.we = i_valid
idx.ce = valid1
sum.ce = i_valid OR valid1
as1.sub = valid1
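These four control equations can be sanity-checked with a short simulation sketch. The Python model below is ours, not part of the course code (signal names follow the control table; True/False stand in for '1'/'0'); it clocks the two valid-bit registers and records the control signals for one parcel followed by bubbles:

```python
def control_signals(i_valid, valid1):
    """Combinational control derived from the (i_valid, valid1) state encoding."""
    return {"M.we": i_valid,
            "idx.ce": valid1,
            "sum.ce": i_valid or valid1,
            "as1.sub": valid1}

def simulate(i_valid_stream):
    """Clock the two valid-bit registers, recording the control signals each cycle."""
    valid1 = o_valid = False
    trace = []
    for i_valid in i_valid_stream:
        trace.append(control_signals(i_valid, valid1))
        valid1, o_valid = i_valid, valid1   # rising clock edge shifts the valid bits
    return trace

# One valid parcel followed by bubbles: the circuit passes through S0, S1, then idle.
trace = simulate([True, False, False])
```

The three entries of `trace` reproduce the S0, S1, and idle rows of the final control table.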
2.13.5 VHDL Code
-- valid bits
process begin
wait until rising_edge(clk);
valid1 <= i_valid;
o_valid <= valid1;
end process;
-- idx
process begin
wait until rising_edge(clk);
if reset = '1' then
idx <= "0001";
else
if valid1 = '1' then
idx <= idx rol 1;
end if;
end if;
end process;
-- sliding window
process begin
wait until rising_edge(clk);
for i in 3 downto 0 loop
if (i_valid = '1') and (idx(i) = '1') then
M(i) <= i_data;
end if;
end loop;
end process;
mem_out <= M(0) when idx(0) = '1'
else M(1) when idx(1) = '1'
else M(2) when idx(2) = '1'
else M(3);
-- add sub
add_sub <= sum - mem_out when valid1 = '1'
else sum + mem_out;
-- sum
process begin
wait until rising_edge(clk);
if i_valid = '1' or valid1 = '1' then
sum <= add_sub;
end if;
end process;
Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Hardware diagram: inputs i_data and i_valid; memory M written through a wired shift; idx register (wired shift) with CE; add/sub unit; sum register with CE; valid-bit registers valid1 and o_valid; output o_avg.]
2.14 Design Problems
P2.1 Synthesis
This question is about using VHDL to implement memory structures on FPGAs.
P2.1.1 Data Structures
If you have to write your own code (i.e. you do not have a library of memory components or a
special component generation tool such as LogiBlox or CoreGen), what data structures in VHDL
would you use when creating a register file?
P2.1.2 Own Code vs Libraries
When using VHDL for an FPGA, under what circumstances is it better to write your own VHDL
code for memory, rather than instantiate memory components from a library?
P2.2 Design Guidelines
While you are grocery shopping you encounter your co-op supervisor from last year. She's now
forming a startup company in Waterloo that will build digital circuits. She's writing up the
design guidelines that all of their projects will follow. She asks for your advice on some potential
guidelines.
What is your response to each question?
What is your justification for your answer?
What are the tradeoffs between the two options?
0. Sample: Should all projects use silicon chips, or should all use biological chips, or should
each project choose its own technique?
Answer: All projects should use silicon-based chips, because biological chips don't
exist yet. The tradeoff is that if biological chips existed, they would probably consume
less power than silicon chips.
1. Should all projects use an asynchronous reset signal, or should all use a synchronous reset
signal, or should each project choose its own technique?
2. Should all projects use latches, or should all projects use flip-flops, or should each project
choose its own technique?
3. Should all chips have registers on the inputs and outputs or should chips have the inputs
and outputs directly connected to combinational circuitry, or should each project choose
its own technique? By register we mean either flip-flops or latches, based upon your
answer to the previous question. If your answer is different for inputs and outputs, explain
why.
4. Should all circuit modules on all chips have flip-flops on the inputs and outputs or should
chips have the inputs and outputs directly connected to combinational circuitry, or
should each project choose its own technique? By register we mean either flip-flops or
latches, based upon your answer to the previous question. If your answer is different for
inputs and outputs, explain why.
5. Should all projects use tri-state buffers, or should all projects use multiplexors, or should
each project choose its own technique?
P2.3 Dataflow Diagram Optimization
Use the dataflow diagram below to answer problems P2.3.1 and P2.3.2.
[Dataflow diagram: inputs a, b, c, d, e; components f, f, f, g, g.]
P2.3.1 Resource Usage
List the number of items for each resource used in the dataflow diagram.
P2.3.2 Optimization
Draw an optimized dataflow diagram that improves the performance and produces the same output
values. Or, if the performance cannot be improved, describe the limiting factor on the performance.
NOTES:
you may change the times when signals are read from the environment
you may not increase the resource usage (input ports, registers, output ports, f components,
g components)
you may not increase the clock period
P2.4 Dataflow Diagram Design
Your manager has given you the task of implementing the following pseudocode in an FPGA:
if is_odd(a + d)
    p = (a + d) * 2 + ((b + c) - 1)/4;
else
    p = (b + c) * 2 + d;
NOTES: 1) You must use registers on all input and output ports.
2) p, a, b, c, and d are to be implemented as 8-bit signed signals.
3) A 2-input 8-bit ALU that supports both addition and subtraction takes 1
clock cycle.
4) A 2-input 8-bit multiplier or divider takes 4 clock cycles.
5) A small amount of additional circuitry (e.g. a NOT gate, an AND gate, or a
MUX) can be squeezed into the same clock cycle(s) as an ALU operation,
multiply, or divide.
6) You can require that the environment provides the inputs in any order and
that it holds the input signals at the same value for multiple clock cycles.
P2.4.1 Maximum Performance
What is the minimum number of clock cycles needed to implement the pseudocode with a circuit
that has two input ports?
What is the minimum number of ALUs, multipliers, and dividers needed to achieve the minimum
number of clock cycles that you just calculated?
P2.4.2 Minimum area
What is the minimum number of datapath storage registers (8, 6, 4, and 1 bit) and clock cycles
needed to implement the pseudocode if the circuit can have at most one ALU, one multiplier, and
one divider?
P2.5 Michener: Design and Optimization
Design a circuit named michener that performs the following operation: z = (a + d) + ((b - c) - 1)
NOTES:
1. Optimize your design for area.
2. You may schedule the inputs to arrive at any time.
3. You may do algebraic transformations of the specification.
P2.6 Dataflow Diagrams with Memory Arrays
Component                           Delay
Register                            5 ns
Adder                               25 ns
Subtracter                          30 ns
ALU with +, -, >, =, >=, AND, XOR   40 ns
Memory read                         60 ns
Memory write                        60 ns
Multiplication                      65 ns
2:1 Multiplexor                     5 ns
NOTES:
1. The inputs of the algorithms are a and b.
2. The outputs of the algorithms are p and q.
3. You must register both your inputs and outputs.
4. You may choose to read your input data values at any time and produce your outputs at any
time. For your inputs, you may read each value only once (i.e. the environment will not send
multiple copies of the same value).
5. Execution time is measured from when you read your first input until the latter of producing
your last output or the completion of writing a result to memory.
6. M is an internal memory array, which must be implemented as dual-ported memory with one
read/write port and one read port.
7. M supports synchronous write and asynchronous read.
8. Assume all memory address and other arithmetic calculations are within the range of representable
numbers (i.e. no overflows occur).
9. If you need a circuit not on the list above, assume that its delay is 30 ns.
10. You may sacrifice area efficiency to achieve high performance, but marks will be deducted
for extra hardware that does not contribute to performance.
P2.6.1 Algorithm 1
Algorithm
q = M[b];
M[a] = b;
p = M[b+1] * a;
Assuming a b, draw a dataflow diagram that is optimized for the fastest overall execution
time.
P2.6.2 Algorithm 2
q = M[b];
M[a] = q;
p = (M[b-1] * b) + M[b];
Assuming a > b, draw a dataflow diagram that is optimized for the fastest overall execution
time.
P2.7 2-bit adder
This question compares an FPGA and a generic-gates implementation of a 2-bit full adder.
P2.7.1 Generic Gates
Show the implementation of a 2-bit adder using NAND, NOR, and NOT gates.
P2.7.2 FPGA
Show the implementation of a 2-bit adder using generic FPGA cells; show the equations for the
lookup tables.
[Figure: two generic FPGA cells, each a combinational lookup table (comb) driving a D flip-flop with CE, S, and R; inputs a[0], b[0], c_in and a[1], b[1]; outputs sum[0], sum[1], c_out; internal carry signal carry_1.]
P2.8 Sketches of Problems
1. calculate resource usage for a dataflow diagram (input ports, output ports, registers, datapath
components)
2. calculate performance data for a dataflow diagram (clock period and number of cycles to
execute (CPI))
3. given a dataflow diagram, calculate the clock period that will result in the optimum performance
4. given an algorithm, design a dataflow diagram
5. given a dataflow diagram, design the datapath and finite state machine
6. optimize a dataflow diagram to improve performance or reduce resource usage
7. given an fsm diagram, pick the VHDL code that best implements the diagram (correct behaviour;
simple, fast hardware) or critique hardware
Chapter 3
Performance Analysis and Optimization
3.1 Introduction
Hennessy and Patterson's Computer Architecture: A Quantitative Approach (textbook for E&CE 429) has good
information on performance. We will use some of the same definitions and formulas as Hennessy
and Patterson, but we will move away from generic definitions of performance for computer systems
and focus on performance for digital circuits.
3.2 Defining Performance

Performance = Work / Time

You can double your performance by:
doing twice the work in the same amount of time
OR doing the same amount of work in half the time
Benchmarking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Measuring time is easy, but how do we accurately measure work?
The game of benchmarketing is finding a definition of work that makes your system appear to get
the most work done in the least amount of time.
Measure of Work     Measure of Performance
clock cycle         MHz
instruction         MIPs
synthetic program   Whetstone, Dhrystone, D-MIPs (Dhrystone MIPs)
real program        SPEC
travel 1/4 mile     drag race
The SPEC benchmarks are among the most respected and accurate predictions of real-world performance.
Definition SPEC: Standard Performance Evaluation Corporation. MISSION: To
establish, maintain, and endorse a standardized set of relevant benchmarks and
metrics for performance evaluation of modern computer systems.
http://www.spec.org.
The SPEC organization has different benchmarks for integer software, floating-point software, web-serving
software, etc.
3.3 Comparing Performance
3.3.1 General Equations
Equation for Big is n% greater than Small:

n% = (Big - Small) / Small
For the above equation, it can be difficult to remember whether the denominator is the larger
number or the smaller number. To see why Small is the only sensible choice, consider the situation
where a is 100% greater than b. This means that the difference between a and b is 100% of
something. Our only variables are a and b. It would be nonsensical for the difference to be a,
because that would mean: a - b = a. However, if a - b = b, then for a to be 100% greater than b
simply means that a = 2b.
Using the n% greater formula, the phrase The performance of A is n% greater than the performance
of B is:

n% = (Performance_A - Performance_B) / Performance_B
Performance is inversely proportional to time:

Performance = 1 / Time

Substituting the above equation into the equation for the performance of A is n% greater than the
performance of B gives:

n% = (Time_B - Time_A) / Time_A

In general, the equation for a fast system to be n% faster than a slow system is:

n% = (TSlow - TFast) / TFast
Another useful formula is the average time to do one of k different tasks, each of which happens
%i of the time and takes an amount of time Ti each time it is done:

TAvg = sum over i = 1..k of (%i)(Ti)
We can measure the performance of practically anything (cars, computers, vacuum cleaners, printers, ...).
3.3.2 Example: Performance of Printers
            Black and White   Colour
printer1    9 ppm             6 ppm
printer2    12 ppm            4 ppm
Question: Which printer is faster at B&W and how much faster is it?
Answer:
BW Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
n% faster = (TSlow - TFast) / TFast

BW1 = 1 / 9 ppm = 0.1111 min/page
BW2 = 1 / 12 ppm = 0.0833 min/page

BWFaster = (TSlow - TFast) / TFast
         = (BW1 - BW2) / BW2
         = (0.1111 - 0.0833) / 0.0833
         = 33% faster
Performance for Different Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: If average workload is 90% BW and 10% Colour, which printer is faster
and how much faster is it?
Answer:
TAvg1 = %BW × BW1 + %C × C1
      = (0.90 × 0.1111) + (0.10 × 0.1667)
      = 0.1167 min/page

TAvg2 = %BW × BW2 + %C × C2
      = (0.90 × 0.0833) + (0.10 × 0.2500)
      = 0.1000 min/page

AvgFaster = (TSlow - TFast) / TFast
          = (TAvg1 - TAvg2) / TAvg2
          = (0.1167 - 0.1000) / 0.1000
          = 16.7% faster
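The printer arithmetic above is easy to verify mechanically. A small Python check (ours, not from the notes; ppm values from the table, converted to min/page):

```python
def pct_faster(t_slow, t_fast):
    # n% faster = (TSlow - TFast) / TFast
    return (t_slow - t_fast) / t_fast

bw1, c1 = 1/9, 1/6    # printer1: 9 ppm B&W, 6 ppm colour (times in min/page)
bw2, c2 = 1/12, 1/4   # printer2: 12 ppm B&W, 4 ppm colour

print(round(100 * pct_faster(bw1, bw2)))   # -> 33  (printer2 on B&W)

tavg1 = 0.90 * bw1 + 0.10 * c1
tavg2 = 0.90 * bw2 + 0.10 * c2
print(round(100 * pct_faster(tavg1, tavg2), 1))   # -> 16.7  (printer2 on average)
```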
Optimizing Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: If we want to optimize printer1 to match performance of printer2, should
we optimize BW or Colour printing?
Answer:
Colour printing is slower, so it appears that we can save more time by optimizing
colour printing.
However, look at the extreme case of optimizing colour printing to be
instantaneous for P1:
[Bar chart: average time per page (0.000 to 0.150 min/page) for P1 and P2, with P1's colour printing made instantaneous.]
Even if we make colour printing instantaneous for printer 1 and keep it the same for
printer 2, printer 1 would not be measurably faster.
Amdahl's law: Make the common case fast. Optimizations need to take into account
both run time and frequency of occurrence.
We should optimize black and white printing.
Question: If you have to fire all of the engineers because your stock price plummeted,
how can you get printer1 to be faster than printer2?
Note: This question was actually humorous during the high-tech bubble of
2000...
Answer:
Hire more marketing people!
Notice that colour printing on printer 1 is faster than on printer 2. So,
marketing suggests that people are increasing the percentage of printing that
is done in colour.
Question: Revised question: what percentage of printing must be done in colour for
printer1 to beat printer2?
Answer:
TAvg1 <= TAvg2
%BW × BW1 + %C × C1 <= %BW × BW2 + %C × C2
%BW = 1 - %C
(1 - %C) × BW1 + %C × C1 <= (1 - %C) × BW2 + %C × C2
BW1 + %C × (C1 - BW1) <= BW2 + %C × (C2 - BW2)
%C >= (BW1 - BW2) / (BW1 - BW2 + C2 - C1)
%C >= (0.1111 - 0.0833) / (0.1111 - 0.0833 + 0.2500 - 0.1667)
%C >= 0.25
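A quick Python check of the break-even point (ours, not from the notes; exact ppm fractions rather than the rounded min/page values):

```python
bw1, c1 = 1/9, 1/6    # printer1 times in min/page (9 ppm B&W, 6 ppm colour)
bw2, c2 = 1/12, 1/4   # printer2 times in min/page (12 ppm B&W, 4 ppm colour)

# Smallest colour fraction %C for which TAvg1 <= TAvg2
pc = (bw1 - bw2) / ((bw1 - bw2) + (c2 - c1))
print(round(pc, 2))   # -> 0.25

# Sanity check: at the break-even point the two average times are equal.
tavg1 = (1 - pc) * bw1 + pc * c1
tavg2 = (1 - pc) * bw2 + pc * c2
```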
3.4 Clock Speed, CPI, Program Length, and Performance
3.4.1 Mathematics
CPI Cycles per instruction
NumInsts Number of instructions
ClockSpeed Clock speed
ClockPeriod Clock period
Time = NumInsts × CPI × ClockPeriod

Time = (NumInsts × CPI) / ClockSpeed
3.4.2 Example: CISC vs RISC and CPI
                  Clock Speed   SPECint
AMD Athlon        1.1 GHz       409
Fujitsu SPARC64   675 MHz       443
The AMD Athlon is a CISC microprocessor (it uses the IA-32 instruction set). The Fujitsu
SPARC64 is a RISC microprocessor (it uses Sun's Sparc instruction set). Assume that it requires
20% more instructions to write a program in the Sparc instruction set than the same program requires
in IA-32.
Question: Which of the two processors has higher performance?
Answer:
SPECint, SPECfp, and SPEC are measures of performance. Therefore, the
higher the SPEC number, the higher the performance. The Fujitsu SPARC64
has higher performance.
Question: What is the ratio between the CPIs of the two microprocessors?
Answer:
We will use A as the subscript for the Athlon and S as the subscript for the
Sparc.

Time = (NumInsts × CPI) / ClockSpeed
CPI = (Time × ClockSpeed) / NumInsts
CPI = ClockSpeed / (Perf × NumInsts)

CPI_A / CPI_S = (ClockSpeed_A / (Perf_A × NumInsts_A)) × ((Perf_S × NumInsts_S) / ClockSpeed_S)

ClockSpeed_A = 1.1    ClockSpeed_S = 0.675
Perf_A = 409          Perf_S = 443
NumInsts_S = 1.2 × NumInsts_A

CPI_A / CPI_S = (1.1 / (409 × NumInsts_A)) × ((443 × 1.2 × NumInsts_A) / 0.675)
             = 2.1
             = 110% more
Executing the average Athlon instruction requires 110% more clock cycles
than executing the average Sparc instruction.
Stated more awkwardly: executing the average Athlon instruction requires
210% of the clock cycles required to execute the average Sparc instruction.
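The ratio can be verified numerically. A small Python check (ours, not from the notes); the unknown NumInsts_A cancels out of the ratio, leaving only known quantities:

```python
# CPI = ClockSpeed / (Perf * NumInsts), so in CPI_A / CPI_S the
# instruction counts cancel (NumInsts_S = 1.2 * NumInsts_A).
clock_a, clock_s = 1.1e9, 0.675e9   # Hz
perf_a, perf_s = 409, 443           # SPECint scores
insts_ratio = 1.2                   # NumInsts_S / NumInsts_A

ratio = (clock_a / perf_a) * (perf_s * insts_ratio / clock_s)
print(round(ratio, 1))   # -> 2.1
```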
Question: Can you determine the absolute (actual) CPI of either microprocessor?
Answer:
To determine the absolute CPI, we would need to know the actual number of
instructions executed by at least one of the processors.
3.4.3 Effect of Instruction Set on Performance
Your group designs a microprocessor and you are considering adding a fused multiply-accumulate
to the instruction set. (A fused multiply accumulate is a single instruction that does both a multiply
and an addition. It is often used in digital signal processing.)
Your studies have shown that, on average, half of the multiply operations are followed by an add
instruction that could be done with a fused multiply-add.
Additionally, you know:
          CPI            %
ADD       0.8 × CPIavg   15%
MUL       1.2 × CPIavg   5%
Other     1.0 × CPIavg   80%
You have three options:
option 1 : no change
option 2 : add the MAC instruction, increase the clock period by 20%, and MAC has the same
CPI as MUL.
option 3 : add the MAC instruction, keep the clock period the same, and the CPI of a MAC is
50% greater than that of a multiply.
Question: Which option will result in the highest overall performance?
Answer:
Time = (NumInsts × CPI) / ClockSpeed
Perf = ClockSpeed / (NumInsts × CPI)

We need to find NumInsts, CPI, and ClockSpeed for each of the three
options. Option 1 is the baseline, so we will define values for variables in
Options 2 and 3 in terms of the Option 1 variables.
Options 2 and 3 will have the same number of instructions. Half of the
multiply instructions are followed by an add that can be fused.
In questions that involve changing both CPI and NumInsts, it is often easiest
to work with the product of CPI and NumInsts, which represents the total
number of clock cycles needed to execute the program. Additionally, set the
problem up with an imaginary program of 100 instructions on the baseline
system.

NumMAC_2 = 0.5 × NumMUL_1 = 0.5 × 5 = 2.5
NumMUL_2 = 0.5 × NumMUL_1 = 0.5 × 5 = 2.5
NumADD_2 = NumADD_1 - 0.5 × NumMUL_1 = 15 - 0.5 × 5 = 12.5

Find the total number of clock cycles for each option.

Cycles_1 = NumMUL_1 × CPI_MUL + NumADD_1 × CPI_ADD + NumOth_1 × CPI_Oth
         = (5 × 1.2) + (15 × 0.8) + (80 × 1.0)
         = 98

Cycles_2 = (NumMAC_2 × CPI_MAC) + (NumMUL_2 × CPI_MUL) + (NumADD_2 × CPI_ADD) + (NumOth_2 × CPI_Oth)
         = (2.5 × 1.2) + (2.5 × 1.2) + (12.5 × 0.8) + (80 × 1.0)
         = 96

Cycles_3 = (NumMAC_3 × CPI_MAC) + (NumMUL_3 × CPI_MUL) + (NumADD_3 × CPI_ADD) + (NumOth_3 × CPI_Oth)
         = (2.5 × (1.5 × 1.2)) + (2.5 × 1.2) + (12.5 × 0.8) + (80 × 1.0)
         = 97.5
Calculate performance for each option using the formula:

Performance = 1 / (Cycles × ClockPeriod)

Performance_1 = 1 / (98 × 1) = 1/98
Performance_2 = 1 / (96 × 1.2) = 1/115
Performance_3 = 1 / (97.5 × 1) = 1/97.5

The third option is the fastest.
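A short Python check of the three options (ours, not from the notes), using the 100-instruction baseline program:

```python
# Baseline mix for 100 instructions: 5 MULs, 15 ADDs, 80 others.
cpi = {"ADD": 0.8, "MUL": 1.2, "OTH": 1.0}

cycles1 = 5 * cpi["MUL"] + 15 * cpi["ADD"] + 80 * cpi["OTH"]

# Options 2 and 3: half of the multiplies fuse with a following add.
n_mac, n_mul, n_add, n_oth = 2.5, 2.5, 12.5, 80
cycles2 = n_mac * cpi["MUL"] + n_mul * cpi["MUL"] \
        + n_add * cpi["ADD"] + n_oth * cpi["OTH"]
cycles3 = n_mac * 1.5 * cpi["MUL"] + n_mul * cpi["MUL"] \
        + n_add * cpi["ADD"] + n_oth * cpi["OTH"]

# Performance = 1 / (Cycles * ClockPeriod); option 2 stretches the period by 20%.
perf = {1: 1 / (cycles1 * 1.0), 2: 1 / (cycles2 * 1.2), 3: 1 / (cycles3 * 1.0)}
best = max(perf, key=perf.get)
print(cycles1, cycles2, cycles3, best)   # -> 98.0 96.0 97.5 3
```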
3.4.4 Effect of Time to Market on Relative Performance
Assume that performance of the average product in your market segment doubles every 18 months.
You are considering an optimization that will improve the performance of your product by 7%.
Question: If you add the optimization, how much can you allow your schedule to slip
before the delay hurts your relative performance compared to not doing the
optimization and launching the product according to your current schedule?
Answer:
P(t) = performance at time t = P_0 × 2^(t/18)

From the problem statement:
P(t) = 1.07 × P_0

Equate the two equations for P(t), then solve for t:
1.07 × P_0 = P_0 × 2^(t/18)
2^(t/18) = 1.07
t/18 = log2(1.07)
t = 18 × log2(1.07)

Use: log_b(x) = log(x) / log(b)

t = 18 × (log 1.07 / log 2)
  = 1.76 months
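A one-line Python check of the slip calculation (ours, not from the notes):

```python
import math

# Market performance doubles every 18 months; the optimization buys 7%.
# Solve P0 * 2**(t/18) = 1.07 * P0 for t.
t = 18 * math.log2(1.07)
print(round(t, 2))   # -> 1.76 (months)
```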
3.4.5 Summary of Equations
Time to perform a task:
Time = (NumInsts × CPI) / ClockSpeed

Average time to do one of k different tasks:
TAvg = sum over i = 1..k of (%i)(Ti)

Performance:
Performance = Work / Time

Speedup:
Speedup = TSlow / TFast

TFast is n% faster than TSlow:
n% faster = (TSlow - TFast) / TFast

Performance at time t if performance increases by a factor of k every n units of time:
Perf(t) = Perf(0) × k^(t/n)
3.5 Performance Analysis and Dataflow Diagrams
3.5.1 Dataflow Diagrams, CPI, and Clock Speed
One of the challenges in designing a circuit is to choose the clock speed. Increasing the clock
speed of a circuit might not improve its performance. In this section we will work through several
example dataflow diagrams to pick a clock speed for the circuit and schedule operations into clock
cycles.
When partitioning dataflow diagrams into clock cycles, we need to choose a clock period. Choosing
a clock period affects many aspects of the design, not just the overall performance. Different
design goals might put conflicting pressure on the clock period: some goals will tend toward short
clock periods and some goals will tend toward long clock periods. For performance, not only is
clock period a poor indicator of the relative performance of two different systems, even for the
same system decreasing the clock period might not increase the performance.
Goal                                Action                  Effect
Minimize area                       decrease clock period   fewer operations per clock cycle, so
                                                            fewer datapath components and more
                                                            opportunities to reuse hardware
Increase scheduling flexibility     increase clock period   more flexibility in grouping operations
                                                            in clock cycles
Decrease percentage of clock        increase clock period   decreases number of flip-flops that data
cycle spent in flip-flops                                   traverses through
(overhead: time in flip-flops is
not doing useful work)
Decrease time to execute an         ????                    depends on dataflow diagram
instruction
Our general plan to find the clock period for maximum performance is:
1. Pick the clock period to be the delay through the slowest component + the delay through a flip-flop.
2. For each instruction, for each operation, schedule the operation in the earliest clock cycle
possible without violating clock-period timing constraints.
3. Calculate the average time to execute an instruction as:

   Combine: Time = (NumInsts × CPI) / ClockSpeed
   and: CPI_avg = sum over i = 1..k of %i × CPI_i
   to derive: Time = NumInsts × (sum over i = 1..k of %i × CPI_i) / ClockSpeed
4. If the maximum latency through the dataflow diagram is greater than 1, then increase the clock
period by the minimum amount needed to decrease the latency by one clock period and return to
Step 2.
5. If the maximum latency through the dataflow diagram is 1, then the clock period for highest
performance is the clock period resulting in the fastest Time.
6. If possible, adjust the schedule of operations to reduce the maximum number of occurrences
of a component per instruction per clock cycle without increasing the latency for any instruction.
3.5.2 Examples of Dataflow Diagrams for Two Instructions
The circuit supports two instructions, A and B (e.g. multiply and divide). At any point in time, the
circuit is doing either A or B; it does not need to support doing A and B simultaneously.
The diagrams below show the flow for each instruction and the delay through the components
(f, g, h, i) that the instructions use.
The delay through a register is 5 ns.
Each operation (A and B) occurs 50% of the time.
Our goal is to find a clock period and dataflow diagram for the circuit that will give us the highest
overall performance.
Instruction A: f (30 ns) → g (50 ns) → h (20 ns) → g (50 ns)
Instruction B: i (40 ns) → g (50 ns)
3.5.2.1 Scheduling of Operations for Different Clock Periods
[Scheduling diagrams: the operations of Instr A (f, g, h, g) and Instr B (i, g) packed into clock cycles for clock periods of 55 ns (A: 4 cycles, B: 2), 75 ns (A: 3, B: 2), 85 ns (A: 2, B: 2), 95 ns (A: 2, B: 1), and 155 ns (A: 1, B: 1).]
3.5.2.2 Performance Computation for Different Clock Periods
Question: Which clock speed will result in the highest overall performance?
Answer:

Clock Period   CPI_A   CPI_B   Tavg
55 ns          4       2       55 × (0.5×4 + 0.5×2) = 165
75 ns          3       2       75 × (0.5×3 + 0.5×2) = 187.5
85 ns          2       2       85 × (0.5×2 + 0.5×2) = 170
95 ns          2       1       95 × (0.5×2 + 0.5×1) = 142.5
155 ns         1       1       155 × (0.5×1 + 0.5×1) = 155
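The Tavg column can be generated mechanically. A small Python check (ours, not from the notes), with the CPI values read off the scheduling diagrams:

```python
# Tavg = ClockPeriod * (0.5*CPI_A + 0.5*CPI_B), since each instruction
# occurs 50% of the time.
options = {55: (4, 2), 75: (3, 2), 85: (2, 2), 95: (2, 1), 155: (1, 1)}

tavg = {clk: clk * (0.5 * cpi_a + 0.5 * cpi_b)
        for clk, (cpi_a, cpi_b) in options.items()}
best = min(tavg, key=tavg.get)
print(best, tavg[best])   # -> 95 142.5
```

The 95 ns clock period gives the lowest average execution time, and hence the highest performance.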
3.5.2.3 Example: Two Instructions Taking Similar Time
Question: For the flow below, which clock speed will result in the highest overall
performance?
A       B
30 ns   40 ns
50 ns   50 ns
20 ns   40 ns
50 ns
Answer:
[Scheduling diagrams packing the operations of Instr A and Instr B into clock cycles for clock periods of 55 ns, 75 ns, 85 ns, 95 ns, 105 ns, and 135 ns.]
We should skip 105 ns, because it has the same latency as 95 ns.
[Scheduling diagram for a clock period of 155 ns.]
Clock Period   CPI_A   CPI_B   Tavg
55 ns          4       3       193
75 ns          3       3       225
85 ns          2       3       213
95 ns          2       2       190
105 ns         2       2       NO GAIN
135 ns         2       1       203
155 ns         1       1       155
A clock period of 155 ns results in the highest performance.
For a clock period of 105 ns, we did not calculate the performance, because
we could see that it would be worse than the performance with a clock period
of 95 ns. The dataflow diagram with a 105 ns clock period has the same
latency as the diagram with a clock period of 95 ns. If the dataflow diagram
with the longer clock period has the same latency as the diagram with the
shorter clock period, then the diagram with the longer clock period will have
lower performance.
3.5.2.4 Example: Same Total Time, Different Order for A
Question: For the flow below, which clock speed will result in the highest overall
performance?
A       B
30 ns   40 ns
20 ns   50 ns
50 ns   40 ns
50 ns
Answer:

Clock Period   CPI_A   CPI_B   Tavg
55 ns          3       3       165 ns
95 ns          3       2       238 ns
105 ns         2       2       210 ns
135 ns         2       1       203 ns
155 ns         1       1       155 ns

A clock period of 155 ns results in the lowest average
execution time, and hence the highest performance.
This is the same answer as the previous problem,
but the total times for higher clock frequencies
differ significantly between the two problems.
3.5.3 Example: From Algorithm to Optimized Dataflow
This question involves doing some of the design work for a circuit that implements InstP and InstQ
using the components described below.

Instruction   Algorithm                        Frequency of Occurrence
InstP         (a×b) × ((a×b) + (b×d) + e)      75%
InstQ         (i + j + k + l) × m              25%

Component Delays
2-input Mult   40 ns
2-input Add    25 ns
Register       5 ns

NOTES
There is a resource limitation of a maximum of 3 input ports. (There are no other resource
limitations.)
You must put registers on your inputs; you do not need to register your outputs.
The environment will directly connect your outputs (its inputs) to registers.
Each input value (a, b, c, d, e, i, j, k, l, m) can be input only once; if you need to use a value
in multiple clock cycles, you must store it in a register.
Question: What clock period will result in the best overall performance?
Answer:
Algorithm Answers (InstP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Dataflow diagrams for InstP:
InstP data-dependency graph.
InstP: common subexpression elimination.
InstP: alternative data-dependency graph. Both options have a critical path of 2 mults + 2 adds. The first option allows three operations to be done with just three inputs (a, b, d). The second option requires all four inputs to do three operations.
InstP: clock=45ns, lat=4, T=200ns.
InstP: clock=55ns, lat=3, T=165ns.
InstP: clock=70ns, lat=2, T=140ns.
InstP: illegal: 4 inputs.
InstP: dataflow diagram with the alternative data-dependency graph. Adds a third clock cycle without any gain in clock speed. From the diagram, it is clear that it is better to put a×b in the first clock cycle and e in the second, because a×b can be done in parallel with b×d.]
Fastest option for InstP is 70ns clock, which gives a total execution time of
140 ns.
Algorithm Answers (InstQ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Dataflow diagrams for InstQ:
InstQ: data-dependency graph with max parallelism.
InstQ: alternative data-dependency graph: able to do two operations with three inputs, while the first data-dependency graph required four inputs to do two operations. We are limited to three inputs, so we choose this data-dependency graph for the dataflow diagrams.
InstQ: clock=50ns, lat=4, T=200ns.
InstQ: clock=55ns, lat=3, T=165ns.
InstQ: clock=70ns, lat=2, T=140ns.
InstQ: irrelevant: latency did not decrease.
InstQ: clock=120ns, lat=1, T=120ns.
InstQ: final dataflow diagram with a 70 ns clock.]
The fastest option for InstQ is a 70 ns clock, which gives a total execution time of
140 ns.
Both InstP and InstQ need a 70 ns clock period to maximize their
performance. So, use a 70 ns clock, which gives a latency of 2 clock cycles for
both instructions.
Fastest execution time: 140 ns
Clock period: 70 ns
Question: Find a minimal set of resources that will achieve the performance you
calculated.
Answer:
Final dataflow graphs for InstP and InstQ:
[InstP: clock=70ns, lat=2, T=140ns. InstQ: clock=70ns, lat=2, T=140ns.]
We need to do only one of InstP and InstQ at any time, so simply take the max of each
resource.

              InstP   InstQ   System
Inputs        3       3       3
Outputs       1       1       1
Registers     3       3       3
Adders        2       2       2
Multipliers   2       1       2
Question: Design the datapath and state machine for your design
Answer:
[Datapath and state-machine diagrams: InstP and InstQ scheduled over states S0 and S1 on a datapath with registers r1, r2, r3 (fed from inputs i1, i2, i3), multipliers m1, m2, adders a1, a2, and output o1.]
Control Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
            r1       r2       r3       m1         m2         a1         a2
            ce mux   ce mux   ce mux   src1 src2  src1 src2  src1 src2  src1 src2
InstP S0    1  i1    1  i2    1  i3    r1 r2 r3 a1 m1 m2
InstP S1    1  a2    1  i2    1  m1    r2 r3 r1 r2
InstQ S0    1  i1    1  i2    1  i3    a1 r3 r1 r2 a1 r3
InstQ S1    1  a2    1  i2    1  i3    r1 r2
Optimize Control Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
            r1    r2    r3    m1         m2         a1         a2
            mux   mux   mux   src1 src2  src1 src2  src1 src2  src1 src2
InstP S0    i1    i2    i3    r1   r2    a1   r3    r1   r2    m1   m2
InstP S1    a2    i2    m1    r1   r2    r2   r3    r1   r2    m1   m2
InstQ S0    i1    i2    i3    r1   r2    a1   r3    r1   r2    a1   r3
InstQ S1    a2    i2    i3    r1   r2    r2   r3    r1   r2    a1   r3
Write VHDL Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Use the optimized control table as the basis for the VHDL code.
process (clk) begin
  if rising_edge(clk) then
    if state=S0 then
      r1 <= i1;
    else
      r1 <= a2;
    end if;
  end if;
end process;
process (clk) begin
  if rising_edge(clk) then
    r2 <= i2;
  end if;
end process;
process (clk) begin
  if rising_edge(clk) then
    if inst=instP and state=S0 then
      r3 <= m1;
    else
      r3 <= i3;
    end if;
  end if;
end process;
m1 <= r1 * r2;
m2_src1 <= r2 when state=S0
           else a1;
m2 <= m2_src1 * r3;
a1 <= r1 + r2;
a2 <= a2_src1 + a2_src2;
process (inst, m1, m2, a1, r3) begin
if inst=instP then
a2_src1 <= m1;
a2_src2 <= m2;
else
a2_src1 <= a1;
a2_src2 <= r3;
end if;
end process;
3.6 General Optimizations
3.6.1 Strength Reduction
Strength reduction replaces one operation with another that is simpler.
3.6.1.1 Arithmetic Strength Reduction
Multiply by a constant power of two → wired shift logical left
Multiply by a power of two → shift logical left
Divide by a constant power of two → wired shift logical right
Divide by a power of two → shift logical right
Multiply by 3 → wired shift and addition
3.6.1.2 Boolean Strength Reduction
Boolean tests that can be implemented as wires:
is odd, is even : least significant bit
is neg, is pos : most significant bit
NOTE: use is_odd(a) rather than a(0)
By choosing your encodings carefully, you can sometimes reduce a vector comparison to a wire.
For example, if your state uses a one-hot encoding, then the comparison state = S3 reduces
to state(3) = '1'. You might expect a reasonable logic-synthesis tool to do this reduction
automatically, but most tools do not.
When using encodings other than one-hot, Karnaugh maps can be useful tools for optimizing vector
comparisons. By carefully choosing our state assignments when we use a full binary encoding for
8 states, the comparison:
(state = S0 or state = S3 or state = S4)
can be reduced from looking at 3 bits to looking at just 2 bits. If we have a condition that is true
for four states, then we can find an encoding that looks at just 1 bit.
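As a sanity check of the last claim (a condition true for four of eight states can be made to depend on a single bit), here is a small Python sketch; the state names and the encoding are invented for illustration:

```python
# The condition is true for exactly four of eight states. By assigning
# those four states the codes whose most significant bit is 1, the
# whole vector comparison collapses to a one-bit test.
true_states = {"S0", "S3", "S4", "S6"}
false_states = {"S1", "S2", "S5", "S7"}

encoding = {}
for i, s in enumerate(sorted(true_states)):
    encoding[s] = 0b100 | i          # MSB = 1 for condition-true states
for i, s in enumerate(sorted(false_states)):
    encoding[s] = i                  # MSB = 0 for the rest

def condition(state):                # the original 4-way comparison
    return state in true_states

def bit_test(state):                 # looks at just 1 bit
    return (encoding[state] >> 2) & 1 == 1

assert all(condition(s) == bit_test(s) for s in encoding)
```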
3.6.2 Replication and Sharing 253
3.6.2 Replication and Sharing
3.6.2.1 Mux-Pushing
Pushing multiplexors into the fanin of a signal can reduce area.
Before
z <= a + b when (w = '1')
     else a + c;
After
tmp <= b when (w = '1')
       else c;
z <= a + tmp;
The first circuit will have two adders, while the second will have one adder. Some synthesis tools
will perform this optimization automatically, particularly if all of the signals are combinational.
3.6.2.2 Common Subexpression Elimination
Introduce new signals to capture subexpressions that occur multiple places in the code.
Before
y <= a + b + c when (w = '1')
     else d;
z <= a + c + d when (w = '1')
     else e;
After
tmp <= a + c;
y <= b + tmp when (w = '1')
     else d;
z <= d + tmp when (w = '1')
     else e;
Note: Clocked subexpressions. Care must be taken when doing common
subexpression elimination in a clocked process. Putting the temporary signal
in the clocked process will add a clock cycle to the latency of the
computation, because the tmp signal will be a flip-flop. The tmp signal must be
combinational to preserve the behaviour of the circuit.
3.6.2.3 Computation Replication
To improve performance:
If the same result is needed at two very distant locations and wire delays are significant, it might
improve performance (increase clock speed) to replicate the hardware.
To reduce area:
If the same result is needed at two different times that are widely separated, it might be cheaper to
reuse the hardware component to repeat the computation than to store the result in a register.
Note: Muxes are not free. Each time a component is reused, multiplexors
are added to its inputs and/or outputs. Too much sharing of a component can cost
more area in additional multiplexors than would be spent in replicating the
component.
3.6.3 Arithmetic
VHDL is left-associative. The expression a + b + c + d is interpreted as (((a + b) +
c) + d). You can use parentheses, as in (a + b) + (c + d), to suggest parallelism.
Perform arithmetic on the minimum number of bits needed. If you need only the lower 12 bits of a
result, but your input signals are 16 bits wide, trim your inputs to 12 bits. This results in a smaller
and faster design than computing all 16 bits of the result and trimming the result to 12 bits.
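A toy model (not a synthesis result) of why the parenthesized form can be faster: count adder levels on the critical path of each expression tree.

```python
# Toy delay model: each 2-input adder contributes one unit of delay, so
# the depth of the expression tree approximates the critical path.
def depth(expr):
    if isinstance(expr, str):        # a leaf signal has no adder delay
        return 0
    left, right = expr               # an interior node is one adder
    return 1 + max(depth(left), depth(right))

chain = ((("a", "b"), "c"), "d")     # (((a + b) + c) + d), VHDL's default
balanced = (("a", "b"), ("c", "d"))  # ((a + b) + (c + d)), parenthesized

assert depth(chain) == 3
assert depth(balanced) == 2
```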
3.7 Retiming
[Original circuit and waveform: the state machine computes sel, which selects a or b into an adder with c, registered into z; the waveform shows a, b, c, sel, x, y, z over states S0 S1 S2 S3; the critical path runs through sel, the mux, and the adder]
process begin
wait until rising_edge(clk);
if state = S1 then
z <= a + c;
else
z <= b + c;
end if;
end process;
Retimed Circuit and Waveform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Retimed circuit and waveform: sel is computed one state early and registered, shortening the critical path; the waveform shows a, b, c, sel, x, y, z over states S0 S1 S2 S3]
process (state) begin
  if state = S1 then
    sel <= '1';
  else
    sel <= '0';
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if sel = '1' then
    ... -- code for z
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if state = S0 then
    sel <= '1';
  else
    sel <= '0';
  end if;
end process;

process begin
  wait until rising_edge(clk);
  if sel = '1' then
    ... -- code for z
  end if;
end process;
3.8 Performance Analysis and Optimization Problems
P3.1 Farmer
A farmer is trying to decide which of his two trucks to use to transport his apples from his orchard
to the market.
Facts:
              capacity    speed when loaded   speed when unloaded
              of truck    with apples         (no apples)
big truck     12 tonnes   15 kph              38 kph
small truck    6 tonnes   30 kph              70 kph
distance to market: 120 km
amount of apples: 85 tonnes
NOTES:
1. All of the loads of apples must be carried using the same truck.
2. Elapsed time is counted from beginning to deliver the first load to returning to the orchard after
the last load.
3. Ignore time spent loading and unloading apples, coffee breaks, refueling, etc.
4. For each trip, a truck travels at either its fully loaded or empty speed.
Question: Which truck will take the least amount of time, and what percentage faster
will that truck be?
Question: In planning ahead for next year, is there anything the farmer could do to
decrease his delivery time with little or no additional expense? If so, what is it? If not,
explain.
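A quick script to check the trip arithmetic under the problem's notes (the function and its structure are our own framing, not part of the problem statement):

```python
import math

# Each load is a round trip: drive out loaded, drive back empty
# (note 2 counts the return after the last load; note 3 ignores
# loading time; a partial load still needs a full trip).
def delivery_hours(capacity_t, loaded_kph, empty_kph,
                   apples_t=85, distance_km=120):
    loads = math.ceil(apples_t / capacity_t)
    return loads * (distance_km / loaded_kph + distance_km / empty_kph)

big = delivery_hours(12, 15, 38)      # 8 loads
small = delivery_hours(6, 30, 70)     # 15 loads
print(f"big: {big:.1f} h, small: {small:.1f} h")
```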
P3.2 Network and Router
In this question there is a network that runs a protocol called BigLan. You are designing a router
called the DataChopper that routes packets over the network running BigLan (i.e. they're BigLan
packets).
The BigLan network protocol runs at a data rate of 160 Mbps (megabits per second). Each BigLan
packet contains 100 Bytes of routing information and 1000 Bytes of data.
You are working on the DataChopper router, which has the following performance numbers:
75 MHz   clock speed
4        cycles for a byte of either data or header
500      number of additional clock cycles to process the routing information for a packet
P3.2.1 Maximum Throughput
Which has a higher maximum throughput (as measured in data bits per second; that is, only the
payload bits count as useful work): the network or your router, and how much faster is it?
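One way to set up the numbers (the reading that only payload bits count as useful work is taken from the question itself):

```python
# Useful throughput counts payload bits only.
header_B, data_B = 100, 1000
packet_B = header_B + data_B

net_bps = 160e6 * data_B / packet_B       # network: 160 Mb/s raw link

cycles = 4 * packet_B + 500               # router: per-packet cycle cost
router_bps = 8 * data_B * 75e6 / cycles

print(f"network: {net_bps/1e6:.1f} Mb/s, router: {router_bps/1e6:.1f} Mb/s")
```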
P3.2.2 Packet Size and Performance
Explain the effect of an increase in packet length on the performance of the DataChopper (as
measured in the maximum number of bits per second that it can process) assuming the header
remains constant at 100 bytes.
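A sketch of the trend (the function encodes the DataChopper's cycle costs from above; longer packets amortize the fixed header and routing overhead, so throughput rises toward the 8 bits/byte × 75 MHz ÷ 4 cycles/byte = 150 Mb/s ceiling):

```python
# Router throughput as a function of payload length, header fixed at 100 B.
def router_bps(data_B, header_B=100, clk_hz=75e6):
    cycles = 4 * (header_B + data_B) + 500
    return 8 * data_B * clk_hz / cycles

# Monotonically increasing, bounded above by 150 Mb/s.
assert router_bps(1000) < router_bps(2000) < router_bps(4000) < 150e6
```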
P3.3 Performance Short Answer
If performance doubles every two years, by what percentage does performance go up every month?
This question is similar to compound growth from your economics class.
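The compound-growth arithmetic in two lines:

```python
# Monthly growth factor g satisfies g**24 = 2 (doubling every two years).
monthly_pct = (2 ** (1 / 24) - 1) * 100
print(f"{monthly_pct:.2f}% per month")
```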
P3.4 Microprocessors
The Yme microprocessor is very small and inexpensive. One performance sacrice the designers
have made is to not include a multiply instruction. Multiplies must be written in software using
loops of shifts and adds.
The Yme currently ships at a clock frequency of 200MHz and has an average CPI of 4.
A competitor sells the Y!v1 microprocessor, which supports exactly the same instructions as the
Yme. The Y!v1 runs at 150MHz, and the average program is 10% faster on the Yme than it is on
the Y!v1.
P3.4.1 Average CPI
Question: What is the average CPI for the Y!v1? If you don't have enough
information to answer this question, explain what additional information you need
and how you would use it.
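One worked reading of the question (hedged: "10% faster on the Yme" is interpreted as execution time, i.e. t_Yv1 = 1.1 × t_Yme, and both machines run the same instruction count because they share an ISA):

```python
# Same ISA and same program => same instruction count, so compare
# seconds per instruction rather than raw CPI.
t_yme = 4 / 200e6                 # CPI / clock frequency
t_yv1 = 1.1 * t_yme               # assumed meaning of "10% faster"
cpi_yv1 = t_yv1 * 150e6
print(f"Y!v1 average CPI = {cpi_yv1:.2f}")
```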
A new version of the Y!, the Y!u2, has just been announced. The Y!u2 includes a multiply
instruction and runs at 180MHz. The Y!u2 publicity brochures claim that using their multiply
instruction, rather than shift/add loops, can eliminate 10% of the instructions in the average
program. The brochures also claim that the average performance of the Y!u2 is 30% better than that of
the Y!v1.
P3.4.2 Why not you too?
Question: Assuming the advertising claims are true, what is the average CPI for the
Y!u2? If you don't have enough information to answer this question, explain what
additional information you need and how you would use it.
P3.4.3 Analysis
Question: Which of the following do you think is most likely, and why?
1. The Y!u2 is basically the same as the Y!v1 except for the multiply.
2. The Y!u2 designers made performance sacrifices in their design in order to include a multiply
instruction.
3. The Y!u2 designers performed other significant optimizations in addition to creating a
multiply instruction.
P3.5 Dataflow Diagram Optimization
Draw an optimized dataflow diagram that improves the performance and produces the same output
values. Or, if the performance cannot be improved, describe the limiting factor on the performance.
NOTES:
1. You may change the times when signals are read from the environment.
2. You may not increase the resource usage (input ports, registers, output ports, f components, g
components).
3. You may not increase the clock period.
[Dataflow diagrams over inputs a, b, c, d, e using f and g components: Before Optimization and After Optimization]
P3.6 Performance Optimization with Memory Arrays
This question deals with the implementation and optimization for the algorithm and library of
circuit components shown below.
Algorithm
q = M[b];
if (a > b) then
  M[a] = b;
  p = (M[b-1] * b) + M[b];
else
  M[a] = b;
  p = M[b+1] * a;
end;
Component                            Delay
Register                             5 ns
Adder                                25 ns
Subtracter                           30 ns
ALU with +, −, >, =, ≥, AND, XOR     40 ns
Memory read                          60 ns
Memory write                         60 ns
Multiplication                       65 ns
2:1 Multiplexor                      5 ns
NOTES:
1. 25% of the time, a > b
2. The inputs of the algorithm are a and b.
3. The outputs of the algorithm are p and q.
4. You must register both your inputs and outputs.
5. You may choose to read your input data values at any time and produce your outputs at any
time. For your inputs, you may read each value only once (i.e. the environment will not send
multiple copies of the same value).
6. Execution time is measured from when you read your rst input until the latter of producing
your last output or the completion of writing a result to memory
7. M is an internal memory array, which must be implemented as dual-ported memory with one
read/write port and one write port.
8. Assume all memory address and other arithmetic calculations are within the range of
representable numbers (i.e. no overflows occur).
9. If you need a circuit not on the list above, assume that its delay is 30 ns.
10. Your dataow diagram must include circuitry for computing a > b and using the result to
choose the value for p
Draw a dataflow diagram for each operation that is optimized for the fastest overall execution time.
NOTE: You may sacrifice area efficiency to achieve high performance, but marks will be deducted
for extra hardware that does not contribute to performance.
P3.7 Multiply Instruction
You are part of the design team for a microprocessor implemented on an FPGA. You currently
implement your multiply instruction completely on the FPGA. You are considering using a
specialized multiply chip to do the multiplication. Your task is to evaluate the performance and optimality
tradeoffs between keeping the multiply circuitry on the FPGA or using the external multiplier chip.
If you use the multiplier chip, it will reduce the CPI of the multiply instruction, but will not
change the CPI of any other instruction. Using the multiplier chip will also force the FPGA to run
at a slower clock speed.
[Diagrams: FPGA alone; FPGA with external MULT chip]
                                  FPGA option   FPGA + MULT option
average CPI                       5             ???
% of instrs that are multiplies   10%           10%
CPI of multiply                   20            6
Clock speed                       200 MHz       160 MHz
P3.7.1 Highest Performance
Which option, FPGA or FPGA+MULT, gives the higher performance (as measured in MIPs), and
what percentage faster is the higher-performance option?
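A sketch of one way to run the numbers (the averaging step assumes the multiply fraction stays at 10% and that only the multiply's CPI changes, as the question states):

```python
# New average CPI = old average - fraction * (old multiply CPI - new multiply CPI)
cpi_fpga = 5.0
cpi_mult = cpi_fpga - 0.10 * (20 - 6)    # 3.6

mips_fpga = 200e6 / cpi_fpga / 1e6       # 40.0 MIPS
mips_mult = 160e6 / cpi_mult / 1e6       # ~44.4 MIPS
speedup = mips_mult / mips_fpga - 1
print(f"{mips_fpga:.1f} vs {mips_mult:.1f} MIPS ({speedup:.1%} faster)")
```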
P3.7.2 Performance Metrics
Explain whether MIPs is a good choice for the performance metric when making this decision.
Chapter 4
Functional Verification
4.1 Introduction
4.1.1 Purpose
The purpose of this chapter is to illustrate techniques to quickly and reliably detect bugs in datapath
and control circuits.
Section 4.5 discusses verification of datapath circuits and introduces the notions of testbench,
specification, and implementation. In section 4.6 we discuss techniques that are useful for debugging
control circuits.
The verification guild website:
http://www.janick.bergeron.com/guild/default.htm
is a good source of information on functional verification.
4.2 Overview
The purpose of functional verification is to detect and correct errors that cause a system to produce
erroneous results. The terminology for validation, verification, and testing differs somewhat from
discipline to discipline. In this section we outline some of the terminology differences and describe
the terminology used in E&CE 327. We then describe some of the reasons that chips tend to work
incorrectly.
4.2.1 Terminology: Validation / Verification / Testing
functional validation
Comparing the behaviour of a design against the customer's expectations. In validation, the
"specification" is the customer. There is no specification that can be used to evaluate the
correctness of the design (implementation).
functional verification
Comparing the behaviour of a design (e.g. RTL code) against a specification (e.g. high-level
model) or collection of properties
usually treats combinational circuitry as having zero-delay
usually done by simulating the circuit with test vectors
big challenges are simulation speed and test generation
formal verification
checking that a design has the correct behaviour for every possible input and internal state
uses mathematics to reason about the circuit, rather than checking individual vectors of 1s and
0s
capacity problems: only usable on detailed models of small circuits or abstract models of
large circuits
mostly a research topic, but some practical applications have been demonstrated
tools include model checking and theorem proving
formal verification is not a guarantee that the circuit will work correctly
performance validation
checking that the implementation has (at least) the desired performance
power validation
checking that the implementation has (at most) the desired power
equivalence verification (checking)
checking that the design generated by a synthesis tool has the same behaviour as the RTL code.
timing verification
checking that all of the paths in a circuit meet the timing constraints
Hardware vs Software Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Note: in software, "testing" refers to running programs with specific inputs and checking if the
program does the right thing. In hardware, "testing" usually means manufacturing testing, which
is checking the circuits that come off of the manufacturing line.
4.2.2 The Difficulty of Designing Correct Chips
4.2.2.1 Notes from Kenn Heinrich (UW E&CE grad)
Everyone should get a lecture on why their first industrial design won't work in the field.
Here are a few reasons why getting a single system to work correctly for a few minutes in a university lab
is much easier than getting thousands of systems to work correctly for months at a time in dozens
of countries around the world.
1. You forgot to make your unreachable states transition to the initial (reset) state. Clock
glitches, power surges, etc. will occasionally cause your system to jump to a state that isn't
defined or produce an illegal data value. When this happens, your design should reset itself,
rather than crash or generate illegal outputs.
2. You have internal registers that you can't access or test. If you can set a register, you must
have some way of reading the register from outside the chip.
3. Another chip controls your chip, and the other chip is buggy. All of your external control
lines should be able to be disabled, so that you can isolate the source of problems.
4. Not enough decoupling capacitors on your board. The analog world is cruel and
unusual. Voltage spikes, current surges, crosstalk, etc. can all corrupt the integrity of digital
signals. Trying to save a few cents on decoupling capacitors can cause headaches and
significant financial costs in the future.
5. You only tested your system in the lab, not in the real world. As a product, systems will
need to run for months in the field; simulation and simple lab testing won't catch all of the
weirdness of the real world.
6. You didn't adequately test the corner cases and boundary conditions. Every corner case is as
important as the main case. Even if some weird event happens only once every six months,
if you do not handle it correctly, the bug can still make your system unusable and unsellable.
4.2.2.2 Notes from Aart de Geus (Chairman and CEO of Synopsys)
More than 60% of the ASIC designs that are fabricated have at least one error, issue, or problem
whose severity forced the design to be reworked.
Even experienced designers have difficulty building chips that function correctly on the first pass
(figure 4.1).
[Figure 4.1 is a bar chart. 61% of new chip designs require at least one re-spin (at least one error/issue/problem). Categories: functional logic error (the largest, at 43%), analog tuning issue, signal integrity issue, clock scheme error, reliability issue, mixed-signal problem, uses too much power, timing issue (slow paths), timing issue (fast paths), IR drop issues, firmware error, and other problems, with frequencies ranging from 43% down to 3%.]
Source: Aart de Geus, Chairman and CEO of Synopsys. Keynote address. Synopsys Users
Group Meeting, Sep 9 2003, Boston USA.
Figure 4.1: Problems found on first spins of new chip designs
4.3 Test Cases and Coverage
4.3.1 Test Terminology
Test case / test vector :
A combination of inputs and internal state values. Represents one possible test of the system.
Boundary conditions / corner cases :
A test case that represents an unusual situation on input and/or internal state signals. Corner
cases are likely to contain bugs.
Test scenario :
A sequence of test vectors that, together, exercise a particular situation (scenario) on a circuit.
For example, a scenario for an elevator controller might include a sequence of button pushes
and movements between floors.
Test suite :
A collection of test vectors that are run on a circuit.
4.3.2 Coverage
To be absolutely certain that an implementation is correct, we must check every combination of
values. This includes both input values and internal state (flip-flops).
If we have ni bits of inputs and ns bits in flip-flops, we have to test 2^(ni+ns) different cases when
doing functional verification.
Question: If we have nc combinational signals, why don't we have to test
2^(ni+ns+nc) different cases?
Answer:
The value of each combinational signal is determined by the flip-flops and
inputs in its fanin. Once the values of the inputs and flip-flops are known, the
value of each combinational signal can be calculated. Thus, the
combinational signals do not add additional cases that we need to consider.
Definition Coverage: The coverage that a suite of tests achieves on a circuit is the
percentage of cases that are simulated by the tests. 100% coverage means that the
circuit has been simulated for all combinations of values for input signals and internal
signals.
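To get a feel for how fast 2^(ni+ns) grows, a two-line sketch with made-up circuit sizes:

```python
# The case count is exponential in (inputs + flip-flops).
def num_cases(n_inputs, n_flipflops):
    return 2 ** (n_inputs + n_flipflops)

assert num_cases(8, 16) == 16_777_216      # even a tiny circuit: ~1.7e7 cases
print(f"{num_cases(64, 32):.3e} cases")    # a modest datapath
```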
Note: Coverage Terminology. There are many different types of coverage,
which measure everything from the percentage of cases that are exercised to the
number of output values that are exercised.
There are many different commercial software programs that measure code and other types of
coverage.
Company         Tool                        Coverage
Cadence         Affirma Coverage Analyzer
Cadence DAI     Coverscan                   code, expressions, fsm
Cadence         Codecover                   code, expressions, fsm
Fintronic       FinCov                      code
Summit Design   HDLScore                    code, events, variables
Synopsys        CoverMeter                  code coverage (dead?)
TransEDA        Verification Navigator      code and fsm
Verisity        SureCov                     code, block, values, fsm
Veritools       Express VCT, VeriCover      code, branch
Aldec           Riviera                     code, block
4.3.3 Floating Point Divider Example
This example illustrates the difficulty of achieving significant coverage on realistic circuits.
Consider doing the functional simulation for a double precision (64-bit) floating-point divider.
Given Information
Data width: 64 bits
Number of gates in circuit: 10 000
Number of assembly-language instructions to simulate one gate for one test case: 100
Number of clock cycles required to execute one assembly-language instruction on the computer
that is running the simulation: 0.5
Clock speed of computer that is running the simulation: 1 Gigahertz
Number of Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: How many cases must be considered?
Answer:
item   bits   num values
src1   64     2^64 = 1.8E+19
src2   64     2^64 = 1.8E+19

NumTestsTot = NumInputCases × NumStateCases
            = (2^64 × 2^64) × (2^0)
            = 3.4E+38 cases
Simulation Run Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: How long will it take to simulate all of the different possible cases using a
single computer?
Answer:
1. Calculate number of seconds to simulate one test case
   TestTime1:1 = 10000 gates × 100 instrs/gate × 0.5 cycles/instr ÷ 1E+9 cycles/sec
               = 5E-4 secs
2. Number of tests per year
   NumTests:1 = (60 secs/min × 60 mins/hour × 24 hours/day × 365.25 days/year) / TestTime1:1
              ≈ (SpeedOfLight in m/s) / TestTime1:1
              = 3E+8 secs / 5E-4 secs
              = 6E+12 cases/year
3. Number of years to test all cases
   TestTimeTot = NumTestsTot / NumTests:1
               = 3.4E+38 cases / 6E+12 cases/year
               = 5.6E+26 years
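The figures above use rounded mnemonic values; redoing the arithmetic with the exact number of seconds per year (an independent check, not the notes' own figures) lands in the same astronomical ballpark:

```python
# Exact-ish version of the run-time estimate for the 64-bit divider.
test_time_s = 10_000 * 100 * 0.5 / 1e9        # 5e-4 s per test case
secs_per_year = 60 * 60 * 24 * 365.25         # ~3.16e7
tests_per_year = secs_per_year / test_time_s  # ~6.3e10 on one computer
years = 2.0 ** 128 / tests_per_year           # two 64-bit operands
print(f"{years:.1e} years")
```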
Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: If you can run simulations non-stop for one year on ten computers, what
coverage will you achieve?
Answer:
1. Number of tests per year using ten computers
   NumTests:10 = 10 × NumTests:1
               = 10 × 6E+12 cases
               = 6E+13 cases
2. Calculate coverage achieved by running tests on ten computers for one year
   Covg = NumTestsRun / NumTestsTot
        = NumTests:10 / NumTestsTot
        = 6E+13 / 3.4E+38
        ≈ 2E-25
        = 0.00000000000000000000002%
The message is that, even with large amounts of computing resources, it is
difficult to achieve numerically significant coverage for realistic circuits.
An effective functional verification plan requires carefully chosen test cases,
so that even the minuscule amount of coverage that is realistically achievable
catches most (all?!?!) of the bugs in the design.
Simulation vs the Real World . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
From "Validating the Intel(R) Pentium(R) 4 Microprocessor" by Bob Bentley, Design Automation
Conference 2001. (Link on E&CE 327 web page.)
Simulating the Pentium 4 Processor on a Pentium 3 Processor ran at about 15 MHz.
By tapeout, over 200 billion simulation cycles had been run on a network of computers.
All of these simulations represent less than two minutes of running a real processor.
4.4 Testbenches
A test bench (also known as a test rig, test harness, or test jig) is a collection of code used
to simulate a circuit and check if it works correctly.
Testbenches are not synthesized. You do not need to restrict yourself to the synthesizable subset of
VHDL. Use the full power of VHDL to make your testbenches concise and powerful.
4.4.1 Overview of Test Benches
[Testbench block diagram: stimulus drives both the implementation and the specification; a check block compares their outputs. Together these form the testbench.]
Implementation: Circuit that you're checking for bugs
(also known as: design under test or unit under test)
Stimulus: Generates test vectors
Specification: Describes desired behaviour of implementation
Check: Checks whether implementation obeys specification
Notes and observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Testbenches usually do not have any inputs or outputs.
Inputs are generated by stimulus
Outputs are analyzed by check and relevant information is printed using report statements
Different circuits will use different stimuli, specifications, and checks.
The roles of the specification and check are somewhat flexible.
Most circuits will have complex specifications and simple checks.
However, some circuits will have simple specifications and complex checks.
If two circuits are supposed to have the same behaviour, then they can use the same stimuli,
specification, and check.
If two circuits are supposed to have the same behaviour, then one can be used as the specification
for the other.
Testbenches are restricted to stimulating only primary inputs and observing only primary
outputs. To check the behaviour of internal signals, use assertions.
4.4.2 Reference Model Style Testbench
[Reference-model testbench: stimulus drives both the specification and the implementation; their outputs are compared for equality.]
Specification has the same inputs and outputs as the implementation.
Specification is a clock-cycle accurate description of the desired behaviour of the implementation.
Check is an equality test between the outputs of the specification and the implementation.
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Execution modules: output is the sum, difference, product, quotient, etc. of the inputs
DSP filters
Instruction decoders
Note: Functional specification vs Reference model. "Functional specification"
and "reference model" are often used interchangeably.
4.4.3 Relational Style Testbench
[Relational testbench: stimulus drives the implementation; the implementation's inputs and outputs feed a check block.]
Relational testbenches, or relational specifications, are used when we do not want to specify the
specific output values that the implementation must produce.
Instead, we want to check that some relationship holds between the output and the input, or
that some relationship holds amongst the output values (independent of the values of the input
signals).
Specification is usually just wires to feed the input signals to the check.
Check is the brains and encodes the desired behaviour of the circuit.
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Carry-save adders: the two outputs sum to the sum of the three inputs, but we do not specify exact
values of each individual output.
Arbiters: every request is eventually granted, but we do not specify in which order requests are
granted.
One-hot encoding: exactly one bit of the vector is a 1, but we do not specify which bit is a 1.
Note: Relational specification vs relational testbench. "Relational specification"
and "relational testbench" are often used interchangeably.
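The carry-save example above can be sketched outside VHDL just to show the shape of a relational check (the Python adder model is a stand-in for the implementation under test, not the course's code):

```python
# Relational spec for one bit of a carry-save (3:2) adder: we never pin
# down sum and carry individually, only the relation
#     sum + 2*carry == a + b + c.
def csa_bit(a, b, c):
    s = a ^ b ^ c
    carry = (a & b) | (b & c) | (a & c)
    return s, carry

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, carry = csa_bit(a, b, c)
            assert s + 2 * carry == a + b + c   # the relational check
```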
4.4.4 Coding Structure of a Testbench
architecture main of athabasca_tb is
component declaration for implementation;
other declarations
begin
implementation instantiation;
stimulus process;
specification process (or component instantiation);
check process;
end main;
4.4.5 Datapath vs Control
Datapath and control circuits tend to use different styles of testbenches.
Datapath circuits tend to be well-suited to reference-model style testbenches:
Each set of inputs generates one set of outputs
Each set of outputs is a function of just one set of inputs
Control circuits often pose problems for testbenches:
Many more internal signals than outputs.
The behaviour of the outputs provides a view into only a fragment of the current state of the
circuit.
It may take many clock cycles from when a bug is exercised inside the circuit until it generates
a deviation from the correct behaviour on the outputs.
When the deviation on the outputs is observed, it is very difficult to pinpoint the precise cause
of the deviation (the root cause of the bug).
Assertions can be used to check the behaviour of internal signals. Control circuits tend to use
assertions to check correctness and rely on testbenches only to stimulate inputs.
4.4.6 Verification Tips
Suggested order of simulation for functional verification:
1. Write high-level model.
2. Simulate high-level model until it has correct functionality and latency.
3. Write synthesizable model.
4. Use zero-delay simulation (uw-sim) to check behaviour of synthesizable model against
high-level model.
5. Optimize the synthesizable model.
6. Use zero-delay simulation (uw-sim) to check behaviour of optimized model against
high-level model.
7. Use timing simulation (uw-timsim) to check behaviour of optimized model against
high-level model.
Section 4.5 describes a series of testbenches that are particularly useful for debugging datapath
circuits in the early phases of the design cycle.
4.5 Functional Verification for Datapath Circuits
In this section we will incrementally develop a testbench for a very simple circuit: an AND gate.
Although the example circuit is trivial in size, the process scales well to very large circuits. The
process allows verification to begin as soon as a circuit is simulatable, even before a complete
specification has been written.
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
entity and2 is
port (
a, b : in std_logic;
c : out std_logic
);
end and2;
architecture main of and2 is
begin
  c <= '1' when (a = '1' and b = '1')
       else '0';
end main;
4.5.1 A Spec-Less Testbench
(NOTE: this code has been reviewed manually but has not been simulated. The concepts are
illustrated correctly, but there might be typographical errors in the code.)
First, use a waveform viewer to check that the implementation generates reasonable outputs for a small
set of inputs.
entity and2_tb is
end and2_tb;
architecture main_tb of and2_tb is
component and2
port (
a, b : in std_logic;
c : out std_logic
);
end component;
signal ta, tb, tc_impl : std_logic;
signal ok : boolean;
begin
---------------------------------------------
impl : and2 port map (a => ta, b => tb, c => tc_impl);
---------------------------------------------
stimulus : process
begin
  ta <= '0'; tb <= '0';
  wait for 10 ns;
  ta <= '1'; tb <= '1';
  wait for 10 ns;
end process;
---------------------------------------------
end main_tb;
Use the spec-less testbench until the implementation generates solid Boolean values (no 'X' or 'U' data)
and you have checked that a few simple test cases generate correct outputs.
4.5.2 Use an Array for Test Vectors
Writing code to drive inputs and repetitively typing wait for 10 ns; can get tedious, so code
up the test vectors in an array.
(NOTE: this code has not been checked for correctness)
architecture main_tb of and2_tb is
...
begin
...
stimulus : process
type test_datum_ty is record
ra, rb : std_logic;
end record;
type test_vectors_ty is
array(natural range <>) of test_datum_ty;
constant test_vectors : test_vectors_ty :=
-- a b
( ('0', '0'),
('1', '1')
);
begin
for i in test_vectors'low to test_vectors'high loop
ta <= test_vectors(i).ra;
tb <= test_vectors(i).rb;
wait for 10 ns;
end loop;
end process;
end main_tb;
Use this testbench until checking the correctness of the outputs by hand using a waveform viewer becomes difficult.
4.5.3 Build Spec into Stimulus
(NOTE: this code has not been checked for correctness)
After a few test vectors appear to be working correctly (via a manual check of waveforms in simulation), begin automatically checking that the outputs are correct.
Add expected result to stimulus
Add check process
architecture main_tb of and2_tb is
...
begin
------------------------------------------
impl : and2 port map (a => ta, b => tb, c => tc_impl);
------------------------------------------
stimulus : process
type test_datum_ty is record
ra, rb, rc : std_logic;
end record;
type test_vectors_ty is array(natural range <>) of test_datum_ty;
constant test_vectors : test_vectors_ty :=
-- a, b: inputs
-- c : expected output
-- a b c
( ('0', '0', '0'),
('0', '1', '0'),
('1', '1', '1')
);
begin
for i in test_vectors'low to test_vectors'high loop
ta <= test_vectors(i).ra;
tb <= test_vectors(i).rb;
tc_spec <= test_vectors(i).rc;
wait for 10 ns;
end loop;
end process;
------------------------------------------
check : process (tc_impl, tc_spec)
begin
ok <= (tc_impl = tc_spec);
end process;
------------------------------------------
end main_tb;
Use this testbench until it becomes tedious to calculate manually the correct result for each test
case.
4.5.4 Have Separate Specification Entity
Rather than write the specification as part of the stimulus, create a separate specification entity/architecture. The specification component then calculates the expected output values.
(NOTE: if your simulation tool supports configurations, the spec and impl can share the same entity; we'll see this in section 4.6)
entity and2_spec is
...(same as and2 entity)...
end and2_spec;
architecture spec of and2_spec is
begin
c <= a AND b;
end spec;
architecture main_tb of and2_tb is
component and2 ...;
component and2_spec ...;
signal ta, tb, tc_impl, tc_spec : std_logic;
signal ok : boolean;
begin
------------------------------------------
impl : and2 port map (a => ta, b => tb, c => tc_impl);
spec : and2_spec port map (a => ta, b => tb, c => tc_spec);
------------------------------------------
stimulus : process
type test_datum_ty is record
ra, rb : std_logic;
end record;
type test_vectors_ty is array(natural range <>) of test_datum_ty;
constant test_vectors : test_vectors_ty :=
-- a b
( ('0', '0'),
('1', '1')
);
begin
for i in test_vectors'low to test_vectors'high loop
ta <= test_vectors(i).ra;
tb <= test_vectors(i).rb;
wait for 10 ns;
end loop;
end process;
------------------------------------------
check : process (tc_impl, tc_spec)
begin
ok <= (tc_impl = tc_spec);
end process;
------------------------------------------
end main_tb;
4.5.5 Generate Test Vectors Automatically
When it becomes tedious to write out each test vector by hand, we can automatically compute them.
This example uses a pair of nested for loops to generate all four permutations of input values
for two signals.
architecture main_tb of and2_tb is
...
begin
...
stimulus : process
subtype std_test_ty is std_logic range '0' to '1';
begin
for va in std_test_ty'low to std_test_ty'high loop
for vb in std_test_ty'low to std_test_ty'high loop
ta <= va;
tb <= vb;
wait for 10 ns;
end loop;
end loop;
end process;
...
end main_tb;
4.5.6 Relational Specification
architecture main_tb of and2_tb is
...
begin
------------------------------------------
impl : and2 port map (a => ta, b => tb, c => tc_impl);
------------------------------------------
stimulus : process
...
end process;
------------------------------------------
check : process (tc_impl, ta, tb)
begin
ok <= NOT (tc_impl = '1' AND (ta = '0' OR tb = '0'));
end process;
------------------------------------------
end main_tb;
4.6 Functional Verification of Control Circuits
Control circuits are often more challenging to verify than datapath circuits.
Control circuits have many internal signals. Testbenches are unable to access key information about the behaviour of a control circuit.
Many clock cycles can elapse between when a bug causes an internal signal to have an incorrect
value and when an output signal shows the effect of the bug.
In this section, we will explore the functional verification of state machines via a First-In First-Out queue.
The VHDL code for the queue is on the web at:
http://www.ece.uwaterloo.ca/ece327/exs/queue
4.6.1 Overview of Queues in Hardware
Figure 4.2: Structure of queue
Figure 4.3: Write Sequence
Figure 4.4: A Second Example Write
Figure 4.5: Example Read Sequence
Figure 4.6: Write Illustrating Index Wrap
Figure 4.7: Write Illustrating Full Queue
Figure 4.8: Queue Signals
Figure 4.9: Incomplete Queue Blocks
Control circuitry not shown.
4.6.2 VHDL Coding
4.6.2.1 Package
Things to notice in queue package:
1. separation of package and body
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
package queue_pkg is
subtype data is std_logic_vector(3 downto 0);
function to_data(i : integer) return data;
end queue_pkg;
package body queue_pkg is
function to_data(i : integer) return data is
begin
return std_logic_vector(to_unsigned(i, 4));
end to_data;
end queue_pkg;
4.6.2.2 Other VHDL Coding
VHDL coding techniques to notice in queue implementation:
1. type declaration for vectors
2. attributes
(a) 'low, 'high, 'length, ...
3. functions (reduce overall implementation and maintenance effort)
(a) reduce redundant code
(b) hide implementation details
(c) (just like software engineering....)
4.6.3 Code Structure for Verification
Verification things to notice in the queue implementation:
1. instrumentation code
2. coverage monitors
3. assertions
architecture ... is
...
begin
... normal implementation ...
process (clk)
begin
if rising_edge(clk) then
... instrumentation code ...
prev_signame <= signame;
end if;
end process;
... assertions ...
... coverage monitors ...
end;
4.6.4 Instrumentation Code
Added to the implementation to support verification
Usually keeps track of previous values of signals
Does not create hardware (Optimized away during synthesis)
Does not feed any output signals
Must use synthesizable subset of VHDL
process (clk) begin
if rising_edge(clk) then
prev_rd_idx <= rd_idx;
prev_wr_idx <= wr_idx;
prev_do_rd <= do_rd;
prev_do_wr <= do_wr;
end if;
end process;
Note: Naming convention for instrumentation For assertions, signals are
named prev_signame and signame, rather than next_signame and
signame as is done for state machines. This is because for assertions we
use the prev signals as history signals, to keep track of past events. In
contrast, for state machines, we name the signals next, because the state
machine computes the next values of signals.
4.6.5 Coverage Monitors
The goal of a coverage monitor is to check whether a certain event is exercised in a simulation run. If a test suite does not trigger a coverage monitor, then we probably want to add a test vector that will trigger the monitor.
For example, for a circuit used in a microwave oven controller, we might want to make sure that
we simulate the situation when the door is opened while the power is on.
1. Identify important events, conditions, transitions
2. Write instrumentation code to detect event
3. Use report to write a message when the event happens
4. When running a simulation, report statements will print when a coverage condition is detected
5. Pipe simulation results to a log file
6. Examine the log file and coverage monitors to find cases and transitions not tested by existing test vectors
7. Add test vectors to exercise missing cases
8. Idea: automate detection of missing cases using a Perl script to find coverage messages in the VHDL code that aren't in the log file
9. Real world: most commercial simulation tools come with add-on packages that provide different types of coverage analysis
10. Research/entrepreneurial idea: based on missing coverage cases, find new test vectors to exercise each case
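Step 8 above can be sketched in a few lines of a scripting language (the notes suggest Perl; Python is used here). This is a hedged sketch with an invented function name: it assumes the coverage messages follow the report "coverage: ..." convention from the monitor template, and that the simulation output has been piped to a log file.

```python
import re

def missing_coverage(vhdl_text, log_text):
    """Return coverage messages that appear in the VHDL source
    but were never printed during simulation."""
    # Messages declared in the VHDL code,
    # e.g. report "coverage: rd mv to low";
    declared = set(re.findall(r'"(coverage:[^"]*)"', vhdl_text))
    # Messages that actually appeared in the simulation log.
    hit = {msg for msg in declared if msg in log_text}
    return sorted(declared - hit)

vhdl = '''
report "coverage: rd mv to low";
report "coverage: rd mv to high";
report "coverage: rd mv normal";
'''
log = "coverage: rd mv normal\ncoverage: rd mv to low\n"
print(missing_coverage(vhdl, log))  # ['coverage: rd mv to high']
```

Any message in the output marks a coverage case for which a new test vector is needed.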
Coverage Events for Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
(Diagrams: positions of wr_idx and rd_idx before ("Prev") and after ("Now") a clock edge, illustrating the indices far apart, one index catching the other, and both indices equal.)
Question: What events should we monitor to estimate the coverage of our functional
tests?
Answer:
wr_idx and rd_idx are far apart
wr_idx and rd_idx are equal
wr_idx catches rd_idx
rd_idx catches wr_idx
rd_idx wraps
wr_idx wraps
Coverage Monitor Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
process (signals read)
begin
if (condition) then
report "coverage: message";
elsif (condition) then
report "coverage: message";
else
report "error: case fall through on message"
severity warning;
end if;
end process;
Coverage Monitor Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Events related to rd_idx equals wr_idx.
process (prev_rd_idx, prev_wr_idx, rd_idx, wr_idx)
begin
if (rd_idx = wr_idx) then
if ( prev_rd_idx = prev_wr_idx ) then
report "coverage: read = write both moved";
elsif ( rd_idx /= prev_rd_idx ) then
report "coverage: Read caught write";
elsif ( wr_idx /= prev_wr_idx ) then
report "coverage: Write caught read";
else
report "error: case fall through on rd/wr catching"
severity warning;
end if;
end if;
end process;
Events related to rd_idx wrapping.
process (rd_idx)
begin
if (rd_idx = low_idx) then
report "coverage: rd mv to low";
elsif (rd_idx = high_idx) then
report "coverage: rd mv to high";
else
report "coverage: rd mv normal";
end if;
end process;
4.6.6 Assertions
Assertions for Queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. If rd_idx changes, then it increments or wraps.
2. If rd_idx changes, then do_rd was '1', or reset is '1'.
3. If wr_idx changes, then it increments or wraps.
4. If wr_idx changes, then do_wr was '1', or reset is '1'.
5. And many others....
Assertion Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
process (signals read) begin
assert (required condition)
report "error: message" severity warning;
end process;
Assertions: Read Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
process (rd_idx) begin
assert ((rd_idx > prev_rd_idx) or (rd_idx = low_idx))
report "error: rd inc" severity warning;
assert ((prev_do_rd = '1') or (reset = '1'))
report "error: rd imp do_rd" severity warning;
end process;
Assertions: Write Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
process (wr_idx) begin
assert ((wr_idx > prev_wr_idx) or (wr_idx = low_idx))
report "error: wr inc" severity warning;
assert ((prev_do_wr = '1') or (reset = '1'))
report "error: wr imp do_wr" severity warning;
end process;
4.6.7 VHDL Coding Tips
Vector Type Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
type data_array_ty is array(natural range <>) of data;
signal data_array : data_array_ty(7 downto 0);
Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
function to_idx
(i : natural range data_array'low to data_array'high)
return idx_ty
is
begin
return to_unsigned(i, idx_ty'length);
end to_idx;
Conversion to Index
Without function: rd_idx <= to_unsigned(5, 3);
With function:    rd_idx <= to_idx(5);
The function code is verbose, but is very maintainable, because neither the function itself nor uses
of the function need to know the width of the index vector.
Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
function inc_idx (idx : idx_ty) return idx_ty is
begin
if idx < data_array'high then
return (idx + 1);
else
return (to_idx(data_array'low));
end if;
end inc_idx;
Feedback Loops, and Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Coding guideline: use functions. Don't use procedures.
Increment as a function:  wr_idx <= inc_idx(wr_idx);
Increment as a procedure: inc_idx(wr_idx);
Functions clearly distinguish between reading from a signal and writing to a signal. By examining
the use of a procedure, you cannot tell which signals are read from and which are written to. You
must examine the declaration or implementation of the procedure to determine modes of signals.
Modifying a signal within a procedure results in a tri-state signal. This is bad.
File I/O (textio package) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
TEXTIO defines the read, write, readline, and writeline procedures.
Described in:
http://www.eng.auburn.edu/department/ee/mgc/vhdl.html#textio
These can be used to read test vectors from a file and write results to a file.
4.6.8 Queue Specification
Most bugs in queues are related to the queue becoming full, becoming empty, and/or the wrap of indices.
The specification should be obviously correct. Avoid bugs in the specification by making the specification queue larger than the maximum number of writes that we will do in the test suite. Thus, the specification queue will never become full or wrap. However, the implementation queue will become full and wrap.
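The oversized-specification idea can be illustrated with a small behavioural model. This is a hedged sketch in Python rather than VHDL, with invented class names: the spec queue grows without bound and its read index only increments, so it can never fill or wrap, while the implementation queue has fixed storage and wrapping indices.

```python
class SpecQueue:
    """Specification: unbounded storage, indices never wrap."""
    def __init__(self):
        self.mem = []
        self.rd_idx = 0
    def write(self, d):
        self.mem.append(d)            # wr_idx is just len(mem)
    def read(self):
        d = self.mem[self.rd_idx]
        self.rd_idx += 1              # increment, never wrap
        return d

class ImplQueue:
    """Implementation: fixed storage, indices wrap."""
    def __init__(self, size):
        self.mem = [None] * size
        self.wr_idx = self.rd_idx = 0
    def write(self, d):
        self.mem[self.wr_idx] = d
        self.wr_idx = (self.wr_idx + 1) % len(self.mem)  # wraps
    def read(self):
        d = self.mem[self.rd_idx]
        self.rd_idx = (self.rd_idx + 1) % len(self.mem)  # wraps
        return d

spec, impl = SpecQueue(), ImplQueue(4)
for d in range(6):                    # more writes than impl capacity,
    spec.write(d); impl.write(d)      # so the impl indices wrap...
    assert spec.read() == impl.read() # ...but the data still match
```

The spec stays trivially correct because its indices never exercise the full/wrap corner cases; only the implementation does.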
Write Index Update in Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
We increment the write index on every write; we never wrap.
process (clk) begin
if rising_edge(clk) then
if (reset = '1') then
wr_idx <= 0;
elsif (do_wr = '1') then
wr_idx <= wr_idx + 1;
end if;
end if;
end process;
Things to Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Things to notice in the queue specification:
1. don't-care conditions ('-')
2. uninitialized data (hint: what is the value of rd_data when we do more reads than writes?)
Don't Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
rd_data <= data_array(rd_idx) when (do_rd = '1')
else (others => '-');
4.6.9 Queue Testbench
Things to notice in queue testbench:
1. running multiple test sequences
2. uninitialized data ('U')
3. std_match to compare spec and impl data
std_match comparison rules:
'0' matches '0' and 'L'
'1' matches '1' and 'H'
'-' matches everything
everything else does not match
With equality, '-' /= '1', but we want to use '-' to mean "don't care" in the specification. The solution is to use std_match, rather than =, to check implementation signals against the specification.
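The matching rules can be mimicked for single std_logic values in a few lines. This is a hedged approximation of the IEEE std_match behaviour, covering only the values discussed here ('0', 'L', '1', 'H', '-', and the metavalues); the function name mirrors the VHDL one.

```python
def std_match(a, b):
    """Approximate IEEE std_match for single std_logic characters."""
    if a == '-' or b == '-':
        return True                    # '-' matches everything
    groups = {'0': '0', 'L': '0',      # '0' and 'L' are both "low"
              '1': '1', 'H': '1'}      # '1' and 'H' are both "high"
    # anything outside these groups ('U', 'X', ...) matches nothing
    return groups.get(a) is not None and groups.get(a) == groups.get(b)

assert std_match('0', 'L')             # weak and strong low match
assert std_match('-', '1')             # don't care matches anything
assert not std_match('U', 'U')         # metavalues never match
assert not ('-' == '1')                # plain equality: '-' /= '1'
```

The last two assertions show why plain equality is the wrong check: it rejects the '-' in the spec, while std_match treats it as a match.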
Stimulus Process Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The stimulus process runs multiple test vectors in a single simulation run.
stimulus : process
type test_datum_ty is
record
r_reset, ... normal fields ...
end record;
type test_vectors_ty is array(natural range <>) of test_datum_ty;
constant test_vectors : test_vectors_ty :=
( -- reset ... other signals ...
( '1', normal fields), -- test case 1
( '0', normal fields),
...
( '1', normal fields), -- test case 2
( '0', normal fields),
...
);
begin
for i in test_vectors'range loop
if (test_vectors(i).r_reset = '1') then
... reset code ...
end if;
reset <= '0';
... normal sequence ...
wait until rising_edge(clk);
end loop;
end process;
After reset is asserted, set signals to 'U'.
4.7 Example: Microwave Oven
This question concerns the VHDL code microwave, which controls a simple microwave oven;
the properties prop1...prop3; and two proposed changes to the VHDL code.
INSTRUCTIONS:
1. Assume that the code as currently written is correct: any change to the code that causes a
change to the behaviour of the signals heat or count is a bug.
2. For each of the two proposed code changes, answer whether the code change will cause a
bug.
3. If the code change will cause a bug, provide a test case that will exercise the bug and identify
all of the given properties (prop1, prop2, and prop3) that will detect the bug with the test
case you provide.
4. If none of the three properties can detect the bug, provide a property of your own that will
detect the bug with the testcase you provide.
Question: For each of the three properties prop1...prop3, answer whether the
property is best checked as part of a testbench or as an assertion. For each property,
justify why a testbench or an assertion is the best method to validate that property.
prop1 If start is pushed and the door is closed, then heat remains on for exactly the time specified by the timer when start was pushed, assuming reset remains false and the door remains closed.
Answer:
Testbench: All relevant signals are primary inputs or outputs, so can
check property without seeing internal signals. Testbenches are only able
to set and observe primary inputs and outputs.
prop2 If the door is open, then heat is off.
Answer:
Testbench: same as previous property.
prop3 If start is not pushed, reset is false, and count is greater than zero, then count is decremented.
Answer:
Assertion: To see count, need access to internal signals.
entity microwave is
port (
timer -- time input from user
: in unsigned(7 downto 0);
reset, -- resets microwave
clk, -- clock signal input
is_open, -- detects when door is open
start -- start button input from user
: in std_logic;
heat : out std_logic -- 1=on, 0=off
);
end microwave;
architecture main of microwave is
signal count : unsigned(7 downto 0); -- internal time count
signal x_heat : std_logic;
begin
-- heat process ------------------------------
process (clk)
begin
if rising_edge(clk) then
if reset = '1' then
x_heat <= '0';
elsif (is_open = '0') and (start = '1') and -- region of
(timer > 0) -- change #1
then --
x_heat <= '1'; --
elsif (is_open = '0') and (count > 0) then --
x_heat <= x_heat; --
else
x_heat <= '0';
end if;
end if;
end process;
-- count process ------------------------------
process (clk)
begin
if rising_edge(clk) then
if (reset = '1') then
count <= to_unsigned(0, 8);
elsif (start = '1') then -- region of
count <= timer; -- change #2
elsif (count > 0) then --
count <= count - 1; --
end if;
end if;
end process;
heat <= x_heat;
end main;
Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
prop1 If start is pushed and the door is closed, then heat remains on for exactly the time specified by the timer when start was pushed, assuming reset remains false and the door remains closed.
prop2 If the door is open, then heat is off.
prop3 If start is not pushed, reset is false, and count is greater than zero, then count is decremented.
Change #1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
From:
elsif (start = '1') then
count <= timer;
elsif (count > 0) then
count <= count - 1;
To:
elsif (count > 0) then
count <= count - 1;
elsif (start = '1') then
count <= timer;
Answer:
The change introduces a bug that is caught by properties 1 and 3.
Test Cases
testcase1 Maintain reset=0. Close door, set timer to some value (v1) and then push start. Leave the door closed. While the microwave is on, set timer to a value (v2) that is greater than v1 and then push start.
In the old code, the new value on the timer will be read in. In the new code, the new value on the timer will be ignored. The reason to make v2 greater than v1 is to prevent the counter from being exactly equal to v2 when start is pushed a second time. In that case, the bug would not be exercised. Note: the old code violated prop1.
testcase2 reset = 0, microwave off, door closed, count = 0. Set timer to a non-zero value. Press and hold start for a number of cycles. In the original code, the value of timer would be reloaded into count on each rising edge of the clock. With the change, the value of count continues to decrement and the timer is not reloaded into count. Note: in this case, only prop1 will detect the bug. Prop3 will not detect the bug because the antecedent, or precondition, for the property is false.
Change #2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
From:
elsif (is_open = '0') and (start = '1') and (timer > 0)
then x_heat <= '1';
elsif (is_open = '0') and (count > 0)
then x_heat <= x_heat;
To:
elsif (is_open = '0')
and ((start = '1') or (count > 0))
then x_heat <= '1';
else x_heat <= '0';
Answer:
The change introduces a bug that would be caught by prop1, but not by prop2
or prop3.
The following scenario or test case will catch the bug with prop1. Maintain
reset=0. Microwave is off, door is closed, timer is set to 0. Push start. With
old code, microwave will remain off. With new code, microwave will turn on
and remain on as long as start is pushed.
The change to the code exercises another bug that is not caught by prop1. This bug demonstrates a weakness in prop1 that should be remedied.
Testcase: reset = 0, microwave off, door closed. Set timer to a non-zero
value. Press (and release) start. Before timer expires, open door. Close door
before count = 0. In the original code, the microwave will remain off, but with
the change, the microwave will start again. Note: the same properties detect
the bug as with the original solution.
The weakness in prop1 is that it assumes that the door remains closed. So, any testcase where the door is opened will pass prop1. In verification, this is known as the "false implies anything" problem, or a testcase that passes a property vacuously.
To catch this bug, we must either change prop1 or add another property. In
fact, we probably should do both.
First, we strengthen prop1 to deal with situations where the door is opened while the microwave is on. The property gets a bit complicated: If start is pushed and the door is closed, then heat remains on until the earlier of either the opening of the door or the expiration of the time specified by the timer when start was pushed, assuming reset remains false.
Second, we add a property to ensure that the microwave does not turn back
on when the door is re-closed with time remaining on the counter: If the
microwave is off, it remains off until start is pushed. This fourth property is
written to be as general as possible. We want to write properties that catch as
many bugs as possible, rather than write properties for specific testcases or
bugs.
Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: If the msb of src1 is 1 and the lsb of src2 is 0 or sum(3) is 1, then the result is
wrong. What is the minimum coverage needed to detect the bug? What is the minimum
coverage needed to guarantee that the bug will be detected?
4.8 Functional Verification Problems
P4.1 Carry Save Adder
1. Functionality Briefly describe the functionality of a carry-save adder.
2. Testbench Write a testbench for a 16-bit combinational carry save adder.
3. Testbench Maintenance Modify your testbench so that it is easy to change the width of the
adder and the latency of the computation.
NOTES:
(a) You do not need to support pipelined adders.
(b) VHDL generics might be useful.
P4.2 Traffic Light Controller
P4.2.1 Functionality
Briefly describe the functionality of a traffic-light controller that has sensors to detect the presence
of cars.
P4.2.2 Boundary Conditions
Make a list of boundary conditions to check for your traffic light controller.
P4.2.3 Assertions
Make a list of assertions to check for your traffic light controller.
P4.3 State Machines and Verification
P4.3.1 Three Different State Machines
Figure 4.10: A very simple machine
Figure 4.11: A very big machine
Figure 4.12: A concurrent machine
input/output
* = don't care
Figure 4.13: Legend
Answer each of the following questions for the three state machines in figures 4.10 to 4.12.
Number of Test Scenarios How many test scenarios (sequences of test vectors) would you
need to fully validate the behaviour of the state machine?
Length of Test Scenario What is the maximum length (number of test vectors) in a test scenario
for the state machine?
Number of Flip Flops Assuming that neither the inputs nor the outputs are registered, what is the minimum number of flip-flops needed to implement the state machine?
P4.3.2 State Machines in General
If a circuit has i signals of 1-bit each that are inputs, f 1-bit signals that are outputs of flip-flops, and c 1-bit signals that are the outputs of combinational circuitry, what is the maximum number of states that the circuit can have?
P4.4 Test Plan Creation
You're on the functional verification team for a chip that will control a simple portable CD player. Your task is to create a plan for the functional verification for the signals in the entity cd_digital.
You've been told that the player behaves just like all of the other CD players out there. If your test plan requires knowledge about any potential non-standard features or behaviour, you'll need to document your assumptions.
(Front panel: pwr button; track, min, and sec displays; prev, next, stop, and play buttons.)
entity cd_digital is
port (
----------------------------------------------------
-- buttons
prev,
stop,
play,
nxt, -- "next" is a VHDL reserved word
pwr : in std_logic;
----------------------------------------------------
-- detect if player door is open
door_open : in std_logic; -- "open" is a VHDL reserved word
----------------------------------------------------
-- output display information
track : out std_logic_vector(3 downto 0);
min : out unsigned(6 downto 0);
sec : out unsigned(5 downto 0)
);
end cd_digital;
P4.4.1 Early Tests
Describe five tests that you would run as soon as the VHDL code is simulatable. For each test, describe your specification, stimulus, and check. Summarize why your collection of tests should be the first tests that are run.
P4.4.2 Corner Cases
Describe five "corner-cases" or "boundary conditions", and explain the role of corner cases and boundary conditions in functional verification.
NOTES:
1. You may reference your answer for problem P4.4.1 in this question.
2. If you do not know what a corner case or boundary condition is, you may earn partial credit by: checking this box and explaining five things that you would do in functional verification.
P4.5 Sketches of Problems
1. Given a circuit, VHDL code, or circuit size info; calculate the simulation run time to achieve n% coverage.
2. Given a fragment of VHDL code, list things to do to make it more robust, e.g. illegal data and states go to the initial state.
3. Smith Problem 13.29
Chapter 5
Timing Analysis
5.1 Delays and Definitions
In this section we will look at the different timing parameters of circuits. Our focus will be on
those parameters that limit the maximum clock speed at which a circuit will work correctly.
5.1.1 Background Definitions
Definition fanin: The fanin of a gate or signal x is all of the gates or signals y where an input of x is connected to an output of y.
Definition fanout: The fanout of a gate or signal x is all of the gates or signals y where an output of x is connected to an input of y.
Figure 5.1: Immediate Fanin of x
Figure 5.2: Immediate Fanout of x
Definition immediate fanin/fanout: The phrases "immediate fanout" and "immediate fanin" mean that there is a direct connection between the gates.
Figure 5.3: Transitive Fanin
Figure 5.4: Transitive Fanout
Definition transitive fanin/fanout: The phrases "transitive fanout" and "transitive fanin" mean that there is either a direct or indirect connection between the gates.
Note: Immediate vs Transitive fanin and fanout Be careful to
distinguish between immediate fan(in/out) and transitive fan(in/out).
If fanin or fanout is not qualified with "immediate" or "transitive",
be sure to check whether immediate or transitive is meant. In E&CE 327,
fan(in/out) will mean immediate fan(in/out).
5.1.2 Clock-Related Timing Definitions
5.1.2.1 Clock Skew
(Waveforms: the same clock edge arrives at clk1, clk2, clk3, and clk4 at slightly different times; the difference is the skew.)
Definition Clock Skew: The difference in arrival times for the same clock edge at
different flip-flops.
Clock skew is caused by the difference in interconnect delays to different points on the chip.
Clock tree design is critical in high-performance designs to minimize clock skew. Sophisticated
synthesis tools put lots of effort into clock tree design, and the techniques for clock tree design still
generate PhD theses.
5.1.2.2 Clock Latency
(Waveforms: the same clock edge arrives progressively later at the master clock, the intermediate clock, and the final clock; the difference is the latency.)
Definition Clock Latency: The difference in arrival times for the same clock edge at
different levels of interconnect along the clock tree. (Intuitively: at different points in
the clock generation circuitry.)
Note: Clock latency Clock latency does not affect the limit on the minimum
clock period.
5.1.2.3 Clock Jitter
(Waveforms: an ideal clock vs. a clock with jitter; the variation in edge times is the jitter.)
Definition Clock Jitter: The difference between the actual clock period and the ideal clock period.
Clock jitter is caused by:
temperature and voltage variations over time
temperature and voltage variations across different locations on a chip
manufacturing variations between different parts
5.1.3 Storage-Related Timing Definitions
Storage devices (latches, flip-flops, memory arrays, etc.) define setup, hold, and clock-to-Q times.
5.1.3.1 Flops and Latches
(Waveforms of d, clk, and q contrasting flop behaviour with latch behaviour.)
Storage devices have two modes: load mode and store mode.
Flops are edge sensitive: either rising edge or falling edge. An ideal flop is in load mode only for
the instant just before the clock edge. In reality, flops are in load mode for a small window on
either side of the edge.
Latches are level sensitive: either active high or active low. A latch is in load mode when its enable
signal is at the active level.
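The load/store distinction can be mimicked with a small software model. This is a hedged sketch (the function name and trace format are mine, not from the course code): the flop samples d only on a rising edge of clk, while the latch is transparent whenever clk is 1.

```python
def simulate(clks, ds):
    """Return (flop_q, latch_q) output traces for clock/data traces."""
    flop_q, latch_q = [], []
    fq = lq = 0
    prev_clk = 0
    for clk, d in zip(clks, ds):
        if clk == 1 and prev_clk == 0:
            fq = d                 # flop: loads only on the rising edge
        if clk == 1:
            lq = d                 # latch: transparent while clk is high
        flop_q.append(fq)
        latch_q.append(lq)
        prev_clk = clk
    return flop_q, latch_q

clks = [0, 1, 1, 0, 1]
ds   = [5, 5, 7, 7, 9]
fq, lq = simulate(clks, ds)
# flop_q  = [0, 5, 5, 5, 9]: d captured only at the two rising edges
# latch_q = [0, 5, 7, 7, 9]: q follows d for as long as clk stays high
```

The middle samples show the difference: while clk stays high, the latch tracks the change of d from 5 to 7 but the flop holds 5.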
Timing Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
(Timing diagrams: the setup, hold, and clock-to-Q windows relative to the clock edge for a flip-flop, an active-high latch, and an active-low latch.)
Setup and hold define the window in which input data are required to be constant in order to
guarantee that the storage device will store data correctly. Setup defines the beginning of the window.
Hold defines the end of the window. Setup and hold timing constraints ensure that, when the
storage device transitions from load mode to store mode, the input data is stored correctly in the
storage device. Thus, the setup and hold timing constraints come into play when the storage device
transitions from load mode to store mode. Setup is assumed to happen before the clock edge and
5.1.3 Storage-Related Timing Denitions 305
hold is assumed to happen after the edge. If the end of the time window constraint occurs before
the clock edge, then the hold constraint is negative.
Clock-to-Q defines the delay from the clock edge to when the output is guaranteed to be stable.
Note: Require / Guarantee. Setup and hold times are requirements that the
storage device imposes upon its environment. Clock-to-Q is a guarantee that
the storage device provides to its environment. If the environment satisfies the
setup and hold times, then the storage device guarantees that it will satisfy the
clock-to-Q time.
In this section, we will use the definitions of setup, hold, and clock-to-Q. Section 5.2 will show how
to calculate setup, hold, and clock-to-Q times for flip-flops, latches, and other storage devices.
5.1.3.2 Timing Parameters for a Flop
Setup Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Definition Setup Time (T_SUD): The latest time before the arrival of the clock edge (flip-flop), or
the deasserting of the enable line (latch), that the input data is required to be stable in order for
the storage device to work correctly.
If the setup time is violated, the current input data will not be stored; the input data from the
previous clock cycle might remain stored.
5.1.3.3 Hold Time
Definition Hold Time (T_HO): The latest time after the arrival of the clock edge (flip-flop), or
the deasserting of the enable line (latch), that the input data is required to remain stable in order
for the storage device to work correctly.
If the hold time is violated, the current input data will not be stored; the input data from the next
clock cycle might slip through and be stored.
5.1.3.4 Clock-to-Q Time
Definition Clock-to-Q Time (T_CO): The earliest time after the arrival of the clock edge (flip-flop),
or the asserting of the enable line (latch), when the output data is guaranteed to be stable.
Review: Timing Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Setup: Time before the arrival of the clock edge (flip-flop), or the deasserting of the enable line (latch), that the input data is required to start being stable.
Hold: Time after the arrival of the clock edge (flip-flop), or the deasserting of the enable line (latch), that the input data is required to remain stable.
Clock-to-Q: Time after the arrival of the clock edge (flip-flop), or the asserting of the enable line (latch), when the output data is guaranteed to start being stable.
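These three parameters can be checked mechanically against a signal's actual stable window. The sketch below is ours, not from the course notes: it assumes the window is represented by its start and end times, and that a negative margin means a violation.

```python
def timing_margins(t_stable_start, t_stable_end, t_clk, t_su, t_ho):
    """Setup and hold margins for one clock edge at time t_clk.

    The data must be stable from (t_clk - t_su) to (t_clk + t_ho).
    A negative margin indicates a setup or hold violation.
    """
    setup_margin = (t_clk - t_su) - t_stable_start
    hold_margin = t_stable_end - (t_clk + t_ho)
    return setup_margin, hold_margin

# Data stable from t=2 to t=12, clock edge at t=10, setup=3, hold=1:
print(timing_margins(2.0, 12.0, 10.0, 3.0, 1.0))  # (5.0, 1.0)
```

With a later-arriving signal, e.g. stable only from t=8 to t=10.5, the same call reports negative margins, i.e. both a setup and a hold violation.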
5.1.4 Propagation Delays
Propagation delay is the time it takes a signal to travel from the source (driving) flop to the
destination flop. The two factors that contribute to propagation delay are the load of the
combinational gates between the flops and the delay along the interconnect (wires) between the gates.
5.1.4.1 Load Delays
Load delay is proportional to load capacitance.
[Figure: timing of a simple inverter (Vi, Vo) driving a load. An input 1->0 transition charges the output capacitance (output 0->1); an input 0->1 transition discharges it (output 1->0)]
Load capacitance is dependent on the fanout (how many other gates a gate drives) and how big
those other gates are.
Section 5.4.2 goes into more detail on timing models and equations for load delay.
5.1.4.2 Interconnect Delays
Wires, also known as interconnect, have resistance, and there is a capacitance between a wire and
both the substrate and parallel wires. Both the resistance and capacitance of wires increase delay.
Wire resistance is dependent upon the material and geometry of the wire.
Wire capacitance is dependent on wire geometry, the geometry of neighboring wires, and materials.
Shorter wires are faster.
Fatter wires are faster.
FPGAs have special routing resources for long wires.
CMOS processes use higher metal layers for long wires; these layers have wires with much
larger cross sections than lower levels of metal.
More on this in section 5.4.
5.1.5 Summary of Delay Factors
Name          Symbol   Definition
Skew                   Difference in arrival times for different clock signals
Jitter                 Difference in clock period over time
Clock-to-Q    T_CO     Delay from clock signal to Q output of flop
Setup         T_SUD    Length of time prior to clock/enable that data must be stable
Hold          T_HO     Length of time after clock/enable that data must be stable
Load                   Delay due to load (fanout/consumers/readers)
Interconnect           Delay along wire

Table 5.1: Summary of delay factors
5.1.6 Timing Constraints
For a circuit to operate correctly, the clock period must be longer than the sum of the delays shown
in Table 5.1.
Definition Margin: The difference between the required value of a timing parameter
and the actual value. A negative margin means that there is a timing violation. A
margin of zero means that the timing parameter is just satisfied: changing the timing
of the signals (which would affect the actual value of the parameter) could violate the
timing parameter. A positive margin means that the constraint for the timing
parameter is more than satisfied: the timing of the signals could be changed at least a
little bit without violating the timing parameter.
Note: Margin is often called slack. Both terms are used commonly.
5.1.6.1 Minimum Clock Period
[Figure: timing diagram of a path from flop clk1 through signal a to signal b at flop clk2, showing skew, jitter, clock-to-Q, propagation (interconnect + load), setup, and slack within one clock period]

ClockPeriod >= Skew + Jitter + T_CO + Interconnect + Load + T_SUD

Note: The minimum clock period is independent of the hold time.
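The inequality can be evaluated directly. A minimal sketch, with hypothetical delay values in ns (the function name and the numbers are ours, chosen only for illustration):

```python
def min_clock_period(skew, jitter, t_co, interconnect, load, t_sud):
    # ClockPeriod >= Skew + Jitter + T_CO + Interconnect + Load + T_SUD
    return skew + jitter + t_co + interconnect + load + t_sud

# Hypothetical delays, in ns:
period = min_clock_period(skew=0.25, jitter=0.25, t_co=0.5,
                          interconnect=1.0, load=0.75, t_sud=0.25)
print(period)         # 3.0
print(10.0 - period)  # margin (slack) at a 10 ns clock period: 7.0
```

The second print is the margin from the definition above: the required value (minimum period) subtracted from the actual clock period.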
5.1.6.2 Hold Constraint
[Figure: timing diagram (clk1, clk2, a, b) illustrating the hold constraint]

Skew + Jitter + T_HO <= T_CO + Interconnect + Load
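The hold constraint can be checked the same way. A sketch with hypothetical values in ns (the function name and numbers are ours):

```python
def hold_constraint_ok(skew, jitter, t_ho, t_co, interconnect, load):
    # Skew + Jitter + T_HO <= T_CO + Interconnect + Load
    return skew + jitter + t_ho <= t_co + interconnect + load

# The new data arrives well after the hold window closes:
print(hold_constraint_ok(skew=0.25, jitter=0.25, t_ho=0.5,
                         t_co=0.5, interconnect=1.0, load=0.5))   # True
# A large skew can violate the hold constraint:
print(hold_constraint_ok(skew=2.0, jitter=0.25, t_ho=0.5,
                         t_co=0.5, interconnect=0.25, load=0.25))  # False
```

Note that, unlike the minimum-period constraint, no amount of slowing the clock fixes a hold violation: the clock period does not appear in the inequality.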
5.1.6.3 Example Timing Violations
The figures below illustrate correct timing behaviour of a circuit and then two types of violations:
a setup violation and a hold violation. In the figures, the black rectangles identify the point where
the violation happens.
[Figure 5.5: Good timing. Waveforms for clk, a, b, c, d showing clock-to-Q, propagation, setup, and hold all satisfied]
[Figure 5.6: Setup violation. The data changes too close to the clock edge, so the stored value (???) is unknown]
[Figure 5.7: Hold violation. The data changes too soon after the clock edge, so the stored value (???) is unknown]
5.2 Timing Analysis of Latches and Flip Flops
In this section, we show how to find the clock-to-Q, setup, and hold times for latches, flip-flops,
and other storage elements.
5.2.1 Simple Multiplexer Latch
We begin our study of timing analysis for storage devices with a simple latch built from an inverter
ring and multiplexer. There are many better ways to build latches, primarily by doing the design
at the transistor level. However, the simplicity of this design makes it ideal for illustrating timing
analysis.
5.2.1.1 Structure and Behaviour of Multiplexer Latch
Two modes for storage devices:
loading data:
  input data is loaded into the storage circuitry
  input data passes through to the output
using stored data:
  input signal is disconnected from the output
  storage circuitry drives the output
[Figure: multiplexer latch schematic. In loading (pass-through) mode the multiplexer selects the input i; in storage mode it selects the feedback loop that drives o]
Unfold Multiplexer to Simple Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: multiplexer symbol and gate-level implementation (inputs a, b, select sel, output o); gate-level latch implementation (d, clk, o)]
Note: inverters on clk. Both of the inverters on the clk signal are needed.
Together, they prevent a glitch on the OR gate when clk is deasserted. If
there were only one inverter, a glitch would occur. For more on this, see
section 5.2.1.6.
[Figure: gate-level latch annotated with signal values for four cases: loading 0, loading 1, storing 0, storing 1]
5.2.1.2 Strategy for Timing Analysis of Storage Devices
The key to calculating the setup and hold times of a latch, flop, etc., is to identify:
1. how the data is stored when not connected to the input (often a pair of inverters in a loop)
2. the gate(s) that the clock uses to cause the stored data to drive the output (often a transmission
gate or multiplexer)
3. the gate(s) that the clock uses to cause the input to drive the output (often a transmission gate
or multiplexer)
[Figure: the latch in store mode (clk=0) and in load mode (clk=1)]
Note: Storage devices vs. Signals We can talk about the setup and hold
time of a signal or of a storage device. For a storage device, the setup and
hold times are requirements that it imposes upon all environments in which it
operates. For an individual signal in a circuit, there is a setup and hold time,
which is the amount of time that the signal is stable before and after a clock
edge.
5.2.1.3 Clock-to-Q Time of a Multiplexer Latch
[Figure 5.8: Latch schematic labelled for clock-to-Q analysis (signals d, l1, l2, qn, q, s1, s2, clk, cn, c2)]
[Figure 5.9: Waveforms of the latch showing clock-to-Q timing]
Assume that the input is stable, and that the clock signal then transitions to move the circuit from
storage mode to load mode.
Calculate the clock-to-Q time by finding the delay of the critical path from where the clock signal
enters the storage circuit to where q exits the storage circuit.
The path is clk -> cn -> c2 -> l2 -> qn -> q, which has a delay of 5 (assuming each gate has a
delay of exactly one time unit).
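Under the unit-delay assumption, the clock-to-Q time is just the number of gates along this path. A trivial sketch:

```python
# Clock-to-Q path of the multiplexer latch: clk -> cn -> c2 -> l2 -> qn -> q.
# There is one gate per hop; assume the unit gate delay from the text.
path = ["clk", "cn", "c2", "l2", "qn", "q"]
gate_delay = 1
clock_to_q = gate_delay * (len(path) - 1)  # one gate per hop
print(clock_to_q)  # 5
```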
5.2.1.4 Setup Timing of a Multiplexer Latch
The storage device transitions from load mode to store mode. Setup is the time that the input must
be stable before the clock changes.
[Figure 5.10: Latch schematic labelled for setup analysis]
[Figure 5.11: Waveforms showing the setup time with positive margin; the goal is to store the arriving value]
Step-by-step animation of the latch transitioning from load mode to store mode:
[Figure: animation panels. Initially, the circuit is stable in load mode. t=0: clk transitions from load to store. t=1: clk propagates through the inverter to cn. t=2: s1 propagates to s2, because cn turns on the store AND gate. t=3: l2 is set to 0, because c2 turns off the load AND gate. t=4: the value from the store path propagates to q. t=5: the value from the store path completes the cycle around the store loop]
The value on s1 at t=1 will propagate from the store loop to the output and back through the store
loop. At t=1, s1 must have the value that we want to store. Or, equivalently, the value to store must
have saturated the store loop by t=1. It takes 5 time units for a value on the input d to propagate to
s1 (d -> l1 -> l2 -> qn -> q -> s1).
The setup time is the difference between the delay from d to s1 and the delay from clk to cn: 5 - 1 = 4,
so the setup time for this latch is 4 time units.
[Figure 5.12: Setup violation. Waveforms showing setup with negative margin]
Step-by-step animation of the latch transitioning from load to store mode with a setup violation,
where the data edge arrives 1 time unit before the rising edge of the clock:
[Figure: animation panels. t=-1: d transitions to its new value. t=0: the new value propagates through the input inverter; clk transitions from load to store. t=1: the new value propagates through the load AND gate; clk propagates through the inverter. t=2: the old value propagates through the store AND gate. Trouble: inconsistent values on the load path and store path; the old value is still in the store path when the store path is enabled. t=3: l2 is set to 0, because c2 turns off the load AND gate. t=4: the inconsistent value from the store path propagates to q. t=5: the inconsistent value completes the cycle around the store loop, illustrating instability]
[Figure: waveforms of the setup violation on a timeline from t=-3 to t=6]
We now repeat the analysis of the setup violation, but illustrate the minimum violation (the input
transitions 3 time units before the clock edge).
[Figure: animation panels for the minimum setup violation. t=-3: d transitions to its new value. t=-2: the new value propagates through the input inverter. t=-1: the new value propagates through the load AND gate. t=0: clk transitions from load to store. t=1: clk propagates through the inverter. t=2: the old value propagates through the store AND gate. Trouble: inconsistent values on the load path and store path; the old value is still in the store path when the store path is enabled. t=3: l2 is set to 0, because c2 turns off the load AND gate. t=4 and t=5: the inconsistent value propagates to q and around the store loop, illustrating instability]
[Figure: waveforms of the minimum setup violation on a timeline from t=-3 to t=6]
[Figure 5.13: Setup violation waveforms]
[Figure 5.14: Minimum setup time waveforms]
When cn is asserted, the value to be stored must already be at s1. Otherwise, the wrong value
will affect the storage circuitry when the data input is disconnected.
5.2.1.5 Hold Time of a Multiplexer Latch
[Figure 5.15: Latch schematic labelled for hold analysis]
[Figure 5.16: Waveforms showing the hold time with positive margin; the goal is to store the value]
[Figure 5.17: Animation of hold analysis. Initially, the circuit is stable in load mode. t=0: clk transitions from load to store. t=5: the clk transition propagates to cn. t=6: the clk transition propagates to c2; l1 may change now without affecting the storage device. t=7: the clk transition propagates to l2]
It takes 6 time units for a change on the clock signal to propagate to the input of the AND gate that
controls the load path. It takes 1 time unit for a change on d to propagate to the other input of this
AND gate. The data input must remain stable for 6 - 1 = 5 time units after the clock transitions from
load to store mode, or else the new data value will slip into the storage loop and corrupt
the value that we are trying to store.
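Both the setup and the hold calculations reduce to a difference of two path delays. A sketch using the numbers from this latch (the function names and argument names are ours):

```python
def setup_time(t_data_to_store_loop, t_clk_to_store_enable):
    # Data must reach the store loop before the clock enables storing.
    return t_data_to_store_loop - t_clk_to_store_enable

def hold_time(t_clk_to_load_disable, t_data_to_load_gate):
    # Data must not change until the clock has disabled the load path.
    return t_clk_to_load_disable - t_data_to_load_gate

# Multiplexer latch from the text: d -> s1 takes 5, clk -> cn takes 1;
# clk takes 6 units to disable the load AND gate, d -> l1 takes 1.
print(setup_time(5, 1))  # 4
print(hold_time(6, 1))   # 5
```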
[Figure 5.18: Hold violation waveforms; the new data value slips through to q]
[Figure 5.19: Minimum hold time waveforms]
We can't let the new data value affect l1 before c2 deasserts.
The hold time is the difference between the delay of the path from clk to c2 and the delay of the
path from d to l1.
5.2.1.6 Example of a Bad Latch
This latch is very similar to the one from section 5.2.1.5; however, this one does not work correctly.
The difference between this latch and the one from section 5.2.1.5 is the location of the inverter
that determines whether l2 or s2 is enabled. When the clock signal is deasserted, c2 turns off the
AND gate l2 before the AND gate s2 turns on. In this interval when both l2 and s2 are turned
off, a glitch is allowed to enter the feedback loop.
The glitch on the feedback loop is independent of the timing of the signals d and clk.
[Figure: bad latch schematic and waveforms showing the glitch entering the feedback loop]
5.2.2 Timing Analysis of Transmission-Gate Latch
The latch that we now examine is more realistic than the simple multiplexer-based latch. We
replace the multiplexer with a transmission gate.
5.2.2.1 Structure and Behaviour of a Transmission Gate
[Figure: transmission gate symbol and implementation; open and closed configurations; transmitting a 1 and a 0; transmission gate as a switch]
5.2.2.2 Structure and Behaviour of Transmission-Gate Latch
(Smith 2.5.1)
[Figure: transmission-gate latch schematic; loading data into the latch; using stored data from the latch]
5.2.2.3 Clock-to-Q Delay for Transmission-Gate Latch
[Figure: transmission-gate latch labelled for clock-to-Q analysis]
5.2.2.4 Setup and Hold Times for Transmission-Gate Latch
[Figure: setup time for the latch, computed as path1 - path2]
[Figure: hold time for the latch, computed as path1 - path2]
5.2.3 Falling Edge Flip Flop
(Smith 2.5.2)
We combine two active-high latches to create a falling-edge, master-slave flip-flop. The analysis
of the master-slave flip-flop illustrates how to do timing analysis for hierarchical storage devices.
Here, we use the timing information for the active-high latch to compute the timing information
of the flip-flop. We do not need to know the primitive structure of the latch in order to derive the
timing information for the flip-flop.
5.2.3.1 Structure and Behaviour of Flip-Flop
[Figure: master-slave flip-flop built from two active-high latches (EN), with signals d, m, q, clk, clk_b; waveforms showing the latch clock-to-Q, the inverter delay, and the latch setup]
T_inv: delay through an inverter
T_md: propagation delay from m to d
5.2.3.2 Clock-to-Q of Flip-Flop
[Figure: waveforms showing that the flop clock-to-Q is the inverter delay plus the latch clock-to-Q]

T_CO(Flop) = T_inv + T_CO(Latch)
5.2.3.3 Setup of Flip-Flop
[Figure: waveforms showing that the flop setup time equals the master latch setup time]

T_SUD(Flop) = T_SUD(Latch)

The setup time of the flip-flop is the same as the setup time of the master latch. This is because,
once the data is stored in the master latch, it will be held for the slave latch.
5.2.3.4 Hold of Flip-Flop
[Figure: waveforms showing that the flop hold time equals the master latch hold time]

T_HO(Flop) = T_HO(Latch)

The hold time of the flip-flop is the same as the hold time of the master latch. This is because, once
the data is stored in the master latch, it will be held for the slave latch.
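The three relationships can be packaged as a small helper. A sketch (the dictionary representation is our choice; the latch numbers come from the unit-delay multiplexer latch analysis earlier in this section):

```python
def flop_from_latch(latch, t_inv):
    """Timing of a master-slave flop built from two active-high latches,
    given the latch timing and the clk inverter delay t_inv."""
    return {
        "t_sud": latch["t_sud"],         # T_SUD(Flop) = T_SUD(Latch)
        "t_ho":  latch["t_ho"],          # T_HO(Flop)  = T_HO(Latch)
        "t_co":  t_inv + latch["t_co"],  # T_CO(Flop)  = T_inv + T_CO(Latch)
    }

latch = {"t_sud": 4, "t_ho": 5, "t_co": 5}  # unit-delay mux latch values
print(flop_from_latch(latch, t_inv=1))
# {'t_sud': 4, 't_ho': 5, 't_co': 6}
```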
5.2.4 Timing Analysis of FPGA Cells
(Smith 5.1.5)
We can apply hierarchical analysis to structures that include both datapath and storage circuitry.
We use an Actel FPGA cell to illustrate. The description of the Actel FPGA cell in the course notes
is incomplete; refer to Smith's book for additional material.
5.2.4.1 Standard Timing Equations

T_PD = delay from D-inputs to storage element
T_CLKD = delay from clk-input to storage element
T_OUT = delay from storage element to output

T_SUD = setup time
      = slowest D path - fastest clk path
      = T_PD(Max) - T_CLKD(Min)

T_HO = hold time
     = slowest clk path - fastest D path
     = T_CLKD(Max) - T_PD(Min)

T_CO = delay from clk to Q
     = clk path + output path
     = T_CLKD + T_OUT
5.2.4.2 Hierarchical Timing Equations
Add combinational logic to the inputs, clock, and outputs of the storage element.
[Figure: storage element with internal parameters t'_SUD, t'_HO, t'_CO, wrapped by combinational delays t_PD on the data inputs, t_CLKD on the clock, and t_OUT on the output]

T_SUD = T'_SUD + T_PD(Max) - T_CLKD(Min)
T_HO = T'_HO + T_CLKD(Max) - T_PD(Min)
T_CO = T'_CO + T_CLKD(Max) + T_OUT(Max)
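The hierarchical equations translate directly into code. A sketch (the function name, argument names, and the illustrative integer delays are ours):

```python
def hierarchical_timing(t_sud_i, t_ho_i, t_co_i,
                        t_pd_max, t_pd_min,
                        t_clkd_max, t_clkd_min, t_out_max):
    """Wrap a storage element (the *_i arguments are the internal,
    primed parameters) with combinational logic: t_pd on the data
    inputs, t_clkd on the clock, and t_out on the output."""
    return {
        "t_sud": t_sud_i + t_pd_max - t_clkd_min,
        "t_ho":  t_ho_i + t_clkd_max - t_pd_min,
        "t_co":  t_co_i + t_clkd_max + t_out_max,
    }

# Illustrative integer delays:
print(hierarchical_timing(t_sud_i=2, t_ho_i=1, t_co_i=3,
                          t_pd_max=4, t_pd_min=2,
                          t_clkd_max=3, t_clkd_min=1, t_out_max=2))
# {'t_sud': 5, 't_ho': 2, 't_co': 8}
```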
5.2.4.3 Actel Act 2 Logic Cell
Timing analysis of Actel Act 2 logic cell (Smith 5.1.5).
Actel ACT
Basic logic cells are called Logic Modules.
ACT 1 family: one type of Logic Module (see Figure 5.1, Smith's pp. 192).
ACT 2 and ACT 3 families: use two different types of Logic Module (see Figure 5.4,
Smith's pp. 198):
C-Module (Combinatorial Module): combinational logic, similar to the ACT 1 Logic Module
but capable of implementing a five-input logic function.
S-Module (Sequential Module): C-Module + Sequential Element (SE) that can be configured
as a flip-flop.
Actel Timing
ACT family: (see Figure 5.5, Smith's pp. 200)
Simple. Why? Only logic inside the chip.
Not exact delay (as there is no place and route or physical layout, hence not accounting for
interconnect delay).
Non-deterministic Actel architecture.
All primed parameters are internal to the S-Module. Calculate tSUD, tHO, and tCO.
The combinational logic delay of 3 ns: 0.4 ns went into increasing the setup time, tSUD, and
2.6 ns went into increasing the clock-to-output delay, tCO. From outside, we can say that the
combinational logic delay is buried in the flip-flop setup time.
[Figure: simple Actel-style latch; Actel latch with active-low clear; Actel flop with active-low clear]
[Figure: Actel sequential module: a C-Module (inputs d00, d01, d10, d11, a0, b0, a1, b1) feeding an SE-Module (clk, clr, m, q, se_clk, se_clk_n)]
5.2.4.4 Timing Analysis of Actel Sequential Module
Timing parameters for the Actel latch with active-low clear:
  T_SUD  0.4 ns
  T_HO   0.9 ns
  T_CO   0.4 ns
Other given timing parameters:
  C-Module delay (t'_PD)                    3 ns
  t_CLKD (from clk to se_clk and se_clk_n)  2.6 ns
Question: What are the setup, hold, and T_CO times for the entire Actel sequential
module?
Answer:
See Smith pp. 199. Use Smith's eqns 5.15 and 5.16, and assume t'_CLKD = 2.6 ns.
  T_SUD  0.8 ns
  T_HO   0.5 ns
  T_CO   3.0 ns
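A quick check of these numbers using the hierarchical equations, under the assumptions that t_PD max = min = 3 ns, t_CLKD max = min = 2.6 ns, and there is no combinational logic on the output:

```python
# Sequential-element (primed) parameters and wrapper delays, in ns:
t_sud_se, t_ho_se, t_co_se = 0.4, 0.9, 0.4   # SE latch with active-low clear
t_pd = 3.0     # C-Module delay (assume max = min)
t_clkd = 2.6   # clk to se_clk / se_clk_n (assume max = min)
t_out = 0.0    # assume no combinational logic on the output

t_sud = round(t_sud_se + t_pd - t_clkd, 3)   # 0.8 ns
t_ho  = round(t_ho_se + t_clkd - t_pd, 3)    # 0.5 ns
t_co  = round(t_co_se + t_clkd + t_out, 3)   # 3.0 ns
print(t_sud, t_ho, t_co)  # 0.8 0.5 3.0
```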
5.2.5 Exotic Flop
As a contrast to the gate-level implementations of latches that we looked at previously, the figure
below is the schematic for a state-of-the-art high-performance latch circa 2001.
[Figure: exotic flop schematic with d, clk, q, an inverter chain, two precharge nodes, and two keepers]
The inverter chain creates an evaluation window in time when the clock has just risen and the p
transistors are turned on.
When the clock is 0, the left precharge node charges to 1 and the right precharge node discharges
to 0.
If d is 1 during the evaluation window, the left precharge node discharges to 0. The left
precharge node goes through an inverter to the second precharge node, which will charge from
0 to 1, resulting in a 0 on q.
If d is 0 during the evaluation window, the left precharge node stays at the precharge value of
1. The left precharge node goes through an inverter to the second precharge node, which will
stay at 0, resulting in a 1 on q.
The two inverter loops are keepers, which provide energy to keep the precharge nodes at their
values after the evaluation window has passed while the clock is still 1.
5.3 Critical Paths and False Paths
5.3.1 Introduction to Critical and False Paths
In this section we describe how to find the critical path through the circuit: the path that limits the
maximum clock speed at which the circuit will work correctly. A complicating factor in finding the
critical path is the existence of false paths: paths through the circuit that appear to be the critical
path, but in fact will not limit the clock speed of the circuit. The reason that a path is false is that
the behaviour of the gates prevents a transition (either 0->1 or 1->0) from travelling along the
path from the source node to the destination node.
Definition critical path: The slowest path on the chip between flops, or between flops and pins.
The critical path limits the maximum clock speed.
Definition false path: A path along which an edge cannot travel from beginning to end.
To confirm that a path is a true critical path, and not a false path, we must find a pair of input
vectors that exercise the critical path. The two input vectors usually differ only in their value for the
input signal on the critical path.¹ The change on this signal (either 0->1 or 1->0) must propagate
along the candidate critical path from the input to the output.
Usually the two input vectors will produce different output values. However, a critical path might
produce a glitch (0->1->0 or 1->0->1) on the output, in which case the path is still the critical
path, but the two input vectors both result in the same value on the output signal. Glitches should
not be ignored, because they may result in setup violations. If the glitching value is inside the
destination flop or latch at the end of the clock period, then the storage element will not store a
stable value.
Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The algorithm that we present comes from McGeer and Brayton in a DAC 198? paper. The
algorithm to find the critical path through a circuit is presented in several parts.
1. Section 5.3.2: Find the longest path, ignoring the possibility of false paths.
2. Section 5.3.3: Almost-correct algorithm to test whether a candidate critical path is a false
path.
3. Section 5.3.4: If a candidate path is a false path, then find the next candidate path, and repeat
the false-path detection algorithm.
4. Section 5.3.5: Correct, complete, and complex algorithm to find the critical path in a circuit.
¹ Section 5.3.5 discusses late side inputs and situations where more than one input needs to change for the critical
path to be exercised.
Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Note: The analysis of critical paths and false paths assumes that all inputs
change values at exactly the same time. Timing differences between inputs are
modelled by the skew parameter in timing analysis.
Throughout our discussion of critical paths, we will use the delay values for gates shown in the
table below.

gate  delay
NOT   2
AND   4
OR    4
XOR   6
5.3.1.1 Example of Critical Path in Full Adder
Question: Find the critical path through the full-adder circuit shown below.
[Figure: full-adder circuit with inputs a, b, ci, outputs s, co, and internal signals i, j, k]
Answer:
Annotate with Max Distance to Destination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: full adder annotated with the max delay from each signal to a destination; a and b are annotated with 14, ci with 8]
Find Candidate Critical Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: the annotated full adder with the candidate critical path highlighted]
There are two paths of length 14: a->co and b->co. We arbitrarily choose a->co.
Test if Candidate is Critical Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: full adder with ci=1, b=0, and a rising edge on a exercising the candidate path]
Yes, the candidate path is the critical path.
The assignment of ci=1, a=0, b=0, followed in the next clock cycle by ci=1,
a=1, b=0, will exercise the critical path. As a shortcut, we write the pair of
assignments as: ci=1, a=0->1, b=0.
Question: Do the input values ci=0, b=1, with an edge on a, exercise the critical path?
Answer:
[Figure: full adder with ci=0, b=1, and an edge on a]
The alternative does not exercise the critical path. Instead, the alternative
excitation follows a shorter path, so the output stabilizes sooner.
Lesson: not all transitions on the inputs will exercise the critical path.
Using timing simulation to find the maximum clock speed of a circuit might
overestimate the clock speed, because the input values that you simulate
might not exercise the critical path.
5.3.1.2 Preliminaries for Critical Paths
There are three classes of paths on a chip:
entry path: from an input to a flop
  Quartus does not report this by default. When Quartus reports this path, it is reported as the
  period associated with System fmax.
  In Xilinx timing reports, this is reported as Maximum Delay.
stage path: from one flop to another flop
  In Quartus timing reports, this is reported as the period associated with Internal fmax.
  In Xilinx timing reports, this is reported as Clock to Setup and Maximum Frequency.
exit path: from a flop to an output
  Quartus does not report this by default. When Quartus reports this path, it is reported as the
  period associated with System fmax.
  In Xilinx timing reports, this is reported as Maximum Delay.
5.3.1.3 Longest Path and Critical Path
The longest path through the circuit might not be the critical path, because the behaviour of the
gates might prevent an edge (0->1 or 1->0) from travelling along the path.
Example False Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Determine whether the longest path in the circuit below is a false path.
[Figure: circuit with inputs a and b and output y, containing an AND gate and an OR gate]
Answer:
For this example, we use a very naive approach simply to illustrate the
phenomenon of false paths. Sections 5.3.2 to 5.3.5 present a better algorithm
to detect false paths and find the real critical path.
In the circuit above, the longest path is from b to y.
The four possible scenarios for the inputs are:
(a = 0, b = 0->1)
(a = 0, b = 1->0)
(a = 1, b = 0->1)
(a = 1, b = 1->0)
[Figure: the four scenarios, each annotated with signal values showing where the edge is blocked]
In each of the four scenarios, the edge is blocked at either the AND gate or
the OR gate. None of the four scenarios result in an edge on the output y, so
the path from b to y is a false path.
Question: How can we determine analytically that this is a false path?
Answer:
The value on a will always force either the AND gate to be a 0 (when a is 0)
or the OR gate to be a 1 (when a is 1). For both a=0 and a=1, a change on b
will be unable to propagate to y. The algorithm to detect false paths is based
upon this type of analysis.
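This analysis can be checked exhaustively. Assuming the circuit computes y = a OR (a AND b), which is consistent with the description (a = 0 forces the AND to 0; a = 1 forces the OR to 1), toggling b never changes y:

```python
def y(a, b):
    # Hypothetical netlist matching the description: y = a OR (a AND b).
    return a | (a & b)

# For either value of a, toggling b never changes y, so no edge on b
# can reach y: the path through b is a false path.
for a in (0, 1):
    print(a, y(a, 0), y(a, 1))  # y is independent of b
```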
Preview of Complete Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This example illustrates all of the concepts in analysing critical paths. Here, we explore the circuit
informally. In section 5.3.5, we will revisit this circuit and analyse it according to the complete,
correct, and complex algorithm.
Question: Find the critical path through the circuit below.
[Figure: circuit with inputs a, b, c, d, internal signals e and f, and output g]
Answer:
Even though the equation for this circuit reduces to false, the output signal (g)
is not a constant 0. Instead, glitches can occur on g. To explore the
behaviour of the circuit, we will stimulate the circuit first with a falling edge,
then with a rising edge.
Stimulate the circuit with a falling edge and see which path the edge follows.
[Figure: circuit annotated with arrival times for a falling edge on the input]
The longest path through the circuit is the middle path.
At g, the side input (a) has a controlling value before the falling edge arrives
on the path input (e). Thus, a falling edge is unable to excite the longest path
through the circuit.
Stimulate the circuit with a rising edge and see which path the edge follows.
[Figure: circuit annotated with arrival times for a rising edge on the input]
At f, the side input c has a controlling value before the falling edge arrives on
the path input (e). Thus, a rising edge is unable to excite the longest path
through the circuit.
Of the two scenarios, the falling edge follows a longer path through the circuit
than the rising edge. The critical path is the lower path through the circuit.
When we develop our first algorithm to detect false paths (section 5.3.3), we
will assume that at each gate, the input that is on the critical path will arrive
after the other inputs. Not all circuits satisfy the assumption. At f, when a is a
falling edge, the path input (c) arrives before the side input e. This
assumption is removed in section 5.3.5, where we present the complete
algorithm by dealing with late-arriving side inputs.
5.3.1.4 Timing Simulation vs Static Timing Analysis
The delay through a component is usually dependent upon the values on signals. This is because
different paths in the circuit have different delays and some input values will prevent some paths
from being exercised. Here are two simple examples:
In a ripple-carry adder, if a carry out of the MSB is generated from the least signicant bit,
then it will take longer for the output to stabilize than if no carries generated at all.
In a state machine using a one-hot state encoding, false paths might exist when more than
one state bit is a 1.
Because of these effects, static timing analysis might be overly conservative and predict a delay
that is greater than you will experience in practice. Conversely, a timing simulation may not
demonstrate the actual slowest behaviour of your circuit: if you don't ever generate a carry from
LSB to MSB, then you'll never exercise the critical path in your adder. The most accurate delay
analysis requires looking at the complete set of actual data values that will occur in practice.
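The data-dependent delay of the ripple-carry adder can be sketched with a small, hypothetical timing model (unit gate delays, not vendor timing data): each bit position that propagates the incoming carry (a_i XOR b_i = 1) adds one gate delay to the carry's arrival time, while a position that generates or kills the carry produces it locally.

```python
def adder_settle_time(a, b, n, d=1):
    """Simplified timing model of an n-bit ripple-carry adder.

    A bit position that propagates the incoming carry (a_i XOR b_i == 1)
    adds delay d to the carry's arrival time; a position that generates
    or kills the carry produces it locally after delay d.
    """
    t_carry = 0        # arrival time of the carry into bit 0
    settle = 0
    for i in range(n):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        if ai ^ bi:                 # propagate: carry ripples through
            t_carry += d
        else:                       # generate or kill: produced locally
            t_carry = d
        settle = max(settle, t_carry + d)   # sum bit waits for the carry
    return settle

# A carry generated at the LSB that ripples to the MSB is the slow case:
print(adder_settle_time(0b0111, 0b0001, 4))   # long carry chain -> 4
print(adder_settle_time(0b0000, 0b0000, 4))   # no carries at all -> 2
```

The same circuit settles in 2 gate delays for the all-zeros vectors but 4 gate delays when the carry ripples end to end, which is exactly the gap between a lucky timing simulation and the static worst case.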
5.3.2 Longest Path
The following is an algorithm to find the longest path from a set of source signals to a set of
destination signals. We first provide a high-level, intuitive description, and then present the actual
algorithm.
Outline of Algorithm to Find Longest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The basic idea is to annotate each signal with the maximum delay from it to an output.
Start at destination signals and traverse through fanin to source signals.
Destination signals have a delay of 0
At each gate, annotate the inputs by the delay through the gate plus the delay of the output.
344 CHAPTER 5. TIMING ANALYSIS
When a signal fans out to multiple gates, annotate the output of the source (driving) gate with
maximum delay of the destination signals.
The primary input signal with the maximum delay is the start of the longest path. The delay
annotation of this signal is the delay of the longest path.
The longest path is found by working from the source signal to the destination signals, picking
the fanout signal with the maximum delay at each step.
Algorithm to Find Longest Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1. Set current time to 0
2. Start at destination signals
3. For each input to a gate that drives a destination signal, annotate the input with the current
time plus the delay through the gate
4. For each gate that has times on all of its fanout but not a time for itself,
(a) annotate each input to the gate with the maximum time on the fanout plus the delay
through the gate
(b) go to step 4
5. To find the longest path, start at the source node that has the maximum delay. Work forward
through the fanout. For signals that fan out to multiple signals, choose the fanout signal with
the maximum delay.
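The algorithm above can be sketched in a few lines of Python. The netlist encoding (each gate's output name mapped to its inputs and delay) and the tiny example circuit are hypothetical, chosen only to exercise the code:

```python
def longest_path(gates, dests):
    """Backward pass annotates every signal with its maximum delay to a
    destination; a forward walk then extracts the longest path.
    gates: output signal -> (list of input signals, gate delay)."""
    fanout = {}
    for out, (ins, _) in gates.items():
        for i in ins:
            fanout.setdefault(i, []).append(out)
    memo = {}
    def dly(sig):                    # max delay from sig to a destination
        if sig not in memo:
            memo[sig] = 0 if sig in dests else max(
                gates[o][1] + dly(o) for o in fanout[sig])
        return memo[sig]
    # Sources are signals that are not driven by any gate.
    sources = [s for s in fanout if s not in gates]
    start = max(sources, key=dly)    # start of the longest path
    path, sig = [start], start
    while sig not in dests:          # follow the max-delay fanout
        sig = max(fanout[sig], key=lambda o: gates[o][1] + dly(o))
        path.append(sig)
    return path, dly(start)

# Tiny hypothetical netlist: d = buf(a), e = gate(b, d), y = gate(d, e)
gates = {"d": (["a"], 2), "e": (["b", "d"], 3), "y": (["d", "e"], 1)}
print(longest_path(gates, dests={"y"}))   # (['a', 'd', 'e', 'y'], 6)
```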
Longest Path Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Find the longest path through the circuit below.
[Figure: circuit with primary inputs a, b, c and internal signals d through m.]
Answer:
Annotate signals with the maximum delay to an output:
[Figure: the circuit annotated with each signal's maximum delay to an output; the largest annotation is 16, on input a.]
Find longest path:
[Figure: the same annotated circuit with the longest path highlighted.]
The longest path starts at a and has a delay of 16.
5.3.3 Detecting a False Path
In this section, we will explore a simple and almost correct algorithm to determine if a path is a
false path. The simple algorithm in this section sometimes gives incorrect results if the candidate
path intersects false paths. For all of the example circuits in this section, the algorithm gives
the correct result. The purpose of presenting this almost-correct algorithm is that it is relatively
easy to understand and introduces one of the key concepts used in the complicated, correct, and
complete algorithm for finding the critical path in section 5.3.5.
5.3.3.1 Preliminaries for Detecting a False Path
The controlling value of a gate is the value such that if one of the inputs has this value, the output
can be determined independently of the other inputs.
For an AND gate, the controlling value is 0, because when one of the inputs is a 0, we know
that the output will be 0 regardless of the values of the other inputs.
The controlled output value is the value produced by the controlling input value.
Gate Controlling Value Controlled Output
AND 0 0
OR 1 1
NAND 0 1
NOR 1 0
XOR none none
Path Input, Side Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Definition path input: For a gate on a path (either a candidate critical path, or a real
critical path), the path input is the input signal that is on the path.
Definition side input: For a gate on a path (either a candidate critical path, or a real
critical path), the side inputs are the input signals that are not on the path.
The key idea behind the almost-correct algorithm is that for an edge to propagate along a path,
the side inputs to each gate on the path must have non-controlling values. The complete, correct,
and complicated algorithm generalizes this constraint to handle circuits where the side inputs are
on false paths.
Reconvergent Fanout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Definition reconvergent fanout: A gate has reconvergent fanout if paths from signals in
its fanout reconverge at another gate.
Most of the difficulties both with critical paths and with testing circuits for manufacturing faults
(Chapter 7) are caused by reconvergent fanout.
[Figure: circuit with reconvergent fanout; inputs a, b, c, d and outputs y, z.]
There are two sets of reconvergent paths in the circuit above. One set of reconvergent paths goes
from a to y and one set goes from d to z.
If a candidate path has reconvergent fanout, then the rising or falling edge on the input to the path
might cause a side input along the path to have a rising or falling edge, rather than a stable 0 or
1.
To support reconvergent fanout, we extend the rule for side inputs having non-controlling values
to say that side inputs must have either non-controlling values or have edges that stabilize in non-
controlling values.
Rules for Propagating an Edge Along a Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
These rules assume that side inputs arrive before path inputs. Section 5.3.5 relaxes this constraint.
[Figure: edge-propagation rules for NOT, AND, OR, and XOR gates, showing the required side-input values (0 or 1) for rising and falling edges on the path input.]
Question: Why do the rules not have falling edges for AND gates or rising edges for
OR gates on the side input?
Answer:
[Figure: two example gates illustrating the answer (signals a, b, c).]
For an AND gate, a falling edge on the side input will force the output to change
and prevent the path input from affecting the output. This is because the final
value of a falling edge is the controlling value for an AND gate. Similarly, for an
OR gate, the final value of a rising edge is the controlling value for the gate.
Analyzing Rules for Propagating Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The pictures below show all combinations of output edge (rising or falling) and input values (con-
stant 1, constant 0, rising edge, falling edge) for AND and OR gates. These pictures assume that
the side input arrives before the path input. The pictures that are crossed out illustrate situations
that prevent the path input from affecting the output. In these situations the inputs cause either a
constant value on the output, or the side input affects the output but the path input does not. The
pictures that are not crossed out correspond to the rules above for pushing edges through AND and
OR gates.
[Figure: all side-input/path-input combinations for AND gates (0 is controlling; several cases yield a constant-0 output) and OR gates (1 is controlling; several cases yield a constant-1 output).]
Viability Condition of a Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Definition viability condition: For a path (p) through a circuit, the viability condition
(sometimes called the viability constraint) is a Boolean expression in terms of the
input signals that defines the cases where an edge will propagate along the path.
Equivalently: the cases where a transition on the primary input to the path will excite
the path.
Based upon the rules for propagating an edge that we have seen so far, the viability condition for
a path is: every side input has a non-controlling value. As always, section 5.3.5 has the complete
viability condition.
5.3.3.2 Almost-Correct Algorithm to Detect a False Path
The rules above for propagating an edge along a candidate path assume that the values on side
inputs always arrive before the value on the path input. This is always true when the candidate
path is the longest path in the circuit. However, if the longest path is a false path, then when we are
testing subsequent candidate paths, there is the possibility that a side input will be on a false path
and the side input value will arrive later than the value from the path input.
This almost-correct algorithm assumes that values on side inputs always arrive before values on
path inputs. The correct, complex, and complete critical path algorithm in section 5.3.5 extends
the almost correct algorithm to remove this assumption.
To determine if a path through a circuit is a false path:
1. Annotate each side input along the path with its non-controlling value. These annotations
are the constraints that must be satisfied for the candidate path to be exercised.
2. Propagate the constraints backward from the side inputs of the path to the inputs of the circuit
under consideration.
3. If there is a contradiction amongst the constraints, then the candidate path is a false path.
4. If there is no contradiction, then the constraints on the inputs give the conditions under which
an edge will traverse along the candidate path from input to output.
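For small circuits, the constraint check can be approximated by brute force: instead of propagating constraints symbolically, try every primary-input vector and ask whether some vector gives every side input along the path its non-controlling value (final values, so reconvergent side inputs that stabilize at a non-controlling value are accepted). This is a sketch; the netlist encoding and the example circuit are hypothetical:

```python
from itertools import product

NON_CTRL = {"and": 1, "nand": 1, "or": 0, "nor": 0}  # XOR/NOT: none

def evaluate(gates, values, sig):
    """Final (settled) value of sig for a primary-input assignment."""
    if sig in values:
        return values[sig]
    kind, ins = gates[sig]
    vals = [evaluate(gates, values, i) for i in ins]
    if kind == "not":
        return 1 - vals[0]
    v = int(all(vals)) if kind in ("and", "nand") else int(any(vals))
    return 1 - v if kind in ("nand", "nor") else v

def is_false_path(gates, inputs, path):
    """A path is false if no input vector puts every side input of every
    gate on the path at its non-controlling value."""
    for vec in product([0, 1], repeat=len(inputs)):
        values = dict(zip(inputs, vec))
        ok = True
        for prev, gate in zip(path, path[1:]):
            kind, ins = gates[gate]
            if kind not in NON_CTRL:       # inverter, xor: no side rule
                continue
            for side in ins:
                if side != prev and \
                   evaluate(gates, values, side) != NON_CTRL[kind]:
                    ok = False
        if ok:
            return False    # this vector sensitizes the path
    return True

# Hypothetical circuit: g = AND(a, b), h = NOT(b), k = AND(g, h).
# The side inputs b (of g) and h (of k) always have opposite values,
# so the path a, g, k is false, while the path a, g alone is viable.
gates = {"g": ("and", ["a", "b"]),
         "h": ("not", ["b"]),
         "k": ("and", ["g", "h"])}
print(is_false_path(gates, ["a", "b"], ["a", "g", "k"]))   # True
print(is_false_path(gates, ["a", "b"], ["a", "g"]))        # False
```

Brute force is exponential in the number of inputs, which is exactly why the algorithm in the text propagates constraints instead; the two approaches check the same viability condition.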
5.3.3.3 Examples of Detecting False Paths
False-Path Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Determine if the longest path in the circuit below is a false path.
[Figure: the circuit from section 5.3.2, annotated with each signal's maximum delay to an output; the longest path starts at a with a delay of 16.]
Answer:
Compute constraints for side inputs to have non-controlling values:
[Figure: the circuit with side inputs annotated with their required non-controlling values; the annotations on the side inputs of g and k are contradictory.]
side input   non-controlling value   constraint
g[b]         1                       b
i[e]         0                       c
k[h]         1                       b'
Found a contradiction between g[b] needing b and k[h] needing b', therefore the
candidate path is a false path.
Analyze cause of contradiction:
[Figure: the circuit, highlighting the two side inputs driven from b.]
These side inputs will always have opposite values. Both side inputs feed the
same type of gate (AND), so it will always be the case that one of the side
inputs will be a controlling value (0).
False-Path Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Determine if the longest path through the circuit below is a critical path.
If the longest path is a critical path, nd a pair of input vectors that will exercise the
path.
[Figure: circuit with primary inputs a, b, c and signals d through h.]
Answer:
[Figure: the circuit with the side inputs annotated with their non-controlling values (1, 0, and 1).]
side input   non-controlling value   constraint
e[a]         1                       a
g[b]         0                       b'
h[f]         1                       a'+b
The complete constraint is the conjunction of the constraints: ab'(a'+b), which
reduces to false. Therefore, the candidate path is a false path.
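Reading the three constraints as a, b', and a'+b (an assumption about where the complement bars, lost in this rendering, belong), the conjunction can be checked exhaustively:

```python
from itertools import product

# Constraint: a AND (NOT b) AND ((NOT a) OR b).
# If no assignment of the primary inputs satisfies it, the path is false.
viable = [(a, b) for a, b in product([0, 1], repeat=2)
          if a and not b and ((not a) or b)]
print(viable)   # [] -> the conjunction is unsatisfiable: a false path
```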
False-Path Example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This example illustrates a candidate path that is a true path.
Question: Determine if the longest path through the circuit below is a critical path. If
the longest path is a critical path, nd a pair of input vectors that will exercise the
path.
[Figure: circuit with primary inputs a, b, c and signals d through h.]
Answer:
Find longest path; label side inputs with non-controlling values:
[Figure: the longest path, with side inputs labeled with their non-controlling values (0, 0, and 1).]
Table of side inputs, non-controlling values, and constraints on primary inputs:
side input   non-controlling value   constraint
e[a]         0                       a'
g[b]         0                       b'
h[f]         1                       a'+b'
The complete constraint is a'b'(a'+b'), which reduces to a'b'. Thus, for an edge
to propagate along the path, a must be 0 and b must be 0.
The primary input to the path (c) does not appear in the constraint, thus both
rising and falling edges will propagate along the path. If the primary input to
the path appears with a positive polarity (e.g., c) in the constraint, then only a
rising edge will propagate. Conversely, if the primary input appears negated
(e.g., c'), then only a falling edge will propagate.
Critical path c, e, g, h
Delay 14
Input vector a=0, b=0, c=rising edge
Illustration of rising edge propagating along path:
[Figure: a rising edge propagating along the path; signal values annotated at each gate.]
Illustration of falling edge propagating along path:
[Figure: a falling edge propagating along the path; signal values annotated at each gate.]
False-Path Example 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This example illustrates reconvergent fanout.
Question: Determine if the longest path through the circuit below is a critical path. If
the longest path is a critical path, nd a pair of input vectors that will exercise the
path.
[Figure: circuit with reconvergent fanout from a; inputs a, b and signals c through g.]
Answer:
[Figure: the circuit with the side inputs annotated with their non-controlling values (1 and 1).]
side input   non-controlling value   constraint
e[b]         1                       b
g[d]         1                       a
The complete constraint is ab.
The constraint includes the input to the path (a), which indicates that not all
edges will propagate along the path. The polarity of the path input indicates
the final value of the edge. In this case, the constraint of a means that we
need a rising edge.
Critical path a, c, e, f, g
Delay 12
Input vector a=rising edge, b=1
Illustration of rising edge propagating along path:
[Figure: a rising edge on a propagating along the path to g.]
If we try to propagate a falling edge along the path, the falling edge on the
side input d forces the output g to fall before the arrival of the falling edge on
the path input f. Thus, the edge does not propagate along the candidate
path.
[Figure: a falling edge on a fails to propagate; the falling edge on the side input d forces g low first.]
Patterns in False Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
After analyzing these examples, you might have begun to observe some patterns in how false paths
arise. There are several patterns in the types of reconvergent fanout that lead to false paths. For
example, if the candidate path has an OR gate and an AND gate that are both controlled by the same
signal, and the candidate path has an even number of inverters between these gates, then the candidate
path is almost certainly a false path. The reason is the same as illustrated in the first example of a
false path: the side input will always have a controlling value for either the OR gate or the AND
gate.
5.3.4 Finding the Next Candidate Path
If the longest path is a false path, we need to find the next longest path in the circuit, which will be
our next candidate critical path. If this candidate fails, we continue to find the next longest of the
remaining paths, ad infinitum.
5.3.4.1 Algorithm to Find Next Candidate Path
To find the next candidate path, we use a path table, which keeps track of the partial paths that
we have explored, their maximum potential delay, and the signals that we can follow to extend a
partial path toward the outputs. We keep the path table sorted by the maximum potential delay of
the paths. We delete a path from the table if we discover that it is a false path.
The key to the path table is how to update the potential delay of the partial paths after we discover
a false path. All partial paths that are prefixes of the false path will need to have their potential
delay values recomputed. The updated delay is found by following the unexplored signals in the
fanout of the end of the partial path.
1. Initialize path table with primary inputs, their potential delay, and fanout.
2. Sort path table by potential delay (path with greatest potential delay at bottom of table)
3. If the partial path with the maximum potential delay has just one unused fanout signal,
then extend the partial path with this signal.
Otherwise:
(a) Create a new entry in the path table for the partial path extended by the unused fanout
signal with the maximum potential delay.
(b) Delete this fanout signal from the list of unused fanout signals for the partial path.
4. Compute the constraint that the side input of the new signal does not have a controlling value,
and update the constraint table.
5. If the new constraint does not cause a contradiction,
then return to step 3.
Otherwise:
(a) Mark this partial path as false.
(b) For each partial path that is a prefix of the false path:
reduce the potential delay of the path by the difference between the potential delay
of the fanout that was followed and the unused fanout with the next greatest delay value.
(c) Return to step 2
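The path-table bookkeeping above can be sketched more compactly with a priority queue: the potential delay of a partial path is its accumulated delay plus the maximum remaining delay from its last signal (the backward annotation from section 5.3.2), so popping partial paths in order of potential delay enumerates complete paths from longest to shortest, and the first one that passes the viability check wins. The netlist encoding and names are hypothetical:

```python
import heapq

def next_true_path(gates, dests, is_false_path):
    """Enumerate complete paths in order of decreasing potential delay
    and return the first one that is not a false path.
    gates: output signal -> (list of input signals, gate delay)."""
    fanout, memo = {}, {}
    for out, (ins, _) in gates.items():
        for i in ins:
            fanout.setdefault(i, []).append(out)
    def dly(sig):                    # max remaining delay from sig
        if sig not in memo:
            memo[sig] = 0 if sig in dests else max(
                gates[o][1] + dly(o) for o in fanout[sig])
        return memo[sig]
    sources = [s for s in fanout if s not in gates]
    heap = [(-dly(s), 0, [s]) for s in sources]  # (-potential, so-far, path)
    heapq.heapify(heap)
    while heap:
        _, sofar, path = heapq.heappop(heap)
        sig = path[-1]
        if sig in dests:
            if not is_false_path(path):
                return path, sofar
            continue                 # false path: fall back to next best
        for o in fanout[sig]:
            d = sofar + gates[o][1]
            heapq.heappush(heap, (-(d + dly(o)), d, path + [o]))
    return None

gates = {"d": (["a"], 2), "e": (["b", "d"], 3), "y": (["d", "e"], 1)}
# With no false paths we get the longest path; if the checker rejects
# it, the next-longest path is returned instead.
print(next_true_path(gates, {"y"}, lambda p: False))
print(next_true_path(gates, {"y"}, lambda p: p == ["a", "d", "e", "y"]))
```

Because the backward annotation is an exact bound on the remaining delay, the heap pops paths in exact longest-first order, mirroring the "extend the partial path with the greatest potential delay" rule in the algorithm above.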
5.3.4.2 Examples of Finding Next Candidate Path
Next-Path Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Starting from the initial delay calculation and longest path, nd the next
candidate path and test if it is a false path.
[Figure: the circuit from section 5.3.2, annotated with each signal's maximum delay to an output.]
Answer:
Initial state of path table:
potential delay   unused fanout   path
10                e               c
12                h, g            b
16                d               a
Extend the path with the maximum potential delay until we find a contradiction
or reach the end of the path. Add an entry in the path table for each
intermediate path with multiple signals in the fanout.
Path table and constraint table after detecting that the longest path is a false
path:
potential delay   unused fanout   path
10                e               c
12                h, g            b
16                j, i            a, d, f, g
false                             a, d, f, g, i, k
side input   non-controlling value   constraint
g[b]         1                       b
i[e]         0                       c
k[h]         1                       b'
The longest path is a false path. Recompute the potential delay of all paths in
the path table that are prefixes of the false path.
The one path that is a prefix of the false path is (a, d, f, g). The remaining
unused fanout of this path is j, which has a potential delay on its input of 2.
The previous potential delay of g was 8, thus the potential delay of the prefix
reduces by 8 − 2 = 6, giving the path a potential delay of 16 − 6 = 10.
Path table after updating with new potential delays:
potential delay   unused fanout   path
false                             a, d, f, g, i, k
10                e               c
10                i               a, d, f, g
12                h, g            b
Extend b through g, because g has greater potential delay than the other
fanout signal (h).
potential delay   unused fanout   path
false                             a, d, f, g, i, k
10                e               c
10                i               a, d, f, g
12                h, g            b
12                i, j            b, g
side input   non-controlling value   constraint
g[a]         1                       a
From g, we will follow i, because it has greater potential delay than j.
potential delay   unused fanout   path
false                             a, d, f, g, i, k
10                e               c
10                i               a, d, f, g
12                h, g            b
12                i, j            b, g
12                                b, g, i, k
side input   non-controlling value   constraint
g[a]         1                       a
i[e]         0                       c
k[h]         1                       b'
We have reached an output without encountering a contradiction in our
constraints. The complete constraint is ab'c.
Critical path b, g, i, k
Delay 12
Input vector a=1, b=falling edge, c=1
Illustrate the propagation of a falling edge:
[Figure: a falling edge on b propagating along the path b, g, i, k; side-input values annotated.]
At k, the rising edge on the side input (h) arrives before the falling edge on
the path input (i). For a brief moment in time, both the side input and path
input are 1, which produces a glitch on k.
Next-Path Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Find the critical path in the circuit below.
[Figure: circuit with primary inputs a through e and signals f through m.]
Answer:
Find the longest path:
[Figure: the circuit annotated with each signal's maximum delay to an output; the largest annotation is 22, on input d.]
Initial state of path table:
potential delay   unused fanout   path
4                 k               e
10                j, l            a
14                i               b
20                g               c
22                f               d
Extend the path with the maximum potential delay until we find a contradiction
or reach the end of the path. Add an entry in the path table for each
intermediate path with multiple fanout signals.
potential delay   unused fanout   path
4                 k               e
10                j, l            a
14                i               b
20                g               c
22                j, k            d, f, g, h, i
false                             d, f, g, h, i, j, l
side input   non-controlling value   constraint
g[c]         1                       c
i[b]         0                       b'
j[a]         0                       a'
l[a]         1                       a
Contradiction between j[a] and l[a], therefore the path (d, f, g, h, i, j, l) is
a false path. Any path that extends this path is also false.
To find the next candidate, begin by recomputing delays along the candidate
path. The second gate in the contradiction is l. The last intermediate path
before l with unused fanout is i. Cut the candidate path at this signal. The
remaining initial part of the candidate path is (d, f, g, h, i). The only unused
fanout of this path is k.
We now calculate the new maximum potential delay of (d, f, g, h, i), taking
into account the false path that we just discovered. The delay from i along the
candidate path (j, l, m) is 10 and the maximum potential delay along the
remaining unused fanout (k) is 4. The difference is 10 − 4 = 6, and so the potential
delay of (d, f, g, h, i) is reduced to 22 − 6 = 16.
After updating the partial delay of (d, f, g, h, i), the partial path with the
maximum potential delay is c. The new critical path candidate will be c, g, h,
i, j, l, m.
Update the path table with the delay of 16 for the previous candidate path. Extend c
along the path with the maximum potential delay until we find a contradiction or
reach the end of the path. Add an entry in the path table for each intermediate
path with multiple fanout signals.
potential delay   unused fanout   path
false                             d, f, g, h, i, j, l
4                 k               e
10                j, l            a
14                i               b
16                k               d, f, g, h, i
20                k               c, g, h, i
false                             c, g, h, i, j, l
We encounter the same contradiction as with the previous candidate, and so
we have another false path. We could have detected this false path without
working through the path table if we had recognized that our current
candidate path overlaps with the section (j, l) of the previous candidate that
caused the false path.
As with the previous candidate, we reduce the potential delay of the current
candidate's path up through i by 6, giving us a potential delay of
20 − 6 = 14 for (c, g, h, i). The next candidate path is (d, f, g, h, i, k),
with a delay of 16.
potential delay   unused fanout   path
false                             d, f, g, h, i, j, l
false                             c, g, h, i, j, l
4                 k               e
10                j, l            a
14                i               b
14                k               c, g, h, i
16                k               d, f, g, h, i
We extend the path through k and compute the constraint table.
side input   non-controlling value   constraint
g[c]         1                       c
i[b]         0                       b'
k[e]         0                       e'
The complete constraint is b'ce'. There is no constraint on a, and d may be
either a rising edge or a falling edge.
Critical path d, f, g, h, i, k
Delay 16
Input vector a=0, b=0, c=1, d=rising edge, e=0
Next-Path Example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Find the critical path in the circuit below.
[Figure: circuit with primary inputs a, b, c, d and signals e through p.]
Answer:
[Figure: the circuit annotated with each signal's maximum delay to an output; the largest annotation is 16, on input c.]
Initial state of path table:
potential delay   unused fanout   path
8                 n, o            d
12                j, k            a
14                e               b
16                f               c
Extend c through f:
potential delay   unused fanout   path
8                 n, o            d
12                j, k            a
14                e               b
16                m, n            c, f, g, h, i
false                             c, f, g, h, i, n, p
side input   non-controlling value   constraint
n[d]         1                       d
p[o]         1                       d'
The first candidate is a false path. Recompute the potential delay of
(c, f, g, h, i), which reduces it from 16 to 12.
potential delay   unused fanout   path
false                             c, f, g, h, i, n, p
8                 n, o            d
12                j, k            a
12                m               c, f, g, h, i
14                e               b
Extend b through e:
potential delay   unused fanout   path
false                             c, f, g, h, i, n, p
8                 n, o            d
12                j, k            a
12                m               c, f, g, h, i
false                             b, e, k, l
side input   non-controlling value   constraint
k[a]         1                       a
l[j]         1                       a'
The second candidate is a false path. There is no unused fanout signal from
l for the path (b, e, k, l), so this partial path is a false path and there is no new
delay information to compute.
There are two paths with a potential delay of 12. Choose (c, f, g, h, i),
because the end of the path is closer to an output, so there will be less work
to do in analyzing the path.
potential delay   unused fanout   path
false                             c, f, g, h, i, n, p
false                             b, e, k, l
8                 n, o            d
12                j, k            a
12                                c, f, g, h, i, m
side input   non-controlling value   constraint
m[l]         0                       (a'(ab'))' = true
The constraint is identically true, because a'·a = 0 means the side input l is
always 0 (non-controlling).
Critical path c,f,g,h,i,m
Delay 12
Input vector a=0, b=1, c=rising edge, d=0
5.3.5 Correct Algorithm to Find Critical Path
In this section, we remove the assumption that values on side inputs always arrive earlier than the
value on the path input. We now deal with late-arriving side inputs, or simply late side inputs.
The presentation of late side inputs is as follows:
Section 5.3.5.1: rules for how late side inputs can allow path inputs to exercise gates
Section 5.3.5.2: the idea of monotone speedup, which underlies some of the rules
Section 5.3.5.3: one of the potentially confusing situations, in detail
Section 5.3.5.4: the complete, correct, and complex algorithm
Section 5.3.5.5: examples
5.3.5.1 Rules for Late Side Inputs
For each gate, there are eight situations: the side input is controlling or non-controlling, the path
input is controlling or non-controlling, and the side input arrives early or arrives late.
[Figure: the eight combinations of early/late side input, controlling/non-controlling side input, and controlling/non-controlling path input for AND and OR gates, labeled with their outcomes: path input propagates, path input causes glitch, side input propagates, side input causes glitch, neither input propagates, and monotone speedup. The situations are enumerated in the text below.]
Late side inputs give us three more situations for each of AND and OR gates where the path input
will or might excite the gate. In the two cases labeled monotone speedup, the path input does not
excite the gate with the current timing, but if our timing estimates for the side input are too slow,
or the timing of the side input speeds up due to voltage or temperature variations, then the late side
input might become an early side input.
The five situations where the path input excites the gate are:
side is early
side=non-ctrl, path=non-ctrl The path input is the later of the two inputs to transition to
a non-controlling value, so it is the one that causes the output to transition.
side=non-ctrl, path=ctrl The side input transitions to a non-controlling value while the
path input is a non-controlling value; this causes the output to transition to a
non-controlled value. The path input then transitions to a controlling value, causing a
glitch on the output as it transitions to a controlled value.
side is late
side=non-ctrl, path=non-ctrl If the side input arrives earlier than expected, then we will
have an early arriving side input with a non-controlling value.
side=non-ctrl, path=ctrl If the side input arrives earlier than expected, then we will have
an early arriving side input with a non-controlling value.
side=ctrl, path=ctrl The path input transitions to a controlling value before the side
input; so, it is the input that causes the output to transition.
The three situations where the path input does not excite the gate are:
side is early
side=ctrl, path=ctrl The side input transitions to a controlling value before the path input
transitions to a controlling value. The edge on the path input does not propagate to the
output.
side=ctrl, path=non-ctrl It is always the case that at least one of the inputs is a
controlling value, so the output of the gate is a constant controlled value.
side is late
side=ctrl, path=non-ctrl The path input transitions to a non-controlling value while the
side input is still non-controlling. This causes the output to transition to a
non-controlled value. The side input then transitions to a controlling value, which
causes the glitch as the output transitions to a controlled value. The second edge of the
glitch is caused by the side input, so the side input determines the timing of the gate.
Combining the five situations where the path input excites the gate gives us our complete
and correct rule: a path input excites the gate if the side input is non-controlling, or the side input
arrives late and the path input is controlling.
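The complete rule is small enough to state directly as a predicate (a sketch; the gate-kind names and value encoding are our own):

```python
CTRL = {"and": 0, "nand": 0, "or": 1, "nor": 1}   # controlling input values

def excites(kind, side_value, side_late, path_final):
    """Complete rule: the path input excites the gate if the side input's
    final value is non-controlling, or the side input arrives late and
    the path input transitions to the controlling value."""
    ctrl = CTRL[kind]
    return side_value != ctrl or (side_late and path_final == ctrl)

# AND gate (controlling value 0):
assert excites("and", side_value=1, side_late=False, path_final=0)      # side non-ctrl
assert not excites("and", side_value=0, side_late=False, path_final=0)  # early ctrl side
assert excites("and", side_value=0, side_late=True, path_final=0)       # late side, ctrl path
assert not excites("and", side_value=0, side_late=True, path_final=1)   # side causes glitch
```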
Section 5.3.5.2 discusses monotone speedup in more detail, then section 5.3.5.3 demonstrates that
a late-arriving side input that causes a glitch cannot result in a true path. After these two tangents,
we finally present the correct, complete, and complex algorithm for critical path analysis.
5.3.5.2 Monotone Speedup
When we have a late side input with a non-controlling value, the path input does not excite the
gate, but the rules state that we should consider this to be a true path. The reason that we report
this as a true path, even though the path input does not excite the gate, is due to the idea of
monotone speedup.
Definition monotonic: A function f is monotonic if increasing its input causes the
output to increase or remain the same. Mathematically: x < y ⇒ f(x) ≤ f(y).
Definition monotonous: A lecture is monotonous if increasing the length of the
lecture increases the number of people who are asleep.
Definition monotone speedup: The maximum clock speed of a circuit should be
monotonic with respect to the speed of any gate or sub-circuit. That is, if we increase
the speed of part of the circuit, we should either increase the clock speed of the
circuit or leave it unchanged.
Definition monotonous speedup: A lecture has monotonous speedup if increasing the
pace of the lecture increases the number of people who are awake.
In the monotone speedup situations, if we were to report the candidate path as false and the side
input arrives sooner than expected, the path might generate an edge. Thus, a path that we initially
thought was a false path becomes a real path. Speeding up a part of the circuit turned a false path
into a real path, and thereby actually reduced the maximum clock speed of the circuit.
Monotone speedup is desirable, because if we claim that a circuit has a certain maximum delay
and then speed up some of the gates in the circuit (because of resizing gates, process variations,
or temperature or voltage fluctuations), we would be quite distraught to discover that we have in fact
increased the maximum delay.
We can see the rationale behind the monotone speedup rules by observing that if we have a late
side input that transitions to a non-controlling value, and the circuitry that drives the late side
input speeds up, the late side input might become an early side input. For each of the two
monotone speedup situations, the corresponding early side input situation has a true path.
5.3.5.3 Analysis of Side-Input-Causes-Glitch Situation
In the following paragraphs we analyze the rule for a late side input where the side input is
controlling and the path input is non-controlling. The excitation rules say that in this situation the
path input cannot excite gate. We might be tempted to think that we could construct a circuit
where the rst edge of the glitch (which is caused by the path input) propagates and the second
edge (which is caused by the late side input) does not propagate. Here we demonstrate why we
cannot create such a circuit. Readers who are willing to accept that the Earth is round without
personally circumnavigating the globe may wish to skip to section 5.3.5.4.
In the picture below, c is the gate that produces a glitching output because of a late-arriving side
input. We know that (a, c) is part of a false path and will demonstrate that in the current situation,
(b, c) must also be part of a false path.
[Figure: gate c with inputs a and b.]
For (a, c) to be a part of a false path, there must be a gate that appears later in the circuit that
prevents the second edge of the glitch from propagating. In the figure below, this later gate is f,
with e being the path input (from c) and d being the side input.
d
f
e
d
f
e
d
f
e
very early side ctrl middling early side ctrl late side ctrl
d
f
e
late side non-ctrl
d
f
e
d
f
e
very early side non-ctrl middling early side non-ctrl
366 CHAPTER 5. TIMING ANALYSIS
For the first edge on e to propagate, the side input (d) must have a non-controlling value at the
time of the first edge. To prevent the second edge of the glitch from propagating from e to f, d
must be a controlling value. That is, d must transition from a non-controlling value to a
controlling value in the middle of the glitch on e. This corresponds to the "middling early side
ctrl" situation in the figure. From the perspective of the first edge of the glitch, this is identical to
the situation with the first gate (c), in that a late-arriving side input transitions to a controlling
value.
In this case of "middling early side ctrl", the edge on d arrives later than the first edge on e,
which means that (d, f) is a slower path than (b, c, ..., e, f), which means that (d, f) is part of a
false path. Thus, there is a gate later in the circuit that prevents the second edge of the glitch on f
from propagating. We wrap up the argument that the situation illustrated with a, b, c cannot lead
to a critical path through (b, c) in two ways: intuitively and mathematically.
Intuitively, for (b, c) to be part of a critical path, c must be followed by f, which itself must be
followed by another gate with a middling-early side input. All of the other cases that prevent the
second edge of the glitch from propagating will prevent both edges of the glitch from
propagating. This other gate with the middling-early side input produces a glitch and so must
itself be followed by yet another gate with a middling side input. This process continues ad
infinitum: we cannot construct a finite circuit that allows the first edge of the glitch on c to
propagate and prevents the second edge of the glitch from propagating.
Mathematically, we construct a simple inductive proof based on the number of later gates in the
candidate path. In the base case, f is the last gate in the path, and so it must be the gate that
propagates the first edge of the glitch and does not generate a glitch. There is no situation in
which this happens, thus the last gate in the path cannot have a middling-early input. In the
inductive case we assume that there are n gates later in the path and none of them have
middling-early side inputs. We can then prove that the gate just prior to the n-th gate cannot have a
middling-early side input, because for it to have a middling-early side input, one of the n later
gates would need to have a middling-early side input that would allow the first edge of the glitch
to propagate and prevent the second edge of the glitch from propagating. From the inductive
hypothesis, we know that none of the n gates have a middling-early input, and so we have
completed the proof by contradiction.
5.3.5.4 Complete Algorithm
The possibility of late-arriving side inputs caused us to modify our rules for when a path input
will excite a gate. The complete rule (section 5.3.5.1) is: the side-input is non-controlling, or the
side-input arrives late and the path input is controlling. Because we explore candidate critical
paths beginning with the slowest and working through faster and faster paths, a late-arriving side
input must be part of a previously discovered false path.
In the previous sections, when we did not have late-arriving side inputs, we could exercise the
critical path with a change on just one input signal. With late-arriving side inputs, both the
primary input to the critical path and the late-arriving side inputs might need to change.
When using the late-arriving side input portion of our excitation rule, we must ensure that the side
input does in fact arrive later than the path input. If we did not, we would fall into the situation
where both inputs are controlling and the side input arrives early. In that situation, the side input
excites the gate.
For the side input to arrive late, the late path to the side input must be viable. Stated more
precisely, the prefix of the previously discovered false path that ends at the side input must be
viable. The entire previously discovered false path is clearly not viable; it is only the prefix up to
the side input that must be viable. The viability condition for the prefix uses the same rule as we
use for normal path analysis: for every gate along the prefix, the side-input is non-controlling or
the prefix's side input arrives late and the prefix's path input is controlling.
The complete, correct, and complex algorithm is:
- If we find a contradiction on the path, check for side inputs that are on previously discovered
  false paths.
- If a gate and its side input are on a previously discovered false path, then the side input defines
  a prefix of a false path that is a late-arriving side input.
- For each late-arriving prefix, compute its viability (the conditions under which an edge will
  propagate along the prefix to the late side input).
- To the row of the late-arriving side input in the constraint table, add as a disjunction the
  constraint that: the path input has a controlling value and at least one of the prefixes is viable.
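As a purely illustrative sketch, the excitation rule and the constraint-table update described above can be written in a few lines of Python. The function names, the table representation, and the constraint strings ("c1", "c2") are all ours, not from the notes or from McGeer and Brayton's paper:

```python
# Sketch of the complete excitation rule and the constraint-table update
# for late-arriving side inputs. All names and encodings are illustrative.

def side_input_allows_excitation(side_is_controlling, side_arrives_late,
                                 path_is_controlling):
    """The path input can excite the gate when the side input is
    non-controlling, or when the side input arrives late and the
    path input is controlling."""
    return (not side_is_controlling) or (side_arrives_late and path_is_controlling)

def add_late_side_disjunct(table, side_input, path_ctrl, prefix_viabilities):
    """Add, as a disjunction to the side input's row of the constraint
    table, the constraint that the path input has a controlling value
    and at least one late-arriving prefix is viable."""
    late = "(" + path_ctrl + " & (" + " | ".join(prefix_viabilities) + "))"
    table[side_input] = table[side_input] + " | " + late

# Hypothetical example: the row for side input f[e] starts with some
# constraint "c1"; the path input is controlling under "c2", and the
# single late-arriving prefix is viable under "True".
table = {"f[e]": "c1"}
add_late_side_disjunct(table, "f[e]", "c2", ["True"])
```

The constraints are kept as strings here only to make the disjunction visible; a real implementation would use a BDD or similar boolean representation.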
5.3.5.5 Complete Examples
Complete Example 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Find the critical path in the circuit below.
[Figure: circuit with primary input a, gates b, c, d, e, f, and output gate g]
Answer:
[Figure: the circuit annotated with arrival times at each node; the slowest apparent path,
a, b, d, e, f, g, reaches the output g at time 14, and the path a, c, f, g at time 10]
potential delay   unused fanout   path
14                g, b, c         a
false                             a,b,d,e,f,g

side input   non-controlling value   constraint
f[c]         1                       a
g[a]         1                       a
First false path, pursue next candidate.
potential delay   unused fanout   path
false                             a,b,d,e,f,g
10                g, c            a
10                                a,c,f,g

side input   non-controlling value   constraint
f[e]         1                       a
g[a]         1                       a
At first, this path appears to be false, but the side input f[e] is on the prefix
of the false path a,b,d,e,f,g. Thus, f[e] is a late-arriving side input.
The candidate path will be a true path if the side input arrives late and the
path input is a controlling value. The viability condition for the path a,b,d,e is
true. The constraint for the path input (c) to have a controlling value for f is a.
Together, the viability constraint of true and the controlling-value constraint of
a give us a late-side constraint of a.
Updating the constraint table with the late-arriving side input constraint gives
us:

side input   non-controlling value   constraint
f[e]         1                       a + ā = true
g[a]         1                       a

The constraint reduces to a. A rising edge will exercise the path.

Critical path: (a, c, f, g)
Delay: 10
Input vector: a = rising edge
Illustration of rising edge exercising the critical path:
[Figure: waveforms showing the rising edge on a at time 0 propagating along a, c, f, g,
with the output edge on g at time 10]
Complete Example 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Find the critical path in the circuit below.
[Figure: circuit with primary inputs a, b, c and gates d, e, f, g, h, i, j]
Answer:
Find longest path:
[Figure: the circuit annotated with arrival times at each node; the longest path,
b, d, e, g, h, i, j, has a potential delay of 18]
Explore longest path:

potential delay   unused fanout   path
8                 f               a
12                h               c
18                f, g            b,d,e
18                h, i            b,d,e,g
false                             b,d,e,g,h,i,j

side input   non-controlling value   constraint
h[c]         0                       c
i[g]         0                       b
j[f]         0                       ab
Contradiction.
[Figure: the circuit annotated with the contradictory values required by the side-input
constraints]
First false path, find next candidate.
Changes in potential delays:

Signal / path          old   new
g on (b, d, e, g)      12    8
(b, d, e, g)           18    14
g[e] on (b, d, e)      14    10
e on (b, d, e)         14    10
(b, d, e)              18    14
potential delay   unused fanout   path
false                             b,d,e,g,h,i,j
8                 f               a
12                h               c
14                f, g            b,d,e
14                                b,d,e,g,i,j
[Figure: the circuit re-annotated with arrival times after removing the false path; the
candidate path b, d, e, g, i, j has a potential delay of 14]
side input   non-controlling value   constraint
h[c]         0                       c
i[h]         0                       cb
j[f]         0                       ab
Initially, we found a contradiction, but (b, d, e, g, h) is a prefix of a false path, and
i[h] is a side input to the candidate path. We have a late side input.
Note that at the time that we passed through i, we could not yet determine
that we would need to use i[h] as a late side input. The lesson is that when
a contradiction is discovered, we must look back along the entire candidate
path covered so far to see if we have any late side inputs.
Our late-arriving constraint for i[h] is:
- late side path (b, d, e, g, h) is viable: c.
- path input (i[g]) has a controlling value of 1: b.
Combining these constraints together gives us bc.
Adding the constraint of the late side input to the condition table gives us:

side input   non-controlling value   constraint
h[c]         0                       c
i[h]         0                       b̄c + bc = c
j[f]         0                       ab
The constraints reduce to abc.

Critical path: (b, d, e, g, i, j)
Delay: 14
Input vector: a = 0, b = falling edge, c = 0
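The reduction in the i[h] row of the constraint table is an instance of the boolean identity b̄·c + b·c = c (the two disjuncts differ only in the complement of b). The identity can be checked exhaustively; this tiny sketch is ours, not part of the notes:

```python
# Exhaustive check of the boolean identity (NOT b AND c) OR (b AND c) == c,
# the reduction used in the i[h] row of the constraint table.
for b in (0, 1):
    for c in (0, 1):
        assert (((1 - b) & c) | (b & c)) == c
```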
Illustration of falling edge exercising the critical path:
[Figure: waveforms showing the falling edge on b propagating along b, d, e, g, i, j, with
the output edge on j at time 14]
Complete Example 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This example illustrates the benefits of the principle of monotone speedup when analyzing critical
paths.
[Figure: circuit with primary input a and gates b, c, d, e, f]
Critical-path analysis says that the critical path is (a, c, e, f), with a late side input of e[d] and a
total delay of 10. The required excitation is a rising edge on a. However, with the given delays,
this excitation does not produce an edge on the output.
[Figure: waveforms for a rising edge on a; no edge appears on the output f]
For a more complete analysis of the behaviour, we also try a falling edge. The falling edge
exercises the path (a, f) with a delay of 4.
[Figure: waveforms for a falling edge on a; the path (a, f) produces an output edge at time 4]
Monotone speedup says that if we reduce the delay of any gate, we must not increase the delay of
the overall circuit. We reduce the delays of b and d from 2 to 0.5 and produce an edge at time 10
via the path (a, c, e, f).
[Figure: waveforms with the delays of b and d reduced to 0.5; the path (a, c, e, f) now
produces an output edge at time 10]
The critical path analysis said that the critical path was (a, c, e, f) with a delay of 10. With the
original circuit, the slowest path appeared to have a delay of 4. But, by reducing the delays of two
gates, we were able to produce an edge with a delay of 10. Thus, the critical path algorithm did
indeed satisfy the principle of monotone speedup.
Complete Example 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This example illustrates that we sometimes need to allow edges on the inputs to late side paths.
Question: Find the critical path in the circuit below.
[Figure: circuit with primary inputs a, b, c, d and gates e, f, g, h, i, j, k]
Answer:
The purpose of this example is to illustrate a situation where we need the
primary input of a late-side path to toggle. To focus on the behaviour of the
circuit, we show pictures of different situations and do not include the path
and constraint tables.
Longest path in the circuit, showing a contradiction between e[b] and j[h].
[Figure: the longest path in the circuit, annotated with the contradictory values required
at e[b] and j[h]]
Second longest path (b, f, g, h, i, j, k), using only early side inputs, showing a
contradiction between k[e] and i[e].
[Figure: the second longest path annotated with the contradictory values required at k[e]
and i[e]]
Second longest path using late side input i[e], which has a controlling value
of 1 (rising edge) on i[h]. However, we neglect to put a rising edge on a.
The late-side path is not exercised and our candidate path is also not
exercised.
[Figure: waveforms with no edge on a; the late-side path is not exercised, and neither is
the candidate path]
We now put a rising edge on a, which causes our late side input (i[e]) to be
a non-controlling value when our path input (i[h]) arrives.
[Figure: waveforms with a rising edge on a; the late side input i[e] is non-controlling
when the path input i[h] arrives, and the output k has an edge at time 16]
In looking at the behaviour of i, we might be concerned about the precise
timing of the glitch on e and the rising edge on h. The figure below shows
normal, slow, and fast timing of e. With slow timing, the first edge of the glitch on
e arrives after the rising edge on h. The timing of the second edge of the
glitch remains unchanged. The value of i remains constant, which could lead
us to believe (incorrectly!) that our critical path analysis needs to take into
account the first edge of the glitch. However, this is in fact an illustration of
monotone speedup. The fast timing scenario moves the glitch earlier, such that
the edge on h does in fact determine the timing of the circuit, in that h
produces the last edge on i. In summary, with the glitch on e and the rising
edge on h, either h causes the last edge on i or there is no edge on i.
[Figure: three panels of waveforms for e, h, and i, showing normal timing, slow timing on
e, and fast timing on e]
Complete Example 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
This example demonstrates that a late side path must be viable to be helpful in making a true path.
Question: Find the critical path in the circuit below.
[Figure: circuit with primary inputs a, b, c, d and gates e, f, g, h, i, j, k]
Answer:
Find that the two longest paths are false paths, because of contradiction
between g[d] and i[c].
[Figure: the two longest paths annotated with the contradictory values required at g[d]
and i[c]]
Try third longest path (d, f, h, j, k) using early side inputs. Find a contradiction
between k[i] and j[c].
[Figure: the path (d, f, h, j, k) annotated with the contradictory values required at k[i]
and j[c]]
Try using late side paths (a, e, g, i, k) or (b, e, g, i, k). Find that neither path is
viable by itself, because of the contradiction between g[d] and i[c]. Also,
neither path is viable in conjunction with the candidate path, because of the
contradiction between i[c] on the late side path and j[c] on the candidate path.
Either one of these contradictions by itself is sufficient to prevent the late side
path from helping to make the candidate path a true path.
[Figure: the late side paths annotated with the contradictory values]
5.3.6 Further Extensions to Critical Path Analysis
McGeer and Brayton's paper includes extensions to the critical path algorithm presented here
that we will not cover:
- gates with more than two inputs
- finding all input values that will exercise the critical path
- multiple paths with the same delay to the same gate
5.3.7 Increasing the Accuracy of Critical Path Analysis
When doing critical path calculations, it is often useful to strike a balance between accuracy and
effort. In the examples so far, we assumed that all signals had the same wire and load delays. This
assumption simplifies calculations, but reduces accuracy. Section 5.4 discusses how the analog
world affects timing analysis.
5.4 Elmore Timing Model
There are many different models used to describe the timing of circuits. In the section on critical
paths, we used a timing model that was based on the size of the gate. The timing model ignored
interconnect delays and treated all gates as if they had the same fanout. For example, the delay
through an AND gate was 4, independent of how many gates were in its immediate fanout.
In this section we discuss two timing models. First, we discuss the detailed analog timing model,
which reflects quite accurately the actual voltages on different nodes. The SPICE simulation
program uses very detailed analog models of transistors (dozens of parameters to describe a
single transistor). Then, we describe the Elmore delay model, which achieves greater simplicity
than the analog model, but at a loss of accuracy.
5.4.1 RC-Networks for Timing Analysis
[Figure: P- and N-transistors at four levels of abstraction: transistor level (gate, source,
drain), mask level (poly, diffusion, contacts), cross-section of the fabricated transistor,
and switch level]
Different Levels of Abstraction for Inverter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: an inverter at the gate level, the transistor level (P- and N-transistors between
VDD and GND), and the mask level (poly, n-diff, p-diff, metal, contacts)]
From the electrical characteristics of fabricated transistors, VLSI and device engineers derive
models of how transistors behave based on mask-level descriptions. For our purposes, we will use
the very simple resistor-capacitor model shown below.
Each of the P- and N-transistor models contains a resistor (pullup for the P-transistor and
pulldown for the N-transistor) and a parasitic capacitor.
When we combine a P-transistor and an N-transistor to create an inverter, we combine the
capacitors into a single parasitic capacitor that is the sum of the two individual capacitors.
[Figure: RC-network models of the P- and N-transistors (pullup resistor Rpu, pulldown
resistor Rpd, and parasitic capacitor Cp), and the resulting RC network for timing
analysis of an inverter driving a load capacitance CL]
- Contacts (vias) have resistance (RV).
- Metal areas (wires) have resistance (RW) and capacitance (CW).
  - The resistance is dependent upon the geometry of the wire.
  - The capacitance is dependent upon the geometry of the wire and the other wires adjacent to
    it.
- For most circuits, the via resistance is much greater than the wire resistance (RV >> RW).

To reduce area, modern wires tend to have tall and narrow cross sections. When wires are packed
close together (e.g. a signal that is an array or vector), the wires act like capacitors.
A Pair of Inverters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: a pair of inverters (a drives b, b drives c) at the gate level, the transistor
level, and the mask level]
A Pair of Inverters (Cont'd) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

[Figure: mask-level layout of the inverter pair, and the corresponding RC network for
timing analysis, with Rpu, Rpd, Cp, the wire resistance RW and capacitance CW, the via
resistance RV, and the load capacitances CL]
To analyze the delay from one inverter to the next, we analyze how long it takes the capacitive
load of the second (destination) inverter to charge up from ground to VDD, or to discharge from
VDD to ground. In doing this analysis, the gate side of the driving inverter is irrelevant and can be
removed (trimmed). Similarly, the pullup resistor, pulldown resistor, and parasitic capacitance of
the destination inverter can also be removed.
RC-Network for Timing Analysis (trimmed)
[Figure: the trimmed RC network, from the driving inverter's Rpu, Rpd, and Cp, through RV,
RW, and CW, to the destination load capacitance CL]
A Circuit with Fanout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
We will look at one more example of inverters and their RC-network before beginning the timing
analysis of these networks.
[Figure: an inverter a driving two other inverters (outputs c and d) through wire b, shown
at the gate level, as a physical layout, at the transistor level, and at the mask level]
[Figure: the RC network for the fanout circuit (resistances RV, RW1, RW2, RW3; capacitances
Cp, CW1, CW2, CW3, plus a load capacitance CL for each destination inverter), followed by
the trimmed RC network used for timing analysis]
We will use this circuit as our primary example for the analog and Elmore timing models, so we
draw a simplified version of the trimmed RC-network before proceeding.
RC-Network for Timing Analysis (cleaned up)
[Figure: the simplified trimmed RC network, with resistances Rpu/Rpd, RW1, RW2, and RV,
and capacitances Cp, CW1, CW2, and the two loads CL]
5.4.2 Derivation of Analog Timing Model
The primary purpose of our timing model is to provide a mechanism to calculate the approximate
delay of a circuit; for example, to say that a gate has a delay of 100 ps. The actual gate behaviour
is a complicated function of the input signal behaviour.
The waveforms below are all possible behaviours of the same circuit. From these various
waveforms, it would be very difficult to claim that the circuit has a specific delay value.
[Figure: input and output voltage waveforms over time for a slow input and for a fast input]
Steps Toward Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
We begin with two simplifications as steps toward calculating a single delay value for a circuit.
1. Look at the circuit's response to a step-function input.
2. Measure the delay to go from GND to 65% of VDD and from VDD to 35% of VDD.
These values of 65% VDD and 35% VDD are trip points.
Definition Trip Points: A high or "1" trip point is the voltage level where an upwards
transition means the signal represents a 1.
A low or "0" trip point is the voltage level where a downwards transition means the
signal represents a 0.
In the figure below the gray line represents the actual voltage on a signal. The black line is the
digital discretization of the analog signal.

[Figure: the analog voltage (gray) and its digital discretization (black) for signals a and b]
Node Numbering, Initial Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
To motivate our derivation of the analog timing model, we will use the inverter that fans out to
two other inverters as our example circuit.
- The source (VDD in our case) and each capacitor is a node. We number the nodes, capacitors,
  and resistors. Resistors are numbered according to the capacitor to their right. Multiple
  resistors in series without an intervening capacitor are lumped into a single resistor.
- All nodes except the source start at GND.
- We calculate the voltage at a node when we turn on the P-transistor (connect to VDD).
The process for analyzing a transition from VDD to GND on a node is the dual of the process just
described. The source node is GND, all other nodes start at VDD, and we calculate the voltage
when we turn on the N-transistor (connect it to GND).
[Figure: the example RC network with numbered nodes: node 0 is the source; R1 leads to
node 1 (Cp); R2 leads to node 2 (CW1); from node 2, R3 leads to node 3 (CW2) and R4
continues to node 4 (CL); R5 branches from node 2 to node 5 (the other CL)]
Define: Path and Downstream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
We still have a few more preliminaries to get through. To discuss the structure of a network, we
introduce two terms: path and downstream.
Definition path: The path from the source node to a node i is the set of all resistors
between the source and i. Example: path(3) = {R1, R2, R3}

Definition down: The set of capacitors downstream from a node is the set of all
capacitors where current would flow through the node to charge the capacitor. You
can think of this as the set of capacitors that are between the node and ground.
Example: down(2) = {C2, C3, C4, C5}. Example: down(3) = {C3, C4}
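These two definitions can be made concrete with a small Python sketch over the example network. The encoding below (a parent map where node 0 is the source, R1 leads to node 1, R2 to node 2, R3 to node 3, R4 to node 4, and R5 branches from node 2 to node 5) is our reading of the node-numbering figure:

```python
# Tree structure of the example RC network: parent[i] is the node on the
# source side of resistor R_i (resistors are numbered by the node to their right).
parent = {1: 0, 2: 1, 3: 2, 4: 3, 5: 2}

def path(i):
    """Set of resistors (named by node index) between the source and node i."""
    resistors = set()
    while i != 0:
        resistors.add(i)
        i = parent[i]
    return resistors

def down(i):
    """Capacitors downstream of node i: those charged through node i."""
    return {k for k in parent if i in path(k)}

def elmore_resistance(i, k):
    """Resistors on the path to node k that are also on the path to node i."""
    return path(i) & path(k)
```

With this encoding, path(3) = {R1, R2, R3}, down(2) = {C2, C3, C4, C5}, and down(3) = {C3, C4}, matching the examples above.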
5.4.2.1 Example Derivation: Equation for Voltage at Node 3
As a concrete example of deriving the analog timing model, we derive the equation for the voltage
at Node 3 in our example circuit. After this concrete example, we do the general derivation.
V_3(t) = V_0(t) − (voltage drop from Node 0 to Node 3)

The voltage drop is the sum of the voltage drops across the resistors on the path from Node 0 to
Node 3:

    V_3(t) = V_0(t) − Σ_{r ∈ path(3)} R_r · I_r(t)
           = V_0(t) − ( R_1·I_1(t) + R_2·I_2(t) + R_3·I_3(t) )

The current through a resistor is the sum of the currents through all of the downstream
capacitors:

    I_r(t) = Σ_{c ∈ down(r)} I_c(t)

    I_1(t) = I_c1(t) + I_c2(t) + I_c3(t) + I_c4(t) + I_c5(t)
    I_2(t) = I_c2(t) + I_c3(t) + I_c4(t) + I_c5(t)
    I_3(t) = I_c3(t) + I_c4(t)

Substitute I_r into the equation for V_3:

    V_3(t) = V_0(t) − [ R_1·( I_c1 + I_c2 + I_c3 + I_c4 + I_c5 )
                      + R_2·( I_c2 + I_c3 + I_c4 + I_c5 )
                      + R_3·( I_c3 + I_c4 ) ]

Use associativity to group terms by currents:

    V_3(t) = V_0(t) − [ I_c1·( R_1 )
                      + I_c2·( R_1 + R_2 )
                      + I_c3·( R_1 + R_2 + R_3 )
                      + I_c4·( R_1 + R_2 + R_3 )
                      + I_c5·( R_1 + R_2 ) ]
Current through a capacitor:

    I_c(t) = C_c · dV_c(t)/dt

Substitute I_c into the equation for V_3:

    V_3(t) = V_0(t) − [ ( R_1 )·C_c1·dV_c1(t)/dt
                      + ( R_1 + R_2 )·C_c2·dV_c2(t)/dt
                      + ( R_1 + R_2 + R_3 )·C_c3·dV_c3(t)/dt
                      + ( R_1 + R_2 + R_3 )·C_c4·dV_c4(t)/dt
                      + ( R_1 + R_2 )·C_c5·dV_c5(t)/dt ]

In each of the resistance-capacitance terms (e.g., (R_1 + R_2)·C_c2), the resistors are the set of
resistors on the path to the capacitor that are also on the path to Node 3.
We capture this observation by defining the Elmore resistance R_{i,k} for a pair of nodes i and k
to be the sum of the resistors on the path to Node i that are also on the path to Node k:

    R_{i,k} = Σ_{r ∈ (path(i) ∩ path(k))} R_r

    R_{3,1} = R_1
    R_{3,2} = R_1 + R_2
    R_{3,3} = R_1 + R_2 + R_3
    R_{3,4} = R_1 + R_2 + R_3
    R_{3,5} = R_1 + R_2

Substitute R_{i,k} into V_3:

    V_3(t) = V_0(t) − [ R_{3,1}·C_c1·dV_c1(t)/dt
                      + R_{3,2}·C_c2·dV_c2(t)/dt
                      + R_{3,3}·C_c3·dV_c3(t)/dt
                      + R_{3,4}·C_c4·dV_c4(t)/dt
                      + R_{3,5}·C_c5·dV_c5(t)/dt ]
We are left with a system of dependent equations, in that V_3 is dependent upon all of the voltages
in the circuit. In the general derivation that follows next, we repeat the steps we just did, and then
show how the Elmore delay is an approximation of this system of dependent differential
equations.
5.4.2.2 General Derivation
We derive the equation for the voltage at Node i as a function of the voltage at Node 0.
V_i(t) = V_0(t) − (voltage drop from Node 0 to Node i)

The voltage drop is the sum of the voltage drops across the resistors on the path from Node 0 to
Node i:

    V_i(t) = V_0(t) − Σ_{r ∈ path(i)} R_r · I_r(t)

The current through a resistor is the sum of the currents through all of the downstream
capacitors:

    I_r(t) = Σ_{c ∈ down(r)} I_c(t)

Substitute I_r into the equation for V_i:

    V_i(t) = V_0(t) − Σ_{r ∈ path(i)} [ R_r · Σ_{c ∈ down(r)} I_c(t) ]

Use associativity to push R_r into the summation over c:

    V_i(t) = V_0(t) − Σ_{r ∈ path(i)} Σ_{c ∈ down(r)} R_r · I_c(t)

Current through a capacitor:

    I_c(t) = C_c · dV_c(t)/dt

Substitute I_c into the equation for V_i:

    V_i(t) = V_0(t) − Σ_{r ∈ path(i)} Σ_{c ∈ down(r)} R_r · C_c · dV_c(t)/dt

A little bit of handwaving to prepare for the Elmore resistance (reorder the summation to run
over nodes):

    V_i(t) = V_0(t) − Σ_{k ∈ Nodes} [ Σ_{r ∈ (path(i) ∩ path(k))} R_r ] · C_k · dV_k(t)/dt

Define the Elmore resistance R_{i,k}:

    R_{i,k} = Σ_{r ∈ (path(i) ∩ path(k))} R_r

Substitute R_{i,k} into V_i:

    V_i(t) = V_0(t) − Σ_{k ∈ Nodes} R_{i,k} · C_k · dV_k(t)/dt
The final equation above is an exact description of the behaviour of the RC-network model of a
circuit. More accurate models would result in more complicated equations, but even this equation
is more complicated than we want for calculating a simple number for the delay through a circuit.
The equation is actually a system of dependent equations, in that each voltage V_i is dependent
upon all of the capacitor voltages in the circuit. Spice and other analog simulators use numerical
methods to calculate the behaviour of these systems. Elmore's contribution was to find a simple
approximation of the behaviour of such systems.
5.4.3 Elmore Timing Model
- Assume that V_0(t) is a step function from 0 to 1 at time 0.
- Derive upper and lower bounds for V_i(t).
- Find RC time constants for the upper and lower bounds.
- The Elmore delay is guaranteed to be between the upper and lower bounds.

[Figure: the RC-network response, the Elmore model, and the upper and lower bounds,
annotated with the times T_Di − T_Ri, T_P − T_Ri, T_Ri, T_Di, and T_P]
Equations for Curves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Upper bound:
    1 + (t − T_Di)/T_P                            for 0 ≤ t ≤ T_Di − T_Ri
    1 − (T_Ri/T_P) · e^((T_Di − T_Ri − t)/T_Ri)   for t ≥ T_Di − T_Ri
Elmore model:
    1 − e^(−t/T_Di)                               for all t ≥ 0
Lower bound:
    0                                             for 0 ≤ t ≤ T_Di − T_Ri
    1 − T_Di/(t + T_Ri)                           for T_Di − T_Ri ≤ t ≤ T_P − T_Ri
    1 − (T_Di/T_P) · e^((T_P − T_Ri − t)/T_P)     for t ≥ T_P − T_Ri

Fact: 0 ≤ T_Ri ≤ T_Di ≤ T_P
Definitions of Time Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

    T_Ri = ( Σ_{k ∈ Nodes} R_{k,i}² · C_k ) / R_{i,i}    (mathematical artifact, no intuitive meaning)
    T_Di = Σ_{k ∈ Nodes} R_{k,i} · C_k                   (Elmore delay)
    T_P  = Σ_{k ∈ Nodes} R_{k,k} · C_k                   (RC time constant for the lumped network)
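For the example network, the three time constants can be computed directly from these definitions, and the fact 0 ≤ T_Ri ≤ T_Di ≤ T_P can be checked. Setting every resistor and capacitor to 1 is our choice for illustration:

```python
# Time constants for the example RC tree, with every R and C set to 1.
parent = {1: 0, 2: 1, 3: 2, 4: 3, 5: 2}   # parent[i]: source side of resistor R_i
R = {i: 1.0 for i in parent}
C = {i: 1.0 for i in parent}

def path(i):
    p = set()
    while i != 0:
        p.add(i)
        i = parent[i]
    return p

def R_ik(i, k):
    """Elmore resistance: sum of resistors common to the paths to i and k."""
    return sum(R[r] for r in path(i) & path(k))

def T_D(i):
    """Elmore delay at node i."""
    return sum(R_ik(k, i) * C[k] for k in parent)

def T_P():
    """RC time constant for the lumped network."""
    return sum(R_ik(k, k) * C[k] for k in parent)

def T_R(i):
    """Time constant used in the bounds (the 'mathematical artifact')."""
    return sum(R_ik(k, i) ** 2 * C[k] for k in parent) / R_ik(i, i)
```

For node 3, for example, this gives T_R = 9, T_D = 11, and T_P = 13, so the chain of inequalities holds.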
Picking the Trip Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    V_i(t) = VDD·(1 − e^(−t/T_Di))

Pick a trip point of V_i(t) = 0.65·VDD, then solve for t:

    0.65·VDD = VDD·(1 − e^(−t/T_Di))
    0.35 = e^(−t/T_Di)

Take the natural log of both sides:

    ln 0.35 = ln( e^(−t/T_Di) )
    ln 0.35 = −1.05 ≈ −1.0
    −1.0 ≈ −t/T_Di
    t ≈ T_Di

By picking a trip point of 0.65·VDD, the time for V_i to reach the trip point is the Elmore delay.
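The arithmetic in the trip-point derivation is easy to check numerically; the 100 ps Elmore delay below is an invented example value:

```python
import math

T_Di = 100e-12  # assumed Elmore delay of 100 ps, for illustration only

# Solving 0.65*VDD = VDD*(1 - exp(-t/T_Di)) for t gives t = -ln(0.35)*T_Di.
t = -math.log(0.35) * T_Di

# -ln(0.35) is about 1.05, so the 65% trip point is reached at roughly T_Di.
```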
5.4.4 Examples of Using Elmore Delay
5.4.4.1 Interconnect with Single Fanout
[Figure: gate G1 driving gate G2 through a chain of antifuses and wires, and the
corresponding RC tree: Ra1, C1, Rw1, Ra2, C2, Rw2, Ra3, C3, Rw3, Ra4, ending at the input
capacitance CG2 of G2]

G*: gate
C*: capacitance on wire
Ra*: resistance through antifuse
Rw*: resistance through wire
Question: Calculate delay from gate 1 to gate 2
Answer:
Gate 2 represents node 4 on the RC tree.
    τ_D4 = Σ_{k=1}^{4} ER_{k,4} · C_k
         = ER_{1,4}·C_1 + ER_{2,4}·C_2 + ER_{3,4}·C_3 + ER_{4,4}·C_4    (C_4 is CG2)
         = (Ra_1 + Rw_1 + Ra_2 + Rw_2 + Ra_3 + Rw_3 + Ra_4)·CG2
         + (Ra_1 + Rw_1 + Ra_2 + Rw_2 + Ra_3 + Rw_3)·C_3
         + (Ra_1 + Rw_1 + Ra_2 + Rw_2)·C_2
         + (Ra_1 + Rw_1)·C_1

Approximate Ra >> Rw:

    τ_D4 = (Ra_1)·C_1 + (Ra_1 + Ra_2)·C_2 + (Ra_1 + Ra_2 + Ra_3)·C_3
         + (Ra_1 + Ra_2 + Ra_3 + Ra_4)·CG2

Approximate Ra_i = Ra_j = Ra:

    τ_D4 = 4·Ra·CG2 + 3·Ra·C_3 + 2·Ra·C_2 + Ra·C_1
Question: If you double the number of antifuses and wires needed to connect two
gates, what will be the approximate effect on the wire delay between the gates?
Answer:

    τ_Di = Σ_{k=1}^{n} ER_{k,i} · C_k

Assume all resistances and capacitances are the same values (R and C), and
assume that all intermediate nodes are along the path between the two gates of
interest. Then ER_{k,i} = k·R, and:

    τ_Di = ( Σ_{k=1}^{n} k )·R·C

Using the mathematical theorem:
    Σ_{i=1}^{n} i = (n+1)·n/2 ≈ n²/2

we simplify the delay equation:

    τ_Di = ( Σ_{k=1}^{n} k )·R·C ≈ (n²/2)·R·C

We see that the delay is proportional to the square of the number of antifuses
along the path.
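Under the stated assumptions (equal R and C on every segment, all nodes on one path), the quadratic growth can be checked with a few lines of Python; the function name is ours:

```python
R, C = 1.0, 1.0  # equal antifuse resistance and capacitance, for illustration

def chain_delay(n):
    """Elmore delay through n identical RC segments in series:
    ER_{k,i} = k*R, so the delay is (1 + 2 + ... + n) * R * C."""
    return sum(range(1, n + 1)) * R * C

# chain_delay(n) equals n*(n+1)/2 * R*C, approximately (n^2/2) * R*C,
# so doubling the number of segments roughly quadruples the delay.
```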
5.4.4.2 Interconnect with Multiple Gates in Fanout
[Figure: gate G1 driving gates G2 and G3 through a branching network of antifuses and wires]
Question: Assuming that wire resistance is much less than antifuse resistance and
that all antifuses have equal resistance, calculate the delay from the source inverter
(G1) to G2
Answer:
1. There are a total of 7 nodes in the circuit (n = 7).
2. Label the interconnect with resistance and capacitance identifiers.

[Figure: the interconnect labelled with resistance identifiers R1 through R6 and
capacitance identifiers C1 through C7]
3. Draw the RC tree.

[Figure: RC tree with nodes n1 through n7: G2 is at node 5, reached through R1, R2, R3,
and R4 (capacitances C1, C2, C4, C5 along the way); G3 is at node 7, reached through R1,
R2, R5, and R6 (capacitances C6, C7 on the branch); C3 (node 3) hangs off node 2]

4. G2 is node 5 in the circuit (i = 5).
5. Elmore delay equation:

    τ_D5 = Σ_{k=1}^{7} ER_{k,5} · C_k
         = ER_{1,5}·C_1 + ER_{2,5}·C_2 + ER_{3,5}·C_3 + ER_{4,5}·C_4
         + ER_{5,5}·C_5 + ER_{6,5}·C_6 + ER_{7,5}·C_7
6. Elmore resistances:

    ER_{1,5} = R1 = R
    ER_{2,5} = R1 + R2 = 2R
    ER_{3,5} = R1 + R2 = 2R
    ER_{4,5} = R1 + R2 + R3 = 3R
    ER_{5,5} = R1 + R2 + R3 + R4 = 4R
    ER_{6,5} = R1 + R2 = 2R
    ER_{7,5} = R1 + R2 = 2R
7. Plug the resistances into the delay equation:

    τ_D5 = (R)·C_1 + (2R)·C_2 + (2R)·C_3 + (3R)·C_4 + (4R)·C_5 + (2R)·C_6 + (2R)·C_7
Delay from G1 to G3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Assuming that wire resistance is much less than antifuse resistance and
that all antifuses have equal resistance, calculate the delay from the source inverter
(G1) to G3
Answer:
1. G3 is node 7 in the circuit (i = 7).
2. Elmore delay equation:

    τ_Di = Σ_{k=1}^{n} ER_{k,i} · C_k

    τ_D7 = Σ_{k=1}^{7} ER_{k,7} · C_k
         = ER_{1,7}·C_1 + ER_{2,7}·C_2 + ER_{3,7}·C_3 + ER_{4,7}·C_4
         + ER_{5,7}·C_5 + ER_{6,7}·C_6 + ER_{7,7}·C_7
3. Elmore resistances:

    ER_{1,7} = R1 = R
    ER_{2,7} = R1 + R2 = 2R
    ER_{3,7} = R1 + R2 = 2R
    ER_{4,7} = R1 + R2 = 2R
    ER_{5,7} = R1 + R2 = 2R
    ER_{6,7} = R1 + R2 + R5 = 3R
    ER_{7,7} = R1 + R2 + R5 + R6 = 4R
4. Plug the resistances into the delay equation:

    τ_D7 = (R)·C_1 + (2R)·C_2 + (2R)·C_3 + (2R)·C_4 + (2R)·C_5 + (3R)·C_6 + (4R)·C_7
Delay to G2 vs G3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Question: Assuming all wire segments at same level have roughly the same
capacitance, which is greater, the delay to G2 or the delay to G3?
Answer:
1. Equations for delay to G2 (
D5
) and G3 (
D7
)

D5
= (R)C
1
+(2R)C
2
+(2R)C
3
+(3R)C
4
+(4R)C
5
+(2R)C
6
+(2R)C
7

D7
= (R)C
1
+(2R)C
2
+(2R)C
3
+(2R)C
4
+(2R)C
5
+(3R)C
6
+(4R)C
7
2. Difference in delays

   ED5 − ED7 = R·C4 + 2R·C5 − R·C6 − 2R·C7
3. Compare capacitances

   C4 ≈ C6,  C5 ≈ C7
4. Conclusion: delays are approximately equal.
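The two hand calculations can be cross-checked with a short script. This is a sketch, not part of the original notes: the per-path resistance sets below are read off the Elmore resistances computed above, and unit values R = C = 1 are assumed.

```python
# Elmore delay: ED_i = sum over k of ER(k, i) * C_k, where ER(k, i) is the
# resistance shared between the source-to-k and source-to-i paths.
R = 1.0

# Resistances on the path from the source to each node (read off the
# ER values above); G2 is node 5 and G3 is node 7.
path = {
    1: {"R1"},
    2: {"R1", "R2"},
    3: {"R1", "R2"},                # C3 branches off near node 2
    4: {"R1", "R2", "R3"},
    5: {"R1", "R2", "R3", "R4"},    # G2
    6: {"R1", "R2", "R5"},
    7: {"R1", "R2", "R5", "R6"},    # G3
}

def elmore_delay(i, cap):
    """ED_i = sum over k of (number of shared resistances) * R * C_k."""
    return sum(len(path[k] & path[i]) * R * c for k, c in cap.items())

cap = {k: 1.0 for k in path}        # all node capacitances equal
print(elmore_delay(5, cap), elmore_delay(7, cap))   # 16.0 16.0
```

With equal capacitances both delays come out to 16RC, matching the conclusion that the delays to G2 and G3 are approximately equal.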
5.5 Practical Usage of Timing Analysis
Speed Grading
Fabs sort chips according to their speed (sorting is known as speed grading or speed
binning)
Faster chips are more expensive
In FPGAs, sorting is usually based on propagation delay through an FPGA cell. As wires
become a larger portion of delay, some analysis of wire delays is also being done.
Propagation delay is the average of the rising and falling propagation delays.
Typical speed grades for FPGAs:
Std  standard speed grade
-1   15% faster than Std
-2   25% faster than Std
-3   35% faster than Std
Worst-Case Timing
Maximum delay in CMOS. When?
  Minimum voltage
  Maximum temperature
  Slow-slow conditions (a process variation/corner which results in a slow p-channel and
  a slow n-channel). We could also have fast-fast, slow-fast, and fast-slow process corners.
Increasing temperature increases delay:
  ↑Temp ⇒ ↑resistivity (atoms vibrate more and collide more often with the electrons
  carrying the current)
  ↑resistivity ⇒ ↑delay
Increasing supply voltage decreases delay:
  ↑supply voltage ⇒ ↑current
  ↑current ⇒ ↓time to charge load capacitors
  ↓charge time ⇒ ↓total delay
A derating factor is a number used to adjust timing numbers to account for voltage and
temperature conditions.
ASIC manufacturers offer classes of parts for a variety of environments:

             VDD        TA (ambient temp)  TC (case temp)
Commercial   5V ± 5%    0 to +70 C
Industrial   5V ± 10%   −40 to +85 C
Military     5V ± 10%   −55 to +125 C

What is important is the transistor temperature inside the chip, TJ (the junction temperature).
5.5.1 Speed Binning
Speed binning is the process of testing each manufactured part to determine the maximum clock
speed at which it will run reliably.
Manufacturers sell chips off of the same manufacturing line at different prices based on how fast
they will run.
A speed bin is the clock speed that chips will be labeled with when sold.
Overclocking: running a chip at a clock speed faster than what it is rated for (and hoping that your
software crashes more frequently than your over-stressed hardware will).
394 CHAPTER 5. TIMING ANALYSIS
5.5.1.1 FPGAs, Interconnect, and Synthesis
On FPGAs, 40-60% of the clock cycle is consumed by interconnect.
When synthesizing, increasing the effort (number of iterations) of place and route can significantly
reduce the clock period on large designs.
5.5.2 Worst Case Timing
5.5.2.1 Fanout delay
In Smith's book, Table 5.2 (Fanout delay) combines two separate parameters:
capacitive load delay
interconnect delay
into a single parameter (fanout). This is common, and fine.
But, when reading a table such as this, you need to know whether fanout delay is combining both
capacitive load delay and interconnect delay, or is just capacitive load.
5.5.2.2 Derating Factors
Delays are dependent upon supply voltage and temperature.
  ↑Temp ⇒ ↑Delay
  ↑Supply voltage ⇒ ↓Delay
Temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
  ↑Temp ⇒ ↑Delay
  ↑Temp ⇒ ↑resistivity of wires
  As temperature goes up, atoms vibrate more, and so have a greater probability of colliding with
  the electrons flowing with the current.
Supply voltage:
  ↑Supply voltage ⇒ ↓Delay
  ↑Supply voltage ⇒ ↑current (V = IR)
  ↑current ⇒ ↓time to charge load capacitors to the threshold voltage
Derating Factor Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A derating factor is a number used to adjust timing numbers to account for different temperature and
voltage conditions.
Excerpt from Table 5.3 in Smith's book (Actel ACT 3 derating factors):

Derating factor   Temp     Vdd
1.17              125 C    4.5 V
1.00              70 C     5.0 V
0.63              −55 C    5.5 V
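To make the table concrete: a derated delay is simply the nominal delay multiplied by the factor for the operating point. A small sketch (the 3.0 ns nominal delay is an assumed example value, not a datasheet number):

```python
# Derating sketch: adjusted delay = nominal delay * derating factor.
derating = {              # (temperature in C, Vdd in volts) -> factor
    (125, 4.5): 1.17,
    (70, 5.0):  1.00,
    (-55, 5.5): 0.63,
}
nominal_ns = 3.0          # assumed nominal delay at 70 C, 5.0 V
for (temp, vdd), factor in sorted(derating.items()):
    print(f"{temp:4d} C, {vdd} V: {nominal_ns * factor:.2f} ns")
```

Hot, low-voltage conditions stretch the delay by 17%; cold, high-voltage conditions shrink it to 63% of nominal.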
5.6 Timing Analysis Problems
P5.1 Terminology
For each of the terms clock skew, clock period, setup time, hold time, and clock-to-q, answer
which time periods (one or more of t1–t9, or NONE) are examples of the term.
NOTES:
1. The timing diagram shows the limits of the allowed times (either minimum or maximum).
2. All timing parameters are non-negative.
3. The signal a is the input to a rising-edge flop and b is the output. The clock is clk1.
[Timing diagram: clocks clk1 and clk2, flop input a and output b, with labeled intervals t1–t11; shading distinguishes where a signal may change from where it is stable.]
clock skew
clock period
setup time
hold time
P5.2 Hold Time Violations
P5.2.1 Cause
What is the cause of a hold time violation?
P5.2.2 Behaviour
What is the bad behaviour that results if a hold time violation occurs?
P5.2.3 Rectification
If a circuit has a hold time violation, how would you correct the problem with minimal effort?
P5.3 Latch Analysis
Does the circuit below behave like a latch? If not, explain why not. If so, calculate the clock-to-Q,
setup, and hold times; and answer whether it is active-high or active-low.
Gate Delays
AND 4
OR  2
NOT 1

[Circuit schematic: inputs en and d, output q.]
P5.4 Critical Path and False Path
Find the critical path through the following circuit:
[Circuit schematic: primary inputs a–e, internal signals f–l, output m.]
P5.5 Critical Path
[Circuit schematic: primary inputs a–e, internal signals f–l, output m.]

gate  delay
NOT   2
AND   4
OR    4
XOR   6
Assume all delay and timing factors other than combinational logic delay are negligible.
P5.5.1 Longest Path
List the signals in the longest path through this circuit.
P5.5.2 Delay
What is the combinational delay along the longest path?
P5.5.3 Missing Factors
What factors that affect the maximum clock speed does your analysis for parts 1 and 2 not take
into account?
P5.5.4 Critical Path or False Path?
Is the longest path that you found a real critical path, or a false path? If it is a false path, find the
real critical path. If it is a critical path, find a set of assignments to the primary inputs that
exercises the critical path.
P5.6 YACP: Yet Another Critical Path
Find the critical path in the circuit below.
[Circuit schematic: signals a–h.]
P5.7 Timing Models
In your next job, you have been told to use a fanout timing model, which states that the delay
through a gate increases linearly with the number of gates in the immediate fanout. You dimly
recall that a long time ago you learned about a timing model named Elmo, Elmwood, Elmore,
El-Morre, or something like that.
For the circuit shown below as a schematic and as a layout, answer whether the fanout timing
model closely matches the delay values predicted by the Elmore delay model.
[Figure: schematic of G1 driving G2, G3, G4, G5, and the corresponding layout showing the gates, antifuses, and interconnect levels 1 and 2.]

Component capacitances and resistances: Cg is the gate capacitance; Cx and Cy are the capacitances of interconnect levels 1 and 2; an antifuse has resistance R and zero capacitance; the gates and interconnect have zero resistance.
Assumptions:
The capacitance of a node on a wire is independent of where the node is located on the wire.
P5.8 Short Answer
P5.8.1 Wires in FPGAs
In an FPGA today, what percentage of the clock period is typically consumed by wire delay?
P5.8.2 Age and Time
If you were to compare a typical digital circuit from 5 years ago with a typical digital circuit
today, would you find that the percentage of the total clock period consumed by capacitive load
has increased, stayed the same, or decreased?
P5.8.3 Temperature and Delay
As temperature increases, does the delay through a typical combinational circuit increase, stay
the same, or decrease?
P5.9 Worst Case Conditions and Derating Factor
Assume that we have a Std speed grade Actel A1415 (an ACT 3 part) Logic Module that drives
4 other Logic Modules:
P5.9.1 Worst-Case Commercial
Estimate the delay under worst-case commercial conditions (assume that the junction temperature
is the same as the ambient temperature).
P5.9.2 Worst-Case Industrial
Find the derating factor for worst-case industrial conditions and calculate the delay (assume that
the junction temperature is the same as the ambient temperature).
P5.9.3 Worst-Case Industrial, Non-Ambient Junction Temperature
Estimate the delay under the worst-case industrial conditions (assuming that the junction
temperature is 105C).
Chapter 6
Power Analysis and Power-Aware Design
6.1 Overview
6.1.1 Importance of Power and Energy
Laptops, PDAs, cell phones, etc.: obvious!
For microprocessors in personal computers, every watt above 40W adds $1 to manufacturing
cost
Approx 25% of the operating expense of a server farm goes to energy bills
(Dis)Comfort of the Unix labs in E2
Sandia Labs had to build a special sub-station when they took delivery of the Teraflops massively
parallel supercomputer (over 9000 Pentium Pros)
High-speed microprocessors today can run so hot that they will damage themselves: Athlon
reliability problems, Pentium 4 processor thermal throttling
In 2000, information technology consumed 8% of the total power in the US.
Future power viruses: cell-phone viruses that cause the phone to run in full-power mode and
drain the battery very quickly; PC viruses that cause the CPU to melt down
6.1.2 Industrial Names and Products
All of the articles and papers below are linked to from the Documentation page on the E&CE 327
web site.
Overview white paper by Intel:
PC Energy-Efficiency Trends and Technologies: an 8-page overview of energy and power trends,
written in 2002. Available from the web at an intolerably long URL.
AMD's Athlon PowerNow!
Reduce power consumption in laptops when running on battery by allowing software to
reduce clock speed and supply voltage when performance is less important than battery life.
Intel Speedstep
Reduce power consumption in laptops when running on battery by reducing clock speed to
70-80% of normal.
Intel X-Scale
An ARM5-compatible microprocessor for low-power systems:
http://developer.intel.com/design/intelxscale/
Synopsys PowerMill
A simulator that estimates power consumption of the circuit as it is simulated:
http://www.synopsys.com/products/etg/powermill_ds.html
DEC / Compaq / HP Itsy: a tiny but powerful PDA-style computer running Linux and
X-windows. Itsy was created in 1998 by DEC's Western Research Laboratory to be an
experimental platform in low-power, energy-efficient computing. Itsy led to the iPAQ
PocketPC.
www.hpl.hp.com/techreports/Compaq-DEC/WRL-2000-6.html
www.hpl.hp.com/research/papers/2003/handheld.html
Satellites: satellites run on solar power and batteries. They travel great distances doing very
little, then have a brief period of very intense activity as they pass by an astronomical object of
interest. Satellites need efficient means to gather and store energy while they are flying
through space. Satellites need powerful, but energy-efficient, computing and
communication devices to gather, process, and transmit data. Designing computing devices
for satellites is an active area of research and business.
6.1.3 Power vs Energy
Most people talk about power reduction, but sometimes they mean power and sometimes
energy.
Power minimization is usually about heat removal
Energy minimization is usually about battery life or energy costs
Type    Units   Equivalent Types  Equations
Energy  Joules  Work              = Volts × Coulombs = 1/2 × C × Volts²
Power   Watts   Energy / Time     = Volts × I = Joules/sec
6.1.4 Batteries, Power and Energy
6.1.4.1 Do Batteries Store Energy or Power?
Energy = Volts × Coulombs
Power  = Energy / Time

Batteries are rated in amp-hours at a voltage:

  battery = Amps × Seconds × Volts
          = (Coulombs/Seconds) × Seconds × Volts
          = Coulombs × Volts
          = Energy

Batteries store energy.
6.1.4.2 Battery Life and Efciency
To extend battery life, we want to increase the amount of work done and/or decrease the energy
consumed.
Work and energy have the same units; therefore, to extend battery life, we truly want to improve
efficiency.
Power efficiency of microprocessors is normally measured in MIPS/Watt. Is this a real measure of
efficiency?

  MIPS/Watts = (millions of instructions / Seconds) × (Seconds / Energy)
             = millions of instructions / Energy

Both instructions executed and energy are measures of work, so MIPS/Watt is a measure of
efficiency.
(This assumes that all instructions perform the same amount of work!)
6.1.4.3 Battery Life and Power
Question: Running a VHDL simulation requires executing an average of 1 million
instructions per simulation step. My computer runs at 700MHz, has a CPI of 1.0, and
burns 70W of power. My battery is rated at 10V and 2.5AH. Assuming all of my
computers clock cycles go towards running VHDL simulations, how many
simulation steps can I run on one battery charge?
Answer:
Outline of approach:
1. Unify the units
2. Calculate amount of energy stored in battery
3. Calculate energy consumed by each simulation step
4. Calculate number of simulation steps that can be run
Unify the units:
  Amp (current): Coulomb/sec
  Volt (potential difference, energy per charge): Joule/Coulomb
  Watt (power): Joule/sec

Energy stored in battery (check the equation by checking the units):

  Ebatt = AmpHours × Vbatt
        = Amp × hour × (sec/hour) × Volt
        = (Coulomb/sec) × hour × (sec/hour) × (Joule/Coulomb)
        = Joule

Units match, do the math:

  Ebatt = 2.5 AH × 3600 sec/hour × 10 V
        = 90,000 Joules
Energy per simulation step (check the units):

  Estep = Watts × (sec/cyc) × (cyc/instr) × (instr/step)
        = (Joule/sec) × (sec/cyc) × (cyc/instr) × (instr/step)
        = Joule/step

Units check, do the math:

  Estep = 70 Watts × (1 / 700×10⁶ cyc/sec) × 1.0 cyc/instr × 10⁶ instr/step
        = 0.1 Joule/step
Number of steps:

  NumSteps = Ebatt / Estep = 90,000 / 0.1 = 900,000 steps
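The same unit bookkeeping can be reproduced in a few lines (a sketch using the numbers from the question):

```python
# Battery capacity vs energy per simulation step.
amp_hours, volts = 2.5, 10.0        # battery rating
power_w = 70.0                      # CPU power
clock_hz = 700e6                    # 700 MHz
cpi = 1.0                           # cycles per instruction
instr_per_step = 1e6                # instructions per simulation step

e_batt = amp_hours * 3600 * volts                      # A*h * V -> Joules
e_step = (power_w / clock_hz) * cpi * instr_per_step   # Joules per step
print(e_batt, round(e_step, 3), round(e_batt / e_step))
# -> 90000.0 Joules, 0.1 Joule/step, 900000 steps
```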
Question: If I use the SpeedStep feature of my computer, my computer runs at
600MHz with 60W of power. With SpeedStep activated, how much longer can I keep the
computer running on one battery?
Answer:
Approach:
1. Calculate uptime with Speedstep turned off (high power)
2. Calculate uptime with Speedstep turned on (low power)
3. Calculate difference in uptimes
High-power uptime:

  TH = Ebatt / PH = 90,000 Watt-secs / 70 Watts = 1285 secs ≈ 21 minutes

Low-power uptime:

  TL = Ebatt / PL = 90,000 Watt-secs / 60 Watts = 1500 secs = 25 minutes

Difference in uptimes:

  Tdiff = TL − TH = 25 − 21 = 4 minutes
Analysis:
This question is based on data from a typical laptop. So, why are the
predicted uptimes so much shorter than those experienced in reality?
Answer: The power consumption figures are the maximum peak power
consumption of the laptop: disk spinning, fan blowing, bus active, all
peripherals active, all modules on the CPU turned on. In reality, laptops
almost never experience their maximum power consumption.
Question: With SpeedStep activated, how many more simulation steps can I run on
one battery?
Answer:
Clock speed is proportional to power consumption. In both high-power and
low-power modes, the system runs the same number of clock cycles on the
energy stored in the battery. So, we can run the same number of simulation
steps both with and without SpeedStep activated.
Analysis:
In reality, with SpeedStep activated, I am able to run more simulation steps.
Why does the theoretical calculation disagree with reality?
Answer: In reality, the processor does not use 100% of the clock cycles for
running the simulator. Many clock cycles are wasted while waiting for I/O
from the disk, user, etc. When reducing the clock speed, a smaller number of
clock cycles are wasted as idle clock cycles.
6.2 Power Equations
  Power = SwitchPower + ShortPower + LeakagePower
          \______________________/   \__________/
                DynamicPower          StaticPower

Dynamic Power: dependent upon clock speed
  Switching Power (useful): charges up transistors
  Short-Circuit Power (not useful): both N and P transistors are on
Static Power: independent of clock speed
  Leakage Power (not useful): leaks around the transistor
Dynamic power is proportional to how often signals change their value (switch).
Roughly 20% of signals switch during a clock cycle.
Need to take glitches into account when calculating activity factor. Glitches increase the
activity factor.
Equations for dynamic power contain clock speed and activity factor.
6.2.1 Switching Power
[Figure: an inverter charging its load capacitance CapLoad on a 0→1 output transition, and discharging it on a 1→0 output transition.]

  energy to (dis)charge capacitor = 1/2 × CapLoad × VoltSup²

When a capacitor C is charged to a voltage V, the energy stored in the capacitor is (1/2)CV².
The energy required to charge the capacitor from 0 to V is CV². Half of the energy ((1/2)CV²) is
dissipated as heat through the pullup resistance; half of the energy is transferred to the capacitor.
When the capacitor discharges from V to 0, the energy stored in the capacitor ((1/2)CV²) is
dissipated as heat through the pulldown resistance.
f: the frequency at which the inverter goes through a complete charge-discharge cycle (eqn 15.4 in
Smith)

  average switching power = f × CapLoad × VoltSup²

ClockSpeed: clock speed
ActFact: average number of times that a signal switches from 0→1 or from 1→0 during a clock cycle

  average switching power = 1/2 × ActFact × ClockSpeed × CapLoad × VoltSup²
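As a numeric illustration of the final equation (all values below are assumed, illustrative numbers, not from any datasheet):

```python
# Switching power for one signal:
#   P_sw = 1/2 * ActFact * ClockSpeed * CapLoad * VoltSup^2
def switching_power(act_fact, clock_hz, cap_f, vdd):
    return 0.5 * act_fact * clock_hz * cap_f * vdd ** 2

# e.g. 20% activity factor, 100 MHz clock, 100 fF load, 1.2 V supply
p = switching_power(0.20, 100e6, 100e-15, 1.2)
print(p)   # watts dissipated by this one signal (about 1.4 microwatts)
```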
6.2.2 Short-Circuited Power
[Figure: inverter with input Vi and output Vo. While the gate voltage is between VoltThresh and VoltSup − VoltThresh, both the P and N transistors are on, and a short-circuit current IShort flows from VoltSup to GND for a time TimeShort.]

  PwrShort = ActFact × ClockSpeed × TimeShort × IShort × VoltSup
6.2.3 Leakage Power
[Figure: cross section of an inverter (input Vi, output Vo, N-substrate) showing the parasitic diode, and the diode I–V curve showing the leakage current ILeak.]

  PwrLk = ILeak × VoltSup

  ILeak ∝ e^(−q × VoltThresh / (k × T))
412 CHAPTER 6. POWER ANALYSIS AND POWER-AWARE DESIGN
6.2.4 Glossary
ClockSpeed   def  clock speed
             aka  f

ActFact      def  activity factor
             aka  A
              =   NumTransitions / (NumSignals × NumClockCycles)
              =   per signal: percentage of clock cycles in which the signal changes value
              =   per clock cycle: percentage of signals that change value in a clock
                  cycle. Note: when measuring per circuit, this is sometimes approximated
                  by looking only at flops, rather than at every single signal.

TimeShort    def  short-circuit time
              =   time that both the N and P transistors are turned on when a signal
                  changes value

MaxClockSpeed  def  maximum clock speed that an implementation technology can support
               aka  fmax
                ∝   (VoltSup − VoltThresh)² / VoltSup

VoltSup      def  supply voltage
             aka  V

VoltThresh   def  threshold voltage
             aka  Vth
              =   voltage at which P transistors turn on

ILeak        def  leakage current
             aka  IS (reverse-bias saturation current)
              ∝   e^(−q × VoltThresh / (k × T))

IShort       def  short-circuit current
             aka  Ishort
              =   current that goes through the transistor network while both the N and
                  P transistors are turned on

CapLoad      def  load capacitance
             aka  CL

PwrSw        def  switching power (dynamic)
              =   1/2 × ActFact × ClockSpeed × CapLoad × VoltSup²

PwrShort     def  short-circuit power (dynamic)
              =   ActFact × ClockSpeed × TimeShort × IShort × VoltSup

PwrLk        def  leakage power (static)
              =   ILeak × VoltSup

Power        def  total power
              =   PwrSw + PwrShort + PwrLk

q            def  electron charge
              =   1.60218×10^−19 C

k            def  Boltzmann's constant
              =   1.38066×10^−23 J/K

T            def  temperature in Kelvin
6.2.5 Note on Power Equations
The power equation:

  Power = DynamicPower + StaticPower
        = PwrSw + PwrShort + PwrLk
        = (1/2 × ActFact × ClockSpeed × CapLoad × VoltSup²)
        + (ActFact × ClockSpeed × TimeShort × IShort × VoltSup)
        + (ILeak × VoltSup)

is for an individual signal.
To calculate dynamic power for n signals with different CapLoad, TimeShort, and IShort:

  DynamicPower = Σ(i=1..n) ActFact_i × 1/2 × CapLoad_i × ClockSpeed × VoltSup²
               + Σ(i=1..n) ActFact_i × ClockSpeed × TimeShort_i × IShort_i × VoltSup

If we know the average CapLoad, TimeShort, and IShort for a collection of n signals, then the
above formula simplifies to:

  DynamicPower = n × ActFact_AVG × 1/2 × CapLoad_AVG × ClockSpeed × VoltSup²
               + n × ActFact_AVG × ClockSpeed × TimeShort_AVG × IShort_AVG × VoltSup
If the capacitances and short-circuit parameters don't have an even distribution, then don't average
them. If high-capacitance signals have high activity factors, then averaging the equations will
result in erroneously low predictions for power.
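A small sketch of that caveat, comparing the exact per-signal sum against the averaged approximation when capacitance and activity factor are correlated (all values are made-up illustrations):

```python
# Exact per-signal switching power vs the averaged approximation.
clock_hz, vdd = 100e6, 1.2
act = [0.5, 0.5, 0.05, 0.05]                 # activity factors
cap = [200e-15, 200e-15, 20e-15, 20e-15]     # correlated with activity

exact = sum(0.5 * a * c * clock_hz * vdd**2 for a, c in zip(act, cap))

n = len(act)
approx = n * 0.5 * (sum(act) / n) * (sum(cap) / n) * clock_hz * vdd**2

print(exact > approx)   # True: averaging under-predicts the power
```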
6.3 Overview of Power Reduction Techniques
We can divide power reduction techniques into two classes: analog and digital.
analog
Parameters to work with:
capacitance for example, Silicon on Insulator (SOI)
resistance for example, copper wires
voltage low-voltage circuits
Techniques:
dual-VDD Two different supply voltages: high voltage for performance-critical
portions of design, low voltage for remainder of circuit. Alternatively, can vary
voltage over time: high voltage when running performance-critical software and
low voltage when running software that is less sensitive to performance.
dual-Vt Two different threshold voltages: transistors with low threshold voltage for
performance-critical portions of design (can switch more quickly, but more
leakage power), transistors with high threshold voltage for remainder of circuit
(switches more slowly, but reduces leakage power).
exotic circuits Special flops, latches, and combinational circuitry that run at a high
frequency while minimizing power
adiabatic circuits Special circuitry that consumes power on 0 1 transitions, but
not 1 0 transitions. These sacrice performance for reduced power.
clock trees Up to 30% of total power can be consumed in clock generation and
clock tree
digital
Parameters to work with:
capacitance (number of gates)
activity factor
clock frequency
Techniques:
multiple clocks Put a high speed clock in performance-critical parts of design and a
low speed clock for remainder of circuit
clock gating Turn off the clock to portions of a chip when it's not being used
data encoding Gray coding vs one-hot vs fully encoded vs ...
glitch reduction Adjust circuit delays or add redundant circuitry to reduce or
eliminate glitches.
asynchronous circuits Get rid of clocks altogether....
Additional low-power design techniques for RTL from a Qualis engineer:
http://home.europa.com/~celiac/lowpower.html
6.4 Voltage Reduction for Power Reduction
If our goal is to reduce power, the most promising approach is to reduce the supply voltage,
because, from:
  Power = (1/2 × ActFact × ClockSpeed × CapLoad × VoltSup²)
        + (ActFact × ClockSpeed × TimeShort × IShort × VoltSup)
        + (ILeak × VoltSup)

we observe:

  Power ∝ VoltSup²
Reducing Difference Between Supply and Threshold Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . .
As the supply voltage decreases, it takes longer to charge up the capacitive load, which increases
the load delay of a circuit.
In the chapter on timing analysis, we saw that increasing the supply voltage will decrease the
delay through a circuit. (From V = IR, increasing V causes an increase in I, which causes the
capacitive load to charge more quickly.) However, it is more accurate to take into account both
the value of the supply voltage, and the difference between the supply voltage and the threshold
voltage.
  MaxClockSpeed ∝ (VoltSup − VoltThresh)² / VoltSup
Question: If the delay along the critical path of a circuit is 20 ns, the supply voltage
is 2.8 V, and the threshold voltage is 0.7 V, calculate the critical path delay if the
supply voltage is dropped to 2.2 V.
Answer:
  d  = 20 ns    current delay along critical path
  d' = ??       new delay along critical path
  V  = 2.8 V    current supply voltage
  V' = 2.2 V    new supply voltage
  Vt = 0.7 V    threshold voltage
  MaxClockSpeed ∝ 1/d
  MaxClockSpeed ∝ (VoltSup − VoltThresh)² / VoltSup

  ⇒  d ∝ V / (V − Vt)²

  d'/d = ((V − Vt)² / V) × (V' / (V' − Vt)²)

  d' = d × ((V − Vt)² / V) × (V' / (V' − Vt)²)
     = 20 ns × ((2.8V − 0.7V)² / 2.8V) × (2.2V / (2.2V − 0.7V)²)
     = 31 ns
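The same calculation as a sketch, using the scaling relation derived in this section:

```python
# Critical-path delay scaling with supply voltage: d varies as Vdd / (Vdd - Vt)^2.
def scaled_delay(d, v, v_new, vt):
    """Scale critical-path delay d when Vdd changes from v to v_new."""
    return d * ((v - vt) ** 2 / v) * (v_new / (v_new - vt) ** 2)

d_new = scaled_delay(20e-9, 2.8, 2.2, 0.7)
print(round(d_new * 1e9, 1))   # ~31 ns
```

Dropping the supply from 2.8 V to 2.2 V stretches the 20 ns path by roughly half again.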
Reducing Threshold Voltage Increases Leakage Current . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
If we reduce the supply voltage, we want to also reduce the threshold voltage, so that we do not
increase the delay through the circuit. However, as the threshold voltage drops, the leakage current
increases:

  ILeak ∝ e^(−q × VoltThresh / (k × T))

And increasing the leakage current increases the power:

  Power ∝ ILeak

So, we need to strike a balance between reducing VoltSup, which has a quadratic effect on reducing
power, and increasing ILeak, which has a linear effect on increasing power.
6.5 Data Encoding for Power Reduction
6.5.1 How Data Encoding Can Reduce Power
Data encoding is a technique that chooses data values so that normal execution will have a low
activity factor.
The most common example is Gray coding, where exactly one bit changes value each clock
cycle when counting.
Decimal Gray Binary
0 0000 0000
1 0001 0001
2 0011 0010
3 0010 0011
4 0110 0100
5 0111 0101
6 0101 0110
7 0100 0111
8 1100 1000
9 1101 1001
10 1111 1010
11 1110 1011
12 1010 1100
13 1011 1101
14 1001 1110
15 1000 1111
Two ways to understand the pattern for Gray-code counting. Both methods are based on noting
when a bit in the Gray code toggles from 0 to 1 or 1 to 0.

To convert from binary to Gray, a bit in the Gray code toggles whenever the corresponding
bit in the binary code goes from 0 to 1. (US Patent 4618849, issued in 1984.)

To implement a Gray-code counter from scratch, number the bits from 1 to n, with a special
less-than-least-significant bit q0. The output of the counter will be qn ... q1.

1. Create a flop that toggles in each clock cycle: q0 <= not q0
2. Bit 1 toggles whenever q0 is 1.
3. For each bit i in 2..n, the counter bit qi toggles whenever q(i-1) is 1 and all of the
   bits q(i-2) ... q0 are 0.
4. This behaviour can be implemented in a ripple-carry style by introducing carry (ci) and
   toggle (qti) signals for each bit:

     q0   <= not(q0)                reg asn
     c0   <= not(q0)                comb asn
     ci   <= c(i-1) and not(qi)     comb asn
     qti  <= q(i-1) and c(i-2)      comb asn

   We create a toggle flip-flop by xoring the output of a D-flop with its toggle signal:

     qi   <= qi xor qti             reg asn
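Independent of the flop-level implementation, the Gray sequence itself is easy to generate and check in software (a sketch using the standard binary-to-Gray identity):

```python
# Gray code via gray = n XOR (n >> 1); consecutive codes differ in exactly
# one bit, which is the property that gives the counter its low activity factor.
def to_gray(n):
    return n ^ (n >> 1)

codes = [to_gray(n) for n in range(16)]
print([format(c, "04b") for c in codes])   # matches the table above

# every consecutive pair, including the wrap-around from 15 back to 0,
# differs in exactly one bit
assert all(bin(a ^ b).count("1") == 1
           for a, b in zip(codes, codes[1:] + codes[:1]))
```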
Question: For an eight-bit counter, how much more power will a binary counter
consume than a Gray-code counter?
Answer:
Power consumption is dependent on area and activity factor. The original
purpose of this problem was to focus on activity factor. The problem
was created under the mistaken assumption that a Gray-code counter
and a binary counter will both use the same area (1 FPGA cell per
bit), and so the power difference comes from the difference in activity
factors. This mistake is addressed at the end of the solution.
For Gray coding, exactly one bit toggles in each clock cycle. Thus, the activity
factor for an n-bit Gray counter will be 1/n.

For binary coding, the least-significant bit toggles in every clock cycle, so it
has an activity factor of 1. The 2nd least-significant bit toggles in every other
clock cycle, so it has an activity factor of 1/2. We study the other bits and try to
find a pattern based on the bit position i, where i = 0 for the least-significant
bit and i = n−1 for the most-significant bit of an n-bit counter. We see that for bit
i, the activity factor is 1/2^i.
For an n-bit binary counter, the average activity factor is the sum of the
activity factors for the signals over the number of signals:

  BinaryActFact = (1/2⁰ + 1/2¹ + 1/2² + ... + 1/2^(n−1)) / n
                = (1/n) × Σ(i=0..n−1) 1/2^i

The limit of the summation term as n goes to infinity is 2. We can see this as
an instance of Zeno's paradox, in that with each step we halve the distance to 2.

  BinaryActFact ≈ (1/n) × 2 = 2/n
Find the ratio of the binary activity factor to the Gray-code activity factor.
  BinaryActFact / GrayActFact = (2/n) / (1/n) = 2
In reality, the ripple-carry Gray-code counter will always have two
transitions per clock cycle: one for the q0 toggle flop and one for the
actual signal in the counter that toggles. Thus the Gray-code counter
will consume more power than the binary counter. The overall power
reduction comes from the circuit that uses the Gray code.
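The 1/2^i pattern and the resulting factor-of-two ratio can be verified numerically (a sketch; n = 8 as in the question):

```python
# Average activity factor of an n-bit binary counter over one full count.
def binary_act_fact(n):
    cycles = 2 ** n
    # bit i toggles once every 2^i cycles
    transitions = sum(cycles // 2 ** i for i in range(n))
    return transitions / (n * cycles)

n = 8
gray_act_fact = 1 / n            # exactly one bit toggles per cycle
ratio = binary_act_fact(n) / gray_act_fact
print(ratio)                     # just under 2, approaching 2 as n grows
```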
Question: For completely random eight-bit data, how much more power will a binary
circuit consume than a Gray-code circuit?
Answer:
If the data is completely random, then the Gray code loses its feature that
consecutive data will differ in only one bit position. In fact, the activity factor
for Gray code and binary code will be the same. There will not be any power
saving by using Gray code. A binary counter will consume the same power as
a Gray-code circuit.
On average, half of the bits will be 1 and half will be 0. For each bit, there are
four possible transitions: 0→0, 0→1, 1→0, and 1→1. Of these four
transitions, two cause a change in value and two do not. Half
of the transitions result in a change in value; therefore, for random data, the
activity factor will be 0.5, independent of the data encoding or the number of bits.
6.5.2 Example Problem: Sixteen Pulser
6.5.2.1 Problem Statement
Your task is to do the power analysis for a circuit that should send out a one-clock-cycle pulse on
the done signal once every 16 clock cycles. (That is, done is 0 for 15 clock cycles, then 1 for
one cycle, then repeat with 15 cycles of 0 followed by a 1, etc.)
[Waveform (required behaviour): clk and done; done is low except for a one-clock-cycle pulse at cycles 16, 32, ...]
You have been asked to consider three different types of counters: a binary counter, a Gray-code
counter, and a one-hot counter. (The table below shows the values from 0 to 15 for the different
encodings.)
Question: What is the relative amount of power consumption for the different
options?
6.5.2.2 Additional Information
Your implementation technology is an FPGA where each cell has a programmable combinational
circuit (PLA) and a flip-flop. The combinational circuit has 4 inputs and 1 output. The capacitive load of
the combinational circuit is twice that of the flip-flop.

[Figure: FPGA cell: a PLA driving a flip-flop.]
1. You may neglect power associated with clocks.
2. You may assume that all counters:
(a) are implemented on the same fabrication process
(b) run at the same clock speed
(c) have negligible leakage and short-circuit currents
6.5.2.3 Answer
Outline of Thinking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Factors to consider that distinguish the options: capacitance and activity factor.
Capacitance is dependent upon the number of signals, and whether a signal is combinational or a
flop.
Sketch out the circuitry to evaluate capacitance.
Sketch the Circuitry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Name the output done and the count digits d().
[Block diagram for Gray and binary counters: four FPGA cells (PLA + flop) produce d(0)-d(3); one more PLA computes done.]

[Block diagram for one-hot counter: a ring of sixteen flops d(0)-d(15); done is taken directly from one of the flops.]

Observation:
The Gray and binary counters have the same design, and the Gray counter will have
the lower activity factor. Therefore, the Gray counter will have lower power than the
binary counter.
However, we don't know how much lower the power of the Gray counter will be, and
we don't know how much power the one-hot counter will consume.
Capacitance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
                      cap   number   subtotal cap
Gray    d()    PLAs    2      4           8
               Flops   1      4           4
        done   PLAs    2      1           2
               Flops   1      0           0
1-Hot   d()    PLAs    2      0           0
               Flops   1     16          16
        done   PLAs    2      0           0
               Flops   1      0           0
Binary  d()    PLAs    2      4           8
               Flops   1      4           4
        done   PLAs    2      1           2
               Flops   1      0           0
Activity Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
d(0)
d(1)
d(2)
d(3)
done
clk
4/16
2/16
2/16
2/16
8/16
Gray coding
d(0)
d(1)
d(2)
done
clk
2/16
2/16
2/16
2/16
2/16
One-hot coding
d(0)
d(1)
d(2)
d(3)
done
clk
8/16
4/16
2/16
2/16
16/16
Binary coding
act fact
Gray d() PLAs 1/4 signals in each clock cycle
Flops 1/4 signals in each clock cycle
done PLAs 2 transitions / 16 clock cycles
Flops
1-Hot d() PLAs
Flops 2 transitions / 16 clock cycles
done PLAs
Flops
Binary d() PLAs
16 + 8 + 4 + 2 transitions
4 signals 16 clock cycles
= 0.47
Flops
16 + 8 + 4 + 2 transitions
4 signals 16 clock cycles
= 0.47
done PLAs 2 transitions / 16 clock cycles
Flops
Note: Activity factor for One-Hot counter. Because all signals have the same
capacitance, and all clock cycles have the same number of transitions for the
One-Hot counter, we could have calculated the activity factor as two transitions per
sixteen signals.
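These activity factors can be cross-checked numerically. The sketch below (Python, used here only as a calculator) steps each counter encoding through its full 16-cycle period and counts bit transitions:

```python
# Cross-check the activity factors by counting bit transitions over one full
# 16-cycle period of each counter encoding.
def transitions(states):
    """Total number of bit flips over one full (cyclic) period of the state sequence."""
    n = len(states)
    return sum(bin(states[i] ^ states[(i + 1) % n]).count("1") for i in range(n))

binary = list(range(16))                     # 0, 1, 2, ..., 15
gray   = [i ^ (i >> 1) for i in range(16)]   # binary-reflected Gray code
onehot = [1 << i for i in range(16)]         # one bit high at a time

# Average activity factor = transitions / (number of signals x number of cycles)
print(transitions(gray, ) / (4 * 16)   if False else transitions(gray) / (4 * 16))    # 0.25 (Gray: 1/4)
print(transitions(binary) / (4 * 16))        # 0.46875 (Binary: (16+8+4+2)/64, approx 0.47)
print(transitions(onehot) / (16 * 16))       # 0.125   (One-Hot: 2/16 per signal)
```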
Putting it all Together . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
                     subtotal cap   act fact   power
Gray    d()   PLAs        8           1/4       2
              Flops       4           1/4       1
        done  PLAs        2           2/16      4/16
              Flops       0            -        0
                                     Total      3.25
1-Hot   d()   PLAs        0            -        0
              Flops      16           2/16      2
        done  PLAs        0            -        0
              Flops       0            -        0
                                     Total      2
Binary  d()   PLAs        8           0.47      3.76
              Flops       4           0.47      1.88
        done  PLAs        2           2/16      0.25
              Flops       0            -        0
                                     Total      5.87
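The totals can be sanity-checked by recomputing relative power as subtotal capacitance × activity factor (a quick Python check; the common factor of (1/2)V² is dropped since it scales every row equally):

```python
# Relative power = subtotal capacitance * activity factor, summed over each design's rows.
gray   = 8 * (1 / 4) + 4 * (1 / 4) + 2 * (2 / 16)       # d() PLAs + d() flops + done PLAs
onehot = 16 * (2 / 16)                                   # only the d() flops toggle
binary = 8 * (30 / 64) + 4 * (30 / 64) + 2 * (2 / 16)    # exact Binary act fact = (16+8+4+2)/64

print(gray)    # 3.25
print(onehot)  # 2.0
print(binary)  # 5.875 (the table's 5.87 differs only by rounding of 0.47)
```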
If we choose Binary counting as the baseline, then the relative amounts of power are:
Gray 54%
One-Hot 35%
Binary 100%
If we choose One-Hot counting as the baseline, then the relative amounts of power are:
Gray 156%
One-Hot 100%
Binary 288%
6.6 Clock Gating
The basic idea of clock gating is to reduce power by turning off the clock when a circuit isn't
needed. This reduces the activity factor.
6.6.1 Introduction to Clock Gating
Examples of Clock Gating
Condition                                            Circuitry turned off
O/S in standby mode                                  Everything except core state (PC, registers, caches, etc)
No floating point instructions for k clock cycles    Floating point circuitry
Instruction cache miss                               Instruction decode circuitry
No instruction in pipe stage i                       Pipe stage i
Design Tradeoffs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
+ Can significantly reduce activity factor (Synopsys PowerCompiler claims it can cut power
  to 50-80% of the ungated level)
- Increases design complexity
  - design effort
  - bugs!
- Increases area
- Increases clock skew
Functional Validation and Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
It's a functional bug to turn a clock off when it's needed for valid data.
It's functionally OK, but wasteful, to turn a clock on when it's not needed.
(About 5% of the bugs caught on Willamette (Intel Pentium 4 Processor) were related to clock
gating.) Nicolas Mokhoff. EE Times. June 27, 2001.
http://www.edtn.com/story/OEG20010621S0080
6.6.2 Implementing Clock Gating
Clock gating is implemented by adding a component that disables the clock when the circuit isn't
needed.
[Figure: circuit without clock gating. Signals: i_data and i_valid in, o_data and o_valid out,
clocked directly by clk.]

[Figure: circuit with clock gating. A Clock Enable State Machine takes clk and i_wakeup and
produces clk_en; clk_en gates clk to produce cool_clk, which clocks the main circuit.]
The total power of a circuit with clock gating is the sum of the power of the main circuit with a
reduced activity factor and the power of the clock gating state machine with its activity factor.
The clock-gating state machine must always be on, so that it will detect the wakeup signal. Do
not make the mistake of gating the clock to your clock-gating circuit!
6.6.3 Design Process
Design Decisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
What level of granularity for gated clocks?
entire module?
individual pipe stages?
something in between?
When should the clocks turn off?
When should the clocks turn on?
Protocol for incoming wakeup signal?
Protocol for outgoing wakeup signal?
Wakeup Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Designers negotiate incoming and outgoing wakeup protocol with environment.
An example wakeup protocol:
wakeup_in will arrive 1 clock cycle before valid data
wakeup_in will stay high until there have been at least 3 cycles of invalid data
Design Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
When designing clock gating circuitry, consider the two extreme cases:
a constant stream of valid data
circuit is turned off and receives a single parcel of valid data
For a constant stream of valid data, the key is to not incur a large overhead in design complexity,
area, or clock period when clocks will always be toggling.
For a single parcel of valid data, the key is to make sure that the clocks are toggling so that data
can percolate through the circuit. Also, we want to turn off the clock as soon as possible after the
data leaves.
6.6.4 Effectiveness of Clock Gating
We can measure the effectiveness of clock gating by comparing the percentage of clock cycles
when the clock is not toggling to the percentage of clock cycles that the circuit does not have
valid data (i.e. the clock does not need to toggle).
The most ineffective clock gating scheme is to never turn off the clock (let the clock always
toggle). The most effective clock gating scheme is to turn off the clock whenever the circuit is not
processing valid data.
Parameters to characterize effectiveness of clock gating:
Eff      = effectiveness of clock gating
PctValid = percentage of clock cycles with valid data in the circuit (cycles in which
           the clock must be toggling)
PctClk   = percentage of clock cycles in which the clock toggles

Effectiveness measures the percentage of clock cycles with invalid data in which the clock is
turned off. Equation for effectiveness of clock gating:

    Eff = PctClkOff / PctInvalid = (1 - PctClk) / (1 - PctValid)
Question: What is the effectiveness if the clock toggles only when there is valid data?
Answer:
PctClk = PctValid, and the effectiveness should be 1:

    Eff = (1 - PctClk) / (1 - PctValid) = (1 - PctValid) / (1 - PctValid) = 1
Question: What is the effectiveness of a clock that always toggles?
Answer:
If the clock is always toggling, then PctClk = 100% and the effectiveness
should be 0.

    Eff = (1 - PctClk) / (1 - PctValid) = (1 - 1) / (1 - PctValid) = 0
Question: What does it mean for a clock gating scheme to be 75% effective?
Answer:
75% of the time that there is invalid data, the clock is off.
Question: What happens if PctClk < PctValid?
Answer:
If PctClk < PctValid, then:

    1 - PctClk > 1 - PctValid

so the effectiveness will be greater than 100%.
In some sense, it makes sense that the answer would be nonsense, because
a clock gating scheme that is more than 100% effective is too effective: it is
turning off the clock sometimes when it shouldn't!
We can see the effect of the effectiveness of a clock-gating scheme on the activity factor:
[Graph: new activity factor A' versus effectiveness Eff. A' = A at Eff = 0, and decreases
linearly to PctValid × A at Eff = 1.]

When the effectiveness is zero, the new activity factor is the same as the original activity factor.
For a 100% effective clock gating scheme, the activity factor is A × PctValid. Between 0% and
100% effectiveness, the activity factor decreases linearly.

The new activity factor with a clock gating scheme is:

    A' = A - (1 - PctValid) × Eff × A
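The effectiveness and gated-activity formulas can be captured directly; a small Python sketch (the function names are ours, not from the notes) confirms the boundary cases discussed above:

```python
def effectiveness(pct_clk, pct_valid):
    """Eff = (1 - PctClk) / (1 - PctValid)."""
    return (1 - pct_clk) / (1 - pct_valid)

def gated_activity(a, eff, pct_valid):
    """A' = A - (1 - PctValid) * Eff * A."""
    return a - (1 - pct_valid) * eff * a

# Clock toggles only when there is valid data: 100% effective.
assert effectiveness(pct_clk=0.7, pct_valid=0.7) == 1.0
# Clock always toggles: 0% effective.
assert effectiveness(pct_clk=1.0, pct_valid=0.7) == 0.0
# 0% effective gating leaves the activity factor unchanged.
assert gated_activity(a=1.0, eff=0.0, pct_valid=0.7) == 1.0
# 100% effective gating reduces it to A * PctValid.
assert abs(gated_activity(a=1.0, eff=1.0, pct_valid=0.7) - 0.7) < 1e-12
```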
6.6.5 Example: Reduced Activity Factor with Clock Gating
Question: How much power will be saved in the following clock-gating scheme?
70% of the time the main circuit has valid data
clock gating circuit is 90% effective (90% of the time that the circuit has invalid data, the clock
is off)
clock gating circuit has 10% of the area of the main circuit
clock gating circuit has same activity factor as main circuit
neglect short-circuiting and leakage power
Answer:
1. Set up main equations
Pwr_Main   = power for main circuit without clock gating
Pwr'_Main  = power for main circuit with clock gating
Pwr_ClkFsm = power for clock enable state machine

Pwr_Tot = Pwr_Main + Pwr_ClkFsm

Pwr      = PwrSw + PwrLk + PwrShort
PwrSw    = (1/2) A C V^2
PwrLk    = negligible
PwrShort = negligible
Pwr      = (1/2) A C V^2

Pwr_Tot = ( (1/2) A_Main C_Main V^2 ) + ( (1/2) A_ClkFsm C_ClkFsm V^2 )

A_Main   = A          C_Main    = C
A_ClkFsm = A          C_ClkFsm  = 0.1 C
A'_Main  = A'         A'_ClkFsm = A

Pwr'_Tot / Pwr_Tot = [ ( (1/2) A' C V^2 ) + ( (1/2) A (0.1 C) V^2 ) ] / ( (1/2) A C V^2 )
                   = (A' + 0.1 A) / A

2. Find new activity factor for main circuit (A'):

A' = (1 - Eff (1 - PctValid)) A
   = (1 - 0.9 (1 - 0.7)) A
   = 0.73 A
3. Find ratio of new total power to previous total power:
Pwr'_Tot / Pwr_Tot = (A' + 0.1 A) / A
                   = (0.73 A + 0.1 A) / A
                   = 0.83
4. Final answer: new power is 83% of original power
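The arithmetic in steps 2 and 3 is easy to double-check (Python; the variable names are ours):

```python
eff, pct_valid = 0.9, 0.7

# Step 2: new activity factor of the main circuit, as a fraction of A.
a_ratio = 1 - eff * (1 - pct_valid)
print(round(a_ratio, 2))        # 0.73

# Step 3: Pwr'_Tot / Pwr_Tot = (A' + 0.1 A) / A, with the ClkFsm at 10% of the capacitance.
power_ratio = a_ratio + 0.1
print(round(power_ratio, 2))    # 0.83 -> new power is 83% of original
```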
6.6.6 Clock Gating with Valid-Bit Protocol
A common technique to determine when a circuit has valid data is to use a valid-bit protocol. In
section 6.6.6.1 we review the valid-bit protocol, and then in section 6.6.6.3 we add clock-gating
circuitry to a circuit that uses the valid-bit protocol.
6.6.6.1 Valid-Bit Protocol
We need a mechanism to tell the circuit when to pay attention to its data inputs: e.g., when is it
supposed to decode and execute an instruction, or write data to a memory array?
[Figure: block symbol and waveform for a circuit using the valid-bit protocol, with signals clk,
i_valid, i_data, o_data, o_valid.]
i_valid: high when i_data has valid data; signifies whether the circuit should pay attention to
or ignore the data.
o_valid: high when o_data has valid data; signifies whether the environment should pay
attention to the output of the circuit.
For more on circuit protocols, see section 2.12.
Microscopic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Which clock edges are needed?
[Waveforms: clk, i_valid, and o_valid, marking which clock edges are needed.]
6.6.6.2 How Many Clock Cycles for Module?
Given a module with latency Lat, if the module receives a stream of NumPcls consecutive valid
parcels, for how many clock cycles must the clock-enable signal be asserted?
[Waveforms: i_valid, o_valid, and clk_en for several scenarios, annotated with Latency,
NumPcls, and NumClkEn.]
t_i1    = time of first i_valid
t_o1    = time of first o_valid
t_ik    = time of last i_valid
t_ok    = time of last o_valid
t_first = first clock cycle with clock enabled
t_last  = last clock cycle with clock enabled
Initial equations to describe relationships between different points in time:
    t_o1    = t_i1 + Lat
    t_ok    = t_o1 + NumPcls - 1
    t_first = t_i1 + 1
    t_last  = t_ok + 1
To understand the -1 in the equation for t_ok, examine the situation when NumPcls = 1. With just
one parcel going through the system, t_o1 = t_i1 + Lat, so we have: t_ok = t_o1 + 1 - 1.
In the equation for t
last
, we need the +1 to clear the last valid bit.
Solve for the length of time that the clock must be enabled. The +1 at the end of this equation is
because if t_last = t_first, we would have the clock enabled for 1 clock cycle.
ClkEnLen = t_last - t_first + 1
         = t_ok + 1 - (t_i1 + 1) + 1
         = t_ok - t_i1 + 1
         = t_o1 + NumPcls - 1 - t_i1 + 1
         = t_o1 + NumPcls - t_i1
         = t_i1 + Lat + NumPcls - t_i1
         = Lat + NumPcls
We are left with the formula that the number of clock cycles for which the module's clock must be
enabled is the latency through the module plus the number of consecutive parcels.
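The closed form can be checked against the timing equations for a range of latencies and parcel counts (a quick Python check mirroring the derivation above):

```python
def clk_en_len(lat, num_pcls, t_i1=0):
    """Number of enabled clock cycles, computed from the timing equations."""
    t_o1 = t_i1 + lat             # first o_valid
    t_ok = t_o1 + num_pcls - 1    # last o_valid
    t_first = t_i1 + 1            # first clock cycle with clock enabled
    t_last = t_ok + 1             # last enabled cycle (+1 to clear the last valid bit)
    return t_last - t_first + 1

# The result is always Lat + NumPcls, independent of when the first parcel arrives.
for lat in range(1, 11):
    for num_pcls in range(1, 7):
        for t_i1 in (0, 5, 100):
            assert clk_en_len(lat, num_pcls, t_i1) == lat + num_pcls
```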
6.6.6.3 Adding Clock-Gating Circuitry
Before Clock Gating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure and waveform before clock gating: signals clk, data_in, valid_in, data_out, valid_out;
the waveform legend marks don't-care and uninitialized values.]
After Clock Gating: Circuitry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Figure: circuit with clock gating. A Clock Enable State Machine takes hot_clk and wakeup_in
and produces clk_en; clk_en gates hot_clk to produce cool_clk, which clocks the main circuit
(data_in, valid_in, data_out, valid_out, wakeup_out).]

hot_clk:  clock that always toggles
cool_clk: gated clock; sometimes toggles, sometimes stays low
wakeup:   alerts circuit that valid data will be arriving soon
clk_en:   turns on cool_clk
After Clock Gating: New Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Waveforms for the circuit with clock gating: data_in, valid_in, hot_clk, data_out, valid_out,
wakeup_in, cool_clk, clk_en, wakeup_out.]
6.6.7 Example: Pipelined Circuit with Clock-Gating
Design a clock enable state machine for the pipelined component described below.
capacitance of pipelined component = 200
latency varies from 5 to 10 clock cycles, even distribution of latencies
contains a maximum of 6 instructions (parcels of data).
60% of incoming parcels are valid
average length of continuous sequence of valid parcels is 80
use input and output valid bits for wakeup
leakage current is negligible
short-circuit current is negligible
LUTs have a capacitance of 1, flops have a capacitance of 2
The two factors affecting power are activity factor and capacitance.
1. Scenario: turned off and get one parcel.
(a) Need to turn on and stay on until parcel departs
(b) idea #1 (parcel count):
count number of parcels inside module
keep clocks toggling if have non-zero parcels.
(c) idea #2 (cycle count):
count number of clock cycles since the last valid parcel entered the module
once we hit 10 clock cycles without any valid parcels entering, we know that all parcels
have exited
keep clocks toggling if the counter is less than 10
2. Scenario: constant stream of parcels
(a) parcel count would require looking at input and output stream and conditionally
incrementing or decrementing counter
(b) cycle count would keep resetting counter
Waveforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
[Waveforms over clock cycles 1-24: i_valid and o_valid, with the resulting parcel_count and
parcel_clk_en traces for idea #1, and the cycle_count and cycle_clk_en traces for idea #2.]
Outline:
1. sketch out circuitry for parcel count and cycle count state machine
2. estimate capacitance of each state machine
3. estimate activity factor of main circuit, based on behaviour
Parcel Count Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Need to count (0..6) parcels, therefore need 3 bits for counter.
Counter must be able to increment and decrement.
Equations for counter action (increment/decrement/no-change):
i_valid  o_valid  action
   0        0     no change
   0        1     decrement
   1        0     increment
   1        1     no change
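A behavioural model of the parcel-count idea can be sketched in a few lines (Python rather than VHDL, and it assumes a fixed latency so that o_valid is simply i_valid delayed; the real module's latency varies from 5 to 10 cycles, so treat this only as an illustration of the counter's increment/decrement rules):

```python
def parcel_clk_en(i_valid, latency):
    """clk_en trace for the parcel-count state machine: keep the clock
    toggling whenever a parcel is arriving or still in flight."""
    count, trace = 0, []
    for t, iv in enumerate(i_valid):
        ov = t >= latency and bool(i_valid[t - latency])  # o_valid = delayed i_valid
        trace.append(count > 0 or bool(iv))               # enable before updating the count
        if iv and not ov:
            count += 1          # a parcel enters the module
        elif ov and not iv:
            count -= 1          # a parcel leaves the module
    return trace

# One parcel, latency 5: clock enabled for Lat + NumPcls = 6 cycles.
assert sum(parcel_clk_en([1] + [0] * 10, latency=5)) == 6
# Three consecutive parcels, latency 5: enabled for 5 + 3 = 8 cycles.
assert sum(parcel_clk_en([1, 1, 1] + [0] * 10, latency=5)) == 8
```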
6.7 Power Problems
P6.1 Short Answers
P6.1.1 Power and Temperature
As temperature increases, does the power consumed by a typical combinational circuit increase,
stay the same, or decrease?
P6.1.2 Leakage Power
The new vice president of your company has set up a contest for ideas to reduce leakage power in
the next generation of chips that the company fabricates. The prize for the person who submits
the suggestion that makes the best tradeoff between leakage power and other design goals is to
have a door installed on their cube. What is your door-winning idea, and what tradeoffs will your
idea require in order to achieve the reduction in leakage power?
P6.1.3 Clock Gating
In what situations could adding clock-gating to a circuit increase power consumption?
P6.1.4 Gray Coding
What are the tradeoffs in implementing a program counter for a microprocessor using Gray
coding?
P6.2 VLSI Gurus
The VLSI gurus at your company have come up with a way to decrease the average rise and fall
time (0-to-1 and 1-to-0 transitions) for signals. The current value is 1ns. With their fabrication
tweaks, they can decrease this to 0.85ns.
P6.2.1 Effect on Power
If you implement their suggestions, and make no other changes, what effect will this have on
power? (NOTE: Based on the information given, be as specific as possible.)
P6.2.2 Critique
A group of wannabe performance gurus claim that the above optimization can be used to improve
performance by at least 15%. Briefly outline what their plan probably is, critique the merits of
their plan, and describe any effect their performance optimization will have on power.
P6.3 Advertising Ratios
One day you are strolling the hallways in search of inspiration, when you bump into a person
from the marketing department. The marketing department has been out surfing the web and has
noticed that companies are advertising the MIPs/mm^2, MIPs/Watt, and Watts/cm^3 of their
products. This wide variety of different metrics has confused them.
Explain whether each metric is a reasonable metric for customers to use when choosing a system.
If the metric is reasonable, say whether bigger is better (e.g. 500 MIPs/mm^2 is better than 20
MIPs/mm^2) or smaller is better (e.g. 20 MIPs/mm^2 is better than 500 MIPs/mm^2), and which
type of product (cell phone, desktop computer, or compute server) the metric is most relevant to.

MIPs/mm^2
MIPs/Watt
Watts/cm^3
P6.4 Vary Supply Voltage
As the supply voltage is scaled down (reduced in value), the maximum clock speed that the circuit
can run at decreases.
The scaling down of supply voltage is a popular technique for minimizing power. The maximum
clock speed is related to the supply voltage by the following equation:
    MaxClockSpeed ∝ (VoltSup - VoltThresh)^2 / VoltSup
Where VoltSup is supply voltage and VoltThresh is threshold voltage.
With a supply voltage of 3V and a threshold voltage of 0.8V, the maximum clock speed is
measured to be 200MHz. What will the maximum clock speed be with a supply voltage of 1.5V?
P6.5 Clock Speed Increase Without Power Increase
The following are given:
You need to increase the clock speed of a chip by 10%
You must not increase its dynamic power consumption
The only design parameter you can change is supply voltage
Assume that short-circuiting current is negligible
P6.5.1 Supply Voltage
How much do you need to decrease the supply voltage by to achieve this goal?
P6.5.2 Supply Voltage
What problems will you encounter if you continue to decrease the supply voltage?
P6.6 Power Reduction Strategies
For each low-power approach described below, identify which component(s) of the power equation
is (are) being minimized and/or maximized:
P6.6.1 Supply Voltage
Designers scaled down the supply voltage of their ASIC
P6.6.2 Transistor Sizing
The transistors were made larger.
P6.6.3 Adding Registers to Inputs
All inputs to functional units are registered
P6.6.4 Gray Coding
Gray coding of signals is used for address signals.
P6.7 Power Consumption on New Chip
While you are eating lunch at your regular table in the company cafeteria, a vice president sits
down and starts to talk about the difficulties with a new chip.
The chip is a slight modification of an existing design that has been ported to a new fabrication
process. Earlier that day, the first sample chips came back from fabrication. The good news is that
the chips appear to function correctly. The bad news is that they consume about 10% more power
than had been predicted.
The vice president explains that the extra power consumption is a very serious problem, because
power is the most important design metric for this chip.
The vice president asks you if you have any idea of what might cause the chips to consume more
power than predicted.
P6.7.1 Hypothesis
Hypothesize a likely cause for the surprisingly large power consumption, and justify why your
hypothesis is likely to be correct.
P6.7.2 Experiment
Briefly describe how to determine if your hypothesized cause is the real cause of the surprisingly
large power consumption.
P6.7.3 Reality
The vice president wants to get the chips out to market quickly and asks you if you have any ideas
for reducing their power without changing the design or fabrication process. Describe your ideas,
or explain why her request is infeasible.
Chapter 7
Fault Testing and Testability
7.1 Faults and Testing
7.1.1 Overview of Faults and Testing
7.1.1.1 Faults
During manufacturing, faults can occur that make the physical product behave incorrectly.
Definition: A fault is a manufacturing defect that causes a wire, poly, diffusion, or via to either
break or connect to something it shouldn't.
[Figure: good wires, shorted wires, an open wire]
7.1.1.2 Causes of Faults
Fabrication process (initial construction is bad)
chemical mix
impurities
dust
Manufacturing process (damage during construction)
handling
probing
cutting
mounting
materials
corrosion
adhesion failure
cracking
peeling
7.1.1.3 Testing
Definition: Testing is the process of checking that the manufactured wafer/chip/board/system has
the same functionality as the simulations.
7.1.1.4 Burn In
Some chips that come off the manufacturing line will work for a short period of time and then fail.
Definition: Burn-in is the process of subjecting chips to extreme conditions (high and low temps,
high and low voltages, high and low clock speeds) before and during testing.
The purpose is to cause (and catch) failures in chips that would pass a normal test, but fail in early
use by customers.
[Figure: a soon-to-break wire]
The hope is that the extreme conditions will cause chips to break that would otherwise have
broken in the customer's system soon after arrival.
The trick is to create conditions that are extreme enough that bad chips will break, but not so
extreme as to cause good chips to break.
7.1.1.5 Bin Sorting
Each chip (or wafer) is run at a variety of clock speeds. The chips are grouped and labeled
(binned) by the maximum clock frequency at which they will work reliably.
For example, chips coming off of the same production line might be labelled as 800MHz,
900MHz, and 1000MHz.
Overclocking is taking a chip rated at n MHz and running it at 1.x × n MHz. (Sure, your computer
often crashes and loses your assignment, but just think how much more productive you are when it
is working...)
7.1.1.6 Testing Techniques
Scan Testing or Boundary Scan Testing (BST, JTAG)
Load test vector from tester into chip
Run chip on test data
Unload result data from chip to tester
Compare results from chip against those produced by simulation
If results are different, then chip was not manufactured correctly
Built In Self Test (BIST)
Build circuitry on chip that generates tests and compares actual and expected results
IDDQ Testing
Measure the quiescent current between VDD and GND.
Variations from expected values indicate faults.
Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The challenges in testing:
test circuitry consumes chip area
test circuitry reduces performance
decrease fault escapee rate of product that ships while having minimal impact on production
cost and chip performance
external tester can only look at I/O pins
ratio of internal signals to I/O pins is increasing
some faults will only manifest themselves at high-clock frequencies
The crux of testing is to use yesterday's technology to find faults in tomorrow's chips. (Agilent
engineer at ARVLSI 2001.)
7.1.1.7 Design for Testability (DFT)
Scan testing and self-testing require adding extra circuitry to chips.
Design for test is the process of adding this circuitry in a disciplined and correct manner.
A hot area of research that is becoming mainstream practice is developing synthesis tools to
automatically add the testing circuitry.
7.1.2 Example Problem: Economics of Testing
Given information:
The ACHIP costs $10 without any testing
Each board uses one ACHIP (plus lots of other chips that we don't care about)
68% of the manufactured ACHIPS do not have any faults
For the ACHIP, it costs $1 per chip to catch half of the faults
Each 50% reduction in fault escapees doubles the cost of testing (intuition: it doubles the number
of tests that are run)
If board-level testing detects a bad ACHIP, it costs $200 to replace the ACHIP
Board-level testing will detect 100% of the faults in an ACHIP
Question: What escapee fault rate will minimize cost of the ACHIP?
Answer:
TotCost = NoTestCost + TestCost + EscapeeProb × ReplaceCost

NoTestCost  TestCost  EscapeeProb  ReplaceCost           TotCost
   $10        $0         32%       (200 × 0.32  = $64)     $74
   $10        $1         16%       (200 × 0.16  = $32)     $43
   $10        $2          8%       (200 × 0.08  = $16)     $28
   $10        $4          4%       (200 × 0.04  =  $8)     $22
   $10        $8          2%       (200 × 0.02  =  $4)     $22
   $10       $16          1%       (200 × 0.01  =  $2)     $28
   $10       $32        0.5%       (200 × 0.005 =  $1)     $43

The lowest total cost is $22. There are two options with a total cost of $22: $4 of
testing and $8 of testing. Economically, we can choose either option.
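The table can also be generated programmatically (Python sketch; the halving/doubling rule is taken directly from the given information):

```python
# Each doubling of test cost halves the escapee rate; 32% of untested chips are faulty.
no_test_cost, replace_cost = 10, 200
rows = []
test_cost, escapee = 0, 0.32
for _ in range(7):
    total = no_test_cost + test_cost + escapee * replace_cost
    rows.append((test_cost, escapee, total))
    test_cost = 1 if test_cost == 0 else test_cost * 2
    escapee /= 2

best = min(total for _, _, total in rows)
print(best)                                             # 22.0
print([tc for tc, _, total in rows if total == best])   # [4, 8]
```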
For high-volume, small-area chips, testing can consume more than 50% of the total cost.
7.1.3 Physical Faults
7.1.3.1 Types of Physical Faults
[Figure: a good circuit (wires a, b, c, d) and six bad circuits illustrating the fault types:
open, wired-AND bridging short, wired-OR bridging short, stronger-wins bridging short
(b is stronger), short to VDD, and short to GND.]
7.1.3.2 Locations of Faults
Each segment of wire, poly, diffusion, via, etc is a potential fault location.
Different segments affect different gates in the fanout.
A potential fault location is a segment or segments where a fault at any position affects the same
set of gates in the same way.
[Figure: signal b with its fanout, showing three different locations for potential faults.]
When working with faults, we work with wire segments, not signals. In the circuit below, there
are 8 different wire segments (L1-L8). Each wire segment corresponds to a logically distinct fault
location. All physical faults on a segment affect the same set of signals, so they are grouped
together into a logical fault. If a signal has a fanout of 1, then there is one wire segment. A
signal with a fanout of n, where n > 1, has at least n+1 wire segments: one for the source
signal and one for each gate of fanout. As shown in section 7.1.3.3, the layout of the circuit can
have more than n+1 segments.
[Figure: circuit with inputs a, b, c, output z, and wire segments L1 through L8 labelled.]
7.1.3.3 Layout Affects Locations
[Figure: a schematic with signals a through i, and two possible layouts of signal b: one with
four fault locations (L1-L4) and one with five (L1-L5).]
For the signal b in the schematic above, we can have either four or five different locations for
potential faults, depending upon how the circuit is laid out.
7.1.3.4 Naming Fault Locations
Two ways to name a fault location:
pin-fault model: faults are modelled as occurring on input and output pins of gates.
net-fault model: faults are modelled as occurring on segments of wires.
In E&CE 327, we'll use the net-fault model, because it is simpler to work with and is closer to
what actually happens in hardware.
7.1.4 Detecting a Fault
To detect a fault, we compare the actual output of the circuit against the expected value.
To find a test vector that will detect a fault:
1. build a Boolean equation (or Karnaugh map) of the correct circuit
2. build a Boolean equation (or Karnaugh map) of the faulty circuit
3. compare the equations (or Karnaugh maps); the regions of difference represent test vectors
that will detect the fault
7.1.4.1 Which Test Vectors will Detect a Fault?
Question: For the good circuit and faulty circuit shown below, which test vectors will
detect the fault?
[Figure: the good circuit and the faulty circuit, each with inputs a, b, c and internal
signals d, e.]
Answer:
a b c | good | faulty
0 0 0 |  0   |  0
0 0 1 |  1   |  1
0 1 0 |  0   |  0
0 1 1 |  1   |  1
1 0 0 |  0   |  0
1 0 1 |  1   |  1
1 1 0 |  1   |  0
1 1 1 |  1   |  1

The only test vector that will detect the fault in the circuit is 110.
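The three-step recipe can be automated by comparing the good and faulty Boolean functions over all input vectors; here is a Python sketch, with the two functions read off the truth table above:

```python
from itertools import product

good   = lambda a, b, c: c | (a & b)   # good circuit, from the truth table
faulty = lambda a, b, c: c             # faulty circuit: the AND term is lost

# A test vector detects the fault iff the two circuits disagree on it.
detecting = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
             if good(a, b, c) != faulty(a, b, c)]
print(detecting)   # [(1, 1, 0)] -- only vector 110 detects this fault
```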
Sometimes multiple test vectors will catch the same fault.
Sometimes a single test vector can catch multiple faults.
[Figure: the same circuit with another fault (signals a, b, c, d, e).]
a b c good faulty
1