Verification Methodology
Simulation/Verification Flow
[Flow diagram: ModelSim simulation, Quartus/Vivado implementation, prototype in FPGA]
The need for Verification…
• ~70% of effort when developing RTL
– trend: growing
• Testbenches are more complex than RTL models
• Growth of testbench complexity is more than linear with
RTL complexity.
– example:
• 10 state machines with 4 states each: 4^10 total states
• 20 state machines with 4 states each: 4^20 total states
• 10 state machines with 8 states each: 8^10 total states
What is verification?
Verification ensures that the RTL performs the correct
function.
Why Verify?
The later in the product cycle a bug is found, the more
costly it is.
Needs of a Verification Language
• Very different needs from RTL
– RTL: the language must be simple enough for the
“stupid” synthesis tools to understand it and know
how to synthesize it
• ex – fifo: use a RAM with pointer A and pointer B; increment
the pointers when reading or writing (sketched below)
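A minimal sketch of that FIFO recipe – the depth, width, and all names are illustrative assumptions, not from the slides:

  module fifo #(parameter W = 8, DEPTH = 16) (
    input  logic         clk, rst, wr_en, rd_en,
    input  logic [W-1:0] wr_data,
    output logic [W-1:0] rd_data,
    output logic         full, empty
  );
    localparam PW = $clog2(DEPTH);       // pointer width (plus 1 wrap bit)
    logic [W-1:0] ram [DEPTH];
    logic [PW:0]  wr_ptr, rd_ptr;        // extra MSB distinguishes full from empty

    always_ff @(posedge clk) begin
      if (rst) begin
        wr_ptr <= '0;
        rd_ptr <= '0;
      end else begin
        if (wr_en && !full) begin
          ram[wr_ptr[PW-1:0]] <= wr_data;
          wr_ptr <= wr_ptr + 1'b1;       // pointer A: incremented on write
        end
        if (rd_en && !empty)
          rd_ptr <= rd_ptr + 1'b1;       // pointer B: incremented on read
      end
    end

    assign rd_data = ram[rd_ptr[PW-1:0]];
    assign empty   = (wr_ptr == rd_ptr);
    assign full    = (wr_ptr[PW] != rd_ptr[PW]) &&
                     (wr_ptr[PW-1:0] == rd_ptr[PW-1:0]);
  endmodule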
Hardware Description Languages
• VHDL/Verilog
– Developed in the 1980s
– Good languages for RTL design – synthesizable code
– Very few non-synthesizable constructs for writing
testbenches
Hardware Verification Languages
• e/Vera
– Developed in the 1990s
– Only high-level constructs, impossible to write RTL
code
– People had to mix VHDL/Verilog RTL models with
e/Vera testbenches
• HVLs cover three main features lacking in HDLs:
– Random constrained stimuli generation
– Assertions
– Functional coverage
System Verilog
• Hardware Description and Verification language
• Superset of Verilog – all Verilog code also works in
System Verilog
• First standardized by the IEEE in 2005; since 2013 the
current version is IEEE standard 1800-2012
– Download the standard
– The “holy book of SystemVerilog”
– Answers to all of your questions are inside
– Good for reference, not for learning
• System Verilog “is” the standard IEEE 1800-2012
– Simulator/synthesizer implementations might be
incomplete
System Verilog (ctd.)
• RTL subset of the language:
– Small superset of the Verilog RTL subset – some
constructs have been inserted to simplify different
things
• Verification subset of the language:
– Very very very big superset of Verilog
• Object-oriented constructs
• Random constrained stimuli generation
• Assertions
• Functional Coverage
System Verilog vs VHDL-2019
• System Verilog testbenches can be used to test
System Verilog RTL models, but can also be
used to test VHDL/Verilog RTL models – mixed-language
simulation
Today’s lecture
• What is verification?
• Why verify?
• The verification process
• Verification plan exercise
• Directed testing
• Testbench exercise
• Randomization
• Functional coverage
• Testbench components
• Black/white/gray box verification methods
• HW 1
Verification Basics
Software design flow
• Goes through the following steps (in order):
– Specifications
– High-level language (C, ...)
– Low-level language (assembly)
– Machine language
Software implementation flow
• The high-level language to machine-language flow is:
– Fast
– Inexpensive
• Software flow:
1) Write HL language
2) Translate HL language to machine language
3) Run machine language and find bugs
4) Fix HL language
5) Translate HL language to machine language
6) Run machine language and find bugs
7) Repeat from 4 until finished
FPGA implementation flow
ASIC design flow
• Goes in order through the following steps:
– Specifications/System Design
– Register Transfer Level
– Netlist
– Layout
– Physical chip
ASIC design flow (ctd.)
• From specs to RTL:
– human translation
• From RTL to Netlist to Layout to Physical:
– mostly automated translation (considered a solved – or
mostly solved – problem)
– If the RTL is bug-free, the physical chip will work
ASIC implementation flow (ctd.)
Validation vs Verification
• Validation:
– are we making the right thing, one that will answer
the needs of the user?
– are the specs right?
• Verification:
– are we making what we wanted to make?
– is the model equivalent to the specs?
(Actually, I don’t fully agree with this definition. In my opinion, validation checks for
the presence of faults, while verification checks for the absence of faults)
The Verification Process
[Diagram: the verification process, starting from the specification]
Checking RTL to spec equivalence
• The person (team) who wrote the RTL must not
be the one who checks its correctness
– Else, bugs might not be found, due to the same mistake
in interpreting the specs being made twice
– In case of errors, the mistake in interpreting the specs
might be due to either the designer or the verifier, or
both.
Testing at Different Levels
Block Level
- Maximum control
- Need to emulate peripheral blocks
- Might need to emulate DUT I/O
- Easiest to debug
- Highest simulation performance
System Level
- Minimum control
- Need to emulate system I/O
- Hardest to debug
- Lowest simulation performance
The Verification Plan
1. Binds a requirement in the specification to the test(s)
Formal verification vs
Simulation-based verification
• Formal verification is a new paradigm:
– Tools prove that the RTL is equivalent to a high-level
model or that it satisfies certain properties
– No need for simulation
– Might totally substitute simulation-based verification
one day…
Evolution of the Verification Plan
• Continuously updated
1. Generate stimulus
2. Apply stimulus to DUT
3. Capture the responses
4. Check for correctness
5. Measure progress
Directed Testing
Most (if not all) designers specify directed testing in their
test plan
- Steady progress
- Little up-front infrastructure development
- Small design changes could mean massive test changes
Directed tests
• Testbenches without randomness, targeting a
specific item in the verification plan.
– Example:
write to the fifo for 16 consecutive cycles, check that the fifo is full,
then read all 16 elements and check that it is empty (sketched below)
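A minimal sketch of that directed test, assuming the hypothetical FIFO interface sketched earlier (wr_en/rd_en, full/empty) inside the testbench module:

  initial begin
    // Write 16 elements back to back
    for (int i = 0; i < 16; i++) begin
      @(posedge clk);
      wr_en   <= 1'b1;
      wr_data <= i;
    end
    @(posedge clk);
    wr_en <= 1'b0;
    @(posedge clk);                      // let the last write take effect
    if (!full)  $error("FIFO should be full after 16 writes");

    // Read all 16 elements back
    for (int i = 0; i < 16; i++) begin
      @(posedge clk);
      rd_en <= 1'b1;
    end
    @(posedge clk);
    rd_en <= 1'b0;
    @(posedge clk);                      // let the last read take effect
    if (!empty) $error("FIFO should be empty after 16 reads");
  end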
Methodology Basics
Our verification environments will use the following principles
1. Constrained random stimulus
2. Functional coverage
3. Layered testbench using transactors
4. Common testbench for all tests
5. Test-specific code kept separate from testbench
Random Verification
• Random Verification Procedure:
1) Generate random tests using random constrained stimuli
generation
2) Check for bugs and correct them if there are any
3) Check the coverage values. If they are not satisfactory, add
constraints and repeat from 1
Random Constrained Stimuli
Generation
• Not to be confused with the “random” construct in
Verilog/VHDL
• Main idea:
– define random variables and constraints
– ask the “random solver” to find a random set of values
for the variables that satisfies the constraints
– constraints can be added, and/or disabled to create
different tests
Random Constrained Stimuli
Generation
• Example:
– random bit a
– random integers b, c between 0 and 255
– constraint CA: if a=1 then (b+c)!=256
– constraint CB: b>c
• It would be hard to write a routine that randomly
generates one of the legal combinations using only
direct randomization of variables – see the sketch below
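In System Verilog the example above maps almost directly onto a class with rand variables and constraint blocks. A minimal sketch – the constraint names follow the slide, the enclosing testbench module is an assumption:

  class stim;
    rand bit a;
    rand int b, c;

    constraint range { b inside {[0:255]}; c inside {[0:255]}; }
    constraint CA    { a == 1 -> (b + c) != 256; }
    constraint CB    { b > c; }
  endclass

  module tb;
    initial begin
      stim s = new();
      repeat (5) begin
        // The random solver finds a legal combination each time
        if (!s.randomize()) $error("no solution satisfies the constraints");
        $display("a=%0b b=%0d c=%0d", s.a, s.b, s.c);
      end
    end
  endmodule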
What should you randomize?
$random
• Available since Verilog-1995. $random returns a 32-bit signed
random number.
• If your range is a power of 2, use implicit truncation on assignment:

  logic [3:0] addr;
  addr = $random;                 // keeps only the low 4 bits

• If your range is not a power of 2, use the modulus operator (%).
Example: generate a random addr between 0 and 5:

  addr = $unsigned($random) % 6;

• New to System Verilog are $urandom and $urandom_range:

  addr = $urandom_range(5, 0);

• Unless your code changes, the random sequence will be the same
on every run. Use a seed to change the generation:

  integer r_seed = 2;
  addr = $unsigned($random(r_seed)) % 6;
Functional Coverage
How do you know your random testbench is doing anything
useful?
Functional coverage measures how many items in your test plan have
been tested.
For the ALU example:
Have all opcodes been exercised?
Have operands taken the values max positive, max negative, and 0?
Have all permutations of operands been exercised?
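A minimal covergroup sketch for those three questions – the 4-bit opcode and 8-bit signed operands are assumptions, not from the slides:

  class alu_txn;
    rand logic        [3:0] opcode;
    rand logic signed [7:0] op_a, op_b;

    covergroup cg;
      cp_op : coverpoint opcode;       // have all opcodes been exercised?
      cp_a  : coverpoint op_a {        // interesting operand values
        bins max_pos = {127};
        bins zero    = {0};
        bins max_neg = {-128};
      }
      op_x : cross cp_op, cp_a;        // opcode/operand permutations
    endgroup

    function new();
      cg = new();                      // call cg.sample() after each transaction
    endfunction
  endclass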
Assertions
• Tools for automatic checking of properties
– “automated waveform checker”
– Example:
when req goes to one, grant must be 1 between 2 and 3 cycles later
req must never be at one for more than two consecutive cycles
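Both examples translate directly into concurrent assertions; a minimal sketch, where clk and the signal names are assumptions:

  // When req rises, grant must be 1 between 2 and 3 cycles later
  assert property (@(posedge clk) $rose(req) |-> ##[2:3] grant);

  // req must never be at one for more than two consecutive cycles
  assert property (@(posedge clk) not (req [*3]));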
Feedback from Functional Coverage
• Tests might need to be modified or additional tests written
due to feedback from functional coverage.
[Diagram: manual or automatic feedback from coverage results back to the stimulus generation driving the DUT]
Code Coverage
• Besides functional coverage, there are other coverage
metrics that are tool features and can be used
independently of the language
– These do not require writing code to enable coverage
– Statement coverage: has every statement in the DUT been
executed?
– Path coverage: have all paths been followed?
– Expression coverage: have all causes for control-flow change
been tried?
– FSM coverage: has every state in an FSM been visited? Every
transition followed?
Code coverage – Statement coverage
(y = statement executed during the runs, n = never executed)

  y  if (a>1 || b>1) begin
  y    c <= d;
  y    d <= d+1;
  y  end
  y  else begin
  y    if (a==2) begin
  n      d <= d-1;   // unreachable: in this branch a <= 1
  y    end
  y    else
  y      d <= d-2;
  y  end
Code coverage – Path coverage

  if (a>1 && b>1) begin
    c <= d;
    d <= d+1;
  end
  if (a>2) begin
    if (a==3) begin
      d <= d-1;
    end
    else
      d <= d-2;
  end

Run 1 and Run 2 each exercise one combination of branches through
the two if statements. At the end of all runs, we find out that a legal
path was not exercised.
Code coverage – Expression coverage

  if ((a>1 && b>1) || (a<0) || (b<0)) begin
    c <= d;
    d <= d+1;
  end

Expression coverage reports whether each of the sub-expressions has
been the cause of taking the branch.
Code coverage –
FSM coverage
• Usually checks if all states in a state machine
were exercised
– But you should also check that all transitions in the
state machine were exercised – see the sketch below
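FSM coverage itself is a tool feature, but the same transition check can be written as functional coverage with transition bins. A minimal sketch – the three-state machine is hypothetical:

  typedef enum logic [1:0] {IDLE, RUN, DONE} state_t;
  state_t state;

  covergroup fsm_cg @(posedge clk);
    coverpoint state {
      bins states[]      = {IDLE, RUN, DONE};       // every state visited?
      bins transitions[] = (IDLE => RUN), (RUN => DONE),
                           (DONE => IDLE);          // every transition followed?
    }
  endgroup

  fsm_cg cg = new();   // instantiated inside a module, alongside the FSM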
Testbench Components
• Testbench wraps around the Design Under Test
- Generate stimulus
- Capture response
- Check for correctness
- Measure progress through coverage numbers
• Features of an effective testbench
• Reusable and easy to modify for different DUTs <- Object oriented
• Testbench should be layered to enable reuse
- Flat testbenches are hard to expand.
- Separate code into smaller pieces that can be developed separately
and combine common actions together
• Catches bugs and achieves coverage quickly <- Randomize!
Testbenches
Verification models
• Basic model:
– Give inputs, check outputs - black box verification
– We might use white-box verification (give inputs,
check internal signals)
– Or grey-box verification (give inputs, check some
internal signals specifically inserted for debug purpose)
Black/White/Grey Box testing
methods
• Blackbox verification
• Use I/O only for determining bugs
• Difficult to fully verify and debug
• Whitebox verification
• Use I/O and signals in the DUT for determining bugs
• Easy to debug
• Greybox verification
• A hybrid of blackbox and whitebox verification
Testbench Structuring
• It is important to conceptually divide testbenches into blocks,
depending on the function:
– Generator of high-level input data
(example: we decide to send out 4 packets of 1024 bytes followed by 2 packets of
64 bytes)
– Driver: read the high-level input data and drive the DUT input ports.
– Output monitor: read data from the DUT’s output ports.
– Output checking: check correctness of the output
• This allows easier readability and reuse. If the DUT input protocol
changes, only the driver must change.
• This requirement is most important for big systems with a lot of reuse
and many people working on them. A minimal sketch follows below.
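A minimal sketch of that division, with a transaction class and a mailbox between generator and driver – all names are hypothetical:

  class packet;
    rand bit [7:0] data [];
    constraint c_len { data.size() inside {64, 1024}; }  // sizes from the example
  endclass

  class generator;
    mailbox #(packet) out;
    function new(mailbox #(packet) out); this.out = out; endfunction
    task run(int n);
      repeat (n) begin
        packet p = new();
        if (!p.randomize()) $error("randomize failed");
        out.put(p);                 // hand high-level data to the driver
      end
    endtask
  endclass

  class driver;
    mailbox #(packet) in;
    function new(mailbox #(packet) in); this.in = in; endfunction
    task run();
      packet p;
      forever begin
        in.get(p);
        // ... drive the DUT input ports, e.g. one byte per cycle ...
      end
    endtask
  endclass

The output monitor and checker mirror this pattern on the other side of the DUT.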
Layered Testbench
Flat testbench:

  // Write 32'hFFFF to 16'h50
  @(posedge clk);
  PAddr   <= 16'h50;
  PWData  <= 32'hFFFF;
  PWrite  <= 1'b1;
  PSel    <= 1'b1;

  // Toggle PEnable
  @(posedge clk);
  PEnable <= 1'b1;
  @(posedge clk);
  PEnable <= 1'b0;

Layered testbench:

  task write(reg [15:0] addr,
             reg [31:0] data);
    @(posedge clk);
    PAddr   <= addr;
    PWData  <= data;
    PWrite  <= 1'b1;
    PSel    <= 1'b1;

    // Toggle PEnable
    @(posedge clk);
    PEnable <= 1'b1;
    @(posedge clk);
    PEnable <= 1'b0;
  endtask
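With the task in place, the flat sequence collapses to a single call per transaction:

  write(16'h50, 32'hFFFF);   // same effect as the flat testbench above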
Structured testbenches
[Diagram: generator of high-level inputs → inputs driver (gives the inputs to the DUT) → RTL model (the DUT) → outputs monitor (reads the outputs and translates them into HL) → outputs checker (checks outputs against expected HL outputs)]
Signal and Command Layers
• Signal Layer
- DUT and all pin level connections
• Command Layer
- Driver converts low level commands to inputs of DUT
- Assertions look at I/O or internal signals of DUT
- Monitor converts outputs of DUT to low level transaction results
Testbenches
[Diagram: the same structure as above, with a golden model (Matlab, TLM, timed, untimed, …) producing the expected HL outputs for the outputs checker]
Scenario Layer
• Generator
- Breaks down a scenario into high level commands
• Scenario examples for an MP3 player
- Play music from storage
- Download new music
- Adjust volume
- Change tracks, etc.
Test Layer and Functional Coverage
• Test block determines:
- What scenarios to run
- Timing of scenarios
- Random constraints
• Functional coverage:
- Measures progress of tests
- Changes throughout the project
SoC Verification
• Collection of IPs
– Each IP must first be verified at block-level
• Then top-level verification follows
– Verification systems for IPs are packaged into VIPs (Verification
IPs), with drivers, monitors, assertions to check input correctness,
high-level models, etc.
– A scoreboard keeps track of which tests have been run and
coverage
• Possible to simulate a chip in which only some components
are RTL and the others are golden models
– Using VIPs it is easy to build fast, complex models of what is
around a block or a chip
UVM
• Universal Verification Methodology
– Methodology on top of System Verilog that automates all this
– Supported by VHDL-2019
• Key focus: reuse
– Components are enclosed into agents, containing checkers,
monitors, drivers, etc.
– A chip can be built connecting together the different VIPs
Simulation Environment Phases
• Build
- Generate DUT configuration and environment configuration
- Build testbench environment
- Reset the DUT
- Configure the DUT
• Run
- Start environment
- Run the test
• Wrap-up
- Sweep
- Report
Maximum Code Reuse
• Put your effort into your testbench, not into your
tests.
• Write 100s or 1000s of directed tests, or far fewer
random tests.
• Use directed tests to cover remaining functional coverage holes
Testbench Performance
• Directed tests run quicker than random tests
- Random testing explores a wide range of the state space
- Simulation time is cheap, your time is not
IL2203 Rest of the course
• Remaining parts of the course (3 ECTS)
– SEM1 – Seminars (1.5 ECTS)
– LAB2 – System Verilog Labs (1.5 ECTS)
Verification Course Book
• We use the book(s)
Reversed Classroom Pedagogics
• You will study the chapters and solve the
exercises at home before the lecture
– Exercises are associated with the chapters in the
electronic book (on the home page).
IL2203 Part 2 - Seminar & Lab Order
• Seminar 1 – Solving/discussing selected problems from
– Chapter 2 – Data Types
– Chapter 3 – Procedural Statements and Routines
– Chapter 4 - Connecting the Testbench and Design
– Chapter 5 - Basic OOP
• Seminar 2
– Chapter 6 – Randomization
– Chapter 7 - Threads and Interprocess Communication
– Chapter 8 - Advanced OOP and Testbench Guidelines
– Chapter 9 - Functional Coverage and Assertions (Self-study H)
Seminar Schedule
• Seminar slots allocated
– 3/10 10-12: before the VHDL exam – replace with a
VHDL exercise instead?
– 31/10 8-10 and 1/11 13-15: two days in a row does not
make sense – remove one of these two
– 5/11 10-12: keep?
• Laboration slots
– 6/11 Testing unknown functionality
– 13/11 Advanced testbench in System Verilog
Labs & Tools - QuestaSim 10.0b 64-bit