
The Mentor Graphics 0-In Formal Verification Technology Backgrounder

Dr. L. Curtis Widdoes, Jr.
Chief Engineering Scientist
Mentor Graphics Corporation

www.mentor.com/fv

Introduction
This paper explains how the technologies and methodologies underlying the Mentor Graphics 0-In Formal Verification (FV) tool facilitate practical formal verification of IC functionality. 0-In Formal Verification includes the 0-In Search, 0-In Confirm, and 0-In Prove engines and uses assertion-based verification (ABV) to accelerate the discovery and diagnosis of design flaws during the verification process. The differences between the Mentor Graphics 0-In FV tool and FV tools supplied by other vendors are also described. This paper is intended for formal verification tool evaluators and users: designers, design-verification (DV) engineers, and their managers who want to know more about the practicalities of FV technology.

0-In Background and ABV


Assertion-based verification is a technique for finding more bugs, finding tough bugs, and finding bugs faster and more efficiently than traditional black-box verification methods. Specifically, ABV is a methodology by which the design team thoroughly instruments the HDL code with assertions and then uses extensive automation, including FV technology, to find the bugs. An assertion is a statement about how a particular design should or should not behave. The assertion may be expressed as a condition that must never occur in the design, or it may be expressed as a condition that must always be true. A simple example of an assertion is, "The state machine implemented with state variable s1 is intended to have one-hot encoding." This statement identifies the correct behavior of the design, namely, that exactly one bit of the variable s1 must be true at any given time. It also implies conditions that should be considered bugs in the design; in other words, when more than one bit of s1 is true or when all bits are false. Other examples of assertions include:

1. This FSM should never make any illegal state transitions.
2. This memory should never read an uninitialized location.
3. This case statement is parallel; in other words, no two case items should be true simultaneously.
4. This register should count up or down only by specific values.
5. The data in this register should be used before it is overwritten.
6. These two signals of an interface should follow a specific req-ack handshake protocol.
7. This arbiter implements a fair arbitration scheme.
8. This interface complies with the AMBA AHB standard bus protocol.
9. Data packets should cross this bus bridge without being lost or corrupted.
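The one-hot example above can be written directly in SystemVerilog Assertions (SVA). The following is a minimal sketch, not a 0-In CheckerWare form; the module, clock, and reset names are hypothetical placeholders:

    // Hypothetical wrapper; clk, rst_n, and s1 are placeholder names.
    module s1_onehot_check (input logic clk, rst_n, input logic [3:0] s1);

      // Exactly one bit of s1 may be true on any clock edge.
      // $onehot() is a built-in SVA system function.
      a_s1_onehot: assert property (@(posedge clk) disable iff (!rst_n) $onehot(s1))
        else $error("state variable s1 is not one-hot");

    endmodule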

In all cases, the designer implements the logic in a way that is intended to match the asserted behavior. But designers often make subtle mistakes. An assertion provides a crosscheck that will detect when the design's actual behavior differs from what was intended.


ABV allows formal verification to be easily applied within an existing simulation-based verification flow. A basic ABV flow including formal verification is as follows:

1. The RTL design is instrumented using simulatable assertions. A typical 10M-gate design may end up containing more than 100,000 user-specified assertions, comprising a mix of combinational assertions and sequential assertions and spanning the range from simple, local assertions to complex end-to-end assertions.

2. The test suites for the design are simulated with the assertions in place. The web of assertions increases observability and catches bugs that would otherwise be missed.

3. FV tools are used to exhaustively verify the assertions. During formal verification, the assertions on the interfaces of the design (verified using simulation in Step 2) serve as constraints. During exhaustive verification, the formal tools expose bugs (in other words, stimulus sequences that will cause assertions to be violated).

0-In Design Automation was founded in 1996 and pioneered ABV. Early in 2000, 0-In began offering a full suite of ABV tools, including assertion libraries, tools for specification and management of assertions, the infrastructure for supporting simulation with assertions, and tools for formal verification of assertions. The major EDA vendors then began moving to support ABV by offering new assertion languages and libraries, including the Open Verification Library (OVL), OpenVera Assertions (OVA), SystemVerilog Assertions (SVA), and the Property Specification Language (PSL). By 2004, ABV had become a mainstream verification methodology and 0-In had become the market leader, leading in the number of active ABV customers, the number of assertions in use in real designs, and total FV tool sales. In September 2004, Mentor Graphics purchased 0-In. By 2006, all three leading EDA vendors were offering full ABV support, including assertions in simulation and the formal verification of assertions. Today, 0-In continues to develop advanced ABV and FV tools as the 0-In Functional Verification Business Unit within Mentor Graphics.

Many users are justifiably confused by the differences between the ABV tools offered by the three major EDA vendors. Synopsys advocates and supports only SVA. Cadence advocates PSL. Mentor Graphics advocates no single format, but supports all standard assertion formats (OVL, PSL, and SVA) as well as 0-In CheckerWare. Cadence markets a static FV tool (IFV) based on FormalCheck, acquired from Bell Labs. Synopsys markets a from-scratch semi-formal verification tool called Magellan. Mentor Graphics provides both static and dynamic FV tools based on 0-In technology, including 0-In Search, 0-In Confirm, and 0-In Prove.

Assertions are now widely used by mainstream design teams, and ample information is available about assertion formats, assertion management, and the use of assertions in simulation. Therefore, this paper does not attempt to cover those topics. Instead, it describes the key technologies used by 0-In FV tools and the practicalities of using FV tools for verification of modern HDL designs.
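As a concrete illustration of Step 1, an interface rule such as the req-ack handshake mentioned earlier can be captured as a simulatable SVA assertion. This is a hedged sketch: the signal names and the four-cycle response window are hypothetical, not taken from any 0-In library:

    // Hypothetical handshake rule; req, ack, clk, rst_n, and the
    // ##[1:4] response window are illustrative placeholders.
    property p_req_gets_ack;
      @(posedge clk) disable iff (!rst_n)
        $rose(req) |-> ##[1:4] ack;   // every request acknowledged within 4 cycles
    endproperty

    a_req_gets_ack: assert property (p_req_gets_ack)
      else $error("req was not acknowledged in time");

In Step 3, an assertion like this one, sitting on a design interface, is what gets reused as a constraint for the formal tools.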


The 0-In Approach to FV Technology


In 1996, FV tools were primarily used by researchers, academics, and a few industry experts. FormalCheck (originally offered by Bell Labs, subsequently purchased by Cadence and used as the basis for IFV) had been on the market for about ten years but had gained little traction because it was fundamentally designed to be used by FV experts to verify small numbers of assertions in small designs. Beginning in 1996, 0-In began working to develop FV technologies and methodologies that could be easily adopted by designers and DV engineers who are not FV experts.

Before 1996, the primary application of FV technology was to prove that target assertions could never be violated. 0-In recognized that designers and DV engineers need to find bugs (in other words, find ways to violate assertions), and that the FV technology needed for finding bugs is different from the FV technology needed for finding unbounded proofs. Therefore, 0-In focused on developing new, specialized FV technology and infrastructure that is uniquely effective at finding bugs in real industrial designs.

Importantly, 0-In FV technology was designed to complement and extend simulation rather than attempting to replace it. The FV engines underlying 0-In FV tools precisely handle four-state simulation semantics, so that FV results match simulation results. Simulation states from ordinary directed simulation tests can be automatically prioritized and used in priority order as initial states for FV. Inputs provided during simulation can be used as constraints for FV. Simulation clocks can be automatically extracted and used in place of formal clock models. Assertions can be simulated and the results viewed together with FV results. Constraints can be verified using ordinary simulation.

Similarly, 0-In FV technology was designed to support large modern designs without modification. 0-In FV tools handle designs containing 10M+ gates (both directly and by supporting partitioning), latches, gated clocks, multiple clocks, asynchronous clocks, large multi-dimensional memories, and even non-synthesizable constructs (via graceful degradation). Large designs contain large numbers of assertions. 0-In FV technology was designed to effectively analyze many thousands of assertions and to deliver partial results to the user as soon as they are ready. To increase efficiency for large numbers of assertions, 0-In FV tools are designed so that analyses of different assertions share intermediate results.

Constraint methodology is as important as FV technology. (Constraints are the specification of the legal behavior of the inputs of the design being verified.) 0-In FV technology was designed to support constraints that are expressed as normal assertions, and 0-In products include all the infrastructure necessary for managing and verifying such constraints. 0-In FV technology was also designed to support complex, standard-interface monitors (for example, AMBA AHB) as assertions, as constraints, or both. 0-In has developed many such monitors and includes them in the 0-In CheckerWare library.

From the beginning, 0-In recognized the importance of methodology in getting good results from FV tools and focused on developing and fully supporting a set of practical FV methodologies for specific applications.


These basic philosophies have strongly shaped 0-In FV technology and have resulted in many differences between 0-In FV tools and FV tools from other vendors.

What is Formal Verification?


Formal verification is the process of using mathematical techniques to exhaustively verify the correct functionality of a design under verification (DUV). By exhaustive, we mean complete within some well-defined domain of operation, for example:

1. All possible inputs for N cycles of operation, starting from an initial state
2. All possible initial states
3. All reachable states

Correct functionality is defined using assertions. Today, assertions are specified using one or more of several popular assertion formats: OVL (sponsored, donated, extended, and maintained by Mentor Graphics), PSL (sponsored by Cadence and supported by Mentor Graphics), SVA (sponsored by Synopsys and supported by Mentor Graphics), 0-In CheckerWare (developed by 0-In and supported by Mentor Graphics), standard Verilog, and standard VHDL. Very often, different assertion formats are mixed within a single design. Mentor Graphics supports all the standard assertion formats (in other words, OVL, PSL, SVA, Verilog, and VHDL). Mentor Graphics also supports assertions specified using //0in CheckerWare directives, which use inferencing to simplify assertion specification. 0-In pioneered ABV and assertion specification methods and holds several key patents covering assertion specification methods, including U.S. Patent Numbers 6,175,946 and 6,609,229, both entitled "Method for Automatically Generating Checkers for Finding Functional Defects in a Description of a Circuit."

Some designers and DV engineers think that formal verification means exhaustively verifying that the design is correct. This is not true. In the first place, results produced by a formal verification tool are only as good as the assertions provided; in other words, if the assertions provided are incorrect or incomplete, the FV results will also be incorrect or incomplete. Similarly, the constraints on the inputs of the DUV and the initial states specified for the DUV must be correct and complete.

All IC designs are finite-state systems. In principle, the process of formal verification simply involves examining each reachable state of the DUV and determining whether any assertion is violated in any reachable state. However, because modern designs may have 10^1,000,000 states (or more!), examining all of the reachable states is difficult.

Because exhaustive analysis is difficult, various non-exhaustive verification methods have been developed. For example, some verification tools (for example, Synopsys Vera and Cadence/Verisity's Specman) rely on pseudo-random and constrained pseudo-random simulation. Other verification tools (for example, Synopsys Magellan) use pseudo-random simulation augmented with formal methods to increase simulation coverage. Strictly speaking, these tools are not formal verification tools because they are not exhaustive over a well-defined domain of operation. These tools are sometimes called semi-formal.
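The 10^1,000,000 figure follows directly from the register count: a design with n state bits has 2^n possible states. As a rough worked example (the 3.3-million-flip-flop count below is an illustrative assumption, not a figure from this paper):

    \text{states} = 2^{n}, \qquad n \approx 3.3 \times 10^{6}
    \Rightarrow\; 2^{3.3 \times 10^{6}} = 10^{\,3.3 \times 10^{6} \cdot \log_{10} 2} \approx 10^{993{,}000} \;\sim\; 10^{1{,}000{,}000}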


When using a semi-formal tool, it is difficult to quantify what has been verified. Semi-formal tools may be effective for finding surface bugs using simple combinational assertions, and they may occasionally stumble on a deep bug at random, but, because they analyze only the most probable behavior, they are not the best tools for finding subtle bugs that require complex sequential setup and sequential assertions.

Formal verification technology provides great value because it is exhaustive. FV tools are not probabilistic, they are not subject to human blind spots, and they do not miss subtle corner-case problems. As a result, FV tools can find tough bugs that cannot be found by any other verification method. Formal verification technology also provides great value because it is extremely efficient. FV tools use advanced mathematical techniques to exploit symmetries in the design and avoid repeating redundant analysis. Then, they directly target the assertion being verified, producing the minimum amount of stimulus required to violate the assertion. A semi-formal tool like Synopsys Magellan may require tens of thousands of cycles of pseudo-random operation to trigger a bug requiring complex setup (or may not find the bug at all!), whereas a true FV engine, such as 0-In Confirm, can reach the same bug using just a hundred cycles of very carefully architected stimulus. Not only is the FV tool faster and more thorough, but the shorter counterexample is much easier to diagnose.

Types of Formal Verification


Formal verification methods can be divided into two classes: semi-automatic methods and automatic methods.

Semi-automatic methods include theorem proving and various methods depending on manual abstraction. In semi-automatic methods, a human expert studies the problem and devises a proof (sometimes finding a counterexample instead). The computer helps with the mechanics of the proof and may check the proof to make sure that it is correct. Since a human expert can work out any proof, given enough time, semi-automatic methods are very powerful; however, they require reasoning by formal verification experts and lots of tedious, manual work. Because semi-automatic methods involve humans, they are also prone to error.

In automatic methods, on the other hand, a computer runs a deterministic algorithm and produces an answer without human intervention. No formal verification experts are involved. Because very few potential users of FV tools are formal verification experts, most commercial FV tools, including 0-In FV tools, are based on automatic methods. (FV tools from Jasper, however, rely on humans to perform interactive manual abstraction of the design.)

One widely used type of FV technology is logic equivalence checking (LEC). LEC tools exhaustively check that two design representations (both derived from the same RTL) have identical function. This exhaustive analysis depends fundamentally on a one-to-one correspondence existing for nearly all the register bits in the two representations. If such a one-to-one correspondence exists, then the formal analysis is reduced to exhaustively verifying the equivalence between the combinational logic connecting corresponding registers, and the analysis is tractable. If such a correspondence does not exist, then verifying equivalence is not tractable for the algorithms used by conventional LEC tools.

In this discussion, we are concerned only with formal property verification. Formal property verification addresses functional bugs in HDL designs, for example: Can data be lost or corrupted while crossing this bus bridge? Formal property verification involves exhaustively verifying a set of combinational or sequential assertions, resulting in proofs that some assertions cannot be violated and counterexamples showing how other assertions can be violated using legal stimulus at the interface of the DUV.

Formal Verification Algorithms


For an overview of modern formal verification algorithms and many recent references, see Advanced Formal Verification, edited by Rolf Drechsler, Kluwer, 2004.

Modern automatic FV tools are based on model-checking algorithms. A model-checking algorithm is any algorithm that proves or disproves that a model satisfies a property (also called an assertion) in all reachable states. A model is a mathematical description of a circuit, comprising:

1. A set of states
2. A set of initial states
3. A transition relation from current to next state

In the case of formal verification of HDL designs, the transition relation is derived automatically from the HDL description of the DUV together with the assertion being verified. In general, all known model-checking algorithms are exponential with respect to every interesting measure of design size, including:

1. The number of register bits of the assertion and the associated circuit
2. The number of primary inputs affecting the assertion
3. The depth (number of cycles) of exhaustive analysis

In fact, it is generally believed by computer scientists that the model-checking problem is fundamentally exponential and that no non-exponential model-checking algorithms exist. Because model checking is exponential, just adding a state bit, adding an input, or verifying one more cycle of operation may double the time or memory required. As a result, FV tools cannot necessarily find proofs or counterexamples for all assertions. Instead, they return one of three results about a given assertion:

1. A proof that the assertion can never be violated (in other words, an unbounded proof)
2. A counterexample showing how to violate the assertion
3. Indeterminate; in other words, no unbounded proof and no counterexample (a.k.a. inconclusive)
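In the standard notation of the model-checking literature (textbook notation, not anything 0-In-specific), the model just described and the question being answered can be written as:

    M = (S, S_0, T), \qquad S_0 \subseteq S, \qquad T \subseteq S \times S
    \text{Reach}(M) = \{\, s \in S \mid s \text{ is reachable from some } s_0 \in S_0 \text{ via } T \,\}
    \text{check: } \forall s \in \text{Reach}(M) :\; P(s)

An unbounded proof establishes the last line; a counterexample is a path from S_0 to a state violating P; anything else is indeterminate.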


In fact, even given the most advanced FV technology, many assertions in real industrial designs turn out to be indeterminate, especially when analyzed over unbounded time. 0-In FV tools report a bounded proof and a corresponding proof radius for these indeterminate assertions. A bounded proof with a proof radius of N cycles indicates that the assertion cannot be violated within N cycles of the initial state. The larger the proof radius, the more thorough the verification. Coupled with a little understanding of the design (for example, for a DUV that is a pipeline unit, the sequential depth of the pipeline), a large proof radius can be tantamount to a full, unbounded proof. 0-In was the first to develop and productize proof radius reporting and holds a patent on the method: U.S. Patent Number 6,848,088, "Measure of Analysis Performed in Property Checking."

Some vendors (for example, Jasper) claim that their FV tools are 100% exhaustive. In fact, these tools rely on assistance from a human when the FV tool fails to find a solution. Helping the tool find a solution can be extremely difficult, and the amount of effort required is unpredictable, so this type of tool is limited to use by FV experts in verifying small numbers of assertions; it is not suitable for use by normal designers and DV engineers who have other work to do.

The most basic model-checking algorithm (called explicit model checking) is easy to understand. Explicit model checking evaluates a target assertion in all reachable states as follows:

1. Start with an initial state (in other words, a complete assignment of values to all the state elements of the DUV).
2. For each state analyzed for the first time (new state):
   a. Evaluate whether the target assertion is violated. If so, report a counterexample and stop.
   b. Step through all the legal input assignments, producing every reachable next state via simulation.
3. If no new states were produced in Step 2, then all reachable states have been analyzed; report an unbounded proof and stop. Otherwise, go back to Step 2.

Explicit model checking is explained here only as a teaching tool. Explicit model checking requires keeping track of all the states analyzed (reached). Although explicit model checking is often practical for systems with 10^10 states or less (for example, high-level protocols), it is not a practical FV technology for industrial HDL designs, which may have more than 10^1,000,000 states.

Modern FV tools use a variety of techniques to boost the capacity and performance of model-checking algorithms. Cone-of-influence reduction removes all parts of the circuit that obviously can never affect the assertion. Then, the model-checking algorithm verifies the assertion in the smaller circuit. (Note that different initial states may result in different cones of influence; for example, mode-register values may cut off parts of the circuit.) Cone-of-influence reduction is easy and cheap. Often, it greatly simplifies the problem, especially for circuits that are highly configurable using mode registers. Essentially all modern FV tools use some form of cone-of-influence reduction.
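The explicit algorithm above is simply a breadth-first computation of the classic reachability fixpoint (run, in practice, on the reduced circuit left after cone-of-influence reduction); here T(s, s') means some legal input takes state s to state s':

    R_0 = S_0, \qquad R_{i+1} = R_i \cup \{\, s' \mid \exists s \in R_i : T(s, s') \,\}
    R^{*} = R_k \ \text{ for the first } k \text{ with } R_{k+1} = R_k
    \text{unbounded proof} \iff \forall s \in R^{*} :\; P(s)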


Symbolic model checking (SMC) technology is used by most modern FV tools, including 0-In FV tools, to increase the size of designs that can be successfully verified. SMC uses binary decision diagrams (BDDs) to represent the state-transition function as well as the set of reachable states. SMC computes the set of all next states from the set of all current states using BDD operations (the image computation written out below). Similarly, SMC determines whether an assertion is violated in any new states using BDD operations. The highly efficient encoding provided by BDDs can increase the state-space capacity of FV tools by orders of magnitude. However, even the SMC algorithm is fundamentally exponential, and it typically runs out of capacity between 10^20 and 10^200 reachable states. The basic SMC algorithm can sometimes handle simple assertions in small blocks, but not much more.

Assume/guarantee (a.k.a. hierarchical abstraction) is sometimes cited as an advanced model-checking method. In fact, it is not an automatic technique. Assume/guarantee refers to a method in which the tool assumes that property A is true in order to make a proof of property B easy and then returns later to prove property A. Assume/guarantee is actually a technique from theorem proving. To be successful, it depends on a clever choice of the property to assume, out of all possible properties. Humans can perform this type of general reasoning, but computers cannot. There are no FV tools from any vendor that automatically create useful assume/guarantee assumptions from scratch. 0-In FV tools and infrastructure are designed to allow the user to easily specify assumptions, to take advantage of these human-specified assumptions during formal analysis, and to keep track of the human-specified assumptions so they can be verified at a later time.
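For reference, the next-state (image) step that SMC performs with BDD operations is the standard relational product over current states s, inputs x, and next states s'; all three terms are represented as BDDs, and the existential quantification is itself a BDD operation (again, the textbook formulation rather than anything 0-In-specific):

    \text{Img}(R)(s') \;=\; \exists s \,\exists x :\; R(s) \wedge T(s, x, s')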

Proof Technologies
Even using the SMC algorithm, most assertions in industrial HDL designs are too complex to exhaustively verify. Therefore, most modern FV tools also use automatic abstraction in addition to SMC. An abstract circuit is a stripped-down circuit containing only selected registers from the original design, along with the combinational logic connecting those registers. The outputs of all the omitted registers are treated as unconstrained primary inputs for the purpose of the formal analysis. Automatic abstraction proceeds basically as follows:

1. Start with an initial crude abstraction.
2. Perform SMC on the abstract circuit.
3. If SMC successfully proves the assertion or blows up, then stop.
4. If SMC produces a valid counterexample (in other words, one that works in the real DUV), then stop.
5. If SMC produces a false counterexample (in other words, one that does not work in the real DUV), then refine the abstraction by adding circuitry from the real DUV and go back to Step 2.


Because of the way that the abstract circuit is constructed, any assertion that can be violated in the real DUV can also be violated in the abstract circuit. Thus, if this method proves that the assertion cannot be violated in the abstract circuit, then the proof is valid for the real DUV. On the other hand, the counterexamples this method produces are rarely valid in the real DUV (in other words, they are valid only by accident). Thus, SMC with automatic abstraction is a powerful proof technology, but it is not very effective at finding counterexamples.

Many heuristics have been developed for picking the initial abstraction in Step 1 and for refining the abstraction in Step 5. Importantly, given that model checking is exponential, any abstraction heuristic is bound to fail to prevent blow-up for some assertions in some designs. 0-In FV tools use the most advanced heuristics for abstraction refinement. These advanced heuristics analyze the failing counterexamples using a SAT algorithm and thereby identify circuitry to add to the abstraction in order to eliminate the false counterexamples on the next iteration.

Because model checking is exponential, there is no magic algorithm that will find proofs for all the assertions in a typical large design. Empirically, different proof algorithms tend to be effective for different types of assertions; therefore, modern FV tools use an array of specialized proof algorithms in order to increase the overall proof completion rate.

One of the most effective specialized proof algorithms is induction. An inductive proof algorithm verifies that the target assertion is not violated in the initial state; then it proves that if the assertion is not violated in any state reachable in N cycles or less, it is also not violated in any state reachable in N+1 cycles. Then, by induction, we know that the assertion cannot be violated in any reachable state (an unbounded proof); the two checks involved are written out after this section. Typically, the induction algorithm uses a combination of BDD and SAT (satisfiability) technology to perform the proof.

Induction is surprisingly effective at finding proofs for simple assertions in large designs. If an assertion depends only on local logic, and if that local logic is in turn strongly constrained by the assertion, then induction tends to be effective. Many real assertions in large designs have these characteristics. For example, one-hot assertions for state machines usually have these characteristics; proofs for them are often tractable using induction. These same assertions are often intractable using SMC, even with automatic abstraction.

On the other hand, automatic induction is not very effective for proving complex assertions, for example, assertions about end-to-end behavior of a large block. Induction is also not effective for finding counterexamples. Because real designs have such enormous reachable state spaces, the induction algorithm must use an over-approximation for the set of states reachable in N cycles. As a result, counterexamples found in the induction step can rarely be extended to form counterexamples that are valid in the real design.

0-In Prove uses all of these techniques for finding proofs, as well as other proprietary methods.
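In its simplest depth-1 form, the inductive argument described above reduces to two satisfiability checks (this is the standard formulation; k-step variants follow the same pattern):

    \text{Base: } \; S_0(s) \wedge \neg P(s) \ \text{ is UNSAT}
    \text{Step: } \; P(s) \wedge T(s, s') \wedge \neg P(s') \ \text{ is UNSAT}

If both checks are UNSAT, P holds in every reachable state. Note that the step check ranges over all states satisfying P, reachable or not; this built-in over-approximation is exactly why counterexamples to the induction step rarely extend to counterexamples that are valid in the real design.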


Unlike FV tools from other vendors:

1. 0-In Prove is optimized to efficiently produce proofs for large numbers of assertions in large designs.
2. 0-In Prove reports conditional proofs (in other words, cases where a proof of one indeterminate assertion implies that one or more other indeterminate assertions cannot be violated) so that human effort can be focused on the most important missing proofs.
3. 0-In Prove reports the constraints used in proofs. This report allows the user to find constraints that may be causing false proofs.

The Trouble with Proofs


Unfortunately, interpreting proofs produced by any FV tool can be problematic because many types of errors can result in false proofs. Since proofs are basically long, complex sequences of steps performed by a proof algorithm running on a computer, there is no practical way for a human to check a proof to make sure it is valid. Any of the following setup problems can produce false proofs:

1. The target assertion may contain an error.
2. The initial state may be too restrictive (for example, mode registers set to specific values, when other values are possible).
3. The interface constraints may be too restrictive.
4. A manually specified abstraction may be incorrect.
5. The proof may be due to assuming two-value inputs (vs. four-value).

0-In Prove does everything possible to minimize the problems of false proofs. By default, it uses no constraints and assumes that every register bit in the initial state can have any value. Furthermore, unlike other FV tools, it reports information about constraint usage that can help the user understand whether the proofs are false because constraints are too tight.

Of course, given enough time, dedicated FV experts can work their way through the problems associated with interpreting proofs. However, designers and DV engineers have a lot of other work to do. For designers and DV engineers, proofs are nice to have. Proofs can eliminate some assertions from further analysis, but proofs do not tell the design team where to look for problems or what to do next. The set of proofs has high value only if the FV tool ends up proving essentially all of the assertions in a design, because the remaining assertions can then be assumed to have counterexamples. Practically speaking, a near-100% completion rate is feasible only for small blocks or for very simple assertions in a large design; typically, automatic formal tools can prove less than half of the complex assertions in a large, fully constrained design. Given proofs for half of many thousands of assertions, the design team knows for certain only that all the bugs are in the other half!

Counterexample Technologies
Counterexamples have very high value for designers and DV engineers. If the assertions and constraints are correct, then every counterexample represents a real bug in the HDL. Furthermore, counterexamples are actionable: 0-In FV tools reproduce each counterexample in the simulation debugger so the bug can be tracked down and fixed.

As discussed above, technologies focused on proofs (for example, the original FormalCheck technology, now part of Cadence's IFV product) are not effective at finding counterexamples. Counterexamples require entirely different formal verification technologies. The most effective known exhaustive method for finding counterexamples in general HDL designs is SAT-based bounded model checking (BMC). SAT-based BMC proceeds as follows:

1. Set N = 1.
2. Unroll the real sequential circuit N cycles, creating a combinational circuit equivalent to exactly N cycles of operation of the original sequential circuit.
3. Use exhaustive SAT-based model-checking techniques on the combinational circuit to look for any N-cycle counterexample.
4. If a counterexample is found, then report the counterexample and stop.
5. If a proof is found that no N-cycle counterexample exists, then increment N and go back to Step 2.
6. If the CPU-time or memory budget is exceeded, then the target assertion is indeterminate. Report N-1 as the proof radius.

BMC is exhaustive up to a finite depth (the proof radius). The domain over which BMC is exhaustive is all legal stimulus sequences of length up to and including the proof radius. As time and memory budgets are increased, BMC finds bugs at greater depth. Given enough time and memory, BMC will find any bug at any depth. As a side effect of analyzing the DUV's operation cycle by cycle, BMC always finds the shortest counterexample, if any counterexample exists. Short counterexamples are easier to debug than long counterexamples. BMC technology is not capable of producing unbounded proofs. However, the proof radius serves as comprehensible feedback about how thoroughly an assertion was verified.

SAT algorithms are much more effective than BDD algorithms for BMC. Real circuits have limited, non-random fanin and fanout, resulting in SAT-friendly topologies. (See "Why is ATPG Easy?", M. Prasad, K. Keutzer, et al., DAC 1999.) Therefore, the most powerful FV counterexample engines, including 0-In Confirm and 0-In Search, use BMC technology that is primarily based on SAT algorithms.

Researchers have made tremendous progress over the last ten years in increasing the performance of SAT algorithms. All modern, high-performance SAT algorithms use CNF-based Boolean-constraint propagation, backtracking, conflict learning, watch pointers, and advanced splitter heuristics. 0-In has developed many other proprietary technologies that substantially increase the analysis depth and performance of SAT-based BMC, for example:

1. 0-In FV tools perform extensive circuit-domain-specific optimizations on the formal model before analysis begins.
2. 0-In FV tools share learned information between large numbers of assertions being simultaneously analyzed.
3. 0-In FV tools combine symbolic simulation with conventional SAT-based BMC.

These proprietary technologies have proven to be very important for improving SAT-based BMC performance and have enabled 0-In to achieve the deepest and fastest BMC in the industry. (Based on customer benchmarks, 0-In Confirm leads all other commercial FV tools in terms of BMC depth and speed. See Appendix A for a guide to benchmarking BMC tools.) Advances in SAT technology over the last six years have resulted in more than a 10,000X performance improvement in SAT-based BMC for real assertions in industrial designs. Today, 0-In Confirm regularly analyzes real assertions in industrial designs to depths of hundreds of cycles.

From the point of view of designers and DV engineers, this level of BMC performance amounts to a truly unbelievable amount of verification. As the proof radius grows, the number of inputs that can affect the assertion grows as well. Assuming that a DUV has 300 primary inputs that can affect the assertion, there are 10^90 possible input patterns each cycle, so a proof radius of 100 cycles is in some sense equivalent to 10^9,000 cycles of simulation!
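Both of the claims above can be made precise. At each depth N, the BMC check is a single satisfiability problem over the unrolled circuit, and the stimulus-space arithmetic follows directly (the standard BMC formulation; the 300-input count is the illustrative assumption used in the text):

    \text{BMC}_N :\;\; S_0(s_0) \;\wedge\; \bigwedge_{i=0}^{N-1} T(s_i, x_i, s_{i+1}) \;\wedge\; \bigvee_{i=0}^{N} \neg P(s_i) \quad \text{satisfiable?}
    2^{300} \approx 10^{90} \ \text{patterns per cycle}, \qquad \left(10^{90}\right)^{100} = 10^{9{,}000} \ \text{distinct 100-cycle stimulus sequences}

A satisfying assignment is exactly a counterexample; because N is incremented from 1, the first satisfiable instance yields the shortest one, and if every instance up to depth N is unsatisfiable, N is the proof radius.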

Dynamic BMC
Although SAT-based BMC is an extremely powerful technology, no automatic FV technology is capable of creating the long initialization sequences needed to get large (for example, 10M-gate) designs into internal states from which the most interesting behavior can be tested. For example, although a modern BMC tool may be capable of exhaustively searching within a radius of 100 cycles from an initial state, a large design may require many thousands of cycles just to get the internal queues full, setting up corner-case bugs. No FV tool can exhaustively analyze a large design to that depth. Similarly, some bugs may depend on specific configurations of mode register values. If constraints (see below) on the DUV inputs do not allow setting the mode registers to the right values, then a static BMC tool will never be able to find those bugs.


Dynamic BMC solves the problem of hard-to-reach internal states by marrying conventional simulation with BMC. While the user runs system simulation tests or directed simulation tests, dynamic BMC captures states directly from the simulation and repeats BMC analysis many times, each time using a new state from simulation as an initial state for BMC. This method allows BMC to start its exhaustive analysis from any state that is visited by the simulation tests. Dynamic BMC uses the same efficient SAT-based BMC algorithms described above to search for the shortest counterexample within a bounded proof radius. However, because it can start analysis from any of the corner-case states set up by the author of the simulation tests, dynamic BMC is capable of finding counterexamples that are not accessible using conventional static BMC. In fact, these counterexamples are not accessible using any other type of FV tool. Like BMC, dynamic BMC produces no unbounded proofs.

Dynamic BMC is more effective at finding bugs than semi-formal tools. Semi-formal tools depend on pseudo-random simulation to set up initial states for FV algorithms, but many important corner-case states (for example, all queues full) are too improbable to be reached pseudo-randomly. On the other hand, simulation tests generated by DV engineers are designed to reach important corner-case states and tuned until they do; dynamic BMC can search for counterexamples starting from any of them. (See Appendix B for a few examples of how dynamic BMC finds bugs that simulation misses.)

0-In brought the first commercial dynamic BMC product to market in 2000. Today, Mentor Graphics is the only EDA vendor offering a dynamic BMC tool (0-In Search). Besides providing access to hard-to-reach corner-case states set up by simulation tests, dynamic BMC technology can provide a number of other practical advantages to designers and DV engineers who are trying to find bugs in large designs, as follows. (Since all of these technologies are based on dynamic BMC, they are available only in FV tools from Mentor Graphics.)

1. Partitioned verification using states from simulation. BMC tools provide faster and deeper results on small designs than on large designs. 0-In Search allows a full-chip design to be partitioned into smaller units that can be independently verified. The user simply identifies the target unit (for example, a single 1M-gate functional unit) and runs his existing full-chip simulation tests. 0-In Search will automatically extract the logic of the target unit, build a formal model, extract the simulation states of the target unit, and use those simulation states as initial states for BMC of the target unit. Assertions at the boundaries of the target unit serve as constraints during BMC. Although the logic surrounding the target unit is ignored during BMC itself, the simulation states of the target unit are actually created by the surrounding logic during the full-chip simulation. This process allows 0-In Search to access deep corner-case states of the target unit without requiring simulation tests that exercise the target unit in isolation.

2. Clock behavior taken from simulation. For any static BMC tool, the user must specify exactly how all the DUV clocks can be derived from a single master clock, and that clock specification must be consistent with the design intent. This clock-specification task can be practically impossible for large modern designs, which may contain hundreds of synchronous and asynchronous clocks of various frequencies and phases. 0-In Search can automatically extract the behavior of all the clocks directly from existing simulation tests; no other specification of clock behavior is required. Automatic extraction of clocks from simulation is particularly important for SOC designs containing many standard buses with independent clocks and for low-power designs containing variable clocks to minimize power consumption. 0-In invented the technique of automatically extracting clock behavior from simulation and has U.S. patents pending covering the technology.

3. Simulation values as constraints. Constraints define the legal behavior of inputs on the boundaries of the DUV. As an option, 0-In Search allows the simulated behavior of any DUV input to be used directly as a constraint. Although this method is an approximation and is over-constraining, it is an easy, practical way to get started finding counterexamples in case some interfaces are not very important but are difficult to constrain.

4. Counterexample clean-up. Dynamic BMC can use the underlying simulation to clean up counterexamples. 0-In Search analyzes each counterexample and sets any don't-care input stimulus (in other words, input stimulus that doesn't affect the assertion violation) to be the same as the corresponding stimulus in the simulation. In practice, the cleaned-up counterexample stimulus ends up looking very similar to the original simulation. As a result, the cleaned-up counterexamples are easier to understand and diagnose than counterexamples produced by other types of FV tools.

5. SAT learning across initial states. High-performance SAT-based BMC depends upon automatically learning facts about the behavior of the target circuit as the SAT analysis progresses. Intuitively, if a learned fact about the behavior of the target circuit is relevant for one initial state of the circuit, it is likely to be relevant for another initial state. 0-In Search reuses learned information from previous initial states when it analyzes a new initial state. This technique substantially improves the performance of BMC as more states are analyzed, yielding faster operation and/or more counterexamples. 0-In invented the technique of reusing learned information across multiple initial states and has U.S. patents pending covering the technology.

6. Priority analysis. Many simulation states are repetitive. Ideally, BMC would be applied only to those simulation states where important new behavior was present. 0-In dynamic BMC tools use heuristics to approximate this result: a priority-analysis tool measures the amount of new behavior present in each simulation state, based on simulation coverage monitors. Optionally, 0-In Search prioritizes simulation states for BMC analysis based on the amount of new coverage measured. Prioritized dynamic BMC produces the same counterexamples as non-prioritized dynamic BMC, but typically produces them much earlier. 0-In invented the technique of priority analysis and has U.S. patents pending covering the technology.


Constraints
Most designs depend on assumptions about the behavior of inputs; if those assumptions are violated, then the design will not operate as intended. Any FV analysis that includes such unintended (or illegal) stimulus may find assertion violations that are irrelevant (and may miss proofs). For example, the correct operation of a DUV may depend on incoming commands being one-hot encoded. If the DUV receives a command that violates the one-hot-encoding assumption, the DUV may enter an illegal state and trigger violations of various assertions. These assertion violations are considered false, because we assume that the device providing input commands to the DUV will actually respect the one-hot assumption. Any FV analysis that does not respect the one-hot assumption will include stimulus that is not one-hot and will report these false assertion violations.

Constraints are the formal specification of the legal behavior of DUV inputs, for use by FV tools. Constraints may express simple combinational assumptions (for example, one-hot) or complex sequential assumptions (for example, "This interface behaves according to the AMBA AHB specification."). Sequential constraints depend on past behavior of DUV inputs, either via state bits in the constraint logic or via state bits in the DUV itself. It is useful to consider constraints as being part of the formal model and to consider the reachable state space as being the set of DUV states that are reachable using input stimuli allowed by the constraints.

In practice, two different methods are used to express constraints, with profound implications for the resulting use models. The first method, called assertion-style constraints, uses normal assertions to express all illegal input behavior. 0-In FV tools are based on assertion-style constraints. Assertion-style constraints are used as follows:

1. While coding the DUV, the designer writes normal assertions intended to catch any illegal behavior of DUV inputs.
2. He uses these assertions in the normal simulation runs, including the system simulation runs, in order to find problems in the assertions as well as bugs in the logic surrounding the DUV.
3. He flags the assertions as constraints and runs the FV tool.
4. The FV tool compiles the assertions together with the DUV code and includes them as part of the formal model.
5. The FV tool targets all the assertions in the DUV except the assertions flagged as constraints.
6. When analyzing the reachable state space of the DUV, the FV tool refuses to consider any stimulus that violates the assertion-style constraints.

Assertion-style constraints fit in well with existing simulation methodology and make full use of the ABV infrastructures available from the major EDA vendors. They can be specified using any of the common assertion formats, including 0-In CheckerWare, OVL, SVA, and PSL, as well as standard Verilog and VHDL.
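In SVA terms, the one-hot command example above comes down to a single property used two ways. This is a hedged sketch: cmd_valid, cmd, clk, and rst_n are placeholder names, and SVA's assume construct stands in for the tool-specific constraint flag:

    // Placeholder names; not a 0-In CheckerWare form.
    property p_cmd_onehot;
      @(posedge clk) disable iff (!rst_n)
        cmd_valid |-> $onehot(cmd);   // commands must arrive one-hot encoded
    endproperty

    // Steps 1-2: checked as a normal assertion during simulation,
    // catching illegal commands produced by the surrounding logic.
    a_cmd_onehot: assert property (p_cmd_onehot);

    // Steps 3-6: the same property, flagged as a constraint, tells the
    // FV tool to consider only one-hot command stimulus.
    m_cmd_onehot: assume property (p_cmd_onehot);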

Note that the 0-In CheckerWare Monitor library contains pre-verified monitors for many popular standard-interface protocols (for example, AMBA AHB). Each CheckerWare Monitor includes complete assertion-style constraints for the interface (typically, hundreds of separate assertions). A designer can completely constrain a standard interface in one step simply by attaching the appropriate CheckerWare monitor and flagging it as a constraint.

Thoroughly instrumenting all interfaces of single-designer units using assertion-style constraints not only enables formal verification of the units (and aggregations of units), it greatly increases observability during system simulation, catching most inter-unit protocol problems as soon as they occur and providing appropriate information to the design team for quick diagnosis.

Humans specify constraints and humans make mistakes. In turn, errors in constraints may cause FV results to be meaningless: constraints that are too tight (restricting away legal input behavior) may cause the FV tool to miss counterexamples and report false proofs. On the other hand, constraints that are too loose (allowing illegal input behavior) may cause the FV tool to report false counterexamples and miss proofs. Given the use of assertion-style constraints, both types of constraint problems can be systematically discovered by the design team:

1. During system simulation including the assertion-style constraints, constraints that are too tight show up as assertion violations. (Every violation of an assertion-style constraint during system simulation is due either to a constraint that is too tight or to a real bug in the design. Note that both cases are actionable.)
2. During formal verification, constraints that are too loose show up as false counterexamples. (Every counterexample reported by the FV tool is due either to a missing constraint or to a real bug in the design. Again, both cases are actionable.)

In addition, 0-In FV tools provide various reports to help the user understand whether the design is over-constrained (see the FAQ below).

Some FV tools (for example, some core functionality of the Synopsys Magellan tool) rely on generator-style constraints. Whereas assertion-style constraints specify all illegal input behavior, generator-style constraints attempt to specify all legal input behavior. Generator-style constraints are basically stimulus generators that attempt to generate all legal orderings of all legal stimuli. They are coded in standard HDLs augmented with non-determinism (in other words, instructions that execute all of a set of actions in parallel). In order to be used by FV tools, generator-style constraints must be synthesizable.

Although it is relatively easy to code a complete synthesizable generator for a simple combinational constraint, it is practically impossible to code a complete synthesizable generator for a complex sequential constraint like the AMBA AHB protocol; there are simply too many possible execution threads in such a protocol, and each one needs to be considered by the coder. As a result, human-coded generators for complex sequential constraints are invariably incomplete (like directed tests). FV tools that use these incomplete generators fail to exhaustively cover the reachable state space, resulting in both missed counterexamples and false proofs. For such tools, it is impossible to quantify the level of incompleteness; proof radius measures are meaningful only for exhaustive tools.

Unlike assertion-style constraints, generator-style constraints are not compatible with existing system simulation tests and existing directed tests. Therefore, this type of constraint cannot be verified during system simulation and cannot be used to find inter-unit protocol problems during system simulation. Furthermore, because generator-style constraints are incompatible with existing simulation tests, their use precludes using states from existing simulation tests as initial states for FV and therefore precludes using dynamic BMC.

Note that, given enough effort, a human, particularly a clever application engineer, can fine-tune a generator to find any specific bug. Benchmarks involving generator-style constraints can therefore be difficult to interpret.

Simulation versus Synthesis Semantics


0-In FV tools are the only FV tools that directly support both simulation semantics (0, 1, X, and Z values together with the simulation rules for propagating them) and synthesis semantics (0 and 1 values together with the hardware rules for propagating them).

Support for simulation semantics is necessary in order to fit in properly with the simulation infrastructure. In particular, initial states of large designs typically include many X values in registers. FV tools that support only synthesis semantics must assign 0 or 1 to each register bit containing an X. This assignment often leads to false counterexamples, because some combinations of 0 and 1 values in initial-state registers may not actually be permitted by the sequential logic driving the registers during initialization. (For example, a multi-bit register that is X in the initial state but is constrained to be one-hot by the logic driving it should not be assigned non-one-hot values.)

Support for simulation semantics is especially important for dynamic BMC, which uses large numbers of simulation states captured directly from existing simulation tests. In practice, these simulation states contain many X values and produce many false counterexamples when using synthesis semantics. Furthermore, for an FV tool using synthesis semantics, even the valid counterexamples it generates will sometimes fail to produce the desired assertion violations when re-simulated. This problem is due to the formal model's inaccuracy in modeling X and Z values generated by DUV code itself during simulation.

False counterexamples and counterexamples that cannot be re-simulated slow down the bug-finding process, mask real bugs, and cause users to lose confidence in the FV tools. These problems can be avoided by using simulation-semantics mode in 0-In FV tools. In simulation-semantics mode, 0-In FV tools match simulation; counterexamples reported by the tool are never false due to Xs in the initial state, and they always re-simulate properly.

Simulation semantics also allows 0-In FV tools to verify assertions that explicitly check for X and Z values, such as the known checker and the known_driven checker in the 0-In CheckerWare library (an SVA analogue is sketched at the end of this section). Such assertions are simply ignored by FV tools restricted to synthesis semantics. The formal model used by 0-In FV tools supports simulation semantics by accurately propagating X and Z values according to simulation rules. This four-state formal model is optimized so that it uses extra gates only where strictly necessary, resulting in negligible performance degradation.

Note that some FV tools share front-end processing with synthesis tools. The front-end processing of synthesis tools performs two-state optimizations that are invalid for four-state semantics and which preclude building true four-state formal models for the FV tools. Note also that some semi-formal verification tools (for example, Synopsys Magellan) use simulation as a final filter to eliminate false counterexamples (in other words, counterexamples that do not work in simulation are simply not reported). However, finding, simulating, and discarding false counterexamples uses CPU time and does not help find bugs.

In order to facilitate direct comparison with other FV tools, 0-In FV tools also support synthesis semantics as the default mode. In this mode, 0-In FV tools attempt to minimize the number of false counterexamples due to X values in the initial state by analyzing the sequential logic driving the initial-state registers and justifying that substitutions of 0/1 values for X in the registers are actually permitted by the sequential driving logic. (The user can set the depth of this sequential analysis. An analysis depth of a few cycles is sufficient to prevent this type of false counterexample in most designs.)
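For reference, the kind of X/Z check that only a four-state formal model can analyze looks roughly like this in SVA. $isunknown is a built-in SVA system function; the signal names are placeholders, and this is only an analogue of the CheckerWare known checker, not its actual implementation:

    // Placeholder names; an SVA analogue of a "known" check.
    // $isunknown(expr) is true if any bit of expr is X or Z.
    a_data_known: assert property (
      @(posedge clk) disable iff (!rst_n)
        valid |-> !$isunknown(data)   // data must be fully 0/1 when valid
    );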

Design Styles
Basically, all FV tools support synthesizable HDL code. (Any HDL code that can be synthesized into gates can be synthesized into an internal formal model for an FV tool, and vice versa.) However, commercial FV tools differ in the way they handle certain problematic constructs, including:

1. Latches
2. Gated clocks
3. Variable and asynchronous clocks
4. Non-synthesizable constructs
5. Memories and multi-dimensional arrays

0-In FV tools fully support latches and gated clocks. Mentor Graphics is the only EDA company to support automatic extraction of clocks, including variable and asynchronous clocks, directly from simulation.


Since no accurate formal model can be constructed for non-synthesizable constructs, most FV tools simply treat the outputs of such constructs as unconstrained; in other words, they allow them to produce any values to be used as inputs to downstream logic. This treatment results in false counterexamples and missed proofs. In contrast, 0-In FV counterexample engines (0-In Confirm and 0-In Search) gracefully degrade the outputs of non-synthesizable constructs. Graceful degradation prevents 0-In tools from finding any counterexamples that depend on the outputs of non-synthesizable constructs, while still allowing them to find all counterexamples that do not depend on those outputs. (Proofs cannot benefit from graceful degradation, and 0-In Prove does not use it.)

Memories are especially problematic for FV tools. Most large designs contain memories, but exact formal models of large memories are intractable. Most FV tools simply treat all memories as non-synthesizable constructs; in other words, they treat the memory outputs as unconstrained. This treatment results in false counterexamples and missed proofs. In contrast, all 0-In FV tools construct exact formal models for small memories (not more than 128 words by default; user-settable). In addition, 0-In counterexample tools construct approximate formal models for larger memories. Approximate memory models accurately model a few important memory locations and gracefully degrade all other locations. As a result, 0-In counterexample tools are exhaustive for small memories, find all counterexamples that do not depend on large-memory outputs, and also find many counterexamples that do depend on those outputs. (0-In Prove simply treats large-memory outputs as unconstrained.)

Note that the foregoing discussion applies not only to two-dimensional arrays (memories), but also to arrays of three or more dimensions.


0-In FV Technology FAQ


What FV tools does Mentor provide? Mentor provides 0-In Formal Verification, which consists of several engines targeting different operations, as follows: 0-In Prove (unbounded proofs), 0-In Confirm (static BMC), and 0-In Search (dynamic BMC). Why are 0-In Prove and 0-In Confirm separate engines? Proof analysis time and counterexample analysis time are unpredictable and unbounded, therefore advanced users often want the flexibility to create scripts that run the tools in either order and with custom effort levels selected. Also, as separate tools, they can be run in parallel on separate CPUs, allowing results to be reported as soon as possible. (in other words, running in parallel, a quick counterexample is not delayed by a difficult proof analysis, and vice versa.) What assertion formats does the Mentor FV tool support? Mentor supports 0-In CheckerWare, OVL, SVA, PSL, Verilog and VHDL assertion formats. What design languages does Mentor FV support? Mentor FV currently supports the Verilog (including certain System Verilog constructs) and VHDL design languages. What is the initial state, where does it come from and why is it important for FV? The initial state is a complete assignment of values to all register bits of the DUV. (More precisely, each register bit in the initial state is assigned a set of allowed values, for example, the assignment 0/1 indicates that the FV tool may set the register bit to either 0 or 1 during formal analysis.) What are sequential and combinational proofs? Sequential proofs are based on the values of the register bits in the initial state. Sequential proofs may be false if the initial state is not correct. On the other hand, combinational proofs assume that every register bit in the initial state is unconstrained (in other words, 0/1/X/Z, in simulation semantics, 0/1 in synthesis semantics). Combinational proofs are stronger than sequential proofs, since they do not depend on the accuracy of the initial state. However, some assertions that can be proven sequentially have no combinational proofs. Combinational proofs are useful in special cases; for example, in case a valid initial state is not known or in case selected assertions must not be violated during scan. 0-In Prove supports both sequential and combinational proofs (selectable by the user). What are constraints, where do they come from and how are they important for FV? Constraints (also called assumptions) are the specification of the legal sequential behavior of DUV inputs for use by an FV tool. They are specified by the user, using assertions or generators. (Pre-verified

What are constraints, where do they come from, and how are they important for FV?
Constraints (also called assumptions) specify the legal sequential behavior of DUV inputs for use by an FV tool. They are specified by the user, using assertions or generators. (Pre-verified constraints for many popular standard interfaces may be selected from the 0-In CheckerWare Monitor library.) If the specified constraints allow illegal input behavior, then the FV tool may produce false counterexamples and miss proofs. If the specified constraints rule out legal input behavior, then the FV tool may miss counterexamples and produce false proofs.

What is wrong with generator-style constraints, and why are assertion-style constraints better?
1. Generator-style constraints are incompatible with the system-level simulation testbench, so they cannot be debugged using the system-level simulations. Assertion-style constraints can be debugged using the system-level simulations.
2. Generator-style constraints attempt to model all legal input behavior by using non-determinism, and they must be synthesizable. Complete, synthesizable generator-style constraints for a complex interface are even more difficult to develop than complete directed tests. As a result, they are invariably incomplete and miss bugs. Assertion-style constraints, on the other hand, attempt to model all illegal input behavior using assertions. Complete assertion-style constraints for a complex interface are relatively easy to develop (in other words, simply write assertions covering all the error cases), and formal analysis using them can be exhaustive (see the sketch following this list).
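As a sketch of the assertion-style approach, the following SVA assumptions constrain a simple req-ack handshake on the DUV inputs; the signal names and the two rules are hypothetical, chosen only to show the style of writing one assumption per illegal behavior:

    // Illegal behavior 1: req must not be withdrawn before ack arrives.
    assume property (@(posedge clk) disable iff (rst)
                     req && !ack |=> req);

    // Illegal behavior 2: ack must never be asserted without a pending req.
    assume property (@(posedge clk) disable iff (rst)
                     ack |-> req);

In simulation, the same properties can be checked as assertions against the system-level testbench, which is exactly what makes assertion-style constraints debuggable.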

How are synthesis and simulation semantics different?
Synthesis semantics assume 0/1 values and hardware rules for propagating the values. Simulation semantics assume 0/1/X/Z values and simulation rules for propagating the values. 0-In FV tools support both synthesis and simulation semantics, under user control.

What benefit do I get from using simulation semantics in FV?
If synthesis semantics are used in the formal model, then X values in the register bits of the initial state may lead to false counterexamples. Also, if a counterexample is found and re-simulated for diagnosis, then X and Z values generated by DUV code during simulation may cause the simulation to fail to produce the expected assertion violation. Using simulation semantics in the formal model avoids both of these problems.

What is X justification and why is it important?
X justification is the process of analyzing the sequential logic driving the initial-state registers and justifying that assignments of 0/1 to replace X are actually permitted by the sequential driving logic. X justification can eliminate most false counterexamples caused by X in the initial state. 0-In FV tools support X justification when synthesis semantics are used, and the user can set the depth of this sequential analysis.
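For example (a hypothetical fragment), a register with no reset term starts simulation at X:

    reg mode; // no reset: the register powers up as X in simulation

    always @(posedge clk)
      if (load)
        mode <= in_mode;

    // Under synthesis semantics, a formal tool may freely choose mode = 1
    // in the initial state and report a counterexample, even if the
    // surrounding logic can never actually load a 1 into mode. X
    // justification analyzes the driving logic (here, the load path) to
    // check that the chosen 0/1 replacement for X is actually reachable.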


How much do simulation semantics slow down the FV tools?
The four-state formal model used by 0-In FV tools is optimized so that it uses extra gates only where strictly necessary, resulting in negligible performance degradation.

Why do 0-In Prove and 0-In Confirm use synthesis semantics by default?
0-In Prove and 0-In Confirm support both synthesis and simulation semantics (selectable by the user). However, FV tools from other vendors support only synthesis semantics. Some users like to evaluate FV tools using the default modes, so the default semantics of 0-In FV tools were set to match the semantics of tools from other vendors.

Why does 0-In Search use simulation semantics by default?
0-In Search uses initial states from production simulation tests, and in practice these initial states contain many X values. Furthermore, 0-In Search may use DUV input values from production simulation tests, and these input values typically contain many X and Z values. Given all these X and Z values, 0-In Search would produce many false counterexamples using synthesis semantics. Furthermore, users need to re-simulate counterexamples reported by 0-In Search. If 0-In Search used synthesis semantics, then X and Z values would prevent some valid counterexamples found by 0-In Search from producing the expected assertion violations in simulation.

What is graceful degradation?
Graceful degradation is the technique of setting aside nodes that cannot be modeled exactly (for example, the outputs of non-synthesizable constructs) and then exhaustively searching for counterexamples that do not depend on the values of those gracefully degraded nodes.

How do 0-In FV tools handle memories, non-synthesizable constructs, and black boxes?
Most FV tools simply treat the outputs of memories, non-synthesizable constructs, and black boxes as primary inputs of the DUV (in other words, unconstrained). This technique leads to false counterexamples. 0-In counterexample tools gracefully degrade such outputs and proceed to find all counterexamples that do not depend on their values. (0-In Prove simply treats the outputs as unconstrained.)

What is a counterexample and how do I diagnose it?
A counterexample is a stimulus sequence for the primary inputs of the DUV that will cause the DUV to violate the target assertion in simulation, starting from the initial state. 0-In FV tools allow you to re-simulate counterexamples using VCS, NC, Questa, or ModelSim. You can diagnose the counterexample by viewing the simulation waveforms using a simulation waveform viewer.


What is a false counterexample and how can it occur?
A false counterexample is a stimulus sequence for the primary inputs of the DUV that involves illegal input behavior, or that fails to cause the target assertion to be violated in simulation, starting from the initial state. A false counterexample can be caused by any of the following:
1. The initial state may be incorrect.
2. Constraints may be missing or too loose.
3. The FV tool may approximate the outputs of memories, non-synthesizable constructs, or black boxes by treating them as primary inputs of the DUV. (0-In FV tools use graceful-degradation technology to avoid these false counterexamples.)
4. The formal model may use synthesis semantics (incorrectly modeling X in the initial state).

What is a proof?
A proof (also called an unbounded proof) means that there is no legal stimulus of any length for the primary inputs of the DUV that will cause the DUV to violate the target assertion, starting from the initial state.

What is a false proof and how can it occur?
A false proof is a proof that is invalid because of bad assumptions. That is, if the assumptions were corrected, then a counterexample for the assertion would exist. (We assume that the proof algorithm is sound.) A false proof can be caused by any of the following:
1. The assertion may contain an error.
2. The initial state may be too restrictive (for example, mode registers set to specific values when other values are possible).
3. Constraints may be too restrictive.
4. A manually specified abstraction may be incorrect.
5. The formal model may use synthesis semantics (incorrectly modeling X and Z values).

What is an indeterminate assertion?
An indeterminate assertion is an assertion for which the FV tool can find neither a proof nor a counterexample. (Indeterminate assertions are sometimes also called inconclusive or unknown assertions.)


How do 0-In FV tools tie in to simulation?
1. 0-In FV tools are based on simulatable assertions.
2. 0-In FV tools are based on simulatable, assertion-style constraints.
3. 0-In FV tools support simulation semantics, so results match simulation.
4. 0-In FV tools support capturing the initial state from simulation.
5. 0-In Search supports capturing a large number of initial states from simulation (dynamic BMC).
6. 0-In counterexample tools (0-In Confirm and 0-In Search) support re-simulation of counterexamples using standard simulators.

Why is specifying clocks hard, and how do 0-In FV tools automatically extract the clocks from simulation?
Modern large designs often contain large numbers of clocks with frequencies and phases that are variable or unknown. Particularly in SOC and low-power designs, clocks may start and stop at unpredictable times. Specifying an accurate formal model that derives these clocks from a master clock can be extremely difficult. The 0-In BMC tools (0-In Confirm and 0-In Search) analyze a simulation trace and automatically build a formal clock model that duplicates the clock behavior observed in simulation.

What is so great about dynamic BMC (0-In Search)?
1. Dynamic BMC can find bugs that cannot be found by any other type of FV tool. Simulation tests generated by DV engineers are designed to reach important corner-case states and tuned until they do. Dynamic BMC automatically uses these corner-case states as starting points to exhaustively search for bugs.
2. Dynamic BMC is especially important for verifying multiple large units of a partitioned design, each of which may require special, complex initialization to enable effective BMC.
3. Simulation inputs can be used to constrain dynamic BMC, thus simplifying initial constraint development and allowing bugs to be found earlier.
4. Dynamic BMC can clean up counterexamples by setting don't-care stimulus to match simulation. Cleaned-up counterexamples are easier to understand and diagnose than counterexamples produced by other types of FV tools.


Dynamic BMC (0-In Search) is great. Why do I need static BMC (0-In Confirm)?
0-In Confirm can be used before the simulation testbench is ready. 0-In Confirm can even be used on small units to replace a simulation testbench. Furthermore, given a fixed CPU time budget, 0-In Confirm will perform deeper exhaustive analysis (from a single initial state).

How can I get 0-In Confirm to start at a specific interesting initial state?
Set up a simulation that covers the interesting state and capture the state for 0-In Confirm. If you don't know exactly what state you want for the initial state, you can set up and capture several different states and analyze them all. During simulation, you can use assertions to verify that the state you are capturing covers the behavior that you want.

What is goalposting?
Goalposting is a powerful interactive method for finding tough bugs using 0-In Confirm. You create a goalpost assertion that detects an early symptom or precondition of the bug, and you target the goalpost assertion using 0-In Confirm. When 0-In Confirm finds a way to reach the goalpost assertion, it automatically creates an initialization file that causes a subsequent run of 0-In Confirm to start from the goalpost state. Using this initialization file, you run 0-In Confirm again, targeting the bug assertion or another goalpost assertion. This method allows you to guide 0-In Confirm toward a tough bug knowing only the approximate bug path; 0-In Confirm will create the concrete stimulus necessary to progress along the path.
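As a hypothetical sketch, suppose a suspected bug can only be triggered after a FIFO becomes full. A goalpost assertion might look like this (the names are ours):

    // Goalpost: deliberately "fails" when the FIFO first becomes full.
    // The counterexample for this assertion is really a recipe for
    // reaching the goalpost state.
    assert property (@(posedge clk) !fifo_full);

When 0-In Confirm violates this assertion, it writes the initialization file for the full-FIFO state; a second run can then start from that state and target the real bug assertion (for example, an overflow check) or the next goalpost.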
What is semi-formal verification?
Semi-formal verification tools use pseudo-random and constrained pseudo-random simulation to increase general simulation coverage in the hope of violating assertions. Some semi-formal tools also use formal methods to increase simulation coverage. Semi-formal methods are not exhaustive.

What is wrong with semi-formal verification?
1. Semi-formal tools are not exhaustive. Violating complex assertions (or even setting up the preconditions for violating complex assertions) may require that exactly the right stimulus be applied in exactly the right order for hundreds of cycles. In practice, general coverage measures are not specific enough to create the right stimulus in the right order to violate complex assertions.
2. Unlike true FV tools, semi-formal tools cannot try one stimulus, back up the simulation, and re-try a new stimulus. Therefore, when a semi-formal tool pseudo-randomly starts down a wrong path, the damage is permanent (at least until the tool resets the design and starts over again).
3. Semi-formal tools are forced to provide a single stimulus for all assertions at the same time. For large numbers of complex assertions, the probability is small that the single stimulus provided is exactly right for all assertions in the design.
4. Conventional pseudo-random testbenches already provide most of the value of semi-formal tools.


Why is semi-formal verification not as effective as dynamic BMC?
1. Dynamic BMC targets the assertions in the DUV individually and directly using exhaustive formal analysis. Semi-formal tools, on the other hand, target all assertions together using non-exhaustive pseudo-random simulation.
2. Semi-formal tools that use formal methods to increase coverage waste time targeting coverage that is irrelevant to violating the assertions.
3. Dynamic BMC automatically uses simulation states captured from directed tests as initial states for exhaustive formal analysis. Directed tests are designed by DV engineers to reach important corner-case states and are tuned until they do. The probability that pseudo-random simulation can reach many of these corner-case states is almost zero.
4. Both formal and semi-formal tools require constraints, which are often inaccurate:
a. If the constraints are too loose, then the semi-formal tool will waste time exploring illegal states. If it finds an assertion violation, then the violation is likely to be far from any legal state, hard to diagnose, and usually a false error. Dynamic BMC, on the other hand, periodically returns to legal states captured from a known-good simulation. When dynamic BMC finds an assertion violation, it is almost always close to a known-good state, easy to diagnose, and not a false error.
b. In practice, users tend to over-constrain the DUV inputs to avoid wasting time diagnosing false errors. In this case, the semi-formal tool may be restricted from regions of the state space that contain assertion violations, causing bugs to be missed. (For example, the constraints may disallow setting of certain modes in mode registers. Any bugs that depend on those modes will never be found by the semi-formal tool.) On the other hand, using the same overly restrictive constraints, dynamic BMC can access the restricted regions of the state space by starting from simulation states that are within those regions, thus finding the bugs that semi-formal verification misses. (For example, if there are simulation tests that set the necessary modes, dynamic BMC can use states from those simulation tests as initial states.)

What feedback does 0-In Prove provide and why is it important?
0-In Prove generates various types of feedback for diagnosing proofs, including:
1. Vacuous proofs. 0-In Prove flags proofs that are vacuous (for example, implications for which the hypothesis is false). Vacuous proofs may be misleading (see the sketch following this list).
2. Inconsistent constraints. If the constraints are inconsistent, then any proofs are meaningless.
3. Constraints that were used in the proofs. This report helps the user identify extra constraints that may be causing false proofs.
4. X and Z values in registers in the initial state. When using synthesis semantics, X and Z values in the initial state may cause missed proofs.
5. Conditional proofs. 0-In Prove analyzes indeterminate assertions to find cases where a proof of one indeterminate assertion implies that one or more other indeterminate assertions cannot be violated. 0-In Prove reports these conditional proofs so that human effort can be applied selectively to analyze the indeterminate assertions that control the most additional proofs.
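For example (hypothetical names), the following implication is proven vacuously if the design can never assert req and grant in the same cycle; the proof then says nothing about the design's real behavior:

    // If the hypothesis (req && grant) is unreachable, the property
    // holds trivially, and 0-In Prove flags the proof as vacuous.
    assert property (@(posedge clk) req && grant |=> busy);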

What feedback do 0-In Search and 0-In Confirm provide and why is it important?
0-In Search and 0-In Confirm generate various types of feedback for diagnosing counterexamples and indeterminate assertions, as well as for finding subtle problems with the simulation model:
1. Internally generated simulation waveforms for each counterexample.
2. Optional re-simulation of each counterexample using VCS, NC, ModelSim, or Questa.
3. The proof radius for each indeterminate assertion. This information helps the user understand how thoroughly the indeterminate assertions have been verified.
4. Input usage (which inputs were used to cause assertion violations, which were considered but not used, which were not even considered, and which were constrained). This information helps the user understand why assertions are indeterminate and whether the DUV is over-constrained.
5. Constraints used in bounded proofs. This information helps the user understand whether and how the DUV is over-constrained.
6. Inconsistent constraints. If the constraints are inconsistent, then the DUV is over-constrained; no counterexamples exist that satisfy all the constraints.
7. X and Z values in the initial state. When using synthesis semantics, X and Z values in the initial state may cause false counterexamples.
8. Synthesis-to-simulation mismatches (registers in the DUV where the underlying simulation and the formal model disagree about the next-cycle value of the register). This information helps the user find problems where the simulation model does not reflect the true behavior of the gates that will be synthesized; for example, races, which are usually a problem in the RTL (see the sketch following this list).
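A classic source of such a mismatch, sketched here with hypothetical code, is a zero-delay race between two always blocks using blocking assignments:

    // Both blocks trigger on the same edge. Depending on simulator event
    // ordering, q2 may sample either the old or the new value of q1;
    // the synthesized flip-flops will always capture the old value.
    always @(posedge clk) q1 = d;   // blocking assignment
    always @(posedge clk) q2 = q1;  // blocking assignment

Rewriting both assignments as non-blocking (q1 <= d; q2 <= q1;) removes the race and makes the simulation model match the synthesized gates.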

What is the proof radius and how large does it need to be?
A proof radius of N cycles indicates that the target assertion cannot be violated by any legal stimulus sequence of length N cycles or less when simulating the DUV starting from the initial state. If the proof radius is as large as the sequential depth of the DUV, then the bounded proof is equivalent to an unbounded proof. If the sequential depth of the DUV is not known, then human judgment is required to determine when increasing the proof radius is likely to find additional bugs.
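As a hypothetical illustration of why the required proof radius can be large, consider an assertion guarded by a free-running counter (this example is deliberately buggy):

    reg [9:0] count;
    reg       overflow;

    always @(posedge clk or posedge rst)
      if (rst) begin
        count    <= 10'd0;
        overflow <= 1'b0;
      end else begin
        count    <= count + 10'd1;
        if (count == 10'h3FF)
          overflow <= 1'b1;
      end

    // The overflow flag first rises 1024 cycles after reset, so no
    // stimulus shorter than that can violate this assertion. A proof
    // radius below 1024 cycles yields only a bounded proof; a radius
    // covering the full sequential depth of this logic is equivalent
    // to an unbounded proof.
    assert property (@(posedge clk) !overflow);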


What should I do about indeterminate assertions?
You should include all indeterminate assertions in the simulation regression tests. For assertions that are of high value, maximize the amount of analysis performed by 0-In Search and 0-In Confirm, as follows:
1. Verify that the simulation tests used by 0-In Search cover the preconditions of the assertions (for example, FIFO full, for a FIFO-overflow assertion; a sketch follows this list), and use priority analysis to prioritize the simulation states.
2. Use goalposting to search deeper with 0-In Confirm.
3. Maximize the proof radius by running 0-In Confirm and 0-In Search on each indeterminate assertion separately, for as much time as practical.
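As a sketch of item 1 (hypothetical names), the overflow assertion and a cover property for its precondition might look like this:

    // The assertion being targeted by 0-In Search:
    assert property (@(posedge clk) !(fifo_full && push));

    // A cover property confirming that the simulation tests actually
    // reach the precondition (FIFO full), so that captured simulation
    // states are useful seeds for dynamic BMC:
    cover property (@(posedge clk) fifo_full);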

How do 0-In FV tools benefit from analyzing large numbers of assertions together?
0-In FV tools are designed to efficiently analyze large designs containing many thousands of assertions. In order to efficiently process large numbers of assertions, 0-In FV tools compile and optimize the design once up front, then analyze all of the individual assertions within the optimized design. This method avoids costly re-compilation and re-optimization of the design. Some FV tools offered by other vendors can efficiently process single assertions, but get bogged down re-optimizing the design over and over again when large numbers of assertions are present. Also, 0-In FV tools enhance formal analysis performance by sharing learned information across assertions.

How does priority analysis boost the performance of dynamic BMC (0-In Search)?
Priority analysis uses heuristics based on simulation coverage information to prioritize simulation states for BMC analysis; states that cover new behavior are prioritized above other states. Empirically, this method substantially decreases the time required to find counterexamples.

How do you know that 0-In Confirm is the deepest, fastest BMC?
By means of customer benchmarks.

How can I test the performance of 0-In Confirm?
Use the method described in Appendix A.

What should I watch for to prevent being misled in BMC benchmarks?
1. Small designs
2. Small numbers of assertions
3. Non-exhaustive tools
4. Targeting known bugs
5. Use of generator-style constraints that benefit from human reasoning about the bug.


Appendix A: How to Benchmark Exhaustive Counterexample Tools


This appendix explains how to determine which of two FV tools is faster for exhaustively searching a very large state space looking for counterexamples.

First, you must check that both FV tools are exhaustive. Some commercial counterexample tools claim to be exhaustive but are not. The comparison is meaningless if one tool is not exhaustive. (Non-exhaustive tools miss bugs and do not produce bounded proofs.) Proceed as follows to check exhaustiveness:
1. Use each tool to find a set of shortest counterexamples for about ten assertions in a representative design.
2. Compare the lengths of the counterexamples found by the tools.
3. If one of the two tools produces any longer counterexamples, then it is not exhaustive.
Note that the design, assertions, and constraints must be identical for the two tools. If even a single constraint is different in the two setups, the results will be meaningless.

Next, you must find a tough indeterminate assertion. There must be no known proof or counterexample for this assertion. Assertions for which proofs are tractable are not representative of typical assertions; they are usually simpler than other assertions. Their atypical characteristics may affect the measurement, so you should avoid them. On the other hand, if there is any counterexample known for the assertion, then the application engineer involved may use human input (for example, generators, tuned constraints, special initial states, etc.) to guide the tool, thus improving its performance. The application engineer may not even understand that this is cheating!

Finally, simply measure the CPU time required for each tool to achieve a large (for example, 100-cycle) proof radius for the indeterminate assertion. Differences in performance may not be measurable at small proof radii, but the differences will become very large as the proof radius increases. To increase your confidence, measure the relative performance for a large number of tough indeterminate assertions.


Appendix B: Dynamic BMC Finds Bugs That Simulation Misses


Following are a few examples from 0-In customers illustrating some of the types of bugs that dynamic BMC finds when starting from clean simulation tests:

Control-Logic Corner-Case Bug
Control-logic corner-case bugs involve walking control logic through a complex sequence of unlikely states in exactly the right order.

For example, in a high-speed communication link chip design that was verified using 0-In FV tools, there was a logic bug such that when a re-transmit was requested, and a re-transmit was already queued, and the re-transmit counter was equal to max-1, and a multi-bit error arrived on the receive interface, then re-transmit data at the head of the re-transmit FIFO would be lost.

For simulation to detect this bug, the re-transmit counter must first be incremented up to a large value, and then multiple corner-case events must happen in precisely the right order. In this case, the bug was never detected in simulation. Handwritten tests did not target the bug, and extensive pseudo-random simulation never created the right combination of events in the right order.

FV tools can deterministically trigger control-logic corner-case bugs. In this case, the design team used 0-In Search. Although this specific bug is outside the range of static FV tools (because a large counter value is required), 0-In Search found it using dynamic BMC technology. 0-In Search first found a simulation cycle in which the re-transmit counter already contained a very large value. Then, using that simulation cycle as a starting point, it triggered the bug by presenting the right corner-case events in the right order.

Interface Bugs (Non-Compliance and Omission)
Another frequently occurring bug type involves the failure to comply with a standard interface specification. This bug type can take two forms: the first is an error in implemented behavior, and the second is the complete omission of behavior that should have been implemented.

For example, in the design of a bus bridge from a processor to an AMBA bus that was verified using 0-In tools, there was a logic bug such that when the bus master received a retry response, and the retry was interrupted, then the bus master did not continue the retry until it completed or an error response was received.


This bug involves omission of behavior that should have been implemented. Since the protocol has many complex interactions, creating a comprehensive simulation test suite to exercise all combinations of interactions would have been very difficult and time-consuming. In addition, since the design team didn't know that the omitted behavior was part of the protocol specification, it would never have tried to create the right directed test in the first place!

In this case, the design team used a pre-verified AMBA monitor together with 0-In Search to deterministically verify the bus bridge design. The AMBA monitor contains one assertion corresponding to each rule in the protocol specification (even the rules that this design team didn't know about). 0-In Search exhaustively targeted each assertion in the AMBA monitor. In doing so, it forced the bus master to enter the idle state after retry interruption instead of continuing to retry. The protocol omission was easily fixed once the bug was identified, but finding the right interaction among many transaction types required the power of FV technology.

Low-Probability Data-Dependent Bug
Another frequently occurring bug type involves presenting exactly the right value on a wide data input at exactly the right time.

For example, in a 2M-gate OC-48 network packet traffic management design that was verified using 0-In tools, there was a logic bug such that if a certain 1024-bit input variable took on a specific value during a specific one of sixteen pipeline cycles, then data would be corrupted in downstream pipeline stages. Directed tests didn't trigger the bug, because the test writers didn't happen to apply the right data value. Extensive pseudo-random simulation didn't trigger the bug either; the chances of triggering the bug by applying random values to the 1024-bit input are only about one in 10^300!

Fortunately, the design team ran 0-In Search before taping out. The FV engine in 0-In Search analyzes all input values and deterministically identifies the data input value that is necessary to violate assertions. In this case, 0-In Search immediately found the bug by changing the value that was present in simulation to exactly the right value in exactly the right cycle, violating an assertion that was monitoring data in the pipeline.

For more information visit: www.mentor.com/fv


Copyright 2006 Mentor Graphics Corporation. All Rights Reserved. This document contains information that is proprietary to Mentor Graphics Corporation and may be duplicated in whole or in part by the original recipient for internal business purposes only, provided that this entire notice appears in all copies. In accepting this document, the recipient agrees to make every reasonable effort to prevent the unauthorized use of this information. Seamless and Mentor Graphics are registered trademarks of Mentor Graphics Corporation. All other trademarks mentioned in this document are trademarks of their respective owners.



MGC 05-06 TECH7100-w
