
The pitfalls of mixing formal and simulation: Where trouble starts


The most effective functional verification environments employ multiple analysis technologies, where the strengths of each are combined to reinforce each other to help ensure that the device under test (DUT) behaves as specified. However, this creates an inherent challenge of properly comparing—and combining—the results from each source to give a succinct, accurate picture of the verification effort’s true status.

The most common problem we see is when design engineers want to merge the results from formal analysis with the RTL code and functional coverage results from their UVM testbench without fully understanding what formal coverage provides. Hence, we will start on the familiar ground of simulation-generated code and functional coverage before defining formal coverage.

Simulation code and functional coverage review

Code coverage simply measures the percentage of statements in a body of RTL code that have been executed over a test run. While it is important that the testbench can exercise all of the RTL code (for example, to confirm there is no dead code, which would imply a bug in the DUT), the design can still be missing important functionality, and/or the paths to key functions can violate the specification.
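As a minimal sketch (the module and signal names below are hypothetical, not from this article), statement coverage would flag the final branch of this mux as a hole if the testbench only ever drives mode to 0 or 1, but it says nothing about whether the selected data paths are functionally correct:

module mux_sketch (
  input  logic       mode,
  input  logic [7:0] a, b,
  output logic [7:0] y
);
  always_comb begin
    if (mode == 1'b0)
      y = a;
    else if (mode == 1'b1)
      y = b;
    else
      y = '0;   // never exercised when mode is driven to 0 or 1: reported as dead code
  end
endmodule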

Functional coverage from RTL simulation is the metric of how much design functionality has been exercised, that is, "covered," by the testbench or verification environment. The amount of functional coverage that must be satisfied is explicitly defined by the verification engineer in the form of a functional coverage model.

In its basic form, it is a user-defined mapping of each functional feature to be tested to a cover point, and these cover points have certain conditions (ranges, defined transitions or crossings, etc.) to fulfill before they are reported as 100% covered during simulation. All these conditions for a cover point are defined in the form of bins. A number of cover points can be captured under one covergroup, and a collection of covergroups is usually called a functional coverage model.
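As a rough SystemVerilog sketch of such a coverage model (the signals pkt_len and pkt_kind, the bin ranges, and the module wrapper are illustrative assumptions, not taken from this article):

module pkt_coverage_sketch (
  input logic       clk,
  input logic [9:0] pkt_len,
  input logic [1:0] pkt_kind
);
  // Hypothetical coverage model: each cover point maps a functional feature
  // to a set of bins that must all be hit before it reports 100% coverage.
  covergroup pkt_cg @(posedge clk);
    cp_len : coverpoint pkt_len {
      bins short_pkt  = {[1:63]};      // short packets
      bins medium_pkt = {[64:255]};    // medium packets
      bins long_pkt   = {[256:1023]};  // long packets
    }
    cp_kind : coverpoint pkt_kind;        // one automatic bin per packet kind
    len_x_kind : cross cp_len, cp_kind;   // every length range seen with every kind
  endgroup

  pkt_cg cg = new();  // instantiate so the covergroup samples on each clock edge
endmodule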

During simulation, when certain conditions of a cover point are hit, the corresponding bins are marked as covered, and thus the number of bins hit provides a measurement of verification progress. After executing a number of testcases, a graphical report can be generated to analyze the functional coverage results, and plans can be made to close the "verification holes" by creating new tests that traverse the as-yet unexercised areas of the DUT code.

It is important to appreciate that the coverage score is also a reflection of the quality of the testbench, for example, whether the testbench is addressing all the DUT circuitry that you need to verify. Indeed, the coverage score and the number of holes are a reflection of the quality of the verification test plan and its alignment with the original specification.

Defining formal-based coverage

Formal analysis delivers the following types of coverage not seen with simulation:

Reachability

Is there any combination of input signals that will bring the state of the circuit to a specified node of importance? If no such combination exists, the point is unreachable. Hence, this yields information about the presence of dead code similar to what simulation code coverage provides.
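In practice, a reachability target is often expressed as an SVA cover property that the formal tool attempts to hit; a minimal sketch, assuming hypothetical clk, rst_n, fifo_full, and wr_en signals, placed inside (or bound to) the DUT module:

// Hypothetical reachability target: the formal tool searches for any legal
// input sequence that drives the design into the "FIFO full while writing"
// state. If it proves this cover property unreachable, the associated logic
// is effectively dead code (or the environment is over-constrained).
cover_fifo_full_write: cover property (
  @(posedge clk) disable iff (!rst_n)
  fifo_full && wr_en
);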

Observability

What are all the possible circuit and state space paths from a selected node to the signals specified in an assertion? Are these the expected paths, and do they meet the design's specification?

Structural cone of influence (COI)

Working backwards (or forwards) from a selected point in the circuit, what is all the logic that could possibly affect (or be affected by) this node? This is a rough form of coverage and generally not useful for final signoff analysis.

Model-based mutation coverage

Measures how well checkers cover a design by inserting mutations electronically into the model (the source code itself is not altered) and checking whether any checker or assertion detects them. Mutation coverage gives clear guidance on where more assertions are needed and represents a stronger metric than any other type of coverage since it measures error detection, not just exercise of the design.

While these measurements are fundamentally different from simulation coverage, the overall application is the same at a high level: measure progress and simultaneously evaluate the quality of the formal testbench (composed of constraints and functional verification properties).

Where the trouble starts

The most common designer request is to "merge" simulation-generated and formal-generated coverage metrics corresponding to each specified coverage point, where often the objective is to enable verification teams to select either formal or simulation to exclusively verify a subset of a design. For example, imagine a given IP contained a design element that was very well suited to formal analysis (an arbiter is one example) and the rest of the design could be easily verified by either technology.

Naturally, designers want to enable formal to “take credit” for the arbiter verification, and seamlessly merge this with the simulation results on the rest of the circuit. This is where the trouble can start. The major risks to be aware of are:

  • First, regardless of the chosen technology, just because something is covered doesn't mean it's properly verified. Again, consider the relationship between code coverage and functional coverage: just because tests can traverse 100% of the code doesn't mean that the code is also functionally correct.
  • Simulation coverage only reflects specific forward paths the simulation has traversed from the inputs through the state space for a specific set of stimuli.
  • Some types of formal coverage do reflect a forward traversal, but even when formal analysis traverses some of the same states as a simulation, the logic covered is usually greater. Additionally, formal analysis uses free-floating (unconstrained) input stimulus, and its results are valid for all time.
  • Other types of formal coverage report how logic and signaling “work backwards” from an output.
  • Simulation is run at the cluster/system-on-chip (SoC) level, while formal is typically run at the block level. Hence, simulation coverage represents end-to-end testing while formal coverage reflects a more localized function. How the coverage data was generated can be lost once the results are logged into the coverage database.

Bottom line: if you are not careful about understanding the differences between formal and simulation coverage, and simply union the coverage data on a 1-to-1 line/object/point basis, you can incorrectly conclude that your verification quality is higher than it really is. For instance, you could mislead yourself and your managers into believing that you are done when there are actually unverified areas of your code and test plan.

Editor's Note: This article is the first in a three-part series on the pitfalls of mixing formal and simulation coverage. The second part will compare the results from simulation and formal side-by-side with progressively complex RTL DUT code examples.

Mark Eslinger is a product engineer in the IC Verification Systems division of Siemens EDA, where he specializes in assertion-based methods and formal verification.

 

Joe Hupcey III is a product marketing manager for Siemens EDA’s Design & Verification Technologies formal product line of automated applications and advanced property checking.

 

Nicolae Tusinschi is a product manager for formal verification solutions at Siemens EDA.

 

 
