Principles of Verifiable RTL Design: A functional coding style supporting verification processes in Verilog / Edition 2

ISBN-10: 0792373685
ISBN-13: 9780792373681
Pub. Date: 05/31/2001
Publisher: Springer US

Hardcover

$169.99

Overview

System designers, computer scientists and engineers have continuously invented and employed notations for modeling, specifying, simulating, documenting, communicating, teaching, verifying and controlling the designs of digital systems. Initially these systems were represented via electronic and fabrication details. Following C. E. Shannon’s revelation of 1948, logic diagrams and Boolean equations were used to represent digital systems in a fashion that de-emphasized electronic and fabrication detail while revealing logical behavior. A small number of circuits were made available to remove the abstraction of these representations when it was desirable to do so. As system complexity grew, block diagrams, timing charts, sequence charts, and other graphic and symbolic notations were found to be useful in summarizing the gross features of a system and describing how it operated. In addition, it always seemed necessary or appropriate to augment these documents with lengthy verbal descriptions in a natural language. While each notation was, and still is, a perfectly valid means of expressing a design, lack of standardization, conciseness, and formal definitions interfered with communication and the understanding between groups of people using different notations. This problem was recognized early, and formal languages began to evolve in the 1950s when I. S. Reed discovered that flip-flop input equations were equivalent to a register transfer equation expressible in a vector-like notation. Expanding these concepts, Reed developed a notation that became known as a Register Transfer Language (RTL).

Product Details

ISBN-13: 9780792373681
Publisher: Springer US
Publication date: 05/31/2001
Edition description: 2nd ed. 2001
Pages: 282
Product dimensions: 6.14(w) x 9.21(h) x 0.03(d)

About the Author

Lionel Bening, Hewlett-Packard Company, Richardson, TX, USA

Harry Foster, Hewlett-Packard Company, Richardson, TX, USA

Read an Excerpt


Chapter 2: The Verification Process

Ad-hoc Metrics

Ad-hoc metrics, such as bug detection frequency, length of simulation after the last bug found, and total number of simulation cycles, are possibly the most common metrics used to measure the degree of confidence in the overall verification process. These ad-hoc metrics indicate, after a stable period, that our verification productivity level has diminished. At this point, the verification manager may choose to employ additional verification strategies, or decide to release the design. Malka and Ziv [1998] have extended the use of these metrics by applying statistical analysis to post-release bug discovery data, the cost per bug found, and the cost of a delayed release to estimate the reliability of the design. This technique provides a method of predicting the number of remaining bugs in the design and the verification mean time to next failure (MTTF). Unfortunately, metrics based on bug rates or simulation duration provide no qualitative data on how well our verification process validated the design space, nor do they reveal the percentage of the specified functionality that remains untested. For example, the verification strategy might concentrate on a few aspects of the design's functionality, driving the bug rate down to zero. Ad-hoc metrics might then render a false sense of confidence in our verification effort, although portions of the design's total functionality remain unverified.

Programming Code Metrics

Most commercial coverage tools are based on a set of metrics originally developed for software program testing [Beizer 1990][Horgan et al. 1994]. These programming code metrics measure syntactic characteristics of the code exercised by the execution stimuli. Examples are as follows (an annotated RTL fragment follows the list):

  • Line coverage measures the number of times a statement is visited during the course of execution.
  • Branch coverage measures the number of times a segment of code diverges into a unique flow.
  • Path coverage measures the number of times each path (i.e., a unique combination of branches and statements) is exercised by the execution stimuli.
  • Expression coverage is a low-level metric characterizing the evaluation of expressions within statements and branch tests.
  • Toggle coverage is another low-level metric, which provides coverage statistics on individual bits that toggle from 1 to 0, and back. This coverage metric is useful for determining bus or word coverage.
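
As a rough illustration of where these metrics attach, consider the small Verilog fragment below (a sketch of our own, not taken from the book); the comments note which construct each metric observes.

    // Illustrative only: marks where each code metric attaches.
    module cov_example (
      input            clk,
      input            a,
      input            b,
      input      [7:0] d,
      output reg [7:0] q
    );
      always @(posedge clk) begin
        if (a & b)        // branch coverage: taken vs. not-taken counts
          q <= d;         // line coverage: visits to this statement
        else
          q <= 8'h00;     // path coverage: each if/else combination
      end                 // expression coverage: evaluations of (a & b)
                          // toggle coverage: 0->1 and 1->0 on each bit of q
    endmodule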

A shortcoming of programming code metrics is that they are limited to measuring the controllability aspect of the test stimuli applied to the RTL code. Activating an erroneous statement, however, does not mean that the design bug will manifest itself at an observable point during the course of simulation. Devadas et al. [1996] and Fallah et al. [1998] have proposed techniques to measure the observability aspect of test stimuli. Particularly interesting are the results presented by Fallah et al. [1998], which compare traditional line coverage with their observability coverage using both directed and random simulation. They found instances where the verification test stimuli achieved 100% line coverage yet only 77% observability coverage; other instances achieved 90% line coverage but only 54% observability coverage.
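
A minimal sketch of this controllability/observability gap (the design and signal names are hypothetical): the erroneous statement below executes on every clock, so line coverage reports it as covered, yet while sel stays low the wrong value is masked and never reaches an output where a checker could catch it.

    // Illustrative only: an activated bug that remains unobservable.
    module obs_gap (
      input            clk,
      input            sel,
      input      [3:0] a,
      output reg [3:0] y
    );
      reg [3:0] t;
      always @(posedge clk) begin
        t <= a + 4'd2;        // BUG: the spec calls for a + 1; line
                              // coverage still reports this as covered
        y <= sel ? t : 4'd0;  // with sel == 0 the faulty value of t is
                              // masked and never propagates to output y
      end
    endmodule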

Another drawback with programming code metrics is that they provide no qualitative insight into our testing for functional correctness. Kantrowitz and Noack [1996] propose a technique for functional coverage analysis that combines correctness checkers with coverage analysis techniques. In section 2.4, we describe a similar technique that combines event monitors, assertion checkers, and coverage techniques into a methodology for validating functional correctness and measuring desirable events (i.e., observable points of interest) during simulation.

In spite of these limitations, programming code metrics still provide a valuable, yet crude, indication of what portions of the design have not been exercised. Keating and Bricaud [1999] recommend targeting 100% programming code coverage during block level verification. It is important to recognize, however, that achieving 100% programming code coverage does not translate into 100% observability (detection) of errors or 100% functional coverage. The cost and effort of achieving 100% programming code coverage needs to be weighed against the option of switching our focus to an alternative coverage metric (e.g., measuring functional behavior using event monitors or a user defined coverage metric).

State Machine and Arc Coverage Metrics

State machine and arc coverage is another measurement of controllability. These metrics measure the number of visits to a unique state or arc transition as a result of the test stimuli. Their value lies in uncovering unexercised arc transitions, which enables us to tune our verification strategy. Like programming code metrics, however, state machine and arc coverage metrics provide no measurement of observability (e.g., an error resulting from an arc transition might not be detected), nor do they measure the state machine's functional correctness (e.g., that only valid sequences of state transitions occur).
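
For a concrete picture, the hypothetical state machine below has four states: state coverage counts visits to each of IDLE through DONE, while arc coverage counts traversals of each labeled transition. Neither metric, as noted above, checks that the sequence of transitions is functionally correct.

    // Illustrative only: states and arcs as coverage targets.
    module fsm_cov (
      input      clk,
      input      rst,
      input      start,
      input      done_i,
      output reg busy
    );
      localparam IDLE = 2'd0, BUSY = 2'd1, WAIT = 2'd2, DONE = 2'd3;
      reg [1:0] state;
      always @(posedge clk) begin
        if (rst) state <= IDLE;
        else case (state)
          IDLE: if (start)  state <= BUSY;  // arc IDLE -> BUSY
          BUSY:             state <= WAIT;  // arc BUSY -> WAIT
          WAIT: if (done_i) state <= DONE;  // arc WAIT -> DONE
          DONE:             state <= IDLE;  // arc DONE -> IDLE
        endcase
        busy <= (state == BUSY) || (state == WAIT);
      end
    endmodule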

User Defined Metrics

Grinwald et al. [1998] describe a coverage methodology that separates the coverage model definition from the coverage analysis tools. This enables the user to define unique coverage metrics for significant points within their design. They cite examples of user defined coverage metrics targeting the proper handling of interrupts and a branch unit pipe model of coverage. In general, user defined metrics provide an excellent means for focusing and directing the verification effort on areas of specific concern [Fournier et al. 1999].
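
As a simple sketch of the idea (the scenario and names are our own, not from the cited work), a user defined metric might record whether every interrupt type has been taken at least once while the pipe holds instructions, a condition no generic code metric expresses directly.

    // Illustrative only: a user defined coverage point.
    module int_cov (
      input       clk,
      input       int_valid,
      input [1:0] int_type,
      input       pipe_nonempty
    );
      reg [3:0] seen = 4'b0000;   // one sticky bit per interrupt type
      reg       reported = 1'b0;
      always @(posedge clk) begin
        if (int_valid && pipe_nonempty)
          seen[int_type] <= 1'b1;
        if (&seen && !reported) begin
          reported <= 1'b1;
          $display("COVERAGE: all interrupt types seen at %0t", $time);
        end
      end
    endmodule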

Fault Coverage Metrics

For completeness, we discuss fault coverage metrics, which were developed to measure a design's testability characteristics for manufacturing. Unlike programming code coverage, fault coverage metrics address both the controllability and observability aspects of coverage at the gate level of the design. Applying fault coverage techniques to RTL or functional verification, however, is still an area of research [Kang and Szygenda 1992] [Cheng 1993].

These metrics are commonly based on the following steps (a simulation sketch follows the list):

1. Enumerate stuck-at faults on the input and output pins of all gate (or transistor) level models in the design.
2. Apply a set of vectors to the design model using a fault simulator to propagate the faults.
3. Measure the number of faults that reach an observable output.
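
The sketch below illustrates these three steps for a single stuck-at-0 fault (the design and net names are invented): two copies of the design run side by side, one with an internal net forced low, and the fault counts as detected only when a primary output of the two copies differs.

    // Illustrative only: one stuck-at-0 fault, simulated and observed.
    module sa_dut (input clk, input [3:0] a, output reg [3:0] y);
      wire [3:0] t = a ^ 4'b0101;          // some internal logic
      always @(posedge clk) y <= t + 4'd1;
    endmodule

    module fault_tb;
      reg        clk = 0;
      reg  [3:0] a;
      wire [3:0] y_good, y_fault;

      sa_dut dut_good  (.clk(clk), .a(a), .y(y_good));   // fault-free copy
      sa_dut dut_fault (.clk(clk), .a(a), .y(y_fault));  // faulty copy

      always #5 clk = ~clk;

      initial begin
        force dut_fault.t[0] = 1'b0;     // step 1: inject one enumerated fault
        repeat (16) begin                // step 2: apply a set of vectors
          a = $random;
          @(posedge clk); #1;
          if (y_good !== y_fault)        // step 3: fault reaches an output
            $display("stuck-at-0 on t[0] detected with a=%h", a);
        end
        $finish;
      end
    endmodule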

Regression Analysis and Test Suite Optimization

By using the various combined coverage metrics described in this chapter, we are able to perform the processes known as regression analysis and test suite optimization. These combined processes enable us to significantly reduce regression test runtimes while maximizing our overall verification coverage. Using regression analysis and test suite optimization techniques, development labs within Hewlett-Packard have successfully reduced regression vectors by up to 86% while reducing regression simulation runtimes by 91%. Alternatively, Buchnik and Ur [1997] describe a method they developed for creating small (yet comprehensive) regression suites incrementally on the fly by identifying a set of coverage tasks using regression analysis.

Event Monitors and Assertion Checkers

In the previous section, we explored various coverage metrics that are used to determine our degree of confidence in the verification process. A deficiency of this standard set of coverage metrics is their inability to quantify functional coverage. In general, verification strategies can be classified as either end-to-end (black-box) testing or internal (white-box) testing. Properties of the design we wish to validate using white-box testing are easier to check, since it is unnecessary to wait for test results to propagate to an observable system output. In this section, we explore the use of event monitors and assertion checkers as a white-box testing mechanism for measuring functional correctness as well as detecting erroneous behavior. The use of event monitors and assertion checkers provides the following advantages:

  • halts simulation (if desired) on assertion errors to prevent wasted simulation cycles
  • simplifies debugging by localizing the problem to a specific area of code
  • increases test stimuli observability, which enhances pseudo-random test generation strategies
  • provides a mechanism for grading test stimuli functional coverage (e.g., event monitoring coverage)
  • enables the use of formal and semi-formal verification techniques (e.g., provides verification targets and defines constraints for formal assertion checkers)
  • provides a means for capturing and validating design environment assumptions and constraints

The last point is a notable application of the Retain Useful Information Principle. Assertion checkers are a useful mechanism for capturing design assumptions and expected input environmental constraints during the RTL implementation phase. Likewise, event monitors embedded in the RTL provide a mechanism for flagging corner-case events for which the design engineer has verification concerns. The loss of this design knowledge and environmental assumptions can result in both higher verification and maintenance costs.
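
As a minimal sketch of both mechanisms (the module and signal names are ours, not a library the book defines), the checker below pairs an event monitor, which records a corner-case occurrence for coverage grading, with an assertion checker, which halts simulation when a design assumption is violated:

    // Illustrative only: an event monitor and an assertion checker.
    module fifo_checks (
      input clk,
      input push,
      input pop,
      input full,
      input empty
    );
      integer full_push_events = 0;

      // Event monitor: count the corner case "push attempted while full"
      // so test stimuli can be graded for functional coverage.
      always @(posedge clk)
        if (push && full) begin
          full_push_events = full_push_events + 1;
          $display("EVENT: push while full at time %0t", $time);
        end

      // Assertion checker: a pop from an empty FIFO violates a design
      // assumption; halt simulation rather than waste further cycles.
      always @(posedge clk)
        if (pop && empty) begin
          $display("ASSERT FAILED: pop while empty at time %0t", $time);
          $finish;
        end
    endmodule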

Events

An event can be thought of as a desirable behavior whose occurrence is required during the course of verification for completeness. Examples of events include corner-case detection, such as an error memory read, followed by its proper error handling function...

Table of Contents

  • The Verification Process
  • Coverage, Events and Assertions
  • RTL Methodology Basics
  • RTL Logic Simulation
  • RTL Formal Verification
  • Verifiable RTL Style
  • The Bad Stuff
  • Verifiable RTL Tutorial
  • Principles of Verifiable RTL Design

Preface

The conception of a verifiable register transfer level (RTL) philosophy is a product of two factors: one, inherited seat-of-the-pants experiences during the course of large system design; the other, the sort of investigation which may be called "scientific." Our philosophy falls somewhere between the knowledge gained through experiences and the knowledge gained through scientific research. It corroborates on matters as to which definite knowledge has, so far, been ascertained; but like science, it appeals to reason rather than authority. Our philosophy consists of a fundamental set of principles which, when embraced, yield significant payback during the process of verification.

The need for a verifiable RTL philosophy is justified by the complexity, density, and clock speeds of today's chips and systems, which continue to grow at exponential rates. This situation has raised the cost of design errors to a critical point, where, increasingly, the resources spent on the process of verification exceed those spent on design.

Myriad books, manuals, and articles have addressed the issue of RTL Verilog style with an emphasis on synthesis-oriented policies. They explain how to write Verilog to wrest optimal gates from the synthesis process. Still other material presents the entire spectrum of Verilog constructs from the architectural specification to switch-level strengths. Yet these works leave it to the readers to find their way to good practices for verification. Unfortunately, few guidelines govern the coding of RTL Verilog to achieve an optimum flow through the various functional and logical verification processes.

This vacuum clearly becomes a problem as design complexity increases, and as design teams consider incorporating more advanced traditional and formal verification processes within their flow (for instance, cycle-based simulation, two-state simulation, model checking and equivalence checking). Our solution is to introduce a verifiable subset of Verilog and a simple RTL coding style. The coding policies we present have enabled us to effectively incorporate these new verification technologies into our design flow. To provide a framework for discussion, we place emphasis on describing verification processes throughout the text, as opposed to an in-depth discussion of the Verilog language. Specifically, we are interested in how an engineer's decision to code their RTL impacts a verification tool's performance and the quality of the overall verification process. Thus, we have deliberately linked the RT level verification process to the language and have chosen not to discuss the details of the Verilog language reference manual.

In writing and public speaking training, students are always told to know their reader and audience, and adjust their presentation accordingly. In verification, the audience for a design description is the verification processes and tools. This book presents the verification process and tools in the first chapters, then presents RTL Verilog in later chapters.

This book tells how you can write Verilog to describe chip designs at the RT level in a manner that cooperates with verification processes. This cooperation can return an order of magnitude improvement in performance and capacity from tools such as simulation and equivalence checkers. It reduces the labor costs of coverage and formal model checking, by facilitating communication between the design engineer and the verification engineer. It also orients the RTL style to provide more useful results from the overall verification process.

One intended audience for this book is engineers and students who need an introduction to various design verification processes and a supporting functional Verilog RTL coding style. A second intended audience is engineers who have been through introductory training in Verilog, and now want to develop good RTL writing practices for verification. A third audience is Verilog language instructors who are using a general text on Verilog as the course textbook, but want to enrich their lectures with an emphasis on verification. A fourth audience is engineers with substantial Verilog experience who want to improve their Verilog practice to work better with RTL Verilog verification tools. A fifth audience is design consultants searching for proven verification-centric methodologies. A sixth audience is EDA verification tool implementers who want some suggestions about a minimal Verilog verification subset.

The concepts presented in this book are drawn from the authors' experience with large-scale system design projects. The scale of these design projects ranged to more than 200 million gate-equivalents, and we are happy to report that the products were commercially successful. To support the design processes in these projects, we evaluated and integrated verification tools from the marketplace. Before there were commercially available tools, we developed the tools ourselves. The tools include equivalence checkers, cycle-based simulators, linters, implicit interconnection, macro preprocessors, and RTL scan simulation support.

This book is based on the reality that comes from actual large-scale product design process and tool experience. We acknowledge that it may not have the pedagogical refinement of other books derived from lecture notes.

Acknowledgments

The authors wish to thank the following people who participated in discussions, made suggestions and other contributions to our Principles of Verifiable RTL Design project:

Greg Brinson, Bill Bryg, Christian Cabal, Dr. Albert Camilleri, Dino Caporossi, Michael Chang, Dr. K.C. Chen, Dr. Kwang-Ting (Tim) Cheng, Carina Chiang, Jeanne Foster, Bryan Hornung, Michael Howard, Tony Jones, James Kim, Ruth McGuffey, Dr. Gerard Memmi, Dr. Ratan Nalumasu, Bob Pflederer, Dr. Carl Pixley, Dr. Shyam Pullela, Rob Porter, David Price, Hanson Quan, Jeff Quigley, Mark Shaw, Dr. Eugene Shragowitz, Dr. Vigyan Singhal, Bob Sussman, Paul Vogel, Ray Voith, Chris Yih, Nathan Zelle, and numerous design engineers from the Hewlett-Packard Computer Technology Lab.
