Agile Transformation in Functional Verification – Part I

Originally published in Mentor Graphics Verification Horizons, Volume 6, Issue 2, June 2010.

Neil Johnson, Principal Consultant – XtremeEDA Corp.
Bryan Morris, VP Engineering – XtremeEDA Corp.

Introduction

The potential for agile methods in IC development is large. Though agile philosophy suggests the benefits of an agile approach are most profound when applied across an entire team, reality often dictates that new techniques and approaches are proven on a smaller scale before being released for team wide application.

This article is the first in a two part series that describes how a functional verification team may benefit from and employ agile planning and development techniques while paving the way for use in RTL design and, ultimately, team-wide adoption.

Part I of Agile Transformation in Functional Verification opens with the limitations and consequences of big up front design (BUFD) from the perspective of a functional verification engineer. It continues with a discussion of how software engineers have overcome similar limitations through the use of iterative planning and incremental development. The summary includes a preview of Part II of Agile Transformation in Functional Verification and its recommendations for how functional verification engineers can become more agile.

Big Up Front Design and Constrained Random Verification

Big up front design (BUFD) is common in IC development. In BUFD, a team attempts to build detailed plans, processes and documentation before starting development. Features and functional requirements are documented in detail; architectural decisions are made; functional partitioning and implementation details are analyzed; test plans are written; functional verification environments and coverage models are designed.

While BUFD is a dominant legacy process in IC development, the wide adoption of constrained random verification with functional coverage represents a relatively recent and significant shift. As has been documented countless times, constrained random verification better addresses the exploding state space in current designs. Relative to directed testing, teams are able to verify a larger state space with comparable team sizes and resources.

But is constrained random verification with functional coverage living up to its potential? Constrained random verification employed with BUFD contains one significant flaw. From a product point of view, a design of even moderate complexity is nearly incomprehensible to a single person or even a team of people; the combination of architecture and implementation details is just too overwhelming. While the technique and associated tools may be adequate for addressing all these details, the human brain is not!

This flaw, and the limits of the human brain, are easy to spot during crunch time, near project end, when the verification team is fully engaged in its quest toward 100% coverage. Because of the random nature of the stimulus, it is very difficult for verification engineers to predict progress in the test writing phase of a project. Not all coverage points are created equal, so the path toward 100% coverage is highly non-linear in terms of the time required per coverage item.
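To make the non-linearity concrete, here is a minimal, self-contained Python sketch (not tied to any verification language or tool; the packet-length-by-error coverage model is invented purely for illustration). Purely random stimulus fills the common bins almost immediately, while the rare corner-case crosses consume most of the effort:

  import random

  random.seed(0)

  # Hypothetical coverage model: packet length crossed with an error flag.
  bins = {(length_bin, err): 0
          for length_bin in ("short", "medium", "long", "jumbo")
          for err in (False, True)}

  def classify(length):
      if length <= 64:
          return "short"
      if length <= 512:
          return "medium"
      if length <= 1500:
          return "long"
      return "jumbo"

  draws = 0
  history = []
  while min(bins.values()) == 0:        # run until every bin is hit at least once
      draws += 1
      length = random.randint(1, 2000)  # random packet length
      err = random.random() < 0.02      # error injection is rare
      bins[(classify(length), err)] += 1
      history.append(sum(1 for hits in bins.values() if hits) / len(bins))

  print(f"100% functional coverage after {draws} random transactions")
  print(f"coverage after 10 transactions: {history[min(9, draws - 1)]:.0%}")

With this toy model, the non-error bins typically fill within the first few dozen transactions while the rare error crosses take hundreds or thousands more; the last few bins dominate the effort, just as the last few percent of coverage dominates a real project.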

To work around unforeseen limitations in the environment, it is common for verification engineers to rework portions of the environment–or write unplanned, complicated tests–to remove those limitations. This is particularly relevant as the focus of testing moves beyond the low hanging fruit and toward more remote areas of the state space.

Reworking the environment is rarely accounted for in the project schedule and can cause many small but significant schedule slips. Ever wonder why tasks sit at 90% complete for so long? It is because those tasks are absorbing work that was never accounted for in the first place.

What is truly amazing is not that tasks sit at 90% for so long; it is that it always comes as a surprise when they do! This should not be a surprise. With BUFD, it is impossible to understand and plan a coverage model that will yield meaningful 100% coverage. It is equally impossible to comprehend, up front, the requirements of the environment and tests, especially considering the random nature and uncertainty of the stimulus. BUFD will not give a team all the answers; rework of the environment will happen whether or not the schedule says so!

It is this type of uncertainty that an agile approach to IC development, one that includes functional verification, can help address. Uncertainty is an inherent part of functional verification and cannot be eliminated. The extent to which it negatively influences delivery objectives and product quality, however, can be mitigated significantly.

Other tools, such as intelligent testbench design and formal analysis, can of course be used to supplement or even replace constrained random verification. In the end, however, functional verification is a process that depends critically on people and teamwork for success. Functional verification must be seen as a continuous, people-centric process of understanding, refining and validating potential solutions. This is an alternative to trivializing functional verification as a two-step process of capturing, then verifying, a list of requirements.

An Agile Alternative to BUFD

Chess is a game that plays out through a combination of strategy and tactics.

Strategy describes a player’s general approach to the game. Will the player go for the win or play for a draw? How can they exploit the weaknesses of their opponent? Will they choose a defensive or attacking posture? Will attacks be launched through the center or on the flanks? Which opening moves best support the strategy?

Good players strive for a well crafted strategy and then move their pawns and pieces in support of that strategy. Throughout a game, great players examine how successful their strategy has been and change it if they perceive an advantage in doing so.

Tactics are described as short sequences of moves that either limit the moves of an opponent or end in material gain (Wikipedia). They are the tools that enable creation of a dominant position, or capture of an opposing pawn or piece. They rely more on current positioning, recognition and opportunism than planning. They are short term execution based on a player’s immediate situation.

Up front planning in functional verification should be similar to forming a strategy in chess. As part of a strategy, a team should have a clear picture of what they want to accomplish with some general guidelines for execution. A solid strategy should also acknowledge the fluidity and unforeseen circumstances that await the team. A functional verification strategy should start with identifying the following:

  • a prioritized feature set;
  • a methodology; and
  • a high-level schedule.

Most teams already start functional verification with a feature list. While some may resist prioritizing features because “all the features are important”, prioritizing is almost guaranteed to happen anyway as budget and delivery pressures intensify. Feature prioritization in initial planning is the key to having a team focus on verifying high value features early, making them less vulnerable to reduction in scope later in the project.

Ignoring the more technical definitions normally associated with the word methodology in functional verification, this article uses a grander definition stated by Alistair Cockburn in Agile Software Development: The Cooperative Game:

“everything you regularly do to get your [hardware] out. It includes who you hire, what you hire them for, how they work together, what they produce, and how they share. It is the combined job descriptions, procedures, and conventions of everyone on your team. It is the product of your particular ecosystem and is therefore a unique construction of your organization.”

Specific to functional verification, a methodology may include identifying team members, skill-sets and suitable roles, where people work and who they work with, modes of interaction with design and modeling teams, reporting structure, coding standards, general delivery requirements, verification libraries, bug-tracking systems, documentation and anything else that governs daily operation within the team. Methodologies need not supply strict rules but they should be visible and universally accepted. A methodology is something that a team can create in a day or less (Cockburn 2006) and should be accessible to anyone with a stake in the functional verification effort.

The schedule produced in initial planning should be very high-level and include feature-based delivery milestones. Most importantly, the team should recognize that the first schedule that gets built is very likely to be wildly optimistic, totally inaccurate and in desperate need of continuous refinement.
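As a deliberately simple illustration, the three strategy outputs can be captured in something as lightweight as the following Python sketch; the feature names, priorities and milestone labels are invented, and the point is the shape of the plan rather than any particular tool:

  from dataclasses import dataclass

  @dataclass
  class Feature:
      name: str
      priority: int        # 1 = highest value, verify first
      verified: bool = False

  features = [
      Feature("register access path", priority=1),
      Feature("basic packet forwarding", priority=1),
      Feature("error injection/recovery", priority=2),
      Feature("performance counters", priority=3),
  ]

  # Coarse, feature-based milestones; no task-level detail this early.
  milestones = {
      "M1 - core datapath verified": [f for f in features if f.priority == 1],
      "M2 - robustness features":    [f for f in features if f.priority == 2],
      "M3 - remaining features":     [f for f in features if f.priority == 3],
  }

  for name, scope in milestones.items():
      print(name, "->", [f.name for f in scope])

Note that the milestones are expressed in terms of delivered features rather than tasks, which keeps the schedule coarse and easy to refine as the project unfolds.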

With a strategy completed, a team may start identifying the tools–tactics–that they are likely to use through the course of the project. This would include things like:

  • options for environment structure
  • functional coverage methods
  • test writing strategy
    • directed
    • constrained random
    • combination of both
  • use of formal verification
  • stimulus modeling
  • verification code reviews
  • black-box/white-box assertions
  • hardware/software co-simulation
  • emulation/acceleration
  • modeling and scoreboarding strategies
  • exact/approximated response checking
  • block or chip-level testing
  • available 3rd party IP
  • internal/external outsourcing

Early in the project, it is only important to identify candidate tactics along with the team's strengths or deficiencies related to each. Ensure training is arranged if necessary, but also ensure it is delivered at an appropriate time (i.e. as tactics are required, as opposed to months beforehand).
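A lightweight inventory is usually enough at this stage. The following Python sketch is one hypothetical way to record it; the tactic names, strength ratings and milestone labels are invented for illustration:

  tactics = [
      # (tactic, team strength 1-5, training needed before...)
      ("constrained random stimulus",  4, None),
      ("functional coverage modeling", 3, None),
      ("formal property checking",     1, "M2 - robustness features"),
      ("emulation/acceleration",       2, "M3 - remaining features"),
  ]

  for tactic, strength, train_by in tactics:
      note = f"arrange training before {train_by}" if train_by else "ready now"
      print(f"{tactic:32s} strength={strength}  {note}")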

Resist the urge to assume legacy tactics will be well-suited to new development. While wild deviation may not be necessary–or even recommended–the applicability of legacy tactics should at least be assessed prior to use.

While it is not necessary to eliminate detailed discussion of when and how certain tactics will be employed, that discussion should at least be limited. Remember that tactics rely on recognition and opportunism as opposed to detailed planning. Learn to recognize and react to situations on the immediate horizon instead of planning for situations that may never arise.

Chess players known as strategists may not be great tacticians, nor are tacticians always great strategists. While both are obviously required, they are independent skills. The same is true in functional verification. While strategies keep a complete but concise view into the future, they are fluid and may change during a project given changing conditions. Tactics are tools of high relevance in the short term that cannot be accurately planned for in the long term. To minimize wasted planning effort and enable accurate, high-confidence decisions, teams must understand the difference between the two. A team should also understand its strengths and weaknesses and deliberately and methodically work to improve in both areas.

Growing Hardware with an Agile Approach

Throughout a project, there are two techniques in particular that can help a team differentiate between long term strategy and short term tactics: iterative planning and incremental development. Both are used extensively in agile software development. In functional verification, they can be used to promote analysis and refinement of the team's verification strategy, perform just-in-time tactical decision making with respect to implementation and provide objective metrics for overall effectiveness.

Iterative Planning

With iterative planning, detailed planning and analysis are done continuously throughout the life of the project. It is assumed that long term planning is inaccurate at best, so detailed planning is limited to the immediate horizon: anywhere from 1 week to 3 months into the future, depending on the project circumstances. Long term planning is kept to a very coarse level of detail to minimize the time spent updating and maintaining the long term plan.

Agile teams see two advantages to iterative planning. The first is that details are limited to a period of time that developers can comprehend. For example, it is relatively easy to confidently plan one week of work in great detail. One month of detailed planning, while not as easy as one week, is also very realistic. When planning a year or more into the future, however, it is impossible to have the same level of accuracy or confidence. Changing project dynamics will surely render planning decisions obsolete, with time lost reworking the plan and/or implementation.

A second advantage seen in iterative planning is that past experience can be used to plan future progress. This roughly equates to delaying decisions to a time when it is more reasonable to expect that those decisions will be correct. With more experience, subsequent decisions can be made with increasingly greater confidence.
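The following minimal Python sketch illustrates the idea: the next iteration is planned from measured velocity rather than an up-front estimate. The iteration history and backlog below are invented for illustration:

  import statistics

  # Measured results from past iterations, not up-front estimates.
  features_done_per_iteration = [2, 4, 3, 3]
  velocity = statistics.median(features_done_per_iteration)

  backlog = ["error injection/recovery", "performance counters",
             "multicast forwarding", "low-power entry/exit"]

  # Plan only as much as recent history says the team can finish.
  next_iteration = backlog[:int(velocity)]
  print(f"observed velocity: {velocity} features/iteration")
  print("planned for next iteration:", next_iteration)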

Incremental Development

With BUFD and a defined process, progress is generally measured as a completion percentage derived from the work breakdown structure (WBS). For example, when half the tasks in the WBS are done, a project is considered half done.

A problem with measuring progress relative to planned effort, identified in Cohn (2005), is that individual tasks have little correlation to the end product and minimal value in the eyes of a customer. Features, Cohn suggests, do hold value by virtue of being demonstrable to a customer, so many agile teams develop products incrementally and track feature-based progress.

In Agile Transformation in IC Development (Johnson, Morris 2010), we describe incremental development as:

“…an approach where functionality is built and tested as a thin slice through the entire development flow, start to finish. A product starts as a small yet functional subset and steadily grows as features are completed. Progress is based on code that works rather than code that’s written. Working code is a great metric for measuring real progress and demonstrating and clarifying implementation decisions.”

Features offer a far more objective metric for measuring progress. They are tangible and demonstrable to a customer. They also hold the word complete to a higher standard: to be complete, a feature must be designed, implemented, tested, documented, etc., such that it could theoretically be delivered to a customer.
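A minimal Python sketch of feature-based progress tracking follows; the feature names and their states are invented. A feature counts toward progress only when it meets the higher standard of complete described above:

  from dataclasses import dataclass

  @dataclass
  class Feature:
      name: str
      implemented: bool = False
      verified_top_level: bool = False
      documented: bool = False

      @property
      def done(self):
          # "Complete" held to the higher standard described above.
          return self.implemented and self.verified_top_level and self.documented

  features = [
      Feature("register access path", True, True, True),
      Feature("basic packet forwarding", True, True, False),   # not done: no docs
      Feature("error injection/recovery", True, False, False),
      Feature("performance counters"),
  ]

  done = sum(f.done for f in features)
  print(f"progress: {done}/{len(features)} features demonstrably complete")

Measured this way, progress cannot inflate on partially finished work; it only moves when another feature becomes demonstrable.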

Agile Transformation in Functional Verification – Part II

Critical, objective analysis of a problem is key to implementing the right solution. While the concepts in part I are somewhat abstract, they are critical for motivating functional verification teams that crave an agile alternative to BUFD.

In Agile Transformation in Functional Verification – Part II, we look at the details of how a team actually uses agile planning and development techniques.

The opening sections continue the discussion on distributed planning as an alternative to BUFD. They include firm guidelines for conducting up front planning and building a functional verification strategy. The process of feature capture, prioritization and high-level scheduling is also explained in greater detail. The planning discussion continues with a method for iterative planning in which past progress is critiqued, the team's verification strategy is analyzed and short term detailed planning is done.

Part II continues with goals and recommendations for agile functional verification teams at familiar stages of development.

Environment Development

Strive for a working code base as early as possible with an environment that is functional from day one.

RTL Integration

Neither the design nor the verification environment need be complete to warrant integration of the RTL. There is plenty that can be done with partially completed RTL and a minimal yet functional verification environment.

Transaction Development

Transactions define the communication mechanisms between adjacent components in the DV environment and between the RTL and the DV environment. Ensure they are well designed and tested prior to widespread use.
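As a rough, language-neutral illustration (written in Python rather than a verification language, with hypothetical field names), a transaction can be as simple as a randomizable record with a self-check that runs before the transaction sees widespread use:

  import random
  from dataclasses import dataclass

  @dataclass
  class BusTransaction:
      addr: int = 0
      data: int = 0
      write: bool = False

      def randomize(self, max_addr=0x1_0000):
          # Constrained randomization: word-aligned addresses only.
          self.addr = random.randrange(0, max_addr, 4)
          self.data = random.getrandbits(32)
          self.write = random.random() < 0.5
          return self

  # Exercise the transaction itself before it sees widespread use.
  t = BusTransaction().randomize()
  assert t.addr % 4 == 0
  assert t == BusTransaction(t.addr, t.data, t.write)  # field-wise compare, as a scoreboard would
  print("transaction self-checks passed:", t)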

Component Development

Define and integrate all the components up front then incrementally add functionality to each component on a feature-by-feature basis.

Block-level and Top-level Testing

Until features are integrated and verified at the top level (aka the customer perspective), they are not ready for production or release, and are therefore incomplete.

All of the above are described within the context of incremental development where a functional verification team executes the iterative planning approach to produce a growing subset of verified RTL.

Part II also includes case study examples from a real project to illustrate how functional verification engineers can tailor an agile approach to their particular project circumstances.

References

Beck, K., Extreme Programming Explained: Embrace Change (2nd edition), Addison-Wesley Professional, 2004.

Cockburn, A., Agile Software Development: The Cooperative Game (2nd Edition), Addison-Wesley Professional, 2006.

Cohn, M., Agile Planning and Estimating, Prentice Hall PTR, 2005.

Johnson, N., Morris, B., “Agile Transformation in IC Development”, Verification Horizons, Mentor Graphics Corp., February 2010.

http://en.wikipedia.org/wiki/Chess_tactics, retrieved May, 2010.

Copyright 2010 – Neil Johnson
