TDD In Hardware Development: Who Does What?

Welcome back for TDD month round 2! In our first post, Test Your Own Code!, we laid out some of the motivation for using TDD along with some general discussion of the mechanics and benefits. If you read Test Your Own Code!, hopefully you’ve put some thought into where it actually fits into the hardware process, because that’s the topic we’re tackling today. I don’t think it’s trivial, but we can definitely squeeze TDD in. Here are my thoughts.

NOTE: I don’t mean to devalue or ignore the potential of TDD as a design technique in hardware development (for software teams, TDD is extremely valuable as a design technique), but we’re not going to talk about it much as a design technique here. I’m going to focus on the resulting unit tests and the effect they have on the verification paradigm we have now.

The first mental hurdle I had to jump when it comes to fitting TDD into hardware development has to do with scope because I’ve never been clear on how far software developers go with TDD. Does it apply to a single module/class and no further or would it apply to a subsystem (i.e. block level)? How about at chip level?

I think I now have the issue of scope figured out. As obvious as it feels now, it wasn’t always obvious because I was stuck thinking in terms of the block and top level testing paradigm that most of us are used to. TDD is unit testing, which applies at the module/class/entity level. The tests that fall out of TDD, therefore, would correspond to a finer design granularity and consequently would be an addition to what most hardware teams do currently.

Test suites we have now include block and top level tests. TDD means the addition of unit level tests with a granularity that is finer than block level tests.

The other questions I had apply to rigor. Is TDD used for exhaustive testing? Or does it take you <so far> – however far that is – after which a test expert carries on with the exhaustive testing? I think I have that figured out as well, though there seems to be a lot more grey area when it comes to rigor.

From side discussions I had at Agile2011, I got the impression that software testing 10-15 years ago looked a lot like what we do today; designers wrote the design and testers did the testing. Fast forward to present day… teams using TDD share responsibility for the entire test effort between design and test experts. But how is it split? To build my picture of how responsibility is split in a hardware world that incorporates TDD, it’s helped me to think about testing as an accumulation of 3 different types of tests.

  1. First Order Tests: first order tests aim at the lowest level of detail. To me, these tests are the bare minimum needed to properly claim you’ve done something useful. A first order test would clear the compile/simulate hurdle and then verify some very (very) basic design function.
    1. Example: put one element in a FIFO, get one element out of the FIFO.
  2. Second Order Tests: second order tests go beyond the basics but still focus on the known. They would verify all the features that the design is specifically intended to handle.
    1. Example: fill a FIFO and verify the full status is asserted, then empty the FIFO – checking the validity of each element as you go – and verify the empty status is asserted at the end.
  3. Third Order Tests: third order tests are designed to cover the unknown. Third order tests could come about by asking “we’ve done <this>, but I wonder what happens if you do <that>?” Is there a lock-up or some sort of data loss? Does the design get confused at all, or does it handle the scenario without a hitch?
    1. Example: simulating the behavior of some peripheral, alternate between a random number of puts and gets for an extended period of time around the empty and full thresholds to verify there are no memory leaks.
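To make the three orders concrete, here’s a minimal sketch in Python – a throwaway software model of a FIFO, not RTL, and the Fifo class and test names are illustrative assumptions rather than anything from a real project – with one unit test per order:

```python
import random
import unittest


class Fifo:
    """Minimal software model of a bounded FIFO (hypothetical, for illustration)."""

    def __init__(self, depth):
        self.depth = depth
        self._q = []

    def put(self, item):
        if self.full:
            raise OverflowError("put on full FIFO")
        self._q.append(item)

    def get(self):
        if self.empty:
            raise IndexError("get on empty FIFO")
        return self._q.pop(0)

    @property
    def full(self):
        return len(self._q) == self.depth

    @property
    def empty(self):
        return len(self._q) == 0


class FifoTests(unittest.TestCase):
    # 1st order: the bare minimum - one element in, same element out
    def test_first_order_put_then_get(self):
        f = Fifo(depth=4)
        f.put(42)
        self.assertEqual(f.get(), 42)

    # 2nd order: fill until full asserts, drain until empty asserts,
    # checking the validity of each element along the way
    def test_second_order_fill_and_drain(self):
        f = Fifo(depth=4)
        for i in range(4):
            f.put(i)
        self.assertTrue(f.full)
        for i in range(4):
            self.assertEqual(f.get(), i)
        self.assertTrue(f.empty)

    # 3rd order: random puts/gets for an extended period around the
    # empty and full thresholds, with a scoreboard checking data integrity
    def test_third_order_random_traffic(self):
        random.seed(0)  # reproducible randomness
        f = Fifo(depth=4)
        expected = []
        for _ in range(1000):
            if not f.full and (f.empty or random.random() < 0.5):
                item = random.randint(0, 255)
                f.put(item)
                expected.append(item)
            else:
                self.assertEqual(f.get(), expected.pop(0))


if __name__ == "__main__":
    unittest.main(argv=["fifo_test"], exit=False)
```

In a real flow the same structure would map onto whatever HDL unit test framework the team uses; the point here is only that each order builds on the confidence established by the one before it.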

The first order testing goes to designers. Not only will they use TDD to build code that is simpler and more robust - remember, TDD is a design technique first! - they’ll have tests that find and eliminate all the simple mistakes (i.e. the first order bugs).

The third order tests go to dedicated verification engineers. Because the third order tests would tend to model the real world interactions, they are better done in top level testing. If efficiency of top level testing is an issue – due to long sim times or lack of control and/or visibility – targeted block level testing becomes the alternative.

Drawing the line through the second order testing is the tough part. I think designers take responsibility for most of these. When a designer is done writing a module with TDD, there should be high confidence that it’s robust in isolation. I think verification engineers could take some responsibility as well, though, by complementing the unit tests with block level tests as required. The block level tests would look similar to what we build now and would be written for design features complicated enough to require more effort and a second perspective.

Neil’s point: With the added emphasis on unit testing, my feeling is that block level test suites of the future will be just a subset of what we build now since we’ll already have verified some of what we currently do at block level.

Bryan’s counter point: I’m not so sure… I think we’ll still need the more complicated, involved block level testing. It’s just that we won’t be spending so much time debugging the little 1st order stuff (on the design and verification side if both are doing TDD), so we can invest more time/energy in developing better 2nd and some 3rd order tests. I’m not convinced it will be a subset. In fact, it may be a superset of what we do now because we’ll have more time to develop more complicated tests to explore the dark corners in the design.

Regardless of where the line is drawn in terms of responsibility for second order testing, collaboration and cooperation between design and verification engineers will be key to ensuring testing needs are met.

To summarize our new responsibilities, let’s start with some general rules in terms of <who> is responsible for <what> when it comes to testing in a new verification paradigm that includes TDD. Here are three that I think could be helpful:

  1. Designers use TDD to build simpler, more robust code with tests that verify all 1st order design functionality;
  2. Verifiers create top level/integration tests – or block level tests where efficiency becomes an issue – to verify all third order design functionality; and
  3. Designers and verifiers work together to see that 2nd order design functionality is verified through a combination of TDD, block level and top level tests.

Let’s apply those rules, see where they lead and adjust them as we learn. We’ll look at one last general aspect of how everything fits together in the next post.

-neil

Q. How do you see the roles of design and verification change with the introduction of TDD?


About nosnhojn

I've been working in ASIC and FPGA development for more than 13 years at various IP and product development companies and now as a consultant with XtremeEDA Corp. In 2008 I took an interest in agile software development. I've found a massive amount of material out there related to agile development, all of it is interesting and most of it is applicable to hardware development in one form or another. So I'm here to find what agile concepts will work for hardware development and to help other developers use them successfully. I've been fortunate to have the chance to speak about agile hardware development at various conferences like Agile2011, Agile2012, Intel Lean/Agile Conference 2013 and SNUG. I also do lunch-n-learn talks for small groups and enjoy talking to anyone with an agile hardware story to tell! You can find me at neil.johnson@agilesoc.com.

3 Responses to TDD In Hardware Development: Who Does What?

  1. Adam Rose says:

    I am following your posts with interest.

    One question – which relates to other discussions we have had – is the role of testplans and coverage in TDD. Are testplans and coverage overkill for your average TDD? Are there different answers depending on whether it’s (1), (2) or (3)? If some tests do use testplans and coverage and others do not, then is there a danger of duplication of effort? And is this potential duplication a bad thing or not?

    • nosnhojn says:

      That’s good to hear. You’ll see that we need more thought put into this so the more people that are interested, the better.

      I wouldn’t suggest TDD be used to create any new bureaucracy. To start, we could look at the responsibilities of design and verif engineers as being exactly what they are now, the only difference being the designers are putting out higher quality code because they’re using TDD. There’d be no new test plans, coverage models or any other burden placed on the designers. If higher quality code is the only thing we get from TDD, we’re ahead.

      A step further now, if we want to rely on the unit test suites for more, then we’d have to find the right fit. From 1st order testing (the unit test suite), one thing that could be exported and become part of coverage results would be code coverage. Again, no added bureaucracy for the designers, and the code coverage gathered from the unit tests would probably be a useful complement to the results verif engineers collect from testing at block/subsystem/top. When we get to 2nd/3rd order testing, admittedly, that’s where things go a little grey for me. Teams would have to find ways to formally transition some of the 2nd order responsibilities to designers. I’d say for now that even informal additions to the coverage model in the form of assertions and/or embedded coverage groups would be a good start. Extracting embedded documentation from the unit tests could be something else. Regardless of what we think might be possible, it’ll take practice to understand what’s practical.

      As for the issue of duplication… I’d assume it’d exist simply by virtue of the same line of code being verified in unit tests (by designers) and integration tests (by verif engineers). As a team gains experience and develops mechanisms where they can formally rely on the unit tests for sign-off (which takes us back to your original question re: testplans/coverage), then maybe it makes sense to avoid some of that duplication. But even if that doesn’t happen, this is the type of duplication that could take less overall time (i.e. tdd prevents bugs and saves time downstream) so I don’t see it as being that bad.

      To summarize, I think verif engineers remain entirely responsible for quality (test plans and coverage models) to start. After designers are comfortable with TDD (and producing higher quality code), then we look for ways to formally share responsibility for quality with designers.

      Feel free to voice your opinion and/or opposition!

      neil

  2. Tobias says:

    Neil,

    Good posts as always. I fully agree with the idea of 1st order unit level testing from a designer. At first look it seems like overhead, but I strongly believe it makes development more lean as less “waste” is delivered into 2nd/3rd order verification. It also helps the verification engineer to focus more on customer-view testing rather than duplicating RTL in verification environments. I see a lot of value in formal verification for 1st order testing. The challenge is to break the design/verification separation we preached for years.

    How do you see the role of the specification folks in TDD? Maybe worth another article ;)

    Cheers, Tobias
