Test Your Own Code! (I’ve Got Better Things To Do)

November is TDD month on AgileSoC.com. This is the first of a few installments where we look at the potential of Test Driven Development (TDD) in hardware development.

If you’re unfamiliar with TDD, the first and most obvious thing you’ll notice is that TDD breaks a boundary that many hardware teams – ASIC teams in particular – consider absolutely unbreakable.

With TDD, the person who writes the code tests the code.

[Gasp]

Before you rattle off all the reasons why it doesn’t make sense for people to test their own code, let me give you some background and attempt to explain where we’re going, both in this post and with our wild scheme to introduce TDD to hardware engineers!

I’ve known about TDD for a few years now. My AgileSoC.com counterpart Bryan Morris presented a SNUG paper in 2009 on TDD and the SVUnit framework he and his co-author Rob Saxe put together. I’ve done an Eclipse/Java/TDD tutorial that I found useful. I’ve also read a little about it and I know it’s very widely used in the software development community. The signs were all there but none of the above were convincing enough to get me to use TDD.

In August, at the Agile2011 conference in Salt Lake City, that changed.

On day 2 of the conference I sat through a tutorial delivered by TDD expert James Grenning. Seeing an example of how he practices TDD and being able to ask questions jarred a few ideas loose and opened my mind. For the rest of the conference, I wandered around – aimlessly at times – sketching a mental picture of what hardware development could look like with TDD and scrounging for ideas from others (namely Elisabeth Hendrickson who got me thinking critically about the similarities between software and hardware testing).

It’s time to start putting that mental picture to paper.

I’ll start with the disclaimer that just because TDD works in software development doesn’t automatically mean it’ll work in hardware development. That said, it’s hard to ignore that the motivation for using TDD would be very similar between software and front-end hardware development. So let’s look at why it works in software, and see if we’re reasonable in hoping for the same benefits when we apply it to hardware development.

The simplest benefit of TDD is that designers immediately validate correctness through very short test-and-code feedback loops. In the tutorial James put on, a feedback loop could be as short as a minute or less, and it goes a little like this…

  1. Write some test code, run it and watch it fail (because the code it’s testing doesn’t exist yet).
  2. Write just enough design code, run the test again and watch it pass.
  3. Repeat steps 1 and 2 until done.
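The loop above can be sketched in plain Python as a software stand-in (the CircularBuffer class and its methods here are hypothetical, loosely modeled on the buffer example from James’s tutorial rather than his actual code):

```python
# Step 1: the test is written first. On the first (red) run,
# CircularBuffer did not exist yet, so this test failed.
def test_new_buffer_is_empty():
    buf = CircularBuffer()
    assert buf.count() == 0

# Step 2: write just enough design code to make the test pass (green).
class CircularBuffer:
    def __init__(self):
        self._count = 0  # just a counter; no storage is needed yet

    def put(self, value):
        self._count += 1

    def count(self):
        return self._count

# Step 3: add the next failing test and repeat.
def test_count_tracks_puts():
    buf = CircularBuffer()
    buf.put(42)
    assert buf.count() == 1

test_new_buffer_is_empty()
test_count_tracks_puts()
```

Each pass through the loop adds one small test and only the design code that test demands, which is what keeps the cycle down to a minute or so.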

Not quite believing test-and-code cycles could be that short – which turns out to mean just a few lines of test and design code per cycle – I asked James if the exercise resembled what he’d actually do in practice. His answer was a pretty firm yes: it was close to what he does in practice, and other TDD practitioners in the room agreed. From that, I gathered that one part of TDD is writing and validating code in baby steps… very small baby steps. Instead of writing code for a few days/weeks/months before testing it, you write only a few lines and test them right away. The opportunities for bugs to creep into the code base decrease substantially because TDD helps you find and fix them immediately. Bug prevention is a major benefit, but it’s also just the low hanging fruit.

Better than just validating correctness, TDD is a design technique that helps people think in terms of how a design will actually be used. As you’re thinking about how <feature A> is going to be used, you capture usage scenarios as tests first, then write just enough design code to pass the tests. The change in perspective should translate into more robust design code. It should also help people avoid over-engineering a design, especially with a specific focus on writing just enough design code. Just enough gives you what you need; no more than that. Less complexity. Less code to debug. Less wasted time and effort.

If TDD is completely new to you and you find yourself asking:

“Do you really write the tests first?”… “Does just enough really mean just enough and no more?”

…this comment snippet from James’s tutorial should answer that:

 * With the previous tests passing you should have at most a counter
 * that knows how many integers have been added to the buffer, with
 * hard coded return value for Get()
 *
 * If you have more, delete it now!  It is not tested code, you
 * are supposed to be doing TDD!

That means you don’t write design code until there is a test ready to verify it. It doesn’t get clearer than that; TDD really does mean test-driven.
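To show how literally “just enough” is meant, here is a sketch of the hard-coded Get() state the comment describes, again in Python with hypothetical names (two snapshots of the same class, before and after the next test forces a real implementation):

```python
class Buffer:
    # The only test so far does put(7) and expects get() == 7,
    # so a hard-coded return value is "just enough" to pass it.
    def put(self, value):
        pass  # nothing stored; no test demands storage yet

    def get(self):
        return 7  # deliberately fake


# The next test -- put(13) then get() == 13 -- fails against the fake,
# and only that failure justifies storing the value for real:
class BufferV2:
    def __init__(self):
        self._value = None

    def put(self, value):
        self._value = value

    def get(self):
        return self._value
```

Anything beyond the hard-coded return in the first snapshot is untested code, which is exactly why the tutorial comment says to delete it.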

The tangible results of TDD, in case I haven’t made it obvious, are a complete design unit AND a suite of unit tests. The tests are automated and run regularly, just as most of us already run regular regressions. The granularity of these unit-level tests is finer than what is normally done in hardware development (they apply at the module/class level as opposed to the block/chip level most teams identify and test). The tests aren’t temporary; they live on with the design. If the design is ever changed, the tests are there to ensure nothing breaks.

As far as I’m concerned, the motivation behind using TDD and the reasons for its success in software development can be applied directly in hardware development. Admittedly, it’s taken me a while to come around, but I think I’m finally onboard. The challenge from here will be to mangle the design and test roles so we can find the right fit. That could be tough, but Bryan and I have some ideas that we’ll be posting through the rest of the month to get the discussion started.

To summarize, is it really accurate to say that with TDD, the person writing the code tests the code? If you view TDD strictly as more tests, then yes, that’s all it is (and yes, it does break the “designers shall not test their own code” commandment that most of us have lived by for a long time). But if you view TDD as we do, you see it bringing a new design technique to hardware development (which just so happens to capture design intent as a list of tests). There’s a big difference between those two views.

There is value in adding designer written tests… but there’s more value in an improved design process.

That’s all for this post. Stay with us through the rest of November for more TDD on AgileSoC.com!

-neil

Q. Are the benefits of TDD similar in software and hardware development? Or is TDD a design practice that doesn’t translate?


About nosnhojn

I've been working in ASIC and FPGA development for more than 13 years at various IP and product development companies and now as a consultant with XtremeEDA Corp. In 2008 I took an interest in agile software development. I've found a massive amount of material out there related to agile development, all of it is interesting and most of it is applicable to hardware development in one form or another. So I'm here to find what agile concepts will work for hardware development and to help other developers use them successfully. I've been fortunate to have the chance to speak about agile hardware development at various conferences like Agile2011, Agile2012, Intel Lean/Agile Conference 2013 and SNUG. I also do lunch-n-learn talks for small groups and enjoy talking to anyone with an agile hardware story to tell! You can find me at neil.johnson@agilesoc.com.

4 Responses to Test Your Own Code! (I’ve Got Better Things To Do)

  1. Hi,
    Our experiences with Agile with embedded systems, FPGA-co-processors and SoC
    We have been moving Agile onto embedded systems with co-processors for a couple of years now, with a major article in IEEE Software magazine. We have been teaching this as a practical option, using the tool EmbeddedUnit, within research and in teaching 3rd and 4th year university students doing embedded peripheral interfacing and DSP code development (meeting what the Agile people call non-functional testing — meeting real-time requirements).
    I would like to comment on a couple of statements from the web-page.
    1) Write some test code, run it and watch it fail (because the code it’s testing doesn’t exist yet).
    That’s true, but that’s not the real picture. The correct statement should be
    1) Write some test code, run it, and CHECK THAT IT FAILS. If the test passes, then it’s a poor test, because the code it’s testing does not exist yet. So basically, the test failing is a ‘test of the test’.
    Here’s an example — Assume DSPAlgorithmCPP( ) is a working program and DSPAlgorithmASM( ) is an assembly language stub that will later become an optimized real-time version of the CPP and DSPAlgorithmFPGA( ) is a CPP stub that will later transfer info to a FPGA co-processor
    actualCPP = DSPAlgorithmCPP( );
    actualASM = DSPAlgorithmASM( );
    actualFPGA = DSPAlgorithmFPGA( );
    CHECK(actualCPP == actualASM);
    CHECK(actualCPP == actualFPGA);

    These tests will pass — with no code written — because the return value from DSPAlgorithmCPP is stored in a register and not destroyed before the calls to the stub functions. The stubs accidentally return the correct values, and the tests pass.
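    The accidental-pass pitfall is easy to reproduce in any language. Here is a deliberately contrived Python analogue (the function names and the sum-of-squares algorithm are illustrative stand-ins, not Mike’s actual code):

```python
def dsp_algorithm_ref(samples):
    # Working reference implementation (illustrative: sum of squares).
    return sum(s * s for s in samples)

def dsp_algorithm_optimized(samples):
    # Stub for the not-yet-written optimized version.
    return 0

# Unlucky test data: the stub's placeholder result happens to match
# the reference result, so this "test" passes with no code written.
samples = [0, 0, 0]
assert dsp_algorithm_ref(samples) == dsp_algorithm_optimized(samples)

# Watching the test fail first, with data the stub cannot fake,
# is the 'test of the test':
samples = [1, 2, 3]
assert dsp_algorithm_ref(samples) != dsp_algorithm_optimized(samples)
```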

    You also say
    2) Write just enough design code, run the test again and watch it pass.
    3) Repeat steps 1 and 2 until done.

    Again, true but not true

    In practice, once you have done steps 2 and 3 about 30 times, you have a working prototype (very Agile) and a real mish-mash of code. Then comes the Agile bit I have difficulty handling — you have to REFACTOR the code to make it more maintainable (readable, extendable, etc.) — and use the tests to make sure the refactored code still works as planned.
    So what’s the hassle? The answer is this. There is another Agile catch-phrase — tests should be written in the same language as the code — which makes sense: why learn two different things, one language for code and another for tests?
    We are ‘using’ Agile to develop medical devices using an eXtreme Programming Inspired (XPI) life cycle.
    Stage 1 — Talk with customer
    Stage 2 — Develop Matlab code and test
    Stage 3 — Move matlab code and test down to C++ simulator and add more tests
    Stage 4 — Move all code and tests on real hardware — turn on the compiler optimizer and add tests for real time performance
    Stage 4A — Move into custom assembly code where necessary
    Stage 5 — Move critical code into FPGA co-processor

    Trouble is — the refactoring needed to make Stage 2 do-able (using more than a few lines of code) is probably EXACTLY WHAT IS wanted at Stage 3 to make the code translation easier to do, and EXACTLY WHAT IS NOT WANTED by the optimizing compiler. Basically, I have to add a very non-Agile un-refactor stage to get real-time performance.
    Anyway, just a few thoughts on our own experiences in moving Agile into a new environment. We say — it’s obvious that a lot of Agile is very usable in an embedded/SoC world — it’s just not always obvious how. However, it’s early days yet, and a lot of this material will become obvious once a few good examples are there.

    Regards
    Mike Smith

  2. Pingback: TDD In Hardware Development: Does ? | AgileSoC

  3. Pingback: TDD And A New Paradigm For Hardware Verification | AgileSoC

  4. Pingback: Functional Verification Doesn’t Have To Be A Sideshow | AgileSoC
