8 Ways To Avoid Ignorance Based Exploratory Testing

Exploratory testing is a term I heard several times at Agile2011 in Salt Lake City a couple of weeks ago. As I heard people talking about it, exploratory testing seemed like something verification engineers should be familiar with. Admittedly, it was new to me, so for anyone else new to exploratory testing, here’s a description from good old Wikipedia:

Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing is dependent on the tester’s skill of inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be.

From that definition and other things I’ve read since Agile2011, it seems that exploratory testing is more easily pondered relative to its antithesis: scripted testing. In scripted testing, all the thought is put in upfront; a tester would do all their research first, build the test plan and then they or someone else would execute it. With exploratory testing, the tester assumes it’s impossible to deduce everything up front and that the goals of the testing will change over time. He/she starts with a more basic plan, then thinks through the test process, learning and shaping the test plan along the way.

With respect to exploratory testing, I think the best bit of advice I heard at Agile2011 came from Elisabeth Hendrickson in her session called Exploratory Testing In An Agile Context. Near the end of her session, Elisabeth said something to the effect of (and don’t take this as a direct quote… just my understanding of what she was saying):

For proper exploratory testing, a tester has to be curious enough to build and refine an understanding of a design. Without an understanding of the design, the tester is doing ignorance based exploratory testing.

I liked the phrase ignorance based exploratory testing very much because I think it’s opened my eyes to a “technique” (and I use that word loosely) that I’ve seen misapplied in hardware verification several times (admittedly, I’ve misapplied it myself). Ever write a random test, have it run overnight and hope that it’s catching the things you haven’t specifically thought of? I see that random test as a transition point where we go from scripted to exploratory testing; we ran everything we thought of to build the scripted test plan and the random test is a final shotgun blast for the things we hadn’t thought of.
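
To make that concrete, here’s a minimal SystemVerilog sketch of the kind of test I mean; the transaction class, module and driver stub are all invented for illustration:

    // A "shotgun blast" test: fully unconstrained random stimulus,
    // run for as many iterations as an overnight regression allows.
    class bus_txn;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      rand bit        write;
    endclass

    module shotgun_test;
      bus_txn txn = new();

      // hypothetical driver stub; a real one would wiggle DUT pins
      task automatic drive(bus_txn t);
        $display("txn: addr=%h data=%h write=%b", t.addr, t.data, t.write);
      endtask

      initial begin
        repeat (1_000_000) begin
          if (!txn.randomize()) $error("randomization failed");
          drive(txn);
        end
      end
    endmodule

Notice there’s nothing in there that steers the stimulus toward unexplored corners and nothing that records where we’ve been.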

While we might think we’re bringing new value to the regression suite, that last random test often amounts to little more than confirmation of what we already know or aimless wandering of the state space (aka: ignorance based exploratory testing).

Settling for that kind of “exploration” is something verification engineers need to avoid. To do that, here are a few guidelines that I think can help us do more effective exploration of design state space.

  1. Remove the strict partition between design and test: this has always been a pet peeve of mine. There’s a broadly held belief in hardware development that too much communication between design and verification will somehow pollute design and test thought processes. That won’t happen when engineers act responsibly, so let’s get over the paranoia. Teamwork works, so let’s encourage it.
  2. Learn from the tests you’ve written: don’t restrict yourself to a scripted test plan. When test results teach you something about a design, consider how that new knowledge should reshape the test plan.
  3. Learn from the bugs you find: if/when you find bugs, don’t just file a bug report and move on. Think about why the bug is there, whether similar types of bugs could be hiding elsewhere, and how to avoid that type of bug in the future.
  4. Think like a user: thinking about a device in terms of how it’ll be used is different than thinking about it in terms of what it is designed to do. Testers should be thinking as users to uncover the interesting operational scenarios that designers can sometimes overlook.
  5. Spend more time analyzing the design: I tend to spend a lot of time looking through design code and I know it’s led to me writing tests that I wouldn’t have otherwise written. I feel that time spent looking at the design gives me the kind of knowledge that helps me flex it (legally) in ways a designer doesn’t expect.
  6. Use whitebox assertions and coverage groups: sometimes the best way to measure whether or not a particular corner case is being exercised is to go directly to the corresponding area of the design and add functional coverage assertions or cover groups (see the sketch just after this list).
  7. Use code coverage for confidence and functional coverage for sign-off: I think most verification engineers are on the same page here, but it’s worth saying again: while code coverage tells you what code has been exercised, it doesn’t tell you what code you’re missing. It’s OK to use code coverage metrics to increase your confidence that a design is being tested thoroughly, but don’t use them to decide whether or not you’re done testing.
  8. Spend more time on exploratory testing: I’ll end with a common sense catch-all. Exploratory testing is an important part of releasing a robust product so it makes sense to spend more time on it. With more time, you can do more exploring. How do you get more time? I have thoughts on that, too… which you should be able to read in the coming weeks.
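
To put something concrete behind points 6 and 7, here’s a minimal sketch of whitebox checks: an assertion and a cover group bound into a hypothetical FIFO deep in a design. All of the module and signal names (dut_fifo, push, pop, count) are invented for illustration:

    // whitebox checks for a hypothetical design FIFO
    module fifo_whitebox_checks #(parameter DEPTH = 16)
      (input logic clk, push, pop,
       input logic [$clog2(DEPTH):0] count);

      // assertion: the design must never accept a push while full
      assert property (@(posedge clk) (count == DEPTH) |-> !push)
        else $error("push while FIFO full");

      // cover group: prove the corner cases were actually exercised
      covergroup fifo_cg @(posedge clk);
        full_cp:  coverpoint (count == DEPTH);
        empty_cp: coverpoint (count == 0);
        push_pop: cross push, pop;  // simultaneous push and pop
      endgroup
      fifo_cg cg = new();
    endmodule

    // attach the checks to the design without touching design code
    bind dut_fifo fifo_whitebox_checks #(.DEPTH(16))
      u_checks (.clk(clk), .push(push), .pop(pop), .count(count));

This speaks to guideline 7 as well: at sign-off you can ask whether full_cp, empty_cp and the push/pop cross were actually hit, which is something line coverage alone will never tell you.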

I’m not an expert in exploratory testing so I could be off in my assessment. But it seems to me the key to effective exploratory testing (and a robust product) is the tester’s understanding of the design and the corners of its legal state space.

Let’s put less emphasis on following the script and spend more time learning about what we’re verifying.

-neil

2 thoughts on “8 Ways To Avoid Ignorance Based Exploratory Testing”

  1. A constrained-random test approach should satisfy the exploratory requirement, especially if you can adjust the constraints based on feedback from functional coverage results (roughly sketched below). An evolving test plan is essential, since most of the time the design phase and verification phase overlap considerably, and the design is not fully known when verification starts.
    Close collaboration between designers and verifiers will certainly lead to better testing, as long as the verifier retains a healthy level of skepticism. Ideally, the verification plan should include both white-box and black-box test approaches.
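
    Roughly, the feedback loop I mean could look like this sketch (the class, cover group and address ranges are all invented; the idea is just to flip a bias once the current bins saturate):

      // coverage-feedback sketch: bias address randomization toward the
      // high range once the low range is saturated
      class axi_txn;
        rand bit [31:0] addr;
        bit prefer_high = 0;  // state variable flipped by the feedback loop

        constraint addr_bias_c {
          prefer_high  -> addr dist { [32'h8000_0000:32'hFFFF_FFFF] := 9,
                                      [32'h0000_0000:32'h7FFF_FFFF] := 1 };
          !prefer_high -> addr dist { [32'h0000_0000:32'h7FFF_FFFF] := 9,
                                      [32'h8000_0000:32'hFFFF_FFFF] := 1 };
        }

        covergroup addr_cg;
          coverpoint addr {
            bins low  = {[32'h0000_0000:32'h7FFF_FFFF]};
            bins high = {[32'h8000_0000:32'hFFFF_FFFF]};
          }
        endgroup

        function new(); addr_cg = new(); endfunction
      endclass

      task automatic run_with_feedback(axi_txn txn);
        repeat (10_000) begin
          void'(txn.randomize());
          txn.addr_cg.sample();
          // two equal-weight bins, so 50% coverage means one is saturated
          if (txn.addr_cg.get_coverage() >= 50.0) txn.prefer_high = 1;
        end
      endtask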

    1. Paul, thanks for the comment. I think we’re in agreement here assuming that an evolving test plan implies an evolving coverage model. If everything in the coverage model has been hit and we’re writing one last random test anyway, I think that’s a sign we’re not sure what we’re looking for.

      As a side note… I’m going to challenge your wording just slightly where you have “as long as the verifier retains a healthy level of skepticism”. Again, I think we’re on the same page, though to me skepticism implies the barrier between dev and test remains through the collaboration. I like “as long as both dev and test act objectively and responsibly with a healthy level of curiosity”. I think the barrier is gone there and it implies both dev and test are responsible for quality, not just test. It’s hard to convey the difference in a written comment, and I’m probably reading too much into it, but I thought I’d mention it anyway.

      Thanks for reading and thanks again for commenting!

      neil
