Agile2015

This was my 4th time at the big agile conference. Each of the 3 previous times I’ve written up a report (or 2), mainly for hardware folks that don’t have the chance to attend. I’m obviously a big believer in the notion that we can borrow a lot of great ideas from the world of agile software development. Since I’m one of the few from our industry that attend I feel a little obligated to pass on what I’ve learned. Plus the writing is a good way for me to line up my notes and pictures so I don’t forget anything!

I’ve learned to expect 5 pretty intense days that add up to a tiring week. But it’s good tiring in that I always come away with a couple things to think about. This year was no different. Here’s how the week went for me…

Story Mapping

High on my list were the sessions on story mapping. Story mapping is a technique I’d heard lots about already, though strangely I’d never found myself in the mindset to dig deeper. So last week I made a point of taking in 3 story mapping sessions, and together they convinced me this is another technique we can learn from. The first was Elliot Susel’s Monday morning session, A (Story Map) is Worth a 1000 Words. Second was David Hussman on Tuesday with Example Driven Development: Creating Story Maps with Examples. Last was User Story Mapping: Don’t Lose the Big Picture on Wednesday afternoon from Jeff Patton. Interestingly, the order seemed to matter to me. The material was similar but each presentation seemed to build on the one before it. Not sure the details were actually that different; maybe it was the way each was explained, or maybe it was just the repetition because I was new to it! Who knows.

The general idea of story mapping is to build a visual of your product. The visual becomes a model for how people will use and interact with your product. As I’ve found with most agile techniques, the method itself is pretty lightweight and simple with nothing in the way when it comes to thinking through the application.

To build a story map, you start by jotting down a brief description of your product and identifying potential users. I think I remember the recommendation being 3 users. I can see needing 2 at a minimum to guard against pigeonholing yourself. Take one of the users and come up with a few general examples of how they’d use your product. To get organized, you can put the product description up in the top corner, users across the top and examples down 1 side. That’s the setup.

Next is to write down all the possible interactions someone could have in one of the example scenarios. To me this was like brainstorming, capturing everything possible. Then you take the stack of possibilities and lay them out in order of when they’d happen. In some cases the order may be obvious, other times not so much. It sounded like Jeff Patton was suggesting you lay out everything without thinking too deeply at first, then rework it from there. Another suggestion he had was that, as a group, it’s best to lay things out quietly as individuals, moving cards around as you see fit, and leave the discussion for the rework.

Part of laying out the initial map is to group similar actions together as though they were alternatives or variations for the same action. I think these alternatives ultimately become different pathways through the map. I also gathered it was best to order the alternatives according to difficulty and/or importance (I prefer difficulty because I generally like to start with the easy stuff and build on it).

Finally, the groups are labeled as activities. So the final story map has activities listed left to right (yellow), under each activity is 1 or more functions (orange) and each function may have 1 or more alternatives (blue). Altogether, the story map shows the flow of interactions. The photos should give you an idea of what things look like.

In terms of applicability to hardware development, I’d like to go through the exercise of doing a story map for an IP… maybe the Ethernet MAC we’re using for our internal agile training… to see if the visual can guide our development the same way it does for software teams.
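To make that a little more concrete, here’s a minimal sketch of how I picture the structure, written as a Python data structure. The Ethernet MAC users, activities, functions and alternatives are placeholders I made up for illustration; they’re not from any of the sessions.

    # A story map as a data structure: activities (yellow) across the top,
    # functions (orange) under each activity, alternatives (blue) under each
    # function. All names below are made-up placeholders for an Ethernet MAC IP.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Function:
        """An orange card: one step a user takes, with blue alternatives below it."""
        name: str
        alternatives: List[str] = field(default_factory=list)  # ordered easiest first

    @dataclass
    class Activity:
        """A yellow card: a group of related functions, read left to right."""
        name: str
        functions: List[Function] = field(default_factory=list)

    @dataclass
    class StoryMap:
        product: str
        users: List[str]
        activities: List[Activity] = field(default_factory=list)

        def walk(self):
            """Print the map top-down, the way you'd read the wall of stickies."""
            print(f"Story map: {self.product} (users: {', '.join(self.users)})")
            for activity in self.activities:
                print(f"  [{activity.name}]")
                for fn in activity.functions:
                    alts = ', '.join(fn.alternatives) or 'no alternatives yet'
                    print(f"    - {fn.name}: {alts}")

    # Hypothetical example: how a verification engineer might interact with the MAC.
    mac_map = StoryMap(
        product="Ethernet MAC IP",
        users=["verification engineer", "RTL designer", "SoC integrator"],
        activities=[
            Activity("configure the MAC", [
                Function("set the speed", ["10/100", "1G"]),
                Function("program the MAC address", ["via register bus", "via straps"]),
            ]),
            Activity("move traffic", [
                Function("send a frame", ["minimum size", "jumbo", "bad CRC"]),
            ]),
        ],
    )
    mac_map.walk()

The wall of stickies is obviously the real artifact; the sketch is just to show the activity → function → alternative hierarchy.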

Lean Startup

I read The Lean Startup a couple of years ago and found some good ideas that have helped guide the evolution of SVUnit (particularly when it comes to building new features and immediately deploying them for feedback, as opposed to hunkering down to build a bunch of stuff I think is cool only to have our users completely ignore it). Book learning is good, but the conference was the first chance I’ve had to hear real experts talk about their experience with lean startup. David Bland’s talk Organizing for Innovation on Wednesday morning was a highlight I was kind of expecting (I follow David’s writing/activity via Twitter so I went in with high expectations). A pleasant surprise, though, was Lean Startup Mistakes Stopping Successful Experiments from Erin Beierwaltes and Colleen Johnson in the 9am time slot just before. I didn’t have that session marked initially but I’m quite glad I sat in.

Erin and Colleen guided us through an exercise of building a lean canvas for a new product feature. Both product and feature were real, but the feature apparently turned out to be a miserable failure… which I guess is what made it good for the exercise :).

Here’s a snapshot of the canvas template we used. Hard to see in the photo, but each box is labeled and has a brief summary just detailed enough to get the point across.

The exercise was about taking the starting template and working as table groups to identify its weaknesses and discuss improvements. Much of the improvement revolved around identifying bad assumptions and suggesting objective metrics that help differentiate between success and failure. Considering lean startup relies heavily on experimentation and short learning cycles to guide product development, having concrete metrics is fundamental to how teams validate or invalidate their product assumptions.
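For anyone who hasn’t seen one, the standard Lean Canvas boxes are problem, customer segments, unique value proposition, solution, channels, revenue streams, cost structure, key metrics and unfair advantage. Here’s a minimal sketch of a filled-in canvas as plain data; the entries are placeholders I made up for a hypothetical verification IP, not the feature from Erin and Colleen’s session.

    # A lean canvas captured as plain data. The keys are the standard
    # Lean Canvas boxes; every entry is a made-up placeholder for a
    # hypothetical verification IP, not the feature from the session.
    lean_canvas = {
        "problem":            ["UVM ramp-up is slow for small teams"],
        "customer_segments":  ["2-5 person verification teams"],
        "unique_value_prop":  "a working testbench on day one",
        "solution":           ["generated bench skeleton", "bundled protocol checks"],
        "channels":           ["blog", "conference talks", "word of mouth"],
        "revenue_streams":    ["per-seat license"],
        "cost_structure":     ["development time", "support"],
        "key_metrics":        ["benches generated per month", "time-to-first-test"],
        "unfair_advantage":   "existing SVUnit user base",
    }

    # The point from the session: make the key metrics objective enough that
    # an experiment can clearly succeed or fail.
    for metric in lean_canvas["key_metrics"]:
        print(f"measure: {metric}")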

David’s talk, as expected, was quite good. As with the earlier session, we did exercises within our table group, and luckily there wasn’t much overlap so the two sessions complemented each other nicely. At our table, we were supposed to create a disruptive new technology that targeted a high-retention, low-satisfaction incumbent. Our idea was a rival product to Microsoft Word (as soon as we heard “high retention, low satisfaction”, half the table immediately thought Microsoft).


We identified ways to eliminate and reduce parts of the original solution then improve and create aspects that make our product better. David’s example involved Netflix and how it differed from the traditional (at the time) solution from Blockbuster.

David led us through a follow-up discussion around identifying risky assumptions about our new product. The assumptions apply to acquiring new users, having them activate the product, retaining users, having users generate referrals and then generating revenue from users.
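That list is the familiar acquisition/activation/retention/referral/revenue funnel. As a sketch of the exercise, here’s how one risky assumption per stage might look for our hypothetical Word rival; the assumptions are examples I made up, not anything from David’s session.

    # One risky assumption per funnel stage (acquisition, activation,
    # retention, referral, revenue) for the hypothetical Word rival.
    # Every assumption below is a made-up example.
    risky_assumptions = {
        "acquisition": "frustrated Word users will click through from a blog post",
        "activation":  "they can import an existing .docx without losing formatting",
        "retention":   "they come back after the first week",
        "referral":    "they share documents with co-workers who then sign up",
        "revenue":     "teams will pay for shared templates",
    }

    # Test the riskiest assumption first with the cheapest possible experiment.
    for stage, assumption in risky_assumptions.items():
        print(f"{stage:>11}: {assumption}")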

Erin, Colleen and David got me thinking critically about the kinds of products we build in hardware, particularly design and verification IP. By lunchtime Wednesday, I was thinking the exercises I’d learned through the morning would be outstanding for the early what-should-we-build discussions that IP teams go through (or don’t go through). Specifically, I think teams would examine their early decisions much more closely and also be more focused on how they can have real customers validate (or invalidate) those decisions.

(As an aside… they also got me thinking that I’d like to take a shot at developing a real, web based software product. Honestly. They all talked about the valuable lessons they’ve learned through failure. I’ve always wanted to try something completely different. And I figure I can fail as good as anyone so there’s bound to be some great lessons out there for me!)

Other Useful Stuff

Two other notable sessions for me involved Calgary-based presenters. Mike Griffiths had a talk called Eat Risks for Breakfast that I only got to partway through. Mike is a notable member of the agile project management community; I think the only reason I didn’t get there earlier is that I tend to shy away from the higher-level project management talks, preferring the more technical talks instead. But late was better than never in this case because Mike showed some easy risk identification/analysis/management techniques that easily apply beyond the project management domain. Favourites were the Doomsday Clock and Karma Day exercises. Mike also emphasized opportunity as a kind of risk, so it wasn’t just about recognizing what could go wrong but also spotting chances to do right.

Janet Gregory (also of Calgary) and Lisa Crispin had Introduction to Agile Testing: Everyone Owns Quality on Tuesday morning. Being a test talk, it covered strategies that felt right to me: that teams should commit to a level of quality; that testing is an activity done by everyone on the team, not a phase of the project left to just the testers; and that everyone on a team has an expertise (e.g. I’m a verification engineer) but the roles blur and overlap for collaboration and continuity between disciplines (e.g. I’m a verification engineer who can also write RTL if need be). The session wrapped up with a question about split roles between design and test and whether that’s a good idea. Janet’s response was that, for small teams at least, embedding testers within the development team is the better idea. That’s something I’d like to see us try in hardware development :).

Finally, the opening keynote from Luke Hohmann was pretty good. Like any good keynote, the theme was bigger than any one industry. Watching the keynote on the Agile Alliance site is much better than anything I could write about it, so I’d definitely recommend that. To give you the gist of the talk, Luke showed examples of how people from the agile software community have reapplied what they’ve learned as socially responsible members of their communities. Luke did a great job opening the conference. There was also the Wednesday keynote from Jessie Shternshus called Individuals, Interactions and Improvisation. Jessie’s background is improv and she taught us a few exercises to help us think on our feet. Not sure how it’d come across in the recording but it would be worth checking out because it was definitely a fun session in person. Really good energy in the crowd.

In the “other useful stuff” category there was also me on Thursday with TDD for Embedded Systems… All the way Down to the Hardware. I had my hardware TDD demo, we had lots of good questions and I think people enjoyed it. An excellent experience for me once again; glad I proposed a session and was happy for the chance to be on the program.

(The picture is of a couple of guys who helped me with signs in the common area.)

“Old Friends”

Last but not least, this was the first year I felt like I’ve been making old friends without really knowing it. More than once during the week I heard people describe the agile community as one that’s very inclusive and encouraging. Lots of the same people. You pick up conversations from where they left off.

Anywho, for hardware developers interested in agile and needing or hoping for a helping hand, this is your group. Reach out because someone is guaranteed to respond. And if you’ve been experimenting with agile hardware and you’re wondering if what you have is conference worthy, I encourage you to stop with the wondering and go for it with a proposal for next year. Of course, if you’ve got a week next August to be in Atlanta for 2016, I’d recommend the conference. You’re sure to learn a lot and have an excellent 5 days.

9 thoughts on “Agile2015”

  1. Thanks Neil! This conference looks awesome! I’ve been applying agile techniques to verification since ’98 or so, but I had no idea there was a conference!

    Regarding your web based application. Is it by any chance going to be a verification web-based app?

    How deep an embedding between testers (I assume testers are our verification engineers?) and designers were the speakers discussing? This seems like our verification-and-design-in-parallel SOP. Are they proposing a tighter integration?

    Thanks again!

    1. hamilton, there’s about 1000 agile conferences! this is one of the biggest, but there are many other smaller/regional conferences. a lot of monthly meet-ups as well (that’s how I got started) depending on where you are, so if you’re interested, do a little digging and you’re sure to find some people to talk to/learn from.

      it will be a web-based verif app, details still tbd. I wanted to try something totally different yet for a purpose I’m familiar with. we’ll see how it goes :).

      re: embedding verif with design… this was a day-to-day, close contact arrangement and not the parallel development arrangement we’re used to. people with 2 areas of expertise but without the separation in responsibility. basically working side-by-side, step-by-step.

      thanks for following along!

      -neil

  2. Cool deal! I didn’t know about the meetups either. I’m guessing in the Bay area there’s probably a high density of these.

    The side-by-side idea is kind of awesome and kind of not. On the one hand, it’s strikingly similar to pair programming, which I’ve had a lot of success with. It also overcomes the ‘I can’t call the designer, I’ll bother him/her’ issue that frequently comes up. On the other hand, the similarity to pair programming mucks with the ‘two independent sets of eyes on the specification’ verification concept. I could also see the arrangement perpetuating the verification engineer as ‘almost a design engineer’ myth. Did any of this come up during the session (in the analogous software form)?

    1. yes… in the bay area I don’t think you’d have much trouble finding a monthly group with decent speakers. the monthly meet-ups in calgary really got me going so I’d recommend looking around. I’m sure people were thinking of losing the independent view of the spec as a disadvantage, but I’m not a fan of that idea. design/verif come together anyway when there’s a discrepancy in perspectives and I believe the separation just delays the inevitable as opposed to adding any value wrt rigour. but that’s just me.

      -neil

  3. I’ve been thinking about the ‘pair’ programming method, but with designer and verification engineer: because while the DE is thinking about “how do I design this?”, the VE is thinking “is this testable? is it robust?”. Since so much VE time is spent trying to verify very untestable logic, getting rid of that stuff before it gets checked in (vs. after) makes a lot of sense.

    Also: given that there are ALWAYS issues with requirements and specs: the VE watching what the DE writes means issues with requirements/specs (or simply misinterpretations) will get caught much earlier. That can save a lot of time as well.

  4. Neil,
    By the way, thanks for taking the time to write the long post about the conference! I wish I’d gone!

    Erik

    1. thanks erik. I like to think that someday there will be an actual hardware crowd at this conference. first step is for people to know it exists 🙂

      -neil
