01 October 2012

Using Fun as your guideline

This post has been rewritten; please read the new Guided by Fun post first.

A very interesting statement from a colleague after my presentation on Exploratory Testing long ago:
"Documentation is generally not fun, so focusing on fun would leave us with chaos."

Last spring I started on a new team as part of an organizational change. The goal was to create something less cumbersome and depressing than our current waterfall implementation. In our old way of working, testers had time to prepare detailed test cases with instructions and God knows what other information.

Things quickly went bad. We found ourselves in a situation where a poorly written strategy was all we had while code was handed to us: "Please test, so we don't have to wait for months before faults are discovered and have to dig into code we already dislike (like we did before)," pleaded the programmers.

So we did the only reasonable thing: We tested, we tested and we tested. Life was great! No public documentation! Success!

Time went by and the first handover was made between me and the other tester in the team due to vacations. Chaos! I knew to some extent what had been tested, but I couldn't really say whether anything was "done". Soon after, he handed the work back to me as he went on vacation. Chaos once again. The team even started to ask us for test cases just to get some kind of idea of what had been tested.

Was it fun spending all our time testing? No! Why? Because it led to tons of frustration and a feeling (a well-motivated one) of not being professional, as we couldn't answer fundamental questions about our work.

Fun as guideline
Today we use fun as our main guideline in our internal test process.

These are our criteria for fun:
  • More time for testing = more fun 
  • Understanding what we test = more fun 
  • Knowing what's been done = basic requirement for fun 
  • Knowing where we're heading = basic requirement for fun 
  • Being able to answer questions about our work = basic requirement for fun 
To summarize: on top of actual testing (including any learning we need to do), we put a layer of administration, kept as small as possible, to reach the basic requirements for fun.

In practice
  • More time for testing
    Exploratory testing, removing anything that does not provide value to us, asking stakeholders what questions they want us to answer rather than what documents they want.
  • Understanding what we test
    Experience database, ask questions, ask for presentations, involve programmers in testing discussions, read, discuss, pair testing.
  • Knowing what's been done
    Test matrix (I'll explain that in a later blog post), SBTM inspired team internal test management (still have a lot to learn about SBTM though) and (at least) weekly test discussions
  • Knowing what's next to test
    Compact test strategy, test matrix and (at least) weekly test discussions
  • Being able to answer questions about our testing
    Test matrix, bug tracker, backlog, test strategy
Things we feel we've achieved
  • We are more motivated than before 
  • We are a lot more efficient than before 
  • We know what we're doing to a much higher degree (surprising result!) 
  • We constantly learn things about testing and our product (= more fun) 
  • Less frustration 
  • We understand the code better and can more often write our own corrections / add testability ourselves (in our case this is a bit special, since our product supports injecting assembler instructions at runtime). 
  • The test matrix creates more interesting questions than the test case progress reports ever did (once again: I'll write a post on our Test Matrix later) 
  • Minimal waste upon changes. This makes it far easier to work in an iterative way 
  • More lightweight administration makes test more available to programmers 
  • Quicker feedback to programmers 
  • So far we think we've found more important bugs this way, especially important bugs that are indirectly a consequence of what we've implemented (impacts we could not have anticipated) 
  • We take responsibility for our testing to a much higher degree ("we probably triggered that case" is not enough anymore; instead we learn how to trigger it, implement the necessary testability, or admit it's not tested and explain why). 
  • We've stopped considering issues as something we need to "revert from"/work around and instead consider them a chance to learn and improve. We've also learned that we can't really know for sure what we actually need before we've lost it: a valuable lesson.
