
09 July 2014

Software Testing World Cup, Test report

Credit
This is based on my team's effort in the European qualifying round for STWC (Software Testing World Cup). The team consisted of Agnetha Bennstam, Sofia Brusberg and me. Everything mentioned in this post is my interpretation of our collective work.

What is the Software Testing World Cup?
STWC is exactly what it sounds like: testers from all over the world competing in software testing. You can read about it on the STWC web site, but here's the bare minimum you need to understand this post:

To get to the finals you have to win a qualifying round. Typically there's one qualifying round per continent, with a maximum of 250 teams per round and 1-4 testers per team. A round is 3 hours long, during which the contestants test an application revealed just before the round starts. Throughout the competition teams can ask questions of the application's product owner, who answers them in a live video channel available to everyone. To get a better idea of how this works, Matt Heusser has shared recordings of the video feeds. At the end, teams are scored mainly based on a test report and the bugs they've filed.

Intro
To make any sense of this post you first need to check out our test report from STWC (in case you're worried: I've checked with the STWC judges that it's okay to share our report).

My hope is that this report can spark some new ideas for other testers and, despite being written under severe time pressure, serve as an example of what a test report can look like.

Overview
Our test report is made up of six sections: Coverage, Bugs, Other risks, Suggestions, Conclusions and Strategy. Generally I think the report turned out pretty well; it's concise (not counting the coverage section), covers what the stakeholder asked for (including improvements) and communicates what we discovered. With that said, there is definitely room for improvement...

Coverage
"What did we actually test?"

The reason for this section is that bugs, risks and suggestions don't say much about what we actually tested, just where we happened to find notable stuff. Coverage is there to ensure further testing is focused on risky and/or not yet tested areas.

Given the limited time I think we did a decent job with the coverage section. Some of the categories are too broad and/or ambiguous (e.g. Search), making it easy to assume testing was performed when it actually wasn't. Also, I think we should have communicated the coverage in a more concise way.

With more time to spend and better understanding of the product I would have liked to add a visual map of the coverage as an overview. Right now I think it's bits and pieces, missing the big picture.

Another way to improve this section would be to give a quick estimate of how confident we are in each area. In our report there's no difference between "we made a quick test in this area" and "we spent a lot of time and effort testing this area", which hurts the section's usefulness.

Bugs
"How bad is it? Any reason to fear the product will explode in the customers' face?"

Lesson learned: add a more business-friendly description and describe the business impact already when reporting the bug! This time we didn't, and thus had to quickly translate bug descriptions while writing the report. The result is unfortunately not very impressive. But I did learn something and, from what we discovered, I think the right bugs are listed.

Other risks
"Apart from severe bugs, what else did we observe/not observe that might affect release readiness?"

I like what we added to other risks, possibly with the exception of the potential availability issue described. I think that one was added because one of us pointed it out during the stressful last minutes, so we included it without really questioning whether it was relevant. Compared to several reported bugs that weren't mentioned, and based on our interpretation of the target audience, it should probably have been reported as a minor (or rather potential) bug instead of being part of the test report. But once again, overall I think this section did turn out nicely.

Suggestions
"Potential future improvements we discovered"

One of my favorite sections in the report. It could have been somewhat more detailed, but it's to the point, relevant and what the stakeholder specifically asked for. I tend to always note down suggestions for improvements, but adding them to the test report was (I think) a first for me. I will definitely consider adding a suggestions/improvements part to future test reports!

Conclusion
"Taking risks into consideration, what's our recommendation going forward?"

From a formatting perspective it's bad to let the page break in the middle of a section. Also, I love the "edit Account editing" (time to proofread the report? obviously not enough). But, looking at the conclusion, I still find it relevant and correct even with some time to reflect. Another thing I like is that it doesn't present the stakeholder with only one option; instead it embraces the fact that we (testers) don't know the market and thus provides information relevant both to a "regular case" and a "we must release now" case.

Strategy
"What resources did we have at our disposal, what limitations did we have to adjust to and what ideas guided our testing?"

Since this report was given to an external customer, we figured a rough plan might help their internal test team, even though it is of course highly context dependent.

If you compare the time plan with my previous blog post you can see we didn't use the last hour to update it. I think it's close enough since we didn't diverge too much, and updating the image would not have been a good way of spending our last precious minutes. However, a quick note on how we diverged from the plan would, I think, have been helpful information. Also, we wrote it in the form "we did", not "we plan to", which is simply wrong. Apart from that, nothing spectacular but hopefully somewhat meaningful.

Coloring
The header colors are there to make the report easier to overview. Apart from the red one for bugs and other risks they aren't very self-explanatory, but I do think they help the reader find the information (s)he's looking for, especially when looking at the report a second or third time. One thing to improve is to make the conclusion stand out more, as I imagine a stressed reader would like to find it immediately. A different choice of colors, or way of using them, might be one way.

Overall
I think we got the overall layout and categories right, most of the content was relevant and, after all, we only had 3 hours to finish planning, testing and reporting! For a 3 hour competition I think it's a well written report despite its shortcomings, which I hope I've highlighted well enough in this post.

To all the other STWC contestants: I would love to hear what you did differently in your reports and how that turned out. I'm eager to learn from it!

Finally: Thank you Agnetha and Sofia!

Lessons

  • Colors are great for adding visual structure and making the report easier to skim/overview
  • Proofread more than once; careless linguistic and grammatical errors hurt the report's credibility
  • Think about the actual business impact/description already when writing the bug reports
  • It's hard to describe, especially in a concise way, what was tested and what was not
  • Don't write things beforehand, guessing how something will turn out; if you do, describe them as a plan
  • Improvements (what we called "suggestions") can definitely have a place in a test report
  • Don't wait until the end to write the report; try to create as much of it as possible in parallel with the actual testing and planning

07 July 2014

Software Testing World Cup, Time management

Credit
This is based on my team's effort in the European qualifying round for STWC (Software Testing World Cup). The team consisted of Agnetha Bennstam, Sofia Brusberg and me. Everything mentioned in this post is my memory/interpretation of our collective work.

What is the Software Testing World Cup?
STWC is exactly what it sounds like: testers from all over the world competing in software testing. You can read about it on the STWC web site, but here's the bare minimum you need to understand this post:

To get to the finals you have to win a qualifying round. Typically there's one qualifying round per continent, with a maximum of 250 teams per round and 1-4 testers per team. A round is 3 hours long, during which the contestants test an application revealed just before the round starts. Throughout the competition teams can ask questions of the application's product owner, who answers them in a live video channel available to everyone. To get a better idea of how this works, Matt Heusser has shared recordings of the video feeds. At the end, teams are scored mainly based on a test report and the bugs they've filed.

The plan
We had a beautiful plan going into the competition:


Personally I have a history of not following plans, so I was surprised when we actually executed pretty much according to it (I give Sofia and Agnetha full credit for this). Here is an adjusted, and somewhat extended, version of what actually happened:


First 55 minutes
After some minor technical issues with the YouTube channel we were up and running. While Agnetha and I mainly focused on the stakeholder's live video channel, Sofia started touring the application. We kept going like this, gradually focusing less and less on the video, until Agnetha mentioned we should stop and debrief what we had learned so far, as well as discuss test strategy and possibly adjust the time plan. I would say Agnetha was our timekeeper for most of the competition.

During the debrief/strategy discussion we identified 3 main areas to focus our testing on, based on the stakeholders' requests:
  • Email functionality
  • Screen resolutions (mobile devices)
  • Possibility to steal accounts (customers)
To make it easy we focused on one area each. As security is one of my better areas I went with the account task, Sofia had already started looking into the emailing functionality so she continued with that, and Agnetha started playing around with the mobile devices we had and various resolutions.

Whiteboard from the competition

Exploring and debriefing
For the next 65 minutes we tested the product, with debriefs after 30 and 50 minutes. When we stopped for the first debrief (once again thanks to Agnetha) we realized all of us were testing user settings even though this was only part of Agnetha's "scope". To some degree this was good because the user settings functionality was rather buggy, but at the same time the bugs we found weren't generally that severe and, once again, this had not been communicated as a high priority by the stakeholders. So after the first debrief I think we got on track. During the second debrief everyone seemed to be in the zone. We kept this debrief short, just briefly summarizing the more severe bugs we had found, what we were currently looking at and whether we needed any help, to make sure the flow wasn't lost.

One big difference between our plan and what actually happened: we didn't pair test nearly as much as we thought we would. I think this was a good decision; there was simply too much to cover in too little time, so we had to spread out our resources.

Wrap up
In the initial plan we thought 30 minutes would be enough to finalize the report, review bug reports etc. Luckily I was a bit nervous about the report (my focus area) and suggested we should start with just under an hour left. After all, we could always keep testing after the report was done.

Let's just say there was no time after the report! I reviewed my bug reports and realized some things were not clear or that I had forgotten to add something, I added things to the test report and often questioned "do we really know that or are we just assuming/remembering it wrong?", which required further investigation, and so on. We also had to rewrite some bug descriptions to be more business-oriented rather than technical. All in all, the last hour went by quickly.

Conclusion
Thanks to Richard Robinson, who invited me to a 3 hour test session during Let's Test 2013, I had an idea of how short 3 hours actually is. Still... 3 hours is not much time! I'm glad we made a time plan before the competition and reviewed it during the competition. Without it I think the limited time would have hurt us much more!

... and it was fun to actually follow a plan; I might try that again in a few years :)

Finally: Thank you Agnetha and Sofia!

Lessons
  • Writing a good bug report takes a lot of time
  • Taking notes is crucial; we could have saved a lot of time when writing the test report with better notes
  • 3 hours is not much time
  • A (good) plan helps a lot when under pressure
  • Debriefs are valuable to continuously set/correct direction
  • Even when alone you can benefit from stopping regularly to review what you're doing
  • Visualizing a plan (like a schedule or the image in this blog post) helps, as it makes the plan easier to relate to (at least for me). I think splitting it up so that the size of each activity is relative to its length would have improved our visualization.