10 March 2013

EAST meetup - Trying a new format

What is EAST?
Short version: EAST is a local network for software testers in the area where I live (Linköping, Sweden). About once a month we meet to share thoughts, learn and socialize/network. If you want to know more, you can read my post from my first EAST meetup.

New Format / Concept
Most previous meetups have followed the same format: start with food and socializing, then 1-2 presentations, each followed by a discussion. A few other formats have been used, like letting each participant briefly describe how testing is done at his/her company (to get everyone involved), testing games, and a pub evening with just beer and socializing. This time some hands-on testing was on the menu.

Webinar
As usual we started with food, but instead of casually chatting with each other we watched the webinar Six Thinking Hats for Software Testers by Julian Harty (referring to Edward de Bono's thinking hats). The webinar was interesting but I missed the social part. Maybe a 5-15 minute webinar would have worked better for me.

Hands on
The second part was hands-on. First David, the facilitator, quickly presented the application we were to test and how to get it. The application in this case was an app for taxi drivers to help them keep track of their work (part of a full system for taxi companies). Then we would, in groups or alone, test the application using the six thinking hats in any way we saw fit.

At first a former colleague from Ericsson and I started out exploring/scouting the app, but we soon felt we weren't getting any momentum and joined another group of three. A lively discussion on how to use each hat and what fit where ended with us being ready to actually start generating test ideas about 5 minutes before the deadline, but that didn't matter: a lot of interesting stuff came out of the discussions.

Sharing results
When we got back together we started sharing ideas and experiences from the hands-on part. It immediately became clear that the various groups had used the hats very differently.

We attacked the whole app and used the various hats from the perspective of an actual user (or rather our made-up view of what it's like to be a taxi driver; lesson learned: a better understanding of the user, for instance their needs, knowledge and general behavior, would have helped a lot). This perspective gave us an interesting high-level view, but it was a bit hard to stay focused as we often drilled down too deep, exposing too many details. Others focused on a smaller part of the app, which in this case seemed a lot more efficient.

Of course, how to use the hats would depend on the mission in a real-life scenario, but here there was no such mission: the goal was simply to try out the hats and learn from each other's ways of approaching them. For that purpose our perspective took a little too long / became a bit too complex, at least for beginners like us.

Wrap up
To wrap things up we had a more general discussion about the hats and format.

A few thoughts about the hats:
  • In this scenario we had no problem coming up with ideas quickly, and the hats initially felt more limiting than helpful. In a situation where we had been stuck, or had at least worked with the product for a bit longer, the hats would probably have been more helpful. This might also change as we get better at using the hats, but only time will tell how that turns out.
  • Continuing to use the hats on items already identified would probably yield interesting results. For instance, we could first use the white hat to identify a data item, say cars in the taxi app, then use the hats on this item to identify risks (lack of cars), opportunities (more efficient use of cars), new data (various types of cars), feelings (the urge to drive fast), "darkness" (a breakdown with a passenger in the car) and ideas (what if the cars were any kind of vehicle, like bicycles or trucks; what other users could then benefit from the app?). Continuing to apply the hats to each new item added would help us dig really deep.
  • The hats would be interesting to use on the product elements or quality criteria categories described in the RST Appendices.
  • You should be able to visualize the findings quite effectively using a mind map.
  • The hats could be a great tool to help refine, populate or create a model.
  • Someone suggested the hats could be useful for explaining the various parts of testing to, for instance, a programmer (I'm not sure I understood this correctly, but as I interpreted it, they could help them understand the wider "find valuable information" perspective of testing rather than the narrow "find faults" perspective).
  • If you're not stuck yet, starting with no particular hat and just adding stuff wherever it fits might be a more efficient way to begin (this could change once you're more used to the hats).
  • The hats were helpful for finding more non-functional aspects to test.
A few reactions to the format/execution itself:
  • Generally people seemed to have liked the format a lot.
  • We had some trouble understanding the app's usage/users. Using a well-known application (like MS Paint) or object (like a chair) might have been a better way to understand the concept of the hats. On the other hand, using this app forced us to do some scouting and work with assumptions, which broadened the ways the hats could be used.
  • Like I said before, the webinar was a cool way to mix things up, but personally I missed the regular chatter while eating. Still very cool, so a shorter webinar might be perfect.
Finally, it was fun just to study my group during the hands-on part. Let's just say we had very different ways of attacking the problem (relentless scouting, analyzing the input information, or quickly getting a set of data to work with, for instance).

Thanks!
Thanks to David Månsson, Aveso, who facilitated this meetup in a kick-ass way! Also thanks to all the participants for sharing interesting thoughts. Looking forward to future experiments; this one was definitely a success in my book!