21 November 2016

SWETish

It's been quite a long time since I wrote about an event. This is mainly because "it's not new to learn new stuff" anymore, so for me to write about an event it needs to be pretty special. SWETish, a peer conference that just ended, was exactly that: Special.

I've tried to wrap my head around what made this different and it boils down to:
I've not had this many "epiphanies"/been surprised this many times at a test conference since... one of my first ever.

Next question is: Why? What made me get all those epiphanies?

Well, I've rewritten my explanation for that I don't know how many times now. It boils down to things I can't seem to describe well enough yet, but I'll give it a shot in a separate post, probably after a Transpection Tuesday (I need your brain, Helena).

So, let's skip to the good stuff: Content and lessons learned.

Credit

Before we start: Credit for the content below goes to all the participants of SWETish:
  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Talk 1: All Pairs by Morgan Filipsson

Morgan basically described a tool he thought was more or less obvious and most of us went: "mind blown"... it felt a bit like Paul and Karen in the K-card story. It was just a simple, non-mechanical combinatorial testing support tool made in Excel: you decide which input values you will use in a test and the sheet shows how many valid pairs (as in "all pairs") you have not yet covered. I don't know if this is exactly what Hexawise, or some other tool, already does but to me it was ingenious.
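I haven't seen the inside of Morgan's sheet, so the Python below is just my own sketch of the core idea with made-up parameters and values: given the inputs you plan to combine, compute which pairs your chosen test cases still leave uncovered.

    from itertools import combinations, product

    # Hypothetical parameters and values, purely for illustration.
    parameters = {
        "browser": ["firefox", "chrome", "safari"],
        "account": ["free", "premium"],
        "locale": ["sv_SE", "en_US"],
    }

    def all_pairs(params):
        # Every valid pair of values taken from two different parameters.
        pairs = set()
        for (p1, values1), (p2, values2) in combinations(sorted(params.items()), 2):
            for v1, v2 in product(values1, values2):
                pairs.add(((p1, v1), (p2, v2)))
        return pairs

    def covered_pairs(test_cases):
        # Pairs already covered by the test cases chosen so far.
        covered = set()
        for case in test_cases:
            for (p1, v1), (p2, v2) in combinations(sorted(case.items()), 2):
                covered.add(((p1, v1), (p2, v2)))
        return covered

    # Two invented test cases.
    tests = [
        {"browser": "firefox", "account": "free", "locale": "sv_SE"},
        {"browser": "chrome", "account": "premium", "locale": "en_US"},
    ]

    remaining = all_pairs(parameters) - covered_pairs(tests)
    print(f"{len(remaining)} pairs not yet covered")
    for pair in sorted(remaining):
        print(pair)

The interesting part isn't the code, it's the feedback loop: add a test case and immediately see the number of uncovered pairs drop.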

But this is a peer conference so why stop there; open season time:
  • Forming an unbiased opinion can be expensive and money does matter
  • Getting many people's biased opinions and comparing them can help sometimes
  • Beware of decision fatigue when using exploratory testing
  • Improving your test strategy or moving towards a more formal approach can delay the decision fatigue process
  • Mind maps are cool but do have limitations
  • Don't underestimate pen and paper
  • Excel is a pretty bad ass testing tool
  • Remember data can come in many shapes and forms and it often has complex relations to other data
  • Sometimes the simplest of ideas are not obvious to others
  • There are so many potential benefits to screen recording

Talk 2: How we test by Anders Elm and Erik Brickarp

A bit unorthodox: a two-person experience report at a peer conference. The background was that I read Anders' abstract and realized we have a very similar test process at Verisure. So Anders described how they test at SICK, I described how we test at Verisure by focusing on the differences, and at the end we shared our key lessons learned and challenges. Long story short: we had both started with the goal of implementing SBTM, session-based test management (in my case credit should go to Maria Kedemo et al.), but in both companies this had diverged into something else. I described this new process as being much more similar to TBTM, thread-based test management, than to SBTM.

I might talk more about this in the future but let's skip to the open season for now:
  • How much of this divergence is due to strategic decisions and how much is due to laziness/lack of competence (a valid question that requires more thinking on my part)?
  • Internal demos after each completed user story and mob testing during these demos seemed awesome (done at SICK but not Verisure, where I work)
  • We got an interesting question from Björn Kinell: "if you could magically change/correct whatever you wanted, what would that be?". I don't want to share my answer but my advice: ask yourself that question, because it can help when forming a vision for your testing.
  • It's easy to forget/skip debriefs, test planning and testing not immediately related to a story in the sprint, but be careful as these activities often provide quite a bit of value.
  • Find activities that are "not testing but still testing" to more easily get the developers involved. Examples: add testability, "try" the product and support a tester when testing.
  • Ask the question "how do we catch <very basic bug> before the product is handed over to testers?" to start a discussion in the team about testing and developer responsibility.
  • Remember that small bugs that normally don't slip through development/testing can be a symptom of a much bigger problem like stress, change in culture or lack of support.
  • Time spent improving charters is rarely a waste of time.
  • SBTM "by the book" in a scrum team is not easy...
  • If the timebox aspect (session) is removed you need to find new stopping mechanisms and/or heuristics to help you stop and reflect on whether or not to continue.
  • Debriefs can be useful for many many reasons.


Lightning talks

Johan Jonasson spoke about construct validity in metrics
Anna Elmsjö spoke about finding and making allies
  • Candy solves most people problems (my wife approves)
  • Finding allies among the developers is important to help you get attention to testing problems
Tim Jönsson spoke about the value of "knowing your audience" when reporting bugs
  • There are more ways than an entry in a bug reporting tool to report bugs
  • Do highlight helpful developer behavior/good code and show gratitude when developers help you
  • Good lightning talks often focus on one thing and explain that one thing well!
  • A compelling story helps your audience understand your message
  • Testing is about psychology more than you might initially realize
  • Be humble, be helpful, be useful... and be respectful.
Göran Bakken spoke about how integrity can get in the way of efficiency
  • Refusing to do bad work is not always constructive
  • Two different ways to approach a change are to focus on supporting values or to focus on methods.
Kristian Randjelovic spoke about analogies
  • There are many ways to use analogies to help colleagues with limited testing knowledge understand otherwise rather complex (testing) concepts.
A general lesson for me (even though I didn't present a lightning talk):
If I'm given 10 minutes for my talk and open season, I'll try to aim for a talk shorter than 4 minutes and only focus on one message.

Talk 3: Test documentation in the hands of a stylist by Agnetha Bennstam

Agnetha showed three different examples of how she had helped turn massive 200-page documents into lightweight, often visual, alternatives. It's hard to explain this in text, but the running themes were: figure out what is important, find the right level of detail and aggregate the data in a way that's useful/sufficient for the receiver rather than showing the full data set.

Open season:
  • Dare to try something different!
  • You can add additional dimensions to a diagram by using the size of the dot you're plotting and color (see slide 55 in Andy Glover's presentation).
  • Use feelings when you test, e.g. "why do I feel bored?", as an indication that there might be something you should react to/take notice of (see Michael Bolton's presentation).
  • Asking for people's gut feeling can be a good way to get their assessment without getting all the motives and data (makes people relax a bit).
  • Sometimes asking for people's gut feeling can help them "dare to answer" so that you can start to figure out what is happening e.g.
    "How 's the quality of ...?"
    "I don't know..."
    "But what's your gut feeling?"
    "Well, so far it's not looking good"
    "Why is that...?"
  • Gut feeling can sometimes mean "very useful and relevant information the person simply cannot/dare not articulate yet"
  • ... but gut feeling can also mean, or more often be interpreted as, "I base this on absolutely nothing but I don't want to admit that"
  • Beware of documentation existing only for the purpose of being ammunition/defense in a blame game
  • A tool that could clearly visualize:
    How fresh the testing is (time and impact of changes since last tested)
    How important the testing is
    How well the testing went (typically quality)
    ... would be cool (a rough sketch of what such a view could look like follows right after this list).
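As far as I know no such tool exists, but just to make the idea concrete: with the dot-size-and-color trick mentioned above, all three dimensions fit in a single ordinary matplotlib scatter plot. The test areas and numbers below are pure invention:

    import matplotlib.pyplot as plt

    # Invented example data: one row per test area.
    areas      = ["login", "billing", "reports", "alarm"]
    days_since = [2, 30, 90, 7]        # how fresh the testing is
    importance = [300, 800, 200, 900]  # dot size: how important the testing is
    quality    = [0.9, 0.4, 0.7, 0.6]  # color: how well the testing went

    fig, ax = plt.subplots()
    dots = ax.scatter(days_since, range(len(areas)),
                      s=importance, c=quality, cmap="RdYlGn", vmin=0, vmax=1)
    ax.set_yticks(range(len(areas)))
    ax.set_yticklabels(areas)
    ax.set_xlabel("days since last tested")
    fig.colorbar(dots, ax=ax, label="how well the testing went")
    plt.show()

Freshness on the x-axis, one row per area, importance as dot size and quality as color: a big red dot far to the right is an important area that was tested badly a long time ago.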
     

Other stuff

One more crazy thing about SWETish: All the talks and open seasons were amazing and that's despite the fact that the experience report all organizers voted as their favorite... was not delivered since Joel Rydén got sick!

Another cool detail: Anna Elmsjö added a nice twist to the K-cards she created for the conference. On each card there was a short helpful sentence/statement/word at the top explaining the card a bit, for instance "I would like to add..." on a yellow card or "This is the most important thing ever!" on a red card. Most were funnier than that, but the funny ones I saw don't translate well to English. To make it even cooler she had different statements on every single card... Johan, I expect the same for Let's Test 2017 ;)


Summary

THANK YOU! This was one of the best conferences I've attended, and I already look forward to the next one!

... and I hope I can bring some of the SWETish awesomeness to PEST, a peer conference I'll attend in just a couple of weeks (we seriously need to work on how we name the various peer conferences, by the way...).
