24 April 2017

Peer conferences, part 1


This blog series is mainly for people who arrange, would like to arrange, or attend peer conferences (explained below). People not interested in peer conferences will find rather little value in this post, with one exception: if you want to arrange some workshop/conference at your work, you might find inspiration for that here. Finally, you can replace testing/software testing below with e.g. "programming", "change management" or something else, as the experiences and methods should work for any topic; I just happen to work with software testing.

Finally, these are my thoughts, not "the truth".

Peer conference

A peer conference in this case refers to a small group of experts, gathered to debate a specific topic, based on their own experiences rather than e.g. abstract models and they do this in a focused manner for at least one day. "Small group" refers to something like 6-16 people. "Expert" is harder to define but means something like "a person with a lot of skill, passion and/or experience in software testing".

With this definition I realize other events I've attended might qualify but I hope it's good enough to at least understand this article.


Data

This section is to help you understand what I base my thoughts on. Feel free to skip it if you're not interested. Do note that I attend a lot of other peer conference-like events, such as Transpection Tuesday, local meetups etc., and I will take that experience into consideration, but I haven't listed it in this chapter for the sake of your (and my) sanity.
  • SWET 4
    15 attendees
    James Bach present, lots of well-known names overall
    I was very inexperienced (compared to the other participants)
    Language: English
    Setup: LAWST inspired
    Topic: Models in testing
    Location: Sweden, fancy conference center
    Length: 1½ days (+optional 1/3 day before), Saturday morning to Sunday lunch
  • SWET 7
    10 attendees
    James Bach present, very inexperienced group
    I was one of the most experienced attendees
    Language: English
    Setup: LAWST inspired
    Topic: Test coaching
    Location: Sweden, fancy conference center
    Length: 1½ days (+optional 1/3 day before), Saturday morning to Sunday lunch
  • PEST 6
    9 attendees
    Michael Bolton present, the best of Estonia (which is pretty awesome btw.)
    I was an "international attendee" so a bit different
    Language: English
    Setup: LAWST inspired
    Topic: Gaining consciousness
    Location: Estonia, at Nortal (company)
    Length: 1½ days, Saturday morning to Sunday lunch
  • SWETish
    10 attendees
    "regional peer conferences" with attendees mainly from Linköping and Örebro
    I was one of the most experienced attendees and co-organizer
    Language: Swedish
    Setup: LAWST inspired
    Topic: Exploratory testing
    Location: Sweden, fancy conference center
    Length: 1½ days, Saturday morning to Sunday lunch
  • EASTish
    8 attendees
    Only attendees from Linköping, mainly from two specific companies
    I was one of the most experienced attendees and co-organizer
    Language: Swedish
    Setup: People brought their own topics so no "formal presentations"
    Topic: Any
    Location: Sweden, at Sectra (company)
    Length: 1 day (optional evening), Saturday
  • SWET 8
    11 attendees
    Experienced group, rather mixed skillsets
    I was one of the more experienced attendees
    Language: Swedish
    Setup: LAWST inspired
    Topic: Testing that's not testing
    Location: Sweden, fancy conference center
    Length: 1½ days (+optional 1/3 day before), Saturday morning to Sunday lunch


So far my experience is that ~9 is the minimum; below that, the amount of conflicting ideas and experiences, which are important, starts to become an issue. EASTish was still a great peer conference but I think that conference would have gotten even better with a couple more attendees. PEST was right at the minimum limit but I personally did not feel the number of people negatively impacted the quality of the conversations. Of course the people matter a lot in this case; more experienced people with more diverse experiences, a lot of passion and willingness to debate will likely mean you need fewer attendees, and vice versa.

My personal upper limit is ~13; beyond that it seems like every single person gets too little time, especially if there are a few very talkative individuals in the group.

I would typically aim for 13; since people will get sick, can't attend in the first place etc., we might end up with 11-12, which seems great. Notice that "aim" in this case does not mean "invite". For SWET 8 we had, for instance, at most 18 invitations out simultaneously but ended up with 12 who accepted and in the end 11 attendees as one had to cancel.


Language

I didn't think of language as much of an issue until I attended SWETish, my first peer conference in Swedish. It helps a lot, even in a country where people generally speak pretty good English. To me, using the attendees' native language seems to help people "dare" to share more ideas, there seem to be far fewer misunderstandings and the overall flow is much better.

But it's a balance, as using the attendees' native language rather than a more broadly spoken language (such as English) will limit who can attend... a problem we have in Sweden as well, as several top notch testers here don't speak Swedish, or at least not well enough (yet) to attend a fast paced peer conference in Swedish.


I have huge respect for people like James Bach and Michael Bolton; they always add a ton of value to a conversation about testing and especially given a format like the one typically used at peer conferences. Also seeing how James really helped a bunch of less experienced testers elevate during SWET 7 was awesome... however...

In Sweden it impacts the language, which I think is a problem (see Language). We also have a lot of talented testers, so giving a spot to an expert will naturally stop someone else from attending and/or give some people less room to express themselves. Finally, my experience is that it steals a bit of focus, as some people, knowingly or not, try too hard to impress the expert and/or not to look stupid, hurting their overall performance at the conference.

I think inviting someone like James or Michael is amazing if the language is English anyway, and/or the group is very experienced (hopefully lowering the "need" to impress), and/or it's hard to attract enough attendees, so that giving away a spot is not an issue while the expert can act as motivation for other people to attend... but it's not necessary; you can have an amazing peer conference without international, or even national, "experts" (e.g. at work, in your city or in your "region")... and I say this from the context of Linköping, Sweden, a city where we have a fairly active and skilled test community, just to make that clear.


Experience (as in how much "testing" and "software development" the person has experienced/seen/participated in) helps as we base the discussions on experience. But mixing in a few rather inexperienced people can really add some interesting new points of view, as long as these inexperienced people feel safe sharing their thoughts. To summarize: having a lot of experience helps but lack of it is not a deal breaker.

Skill (as in your actual ability to test and understand testing) is, for me, key. Some people might not be known anywhere outside their own company, and they might not have much experience either as testers or in talking about testing, but place them in a situation like this and they will provide value, as long as they themselves understand that their skill level is on par with everyone else's.

So on the topic of "experts", how much experience and skill does the average attendee need to have to make the peer conference amazing? My personal experience is: "a lot less than many seem to think".

Two other interesting attributes to me are passion and (verbal) communication skills.

Passion helps a lot but I think that usually comes naturally with wanting to spend a weekend (during which peer conferences are often organized) "just talking about testing"... be careful though about attendees who just think it'll look good on their resumés, or who want to attend to advertise their own services or hire skilled testers.

Communication skills are important in general, but do not mistake this for "talkative" attendees. The K-card system often used at peer conferences, for instance, can help less talkative people gather their thoughts and get into otherwise intense conversations, and it can stop people who think in an extroverted way ("while talking") from filling all the space. Much more important is having attendees who can express something concisely, who respect the facilitator/format (primarily by not speaking when they shouldn't) and who can tell when their comments won't move a conversation forward or add value, as well as dare to speak up when they will.


My first ever peer conference, SWET 4, had an invite only format. My second had an open invitation (first 15 to sign up), my third had an open invitation (everyone may send in an abstract but a program board will select who will actually get an invitation among those) and later I also attended a "semi invite only" where ~15 people got a few days head start (personal invitation) until an open invitation was sent out. Long story short: You can do it in many different ways.

Invite only is great when you know exactly who you want to invite and want control over the group. I also find this to be the most efficient as people feel selected and thus prioritize the conference more. It also helps to get those crucial first two or three attendees you need to start a buzz about the conference. However, there's a risk that the group becomes too homogeneous, resulting in fewer conflicting ideas/experiences and thus fewer opportunities for people to challenge their own models.

Open invitation is great when you don't know the people you want to invite and/or who you want to come. It also relieves you from some "why did she get an invitation but not me" comments and allows you to talk more openly about the conference before it actually starts. However, it may be harder to get those first attendees to sign up as they don't know if the group will be good; this can be somewhat helped by e.g. making the open invitation in a large group (e.g. a Meetup group) where all the members should be good candidates. Also, there's a risk "the right people" will ignore the invitation because they don't understand that they qualify or, for whatever other reason, don't feel like they are the ones you're looking for.

I personally prefer invite only, even when I don't get an invite myself. I think that allows the organizers to better create a group that's suitable for the topic at hand... but that's my personal preference.

There are, as I described at the beginning of this chapter, many hybrids between the two, but this should at least give you an idea. Finding the right format to send out the invitations is pretty straightforward for invite-only (email works well for this), but for an open invitation it depends completely on your context.

In the next part I'll share a checklist and example of what I think you should consider adding to an invitation, so stay tuned for part 2.

For SWET 8 we wrote a personal note in each invitation explaining why we wanted that particular person to attend. This was definitely a win-win:
  • Participants better understood our expectations
  • Participants got to feel good about themselves
  • It seemed to make more people accept the invitation
  • For us, the organizers, it felt good to tell awesome people why they were awesome and it didn't cost much time or effort.
I'll provide an example of this in part 2.     


My preference so far is "as many as possible but not more than 4" (so I guess 3-4). Communication and decision making become a problem as soon as there are two of you, but to me it's still worth the benefits (see below) until you're around 4 to 5. Some benefits of having more organizers are:
  1. There are simply fewer people you need to invite to get a full group (easier)
  2. The first person to accept the invitation will join an already established group
  3. You are much less fragile, if one gets sick/life happens the work can still move forward
  4. Larger personal network, key to avoid too many like-minded attendees when using invite only
  5. Greater presence in social media etc., key when using an open invitation
  6. You have more options considering location, food etc.
  7. More people mean each person needs to do less... and I'm lazy
  8. It's easier to identify who you want to invite since you better know what's missing in the group
  9. Each organizer can relax more during the conference (less pressure on each)
  10. Much lower risk: with only one or two organizers a peer conference might collapse if one of them gets sick, but with 3 or 4 it's much easier to handle.
If the organizers can meet in person I think it's a benefit but having a good communication platform (e.g. Slack, Skype etc.) should be sufficient. For SWET 8 the only physical meeting we had together was the evening when we decided we wanted to organize a test conference, everything else was handled via Slack.


All peer conferences, except EASTish, I've attended have used basically the same format:
  • All attendees prepare ~20 min presentations based on something they've experienced.
  • A few attendees will actually present, usually 2-4, for a 1½ day peer conference.
  • After an experience report there's a facilitated discussion around that presentation. The discussion continues until the group feels done with the topic (usually 1-6 hours).
  • At some point there's time for lightning talks: ~5 min talks including open season.
While I think this is a great format I think other formats could work just as well and potentially even better as they have been explored less.

Two suggestions:
  • Discussion topics and dot voting instead of presentations
  • Solving an actual problem (doing something); with one or more debriefs
At EASTish, rather than presentations, attendees got to write down a few topics each (typically in the form of a question or short scenario) and then we dot voted. This basically turned it into a prolonged lean coffee. There were some cool benefits to this format:
  • People got help/got to discuss the exact topics/questions they were interested in
  • Much less time to prepare for attendees
  • People who feel uncomfortable presenting didn't have that distraction
Potential drawbacks: the content may become more abstract rather than focused on experience (not my experience from EASTish); if attendees think the format means they don't have to prepare, that may negatively impact the topics covered; and the presentations (or rather the preparation work needed) may act as a useful gatekeeper, scaring off people who want to attend for the "wrong reasons". I don't know if any of these drawbacks are actually valid but no matter what, this is a format I would love to try at a peer conference fairly soon, with or without a specific topic/theme (but probably with).
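As a side note, the dot-voting step is lightweight enough to sketch in a few lines of code. This is just an illustration of the mechanics; the names, topics and dot budget below are all made up.

```python
# A rough sketch of EASTish-style agenda building: every attendee gets a
# few "dots" to spend on proposed topics, and the agenda is simply the
# topics ordered by how many dots they received.
from collections import Counter

def build_agenda(votes, dots_per_person=3):
    """votes maps an attendee to the list of topics they spent dots on."""
    tally = Counter()
    for person, chosen in votes.items():
        if len(chosen) > dots_per_person:
            raise ValueError(f"{person} used more than {dots_per_person} dots")
        tally.update(chosen)
    # Most dots first; ties broken alphabetically for a stable order.
    return sorted(tally, key=lambda topic: (-tally[topic], topic))

votes = {
    "Anna": ["Coverage models", "Test coaching", "Hiring"],
    "Bob": ["Coverage models", "Test coaching"],
    "Cleo": ["Coverage models"],
}
print(build_agenda(votes))  # ['Coverage models', 'Test coaching', 'Hiring']
```

In practice a whiteboard and sticky notes do the same job, of course; the point is just how little machinery the format needs.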

The other suggestion is something I have tried at a local meetup where we split into smaller (mixed) groups, tested a specific application and finally spent a long time debriefing our testing including why we did the testing we did, how we organized ourselves, differences between individuals in the team etc. The idea has also been used at at least one peer conference before: PEST 4.5, where they tried to visualize various reports in testing in new and creative ways. I highly recommend reading about PEST 4.5 and ever since I heard about it I've wanted to attend/arrange something similar.

If you have ideas on other formats that could be useful, please comment and I'll add them to the post. This is one of the areas where I think we could really take the concept of peer conferences to a whole new level!


Peer conferences typically have a topic/theme. This topic/theme is important to keep discussions focused, to get the right people interested in the conference and to help people prepare before the conference.

Generally, topics should be broad enough to include some diversity but narrow enough that you can actually get deep into the topic. You have several examples in the Data chapter, and if you search for "peer conference software testing" you'll find more. One issue though is that the one-liner rarely tells the full story, as there are usually specific aspects you're interested in (the one-liner is too broad). To get a better idea of what I mean, do check out the invitation for PEST 6; it explains their topic "Gaining consciousness" in a great way.


This is relevant if you let people present, skip if you want a different format.

First of all, set a clear deadline for when you want attendees to inform you what they want to talk about. This is typically done in the form of an abstract (a ~1 page description of the talk).

When you know what they want to talk about establish a speaker order and if possible, set who will facilitate each talk. You can read more about facilitation in the story behind K-cards. I'll try to share how I do this in some later part but if I forget, feel free to ask me.

If your peer conference looks anything like the ones I've attended, you'll likely fit 2-4 presentations into a 1½ day conference, so my advice is to inform the top 5 participants (1 extra to deal with a potential cancellation) and let the rest focus on lightning talks.

I also recommend having appointed mentors (typically organizers, e.g. the facilitator) available for each speaker. This can help both experienced and inexperienced speakers present the "right thing" (an actual experience rather than some abstract concept).


Every time I've attended a 1½ day peer conference, at the end I feel like half the group says "This sucks, I would like to continue" and the other half says "I loved this, but now I need some sleep/to think for myself". I don't know if that means the length is perfect or too short (or even too long) but I think it's a sign the length is quite good.

However, to avoid stagnation (few new ideas/low energy) when going beyond a day my experience is you need a larger group than ~10 and/or very passionate/skilled/experienced attendees. I for instance felt a bit of stagnation during SWET 7 but not during SWET 4 and SWET 8.

I would love to try a full 2 day version but when trying to find a good schedule I run into problems. If you want to avoid people missing too much time from work (typically a problem for consultants), you could start at lunch on Friday and end by lunch on Sunday; this would cost one extra night (compared to the typical Saturday morning to Sunday lunch setup) and interfere with working hours, especially for people with a long ride to the conference. Alternatively, you could start Saturday morning but end Sunday evening instead, basically ending after dinner (say everyone leaves ~21:00). This would not cost an extra night but would add quite a bit of conference time, and it would suck for people with e.g. a 3h+ drive home (common in Sweden)... Maybe end at say 17:00 or 18:00 and skip the extra dinner as part of the conference... I need to think about this a bit more.

The other option, shorter, means you can cut costs (e.g. no nights) and make it simpler for people to "spare the time". We did this in Linköping (a city with ~140k population) and it worked out nicely. I don't think shorter is an option when attendees have more than, say, 30 minutes of travel to the conference, but it works great if you want to introduce the concept of peer conferences to a more local group.

Facility and food

Facility and food are actually quite important since attendees will likely sit for long durations and be exposed to a lot of information. When selecting a place to host the peer conference, take into consideration: cost, food, quality of the conference area, how much you have to fix yourself and which facilities you can use in the evenings.

For the evenings it seems like you want one of two things: An inspiring area to sit in (beautiful, unusual/creative, enough space etc.) or a relaxing pool area; e.g. a large outdoor jacuzzi... some alcohol (like a beer or two) also helps.

Being in an area that's great for taking a relaxed walk or jog also helps in my experience as people need some air after more or less a full day in a conference room. For this, choosing a location that's somewhat remote seems to help; this also has the benefit of helping people to fully commit to the conference as there are fewer distractions.

A conference center will greatly increase the costs but also significantly lower the amount of work for the organizers. I've attended peer conferences hosted both at conference centers and at someone's company; both work equally well for me as long as the organizers, in the latter case, have a good plan for e.g. food, an evening location, some energy refillers (candy/sweets) for the breaks etc.


  1. Make sure there's a schedule
  2. Make sure the conference center, food catering etc. agree to your schedule
  3. Be flexible; not interrupting a good discussion is more important than sticking to the plan
  4. Schedule regular breaks but remember to not interrupt good discussions (see previous)


During the evening(s) a lot of important processing, bonding and follow-up discussions take place. Make sure there are good facilities for these, that attendees stay (except to sleep or to handle social overload) and that there is some "conversation/activity help".

"A good place" and "making attendees stay" were described in the "Facility and food" chapter above so let's focus on "conversation/activity help". A lesson learned (for me) during SWET 8 was I think the group benefits from being split up a bit in the evening. One simple example is having one or more tables devoted to e.g. the dice game, coin game, Test Sphere or Set as this will split up the group. If there's a pool area the size of the jacuzzi and the fact not everyone like spending time in a pool will automatically split up the group (not necessarily in an optimal way though, but hopefully good enough).

Other examples could be to actually schedule activities in the evening. One way would be to split into smaller groups doing some task, challenge or activity and then, in a simple and informal way, let the groups debrief their results to the rest of the attendees (either in the evening or the next morning). Another would be to set specific topics/tasks at different tables so people can rotate and discuss/do different things with different attendees. Be careful about ambitious plans though; it seems like as long as you provide a somewhat quiet area where people can easily split up into smaller groups themselves, you're basically set... but some help rarely hurts.


I think we can take the concept of peer conferences even further if we dare to challenge the current common setup by e.g. trying new formats, longer/shorter conferences, tinkering with the group we invite, trying new locations etc. For instance, my view of the "minimum viable product" for a peer conference (location, setup etc.) was significantly altered after I attended PEST 6, the first peer conference I attended that wasn't hosted in a fancy conference center. PEST 6 then became important inspiration for how we arranged EASTish here in Linköping.

What's next

Part two will be a checklist for organizing a peer conference including an example of an invitation etc. The goal with this post was to help people learn new ways of organizing a peer conference, part 2 will hopefully inspire new people to organize them as they learn it's not that complicated.

Part three (if I get there) will likely deal with stuff related to the actual execution of the conference e.g. facilitation, check ins/check outs etc.

26 January 2017

How do you help a team become awesome?


I raised a question, first during a Transpection Tuesday, then in the TestSverige Slack chat and finally with all sorts of people I've met; mostly software testers. The question was:

How do you help a team become awesome?

Awesome in this case refers to the kind of team where everyone seems comfortable; they laugh, they communicate, they do silly things but don't seem embarrassed and at the same time they seem productive, motivated and ever evolving with low employee turnover rate.

This is my summary of those discussions.

Before we start: This is not specifically for managers, team leads, scrum masters etc.; it's everyone's responsibility and opportunity; anyone can improve a team's "mood".

Personal attributes/attitudes

Personal attributes and attitudes came up a lot during the discussions and they seemed to be the foundation on which you can add helpful activities. All of these work as self-reinforcing systems, so if you start to set a positive direction others will (eventually) follow. The same applies if you set a negative direction though, as this will dig a deeper and deeper hole to get out of.

So why don't we just act "good"? Because we're imperfect, also known as being human: we're scared, we sense injustice, we want revenge, we get stressed, angry or sad, we're sometimes egocentric and so forth.

For these reasons there are a few things you need to consider for each of the attributes listed below:
  1. It'll take courage to set a new direction and you might get hurt... sometimes a lot
  2. You'll need to consciously monitor yourself to avoid stress etc. getting the better of you
  3. You'll need to nurture these attributes in the team primarily by making positive examples visible

So, without further ado; dare to...
  • Be vulnerable
    "My uncle used to say that we like people for their qualities but we love them for their defects."
    /John Myers, Hellboy

    Share your struggles, admit you're scared, open up, allow people to come close and dare to be imperfect (aka. human) in general.
  • Be transparent
    Share what you know and do, that's relevant to others even though this might make them question your decisions, force you to (temporarily) stop something or even use the information to personally attack you.
  • Be accountable
    When you've messed up, take responsibility, apologize if appropriate and accept the consequences. Sometimes it's even beneficial to take responsibility for things you weren't responsible for just to get out of a negative loop.
  • Appreciate
    Make it a habit to register when someone does something good and tell them. Make sure you're sincere; empty flattery doesn't help. Another nice way to appreciate people is to be a proxy for appreciation, e.g. "Just so you know, Bob gave a two minute speech this morning about how great he thought your design was".
  • Trust people
    People want to do good, so do trust them. Sometimes they'll let you down, sometimes you might even get stabbed in the back, but keep trusting them. With that being said, of course bad behavior should be dealt with, e.g. see "be sincere" below, but as soon as you stop trusting people you're heading in a bad direction. After all, if you don't trust people they'll never be able to show you they can be trusted, starting a rather destructive loop. Also, people grow with responsibility. Finally: trusting people does not mean you shouldn't help them and/or help them realize they need help.
  • Be sincere
    Integrity is sexy; if you think someone, including yourself, is being singled out, is getting unfair criticism or for other reasons isn't being treated fairly: speak up! Especially when people aren't given a chance to defend themselves.

    However, stick to your observations, not your interpretations. You don't know for sure that "this person is actively trying to hurt you" but you do know, for instance, that "the person was told to give you the latest version but you never got it". Sincere != judgmental; quite the opposite, actually.
  • Care about people
    Caring about people costs very little and the main risk you face is simply coming across as creepy. Do notice that caring does not mean micromanaging; instead it's about genuinely trying to create a good situation for others. Carita Jansson Tsiantes gave a lovely example in the TestSverige Slack chat that went something like:

    When you boil water to make a cup of tea, don't just think about yourself; prepare water for your colleagues who might want tea as well.
  • Help and support
    This can be briefly summarized as:
    "If someone has a problem, we have a problem".

    When asked for help, do help, and if people express frustration or confusion, offer to help. Few people ask questions if they don't need to, so rather than telling them "you should know that", try to help them learn how they can find the answer themselves, e.g. by introducing them to the right people, helping them get access to some information system, helping them get invited to a certain meeting/mailing list etc. An attitude to avoid is "it's not my job to help...". Sure, this is sometimes true and you need to do your own work too, but then again: help the person help herself rather than ignore the request.
  • Respect everyone
    No job, role or person is more important than any other. Of course some tasks might be more important to finish, but then focus on getting them solved as a team. A key aspect of this is understanding your colleagues' tasks, challenges, frustrations and talents. Andreas Cederholm brought up a great example of how to nurture this attitude:

    We run team test sessions where the whole team tests together. Add some cookies and laughs and it'll work even better.
  • Try
    If you want to challenge the status quo you'll have to try new things. Trying comes with an increased risk of failing and potentially making a fool of yourself, but that's necessary and typically a great way to learn. Sometimes trying something you don't really believe in might still be beneficial, simply to acknowledge that ideas are appreciated and that you trust in people's judgement even when you might not agree with them.
  • Auto forgive
    A psychiatrist once told me a very smart thing about eating disorders and how to react when someone has not been able to fight the disease (it's generally applicable, of course):

    Guess who'll feel worst when this happens? You? No, the person who just "failed"! You don't need to remind them they "let you down"; they'll know, and they'll feel terrible about it.

    People mess up, people make bad decisions, people have bad days. You rarely need to remind them; it's typically much more constructive to say "don't worry, shit happens, let's fix this" and move on. This is also important to nurture previously mentioned attitudes such as "try" and "be transparent"; if people are scared of potential consequences (including reactions), the only thing they'll try is to cover stuff up.
  • Smile (and laugh)
    Being met with a calm, warm smile is great medicine when you feel down or nervous about some bad news you have to deliver. Smiling also helps at least me stay calm, making it a useful tool to manage feelings of anger or frustration.
I get the feeling all the attributes/attitudes above point back to some basic principle like "get unhelpful frustration off the table fast, both yours and others'" or "always trust in people's willingness to do good"... but I can't really put it into words. Feel free to help me.


If the personal attributes/attitudes are the foundation the various activities below represent important tools to speed up the process. Notice though that the activities by themselves are not silver bullets and overusing them or using them at the wrong time can actually have a negative impact. Focus on the list above first!
  • Social activities outside of work
    E.g. cooking together, sports or board games. Pick activities where everyone is active, which is not necessarily true for e.g. your typical after work.
  • Quirky things
    E.g. quote book, silly competitions, fun/silly "rules" or internal titles.
  • Retrospectives taken seriously
    Not necessarily the meeting specifically; it can e.g. be a continuous, everyday team reflection activity. All problems brought up are dealt with. Problems are taken seriously even by members not personally impacted.
  • One on ones
    Allows people to raise concerns in a safe environment (assuming the person meeting members one on one has earned the members' respect).
  • Do each others work
    An example of this is Team Test Sessions, where the team tests together (suggested by Andreas Cederholm, TestSverige), or move in the other direction and try mob programming with testers included. Everyone (product owner, developers, testers, designers...) attending e.g. courses in security or usability together could also help, as these kinds of activities create some common ground. Yet another suggestion is team members meeting customers, accompanying sales/support people etc.
  • Discussions about values
    E.g. take the "personal attributes/attitudes" list above and discuss each one: Is this something you want to strive for in the team? Can you change something to help nurture this behavior? Make it a team goal to improve and nurture the "mood" in the team in general.
  • Personal values
    Most of the personal attributes and attitudes require consistency. An activity where you sit down and state your personal "manifesto", goals or values can be important. For instance it might be hard to treat yourself in a fair way without some guidelines; you risk either turning into an asshole demanding more from others than from yourself, or a "victim" never treating yourself well enough.
  • Clarify your intentions to your boss
    If you want to invest quite a bit of time in this, go to your boss, explain your intention and ask for her/his support. Making your boss, or if necessary, your boss' boss, an ally can provide access to several powerful tools (e.g. see "Supporting context" further down).


The list below represents "symptoms" that your team (or even company) is moving in the right direction:
  • People laugh.
  • You're met with a smile, even in bad times.
  • You know what your colleagues like, both at work and outside. E.g. their hobbies, interests, spare time activities, important life milestones, work and private goals, "hidden talents" and previous experience.
  • People talk about hobbies, spare time activities and the other things listed above.
  • Conflicts are taken seriously and navigated swiftly.
  • People blame themselves, if anyone, not others.
  • High level of motivation.
  • You rarely feel stupid (in a bad way).
  • Stuff that "should be done", gets done.
  • Ideas are taken seriously, people try new things and experiments are run frequently.
  • People admit mistakes and challenges early as they're not afraid of the consequences.
  • People meet outside work because they want to, not because they feel obligated to.
  • Few taboos.
  • Very limited "bullshit" or backtalk in the team.
  • You know what's happening in the team and rarely get "unpleasant surprises".

Supporting context

These things might be hard for you to actively influence, but be aware of them as they do seem to have an important impact:
  • Reasonable pace
    People need time for supporting, long term activities, and under immense pressure/an unreasonably high pace these are quickly forgotten or deprioritized. These activities are what make you faster tomorrow than you are today, meaning they're long term, multiplicative investments.
  • Stable organization
    Adding or losing team members can in the worst case force the team to start over in their attempts to be awesome. If you're the manager: try not to change teams that work great together, even though it might be tempting!
  • Ethics
    A product you believe in and feel ethically good working with helps. The same goes for the company's actions: if the company seems to act in an ethical way, that appears to help people "invest" in the company in a way that's helpful.
  • Good social (especially empathic) skills
    Having team members who like the social aspect and are good at nurturing positive social behavior (not to be mixed up with people "talking a lot") helps.
  • Previous friends
    Not always true, as the previous friends may create a "sub team" within the team, but it sometimes seems to help as the friends most likely have a healthy relationship with each other, which can spread.
  • Management accepting problems
    Having a manager/management asking for "solutions, not problems" can suppress people's willingness to bring attention to important problems or make the company accept suboptimal solutions. The intention to focus on what's constructive is not bad but the message delivered can be. It's of course okay to ask people if they have any ideas themselves on how to solve the problems they bring up, but don't make the solutions a "requirement".
  • Culture awareness
    Manager/management that genuinely cares about the company culture and how to improve it helps.


Some "quotes", all loosely translated from Swedish:
  • "I ask myself: How can I make this person feel like I want her to feel?"
    Carita Jansson Tsiantes
  • "It's professional to be personal"
    David Högberg
  • "It's not unprofessional to have fun but to do something in a boring way when it can be achieved just as well in a fun way, that's unprofessional"
    Klas Hallberg, from his book: YCDBRALAI (Swedish).
Finally a comment I didn't know where to place:
  • "If I say I can't talk about it, you know and accept this". Transparency is important but some information you mustn't share for various reasons. However, sometimes the mere knowledge that you hold some secret information can be enough to help people prepare for a big change, avoid unpleasant surprises etc. One example could be: "We will get a new boss, I know who it's most likely gonna be but I can't tell you until papers are signed; however, I can tell you I think this person will do a terrific job, so don't worry too much about it".


It makes perfect sense but didn't really occur to me when I first asked the question:

Making a team awesome is basically the same thing as making any relationship awesome and it starts with you and all the small decisions you make every day.

Good luck!

29 November 2016

Learning about learning by teaching

I've undergone a tough education in software testing:
  • 15 months long
  • Tests 3 times a week, 7 hours each and in front of a crowd
  • If you skip a test you'll have to do it again, typically within 5 days.
  • The expected level of competence is: "Good enough to teach others"
  • Little or no chance of redos; you'd better do it right the first time, every time

In other words: I've been teaching a class in software testing.

The intense experience of teaching testing like this has of course taught me tons of things and with this post I want to share the positive effects this particular job had on my own learning. Each benefit (of teaching) comes with a "why I find it useful" and "how you can apply this knowledge in a more everyday context".

Benefit: Curriculum


I'm not a fan of committing to a plan, especially not when it comes to my learning. However, the education's curriculum did force me to look into many topics/materials I would otherwise have skipped due to laziness, lack of interest or lack of understanding (not thinking they were useful to me). Some of these have definitely made me a more complete/skilled tester, such as test framing, a deeper understanding of test techniques and a better understanding of bias.


Benefit: Go deep


I've read tons of articles, watched hundreds of hours of presentations/videos and spent a lot of time practicing testing hands-on. However, I often end up looking at topics in a quite shallow way, especially when I find the topic a bit boring (it may still be very useful). When you're only going to talk about a specific topic for a couple of hours, that's okay; there's little "need" to go deep. When you have to prepare several weeks of study material though, that's a whole different beast! Being forced to go deep into topics has enabled me to better question, improve, explain and argue for the various choices I make (for example why I chose to place my testing mission at the top of a certain status report).


  • Dig deep into fundamental questions e.g. what is testing, why do we test, what is an oracle etc.
  • Look into related topics. Say you want to improve your written reporting skills then look into e.g. rhetoric, design, how people read printed and digital documents, how people interpret colors or tutorials for your word processor/mind mapping tool/whatever. The point is: don't limit yourself to articles specifically about test reporting.
  • Set a "topic of the month" and try to become as skilled as you can in this topic. Don't stop because you feel "done", continue beyond that.

Benefit: Giving feedback


An important part of my job is helping students understand what they do well and what they might need to improve. To do this I have to observe and analyze what they've done, what they think they've accomplished, what actually made them accomplish it etc., and all this I have to do rather thoroughly in order to be able to explain it to them. This helps me create an understanding that goes beyond "do this or do that because it works better".

An example of this is when grading various assignments and projects, as students, at least on a general level, need to understand what they did well and what they would have had to do to get a better grade. If they get the highest grade they need to know why, so they know both what to continue doing and what to improve. As testers we need these kinds of observation and communication skills all the time when working with developers, project managers etc.


  • Study your own testing and try to explain why it was good and how it could be improved.
  • One area where I've found this pretty easy to practice (I can't prove that the practice translates to other parts of testing but I think it does) is watching presentations (e.g. on YouTube) and trying to give feedback to the presenter. What specifically did she/he do well and what could be improved?
  • Study other testers and try to figure out why you find them good/bad testers. Be as specific as you can.
  • When testing, try to find positive and negative patterns: "The log entries are (almost) always relevant and well described making my work so much easier" or "The UI components often have poor vertical alignment".

Benefit: Teaching


Teaching in itself is a great technique for learning. You have to rephrase the content to match your own language, you hear yourself speak about the topic and you get questions pinpointing gaps in your understanding and/or explanation.


  • Do teach colleagues and friends about the various topics you've practiced.
  • Write an educational article/blog post about what you've learned (you don't need to publish it to anyone to still get many of the benefits).
  • Talk at a local test meetup and if there isn't one, arrange one.

Benefit: Peer


Working with Maria Kedemo and Martin Nilsson has allowed me to get feedback on the ideas I'm about to share, feedback on my interpretation of various topics and someone to speak with when I feel stuck. It has also allowed me to learn from their knowledge and experience of testing.


  • Speak with a colleague
  • Join a local tester meetup
  • Go to a test conference
  • Join the active community on Twitter
  • Try your own version of Transpection Tuesday (my post, Helena's post: 1, 2)
  • More ideas...

Benefit: Observe testing


I've spent a significant amount of time observing testers test as a group, observing individual testers test and listening to testers speak about their testing. All three exposed me to new ideas and made me question my own approach. It's also interesting because you get to see a specific problem solved in many different ways, which helps you understand what actually impacts the result; e.g. "what is the common denominator in these solutions, is there anything I can learn from that?" or "they all had different setups but all ended up with the same solution, which setup worked most efficiently and can I learn something from that?".


  • Pair testing
  • Look at other testers' notes, reports etc.
  • Do call for and attend debriefs no matter if you use the concept of test sessions or not
  • Offer to review things
  • Volunteer to mentor/coach another tester; this will enable you to observe another tester as well as get several of the other benefits mentioned in this post

Benefit: Consistency


Sitting down and learning about various topics every day for over a year has definitely added some welcome consistency to my self-education.


Benefit: Questioning basic assumptions


Explaining fundamental concepts is incredibly hard but rewarding! As an experienced tester I take quite a few things for granted, and explaining concepts built on these assumptions to someone without experience led to wonderful questions like "but why do we need testing at all?", "what does it actually mean to test something?" or "why can testers find bugs if developers who know the code can't (as in, why do bugs happen at all)?". Answering these questions without being able to rely on "experience based assumptions" has led to more than a few epiphanies (and a lot of frustration of course).


  • Talk testing with people having a different frame of reference (developers, management etc.)
  • Talk testing with people who don't work in the industry; for instance try to explain what you do to a relative.
  • Teach new testers at the company or teach e.g. developers in testing
  • Talk testing with new, inexperienced testers

Benefit: Ask yourself "how do you train this skill"


Reading and listening is nice but sometimes you need to actually practice skills to be able to learn them. When teaching I've spent a fair amount of time trying to figure out exercises pinpointing a specific skill I want the students to practice, or exercises/projects in general helping students practice relevant testing skills. This experience helps me now both when less experienced testers want help learning a skill, when I try to explain/teach something and when I try to teach myself something.


  • After e.g. a blog post, YouTube video or book; think about how you can incorporate the new concepts you've just learned about into your own work.
  • Try various exercises and try to replicate various experiments yourself; such as: 1, 2, 3, 4; to help kickstart your brain.
  • Whenever you're asked to explain something; try to come up with an exercise or experiment that helps demonstrate whatever you are to explain.

Benefit: Getting questions


I've already touched upon this but getting questions from the students on anything that's not clear to them is incredibly challenging but rewarding. It has helped me realize flaws in my own understanding, forced me to question my own assumptions and challenged me to find new ways to explain certain concepts.


  • Explain concepts to others
  • Ask for feedback
  • Ask questions yourself; this both inspires others and help you ask questions "to yourself"
  • When reading/watching e.g. a book, presentation (video) or article; pause periodically and ask yourself: "what did I just read/watch and what of that is unclear/seems strange to me?"

Benefit: Having the time and expectation to learn


When in the middle of deadlines, huge backlogs and conflicting priorities it's easy to forget learning. Having the explicit expectation to learn new things has been an interesting experience and I feel confident saying I leave the teaching assignment as a much more competent tester. Spending as much time as I did on learning is not possible in most working contexts but I think "expectation to learn" is the key concept here, as it helps make learning happen at all.


  • Ask your boss: "How much time am I expected (or at least allowed) to spend on education?"
  • When doing backlog grooming (if you do this); add learning stories as dependencies e.g. "before we implement the new video player we need to learn a bit about streaming, video formats and performance testing related to streaming". If you end up never having time for these learning dependencies, try timeboxing them to make the expected time invested in learning more explicit.
  • Remember learning is a fundamental part of testing.
  • Differentiate between the learning that's necessary to solve your current task and learning with more long term, strategic value (e.g. learning more about testing in general, about web security, about test planning etc.). The "strategic learning" is often important to keep you and the company progressing but can easily be forgotten if put in the same "budget" as the task solving learning.

Final word

I removed several additional benefits I had initially included just to finally get this blog post published (it's been stuck in "draft" for over a year) so just to be clear: You can learn so much more about learning by teaching; this is just a mere introduction.

21 November 2016


It's been quite a long time since I wrote about an event. This is mainly because learning new stuff isn't new to me anymore, so for me to write about an event it needs to be pretty special. SWETish, a peer conference that just ended, was exactly that: special.

I've tried to wrap my head around what made this different and it boils down to:
I've not had this many "epiphanies"/been surprised this many times at a test conference since... one of my first ever.

Next question is: Why? What made me get all those epiphanies?

Well, I've rewritten my explanation to that I don't know how many times now. It boils down to things I can't seem to describe well enough yet but I'll give it a shot in a separate post, probably after a Transpection Tuesday (I need your brain Helena).

So, let's skip to the good stuff: Content and lessons learned.


Before we start: Credit for the content below goes to all the participants of SWETish:
  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Talk 1: All Pairs by Morgan Filipsson

Morgan basically described a tool he thought was more or less obvious and most of us went like: "mind blown"... it felt a bit like Paul and Karen in the K-card story. It was just a simple non-mechanical combinatorial testing support tool made in Excel where you could decide the input values you would use in a test and the program would help you by showing how many valid pairs (as in "all pairs") you had not yet covered. I don't know if this is exactly what Hexawise, or some other tool, already does but to me it was ingenious.
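I don't know exactly how Morgan's spreadsheet was built, but the core bookkeeping it performed — given the tests you've run so far, which valid value pairs remain uncovered — can be sketched in a few lines. The function name and data shapes below are my own assumptions, not Morgan's tool:

```python
from itertools import combinations

def uncovered_pairs(parameters, tests):
    """Return the value pairs (across two different parameters) that no
    test so far has covered.

    parameters: dict mapping parameter name -> list of possible values
    tests: list of dicts, each mapping parameter name -> the chosen value
    """
    # Every valid pair: one value from each of two different parameters.
    # Sorting by parameter name keeps the pair keys consistent below.
    all_pairs = set()
    for (p1, vals1), (p2, vals2) in combinations(sorted(parameters.items()), 2):
        for v1 in vals1:
            for v2 in vals2:
                all_pairs.add(((p1, v1), (p2, v2)))

    # Pairs already exercised by the tests run so far.
    covered = set()
    for test in tests:
        for pair in combinations(sorted(test.items()), 2):
            covered.add(pair)

    return all_pairs - covered

params = {"browser": ["chrome", "firefox"], "os": ["linux", "windows"]}
tests = [{"browser": "chrome", "os": "linux"}]
print(len(uncovered_pairs(params, [])))      # 4 valid pairs in total
print(len(uncovered_pairs(params, tests)))   # 3 pairs still to cover
```

The same counting is easy to do non-mechanically in Excel, which as I understood it was the point: the tool doesn't generate tests for you, it just shows how far from "all pairs" your chosen tests are.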

But this is a peer conference so why stop there; open season time:
  • Forming an unbiased opinion can be expensive and money does matter
  • Getting many people's biased opinions and comparing them can sometimes help
  • Beware of decision fatigue when using exploratory testing
  • Improving your test strategy or moving towards a more formal approach can delay the decision fatigue process
  • Mind maps are cool but do have limitations
  • Don't underestimate pen and paper
  • Excel is a pretty bad ass testing tool
  • Remember data can come in many shapes and forms and it often has complex relations to other data
  • Sometimes the simplest of ideas are not obvious to others
  • There are so many potential benefits with screen recording

Talk 2: How we test by Anders Elm and Erik Brickarp

A bit unorthodox: a two-person experience report at a peer conference. The background was that I read Anders' abstract and realized we have a very similar test process at Verisure. So Anders described how they test at SICK, I described how we test at Verisure by focusing on the differences, and at the end we shared our key lessons learned and challenges. Long story short: we had both started with the goal to implement SBTM (in my case credit should go to Maria Kedemo et al.) but in both companies this had diverged into something else. I described this new process as much more similar to TBTM than SBTM.

I might talk more about this in the future but let's skip to the open season for now:
  • How much of this diverging is due to strategic decisions and how much is due to laziness/lack of competence (valid question that requires more thinking on my part)?
  • Internal demos after each completed user story and mob testing during these demos seemed awesome (done at SICK but not Verisure, where I work)
  • We got an interesting question from Björn Kinell about "if you could magically change/correct whatever you wanted, what would that be". I don't want to share my answer but my advice: Ask yourself that question cause it can help when forming a vision for your testing.
  • It's easy to forget/skip debriefs, test planning and testing not immediately related to a story in the sprint, but be careful as these activities often provide quite a bit of value.
  • Find activities that are "not testing but still testing" to easier get the developers involved. Examples: Add testability, "try" the product and support a tester when testing.
  • Ask the question "how do we catch <very basic bug> before the product is handed over to testers?" to start a discussion in the team about testing and developer responsibility.
  • Remember that small bugs that normally don't slip through development/testing can be a symptom of a much bigger problem like stress, change in culture or lack of support.
  • Time spent improving charters is rarely a waste of time.
  • SBTM "by the book" in a scrum team is not easy...
  • If the timebox aspect (session) is removed you need to find new stopping mechanisms and/or heuristics to help you stop and reflect on whether or not to continue.
  • Debriefs can be useful for many many reasons.


Johan Jonasson spoke about construct validity in metrics
Anna Elmsjö spoke about finding and making allies
  • Candy solves most people problems (my wife approves)
  • Finding allies among the developers is important to help you get attention to testing problems
Tim Jönsson spoke about the value of "knowing your audience" when reporting bugs
  • There are more ways than an entry in a bug reporting tool to report bugs
  • Do highlight helpful developer behavior/good code and show gratitude when developers help you
  • Good lightning talks often focus on one thing and explain that one thing well!
  • A compelling story helps your audience understand your message
  • Testing is about psychology more than you might initially realize
  • Be humble, be helpful, be useful... and be respectful.
Göran Bakken spoke about how integrity can get in the way of efficiency
  • Refusing to do bad work is not always constructive
  • Two different ways to approach a change is to focus on supporting values or to focus on methods.
Kristian Randjelovic spoke about analogies
  • There are many ways to use analogies to help colleagues with limited understanding of testing understand otherwise rather complex (testing) concepts.
A general lesson for me (even though I didn't present a lightning talk):
If I'm given 10 minutes for my talk and open season, I'll try to aim for a talk shorter than 4 minutes and only focus on one message.

Talk 3: Test documentation in the hands of a stylist by Agnetha Bennstam

Agnetha showed three different examples of how she had helped turn massive 200-page documents into lightweight, often visual, alternatives. It's hard to explain this in text but the running themes were: figure out what is important, find the right level of detail, and aggregate data in a way that's useful/enough for the receiver rather than showing the full set of data.

Open season:
  • Dare to try something different!
  • You can add additional dimensions to a diagram by using the size of the dot you're plotting and color (see slide 55 in Andy Glover's presentation).
  • Use feelings when you test, e.g. "why do I feel bored?", as an indication that there might be something you should react to/take notice of (see Michael Bolton's presentation).
  • Asking for people's gut feeling can be a good way to get their assessment without getting all the motives and data (makes people relax a bit).
  • Sometimes asking for people's gut feeling can help them "dare to answer" so that you can start to figure out what is happening e.g.
    "How's the quality of ...?"
    "I don't know..."
    "But what's your gut feeling?"
    "Well, so far it's not looking good"
    "Why is that...?"
  • Gut feeling can sometimes mean "very useful and relevant information the person simply cannot/dare not articulate yet"
  • ... but gut feeling can also mean, or more often be interpreted as, "I base this on absolutely nothing but I don't want to admit that"
  • Beware of documentation existing only for the purpose of being ammunition/defense in a blame game
  • A tool that could clearly visualize:
    How fresh the testing is (time and impact of changes since last tested)
    How important the testing is
    How well the testing went (typically quality)
    ... would be cool.

Other stuff

One more crazy thing about SWETish: All the talks and open seasons were amazing and that's despite the fact that the experience report all organizers voted as their favorite... was not delivered since Joel Rydén got sick!

Another cool detail: Anna Elmsjö added a nice twist to the K-cards she created for the conference. On each card there was a short helpful sentence/statement/word at the top explaining the card a bit. For instance sentences like "I would like to add..." on a yellow card or "This is the most important thing ever!" on a red card; most were funnier than that but the funny ones I saw don't translate well to English. To make it even cooler she had different statements for every single card... Johan, I expect the same for Let's Test 2017 ;)


THANK YOU! This was one of the best conferences I've attended, I already look forward to the next one!

... and I hope I can bring some of SWETish awesomeness to PEST, a peer conference I'll attend in just a couple of weeks (we seriously need to work on how we name the various peer conferences by the way...).

03 November 2016

Test plan questions

I recently gave a talk on test strategy and gave the participants a list of "test strategy questions" (which should be called test plan questions, adhering to these definitions). I think and hope these can be useful for more people so last in this post you'll find my most recent version.

How I use the questions

I use these questions as a quick way to form a test plan. The test plan might be everything from a high level, early plan for an upcoming test project to something small like the strategy for a specific test charter.

The way I practically use the list is pretty straightforward. I just run through the questions and strike out those that are not relevant. "Not relevant" refers to questions like "any particular tours we should perform" when creating a high level plan, "what are the key test/release milestones" for a plan covering a 90-minute test charter, or "how do we report bugs and status" if this is already stated by our overall process.

I then try to answer the questions not struck out, either by myself or in a meeting with the people involved. "Involved" in this case might refer to anything from a second tester to everyone involved in a project. The outcome is typically:
  • I/we lack the necessary information or competence and need help answering this question
  • I/we have an idea but I/we still want help/input from someone else
  • I/we think I/we know the answer well enough but still want a second opinion
  • I/we think I/we know the answer well enough
The results can be turned into a formally documented test plan, some brief notes next to the questions, a mind map or an image, and sometimes the discussion and actions taken are enough (no need to document anything). Do remember that a test plan document is not a test plan; it's a document describing some parts of your test plan.

A few additional notes:
  • The list should be improved to fit your context; add, remove and modify questions on the list accordingly (if you for instance know which bug tracker to always use, which most do, remove that question completely).
  • You always have to take time and value into consideration. For a minor issue with low priority it's probably not worth inviting everyone in the project as the cost of doing so is too high; your own judgement is cheaper and probably enough.
  • The list can help e.g. project managers, line managers and developers think about testing concerns so do spread (your modified version of) the list to these people.
  • You can split the list into several smaller lists; e.g. one short list of questions to use when creating test charters and a longer/different one for project test plans.
  • It might be wise to think about what's important for a test plan before you even look at the questions. The point here is sometimes key details that definitely should be in your test plan might not be highlighted by any of the questions. If you start by using the list you might miss these key details (your thinking is limited by the questions). Once again I think value is important: How much time and effort seems reasonable to spend on this particular test plan?


I created these questions together with my students so credit should go to all the students involved. I also discovered testers before me had created their own lists, and I would not have thought about some of the questions below if it wasn't for their efforts.

So, before I present "my" version I want to highlight and thank the students, Michael Larsen, Ministry of Testing, The Test Eye and Michael Bolton. The four lists I just linked to all come with their own twist/benefit so do check them out.


Notice that some questions are generally only applicable when consulting, some only in an agile context, others when testing and development are clearly separated etc. As has already been pointed out: do remove or modify questions not relevant to your context.


I typically want to answer these questions before I go on with the rest, because these five typically impact my approach for the strategy in general.
  • How much time do we have? 
  • Do we need to document the strategy? Why? How will we document it? 
  • What’s the primary objective (testing mission)? 
  • Are there any other objectives? 
  • Where can we get the initial information about the product/feature/system we need?


  • What resources do I have available?
    people, money, equipment, licenses, time, expertise, facilities... 
  • Is this a realistic amount of resources for this project? 
  • How flexible is the time plan and what happens if the product is not “ready” by the deadline? 
  • What project risks exist?
    e.g. customers not clear about what they want.
  • What is our backup plan if risk X happens? 
  • What is our backup plan in general if everything fails? 
  • What is likely to change?
    e.g. team setup, requirements, developers' focus, delivery dates… 
  • Any meetings we should attend/discussions we should be part of? 
  • How do we handle handovers/new members/lost members? 
  • Who does what?
    e.g. responsibilities and roles 
  • Any known problems along the road? 
  • Are there any workarounds or solutions to the known problems? 
  • Any regulations, rules, standards, certifications etc. limiting us or forcing us to work/act in a specific way? 
  • What administrative tools are (almost) enforced and what else do we need/benefit from? 
  • How do we plan the everyday work? 
  • What are the key test/release milestones?
  • How flexible is the scope - can the project be down-scaled if some unexpected problem happens?


  • What is most important (to test)? 
  • What is not important (to test)? 
  • What can be skipped altogether? 
  • What quality characteristics are most/least important? 
  • Any specific aspect of these characteristics that is more/less important? 
  • What is covered by other teams? 
  • How do we continuously verify we’re doing the right thing? 
  • What are our done criteria?
    e.g. a strict deadline, customer acceptance tests or some other assessment of "good enough quality", and if so, by whom? 
  • What's the general requirement for quality?
    Aim for the stars/critical system or “just don’t explode... too badly”


  • Where can I get information about X and who do I contact if that's not enough? 
  • Which claims exist? 
  • Which customers exist, can we contact them? 
  • Who can answer questions and which questions? 
  • What is still unknown/not clear about the project? 
  • How do we simplify the complexity?
    i.e. simplify the process of learning the product. An example: "What sort of visual models would it help to create?" 
  • Any particular tours we should perform?
  • Are there any general guidelines to how we deal with learning activities and knowledge sharing? 
  • How do we further inspire/reassure/encourage feedback, reviews and knowledge sharing? 
  • How do we stay up to date with what is happening in the project?
  • How do we communicate with various information owners?
    e.g. email (address), phone (number), instant messaging tool, via other person, meeting etc.


  • What kind of built in testability will help testers?
  • Which modifications can/must be done to the process in general to support the testing? 
  • What do we need to learn more about? 
  • Any particular configuration or test data we can prepare? 
  • Which tools can help us? 
  • What other teams should/can/must we cooperate with? When, how and why? 
  • Do I know who the developers are and can I talk to them? 
  • Do the developers have time allotted for us? 
  • Are there any problems getting their time/getting in touch with the developers? 
  • Will they keep working on this?
  • What will the developers test? How does this impact our testing? Can we make suggestions regarding their testing?


  • How do I tell right from wrong?
    Find potential oracles 
  • Which oracles are generally most trustworthy? 
  • What testing risks exist?
    e.g. unstable test environment or lack of knowledge about something.
  • Which test techniques might be useful? 
  • What expertise do we lack? 
  • Which scenarios/typical use cases exist? 
  • Which heuristics might be useful? 
  • What logical components/modules are there?
  • Is there anything I'm not allowed to do?
  • Any (testing) tips the developers can give to the testers?


  • Which product risks exist?
    e.g. complex algorithms likely to be buggy or new technology used.
  • Is there any complexity we might be missing? 
  • Which functions will the system/application/feature have? 
  • Who’s the target audience? 
  • Which platforms, systems etc. should the product support? 
  • What requirements exist for this product? 
  • What problem is the product expected to solve? For who? 
  • What problems have happened in the past with this product? 
  • Any existing functionality that is impacted? 
  • What must the product never do?
    e.g. any data sent as plain text is strictly forbidden


  • What do I need to cover and how well? 
  • How do we track and visualize coverage and progress? 
  • Which stakeholders exist?
  • How do we report bugs and status? To who? Why?
    "Why" as in: Which problems/questions will the receiver hope to solve/answer with our report. 
  • What other artifacts/information do we need to share? To who? Why? 
  • When do we need to report what?

Additional sources

28 September 2016

Next step

After a colleague pointed out I'm not 29 years old anymore I had to revisit my About page. While reading the rest of the text this sentence made me stop...

Next step is to spread the knowledge I've obtained and be a thinking practitioner who can hopefully do what James once did for me.

I wrote that sentence over 4 years ago, before any tester outside Linköping knew I existed. So I took a moment just to reflect.

"Spread the knowledge I've obtained"
I've been teaching students in software testing at a vocational university for 1.5 years; I've given numerous lectures, presentations and workshops on various topics at conferences (national and international), meetups and at my workplaces; and I'm now test coach at Verisure, a role in which I'm basically paid to share the knowledge I've acquired so far. Finally, I've been a fairly active blogger for most of the period.

"Be a thinking practitioner"
Transpection Tuesday, my blog, my peer conference appearances, my many dinner discussions with top notch testers (most recently Göran Bakken), my (former) activity on Twitter all add to the "thinking" part while my testing and test related experiments at Ericsson, Verisure and Zenterio, as well as my effort during Software Testing World Cup all add to the "practitioner" part.

Most important though: The two are not separate processes. The best example of this is probably Transpection Tuesday. During Transpection Tuesday, Helena and I often discuss a challenge one or both of us have, together we work out potential solutions or experiments to run, we go back to work to try these solutions/run the experiments and finally we share/ask for help to evaluate the results at a second Transpection Tuesday. Like I said, one process.

"Who can hopefully do what James once did for me"
After graduation I got emails from two former students, both made me feel I've accomplished exactly this...

... hmm, is it raining inside or why are my eyes moist all of a sudden...

On top of that other former students, testers and other colleagues (most recently a developer) have all helped me understand my efforts to inspire, guide and mentor have actually made a difference.

It's not without pride I say: In four years I've wildly exceeded my own expectations based on my memories of what I hoped to achieve in 2012. Not just my expectations for the next four years but potentially for my whole career. Shit... I need a break.

... pause music...

What's my "next step"

Took quite a bit of thinking but here's my updated "next step", or mission if you prefer that:

I've presented to, spoken with, inspired, coached and mentored testers in testing and quality. I want to continue this but to a broader audience, in ways different from what has been done before and inspire others to present, speak, inspire, coach and mentor as well.

Broader audience refers to e.g. developers, students (not studying testing), managers etc.

If you happen to read this in a few years, do remind me to report my status, please.

What's your "next step"

Enough about me, what's your next step?

24 August 2016

How to come up with test ideas


This post is an attempt to gather the various tools, tips and sources I've found useful to improve my ability to generate test ideas.

Notice that the various concepts described are huge and I will only briefly describe them in this post, it's up to you to learn more and put them into practice. A good start would be to check out the many articles I've linked to.

To developers
Important! The techniques and sources I will describe are just as useful when developing, example:

"This will create a separate log file, what do I need to take into consideration when that file is created..."

If you want to develop high quality software, learn the fundamentals of test design!

Common product risks

Risk catalogs
Risk catalogs are lists of generic risks (things to test) in common components/functions.

You will of course need to judge what's relevant in your context and risk catalogs aren't flexible the way actual thinking techniques are but they are still incredibly useful:

Quality characteristics
Quality characteristics, also known as quality criteria or quality attributes, are an attempt to break down quality into meaningful/helpful categories. When generating test ideas these categories help us see the product from different angles.

Generic risks
Some aspects are (almost) always worth taking into consideration when assessing risk e.g. "is this new functionality?" or "does this module have a history of being buggy?". I can't say this information helps me generate new and creative test ideas, but it helps me focus my effort on risky areas and is thus still relevant.

Test techniques

Test techniques are sort of "compressed tester mindset". The set of nine techniques defined in the Heuristic Test Strategy Model is my personal favorite but there are many different sets out there.

Notice that each technique is way too broad to cover in this blog post but if you want to "learn to think like a tester" I urge you to learn more about and practice them!

Function Testing
Identify and test each function individually.

I prefer to begin with creating a list or mind map of all functions I can identify in whatever I'm about to test. A function in this case is anything the application can do e.g. save a log message, identify that a new version is available or show an error message. To do this, touring is a useful practice described in the links below.

When you've identified the functions, focus on each one individually, example:
"How can delete entry be tested? Successful delete, try to delete an already deleted entry, try to delete when lacking access, stack deletes of the same entry, lose connection to data source when deleting, modified delete request (e.g. SQL injection), potential timing issues, delete different entries in rapid succession, ..."
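A few of the checks above could be sketched as automated checks against a hypothetical delete function. All names below (the store, its methods) are made up for illustration, not taken from any real product:

```python
# A minimal sketch of function testing: exercise one function
# ("delete entry") through several of the ideas listed above.

class EntryStore:
    """Tiny in-memory stand-in for the system under test."""

    def __init__(self):
        self._entries = {}

    def add(self, entry_id, payload):
        self._entries[entry_id] = payload

    def delete_entry(self, entry_id):
        """Delete an entry; return whether anything was deleted."""
        return self._entries.pop(entry_id, None) is not None

store = EntryStore()
store.add(1, "hello")

# Successful delete
assert store.delete_entry(1) is True

# Deleting an already deleted entry should fail gracefully
assert store.delete_entry(1) is False

# Stacked deletes of the same entry must not raise
for _ in range(3):
    store.delete_entry(1)

print("all function-level checks passed")
```

Each assertion corresponds to one question from the brainstorm; the point is that a single function spawns many distinct tests once you focus on it individually.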

Domain Testing
Identify the input and output data in the application then look at which of the possible values/combinations of values you should test.

After identifying the variable (input or output) to test, you first group the possible values into equivalence classes. The idea is that all values within an equivalence class should, in theory, behave the same. The next step is to select one or more values from each class to test; to do this you look at, for instance, boundary values, high-risk values or realistic values. Finally, when several variables need to be combined (such as in a typical form), there are various combinatorial techniques to help you find an efficient way (as few tests as possible) to combine these values into individual tests with the level of coverage you're aiming for.

Notice! When reading about domain testing you might get intimidated by all the paperwork involved: "I don't have time to do all that for a simple form...". For me, the detailed analysis on paper is sometimes useful, especially to practice my domain testing skills, but generally I just keep the various techniques related to partitioning (creating and grouping values into equivalence classes) and value selection in my head and apply them "on the fly". Still, I do urge you to sit down and create e.g. a detailed boundary/equivalence table at least a few times because it's key to actually understanding domain testing.

For the combinatorial analysis: Remember there are excellent tools!
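To make the partitioning and value-selection steps concrete, here is a minimal sketch for a hypothetical input field that accepts ages 18 to 65 inclusive. The boundaries and the validity rule are assumptions invented for this example:

```python
# Domain testing sketch: partition a hypothetical "age" input into
# equivalence classes, then pick boundary values from each class.

LOW, HIGH = 18, 65  # assumed valid range for this example

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence classes: too low, valid, too high.
# From each class we prefer boundary values over arbitrary ones.
test_values = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary itself
    65: True,   # upper boundary itself
    66: False,  # just above the upper boundary
    40: True,   # a realistic mid-range value
}

for value, expected in test_values.items():
    assert is_valid_age(value) == expected, value

print("boundary checks passed")
```

Five values cover three classes and four boundaries; picking arbitrary values instead would need far more tests for the same confidence.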

Stress Testing
Identify various bottlenecks, input that can be sent in huge volumes or that requires more computing power than usual, dependencies, communication with external systems etc. and ask yourself how these could become a problem for the application (e.g. flooding of requests, missing dependencies, slow connections/timeouts/lost packets). Also look into the system's ability to restore itself after collapsing (e.g. will the system come back up after a database crash?).

Do notice that stress testing often refers to testing the behavior when actually going beyond a system's breaking point. This is important since if you search for "stress testing" you'll mostly find just a subset of what the test technique "stress testing" is supposed to cover. Try also searching for e.g. "robustness testing", "load testing" and "performance testing".
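As a rough illustration of the flooding idea, the sketch below hammers a stand-in request handler with concurrent calls and then checks that its state is still consistent. The handler, worker count and request volume are all made up for the example:

```python
# Stress-testing sketch: flood a hypothetical handler with
# concurrent requests and verify state stays consistent afterwards.
from concurrent.futures import ThreadPoolExecutor
import threading

counter = 0
lock = threading.Lock()

def handle_request(i):
    """Stand-in for the system under test."""
    global counter
    with lock:  # without this lock, the flood would corrupt counter
        counter += 1
    return i

with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(handle_request, range(1000)))

# After the flood: no lost or duplicated requests.
assert counter == 1000
assert len(results) == 1000
print("survived the flood")
```

In a real system the interesting part is what happens when the assertion fails: lost requests, timeouts or a crash all point to different robustness problems.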

Flow Testing
Testers often reset environments between tests, especially if using test cases, by e.g. closing the application or only focusing on one step in a long flow. The problem: Users don't. The test technique flow testing is thus about focusing on what will happen when a continuous chain of events occur. One way is to actually identify a long, realistic flow (e.g. a "buy, cancel, refund" cycle in a web shop) and test this, another is to not reset between tests (e.g. not close the application or reset data for several days while testing).
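The "buy, cancel, refund" idea could be sketched like this, using a made-up minimal shop rather than a real system; the point is that the state carries over between steps instead of being reset:

```python
# Flow-testing sketch: run one continuous "buy, cancel, refund"
# chain and only check the state after the whole flow.

class Shop:
    """Tiny stand-in for a web shop's order handling."""

    def __init__(self):
        self.orders = {}
        self.refunded = set()

    def buy(self, order_id):
        self.orders[order_id] = "placed"

    def cancel(self, order_id):
        if self.orders.get(order_id) == "placed":
            self.orders[order_id] = "cancelled"

    def refund(self, order_id):
        if self.orders.get(order_id) == "cancelled":
            self.refunded.add(order_id)

shop = Shop()

# One continuous chain of events, no reset in between.
shop.buy("A1")
shop.cancel("A1")
shop.refund("A1")

assert shop.orders["A1"] == "cancelled"
assert "A1" in shop.refunded
print("flow completed")
```

Testing each method in isolation would pass even if, say, `refund` silently required a fresh session; only the uninterrupted chain exposes that kind of bug.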

Scenario Testing
Scenario testing is, unsurprisingly, about creating a scenario and testing that scenario. One important benefit of scenario testing is that it can identify gaps in the implementation, which other techniques generally don't since they focus on what has actually been implemented.

Claims Testing
Also known as "specification-based testing". The idea is to identify the claims made about the product and check if they are actually true. Claims can come from a variety of sources e.g. specifications, requirements, manuals, various in-app help sources, public statements or the EULA.

User Testing
User testing is about asking yourself "how can I use users to improve my testing":
  • A user attempts to complete various tasks in the application and walks you (the tester) through how she/he is reasoning. The tester's job is to identify problems the user runs into e.g. not understanding certain parts of the navigation.
  • Use user data to drive the testing e.g. "the most items a user has ever ordered is 104 and in our tests we never order more than 20, we should increase that amount".
  • Personas and other tools helping you "simulate" a user can be used when testing, example:
    "Looking at this from the perspective of 'Nancy, the power user', I realize we lack keyboard shortcuts in much of our application"
Do notice that user testing often refers only to the first bullet above. This is important since if you search for "user testing" you'll mostly find just a subset of what the test technique "user testing" is supposed to cover. Try also searching for e.g. "personas", "ux testing" and "usability testing".

Risk Testing
Identify risks and design tests to cover each one of them.

These risks can either be identified:

... inside-out (ask: what could go wrong here?), example:
"We now allow users to delete their private messages, how could that be a problem? Well what if a user could potentially delete another user's messages? How can we test if that's possible to do?"

... or outside-in (apply common risks to the product), example:
By using Michael Hunter's "You're not done yet" I realize saving the configuration to a read-only folder could be a problem.

The chapter "Common product risks" includes tons of sources useful when applying the outside-in approach to identify risks.

Notice that "risk-based testing" (prioritize testing what's considered the greatest risk first) is not the same as the test technique "risk testing", but will be your most common result in a web search. However, articles about risk-based testing typically cover tools and techniques to identify and prioritize risks, making them useful anyhow; just be aware of the difference.

Automatic Checking
How can you use tools you know about to improve your testing?

Do not confuse this with "automation", "automatic checking", "automated testing" or whichever term you prefer, which is a way to actually execute the ideas identified and not a test technique in itself, even though it's unfortunately sometimes used as one.

Other useful resources about test techniques
  • What is a good test case, by Cem Kaner
    More information about some of the test techniques above as well as some additional techniques.
  • Lessons learned in software testing (book), by Pettichord, Kaner, and Bach
    The chapter about test techniques is available online as a PDF as part of the BBST course. I very much recommend getting the rest of the book as well.
  • BBST Test design, by Cem Kaner and Rebecca L. Fiedler
    BBST presents a broad set of test techniques. Highly recommended reading in general and for test design in particular.

Critical and lateral thinking

Everything else in this article is more or less an application of critical thinking, supported by lateral thinking. These are thus the most abstract, but most generally applicable, concepts I will describe.

Critical thinking
"The objective analysis and evaluation of an issue in order to form a judgement"
Oxford Dictionaries

Compare this to e.g. James Bach's definition of software testing:
"Questioning a product in order to evaluate it"
... any similarities?

A more in depth definition of critical thinking has been made by The Critical Thinking Community.

When developing software we make massive amounts of assumptions, simply to save time, example:
This framework will prevent..., this is what users want, this is the simplest way to...
Critical thinking is essentially the skill needed to identify these assumptions.

I also designed a critical thinking exercise for testers, which might help.

I would love some more (and better) sources on critical thinking, so please drop a comment with your suggestions!

Lateral thinking
Lateral thinking is, simply put, the skill of thinking outside the box.

"The solving of problems by an indirect and creative approach, typically through viewing the problem in a new and unusual light. Contrasted with vertical thinking."
Oxford Dictionaries

Reading material:

Domain knowledge

To be able to form any test you first need to, at least on a conceptual level, understand the product and its context.

I for instance like to think I'm pretty skilled with test techniques, that my critical and lateral thinking skills are decent and that I know how to get some good value out of tools such as risk catalogs. Still, when new to a product I'm not very effective until I've actually learned the product.

Stuff you can do to quicken the learning of a new application and its context:
  • Ask for a demonstration
  • Look through available documentation (start with documentation intended for customers)
  • Make a visual model of the application, enhance the model as you learn more
  • Tour the application
  • Ask for help to make a SFDIPOT analysis of the application
  • Learn about and try out the tech the application relies on
  • Speak with stakeholders to learn about e.g. the application's purpose and customers
  • Start somewhere and slowly work your way outwards, to avoid getting overwhelmed
  • Observe users, using the application
  • Pair up with developers, testers and others working with the application
  • Ask questions... lots of questions
  • Look into, and apply, various learning techniques (search for "learning techniques")
  • Learn about relevant laws, regulations, standards, certifications and business practices
  • Speak with domain experts, sales people and customer service
  • Look at test documentation available for the application

Other tools

Test framing
Test framing is about being aware of what you're testing and why, or, as Michael Bolton describes it in the link below: "link each testing activity with the testing mission". I find this useful to practice as the awareness, in my experience, improves my ability to see potential risks, helps me focus on relevant test design and actively challenges my critical thinking skills.

When you have a few ideas to test it's sometimes useful to look at those and ask:
What's the complete opposite of these ideas and could that potentially be a concern too?

What happens when several entries fit a delete condition?
Opposite: What happens when no entries fit a delete condition?

What if the network has limited speed, delaying requests?
Opposite: What happens if the network floods the application with requests?

How will a user with limited computer skills approach this?
Opposite: How will a tech-savvy user approach this?

Great additional sources
This is a collection of great test idea articles I've used for this article and/or when generating test ideas that just didn't fit into any of the other chapters:

Please help me improve this

Initially this was intended as "just another blog post" but it sort of took on a life of its own and became a pretty big reference page. I always appreciate comments, but since I expect people to assume this is "the truth" more than usual, I will appreciate corrections and suggestions even more!