29 November 2016

Learning about learning by teaching

I've undergone a tough education in software testing:
  • 15 months long
  • Tests 3 times a week, 7 hours each and in front of a crowd
  • If you skip a test you'll have to do it again, typically within 5 days.
  • The expected level of competence is: "Good enough to teach others"
  • Little or no chance of redos: you'd better do it right the first time, every time
In other words: I've been teaching a class in software testing.
The intense experience of teaching testing like this has of course taught me tons of things, and with this post I want to share the positive effects this particular job had on my own learning. Each benefit (of teaching) comes with a "why I find it useful" and a "how you can apply this knowledge in a more everyday context".

Benefit: Curriculum

Why?

I'm not a fan of committing to a plan, especially not when it comes to my learning. However, the education's curriculum did force me to look into many topics and materials I would otherwise have skipped due to laziness, lack of interest or lack of understanding (not thinking they were useful to me). Some of these, such as test framing, a deeper understanding of test techniques and a better grasp of bias, have definitely made me a more complete and skilled tester.

How?

Benefit: Go deep

Why?

I've read tons of articles, watched hundreds of hours of presentations and videos, and spent a lot of time practicing testing hands-on. However, I often end up looking at topics in a quite shallow way, especially when I find the topic a bit boring (even though it may still be very useful). When you're only going to talk about a specific topic for a couple of hours, that's okay; there's little need to go deep. When you have to prepare several weeks of study material though, that's a whole different beast! Being forced to go deep into topics has enabled me to better question, improve, explain and argue for the various choices I make (for example why I place my testing mission at the top of a certain status report).

How?

  • Dig deep into fundamental questions e.g. what is testing, why do we test, what is an oracle etc.
  • Look into related topics. Say you want to improve your written reporting skills then look into e.g. rhetoric, design, how people read printed and digital documents, how people interpret colors or tutorials for your word processor/mind mapping tool/whatever. The point is: don't limit yourself to articles specifically about test reporting.
  • Set a "topic of the month" and try to become as skilled as you can in this topic. Don't stop because you feel "done", continue beyond that.

Benefit: Giving feedback

Why?

An important part of my job is helping students understand what they do well and what they might need to improve. To do this I have to observe and analyze what they've done, what they think they've accomplished, what actually made them accomplish it, and so on, and I have to do all of this rather thoroughly in order to be able to explain it to them. This helps me build an understanding that goes beyond "do this or do that because it works better".
An example of this is grading various assignments and projects, as students, at least on a general level, need to understand what they did well and what they would have had to do to get a better grade. If they get the highest grade they need to know why, so they know both what to continue doing and what to improve. As testers we need these kinds of observation and communication skills all the time when working with developers, project managers etc.

How?

  • Study your own testing and try to explain why it was good and how it could be improved.
  • One area where I've found this pretty easy to practice (I can't prove that the practice translates to other parts of testing, but I think it does) is watching presentations (e.g. on YouTube) and trying to give feedback to the presenter. What specifically did they do well or poorly?
  • Study other testers and try to figure out why you find them good/bad testers. Be as specific as you can.
  • When testing, try to find positive and negative patterns: "The log entries are (almost) always relevant and well described making my work so much easier" or "The UI components often have poor vertical alignment".

Benefit: Teaching

Why?

Teaching in itself is a great technique for learning. You have to rephrase the content to match your own language, you hear yourself speak about the topic and you get questions pinpointing gaps in your understanding and/or explanation.

How?

  • Do teach colleagues and friends about the various topics you've practiced.
  • Write an educational article/blog post about what you've learned (you don't need to publish it to anyone to still get many of the benefits).
  • Talk at a local test meetup and if there isn't one, arrange one.

Benefit: Peer

Why?

Working with Maria Kedemo and Martin Nilsson has allowed me to get feedback on the ideas I'm about to share, feedback on my interpretation of various topics and someone to speak with when I feel stuck. It has also allowed me to learn from their knowledge and experience of testing.

How?

  • Speak with a colleague
  • Join a local tester meetup
  • Go to a test conference
  • Join the active community on Twitter
  • Try your own version of Transpection Tuesday (my post, Helena's post: 1, 2)
  • More ideas...

Benefit: Observe testing

Why?

I've spent a significant amount of time observing testers test as a group, observing the actual testing done by individual testers, and listening to testers speak about their testing. All three exposed me to new ideas and made me question my own approach. It's also interesting because you get to see a specific problem solved in many different ways, which helps you understand what actually impacts the result; e.g. "what is the common denominator in these solutions, and is there anything I can learn from that?" or "they all had different setups but ended up with the same solution; which setup worked most efficiently, and can I learn something from that?".

How?

  • Pair testing
  • Look at other testers' notes, reports etc.
  • Do call for and attend debriefs, whether or not you use the concept of test sessions
  • Offer to review things
  • Volunteer to mentor/coach another tester; this will enable you to observe another tester as well as get several of the other benefits mentioned in this post

Benefit: Consistency

Why?

To sit down and learn about various topics every day for over a year has definitely added some welcome consistency to my self-education.

How?

Benefit: Questioning basic assumptions

Why?

Explaining fundamental concepts is incredibly hard but rewarding! As an experienced tester I take quite a few things for granted, and explaining concepts built on these assumptions to someone without experience leads to wonderful questions like "but why do we need testing at all?", "what does it actually mean to test something?" and "why can testers find bugs when the developers who know the code can't (as in, why do bugs happen at all)?". Answering these questions without being able to rely on experience-based assumptions has led to more than a few epiphanies (and a lot of frustration, of course).

How?

  • Talk testing with people having a different frame of reference (developers, management etc.)
  • Talk testing with people who don't work in the industry; for instance try to explain what you do to a relative.
  • Teach new testers at the company, or teach e.g. developers about testing
  • Talk testing with new, inexperienced testers

Benefit: Ask yourself "how do you train this skill"

Why?

Reading and listening are nice, but sometimes you need to actually practice skills to learn them. When teaching I've spent a fair amount of time figuring out exercises that pinpoint a specific skill I want the students to practice, or exercises and projects in general that help students practice relevant testing skills. This experience helps me now, both when less experienced testers want help learning a skill, when I try to explain or teach something, and when I try to teach myself something.

How?

  • After e.g. a blog post, YouTube video or book, think about how you can incorporate the new concepts you've just learned into your own work.
  • Try various exercises and try to replicate various experiments yourself (such as: 1, 2, 3, 4) to help kickstart your brain.
  • Whenever you're asked to explain something, try to come up with an exercise or experiment that helps demonstrate whatever you're explaining.

Benefit: Getting questions

Why?

I've already touched upon this, but getting questions from the students on anything that's not clear to them is incredibly challenging but rewarding. It has helped me realize flaws in my own understanding, forced me to question my own assumptions and challenged me to find new ways to explain certain concepts.

How?

  • Explain concepts to others
  • Ask for feedback
  • Ask questions yourself; this both inspires others and helps you ask questions "to yourself"
  • When reading or watching e.g. a book, presentation (video) or article, pause periodically and ask yourself: "what did I just read/watch, and what about it is unclear or seems strange to me?"

Benefit: Having the time and expectation to learn

Why?

When you're in the middle of deadlines, huge backlogs and conflicting priorities it's easy to forget about learning. Having the explicit expectation to learn new things has been an interesting experience, and I feel confident saying I leave the teaching assignment a much more competent tester. Spending as much time as I did on learning is not possible in most working contexts, but I think "expectation to learn" is the key concept here, as it helps make learning happen at all.

How?

  • Ask your boss: "How much time am I expected (or at least allowed) to spend on education?"
  • When doing backlog grooming (if you do this), add learning stories as dependencies, e.g. "before we implement the new video player we need to learn a bit about streaming, video formats and performance testing related to streaming". If you never end up having time for these learning dependencies, try timeboxing them to make the expected time invested in learning more explicit.
  • Remember learning is a fundamental part of testing.
  • Differentiate between the learning that's necessary to solve your current task and learning with more long-term, strategic value (e.g. learning more about testing in general, about web security, about test planning etc.). The "strategic learning" is often important to keep you and the company progressing, but it can easily be forgotten if put in the same "budget" as the task-solving learning.

Final word

I removed several additional benefits I had initially included just to finally get this blog post published (it's been stuck in "draft" for over a year), so to be clear: you can learn so much more about learning by teaching; this is a mere introduction.

21 November 2016

SWETish

It's been quite a long time since I wrote about an event. This is mainly because "learning new stuff" isn't new anymore, so for me to write about an event it needs to be pretty special. SWETish, a peer conference that just ended, was exactly that: special.

I've tried to wrap my head around what made this different and it boils down to:
I've not had this many "epiphanies", or been surprised this many times, at a test conference since... one of my first ever.

Next question is: Why? What made me get all those epiphanies?

Well, I've rewritten my explanation I don't know how many times now. It boils down to things I can't seem to describe well enough yet, but I'll give it a shot in a separate post, probably after a Transpection Tuesday (I need your brain, Helena).

So, let's skip to the good stuff: Content and lessons learned.

Credit

Before we start: Credit for the content below goes to all the participants of SWETish:
  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Talk 1: All Pairs by Morgan Filipsson

Morgan basically described a tool he thought was more or less obvious, and most of us went "mind blown"... it felt a bit like Paul and Karen in the K-card story. It was just a simple, non-mechanical combinatorial testing support tool made in Excel: you entered the input values you planned to use in a test, and the sheet showed how many valid pairs (as in "all pairs") you had not yet covered. I don't know if this is exactly what Hexawise, or some other tool, already does, but to me it was ingenious.
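
To make the idea concrete, here's a minimal Python sketch of the same pair-coverage bookkeeping. This is just my reading of the concept, not Morgan's actual spreadsheet, and the parameters and test cases are made up for illustration.

```python
from itertools import combinations, product

# Parameters and their possible values (made up for illustration):
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "macOS", "Linux"],
    "account": ["free", "premium"],
}

def pairs_in(test):
    """All parameter-value pairs exercised by one test case."""
    return {frozenset(pair) for pair in combinations(sorted(test.items()), 2)}

# Every pair of values that "all pairs" asks us to cover at least once:
all_pairs = set()
for (p1, values1), (p2, values2) in combinations(sorted(params.items()), 2):
    for v1, v2 in product(values1, values2):
        all_pairs.add(frozenset([(p1, v1), (p2, v2)]))

# The test cases defined so far:
tests = [
    {"browser": "Chrome", "os": "Windows", "account": "free"},
    {"browser": "Firefox", "os": "macOS", "account": "premium"},
]

covered = set().union(*(pairs_in(t) for t in tests))
print(f"{len(all_pairs - covered)} of {len(all_pairs)} pairs not yet covered")
```

Morgan's sheet presumably did the equivalent with spreadsheet formulas; the charm is how little machinery the idea needs.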

But this is a peer conference, so why stop there? Open season time:
  • Forming an unbiased opinion can be expensive and money does matter
  • Getting many people's biased opinions and comparing them can sometimes help
  • Beware of decision fatigue when using exploratory testing
  • Improving your test strategy or moving towards a more formal approach can delay the decision fatigue process
  • Mind maps are cool but do have limitations
  • Don't underestimate pen and paper
  • Excel is a pretty badass testing tool
  • Remember data can come in many shapes and forms and it often has complex relations to other data
  • Sometimes the simplest of ideas are not obvious to others
  • There are so many potential benefits with screen recording

Talk 2: How we test by Anders Elm and Erik Brickarp

A bit unorthodox: a two-person experience report at a peer conference. The background was that I read Anders' abstract and realized we have a very similar test process at Verisure. So Anders described how they test at SICK, I described how we test at Verisure by focusing on the differences, and at the end we shared our key lessons learned and challenges. Long story short: we had both started with the goal of implementing SBTM, session-based test management (in my case credit should go to Maria Kedemo et al.), but in both companies this had diverged into something else. I'd describe this new process as much more similar to TBTM, thread-based test management, than to SBTM.

I might talk more about this in the future but let's skip to the open season for now:
  • How much of this divergence is due to strategic decisions and how much is due to laziness or lack of competence (a valid question that requires more thinking on my part)?
  • Internal demos after each completed user story, and mob testing during these demos, seemed awesome (done at SICK but not at Verisure, where I work)
  • We got an interesting question from Björn Kinell: "if you could magically change/correct whatever you wanted, what would that be?". I don't want to share my answer, but my advice: ask yourself that question, because it can help when forming a vision for your testing.
  • It's easy to forget/skip debriefs, test planning and testing not immediately related to a story in the sprint, but be careful as these activities often provide quite a bit of value.
  • Find activities that are "not testing but still testing" to more easily get the developers involved. Examples: add testability, "try" the product and support a tester when testing.
  • Ask the question "how do we catch <very basic bug> before the product is handed over to testers?" to start a discussion in the team about testing and developer responsibility.
  • Remember that small bugs that normally don't slip through development/testing can be a symptom of a much bigger problem like stress, change in culture or lack of support.
  • Time spent improving charters is rarely a waste of time.
  • SBTM "by the book" in a scrum team is not easy...
  • If the timebox aspect (session) is removed you need to find new stopping mechanisms and/or heuristics to help you stop and reflect on whether or not to continue.
  • Debriefs can be useful for many many reasons.


Lightning talks

Johan Jonasson spoke about construct validity in metrics
Anna Elmsjö spoke about finding and making allies
  • Candy solves most people problems (my wife approves)
  • Finding allies among the developers is important to help you get attention to testing problems
Tim Jönsson spoke about the value of "knowing your audience" when reporting bugs
  • There are more ways to report bugs than an entry in a bug reporting tool
  • Do highlight helpful developer behavior/good code and show gratitude when developers help you
  • Good lightning talks often focus on one thing and explain that one thing well!
  • A compelling story helps your audience understand your message
  • Testing is about psychology more than you might initially realize
  • Be humble, be helpful, be useful... and be respectful.
Göran Bakken spoke about how integrity can get in the way of efficiency
  • Refusing to do bad work is not always constructive
  • Two different ways to approach a change are to focus on supporting values or to focus on methods.
Kristian Randjelovic spoke about analogies
  • There are many ways to use analogies to help colleagues with limited understanding of testing understand otherwise rather complex (testing) concepts.
A general lesson for me (even though I didn't present a lightning talk):
If I'm given 10 minutes for my talk and open season, I'll try to aim for a talk shorter than 4 minutes and only focus on one message.

Talk 3: Test documentation in the hands of a stylist by Agnetha Bennstam

Agnetha showed three different examples of how she had helped turn massive 200-page documents into lightweight, often visual, alternatives. It's hard to explain this in text, but the running themes were: figure out what is important, find the right level of detail, and aggregate data in a way that's useful and sufficient for the receiver rather than showing the full data set.

Open season:
  • Dare to try something different!
  • You can add additional dimensions to a diagram by using the size of the dots you're plotting and their color (see slide 55 in Andy Glover's presentation).
  • Use feelings when you test, e.g. "why do I feel bored?", as an indication that there might be something you should react to or take notice of (see Michael Bolton's presentation).
  • Asking for people's gut feeling can be a good way to get their assessment without getting all the motives and data (makes people relax a bit).
  • Sometimes asking for people's gut feeling can help them "dare to answer" so that you can start to figure out what is happening e.g.
    "How 's the quality of ...?"
    "I don't know..."
    "But what's your gut feeling?"
    "Well, so far it's not looking good"
    "Why is that...?"
  • Gut feeling can sometimes mean "very useful and relevant information the person simply cannot/dare not articulate yet"
  • ... but gut feeling can also mean, or more often be interpreted as, "I base this on absolutely nothing but I don't want to admit that"
  • Beware of documentation existing only for the purpose of being ammunition/defense in a blame game
  • A tool that could clearly visualize:
    How fresh the testing is (time and impact of changes since last tested)
    How important the testing is
    How well the testing went (typically quality)
    ... would be cool (see the sketch right after this list).
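
As a rough illustration of both the tool idea above and the earlier point about adding dimensions through dot size and color, here's a minimal matplotlib sketch. To be clear: the area names and all numbers are made up, and this is only my guess at what such a visualization could look like.

```python
import matplotlib.pyplot as plt

# Hypothetical data, one entry per test area (all values made up):
areas      = ["Login", "Payments", "Search", "Admin"]
days_since = [2, 30, 7, 90]        # freshness: days since last tested
importance = [3, 5, 2, 1]          # 1 (low) to 5 (critical)
quality    = [0.9, 0.4, 0.7, 0.8]  # how well the last testing went (1.0 = great)

fig, ax = plt.subplots()
# Dot size encodes importance and color encodes quality, adding two
# extra dimensions on top of the x axis (freshness).
dots = ax.scatter(days_since, range(len(areas)),
                  s=[i * 80 for i in importance],
                  c=quality, cmap="RdYlGn", vmin=0, vmax=1)
ax.set_yticks(range(len(areas)))
ax.set_yticklabels(areas)
ax.set_xlabel("Days since last tested")
fig.colorbar(dots, ax=ax, label="How well the testing went")
plt.show()
```

One glance would then show which important areas are both stale and struggling: big red dots far to the right.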
     

Other stuff

One more crazy thing about SWETish: all the talks and open seasons were amazing, and that's despite the fact that the experience report all organizers had voted as their favorite... was never delivered, since Joel Rydén got sick!

Another cool detail: Anna Elmsjö added a nice twist to the K-cards she created for the conference. On each card there was a short, helpful sentence, statement or word at the top, explaining the card a bit; for instance "I would like to add..." on a yellow card or "This is the most important thing ever!" on a red card. Most were funnier than that, but the funny ones I saw don't translate well to English. To make it even cooler, she had different statements on every single card... Johan, I expect the same for Let's Test 2017 ;)


Summary

THANK YOU! This was one of the best conferences I've attended; I already look forward to the next one!

... and I hope I can bring some of the SWETish awesomeness to PEST, a peer conference I'll attend in just a couple of weeks (we seriously need to work on how we name our peer conferences, by the way...).

03 November 2016

Test plan questions

I recently gave a talk on test strategy and gave the participants a list of "test strategy questions" (which should really be called test plan questions, adhering to these definitions). I think and hope these can be useful to more people, so at the end of this post you'll find my most recent version.

How I use the questions

I use these questions as a quick way to form a test plan. The test plan might be anything from a high-level, early plan for an upcoming test project to something small like the strategy for a specific test charter.

The way I use the list in practice is pretty straightforward. I run through the questions and strike out those that are not relevant. "Not relevant" refers to questions like "any particular tours we should perform?" when creating a high-level plan, "what are the key test/release milestones?" for a plan covering a 90-minute test charter, or "how do we report bugs and status?" if this is already stated by our overall process.

The questions not struck out I try to answer, either myself or in a meeting with the people involved. "Involved" in this case might refer to anything from a second tester to everyone in the project. The outcome is typically one of:
  • I/we lack the necessary information or competence and need help answering this question
  • I/we have an idea but I/we still want help/input from someone else
  • I/we think I/we know the answer well enough but still want a second opinion
  • I/we think I/we know the answer well enough
The results can be turned into a formally documented test plan, some brief notes next to the questions, a mind map or an image, and sometimes the discussion and the actions taken are enough (no need to document anything). Do remember that a test plan document is not a test plan; it's a document describing some parts of your test plan.

A few additional notes:
  • The list should be improved to fit your context; add, remove and modify questions accordingly (if you, for instance, always use the same bug tracker, which most do, remove that question completely).
  • You always have to take time and value into consideration. For a minor issue with low priority it's probably not worth inviting everyone in the project, as the cost of doing so is too high; your own judgement is cheaper and probably enough.
  • The list can help e.g. project managers, line managers and developers think about testing concerns, so do spread (your modified version of) the list to these people.
  • You can split the list into several smaller lists; e.g. one short list of questions to use when creating test charters and a longer/different one for project test plans.
  • It might be wise to think about what's important for a test plan before you even look at the questions. The point is that key details that definitely should be in your test plan might not be highlighted by any of the questions; if you start from the list you might miss them (your thinking is limited by the questions). Once again I think value is important: how much time and effort seems reasonable to spend on this particular test plan?

Credit

I created these questions together with my students, so credit should go to all the students involved. I also discovered that testers before me had created their own lists, and I would not have thought of some of the questions below if it wasn't for their efforts.

So, before I present "my" version I want to highlight and thank the students, Michael Larsen, Ministry of Testing, The Test Eye and Michael Bolton. The four lists I just linked to all come with their own twist/benefit, so do check them out.

Questions

Notice that some questions are generally only applicable when consulting, some only in an agile context, others when testing and development are clearly separated, etc. As already pointed out: do remove or modify questions not relevant to your context.

Initial

I typically want to answer these questions before I go on with the rest, because these five tend to impact my approach to the strategy in general.
  • How much time do we have? 
  • Do we need to document the strategy? Why? How will we document it? 
  • What’s the primary objective (testing mission)? 
  • Are there any other objectives? 
  • Where can we get the initial information about the product/feature/system we need?

Planning

  • What resources do I have available?
    people, money, equipment, licenses, time, expertise, facilities... 
  • Is this a realistic amount of resources for this project? 
  • How flexible is the time plan and what happens if the product is not “ready” by the deadline? 
  • What project risks exist?
    e.g. customers not clear about what they want.
  • What is our backup plan if risk X happens? 
  • What is our backup plan in general if everything fails? 
  • What is likely to change?
    e.g. team setup, requirements, developers' focus, delivery dates… 
  • Any meetings we should attend/discussions we should be part of? 
  • How do we handle handovers/new members/lost members? 
  • Who does what?
    e.g. responsibilities and roles 
  • Any known problems along the road? 
  • Are there any workarounds or solutions to the known problems? 
  • Any regulations, rules, standards, certifications etc. limiting us or forcing us to work/act in a specific way? 
  • What administrative tools are (almost) enforced and what else do we need/benefit from? 
  • How do we plan the everyday work? 
  • What are the key test/release milestones?
  • How flexible is the scope - can the project be down-scaled if some unexpected problem happens?

Prioritization

  • What is most important (to test)? 
  • What is not important (to test)? 
  • What can be skipped altogether?
  • What quality characteristics are most/least important? 
  • Any specific aspect of these characteristics that is more/less important? 
  • What is covered by other teams? 
  • How do we continuously verify we’re doing the right thing? 
  • What are our "done" criteria?
    e.g. a strict deadline, customer acceptance tests or some other assessment of "good enough quality" (and if so, by whom?)
  • What's the general requirement for quality?
    Aim for the stars/critical system or “just don’t explode... too badly”

Information

  • Where can I get information about X and who do I contact if that's not enough? 
  • Which claims exist? 
  • Which customers exist, can we contact them? 
  • Who can answer questions and which questions? 
  • What is still unknown/not clear about the project? 
  • How do we simplify the complexity?
    Simplify the process of learning the product. An example might be "what sort of visual models would it help to create?".
  • Any particular tours we should perform?
  • Are there any general guidelines to how we deal with learning activities and knowledge sharing? 
  • How do we further inspire/reassure/encourage feedback, reviews and knowledge sharing? 
  • How do we stay up to date with what is happening in the project?
  • How do we communicate with various information owners?
    e.g. email (address), phone (number), instant messaging tool, via other person, meeting etc.

Support

  • What kind of built-in testability will help testers?
  • Which modifications can/must be done to the process in general to support the testing? 
  • What do we need to learn more about? 
  • Any particular configuration or test data we can prepare? 
  • Which tools can help us? 
  • What other teams should/can/must we cooperate with? When, how and why? 
  • Do I know who the developers are and can I talk to them? 
  • Do the developers have time allotted for us? 
  • Are there any problems getting their time/getting in touch with the developers? 
  • Will they keep working on this?
  • What will the developers test? How does this impact our testing? Can we make suggestions regarding their testing?


Testing

  • How do I tell right from wrong?
    Find potential oracles 
  • Which oracles are generally most trustworthy? 
  • What testing risks exist?
    e.g. unstable test environment or lack of knowledge about something.
  • Which test techniques might be useful? 
  • What expertise do we lack? 
  • Which scenarios/typical use cases exist? 
  • Which heuristics might be useful? 
  • What logical components/modules are there?
  • Is there anything I'm not allowed to do?
  • Any (testing) tips the developers can give to the testers?

Product

  • Which product risks exist?
    e.g. complex algorithms likely to be buggy or new technology used.
  • Is there any complexity we might be missing? 
  • Which functions will the system/application/feature have? 
  • Who’s the target audience? 
  • Which platforms, systems etc. should the product support? 
  • What requirements exist for this product? 
  • What problem is the product expected to solve? For whom?
  • What problems have happened in the past with this product? 
  • Any existing functionality that is impacted? 
  • What must the product never do?
    e.g. any data sent as plain text is strictly forbidden

Reporting

  • What do I need to cover and how well? 
  • How do we track and visualize coverage and progress? 
  • Which stakeholders exist?
  • How do we report bugs and status? To whom? Why?
    "Why" as in: Which problems/questions will the receiver hope to solve/answer with our report. 
  • What other artifacts/information do we need to share? With whom? Why?
  • When do we need to report what?

Additional sources