29 November 2016

Learning about learning by teaching

I've undergone a tough education in software testing:
  • 15 months long
  • Tests 3 times a week, 7 hours each and in front of a crowd
  • If you skip a test you'll have to do it again, typically within 5 days.
  • The expected level of competence is: "Good enough to teach others"
  • Little or no chance of "redos"; you'd better do it right the first time, every time

In other words: I've been teaching a class in software testing.

The intense experience of teaching testing like this has of course taught me tons of things, and with this post I want to share the positive effects this particular job had on my own learning. Each benefit (of teaching) comes with a "why I find it useful" and a "how you can apply this knowledge in a more everyday context".

Benefit: Curriculum

Why?

I'm not a fan of committing to a plan, especially not when it comes to my learning. However, the education's curriculum did force me to look into many topics/materials I would otherwise have skipped due to laziness, lack of interest or lack of understanding (not thinking it was useful to me). Some of these have definitely made me a more complete/skilled tester, such as test framing, a deeper understanding of test techniques and a better understanding of bias.

How?

Benefit: Go deep

Why?

I've read tons of articles, watched hundreds of hours of presentations/videos and spent a lot of time actually practicing testing. However, I often end up looking at topics in a quite shallow way, especially when I find the topic a bit boring (it may still be very useful). When you're going to talk about a specific topic for just a couple of hours, you're okay; there's little "need" to go deep. When you have to prepare several weeks of study material though, that's a whole different beast! Being forced to go deep into topics has enabled me to better question, improve, explain and argue for the various choices I make (for example why I chose to place my testing mission at the top of a certain status report).

How?

  • Dig deep into fundamental questions e.g. what is testing, why do we test, what is an oracle etc.
  • Look into related topics. Say you want to improve your written reporting skills then look into e.g. rhetoric, design, how people read printed and digital documents, how people interpret colors or tutorials for your word processor/mind mapping tool/whatever. The point is: don't limit yourself to articles specifically about test reporting.
  • Set a "topic of the month" and try to become as skilled as you can in this topic. Don't stop because you feel "done", continue beyond that.

Benefit: Giving feedback

Why?

An important part of my job is helping students understand what they do well and what they might need to improve. To do this I have to observe and analyze what they've done, what they think they've accomplished, what actually made them accomplish it etc., and all this I have to do rather thoroughly in order to be able to explain it to them. This helps me build an understanding that goes beyond "do this or do that because it works better".

An example of this is grading various assignments and projects, as students, at least on a general level, need to understand what they did well and what they would have had to do to get a better grade. If they get the highest grade they need to know why, so they know both what to continue doing and what to improve. As testers we need these kinds of observation and communication skills all the time when working with developers, project managers etc.

How?

  • Study your own testing and try to explain why it was good and how it could be improved.
  • One area where I've found this pretty easy to practice (I can't prove that the practice translates to other parts of testing but I think it does) is watching presentations (e.g. on YouTube) and trying to give feedback to the presenter. What specifically did she/he do well or poorly?
  • Study other testers and try to figure out why you find them good/bad testers. Be as specific as you can.
  • When testing, try to find positive and negative patterns: "The log entries are (almost) always relevant and well described making my work so much easier" or "The UI components often have poor vertical alignment".

Benefit: Teaching

Why?

Teaching in itself is a great technique for learning. You have to rephrase the content to match your own language, you hear yourself speak about the topic and you get questions pinpointing gaps in your understanding and/or explanation.

How?

  • Do teach colleagues and friends about the various topics you've practiced.
  • Write an educational article/blog post about what you've learned (you don't need to publish it to anyone to still get many of the benefits).
  • Talk at a local test meetup and if there isn't one, arrange one.

Benefit: Peer

Why?

Working with Maria Kedemo and Martin Nilsson has allowed me to get feedback on the ideas I'm about to share, feedback on my interpretation of various topics and someone to speak with when I feel stuck. It has also allowed me to learn from their knowledge and experience of testing.

How?

  • Speak with a colleague
  • Join a local tester meetup
  • Go to a test conference
  • Join the active community on Twitter
  • Try your own version of Transpection Tuesday (my post, Helena's post: 1, 2)
  • More ideas...

Benefit: Observe testing

Why?

I've spent a significant amount of time observing testers test as a group, observing the actual testing done by individual testers and listening to testers speak about their testing. All three exposed me to new ideas and made me question my own approach. It's also interesting because you get to see a specific problem solved in many different ways, which helps you understand what actually impacts the result; e.g. "what is the common denominator in these solutions, is there anything I can learn from that?" or "they all had different ways to set up but all ended up with the same solution, which setup worked best/most efficiently and can I learn something from that?".

How?

  • Pair testing
  • Look at other testers' notes, reports etc.
  • Do call for and attend debriefs, whether or not you use the concept of test sessions
  • Offer to review things
  • Volunteer to mentor/coach another tester; this will enable you to observe another tester as well as get several of the other benefits mentioned in this post

Benefit: Consistency

Why?

To sit down and learn about various topics every day for over a year has definitely added some welcome consistency to my self-education.

How?

Benefit: Questioning basic assumptions

Why?

Explaining fundamental concepts is incredibly hard but rewarding! As an experienced tester I take quite a few things for granted, and explaining concepts built on these assumptions to someone without experience leads to wonderful questions like "but why do we need testing at all?", "what does it actually mean to test something?" and "why can testers find bugs if developers who know the code can't (as in, why do bugs happen at all)?". Answering these questions without being able to rely on "experience-based assumptions" has led to more than a few epiphanies (and a lot of frustration, of course).

How?

  • Talk testing with people having a different frame of reference (developers, management etc.)
  • Talk testing with people who don't work in the industry; for instance try to explain what you do to a relative.
  • Teach new testers at the company, or teach e.g. developers about testing
  • Talk testing with new, inexperienced testers

Benefit: Ask yourself "how do you train this skill"

Why?

Reading and listening is nice but sometimes you need to actually practice skills to be able to learn them. When teaching, I've spent a fair amount of time trying to figure out exercises pinpointing a specific skill I want the students to practice, or just exercises/projects in general helping students practice relevant testing skills. This experience now helps me both when less experienced testers want help learning a skill, when I try to explain/teach something and when I try to teach myself something.

How?

  • After e.g. a blog post, YouTube video or book, think about how you can incorporate the new concepts you've just learned about into your own work.
  • Try various exercises and try to replicate various experiments yourself; such as: 1, 2, 3, 4; to help kickstart your brain.
  • Whenever you're asked to explain something, try to come up with an exercise or experiment that helps demonstrate whatever you're explaining.

Benefit: Getting questions

Why?

I've already touched upon this but getting questions from the students on anything that's not clear to them is incredibly challenging but rewarding. It has helped me realize flaws in my own understanding, forced me to question my own assumptions and challenged me to find new ways to explain certain concepts.

How?

  • Explain concepts to others
  • Ask for feedback
  • Ask questions yourself; this both inspires others and helps you ask questions "to yourself"
  • When reading/watching e.g. a book, presentation (video) or article; pause periodically and ask yourself: "what did I just read/watch and what of that is unclear/seems strange to me?"

Benefit: Having the time and expectation to learn

Why?

When in the middle of deadlines, huge backlogs and conflicting priorities it's easy to forget learning. Having the explicit expectation to learn new things has been an interesting experience and I feel confident saying I leave the teaching assignment as a much more competent tester. Spending as much time as I did on learning is not possible in most working contexts but I think "expectation to learn" is the key concept here, as it helps make learning happen at all.

How?

  • Ask your boss: "How much time am I expected (or at least allowed) to spend on education?"
  • When doing backlog grooming (if you do this); add learning stories as dependencies e.g. "before we implement the new video player we need to learn a bit about streaming, video formats and performance testing related to streaming". If you end up never having time for these learning dependencies, try timeboxing them to make the expected time invested in learning more explicit.
  • Remember learning is a fundamental part of testing.
  • Differentiate between the learning that's necessary to solve your current task and learning with more long-term, strategic value (e.g. learning more about testing in general, about web security, about test planning etc.). The "strategic learning" is often important to keep you and the company progressing but can easily be forgotten if put in the same "budget" as the task-solving learning.

Final word

I removed several additional benefits I had initially included just to finally get this blog post published (it's been stuck in "draft" for over a year), so just to be clear: you can learn so much more about learning by teaching; this is a mere introduction.

21 November 2016

SWETish

It's been quite a long time since I wrote about an event. This is mainly because "it's not new to learn new stuff" anymore, so for me to write about an event it needs to be pretty special. SWETish, a peer conference that just ended, was exactly that: special.

I've tried to wrap my head around what made this different and it boils down to:
I've not had this many "epiphanies"/been surprised this many times at a test conference since... one of my first ever.

Next question is: Why? What made me get all those epiphanies?

Well, I've rewritten my explanation of that I don't know how many times now. It boils down to things I can't seem to describe well enough yet, but I'll give it a shot in a separate post, probably after a Transpection Tuesday (I need your brain, Helena).

So, let's skip to the good stuff: Content and lessons learned.

Credit

Before we start: Credit for the content below goes to all the participants of SWETish:
  • Agnetha Bennstam
  • Anders Elm
  • Anna Elmsjö
  • Björn Kinell
  • Erik Brickarp
  • Göran Bakken
  • Johan Jonasson
  • Kristian Randjelovic
  • Morgan Filipsson
  • Tim Jönsson

Talk 1: All Pairs by Morgan Filipsson

Morgan basically described a tool he thought was more or less obvious and most of us went like: "mind blown"... it felt a bit like Paul and Karen in the K-card story. It was just a simple non-mechanical combinatorial testing support tool made in Excel where you could decide the input values you would use in a test and the program would help you by showing how many valid pairs (as in "all pairs") you had not yet covered. I don't know if this is exactly what Hexawise, or some other tool, already does but to me it was ingenious.
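If you're curious what such a tool does under the hood, here's a minimal Python sketch of the idea (the parameters and values are made-up examples, and this is of course not Morgan's actual Excel sheet): given the tests you've already run, it lists the value pairs you haven't covered yet.

from itertools import combinations, product

# Made-up parameters and values; replace with the ones relevant to your product.
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "language": ["en", "sv"],
    "account": ["free", "premium"],
}

def all_pairs(params):
    # Every pair of values drawn from two different parameters.
    pairs = set()
    for (p1, values1), (p2, values2) in combinations(params.items(), 2):
        for v1, v2 in product(values1, values2):
            pairs.add(frozenset({(p1, v1), (p2, v2)}))
    return pairs

def pairs_covered_by(test, params):
    # A test assigns one value to each parameter, e.g. {"browser": "Chrome", ...}
    return {frozenset({(p1, test[p1]), (p2, test[p2])})
            for p1, p2 in combinations(params, 2)}

executed_tests = [
    {"browser": "Chrome", "language": "en", "account": "free"},
    {"browser": "Firefox", "language": "sv", "account": "premium"},
]

remaining = all_pairs(parameters)
for test in executed_tests:
    remaining -= pairs_covered_by(test, parameters)

print(f"{len(remaining)} pairs not yet covered:")
for pair in sorted(sorted(p) for p in remaining):
    print("  ", pair)

The charm of the Excel version, as I understood it, was exactly this kind of immediate "pairs left" feedback while you design tests by hand, rather than having a tool generate the tests for you.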

But this is a peer conference so why stop there; open season time:
  • Forming an unbiased opinion can be expensive and money does matter
  • Getting many people's biased opinions and comparing them can sometimes help
  • Beware of decision fatigue when using exploratory testing
  • Improving your test strategy or moving towards a more formal approach can delay the decision fatigue process
  • Mind maps are cool but do have limitations
  • Don't underestimate pen and paper
  • Excel is a pretty badass testing tool
  • Remember data can come in many shapes and forms and it often has complex relations to other data
  • Sometimes the simplest of ideas are not obvious to others
  • There are so many potential benefits with screen recording

Talk 2: How we test by Anders Elm and Erik Brickarp

A bit unorthodox but a two person experience report at a peer conference. The background was I read Anders' abstract and realized we have a very similar test process at Verisure. So Anders described how they test at SICK, I described how we test at Verisure by focusing on the differences and at the end we shared our key lessons learned and challenges. Long story short: We had both started with the goal to implement SBTM (in my case credit should go to Maria Kedemo et al.) but in both companies this had diverged into something else. I described this new process to be much more similar to TBTM than SBTM.

I might talk more about this in the future but let's skip to the open season for now:
  • How much of this diverging is due to strategic decisions and how much is due to laziness/lack of competence (valid question that requires more thinking on my part)?
  • Internal demos after each completed user story and mob testing during these demos seemed awesome (done at SICK but not Verisure, where I work)
  • We got an interesting question from Björn Kinell about "if you could magically change/correct whatever you wanted, what would that be?". I don't want to share my answer but my advice: ask yourself that question, because it can help when forming a vision for your testing.
  • It's easy to forget/skip debriefs, test planning and testing not immediately related to a story in the sprint, but be careful as these activities often provide quite a bit of value.
  • Find activities that are "not testing but still testing" to easier get the developers involved. Examples: Add testability, "try" the product and support a tester when testing.
  • Ask the question "how do we catch <very basic bug> before the product is handed over to testers?" to start a discussion in the team about testing and developer responsibility.
  • Remember that small bugs that normally don't slip through development/testing can be a symptom of a much bigger problem like stress, change in culture or lack of support.
  • Time spent improving charters is rarely a waste of time.
  • SBTM "by the book" in a scrum team is not easy...
  • If the timebox aspect (session) is removed you need to find new stopping mechanisms and/or heuristics to help you stop and reflect on whether or not to continue.
  • Debriefs can be useful for many many reasons.


Lightning talks

Johan Jonasson spoke about construct validity in metrics
Anna Elmsjö spoke about finding and making allies
  • Candy solves most people problems (my wife approves)
  • Finding allies among the developers is important to help you get attention to testing problems
Tim Jönsson spoke about the value of "knowing your audience" when reporting bugs
  • There are more ways than an entry in a bug reporting tool to report bugs
  • Do highlight helpful developer behavior/good code and show gratitude when developers help you
  • Good lightning talks often focus on one thing and explain that one thing well!
  • A compelling story helps your audience understand your message
  • Testing is about psychology more than you might initially realize
  • Be humble, be helpful, be useful... and be respectful.
Göran Bakken spoke about how integrity can get in the way of efficiency
  • Refusing to do bad work is not always constructive
  • Two different ways to approach a change is to focus on supporting values or to focus on methods.
Kristian Randjelovic spoke about analogies
  • There are many ways to use analogies to help colleagues with limited understanding of testing understand otherwise rather complex (testing) concepts.
A general lesson for me (even though I didn't present a lightning talk):
If I'm given 10 minutes for my talk and open season, I'll try to aim for a talk shorter than 4 minutes and only focus on one message.

Talk 3: Test documentation in the hands of a stylist by Agnetha Bennstam

Agnetha showed three different examples of how she had helped turn massive 200-page documents into lightweight, often visual, alternatives. It's hard to explain this in text but the running themes were: figure out what is important, find the right level of detail, and aggregate data in a way that's useful/enough for the receiver rather than showing the full set of data.

Open season:
  • Dare to try something different!
  • You can add additional dimensions to a diagram by using the size of the dot you're plotting and color (see slide 55 in Andy Glover's presentation).
  • Use feelings when you test, e.g. "why do I feel bored?", as an indication that there might be something you should react to/take notice of (see Michael Bolton's presentation).
  • Asking for people's gut feeling can be a good way to get their assessment without getting all the motives and data (makes people relax a bit).
  • Sometimes asking for people's gut feeling can help them "dare to answer" so that you can start to figure out what is happening e.g.
    "How's the quality of ...?"
    "I don't know..."
    "But what's your gut feeling?"
    "Well, so far it's not looking good"
    "Why is that...?"
  • Gut feeling can sometimes mean "very useful and relevant information the person simply cannot/dare not articulate yet"
  • ... but gut feeling can also mean, or more often be interpreted as, "I base this on absolutely nothing but I don't want to admit that"
  • Beware of documentation existing only for the purpose of being ammunition/defense in a blame game
  • A tool that could clearly visualize:
    How fresh the testing is (time and impact of changes since last tested)
    How important the testing is
    How well the testing went (typically quality)
    ... would be cool.
     

Other stuff

One more crazy thing about SWETish: All the talks and open seasons were amazing and that's despite the fact that the experience report all organizers voted as their favorite... was not delivered since Joel Rydén got sick!

Another cool detail: Anna Elmsjö added a nice twist to the K-cards she created for the conference: on each card there was a short helpful sentence/statement/word at the top explaining the card a bit. For instance, sentences like "I would like to add..." on a yellow card or "This is the most important thing ever!" on a red card; most were funnier than that but the funny ones I saw don't translate well to English. To make it even cooler she had different statements for every single card... Johan, I expect the same for Let's Test 2017 ;)


Summary

THANK YOU! This was one of the best conferences I've attended, I already look forward to the next one!

... and I hope I can bring some of SWETish awesomeness to PEST, a peer conference I'll attend in just a couple of weeks (we seriously need to work on how we name the various peer conferences by the way...).

03 November 2016

Test plan questions

I recently gave a talk on test strategy and gave the participants a list of "test strategy questions" (which should really be called test plan questions, adhering to these definitions). I think and hope these can be useful to more people, so at the end of this post you'll find my most recent version.

How I use the questions

I use these questions as a quick way to form a test plan. The test plan might be everything from a high level, early plan for an upcoming test project to something small like the strategy for a specific test charter.

The way I practically use the list is pretty straightforward. I just run through the questions and strike out those that are not relevant. Not relevant refers to questions like "any particular tours we should perform?" when creating a high-level plan, "what are the key test/release milestones?" for a plan covering a 90-minute test charter, or "how do we report bugs and status?" if this is already stated by our overall process.

The questions not struck out I try to answer; either myself or in a meeting with the people involved. Involved in this case might refer to anything from a second tester to everyone involved in a project. The outcome is typically:
  • I/we lack the necessary information or competence and need help answering this question
  • I/we have an idea but I/we still want help/input from someone else
  • I/we think I/we know the answer well enough but still want a second opinion
  • I/we think I/we know the answer well enough
The results can be turned into a formally documented test plan, some brief notes next to the questions, a mind map or an image; sometimes the discussion and the actions taken are enough (no need to document anything). Do remember that a test plan document is not a test plan; it's a document describing some parts of your test plan.

A few additional notes:
  • The list should be improved to fit your context; add, remove and modify questions on the list accordingly (if you for instance know which bug tracker to always use, which most do, remove that question completely).
     
  • You always have to take time and value into consideration. For a minor issue with low priority it's probably not worth inviting everyone in the project as the cost of doing so is too high; your own judgement is cheaper and probably enough.
     
  • The list can help e.g. project managers, line managers and developers think about testing concerns so do spread (your modified version of) the list to these people.
     
  • You can split the list into several smaller lists; e.g. one short list of questions to use when creating test charters and a longer/different one for project test plans.
     
  • It might be wise to think about what's important for a test plan before you even look at the questions. The point here is that sometimes key details that definitely should be in your test plan are not highlighted by any of the questions. If you start by using the list you might miss these key details (your thinking is limited by the questions). Once again I think value is important: how much time and effort seems reasonable to spend on this particular test plan?

Credit

I created these questions together with my students so credit should go to all the students involved. I also discovered testers before me had created their lists as well and I would not have thought about some of the questions below if it wasn't for their efforts.

So, before I present "my" version I want to highlight and thank the students, Michael Larsen, Ministry of Testing, The Test Eye and Michael Bolton. The four lists I just linked to all come with their own twist/benefit, so do check them out.

Questions

Notice that some questions are generally only applicable when consulting, some only in an agile context, others when testing and development are clearly separated etc. As has already been pointed out: do remove or modify questions not relevant to your context.

Initial

I typically want to answer these questions before I go on with the rest, because these five typically impact my approach for the strategy in general.
  • How much time do we have? 
  • Do we need to document the strategy? Why? How will we document it? 
  • What’s the primary objective (testing mission)? 
  • Are there any other objectives? 
  • Where can we get the initial information about the product/feature/system we need?

Planning

  • What resources do I have available?
    people, money, equipment, licenses, time, expertise, facilities... 
  • Is this a realistic amount of resources for this project? 
  • How flexible is the time plan and what happens if the product is not “ready” by the deadline? 
  • What project risks exist?
    e.g. customers not clear about what they want.
  • What is our backup plan if risk X happens? 
  • What is our backup plan in general if everything fails? 
  • What is likely to change?
    e.g. team setup, requirements, developers' focus, delivery dates… 
  • Any meetings we should attend/discussions we should be part of? 
  • How do we handle handovers/new members/lost members? 
  • Who does what?
    e.g. responsibilities and roles 
  • Any known problems along the road? 
  • Are there any workarounds or solutions to the known problems? 
  • Any regulations, rules, standards, certifications etc. limiting us or forcing us to work/act in a specific way? 
  • What administrative tools are (almost) enforced and what else do we need/benefit from? 
  • How do we plan the everyday work? 
  • What are the key test/release milestones?
  • How flexible is the scope - can the project be down-scaled if some unexpected problem happens?

Prioritization

  • What is most important (to test)? 
  • What is not important (to test)? 
  • What can be skipped all together? 
  • What quality characteristics are most/least important? 
  • Any specific aspect of these characteristics that is more/less important? 
  • What is covered by other teams? 
  • How do we continuously verify we’re doing the right thing? 
  • What are our "done" criteria?
    e.g. a strict deadline, customer acceptance tests or some other assessment of "good enough quality" and if so, by whom?
  • What's the general requirement for quality?
    Aim for the stars/critical system or “just don’t explode... too badly”

Information

  • Where can I get information about X and who do I contact if that's not enough? 
  • Which claims exist? 
  • Which customers exist, can we contact them? 
  • Who can answer questions and which questions? 
  • What is still unknown/not clear about the project? 
  • How do we simplify the complexity?
    Simplify the process of learning the product. An example might be "what sort of visual models would it help to create?".
  • Any particular tours we should perform?
  • Are there any general guidelines to how we deal with learning activities and knowledge sharing? 
  • How do we further inspire/reassure/encourage feedback, reviews and knowledge sharing? 
  • How do we stay up to date with what is happening in the project?
  • How do we communicate with various information owners?
    e.g. email (address), phone (number), instant messaging tool, via other person, meeting etc.

Support

  • What kind of built in testability will help testers?
  • Which modifications can/must be done to the process in general to support the testing? 
  • What do we need to learn more about? 
  • Any particular configuration or test data we can prepare? 
  • Which tools can help us? 
  • What other teams should/can/must we cooperate with? When, how and why? 
  • Do I know who the developers are and can I talk to them? 
  • Do the developers have time allotted for us? 
  • Are there any problems getting their time/getting in touch with the developers? 
  • Will they keep working on this?
  • What will the developers test? How does this impact our testing? Can we make suggestions regarding their testing?


Testing

  • How do I tell right from wrong?
    Find potential oracles 
  • Which oracles are generally most trustworthy? 
  • What testing risks exist?
    e.g. unstable test environment or lack of knowledge about something.
  • Which test techniques might be useful? 
  • What expertise do we lack? 
  • Which scenarios/typical use cases exist? 
  • Which heuristics might be useful? 
  • What logical components/modules are there?
  • Is there anything I'm not allowed to do?
  • Any (testing) tips the developers can give to the testers?

Product

  • Which product risks exist?
    e.g. complex algorithms likely to be buggy or new technology used.
  • Is there any complexity we might be missing? 
  • Which functions will the system/application/feature have? 
  • Who’s the target audience? 
  • Which platforms, systems etc. should the product support? 
  • What requirements exist for this product? 
  • What problem is the product expected to solve? For who? 
  • What problems have happened in the past with this product? 
  • Any existing functionality that is impacted? 
  • What must the product never do?
    e.g. any data sent as plain text is strictly forbidden

Reporting

  • What do I need to cover and how well? 
  • How do we track and visualize coverage and progress? 
  • Which stakeholders exist?
  • How do we report bugs and status? To whom? Why?
    "Why" as in: which problems/questions will the receiver hope to solve/answer with our report.
  • What other artifacts/information do we need to share? To whom? Why? 
  • When do we need to report what?

Additional sources

28 September 2016

Next step

After a colleague pointed out I'm not 29 years old anymore I had to revisit my About page. While reading the rest of the text this sentence made me stop...

Next step is to spread the knowledge I've obtained and be a thinking practitioner who can hopefully do what James once did for me.

I wrote that sentence over 4 years ago, before any tester outside Linköping knew I existed. So I took a moment just to reflect.

"Spread the knowledge I've obtained"
I've been teaching students in Software Testing at a vocational university for 1.5 years, I've given numerous lectures, presentations and workshops on various topics at conferences (national and international), meetups and at my workplaces, and I'm now test coach at Verisure, a role in which I'm basically paid to share the knowledge I've acquired so far. Finally, for most of the period, I've been a fairly active blogger.

"Be a thinking practitioner"
Transpection Tuesday, my blog, my peer conference appearances, my many dinner discussions with top notch testers (most recently Göran Bakken) and my (former) activity on Twitter all add to the "thinking" part, while my testing and test-related experiments at Ericsson, Verisure and Zenterio, as well as my effort during the Software Testing World Cup, all add to the "practitioner" part.

Most important though: The two are not separate processes. The best example of this is probably Transpection Tuesday. During Transpection Tuesday, Helena and I often discuss a challenge one or both of us have, together we work out potential solutions or experiments to run, we go back to work to try these solutions/run the experiments and finally we share/ask for help to evaluate the results at a second Transpection Tuesday. Like I said, one process.

"Who can hopefully do what James once did for me"
After graduation I got emails from two former students; both made me feel I've accomplished exactly this...

... hmm, is it raining inside or why are my eyes moist all of a sudden...

On top of that other former students, testers and other colleagues (most recently a developer) have all helped me understand my efforts to inspire, guide and mentor have actually made a difference.

It's not without pride I say: in four years I've wildly exceeded my own expectations, based on my memories of what I hoped to achieve in 2012. Not just my expectations for the next four years but potentially for my whole career. Shit... I need a break.

... pause music...

What's my "next step"

Took quite a bit of thinking but here's my updated "next step", or mission if you prefer that:

I've presented to, spoken with, inspired, coached and mentored testers in testing and quality. I want to continue this but to a broader audience, in ways different from what has been done before and inspire others to present, speak, inspire, coach and mentor as well.

Clarification:
Broader audience refers to e.g. developers, students (not studying testing), managers etc.

If you happen to read this in a few years, do remind me to report my status, please.

What's your "next step"

Enough about me, what's your next step?

24 August 2016

How to come up with test ideas

Intro

This post is an attempt to gather the various tools, tips and sources I've found useful to improve my ability to generate test ideas.

Notice that the various concepts described are huge and I will only briefly describe them in this post, it's up to you to learn more and put them into practice. A good start would be to check out the many articles I've linked to.

To developers
Important! The techniques and sources I will describe are just as useful when developing, example:

"This will create a separate log file, what do I need to take into consideration when that file is created..."

If you want to develop high quality software, learn the fundamentals of test design!

Common product risks

Risk catalogs
Risk catalogs are lists of generic risks (things to test) in common components/functions.

You will of course need to judge what's relevant in your context, and risk catalogs aren't flexible the way actual thinking techniques are, but they are still incredibly useful:

Quality characteristics
Quality characteristics, also known as quality criteria or quality attributes, are an attempt to break down quality into meaningful/helpful categories. When generating test ideas these categories help us see the product from different angles.

Generic risks
Some aspects are (almost) always worth taking into consideration when assessing risk, e.g. "is this new functionality?" or "does this module have a history of being buggy?". I can't say this information helps me generate new and creative test ideas but it helps me focus my effort on risky areas and is thus still relevant.

Test techniques

Test techniques are sort of "compressed tester mindset". The set of 9 techniques defined in the heuristic test strategy model is my personal favorite but there are many different sets out there.

Notice that each technique is way too broad to cover in this blog post but if you want to "learn to think like a tester" I urge you to learn more about and practice them!


Function Testing
Identify and test each function individually.

I prefer to begin with creating a list or mind map of all functions I can identify in whatever I'm about to test. A function in this case is anything the application can do e.g. save a log message, identify that a new version is available or show an error message. To do this, touring is a useful practice described in the links below.

When you've identified the functions, focus on each one individually, example:
"How can delete entry be tested? Successful delete, try to delete an already deleted entry, try to delete when lacking access, stack deletes of the same entry, lose connection to data source when deleting, modified delete request (e.g. SQL injection), potential timing issues, delete different entries in rapid succession, ..."

Domain Testing
Identify the input and output data in the application then look at which of the possible values/combinations of values you should test.

After identifying the variable (input or output) to test, you first group the possible values into equivalence classes. The idea is any of the values within each equivalence class should in theory behave the same. The next step is to select one or more values from each class to test. To do this you for instance look at boundary values, high risk values or realistic values. Finally when several variables need to be combined (such as in a typical form) there are various combinatorial techniques to help you find an efficient way (as few tests as possible) to actually combine these values into individual tests with the level of coverage you're aiming for.
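As a rough illustration, here's a minimal Python sketch of partitioning and value selection for a single made-up numeric field (an order quantity limited to 1-100); a real analysis would of course also look at format, data types and combinations with other variables.

# A made-up example: an order "quantity" field that should accept 1-100.
LOWER, UPPER = 1, 100

# Step 1: partition the possible values into equivalence classes.
equivalence_classes = {
    "below valid range": range(-1000, LOWER),
    "within valid range": range(LOWER, UPPER + 1),
    "above valid range": range(UPPER + 1, 1000),
}

# Step 2: pick interesting representatives, favoring the boundaries.
selected_values = {
    "below valid range": [0, -1],              # just below the boundary, plus a negative value
    "within valid range": [LOWER, 42, UPPER],  # both boundaries and a typical value
    "above valid range": [UPPER + 1, 999],     # just above the boundary, plus a large value
}

for cls, values in selected_values.items():
    for value in values:
        assert value in equivalence_classes[cls], f"{value} is not in class '{cls}'"
        print(f"test quantity={value}  ({cls})")

The exact classes and values are assumptions for the example; the point is simply that the partitioning and the value selection are two separate steps.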

Notice! When reading about domain testing you might get intimidated by all the paperwork involved: "I don't have time to do all that for a simple form...". For me the detailed analysis on paper is sometimes useful, especially to practice my domain testing skills, but generally I just keep the various techniques related to partitioning (creating and grouping values into equivalence classes) and value selection in my head and apply them "on the fly". Still: I do urge you to sit down and create e.g. a detailed boundary/equivalence table at least a few times because it's key to actually understanding domain testing.

For the combinatorial analysis: Remember there are excellent tools!

Stress Testing
Identify various bottlenecks, input that can be sent in huge volumes or that requires more computing power than usual, dependencies, communication with external systems etc. and ask yourself how these could become a problem for the application (e.g. flooding of requests, missing dependencies, slow connections/timeouts/lost packets). Also look into the system's ability to restore itself after collapsing (e.g. will the system come back up after a database crash?).
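As a minimal sketch of the flooding idea, here's what a quick-and-dirty request flood could look like in Python. The URL, request count and concurrency are made up, and proper stress testing needs dedicated tooling and a safe environment; this is only to show the shape of the idea.

import concurrent.futures
import urllib.request

URL = "http://localhost:8080/api/orders"   # hypothetical endpoint
REQUESTS = 500
WORKERS = 50

def hit(_):
    try:
        with urllib.request.urlopen(URL, timeout=2) as response:
            return response.status
    except OSError as error:                # timeouts, refused connections etc.
        return f"error: {error}"

# Fire off many requests concurrently and summarize the outcome.
with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

successes = sum(1 for r in results if r == 200)
print(f"{successes}/{REQUESTS} requests succeeded, {REQUESTS - successes} failed or timed out")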

Do notice that stress testing often refers to testing the behavior when actually going beyond a system's breaking point. This is important since if you search for "stress testing" you'll mostly find just a subset of what the test technique "stress testing" is supposed to cover. Try also searching for e.g. "robustness testing", "load testing" and "performance testing".

Flow Testing
Testers often reset environments between tests, especially if using test cases, by e.g. closing the application or only focusing on one step in a long flow. The problem: users don't. The test technique flow testing is thus about focusing on what will happen when a continuous chain of events occurs. One way is to actually identify a long, realistic flow (e.g. a "buy, cancel, refund" cycle in a web shop) and test it; another is to not reset between tests (e.g. not closing the application or resetting data for several days while testing).

Scenario Testing
Scenario testing is, unsurprisingly, about creating a scenario and testing it. One important benefit of scenario testing is that it can identify gaps in the implementation that other techniques generally don't, as they focus on what has actually been implemented.

Claims Testing
Also known as "specification-based testing". The idea is to identify the claims made about the product and check if they are actually true. Claims can come from a variety of sources e.g. specifications, requirements, manuals, various in-app help sources, public statements or the EULA.

User Testing
User testing is about asking yourself "how can I use users to improve my testing":
  • A user attempts to complete various tasks in the application and walks you (the tester) through how she/he is reasoning. The tester's job is to identify problems the user runs into e.g. not understanding certain parts of the navigation.
     
  • Use user data to drive the testing e.g. "the most items a user has ever ordered is 104 and in our tests we never order more than 20, we should increase that amount".
     
  • Personas and other tools helping you "simulate" a user can be used when testing, example:
    "Looking at this from the perspective of 'Nancy, the power user', I realize we lack keyboard shortcuts in much of our application"
Do notice that user testing often refers only to the first bullet above. This is important since if you search for "user testing" you'll mostly find just a subset of what the test technique "user testing" is supposed to cover. Try also searching for e.g. "personas", "ux testing" and "usability testing".

Risk Testing
Identify risks and design tests to cover each one of them.

These risks can either be identified:

... inside-out (ask: what could go wrong here?), example:
"We now allow users to delete their private messages, how could that be a problem? Well what if a user could potentially delete another user's messages? How can we test if that's possible to do?"

... or outside-in (apply common risks to the product), example:
By using Michael Hunter's "You're not done yet" I realize saving the configuration to a read-only folder could be a problem.

The chapter "Common product risks" includes tons of sources useful when applying the outside-in approach to identify risks.

Notice that "risk-based testing" (prioritize testing what's considered the greatest risk first) is not the same as the test technique "risk testing", but it will be your most common result in a web search. However, articles about risk-based testing typically cover tools and techniques to identify and prioritize risks, making them useful anyhow; just be aware of the difference.

Automatic Checking
How can you use tools you know about to improve your testing?

Do not confuse this with "automation", "automatic checking", "automated testing" or whichever term you prefer, which is a way to actually execute the ideas identified and not a test technique in itself; even though it's unfortunately sometimes used as some sort of technique.
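As one small example of what I mean by using tools you know about, here's a minimal Python sketch of a tool-assisted check: scanning a log file for suspicious entries while you test. The path and patterns are hypothetical; adjust them to whatever your product actually logs.

import re
from pathlib import Path

LOG_FILE = Path("logs/application.log")     # hypothetical path
PATTERNS = [r"error", r"exception", r"timed? ?out", r"stack ?trace"]

# Collect log lines matching any of the suspicious patterns.
suspicious = []
for number, line in enumerate(LOG_FILE.read_text(errors="replace").splitlines(), start=1):
    if any(re.search(pattern, line, re.IGNORECASE) for pattern in PATTERNS):
        suspicious.append((number, line.strip()))

print(f"{len(suspicious)} suspicious log lines found")
for number, line in suspicious[:20]:
    print(f"{number}: {line}")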

Other useful resources about test techniques
  • What is a good test case, by Cem Kaner
    More information about some of the test techniques above as well as some additional techniques.
  • Lessons learned in software testing (book), by Pettichord, Kaner, and Bach
    The chapter about test techniques is available online as PDF as part of the BBST course. I very much recommend you to get the rest of the book as well.
     
  • BBST Test design, by Cem Kaner and Rebecca L. Fiedler
    BBST presents a broad set of test techniques. Highly recommended reading in general and for test design in particular.

Critical and lateral thinking

Everything else in this article is more or less an application of critical thinking, supported by lateral thinking. These are thus the most abstract, but most generally applicable, concepts I will describe.

Critical thinking
Definition:
"The objective analysis and evaluation of an issue in order to form a judgement"
Oxford Dictionaries

Compare this to e.g. James Bach's definitions of software testing:
"Questioning a product in order to evaluate it"
... any similarities?

A more in depth definition of critical thinking has been made by The Critical Thinking Community.

When developing software we make massive amounts of assumptions, simply to save time, example:
This framework will prevent..., this is what users want, this is the simplest way to...
Critical thinking is essentially the skill needed to identify these assumptions.

Learning:
I also designed a critical thinking exercise for testers, which might help.

I would love some more (and better) sources on critical thinking, so please drop a comment with your suggestions!

Lateral thinking
Lateral thinking is, simply put, the skill of thinking outside the box.

Definition:
"The solving of problems by an indirect and creative approach, typically through viewing the problem in a new and unusual light. Contrasted with vertical thinking."
Oxford Dictionaries

Reading material:
Exercises:

Domain knowledge

To be able to form any test you first need to, at least on a conceptual level, understand the product and its context.

I for instance like to think I'm pretty skilled with test techniques, that my critical and lateral thinking skills are decent and that I know how to get some good value out of tools such as risk catalogs. Still, when new to a product I'm not very effective until I've actually learned the product.

Stuff you can do to quicken the learning of a new application and its context:
  • Ask for a demonstration
  • Look through available documentation (start with documentation intended for customers)
  • Make a visual model of the application, enhance the model as you learn more
  • Tour the application
  • Ask for help to make a SFDIPOT analysis of the application
  • Learn about and try out the tech the application relies on
  • Speak with stakeholders to learn about e.g. the application's purpose and customers
  • Start somewhere and slowly work your way outwards, to avoid getting overwhelmed
  • Observe users using the application
  • Pair up with developers, testers and others working with the application
  • Ask questions... lots of questions
  • Look into, and apply, various learning techniques (search for "learning techniques")
  • Learn about relevant laws, regulations, standards, certifications and business practices
  • Speak with domain experts, sales people and customer service
  • Look at test documentation available for the application

Other tools

Test framing
Test framing is about being aware of what you're testing and why, or, as Michael Bolton describes it in the link below: "link each testing activity with the testing mission". I find this useful to practice as the awareness, in my experience, improves my ability to see potential risks, helps me focus on relevant test design and actively challenges my critical thinking skills.

Opposites
When you have a few ideas to test it's sometimes useful to look at those and ask:
What's the complete opposite of these ideas and could that potentially be a concern too?

Examples:
What happens when several entries fit a delete condition?
Opposite: What happens when no entries fit a delete condition?

What if the network has limited speed, delaying requests?
Opposite: What happens if the network floods the application with requests?

How will a user with limited computer skills approach this?
Opposite: How will a tech-savvy user approach this?

Great additional sources
This is a collection of great test idea articles I've used for this article and/or when generating test ideas that just didn't fit into any of the other chapters:

Please help me improve this

Initially this was intended as "just another blog post" but it sort of took on a life of its own and became a pretty big reference page. I always appreciate comments but since I expect people to assume this is "the truth" more than usual, I will appreciate corrections and suggestions even more!

17 August 2016

PSL alumni Slack group

Problem Solving Leadership is quite a different course in that the time in Albuquerque is just a start rather than the learning experience itself. For this reason a Slack group has been set up to facilitate learning between PSL graduates from different classes, rather than just within one class at a time, which is normally the case.

How to join the Slack group
To be able to register you first need an invite (this is how Slack works). Use the following form and I'll send you an invite:
http://www.brickarp.se/psl

If you feel like it's taking too long; feel free to contact me via Skype (@brickuz).

Important! I cannot check that every single person joining has actually attended PSL, but beware! Joining without having participated will:
  1. Likely hurt your own experience when/if you attend.
  2. Not provide you with very much since you don't have any knowledge about the simulations referred to.
If you're a graduate and like the initiative I would very much appreciate it if you spread the word about this group to other participants in your class or other PSL graduates you know.

If you want a very brief background to who else was involved in starting this group, check out this Twitter thread.

Finally: This group is in no way limited to software testers, I just happen to be a software tester and my most efficient way to reach out is via my blog.

Take care and remember to watch out for the sheriff...

07 June 2016

Time estimates in testing, part 2

Intro

Before I begin: Thanks to feedback from especially Jari Laakso, I've found several flaws in my first post, which was what I hoped for when I published it, as the flaws help me understand estimation better. I will highlight some of them in this post but I also want to point out that part 1 has been updated several times since I first published it to clarify or correct content.

How to interpret the graphic below

Do notice that these are not polarities where you switch from one to the other; instead they are continua. I will however describe one rather specific scenario for each box for the sake of simplicity, a chance for you to see the differences between "each corner". My hope is that these examples can work as inspiration/help for you to think differently.

One final remark: I will present all these examples in terms of cross-functional scrum teams consisting of only developers and testers. Of course this is just one of many ways a team can be set up, but I'll use it once again for the sake of simplicity.

1 - Estimate testing stories

Separated planning with detailed estimates.

In this scenario testers create their own stories, estimate them and get them prioritized as part of the regular scrum flow (the cross-functional team will now have a mix of stories relevant to developers/testers/others). This allows testers to create their own plan optimized for testing needs, and I find it easier to motivate developers to help out with testing as needed since the only content left in the sprint might be test work (and this is very much visible), but...

Since planning testing even a week ahead is rather hard in most scenarios (open bugs blocking stories, open bugs delaying stories, late code deliveries, the level of quality differing from what was expected etc.) I find this approach somewhat chaotic (even though you might expect the complete opposite).

A second challenge with this approach is the risk of creating a gap, within the team, between testers and developers. This, I think, is because if you actually try to follow the plan (test stories) you will lose some of the flexibility to drop something in favor of something else in order to minimize the feedback loop between feature implemented and feature tested.

Finally my experience is it takes a certain skill as a tester and/or a certain group of developers to actually make developers engage in the test part of the sprint planning, further increasing the risk of a gap between testers and developers.

For me, session-based test management is an approach to exploratory testing worth looking into if you want separate test planning, with estimates, without making it too expensive.

If you choose this approach, I would strongly recommend you to share sprint plannings, backlog groomings etc. so that everyone sits down together. It's easy to "optimize" so that testers have their own groomings or so that developers don't engage in those, which is like asking for a disconnect to happen.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The project manager/product owner/manager/test lead who pretends/thinks testing can be controlled.

2 - Estimate dev stories

Shared planning with detailed estimates.

In this scenario testers and developers share stories (DoD includes both implementation and testing). A story is estimated both in regards of testing effort and developer effort for a combined estimate. A story is considered open until "someone" decide that the implementation, testing and bug fixing is done. That "someone" is typically the concerned testers and developers together but if they disagree or the estimate turns out to be way off, a product owner or similar might have to step in.

My experience is that this fosters a very close bond between testers and developers, which typically has a positive effect on e.g. tester status, information flow and code-related testability. Another benefit is that developers have better motivation (and help) to test themselves, as their work is not over until testing is "finished".

However, since the testers are normally the minority it's easy for them to become the ones always losing votes and debates, which may impact how they test, when they stop testing etc., and often not in a good way. Also, since the testing is so closely tied to the development, things like integration testing, general quality assessments (e.g. security, performance, consistency, test legacy for support of a changed context etc.) and regression testing may be missed (regression can be helped by developers taking a more active role in creating automated smoke tests, but that's just a part of it).

One last problem is that if you actually create new tester-developer hybrids and the pressure builds up (important release, budget cuts etc.), teams might trick themselves by moving more members into developing and tampering with their normal quality standards (which in a few cases might be the right choice but often isn't, in my experience, because the "temporary need" becomes permanent).

If you choose this approach, I would strongly recommend you to introduce activities such as system tests or release tests. In this activity, also involve developers as part testers, part bug fixers, so that a gap between developing and testing isn't created (testers not being able to provide feedback quick enough in the new sprint). A different method is to have teams solely dealing with integration testing (more waterfall like/support team) but for this to work there has to be strong information channels established between the cross-functional teams and the integration test teams.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The R&D manager/developer who doesn't really understand there's a need to test anything other than the code that was just written, or the developer who rushes the testing part since "that's not that interesting".

3 - Follow development planning

Shared planning without detailed estimates.

Very similar to the above but without time estimates of the testing. The difference is that you save a bit of money (less administration) but testers are a bit more detached from the development process (though not necessarily from the developers), and if estimates can actually be somewhat accurate you lose that as a planning benefit/potential delay indicator; in this scenario sprint plans assume testers will manage time limits if developers manage theirs, which is of course not always true. But do remember that estimates are often not that accurate to begin with (described in part 1), so the actual effect of no test estimates might be small; personally I would most often worry more about the potential disconnect between testers and developers.

If you choose to approach testing like this, I would strongly recommend you to closely monitor that testers participate actively in all the typical planning activities (groomings and sprint plans). This is to help build a team where testers and developers keep working closely together. The goal is to "just" remove the part where testers perform time estimates, nothing else, in the planning meetings.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
Testers who simply dislike administration and don't even care to try estimates. A reason why this might be very troublesome is that these testers could easily contribute to the disconnect between testers and developers, in that they might want to use the removal of estimates to skip out on "irrelevant meetings" where key implementation decisions are taken.

4 - Report on/off track

Separate planning without detailed estimates.

In this scenario testers attend backlog groomings and sprint plans to stay up to date with the planning and the product, as well as to get attention on testability needs; but they have a completely separate planning done either together with developers (costs time) or with testers only (may hurt developers' understanding of/respect for/interest in testing). From a project planning perspective testing "just exists" and whenever the team suspects there's a risk of the testing/bug fixing being late, this is reported and dealt with just like any other case where estimates are wrong or development is delayed.

This separate planning would happen as soon as possible after the sprint plan is complete (so that the scope for the sprint is known). Testers discuss whether there are any other activities they should plan beyond the regular testing of stories. They also look into whether stories should be tested together to avoid costly retests. This makes it easier to take testing needs into consideration without messing up the development planning, and testers can make rapid changes to their plan without impacting development stories.

This has some interesting benefits, but there's a great risk of testers forming a subteam within the team, potentially hurting the developers' interest and effort in testing (information flow should be less of a problem since groomings and sprint plans are key to being able to do the test planning).

If you choose this approach, I would strongly recommend leaving a bit of time open for unplanned work, since bugs not related to development stories are more likely to be discovered, and the help testers will need might not have been taken into consideration when planning, so that extra time still has to exist for when testers need help. This might seem stupid, but if you've been in a software project you know some sudden needs from testers (blocking bugs, needed support tool updates etc.) are hard to predict at the sprint start.

Finally, the name of this approach is a bit strange. If we're not estimating testing (at all) we can't possibly say whether we're on track or off track. I thus want to stress again that in reality we always make estimates, but rather than debating whether one of many tasks should be estimated at 3 or 5 hours, we look at the whole picture and make a rough estimate of whether we'll make it in time or not.

If you choose to approach testing like this, be careful. Putting testers with weak communication or reporting skills, or an unwillingness to make their work transparent, in a team like this can lead to isolation and a negative, mythic image of testing where developers neither really understand nor care about testing ("they don't seem to care for the planning and the team, so why should we care about them").

Examples of two dangerous archetypes potentially advocating for this approach for "the wrong reasons":
Waterfall-accustomed testers or developers who still haven't figured out how to work with testing in Scrum and other agile approaches. Also testers who don't want to, or lack the ability to, make their work reasonably transparent.

The actual implementation

In a real-life scenario we move a bit between two or more "squares"; for instance, testers will sooner or later need to do work that only benefits testing, so completely following the developers' stories is a myth (or an example of a dysfunctional team). However, testers not feeling like they are allowed to test things outside the current development scope, e.g. because of heavy pressure to close stories, is something I've come across multiple times, and with that said my goal is to highlight options. The examples in this post are meant to help you see alternatives; they're not models ready to implement.

My hope is that part 1 helped you understand the dynamics of estimates and planning, while this second part gave you some practical ideas to experiment with. Like in part 1, I want to emphasize that these are still thoughts in progress, but I can't wait forever to share them if I want to get feedback...

Finally: Thank you once again Jari Laakso for all your feedback on the first post!

26 May 2016

House of Test

Earlier this month I left House of Test. It was not because I disliked the company but because I was offered an amazing opportunity at another company I obviously love.

Since so many have asked me what it's like to work for House of Test, I hope giving my view, as an ex-employee, will shed some light on what "the House" really is.

Consultancy is still consultancy
The first thing to make clear is that House of Test is still "just" a consulting company. Money doesn't magically materialize, and as an employee you're expected to actively work to find or create new work opportunities and help with marketing the company. Also, far from all assignments are glamorous, and you won't actually meet your House of Test colleagues that often.

Growing pains
House of Test is also growing quite a bit right now, which slowly lessens some of the challenges of being a small consulting company (e.g. less pressure on each consultant to help market and hunt for assignments). But it also introduces new challenges related to e.g. communication and each employee's influence.

As with any growth there are growing pains as everyone tries to adjust to the new environment; nothing strange, but also nothing that magically disappears just because it's House of Test. One example would be finding a good format that better accommodates the growing numbers at the company's gatherings.

"I'm not good enough for House of Test"
The reason people ask what it's like to work for House of Test is that House of Test is pretty famous within the (context-driven) testing community. That in turn creates an interesting challenge (which is also partly why I write this post):

Great people not thinking they are good enough to work for House of Test.

This might sound silly to some, but it's been a serious recruitment issue for House of Test, and just since I joined two years ago I've heard "I don't think I'm good enough" more than a couple of times from great testers I know.

The truth: House of Test is filled with passionate testers, many of whom have made a name for themselves by speaking at conferences (we'll get back to this), doing well in testing challenges or being otherwise active in the testing community. The common denominator is not godlike testing skills, however; it's passion and a willingness to learn. Many of the testers at House of Test started fairly junior but have grown for reasons I will explain later, and even so, many of the current ones are still fairly early in their development.

... meaning you're good enough, just to make that clear.

Education as a focal point
The one thing I think differentiates House of Test is their view on education and learning.

Internally, conferences, courses and other learning activities are probably the single most discussed topic, and call-for-papers reminders are posted for basically every (major) testing conference on the planet. This is reinforced by management in many ways, one being that I don't think I've ever heard them say e.g. "you would miss too much time from your assignment" or "the timing is bad" when someone requests to go to a conference or course... of course there's a maximum budget for education, but that budget is also through the roof compared to any other company I've been in contact with.

Another interesting thing is how top-end education is preferred over the cheaper, local alternatives. Many hotties (myself included) have gone to, for instance, PSL in Albuquerque, CAST in various places in the US, RST no matter where it's available and Let's Test, simply because they are considered to be the best available. This focus on quality rather than cost is something I, now being an ex-hottie, will definitely bring with me.

On the flip side there's an expectation to want all this. It can reach a level of unhelpful pressure to send in abstracts or attend learning activities (including after work hours in some cases).

One final clarification though: A problem I've experienced in other companies is the pressure to immediately explain and demonstrate what you've learned from a course or conference. That doesn't exist at House of Test (my experience). The founders seem to understand/trust that it often takes plenty of time and/or specific contexts to fully grasp the value and that education is a long term investment in general.

To summarize:
My view is that hotties improve their testing skills unusually quickly, simply because all the best tools to do so are introduced and available to them. For some, though, this may create an unhelpful feeling of pressure to improve.

Communication
Being spread out over "four and a half countries" (the half country is actually not Denmark; it's the globetrotters), combined with being rather small, poses a major communication challenge. This works fairly well though...

House of Test has an active Slack chat, quarterly gatherings and at basically any major or local test conference you'll meet colleagues.

The gatherings themselves are worth mentioning as they are one of the greatest perks I've experienced. Imagine having a small conference packed with people like Maria Kedemo, Ilari Henrik Aergerter or Carsten Feilberg every three months. That's a pretty awesome employee benefit in itself! I don't know how the format or dynamics in these gatherings will be impacted by the company's growth but during my time they were simply amazing.

Is House of Test for you?
If you like working with highly skilled professionals who have strong opinions and a willingness to debate basically anything, you will feel right at home. A willingness to learn is definitely required to function well, and an interest in, at some point, standing on a stage and sharing your experiences helps too, but is not necessary.

If you just want a quiet 9-to-5 job, have trouble dealing with consultancy in general or disagree with the context-driven testing principles, you probably have less to gain from joining the House.

... that's the simplified explanation.

Summary
On one hand, House of Test is just an ordinary consulting firm with, for instance, the need to hunt for assignments, the risk of ending up in less-than-optimal workplaces and distance to your closest colleagues.

On the other hand House of Test is like a wet dream for passionate testers; you will work with some of the best testers (and testing teachers) in the world, education will be a focal point and anything awesome happening in testing will be introduced to you the moment it becomes public... or often before.

Thank you House of Test for two awesome years!

11 April 2016

Time estimates in testing, part 1

Why this blog post

Activity planning and time estimation in testing is complex to begin with; combine that with cross-functional teams (having to actively take others' planning into consideration) and I've seen a lot of confusion. In this blog post I hope to arm you with some useful ideas on how to deal with activity planning and time estimates in testing.

... and that turned out to be a much bigger task than I anticipated, so in this first part I will cover the overall concepts and in the second part I'll provide more concrete explanations of each approach (see the graphic below).

Key attributes

When I started listing various ways I've been working with estimates I continually came back to two variables that seemed central:
  1. To what degree do testers and developers share their planning?
  2. How much trust and effort is put into time estimates?
For this reason I made this little graphic:


Do note that these are still thoughts in progress, so I appreciate any feedback on whether this makes sense or not.

Separate planning vs Shared planning

I've been in teams where testers were only, or almost only, expected to test what had just been implemented (as in testing the exact work described in the implementation stories, thus sharing the planning with developers), as well as teams where testers were expected to create their own planning based on the testing needs; both extremes come with their own benefits and drawbacks:

Shared planning:
 + less administration (as in cheaper)
 + closer relationship between testers and developers
 + quick feedback loops in general
 + easier to get focus on testability improvements in the code

Separate planning
 + more focus put on regression, system (e.g. consistency) and integration testing
 + easier to get work only affecting testers prioritized
 + planning can be optimized to fit testing

Two dangerous mind traps I've experienced:

Shared planning
Risk: Testing is seen as something very simplistic where you just test the product's new features with little or no regard for the big picture.

Negative effect: General testing activities, such as overall performance testing, integration testing or regression testing are down-prioritized.

Implications: Bugs related to e.g. integration, unforeseen legacy impacts or product inconsistency.
Coping: Dealing with this takes testers or other team members who are good at advocating for testing in general and can explain why certain test activities make sense even when no obviously impacting code has been changed.


Separate planning
Risk: Testers and developers move apart and create two separated subteams within the team.

Negative effect: Impacts information sharing and understanding of each other's work (which in turn may impact the respect for one another's profession), and developers lose the feeling of responsibility for quality.

Implications: Prolonged feedback loop (code developed, to bug reported, to bug fixed, to fix verified), worse quality of the code handed over from development, and testers lacking important information, making them less effective or leading them to bad decisions.
Coping: A well-functioning team and social management, testers who are good at communicating their work and at speaking with developers on the developers' terms (or vice versa), and developers taking a great interest in testers and what they do.

I'll elaborate a bit more on this in part 2.

Effort put into estimates

Let's talk about the two extremes first.

When we put a lot of effort into estimates we may, for instance, split the work into small tasks to make them easier to estimate and try to give each as exact a time estimate as possible; e.g. "these 6 tasks are estimated at 3, 4, 6, 3, 1 and 9 hours, so the time to test this feature should be around 26 hours".
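
To make the arithmetic concrete, here's a minimal sketch (my own illustration, not from any real project): the task names are hypothetical and the hours are simply the example figures above.

```python
# Minimal sketch: summing detailed per-task estimates into a feature-level estimate.
# Task names are hypothetical; the hours are the example figures from the text.
task_estimates = {
    "review specs and design": 3,
    "prepare test data": 4,
    "test feature, main flows": 6,
    "test feature, edge cases": 3,
    "retest bug fixes": 1,
    "regression around the feature": 9,
}

total_hours = sum(task_estimates.values())
print(f"Estimated testing time: {total_hours} hours")  # 3+4+6+3+1+9 = 26
```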

Little effort is put into time estimates when we, for instance, accept an external time constraint, e.g. "get the testing ready for the release" or "finish your testing within this sprint", and instead inform the one setting the constraint what we roughly think the quality of the testing will be at that point.

Important: "Quality of the testing" in this case refers to how deep/well we expect we will be able to cover the various features. This is not the same as the "quality of the product". Why the latter is not what is (typically) talked about is because that oe is way too complex to estimate and often out of what we can actively influence. For instance it's not our call if a bug we've identified should actually be fixed, that's up to the stakeholder(s).

At this point the former might seem more reasonable but it's not that simple...

Accuracy of estimates

There's no "done" in testing. This is sort of true for development as well, we can always continue to improve the code itself or polish the actual feature, but at least we can observe the product and say "this is good enough" and, important, that state of "done" can be fairly well-defined/described.

Even though some try to pretend otherwise, testing does not work like this at all.

Testing happens inside your head, and development does too (to a large degree); however, the product of testing also stays in your head and relies on your ability to communicate it, while the product of development can easily be observed by others. This complicates estimates, since we cannot keep discussions anchored to a fairly well defined end product; instead we have to leave it to everyone's interpretation of the task and do our best to communicate our own interpretation so that we at least roughly estimate the same thing. For this to work at all, testers first need awareness of what they actually do and the communication skills to explain it to others, and the receivers must have enough understanding to interpret the message correctly. No human on earth (as far as I know) does even one of these things flawlessly (aka: communication is hard).

With no well defined "done", everyone will have to rely on their interpretations of what they think needs to be done and what they think the receiver asks for. That in turn will impact the estimate's accuracy, but this is just part of the problem...

... the other part

On top of estimating a task we cannot clearly define we also have to estimate something that is inherently plagued with great levels of uncertainty:

The time it'll take to test something depends on external factors such as the quality of the code, the complexity of the chosen design (which is often not set when estimates are made), the stability (uptime and robustness) of necessary environments, the help we'll get to correct problems found/blocking us, etc. To continue the reasoning: the quality of the code depends on things such as how hard something is to implement, the skill level of the developer(s) and how stressed they are (if a developer has a tight schedule she can typically stress the design a bit to get it out the door in time, which in turn affects the quality, which in turn affects the effort required to reach the abstract "done" state when testing), and the stress level depends on even more factors. I could go on, but let's just say the time it takes to test something to the "abstract, hard to reasonably accurately define, level we desire" depends on so many external factors that the estimate's uncertainty is huge.
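
To illustrate how quickly these factors compound, here's a small Monte Carlo sketch (my own illustration; the factor names, ranges and distributions are invented, not measured from any real project):

```python
# Illustrative sketch: several uncertain external factors compounding into a wide
# spread around a nominal testing estimate. All numbers are made up for illustration.
import random
import statistics

def simulate_test_time(nominal_hours=26):
    # Each factor scales the nominal estimate; triangular(low, high, most_likely).
    code_quality  = random.triangular(0.8, 2.5, 1.0)  # buggier code => more retesting
    design_churn  = random.triangular(0.9, 1.8, 1.0)  # design decided late => rework
    env_stability = random.triangular(1.0, 1.6, 1.1)  # downtime, flaky environments
    support_delay = random.triangular(1.0, 1.5, 1.1)  # waiting for help with blockers
    return nominal_hours * code_quality * design_churn * env_stability * support_delay

samples = sorted(simulate_test_time() for _ in range(10_000))
print(f"median ~ {statistics.median(samples):.0f} h, "
      f"5th-95th percentile ~ {samples[500]:.0f}-{samples[9500]:.0f} h")
```

Even with fairly modest ranges per factor, the combined spread easily covers a factor of two or three, which is the point: a single nominal number hides most of the uncertainty.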

Bullshit!

I once told a project manager this and he replied: "Bullshit! My testers ace their estimates almost every time!" (in a product with pretty crappy quality and expensive testing, it should be added). And here comes a conflicting truth:

We can't predict when we'll be "done" but we can be "done" by Wednesday if you want.

In development we need to produce some kind of agreed-upon product based on, at least, our explicit requirements. Remember how testing does not have a defined "done"; well, taken to its extreme we can (almost) say we're done at any point in time, since it's unclear what done is anyhow: "These are the findings we've made so far (and are able to communicate). You told us to be "done" by today, so we are "done", but our professional judgement is that this needs more testing, as the following areas are still considered unstable or only superficially tested...". In a sense we always do this, estimates or not, but in the latter case we don't even spend time trying to guess; instead we do our best to put ourselves in a good enough state when the time is up.

... and before someone talks about bad attitude or lack of professionalism: this is of course not how it's done in practice. Some level of estimation is always performed, but rather than saying "We think this will take 10 hours, this 5 hours, this 2 hours and this 8 hours.", we might say, for instance, "We think we're on track for the release but the quality at this point is unusually low, so we're a bit worried this might change".

Planning based on time or effort

This topic is too big to cover in this post, but here's a quick rundown.

When planning our testing we can focus on either effort or time. Effort means "we want to reach a state where we/someone else feels confident about this particular part before we leave it", while time means "we want to spend the fixed amount of time we have in a way such that we've looked at everything at least briefly". In the former we're doomed to not be "done" if something runs late, leaving project management with little choice but to delay e.g. a release if they don't want to take on considerable risk (leaving parts completely untested). But this also allows us to argue better for the need for more testing by saying "we haven't even looked at these four features", and we'll spend less time revisiting areas and having to "relearn" them, since each area is left closer to a "done" state.

In the latter we will hopefully have found at least the most obvious, serious problems in all areas of the product and can thus make a general assessment: "our testing is fairly shallow in most areas and we would like to spend more time with the product, but from what we've seen all of the new features seem to be okay at a basic level". The drawbacks are that it's harder to argue that we actually need more time to test if stakeholders aren't well informed and comfortable with the concept of "uncertainty", as well as a greater risk of needing to revisit areas for further testing.
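
To make the difference tangible, here's a rough sketch (my own illustration, not a recipe from this post; the area names and numbers are made up) of how the same fixed testing budget could be spent under the two mindsets:

```python
# Rough sketch: spending a fixed testing budget with an effort focus vs a time focus.
def allocate_by_effort(areas, budget):
    """Effort focus: go deep, area by area (in priority order), until time runs out."""
    plan = {}
    for name, desired_hours in areas:   # areas assumed sorted by priority
        spend = min(desired_hours, budget)
        plan[name] = spend
        budget -= spend
        if budget <= 0:
            break                       # remaining areas get no testing at all
    return plan

def allocate_by_time(areas, budget):
    """Time focus: spread the budget so every area gets at least a shallow pass."""
    per_area = budget / len(areas)
    return {name: round(per_area, 1) for name, _ in areas}

areas = [("checkout", 16), ("search", 10), ("admin", 8), ("reports", 6)]
print(allocate_by_effort(areas, budget=24))  # deep on checkout and search, nothing on admin/reports
print(allocate_by_time(areas, budget=24))    # 6 hours of shallow coverage on every area
```

Neither is "right"; the sketch just shows why the effort focus leaves whole areas untouched when time runs out, while the time focus trades depth for breadth.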

How does this relate to estimates? Well, in my experience the effort approach prevails when we have estimates and plan our work around the actual stories, since "we shouldn't" have too many open stories at once and when they are closed, they are closed, so we have to "complete them". In the same way, the time approach is more common when we skip estimates (my experience), at least in combination with separate test planning. If we have a set deadline (e.g. sprint end or release date) we can more naturally plan in a way such that we've at least looked at everything once.

I say this because most stakeholders I've been in contact with seem to prefer the time approach but still argue for the importance of estimates, and these concepts seem a bit contradictory, at least in the context of testing. One last note: the same contradiction applies to development: if we get a bit of everything ready we will at least have some kind of product in the end, but as described earlier, since more has to be in place for a piece of software to make sense, combined with estimates being easier to get accurate enough, the contradiction is not as big of a deal (my biased opinion, feel free to challenge this).

Wrap up of estimate or not

To wrap up:

Since time estimates of testing will always be plagued with great levels of uncertainty, no matter the estimation effort, the question is whether detailed estimates really provide enough additional value to justify their cost (both the time spent and the risk of misleading stakeholders into believing we know when we'll be "done"). The "ideal" level of estimation is also highly context-dependent, the context changes over time, and we can't really objectively measure whether our approach is good or bad; so we'll have to experiment and rely on our ability to observe and analyze.

... and finally, don't confuse time estimates with planning; planning happens in both cases, and the effort spent on planning has little or no correlation with the effort spent estimating (my experience).

Stay tuned for part 2...