07 June 2016

Time estimates in testing, part 2

Intro

Before I begin: thanks to feedback, especially from Jari Laakso, I've found several flaws in my first post. That is exactly what I hoped for when I published it, as the flaws help me understand estimation better. I will highlight some of them in this post, but I also want to point out that part 1 has been updated several times since I first published it, to clarify or correct content.

How to interpret the graphic below

Do notice that these are not polarities where you switch from one to the other; they are continua. I will, however, describe one rather specific scenario for each box for the sake of simplicity, giving you a chance to see the differences between "each corner". My hope is that these examples can work as inspiration and help you think differently.

One final remark: I will frame all these examples around cross-functional scrum teams consisting of only developers and testers. Of course this is just one of many ways a team can be set up, but I'll use it, once again, for the sake of simplicity.

1 - Estimate testing stories

Separate planning with detailed estimates.

In this scenario testers create their own stories, estimate them and get them prioritized as part of the regular scrum flow (the cross-functional team will now have a mix of stories relevant to developers, testers and others). This allows testers to create their own plan, optimized for testing needs, and I find it easier to motivate developers to help out with testing when needed, as the only content left in the sprint might be test work (and this is very much visible). But...

Since planning testing even a week ahead is rather hard in most scenarios (open bugs blocking stories, open bugs delaying stories, late code deliveries, quality levels differing from what was expected, etc.), I find this approach somewhat chaotic (even though you might expect the complete opposite).

A second challenge with this approach is the risk of creating a gap, within the team, between testers and developers. This, I think, is because if you actually try to follow the plan (the test stories), you lose some of the flexibility to drop one thing in favor of another in order to minimize the feedback loop between a feature being implemented and that feature being tested.

Finally, my experience is that it takes a certain skill as a tester, and/or a certain group of developers, to actually get developers to engage in the test part of the sprint planning, further increasing the risk of a gap between testers and developers.

For me, session-based test management is an approach to exploratory testing worth looking into if you want separate test planning, with estimates, without making it too expensive: since sessions are time-boxed, you can estimate in whole sessions (e.g. "five 90-minute sessions to cover these charters") rather than in hour-level task estimates.

If you choose this approach, I would strongly recommend sharing sprint plannings, backlog groomings etc. so that everyone sits down together. It's easy to "optimize" so that testers have their own groomings, or so that developers don't engage in those, which is like asking for a disconnect to happen.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The project manager/product owner/manager/test lead who pretends/thinks testing can be controlled.

2 - Estimate dev stories

Shared planning with detailed estimates.

In this scenario testers and developers share stories (the Definition of Done includes both implementation and testing). A story is estimated with regard to both testing effort and development effort, combined into a single estimate (e.g. three days of implementation plus two days of testing make a five-day story). A story is considered open until "someone" decides that the implementation, testing and bug fixing are done. That "someone" is typically the testers and developers concerned, together, but if they disagree, or the estimate turns out to be way off, a product owner or similar might have to step in.

My experience is that this fosters a very close bond between testers and developers, which typically has a positive effect on e.g. tester status, information flow and code-related testability. Another benefit is that developers have better motivation (and help) to test themselves, as their work is not over until testing is "finished".

However, since the testers are normally in the minority, it's easy for them to become the ones always losing votes and debates, which may impact how they test, when they stop testing etc., and often not in a good way. Also, since the testing is so closely tied to the development, things like integration testing, general quality assessments (e.g. security, performance, consistency, revisiting test legacy to support a changed context etc.) and regression testing may be missed (regression can be helped by developers taking a more active role in creating automated smoke tests, as sketched below, but that's just a part of it).
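To make the smoke test idea a bit more concrete, here is a minimal sketch of what such a developer-maintained check could look like, written in Python with pytest and requests. The base URL and endpoints are hypothetical placeholders I've made up for illustration; the point is that the checks are broad, shallow and fast, so they can run on every build:

# smoke_test.py - a minimal smoke test sketch. The URL and endpoints
# below are hypothetical examples, not taken from any real project.
# Broad, shallow checks that a build is sane enough for deeper testing;
# they complement, but never replace, exploratory and regression testing.
import requests

BASE_URL = "http://localhost:8080"  # assumed test environment

def test_service_is_up():
    # The most basic check: the service responds at all.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_login_page_loads():
    # A key user-facing page renders without a server error.
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
    assert "login" in response.text.lower()

def test_core_api_returns_data():
    # One representative API call returns well-formed data.
    response = requests.get(f"{BASE_URL}/api/items", timeout=5)
    assert response.status_code == 200
    assert isinstance(response.json(), list)

Developers would run this (pytest smoke_test.py) on every build, catching the grossest regressions early, while the deeper regression and quality work discussed above still needs to be planned for separately.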

One last problem: if you actually create new tester-developer hybrids and the pressure builds up (important release, budget cuts etc.), teams might trick themselves by moving more members into development and tampering with their normal quality standards. In a few cases that might be the right choice, but in my experience it often isn't, frequently because the "temporary need" becomes permanent.

If you choose this approach, I would strongly recommend introducing activities such as system tests or release tests. In these activities, also involve developers as part testers, part bug fixers, so that a gap between developing and testing isn't created (testers not being able to provide feedback quickly enough in the new sprint). A different method is to have teams dealing solely with integration testing (more waterfall-like, or a support team), but for this to work there have to be strong information channels established between the cross-functional teams and the integration test teams.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
The R&D manager/developer who doesn't really understand there's a need to test anything other than the code that was just written, or the developer who rushes the testing part since "that's not that interesting".

3 - Follow development planning

Shared planning without detailed estimates.

Very similar to the above, but without time estimates for the testing. The difference is that you save a bit of money (less administration), but testers are a bit more detached from the development process (though not necessarily from the developers). Also, if estimates can actually be somewhat accurate, you lose them as a planning benefit and potential delay indicator; in this scenario sprint plans assume testers will meet the time limits as long as developers meet theirs, which is of course not always true. But do remember that estimates are often not that accurate to begin with (as described in part 1), so the actual effect of having no test estimates might be small; personally I would most often worry more about the potential disconnect between testers and developers.

If you choose to approach testing like this, I would strongly recommend closely monitoring that testers participate actively in all the typical planning activities (groomings and sprint plans). This helps build a team where testers and developers keep working closely together. The goal is to "just" remove the part of the planning meetings where testers perform time estimates, nothing else.

Example of a dangerous archetype potentially advocating for this approach for "the wrong reasons":
Testers who simply dislike administration and don't even care to try estimating. This can be very troublesome because these testers could easily contribute to the disconnect between testers and developers: they might want to use the removal of estimates to skip out on "irrelevant meetings" where key implementation decisions are made.

4 - Report on/off track

Separate planning without detailed estimates.

In this scenario testers attend backlog groomings and sprint plans to stay up to date with the planning and the product, as well as to draw attention to testability needs; but they have a completely separate planning, done either together with developers (costs time) or with testers only (may hurt developers' understanding of, respect for, and interest in testing). From a project planning perspective testing "just exists", and whenever the team suspects there's a risk of the testing/bug fixing being late, this is reported and dealt with just like any other case where estimates are wrong or development is delayed.

This separate planning would happen as soon as possible after the sprint plan is complete (so that the scope of the sprint is known). Testers discuss whether there are any activities they should plan beyond the regular testing of stories. They also look into whether stories should be tested together to avoid costly retests. This makes it easier to take testing needs into consideration without messing up the development planning, and testers can make rapid changes to their plan without impacting development stories.

This has some interesting benefits, but there's a great risk of testers forming a subteam within the team, potentially hurting the developers' interest and effort in testing (information flow should be less of a problem, since groomings and sprint plans are key to being able to do the test planning).

If you choose this approach, I would strongly recommend leaving a bit of time open for unplanned work: bugs unrelated to the development stories are more likely to be discovered, and the help testers need might not be taken into consideration when planning, so that extra time has to exist for when testers need help. This might seem silly, but if you've been in a software project you know some sudden needs from testers (blocking bugs, needed support tool updates etc.) are hard to predict at the sprint start.

Finally, the name of this approach is a bit strange. If we're not estimating testing (at all), we can't possibly say whether we're on track or off track. I thus want to stress again that in reality we always make estimates, but rather than debating whether one of many tasks should be estimated at 3 or 5 hours, we look at the whole picture and make a rough estimate of whether or not we'll make it in time.

If you choose to approach testing like this, be careful. Putting testers with poor communication skills, poor reporting skills or an unwillingness to make their work transparent into a team like this can lead to isolation and a negative, mythic image of testing, where developers neither really understand nor care about testing ("they don't seem to care about the planning and the team, so why should we care about them?").

Examples of two dangerous archetypes potentially advocating for this approach for "the wrong reasons":
Waterfall-accustomed testers or developers who still haven't figured out how to work with testing in scrum and other agile approaches. Also testers who don't want to, or lack the ability to, make their work reasonably transparent.

The actual implementation

In a real-life scenario we move a bit between two or more "squares"; for instance, testers will sooner or later need to do work that only benefits testing, so completely following the developers' stories is a myth (or a sign of a dysfunctional team). However, testers not feeling allowed to test things outside the current development scope, e.g. because of heavy pressure to close stories, is something I've come across multiple times, and with that said, my goal is to highlight options. The examples in this post are meant to help you see alternatives; they're not models ready to implement.

My hope is that part 1 helped you understand the dynamics of estimates and planning, while this second part gave you some practical ideas to experiment with. As in part 1, I want to emphasize that these are still thoughts in progress, but I can't wait forever to share them if I want to get feedback...

Finally: thank you once again, Jari Laakso, for all your feedback on the first post!