15 January 2014

So when will you be done?

Project manager: How's it going?
Tester: Right now we see a lot of problems in the export functionality, but import seems a lot more stable than last week.
Project manager: Okay, so when will you be done?

The next time you get this question, answer:

I can be done tomorrow, or next week, or in September if you want. When I'm done (testing) is irrelevant; what's relevant is how confident you want to be in the product when it's released.

When are we "done"?
We don't really decide when we're done. That's up to the people delivering something for us to test (developers, designers, people responsible for builds etc.) and the people deciding when a product is ready for the market (let's call that person(s) a project manager in this post). How so?

First of all, we don't create quality; developers etc. do. We only make a quality assessment. That means we don't control when the "desired level of quality" is reached; that, once again, is up to developers etc.

Second of all, we don't decide the "desired level of quality" itself. That's up to people with better knowledge of the market, the internal business situation etc., the person(s) I call the project manager in this post.

And finally, we don't know the actual level of quality of a product; we only try to make as good an assessment as possible. This poses another problem: we have to communicate the level of quality we think the product is at in a way that lets the person actually deciding whether the quality is good enough (once again, the project manager) make as good a decision as possible. This is typically where risk comes in: "If we release now, here are the risks I see; it's your call whether we can live with those or not." And that last part, whether we can live with those or not, could be translated to "whether I'm done or not".

Problem, QA
Now there are a couple of problems.

First, why don't testers then decide whether we should release or not? After all, we have the most detailed view of the system. Well, we call that quality assurance (QA), and there are several reasons why I highly dislike that concept. One is that we don't know the business well enough! There might be a market window we can't miss, or that will be very costly to miss; we might need more money now to make crucial investments, and so on. Long story short: quality, whether we like it or not, is far from the only thing deciding when a product is ready to release. And also, I strongly believe QA could, or rather would, hamper our ability to stay critical and "test minded", as we would be biased to fulfill other goals (like the wish/pressure to release).

Problem, estimates
Second, I know someone is saying: "But as a project manager I can't just have a huge question mark in my time plan, I need an estimate of when we think we can release!". Sure. In my team, testing is currently not estimated (we do have some days between code freeze and release, but they are fixed, and the same for every sprint); instead I just report if I see problems along the way that I think jeopardize our ability to release as planned (risk). That's one way to do it.

At my previous work we did specific estimates for testing. Nothing wrong with that, but it doesn't change anything; once again, that's so someone can make a plan. In the middle of everything, though, that time is suddenly irrelevant, and we can only hope that our assessment of how long it will take developers etc. to reach a perceived level of quality where the project manager feels confident to release is anywhere near reality.

... Oh, and test cases, or rather pass/fail or pass/total ratios, don't change any of this; they're only a way to pretend you have control over something you don't.

Never ask a tester when (s)he will be done, ask about current risks and what consequences (s)he thinks come with those risks.

As a tester, don't provide a "time left"; it's not up to you when you're done. But do help the person asking figure out for themselves when the product/feature might be ready, by presenting known problems, how deeply areas are covered, and other relevant information.

This comes from a Transpection Tuesday with Helena Jeret-Mäe.

I received a comment and want to add a clarification: at my current work, this is not a problem and never has been (at least I haven't come in contact with it)... but it was earlier in my career.