What?
This "exercise" is rather simple:
Describe an ideal testing project
So first off, there's a lot of ambiguity in that statement. Ideal? To whom? In what context? Testing project? Meaning what? What are the boundaries? Valid in what context?
Exactly what this means is not so important, and I'll explain why in the next chapter. But to give you a good enough idea to understand this post: "Using the experience you have, list the actions/rituals/behaviors etc. that you think would make for the best possible testing". That still leaves you with a ton of ambiguity, but hopefully you'll agree that's not the bigger issue here once I explain...
Why?
I often find that when I try to look at my current context and how to improve it, I limit my thinking. By trying to list "ideal attributes" I can look at each of them and ask: "Do we do this today? To what extent? Is it ideal in our context, and how would the ideal scenario look in my specific context?" Those questions, for me, tend to break that mental limitation and, once again for me, they are especially useful for spotting those individually insignificant changes that together really make for a huge improvement.
How?
Feel free to skip to the next chapter if you don't care about how this list was created.
I've tried this "exercise" three times. The first time was in a workshop on testability at Securitas Direct/Verisure. One question was "how can we improve our testability?", and my brilliant colleague Niklas Grönberg suggested that, instead of thinking about what we had, we should think about how we wished things were. It generated several great ideas, and everyone, regardless of current context, could add to the list.
Soon after, I suggested trying the same thing on a Transpection Tuesday with Helena Jeret Mäe, but focusing on the project in general. We spent roughly three hours adding to the list and, while doing so, sharing ideas on how each item could be achieved in our respective contexts.
Finally, I tried the idea as a group exercise at EAST. In this case 8-10 people shouted out ideas that were discussed, described further and then added to lists on a whiteboard. When we were done, everyone got a few minutes to look at the lists (we filled two whiteboards) and in an open forum we shared our key takeaways. I'll add a few notes from that at the end.
To support and guide the thinking a bit, I added a few categories both when running it as an exercise at EAST and with Helena. This also (I think) helped explain what I was looking for. You will see a similar setup in this post.
How to use the list?
These are not attributes of an ideal project; they are attributes that seemed ideal to someone in some context relevant to that person. Thus you can't just take an item from the list and ask "well, how do we achieve this?". Instead I suggest you look at an idea and ask yourself, as described before:
- Is this relevant to my context?
- Why/why not?
- Do we do this today?
- To what extent?
- How would this ideally work in my unique context?
Also notice that some of the items conflict with each other. Once again, something might be valid in one context, or in one aspect of a context, while something else holds in a different context.
General
- Clear boundaries/limitations throughout the whole project
Time, resources etc., and also knowledge about why it's limited and why the limit is set to what it is
- Clear and well communicated goals and milestones
Iteratively challenged throughout the whole project
- Knowledge about users and buyers
In bigger companies the buyer and user can be two very different groups
- Access (throughout the whole project) to users and buyers
- Knowledge about stakeholders
- Knowledge about market (e.g. competitors and market windows)
- Manageable amount of changes and quickly informed about these
- No hidden agendas
- A product you...
... like to work with
... find meaningful
... believe in
... want
- Balanced work flow
- Time to experiment
Including product, working methods, techniques and tools
- Being included and listened to
E.g. invited to meetings
- See the product being used
And see that it matters to the customer
- Low level of (boring) repetition
- Some "hype factor"
- Test ideas generated from day one
- The agreed level of quality and customer focus is not lowered throughout the project
- Everyone strives to achieve better results when agreed upon
Includes everything from quality/scope to efficiency/cost/time
- One focus at a time (no multitasking)
- Testing being a respected and prioritized activity that everyone cares about
- Involved when there's a problem to solve, not a solution to implement
- Well communicated why (why it's a problem, why it's important)
This includes being presented the real why, not the "official" why.
- Kick off to meet, bond and discuss
- Being listened to regarding testing issues and general risks we see
- Everyone genuinely agrees upon what to build and why
Also includes for instance salespeople and maintenance/support
- Motivated
- Mixed group of expertise/experience
- Skilled to do the job
- A willingness to get better, both individually and as a team
- Problems and concerns are shared
- Having fun together!
- Everyone is listened to
- Respect for each other
- Working well together
- Domain knowledge
- Clear, effective communication
- Quality characteristics are discussed and clearly prioritized.
This is an iterative process
Stakeholders, customers, testers, programmers should be involved
Most important is to state what is not that prioritized
- All solutions should have a known "why"
- Testable paper mocks or similar
- Customer feedback on these mocks, communicated to the testers
- Testability and relevant quality characteristics issues should be discussed as part of each solution
- Always 3 options to consider (there was much debate over whether it should be "at least", "roughly" or "exactly" three)
- Key areas and lower prioritized areas are discussed and agreed upon
- Risks collected from testers, customers, stakeholders etc. discussed and prioritized
- Discussions on how to test with programmers involved
- Test process revisited and tweaked to fit current context
E.g. what documentation, for whom and why; when/how to involve programmers etc.
- Testers, programmers, customers and stakeholders read, understand and commit to the strategy
- Strategy is continuously revisited throughout the whole project
- Feedback on errors (email, popup), rather than scanning a log
(could be achieved with a tool scanning a log of course)
- Being able to start a flow at any relevant position
(e.g. in the middle of a setup wizard)
- Being able to extract any relevant information from the product
- Early first delivery and frequent deliveries throughout the whole project
- Attention on bugs, not just new functionality
- Programmers being aware of and taking relevant quality characteristics into account when developing
- High quality code delivered
Helped by code reviews, static code analysis, TDD/BDD, quick manual sanity tests, (automated) smoke tests, skilled programmers taking pride in delivering great quality code etc.
- Testers pairing up with programmers, customers and stakeholders when testing
- No distance between programmers and testers (and stakeholders)
- No delays from code written to code up and running in the test environment
- All target systems/platforms/versions used by customers available to testers
- Regularly time to test with and speak to customers
- The why should be well explained and motivated for every required administrative action
E.g. documentation and reporting
- Iterative reflection on processes and tools that everyone is engaged in, genuinely wants to engage in, and is given time to act upon
- Progress is celebrated!
- Everyone is having fun!
- Testers are not the only ones testing
Stakeholders, programmers and customers test the product as well, both while pairing with testers and alone.
- Mistakes being embraced
Admitting mistakes should never lead to shame; everyone naturally wants to share mistakes made rather than keep them secret.
- Lots of questions asked, and people taking time to answer questions
- Amazing tools that support the desired process, not dictating it
- Short optimized feedback loops
Includes communication
- Testers focusing on testing, not checking
- No stupid measurements and KPIs
- Stakeholders accept relevant information, no extra translation necessary
Much of this is probably better explained as both parties striving to understand each other and learning to speak each other's language.
- Team! Not individuals
- Continuously challenging goals and milestones as more is learned about the product
- Stakeholders clearly explaining why they chose to ignore certain risks communicated by testers
- Customers blown away
- World peace and end of world hunger
- Time to reflect
... and the result of reflection being used as input for the next project
- Clear finish
- Getting the time to feel proud
- Being shown appreciation from managers, customers and stakeholders
- Lessons learned shared between various projects (if projects are run in parallel with different testers)
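A few of the testability items above, such as getting active feedback on errors (email, popup) rather than someone scanning a log, can be sketched in code. Below is a minimal, hypothetical Python sketch (the `NotifyOnErrorHandler` name and the callback are my own illustration, not from any specific tool) that pushes error-level log records to a callback, which in a real project could send an email or show a popup:

```python
import logging


class NotifyOnErrorHandler(logging.Handler):
    """Push error records to a callback (e.g. email or popup sender)
    instead of relying on someone scanning the log file afterwards."""

    def __init__(self, notify):
        # Only records at ERROR level or above reach emit().
        super().__init__(level=logging.ERROR)
        self.notify = notify  # any callable taking the formatted message

    def emit(self, record):
        self.notify(self.format(record))


# Usage sketch: collect notifications in a list; a real project would
# wire up an email sender or desktop notification here instead.
notifications = []
logger = logging.getLogger("product")
logger.addHandler(NotifyOnErrorHandler(notifications.append))

logger.warning("disk almost full")    # below ERROR: no notification
logger.error("payment service down")  # pushed to the tester immediately
print(notifications)
```

The point is not the mechanism but the direction of the feedback: the product tells the tester something went wrong, rather than the tester having to go looking for it.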
Some reflections from the wrap up during EAST
As I said before, we ended the exercise at EAST by sharing lessons and ideas to bring back to our respective contexts. One observation was that most of them related to communication, such as ideas to improve communication or information someone should ask for. Another was that very different kinds of lessons were described, yet everyone seemed to identify something relevant to their context that they might not have thought about before, or now thought about in a different way, which to me indicates that the list can be valuable to most testers.
Feel free to share your own additions as comments below, I will gladly add them to the list!
And to all of you who've added to this list already: thank you! This is not my work, it's our work! I actually even tried to shut up during EAST! (which went... okay I guess, at least compared to how it usually ends up)
Thank you for reading!