Showing posts with label Transpection Tuesday. Show all posts

26 January 2017

How do you help a team become awesome?

Background

I raised a question, first during a Transpection Tuesday, then in the TestSverige Slack chat and finally with all sorts of people I've met; mostly software testers. The question was:

How do you help a team become awesome?

Awesome in this case refers to the kind of team where everyone seems comfortable; they laugh, they communicate, they do silly things but don't seem embarrassed and at the same time they seem productive, motivated and ever evolving with low employee turnover rate.

This is my summary of those discussions.

Before we start: This is not specifically for managers, team leads, scrum masters etc.; it's everyone's responsibility and opportunity; anyone can improve a team's "mood".

Personal attributes/attitudes

Personal attributes and attitudes came up a lot during the discussions and they seemed to be the foundation on which you can add helpful activities. All of these work as self reinforcing systems so if you start to set a positive direction others will (eventually) follow. The same applies if you set a negative direction though, as this will start to create a deeper and deeper hole to get out of.

So why don't we just act "good"? Because we're imperfect, also known as being human: We're scared, we sense injustice, we want revenge, we get stressed, angry or sad, we're sometimes egocentric and so forth.

For these reasons there are a few things you need to consider for each of the attributes listed below:
  1. It'll take courage to set a new direction and you might get hurt... sometimes a lot
  2. You'll need to consciously monitor yourself to avoid stress etc. getting the better of you
  3. You'll need to nurture these attributes in the team primarily by making positive examples visible

So, without further ado; dare to...
  • Be vulnerable
    "My uncle used to say that we like people for their qualities but we love them for their defects."
    /John Myers, Hellboy

    Share your struggles, admit you're scared, open up, allow people to come close and dare to be imperfect (aka. human) in general.
       
  • Be transparent
    Share what you know and do that's relevant to others, even though this might make them question your decisions, force you to (temporarily) stop something or even use the information to attack you personally.
       
  • Be accountable
    When you've messed up, take responsibility, apologize if appropriate and accept the consequences. Sometimes it's even beneficial to take responsibility for things you weren't responsible for just to get out of a negative loop.
       
  • Appreciate
    Make it a habit to register when someone does something good and tell them. Make sure you're sincere; empty flattery doesn't help. Another nice way to appreciate people is to be a proxy for appreciation, e.g. "Just so you know, Bob gave a two minute speech this morning about how great he thought your design was".
       
  • Trust people
    People want to do good, so do trust them. Sometimes they'll let you down, sometimes you might even get stabbed in the back, but keep trusting them. That said, bad behavior should of course be dealt with (e.g. see "be sincere" below), but as soon as you stop trusting people you're heading in a bad direction. After all, if you don't trust people they'll never be able to show you they can be trusted, which starts a rather destructive loop. Also, people grow with responsibility. Finally: trusting people does not mean not helping them, or not helping them realize they need help.
       
  • Be sincere
    Integrity is sexy; if you think someone, including yourself, is being singled out, is getting unfair criticism or for other reasons isn't treated in a fair way: Speak up! Especially when people aren't given a chance to defend themselves.

    However, stick to your observations, not your interpretations. You don't know for sure that "this person is actively trying to hurt you", but you do know, for instance, that "the person was told to give you the latest version but you never got it". Sincere != Judgmental; quite the opposite, actually.
       
  • Care about people
    Caring about people costs very little and the main risk you face is simply coming across as creepy. Do notice that caring does not mean micromanaging; instead it's about genuinely trying to create a good situation for others. Carita Jansson Tsiantes gave a lovely example in the TestSverige Slack chat that went something like:

    When you boil water to make a cup of tea, don't just think about yourself; prepare water for your colleagues who might want tea as well.
       
  • Help and support
    This can be briefly summarized as:
    "If someone has a problem, we have a problem".

    When asked for help, do help, and if people express frustration or confusion, offer to help. Few people ask questions if they don't need to, so rather than telling them "you should know that", try to help them learn how they can find the answer themselves; e.g. by introducing them to the right people, helping them get access to some information system, helping them get invited to a certain meeting/mailing list etc. An attitude to avoid is "it's not my job to help...". Sure, this is sometimes true and you have your own work to do too, but then again: help the person help herself rather than ignore the request.
       
  • Respect everyone
    No job, role or person is more important than any other. Of course some tasks might be more important to finish, but then focus on getting them solved as a team. A key aspect of this is understanding your colleagues' tasks, challenges, frustrations and talents. Andreas Cederholm brought up a great example of how to nurture this attitude:

    We run team test sessions where the whole team tests together. Add some cookies and laughs and it'll work even better.
       
  • Try
    If you want to challenge the status quo you'll have to try new things. Trying comes with an increased risk of failing and potentially making a fool of yourself, but that's necessary and typically a great way to learn. Sometimes trying something you don't really believe in can still be beneficial, simply to acknowledge that ideas are appreciated and that you trust people's judgment even when you might not agree with them.
       
  • Auto forgive
    A psychiatrist once told me a very smart thing about eating disorders and how to react when people have not been able to fight the disease (generally applicable, of course):

    Guess who'll feel worst when this happens? You? No, the person who just "failed"! You don't need to remind them they "let you down"; they'll know and they'll feel terrible about it.

    People mess up, people make bad decisions, people have bad days. You rarely need to remind them; it's typically much more constructive to say "don't worry, shit happens, let's fix this" and move on. This is also important for nurturing previously mentioned attitudes such as "try" and "be transparent"; if people are scared of potential consequences (including reactions), the only thing they'll try is to cover stuff up.
       
  • Smile (and laugh)
    Being met with a calm, warm smile is great medicine when you feel down or are nervous about some bad news you have to deliver. Smiling also helps me, at least, stay calm, making it a useful tool to manage feelings of anger or frustration.
         
I get the feeling all the attributes/attitudes above point back to some basic principle like "get unhelpful frustration off the table fast; both yours and others'" or "always trust in people's willingness to do good"... but I can't really put it into words. Feel free to help me.

Activities

If the personal attributes/attitudes are the foundation the various activities below represent important tools to speed up the process. Notice though that the activities by themselves are not silver bullets and overusing them or using them at the wrong time can actually have a negative impact. Focus on the list above first!
  • Social activities outside of work
    E.g. cooking together, sports or board games. Pick activities where everyone is active, which is not necessarily the case at e.g. your typical after-work drinks.
        
  • Quirky things
    E.g. quote book, silly competitions, fun/silly "rules" or internal titles.
        
  • Retrospectives taken seriously
    Not specifically the meeting, can be e.g. a continuous, everyday team reflection activity. All problems brought up are dealt with. Problems are taken seriously even by members not personally impacted.
           
  • One on ones
    Allows people to raise concerns in a safe environment (assuming the person meeting members one on one has earned the members' respect).
        
  • Do each other's work
    An example of this is Team Test Sessions where the team tests together (suggested by Andreas Cederholm, TestSverige), or move in the other direction and try mob programming with testers included. Everyone (product owner, developers, testers, designers...) attending e.g. courses in security or usability together could also help, as these kinds of activities create some common ground. Yet another suggestion is team members meeting customers, accompanying sales/support people etc.
         
  • Discussions about values
    E.g. take the "personal attributes/attitudes" list above and talk about each one: Is this something you want to strive for in the team? Can you change something to help nurture this behavior? Make it a team goal to improve and nurture the "mood" in the team in general.
        
  • Personal values
    Most of the personal attributes and attitudes require consistency. An activity where you sit down and state your personal "manifesto", goals or values can be important. For instance, it might be hard to treat yourself in a fair way without some guidelines; either turning you into an asshole demanding more from others than from yourself, or a "victim" never treating yourself well enough.
        
  • Clarify your intentions to your boss
    If you want to invest quite a bit of time in this, go to your boss, explain your intention and ask for her/his support. Making your boss, or if necessary, your boss' boss, an ally can provide access to several powerful tools (e.g. see "Supporting context" further down).
          

Symptoms

The list below represents "symptoms" that your team (or even company) is moving in the right direction:
  • People laugh.
       
  • You're met with a smile, even in bad times.
       
  • You know what your colleagues like, both at work and outside. E.g. their hobbies, interests, spare time activities, important life milestones, work and private goals, "hidden talents" and previous experience.
       
  • People talk about hobbies, spare time activities and the other things listed above.
        
  • Conflicts are taken seriously and navigated swiftly.
      
  • People blame themselves, if anyone, not others.
        
  • High level of motivation.
        
  • You rarely feel stupid (in a bad way).
        
  • Stuff that "should be done", gets done.
        
  • Ideas are taken seriously, people try new things and experiments are run frequently.
        
  • People admit mistakes and challenges early as they're not afraid of the consequences.
         
  • People meet outside work because they want to, not because they feel obligated to.
        
  • Few taboos.
         
  • Very limited "bullshit" or backtalk in the team.
        
  • You know what's happening in the team and rarely get "unpleasant surprises".
        

Supporting context

These things might be hard for you to actively influence but be aware as they do seem to have an important impact:
  • Reasonable pace
    People need time for supporting, long-term activities, and under immense pressure or an unreasonably high pace these are quickly forgotten or deprioritized. These activities are what make you faster tomorrow than you are today, meaning they're long-term, multiplicative investments.
         
  • Stable organization
    Adding or losing team members can in the worst case force the team to start over in their attempts to be awesome. If you're the manager: try not to change teams that work great together, even though it might be tempting!
        
  • Ethics
    A product you believe in and feel ethically good working with helps. The same goes for the company's actions: if the company acts in an ethical way, that seems to help people "invest" in the company in a helpful way.
         
  • Good social (especially empathic) skills
    Having team members who like the social aspect and are good at nurturing positive social behavior (not to be mixed up with people "talking a lot") helps.
         
  • Previous friends
    Not always true, as previous friends may create a "sub team" within the team, but it seems to sometimes help, as the friends most likely have a healthy relationship with each other which can spread.
         
  • Management accepting problems
    Having a manager/management asking for "solutions, not problems" can suppress people's willingness to bring attention to important problems, or make the company accept suboptimal solutions. The intention to focus on what's constructive is not bad, but the message delivered can be. It's of course okay to ask people if they have any ideas themselves on how to solve the problems they bring up, but don't make the solutions a "requirement".
         
  • Culture awareness
    Manager/management that genuinely cares about the company culture and how to improve it helps.
         

Misc

Some "quotes", all loosely translated from Swedish:
  • "I ask myself: How can I make this person feel like I want her to feel?"
    Carita Jansson Tsiantes
        
  • "It's professional to be personal"
    David Högberg
         
  • "It's not unprofessional to have fun but to do something in a boring way when it can be achieved just as well in a fun way, that's unprofessional"
    Klas Hallberg, from his book: YCDBRALAI (Swedish).
Finally a comment I didn't know where to place:
  • "If I say I can't talk about it, you know and accept this". Transparency is important but some information you mustn't share for various reasons. However, sometimes the mere knowledge you know some secret information can be enough to help people prepare for a big change, avoid unpleasant surprises etc. One example could be: "We will get a new boss, I know who it's most likely gonna be but I can't tell you until papers are signed; however, I can tell you I think this person will do a terrific job, so don't worry too much about it".
        

Conclusion

It makes perfect sense but didn't really occur to me when I first asked the question:

Making a team awesome is basically the same thing as making any relationship awesome and it starts with you and all the small decisions you make every day.

Good luck!

28 September 2016

Next step

After a colleague pointed out I'm not 29 years old anymore I had to revisit my About page. While reading the rest of the text this sentence made me stop...

Next step is to spread the knowledge I've obtained and be a thinking practitioner who can hopefully do what James once did for me.

I wrote that sentence over 4 years ago, before any tester outside Linköping knew I existed. So I took a moment just to reflect.

"Spread the knowledge I've obtained"
I've been teaching students in Software Testing at a vocational university for 1.5 years; I've given numerous lectures, presentations and workshops on various topics at conferences (national and international), meetups and at my workplaces; and I'm now test coach at Verisure, a role in which I'm basically paid to share the knowledge I've acquired so far. Finally, I've been a fairly active blogger for most of the period.

"Be a thinking practitioner"
Transpection Tuesday, my blog, my peer conference appearances, my many dinner discussions with top-notch testers (most recently Göran Bakken) and my (former) activity on Twitter all add to the "thinking" part, while my testing and test-related experiments at Ericsson, Verisure and Zenterio, as well as my efforts during the Software Testing World Cup, all add to the "practitioner" part.

Most important though: The two are not separate processes. The best example of this is probably Transpection Tuesday. During Transpection Tuesday, Helena and I often discuss a challenge one or both of us have, together we work out potential solutions or experiments to run, we go back to work to try these solutions/run the experiments and finally we share/ask for help to evaluate the results at a second Transpection Tuesday. Like I said, one process.

"Who can hopefully do what James once did for me"
After graduation I got emails from two former students, and both made me feel I've accomplished exactly this...

... hmm, is it raining inside or why are my eyes moist all of a sudden...

On top of that other former students, testers and other colleagues (most recently a developer) have all helped me understand my efforts to inspire, guide and mentor have actually made a difference.

It's not without pride I say: in four years I've wildly exceeded my own expectations, based on my memories of what I hoped to achieve in 2012. Not just my expectations for the next four years but potentially for my whole career. Shit... I need a break.

... pause music...

What's my "next step"

Took quite a bit of thinking but here's my updated "next step", or mission if you prefer that:

I've presented to, spoken with, inspired, coached and mentored testers in testing and quality. I want to continue this but to a broader audience, in ways different from what has been done before and inspire others to present, speak, inspire, coach and mentor as well.

Clarification:
Broader audience refers to e.g. developers, students (not studying testing), managers etc.

If you happen to read this in a few years, do remind me to report my status, please.

What's your "next step"

Enough about me, what's your next step?

13 October 2013

Arguing for Exploratory Testing, part 2, Reuse

Intro
You can read a bit more about this series in the first post:
Arguing for Exploratory Testing, part 1, Traceability

The topic for our second Transpection Tuesday on "Arguing for Exploratory Testing" was Reuse.

We finished with two open questions:
  • Can we ensure we actually repeat the exact same test a second time?
  • How do you actually achieve reuse in exploratory testing (when it is desired)?

Reasons to reuse tests
First we tried to state reasons someone would want to reuse test cases:
  • Save time during test design
  • Functionality is changed and we want to rerun the full/part of the test scope
  • We want to verify a (bug) fix
Preconceptions
Looking at reasons quickly led us to some preconceptions which became the topic for a big portion of the session:
  • Effort = Value
  • Equal execution = Equal value
  • Our scope is (almost) complete
  • Reuse = free testing
  • A monkey can run a test case
Preconception: Effort = Value
Since we've invested so much time (as well as money and prestige) in writing test cases, they must be worth more than a single execution.
  • Even if presented with clear evidence we may reject it to defend our judgment
  • We may overestimate what a test case is useful for (we want to get the most out of our work)
  • It's my work; criticize it and you criticize me, not the work! (a common and unfortunate misconception)
It takes a lot of self-esteem to say "Yeah, I screwed up, could you help me?", especially in an environment where mistakes are not accepted. Notice that many "so how can we make the most of this mistake" conversations still communicate "so you made a mistake, now you'll have to suffer for it by telling us why you are a failure". It takes a lot of work to change this.

Preconception: Equal execution = Equal value
Let's say we execute the exact same steps in a scripted and an exploratory way, wouldn't that be two identical tests? We believe not.
  1. Your goal differs. With test cases your goal is to finish as many test cases as possible (progress). That's how you measure "how much testing you were able to perform". In exploratory testing you are judged based on the information you provide, thus you should be more inclined to spend a few extra minutes observing/following something up even when it's not "part of your test".
  2. Your focus differs. When you have a script you have to focus on following that script. In exploratory testing your goal is typically to find new leads to base the next test on. That means in one case your focus is on the product and in the other on an artifact. Think about the Invisible Gorilla experiment.
  3. Scripts more easily bias you not to observe. In a script you typically have verification steps, e.g. "verify X=5". We believe this could bias you to be less observant during the other steps: "this is just setup so nothing should happen that concerns me".
Preconception: Our scope is (almost) complete
We know a feature's boundaries (specifications, requirements) so when we set the scope for testing we can, and usually will, cover almost the entire feature.
  • We can't know the boundaries of a feature:
    • We will impact and use other components not "part of the feature" e.g. other code in the application, the operating system, surrounding applications, third party plugins, hardware, hardware states etc.
    • We interpret planning documents differently: we add parts, discover things we couldn't have anticipated, correct mistakes or interpret something differently than the author intended.
  • We can (almost) always tweak a test a little bit (e.g. change input data or timing). But testing all combinations (we recognize) is way too expensive. Also, there are usually so many ways an application can be misused (intentionally or unintentionally) that even with a ton of creativity we can't figure them all out (ask any security expert .)
So our scope is basically a few small dots on a big canvas rather than a well colored map. But those dots are (hopefully) carefully selected to protect us from the greatest risks we can anticipate. Still, they are only dots.
As testers we easily support the preconception of full coverage by answering questions like "do we cover this feature now?" or "is all testing done?" with a simple "yes" or "almost". The more accurate answer would be "we cover the most important risks we identified, limited by our current knowledge, time available and other constraints", but that answer is not very manager friendly, which leads us to...

There is a general lack of knowledge and understanding of testing in most organizations. We decided to stop there, since that was way too big a question to tackle at that point. But it's an important question, so please take a moment and think about it for yourself: How can you improve understanding of, and interest in, testing in your organization?

A final note. Since we only cover a small part, reusing a test scope will not help us catch the bugs we missed the first time. How big a problem that is differs, but repeat this a few times and it may scale up in a nasty way.

Preconception: Reuse = free testing
We've already written the test case, so wherever it's applicable (which should be self-explanatory) we can just paste it into our test scope and voilà! Free coverage!

The big issue here is the "self-explanatory" part. The problem is that what fitted well in one feature might not fit in another, even similar, one. Even without needed tweaks we still have to figure out what the test case actually does, so that we know what we have covered with it and what we still need to cover in other ways.

This process is expensive, really expensive. Sure, we save time by not having to figure out the test and how to practically run it all over again, but consider the time it takes to find the test case, analyse what it covers, analyse what it doesn't cover, analyse how it interacts with existing test cases, analyse whether something that impacts the test case has changed since last time, and so forth.

Preconception: A monkey can run a test case
  • We all interpret things differently. Click can mean single click, double click, right click (already assuming the first two were left clicks), tabbing and pressing enter, middle button click, etc. Even a well written, simple test case can lead to different interpretations.
  • One thing we're looking for is unexpected behavior and it's in the nature of "unexpected" to be something we can't plan for. Thus to get much use of a test case we need to handle the system well enough to investigate and identify when a behavior is unexpected or undesired.
  • We make a ton more observations than we're consciously aware of. These observations take practice, focus and skill. For example, when you boot your computer you would react to a lot more things than you would put in a "boot test". Examples: the screen is blinking, smoke is coming out, the lights in the room flicker, you hear strange mechanical sounds; all of these should catch your attention but are unlikely to be written down.

    More skill and/or focus can lead to more valuable observations: The login screen looks different, memory calculations are wrong, it's slower than usual/expected, BIOS version is incorrect, the operating system's starting mode is wrong etc.
  • When we don't fully understand something we tend to write it down in less detail (it sucks to look stupid by writing down something incorrect, and we're too lazy to investigate every detail we don't understand; it's easier to investigate as we get there).
  • When we write a test case based on specifications, requirements and other "guesses" about how the end system will work, even a flawless instruction will sometimes not correspond to how the system actually works (including when it works as desired). This of course requires the person executing to be able to correct the test case, and thus understand both the intention of the test case and how the system works.
  • If we don't understand the system we may lose a lot of time setting up fully or partly irrelevant variables to the values stated in the instructions. The immediate objection is that if we have stated irrelevant variables in the test case we've failed. Consider then that a variable might be irrelevant to the test but mandatory to the system (e.g. you have to set a valid time server). Leave that out and the person executing once again needs to understand the system.
When is reuse actually beneficial?
  • We have rewritten something from the ground up but want it to externally still work the same. Reuse could save time.
  • We have some critical paths through the system that can't break.
  • We need to quickly regression test a feature without having to dig in too deep in the feature itself.
But remember that the person executing should still understand the test and the system, both to ensure tweaks (using different input values, triggering different fault cases etc.) can be made and to make important observations more likely.

How can we achieve reuse in Exploratory Testing
Not covered much by this particular session but a few thoughts:
  • Charters
  • Debrief notes
  • Test ideas
  • Test plans
Try creating a report specifically used as a "feature summary", including valuable operational instructions, general test ideas, impacts, lessons, problems, tools, important details, testability etc. We did something like this at my former company, where we let the test plan continuously turn into an end report as we added lessons from our testing. This would not only help when retesting a similar feature but could also serve as educational material or test plan input, for instance. Important though is to stay concise; noise is a huge enemy! The number of readers of a document is inversely proportional to the number of pages in the document, you know .)

A few notes on test case storage
First off I love this post on having a big inventory of test cases by Kristoffer Nordström.

It's easy to think something you've already created is free, but there's no such thing. Having a large inventory of test cases costs in many different ways:
  • Storage
  • Noise (it's one more thing testers have to keep track of)
  • Another tool/part of tool for testers to stay updated with / learn / understand
  • For a test case to be fully reusable later it should be kept up to date. How many refactor all their old test cases as functionality is changed?
  • ... if you do, that sounds really expensive.
Summary
Reuse has its place, but be careful!

Remember reuse means inheriting blind spots, has a cost and still requires the person "reusing" to know just as much about the feature, system and testing in general as if (s)he wasn't reusing old checks.

Take care, and I hope these Transpection Tuesday notes (even though somewhat messy) were helpful!

... and of course, thank you Helena!

03 October 2013

Arguing for Exploratory Testing, part 1, Traceability

Background
The topic for my and Helena Jeret Mäe's last Transpection Tuesday was Arguing for Exploratory Testing. What we basically wanted to achieve was to get better at explaining the pros (and cons) of exploratory testing in a concise way, as well as to identify common preconceptions about scripted versus exploratory testing.

Input
We had defined 15 subtopics, such as time estimations, credibility and making sure the important testing is done. The first item on this list was traceability, which turned out to provide enough material to fill the whole 2-hour session.

What is Traceability
First question was: What do we mean by traceability?

Our answer: Being able to track what has been tested, how, when and by whom.

Why do we want Traceability
The next question was why we want traceability. We quickly formed a list, but reading it now makes me realize we mixed together traceability and the claimed benefits of having a trunk of test cases. But anyway:
  • External demands
  • Ensure work has been performed
  • Base for further testing
  • Support handovers
  • Create a map
  • Reuse
General thoughts
One thing we got back to over and over again was: The best way (often related to level of detail) to achieve good enough traceability is highly context dependent! For example having a simple mind map with short comments is good enough for one company while another requires every session to be recorded with the recordings being stored and indexed together with session notes, debrief summaries and saved logs. It all depends!

Another reoccurring theme was: "But do we really achieve that kind of traceability with test cases". I will not bring up those discussions much in this post but expect another one on "false assumptions about scripted and exploratory testing" soon.

Terms

Charter
A charter is basically an area to test, a way to break down a big testing mission. Notice though that as you test, new charters might come up, so it's by no means a definite plan. Read more >>

Test idea
Typically a one liner describing one or more tests you want to do. Read more >>

Session
A timeboxed, uninterrupted test sitting, typically 60-120 minutes. Read more >>

Debrief
Refers to an activity happening after a session where the tester explains what has been done to, for example, a test manager. This also includes clarifying questions, feedback and other kinds of dialog to help both parties learn from the session. Read more >>

Recording

We mainly refer to screen recording (video, either using a screen recording tool or an external video camera), but it could just as well mean recording audio, saving logs/traces or other ways to save what has been done. A good resource >>

External demands
This refers to regulated businesses (watch the excellent presentation What is good evidence by Griffin Jones), evidence in a potential lawsuit or customers demanding test data.

Possible solutions:
  • Record the sessions, preferably with configuration (device, version, settings etc.) explained if that matters. Adding commentary might improve the value as well (communicating purpose, observations etc.). This is also typically a scenario where logs/traces can be a required addition to a video recording. Once again, watch What is good evidence.
  • Store session notes
  • Store session summaries
  • Store charters
  • Store debrief summaries
  • Store test ideas (assuming they have been covered by your testing)
Creating support for finding old information (an index) seems key as well. For this, charters, time stamps and/or categories might be useful to tag your saved material with.
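As a hypothetical sketch of that kind of index (the file names, tag names and metadata fields are my inventions, not something from our discussion), stored session material could be tagged by charter and date and then grouped so old work is quick to find:

```python
from datetime import date

# Invented example metadata: each stored session gets a charter tag,
# a date stamp, and a pointer to where its artifacts live
# (notes, recordings, logs, debrief summaries).
sessions = [
    {"charter": "login", "date": date(2017, 1, 10), "artifacts": "notes/s001.md"},
    {"charter": "login", "date": date(2017, 1, 12), "artifacts": "notes/s002.md"},
    {"charter": "payment", "date": date(2017, 1, 12), "artifacts": "notes/s003.md"},
]

def index_by_charter(sessions):
    """Group stored sessions by charter tag so old material is easy to find."""
    index = {}
    for s in sessions:
        index.setdefault(s["charter"], []).append(s)
    return index

index = index_by_charter(sessions)
# All saved material for the "login" charter:
print([s["artifacts"] for s in index["login"]])
```

The same grouping idea works with dates or categories as the key; the point is only that a few consistent tags make the saved material searchable later.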

Ensure work has been performed
The first question raised was: is this really something we want to encourage? Our general answer is no, with the motivation that, in our experience, people who are closely monitored tend to do things to look good rather than do what is needed/valuable. But being able to know that the testers actually do their job is closely connected to credibility and transparency, so it's still a valid question.

Possible solutions:
  • Debriefs
  • Recordings
  • Notes
  • Bugs reported (a really bad metric for this, but it can indicate something!)
Debriefs seemed to be the preferred approach most often. During a good debrief, the person receiving the debrief asks follow-up questions that require the tester to explain the testing done. A byproduct of this process is confirmation that the tester actually did a good job / any job at all. But once again: if your focus is on monitoring, the people monitored (testers as well as non-testers) are likely to waste time proving the job has been done rather than actually working!

Base for further testing
Let's say we've finished the prepared scope or are suddenly given an extra week to test something. If we can't go back and use already executed tests as inspiration, how do we know where to continue?

Possible solutions:
  • Having a bulk of charters as inspiration
  • Make comments about testing you've left out in your finished charters/sessions
  • Review session notes
We also discussed whether there's value in actually looking at what has been done. Often we found that the time it takes to analyse the work already done might not be worth it (the information is too detailed to overview and learn from quickly). Simply exploring, using knowledge we might not have had the first time, or with a different tester than the one who first tested, is often more than enough to add value. After all, the time we spend analysing is time we cannot test (which might or might not be well invested).

Support handovers
One tester leaves (quits, parental leave, other tasks etc.) and another has to take over; how can we manage such a change without a set scope of test cases? First of all, the new tester does have to spend some time getting familiar with the feature in exploratory testing, but this is also true when using test cases, since we, for instance, can't predict what problems we will run into and thus can't prepare instructions for those!

But we can make it easier:
  • Charters (with status)
  • Debrief
  • Documented test ideas with already investigated ideas being marked
  • Session notes or session summaries
  • Mind maps or other test planning with already tested parts commented
  • Documenting lessons learned (like operational instructions)
Debrief in this case refers to a general debrief by the departing tester: what has been done, what we know is left, problems seen, lessons learned, where information is stored, who to talk to etc. Of course, if the switch happens very suddenly (e.g. sickness), this is not possible, and in that case it's important that testers are professional enough to document what has been done (mind maps, short plans, visualizations, debrief/session summaries, charters). This is once again true for both exploratory and scripted testing.

Create a map
A bulk of test cases combined with statuses can somewhat be used to draw a map of what has been covered and what is left to test. How can we visualize this without test cases?

Possible solutions:
  • Charters
  • A mind map describing what has been tested
  • A picture/model of our product with comments about testing/coverage
  • Other visualizations like diagrams
  • The Low Tech Dashboard
A few important notes:
  1. You do have a map with test cases, but is it actually anywhere near accurate? Say we have two equally complex functions: one takes 1 argument, one takes 10. We will likely have at least 10 times as many test cases for the second function. So if we execute all the test cases for the second function, have we really covered over 90% (with "covered" only considering these two functions)?
  2. Even if equally sized, that map would not cover what we didn't anticipate from the beginning, so you still need to add an up-to-date judgement/evaluation (e.g. "wow, that network protocol sure was more complex than we expected during planning; we need more testing of it!").
  3. Scale is really important. Do we want to see Tartu, Estonia, Europe, the world or the Milky Way galaxy? We might need different visualizations to create all the maps we need (once again, think about value: how much time can we spare to keep these updated?).
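To illustrate how little machinery such a map needs (the area names, coverage levels and comments below are invented), a Low Tech Dashboard can be rendered as a few lines of plain text, one per product area, with a coarse coverage bar and a short quality note:

```python
# Invented example data: (area, subjective coverage level 0-3, quality note).
dashboard = [
    ("User login",        3, "no blockers"),
    ("Payment flow",      1, "new protocol, needs more testing"),
    ("Report generation", 2, "minor layout bugs open"),
]

def render(dashboard):
    """Render a plain-text dashboard: area name, coarse coverage bar, note."""
    bars = {0: "....", 1: "#...", 2: "##..", 3: "###."}  # subjective scale
    lines = [f"{area:<18} {bars[level]}  {note}" for area, level, note in dashboard]
    return "\n".join(lines)

print(render(dashboard))
```

The coverage levels are judgements, not counts, which is exactly the point of note 2 above: the map reflects an up-to-date evaluation rather than a tally of executed test cases.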
Reuse
Later a similar feature or a feature impacting the one we just tested is developed and we want to reuse the work previously done. How can we do this without test cases?

First of all, reuse is one of the places where test cases are powerful. However, you have the minesweeper problem: if you walk the same lane in a minefield over and over while new mines are constantly added, mines are likely to build up beside your narrow track while few happen to end up in your path. In other words, running the same tests over and over is less likely to catch new bugs than creating new tests, so value quickly diminishes (more tests executed does not equal more valuable ground covered).

What we often would suggest is to use knowledge acquired the first time as foundation for new testing to speed it up. Think about the new risks introduced and what needs to be tested based on that (like with new functionality) rather than how old test cases might fit into your testing.

Possible solutions:
  • Reuse of charters
  • Reuse of test ideas
  • Look at old session notes / summaries
  • Use old recordings (the simpler the form of the recordings the better for this, watching several hours of screen recording is probably waste)
  • Start a wiki page/document/similar for each feature and add lessons learned, where to find info, problems etc. as you test.
Summary
There are many ways of achieving traceability (and the similar claimed benefits of test case trunks) in exploratory testing. Session Based Test Management principles seem to be the most straightforward way, but keeping track of test ideas or using other approaches works as well. All have their own contexts where they work best (e.g. SBTM might add too much overhead for a simple project).

All in all, if someone claims "You lose traceability with exploratory testing", ask what that person means more precisely (e.g. presenting testing data to a customer) and explain the alternatives. Notice this is only based on our two-hour discussion and there is a whole lot more to add, so think for yourself as well! Also question whether you actually achieve the kind of traceability requested using a scripted approach, and at what cost. Finally, question whether the requested traceability is actually worth its cost no matter if exploratory or scripted testing is used. Doing unnecessary work is wasteful no matter what approach you use.

Finally: there are still contexts where a highly scripted approach is likely the best option, but the closer you get to a purely scripted approach, the fewer and more extreme those contexts become.

Thank you for reading!

And thank you Helena, see you next week!

25 September 2013

Transpection Tuesdays

A couple of weeks ago Helena Jeret Mäe and I sent a few cryptic tweets about "Transpection Tuesday". Since then Helena has written a great summary of our first session, and to add to that, here are my thoughts so far.

What is Transpection
The best explanations I've found are James Bach's explanation of transpection and Michael Bolton's transcript of a transpection session. But basically it's one person asking questions of another, with the twist that the one asking has already tried to answer those questions. The result is that you get two sets of answers, hopefully less biased by each other.

What is Transpection Tuesday
When I first met Helena, it was during Let's Test earlier this year. She had tried to grab me a few times, but sessions and other conversations had interrupted us. Then suddenly, during lunch, she stood up, said "You sit here!", and pointed at the chair in front of her. And so I did, which was one of the smarter decisions I've made as a tester.

Transpection Tuesday happened in a somewhat similar fashion:

Helena: Hey. I was thinking this morning that we could have Transpection Tuesdays (because it rhymes :P) or something. if there's something each of us is trying to work through and solve, then we could try this transpection thing out. It's a bit random thought
Erik: Sounds cool... I'll take care of the kids alone tonight since my fiancé is working so suits me perfect... with the possible problem that a kid wakes up and I have to leave very suddenly and might not come back .)
Helena: I understand that and won't hold a grudge :P
Erik: In that case I'm in! Usually kids are asleep after 19:30 CET, hopefully there's no change tonight

So after some technical difficulties we started a Skype call with video. By the way, video was awesome for this: it made sharing visualizations easier, body language is useful when trying to explain something, and it helped, at least me, stay focused.

So to summarize:
Transpection Tuesday is simply a Skype session ~3 hours long, happening every Tuesday evening between me and Helena, dedicated to talk about testing.

Confession
Okay, so what we've done so far has been far from just transpection. Instead it has been more like a weekly dose of conferring, where we discuss topics that matter to us in various forms; the form depends on mood and topic. You could argue the name is thus misleading, and it probably is... but we like it.

What did we discuss?
Helena shared our combined notes from the first session. I won't write anything detailed about it now, but if something in those notes seems interesting, feel free to leave a comment and I can try to write a future blog post about it (my guess is some of the more "community challenging" stuff would fit that criterion).

How did we discuss?
I recognized five different "styles"/formats used during our talks, plus one suggested that we haven't tried yet:
  1. Transpection
    I tried this briefly when speaking about why/if we should send people to courses, and Helena did it much more extensively during the last TT when speaking about bias. When talking about these topics, one of us asked questions we already had answers to but wanted to improve/challenge. What makes this powerful is that the person answering second is slightly less biased, since he/she hasn't heard the first person's answers, which gives you interesting answers to compare.

  2. Conferring
    Most of the talking was just quick questions being thrown out and then we together tried to answer and discuss them. I would say this is the style we use most (it's like our "go to" style) and for those of you who've been to Let's Test or CAST, this is very similar to the discussions you typically end up in during lunches (which is a good thing!).

  3. Challenging
    I plan to do a lot more of this going forward because it felt like an awesome way to learn. Essentially it's one of us asking the other to clarify something to a great level of detail or challenge a claim being made (similar to what happens during RST for instance). Very powerful as assumptions were uncovered and you constantly had to be on your toes.

  4. Role play
    We take opposite roles (e.g. a manager who questions the professionalism of ET versus a context-driven tester who considers ET a good approach in the current context) and simply fight it out, with the goal of pressuring each other to explain/motivate something at a deep level, as well as to highlight assumptions. We have actually done this, but very briefly; not enough to really call it role play, more like quick imitations and reactions to those.

  5. Transfer
    Simply one of us telling the other something without any questioning going on; basically a monologue about a topic. What was interesting to me was that the information I received this way really seemed valuable, but missing the interactive aspect hampered my ability to connect it to something else/make "lasting connections". Also, I often felt the energy dropped a bit when one of us talked for long stretches.

  6. Interview
    Similar to Transfer but more driven by questions, so the receiver basically "chooses" what information he/she is interested in. Better than the monologue version, but still a lot less valuable than any of the more interactive formats. It was like, even with questions, I didn't get my brain into gear, so new information simply didn't stick the way it did when e.g. Conferring.
Improvements
Here are a few things I've noticed:
  • Having a clear topic helps, as we got fewer side tracks and thus could more easily focus on going deep into the current topic. Side tracks are not evil per se, but too many and too big ones seem to make discussions rather shallow (my feeling so far).
  • Having prepared follow-up questions would have been helpful. We have improved this for next week; it will be interesting to see the result.
  • We did get deeper into the topic during our third TT as we pressured each other to better explain what we meant and why, even when agreeing. I think we can do this to a much greater degree though.
  • I need to try to quickly structure the information in my head, before starting long rants/monologues about something.
  • I think both of us could benefit from monitoring our own speaking, trying to stop ourselves during long monologues. It's much harder for the other person to recognize a good break point.
Wrap up
So, Transpection Tuesdays are not so much about transpection but rather about weekly conferring. I already look forward to next week (preliminary topic: "Arguing for Exploratory Testing"), and that's a great sign. I hope this turns into a long-lasting tradition, and it will definitely be the foundation for many future blog posts!